Sample records for deterministic finite automaton

  1. A Pipelined Non-Deterministic Finite Automaton-Based String Matching Scheme Using Merged State Transitions in an FPGA

    PubMed Central

    Choi, Kang-Il

    2016-01-01

    This paper proposes a pipelined non-deterministic finite automaton (NFA)-based string matching scheme using field programmable gate array (FPGA) implementation. The characteristics of the NFA such as shared common prefixes and no failure transitions are considered in the proposed scheme. In the implementation of the automaton-based string matching using an FPGA, each state transition is implemented with a look-up table (LUT) for the combinational logic circuit between registers. In addition, multiple state transitions between stages can be performed in a pipelined fashion. In this paper, it is proposed that multiple one-to-one state transitions, called merged state transitions, can be performed with an LUT. By cutting down the number of used LUTs for implementing state transitions, the hardware overhead of combinational logic circuits is greatly reduced in the proposed pipelined NFA-based string matching scheme. PMID:27695114
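The set-of-active-states simulation that the FPGA pipeline parallelizes can be sketched in software. The following is an illustrative prefix-sharing NFA matcher, not the paper's LUT-level design; patterns share common prefixes in a trie, the start state self-loops on every symbol, and no failure transitions are needed.

```python
def build_nfa(patterns):
    """Build a prefix-sharing trie; returns (transitions, accepting-state map)."""
    trans = [{}]   # state -> {symbol: next_state}
    accept = {}    # state -> pattern completed at that state
    for p in patterns:
        s = 0
        for ch in p:
            if ch not in trans[s]:
                trans[s][ch] = len(trans)
                trans.append({})
            s = trans[s][ch]
        accept[s] = p
    return trans, accept

def match(text, patterns):
    trans, accept = build_nfa(patterns)
    hits = []
    active = {0}
    for i, ch in enumerate(text):
        # the start state self-loops on every symbol, so it is always active
        nxt = {0}
        for s in active:
            if ch in trans[s]:
                nxt.add(trans[s][ch])
        active = nxt
        hits += [(i - len(accept[s]) + 1, accept[s]) for s in active if s in accept]
    return hits

print(match("abcdef", ["bcd", "cde", "bce"]))  # [(1, 'bcd'), (2, 'cde')]
```

Each step advances every active state on one input character, which is exactly the per-stage work the hardware scheme performs in parallel with LUTs.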

  2. A Pipelined Non-Deterministic Finite Automaton-Based String Matching Scheme Using Merged State Transitions in an FPGA.

    PubMed

    Kim, HyunJin; Choi, Kang-Il

    2016-01-01

    This paper proposes a pipelined non-deterministic finite automaton (NFA)-based string matching scheme using field programmable gate array (FPGA) implementation. The characteristics of the NFA such as shared common prefixes and no failure transitions are considered in the proposed scheme. In the implementation of the automaton-based string matching using an FPGA, each state transition is implemented with a look-up table (LUT) for the combinational logic circuit between registers. In addition, multiple state transitions between stages can be performed in a pipelined fashion. In this paper, it is proposed that multiple one-to-one state transitions, called merged state transitions, can be performed with an LUT. By cutting down the number of used LUTs for implementing state transitions, the hardware overhead of combinational logic circuits is greatly reduced in the proposed pipelined NFA-based string matching scheme.

  3. Bin packing problem solution through a deterministic weighted finite automaton

    NASA Astrophysics Data System (ADS)

    Zavala-Díaz, J. C.; Pérez-Ortega, J.; Martínez-Rebollar, A.; Almanza-Ortega, N. N.; Hidalgo-Reyes, M.

    2016-06-01

This article presents a solution of the one-dimensional Bin Packing problem through a deterministic weighted finite automaton. The construction of the automaton and its application to three instances are presented: one synthetic data set and two benchmarks, N1C1W1_A.BPP from data set Set_1 and BPP13.BPP from hard28. For the synthetic data the optimal solution is obtained. For the first benchmark the solution uses one container more than the ideal number of containers, and for the second it uses two containers more than the ideal solution (approximately 2.5%). The runtime in all three cases was less than one second.
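For context on how bin counts are compared against the ideal, here is the classical first-fit-decreasing heuristic. This is a standard baseline, not the paper's weighted-automaton construction, which is not reproduced here.

```python
def first_fit_decreasing(items, capacity):
    """Pack items into bins of the given capacity; returns the bin count."""
    bins = []  # each entry is the remaining capacity of one open bin
    for item in sorted(items, reverse=True):
        for i, free in enumerate(bins):
            if item <= free:
                bins[i] = free - item
                break
        else:
            bins.append(capacity - item)  # open a new bin
    return len(bins)

# items summing to 20 with capacity 10: the ideal is 2 bins
print(first_fit_decreasing([7, 5, 4, 3, 1], 10))  # 2
```

The "one container more than ideal" comparisons in the abstract are made against the same lower bound: the total item size divided by the bin capacity, rounded up.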

  4. Probabilistic Cellular Automata

    PubMed Central

    Agapie, Alexandru; Giuclea, Marius

    2014-01-01

Cellular automata are binary lattices used for modeling complex dynamical systems. The automaton evolves iteratively from one configuration to another, using some local transition rule based on the number of ones in the neighborhood of each cell. With respect to the number of cells allowed to change per iteration, we speak of either synchronous or asynchronous automata. If randomness is involved to some degree in the transition rule, we speak of probabilistic automata; otherwise they are called deterministic. Whichever type of cellular automaton we are dealing with, the main theoretical challenge remains the same: starting from an arbitrary initial configuration, predict (with highest accuracy) the end configuration. If the automaton is deterministic, the outcome simplifies to one of two configurations, all zeros or all ones. If the automaton is probabilistic, the whole process is modeled by a finite homogeneous Markov chain, and the outcome is the corresponding stationary distribution. Based on our previous results for the asynchronous case, connecting the probability of a configuration in the stationary distribution to its number of zero-one borders, the article offers both numerical and theoretical insight into the long-term behavior of synchronous cellular automata. PMID:24999557
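A minimal version of such an automaton can be sketched as follows. The rule here is an illustrative majority vote on a ring, not the authors' exact model: each cell looks at itself and its two neighbours, and with probability p takes the majority value, otherwise keeping its current state. Setting p = 1 recovers a deterministic automaton.

```python
import random

def step(config, p, rng):
    """One synchronous update of a binary ring; p is the rule-following probability."""
    n = len(config)
    out = []
    for i in range(n):
        ones = config[i - 1] + config[i] + config[(i + 1) % n]
        majority = 1 if ones >= 2 else 0
        out.append(majority if rng.random() < p else config[i])
    return out

rng = random.Random(1)
config = [rng.randint(0, 1) for _ in range(40)]
for _ in range(200):
    config = step(config, p=1.0, rng=rng)  # p=1.0: deterministic automaton
print(config)  # isolated cells are gone; only blocks of length >= 2 survive
```

With 0 < p < 1 each configuration maps to a distribution over configurations, so the process is a finite homogeneous Markov chain, as the abstract describes.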

  5. Probabilistic cellular automata.

    PubMed

    Agapie, Alexandru; Andreica, Anca; Giuclea, Marius

    2014-09-01

Cellular automata are binary lattices used for modeling complex dynamical systems. The automaton evolves iteratively from one configuration to another, using some local transition rule based on the number of ones in the neighborhood of each cell. With respect to the number of cells allowed to change per iteration, we speak of either synchronous or asynchronous automata. If randomness is involved to some degree in the transition rule, we speak of probabilistic automata; otherwise they are called deterministic. Whichever type of cellular automaton we are dealing with, the main theoretical challenge remains the same: starting from an arbitrary initial configuration, predict (with highest accuracy) the end configuration. If the automaton is deterministic, the outcome simplifies to one of two configurations, all zeros or all ones. If the automaton is probabilistic, the whole process is modeled by a finite homogeneous Markov chain, and the outcome is the corresponding stationary distribution. Based on our previous results for the asynchronous case, connecting the probability of a configuration in the stationary distribution to its number of zero-one borders, the article offers both numerical and theoretical insight into the long-term behavior of synchronous cellular automata.

  6. Efficient Deterministic Finite Automata Minimization Based on Backward Depth Information.

    PubMed

    Liu, Desheng; Huang, Zhiping; Zhang, Yimeng; Guo, Xiaojun; Su, Shaojing

    2016-01-01

Obtaining a minimal automaton is a fundamental issue in the theory and practical implementation of deterministic finite automata (DFAs). A minimization algorithm is presented in this paper that consists of two main phases. In the first phase, the backward depth information is built, and the state set of the DFA is partitioned into many blocks. In the second phase, the state set is refined using a hash table. The minimization algorithm has a lower time complexity, O(n), than a naive comparison of transitions, O(n²). Few states need to be refined by the hash table, because most states have been partitioned by the backward depth information in the coarse partition. This method achieves greater generality than previous methods because building the backward depth information is independent of the topological complexity of the DFA. The proposed algorithm can be applied not only to the minimization of acyclic automata or simple cyclic automata, but also to automata with high topological complexity. Overall, the proposal has three advantages: lower time complexity, greater generality, and scalability. A comparison to Hopcroft's algorithm demonstrates experimentally that the algorithm runs faster than traditional algorithms.
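For contrast with the backward-depth approach, the classical partition-refinement (Moore-style) minimization it improves on can be written compactly. This is the textbook algorithm, not the paper's; states are 0..n-1 and delta maps (state, symbol) to state.

```python
def minimize(n_states, alphabet, delta, accepting):
    """Return a map state -> minimized-state index via partition refinement."""
    # start from the accepting / non-accepting split
    block = {s: (s in accepting) for s in range(n_states)}
    while True:
        # a state's signature: its own block plus the blocks its transitions reach
        sig = {s: (block[s],) + tuple(block[delta[(s, a)]] for a in alphabet)
               for s in range(n_states)}
        new_ids = {v: i for i, v in enumerate(sorted(set(sig.values())))}
        new_block = {s: new_ids[sig[s]] for s in range(n_states)}
        if len(set(new_block.values())) == len(set(block.values())):
            return new_block  # no block split further: fixed point reached
        block = new_block

# DFA over {a,b} accepting strings that end in 'a'; states 1 and 2 are equivalent
delta = {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 2, (1, 'b'): 0,
         (2, 'a'): 2, (2, 'b'): 0}
blocks = minimize(3, 'ab', delta, accepting={1, 2})
print(blocks[1] == blocks[2])  # True: the two accepting states merge
```

Each refinement pass costs O(n·|alphabet|) and at most n passes are needed, so this is quadratic in the worst case; the paper's contribution is cheapening the partitioning step via backward depth.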

  7. Efficient Deterministic Finite Automata Minimization Based on Backward Depth Information

    PubMed Central

    Liu, Desheng; Huang, Zhiping; Zhang, Yimeng; Guo, Xiaojun; Su, Shaojing

    2016-01-01

Obtaining a minimal automaton is a fundamental issue in the theory and practical implementation of deterministic finite automata (DFAs). A minimization algorithm is presented in this paper that consists of two main phases. In the first phase, the backward depth information is built, and the state set of the DFA is partitioned into many blocks. In the second phase, the state set is refined using a hash table. The minimization algorithm has a lower time complexity, O(n), than a naive comparison of transitions, O(n²). Few states need to be refined by the hash table, because most states have been partitioned by the backward depth information in the coarse partition. This method achieves greater generality than previous methods because building the backward depth information is independent of the topological complexity of the DFA. The proposed algorithm can be applied not only to the minimization of acyclic automata or simple cyclic automata, but also to automata with high topological complexity. Overall, the proposal has three advantages: lower time complexity, greater generality, and scalability. A comparison to Hopcroft's algorithm demonstrates experimentally that the algorithm runs faster than traditional algorithms. PMID:27806102

  8. Interesting examples of supervised continuous variable systems

    NASA Technical Reports Server (NTRS)

    Chase, Christopher; Serrano, Joe; Ramadge, Peter

    1990-01-01

The authors analyze two simple deterministic flow models for multiple-buffer servers which are examples of the supervision of continuous variable systems by a discrete controller. These systems exhibit what may be regarded as the two extremes of complexity of closed-loop behavior: one is chaotic, the other eventually periodic. The first example exhibits chaotic behavior that can be characterized statistically. The dual system, the switched server system, exhibits very predictable behavior, which is modeled by a finite state automaton. This research has application to multimodal discrete-time systems where the controller can choose from a set of transition maps to implement.

  9. Symbolic Dynamics and Grammatical Complexity

    NASA Astrophysics Data System (ADS)

    Hao, Bai-Lin; Zheng, Wei-Mou

    The following sections are included: * Formal Languages and Their Complexity * Formal Language * Chomsky Hierarchy of Grammatical Complexity * The L-System * Regular Language and Finite Automaton * Finite Automaton * Regular Language * Stefan Matrix as Transfer Function for Automaton * Beyond Regular Languages * Feigenbaum and Generalized Feigenbaum Limiting Sets * Even and Odd Fibonacci Sequences * Odd Maximal Primitive Prefixes and Kneading Map * Even Maximal Primitive Prefixes and Distinct Excluded Blocks * Summary of Results

  10. Self-Organized Criticality and Scaling in Lifetime of Traffic Jams

    NASA Astrophysics Data System (ADS)

    Nagatani, Takashi

    1995-01-01

The deterministic cellular automaton 184 (the one-dimensional asymmetric simple-exclusion model with parallel dynamics) is extended to take into account injection or extraction of particles. The model represents the traffic flow on a highway with inflow or outflow of cars. Introducing injection or extraction of particles into the asymmetric simple-exclusion model drives the system asymptotically into a steady state exhibiting a self-organized criticality. The typical lifetime m of traffic jams scales as m ≅ L^ν with ν = 0.65 ± 0.04. It is shown that the cumulative distribution N_m(L) of lifetimes satisfies the finite-size scaling form N_m(L) ≅ L^(-1) f(m/L^ν).
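The underlying rule-184 dynamics is simple to state in code. The sketch below uses a closed ring with no injection or extraction (the paper's open-boundary extension is not reproduced): a car moves one site to the right per step iff the site ahead is empty, so cars never collide and their number is conserved.

```python
def rule184(road):
    """One parallel update of rule 184 on a ring; 1 = car, 0 = empty site."""
    n = len(road)
    return [1 if (road[i] == 1 and road[(i + 1) % n] == 1)   # car blocked ahead
                 or (road[i - 1] == 1 and road[i] == 0)      # car arrives from left
            else 0
            for i in range(n)]

road = [1, 1, 0, 0, 1, 0]
for _ in range(3):
    road = rule184(road)
print(road)  # the initial jam of two cars has dissolved into free flow
```

Jams (runs of adjacent cars) shrink from the front and grow at the back; the paper's injection/extraction terms are what drive the lifetime distribution of these jams to the critical scaling form quoted above.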

  11. Rough Finite State Automata and Rough Languages

    NASA Astrophysics Data System (ADS)

    Arulprakasam, R.; Perumal, R.; Radhakrishnan, M.; Dare, V. R.

    2018-04-01

Sumita Basu [1, 2] recently introduced the concepts of a rough finite state (semi)automaton, rough grammars and rough languages. Motivated by the work of [1, 2], in this paper we investigate some closure properties of rough regular languages and establish the equivalence between the class of rough languages generated by rough grammars and the class of rough regular languages accepted by rough finite automata.

  12. Modelling robot's behaviour using finite automata

    NASA Astrophysics Data System (ADS)

    Janošek, Michal; Žáček, Jaroslav

    2017-07-01

This paper proposes a model of a robot's behaviour described by finite automata. We split the robot's knowledge into several knowledge bases, which are used by the inference mechanism of the robot's expert system to make logical deductions. Each knowledge base is dedicated to a particular behaviour domain, and the finite automaton helps us switch among these knowledge bases with respect to the actual situation. Our goal is to simplify one big knowledge base and reduce its complexity by splitting it into several pieces. The advantage of this model is that we can easily add new behaviour by adding a new knowledge base, adding this behaviour to the finite automaton, and defining the necessary states and transitions.

  13. Folding Automaton for Trees

    NASA Astrophysics Data System (ADS)

    Subashini, N.; Thiagarajan, K.

    2018-04-01

In this paper we consider the definition of the folding technique in graph theory and derive the corresponding automaton for trees. We also derive some propositions on symmetrical-structure trees, non-symmetrical-structure trees, point-symmetrical-structure trees and edge-symmetrical-structure trees with a finite number of points. This approach allows one edge to be derived after n foldings.

  14. Development of three-dimensional patient face model that enables real-time collision detection and cutting operation for a dental simulator.

    PubMed

    Yamaguchi, Satoshi; Yamada, Yuya; Yoshida, Yoshinori; Noborio, Hiroshi; Imazato, Satoshi

    2012-01-01

The virtual reality (VR) simulator is a useful tool for developing dental hand skills. However, VR simulations that include patient reactions have limited computational time in which to reproduce a face model. Our aim was to develop a patient face model that enables real-time collision detection and cutting operations by using stereolithography (STL) and deterministic finite automaton (DFA) data files. We evaluated the dependence of the computational cost on the conditions for combining STL and DFA data files, constructed the patient face model using the optimum condition, and assessed the computational costs of the do-nothing, collision, cutting, and combined collision-and-cutting operations. The face model was successfully constructed, with low computational costs of 11.3, 18.3, 30.3, and 33.5 ms for do-nothing, collision, cutting, and collision and cutting, respectively. The patient face model could be useful for developing dental hand skills with VR.

  15. Numerical simulation of dendrite growth in nickel-based superalloy and validated by in-situ observation using high temperature confocal laser scanning microscopy

    NASA Astrophysics Data System (ADS)

    Yan, Xuewei; Xu, Qingyan; Liu, Baicheng

    2017-12-01

Dendritic structures are the predominant microstructural constituents of nickel-based superalloys; an understanding of dendrite growth is therefore required in order to obtain the desirable microstructure and improve the performance of castings. For this reason, a numerical simulation method and in-situ observation employing high-temperature confocal laser scanning microscopy (HT-CLSM) were used to investigate dendrite growth during the solidification process. A combined cellular automaton-finite difference (CA-FD) model allowing prediction of the dendrite growth of binary alloys was developed. The cell-capture algorithm was modified, and a deterministic cellular automaton (DCA) model was proposed to describe neighborhood tracking. The dendrite and its detailed morphology, in particular the distribution of hundreds of dendrites at a large scale and three-dimensional (3-D) polycrystalline growth, were successfully simulated with this model. The dendritic morphologies of samples before and after HT-CLSM were observed by optical microscopy (OM) and scanning electron microscopy (SEM). The experimental observations showed reasonable agreement with the simulation results. It was also found that the primary and secondary dendrite arm spacings and the segregation pattern were significantly influenced by dendrite growth. Furthermore, the directional solidification (DS) dendritic evolution behavior and detailed morphology were also simulated with the proposed model, and the simulation results again agree well with the experimental results.

  16. Fuzzy automata and pattern matching

    NASA Technical Reports Server (NTRS)

    Setzer, C. B.; Warsi, N. A.

    1986-01-01

A wide-ranging search for articles and books concerned with fuzzy automata and syntactic pattern recognition is presented. A number of survey articles on image processing and feature detection were included. Hough's algorithm is presented to illustrate the way in which knowledge about an image can be used to interpret the details of the image. It was found that on hand-generated pictures, the algorithm worked well at following straight lines but had great difficulty turning corners. An algorithm was developed which produces a minimal finite automaton recognizing a given finite set of strings. One difficulty of the construction is that, in some cases, this minimal automaton is not unique for a given set of strings and a given maximum length. This algorithm compares favorably with other inference algorithms. More importantly, the algorithm produces an automaton with a rigorously described relationship to the original set of strings that does not depend on the algorithm itself.

  17. New cellular automaton designed to simulate geometration in gel electrophoresis

    NASA Astrophysics Data System (ADS)

    Krawczyk, M. J.; Kułakowski, K.; Maksymowicz, A. Z.

    2002-08-01

We propose a new kind of cellular automaton to simulate transportation of molecules of DNA through agarose gel. Two processes are taken into account: reptation at strong electric field E, described in the particle model, and geometration, i.e. subsequent hookings and releases of long molecules at and from gel fibres. The automaton rules are deterministic and they are designed to describe both processes within one unified approach. Thermal fluctuations are not taken into account. The number of simultaneous hookings is limited by the molecule length. The features of the automaton are: (i) the size of the cell neighbourhood for the automaton rule varies dynamically, from nearest neighbours to the entire molecule; (ii) the length of the time step is determined at each step according to dynamic rules. Calculations are made up to N=244 reptons in a molecule. Two subsequent stages of the motion are found. Firstly, an initial set of random configurations of molecules is transformed into a more ordered phase, where most molecules are elongated along the applied field direction. After some transient time, the mobility μ reaches a constant value. Then, it varies with N as 1/N for long molecules. The band dispersion varies with time t approximately as N t^(1/2). Our results indicate that the well-known plateau of the mobility μ vs. N does not hold at large electric fields.

18. Soliton cellular automaton associated with D_n^(1)-crystal B^(2,s)

    NASA Astrophysics Data System (ADS)

    Misra, Kailash C.; Wilson, Evan A.

    2013-04-01

A solvable vertex model in the ferromagnetic regime gives rise to a soliton cellular automaton, which is a discrete dynamical system in which site variables take on values in a finite set. We study the scattering of a class of soliton cellular automata associated with the U_q(D_n^(1))-perfect crystal B^(2,s). We calculate the combinatorial R matrix for all elements of B^(2,s) ⊗ B^(2,1). In particular, we show that the scattering rule for our soliton cellular automaton can be identified with the combinatorial R matrix for U_q(A_1^(1)) ⊕ U_q(D_{n-2}^(1))-crystals.

  19. How synapses can enhance sensibility of a neural network

    NASA Astrophysics Data System (ADS)

    Protachevicz, P. R.; Borges, F. S.; Iarosz, K. C.; Caldas, I. L.; Baptista, M. S.; Viana, R. L.; Lameu, E. L.; Macau, E. E. N.; Batista, A. M.

    2018-02-01

    In this work, we study the dynamic range in a neural network modelled by cellular automaton. We consider deterministic and non-deterministic rules to simulate electrical and chemical synapses. Chemical synapses have an intrinsic time-delay and are susceptible to parameter variations guided by learning Hebbian rules of behaviour. The learning rules are related to neuroplasticity that describes change to the neural connections in the brain. Our results show that chemical synapses can abruptly enhance sensibility of the neural network, a manifestation that can become even more predominant if learning rules of evolution are applied to the chemical synapses.

  20. Production Management System for AMS Computing Centres

    NASA Astrophysics Data System (ADS)

    Choutko, V.; Demakov, O.; Egorov, A.; Eline, A.; Shan, B. S.; Shi, R.

    2017-10-01

The Alpha Magnetic Spectrometer [1] (AMS) has collected over 95 billion cosmic ray events since it was installed on the International Space Station (ISS) on May 19, 2011. To cope with the enormous flux of events, AMS uses 12 computing centers in Europe, Asia and North America, which have different hardware and software configurations. The centers participate in data reconstruction and Monte-Carlo (MC) simulation [2] (data and MC production) as well as in physics analysis. A data production management system has been developed to facilitate data and MC production tasks in the AMS computing centers, including job acquiring, submitting, monitoring, transferring, and accounting. It was designed to be modular, lightweight, and easy to deploy. The system is based on the deterministic finite automaton [3] model and implemented in the scripting languages Python and Perl with the built-in sqlite3 database on Linux operating systems. Different batch management systems, file system storages, and transfer protocols are supported. The details of the integration with the Open Science Grid are presented as well.
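The DFA idea behind such a production system can be sketched as a job-lifecycle state machine. The state and event names below are purely illustrative, not AMS's actual states: each (state, event) pair has exactly one successor, so a job can never be in an ambiguous state.

```python
# Hypothetical job-lifecycle transition table; deterministic by construction
TRANSITIONS = {
    ("acquired",     "submit"): "submitted",
    ("submitted",    "start"):  "running",
    ("running",      "finish"): "transferring",
    ("running",      "fail"):   "acquired",      # failed jobs are re-queued
    ("transferring", "done"):   "accounted",
}

def advance(state, event):
    """Deterministic transition; unknown (state, event) pairs are rejected."""
    key = (state, event)
    if key not in TRANSITIONS:
        raise ValueError(f"illegal event {event!r} in state {state!r}")
    return TRANSITIONS[key]

state = "acquired"
for event in ["submit", "start", "finish", "done"]:
    state = advance(state, event)
print(state)  # accounted
```

Because illegal transitions raise immediately, monitoring and accounting can trust that every job's history is a valid path through the automaton.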

  1. On Matrices, Automata, and Double Counting

    NASA Astrophysics Data System (ADS)

    Beldiceanu, Nicolas; Carlsson, Mats; Flener, Pierre; Pearson, Justin

    Matrix models are ubiquitous for constraint problems. Many such problems have a matrix of variables M, with the same constraint defined by a finite-state automaton A on each row of M and a global cardinality constraint gcc on each column of M. We give two methods for deriving, by double counting, necessary conditions on the cardinality variables of the gcc constraints from the automaton A. The first method yields linear necessary conditions and simple arithmetic constraints. The second method introduces the cardinality automaton, which abstracts the overall behaviour of all the row automata and can be encoded by a set of linear constraints. We evaluate the impact of our methods on a large set of nurse rostering problem instances.

  2. Stimulus-Response Theory of Finite Automata, Technical Report No. 133.

    ERIC Educational Resources Information Center

    Suppes, Patrick

    The central aim of this paper and its projected successors is to prove in detail that stimulus-response theory, or at least a mathematically precise version, can give an account of the learning of many phrase-structure grammars. Section 2 is concerned with standard notions of finite and probabilistic automata. An automaton is defined as a device…

  3. Symbolic Dynamics, Flower Automata and Infinite Traces

    NASA Astrophysics Data System (ADS)

    Foryś, Wit; Oprocha, Piotr; Bakalarski, Slawomir

Considering a finite alphabet as a set of allowed instructions, we can identify finite words with basic actions or programs. Hence infinite paths on a flower automaton can represent the order in which these programs are executed, and a flower shift related to it represents the list of instructions to be executed at some mid-point of the computation.

  4. Inclusion of Multiple Functional Types in an Automaton Model of Bioturbation and Their Effects on Sediments Properties

    DTIC Science & Technology

    2007-09-30

if the traditional models adequately parameterize and characterize the actual mixing. As an example of the application of this method, we have... (2) Deterministic Modelling Results. As noted above, we are working on a stochastic method of modelling transient and short-lived tracers... heterogeneity. RELATED PROJECTS: We have worked in collaboration with Peter Jumars (Univ. Maine) and his PhD student Kelley Dorgan, who are measuring

  5. An outline of cellular automaton universe via cosmological KdV equation

    NASA Astrophysics Data System (ADS)

    Christianto, V.; Smarandache, F.; Umniyati, Y.

    2018-03-01

It has been known for a long time that the cosmic sound wave has been there since the early epoch of the Universe. Signatures of its existence abound. However, such a sound wave model of cosmology is rarely developed into a complete framework. This paper can be considered our second attempt towards such a complete description of the Universe, based on the soliton wave solution of the cosmological KdV equation. We then advance this KdV equation further by virtue of the cellular automaton method for solving the PDEs. We submit wholeheartedly Robert Kurucz's hypothesis that the Big Bang should be replaced with a finite cellular automaton universe with no expansion [4][5]. Nonetheless, we are fully aware that our model is far from complete, but the proposed cellular automaton model of the Universe appears very close in spirit to what Konrad Zuse envisaged long ago. It is our hope that the newly proposed method can be verified against observational data. We admit that our model is still in its infancy; more research is needed to fill in all the missing details.

6. Simulation of glioblastoma multiforme (GBM) tumor cells using the Ising model on the Creutz cellular automaton

    NASA Astrophysics Data System (ADS)

    Züleyha, Artuç; Ziya, Merdan; Selçuk, Yeşiltaş; Kemal, Öztürk M.; Mesut, Tez

    2017-11-01

Computational models for tumors face difficulties due to the complexity of tumor nature and the capacities of computational tools; however, these models provide insight into the interactions between a tumor and its microenvironment. Moreover, computational models have the potential to develop strategies for individualized cancer treatments. To observe a solid brain tumor, glioblastoma multiforme (GBM), we present a two-dimensional Ising model applied on a Creutz cellular automaton (CCA). The aim of this study is to analyze avascular spherical solid tumor growth, considering transitions between non-tumor cells and cancer cells to be like phase transitions in a physical system. The Ising model on the CCA algorithm provides a deterministic approach with discrete time steps and local interactions in position space to view tumor growth as a function of time. Our simulation results are given for a fixed tumor radius and are compatible with theoretical and clinical data.

  7. Simulation of miniature endplate potentials in neuromuscular junctions by using a cellular automaton

    NASA Astrophysics Data System (ADS)

    Avella, Oscar Javier; Muñoz, José Daniel; Fayad, Ramón

    2008-01-01

    Miniature endplate potentials are recorded in the neuromuscular junction when the acetylcholine contents of one or a few synaptic vesicles are spontaneously released into the synaptic cleft. Since their discovery by Fatt and Katz in 1952, they have been among the paradigms in neuroscience. Those potentials are usually simulated by means of numerical approaches, such as Brownian dynamics, finite differences and finite element methods. Hereby we propose that diffusion cellular automata can be a useful alternative for investigating them. To illustrate this point, we simulate a miniature endplate potential by using experimental parameters. Our model reproduces the potential shape, amplitude and time course. Since our automaton is able to track the history and interactions of each single particle, it is very easy to introduce non-linear effects with little computational effort. This makes cellular automata excellent candidates for simulating biological reaction-diffusion processes, where no other external forces are involved.

  8. Biomolecular computers with multiple restriction enzymes.

    PubMed

    Sakowski, Sebastian; Krasinski, Tadeusz; Waldmajer, Jacek; Sarnik, Joanna; Blasiak, Janusz; Poplawski, Tomasz

    2017-01-01

The development of conventional, silicon-based computers has several limitations, including some related to the Heisenberg uncertainty principle and the von Neumann "bottleneck". Biomolecular computers based on DNA and proteins are largely free of these disadvantages and, along with quantum computers, are reasonable alternatives to their conventional counterparts in some applications. The idea of a DNA computer proposed by Ehud Shapiro's group at the Weizmann Institute of Science was developed using one restriction enzyme as hardware and DNA fragments (the transition molecules) as software and input/output signals. This computer represented a two-state two-symbol finite automaton that was subsequently extended by using two restriction enzymes. In this paper, we propose the idea of a multistate biomolecular computer with multiple commercially available restriction enzymes as hardware. Additionally, an algorithmic method for the construction of transition molecules in the DNA computer based on the use of multiple restriction enzymes is presented. We use this method to construct multistate, biomolecular, nondeterministic finite automata with four commercially available restriction enzymes as hardware. We also describe an experimental application of this theoretical model to a biomolecular finite automaton made of four endonucleases.
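The two-state two-symbol automaton that Shapiro's group realized biochemically is easy to state abstractly in software (the enzyme/DNA encoding itself is not modelled here). The example program below, checking for an even number of b's, is an illustrative choice of transition table.

```python
def run(delta, state, word, accepting):
    """Run a deterministic finite automaton over a word; return acceptance."""
    for symbol in word:
        state = delta[(state, symbol)]
    return state in accepting

# two states, two symbols: S1 flags "odd number of b's seen so far"
delta = {("S0", "a"): "S0", ("S0", "b"): "S1",
         ("S1", "a"): "S1", ("S1", "b"): "S0"}

print(run(delta, "S0", "abba", accepting={"S0"}))  # True: two b's
print(run(delta, "S0", "ab",   accepting={"S0"}))  # False: one b
```

In the biomolecular realization, each entry of `delta` corresponds to a transition molecule, and the restriction enzyme performs the state update by cleaving the input DNA; adding enzymes is what lets the authors grow the state set.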

  9. Microcanonical model for interface formation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rucklidge, A.; Zaleski, S.

    1988-04-01

    We describe a new cellular automaton model which allows us to simulate separation of phases. The model is an extension of existing cellular automata for the Ising model, such as Q2R. It conserves particle number and presents the qualitative features of spinodal decomposition. The dynamics is deterministic and does not require random number generators. The spins exchange energy with small local reservoirs or demons. The rate of relaxation to equilibrium is investigated, and the results are compared to the Lifshitz-Slyozov theory.

  10. Biomolecular computers with multiple restriction enzymes

    PubMed Central

    Sakowski, Sebastian; Krasinski, Tadeusz; Waldmajer, Jacek; Sarnik, Joanna; Blasiak, Janusz; Poplawski, Tomasz

    2017-01-01

The development of conventional, silicon-based computers has several limitations, including some related to the Heisenberg uncertainty principle and the von Neumann “bottleneck”. Biomolecular computers based on DNA and proteins are largely free of these disadvantages and, along with quantum computers, are reasonable alternatives to their conventional counterparts in some applications. The idea of a DNA computer proposed by Ehud Shapiro’s group at the Weizmann Institute of Science was developed using one restriction enzyme as hardware and DNA fragments (the transition molecules) as software and input/output signals. This computer represented a two-state two-symbol finite automaton that was subsequently extended by using two restriction enzymes. In this paper, we propose the idea of a multistate biomolecular computer with multiple commercially available restriction enzymes as hardware. Additionally, an algorithmic method for the construction of transition molecules in the DNA computer based on the use of multiple restriction enzymes is presented. We use this method to construct multistate, biomolecular, nondeterministic finite automata with four commercially available restriction enzymes as hardware. We also describe an experimental application of this theoretical model to a biomolecular finite automaton made of four endonucleases. PMID:29064510

  11. Ambient Intelligence in Multimedia and Virtual Reality Environments for Rehabilitation

    NASA Astrophysics Data System (ADS)

    Benko, Attila; Cecilia, Sik Lanyi

    This chapter presents a general overview of the use of multimedia and virtual reality in rehabilitation and in assistive and preventive healthcare. It deals with AI-based multimedia and virtual reality applications intended for use by medical doctors, nurses, special-education teachers and other interested persons, and describes how multimedia and virtual reality can assist their work, including how these technologies can support patients' everyday lives and rehabilitation. In the second part of the chapter we present the Virtual Therapy Room (VTR), a realized application for aphasic patients, created for practicing communication and expressing emotions in a group-therapy setting. The VTR shows a room that contains a virtual therapist and four virtual patients (avatars). The avatars use their knowledge base to answer the questions of the user, providing an AI environment for rehabilitation. The user of the VTR is the aphasic patient, who has to solve the exercises. The picture relevant to the current task appears on the virtual blackboard. The patient answers questions of the virtual therapist; the questions concern pictures describing an activity or an object at different levels. The patient can also ask an avatar for the answer. If the avatar knows the answer, its emotion changes from sad to happy. The avatar expresses its emotions in several dimensions: its behaviour, facial mimicry, voice tone and responses all change. The emotion system can be described as a deterministic finite automaton whose states are emotion states and whose transition function is derived from the input-response reaction of an avatar. Natural-language-processing techniques were also implemented in order to establish high-quality human-computer interface windows for each of the avatars, through which aphasic patients can interact with them.
    At the end of the chapter we outline possible future research directions.
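    The avatar emotion system described above can be sketched as a deterministic finite automaton whose states are emotions and whose inputs are interaction outcomes. The state names and transition table below are hypothetical illustrations, not taken from the VTR implementation.

```python
# Illustrative DFA for an avatar emotion system: states are emotions,
# inputs are interaction outcomes. All names here are assumptions.

EMOTION_DFA = {
    ("sad", "knows_answer"): "happy",
    ("sad", "wrong_answer"): "sad",
    ("happy", "knows_answer"): "happy",
    ("happy", "wrong_answer"): "sad",
}

def emotion_after(events, start="sad"):
    """Run the DFA over a sequence of interaction events."""
    state = start
    for event in events:
        state = EMOTION_DFA[(state, event)]
    return state

print(emotion_after(["knows_answer", "wrong_answer"]))  # sad
print(emotion_after(["knows_answer"]))                  # happy
```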

  12. All-DNA finite-state automata with finite memory

    PubMed Central

    Wang, Zhen-Gang; Elbaz, Johann; Remacle, F.; Levine, R. D.; Willner, Itamar

    2010-01-01

    Biomolecular logic devices can be applied for sensing and nano-medicine. We built three DNA tweezers that are activated by the inputs H+/OH-; ; nucleic acid linker/complementary antilinker to yield a 16-state finite-state automaton. The outputs of the automata are the configurations of the respective tweezers (opened or closed), determined by observing fluorescence from a fluorophore/quencher pair at the end of the arms of the tweezers. The system exhibits a memory because each current state and output depend not only on the source configuration but also on past states and inputs. PMID:21135212

  13. Modelling of internal architecture of kinesin nanomotor as a machine language.

    PubMed

    Khataee, H R; Ibrahim, M Y

    2012-09-01

    Kinesin is a protein-based natural nanomotor that transports molecular cargoes within cells by walking along microtubules. Kinesin nanomotor is considered as a bio-nanoagent which is able to sense the cell through its sensors (i.e. its heads and tail), make the decision internally and perform actions on the cell through its actuator (i.e. its motor domain). The study maps the agent-based architectural model of internal decision-making process of kinesin nanomotor to a machine language using an automata algorithm. The applied automata algorithm receives the internal agent-based architectural model of kinesin nanomotor as a deterministic finite automaton (DFA) model and generates a regular machine language. The generated regular machine language was acceptable by the architectural DFA model of the nanomotor and also in good agreement with its natural behaviour. The internal agent-based architectural model of kinesin nanomotor indicates the degree of autonomy and intelligence of the nanomotor interactions with its cell. Thus, our developed regular machine language can model the degree of autonomy and intelligence of kinesin nanomotor interactions with its cell as a language. Modelling of internal architectures of autonomous and intelligent bio-nanosystems as machine languages can lay the foundation towards the concept of bio-nanoswarms and next phases of the bio-nanorobotic systems development.
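    The "regular machine language" generated by a DFA, as in the kinesin model above, can be enumerated by running the automaton over all words up to a length bound. The DFA below is a made-up stand-in, since the abstract does not specify the architectural model's states.

```python
# Sketch: enumerate the regular language accepted by a DFA up to a length
# bound. The DFA here (even number of 'b's) is a hypothetical example.
from itertools import product

def accepted_words(delta, start, accepting, alphabet, max_len):
    words = []
    for n in range(max_len + 1):
        for letters in product(alphabet, repeat=n):
            state = start
            ok = True
            for a in letters:
                state = delta.get((state, a))
                if state is None:          # missing transition: reject
                    ok = False
                    break
            if ok and state in accepting:
                words.append("".join(letters))
    return words

# DFA accepting words with an even number of 'b's.
delta = {("e", "a"): "e", ("e", "b"): "o", ("o", "a"): "o", ("o", "b"): "e"}
print(accepted_words(delta, "e", {"e"}, "ab", 2))  # ['', 'a', 'aa', 'bb']
```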

  14. A novel time series link prediction method: Learning automata approach

    NASA Astrophysics Data System (ADS)

    Moradabadi, Behnaz; Meybodi, Mohammad Reza

    2017-09-01

    Link prediction is a main social-network challenge that uses the network structure to predict future links. Common link prediction approaches use a static graph representation, where a snapshot of the network is analyzed to find hidden or future links. For example, similarity-metric-based link prediction is a traditional approach that calculates a similarity metric for each non-connected link, sorts the links by their similarity metrics, and labels the links with higher similarity scores as future links. Because people's activities in social networks are dynamic and uncertain, and the structure of the networks changes over time, using deterministic graphs for modeling and analysis of the social network may not be appropriate. In the time-series link prediction problem, the time series of link occurrences is used to predict future links. In this paper, we propose a new time-series link prediction method based on learning automata. In the proposed algorithm, for each link that must be predicted there is one learning automaton, and each learning automaton tries to predict the existence or non-existence of the corresponding link. To predict the link occurrence at time T, there is a chain consisting of stages 1 through T - 1, and the learning automaton passes through these stages to learn the existence or non-existence of the corresponding link. Our preliminary link prediction experiments with co-authorship and email networks have provided satisfactory results when time series of link occurrences are considered.
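    The per-link learning automaton idea above can be sketched with a standard linear reward-inaction (L_RI) update over the observed time series of link occurrences: the automaton keeps a probability of choosing "link exists", reinforced whenever its chosen action matches the observation. The update scheme and learning rate here are textbook L_RI, an assumption rather than the authors' exact algorithm.

```python
# Hedged sketch: one two-action learning automaton per candidate link,
# trained with linear reward-inaction (L_RI) on a 0/1 occurrence series.
import random

def predict_link(occurrences, lr=0.2, seed=0):
    """occurrences: list of 0/1 link observations for stages 1..T-1."""
    random.seed(seed)
    p_exist = 0.5                        # probability of action "link exists"
    for obs in occurrences:
        action = 1 if random.random() < p_exist else 0
        if action == obs:                # reward: move toward the chosen action
            if action == 1:
                p_exist += lr * (1.0 - p_exist)
            else:
                p_exist -= lr * p_exist
        # inaction on penalty: probabilities unchanged
    return p_exist

p = predict_link([1] * 50)
print(p)   # drifts above 0.5 as rewards for "link exists" accumulate
```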

  15. The Finite-Size Scaling Relation for the Order-Parameter Probability Distribution of the Six-Dimensional Ising Model

    NASA Astrophysics Data System (ADS)

    Merdan, Ziya; Karakuş, Özlem

    2016-11-01

    The six-dimensional Ising model with nearest-neighbor pair interactions has been simulated and verified numerically on the Creutz Cellular Automaton by using five-bit demons near the infinite-lattice critical temperature with the linear dimensions L=4,6,8,10. The order-parameter probability distribution for the six-dimensional Ising model has been calculated at the critical temperature. The constants of the analytical function have been estimated by fitting to the probability function obtained numerically at the finite-size critical point.

  16. Model Checking Temporal Logic Formulas Using Sticker Automata

    PubMed Central

    Feng, Changwei; Wu, Huanmei

    2017-01-01

    As an important complex problem, the temporal logic model checking problem is still far from being fully resolved under the circumstances of DNA computing, especially for Computation Tree Logic (CTL), Interval Temporal Logic (ITL), and Projection Temporal Logic (PTL), because approaches for DNA model checking are still lacking. To address this challenge, a model checking method is proposed for checking the basic formulas of the above three temporal logic types with DNA molecules. First, one type of single-stranded DNA molecule is employed to encode the Finite State Automaton (FSA) model of the given basic formula, yielding a sticker automaton. Other single-stranded DNA molecules are employed to encode the given system model, yielding the input strings of the sticker automaton. Next, a series of biochemical reactions is conducted between the above two types of single-stranded DNA molecules, after which it can be decided whether or not the system satisfies the formula. As a result, we have developed a DNA-based approach for checking all the basic formulas of CTL, ITL, and PTL. The simulated results demonstrate the effectiveness of the new method. PMID:29119114

  17. Reconfigurability of behavioural specifications for manufacturing systems

    NASA Astrophysics Data System (ADS)

    Schmidt, Klaus Werner

    2017-12-01

    Reconfigurable manufacturing systems (RMS) support flexibility in the product variety and the configuration of the manufacturing system itself in order to enable quick adjustments to new products and production requirements. As a consequence, an essential feature of RMS is their ability to rapidly modify the control strategy during run-time. In this paper, the particular problem of changing the specified operation of a RMS, whose logical behaviour is modelled as a finite state automaton, is addressed. The notion of reconfigurability of specifications (RoS) is introduced and it is shown that the stated reconfiguration problem can be formulated as a controlled language convergence problem. In addition, algorithms for the verification of RoS and the construction of a reconfiguration supervisor are proposed. The supervisor is realised in a modular way which facilitates the extension by new configurations. Finally, it is shown that a supremal nonblocking and controllable strict subautomaton of the plant automaton that fulfils RoS exists in case RoS is violated for the plant automaton itself and an algorithm for the computation of this strict subautomaton is presented. The developed concepts and results are illustrated by a manufacturing cell example.

  18. A cellular automaton - finite volume method for the simulation of dendritic and eutectic growth in binary alloys using an adaptive mesh refinement

    NASA Astrophysics Data System (ADS)

    Dobravec, Tadej; Mavrič, Boštjan; Šarler, Božidar

    2017-11-01

    A two-dimensional model to simulate the dendritic and eutectic growth in binary alloys is developed. A cellular automaton method is adopted to track the movement of the solid-liquid interface. The diffusion equation is solved in the solid and liquid phases by using an explicit finite volume method. The computational domain is divided into square cells that can be hierarchically refined or coarsened using an adaptive mesh based on the quadtree algorithm. Such a mesh refines the regions of the domain near the solid-liquid interface, where the highest concentration gradients are observed. In the regions where the lowest concentration gradients are observed the cells are coarsened. The originality of the work is in the novel, adaptive approach to the efficient and accurate solution of the posed multiscale problem. The model is verified and assessed by comparison with the analytical results of the Lipton-Glicksman-Kurz model for the steady growth of a dendrite tip and the Jackson-Hunt model for regular eutectic growth. Several examples of typical microstructures are simulated and the features of the method as well as further developments are discussed.

  19. Exploring the concept of interaction computing through the discrete algebraic analysis of the Belousov-Zhabotinsky reaction.

    PubMed

    Dini, Paolo; Nehaniv, Chrystopher L; Egri-Nagy, Attila; Schilstra, Maria J

    2013-05-01

    Interaction computing (IC) aims to map the properties of integrable low-dimensional non-linear dynamical systems to the discrete domain of finite-state automata in an attempt to reproduce in software the self-organizing and dynamically stable properties of sub-cellular biochemical systems. As the work reported in this paper is still at the early stages of theory development it focuses on the analysis of a particularly simple chemical oscillator, the Belousov-Zhabotinsky (BZ) reaction. After retracing the rationale for IC developed over the past several years from the physical, biological, mathematical, and computer science points of view, the paper presents an elementary discussion of the Krohn-Rhodes decomposition of finite-state automata, including the holonomy decomposition of a simple automaton, and of its interpretation as an abstract positional number system. The method is then applied to the analysis of the algebraic properties of discrete finite-state automata derived from a simplified Petri net model of the BZ reaction. In the simplest possible and symmetrical case the corresponding automaton is, not surprisingly, found to contain exclusively cyclic groups. In a second, asymmetrical case, the decomposition is much more complex and includes five different simple non-abelian groups whose potential relevance arises from their ability to encode functionally complete algebras. The possible computational relevance of these findings is discussed and possible conclusions are drawn. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  20. Predictability in cellular automata.

    PubMed

    Agapie, Alexandru; Andreica, Anca; Chira, Camelia; Giuclea, Marius

    2014-01-01

    Modelled as finite homogeneous Markov chains, probabilistic cellular automata with local transition probabilities in (0, 1) always possess a stationary distribution. This result alone is not very helpful when it comes to predicting the final configuration; one also needs a formula connecting the probabilities in the stationary distribution to some intrinsic feature of the lattice configuration. Previous results on asynchronous cellular automata have shown that such a feature really exists. It is the number of zero-one borders within the automaton's binary configuration. An exponential formula in the number of zero-one borders has been proved for the 1-D, 2-D and 3-D asynchronous automata with neighbourhood three, five and seven, respectively. We perform computer experiments on a synchronous cellular automaton to check whether the empirical distribution also obeys that theoretical formula. The numerical results indicate a perfect fit for neighbourhood three and five, which opens the way for a rigorous proof of the formula in this new, synchronous case.
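    The "number of zero-one borders" the abstract refers to can be computed directly from a binary configuration; the helper below assumes a ring (periodic) topology, and the exponential stationary weight exp(-c * borders) is shown with an assumed constant c for illustration only.

```python
# Sketch of the intrinsic feature from the abstract: count of zero-one
# borders on a ring, plus an illustrative exponential weight exp(-c * k).
import math

def zero_one_borders(config):
    """Count positions where neighbouring cells differ, on a ring."""
    n = len(config)
    return sum(config[i] != config[(i + 1) % n] for i in range(n))

def stationary_weight(config, c=1.0):
    """Unnormalized weight, assuming an exponential formula with constant c."""
    return math.exp(-c * zero_one_borders(config))

print(zero_one_borders([0, 0, 1, 1, 0, 1]))  # 4
```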

  1. Distribution functions of probabilistic automata

    NASA Technical Reports Server (NTRS)

    Vatan, F.

    2001-01-01

    Each probabilistic automaton M over an alphabet A defines a probability measure Prob_M on the set of all finite and infinite words over A. We can identify a k-letter alphabet A with the set {0, 1, ..., k-1} and hence consider every finite or infinite word w over A as a radix-k expansion of a real number X(w) in the interval [0, 1]. This makes X(w) a random variable, and the distribution function of M is defined as usual: F(x) := Prob_M { w: X(w) < x }. Utilizing the fixed-point semantics (denotational semantics), extended to probabilistic computations, we investigate the distribution functions of probabilistic automata in detail. Automata with continuous distribution functions are characterized. By a new and much easier method, it is shown that the distribution function F(x) is an analytic function if it is a polynomial. Finally, answering a question posed by D. Knuth and A. Yao, we show that a polynomial distribution function F(x) on [0, 1] can be generated by a probabilistic automaton iff all the roots of F'(x) = 0 in this interval, if any, are rational numbers. For this, we define two dynamical systems on the set of polynomial distributions and study attracting fixed points of random compositions of these two systems.
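    The identification used above, reading a word over a k-letter alphabet as a radix-k expansion, is easy to make concrete: a finite word w = w_0 w_1 ... maps to X(w) = sum_i w_i * k^-(i+1) in [0, 1]. This is a small sketch of that mapping only, not of the automaton semantics.

```python
# Sketch: a finite word over {0, ..., k-1} read as a radix-k expansion
# of a real number in [0, 1], as in the abstract's definition of X(w).

def radix_value(word, k):
    x = 0.0
    for i, digit in enumerate(word):
        x += digit * k ** -(i + 1)
    return x

print(radix_value([1, 0, 1], 2))   # 0.625  (binary 0.101)
```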

  2. Towards the simplest hydrodynamic lattice-gas model.

    PubMed

    Boghosian, Bruce M; Love, Peter J; Meyer, David A

    2002-03-15

    It has been known since 1986 that it is possible to construct simple lattice-gas cellular automata whose hydrodynamics are governed by the Navier-Stokes equations in two dimensions. The simplest such model heretofore known has six bits of state per site on a triangular lattice. In this work, we demonstrate that it is possible to construct a model with only five bits of state per site on a Kagome lattice. Moreover, the model has a simple, deterministic set of collision rules and is easily implemented on a computer. In this work, we derive the equilibrium distribution function for this lattice-gas automaton and carry out the Chapman-Enskog analysis to determine the form of the Navier-Stokes equations.

  3. Control of Finite-State, Finite Memory Stochastic Systems

    NASA Technical Reports Server (NTRS)

    Sandell, Nils R.

    1974-01-01

    A generalized problem of stochastic control is discussed in which multiple controllers with different data bases are present. The vehicle for the investigation is the finite state, finite memory (FSFM) stochastic control problem. Optimality conditions are obtained by deriving an equivalent deterministic optimal control problem. A FSFM minimum principle is obtained via the equivalent deterministic problem. The minimum principle suggests the development of a numerical optimization algorithm, the min-H algorithm. The relationship between the sufficiency of the minimum principle and the informational properties of the problem are investigated. A problem of hypothesis testing with 1-bit memory is investigated to illustrate the application of control theoretic techniques to information processing problems.

  4. Quantum Locality, Rings a Bell?: Bell's Inequality Meets Local Reality and True Determinism

    NASA Astrophysics Data System (ADS)

    Sánchez-Kuntz, Natalia; Nahmad-Achar, Eduardo

    2018-01-01

    By assuming a deterministic evolution of quantum systems and taking realism into account, we carefully build a hidden variable theory for Quantum Mechanics (QM) based on the notion of ontological states proposed by 't Hooft (The cellular automaton interpretation of quantum mechanics, arXiv:1405.1548v3, 2015; Springer Open 185, https://doi.org/10.1007/978-3-319-41285-6, 2016). We view these ontological states as the ones embedded with realism and compare them to the (usual) quantum states that represent superpositions, viewing the latter as mere information of the system they describe. Such a deterministic model puts forward conditions for the applicability of Bell's inequality: the usual inequality cannot be applied to the usual experiments. We build a Bell-like inequality that can be applied to the EPR scenario and show that this inequality is always satisfied by QM. In this way we show that QM can indeed have a local interpretation, and thus meet with the causal structure imposed by the Theory of Special Relativity in a satisfying way.

  5. Finite driving rate and anisotropy effects in landslide modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Piegari, E.; Cataudella, V.; Di Maio, R.

    2006-02-15

    In order to characterize landslide frequency-size distributions and identify hazard scenarios and their possible precursors, we investigate a cellular automaton in which the effects of a finite driving rate and of anisotropy are taken into account. The model is able to reproduce observed features of landslide events, such as power-law distributions, as experimentally reported. We analyze the key role of the driving rate and show that, as it is increased, a crossover from power-law to non-power-law behavior occurs. Finally, a systematic investigation of the model on varying its anisotropy factors is performed, and the full diagram of its dynamical behaviors is presented.

  6. Predictability in Cellular Automata

    PubMed Central

    Agapie, Alexandru; Andreica, Anca; Chira, Camelia; Giuclea, Marius

    2014-01-01

    Modelled as finite homogeneous Markov chains, probabilistic cellular automata with local transition probabilities in (0, 1) always possess a stationary distribution. This result alone is not very helpful when it comes to predicting the final configuration; one also needs a formula connecting the probabilities in the stationary distribution to some intrinsic feature of the lattice configuration. Previous results on asynchronous cellular automata have shown that such a feature really exists. It is the number of zero-one borders within the automaton's binary configuration. An exponential formula in the number of zero-one borders has been proved for the 1-D, 2-D and 3-D asynchronous automata with neighbourhood three, five and seven, respectively. We perform computer experiments on a synchronous cellular automaton to check whether the empirical distribution also obeys that theoretical formula. The numerical results indicate a perfect fit for neighbourhood three and five, which opens the way for a rigorous proof of the formula in this new, synchronous case. PMID:25271778

  7. Reversible elementary cellular automaton with rule number 150 and periodic boundary conditions over 𝔽p

    NASA Astrophysics Data System (ADS)

    Martín Del Rey, A.; Rodríguez Sánchez, G.

    2015-03-01

    We study the reversibility of elementary cellular automata with rule number 150 over the finite state set 𝔽p, endowed with periodic boundary conditions. The dynamics of such discrete dynamical systems is characterized by means of characteristic circulant matrices, and their analysis allows us to state that reversibility depends on the number of cells of the cellular space and to explicitly compute the corresponding inverse cellular automata.
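    Rule 150 over 𝔽p with periodic boundaries is linear: each new cell is the sum of its three-cell neighbourhood mod p, i.e. a circulant matrix acting on the configuration, so reversibility reduces to that matrix being invertible mod p. The sketch below checks invertibility by Gaussian elimination over 𝔽p; p is assumed prime (Fermat inverse), and the rank test is a generic linear-algebra check, not the paper's closed-form criterion.

```python
# Sketch: rule 150 over F_p with periodic boundaries, and a reversibility
# check via Gaussian elimination on the circulant update matrix (p prime).

def step_rule150(cells, p):
    n = len(cells)
    return [(cells[i - 1] + cells[i] + cells[(i + 1) % n]) % p for i in range(n)]

def is_reversible(n, p):
    # Circulant matrix of the rule: ones at positions i-1, i, i+1 (mod n).
    M = [[1 if j in ((i - 1) % n, i, (i + 1) % n) else 0 for j in range(n)]
         for i in range(n)]
    for col in range(n):
        pivot = next((r for r in range(col, n) if M[r][col] % p), None)
        if pivot is None:
            return False                    # singular: no pivot in this column
        M[col], M[pivot] = M[pivot], M[col]
        inv = pow(M[col][col], p - 2, p)    # modular inverse, p assumed prime
        for r in range(col + 1, n):
            f = M[r][col] * inv % p
            for c in range(col, n):
                M[r][c] = (M[r][c] - f * M[col][c]) % p
    return True                             # full set of pivots: invertible mod p

print(step_rule150([1, 0, 0, 0], 2))        # [1, 1, 0, 1]
print(is_reversible(3, 2), is_reversible(4, 2))   # False True
```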

  8. Investigation of primary static recrystallization in a NiTiFe shape memory alloy subjected to cold canning compression using the coupling crystal plasticity finite element method with cellular automaton

    NASA Astrophysics Data System (ADS)

    Zhang, Yanqiu; Jiang, Shuyong; Hu, Li; Zhao, Yanan; Sun, Dong

    2017-10-01

    The behavior of primary static recrystallization (SRX) in a NiTiFe shape memory alloy (SMA) subjected to cold canning compression was investigated using the coupling crystal plasticity finite element method (CPFEM) with the cellular automaton (CA) method, where the distribution of the dislocation density and the deformed grain topology quantified by CPFEM were used as the input for the subsequent SRX simulation performed using the CA method. The simulation results were confirmed by the experimental ones in terms of microstructures, average grain size and recrystallization fraction, which indicates that the proposed coupling method is well able to describe the SRX behavior of the NiTiFe SMA. The results show that the dislocation density exhibits an inhomogeneous distribution in the deformed sample and the recrystallization nuclei mainly concentrate on zones where the dislocation density is relatively higher. An increase in the compressive deformation degree leads to an increase in nucleation rate and a decrease in grain boundary spaces in the compression direction, which reduces the growth spaces for the SRX nuclei and impedes their further growth. In addition, both the mechanisms of local grain refinement in the incomplete SRX and the influence of compressive deformation degree on the grain size of SRX were vividly illustrated by the corresponding physical models.

  9. Playing Tic-Tac-Toe with a Sugar-Based Molecular Computer.

    PubMed

    Elstner, M; Schiller, A

    2015-08-24

    Today, molecules can perform Boolean operations and form logic circuits of considerable complexity. However, the concatenation of logic gates and the handling of inhomogeneous inputs and outputs are still challenging tasks. Novel approaches for logic gate integration become possible when chemical programming and software programming are combined. Here it is shown that a molecular finite automaton based on the concatenated implication function (IMP) of a fluorescent two-component sugar probe, via a wiring algorithm, is able to play tic-tac-toe.

  10. Programmable and autonomous computing machine made of biomolecules

    PubMed Central

    Benenson, Yaakov; Paz-Elizur, Tamar; Adar, Rivka; Keinan, Ehud; Livneh, Zvi; Shapiro, Ehud

    2013-01-01

    Devices that convert information from one form into another according to a definite procedure are known as automata. One such hypothetical device is the universal Turing machine [1], which stimulated work leading to the development of modern computers. The Turing machine and its special cases [2], including finite automata [3], operate by scanning a data tape, whose striking analogy to information-encoding biopolymers inspired several designs for molecular DNA computers [4-8]. Laboratory-scale computing using DNA and human-assisted protocols has been demonstrated [9-15], but the realization of computing devices operating autonomously on the molecular scale remains rare [16-20]. Here we describe a programmable finite automaton comprising DNA and DNA-manipulating enzymes that solves computational problems autonomously. The automaton’s hardware consists of a restriction nuclease and ligase, the software and input are encoded by double-stranded DNA, and programming amounts to choosing appropriate software molecules. Upon mixing solutions containing these components, the automaton processes the input molecule via a cascade of restriction, hybridization and ligation cycles, producing a detectable output molecule that encodes the automaton’s final state, and thus the computational result. In our implementation 10^12 automata sharing the same software run independently and in parallel on inputs (which could, in principle, be distinct) in 120 μl solution at room temperature at a combined rate of 10^9 transitions per second with a transition fidelity greater than 99.8%, consuming less than 10^-10 W. PMID:11719800

  11. Deep Packet/Flow Analysis using GPUs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gong, Qian; Wu, Wenji; DeMar, Phil

    Deep packet inspection (DPI) faces severe performance challenges in high-speed networks (40/100 GE), as it requires a large amount of raw computing power and high I/O throughput. Recently, researchers have tentatively used GPUs to address these issues and boost the performance of DPI. Typically, DPI applications involve highly complex operations at both the per-packet and per-flow data level, often in real time. The parallel architecture of GPUs fits exceptionally well for per-packet network traffic processing. However, for stateful network protocols such as TCP, the data stream needs to be reconstructed at a per-flow level to deliver a consistent content analysis. Since the flow-centric operations are naturally anti-parallel and often require large memory space for buffering out-of-sequence packets, they can be problematic for GPUs, whose memory is normally limited to several gigabytes. In this work, we present a highly efficient GPU-based deep packet/flow analysis framework. The proposed design includes purely GPU-implemented flow tracking and TCP stream reassembly. Instead of buffering and waiting for TCP packets to become in-sequence, our framework processes the packets in batches and uses a deterministic finite automaton (DFA) with a prefix-/suffix-tree method to detect patterns across out-of-sequence packets that happen to be located in different batches. Evaluation shows that our code can reassemble and forward tens of millions of packets per second and conduct stateful signature-based deep packet inspection at 55 Gbit/s using an NVIDIA K40 GPU.
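    The key idea above, matching patterns across packet batches without buffering a fully reassembled stream, can be sketched by a DFA matcher that carries its current state between chunks, so a signature split across two batches is still found. The single-pattern KMP-style DFA below is a simplification; a real DPI engine would use a multi-pattern automaton such as Aho-Corasick, and the pattern shown is hypothetical.

```python
# Sketch: a DFA string matcher whose state survives across chunk boundaries,
# illustrating cross-batch pattern detection. KMP failure function as DFA.

def build_fail(pattern):
    """KMP failure table: fail[i] = length of longest proper border of p[:i+1]."""
    fail = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    return fail

class StreamMatcher:
    def __init__(self, pattern):
        self.pattern, self.fail, self.state = pattern, build_fail(pattern), 0

    def feed(self, chunk):
        """Process one batch; returns True if the pattern completed in it."""
        found = False
        for ch in chunk:
            while self.state and ch != self.pattern[self.state]:
                self.state = self.fail[self.state - 1]
            if ch == self.pattern[self.state]:
                self.state += 1
            if self.state == len(self.pattern):
                found = True
                self.state = self.fail[self.state - 1]
        return found

m = StreamMatcher("attack")
print(m.feed("xxatt"), m.feed("ackyy"))   # False True -- match spans batches
```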

  12. A detailed experimental study of a DNA computer with two endonucleases.

    PubMed

    Sakowski, Sebastian; Krasiński, Tadeusz; Sarnik, Joanna; Blasiak, Janusz; Waldmajer, Jacek; Poplawski, Tomasz

    2017-07-14

    Great advances in biotechnology have allowed the construction of a computer from DNA. One of the proposed solutions is a biomolecular finite automaton, a simple two-state DNA computer without memory, which was presented by Ehud Shapiro's group at the Weizmann Institute of Science. The main problem with this computer, in which biomolecules carry out logical operations, is increasing its complexity, i.e. the number of states of the biomolecular automaton. In this study, we constructed (in laboratory conditions) a six-state DNA computer that uses two endonucleases (e.g. AcuI and BbvI) and a ligase. We present a detailed experimental verification of its feasibility, and describe the effect of the number of states, the length of the input data, and nondeterminism on the computing process. We also tested different automata (with three, four, and six states) running on various accepted input words of different lengths, such as ab, aab, aaab, ababa, and on an unaccepted word ba. Moreover, this article presents the reaction optimization and methods of eliminating certain biochemical problems occurring in the implementation of a biomolecular DNA automaton based on two endonucleases.

  13. Cellular automata models for diffusion of information and highway traffic flow

    NASA Astrophysics Data System (ADS)

    Fuks, Henryk

    In the first part of this work we study a family of deterministic models for highway traffic flow which generalize cellular automaton rule 184. This family is parameterized by the speed limit m and another parameter k that represents the degree of 'anticipatory driving'. We compare two driving strategies with identical maximum throughput: 'conservative' driving with a high speed limit and 'anticipatory' driving with a low speed limit. These two strategies are evaluated in terms of accident probability. We also discuss fundamental diagrams of generalized traffic rules and examine limitations of the maximum achievable throughput. Possible modifications of the model are considered. For rule 184, we present exact calculations of the order parameter in the transition from the moving phase to the jammed phase using the method of preimage counting, and use this result to construct a solution to the density classification problem. In the second part we propose a probabilistic cellular automaton model for the spread of innovations, rumors, news, etc., in a social system. We start from simple deterministic models, for which exact expressions for the density of adopters are derived. For a more realistic model, based on probabilistic cellular automata, we study the influence of the range of interaction R on the shape of the adoption curve. When the probability of adoption is proportional to the local density of adopters, and individuals can drop the innovation with some probability p, the system exhibits a second-order phase transition. The critical line separating regions of parameter space in which the asymptotic density of adopters is positive from the region where it is equal to zero converges toward the mean-field line as the range of the interaction increases. In the region between the R=1 critical line and the mean-field line, the asymptotic density of adopters depends on R, becoming zero if R is too small (smaller than some critical value).
    This result demonstrates the importance of connectivity in the diffusion of information. We also define a new class of automata networks which incorporates non-local interactions, and discuss its applicability in modeling the diffusion of innovations.
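    Rule 184, the base of the traffic models discussed above, has a simple update: a cell becomes occupied if it was empty with an occupied left neighbour (a car moves in), or if it was occupied with an occupied right neighbour (the car ahead blocks it). The sketch below is the plain rule-184 step on a ring, not the generalized (m, k) family from the work.

```python
# Sketch: one synchronous step of cellular automaton rule 184 on a ring
# (a circular road); 1 = car, 0 = empty cell. Car number is conserved.

def rule184_step(road):
    n = len(road)
    new = []
    for i in range(n):
        left, me, right = road[(i - 1) % n], road[i], road[(i + 1) % n]
        new.append(1 if (me == 0 and left == 1) or (me == 1 and right == 1) else 0)
    return new

road = [1, 1, 0, 0, 1, 0]
print(rule184_step(road))   # [1, 0, 1, 0, 0, 1] -- cars advance into free cells
```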

  14. Standoff Sensing of Electronic Systems

    DTIC Science & Technology

    2011-03-12

    ... called the value function. Sondik (1978) showed that, for a finite-transient deterministic policy, there exists a Markov partition B = B1 ∪ B2 ... Sondik noted that an arbitrary policy Π is not likely to be finite-transient, and for it one can only construct a partition

  15. Finite-size effects and switching times for Moran process with mutation.

    PubMed

    DeVille, Lee; Galiardi, Meghan

    2017-04-01

    We consider the Moran process with two populations competing under an iterated Prisoner's Dilemma in the presence of mutation, and concentrate on the case where there are multiple evolutionarily stable strategies. We perform a complete bifurcation analysis of the deterministic system which arises in the infinite-population-size limit. We also study the Master equation and obtain asymptotics for the invariant distribution and metastable switching times for the stochastic process in the case of a large but finite population. We also show that the stochastic system has asymmetries in the form of a skew for parameter values where the deterministic limit is symmetric.
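The finite-population dynamics studied here are a birth-death chain. The following is an illustrative single Moran step with frequency-dependent fitness from a generic positive 2x2 payoff matrix and offspring mutation; the payoff values and the iterated-game details of the paper are assumptions, not its actual model:

```python
def moran_step(i, N, payoff, mu, rng):
    """One birth-death step of the Moran process with mutation.

    i: current number of type-A individuals (0..N).
    payoff[x][y]: payoff to type x against type y (assumed positive).
    mu: probability that the offspring mutates to the other type."""
    # frequency-dependent fitness: average payoff against a random co-player
    fA = (payoff[0][0] * max(i - 1, 0) + payoff[0][1] * (N - i)) / (N - 1)
    fB = (payoff[1][0] * i + payoff[1][1] * max(N - i - 1, 0)) / (N - 1)
    # birth chosen proportionally to total fitness, death chosen uniformly
    p_birth_A = i * fA / (i * fA + (N - i) * fB)
    child_is_A = rng.random() < p_birth_A
    if rng.random() < mu:
        child_is_A = not child_is_A
    dies_A = rng.random() < i / N
    return i + (1 if child_is_A else 0) - (1 if dies_A else 0)
```

Iterating this step from a fixed seed gives the sample paths whose metastable switching times the Master-equation asymptotics describe.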

  16. Bi-SOC-states in one-dimensional random cellular automaton

    NASA Astrophysics Data System (ADS)

    Czechowski, Zbigniew; Budek, Agnieszka; Białecki, Mariusz

    2017-10-01

    Two statistically stationary states with power-law scaling of avalanches are found in a simple 1D cellular automaton. Features of the fixed points, the spiral saddle and the saddle with index 1, are investigated. The migration of states of the automaton between these two self-organized criticality states during the evolution of the system is demonstrated in computer simulations. The automaton, being a slowly driven system, can be applied as a toy model of earthquake supercycles.

  17. Quantum cloning by cellular automata

    NASA Astrophysics Data System (ADS)

    D'Ariano, G. M.; Macchiavello, C.; Rossi, M.

    2013-03-01

    We introduce a quantum cellular automaton that achieves approximate phase-covariant cloning of qubits. The automaton is optimized for 1→2N economical cloning. The use of the automaton for cloning allows us to exploit different foliations for improving the performance with given resources.

  18. Cellular Automaton Study of Hydrogen Porosity Evolution Coupled with Dendrite Growth During Solidification in the Molten Pool of Al-Cu Alloys

    NASA Astrophysics Data System (ADS)

    Gu, Cheng; Wei, Yanhong; Yu, Fengyi; Liu, Xiangbo; She, Lvbo

    2017-09-01

    Welding porosity defects significantly reduce the mechanical properties of welded joints. In this paper, the hydrogen porosity evolution coupled with dendrite growth during solidification in the molten pool of Al-4.0 wt pct Cu alloy was modeled and simulated. Three phases, including a liquid phase, a solid phase, and a gas phase, were considered in this model. The growth of dendrites and hydrogen gas pores was reproduced using a cellular automaton (CA) approach. The diffusion of solute and hydrogen was calculated using the finite difference method (FDM). Columnar and equiaxed dendrite growth with porosity evolution were simulated. Competitive growth between different dendrites and porosities was observed. Dendrite morphology was influenced by porosity formation near dendrites. After solidification, when porosities were surrounded by dendrites, they could not escape from the liquid, forming pores that remained in the welded joints. As the cooling rate increased, the average diameter of the porosities decreased and their average number increased. The average diameter and number of porosities in the simulation results showed the same trend as the experimental results.

  19. Numerical simulation of biofilm growth in flow channels using a cellular automaton approach coupled with a macro flow computation.

    PubMed

    Yamamoto, Takehiro; Ueda, Shuya

    2013-01-01

    A biofilm is a slime-like complex aggregate of microorganisms and their products, extracellular polymeric substances, that grows on a solid surface. Biofilm growth is relevant to the corrosion and clogging of water pipes, chemical processes in bioreactors, and bioremediation. In these phenomena, the behavior of the biofilm under flow plays an important role; therefore, controlling the biofilm behavior in each process is important. To provide a computational tool for analyzing biofilm growth, the present study proposes a computational model for the simulation of biofilm growth in flows. This model accounts for the growth, decay, detachment and adhesion of biofilms. The proposed model couples the computation of the surrounding fluid flow, using the finite volume method, with the simulation of biofilm growth, using the cellular automaton approach, a relatively low-computational-cost method. Furthermore, a stochastic approach for modeling the adhesion process is proposed. Numerical simulations of biofilm growth on a planar wall and in an L-shaped rectangular channel were carried out. A variety of biofilm structures were observed depending on the strength of the flow. Moreover, the importance of the detachment and adhesion processes was confirmed.

  20. CAFE simulation of columnar-to-equiaxed transition in Al-7wt%Si alloys directionally solidified under microgravity

    NASA Astrophysics Data System (ADS)

    Liu, D. R.; Mangelinck-Noël, N.; Gandin, Ch-A.; Zimmermann, G.; Sturz, L.; Nguyen Thi, H.; Billia, B.

    2016-03-01

    A two-dimensional multi-scale cellular automaton - finite element (CAFE) model is used to simulate grain structure evolution and microsegregation formation during solidification of refined Al-7wt%Si alloys under microgravity. The CAFE simulations are first qualitatively compared with the benchmark experimental data under microgravity. Qualitative agreement is obtained for the position of columnar to equiaxed transition (CET) and the CET transition mode (sharp or progressive). Further comparisons of the distributions of grain elongation factor and equivalent diameter are conducted and reveal a fair quantitative agreement.

  1. Multi-layer composite mechanical modeling for the inhomogeneous biofilm mechanical behavior.

    PubMed

    Wang, Xiaoling; Han, Jingshi; Li, Kui; Wang, Guoqing; Hao, Mudong

    2016-08-01

    Experiments have shown that bacterial biofilms are heterogeneous: for example, the density, the diffusion coefficient, and the mechanical properties differ along the biofilm thickness. In this paper, we establish a multi-layer composite model to describe the mechanical inhomogeneity of the biofilm based on the unified multiple-component cellular automaton (UMCCA) model. Using our model, we develop a finite element simulation procedure for the biofilm tension experiment. The failure limit and biofilm extension displacement obtained from our model agree well with experimental measurements. This method provides an alternative theory for studying mechanical inhomogeneity in biological materials.

  2. Game of life on phyllosilicates: Gliders, oscillators and still life

    NASA Astrophysics Data System (ADS)

    Adamatzky, Andrew

    2013-10-01

    A phyllosilicate is a sheet of silicate tetrahedra bound by basal oxygens. A phyllosilicate automaton is a regular network of finite state machines - silicon nodes and oxygen nodes - which mimics the structure of the phyllosilicate. Each node takes state 0 or 1 and updates its state in discrete time depending on the sum of the states of its three (silicon) or six (oxygen) neighbours. Phyllosilicate automata exhibit localisations attributed to Conway's Game of Life: gliders, oscillators, still lifes, and a glider gun. Configurations and behaviour of typical localisations, and interactions between the localisations, are illustrated.
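The node update described above (binary states, dependence only on the sum of 3 or 6 neighbour states) is a totalistic rule on a graph and can be sketched generically; the `born`/`survive` sets below are illustrative placeholders, not the intervals used in the paper:

```python
def totalistic_step(states, neighbours, born, survive):
    """One synchronous update of a binary totalistic automaton on an
    arbitrary graph: a node in state 0 switches on iff its neighbour
    sum lies in `born`; a node in state 1 stays on iff the sum lies
    in `survive`.

    states: dict node -> 0/1; neighbours: dict node -> adjacent nodes
    (three for a silicon node, six for an oxygen node)."""
    new = {}
    for v, s in states.items():
        total = sum(states[u] for u in neighbours[v])
        new[v] = int(total in (survive if s else born))
    return new
```

Gliders and oscillators are then simply orbits of this map on the phyllosilicate adjacency structure.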

  3. Parameterizing by the Number of Numbers

    NASA Astrophysics Data System (ADS)

    Fellows, Michael R.; Gaspers, Serge; Rosamond, Frances A.

    The usefulness of parameterized algorithmics has often depended on what Niedermeier has called "the art of problem parameterization". In this paper we introduce and explore a novel but general form of parameterization: the number of numbers. Several classic numerical problems, such as Subset Sum, Partition, 3-Partition, Numerical 3-Dimensional Matching, and Numerical Matching with Target Sums, have multisets of integers as input. We initiate the study of parameterizing these problems by the number of distinct integers in the input. We rely on an FPT result for Integer Linear Programming Feasibility to show that all the above-mentioned problems are fixed-parameter tractable when parameterized in this way. In various applied settings, problem inputs often consist in part of multisets of integers or multisets of weighted objects (such as edges in a graph, or jobs to be scheduled). Such number-of-numbers parameterized problems often reduce to subproblems about transition systems of various kinds, parameterized by the size of the system description. We consider several core problems of this kind relevant to number-of-numbers parameterization. Our main hardness result considers the problem: given a non-deterministic Mealy machine M (a finite state automaton outputting a letter on each transition), an input word x, and a census requirement c for the output word specifying how many times each letter of the output alphabet should be written, decide whether there exists a computation of M reading x that outputs a word y that meets the requirement c. We show that this problem is hard for W[1]. If the question is whether there exists an input word x such that a computation of M on x outputs a word that meets c, the problem becomes fixed-parameter tractable.
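The census problem in the hardness result can be stated operationally. A brute-force (exponential-time) checker consistent with the problem definition above might look like this; the state and letter names are illustrative, and no FPT algorithm is attempted since the paper shows the problem is W[1]-hard:

```python
from collections import Counter

def meets_census(delta, start, word, census):
    """Does some run of a nondeterministic Mealy machine on `word`
    output each letter exactly census[letter] times?

    delta: dict (state, input_letter) -> iterable of
           (next_state, output_letter) pairs."""
    target = Counter(census)

    def dfs(state, i, counts):
        if i == len(word):
            return +counts == +target   # unary + drops zero entries
        for nxt, out in delta.get((state, word[i]), ()):
            counts[out] += 1
            if counts[out] <= target[out] and dfs(nxt, i + 1, counts):
                return True
            counts[out] -= 1
        return False

    return dfs(start, 0, Counter())
```

The fixed-parameter tractable variant mentioned at the end (existence of some input word x) changes the search space but not this basic run-checking structure.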

  4. Reliable Cellular Automata with Self-Organization

    NASA Astrophysics Data System (ADS)

    Gács, Peter

    2001-04-01

    In a probabilistic cellular automaton in which all local transitions have positive probability, the problem of keeping a bit of information indefinitely is nontrivial, even in an infinite automaton. Still, there is a solution in 2 dimensions, and this solution can be used to construct a simple 3-dimensional discrete-time universal fault-tolerant cellular automaton. This technique does not help much to solve the following problems: remembering a bit of information in 1 dimension; computing in dimensions lower than 3; computing in any dimension with non-synchronized transitions. Our more complex technique organizes the cells in blocks that perform a reliable simulation of a second (generalized) cellular automaton. The cells of the latter automaton are also organized in blocks, simulating even more reliably a third automaton, etc. Since all this (a possibly infinite hierarchy) is organized in "software," it must be under repair all the time from damage caused by errors. A large part of the problem is essentially self-stabilization recovering from a mess of arbitrary size and content. The present paper constructs an asynchronous one-dimensional fault-tolerant cellular automaton, with the further feature of "self-organization." The latter means that unless a large amount of input information must be given, the initial configuration can be chosen homogeneous.

  5. A hybrid finite-element and cellular-automaton framework for modeling 3D microstructure of Ti–6Al–4V alloy during solid–solid phase transformation in additive manufacturing

    NASA Astrophysics Data System (ADS)

    Chen, Shaohua; Xu, Yaopengxiao; Jiao, Yang

    2018-06-01

    Additive manufacturing techniques such as selective laser sintering and electron beam melting have become popular, enabling one to build near-net-shape products from packed powders. The performance and properties of the manufactured product strongly depend on its material microstructure, which is in turn determined by the processing conditions, including beam power density, spot size, scanning speed, and path. In this paper, we develop a computational framework that integrates the finite element method (FEM) and cellular automaton (CA) simulation to model the 3D microstructure of additively manufactured Ti–6Al–4V alloy, focusing on the β → α + β transition pathway in a consolidated alloy region as the power source moves away from this region. Specifically, the transient temperature field resulting from a scanning laser/electron beam following a zig-zag path is first obtained by solving nonlinear heat transfer equations using the FEM. Next, a CA model for the β → α + β phase transformation in the consolidated alloy is developed which explicitly takes into account the temperature-dependent heterogeneous nucleation and anisotropic growth of α grains from the parent β phase field. We verify our model by reproducing the overall transition kinetics predicted by the Johnson–Mehl–Avrami–Kolmogorov theory under a typical processing condition and by quantitatively comparing our simulation results with available experimental data. The utility of the model is further demonstrated by generating large-field realistic 3D alloy microstructures for subsequent structure-sensitive micro-mechanical analysis. In addition, we employ our model to generate a wide spectrum of alloy microstructures corresponding to different processing conditions for establishing quantitative process-structure relations for the system.

  6. Counterfactuals cannot count: a rejoinder to David Chalmers.

    PubMed

    Bishop, Mark

    2002-12-01

    The initial argument presented herein is not significantly original--it is a simple reflection upon a notion of computation originally developed by Putnam (Putnam 1988; see also Searle, 1990) and criticised by Chalmers et al. (Chalmers, 1994; 1996a, b; see also the special issue, What is Computation?, in Minds and Machines, 4:4, November 1994). In what follows, instead of seeking to justify Putnam's conclusion that every open system implements every Finite State Automaton (FSA) and hence that psychological states of the brain cannot be functional states of a computer, I will establish the weaker result that, over a finite time window every open system implements the trace of FSA Q, as it executes program (P) on input (I). If correct the resulting bold philosophical claim is that phenomenal states--such as feelings and visual experiences--can never be understood or explained functionally. Copyright 2002 Elsevier Science (USA)

  7. The MATCHIT Automaton: Exploiting Compartmentalization for the Synthesis of Branched Polymers

    PubMed Central

    Weyland, Mathias S.; Fellermann, Harold; Hadorn, Maik; Sorek, Daniel; Lancet, Doron; Rasmussen, Steen; Füchslin, Rudolf M.

    2013-01-01

    We propose an automaton, a theoretical framework that demonstrates how to improve the yield of the synthesis of branched chemical polymer reactions. This is achieved by separating substeps of the path of synthesis into compartments. We use chemical containers (chemtainers) to carry the substances through a sequence of fixed successive compartments. We describe the automaton in mathematical terms and show how it can be configured automatically in order to synthesize a given branched polymer target. The algorithm we present finds an optimal path of synthesis in linear time. We discuss how the automaton models compartmentalized structures found in cells, such as the endoplasmic reticulum and the Golgi apparatus, and we show how this compartmentalization can be exploited for the synthesis of branched polymers such as oligosaccharides. Lastly, we show examples of artificial branched polymers and discuss how the automaton can be configured to synthesize them with maximal yield. PMID:24489601

  8. A stochastic-field description of finite-size spiking neural networks

    PubMed Central

    Longtin, André

    2017-01-01

    Neural network dynamics are governed by the interaction of spiking neurons. Stochastic aspects of single-neuron dynamics propagate up to the network level and shape the dynamical and informational properties of the population. Mean-field models of population activity disregard the finite-size stochastic fluctuations of network dynamics and thus offer a deterministic description of the system. Here, we derive a stochastic partial differential equation (SPDE) describing the temporal evolution of the finite-size refractory density, which represents the proportion of neurons in a given refractory state at any given time. The population activity—the density of active neurons per unit time—is easily extracted from this refractory density. The SPDE includes finite-size effects through a two-dimensional Gaussian white noise that acts both in time and along the refractory dimension. For an infinite number of neurons the standard mean-field theory is recovered. A discretization of the SPDE along its characteristic curves allows direct simulations of the activity of large but finite spiking networks; this constitutes the main advantage of our approach. Linearizing the SPDE with respect to the deterministic asynchronous state allows the theoretical investigation of finite-size activity fluctuations. In particular, analytical expressions for the power spectrum and autocorrelation of activity fluctuations are obtained. Moreover, our approach can be adapted to incorporate multiple interacting populations and quasi-renewal single-neuron dynamics. PMID:28787447

  9. Efficient Algorithms for Handling Nondeterministic Automata

    NASA Astrophysics Data System (ADS)

    Vojnar, Tomáš

    Finite (word, tree, or omega) automata play an important role in different areas of computer science, including, for instance, formal verification. Often, deterministic automata are used, for which traditional algorithms for important operations such as minimisation and inclusion checking are available. However, the use of deterministic automata implies a need to determinise nondeterministic automata that often arise during various computations even when the computations start with deterministic automata. Unfortunately, determinisation is a very expensive step since deterministic automata may be exponentially bigger than the original nondeterministic automata. That is why it appears advantageous to avoid determinisation and work directly with nondeterministic automata. This, however, brings a need to be able to implement operations traditionally done on deterministic automata on nondeterministic automata instead. In particular, this is the case of inclusion checking and minimisation (or rather reduction of the size of automata). In the talk, we review several recently proposed techniques for inclusion checking on nondeterministic finite word and tree automata as well as Büchi automata. These techniques are based on using so-called antichains, possibly combined with a use of suitable simulation relations (and, in the case of Büchi automata, the so-called Ramsey-based or rank-based approaches). Further, we discuss techniques for reducing the size of nondeterministic word and tree automata using quotienting based on the recently proposed notion of mediated equivalences. The talk is based on several common works with Parosh Aziz Abdulla, Ahmed Bouajjani, Yu-Fang Chen, Peter Habermehl, Lisa Kaati, Richard Mayr, Tayssir Touili, Lorenzo Clemente, Lukáš Holík, and Chih-Duo Hong.
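The antichain idea for word automata can be sketched as follows: to test L(A) ⊆ L(B), explore pairs of an A-state and a set of B-states reachable by the same word, and discard any pair whose B-set contains an already-explored B-set for the same A-state. This is a simplified illustration only, without the simulation-relation refinements mentioned in the talk:

```python
from collections import deque

def nfa_inclusion(A, B):
    """Antichain-style test of L(A) subset-of L(B) for finite word automata.

    Each automaton is a dict with keys 'init', 'final' (sets of states),
    'delta' (dict (state, letter) -> set of states) and 'alphabet'."""

    def post(states, delta, letter):
        out = set()
        for q in states:
            out |= delta.get((q, letter), set())
        return frozenset(out)

    seen = {}  # a_state -> antichain of minimal explored B-sets
    frontier = deque((a, frozenset(B['init'])) for a in A['init'])
    while frontier:
        a, bs = frontier.popleft()
        if a in A['final'] and not (bs & B['final']):
            return False  # some word is accepted by A but by no run of B
        if any(old <= bs for old in seen.get(a, [])):
            continue      # subsumed: a smaller B-set was already explored
        seen.setdefault(a, []).append(bs)
        for letter in A['alphabet']:
            bs2 = post(bs, B['delta'], letter)
            for a2 in A['delta'].get((a, letter), set()):
                frontier.append((a2, bs2))
    return True
```

The subsumption test is what avoids building the full (exponential) determinisation of B in many practical cases.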

  10. Symbolic Computation Using Cellular Automata-Based Hyperdimensional Computing.

    PubMed

    Yilmaz, Ozgur

    2015-12-01

    This letter introduces a novel framework of reservoir computing that is capable of both connectionist machine intelligence and symbolic computation. A cellular automaton is used as the reservoir of dynamical systems. Input is randomly projected onto the initial conditions of automaton cells, and nonlinear computation is performed on the input via application of a rule in the automaton for a period of time. The evolution of the automaton creates a space-time volume of the automaton state space, and it is used as the reservoir. The proposed framework is shown to be capable of long-term memory, and it requires orders of magnitude less computation compared to echo state networks. As the focus of the letter, we suggest that binary reservoir feature vectors can be combined using Boolean operations as in hyperdimensional computing, paving a direct way for concept building and symbolic processing. To demonstrate the capability of the proposed system, we make analogies directly on image data by asking, What is the automobile of air?
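The reservoir construction can be sketched with an elementary CA: project the input onto the initial cells, evolve under a rule, and use the flattened space-time volume as a binary feature vector, with XOR as a hyperdimensional-style binding operation. Rule 90 and the depth below are illustrative choices, not the letter's settings:

```python
def eca_step(state, rule):
    """One step of an elementary cellular automaton (periodic boundary):
    the rule bit indexed by the 3-bit neighbourhood gives the new cell."""
    n = len(state)
    return [(rule >> (state[(i - 1) % n] * 4 + state[i] * 2
                      + state[(i + 1) % n])) & 1
            for i in range(n)]

def reservoir_features(init, rule=90, steps=8):
    """Flattened space-time volume of the CA evolution, used as the
    reservoir feature vector."""
    state, volume = list(init), []
    for _ in range(steps):
        state = eca_step(state, rule)
        volume.extend(state)
    return volume

def bind(x, y):
    """Hyperdimensional-style binding of two binary vectors via XOR."""
    return [a ^ b for a, b in zip(x, y)]
```

Binding a vector with itself yields the zero vector, the usual self-inverse property exploited in hyperdimensional computing.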

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bisio, Alessandro; D’Ariano, Giacomo Mauro; Tosini, Alessandro, E-mail: alessandro.tosini@unipv.it

    We present a quantum cellular automaton model in one space-dimension which has the Dirac equation as emergent. This model, a discrete-time and causal unitary evolution of a lattice of quantum systems, is derived from the assumptions of homogeneity, parity and time-reversal invariance. The comparison between the automaton and the Dirac evolutions is rigorously set as a discrimination problem between unitary channels. We derive an exact lower bound for the probability of error in the discrimination as an explicit function of the mass, the number and the momentum of the particles, and the duration of the evolution. Computing this bound with experimentally achievable values, we see that in that regime the QCA model cannot be discriminated from the usual Dirac evolution. Finally, we show that the evolution of one-particle states with narrow-band in momentum can be efficiently simulated by a dispersive differential equation for any regime. This analysis allows for a comparison with the dynamics of wave-packets as it is described by the usual Dirac equation. This paper is a first step in exploring the idea that quantum field theory could be grounded on a more fundamental quantum cellular automaton model and that physical dynamics could emerge from quantum information processing. In this framework, the discretization is a central ingredient and not only a tool for performing non-perturbative calculation as in lattice gauge theory. The automaton model, endowed with a precise notion of local observables and a full probabilistic interpretation, could lead to a coherent unification of a hypothetical discrete Planck scale with the usual Fermi scale of high-energy physics. - Highlights: • The free Dirac field in one space dimension as a quantum cellular automaton. • Large scale limit of the automaton and the emergence of the Dirac equation. • Dispersive differential equation for the evolution of smooth states on the automaton. • Optimal discrimination between the automaton evolution and the Dirac equation.

  12. Probabilistic Finite Element Analysis & Design Optimization for Structural Designs

    NASA Astrophysics Data System (ADS)

    Deivanayagam, Arumugam

    This study focuses on incorporating the probabilistic nature of material properties (Kevlar® 49) into the existing deterministic finite element analysis (FEA) of a fabric-based engine containment system through Monte Carlo simulations (MCS), and on implementing probabilistic analysis in engineering design through Reliability Based Design Optimization (RBDO). First, the emphasis is on experimental data analysis, focusing on probabilistic distribution models that characterize the randomness associated with the experimental data. The material properties of Kevlar® 49 are modeled using experimental data analysis and implemented along with an existing spiral modeling scheme (SMS) and a user-defined constitutive model (UMAT) for fabric-based engine containment simulations in LS-DYNA. MCS of the model are performed to observe the failure pattern and exit velocities of the models. The solutions are then compared with NASA experimental tests and deterministic results. MCS with probabilistic material data give a better perspective on the results than a single deterministic simulation. The next part of the research is to implement the probabilistic material properties in engineering designs. The main aim of structural design is to obtain optimal solutions. However, in a deterministic optimization problem, even though the structure is cost-effective, it can be highly unreliable if the uncertainty that may be associated with the system (material properties, loading, etc.) is not represented or considered in the solution process. A reliable and optimal solution can be obtained by performing reliability optimization along with deterministic optimization, which is RBDO. In the RBDO problem formulation, in addition to structural performance constraints, reliability constraints are also considered. This part of the research starts with an introduction to reliability analysis, such as first-order and second-order reliability analysis, followed by simulation techniques performed to obtain the probability of failure and reliability of structures. Next, a decoupled RBDO procedure is proposed with a new reliability analysis formulation with sensitivity analysis, which is performed to remove the highly reliable constraints in the RBDO, thereby reducing the computational time and function evaluations. This is followed by implementation of the reliability analysis concepts and RBDO in finite element 2D truss problems and a planar beam problem, which are presented and discussed.
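The probability of failure that the reliability constraints bound can be estimated by crude Monte Carlo sampling, as in the MCS part of the study; the limit-state function and distribution below are toy placeholders, not the Kevlar® 49 models:

```python
import random

def failure_probability(limit_state, sample, n=100_000, seed=1):
    """Crude Monte Carlo estimate of the probability of failure:
    the fraction of random draws x with limit state g(x) < 0."""
    rng = random.Random(seed)
    failures = sum(limit_state(sample(rng)) < 0 for _ in range(n))
    return failures / n

# toy limit state g = R - S: capacity R ~ N(10, 1) against fixed demand S = 7
pf = failure_probability(lambda r: r - 7.0,
                         lambda rng: rng.gauss(10.0, 1.0))
```

For the toy case the exact answer is Φ(-3) ≈ 0.00135; first- and second-order reliability methods approximate the same quantity analytically at far lower cost, which is why RBDO formulations prefer them inside the optimization loop.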

  13. Supervisory Control of Discrete Event Systems Modeled by Mealy Automata with Nondeterministic Output Functions

    NASA Astrophysics Data System (ADS)

    Ushio, Toshimitsu; Takai, Shigemasa

    Supervisory control is a general framework for the logical control of discrete event systems. A supervisor assigns a set of controllable events to be disabled based on observed events so that the controlled discrete event system generates specified languages. In conventional supervisory control, it is assumed that observed events are determined deterministically by internal events. However, this assumption does not hold in a discrete event system with sensor errors or in a mobile system, where each observed event depends not only on an internal event but also on the state just before the occurrence of the internal event. In this paper, we model such a discrete event system by a Mealy automaton with a nondeterministic output function. We introduce two kinds of supervisors: one assigns each control action based on a permissive policy and the other based on an anti-permissive one. We show necessary and sufficient conditions for the existence of each supervisor. Moreover, we discuss the relationship between the supervisors in the case that the output function is deterministic.

  14. Generalized Detectability for Discrete Event Systems

    PubMed Central

    Shu, Shaolong; Lin, Feng

    2011-01-01

    In our previous work, we investigated detectability of discrete event systems, which is defined as the ability to determine the current and subsequent states of a system based on observation. For different applications, we defined four types of detectability: (weak) detectability, strong detectability, (weak) periodic detectability, and strong periodic detectability. In this paper, we extend our results in three aspects. (1) We extend detectability from deterministic systems to nondeterministic systems. Such a generalization is necessary because there are many systems that need to be modeled as nondeterministic discrete event systems. (2) We develop polynomial algorithms to check strong detectability. The previous algorithms are based on an observer whose construction is of exponential complexity, while the new algorithms are based on a new automaton called a detector. (3) We extend detectability to D-detectability. While detectability requires determining the exact state of a system, D-detectability relaxes this requirement by asking only to distinguish certain pairs of states. With these extensions, the theory of detectability of discrete event systems becomes more applicable to solving many practical problems. PMID:21691432
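The object that detectability reasons about is the current-state estimate of a nondeterministic system after an observation; the system is detectable along a run once the estimate becomes (and stays) a singleton. A minimal sketch, ignoring unobservable events and the paper's polynomial detector construction:

```python
def state_estimate(delta, init, observation):
    """Set of states a nondeterministic system can be in after the
    observed event sequence.

    delta: dict (state, event) -> set of successor states."""
    est = set(init)
    for e in observation:
        est = {q2 for q in est for q2 in delta.get((q, e), ())}
    return est
```

Computing estimates on the fly like this is cheap per observation; the exponential cost arises only when one builds the full observer automaton over all possible estimates, which is exactly what the detector construction avoids.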

  15. An improved cellular automaton method to model multispecies biofilms.

    PubMed

    Tang, Youneng; Valocchi, Albert J

    2013-10-01

    Biomass-spreading rules used in previous cellular automaton methods to simulate multispecies biofilm introduced extensive mixing between different biomass species or resulted in spatially discontinuous biomass concentration and distribution; this caused results based on the cellular automaton methods to deviate from experimental results and those from the more computationally intensive continuous method. To overcome the problems, we propose new biomass-spreading rules in this work: Excess biomass spreads by pushing a line of grid cells that are on the shortest path from the source grid cell to the destination grid cell, and the fractions of different biomass species in the grid cells on the path change due to the spreading. To evaluate the new rules, three two-dimensional simulation examples are used to compare the biomass distribution computed using the continuous method and three cellular automaton methods, one based on the new rules and the other two based on rules presented in two previous studies. The relationship between the biomass species is syntrophic in one example and competitive in the other two examples. Simulation results generated using the cellular automaton method based on the new rules agree much better with the continuous method than do results using the other two cellular automaton methods. The new biomass-spreading rules are no more complex to implement than the existing rules. Copyright © 2013 Elsevier Ltd. All rights reserved.
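The new spreading rule moves excess biomass along a shortest path of grid cells from the overfull source cell to a destination cell. The path-finding part can be sketched with breadth-first search; the update of species fractions along the path is omitted here:

```python
from collections import deque

def shortest_push_path(source, target, shape):
    """Breadth-first search for a shortest 4-connected path of grid
    cells from `source` to `target` on a rows x cols grid; the
    spreading rule shifts biomass along such a path."""
    rows, cols = shape
    prev = {source: None}
    queue = deque([source])
    while queue:
        cell = queue.popleft()
        if cell == target:
            path = []
            while cell is not None:       # walk back to the source
                path.append(cell)
                cell = prev[cell]
            return path[::-1]             # source ... target
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and nxt not in prev:
                prev[nxt] = cell
                queue.append(nxt)
    return None
```

Pushing along a single shortest line of cells, rather than redistributing to arbitrary neighbours, is what limits the artificial mixing between species that the earlier rules introduced.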

  16. Autonomous molecular cascades for evaluation of cell surfaces

    NASA Astrophysics Data System (ADS)

    Rudchenko, Maria; Taylor, Steven; Pallavi, Payal; Dechkovskaia, Alesia; Khan, Safana; Butler, Vincent P., Jr.; Rudchenko, Sergei; Stojanovic, Milan N.

    2013-08-01

    Molecular automata are mixtures of molecules that undergo precisely defined structural changes in response to sequential interactions with inputs. Previously studied nucleic acid-based automata include game-playing molecular devices (MAYA automata) and finite-state automata for the analysis of nucleic acids, with the latter inspiring circuits for the analysis of RNA species inside cells. Here, we describe automata based on strand-displacement cascades directed by antibodies that can analyse cells by using their surface markers as inputs. The final output of a molecular automaton that successfully completes its analysis is the presence of a unique molecular tag on the cell surface of a specific subpopulation of lymphocytes within human blood cells.

  17. Cellular automatons applied to gas dynamic problems

    NASA Technical Reports Server (NTRS)

    Long, Lyle N.; Coopersmith, Robert M.; Mclachlan, B. G.

    1987-01-01

    This paper compares the results of a relatively new computational fluid dynamics method, cellular automatons, with experimental data and analytical results. This technique has been shown to qualitatively predict fluidlike behavior; however, there have been few published comparisons with experiment or other theories. Comparisons are made for a one-dimensional supersonic piston problem, Stokes first problem, and the flow past a normal flat plate. These comparisons are used to assess the ability of the method to accurately model fluid dynamic behavior and to point out its limitations. Reasonable results were obtained for all three test cases, but the fundamental limitations of cellular automatons are numerous. It may be misleading, at this time, to say that cellular automatons are a computationally efficient technique. Other methods, based on continuum or kinetic theory, would also be very efficient if as little of the physics were included.

  18. A Cellular Automaton / Finite Element model for predicting grain texture development in galvanized coatings

    NASA Astrophysics Data System (ADS)

    Guillemot, G.; Avettand-Fènoël, M.-N.; Iosta, A.; Foct, J.

    2011-01-01

    The hot-dip galvanizing process is a widely used and efficient way to protect steel from corrosion. We propose to master the microstructure of zinc grains by investigating the relevant process parameters. In order to improve the texture of this coating, we model grain nucleation and growth processes and simulate the development of the zinc solid phase. A coupling scheme model has been applied with this aim. This model improves a previous two-dimensional model of the solidification process. It couples a cellular automaton (CA) approach and a finite element (FE) method. The CA grid and FE mesh are superimposed on the same domain. Grain development is simulated at the micro-scale on the CA grid. A nucleation law is defined using a Gaussian probability and a random set of nucleating cells. A crystallographic orientation is defined for each one by a choice of Euler angles (Ψ, θ, φ). A small growing shape is then associated with each cell in the mushy domain, and a dendrite tip kinetics is defined using the model of Kurz [2]. The six directions of the basal plane and the two perpendicular directions develop in each mushy cell. During each time step, cell temperature and solid fraction are determined at the micro-scale using the enthalpy conservation relation, and the variations are reassigned at the macro-scale. This coupling scheme enables the three-dimensional growth kinetics of zinc grains to be simulated in a two-dimensional approach. Grain structure evolutions for various cooling times have been simulated, and the final grain structure has been compared to EBSD measurements. We show that the preferential growth of dendrite arms in the basal plane of zinc grains is correctly predicted. The described coupling scheme model could be applied to simulate other products or manufacturing processes. It constitutes an approach gathering both micro- and macro-scale models.

  19. Learning to generate combinatorial action sequences utilizing the initial sensitivity of deterministic dynamical systems.

    PubMed

    Nishimoto, Ryu; Tani, Jun

    2004-09-01

    This study shows how sensory-action sequences that imitate finite state machines (FSMs) can be learned by utilizing the deterministic dynamics of recurrent neural networks (RNNs). Our experiments indicated that each possible combinatorial sequence can be recalled by specifying its respective initial state value, and that fractal structures appear in this initial state mapping after the learning converges. We also observed that the sequences mimicking FSMs are encoded using the transient regions rather than the invariant sets of the evolved dynamical systems of the RNNs.

  20. Nanowire nanocomputer as a finite-state machine.

    PubMed

    Yao, Jun; Yan, Hao; Das, Shamik; Klemic, James F; Ellenbogen, James C; Lieber, Charles M

    2014-02-18

    Implementation of complex computer circuits assembled from the bottom up and integrated on the nanometer scale has long been a goal of electronics research. It requires a design and fabrication strategy that can address individual nanometer-scale electronic devices, while enabling large-scale assembly of those devices into highly organized, integrated computational circuits. We describe how such a strategy has led to the design, construction, and demonstration of a nanoelectronic finite-state machine. The system was fabricated using a design-oriented approach enabled by a deterministic, bottom-up assembly process that does not require individual nanowire registration. This methodology allowed construction of the nanoelectronic finite-state machine through modular design using a multitile architecture. Each tile/module consists of two interconnected crossbar nanowire arrays, with each cross-point consisting of a programmable nanowire transistor node. The nanoelectronic finite-state machine integrates 180 programmable nanowire transistor nodes in three tiles or six total crossbar arrays, and incorporates both sequential and arithmetic logic, with extensive intertile and intratile communication that exhibits rigorous input/output matching. Our system realizes the complete 2-bit logic flow and clocked control over state registration that are required for a finite-state machine or computer. The programmable multitile circuit was also reprogrammed to a functionally distinct 2-bit full adder with 32-set matched and complete logic output. These steps forward and the ability of our unique design-oriented deterministic methodology to yield more extensive multitile systems suggest that proposed general-purpose nanocomputers can be realized in the near future.

  1. Nanowire nanocomputer as a finite-state machine

    PubMed Central

    Yao, Jun; Yan, Hao; Das, Shamik; Klemic, James F.; Ellenbogen, James C.; Lieber, Charles M.

    2014-01-01

    Implementation of complex computer circuits assembled from the bottom up and integrated on the nanometer scale has long been a goal of electronics research. It requires a design and fabrication strategy that can address individual nanometer-scale electronic devices, while enabling large-scale assembly of those devices into highly organized, integrated computational circuits. We describe how such a strategy has led to the design, construction, and demonstration of a nanoelectronic finite-state machine. The system was fabricated using a design-oriented approach enabled by a deterministic, bottom–up assembly process that does not require individual nanowire registration. This methodology allowed construction of the nanoelectronic finite-state machine through modular design using a multitile architecture. Each tile/module consists of two interconnected crossbar nanowire arrays, with each cross-point consisting of a programmable nanowire transistor node. The nanoelectronic finite-state machine integrates 180 programmable nanowire transistor nodes in three tiles or six total crossbar arrays, and incorporates both sequential and arithmetic logic, with extensive intertile and intratile communication that exhibits rigorous input/output matching. Our system realizes the complete 2-bit logic flow and clocked control over state registration that are required for a finite-state machine or computer. The programmable multitile circuit was also reprogrammed to a functionally distinct 2-bit full adder with 32-set matched and complete logic output. These steps forward and the ability of our unique design-oriented deterministic methodology to yield more extensive multitile systems suggest that proposed general-purpose nanocomputers can be realized in the near future. PMID:24469812

  2. Traffic dynamics of an on-ramp system with a cellular automaton model

    NASA Astrophysics Data System (ADS)

    Li, Xin-Gang; Gao, Zi-You; Jia, Bin; Jiang, Rui

    2010-06-01

    This paper uses a cellular automaton model to study the dynamics of traffic flow around an on-ramp with an acceleration lane. It adopts a parameter that reflects different lane-changing behaviour to represent the diversity of driving styles. A refined cellular automaton model is used to describe the lower acceleration rate of a vehicle. The phase diagram and the capacity of the on-ramp system are investigated. The simulation results show that in the single-cell model, the capacity of the on-ramp system stays at the highest flow of a one-lane system when the driver is moderate and careful, and is reduced when the driver is aggressive. In the refined cellular automaton model, the capacity is always reduced, even when the driver is careful. We propose that the capacity drop of the on-ramp system is caused by aggressive lane-changing behaviour and a lower acceleration rate.
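
    For readers unfamiliar with traffic cellular automata, the classic single-lane Nagel-Schreckenberg rule that models of this kind refine can be sketched as follows. This is a generic textbook illustration, not the authors' refined two-lane on-ramp model; all parameter values are placeholders:

```python
import numpy as np

rng = np.random.default_rng(1)

def nasch_step(pos, vel, road_len, v_max=5, p_slow=0.3):
    """One update of the Nagel-Schreckenberg cellular automaton on a ring
    road: accelerate, brake to the gap ahead, randomize, then move."""
    order = np.argsort(pos)
    pos, vel = pos[order], vel[order]
    gaps = (np.roll(pos, -1) - pos - 1) % road_len     # empty cells ahead
    vel = np.minimum(vel + 1, v_max)                   # acceleration
    vel = np.minimum(vel, gaps)                        # braking (no collisions)
    vel = np.where(rng.random(len(vel)) < p_slow,
                   np.maximum(vel - 1, 0), vel)        # random slowdown
    pos = (pos + vel) % road_len                       # movement
    return pos, vel

road_len, n_cars = 100, 20
pos = np.sort(rng.choice(road_len, size=n_cars, replace=False))
vel = np.zeros(n_cars, dtype=int)
for _ in range(200):
    pos, vel = nasch_step(pos, vel, road_len)
mean_speed = vel.mean()   # proxy for flow after relaxation
```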

  3. Stochastic gain in finite populations

    NASA Astrophysics Data System (ADS)

    Röhl, Torsten; Traulsen, Arne; Claussen, Jens Christian; Schuster, Heinz Georg

    2008-08-01

    Flexible learning rates can lead to increased payoffs under the influence of noise. In a previous paper [Traulsen et al., Phys. Rev. Lett. 93, 028701 (2004)], we have demonstrated this effect based on a replicator dynamics model which is subject to external noise. Here, we utilize recent advances on finite population dynamics and their connection to the replicator equation to extend our findings and demonstrate the stochastic gain effect in finite population systems. Finite population dynamics is inherently stochastic, depending on the population size and the intensity of selection, which measures the balance between the deterministic and the stochastic parts of the dynamics. This internal noise can be exploited by a population using an appropriate microscopic update process, even if learning rates are constant.

  4. Survival of mutations arising during invasions.

    PubMed

    Miller, Judith R

    2010-03-01

    When a neutral mutation arises in an invading population, it quickly either dies out or 'surfs', i.e. it comes to occupy almost all the habitat available at its time of origin. Beneficial mutations can also surf, as can deleterious mutations over finite time spans. We develop descriptive statistical models that quantify the relationship between the probability that a mutation will surf and demographic parameters for a cellular automaton model of surfing. We also provide a simple analytic model that performs well at predicting the probability of surfing for neutral and beneficial mutations in one dimension. The results suggest that factors - possibly including even abiotic factors - that promote invasion success may also increase the probability of surfing and associated adaptive genetic change, conditioned on such success.

  5. Phase transitions in coupled map lattices and in associated probabilistic cellular automata.

    PubMed

    Just, Wolfram

    2006-10-01

    Analytical tools are applied to investigate piecewise linear coupled map lattices in terms of probabilistic cellular automata. The so-called disorder condition of probabilistic cellular automata is closely related with attracting sets in coupled map lattices. The importance of this condition for the suppression of phase transitions is illustrated by spatially one-dimensional systems. Invariant densities and temporal correlations are calculated explicitly. Ising type phase transitions are found for one-dimensional coupled map lattices acting on repelling sets and for a spatially two-dimensional Miller-Huse-like system with stable long time dynamics. Critical exponents are calculated within a finite size scaling approach. The relevance of detailed balance of the resulting probabilistic cellular automaton for the critical behavior is pointed out.

  6. Inferring Fitness Effects from Time-Resolved Sequence Data with a Delay-Deterministic Model

    PubMed Central

    Nené, Nuno R.; Dunham, Alistair S.; Illingworth, Christopher J. R.

    2018-01-01

    A common challenge arising from the observation of an evolutionary system over time is to infer the magnitude of selection acting upon a specific genetic variant, or variants, within the population. The inference of selection may be confounded by the effects of genetic drift in a system, leading to the development of inference procedures to account for these effects. However, recent work has suggested that deterministic models of evolution may be effective in capturing the effects of selection even under complex models of demography, suggesting the more general application of deterministic approaches to inference. Responding to this literature, we here note a case in which a deterministic model of evolution may give highly misleading inferences, resulting from the nondeterministic properties of mutation in a finite population. We propose an alternative approach that acts to correct for this error, and which we denote the delay-deterministic model. Applying our model to a simple evolutionary system, we demonstrate its performance in quantifying the extent of selection acting within that system. We further consider the application of our model to sequence data from an evolutionary experiment. We outline scenarios in which our model may produce improved results for the inference of selection, noting that such situations can be easily identified via the use of a regular deterministic model. PMID:29500183
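
    As a point of comparison, the simplest deterministic model of selection that such inference approaches build on is the one-locus haploid recursion, sketched here. This is an illustrative textbook recursion, not the authors' delay-deterministic model:

```python
def selection_trajectory(x0, s, generations):
    """Deterministic haploid selection: the frequency x of a variant with
    selective advantage s evolves as x' = x(1+s) / (1 + s*x), i.e. its
    relative fitness is (1+s) against a resident with fitness 1."""
    xs = [x0]
    for _ in range(generations):
        x = xs[-1]
        xs.append(x * (1 + s) / (1 + s * x))
    return xs

# a variant at 1% frequency with a 10% advantage sweeps almost to fixation
traj = selection_trajectory(x0=0.01, s=0.1, generations=100)
```

    Inference then amounts to choosing the s whose trajectory best fits observed frequencies; the delay-deterministic correction addresses the stochastic lag before such a deterministic trajectory becomes a good description.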

  7. The detection and stabilisation of limit cycle for deterministic finite automata

    NASA Astrophysics Data System (ADS)

    Han, Xiaoguang; Chen, Zengqiang; Liu, Zhongxin; Zhang, Qing

    2018-04-01

    In this paper, the topological structure properties of deterministic finite automata (DFA), under the framework of the semi-tensor product of matrices, are investigated. First, the dynamics of DFA are converted into a new algebraic form as a discrete-time linear system by means of Boolean algebra. Using this algebraic description, an approach to calculating limit cycles of different lengths is given. Second, we present two fundamental concepts, namely, the domain of attraction of a limit cycle and the prereachability set. Based on the prereachability set, an explicit solution for calculating the domain of attraction of a limit cycle is completely characterised. Third, we define the globally attractive limit cycle, and then the necessary and sufficient condition for verifying whether all state trajectories of a DFA enter a given limit cycle in a finite number of transitions is given. Fourth, the problem of whether a DFA can be stabilised to a limit cycle by a state feedback controller is discussed. Criteria for limit-cycle stabilisation are established. All state feedback controllers which implement the minimal-length trajectories from each state to the limit cycle are obtained by using the proposed algorithm. Finally, an illustrative example is presented to show the theoretical results.
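
    Independently of the semi-tensor-product machinery, limit cycles and their domains of attraction for an autonomous state-transition map can be enumerated directly. A minimal sketch; the map `step` and the toy automaton are invented for illustration:

```python
def limit_cycles(step):
    """Find all limit cycles of a deterministic transition map
    (state -> next state) and the domain of attraction of each cycle."""
    states = list(step)
    on_cycle = set()
    for s in states:
        # iterate until a state repeats: the tail of the walk is a cycle
        seen, x = {}, s
        while x not in seen:
            seen[x] = len(seen)
            x = step[x]
        on_cycle.update(y for y, i in seen.items() if i >= seen[x])
    # group cycle states into individual cycles
    cycles, visited = [], set()
    for s in sorted(on_cycle):
        if s in visited:
            continue
        cyc, x = [], s
        while x not in cyc:
            cyc.append(x)
            visited.add(x)
            x = step[x]
        cycles.append(cyc)
    # domain of attraction: every state whose trajectory enters the cycle
    basins = {tuple(c): set(c) for c in cycles}
    for s in states:
        x = s
        while x not in on_cycle:
            x = step[x]
        for c in cycles:
            if x in c:
                basins[tuple(c)].add(s)
    return cycles, basins

# toy automaton under a fixed input: 0->1->2->0 is a limit cycle, 3 and 4 feed in
step = {0: 1, 1: 2, 2: 0, 3: 0, 4: 3}
cycles, basins = limit_cycles(step)
```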

  8. From Large Deviations to Semidistances of Transport and Mixing: Coherence Analysis for Finite Lagrangian Data

    NASA Astrophysics Data System (ADS)

    Koltai, Péter; Renger, D. R. Michiel

    2018-06-01

    One way to analyze complicated non-autonomous flows is through trying to understand their transport behavior. In a quantitative, set-oriented approach to transport and mixing, finite time coherent sets play an important role. These are time-parametrized families of sets with unlikely transport to and from their surroundings under small or vanishing random perturbations of the dynamics. Here we propose, as a measure of transport and mixing for purely advective (i.e., deterministic) flows, (semi)distances that arise under vanishing perturbations in the sense of large deviations. Analogously, for given finite Lagrangian trajectory data we derive a discrete-time-and-space semidistance that comes from the "best" approximation of the randomly perturbed process conditioned on this limited information of the deterministic flow. It can be computed as shortest path in a graph with time-dependent weights. Furthermore, we argue that coherent sets are regions of maximal farness in terms of transport and mixing, and hence they occur as extremal regions on a spanning structure of the state space under this semidistance—in fact, under any distance measure arising from the physical notion of transport. Based on this notion, we develop a tool to analyze the state space (or the finite trajectory data at hand) and identify coherent regions. We validate our approach on idealized prototypical examples and well-studied standard cases.
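
    The computation of such a semidistance as a shortest path in a graph with time-dependent weights can be sketched with a Dijkstra search on the time-expanded graph. This is a generic illustration with invented weights, not the authors' construction:

```python
import heapq

def shortest_path_timed(weights, n_nodes, source, target, n_steps):
    """Dijkstra on a time-expanded graph: weights[t][(u, v)] is the cost of
    moving from node u to node v during time step t; staying put is free.
    Returns the cheapest cost of reaching target at the final time step."""
    dist = {(source, 0): 0.0}
    heap = [(0.0, source, 0)]
    while heap:
        d, u, t = heapq.heappop(heap)
        if t == n_steps:
            if u == target:
                return d
            continue
        if d > dist.get((u, t), float('inf')):
            continue                      # stale heap entry
        for v in range(n_nodes):
            w = 0.0 if v == u else weights[t].get((u, v), float('inf'))
            nd = d + w
            if nd < dist.get((v, t + 1), float('inf')):
                dist[(v, t + 1)] = nd
                heapq.heappush(heap, (nd, v, t + 1))
    return float('inf')

# 3 nodes, 2 time steps: going 0->1 early then 1->2 beats the direct 0->2 hop
weights = [{(0, 1): 1.0, (0, 2): 5.0}, {(1, 2): 1.0, (0, 2): 5.0}]
cost = shortest_path_timed(weights, 3, source=0, target=2, n_steps=2)
```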

  9. The stochastic energy-Casimir method

    NASA Astrophysics Data System (ADS)

    Arnaudon, Alexis; Ganaba, Nader; Holm, Darryl D.

    2018-04-01

    In this paper, we extend the energy-Casimir stability method for deterministic Lie-Poisson Hamiltonian systems to provide sufficient conditions for stability in probability of stochastic dynamical systems with symmetries. We illustrate this theory with classical examples of coadjoint motion, including the rigid body, the heavy top, and the compressible Euler equation in two dimensions. The main result is that stable deterministic equilibria remain stable in probability up to a certain stopping time that depends on the amplitude of the noise for finite-dimensional systems and on the amplitude of the spatial derivative of the noise for infinite-dimensional systems.

  10. Probabilistic safety assessment of the design of a tall buildings under the extreme load

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Králik, Juraj, E-mail: juraj.kralik@stuba.sk

    2016-06-08

    The paper describes some experiences from the deterministic and probabilistic analysis of the safety of a tall building structure. The methods and requirements of Eurocode EN 1990, standard ISO 2394 and the JCSS are presented. The uncertainties of the model and of the resistance of the structure are considered using simulation methods. The MONTE CARLO, LHS and RSM probabilistic methods are compared with the deterministic results. The example of the probability analysis of the safety of tall buildings demonstrates the effectiveness of probability-based structural design using finite element methods.

  11. Probabilistic safety assessment of the design of a tall buildings under the extreme load

    NASA Astrophysics Data System (ADS)

    Králik, Juraj

    2016-06-01

    The paper describes some experiences from the deterministic and probabilistic analysis of the safety of a tall building structure. The methods and requirements of Eurocode EN 1990, standard ISO 2394 and the JCSS are presented. The uncertainties of the model and of the resistance of the structure are considered using simulation methods. The MONTE CARLO, LHS and RSM probabilistic methods are compared with the deterministic results. The example of the probability analysis of the safety of tall buildings demonstrates the effectiveness of probability-based structural design using finite element methods.

  12. Material Implementation of Hyperincursive Field on Slime Mold Computer

    NASA Astrophysics Data System (ADS)

    Aono, Masashi; Gunji, Yukio-Pegio

    2004-08-01

    "Elementary Conflictable Cellular Automaton (ECCA)" was introduced by Aono and Gunji as a problematic computational syntax embracing a non-deterministic/non-algorithmic property due to its hyperincursivity and nonlocality. Although ECCA's hyperincursive evolution equation indicates the occurrence of deadlock or infinite loops, we do not consider this problem to imply that implementing ECCA materially is fundamentally impossible. Dubois proposed to call a computing system where uncertainty/contradiction occurs "the hyperincursive field". In this paper we introduce a material implementation of the hyperincursive field using plasmodia of the true slime mold Physarum polycephalum. The amoeboid organism is adopted as the computing medium of the ECCA slime mold computer (ECCA-SMC) mainly because it is a parallel non-distributed system whose locally branched tips (components) can act in parallel with asynchronism and nonlocal correlation. A notable characteristic of the ECCA-SMC is that a cell representing a spatio-temporal segment of computation is occupied (overlapped) redundantly by multiple spatially adjacent computing operations and by temporally successive computing events. This overlapped time representation may contribute to the discussion of unconventional notions of time.

  13. Pitting corrosion as a mixed system: coupled deterministic-probabilistic simulation of pit growth

    NASA Astrophysics Data System (ADS)

    Ibrahim, Israr B. M.; Fonna, S.; Pidaparti, R.

    2018-05-01

    The stochastic behavior of pitting corrosion poses a unique challenge for computational analysis, even though it stems from the same electrochemical activity that causes general corrosion. In this paper, a framework for corrosion pit growth simulation based on the coupling of the Cellular Automaton (CA) and Boundary Element Methods (BEM) is presented. The framework assumes that pitting corrosion is controlled by electrochemical activity inside the pit cavity. The BEM predicts the electrochemical activity given the geometrical data and polarization curves, while the CA simulates the evolution of pit shapes based on the electrochemical activity provided by the BEM. To demonstrate the methodology, a sample case of local corrosion cells formed in pitting corrosion with varied dimensions and polarization functions is considered. Results show that certain shapes tend to grow in certain types of environments. Some pit shapes appear to pose a higher risk by acting as significant stress raisers or by increasing the rate of corrosion under the surface. Furthermore, these pits are comparable to commonly observed pit shapes in general corrosion environments.

  14. Thermodynamics of quasideterministic digital computers

    NASA Astrophysics Data System (ADS)

    Chu, Dominique

    2018-02-01

    A central result of stochastic thermodynamics is that irreversible state transitions of Markovian systems entail a cost in terms of an infinite entropy production. A corollary of this is that strictly deterministic computation is not possible. Using a thermodynamically consistent model, we show that quasideterministic computation can be achieved at finite, and indeed modest, cost with accuracies that are indistinguishable from deterministic behavior for all practical purposes. Concretely, we consider the entropy production of stochastic (Markovian) systems that behave like AND and NOT gates. Combinations of these gates can implement any logical function. We require that these gates return the correct result with a probability that is very close to 1, and additionally, that they do so within finite time. The central component of the model is a machine that can read and write binary tapes. We find that the error probability of the computation of these gates falls with a power of the system size, whereas the cost only increases linearly with the system size.

  15. Signal Waveform Detection with Statistical Automaton for Internet and Web Service Streaming

    PubMed Central

    Liu, Yiming; Huang, Nai-Lun; Zeng, Fufu; Lin, Fang-Ying

    2014-01-01

    In recent years, many approaches have been suggested for Internet and web streaming detection. In this paper, we propose an approach to signal waveform detection for Internet and web streaming with novel statistical automatons. The system records network connections over a period of time to form a signal waveform and computes suspicious characteristics of the waveform. Network streaming can then be classified according to these selected waveform features by our newly designed Aho-Corasick (AC) automatons. We developed two versions, that is, basic AC and advanced AC-histogram waveform automata, and conducted comprehensive experimentation. The results confirm that our approach is feasible and suitable for deployment. PMID:25032231
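
    A plain Aho-Corasick automaton, the building block that the paper extends with waveform statistics, can be sketched as follows. This is the textbook construction, not the authors' statistical variant:

```python
from collections import deque

def build_aho_corasick(patterns):
    """Build an Aho-Corasick automaton: a trie with BFS-computed failure
    links, so all patterns are matched in one pass over the input."""
    goto, fail, out = [{}], [0], [set()]
    for pat in patterns:                      # build the trie
        s = 0
        for ch in pat:
            if ch not in goto[s]:
                goto.append({}); fail.append(0); out.append(set())
                goto[s][ch] = len(goto) - 1
            s = goto[s][ch]
        out[s].add(pat)
    queue = deque(goto[0].values())           # depth-1 nodes fail to the root
    while queue:
        s = queue.popleft()
        for ch, t in goto[s].items():
            queue.append(t)
            f = fail[s]
            while f and ch not in goto[f]:    # follow failure links
                f = fail[f]
            fail[t] = goto[f].get(ch, 0)
            out[t] |= out[fail[t]]            # inherit suffix matches
    return goto, fail, out

def search(text, goto, fail, out):
    """Return (end_index, pattern) for every match in text."""
    matches, s = [], 0
    for i, ch in enumerate(text):
        while s and ch not in goto[s]:
            s = fail[s]
        s = goto[s].get(ch, 0)
        for pat in out[s]:
            matches.append((i, pat))
    return matches

goto, fail, out = build_aho_corasick(["he", "she", "his", "hers"])
hits = search("ushers", goto, fail, out)
```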

  16. An Automaton Rover for Extreme Environments: Rethinking an Approach to Surface Mobility

    NASA Astrophysics Data System (ADS)

    Sauder, J.; Hilgemman, E.; Stack, K.; Kawata, J.; Parness, A.; Johnson, M.

    2017-11-01

    An Automaton Rover for Extreme Environments (AREE) enables long duration in-situ mobility on the surface of Venus through a simplified design and robust mechanisms. The goal is to design a rover capable of operating for months on the surface of Venus.

  17. Inferring Fitness Effects from Time-Resolved Sequence Data with a Delay-Deterministic Model.

    PubMed

    Nené, Nuno R; Dunham, Alistair S; Illingworth, Christopher J R

    2018-05-01

    A common challenge arising from the observation of an evolutionary system over time is to infer the magnitude of selection acting upon a specific genetic variant, or variants, within the population. The inference of selection may be confounded by the effects of genetic drift in a system, leading to the development of inference procedures to account for these effects. However, recent work has suggested that deterministic models of evolution may be effective in capturing the effects of selection even under complex models of demography, suggesting the more general application of deterministic approaches to inference. Responding to this literature, we here note a case in which a deterministic model of evolution may give highly misleading inferences, resulting from the nondeterministic properties of mutation in a finite population. We propose an alternative approach that acts to correct for this error, and which we denote the delay-deterministic model. Applying our model to a simple evolutionary system, we demonstrate its performance in quantifying the extent of selection acting within that system. We further consider the application of our model to sequence data from an evolutionary experiment. We outline scenarios in which our model may produce improved results for the inference of selection, noting that such situations can be easily identified via the use of a regular deterministic model. Copyright © 2018 Nené et al.

  18. Survival of mutations arising during invasions

    PubMed Central

    Miller, Judith R

    2010-01-01

    When a neutral mutation arises in an invading population, it quickly either dies out or ‘surfs’, i.e. it comes to occupy almost all the habitat available at its time of origin. Beneficial mutations can also surf, as can deleterious mutations over finite time spans. We develop descriptive statistical models that quantify the relationship between the probability that a mutation will surf and demographic parameters for a cellular automaton model of surfing. We also provide a simple analytic model that performs well at predicting the probability of surfing for neutral and beneficial mutations in one dimension. The results suggest that factors – possibly including even abiotic factors – that promote invasion success may also increase the probability of surfing and associated adaptive genetic change, conditioned on such success. PMID:25567912

  19. Detachment and diffusive-convective transport in an evolving heterogeneous two-dimensional biofilm hybrid model.

    PubMed

    Luna, E; Domínguez-Zacarias, G; Ferreira, C Pio; Velasco-Hernandez, J X

    2004-12-01

    Under the hypothesis of correlation between biofilm survival and nutrient availability, by considering fluid drag forces and mortality due to nutrient depletion, a biofilm detachment/breaking condition is derived. The mechanisms leading to biofilm detachment/breaking are discussed. We construct and describe a hybrid model for a heterogeneous biofilm attached to walls in a channel where liquid is flowing. The model is called hybrid because it couples conservation equations with a cellular automaton. The biofilm layer is viewed as a porous medium with variable porosity, tortuosity, and permeability. The model is solved using asymptotic and finite differences methods. Results for porosity, nutrient distribution, and average surface location are presented. The model is capable of reproducing biofilm heterogeneity as well as the typical surface fingering (mushroomlike structure).

  20. Hybrid stochastic and deterministic simulations of calcium blips.

    PubMed

    Rüdiger, S; Shuai, J W; Huisinga, W; Nagaiah, C; Warnecke, G; Parker, I; Falcke, M

    2007-09-15

    Intracellular calcium release is a prime example for the role of stochastic effects in cellular systems. Recent models consist of deterministic reaction-diffusion equations coupled to stochastic transitions of calcium channels. The resulting dynamics is of multiple time and spatial scales, which complicates far-reaching computer simulations. In this article, we introduce a novel hybrid scheme that is especially tailored to accurately trace events with essential stochastic variations, while deterministic concentration variables are efficiently and accurately traced at the same time. We use finite elements to efficiently resolve the extreme spatial gradients of concentration variables close to a channel. We describe the algorithmic approach and we demonstrate its efficiency compared to conventional methods. Our single-channel model matches experimental data and results in intriguing dynamics if calcium is used as charge carrier. Random openings of the channel accumulate in bursts of calcium blips that may be central for the understanding of cellular calcium dynamics.
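
    The hybrid idea, deterministic concentration dynamics coupled to stochastic channel transitions, can be caricatured in one dimension. This is a toy two-state channel with invented rate constants, far simpler than the paper's finite-element reaction-diffusion model:

```python
import math
import random

random.seed(2)

def hybrid_step(c, open_state, dt, k_open=0.5, k_close=2.0,
                j_channel=10.0, k_pump=1.0):
    """One Euler step of a hybrid scheme: the calcium concentration c obeys a
    deterministic rate equation, while the channel flips open/closed
    stochastically with the per-step probability 1 - exp(-rate*dt)."""
    rate = k_close if open_state else k_open
    if random.random() < 1.0 - math.exp(-rate * dt):   # stochastic transition
        open_state = not open_state
    influx = j_channel if open_state else 0.0
    c = c + dt * (influx - k_pump * c)                 # deterministic part
    return max(c, 0.0), open_state

c, open_state = 0.1, False
trace = []
for _ in range(5000):
    c, open_state = hybrid_step(c, open_state, dt=0.001)
    trace.append(c)
```

    Random openings of the channel produce transient rises in c, a crude analogue of the calcium blips described above.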

  1. Deterministic nonlinear phase gates induced by a single qubit

    NASA Astrophysics Data System (ADS)

    Park, Kimin; Marek, Petr; Filip, Radim

    2018-05-01

    We propose deterministic realizations of nonlinear phase gates by repeating a finite sequence of non-commuting Rabi interactions between a harmonic oscillator and only a single two-level ancillary qubit. We show explicitly that the key nonclassical features of the ideal cubic phase gate and the quartic phase gate are generated in the harmonic oscillator faithfully by our method. We numerically analyzed the performance of our scheme under realistic imperfections of the oscillator and the two-level system. The methodology is extended further to higher-order nonlinear phase gates. This theoretical proposal completes the set of operations required for continuous-variable quantum computation.

  2. A living mesoscopic cellular automaton made of skin scales.

    PubMed

    Manukyan, Liana; Montandon, Sophie A; Fofonjka, Anamarija; Smirnov, Stanislav; Milinkovitch, Michel C

    2017-04-12

    In vertebrates, skin colour patterns emerge from nonlinear dynamical microscopic systems of cell interactions. Here we show that in ocellated lizards a quasi-hexagonal lattice of skin scales, rather than individual chromatophore cells, establishes a green and black labyrinthine pattern of skin colour. We analysed time series of lizard scale colour dynamics over four years of their development and demonstrate that this pattern is produced by a cellular automaton (a grid of elements whose states are iterated according to a set of rules based on the states of neighbouring elements) that dynamically computes the colour states of individual mesoscopic skin scales to produce the corresponding macroscopic colour pattern. Using numerical simulations and mathematical derivation, we identify how a discrete von Neumann cellular automaton emerges from a continuous Turing reaction-diffusion system. Skin thickness variation generated by three-dimensional morphogenesis of skin scales causes the underlying reaction-diffusion dynamics to separate into microscopic and mesoscopic spatial scales, the latter generating a cellular automaton. Our study indicates that cellular automata are not merely abstract computational systems, but can directly correspond to processes generated by biological evolution.

  3. A living mesoscopic cellular automaton made of skin scales

    NASA Astrophysics Data System (ADS)

    Manukyan, Liana; Montandon, Sophie A.; Fofonjka, Anamarija; Smirnov, Stanislav; Milinkovitch, Michel C.

    2017-04-01

    In vertebrates, skin colour patterns emerge from nonlinear dynamical microscopic systems of cell interactions. Here we show that in ocellated lizards a quasi-hexagonal lattice of skin scales, rather than individual chromatophore cells, establishes a green and black labyrinthine pattern of skin colour. We analysed time series of lizard scale colour dynamics over four years of their development and demonstrate that this pattern is produced by a cellular automaton (a grid of elements whose states are iterated according to a set of rules based on the states of neighbouring elements) that dynamically computes the colour states of individual mesoscopic skin scales to produce the corresponding macroscopic colour pattern. Using numerical simulations and mathematical derivation, we identify how a discrete von Neumann cellular automaton emerges from a continuous Turing reaction-diffusion system. Skin thickness variation generated by three-dimensional morphogenesis of skin scales causes the underlying reaction-diffusion dynamics to separate into microscopic and mesoscopic spatial scales, the latter generating a cellular automaton. Our study indicates that cellular automata are not merely abstract computational systems, but can directly correspond to processes generated by biological evolution.

  4. The B36/S125 "2x2" Life-Like Cellular Automaton

    NASA Astrophysics Data System (ADS)

    Johnston, Nathaniel

    The B36/S125 (or "2x2") cellular automaton is one that takes place on a 2D square lattice much like Conway's Game of Life. Although it exhibits high-level behaviour that is similar to Life, such as chaotic but eventually stable evolution and the existence of a natural diagonal glider, the individual objects that the rule contains generally look very different from their Life counterparts. In this article, a history of notable discoveries in the 2x2 rule is provided, and the fundamental patterns of the automaton are described. Some theoretical results are derived along the way, including a proof that the speed limits for diagonal and orthogonal spaceships in this rule are c/3 and c/2, respectively. A Margolus block cellular automaton that 2x2 emulates is investigated, and in particular a family of oscillators made up entirely of 2×2 blocks is analyzed and used to show that there exist oscillators with period 2^ℓ(2^k − 1) for any integers k, ℓ ≥ 1.
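
    The B36/S125 rule itself is straightforward to implement. The sketch below applies one generation on a toroidal grid and confirms a simple consequence of the rule table: an isolated 2×2 block of live cells dies out, because each live cell sees 3 neighbours, which is not in the survival set {1, 2, 5}, and no surrounding dead cell reaches a birth count of 3 or 6:

```python
import numpy as np

def step_2x2(grid):
    """One generation of the B36/S125 ('2x2') Life-like cellular automaton:
    a dead cell is born with 3 or 6 live neighbours, a live cell survives
    with 1, 2 or 5 live neighbours (Moore neighbourhood, toroidal wrap)."""
    n = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0))
    born = (grid == 0) & np.isin(n, (3, 6))
    survive = (grid == 1) & np.isin(n, (1, 2, 5))
    return (born | survive).astype(np.uint8)

grid = np.zeros((8, 8), dtype=np.uint8)
grid[3:5, 3:5] = 1            # a lone 2x2 block of live cells
after = step_2x2(grid)        # vanishes in a single generation
```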

  5. The dynamics of discrete-time computation, with application to recurrent neural networks and finite state machine extraction.

    PubMed

    Casey, M

    1996-08-15

    Recurrent neural networks (RNNs) can learn to perform finite state computations. It is shown that an RNN performing a finite state computation must organize its state space to mimic the states in the minimal deterministic finite state machine that can perform that computation, and a precise description of the attractor structure of such systems is given. This knowledge effectively predicts activation space dynamics, which allows one to understand RNN computation dynamics in spite of complexity in activation dynamics. This theory provides a theoretical framework for understanding finite state machine (FSM) extraction techniques and can be used to improve training methods for RNNs performing FSM computations. This provides an example of a successful approach to understanding a general class of complex systems that has not been explicitly designed, e.g., systems that have evolved or learned their internal structure.
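
    The minimal deterministic finite state machine referred to above can be computed by partition refinement. A sketch of Moore's algorithm; the example machine is invented for illustration:

```python
def minimize_dfa(states, alphabet, delta, accept):
    """Moore's partition-refinement sketch: merge DFA states that no input
    word can distinguish, yielding the blocks of the minimal machine."""
    partition = [set(accept), set(states) - set(accept)]
    partition = [p for p in partition if p]   # drop empty blocks
    changed = True
    while changed:
        changed = False
        new_partition = []
        for block in partition:
            # split a block if two of its states map into different blocks
            groups = {}
            for s in block:
                key = tuple(next(i for i, b in enumerate(partition)
                                 if delta[s][a] in b) for a in alphabet)
                groups.setdefault(key, set()).add(s)
            new_partition.extend(groups.values())
            if len(groups) > 1:
                changed = True
        partition = new_partition
    return partition

# a 3-state machine over {'a'} in which state 2 behaves exactly like state 0
states = [0, 1, 2]
delta = {0: {'a': 1}, 1: {'a': 2}, 2: {'a': 1}}
accept = [0, 2]
blocks = minimize_dfa(states, ['a'], delta, accept)   # two blocks remain
```

    In the terms of the abstract, an RNN performing this computation must organize its state space around these two blocks, not the three raw states.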

  6. A generalization of Fatou's lemma for extended real-valued functions on σ-finite measure spaces: with an application to infinite-horizon optimization in discrete time.

    PubMed

    Kamihigashi, Takashi

    2017-01-01

    Given a sequence [Formula: see text] of measurable functions on a σ-finite measure space such that the integral of each [Formula: see text] as well as that of [Formula: see text] exists in [Formula: see text], we provide a sufficient condition for the following inequality to hold: [Formula: see text] Our condition is considerably weaker than sufficient conditions known in the literature such as uniform integrability (in the case of a finite measure) and equi-integrability. As an application, we obtain a new result on the existence of an optimal path for deterministic infinite-horizon optimization problems in discrete time.
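
    For context, the classical Fatou lemma, which this paper generalizes to extended real-valued functions under weaker hypotheses, states that for measurable f_n ≥ 0 on a σ-finite measure space:

```latex
\int \liminf_{n\to\infty} f_n \, d\mu \;\le\; \liminf_{n\to\infty} \int f_n \, d\mu
```

    The paper's sufficient condition yields an analogous inequality without the nonnegativity assumption; the precise generalized statement is elided in the record as [Formula: see text].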

  7. A cellular automaton model of wildfire propagation and extinction

    Treesearch

    Keith C. Clarke; James A. Brass; Phillip J. Riggan

    1994-01-01

    We propose a new model to predict the spatial and temporal behavior of wildfires. Fire spread and intensity were simulated using a cellular automaton model. Monte Carlo techniques were used to provide fire risk probabilities for areas where fuel loadings and topography are known. The model assumes predetermined or measurable environmental variables such as wind...
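
    A toy version of the general approach (illustrative only, not the authors' model): spread fire over a fuel grid with a per-neighbour ignition probability, then repeat the stochastic simulation Monte Carlo-style to estimate per-cell burn probabilities. The grid size, spread probability, and 4-neighbourhood here are arbitrary assumptions standing in for fuel loading, topography, and wind.

```python
import random

def burn_once(n, p_spread, rng):
    """One stochastic fire simulation on an n x n fuel grid, ignited at the centre.
    Returns the set of burned cells."""
    burned = {(n // 2, n // 2)}
    front = set(burned)
    while front:
        new = set()
        for (r, c) in front:
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                cell = (r + dr, c + dc)
                if 0 <= cell[0] < n and 0 <= cell[1] < n and cell not in burned:
                    if rng.random() < p_spread:  # chance the fire jumps to this neighbour
                        new.add(cell)
        burned |= new
        front = new
    return burned

def burn_probabilities(n, p_spread, trials, seed=0):
    """Monte Carlo estimate of each cell's probability of burning."""
    rng = random.Random(seed)
    hits = {}
    for _ in range(trials):
        for cell in burn_once(n, p_spread, rng):
            hits[cell] = hits.get(cell, 0) + 1
    return {cell: k / trials for cell, k in hits.items()}
```

    A real model would make `p_spread` a per-cell function of fuel, slope, and wind rather than a constant; the Monte Carlo outer loop is what turns a single stochastic spread simulation into a fire-risk probability map.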

  8. Correlated disorder in the Kuramoto model: Effects on phase coherence, finite-size scaling, and dynamic fluctuations.

    PubMed

    Hong, Hyunsuk; O'Keeffe, Kevin P; Strogatz, Steven H

    2016-10-01

    We consider a mean-field model of coupled phase oscillators with quenched disorder in the natural frequencies and coupling strengths. A fraction p of oscillators are positively coupled, attracting all others, while the remaining fraction 1-p are negatively coupled, repelling all others. The frequencies and couplings are deterministically chosen in a manner which correlates them, thereby correlating the two types of disorder in the model. We first explore the effect of this correlation on the system's phase coherence. We find that there is a critical width γ_c in the frequency distribution below which the system spontaneously synchronizes. Moreover, this γ_c is independent of p. Hence, our model and the traditional Kuramoto model (recovered when p = 1) have the same critical width γ_c. We next explore the critical behavior of the system by examining the finite-size scaling and the dynamic fluctuation of the traditional order parameter. We find that the model belongs to the same universality class as the Kuramoto model with deterministically (not randomly) chosen natural frequencies for the case of p < 1.
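
    The underlying mean-field dynamics can be sketched with a generic Kuramoto integrator supporting signed couplings (illustrative, not the authors' code; N, K, dt, and the step count are arbitrary choices). Oscillator i obeys dθ_i/dt = ω_i + K_i r sin(ψ - θ_i), where r e^{iψ} is the complex order parameter:

```python
import cmath
import math

def simulate(thetas, omegas, couplings, dt=0.05, steps=2000):
    """Euler-integrate mean-field Kuramoto oscillators and return the final
    coherence r = |mean of exp(i*theta)|."""
    thetas = list(thetas)
    n = len(thetas)
    for _ in range(steps):
        z = sum(cmath.exp(1j * t) for t in thetas) / n  # order parameter r*e^{i*psi}
        r, psi = abs(z), cmath.phase(z)
        # dtheta_i/dt = omega_i + K_i * r * sin(psi - theta_i)
        thetas = [t + dt * (w + k * r * math.sin(psi - t))
                  for t, w, k in zip(thetas, omegas, couplings)]
    return abs(sum(cmath.exp(1j * t) for t in thetas) / n)
```

    With identical frequencies and all couplings positive (the p = 1 case), the population phase-locks and r approaches 1; negatively coupled ("repelling", k < 0) oscillators and a wide frequency distribution both reduce the final coherence.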

  9. Study on Material Parameters Identification of Brain Tissue Considering Uncertainty of Friction Coefficient

    NASA Astrophysics Data System (ADS)

    Guan, Fengjiao; Zhang, Guanjun; Liu, Jie; Wang, Shujing; Luo, Xu; Zhu, Feng

    2017-10-01

    Accurate material parameters are critical to constructing high-biofidelity finite element (FE) models. However, it is hard to obtain brain tissue parameters accurately because of the effects of irregular geometry and uncertain boundary conditions. Considering the complexity of material testing and the uncertainty of the friction coefficient, a computational inverse method for identifying the viscoelastic material parameters of brain tissue is presented based on the interval analysis method. First, intervals are used to quantify the friction coefficient in the boundary condition. Then the inverse problem of material parameter identification under an uncertain friction coefficient is transformed into two types of deterministic inverse problems. Finally, an intelligent optimization algorithm is used to solve the two types of deterministic inverse problems quickly and accurately, and the range of material parameters can be acquired without requiring a large number of samples. The efficiency and convergence of this method are demonstrated by the material parameter identification of the thalamus. The proposed method provides a potentially effective tool for building high-biofidelity human finite element models in the study of traffic accident injury.

  10. Finite element modelling of woven composite failure modes at the mesoscopic scale: deterministic versus stochastic approaches

    NASA Astrophysics Data System (ADS)

    Roirand, Q.; Missoum-Benziane, D.; Thionnet, A.; Laiarinandrasana, L.

    2017-09-01

    Textile composites have a complex 3D architecture. To assess the durability of such engineering structures, the failure mechanisms must be highlighted. The degradation has been examined using tomography. The present work addresses a numerical damage model dedicated to the simulation of crack initiation and propagation at the scale of the warp yarns. For the 3D woven composites under study, loadings in tension and in combined tension and bending were considered. Based on an erosion procedure for broken elements, the failure mechanisms have been modelled on 3D periodic cells by finite element calculations. The breakage of one element was determined using a failure criterion at the mesoscopic scale based on the yarn stress at failure. The results were found to be in good agreement with the experimental data for the two kinds of macroscopic loadings. The deterministic approach assumed a homogeneously distributed stress at failure over all the integration points in the meshes of the woven composites. A stochastic approach was then applied to a simple representative elementary periodic cell: the distribution of the Weibull stress at failure was assigned to the integration points using a Monte Carlo simulation. It was shown that this stochastic approach allows more realistic failure simulations, avoiding the idealised symmetry of the deterministic modelling. In particular, the stochastic simulations showed variations in the stress and strain at failure, as well as in the failure modes of the yarn.
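
    The stochastic assignment step can be illustrated in a few lines (a generic sketch, not the authors' implementation; the Weibull modulus `modulus_m` and scale `scale_s0` are invented values). Each integration point draws its own failure stress from a Weibull distribution by inverse-transform sampling:

```python
import math
import random

def weibull_failure_stresses(n_points, modulus_m, scale_s0, seed=0):
    """Draw one failure stress per integration point from a Weibull distribution,
    sigma = s0 * (-ln(1 - u))**(1/m), via inverse-transform sampling of u ~ U[0, 1)."""
    rng = random.Random(seed)
    return [scale_s0 * (-math.log(1.0 - rng.random())) ** (1.0 / modulus_m)
            for _ in range(n_points)]
```

    An element would then be eroded when its computed yarn stress exceeds its own sampled threshold, which is what breaks the artificial symmetry of the deterministic model. (Python's standard library also provides `random.weibullvariate`; the explicit inverse transform is written out here for clarity.)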

  11. Effect of Solute Diffusion on Dendrite Growth in the Molten Pool of Al-Cu Alloy

    NASA Astrophysics Data System (ADS)

    Zhan, Xiaohong; Gu, Cheng; Liu, Yun; Wei, Yanhong

    2017-10-01

    A cellular automaton (CA)-finite difference model is developed to simulate dendrite growth and solute diffusion during the solidification process in the molten pool of an Al-Cu alloy. In order to explain the interaction between dendritic growth and solute distribution, a series of CA simulations with different solute diffusion velocity coefficients are carried out. It is concluded that the solute concentration increases as the dendrite grows and solute accumulates at the dendrite tip. The converged value of the dendrite tip growth velocity is about 480 μm/s if the mesh size is refined to 2 μm or less. Growth of the primary and secondary dendrites is mainly influenced by solute diffusion at the dendrite tips, while growth of secondary and tertiary dendrites is mainly influenced by solute diffusion in the interdendritic regions.

  12. The nature of turbulence in a triangular lattice gas automaton

    NASA Astrophysics Data System (ADS)

    Duong-Van, Minh; Feit, M. D.; Keller, P.; Pound, M.

    1986-12-01

    Power spectra calculated from the coarse-graining of a simple lattice gas automaton, and those obtained by time-averaging other stochastic time series we have investigated, have exponents in the range -1.6 to -2, consistent with observations of fully developed turbulence. This power spectrum is a natural consequence of coarse-graining; the exponent -2 represents the continuum limit.

  13. Application of cellular automatons and ant algorithms in avionics

    NASA Astrophysics Data System (ADS)

    Kuznetsov, A. V.; Selvesiuk, N. I.; Platoshin, G. A.; Semenova, E. V.

    2018-03-01

    The paper considers two algorithms for searching quasi-optimal solutions of discrete optimization problems with regard to the tasks of avionics placing. The first one solves the problem of optimal placement of devices by installation locations, the second one is for the problem of finding the shortest route between devices. Solutions are constructed using a cellular automaton and the ant colony algorithm.

  14. Decision theory with resource-bounded agents.

    PubMed

    Halpern, Joseph Y; Pass, Rafael; Seeman, Lior

    2014-04-01

    There have been two major lines of research aimed at capturing resource-bounded players in game theory. The first, initiated by Rubinstein (), charges an agent for doing costly computation; the second, initiated by Neyman (), does not charge for computation, but limits the computation that agents can do, typically by modeling agents as finite automata. We review recent work on applying both approaches in the context of decision theory. For the first approach, we take the objects of choice in a decision problem to be Turing machines, and charge players for the "complexity" of the Turing machine chosen (e.g., its running time). This approach can be used to explain well-known phenomena like first-impression-matters biases (i.e., people tend to put more weight on evidence they hear early on) and belief polarization (two people with different prior beliefs, hearing the same evidence, can end up with diametrically opposed conclusions) as the outcomes of quite rational decisions. For the second approach, we model people as finite automata, and provide a simple algorithm that, on a problem that captures a number of settings of interest, provably performs optimally as the number of states in the automaton increases. Copyright © 2014 Cognitive Science Society, Inc.

  15. Improvement of reliability in multi-interferometer-based counterfactual deterministic communication with dissipation compensation.

    PubMed

    Liu, Chao; Liu, Jinhong; Zhang, Junxiang; Zhu, Shiyao

    2018-02-05

    Direct counterfactual quantum communication (DCQC) is a surprising phenomenon in which quantum information can be transmitted without any physical particles acting as carriers. Nested interferometers are promising devices for realizing DCQC, provided the number of interferometers goes to infinity. Considering the inevitable loss or dissipation in practical experimental interferometers, we analyze the dependence of reliability on the number of interferometers, and show that the reliability of direct communication degrades rapidly as the number of interferometers grows. Furthermore, we simulate and test this counterfactual deterministic communication protocol with a finite number of interferometers, and demonstrate an improvement in reliability using dissipation compensation in the interferometers.

  16. MARKOV: A methodology for the solution of infinite time horizon MARKOV decision processes

    USGS Publications Warehouse

    Williams, B.K.

    1988-01-01

    Algorithms are described for determining optimal policies for finite state, finite action, infinite discrete time horizon Markov decision processes. Both value-improvement and policy-improvement techniques are used in the algorithms. Computing procedures are also described. The algorithms are appropriate for processes that are either finite or infinite, deterministic or stochastic, discounted or undiscounted, in any meaningful combination of these features. Computing procedures are described in terms of initial data processing, bound improvements, process reduction, and testing and solution. Application of the methodology is illustrated with an example involving natural resource management. Management implications of certain hypothesized relationships between mallard survival and harvest rates are addressed by applying the optimality procedures to mallard population models.
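
    The value-improvement idea for a finite-state, finite-action, discounted MDP can be sketched generically (illustrative, not the MARKOV package itself; the toy transition and reward tables below are invented, and transitions are deterministic for brevity, whereas the methodology also handles stochastic transitions):

```python
def value_iteration(n_states, actions, P, R, gamma=0.9, iters=500):
    """Value-improvement for a discounted MDP with deterministic transitions.
    P[s][a] -> next state; R[s][a] -> immediate reward.
    Returns the approximate optimal value of each state."""
    V = [0.0] * n_states
    for _ in range(iters):
        # Bellman update: V(s) = max_a [ R(s,a) + gamma * V(P(s,a)) ]
        V = [max(R[s][a] + gamma * V[P[s][a]] for a in actions)
             for s in range(n_states)]
    return V
```

    On a toy two-state problem, where state 0 can stay (reward 0) or move to state 1 (reward 0), and state 1 is absorbing with reward 1 per step, the optimal values under gamma = 0.9 are V(1) = 1/(1 - 0.9) = 10 and V(0) = 0.9 * 10 = 9.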

  17. Deterministic and reliability based optimization of integrated thermal protection system composite panel using adaptive sampling techniques

    NASA Astrophysics Data System (ADS)

    Ravishankar, Bharani

    Conventional space vehicles have thermal protection systems (TPS) that provide protection to an underlying structure that carries the flight loads. In an attempt to save weight, there is interest in an integrated TPS (ITPS) that combines the structural function and the TPS function. This has weight-saving potential, but complicates the design of the ITPS, which now has both thermal and structural failure modes. The main objective of this dissertation was to optimally design the ITPS subjected to thermal and mechanical loads through deterministic and reliability-based optimization. The optimization of the ITPS structure requires computationally expensive finite element analyses of the 3D ITPS (solid) model. To reduce the computational expense involved in the structural analysis, a finite element based homogenization method was employed, homogenizing the 3D ITPS model to a 2D orthotropic plate. However, it was found that homogenization was applicable only for panels that are much larger than the characteristic dimensions of the repeating unit cell in the ITPS panel. Hence a single unit cell was used for the optimization process to reduce the computational cost. Deterministic and probabilistic optimization of the ITPS panel required evaluation of failure constraints at various design points. This further demands computationally expensive finite element analyses, which were replaced by efficient, low-fidelity surrogate models. In an optimization process, it is important to represent the constraints accurately to find the optimum design. Instead of building global surrogate models using a large number of designs, the computational resources were directed towards target regions near constraint boundaries for accurate representation of the constraints using adaptive sampling strategies. Efficient Global Reliability Analysis (EGRA) facilitates sequential sampling of design points around the region of interest in the design space. EGRA was applied to the response surface construction of the failure constraints in the deterministic and reliability-based optimization of the ITPS panel. It was shown that using adaptive sampling, the number of designs required to find the optimum was reduced drastically, while improving accuracy. The system reliability of the ITPS was estimated using a Monte Carlo simulation (MCS) based method. A separable Monte Carlo method was employed that allowed separable sampling of the random variables to predict the probability of failure accurately. The reliability analysis considered uncertainties in the geometry, material properties, and loading conditions of the panel, and error in the finite element modeling. These uncertainties further increased the computational cost of the MCS techniques, which was also reduced by employing surrogate models. In order to estimate the error in the probability-of-failure estimate, the bootstrapping method was applied. This research work thus demonstrates optimization of the ITPS composite panel with multiple failure modes and a large number of uncertainties using adaptive sampling techniques.
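
    The core of an MCS-based failure-probability estimate with a bootstrap error bar can be sketched generically (illustrative only; the normal capacity/load limit state and sample sizes are invented, and the dissertation uses the separable variant rather than this crude form):

```python
import random

def failure_probability(n, seed=0):
    """Crude Monte Carlo estimate of P(capacity < load) for a toy limit state
    with capacity ~ N(10, 1) and load ~ N(7, 1)."""
    rng = random.Random(seed)
    fails = sum(rng.gauss(10.0, 1.0) < rng.gauss(7.0, 1.0) for _ in range(n))
    return fails / n

def bootstrap_se(p_hat, n, n_boot=100, seed=1):
    """Bootstrap standard error of the failure-probability estimate.
    Resampling the Bernoulli failure indicators is equivalent to redrawing
    n outcomes with success probability p_hat."""
    rng = random.Random(seed)
    reps = [sum(rng.random() < p_hat for _ in range(n)) / n for _ in range(n_boot)]
    mean = sum(reps) / n_boot
    return (sum((r - mean) ** 2 for r in reps) / (n_boot - 1)) ** 0.5
```

    For this toy limit state the exact failure probability is Phi(-3/sqrt(2)), about 0.017, so the bootstrap standard error should land near sqrt(p(1-p)/n).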

  18. Deterministic multi-step rotation of magnetic single-domain state in Nickel nanodisks using multiferroic magnetoelastic coupling

    NASA Astrophysics Data System (ADS)

    Sohn, Hyunmin; Liang, Cheng-yen; Nowakowski, Mark E.; Hwang, Yongha; Han, Seungoh; Bokor, Jeffrey; Carman, Gregory P.; Candler, Robert N.

    2017-10-01

    We demonstrate deterministic multi-step rotation of a magnetic single-domain (SD) state in Nickel nanodisks using the multiferroic magnetoelastic effect. Ferromagnetic Nickel nanodisks are fabricated on a piezoelectric Lead Zirconate Titanate (PZT) substrate, surrounded by patterned electrodes. With the application of a voltage between opposing electrode pairs, we generate anisotropic in-plane strains that reshape the magnetic energy landscape of the Nickel disks, reorienting magnetization toward a new easy axis. By applying a series of voltages sequentially to adjacent electrode pairs, circulating in-plane anisotropic strains are applied to the Nickel disks, deterministically rotating a SD state in the Nickel disks by increments of 45°. The rotation of the SD state is numerically predicted by a fully-coupled micromagnetic/elastodynamic finite element analysis (FEA) model, and the predictions are experimentally verified with magnetic force microscopy (MFM). This experimental result will provide a new pathway to develop energy efficient magnetic manipulation techniques at the nanoscale.

  19. Space Radiation Transport Methods Development

    NASA Technical Reports Server (NTRS)

    Wilson, J. W.; Tripathi, R. K.; Qualls, G. D.; Cucinotta, F. A.; Prael, R. E.; Norbury, J. W.; Heinbockel, J. H.; Tweed, J.

    2002-01-01

    Improved spacecraft shield design requires early entry of radiation constraints into the design process to maximize performance and minimize costs. As a result, we have been investigating high-speed computational procedures to allow shield analysis from the preliminary design concepts to the final design. In particular, we will discuss the progress towards a full three-dimensional and computationally efficient deterministic code for which the current HZETRN evaluates the lowest order asymptotic term. HZETRN is the first deterministic solution to the Boltzmann equation allowing field mapping within the International Space Station (ISS) in tens of minutes using standard Finite Element Method (FEM) geometry common to engineering design practice enabling development of integrated multidisciplinary design optimization methods. A single ray trace in ISS FEM geometry requires 14 milliseconds and severely limits application of Monte Carlo methods to such engineering models. A potential means of improving the Monte Carlo efficiency in coupling to spacecraft geometry is given in terms of reconfigurable computing and could be utilized in the final design as verification of the deterministic method optimized design.

  20. Non-Deterministic Dynamic Instability of Composite Shells

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.; Abumeri, Galib H.

    2004-01-01

    A computationally effective method is described to evaluate the non-deterministic dynamic instability (probabilistic dynamic buckling) of thin composite shells. The method is a judicious combination of available computer codes for finite element, composite mechanics, and probabilistic structural analysis. The solution method is an incrementally updated Lagrangian formulation. It is illustrated by applying it to a thin composite cylindrical shell subjected to dynamic loads. Both deterministic and probabilistic buckling loads are evaluated to demonstrate the effectiveness of the method. A universal plot is obtained for the specific shell that can be used to approximate buckling loads for different load rates and different probability levels. Results from this plot show that the faster the rate, the higher the buckling load and the shorter the time. The lower the probability, the lower the buckling load for a specific time. Probabilistic sensitivity results show that the ply thickness, the fiber volume ratio, the fiber longitudinal modulus, the dynamic load, and the loading rate are the dominant uncertainties, in that order.

  1. An Automaton Analysis of the Learning of a Miniature System of Japanese. Psychology Series.

    ERIC Educational Resources Information Center

    Wexler, Kenneth Norman

    The purpose of the study reported here was to do an automata-theoretical and experimental investigation of the learning of the syntax and semantics of a second natural language. The main thrust of the work was to ask what kind of automaton a person can become. Various kinds of automata were considered, predictions were made from them, and these…

  2. New Tools for Hybrid Systems

    DTIC Science & Technology

    2007-05-02

    Fragments from the report record: "stability of a class of discrete event systems", IEEE Transactions on Automatic Control, vol. 39, no. 2 ... input/output stability, external stability and incremental input/output stability, as they apply to deterministic finite state machine systems ... [for this] class of systems, incremental I/O stability and external stability are equivalent notions, stronger than the notion of I/O stability.

  3. Probabilistic finite elements for transient analysis in nonlinear continua

    NASA Technical Reports Server (NTRS)

    Liu, W. K.; Belytschko, T.; Mani, A.

    1985-01-01

    The probabilistic finite element method (PFEM), which is a combination of finite element methods and second-moment analysis, is formulated for linear and nonlinear continua with inhomogeneous random fields. Analogous to the discretization of the displacement field in finite element methods, the random field is also discretized. The formulation is simplified by transforming the correlated variables to a set of uncorrelated variables through an eigenvalue orthogonalization. Furthermore, it is shown that a reduced set of the uncorrelated variables is sufficient for the second-moment analysis. Based on the linear formulation of the PFEM, the method is then extended to transient analysis in nonlinear continua. The accuracy and efficiency of the method are demonstrated by application to a one-dimensional, elastic/plastic wave propagation problem. The moments calculated compare favorably with those obtained by Monte Carlo simulation. Also, the procedure is amenable to implementation in deterministic FEM based computer programs.
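
    The eigenvalue orthogonalization step can be illustrated on a 2x2 example (a generic sketch, not the PFEM code; the covariance matrix is invented). For C = [[2, 1], [1, 2]] the eigenpairs are known in closed form, so the decorrelating transform y = Q^T x can be written directly:

```python
import math
import random

# Invented 2x2 covariance C = [[2, 1], [1, 2]]: eigenvalues 3 and 1, with
# orthonormal eigenvectors (1, 1)/sqrt(2) and (1, -1)/sqrt(2) as columns of Q.
LAMBDA = (3.0, 1.0)
S = 1.0 / math.sqrt(2.0)
Q = ((S, S), (S, -S))

def correlated_sample(rng):
    """Draw x ~ N(0, C) as x = Q * diag(sqrt(lambda)) * z, z standard normal."""
    z = (rng.gauss(0, 1) * math.sqrt(LAMBDA[0]),
         rng.gauss(0, 1) * math.sqrt(LAMBDA[1]))
    return (Q[0][0] * z[0] + Q[0][1] * z[1],
            Q[1][0] * z[0] + Q[1][1] * z[1])

def decorrelate(x):
    """y = Q^T x has diagonal covariance diag(lambda): uncorrelated variables."""
    return (Q[0][0] * x[0] + Q[1][0] * x[1],
            Q[0][1] * x[0] + Q[1][1] * x[1])
```

    In the PFEM setting the same rotation is applied to the discretized random field, and truncating to the eigenvectors with the largest eigenvalues gives the reduced set of uncorrelated variables mentioned above.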

  4. Collective Traffic-like Movement of Ants on a Trail: Dynamical Phases and Phase Transitions

    NASA Astrophysics Data System (ADS)

    Kunwar, Ambarish; John, Alexander; Nishinari, Katsuhiro; Schadschneider, Andreas; Chowdhury, Debashish

    2004-11-01

    The traffic-like collective movement of ants on a trail can be described by a stochastic cellular automaton model. We have earlier investigated its unusual flow-density relation by using various mean field approximations and computer simulations. In this paper, we study the model following an alternative approach based on the analogy with the zero range process, which is one of the few known exactly solvable stochastic dynamical models. We show that our theory can quantitatively account for the unusual non-monotonic dependence of the average speed of the ants on their density for finite lattices with periodic boundary conditions. Moreover, we argue that the model exhibits a continuous phase transition at the critical density only in a limiting case. Furthermore, we investigate the phase diagram of the model by replacing the periodic boundary conditions by open boundary conditions.

  5. Influence of Secondary Cooling Mode on Solidification Structure and Macro-segregation Behavior for High-carbon Continuous Casting Bloom

    NASA Astrophysics Data System (ADS)

    Dou, Kun; Yang, Zhenguo; Liu, Qing; Huang, Yunhua; Dong, Hongbiao

    2017-07-01

    A cellular automaton-finite element coupling model for high-carbon continuously cast blooms of GCr15 steel is established to simulate the solidification structure and to investigate the influence of different secondary cooling modes on characteristic parameters such as equiaxed crystal ratio, grain size, and secondary dendrite arm spacing, in which the effects of phase transformation and electromagnetic stirring are taken into consideration. On this basis, the evolution of carbon macro-segregation for the GCr15 steel bloom is investigated via industrial tests. Based on the above analysis, the relationship among secondary cooling modes, characteristic parameters of the solidification structure, and carbon macro-segregation is illustrated to obtain an optimum secondary cooling strategy and alleviate the degree of carbon macro-segregation for the GCr15 steel bloom in the continuous casting process. The evaluation method for element macro-segregation is applicable to various steel types.

  6. 3D CAFE modeling of grain structures: application to primary dendritic and secondary eutectic solidification

    NASA Astrophysics Data System (ADS)

    Carozzani, T.; Digonnet, H.; Gandin, Ch-A.

    2012-01-01

    A three-dimensional model is presented for the prediction of grain structures formed in casting. It is based on direct tracking of grain boundaries using a cellular automaton (CA) method. The model is fully coupled with a solution of the heat flow computed with a finite element (FE) method. Several unique capabilities are implemented including (i) the possibility to track the development of several types of grain structures, e.g. dendritic and eutectic grains, (ii) a coupling scheme that permits iterations between the FE method and the CA method, and (iii) tabulated enthalpy curves for the solid and liquid phases that offer the possibility to work with multicomponent alloys. The present CAFE model is also fully parallelized and runs on a cluster of computers. Demonstration is provided by direct comparison between simulated and recorded cooling curves for a directionally solidified aluminum-7 wt% silicon alloy.

  7. Material modeling of biofilm mechanical properties.

    PubMed

    Laspidou, C S; Spyrou, L A; Aravas, N; Rittmann, B E

    2014-05-01

    A biofilm material model and a procedure for numerical integration are developed in this article. They enable calculation of a composite Young's modulus that varies in the biofilm and evolves with deformation. The biofilm-material model makes it possible to introduce a modeling example, produced by the Unified Multi-Component Cellular Automaton model, into the general-purpose finite-element code ABAQUS. Compressive, tensile, and shear loads are imposed, and the way the biofilm mechanical properties evolve is assessed. Results show that the local values of Young's modulus increase under compressive loading, since compression results in the voids "closing," thus making the material stiffer. For the opposite reason, biofilm stiffness decreases when tensile loads are imposed. Furthermore, the biofilm is more compliant in shear than in compression or tension due to how the elastic shear modulus relates to Young's modulus. Copyright © 2014 Elsevier Inc. All rights reserved.

  8. Three-Dimensional Finite Element Ablative Thermal Response and Thermostructural Design of Thermal Protection Systems

    NASA Technical Reports Server (NTRS)

    Dec, John A.; Braun, Robert D.

    2011-01-01

    A finite element ablation and thermal response program is presented for simulation of three-dimensional transient thermostructural analysis. The three-dimensional governing differential equations and finite element formulation are summarized. A novel probabilistic design methodology for thermal protection systems is presented. The design methodology is an eight-step process beginning with a parameter sensitivity study, followed by a deterministic analysis whereby an optimum design can be determined. The design process concludes with a Monte Carlo simulation where the probabilities of exceeding design specifications are estimated. The design methodology is demonstrated by applying it to the carbon phenolic compression pads of the Crew Exploration Vehicle. The maximum allowed values of bondline temperature and tensile stress are used as the design specifications in this study.

  9. Effect of Temperature and Fluid Flow on Dendrite Growth During Solidification of Al-3 Wt Pct Cu Alloy by the Two-Dimensional Cellular Automaton Method

    NASA Astrophysics Data System (ADS)

    Gu, Cheng; Wei, Yanhong; Liu, Renpei; Yu, Fengyi

    2017-12-01

    A two-dimensional cellular automaton-finite volume model was developed to simulate dendrite growth of Al-3 wt pct Cu alloy during solidification, and to investigate the effect of temperature and fluid flow on dendrite morphology, solute concentration distribution, and dendrite growth velocity. Different calculation conditions that may influence the results of the simulation, including temperature and flow, were considered. The model was also employed to study the effect of different undercoolings, applied temperature fields, and forced flow velocities on solute segregation and dendrite growth. The initial temperature and fluid flow have a significant impact on the dendrite morphologies and solute profiles during solidification. Latent heat released during solidification raises the local temperature. A larger undercooling leads to a larger solute concentration near the solid/liquid interface and a larger solute concentration gradient at the same time-step. Solute concentration in the solid region tends to increase with increasing undercooling. Four vortexes appear under natural flow: the two on the right of the dendrite rotate clockwise, and those on the left rotate counterclockwise. With increasing forced flow velocity, the rejected solute in the upstream region is more easily washed away and becomes enriched in the downstream region, accelerating dendrite growth upstream and inhibiting it downstream. The dendrite arm perpendicular to the fluid flow shows a coarser morphology in the upstream region than in the downstream region. Almost no secondary dendrite appears during the calculation process.

  10. Fault Tolerant Optimal Control.

    DTIC Science & Technology

    1982-08-01

    Fragments from the report record: "...subsystem is modelled by deterministic or stochastic finite-dimensional vector differential or difference equations. The parameters of these equations..." "...[there] is no partial differential equation that must be solved. Thus we can sidestep the inability to solve the Bellman equation for control problems with x..." "...transition models and cost functionals can be reduced to the search for solutions of nonlinear partial differential equations using 'verification..."

  11. Monitoring with Data Automata

    NASA Technical Reports Server (NTRS)

    Havelund, Klaus

    2014-01-01

    We present a form of automaton, referred to as data automata, suited for monitoring sequences of data-carrying events, for example emitted by an executing software system. This form of automata allows states to be parameterized with data, forming named records, which are stored in an efficiently indexed data structure, a form of database. This very explicit approach differs from other automaton-based monitoring approaches. Data automata are also characterized by allowing transition conditions to refer to other parameterized states, and by allowing transition sequences. The presented automaton concept is inspired by rule-based systems, especially the Rete algorithm, which is one of the well-established algorithms for executing rule-based systems. We present an optimized external DSL for data automata, as well as a comparable unoptimized internal DSL (API) in the Scala programming language, in order to compare the two solutions. An evaluation compares these two solutions to several other monitoring systems.
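
    The flavour of a parameterized-state monitor can be conveyed with a toy example (illustrative only, far simpler than the Rete-based DSLs described; the lock/unlock event names are invented). Each active state is a record keyed by its data, kept in a dict for indexed lookup, and transition conditions consult the currently active states:

```python
def monitor(events):
    """Check a lock-discipline property over a sequence of (name, resource)
    events: a resource must be locked before it is unlocked, and never locked
    twice. Active 'Locked(r)' states live in a dict keyed by the data r."""
    locked = {}   # indexed store of parameterized states: resource -> event index
    errors = []
    for i, (name, resource) in enumerate(events):
        if name == "lock":
            if resource in locked:  # transition condition refers to another state
                errors.append(f"double lock of {resource} at event {i}")
            locked[resource] = i
        elif name == "unlock":
            if resource not in locked:
                errors.append(f"unlock of unlocked {resource} at event {i}")
            locked.pop(resource, None)
    return errors
```

    A real data-automaton engine generalizes this pattern: arbitrary named record types, transition sequences, and Rete-style indexing so that condition evaluation does not rescan all active states.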

  12. Unary probabilistic and quantum automata on promise problems

    NASA Astrophysics Data System (ADS)

    Gainutdinova, Aida; Yakaryılmaz, Abuzer

    2018-02-01

    We continue the systematic investigation of probabilistic and quantum finite automata (PFAs and QFAs) on promise problems by focusing on unary languages. We show that bounded-error unary QFAs are more powerful than bounded-error unary PFAs, and, contrary to the binary language case, the computational power of Las-Vegas QFAs and bounded-error PFAs is equivalent to the computational power of deterministic finite automata (DFAs). Then, we present a new family of unary promise problems defined with two parameters such that when fixing one parameter QFAs can be exponentially more succinct than PFAs and when fixing the other parameter PFAs can be exponentially more succinct than DFAs.

  13. Quadratic Finite Element Method for 1D Deterministic Transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tolar, Jr., D R; Ferguson, J M

    2004-01-06

    In the discrete ordinates, or SN, numerical solution of the transport equation, both the spatial (r) and angular (Ω) dependences of the angular flux ψ(r, Ω) are modeled discretely. While significant effort has been devoted toward improving the spatial discretization of the angular flux, we focus on improving the angular discretization of ψ(r, Ω). Specifically, we employ a Petrov-Galerkin quadratic finite element approximation for the differencing of the angular variable (μ) in developing the one-dimensional (1D) spherical geometry SN equations. We develop an algorithm that shows faster convergence with angular resolution than conventional SN algorithms.
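
    For orientation, the continuous equation being discretized is the standard 1D spherical-geometry transport equation (a textbook form, not quoted from the report):

```latex
\mu \frac{\partial \psi}{\partial r}
  + \frac{1-\mu^{2}}{r}\,\frac{\partial \psi}{\partial \mu}
  + \sigma_{t}\,\psi(r,\mu) = q(r,\mu)
```

    The SN method replaces the continuous angular variable μ by discrete ordinates μ_n with quadrature weights; the quadratic finite element approximation described in the abstract concerns this angular (μ) dependence, in particular the angular-redistribution term (1-μ²)/r ∂ψ/∂μ.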

  14. Multi-Baker Map as a Model of Digital PD Control

    NASA Astrophysics Data System (ADS)

    Csernák, Gábor; Gyebrószki, Gergely; Stépán, Gábor

    Digital stabilization of unstable equilibria of linear systems may lead to small amplitude stochastic-like oscillations. We show that these vibrations can be related to a deterministic chaotic dynamics induced by sampling and quantization. A detailed analytical proof of chaos is presented for the case of a PD controlled oscillator: it is shown that there exists a finite attracting domain in the phase-space, the largest Lyapunov exponent is positive and the existence of a Smale horseshoe is also pointed out. The corresponding two-dimensional micro-chaos map is a multi-baker map, i.e. it consists of a finite series of baker’s maps.
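
    The mechanism can be illustrated with a one-dimensional caricature of the micro-chaos map, x_{n+1} = a·x_n - b·⌊x_n⌋, in which the actuation is proportional to the quantized (rounded-down) state. The parameter values below are illustrative, not taken from the paper.

```python
import math

def micro_chaos(x, a=1.5, b=1.4):
    # Quantized feedback: the control force is proportional to the
    # rounded-down (quantized) state, giving a piecewise-linear map.
    return a * x - b * math.floor(x)

x = 0.3
orbit = []
for _ in range(1000):
    x = micro_chaos(x)
    orbit.append(x)

# With these parameters the interval [0, 2) is invariant, so the orbit is
# bounded, yet the slope is a = 1.5 > 1 everywhere, so the largest Lyapunov
# exponent is ln(1.5) > 0: bounded, stochastic-looking "micro-chaos".
print(min(orbit), max(orbit))
```

    The same combination of a locally expanding map (instability plus sampling) and a fold-back (quantization) is what produces the baker-like structure in the two-dimensional PD-controlled case.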

  15. Multi Car Elevator Control by using Learning Automaton

    NASA Astrophysics Data System (ADS)

    Shiraishi, Kazuaki; Hamagami, Tomoki; Hirata, Hironori

    We study an adaptive control technique for multi car elevators (MCEs) that adopts learning automata (LAs). The MCE is a high-performance, near-future elevator system with multiple shafts and multiple cars. A strong point of the system is that it realizes a large carrying capacity in a small shaft area. However, because the operation is highly complicated, efficient MCE control is difficult to realize with top-down approaches. For example, “bunching up together" is a typical phenomenon in a simple traffic environment like the MCE. Furthermore, adapting to varying configuration requirements is a serious issue in real elevator service. To resolve these issues, the control system of each car in the MCE system must behave autonomously, and the learning automaton, as a solution to this requirement, is well suited to such simple traffic control. First, we assign a stochastic automaton (SA) to each car control system. Each SA then varies its stochastic behavior distribution to adapt to the environment, in which its policy is evaluated by the waiting time of each passenger; this is an LA that learns the environment autonomously. Using the LA-based control technique, the MCE operation efficiency is evaluated through simulation experiments. Results show that the technique reduces waiting times efficiently, and we confirm that the system can adapt to a dynamic environment.
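
    A learning automaton of this kind maintains a probability distribution over actions and reinforces actions that the environment rewards. The sketch below uses the classical linear reward-inaction update; the two "dispatch policies" and the 0.8/0.2 reward probabilities are hypothetical stand-ins for the paper's waiting-time evaluation.

```python
import random

def lri_update(p, chosen, reward, lr=0.1):
    # Linear reward-inaction (L_RI): on reward, shift probability mass
    # toward the chosen action; on penalty, leave p unchanged.
    if reward:
        p = [(1.0 - lr) * q for q in p]
        p[chosen] += lr
    return p

random.seed(0)
p = [0.5, 0.5]  # two hypothetical dispatch policies for one car
for _ in range(500):
    a = random.choices(range(2), weights=p)[0]
    # Toy environment: policy 0 yields short waiting times more often.
    reward = random.random() < (0.8 if a == 0 else 0.2)
    p = lri_update(p, a, reward)
print(p)
```

    Over many interactions the distribution concentrates on a single action, which is how each car's SA settles on a behavior without any top-down coordination.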

  16. A space radiation transport method development

    NASA Technical Reports Server (NTRS)

    Wilson, J. W.; Tripathi, R. K.; Qualls, G. D.; Cucinotta, F. A.; Prael, R. E.; Norbury, J. W.; Heinbockel, J. H.; Tweed, J.

    2004-01-01

    Improved spacecraft shield design requires early entry of radiation constraints into the design process to maximize performance and minimize costs. As a result, we have been investigating high-speed computational procedures to allow shield analysis from the preliminary design concepts to the final design. In particular, we will discuss the progress towards a full three-dimensional and computationally efficient deterministic code for which the current HZETRN evaluates the lowest-order asymptotic term. HZETRN is the first deterministic solution to the Boltzmann equation allowing field mapping within the International Space Station (ISS) in tens of minutes using standard finite element method (FEM) geometry common to engineering design practice, enabling development of integrated multidisciplinary design optimization methods. A single ray trace in ISS FEM geometry requires 14 ms and severely limits application of Monte Carlo methods to such engineering models. A potential means of improving the Monte Carlo efficiency in coupling to spacecraft geometry is given in terms of re-configurable computing and could be utilized in the final design as verification of the deterministic method optimized design. Published by Elsevier Ltd on behalf of COSPAR.

  17. On the influence of additive and multiplicative noise on holes in dissipative systems.

    PubMed

    Descalzi, Orazio; Cartes, Carlos; Brand, Helmut R

    2017-05-01

    We investigate the influence of noise on deterministically stable holes in the cubic-quintic complex Ginzburg-Landau equation. Inspired by experimental possibilities, we specifically study two types of noise: additive noise delta-correlated in space and spatially homogeneous multiplicative noise on the formation of π-holes and 2π-holes. Our results include the following main features. For large enough additive noise, we always find a transition to the noisy version of the spatially homogeneous finite amplitude solution, while for sufficiently large multiplicative noise, a collapse occurs to the zero amplitude solution. The latter type of behavior, while unexpected deterministically, can be traced back to a characteristic feature of multiplicative noise; the zero solution acts as the analogue of an absorbing boundary: once trapped at zero, the system cannot escape. For 2π-holes, which exist deterministically over a fairly small range of values of subcriticality, one can induce a transition to a π-hole (for additive noise) or to a noise-sustained pulse (for multiplicative noise). This observation opens the possibility of noise-induced switching back and forth from and to 2π-holes.

  18. Effect of nonlinearity in hybrid kinetic Monte Carlo-continuum models.

    PubMed

    Balter, Ariel; Lin, Guang; Tartakovsky, Alexandre M

    2012-01-01

    Recently there has been interest in developing efficient ways to model heterogeneous surface reactions with hybrid computational models that couple a kinetic Monte Carlo (KMC) model for a surface to a finite-difference model for bulk diffusion in a continuous domain. We consider two representative problems that validate a hybrid method and show that this method captures the combined effects of nonlinearity and stochasticity. We first validate a simple deposition-dissolution model with a linear rate showing that the KMC-continuum hybrid agrees with both a fully deterministic model and its analytical solution. We then study a deposition-dissolution model including competitive adsorption, which leads to a nonlinear rate, and show that in this case the KMC-continuum hybrid and fully deterministic simulations do not agree. However, we are able to identify the difference as a natural result of the stochasticity coming from the KMC surface process. Because KMC captures inherent fluctuations, we consider it to be more realistic than a purely deterministic model. Therefore, we consider the KMC-continuum hybrid to be more representative of a real system.
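
    The linear deposition-dissolution benchmark above can be reproduced with a few lines of lattice kinetic Monte Carlo, whose mean coverage converges to the deterministic steady state θ* = k_a/(k_a + k_d). This is a generic sketch of that comparison, not the paper's hybrid code; the rate constants are illustrative.

```python
import random

random.seed(1)
ka, kd = 0.3, 0.1           # deposition / dissolution rate constants
N, dt, steps = 500, 0.05, 2000

occ = [0] * N               # lattice of surface sites: 0 empty, 1 occupied
for _ in range(steps):
    for i in range(N):
        if occ[i] == 0 and random.random() < ka * dt:
            occ[i] = 1      # deposition onto an empty site
        elif occ[i] == 1 and random.random() < kd * dt:
            occ[i] = 0      # dissolution of an occupied site

theta_kmc = sum(occ) / N
theta_det = ka / (ka + kd)  # steady state of d(theta)/dt = ka*(1-theta) - kd*theta
print(theta_kmc, theta_det)
```

    With a linear rate, fluctuations average out and the KMC coverage matches the deterministic value up to O(1/sqrt(N)) noise; a nonlinear (e.g., competitive-adsorption) rate breaks this agreement because the mean of a nonlinear function is not the function of the mean.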

  19. Effect of Nonlinearity in Hybrid Kinetic Monte Carlo-Continuum Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balter, Ariel I.; Lin, Guang; Tartakovsky, Alexandre M.

    2012-04-23

    Recently there has been interest in developing efficient ways to model heterogeneous surface reactions with hybrid computational models that couple a KMC model for a surface to a finite difference model for bulk diffusion in a continuous domain. We consider two representative problems that validate a hybrid method and also show that this method captures the combined effects of nonlinearity and stochasticity. We first validate a simple deposition/dissolution model with a linear rate showing that the KMC-continuum hybrid agrees with both a fully deterministic model and its analytical solution. We then study a deposition/dissolution model including competitive adsorption, which leads to a nonlinear rate, and show that, in this case, the KMC-continuum hybrid and fully deterministic simulations do not agree. However, we are able to identify the difference as a natural result of the stochasticity coming from the KMC surface process. Because KMC captures inherent fluctuations, we consider it to be more realistic than a purely deterministic model. Therefore, we consider the KMC-continuum hybrid to be more representative of a real system.

  20. Specialized Silicon Compilers for Language Recognition.

    DTIC Science & Technology

    1984-07-01

    realizations of non-deterministic automata have been reported that solve these problems in different ways. Floyd and Ullman [28] have presented a ... in Applied Mathematics, pages 19-31. American Mathematical Society, 1967. [28] Floyd, R. W. and J. D. Ullman. The Compilation of Regular Expressions ... Shannon (editor). Automata Studies, chapter 1, pages 3-41. Princeton University Press, Princeton, N. J., 1956. [44] Kohavi, Zvi. Switching and Finite

  1. Comparison of two optimization algorithms for fuzzy finite element model updating for damage detection in a wind turbine blade

    NASA Astrophysics Data System (ADS)

    Turnbull, Heather; Omenzetter, Piotr

    2018-03-01

    Difficulties associated with current health monitoring and inspection practices, combined with the harsh, often remote, operational environments of wind turbines, highlight the requirement for a non-destructive evaluation system capable of remotely monitoring the current structural state of turbine blades. This research adopted a physics-based structural health monitoring methodology through calibration of a finite element model using inverse techniques. A 2.36 m blade from a 5 kW turbine was used as the experimental specimen, with operational modal analysis techniques utilised to obtain the modal properties of the system. Modelling the experimental responses as fuzzy numbers using the sub-level technique, uncertainty in the response parameters was propagated back through the model and into the updating parameters. Initially, experimental responses of the blade were obtained, and a numerical model of the blade was created and updated. Deterministic updating was carried out through formulation and minimisation of a deterministic objective function using both the firefly algorithm and the virus optimisation algorithm. Uncertainty in the experimental responses was modelled using triangular membership functions, allowing membership functions of the updating parameters (Young's modulus and shear modulus) to be obtained. The firefly algorithm and virus optimisation algorithm were again utilised, this time in the solution of fuzzy objective functions, enabling the uncertainty associated with the updating parameters to be quantified. Varying damage location and severity were simulated experimentally through the addition of small masses to the structure, intended to cause a structural alteration. A damaged model, representing four nonstructural masses of variable magnitude at predefined points, was created and updated to provide a deterministic damage prediction and information on the parameters' uncertainty via fuzzy updating.

  2. Forward and Inverse Modeling of Self-potential. A Tomography of Groundwater Flow and Comparison Between Deterministic and Stochastic Inversion Methods

    NASA Astrophysics Data System (ADS)

    Quintero-Chavarria, E.; Ochoa Gutierrez, L. H.

    2016-12-01

    Applications of the self-potential method in the fields of hydrogeology and environmental sciences have seen significant developments during the last two decades, with strong use in the identification of groundwater flows. Although only a few authors deal with the forward problem's solution, especially in the geophysics literature, different inversion procedures are currently being developed; in most cases, however, they are compared with unconventional groundwater velocity fields and restricted to structured meshes. This research solves the forward problem based on the finite element method, using St. Venant's principle to transform a point dipole, which is the field generated by a single vector, into a distribution of electrical monopoles. Then, two simple aquifer models were generated with specific boundary conditions, and the head potentials, velocity fields, and electric potentials in the medium were computed. With the model's surface electric potential, the inverse problem is solved to retrieve the source of electric potential (the vector field associated with groundwater flow) using deterministic and stochastic approaches. The first approach was carried out by implementing a Tikhonov regularization with a stabilized operator adapted to the finite element mesh, while for the second a hierarchical Bayesian model based on Markov chain Monte Carlo (McMC) and Markov random fields (MRF) was constructed. For all implemented methods, the results of the direct and inverse models were contrasted in two ways: 1) the shape and distribution of the vector field, and 2) histograms of magnitudes. Finally, it was concluded that inversion procedures are improved when the velocity field's behavior is considered; thus, the deterministic method is more suitable for unconfined aquifers than confined ones. McMC has restricted applications and requires a lot of information (particularly in potential fields), while MRF has a remarkable response, especially when dealing with confined aquifers.

  3. Variational approach to probabilistic finite elements

    NASA Technical Reports Server (NTRS)

    Belytschko, T.; Liu, W. K.; Mani, A.; Besterfield, G.

    1991-01-01

    Probabilistic finite element methods (PFEM), synthesizing the power of finite element methods with second-moment techniques, are formulated for various classes of problems in structural and solid mechanics. Time-invariant random materials, geometric properties and loads are incorporated in terms of their fundamental statistics viz. second-moments. Analogous to the discretization of the displacement field in finite element methods, the random fields are also discretized. Preserving the conceptual simplicity, the response moments are calculated with minimal computations. By incorporating certain computational techniques, these methods are shown to be capable of handling large systems with many sources of uncertainties. By construction, these methods are applicable when the scale of randomness is not very large and when the probabilistic density functions have decaying tails. The accuracy and efficiency of these methods, along with their limitations, are demonstrated by various applications. Results obtained are compared with those of Monte Carlo simulation and it is shown that good accuracy can be obtained for both linear and nonlinear problems. The methods are amenable to implementation in deterministic FEM based computer codes.
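
    The second-moment idea underlying PFEM can be shown on a one-element problem: propagate the mean and variance of a random input through the response sensitivity evaluated at the mean, then check against Monte Carlo. The bar geometry and statistics below are illustrative, not from the paper.

```python
import math
import random

# One-element "stochastic FEM" problem: axial bar, tip displacement
# u(E) = P*L/(E*A), with random Young's modulus E (illustrative values).
P, L, A = 10e3, 2.0, 1e-3
muE, sdE = 200e9, 20e9

def u(E):
    return P * L / (E * A)

# Second-moment (first-order perturbation) estimates about the mean:
du_dE = -P * L / (muE ** 2 * A)   # response sensitivity at the mean
mean_fosm = u(muE)
sd_fosm = abs(du_dE) * sdE

# Monte Carlo reference, as in the comparison reported above:
random.seed(2)
samples = [u(random.gauss(muE, sdE)) for _ in range(200000)]
mean_mc = sum(samples) / len(samples)
sd_mc = math.sqrt(sum((s - mean_mc) ** 2 for s in samples) / len(samples))
print(mean_fosm, sd_fosm, mean_mc, sd_mc)
```

    At this 10% coefficient of variation the perturbation and Monte Carlo moments agree to within a few percent; as the abstract notes, the approach degrades when the scale of randomness grows or the input densities have heavy tails.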

  4. Variational approach to probabilistic finite elements

    NASA Astrophysics Data System (ADS)

    Belytschko, T.; Liu, W. K.; Mani, A.; Besterfield, G.

    1991-08-01

    Probabilistic finite element methods (PFEM), synthesizing the power of finite element methods with second-moment techniques, are formulated for various classes of problems in structural and solid mechanics. Time-invariant random materials, geometric properties and loads are incorporated in terms of their fundamental statistics viz. second-moments. Analogous to the discretization of the displacement field in finite element methods, the random fields are also discretized. Preserving the conceptual simplicity, the response moments are calculated with minimal computations. By incorporating certain computational techniques, these methods are shown to be capable of handling large systems with many sources of uncertainties. By construction, these methods are applicable when the scale of randomness is not very large and when the probabilistic density functions have decaying tails. The accuracy and efficiency of these methods, along with their limitations, are demonstrated by various applications. Results obtained are compared with those of Monte Carlo simulation and it is shown that good accuracy can be obtained for both linear and nonlinear problems. The methods are amenable to implementation in deterministic FEM based computer codes.

  5. Variational approach to probabilistic finite elements

    NASA Technical Reports Server (NTRS)

    Belytschko, T.; Liu, W. K.; Mani, A.; Besterfield, G.

    1987-01-01

    Probabilistic finite element methods (PFEM), synthesizing the power of finite element methods with second-moment techniques, are formulated for various classes of problems in structural and solid mechanics. Time-invariant random materials, geometric properties, and loads are incorporated in terms of their fundamental statistics viz. second-moments. Analogous to the discretization of the displacement field in finite element methods, the random fields are also discretized. Preserving the conceptual simplicity, the response moments are calculated with minimal computations. By incorporating certain computational techniques, these methods are shown to be capable of handling large systems with many sources of uncertainties. By construction, these methods are applicable when the scale of randomness is not very large and when the probabilistic density functions have decaying tails. The accuracy and efficiency of these methods, along with their limitations, are demonstrated by various applications. Results obtained are compared with those of Monte Carlo simulation and it is shown that good accuracy can be obtained for both linear and nonlinear problems. The methods are amenable to implementation in deterministic FEM based computer codes.

  6. Life's Still Lifes

    NASA Astrophysics Data System (ADS)

    McIntosh, Harold V.

    The de Bruijn diagram describing those decompositions of the neighborhoods of a one dimensional cellular automaton which conform to predetermined requirements of periodicity and translational symmetry shows how to construct extended configurations satisfying the same requirements. Similar diagrams, formed by stages, describe higher dimensional automata, although they become more laborious to compute with increasing neighborhood size. The procedure is illustrated by computing some still lifes for Conway's game of Life, a widely known two dimensional cellular automaton. This paper was written on September 10, 1988.
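
    A configuration is a still life exactly when one synchronous Life update maps it to itself. A minimal check of that defining property (generic Life code, not the paper's de Bruijn construction):

```python
from collections import Counter

def life_step(cells):
    # One synchronous update of Conway's Life; cells is a set of live (x, y).
    counts = Counter((x + dx, y + dy)
                     for (x, y) in cells
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in cells)}

block = {(0, 0), (0, 1), (1, 0), (1, 1)}  # the 2x2 "block" still life
print(life_step(block) == block)          # True: a still life maps to itself
```

    The de Bruijn-diagram approach of the paper enumerates such fixed configurations systematically rather than verifying them one at a time.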

  7. A cellular automaton for the signed particle formulation of quantum mechanics

    NASA Astrophysics Data System (ADS)

    Sellier, J. M.; Kapanova, K. G.; Dimov, I.

    2017-02-01

    Recently, a new formulation of quantum mechanics, based on the concept of signed particles, has been suggested. In this paper, we introduce a cellular automaton which mimics the dynamics of quantum objects in the phase-space in a time-dependent fashion. The aim is twofold: it provides a simplified and accessible language for non-physicists who want to simulate quantum mechanical systems, while at the same time enabling a different way to explore the laws of physics. Moreover, it opens the way towards hybrid simulations of quantum systems, combining full quantum models with cellular automata where the former fail. In order to show the validity of the suggested cellular automaton and its combination with the signed particle formalism, several numerical experiments are performed, showing very promising results. As this article is a preliminary study on quantum simulations in phase-space by means of cellular automata, some conclusions are drawn about the encouraging results obtained so far and possible future developments.

  8. Wavefront cellular learning automata.

    PubMed

    Moradabadi, Behnaz; Meybodi, Mohammad Reza

    2018-02-01

    This paper proposes a new cellular learning automaton, called a wavefront cellular learning automaton (WCLA). The proposed WCLA has a set of learning automata mapped to a connected structure and uses this structure to propagate the state changes of the learning automata over the structure using waves. In the WCLA, after one learning automaton chooses its action, if this chosen action is different from the previous action, it can send a wave to its neighbors and activate them. Each neighbor receiving the wave is activated and must choose a new action. This structure for the WCLA is necessary in many dynamic areas such as social networks, computer networks, grid computing, and web mining. In this paper, we introduce the WCLA framework as an optimization tool with diffusion capability, study its behavior over time using ordinary differential equation solutions, and present its accuracy using expediency analysis. To show the superiority of the proposed WCLA, we compare the proposed method with some other types of cellular learning automata using two benchmark problems.

  9. Universal microfluidic automaton for autonomous sample processing: application to the Mars Organic Analyzer.

    PubMed

    Kim, Jungkyu; Jensen, Erik C; Stockton, Amanda M; Mathies, Richard A

    2013-08-20

    A fully integrated multilayer microfluidic chemical analyzer for automated sample processing and labeling, as well as analysis using capillary zone electrophoresis is developed and characterized. Using lifting gate microfluidic control valve technology, a microfluidic automaton consisting of a two-dimensional microvalve cellular array is fabricated with soft lithography in a format that enables facile integration with a microfluidic capillary electrophoresis device. The programmable sample processor performs precise mixing, metering, and routing operations that can be combined to achieve automation of complex and diverse assay protocols. Sample labeling protocols for amino acid, aldehyde/ketone and carboxylic acid analysis are performed automatically followed by automated transfer and analysis by the integrated microfluidic capillary electrophoresis chip. Equivalent performance to off-chip sample processing is demonstrated for each compound class; the automated analysis resulted in a limit of detection of ~16 nM for amino acids. Our microfluidic automaton provides a fully automated, portable microfluidic analysis system capable of autonomous analysis of diverse compound classes in challenging environments.

  10. Wavefront cellular learning automata

    NASA Astrophysics Data System (ADS)

    Moradabadi, Behnaz; Meybodi, Mohammad Reza

    2018-02-01

    This paper proposes a new cellular learning automaton, called a wavefront cellular learning automaton (WCLA). The proposed WCLA has a set of learning automata mapped to a connected structure and uses this structure to propagate the state changes of the learning automata over the structure using waves. In the WCLA, after one learning automaton chooses its action, if this chosen action is different from the previous action, it can send a wave to its neighbors and activate them. Each neighbor receiving the wave is activated and must choose a new action. This structure for the WCLA is necessary in many dynamic areas such as social networks, computer networks, grid computing, and web mining. In this paper, we introduce the WCLA framework as an optimization tool with diffusion capability, study its behavior over time using ordinary differential equation solutions, and present its accuracy using expediency analysis. To show the superiority of the proposed WCLA, we compare the proposed method with some other types of cellular learning automata using two benchmark problems.

  11. Cross-frequency and band-averaged response variance prediction in the hybrid deterministic-statistical energy analysis method

    NASA Astrophysics Data System (ADS)

    Reynders, Edwin P. B.; Langley, Robin S.

    2018-08-01

    The hybrid deterministic-statistical energy analysis method has proven to be a versatile framework for modeling built-up vibro-acoustic systems. The stiff system components are modeled deterministically, e.g., using the finite element method, while the wave fields in the flexible components are modeled as diffuse. In the present paper, the hybrid method is extended such that not only the ensemble mean and variance of the harmonic system response can be computed, but also of the band-averaged system response. This variance represents the uncertainty that is due to the assumption of a diffuse field in the flexible components of the hybrid system. The developments start with a cross-frequency generalization of the reciprocity relationship between the total energy in a diffuse field and the cross spectrum of the blocked reverberant loading at the boundaries of that field. By making extensive use of this generalization in a first-order perturbation analysis, explicit expressions are derived for the cross-frequency and band-averaged variance of the vibrational energies in the diffuse components and for the cross-frequency and band-averaged variance of the cross spectrum of the vibro-acoustic field response of the deterministic components. These expressions are extensively validated against detailed Monte Carlo analyses of coupled plate systems in which diffuse fields are simulated by randomly distributing small point masses across the flexible components, and good agreement is found.

  12. Optimal nonlinear filtering using the finite-volume method

    NASA Astrophysics Data System (ADS)

    Fox, Colin; Morrison, Malcolm E. K.; Norton, Richard A.; Molteno, Timothy C. A.

    2018-01-01

    Optimal sequential inference, or filtering, for the state of a deterministic dynamical system requires simulation of the Frobenius-Perron operator, that can be formulated as the solution of a continuity equation. For low-dimensional, smooth systems, the finite-volume numerical method provides a solution that conserves probability and gives estimates that converge to the optimal continuous-time values, while a Courant-Friedrichs-Lewy-type condition assures that intermediate discretized solutions remain positive density functions. This method is demonstrated in an example of nonlinear filtering for the state of a simple pendulum, with comparison to results using the unscented Kalman filter, and for a case where rank-deficient observations lead to multimodal probability distributions.
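
    The continuity-equation solve at the heart of this filter can be sketched with a first-order upwind finite-volume step: it conserves probability exactly and, under the CFL condition, keeps the density nonnegative. The 1D drift below is a toy stand-in for the pendulum dynamics.

```python
def fv_step(p, f, dx, dt):
    # One upwind finite-volume step for dp/dt + d(f(x) p)/dx = 0 on [-1, 1].
    n = len(p)
    flux = [0.0] * (n + 1)              # no-flux outer boundaries
    for i in range(1, n):
        v = f(-1.0 + i * dx)            # velocity at the edge between cells
        flux[i] = v * (p[i - 1] if v > 0 else p[i])  # upwind density choice
    return [p[i] - dt / dx * (flux[i + 1] - flux[i]) for i in range(n)]

n, dx, dt = 100, 2.0 / 100, 0.005       # CFL: max|f| * dt/dx = 0.25 <= 1
f = lambda x: -x                        # toy dynamics: state contracts to 0
p = [0.5] * n                           # uniform initial density on [-1, 1]
for _ in range(400):
    p = fv_step(p, f, dx, dt)

mass = sum(q * dx for q in p)
print(mass)                             # total probability is conserved
```

    Because each step only moves mass between adjacent cells through shared fluxes, the total probability is conserved to round-off; in the filter, this transport step alternates with Bayesian reweighting by the observation likelihood.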

  13. Stochastic evolution in populations of ideas

    PubMed Central

    Nicole, Robin; Sollich, Peter; Galla, Tobias

    2017-01-01

    It is known that learning of players who interact in a repeated game can be interpreted as an evolutionary process in a population of ideas. These analogies have so far mostly been established in deterministic models, and memory loss in learning has been seen to act similarly to mutation in evolution. We here propose a representation of reinforcement learning as a stochastic process in finite ‘populations of ideas’. The resulting birth-death dynamics has absorbing states and allows for the extinction or fixation of ideas, marking a key difference to mutation-selection processes in finite populations. We characterize the outcome of evolution in populations of ideas for several classes of symmetric and asymmetric games. PMID:28098244
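
    The key feature, absorbing boundaries, is easy to see in the neutral case, where the count of one idea performs an unbiased birth-death walk and the fixation probability from i0 copies out of N is i0/N. A generic simulation of that prediction (not the paper's game-specific dynamics):

```python
import random

random.seed(3)

def fixation_probability(i0, N, trials=2000):
    # Neutral birth-death walk on the count i of idea A, i in {0,...,N};
    # the boundaries i = 0 (extinction) and i = N (fixation) are absorbing.
    fixed = 0
    for _ in range(trials):
        i = i0
        while 0 < i < N:
            i += random.choice([1, -1])   # one birth-death event
        fixed += (i == N)
    return fixed / trials

est = fixation_probability(3, 10)
print(est)   # near the neutral prediction i0/N = 0.3
```

    With memory loss (the analogue of mutation) the boundaries stop being absorbing, which is exactly the difference from mutation-selection dynamics that the abstract highlights.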

  14. Lattice Truss Structural Response Using Energy Methods

    NASA Technical Reports Server (NTRS)

    Kenner, Winfred Scottson

    1996-01-01

    A deterministic methodology is presented for developing closed-form deflection equations for two-dimensional and three-dimensional lattice structures. Four types of lattice structures are studied: beams, plates, shells and soft lattices. Castigliano's second theorem, which involves the total strain energy of a structure, is utilized to generate highly accurate results. Derived deflection equations provide new insight into the bending and shear behavior of the four types of lattices, in contrast to classic solutions of similar structures. Lattice derivations utilizing kinetic energy are also presented, and used to examine the free vibration response of simple lattice structures. Derivations utilizing finite element theory for unique lattice behavior are also presented and validated using the finite element analysis code EAL.
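
    As a textbook illustration of Castigliano's second theorem (a generic example, not one of the paper's lattice derivations), the tip deflection of an end-loaded cantilever follows from differentiating the bending strain energy with respect to the applied load:

```latex
U = \int_{0}^{L} \frac{M^{2}(x)}{2EI}\,dx, \qquad M(x) = Px,
\qquad
\delta = \frac{\partial U}{\partial P}
       = \int_{0}^{L} \frac{P x^{2}}{EI}\,dx
       = \frac{P L^{3}}{3EI}.
```

    For a lattice truss, the same derivative is taken of the sum of member strain energies (axial, and where applicable bending and shear), which is what yields the closed-form deflection equations described above.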

  15. Stochastic evolution in populations of ideas

    NASA Astrophysics Data System (ADS)

    Nicole, Robin; Sollich, Peter; Galla, Tobias

    2017-01-01

    It is known that learning of players who interact in a repeated game can be interpreted as an evolutionary process in a population of ideas. These analogies have so far mostly been established in deterministic models, and memory loss in learning has been seen to act similarly to mutation in evolution. We here propose a representation of reinforcement learning as a stochastic process in finite ‘populations of ideas’. The resulting birth-death dynamics has absorbing states and allows for the extinction or fixation of ideas, marking a key difference to mutation-selection processes in finite populations. We characterize the outcome of evolution in populations of ideas for several classes of symmetric and asymmetric games.

  16. Research on an augmented Lagrangian penalty function algorithm for nonlinear programming

    NASA Technical Reports Server (NTRS)

    Frair, L.

    1978-01-01

    The augmented Lagrangian (ALAG) Penalty Function Algorithm for optimizing nonlinear mathematical models is discussed. The mathematical models of interest are deterministic in nature and finite dimensional optimization is assumed. A detailed review of penalty function techniques in general and the ALAG technique in particular is presented. Numerical experiments are conducted utilizing a number of nonlinear optimization problems to identify an efficient ALAG Penalty Function Technique for computer implementation.

  17. On the Nature of Design

    NASA Astrophysics Data System (ADS)

    Valverde, Sergi; Solé, Ricard V.

    ...the attention of scientists, philosophers and laymen alike. It was so extraordinary, in fact, that even today we are fascinated by it and by the no less uncommon people who got involved. The subject of this story was an amazing machine, more precisely an automaton. Known as the Turk, it was a mechanical chess player, made of wood and dressed in a Turkish-like costume (see Fig. 1). It played chess with Napoleon, inspired Charles Babbage and moved the great Edgar Allan Poe to write a critical essay about the nature of the automaton [1].

  18. Scaling properties of a rice-pile model: inertia and friction effects.

    PubMed

    Khfifi, M; Loulidi, M

    2008-11-01

    We present a rice-pile cellular automaton model that includes inertial and friction effects. This model is studied in one dimension, where the updating of metastable sites is done according to a stochastic dynamics governed by a probabilistic toppling parameter p that depends on the accumulated energy of moving grains. We investigate the scaling properties of the model using finite-size scaling analysis. The avalanche size, the lifetime, and the residence time distributions exhibit a power-law behavior. Their corresponding critical exponents, respectively τ, y, and y_r, are not universal; they vary continuously with the parameters of the system. The maximal value of the critical exponent τ that our model gives is very close to the experimental one, τ = 2.02 [Frette, Nature (London) 379, 49 (1996)], and the probability distribution of the residence time is in good agreement with the experimental results. We note that the critical behavior is observed only in a certain range of parameter values of the system, corresponding to low inertia and high friction.
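
    The class of models in question can be sketched with a one-dimensional pile in which a super-critical site topples only with probability p per sweep, so metastable sites can survive an avalanche. This is a simplified caricature: here p is a constant, whereas in the paper it depends on the accumulated energy of the moving grains.

```python
import random

random.seed(4)
L_sites, p_topple = 30, 0.7    # pile length, probabilistic toppling parameter
grains = 10000

h = [0] * (L_sites + 1)        # heights; h[L_sites] = 0 is the open edge
sizes = []                     # avalanche sizes (number of topplings)
for _ in range(grains):
    h[0] += 1                  # drop a grain at the closed wall
    size, active = 0, True
    while active:              # relax until a full sweep topples nothing
        active = False
        for i in range(L_sites):
            # metastable site: slope above threshold topples with prob p
            if h[i] - h[i + 1] >= 2 and random.random() < p_topple:
                h[i] -= 1
                if i + 1 < L_sites:
                    h[i + 1] += 1   # grain moves one site downhill
                # else: the grain leaves the pile at the open boundary
                size += 1
                active = True
    sizes.append(size)
print(max(sizes), sizes.count(0))
```

    Histogramming `sizes` on log-log axes is the starting point of the finite-size scaling analysis; in this caricature, varying p plays the role of the inertia/friction parameters that make the exponents non-universal.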

  18. Simulation of Channel Segregation During Directional Solidification of In-75 wt pct Ga. Qualitative Comparison with In Situ Observations

    NASA Astrophysics Data System (ADS)

    Saad, Ali; Gandin, Charles-André; Bellet, Michel; Shevchenko, Natalia; Eckert, Sven

    2015-11-01

    Freckles are common defects in industrial castings. They result from thermosolutal convection due to buoyancy forces generated by density variations in the liquid. The present paper proposes a numerical analysis of the formation of channel segregation using the three-dimensional (3D) cellular automaton (CA)-finite element (FE) model. The model integrates kinetics laws for the nucleation and growth of a microstructure with the solution of the conservation equations for the casting, while introducing an intermediate modeling scale for a direct representation of the envelope of the dendritic grains. Directional solidification of a cuboid cell is studied. Its geometry, the alloy chosen, as well as the process parameters are inspired by experimental observations recently reported in the literature. Snapshots of the convective pattern, the solute distribution, and the morphology of the growth front are qualitatively compared. Similarities are found when considering the coupled 3D CAFE simulations. Limitations of the model in reaching direct simulation of the experiments are discussed.

  20. Microstructure simulation of rapidly solidified ASP30 high-speed steel particles by gas atomization

    NASA Astrophysics Data System (ADS)

    Ma, Jie; Wang, Bo; Yang, Zhi-liang; Wu, Guang-xin; Zhang, Jie-yu; Zhao, Shun-li

    2016-03-01

    In this study, the microstructure evolution of rapidly solidified ASP30 high-speed steel particles was predicted using a simulation method based on the cellular automaton-finite element (CAFE) model. The dendritic growth kinetics, in view of the characteristics of ASP30 steel, were calculated and combined with macro heat transfer calculations by user-defined functions (UDFs) to simulate the microstructure of gas-atomized particles. The relationship among particle diameter, undercooling, and the convection heat transfer coefficient was also investigated to provide cooling conditions for simulations. The simulated results indicated that a columnar grain microstructure was observed in small particles, whereas an equiaxed microstructure was observed in large particles. In addition, the morphologies and microstructures of gas-atomized ASP30 steel particles were also investigated experimentally using scanning electron microscopy (SEM). The experimental results showed that four major types of microstructures were formed: dendritic, equiaxed, mixed, and multi-droplet microstructures. The simulated results and the available experimental data are in good agreement.

  1. Production of Supra-regular Spatial Sequences by Macaque Monkeys.

    PubMed

    Jiang, Xinjian; Long, Tenghai; Cao, Weicong; Li, Junru; Dehaene, Stanislas; Wang, Liping

    2018-06-18

    Understanding and producing embedded sequences in language, music, or mathematics is a central characteristic of our species. These domains are hypothesized to involve a human-specific competence for supra-regular grammars, which can generate embedded sequences that go beyond the regular sequences engendered by finite-state automata. However, is this capacity truly unique to humans? Using a production task, we show that macaque monkeys can be trained to produce time-symmetrical embedded spatial sequences whose formal description requires supra-regular grammars or, equivalently, a push-down stack automaton. Monkeys spontaneously generalized the learned grammar to novel sequences, including longer ones, and could generate hierarchical sequences formed by an embedding of two levels of abstract rules. Compared to monkeys, however, preschool children learned the grammars much faster using a chunking strategy. While supra-regular grammars are accessible to nonhuman primates through extensive training, human uniqueness may lie in the speed and learning strategy with which they are acquired.
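    The distinction between regular and supra-regular sequences can be made concrete in code (an illustrative sketch, not the study's task; the alphabet and example sequences are invented): a two-state finite automaton suffices for an alternating pattern, whereas checking a time-symmetrical "mirror" sequence w followed by reverse(w) requires unbounded memory, i.e. a push-down stack.

```python
def accepts_ab_star(seq):
    """Two-state DFA sketch: accepts sequences of the form (AB)^n."""
    state = 0
    for s in seq:
        if state == 0 and s == 'A':
            state = 1
        elif state == 1 and s == 'B':
            state = 0
        else:
            return False
    return state == 0          # accept only completed AB pairs

def is_mirror(seq):
    """Stack-based check for centre-embedded sequences w + reverse(w);
    no finite-state machine can do this for arbitrary lengths."""
    if len(seq) % 2:
        return False
    stack = list(seq[:len(seq) // 2])   # push the first half
    for s in seq[len(seq) // 2:]:
        if not stack or stack.pop() != s:
            return False
    return not stack
```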

  2. Parallel deterministic neutronics with AMR in 3D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clouse, C.; Ferguson, J.; Hendrickson, C.

    1997-12-31

    AMTRAN, a three-dimensional Sn neutronics code with adaptive mesh refinement (AMR), has been parallelized over spatial domains and energy groups and runs on the Meiko CS-2 with MPI message passing. Block-refined AMR is used with linear finite element representations for the fluxes, which allows for a straightforward interpretation of fluxes at block interfaces with zoning differences. The load balancing algorithm assumes 8 spatial domains, which minimizes idle time among processors.

  3. The modelling of the flow-induced vibrations of periodic flat and axial-symmetric structures with a wave-based method

    NASA Astrophysics Data System (ADS)

    Errico, F.; Ichchou, M.; De Rosa, S.; Bareille, O.; Franco, F.

    2018-06-01

    The stochastic response of periodic flat and axial-symmetric structures, subjected to random and spatially-correlated loads, is here analysed through an approach based on the combination of a wave finite element and a transfer matrix method. Although it has a lower computational cost, the present approach keeps the same accuracy as classic finite element methods. When dealing with homogeneous structures, the accuracy also extends to higher frequencies, without increasing the calculation time. Depending on the complexity of the structure and the frequency range, the computational cost can be reduced by more than two orders of magnitude. The presented methodology is validated for both simple and complex structural shapes, under deterministic and random loads.

  4. An epidemiological modeling and data integration framework.

    PubMed

    Pfeifer, B; Wurz, M; Hanser, F; Seger, M; Netzer, M; Osl, M; Modre-Osprian, R; Schreier, G; Baumgartner, C

    2010-01-01

    In this work, a cellular automaton software package for simulating different infectious diseases, storing the simulation results in a data warehouse system and analyzing the obtained results to generate prediction models as well as contingency plans, is proposed. The Brisbane H3N2 flu virus, which has been spreading during the winter season 2009, was used for simulation in the federal state of Tyrol, Austria. The simulation-modeling framework consists of an underlying cellular automaton. The cellular automaton model is parameterized by known disease parameters and geographical as well as demographical conditions are included for simulating the spreading. The data generated by simulation are stored in the back room of the data warehouse using the Talend Open Studio software package, and subsequent statistical and data mining tasks are performed using the tool, termed Knowledge Discovery in Database Designer (KD3). The obtained simulation results were used for generating prediction models for all nine federal states of Austria. The proposed framework provides a powerful and easy to handle interface for parameterizing and simulating different infectious diseases in order to generate prediction models and improve contingency plans for future events.
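    The simulation core of such a framework can be sketched as a toy susceptible-infected-recovered cellular automaton (a minimal illustration, not the parameterized Tyrol model; grid size, infection probability and infectious period are invented):

```python
import random

def simulate_sir_ca(n=40, days=60, p_infect=0.3, recover_after=5, seed=3):
    """Toy 2D SIR cellular automaton: cell state 0 = susceptible,
    k > 0 = infected for k days, -1 = recovered/immune.
    Returns the daily count of infected cells."""
    random.seed(seed)
    grid = [[0] * n for _ in range(n)]
    grid[n // 2][n // 2] = 1                       # seed one infected cell
    history = []
    for _ in range(days):
        new = [row[:] for row in grid]
        for i in range(n):
            for j in range(n):
                if grid[i][j] == 0:                # susceptible: exposed to 4-neighbours
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < n and 0 <= nj < n and grid[ni][nj] > 0:
                            if random.random() < p_infect:
                                new[i][j] = 1
                                break
                elif grid[i][j] > 0:               # infected: progress or recover
                    new[i][j] = -1 if grid[i][j] >= recover_after else grid[i][j] + 1
        grid = new
        history.append(sum(c > 0 for row in grid for c in row))
    return history

history = simulate_sir_ca()
```

A calibrated version of such a time series is what would be loaded into the data warehouse for the downstream prediction models.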

  5. Optimization of tissue physical parameters for accurate temperature estimation from finite-element simulation of radiofrequency ablation.

    PubMed

    Subramanian, Swetha; Mast, T Douglas

    2015-10-07

    Computational finite element models are commonly used for the simulation of radiofrequency ablation (RFA) treatments. However, the accuracy of these simulations is limited by the lack of precise knowledge of tissue parameters. In this technical note, an inverse solver based on the unscented Kalman filter (UKF) is proposed to optimize values for specific heat, thermal conductivity, and electrical conductivity resulting in accurately simulated temperature elevations. A total of 15 RFA treatments were performed on ex vivo bovine liver tissue. For each RFA treatment, 15 finite-element simulations were performed using a set of deterministically chosen tissue parameters to estimate the mean and variance of the resulting tissue ablation. The UKF was implemented as an inverse solver to recover the specific heat, thermal conductivity, and electrical conductivity corresponding to the measured area of the ablated tissue region, as determined from gross tissue histology. These tissue parameters were then employed in the finite element model to simulate the position- and time-dependent tissue temperature. Results show good agreement between simulated and measured temperature.

  6. Calculating the Malliavin derivative of some stochastic mechanics problems

    PubMed Central

    Hauseux, Paul; Hale, Jack S.

    2017-01-01

    The Malliavin calculus is an extension of the classical calculus of variations from deterministic functions to stochastic processes. In this paper we aim to show in a practical and didactic way how to calculate the Malliavin derivative, the derivative of the expectation of a quantity of interest of a model with respect to its underlying stochastic parameters, for four problems found in mechanics. The non-intrusive approach uses the Malliavin Weight Sampling (MWS) method in conjunction with a standard Monte Carlo method. The models are expressed as ODEs or PDEs and discretised using the finite difference or finite element methods. Specifically, we consider stochastic extensions of: a 1D Kelvin-Voigt viscoelastic model discretised with finite differences; a 1D linear elastic bar; a hyperelastic bar undergoing buckling; and incompressible Navier-Stokes flow around a cylinder, all discretised with finite elements. A further contribution of this paper is an extension of the MWS method to the more difficult case of non-Gaussian random variables and the calculation of second-order derivatives. We provide open-source code for the numerical examples in this paper. PMID:29261776
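    The idea behind Malliavin Weight Sampling can be shown in its simplest Gaussian setting (an illustrative sketch, not one of the paper's four mechanics problems): for X = θ + σZ with Z ~ N(0,1), the derivative d/dθ E[f(X)] equals E[f(X)·Z/σ], so the sensitivity is estimated from the same Monte Carlo samples used for E[f(X)] itself.

```python
import random

def mws_derivative(f, theta, sigma=1.0, n=200_000, seed=7):
    """Malliavin-weight estimate of d/dtheta E[f(theta + sigma*Z)]:
    average f(X) multiplied by the weight Z/sigma over the samples."""
    random.seed(seed)
    acc = 0.0
    for _ in range(n):
        z = random.gauss(0.0, 1.0)
        acc += f(theta + sigma * z) * (z / sigma)
    return acc / n

# analytic check: d/dtheta E[(theta + Z)^2] = 2*theta
est = mws_derivative(lambda x: x * x, theta=2.0)
```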

  7. Failed rib region prediction in a human body model during crash events with precrash braking.

    PubMed

    Guleyupoglu, B; Koya, B; Barnard, R; Gayzik, F S

    2018-02-28

    The objective of this study is 2-fold. We used a validated human body finite element model to study the predicted chest injury (focusing on rib fracture as a function of element strain) based on varying levels of simulated precrash braking. Furthermore, we compare deterministic and probabilistic methods of rib injury prediction in the computational model. The Global Human Body Models Consortium (GHBMC) M50-O model was gravity settled in the driver position of a generic interior equipped with an advanced 3-point belt and airbag. Twelve cases were investigated with permutations for failure, precrash braking system, and crash severity. The severities used were median (17 kph), severe (34 kph), and New Car Assessment Program (NCAP; 56.4 kph). Cases with failure enabled removed rib cortical bone elements once 1.8% effective plastic strain was exceeded. Alternatively, a probabilistic framework found in the literature was used to predict rib failure. Both the probabilistic and deterministic methods take into consideration location (anterior, lateral, and posterior). The deterministic method is based on a rubric that defines failed rib regions dependent on a threshold for contiguous failed elements. The probabilistic method depends on age-based strain and failure functions. Kinematics between both methods were similar (peak max deviation: ΔX head = 17 mm; ΔZ head = 4 mm; ΔX thorax = 5 mm; ΔZ thorax = 1 mm). Seat belt forces at the time of probabilistic failed region initiation were lower than those at deterministic failed region initiation. The probabilistic method for rib fracture predicted more failed regions in the rib (an analog for fracture) than the deterministic method in all but 1 case where they were equal. 
The failed region patterns between models are similar; however, differences arise due to the stress reduction from element elimination, which causes probabilistic failed regions to continue to accumulate after no further deterministic failed regions would be predicted. Both the probabilistic and deterministic methods indicate similar trends with regard to the effect of precrash braking; however, there are tradeoffs. The deterministic failed region method is more spatially sensitive to failure and more sensitive to belt loads. The probabilistic failed region method allows for increased capability in postprocessing with respect to age. The probabilistic failed region method predicted more failed regions than the deterministic failed region method due to force distribution differences.

  8. An alternative approach to measure similarity between two deterministic transient signals

    NASA Astrophysics Data System (ADS)

    Shin, Kihong

    2016-06-01

    In many practical engineering applications, it is often required to measure the similarity of two signals to gain insight into the conditions of a system. For example, an application that monitors machinery can regularly measure the vibration signal and compare it to a healthy reference signal in order to monitor whether or not any fault symptom is developing. Also, in modal analysis, a frequency response function (FRF) from a finite element model (FEM) is often compared with an FRF from experimental modal analysis. Many different similarity measures are applicable in such cases; correlation-based measures, such as the correlation coefficient in the time domain and the frequency response assurance criterion (FRAC) in the frequency domain, are perhaps the most frequently used. Although correlation-based similarity measures may be particularly useful for random signals because they are based on probability and statistics, we frequently deal with signals that are largely deterministic and transient. Thus, it may be useful to develop another similarity measure that takes the characteristics of the deterministic transient signal properly into account. In this paper, an alternative approach to measure the similarity between two deterministic transient signals is proposed. This newly proposed similarity measure is based on the fictitious system frequency response function, and it consists of the magnitude similarity and the shape similarity. Finally, a few examples are presented to demonstrate the use of the proposed similarity measure.
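    The frequency-domain baseline mentioned above, the FRAC, is a normalized correlation between two FRFs sampled at the same frequency lines: FRAC = |Σ H₁H₂*|² / (Σ|H₁|² · Σ|H₂|²), equal to 1 for identically shaped FRFs. A minimal sketch with synthetic data (the example FRFs are invented):

```python
def frac(h1, h2):
    """Frequency Response Assurance Criterion for two complex FRFs
    sampled at the same frequency lines; 1 means identical shape."""
    num = abs(sum(a * b.conjugate() for a, b in zip(h1, h2))) ** 2
    den = sum(abs(a) ** 2 for a in h1) * sum(abs(b) ** 2 for b in h2)
    return num / den

# synthetic FRFs: a reference and a rescaled copy (FRAC is scale-invariant)
h_ref = [complex(1.0, 0.5), complex(0.2, -1.0), complex(-0.7, 0.3)]
h_scaled = [2.5 * x for x in h_ref]
```

By Cauchy-Schwarz, FRAC reaches 1 only when one FRF is proportional to the other, which is exactly the insensitivity to overall magnitude that motivates the paper's separate magnitude-similarity measure.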

  9. Interactive Reliability Model for Whisker-toughened Ceramics

    NASA Technical Reports Server (NTRS)

    Palko, Joseph L.

    1993-01-01

    Wider use of ceramic matrix composites (CMC) will require the development of advanced structural analysis technologies. The use of an interactive model to predict the time-independent reliability of a component subjected to multiaxial loads is discussed. The deterministic, three-parameter Willam-Warnke failure criterion serves as the theoretical basis for the reliability model. The strength parameters defining the model are assumed to be random variables, thereby transforming the deterministic failure criterion into a probabilistic criterion. The ability of the model to account for multiaxial stress states with the same unified theory is an improvement over existing models. The new model was coupled with a public-domain finite element program through an integrated design program. This allows a design engineer to predict the probability of failure of a component. A simple structural problem is analyzed using the new model, and the results are compared to existing models.

  10. Pest persistence and eradication conditions in a deterministic model for sterile insect release.

    PubMed

    Gordillo, Luis F

    2015-01-01

    The release of sterile insects is an environmentally friendly pest control method used in integrated pest management programmes. Difference or differential equations based on Knipling's model often provide satisfactory qualitative descriptions of pest populations subject to sterile release at relatively high densities with large mating encounter rates, but fail otherwise. In this paper, I derive and explore numerically deterministic population models that include sterile release together with scarce mating encounters in the particular case of species with long lifespans and multiple matings. The differential equations account separately for the effects of mating failure due to sterile male release and the frequency of mating encounters. When the insects' spatial spread is incorporated through diffusion terms, computations reveal the possibility of steady pest persistence in finite size patches. In the presence of density dependence regulation, it is observed that sterile release might contribute to inducing sudden suppression of the pest population.
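    The high-density baseline that the paper generalizes can be sketched with the classic Knipling-style recursion (an illustrative sketch, not the paper's ODEs; r and the release size are invented): each generation a wild female mates fertilely with probability N/(N+S), giving N' = rN·N/(N+S) and an eradication threshold near the unstable fixed point N* = S/(r-1).

```python
def knipling(n0, s, r=5.0, generations=50):
    """Knipling-style sterile-release recursion: per-capita growth r,
    fixed sterile release s each generation, fertile-mating
    probability n/(n+s). Returns the population after `generations`."""
    n = n0
    for _ in range(generations):
        n = r * n * (n / (n + s))
    return n

# with r=5, s=100 the threshold is s/(r-1) = 25 wild insects
n_low = knipling(10.0, 100.0)      # starts below threshold
n_high = knipling(200.0, 100.0)    # starts above threshold
```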

  11. Tapered fiber coupling of single photons emitted by a deterministically positioned single nitrogen vacancy center

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liebermeister, Lars, E-mail: lars.liebermeister@physik.uni-muenchen.de; Petersen, Fabian; Münchow, Asmus v.

    2014-01-20

    A diamond nano-crystal hosting a single nitrogen vacancy (NV) center is optically selected with a confocal scanning microscope and positioned deterministically onto the subwavelength-diameter waist of a tapered optical fiber (TOF) with the help of an atomic force microscope. Based on this nano-manipulation technique, we experimentally demonstrate the evanescent coupling of single fluorescence photons emitted by a single NV-center to the guided mode of the TOF. By comparing photon count rates of the fiber-guided and the free-space modes and with the help of numerical finite-difference time domain simulations, we determine a lower and upper bound for the coupling efficiency of (9.5 ± 0.6)% and (10.4 ± 0.7)%, respectively. Our results are a promising starting point for future integration of single photon sources into photonic quantum networks and applications in quantum information science.

  12. Periodical cicadas: A minimal automaton model

    NASA Astrophysics Data System (ADS)

    de O. Cardozo, Giovano; de A. M. M. Silvestre, Daniel; Colato, Alexandre

    2007-08-01

    The Magicicada spp. life cycles with its prime periods and highly synchronized emergence have defied reasonable scientific explanation since its discovery. During the last decade several models and explanations for this phenomenon appeared in the literature along with a great deal of discussion. Despite this considerable effort, there is no final conclusion about this long standing biological problem. Here, we construct a minimal automaton model without predation/parasitism which reproduces some of these aspects. Our results point towards competition between different strains with limited dispersal threshold as the main factor leading to the emergence of prime numbered life cycles.

  13. Simulating pedestrian flow by an improved two-process cellular automaton model

    NASA Astrophysics Data System (ADS)

    Jin, Cheng-Jie; Wang, Wei; Jiang, Rui; Dong, Li-Yun

    In this paper, we study pedestrian flow with an Improved Two-Process (ITP) cellular automaton model, which was originally proposed by Blue and Adler. Simulations of pedestrian counterflow have been conducted under both periodic and open boundary conditions. The lane formation phenomenon has been reproduced without using the place exchange rule. We also present and discuss the flow-density and velocity-density relationships of both uni-directional flow and counterflow. By comparison with the Blue-Adler model, we find that the ITP model has higher values of maximum flow, critical density and completely jammed density under different conditions.

  14. Automation Rover for Extreme Environments

    NASA Technical Reports Server (NTRS)

    Sauder, Jonathan; Hilgemann, Evan; Johnson, Michael; Parness, Aaron; Hall, Jeffrey; Kawata, Jessie; Stack, Kathryn

    2017-01-01

    Almost 2,300 years ago the ancient Greeks built the Antikythera automaton. This purely mechanical computer accurately predicted past and future astronomical events long before electronics existed [1]. Automata have been credibly used for hundreds of years as computers, art pieces, and clocks. However, in the past several decades automata have become less popular as the capabilities of electronics increased, leaving them an unexplored solution for robotic spacecraft. The Automaton Rover for Extreme Environments (AREE) proposes an exciting paradigm shift from electronics to a fully mechanical system, enabling longitudinal exploration of the most extreme environments within the solar system.

  15. Simulation and analysis of traffic flow based on cellular automaton

    NASA Astrophysics Data System (ADS)

    Ren, Xianping; Liu, Xia

    2018-03-01

    In this paper, single-lane and two-lane traffic models are established based on a cellular automaton. Different values of the vehicle arrival rate at the entrance and the vehicle departure rate at the exit are set to analyze their effects on density, average speed and traffic flow. If the road exit is unblocked, vehicles can pass through the road smoothly regardless of the arrival rate at the entrance. If vehicles enter the road continuously, the traffic condition varies with the departure rate at the exit. To avoid traffic jams, a reasonable vehicle departure rate should be adopted.
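    The effect of entrance and exit rates can be illustrated with a minimal open-boundary, single-lane cellular automaton (a sketch in the spirit of this setup, not the paper's actual model; alpha and beta are illustrative arrival and departure probabilities):

```python
import random

def traffic_ca(length=100, steps=1000, alpha=0.5, beta=0.9, seed=11):
    """Single-lane open-boundary CA with speed limit 1: cars enter at the
    left with probability alpha and leave at the right with probability
    beta. Returns (mean density, cars leaving per step)."""
    random.seed(seed)
    road = [0] * length
    out, occupied = 0, 0
    for _ in range(steps):
        # exit: the car in the last cell departs with probability beta
        if road[-1] and random.random() < beta:
            road[-1] = 0
            out += 1
        # bulk: each car advances into an empty cell ahead (back-to-front sweep)
        for i in range(length - 2, -1, -1):
            if road[i] and not road[i + 1]:
                road[i], road[i + 1] = 0, 1
        # entrance: inject a car with probability alpha if the first cell is free
        if not road[0] and random.random() < alpha:
            road[0] = 1
        occupied += sum(road)
    return occupied / (steps * length), out / steps

density, flow = traffic_ca()
```

Sweeping beta toward 0 with this sketch reproduces the qualitative point of the abstract: a blocked exit, not the arrival rate, is what jams the road.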

  16. CROSS-DISCIPLINARY PHYSICS AND RELATED AREAS OF SCIENCE AND TECHNOLOGY: A Realistic Cellular Automaton Model for Synchronized Traffic Flow

    NASA Astrophysics Data System (ADS)

    Zhao, Bo-Han; Hu, Mao-Bin; Jiang, Rui; Wu, Qing-Song

    2009-11-01

    A cellular automaton model is proposed to consider the anticipation effect in drivers' behavior. It is shown that the anticipation effect can be one of the origins of synchronized traffic flow. With anticipation effect, the congested traffic flow simulated by the model exhibits the features of synchronized flow. The spatiotemporal patterns induced by an on-ramp are also consistent with the three-phase traffic theory. Since the origin of synchronized flow is still controversial, our work can shed some light on the mechanism of synchronized flow.

  17. Observer design for compensation of network-induced delays in integrated communication and control systems

    NASA Technical Reports Server (NTRS)

    Luck, R.; Ray, A.

    1988-01-01

    A method for compensating the effects of network-induced delays in integrated communication and control systems (ICCS) is proposed, and a finite-dimensional time-invariant ICCS model is developed. The problem of analyzing systems with time-varying and stochastic delays is circumvented by the application of a deterministic observer. For the case of controller-to-actuator delays, the observer design must rely on an extended model which represents the delays as additional states.

  18. Deterministic representation of chaos with application to turbulence

    NASA Technical Reports Server (NTRS)

    Zak, M.

    1987-01-01

    Chaotic motions of nonlinear dynamical systems are decomposed into mean components and fluctuations. The approach is based upon the concept that the fluctuations driven by the instability of the original (unperturbed) motion grow until a new stable state is approached. The Reynolds-type equations written for continuous as well as for finite-degrees-of-freedom dynamical systems are closed by using this stabilization principle. The theory is applied to conservative systems, to strange attractors and to turbulent motions.

  19. Optimal Alignment of Structures for Finite and Periodic Systems.

    PubMed

    Griffiths, Matthew; Niblett, Samuel P; Wales, David J

    2017-10-10

    Finding the optimal alignment between two structures is important for identifying the minimum root-mean-square distance (RMSD) between them and as a starting point for calculating pathways. Most current algorithms for aligning structures are stochastic, scale exponentially with the size of the structure, and can perform unreliably. We present two complementary methods for aligning structures corresponding to isolated clusters of atoms and to condensed matter described by a periodic cubic supercell. The first method (Go-PERMDIST), a branch and bound algorithm, locates the global minimum RMSD deterministically in polynomial time; the run time increases for larger RMSDs. The second method (FASTOVERLAP) is a heuristic algorithm that aligns structures by finding the global maximum kernel correlation between them using fast Fourier transforms (FFTs) and fast SO(3) transforms (SOFTs). For periodic systems, FASTOVERLAP scales with the square of the number of identical atoms in the system, reliably finds the best alignment between structures that are not too distant, and shows significantly better performance than existing algorithms. The expected run time for Go-PERMDIST is longer than for FASTOVERLAP for periodic systems. For finite clusters, the FASTOVERLAP algorithm is competitive with existing algorithms. The expected run time for Go-PERMDIST to find the global RMSD between two structures deterministically is generally longer than for existing stochastic algorithms. However, with an earlier exit condition, Go-PERMDIST exhibits similar or better performance.
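    Both methods build on the core subproblem of computing the minimum RMSD between two point sets under an optimal rigid-body transformation. For a fixed atom ordering this is solved exactly by the standard Kabsch algorithm, sketched below (this is only the building block: it handles neither permutations of identical atoms nor periodic cells, which is what the paper's algorithms address):

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """Minimum RMSD between two point sets (rows = points) under optimal
    translation and proper rotation, via the Kabsch/SVD construction."""
    P = P - P.mean(axis=0)                      # remove translations
    Q = Q - Q.mean(axis=0)
    H = P.T @ Q                                 # covariance matrix
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # exclude improper rotations
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T     # optimal rotation
    diff = P @ R.T - Q
    return np.sqrt((diff ** 2).sum() / len(P))

# sanity check: a rotated and translated copy aligns to (near-)zero RMSD
rng = np.random.default_rng(0)
P = rng.normal(size=(10, 3))
theta = 0.7
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
Q = P @ Rz.T + np.array([1.0, -2.0, 0.5])
```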

  20. A hybrid deterministic-probabilistic approach to model the mechanical response of helically arranged hierarchical strands

    NASA Astrophysics Data System (ADS)

    Fraldi, M.; Perrella, G.; Ciervo, M.; Bosia, F.; Pugno, N. M.

    2017-09-01

    Very recently, a Weibull-based probabilistic strategy has been successfully applied to bundles of wires to determine their overall stress-strain behaviour, also capturing previously unpredicted nonlinear and post-elastic features of hierarchical strands. This approach is based on the so-called "Equal Load Sharing (ELS)" hypothesis, by virtue of which, when a wire breaks, the load acting on the strand is homogeneously redistributed among the surviving wires. Despite the overall effectiveness of the method, some discrepancies between theoretical predictions and in silico Finite Element-based simulations or experimental findings might arise when more complex structures are analysed, e.g. helically arranged bundles. To overcome these limitations, an enhanced hybrid approach is proposed in which the probability of rupture is combined with a deterministic mechanical model of a strand constituted by helically-arranged and hierarchically-organized wires. The analytical model is validated by comparing its predictions with both Finite Element simulations and experimental tests. The results show that generalized stress-strain responses - incorporating tension/torsion coupling - are naturally found and, once one or more elements break, the competition between geometry and mechanics of the strand microstructure, i.e. the different cross sections and helical angles of the wires in the different hierarchical levels of the strand, determines the now inhomogeneous stress redistribution among the surviving wires, whose fate is hence governed by a "Hierarchical Load Sharing" criterion.

  1. SU-G-TeP1-15: Toward a Novel GPU Accelerated Deterministic Solution to the Linear Boltzmann Transport Equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, R; Fallone, B; Cross Cancer Institute, Edmonton, AB

Purpose: To develop a Graphics Processing Unit (GPU) accelerated deterministic solution to the Linear Boltzmann Transport Equation (LBTE) for accurate dose calculations in radiotherapy (RT). A deterministic solution yields the potential for major speed improvements due to the sparse matrix-vector and vector-vector multiplications and would thus be of benefit to RT. Methods: In order to leverage the massively parallel architecture of GPUs, the first order LBTE was reformulated as a second order self-adjoint equation using the Least Squares Finite Element Method (LSFEM). This produces a symmetric positive-definite matrix which is efficiently solved using a parallelized conjugate gradient (CG) solver. The LSFEM formalism is applied in space, discrete ordinates is applied in angle, and the Multigroup method is applied in energy. The final linear system of equations produced is tightly coupled in space and angle. Our code, written in CUDA-C, was benchmarked on an Nvidia GeForce TITAN-X GPU against an Intel i7-6700K CPU. A spatial mesh of 30,950 tetrahedral elements was used with an S4 angular approximation. Results: To avoid repeating a full computationally intensive finite element matrix assembly at each Multigroup energy, a novel mapping algorithm was developed which minimized the operations required at each energy. Additionally, a parallelized memory mapping for the Kronecker product between the sparse spatial and angular matrices, including Dirichlet boundary conditions, was created. Atomicity is preserved by graph-coloring overlapping nodes into separate kernel launches. The one-time mapping calculations for matrix assembly, Kronecker product, and boundary condition application took 452±1 ms on GPU. Matrix assembly for 16 energy groups took 556±3 s on CPU and 358±2 ms on GPU using the mappings developed. The CG solver took 93±1 s on CPU and 468±2 ms on GPU. 
Conclusion: Three computationally intensive subroutines in deterministically solving the LBTE have been formulated on GPU, resulting in two orders of magnitude speedup. Funding support from Natural Sciences and Engineering Research Council and Alberta Innovates Health Solutions. Dr. Fallone is a co-founder and CEO of MagnetTx Oncology Solutions (under discussions to license Alberta bi-planar linac MR for commercialization).
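    The CG iteration being accelerated is the standard Krylov method for symmetric positive-definite systems; a plain CPU sketch is below (illustrative only, with a random SPD test matrix standing in for the assembled LSFEM system):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Plain conjugate gradient for a symmetric positive-definite system
    A x = b; each iteration costs one matrix-vector product."""
    x = np.zeros_like(b)
    r = b - A @ x                       # residual
    p = r.copy()                        # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)           # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p       # A-conjugate direction update
        rs = rs_new
    return x

# small SPD test system
rng = np.random.default_rng(1)
M = rng.normal(size=(50, 50))
A = M @ M.T + 50.0 * np.eye(50)         # guaranteed SPD and well-conditioned
b = rng.normal(size=50)
x = conjugate_gradient(A, b)
```

The matrix-vector product `A @ p` dominates each iteration, which is why sparse-matrix layout and the GPU kernels for it dominate the reported speedup.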

  2. Driven Langevin systems: fluctuation theorems and faithful dynamics

    NASA Astrophysics Data System (ADS)

    Sivak, David; Chodera, John; Crooks, Gavin

    2014-03-01

    Stochastic differential equations of motion (e.g., Langevin dynamics) provide a popular framework for simulating molecular systems. Any computational algorithm must discretize these equations, yet the resulting finite time step integration schemes suffer from several practical shortcomings. We show how any finite time step Langevin integrator can be thought of as a driven, nonequilibrium physical process. Amended by an appropriate work-like quantity (the shadow work), nonequilibrium fluctuation theorems can characterize or correct for the errors introduced by the use of finite time steps. We also quantify, for the first time, the magnitude of deviations between the sampled stationary distribution and the desired equilibrium distribution for equilibrium Langevin simulations of solvated systems of varying size. We further show that the incorporation of a novel time step rescaling in the deterministic updates of position and velocity can correct a number of dynamical defects in these integrators. Finally, we identify a particular splitting that has essentially universally appropriate properties for the simulation of Langevin dynamics for molecular systems in equilibrium, nonequilibrium, and path sampling contexts.
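    A finite-time-step Langevin integrator of the kind analysed here can be sketched with the BAOAB splitting of Leimkuhler and Matthews (an illustrative choice of splitting, since the abstract does not name the one it recommends; the potential and parameters are invented):

```python
import math
import random

def baoab(force, x, v, steps, dt, gamma=1.0, kT=1.0, mass=1.0, seed=5):
    """BAOAB splitting for Langevin dynamics: half kick (B), half drift (A),
    exact Ornstein-Uhlenbeck friction/noise step (O), half drift, half kick.
    Returns the sampled positions."""
    random.seed(seed)
    c1 = math.exp(-gamma * dt)                     # velocity damping over dt
    c2 = math.sqrt((1.0 - c1 * c1) * kT / mass)    # matching noise amplitude
    traj = []
    for _ in range(steps):
        v += 0.5 * dt * force(x) / mass            # B
        x += 0.5 * dt * v                          # A
        v = c1 * v + c2 * random.gauss(0.0, 1.0)   # O
        x += 0.5 * dt * v                          # A
        v += 0.5 * dt * force(x) / mass            # B
        traj.append(x)
    return traj

# harmonic well U = x^2/2 at kT = 1: sampled Var(x) should be close to 1
traj = baoab(lambda x: -x, 0.0, 0.0, steps=200_000, dt=0.1)
```

The small residual deviation of the sampled variance from kT is precisely the finite-time-step sampling error that the shadow-work fluctuation-theorem machinery in the abstract characterizes and corrects.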

  3. Vibration study of a vehicle suspension assembly with the finite element method

    NASA Astrophysics Data System (ADS)

    Cătălin Marinescu, Gabriel; Castravete, Ştefan-Cristian; Dumitru, Nicolae

    2017-10-01

    The main steps of the present work represent a methodology for analysing various vibration effects on the suspension mechanical parts of a vehicle. A McPherson-type suspension from an existing vehicle was created using CAD software. Using the CAD model as input, a finite element model of the suspension assembly was developed. Abaqus finite element analysis software was used to pre-process, solve, and post-process the results. Geometric nonlinearities are included in the model. Severe sources of nonlinearities, such as friction and contact, are also included in the model. The McPherson spring is modelled as a linear spring. The analysis includes several steps: preload, modal analysis, reduction of the model to 200 generalized coordinates, a deterministic external excitation, and a random excitation that comes from different types of roads. The vibration data used as input for the simulation were previously obtained by experimental means. The mathematical expressions used for the simulation are also presented in the paper.

  4. Topological bifurcations in a model society of reasonable contrarians

    NASA Astrophysics Data System (ADS)

    Bagnoli, Franco; Rechtman, Raúl

    2013-12-01

    People are often divided into conformists and contrarians, the former tending to align to the majority opinion in their neighborhood and the latter tending to disagree with that majority. In practice, however, the contrarian tendency is rarely followed when there is an overwhelming majority with a given opinion, which denotes a social norm. Such reasonable contrarian behavior is often considered a mark of independent thought and can be a useful strategy in financial markets. We present the opinion dynamics of a society of reasonable contrarian agents. The model is a cellular automaton of Ising type, with antiferromagnetic pair interactions modeling contrarianism and plaquette terms modeling social norms. We introduce the entropy of the collective variable as a way of comparing deterministic (mean-field) and probabilistic (simulations) bifurcation diagrams. In the mean-field approximation the model exhibits bifurcations and a chaotic phase, interpreted as coherent oscillations of the whole society. However, in a one-dimensional spatial arrangement one observes incoherent oscillations and a constant average. In simulations on Watts-Strogatz networks with a small-world effect the mean-field behavior is recovered, with a bifurcation diagram that resembles the mean-field one but where the rewiring probability is used as the control parameter. Similar bifurcation diagrams are found for scale-free networks, and we are able to compute an effective connectivity for such networks.
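    A minimal sketch of "reasonable contrarian" dynamics, under simplifying assumptions (a ring with nearest-neighbour majorities, a single norm threshold, and no plaquette terms, so this is illustrative rather than the authors' exact rule):

```python
import numpy as np

def step(s, q=0.9, eps=0.05, rng=None):
    """Synchronous update of a ring of 'reasonable contrarians'.

    Each agent looks at the fraction f of +1 opinions among itself and its
    two neighbours and disagrees with the local majority (contrarian),
    unless that majority is overwhelming (f >= q or f <= 1 - q), in which
    case it conforms to the social norm; eps is a small error rate."""
    rng = rng or np.random.default_rng()
    f = (np.roll(s, 1) + s + np.roll(s, -1) + 3) / 6.0  # fraction of +1
    majority = np.where(f > 0.5, 1, -1)
    overwhelming = (f >= q) | (f <= 1 - q)
    new = np.where(overwhelming, majority, -majority)
    flip = rng.random(s.size) < eps
    return np.where(flip, -new, new)

rng = np.random.default_rng(1)
s = rng.choice([-1, 1], size=500)
avgs = []                            # collective variable (average opinion)
for _ in range(200):
    s = step(s, rng=rng)
    avgs.append(float(s.mean()))
```

    Tracking the time series of the average opinion (and its entropy, as the authors do) is what distinguishes coherent oscillations from incoherent ones.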

  5. Self-Organization in 2D Traffic Flow Model with Jam-Avoiding Drive

    NASA Astrophysics Data System (ADS)

    Nagatani, Takashi

    1995-04-01

    A stochastic cellular automaton (CA) model is presented to investigate traffic jams formed by self-organization in two-dimensional (2D) traffic flow. The CA model is an extended version of the 2D asymmetric exclusion model that takes jam-avoiding drive into account. Each site either contains a car moving up, contains a car moving right, or is empty. An up car can shift right with probability p_ja if it is blocked ahead by other cars. It is shown that three phases (the low-density phase, the intermediate-density phase and the high-density phase) appear in the traffic flow. The intermediate-density phase is characterized by the rightward motion of up cars. The jamming transition to the high-density jamming phase occurs at a higher density of cars than without jam-avoiding drive. The jamming transition point p_2c increases with the shifting probability p_ja. In the deterministic limit of p_ja=1, it is found that a new jamming transition occurs from the low-density synchronized-shifting phase to the high-density moving phase with increasing density of cars. In the synchronized-shifting phase, up cars do not move up but shift right, synchronizing with the motion of right cars. We show that jam-avoiding drive has an important effect on the dynamical jamming transition.
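    The jam-avoiding shift rule can be sketched as a toy 2D CA on a periodic grid (a simplified variant with a sequential sub-step order, not necessarily the paper's exact update schedule):

```python
import numpy as np

EMPTY, RIGHT, UP = 0, 1, 2

def step(grid, p_ja, rng):
    """One toy update on a torus: right cars move, then up cars move;
    an up car still blocked then shifts right with probability p_ja."""
    # Right cars advance into empty cells on their right.
    move = (grid == RIGHT) & (np.roll(grid, -1, axis=1) == EMPTY)
    grid = grid.copy()
    grid[move] = EMPTY
    grid[np.roll(move, 1, axis=1)] = RIGHT
    # Up cars advance into empty cells ahead along the other axis.
    free = (grid == UP) & (np.roll(grid, -1, axis=0) == EMPTY)
    grid[free] = EMPTY
    grid[np.roll(free, 1, axis=0)] = UP
    # Jam-avoiding drive: a blocked up car shifts right w.p. p_ja.
    blocked = (grid == UP) & (np.roll(grid, -1, axis=0) != EMPTY)
    shift = blocked & (np.roll(grid, -1, axis=1) == EMPTY) \
        & (rng.random(grid.shape) < p_ja)
    grid[shift] = EMPTY
    grid[np.roll(shift, 1, axis=1)] = UP
    return grid

rng = np.random.default_rng(2)
grid = rng.choice([EMPTY, RIGHT, UP], size=(30, 30), p=[0.7, 0.15, 0.15])
ncars = np.count_nonzero(grid)
for _ in range(50):
    grid = step(grid, p_ja=0.5, rng=rng)
```

    The number of cars is conserved by construction; sweeping the density and p_ja is how the phase structure described above would be probed.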

  6. Topological bifurcations in a model society of reasonable contrarians.

    PubMed

    Bagnoli, Franco; Rechtman, Raúl

    2013-12-01

    People are often divided into conformists and contrarians, the former tending to align to the majority opinion in their neighborhood and the latter tending to disagree with that majority. In practice, however, the contrarian tendency is rarely followed when there is an overwhelming majority with a given opinion, which denotes a social norm. Such reasonable contrarian behavior is often considered a mark of independent thought and can be a useful strategy in financial markets. We present the opinion dynamics of a society of reasonable contrarian agents. The model is a cellular automaton of Ising type, with antiferromagnetic pair interactions modeling contrarianism and plaquette terms modeling social norms. We introduce the entropy of the collective variable as a way of comparing deterministic (mean-field) and probabilistic (simulations) bifurcation diagrams. In the mean-field approximation the model exhibits bifurcations and a chaotic phase, interpreted as coherent oscillations of the whole society. However, in a one-dimensional spatial arrangement one observes incoherent oscillations and a constant average. In simulations on Watts-Strogatz networks with a small-world effect the mean-field behavior is recovered, with a bifurcation diagram that resembles the mean-field one but where the rewiring probability is used as the control parameter. Similar bifurcation diagrams are found for scale-free networks, and we are able to compute an effective connectivity for such networks.

  7. Modeling Regular Replacement for String Constraint Solving

    NASA Technical Reports Server (NTRS)

    Fu, Xiang; Li, Chung-Chih

    2010-01-01

    Bugs in user input sanitization of software systems often lead to vulnerabilities. Many of them are caused by improper use of regular replacement. This paper presents a precise modeling of various semantics of regular substitution, such as the declarative, finite, greedy, and reluctant, using finite state transducers (FST). By projecting an FST to its input/output tapes, we are able to solve atomic string constraints, which can be applied to both the forward and backward image computation in model checking and symbolic execution of text processing programs. We report several interesting discoveries, e.g., certain fragments of the general problem can be handled using less expressive deterministic FST. A compact representation of FST is implemented in SUSHI, a string constraint solver. It is applied to detecting vulnerabilities in web applications.
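    The greedy and reluctant semantics being modeled can be illustrated with Python's built-in regex engine, which implements both directly (the paper's FST construction is the general mechanism, not this sketch):

```python
import re

s = "<b>bold</b> and <i>italic</i>"
# Greedy: .+ matches as much as possible, so <.+> swallows everything
# from the first '<' to the last '>'.
greedy = re.sub(r"<.+>", "", s)
# Reluctant (lazy): .+? matches as little as possible, so <.+?> removes
# each tag individually, leaving the text between tags intact.
reluctant = re.sub(r"<.+?>", "", s)
```

    The two substitutions produce very different results on the same input, which is precisely why a sanitizer using the wrong semantics can leave exploitable residue.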

  8. Buyer-vendor coordination for fixed lifetime product with quantity discount under finite production rate

    NASA Astrophysics Data System (ADS)

    Zhang, Qinghong; Luo, Jianwen; Duan, Yongrui

    2016-03-01

    Buyer-vendor coordination has been widely addressed; however, the fixed lifetime of the product is seldom considered. In this paper, we study the coordination of an integrated production-inventory system with quantity discount for a fixed lifetime product under finite production rate and deterministic demand. We first derive the buyer's ordering policy and the vendor's production batch size in decentralised and centralised systems. We then compare the two systems and show the non-coordination of the ordering policies and the production batch sizes. To improve the supply chain efficiency, we propose quantity discount contract and prove that the contract can coordinate the buyer-vendor supply chain. Finally, we present analytically tractable solutions and give a numerical example to illustrate the benefits of the proposed quantity discount strategy.
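    For background, the classical economic production quantity (EPQ) under a finite production rate P is a standard building block of such models; the paper's analysis additionally handles the fixed product lifetime and the quantity discount. A minimal computation with illustrative numbers:

```python
from math import sqrt

def epq(D, P, K, h):
    """Classical economic production quantity for demand rate D,
    finite production rate P > D, setup cost K, and holding cost h.
    This is only a textbook building block, not the paper's full model."""
    return sqrt(2.0 * K * D / (h * (1.0 - D / P)))

# Illustrative numbers: demand 1000/yr, production 4000/yr,
# setup cost 100, holding cost 2 per unit per year.
Q = epq(D=1000.0, P=4000.0, K=100.0, h=2.0)
```

    The factor (1 - D/P) is what the finite production rate contributes; as P grows large, the formula reduces to the ordinary EOQ.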

  9. GENERAL: A modified weighted probabilistic cellular automaton traffic flow model

    NASA Astrophysics Data System (ADS)

    Zhuang, Qian; Jia, Bin; Li, Xin-Gang

    2009-08-01

    This paper modifies the weighted probabilistic cellular automaton model (Li X L, Kuang H, Song T, et al 2008 Chin. Phys. B 17 2366) which considered a diversity of traffic behaviors under real traffic situations induced by various driving characters and habits. In the new model, the effects of the velocity at the last time step and drivers' desire for acceleration are taken into account. The fundamental diagram, spatial-temporal diagram, and the time series of one-minute data are analyzed. The results show that this model reproduces synchronized flow. Finally, it simulates the on-ramp system with the proposed model. Some characteristics including the phase diagram are studied.

  10. Wolfram's class IV automata and a good life

    NASA Astrophysics Data System (ADS)

    McIntosh, Harold V.

    1990-09-01

    A comprehensive discussion of Wolfram's four classes of cellular automata is given, with the intention of relating them to Conway's criteria for a good game of Life. Although it is known that such classifications cannot be entirely rigorous, much information about the behavior of an automaton can be gleaned from the statistical properties of its transition table. Still more information can be deduced from the mean field approximation to its state densities, in particular, from the distribution of horizontal and diagonal tangents of the latter. In turn these characteristics can be related to the presence or absence of certain loops in the de Bruijn diagram of the automaton.
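    The mean-field approach mentioned above can be made concrete for Conway's Life itself (birth on exactly 3 live neighbours, survival on 2 or 3): iterating the standard mean-field density map reveals a nontrivial stable fixed point near 0.37, a well-known discrepancy with the much lower densities observed in actual Life runs, which is part of why such curves must be read with care.

```python
from math import comb

def life_mean_field(p):
    """Standard mean-field density map for Conway's Life: the next-step
    density of live cells, assuming uncorrelated neighbours at density p."""
    n3 = comb(8, 3) * p**3 * (1 - p)**5   # P(exactly 3 live neighbours)
    n2 = comb(8, 2) * p**2 * (1 - p)**6   # P(exactly 2 live neighbours)
    return (1 - p) * n3 + p * (n2 + n3)   # birth term + survival term

p = 0.3
for _ in range(100):
    p = life_mean_field(p)                # settles near p ~ 0.37
```

    The shape of this curve (its fixed points and tangents against the diagonal) is the kind of information the discussion above extracts from a rule's transition table.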

  11. An implementation of cellular automaton model for single-line train working diagram

    NASA Astrophysics Data System (ADS)

    Hua, Wei; Liu, Jun

    2006-04-01

    According to the railway transportation system's characteristics, a new cellular automaton model for the single-line railway system is presented in this paper. Based on this model, several simulations were done to imitate train operation under three working diagrams. From a different angle, the results show how the organization of train operation impacts the railway carrying capacity. With the non-parallel train working diagram, the influence of fast trains on slow trains is found to be the strongest. Many slow trains have to wait between neighbouring stations to let the fast train(s) pass through first, so a slow train advances like a wave propagating from the departure station to the arrival station. This also resembles jammed traffic flow on a highway. Furthermore, the nonuniformity of travel times between the sections also greatly limits the railway carrying capacity. After converting the nonuniform sections into sections with uniform travel times, while keeping the total travel time unchanged, all three carrying capacities are improved greatly, as shown by simulation. It also shows that the cellular automaton model is an effective and feasible way to investigate the railway transportation system.

  12. Training a molecular automaton to play a game

    NASA Astrophysics Data System (ADS)

    Pei, Renjun; Matamoros, Elizabeth; Liu, Manhong; Stefanovic, Darko; Stojanovic, Milan N.

    2010-11-01

    Research at the interface between chemistry and cybernetics has led to reports of `programmable molecules', but what does it mean to say `we programmed a set of solution-phase molecules to do X'? A survey of recently implemented solution-phase circuitry indicates that this statement could be replaced with `we pre-mixed a set of molecules to do X and functional subsets of X'. These hard-wired mixtures are then exposed to a set of molecular inputs, which can be interpreted as being keyed to human moves in a game, or as assertions of logical propositions. In nucleic acids-based systems, stemming from DNA computation, these inputs can be seen as generic oligonucleotides. Here, we report using reconfigurable nucleic acid catalyst-based units to build a multipurpose reprogrammable molecular automaton that goes beyond single-purpose `hard-wired' molecular automata. The automaton covers all possible responses to two consecutive sets of four inputs (such as four first and four second moves for a generic set of trivial two-player two-move games). This is a model system for more general molecular field programmable gate array (FPGA)-like devices that can be programmed by example, which means that the operator need not have any knowledge of molecular computing methods.

  13. Training a molecular automaton to play a game.

    PubMed

    Pei, Renjun; Matamoros, Elizabeth; Liu, Manhong; Stefanovic, Darko; Stojanovic, Milan N

    2010-11-01

    Research at the interface between chemistry and cybernetics has led to reports of 'programmable molecules', but what does it mean to say 'we programmed a set of solution-phase molecules to do X'? A survey of recently implemented solution-phase circuitry indicates that this statement could be replaced with 'we pre-mixed a set of molecules to do X and functional subsets of X'. These hard-wired mixtures are then exposed to a set of molecular inputs, which can be interpreted as being keyed to human moves in a game, or as assertions of logical propositions. In nucleic acids-based systems, stemming from DNA computation, these inputs can be seen as generic oligonucleotides. Here, we report using reconfigurable nucleic acid catalyst-based units to build a multipurpose reprogrammable molecular automaton that goes beyond single-purpose 'hard-wired' molecular automata. The automaton covers all possible responses to two consecutive sets of four inputs (such as four first and four second moves for a generic set of trivial two-player two-move games). This is a model system for more general molecular field programmable gate array (FPGA)-like devices that can be programmed by example, which means that the operator need not have any knowledge of molecular computing methods.

  14. Probabilistic Design and Analysis Framework

    NASA Technical Reports Server (NTRS)

    Strack, William C.; Nagpal, Vinod K.

    2010-01-01

    PRODAF is a software package designed to aid analysts and designers in conducting probabilistic analysis of components and systems. PRODAF can integrate multiple analysis programs to ease the tedious process of conducting a complex analysis that requires the use of multiple software packages. The work uses a commercial finite element analysis (FEA) program with modules from NESSUS to conduct a probabilistic analysis of a hypothetical turbine blade, disk, and shaft model. PRODAF applies the response surface method at the component level and extrapolates the component-level responses to the system level. Hypothetical components of a gas turbine engine are first deterministically modeled using FEA. Variations in selected geometrical dimensions and loading conditions are analyzed to determine their effects on the stress state within each component. Geometric variations include the chord length and height for the blade, and the inner radius, outer radius, and thickness for the disk. Probabilistic analysis is carried out using developing software packages like System Uncertainty Analysis (SUA) and PRODAF. PRODAF was used with a commercial deterministic FEA program in conjunction with modules from the probabilistic analysis program, NESTEM, to perturb loads and geometries to provide a reliability and sensitivity analysis. PRODAF simplified the handling of data among the various programs involved, and will work with many commercial and open-source deterministic programs, probabilistic programs, or modules.
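    The kind of probabilistic perturbation involved can be sketched with plain Monte Carlo propagation of geometric variability through a simple closed-form stress response. The distributions, the response function, and every number below are hypothetical illustrations, not PRODAF's interface or the study's actual model.

```python
import numpy as np

def stress_samples(n=100_000, seed=0):
    """Monte Carlo propagation of assumed geometric variability through a
    simple bending-stress response sigma = 6*M / (c * h**2), where c and h
    stand in for a blade section's chord and height (both hypothetical)."""
    rng = np.random.default_rng(seed)
    M = 500.0                           # applied bending moment, N*m (assumed)
    c = rng.normal(0.05, 0.002, n)      # chord length, m (assumed ~4% CV)
    h = rng.normal(0.01, 0.0005, n)     # section height, m (assumed ~5% CV)
    return 6.0 * M / (c * h**2)

s = stress_samples()
p_exceed = float(np.mean(s > 7.0e8))    # P(stress > assumed 0.7 GPa limit)
```

    A response-surface method like the one PRODAF applies replaces the expensive FEA evaluation inside such a loop with a cheap fitted surrogate; the sampling logic is otherwise the same.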

  15. Phase ordering in disordered and inhomogeneous systems

    NASA Astrophysics Data System (ADS)

    Corberi, Federico; Zannetti, Marco; Lippiello, Eugenio; Burioni, Raffaella; Vezzani, Alessandro

    2015-06-01

    We study numerically the coarsening dynamics of the Ising model on a regular lattice with random bonds and on deterministic fractal substrates. We propose a unifying interpretation of the phase-ordering processes based on two classes of dynamical behaviors characterized by different growth laws of the ordered domain size, namely logarithmic or power law, respectively. It is conjectured that the interplay between these dynamical classes is regulated by the same topological feature that governs the presence or the absence of a finite-temperature phase transition.

  16. Implementation of a polling protocol for predicting celiac disease in videocapsule analysis.

    PubMed

    Ciaccio, Edward J; Tennyson, Christina A; Bhagat, Govind; Lewis, Suzanne K; Green, Peter H

    2013-07-16

    To investigate the presence of small intestinal villous atrophy in celiac disease patients from quantitative analysis of videocapsule image sequences. Data from nine celiac patients with biopsy-proven villous atrophy and seven control patients lacking villous atrophy were used for analysis. The celiac patients had biopsy-proven disease with Marsh scores of II-IIIC, except in the case of one hemophiliac patient. At four small intestinal levels (duodenal bulb, distal duodenum, jejunum, and ileum), video clips of length 200 frames (100 s) were analyzed. Twenty-four measurements were used for image characterization. These measurements were determined by quantitatively processing the videocapsule images via techniques for texture analysis, motility estimation, volumetric reconstruction using shape-from-shading principles, and image transformation. Each automated measurement method, or automaton, was polled as to whether or not villous atrophy was present in the small intestine, indicating celiac disease. Each automaton's vote was determined based upon an optimized parameter threshold level, with the threshold levels being determined from prior data. A prediction of villous atrophy was made if it received the majority of votes (≥ 13), while no prediction was made for tie votes (12-12). Thus each set of images was classified as being from either a celiac disease patient or from a control patient. Separated by intestinal level, the overall sensitivity of automata polling for predicting villous atrophy and hence celiac disease was 83.9%, while the specificity was 92.9%, and the overall accuracy of automata-based polling was 88.1%. The method of image transformation yielded the highest sensitivity at 93.8%, while the method of texture analysis using subbands had the highest specificity at 76.0%. Similar results of prediction were observed at all four small intestinal locations, but there were more tie votes at location 4 (ileum).
Incorrect predictions which reduced sensitivity occurred for two celiac patients with a Marsh type II pattern, which is characterized by crypt hyperplasia but normal villous architecture. Pooled from all levels, there was a mean of 14.31 ± 3.28 automaton votes for celiac vs 9.67 ± 3.31 automaton votes for control when celiac patient data were analyzed (P < 0.001). Pooled from all levels, there was a mean of 9.71 ± 2.81 automaton votes for celiac vs 14.32 ± 2.79 automaton votes for control when control patient data were analyzed (P < 0.001). Automata-based polling may be useful to indicate the presence of mucosal atrophy, indicative of celiac disease, across the entire small bowel, though this must be confirmed in a larger patient set. Since the method is quantitative and automated, it can potentially eliminate observer bias and enable the detection of subtle abnormality in patients lacking a clear diagnosis. Our paradigm was found to be more efficacious at proximal small intestinal locations, which may suggest a greater presence and severity of villous atrophy at proximal as compared with distal locations.
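    The polling rule described above (a prediction requires at least 13 of the 24 automaton votes; a 12-12 split yields no prediction) can be sketched as:

```python
def poll(votes):
    """Combine 24 automaton votes (True = villous atrophy detected).
    Returns 'celiac' or 'control' on a >= 13 majority, None on a tie."""
    yes = sum(votes)
    no = len(votes) - yes
    if yes >= 13:
        return "celiac"
    if no >= 13:
        return "control"
    return None  # 12-12 tie: no prediction

example = poll([True] * 14 + [False] * 10)  # majority says atrophy present
```

    Each boolean vote would come from thresholding one of the 24 image measurements against its optimized parameter level.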

  17. Bounding the first exit from the basin: Independence times and finite-time basin stability

    NASA Astrophysics Data System (ADS)

    Schultz, Paul; Hellmann, Frank; Webster, Kevin N.; Kurths, Jürgen

    2018-04-01

    We study the stability of deterministic systems, given sequences of large, jump-like perturbations. Our main result is the derivation of a lower bound for the probability of the system to remain in the basin, given that perturbations are rare enough. This bound is efficient to evaluate numerically. To quantify rare enough, we define the notion of the independence time of such a system. This is the time after which a perturbed state has probably returned close to the attractor, meaning that subsequent perturbations can be considered separately. The effect of jump-like perturbations that occur at least the independence time apart is thus well described by a fixed probability to exit the basin at each jump, allowing us to obtain the bound. To determine the independence time, we introduce the concept of finite-time basin stability, which corresponds to the probability that a perturbed trajectory returns to an attractor within a given time. The independence time can then be determined as the time scale at which the finite-time basin stability reaches its asymptotic value. Besides that, finite-time basin stability is a novel probabilistic stability measure on its own, with potential broad applications in complex systems.
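    A minimal Monte Carlo sketch of finite-time basin stability, for an assumed bistable toy system dx/dt = x - x^3 with attractor x* = 1 (the system, thresholds, and perturbation range are all illustrative, not the paper's setting):

```python
import numpy as np

def finite_time_basin_stability(T, n=2000, dt=0.01, seed=0):
    """Fraction of uniformly drawn jump-like perturbed states on [-2, 2]
    that come within 0.1 of the attractor x* = 1 of dx/dt = x - x**3
    by time T (simple Euler integration)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-2.0, 2.0, size=n)        # perturbed initial states
    returned = np.abs(x - 1.0) < 0.1
    for _ in range(int(T / dt)):
        x = x + dt * (x - x**3)
        returned |= np.abs(x - 1.0) < 0.1     # mark first return
    return float(returned.mean())

s_short = finite_time_basin_stability(1.0)    # most returns still pending
s_long = finite_time_basin_stability(20.0)    # near the asymptotic value
```

    The time scale at which this quantity saturates toward its asymptotic value (here the basin measure of x* = 1, about half of the sampled interval) is exactly the independence time defined above.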

  18. SU-E-T-577: Commissioning of a Deterministic Algorithm for External Photon Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, T; Finlay, J; Mesina, C

    Purpose: We report commissioning results for a deterministic algorithm for external photon beam treatment planning. A deterministic algorithm solves the radiation transport equations directly using a finite difference method, thus improving the accuracy of dose calculation, particularly under heterogeneous conditions, with results similar to those of Monte Carlo (MC) simulation. Methods: Commissioning data for photon energies 6-15 MV include the percentage depth dose (PDD) measured at SSD = 90 cm and the output ratio in water (Spc), both normalized to 10 cm depth, for field sizes between 2 and 40 cm and depths between 0 and 40 cm. The off-axis ratio (OAR) for the same set of field sizes was used at 5 depths (dmax, 5, 10, 20, 30 cm). The final model was compared with the commissioning data as well as additional benchmark data. The benchmark data include dose per MU determined for 17 points for SSD between 80 and 110 cm, depth between 5 and 20 cm, and lateral offset of up to 16.5 cm. Relative comparisons were made in a heterogeneous phantom made of cork and solid water. Results: Compared to the commissioning beam data, the agreement is generally better than 2%, with larger errors (up to 13%) observed in the buildup regions of the PDD and penumbra regions of the OAR profiles. The overall mean standard deviation is 0.04% when all data are taken into account. Compared to the benchmark data, the agreement is generally better than 2%. Relative comparison in the heterogeneous phantom is in general better than 4%. Conclusion: A commercial deterministic algorithm was commissioned for megavoltage photon beams. In a homogeneous medium, the agreement between the algorithm and measurement at the benchmark points is generally better than 2%. The dose accuracy of a deterministic algorithm is better than that of a convolution algorithm in heterogeneous media.

  19. Solidification Microstructure, Segregation, and Shrinkage of Fe-Mn-C Twinning-Induced Plasticity Steel by Simulation and Experiment

    NASA Astrophysics Data System (ADS)

    Lan, Peng; Tang, Haiyan; Zhang, Jiaquan

    2016-06-01

    A 3D cellular automaton finite element model with full coupling of heat, flow, and solute transfer, incorporating solidification grain nucleation and growth, was developed for a multicomponent system. The predicted solidification process, shrinkage porosity, macrosegregation, grain orientation, and microstructure evolution of Fe-22Mn-0.7C twinning-induced plasticity (TWIP) steel match well with the experimental observation and measurement. Based on a new solute microsegregation model using the finite difference method, the thermophysical parameters including solid fraction, thermal conductivity, density, and enthalpy were predicted and compared with the results from thermodynamics and experiment. The effects of flow and solute transfer in the liquid phase on the solidification microstructure of Fe-22Mn-0.7C TWIP steel were compared numerically. Thermal convection decreases the temperature gradient in the liquid steel, leading to enlargement of the equiaxed zone. Solute enrichment in front of the solid/liquid interface weakens the thermal convection, resulting in a slight postponement of the columnar-to-equiaxed transition (CET). The CET behavior of Fe-Mn-C TWIP steel during solidification was fully described and mathematically quantified by grain morphology statistics for the first time. A new methodology to determine the CET location by least-squares linear regression of the grain mean size was established, by which a composition design strategy for Fe-Mn-C TWIP steel according to solidification microstructure, matrix compactness, and homogeneity was developed.

  20. Cellular Decomposition Based Hybrid-Hierarchical Control Systems with Applications to Flight Management Systems

    NASA Technical Reports Server (NTRS)

    Caines, P. E.

    1999-01-01

    The work in this research project has been focused on the construction of a hierarchical hybrid control theory which is applicable to flight management systems. The motivation and underlying philosophical position for this work has been that the scale, inherent complexity and the large number of agents (aircraft) involved in an air traffic system imply that a hierarchical modelling and control methodology is required for its management and real time control. In the current work the complex discrete or continuous state space of a system with a small number of agents is aggregated in such a way that discrete (finite state machine or supervisory automaton) controlled dynamics are abstracted from the system's behaviour. High level control may then be either directly applied at this abstracted level, or, if this is in itself of significant complexity, further layers of abstractions may be created to produce a system with an acceptable degree of complexity at each level. By the nature of this construction, high level commands are necessarily realizable at lower levels in the system.

  1. Towards implementation of cellular automata in Microbial Fuel Cells.

    PubMed

    Tsompanas, Michail-Antisthenis I; Adamatzky, Andrew; Sirakoulis, Georgios Ch; Greenman, John; Ieropoulos, Ioannis

    2017-01-01

    The Microbial Fuel Cell (MFC) is a bio-electrochemical transducer converting waste products into electricity using microbial communities. Cellular Automaton (CA) is a uniform array of finite-state machines that update their states in discrete time depending on states of their closest neighbors by the same rule. Arrays of MFCs could, in principle, act as massive-parallel computing devices with local connectivity between elementary processors. We provide a theoretical design of such a parallel processor by implementing CA in MFCs. We have chosen Conway's Game of Life as the 'benchmark' CA because this is the most popular CA which also exhibits an enormously rich spectrum of patterns. Each cell of the Game of Life CA is realized using two MFCs. The MFCs are linked electrically and hydraulically. The model is verified via simulation of an electrical circuit demonstrating equivalent behaviours. The design is a first step towards future implementations of fully autonomous biological computing devices with massive parallelism. The energy independence of such devices counteracts their somewhat slow transitions-compared to silicon circuitry-between the different states during computation.
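    The benchmark CA itself, one cell of which the design realizes with a pair of MFCs, is easy to state in code; a standard synchronous Game of Life update on a periodic grid:

```python
import numpy as np

def life_step(grid):
    """One synchronous Game of Life update on a periodic (toroidal) grid:
    a cell is alive next step iff it has exactly 3 live neighbours, or it
    is alive and has exactly 2."""
    n = sum(np.roll(np.roll(grid, i, 0), j, 1)
            for i in (-1, 0, 1) for j in (-1, 0, 1)) - grid
    return ((n == 3) | ((grid == 1) & (n == 2))).astype(int)

# A blinker oscillates with period 2: a horizontal bar of three cells
# becomes a vertical bar, and back.
g = np.zeros((5, 5), dtype=int)
g[2, 1:4] = 1
g2 = life_step(life_step(g))
```

    Every one of these Boolean updates would be realized by the electrically and hydraulically linked MFC pair described above, with the state read out as electrical output.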

  2. Towards implementation of cellular automata in Microbial Fuel Cells

    PubMed Central

    Adamatzky, Andrew; Sirakoulis, Georgios Ch.; Greenman, John; Ieropoulos, Ioannis

    2017-01-01

    The Microbial Fuel Cell (MFC) is a bio-electrochemical transducer converting waste products into electricity using microbial communities. Cellular Automaton (CA) is a uniform array of finite-state machines that update their states in discrete time depending on states of their closest neighbors by the same rule. Arrays of MFCs could, in principle, act as massive-parallel computing devices with local connectivity between elementary processors. We provide a theoretical design of such a parallel processor by implementing CA in MFCs. We have chosen Conway’s Game of Life as the ‘benchmark’ CA because this is the most popular CA which also exhibits an enormously rich spectrum of patterns. Each cell of the Game of Life CA is realized using two MFCs. The MFCs are linked electrically and hydraulically. The model is verified via simulation of an electrical circuit demonstrating equivalent behaviours. The design is a first step towards future implementations of fully autonomous biological computing devices with massive parallelism. The energy independence of such devices counteracts their somewhat slow transitions—compared to silicon circuitry—between the different states during computation. PMID:28498871

  3. Toward an improvement over Kerner-Klenov-Wolf three-phase cellular automaton model.

    PubMed

    Jiang, Rui; Wu, Qing-Song

    2005-12-01

    The Kerner-Klenov-Wolf (KKW) three-phase cellular automaton model has a nonrealistic velocity of the upstream front in the widening synchronized flow pattern, which separates synchronized flow downstream from free flow upstream. This paper presents an improved model, which is a combination of the initial KKW model and a modified Nagel-Schreckenberg (MNS) model. In the improved KKW model, a parameter is introduced to determine whether a vehicle moves according to the MNS model or the initial KKW model. The improved KKW model can not only simulate the empirical observations as the initial KKW model does, but also overcome the nonrealistic velocity problem. The mechanism of the improvement is discussed.

  4. Experimental Non-Violation of the Bell Inequality

    NASA Astrophysics Data System (ADS)

    Palmer, Tim

    2018-05-01

    A finite non-classical framework for physical theory is described which challenges the conclusion that the Bell Inequality has been shown to have been violated experimentally, even approximately. This framework postulates the universe as a deterministic locally causal system evolving on a measure-zero fractal-like geometry $I_U$ in cosmological state space. Consistent with the assumed primacy of $I_U$, and $p$-adic number theory, a non-Euclidean (and hence non-classical) metric $g_p$ is defined on cosmological state space, where $p$ is a large but finite Pythagorean prime. Using number-theoretic properties of spherical triangles, the inequalities violated experimentally are shown to be $g_p$-distant from the CHSH inequality, whose violation would rule out local realism. This result fails in the singular limit $p=\infty$, at which $g_p$ is Euclidean. Broader implications are discussed.
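    For reference, the CHSH combination at issue, evaluated with the singlet-state quantum correlation E(α, β) = -cos(α - β) at the standard angle choices, attains the Tsirelson value 2√2, above the local-realist bound of 2 (this is textbook quantum mechanics, not the paper's p-adic construction):

```python
import numpy as np

def chsh(a, ap, b, bp):
    """CHSH combination S = |E(a,b) - E(a,b') + E(a',b) + E(a',b')|
    for singlet-state correlations E(alpha, beta) = -cos(alpha - beta)."""
    E = lambda x, y: -np.cos(x - y)
    return abs(E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp))

# Standard optimal measurement angles for the two parties.
S = chsh(0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4)  # equals 2*sqrt(2)
```

    Any local-realist model must satisfy S ≤ 2; the framework above argues that the experimentally tested inequalities are, in its non-Euclidean metric, distant from this exact CHSH form.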

  5. Deterministic and stochastic bifurcations in the Hindmarsh-Rose neuronal model

    NASA Astrophysics Data System (ADS)

    Dtchetgnia Djeundam, S. R.; Yamapi, R.; Kofane, T. C.; Aziz-Alaoui, M. A.

    2013-09-01

    We analyze the bifurcations occurring in the 3D Hindmarsh-Rose neuronal model with and without a random signal. Under a sufficient stimulus, neuronal activity takes place; we observe various types of bifurcations that lead to chaotic transitions. Besides the equilibrium solutions and their stability, we also investigate the deterministic bifurcations. It appears that the neuronal activity consists of chaotic transitions between two periodic phases called bursting and spiking solutions. The stochastic bifurcation, defined as a sudden change in the character of a stochastic attractor when the bifurcation parameter of the system passes through a critical value, or under certain conditions as the collision of a stochastic attractor with a stochastic saddle, occurs when a random Gaussian signal is added. Our study reveals two kinds of stochastic bifurcation: the phenomenological bifurcation (P-bifurcation) and the dynamical bifurcation (D-bifurcation). An asymptotic method is used to analyze the phenomenological bifurcation. We find that the neuronal activity of spiking and bursting chaos remains for finite values of the noise intensity.
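    A minimal sketch of the 3D Hindmarsh-Rose equations with an optional additive Gaussian noise term (Euler-Maruyama discretization; the parameter values are the commonly used textbook set, which may differ from the paper's):

```python
import numpy as np

def hindmarsh_rose(T=1000.0, dt=0.01, I=3.25, sigma=0.0, seed=0):
    """Integrate the 3D Hindmarsh-Rose model; sigma scales a Gaussian
    noise term added to the membrane-potential equation. Returns the
    membrane potential trace x(t)."""
    rng = np.random.default_rng(seed)
    a, b, c, d, r, s, x_r = 1.0, 3.0, 1.0, 5.0, 0.006, 4.0, -1.6
    x, y, z = -1.0, 0.0, 0.0
    xs = []
    for _ in range(int(T / dt)):
        dx = y - a * x**3 + b * x**2 - z + I
        dy = c - d * x**2 - y
        dz = r * (s * (x - x_r) - z)      # slow adaptation variable
        x = x + dt * dx + sigma * np.sqrt(dt) * rng.normal()
        y = y + dt * dy
        z = z + dt * dz
        xs.append(x)
    return np.array(xs)

xs = hindmarsh_rose()    # sigma=0: deterministic bursting/spiking
```

    Sweeping the stimulus I traces the deterministic bifurcations; turning on sigma and examining the stationary density of (x, y, z) is how P-bifurcations of the kind described above are detected.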

  6. Reliability Analysis of Retaining Walls Subjected to Blast Loading by Finite Element Approach

    NASA Astrophysics Data System (ADS)

    GuhaRay, Anasua; Mondal, Stuti; Mohiuddin, Hisham Hasan

    2018-02-01

    Conventional design methods adopt factors of safety as per practice and experience, which are deterministic in nature. The limit state method, though not completely deterministic, does not take into account the effect of design parameters which are inherently variable, such as cohesion and angle of internal friction for soil. Reliability analysis provides a measure to incorporate these variations into the analysis and hence results in a more realistic design. Several studies have been carried out on the reliability of reinforced concrete walls and masonry walls under explosions. Also, reliability analysis of retaining structures against various kinds of failure has been done. However, very few research works are available on the reliability analysis of retaining walls subjected to blast loading. Thus, the present paper considers the effect of variation of geotechnical parameters when a retaining wall is subjected to blast loading. It is found, however, that the variation of the geotechnical random variables does not have a significant effect on the stability of retaining walls subjected to blast loading.

  7. Dynamic Probabilistic Instability of Composite Structures

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    2009-01-01

    A computationally effective method is described to evaluate the non-deterministic dynamic instability (probabilistic dynamic buckling) of thin composite shells. The method is a judicious combination of available computer codes for finite element, composite mechanics and probabilistic structural analysis. The solution method is incrementally updated Lagrangian. It is illustrated by applying it to thin composite cylindrical shell subjected to dynamic loads. Both deterministic and probabilistic buckling loads are evaluated to demonstrate the effectiveness of the method. A universal plot is obtained for the specific shell that can be used to approximate buckling loads for different load rates and different probability levels. Results from this plot show that the faster the rate, the higher the buckling load and the shorter the time. The lower the probability, the lower is the buckling load for a specific time. Probabilistic sensitivity results show that the ply thickness, the fiber volume ratio and the fiber longitudinal modulus, dynamic load and loading rate are the dominant uncertainties in that order.

  8. An intrinsic property of memory of the Cellular automaton infrastructure of Nature leading to the organization of the physical world as an Internet of Things; TOE = IOT

    NASA Astrophysics Data System (ADS)

    Berkovich, Simon

    2015-04-01

    The fundamental advantage of a Cellular automaton construction is that it can be viewed as an undetectable absolute frame of reference, in accordance with Lorentz-Poincare's interpretation. The cellular automaton model for physical problems comes upon two basic hurdles: (1) how to find the Elemental Rule, and (2) how to get non-locality from local transformations. Both problems are resolved by considering the transformation rule of mutual distributed synchronization. Actually, any information processing device starts with a clocking system, and it turns out that ``All physical phenomena are different aspects of the high-level description of distributed mutual synchronization in a network of digital clocks''. Non-locality comes from two hugely different time-scales of signaling. The universe combines information and matter processes; fast-spreading diffusion wave solutions create the mechanism of the Holographic Universe. Thirdly, disengaged from synchronization, circular counters can perform memory functions by retaining the phases of their oscillations, an idea of Von Neumann's. Thus, the suggested model generates the necessary constructs for the physical world as an Internet of Things. Life emerges due to the specifics of macromolecules that serve as communication means, with the holographic memory.

  9. Digitally programmable microfluidic automaton for multiscale combinatorial mixing and sample processing†

    PubMed Central

    Jensen, Erik C.; Stockton, Amanda M.; Chiesl, Thomas N.; Kim, Jungkyu; Bera, Abhisek; Mathies, Richard A.

    2013-01-01

    A digitally programmable microfluidic Automaton consisting of a 2-dimensional array of pneumatically actuated microvalves is programmed to perform new multiscale mixing and sample processing operations. Large (µL-scale) volume processing operations are enabled by precise metering of multiple reagents within individual nL-scale valves followed by serial repetitive transfer to programmed locations in the array. A novel process exploiting new combining valve concepts is developed for continuous rapid and complete mixing of reagents in less than 800 ms. Mixing, transfer, storage, and rinsing operations are implemented combinatorially to achieve complex assay automation protocols. The practical utility of this technology is demonstrated by performing automated serial dilution for quantitative analysis as well as the first demonstration of on-chip fluorescent derivatization of biomarker targets (carboxylic acids) for microchip capillary electrophoresis on the Mars Organic Analyzer. A language is developed to describe how unit operations are combined to form a microfluidic program. Finally, this technology is used to develop a novel microfluidic 6-sample processor for combinatorial mixing of large sets (>26 unique combinations) of reagents. The digitally programmable microfluidic Automaton is a versatile programmable sample processor for a wide range of process volumes, for multiple samples, and for different types of analyses. PMID:23172232

  10. Simulation of emotional contagion using modified SIR model: A cellular automaton approach

    NASA Astrophysics Data System (ADS)

    Fu, Libi; Song, Weiguo; Lv, Wei; Lo, Siuming

    2014-07-01

    Emotion plays an important role in the decision-making of individuals in some emergency situations. The contagion of emotion may induce either normal or abnormal consolidated crowd behavior. This paper aims to simulate the dynamics of emotional contagion among crowds by recasting the epidemiological SIR model as a cellular automaton. This new cellular automaton model, entitled the "CA-SIRS model", captures the dynamic process 'susceptible-infected-recovered-susceptible', which is based on SIRS contagion in epidemiological theory. Moreover, in this new model, the process is integrated with individual movement. The simulation results of this model show that multiple waves and dynamical stability around a mean value will appear during emotion spreading. It was found that the proportion of initially infected individuals had little influence on the final stable proportion of the infected population in a given system, and that the infection frequency increased with an increase in the average crowd density. Our results further suggest that individual movement accelerates the spread of emotion and increases the stable proportion of the infected population. Furthermore, decreasing the duration of an infection and the probability of reinfection can markedly reduce the number of infected individuals. It is hoped that this study will be helpful in crowd management and evacuation organization.
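    As a rough illustration of the CA-SIRS idea (without the paper's movement component), the sketch below runs a susceptible-infected-recovered-susceptible update on a small periodic lattice; the rates beta, gamma and xi are invented for the example, not the paper's calibrated values.

```python
import random

S, I, R = 0, 1, 2  # susceptible, infected (emotion carrier), recovered

def step(grid, beta=0.3, gamma=0.1, xi=0.05, rng=None):
    """One synchronous update of a toy CA-SIRS lattice with a Moore
    neighbourhood on a torus.

    beta: per-infected-neighbour contagion probability,
    gamma: recovery probability, xi: loss-of-immunity probability
    (illustrative rates, not fitted values).
    """
    rng = rng or random
    n = len(grid)
    new = [row[:] for row in grid]
    for i in range(n):
        for j in range(n):
            state = grid[i][j]
            if state == S:
                infected_nbrs = sum(
                    grid[(i + di) % n][(j + dj) % n] == I
                    for di in (-1, 0, 1) for dj in (-1, 0, 1)
                    if (di, dj) != (0, 0))
                if any(rng.random() < beta for _ in range(infected_nbrs)):
                    new[i][j] = I
            elif state == I and rng.random() < gamma:
                new[i][j] = R
            elif state == R and rng.random() < xi:
                new[i][j] = S
    return new

rng = random.Random(1)
n = 30
grid = [[I if rng.random() < 0.05 else S for _ in range(n)] for _ in range(n)]
for _ in range(200):
    grid = step(grid, rng=rng)
frac_infected = sum(row.count(I) for row in grid) / n**2
```

    Tracking frac_infected over time in this toy model already shows the waves and the fluctuation around a stable mean that the paper reports.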

  11. A novel upwind stabilized discontinuous finite element angular framework for deterministic dose calculations in magnetic fields.

    PubMed

    Yang, R; Zelyak, O; Fallone, B G; St-Aubin, J

    2018-01-30

    Angular discretization impacts nearly every aspect of a deterministic solution to the linear Boltzmann transport equation, especially in the presence of magnetic fields, as modeled by a streaming operator in angle. In this work a novel stabilization treatment of the magnetic field term is developed for an angular finite element discretization on the unit sphere, specifically involving piecewise partitioning of path integrals along curved element edges into uninterrupted segments of incoming and outgoing flux, with outgoing components updated iteratively. Correct order-of-accuracy for this angular framework is verified using the method of manufactured solutions for linear, quadratic, and cubic basis functions in angle. Higher order basis functions were found to reduce the error especially in strong magnetic fields and low density media. We combine an angular finite element mesh respecting octant boundaries on the unit sphere to spatial Cartesian voxel elements to guarantee an unambiguous transport sweep ordering in space. Accuracy for a dosimetrically challenging scenario involving bone and air in the presence of a 1.5 T parallel magnetic field is validated against the Monte Carlo package GEANT4. Accuracy and relative computational efficiency were investigated for various angular discretization parameters. 32 angular elements with quadratic basis functions yielded a reasonable compromise, with gamma passing rates of 99.96% (96.22%) for a 2%/2 mm (1%/1 mm) criterion. A rotational transformation of the spatial calculation geometry is performed to orient an arbitrary magnetic field vector to be along the z-axis, a requirement for a constant azimuthal angular sweep ordering. Working on the unit sphere, we apply the same rotational transformation to the angular domain to align its octants with the rotated Cartesian mesh. Simulating an oblique 1.5 T magnetic field against GEANT4 yielded gamma passing rates of 99.42% (95.45%) for a 2%/2 mm (1%/1 mm) criterion.

  12. A novel upwind stabilized discontinuous finite element angular framework for deterministic dose calculations in magnetic fields

    NASA Astrophysics Data System (ADS)

    Yang, R.; Zelyak, O.; Fallone, B. G.; St-Aubin, J.

    2018-02-01

    Angular discretization impacts nearly every aspect of a deterministic solution to the linear Boltzmann transport equation, especially in the presence of magnetic fields, as modeled by a streaming operator in angle. In this work a novel stabilization treatment of the magnetic field term is developed for an angular finite element discretization on the unit sphere, specifically involving piecewise partitioning of path integrals along curved element edges into uninterrupted segments of incoming and outgoing flux, with outgoing components updated iteratively. Correct order-of-accuracy for this angular framework is verified using the method of manufactured solutions for linear, quadratic, and cubic basis functions in angle. Higher order basis functions were found to reduce the error especially in strong magnetic fields and low density media. We combine an angular finite element mesh respecting octant boundaries on the unit sphere to spatial Cartesian voxel elements to guarantee an unambiguous transport sweep ordering in space. Accuracy for a dosimetrically challenging scenario involving bone and air in the presence of a 1.5 T parallel magnetic field is validated against the Monte Carlo package GEANT4. Accuracy and relative computational efficiency were investigated for various angular discretization parameters. 32 angular elements with quadratic basis functions yielded a reasonable compromise, with gamma passing rates of 99.96% (96.22%) for a 2%/2 mm (1%/1 mm) criterion. A rotational transformation of the spatial calculation geometry is performed to orient an arbitrary magnetic field vector to be along the z-axis, a requirement for a constant azimuthal angular sweep ordering. Working on the unit sphere, we apply the same rotational transformation to the angular domain to align its octants with the rotated Cartesian mesh. Simulating an oblique 1.5 T magnetic field against GEANT4 yielded gamma passing rates of 99.42% (95.45%) for a 2%/2 mm (1%/1 mm) criterion.

  13. Ergodicity of Truncated Stochastic Navier Stokes with Deterministic Forcing and Dispersion

    NASA Astrophysics Data System (ADS)

    Majda, Andrew J.; Tong, Xin T.

    2016-10-01

    Turbulence in idealized geophysical flows is a very rich and important topic. The anisotropic effects of explicit deterministic forcing, dispersive effects from rotation due to the β-plane and F-plane, and topography together with random forcing all combine to produce a remarkable number of realistic phenomena. These effects have been studied through careful numerical experiments in the truncated geophysical models. These important results include transitions between coherent jets and vortices, and direct and inverse turbulence cascades as parameters are varied, and it is a contemporary challenge to explain these diverse statistical predictions. Here we contribute to these issues by proving with full mathematical rigor that for any values of the deterministic forcing, the β- and F-plane effects and topography, with minimal stochastic forcing, there is geometric ergodicity for any finite Galerkin truncation. This means that there is a unique smooth invariant measure which attracts all statistical initial data at an exponential rate. In particular, this rigorous statistical theory guarantees that there are no bifurcations to multiple stable and unstable statistical steady states as geophysical parameters are varied in contrast to claims in the applied literature. The proof utilizes a new statistical Lyapunov function to account for enstrophy exchanges between the statistical mean and the variance fluctuations due to the deterministic forcing. It also requires careful proofs of hypoellipticity with geophysical effects and uses geometric control theory to establish reachability. To illustrate the necessity of these conditions, a two-dimensional example is developed which has the square of the Euclidean norm as the Lyapunov function and is hypoelliptic with nonzero noise forcing, yet fails to be reachable or ergodic.

  14. A hybrid model for predicting carbon monoxide from vehicular exhausts in urban environments

    NASA Astrophysics Data System (ADS)

    Gokhale, Sharad; Khare, Mukesh

    Several deterministic air quality models evaluate and predict the frequently occurring pollutant concentrations well but, in general, are incapable of predicting the 'extreme' concentrations. In contrast, statistical distribution models overcome this limitation of the deterministic models and predict the 'extreme' concentrations. However, environmental damage is caused both by the extremes and by the sustained average concentration of pollutants. Hence, a model should predict not only the 'extreme' ranges but also the 'middle' ranges of pollutant concentrations, i.e. the entire range. Hybrid modelling is one technique that estimates/predicts the 'entire range' of the distribution of pollutant concentrations by combining deterministic models with suitable statistical distribution models (Jakeman et al., 1988). In the present paper, a hybrid model has been developed to predict the carbon monoxide (CO) concentration distributions at one of the traffic intersections, Income Tax Office (ITO), in Delhi, where the traffic is heterogeneous in nature, consisting of light vehicles, heavy vehicles, three-wheelers (auto rickshaws) and two-wheelers (scooters, motorcycles, etc.), and the meteorology is 'tropical'. The model combines the general finite line source model (GFLSM) as its deterministic component and the log-logistic distribution (LLD) model as its statistical component. The hybrid (GFLSM-LLD) model is then applied at the ITO intersection. The results show that the hybrid model predictions match the observed CO concentration data within the 5-99 percentile range. The model is further validated at a different street location, i.e. the Sirifort roadway. The validation results show that the model predicts CO concentrations fairly well (d=0.91) in the 10-95 percentile range. A regulatory compliance criterion is also developed to estimate the probability of the hourly CO concentration exceeding the National Ambient Air Quality Standards (NAAQS) of India.
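    A minimal sketch of the statistical half of such a hybrid model: the log-logistic CDF and the resulting probability of exceeding a standard. The scale and shape parameters below are hypothetical placeholders, not the values fitted in the paper.

```python
def loglogistic_cdf(x, alpha, beta):
    """CDF of the log-logistic distribution with scale alpha > 0 and
    shape beta > 0: F(x) = 1 / (1 + (x/alpha)**-beta) for x > 0."""
    if x <= 0:
        return 0.0
    return 1.0 / (1.0 + (x / alpha) ** -beta)

def prob_exceedance(standard, alpha, beta):
    """Probability that an hourly concentration exceeds the standard."""
    return 1.0 - loglogistic_cdf(standard, alpha, beta)

# Hypothetical fitted parameters (units of mg/m^3); the paper's fitted
# values are not reported in the abstract. At x = alpha the CDF is 0.5.
alpha, beta = 4.0, 2.5
p = prob_exceedance(4.0, alpha, beta)
```

    With fitted (alpha, beta) in hand, prob_exceedance applied at the NAAQS limit is exactly the regulatory compliance estimate the abstract describes.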

  15. [Evaluation of the Beckman Coulter® HmX™ hematology analyzer at the University Hospital of Oran].

    PubMed

    Zmouli, N; Moulasserdoun, K; Seghier, F

    2013-11-01

    The choice of a haematology analyzer is a decisive step, which has to take into account both the quality of the results and economic imperatives: the workload, structure and organization of the laboratory. It is in this spirit that we evaluated, over a period of 3 months, the HmX™ Coulter haematology analyzer with sample loader from the Beckman company. This instrument performs the complete blood count, the leukocyte differential and the reticulocyte count. First, we assessed the intrinsic characteristics of the device. Second, we assessed the relevance, sensitivity and specificity of the flags by comparison with the reference method, optical microscopy. For that purpose, 125 blood smears from the haematology and intensive care departments were examined by optical microscopy. The technical tests were performed according to the recommendations of the International Committee for the evaluation of haematology analyzers. The analytical performance was satisfactory, in particular the wide linearity range and the absence of carry-over. As regards the evaluation of the flagging system: the rejection rate is 63%, the sensitivity 86%, the specificity 70%, the positive predictive value 80%, the negative predictive value 78% and the efficiency 80%. The myelaemia and atypical lymphocyte flags were never sources of false negatives. The erythroblast and platelet aggregate flags did not generate false positives. The blast cell flag was responsible for a single case of false negative. The precision of the analyzer is satisfactory: the absence of carry-over, the wide linearity range for leukocytes, red blood cells and platelets, as well as the good relevance of the flags with regard to the anomalies found on the peripheral blood smear. From the user-friendliness and practicability point of view, the HmX™ Coulter was greatly appreciated. Copyright © 2013 Elsevier Masson SAS. All rights reserved.

  16. Discrete Deterministic and Stochastic Petri Nets

    NASA Technical Reports Server (NTRS)

    Zijal, Robert; Ciardo, Gianfranco

    1996-01-01

    Petri nets augmented with timing specifications gained a wide acceptance in the area of performance and reliability evaluation of complex systems exhibiting concurrency, synchronization, and conflicts. The state space of time-extended Petri nets is mapped onto its basic underlying stochastic process, which can be shown to be Markovian under the assumption of exponentially distributed firing times. The integration of exponentially and non-exponentially distributed timing is still one of the major problems for the analysis and was first attacked for continuous time Petri nets at the cost of structural or analytical restrictions. We propose a discrete deterministic and stochastic Petri net (DDSPN) formalism with no imposed structural or analytical restrictions where transitions can fire either in zero time or according to arbitrary firing times that can be represented as the time to absorption in a finite absorbing discrete time Markov chain (DTMC). Exponentially distributed firing times are then approximated arbitrarily well by geometric distributions. Deterministic firing times are a special case of the geometric distribution. The underlying stochastic process of a DDSPN is then also a DTMC, from which the transient and stationary solution can be obtained by standard techniques. A comprehensive algorithm and some state space reduction techniques for the analysis of DDSPNs are presented comprising the automatic detection of conflicts and confusions, which removes a major obstacle for the analysis of discrete time models.
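    The core computation behind such firing-time distributions, the expected time to absorption in a finite absorbing DTMC, can be sketched as follows; the two toy chains (a single geometric phase, and two geometric phases in series approximating an Erlang delay) are illustrative examples, not part of the DDSPN formalism itself.

```python
def mean_absorption_times(Q):
    """Solve (I - Q) t = 1 for the expected number of steps to absorption
    from each transient state of an absorbing DTMC, where Q is the
    transient-to-transient block of the transition matrix.
    Plain Gauss-Jordan elimination keeps the sketch dependency-free."""
    n = len(Q)
    A = [[(1.0 if i == j else 0.0) - Q[i][j] for j in range(n)] + [1.0]
         for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(n):
            if r != col:
                f = A[r][col] / A[col][col]
                A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    return [A[i][n] / A[i][i] for i in range(n)]

# A geometric firing time is the simplest case: one transient state that
# self-loops with probability 1 - p, so the mean delay is 1/p.
t_single = mean_absorption_times([[0.75]])          # p = 0.25, mean 4
# Two geometric phases in series (p1 = 0.5, p2 = 0.25): mean 1/p1 + 1/p2.
t_series = mean_absorption_times([[0.5, 0.5], [0.0, 0.75]])
```

    Since deterministic delays are geometric distributions with p = 1, the same computation covers the zero-time, deterministic and phase-type firing times the formalism allows.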

  17. Identification of gene regulation models from single-cell data

    NASA Astrophysics Data System (ADS)

    Weber, Lisa; Raymond, William; Munsky, Brian

    2018-09-01

    In quantitative analyses of biological processes, one may use many different scales of models (e.g. spatial or non-spatial, deterministic or stochastic, time-varying or at steady-state) or many different approaches to match models to experimental data (e.g. model fitting or parameter uncertainty/sloppiness quantification with different experiment designs). These different analyses can lead to surprisingly different results, even when applied to the same data and the same model. We use a simplified gene regulation model to illustrate many of these concerns, especially for ODE analyses of deterministic processes, chemical master equation and finite state projection analyses of heterogeneous processes, and stochastic simulations. For each analysis, we employ MATLAB and Python software to consider a time-dependent input signal (e.g. a kinase nuclear translocation) and several model hypotheses, along with simulated single-cell data. We illustrate different approaches (e.g. deterministic and stochastic) to identify the mechanisms and parameters of the same model from the same simulated data. For each approach, we explore how uncertainty in parameter space varies with respect to the chosen analysis approach or specific experiment design. We conclude with a discussion of how our simulated results relate to the integration of experimental and computational investigations to explore signal-activated gene expression models in yeast (Neuert et al 2013 Science 339 584–7) and human cells (Senecal et al 2014 Cell Rep. 8 75–83).
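    As a minimal, self-contained example of the stochastic-simulation side of such analyses, the sketch below runs Gillespie's algorithm on a constitutive birth-death gene expression model (production at rate k, first-order decay at rate g), whose stationary copy-number distribution is Poisson with mean k/g. The model and rates are generic illustrations, not the paper's regulation model.

```python
import random

def ssa_birth_death(k=10.0, g=1.0, t_end=500.0, seed=0):
    """Gillespie stochastic simulation of constitutive expression:
    0 -> mRNA at rate k, mRNA -> 0 at rate g per molecule.
    Returns the copy number recorded after each reaction event."""
    rng = random.Random(seed)
    t, x = 0.0, 0
    samples = []
    while t < t_end:
        a1, a2 = k, g * x          # propensities of the two reactions
        a0 = a1 + a2
        t += rng.expovariate(a0)   # waiting time to the next event
        if rng.random() * a0 < a1:
            x += 1                 # production
        else:
            x -= 1                 # degradation
        samples.append(x)
    return samples

samples = ssa_birth_death()
burn = samples[len(samples) // 5:]          # discard the transient
mean_copy = sum(burn) / len(burn)           # should sit near k/g = 10
```

    Collecting such trajectories across many seeds is the simulated single-cell data against which ODE, master-equation and finite-state-projection analyses can then be compared.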

  18. Thermal-Structural Optimization of Integrated Cryogenic Propellant Tank Concepts for a Reusable Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Johnson, Theodore F.; Waters, W. Allen; Singer, Thomas N.; Haftka, Raphael T.

    2004-01-01

    A next generation reusable launch vehicle (RLV) will require thermally efficient and light-weight cryogenic propellant tank structures. Since these tanks will be weight-critical, analytical tools must be developed to aid in sizing the thickness of insulation layers and structural geometry for optimal performance. Finite element method (FEM) models of the tank and insulation layers were created to analyze the thermal performance of the cryogenic insulation layer and thermal protection system (TPS) of the tanks. The thermal conditions of ground-hold and re-entry/soak-through for a typical RLV mission were used in the thermal sizing study. A general-purpose nonlinear FEM analysis code, capable of using temperature and pressure dependent material properties, was used as the thermal analysis code. Mechanical loads from ground handling and proof-pressure testing were used to size the structural geometry of an aluminum cryogenic tank wall. Nonlinear deterministic optimization and reliability optimization techniques were the analytical tools used to size the geometry of the isogrid stiffeners and thickness of the skin. The results from the sizing study indicate that a commercial FEM code can be used for thermal analyses to size the insulation thicknesses where the temperature and pressure were varied. The results from the structural sizing study show that using combined deterministic and reliability optimization techniques can obtain alternate and lighter designs than the designs obtained from deterministic optimization methods alone.

  19. Probabilistic Evaluation of Advanced Ceramic Matrix Composite Structures

    NASA Technical Reports Server (NTRS)

    Abumeri, Galib H.; Chamis, Christos C.

    2003-01-01

    The objective of this report is to summarize the deterministic and probabilistic structural evaluation results of two structures made with advanced ceramic composites (CMC): internally pressurized tube and uniformly loaded flange. The deterministic structural evaluation includes stress, displacement, and buckling analyses. It is carried out using the finite element code MHOST, developed for the 3-D inelastic analysis of structures that are made with advanced materials. The probabilistic evaluation is performed using the integrated probabilistic assessment of composite structures computer code IPACS. The effects of uncertainties in primitive variables related to the material, fabrication process, and loadings on the material property and structural response behavior are quantified. The primitive variables considered are: thermo-mechanical properties of fiber and matrix, fiber and void volume ratios, use temperature, and pressure. The probabilistic structural analysis and probabilistic strength results are used by IPACS to perform reliability and risk evaluation of the two structures. The results will show that the sensitivity information obtained for the two composite structures from the computational simulation can be used to alter the design process to meet desired service requirements. In addition to detailed probabilistic analysis of the two structures, the following were performed specifically on the CMC tube: (1) predicted the failure load and the buckling load, (2) performed coupled non-deterministic multi-disciplinary structural analysis, and (3) demonstrated that probabilistic sensitivities can be used to select a reduced set of design variables for optimization.
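    The basic idea behind such probabilistic evaluations, propagating uncertain primitive variables to a probability of failure, can be illustrated with a crude Monte Carlo estimate of P(load > capacity) for normally distributed variables. This is a generic stand-in for the approach, not the IPACS algorithm, and all numbers are invented.

```python
import random

def mc_failure_probability(capacity_mu, capacity_sigma,
                           load_mu, load_sigma, n=100_000, seed=0):
    """Monte Carlo estimate of P(load > capacity) when both capacity and
    load are independent normal random variables (illustrative stand-in
    for probabilistic structural assessment; not the IPACS method)."""
    rng = random.Random(seed)
    fails = sum(
        rng.gauss(load_mu, load_sigma) > rng.gauss(capacity_mu, capacity_sigma)
        for _ in range(n))
    return fails / n

# Hypothetical strength and demand statistics (arbitrary units).
p_f = mc_failure_probability(100.0, 10.0, 70.0, 10.0)
```

    Repeating the estimate while perturbing one input standard deviation at a time gives a rough sensitivity ranking of the uncertain variables, the kind of information the report uses to reduce the design-variable set.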

  20. Numerical modeling of the transmission dynamics of drug-sensitive and drug-resistant HSV-2

    NASA Astrophysics Data System (ADS)

    Gumel, A. B.

    2001-03-01

    A competitive finite-difference method will be constructed and used to solve a modified deterministic model for the spread of herpes simplex virus type-2 (HSV-2) within a given population. The model monitors the transmission dynamics and control of drug-sensitive and drug-resistant HSV-2. Unlike the fourth-order Runge-Kutta method (RK4), which fails when the discretization parameters exceed certain values, the novel numerical method to be developed in this paper gives convergent results for all parameter values.
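    The flavor of such a "competitive" (nonstandard) finite-difference scheme can be shown on the logistic equation du/dt = u(1 - u), used here as an illustrative stand-in for the HSV-2 model: discretising the nonlinear term non-locally as u_n * u_{n+1} gives an update that converges for every step size, while a standard explicit scheme fails for large steps.

```python
def nsfd_logistic(u0, h, steps):
    """Mickens-type nonstandard finite-difference scheme for
    du/dt = u(1 - u): writing (u_{n+1} - u_n)/h = u_n (1 - u_{n+1})
    yields the explicit update u_{n+1} = u_n (1 + h) / (1 + h u_n),
    which preserves positivity and converges to the fixed point u = 1
    for every step size h > 0."""
    u = u0
    for _ in range(steps):
        u = u * (1 + h) / (1 + h * u)
    return u

def euler_logistic(u0, h, steps):
    """Standard explicit scheme for comparison; it loses stability and
    produces spurious negative values once h is large."""
    u = u0
    for _ in range(steps):
        u = u + h * u * (1 - u)
    return u

big_h = nsfd_logistic(0.1, h=5.0, steps=200)   # still converges to 1
bad = euler_logistic(0.1, h=5.0, steps=3)      # already negative
```

    The same trick, evaluating loss terms at the new time level, is what lets such schemes give convergent, positivity-preserving results for all parameter values where RK4 fails.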

  1. Analysis of whisker-toughened CMC structural components using an interactive reliability model

    NASA Technical Reports Server (NTRS)

    Duffy, Stephen F.; Palko, Joseph L.

    1992-01-01

    Realizing wider utilization of ceramic matrix composites (CMC) requires the development of advanced structural analysis technologies. This article focuses on the use of interactive reliability models to predict component probability of failure. The deterministic William-Warnke failure criterion serves as theoretical basis for the reliability model presented here. The model has been implemented into a test-bed software program. This computer program has been coupled to a general-purpose finite element program. A simple structural problem is presented to illustrate the reliability model and the computer algorithm.

  2. Classifying elementary cellular automata using compressibility, diversity and sensitivity measures

    NASA Astrophysics Data System (ADS)

    Ninagawa, Shigeru; Adamatzky, Andrew

    2014-10-01

    An elementary cellular automaton (ECA) is a one-dimensional, synchronous, binary automaton, where each cell update depends on its own state and states of its two closest neighbors. We attempt to uncover correlations between the following measures of ECA behavior: compressibility, sensitivity and diversity. The compressibility of ECA configurations is calculated using the Lempel-Ziv (LZ) compression algorithm LZ78. The sensitivity of ECA rules to initial conditions and perturbations is evaluated using Derrida coefficients. The generative morphological diversity shows how many different neighborhood states are produced from a single nonquiescent cell. We found no significant correlation between sensitivity and compressibility. There is a substantial correlation between generative diversity and compressibility. Using sensitivity, compressibility and diversity, we uncover and characterize novel groupings of rules.
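    A compact sketch of the compressibility measure: evolve an ECA from a single nonquiescent cell and count LZ78 phrases in the space-time history (for a fixed length, fewer phrases means higher compressibility). Rule numbering follows the usual Wolfram convention; the lattice width, depth and the raw phrase count as a proxy are simplifications of the paper's setup.

```python
def eca_step(cells, rule):
    """One synchronous update of an elementary cellular automaton on a
    ring; `rule` is the Wolfram rule number (0-255)."""
    n = len(cells)
    table = [(rule >> i) & 1 for i in range(8)]
    return [table[(cells[(i - 1) % n] << 2) | (cells[i] << 1)
                  | cells[(i + 1) % n]]
            for i in range(n)]

def lz78_phrases(bits):
    """Number of phrases in the LZ78 parsing of a bit sequence."""
    dictionary, phrases, cur = {}, 0, ""
    for b in bits:
        cur += str(b)
        if cur not in dictionary:
            dictionary[cur] = True
            phrases += 1
            cur = ""
    return phrases + (1 if cur else 0)

def compressibility(rule, width=64, steps=64):
    cells = [0] * width
    cells[width // 2] = 1   # single nonquiescent cell
    history = []
    for _ in range(steps):
        history.extend(cells)
        cells = eca_step(cells, rule)
    return lz78_phrases(history)

# The quiescent rule 0 compresses far better than the chaotic rule 30.
simple, chaotic = compressibility(0), compressibility(30)
```

    Sweeping all 256 rules through this function and pairing the counts with Derrida coefficients and generative diversity reproduces the kind of correlation study the paper performs.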

  3. A cellular automaton model for evacuation flow using game theory

    NASA Astrophysics Data System (ADS)

    Guan, Junbiao; Wang, Kaihua; Chen, Fangyue

    2016-11-01

    Game theory serves as a good tool to explore crowd dynamic conflicts during evacuation processes. The purpose of this study is to simulate the complicated interaction behavior among the conflicting pedestrians in an evacuation flow. Two types of pedestrians, namely, defectors and cooperators, are considered, and two important factors including fear index and cost coefficient are taken into account. By combining the snowdrift game theory with a cellular automaton (CA) model, it is shown that the increase of fear index and cost coefficient will lengthen the evacuation time, which is more apparent for large values of cost coefficient. Meanwhile, it is found that the defectors to cooperators ratio could always tend to consistent states despite different values of parameters, largely owing to self-organization effects.

  4. Self-organisation in Cellular Automata with Coalescent Particles: Qualitative and Quantitative Approaches

    NASA Astrophysics Data System (ADS)

    Hellouin de Menibus, Benjamin; Sablik, Mathieu

    2017-06-01

    This article introduces new tools to study self-organisation in a family of simple cellular automata which contain some particle-like objects with good collision properties (coalescence) in their time evolution. We draw an initial configuration at random according to some initial shift-ergodic measure, and use the limit measure to describe the asymptotic behaviour of the automata. We first take a qualitative approach, i.e. we obtain information on the limit measure(s). We prove that only particles moving in one particular direction can persist asymptotically. This provides some previously unknown information on the limit measures of various deterministic and probabilistic cellular automata: 3 and 4-cyclic cellular automata [introduced by Fisch (J Theor Probab 3(2):311-338, 1990; Phys D 45(1-3):19-25, 1990)], one-sided captive cellular automata [introduced by Theyssier (Captive Cellular Automata, 2004)], the majority-traffic cellular automaton, and a self-stabilisation process towards a discrete line [introduced by Regnault and Rémila (in: Mathematical Foundations of Computer Science 2015—40th International Symposium, MFCS 2015, Milan, Italy, Proceedings, Part I, 2015)]. Second, we restrict our study to a subclass, the gliders cellular automata. For this class we show quantitative results, consisting in the asymptotic law of some parameters: the entry times [generalising Kůrka et al. (in: Proceedings of AUTOMATA, 2011)], the density of particles and the rate of convergence to the limit measure.

  5. Anomalous finite-size effects in the Battle of the Sexes

    NASA Astrophysics Data System (ADS)

    Cremer, J.; Reichenbach, T.; Frey, E.

    2008-06-01

    The Battle of the Sexes describes asymmetric conflicts in the mating behavior of males and females. Males can be philanderers or faithful, while females are either fast or coy, leading to cyclic dynamics. The adjusted replicator equation predicts stable coexistence of all four strategies. In this situation, we consider the effects of fluctuations stemming from a finite population size. We show that they unavoidably lead to the extinction of two strategies in the population. However, the typical time until extinction occurs is strongly prolonged with increasing system size. In the emerging time window, a quasi-stationary probability distribution forms that is anomalously flat in the vicinity of the coexistence state. This behavior originates in a vanishing linear deterministic drift near the fixed point. We provide numerical data as well as an analytical approach to the mean extinction time and the quasi-stationary probability distribution.

  6. The DANTE Boltzmann transport solver: An unstructured mesh, 3-D, spherical harmonics algorithm compatible with parallel computer architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGhee, J.M.; Roberts, R.M.; Morel, J.E.

    1997-06-01

    A spherical harmonics research code (DANTE) has been developed which is compatible with parallel computer architectures. DANTE provides 3-D, multi-material, deterministic, transport capabilities using an arbitrary finite element mesh. The linearized Boltzmann transport equation is solved in a second order self-adjoint form utilizing a Galerkin finite element spatial differencing scheme. The core solver utilizes a preconditioned conjugate gradient algorithm. Other distinguishing features of the code include options for discrete-ordinates and simplified spherical harmonics angular differencing, an exact Marshak boundary treatment for arbitrarily oriented boundary faces, in-line matrix construction techniques to minimize memory consumption, and an effective diffusion based preconditioner for scattering dominated problems. Algorithm efficiency is demonstrated for a massively parallel SIMD architecture (CM-5), and compatibility with MPP multiprocessor platforms or workstation clusters is anticipated.

  7. Recurrence time statistics of landslide events simulated by a cellular automaton model

    NASA Astrophysics Data System (ADS)

    Piegari, Ester; Di Maio, Rosa; Avella, Adolfo

    2014-05-01

The recurrence time statistics of a cellular automaton model of landslide events are analyzed by performing a numerical analysis in the parameter space and estimating Fano factor behaviors. The model is an extended version of the OFC model, a paradigm for SOC in non-conserved systems, but it works differently from the original OFC model in that a finite value of the driving rate is applied. By driving the system to instability at different rates, the model exhibits a smooth transition from a correlated to an uncorrelated regime, as the effect of a change in the predominant mechanism that propagates instability. If the rate at which instability is approached is small, chain processes dominate the landslide dynamics, and power laws govern the probability distributions. However, the power-law regime typical of SOC-like systems is found in a range of return intervals that becomes shorter and shorter as the driving rate increases. Indeed, if the rates at which instability is approached are large, domino processes are no longer active in propagating instability, and large events simply occur because a large number of cells simultaneously reach instability. Such a gradual loss of effectiveness of the chain propagation mechanism causes the system to gradually enter an uncorrelated regime where recurrence time distributions are characterized by Weibull behaviors. Simulation results are qualitatively compared with those from a recent analysis performed by Witt et al. (Earth Surf. Process. Landforms, 35, 1138, 2010) for the first complete databases of landslide occurrences over a period as long as fifty years.
From the comparison with the extensive landslide data set, the numerical analysis suggests that the statistics of such landslide data seem to be described by a crossover region between a correlated regime and an uncorrelated regime, where recurrence time distributions are characterized by power-law and Weibull behaviors for short and long return times, respectively. Finally, in such a region of the parameter space, clear indications of temporal correlations and clustering from the Fano factor behaviors support, at least in part, the analysis performed by Witt et al. (2010).
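The Fano factor used above can be estimated directly from an event series as the variance-to-mean ratio of event counts in fixed time windows. A minimal sketch on a synthetic, uncorrelated occurrence series (the rate and window size are arbitrary illustrative choices, not the paper's):

```python
import random

def fano_factor(events, window):
    """Fano factor: variance/mean of event counts in non-overlapping windows.
    F ~ 1 for an uncorrelated (Poisson-like) series; F > 1 signals clustering,
    F < 1 signals regularity."""
    counts = [sum(events[i:i + window])
              for i in range(0, len(events) - window + 1, window)]
    mean = sum(counts) / len(counts)
    var = sum((c - mean) ** 2 for c in counts) / len(counts)
    return var / mean

random.seed(2)
# synthetic uncorrelated occurrence series: one event per step with prob p
p = 0.1
series = [1 if random.random() < p else 0 for _ in range(100_000)]
F = fano_factor(series, window=100)   # binomial counts, so E[F] = 1 - p
```

Applied to the automaton's event series, departures of F from 1 as a function of window size are what reveal the temporal clustering discussed above.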

  8. Self-organized criticality in a two-dimensional cellular automaton model of a magnetic flux tube with background flow

    NASA Astrophysics Data System (ADS)

    Dănilă, B.; Harko, T.; Mocanu, G.

    2015-11-01

We investigate the transition to self-organized criticality in a two-dimensional model of a flux tube with a background flow. The magnetic induction equation, represented by a partial differential equation with a stochastic source term, is discretized and implemented on a two-dimensional cellular automaton. The energy released by the automaton during one relaxation event is the magnetic energy. As a result of the simulations, we obtain the time evolution of the energy release, of the system control parameter, of the event lifetime distribution and of the event size distribution, respectively, and we establish that a self-organized critical state is indeed reached by the system. Moreover, energetic initial impulses in the magnetohydrodynamic flow can lead to one-dimensional signatures in the magnetic two-dimensional system, once the self-organized critical regime is established. The application of the model to the study of gamma-ray bursts (GRBs) is briefly considered, and it is shown that some astrophysical parameters of the bursts, such as the light curves, the maximum released energy and the number of peaks in the light curve, can be reproduced and explained, at least on a qualitative level, by working in a framework in which the system settles into a self-organized critical state via magnetic reconnection processes in the magnetized GRB fireball.

  9. Update schemes of multi-velocity floor field cellular automaton for pedestrian dynamics

    NASA Astrophysics Data System (ADS)

    Luo, Lin; Fu, Zhijian; Cheng, Han; Yang, Lizhong

    2018-02-01

Modeling pedestrian movement is an interesting problem both in statistical physics and in computational physics. Update schemes of cellular automaton (CA) models for pedestrian dynamics govern the schedule of pedestrian movement. Different update schemes usually make the models behave in different ways, so a model should be carefully recalibrated when its scheme changes. Thus, in this paper, we investigated the influence of four different update schemes, namely the parallel/synchronous scheme, the random scheme, the ordered-sequential scheme and the shuffled scheme, on pedestrian dynamics. The multi-velocity floor field cellular automaton (FFCA), which accounts for the changes of pedestrians' moving properties along walking paths and the heterogeneity of pedestrians' walking abilities, was used. Only under the parallel scheme must collision detection and resolution be considered, which makes it differ greatly from the other update schemes. For pedestrian evacuation under the parallel scheme, the evacuation time is enlarged, and the difference in pedestrians' walking abilities is better reflected. In the face of a bottleneck, for example an exit, the parallel scheme leads to a longer congestion period and a more dispersed density distribution. The exit flow and the space-time distributions of density and velocity show significant discrepancies under the four update schemes when simulating pedestrian flow with high desired velocity. Update schemes may have no influence on pedestrians' simulated tendency to follow others, but the sequential and shuffled update schemes may enhance the effect of pedestrians' familiarity with the environment.
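The four schedules can be illustrated on a deliberately simplified 1-D hop-right lattice (a toy stand-in, not the multi-velocity FFCA of the paper): under the parallel scheme all agents decide against the *old* configuration and conflicts must be resolved, while the other schemes move agents one at a time in different orders.

```python
import random

def step(occ, scheme, rng):
    """One update of a 1-D lattice of 0/1 cells where each agent tries to hop
    one cell to the right if that cell is empty. Toy model illustrating how
    the four update schemes schedule moves."""
    n = len(occ)
    agents = [i for i in range(n) if occ[i]]
    if scheme == "parallel":
        # everyone decides simultaneously against the pre-step state; in this
        # 1-D hop-right toy two agents can never claim the same target cell
        targets = [i + 1 for i in agents if i + 1 < n and occ[i + 1] == 0]
        for t in targets:
            occ[t - 1], occ[t] = 0, 1
        return occ
    if scheme == "random":        # draw agents one at a time, with replacement
        order = [rng.choice(agents) for _ in agents]
    elif scheme == "sequential":  # fixed left-to-right order
        order = agents
    elif scheme == "shuffled":    # random permutation, each agent exactly once
        order = agents[:]
        rng.shuffle(order)
    for i in order:               # asynchronous moves see intermediate states
        if occ[i] and i + 1 < n and occ[i + 1] == 0:
            occ[i], occ[i + 1] = 0, 1
    return occ
```

Even in this toy, the schemes diverge: a right-to-left sequential order would let a whole queue advance in one step, whereas the parallel scheme only moves agents whose target was empty before the step.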

  10. A Cellular Automaton Framework for Infectious Disease Spread Simulation

    PubMed Central

    Pfeifer, Bernhard; Kugler, Karl; Tejada, Maria M; Baumgartner, Christian; Seger, Michael; Osl, Melanie; Netzer, Michael; Handler, Michael; Dander, Andreas; Wurz, Manfred; Graber, Armin; Tilg, Bernhard

    2008-01-01

In this paper, a cellular automaton framework for processing the spatiotemporal spread of infectious diseases is presented. The developed environment simulates and visualizes how infectious diseases might spread, and hence provides a powerful instrument for health care organizations to generate disease prevention and contingency plans. In this study, the outbreak of an avian-flu-like virus was modeled in the state of Tyrol, and various scenarios such as quarantine, the effect of different medications on viral spread and changes of social behavior were simulated. The proposed framework is implemented in the programming language Java. The setup of the simulation environment requires specification of the disease parameters and of the geographical information, using a population-density-colored map enriched with demographic data. The results of the numerical simulations and the analysis of the computed parameters will be used to gain a deeper understanding of how the disease spreading mechanisms work, and how to protect the population from contracting the disease. Strategies for optimization of medical treatment and vaccination regimens will also be investigated using our cellular automaton framework. In this study, six different scenarios were simulated. They showed that geographical barriers may help to slow down the spread of an infectious disease; however, when an aggressive and deadly communicable disease spreads, only quarantine and controlled medical treatment are able to stop the outbreak, if at all. PMID:19415136
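A disease-spread cellular automaton of the general kind described can be sketched as a grid of susceptible/infected/recovered cells with a local infection rule. All parameters and the grid here are illustrative assumptions for a sketch, not the study's calibrated Tyrol model:

```python
import random

S, I, R = 0, 1, 2   # susceptible, infected, recovered cell states

def step(grid, p_infect, p_recover, rng):
    """One synchronous SIR update: susceptibles catch the disease from each
    infected 4-neighbour with prob p_infect; infecteds recover with p_recover."""
    n = len(grid)
    new = [row[:] for row in grid]
    for x in range(n):
        for y in range(n):
            if grid[x][y] == S:
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nx, ny = x + dx, y + dy
                    if 0 <= nx < n and 0 <= ny < n and grid[nx][ny] == I:
                        if rng.random() < p_infect:
                            new[x][y] = I
                            break
            elif grid[x][y] == I and rng.random() < p_recover:
                new[x][y] = R
    return new

rng = random.Random(1)
n = 21
grid = [[S] * n for _ in range(n)]
grid[n // 2][n // 2] = I                  # index case in the centre
for _ in range(40):
    grid = step(grid, p_infect=0.5, p_recover=0.05, rng=rng)
infected_ever = sum(cell != S for row in grid for cell in row)
```

Quarantine or geographical barriers would enter such a sketch as cells with zero transmission probability, which is how a framework like the one above can compare intervention scenarios.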

  11. Mode identification using stochastic hybrid models with applications to conflict detection and resolution

    NASA Astrophysics Data System (ADS)

    Naseri Kouzehgarani, Asal

    2009-12-01

Most models of aircraft trajectories are non-linear and stochastic in nature, and their internal parameters are often poorly defined. The ability to model, simulate and analyze realistic air traffic management conflict detection scenarios in a scalable, composable, multi-aircraft fashion is an extremely difficult endeavor. Accurate techniques for aircraft mode detection are critical in order to enable the precise projection of aircraft conflicts, and for the enactment of altitude separation resolution strategies. Conflict detection is an inherently probabilistic endeavor; our ability to detect conflicts in a timely and accurate manner over a fixed time horizon is traded off against the increased human workload created by false alarms (that is, situations that would not develop into an actual conflict, or that would resolve naturally within the appropriate time horizon), thereby introducing a measure of probabilistic uncertainty in any decision aid fashioned to assist air traffic controllers. The interaction of the continuous dynamics of the aircraft, used for prediction purposes, with the discrete conflict detection logic gives rise to the hybrid nature of the overall system. The introduction of the probabilistic element, common to decision alerting and aiding devices, places the conflict detection and resolution problem in the domain of probabilistic hybrid phenomena. A hidden Markov model (HMM) has two stochastic components: a finite-state Markov chain and a finite set of output probability distributions. In other words, it is an unobservable (hidden) stochastic process that can only be observed through another set of stochastic processes that generates the sequence of observations.
The problem of self-separation in distributed air traffic management reduces to the ability of aircraft to communicate state information to neighboring aircraft, as well as to model the evolution of aircraft trajectories between communications, in the presence of probabilistically uncertain dynamics as well as partially observable and uncertain data. We introduce the Hybrid Hidden Markov Modeling (HHMM) formalism to enable the prediction of the stochastic aircraft states (and thus, potential conflicts), by combining elements of the probabilistic timed input/output automaton and the partially observable Markov decision process frameworks, along with the novel addition of a Markovian scheduler to remove the non-deterministic elements arising from the enabling of several actions simultaneously. Comparisons of aircraft in level, climbing/descending and turning flight are performed, and unknown flight track data are evaluated probabilistically against the tuned model in order to assess the effectiveness of the model in detecting the switch between multiple flight modes for a given aircraft. This also allows for the generation of a probability distribution over the execution traces of the hybrid hidden Markov model, which then enables the prediction of the states of aircraft based on partially observable and uncertain data. Based on the composition properties of the HHMM, we study a decentralized air traffic system where aircraft move along streams and can perform cruise, accelerate, climb and turn maneuvers. We develop a common decentralized policy for conflict avoidance with spatially distributed agents (aircraft in the sky) and assure its safety properties via correctness proofs.
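The HMM machinery underlying mode detection can be sketched with the standard forward algorithm, which turns a sequence of observations into a likelihood and filtered probabilities over the hidden modes. The two "modes" and all numbers below are illustrative assumptions, not the paper's tuned flight models:

```python
def forward(pi, A, B, obs):
    """Discrete-HMM forward algorithm.
    pi: initial mode probabilities, A[i][j]: mode transition probabilities,
    B[i][k]: probability of observation symbol k in mode i.
    Returns the sequence likelihood and the filtered posterior over modes."""
    m = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(m)]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(m)) * B[j][o]
                 for j in range(m)]
    total = sum(alpha)
    return total, [a / total for a in alpha]

# hypothetical modes: 0 = level flight, 1 = climb
# observation symbols: 0 = small altitude change, 1 = large altitude change
pi = [0.9, 0.1]
A  = [[0.95, 0.05], [0.10, 0.90]]
B  = [[0.8, 0.2], [0.1, 0.9]]
lik, post = forward(pi, A, B, obs=[0, 0, 1, 1, 1])   # run of large changes
```

After several "large altitude change" observations, the filtered posterior shifts to the climb mode; detecting exactly this kind of mode switch from track data is the role the (hybrid) HMM plays above.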

  12. A DNA Logic Gate Automaton for Detection of Rabies and Other Lyssaviruses.

    PubMed

    Vijayakumar, Pavithra; Macdonald, Joanne

    2017-07-05

    Immediate activation of biosensors is not always desirable, particularly if activation is due to non-specific interactions. Here we demonstrate the use of deoxyribozyme-based logic gate networks arranged into visual displays to precisely control activation of biosensors, and demonstrate a prototype molecular automaton able to discriminate between seven different genotypes of Lyssaviruses, including Rabies virus. The device uses novel mixed-base logic gates to enable detection of the large diversity of Lyssavirus sequence populations, while an ANDNOT logic gate prevents non-specific activation across genotypes. The resultant device provides a user-friendly digital-like, but molecule-powered, dot-matrix text output for unequivocal results read-out that is highly relevant for point of care applications. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. Simulation of the Burridge-Knopoff model of earthquakes with variable range stress transfer.

    PubMed

    Xia, Junchao; Gould, Harvey; Klein, W; Rundle, J B

    2005-12-09

    Simple models of earthquake faults are important for understanding the mechanisms for their observed behavior, such as Gutenberg-Richter scaling and the relation between large and small events, which is the basis for various forecasting methods. Although cellular automaton models have been studied extensively in the long-range stress transfer limit, this limit has not been studied for the Burridge-Knopoff model, which includes more realistic friction forces and inertia. We find that the latter model with long-range stress transfer exhibits qualitatively different behavior than both the long-range cellular automaton models and the usual Burridge-Knopoff model with nearest-neighbor springs, depending on the nature of the velocity-weakening friction force. These results have important implications for our understanding of earthquakes and other driven dissipative systems.

  14. An improved Burgers cellular automaton model for bicycle flow

    NASA Astrophysics Data System (ADS)

    Xue, Shuqi; Jia, Bin; Jiang, Rui; Li, Xingang; Shan, Jingjing

    2017-12-01

As an energy-efficient and healthy transport mode, bicycling has recently attracted the attention of governments, transport planners, and researchers. The dynamic characteristics of bicycle flow must be investigated to improve the facility design and traffic operation of bicycling. We model the bicycle flow by using an improved Burgers cellular automaton model. Through a following-move mechanism, the modified model enables bicycles to move smoothly and increases the critical density to a more rational level than the original model. The model is calibrated and validated by using experimental data and field data. The results show that the improved model can effectively simulate bicycle flow. The performance of the model under different parameters is investigated and discussed. Strengths and limitations of the improved model are discussed, and directions for future work are suggested.
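For reference, the baseline Burgers cellular automaton that such improved models build on lets each cell hold up to L bicycles and moves forward as many as the downstream cell can accept. A minimal periodic-road sketch (the capacity L and the initial state are arbitrary choices for illustration, not the paper's calibration):

```python
def burgers_ca_step(U, L):
    """One step of the Burgers CA: U[j] bicycles occupy cell j (0 <= U[j] <= L);
    min(U[j], L - U[j+1]) of them hop forward to cell j+1. Periodic road."""
    n = len(U)
    out_flow = [min(U[j], L - U[(j + 1) % n]) for j in range(n)]
    return [U[j] - out_flow[j] + out_flow[(j - 1) % n] for j in range(n)]

U = [2, 0, 1, 0, 0, 3]   # initial occupancies on a 6-cell ring
L = 3                    # cell capacity
total = sum(U)
for _ in range(10):
    U = burgers_ca_step(U, L)
```

The rule conserves the number of bicycles and keeps every cell within capacity by construction; the improved model in the paper modifies how following bicycles claim the vacated space.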

  15. Modeling transport across the running-sandpile cellular automaton by means of fractional transport equations

    NASA Astrophysics Data System (ADS)

    Sánchez, R.; Newman, D. E.; Mier, J. A.

    2018-05-01

    Fractional transport equations are used to build an effective model for transport across the running sandpile cellular automaton [Hwa et al., Phys. Rev. A 45, 7002 (1992), 10.1103/PhysRevA.45.7002]. It is shown that both temporal and spatial fractional derivatives must be considered to properly reproduce the sandpile transport features, which are governed by self-organized criticality, at least over sufficiently long or large scales. In contrast to previous applications of fractional transport equations to other systems, the specifics of sand motion require in this case that the spatial fractional derivatives used for the running sandpile must be of the completely asymmetrical Riesz-Feller type. Appropriate values for the fractional exponents that define these derivatives in the case of the running sandpile are obtained numerically.
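In generic form, such an effective model combines a fractional derivative in time of order β with a Riesz-Feller space derivative of order α and skewness θ; a schematic template (the completely asymmetric case corresponds to the extremal skewness |θ| = 2 - α for 1 < α < 2, while the actual exponent values for the running sandpile are the ones obtained numerically in the paper):

```latex
% Schematic fractional transport equation: fractional (Caputo-type) time
% derivative of order beta, Riesz-Feller space derivative of order alpha
% and skewness theta, plus a source term S(x,t).
\frac{\partial^{\beta} n(x,t)}{\partial t^{\beta}}
  = D \, {}_{x}D^{\alpha}_{\theta}\, n(x,t) + S(x,t),
\qquad 0 < \beta \le 1,\quad 1 < \alpha \le 2,\quad |\theta| \le 2-\alpha
```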

  16. Bypass transition and spot nucleation in boundary layers

    NASA Astrophysics Data System (ADS)

    Kreilos, Tobias; Khapko, Taras; Schlatter, Philipp; Duguet, Yohann; Henningson, Dan S.; Eckhardt, Bruno

    2016-08-01

The spatiotemporal aspects of the transition to turbulence are considered in the case of a boundary-layer flow developing above a flat plate exposed to free-stream turbulence. Combining results on the receptivity to free-stream turbulence with the nonlinear concept of a transition threshold, a physically motivated model suggests a spatial distribution of spot nucleation events. To describe the evolution of turbulent spots, a probabilistic cellular automaton is introduced, with all parameters directly obtained from numerical simulations of the boundary layer. The nucleation rates are then combined with the cellular automaton model, yielding excellent quantitative agreement with the statistical characteristics for different free-stream turbulence levels. We thus show how recent theoretical progress on transitional wall-bounded flows can be extended to the much wider class of spatially developing boundary-layer flows.

  17. Iso-geometric analysis for neutron diffusion problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hall, S. K.; Eaton, M. D.; Williams, M. M. R.

Iso-geometric analysis can be viewed as a generalisation of the finite element method. It permits the exact representation of a wider range of geometries, including conic sections. This is possible due to the use of concepts employed in computer-aided design. The underlying mathematical representations from computer-aided design are used both to capture the geometry and to approximate the solution. In this paper the neutron diffusion equation is solved using iso-geometric analysis. The practical advantages are highlighted by looking at the problem of a circular fuel pin in a square moderator. For this problem the finite element method requires the geometry to be approximated, which leads to errors in the shape and size of the interface between the fuel and the moderator. In contrast, iso-geometric analysis allows the interface to be represented exactly. It is found that, due to a cancellation of errors, the finite element method converges more quickly than iso-geometric analysis for this problem. A fuel pin in a vacuum was then considered, as this problem is highly sensitive to the leakage across the interface. In this case iso-geometric analysis greatly outperforms the finite element method. Due to the improvement in the representation of the geometry, iso-geometric analysis can outperform traditional finite element methods. It is proposed that the use of iso-geometric analysis on neutron transport problems will allow deterministic solutions to be obtained for exact geometries, something that is currently possible only with Monte Carlo techniques. (authors)
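The exact representation of conic sections comes from the rational (NURBS) basis used in computer-aided design. For instance, a single quadratic rational Bézier segment reproduces a quarter circle exactly, which no polynomial finite element can do:

```python
import math

def quarter_circle(t):
    """Quadratic rational Bezier (NURBS) quarter circle from (1,0) to (0,1).
    With weights (1, 1/sqrt(2), 1) every point lies exactly on the unit circle."""
    P = [(1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]        # control points
    w = [1.0, 1.0 / math.sqrt(2.0), 1.0]            # rational weights
    B = [(1 - t) ** 2, 2 * t * (1 - t), t ** 2]     # Bernstein basis
    denom = sum(b * wi for b, wi in zip(B, w))
    x = sum(b * wi * p[0] for b, wi, p in zip(B, w, P)) / denom
    y = sum(b * wi * p[1] for b, wi, p in zip(B, w, P)) / denom
    return x, y
```

Because iso-geometric analysis uses this same rational basis for the unknown field, the circular fuel-pin interface above is captured with no geometric approximation error.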

  18. RES: Regularized Stochastic BFGS Algorithm

    NASA Astrophysics Data System (ADS)

    Mokhtari, Aryan; Ribeiro, Alejandro

    2014-12-01

RES, a regularized stochastic version of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method, is proposed to solve convex optimization problems with stochastic objectives. The use of stochastic gradient descent algorithms is widespread, but the number of iterations required to approximate optimal arguments can be prohibitive in high-dimensional problems. Application of second-order methods, on the other hand, is impracticable because computation of objective function Hessian inverses incurs excessive computational cost. BFGS modifies gradient descent by introducing a Hessian approximation matrix computed from finite gradient differences. RES utilizes stochastic gradients in lieu of deterministic gradients for both the determination of descent directions and the approximation of the objective function's curvature. Since stochastic gradients can be computed at manageable computational cost, RES is realizable and retains the convergence rate advantages of its deterministic counterparts. Convergence results show that lower and upper bounds on the Hessian eigenvalues of the sample functions are sufficient to guarantee convergence to optimal arguments. Numerical experiments showcase reductions in convergence time relative to stochastic gradient descent algorithms and non-regularized stochastic versions of BFGS. An application of RES to the implementation of support vector machines is developed.

  19. Chaotic Lagrangian models for turbulent relative dispersion.

    PubMed

    Lacorata, Guglielmo; Vulpiani, Angelo

    2017-04-01

    A deterministic multiscale dynamical system is introduced and discussed as a prototype model for relative dispersion in stationary, homogeneous, and isotropic turbulence. Unlike stochastic diffusion models, here trajectory transport and mixing properties are entirely controlled by Lagrangian chaos. The anomalous "sweeping effect," a known drawback common to kinematic simulations, is removed through the use of quasi-Lagrangian coordinates. Lagrangian dispersion statistics of the model are accurately analyzed by computing the finite-scale Lyapunov exponent (FSLE), which is the optimal measure of the scaling properties of dispersion. FSLE scaling exponents provide a severe test to decide whether model simulations are in agreement with theoretical expectations and/or observation. The results of our numerical experiments cover a wide range of "Reynolds numbers" and show that chaotic deterministic flows can be very efficient, and numerically low-cost, models of turbulent trajectories in stationary, homogeneous, and isotropic conditions. The mathematics of the model is relatively simple, and, in a geophysical context, potential applications may regard small-scale parametrization issues in general circulation models, mixed layer, and/or boundary layer turbulence models as well as Lagrangian predictability studies.
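The FSLE diagnostic used above can be sketched directly: λ(δ) = ln r / ⟨τ(δ)⟩, where τ is the time for the separation of a trajectory pair to grow from δ to rδ. Here the fully chaotic logistic map stands in for a flow (an illustrative toy choice; the paper's model is a multiscale chaotic flow):

```python
import math
import random

def step(x):
    """Fully chaotic logistic map, Lyapunov exponent ln 2."""
    return 4.0 * x * (1.0 - x)

def fsle(delta, r, n_pairs, rng):
    """FSLE at scale delta: lambda(delta) = ln(r) / <tau>, with tau the time
    for a pair's separation to grow from delta to r*delta."""
    taus = []
    for _ in range(n_pairs):
        x = rng.uniform(0.1, 0.9)
        y = x + delta                     # second trajectory, offset by delta
        t = 0
        while abs(x - y) < r * delta and t < 10_000:
            x, y, t = step(x), step(y), t + 1
        taus.append(t)
    return math.log(r) / (sum(taus) / len(taus))

rng = random.Random(0)
lam = fsle(delta=1e-9, r=2.0, n_pairs=200, rng=rng)
```

At infinitesimal δ the FSLE approaches the ordinary Lyapunov exponent; scanning δ over decades, as done in the paper, reveals how the effective dispersion rate depends on scale.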

  20. Chaotic Lagrangian models for turbulent relative dispersion

    NASA Astrophysics Data System (ADS)

    Lacorata, Guglielmo; Vulpiani, Angelo

    2017-04-01

    A deterministic multiscale dynamical system is introduced and discussed as a prototype model for relative dispersion in stationary, homogeneous, and isotropic turbulence. Unlike stochastic diffusion models, here trajectory transport and mixing properties are entirely controlled by Lagrangian chaos. The anomalous "sweeping effect," a known drawback common to kinematic simulations, is removed through the use of quasi-Lagrangian coordinates. Lagrangian dispersion statistics of the model are accurately analyzed by computing the finite-scale Lyapunov exponent (FSLE), which is the optimal measure of the scaling properties of dispersion. FSLE scaling exponents provide a severe test to decide whether model simulations are in agreement with theoretical expectations and/or observation. The results of our numerical experiments cover a wide range of "Reynolds numbers" and show that chaotic deterministic flows can be very efficient, and numerically low-cost, models of turbulent trajectories in stationary, homogeneous, and isotropic conditions. The mathematics of the model is relatively simple, and, in a geophysical context, potential applications may regard small-scale parametrization issues in general circulation models, mixed layer, and/or boundary layer turbulence models as well as Lagrangian predictability studies.

  1. Front propagation and clustering in the stochastic nonlocal Fisher equation

    NASA Astrophysics Data System (ADS)

    Ganan, Yehuda A.; Kessler, David A.

    2018-04-01

In this work, we study the problem of front propagation and pattern formation in the stochastic nonlocal Fisher equation. We find a crossover between two regimes: a steadily propagating regime for not-too-large interaction ranges and a stochastic punctuated spreading regime for larger ranges. We show that the former regime is well described by the heuristic approximation of the system by a deterministic system in which the linear growth term is cut off below some critical density. This deterministic system is seen not only to give the right front velocity, but also to predict the onset of clustering for interaction kernels which give rise to stable uniform states, such as the Gaussian kernel, for sufficiently large cutoff. Above the critical cutoff, distinct clusters emerge behind the front. These same features are present in the stochastic model for sufficiently small carrying capacity. In the latter (punctuated spreading) regime, the population is concentrated on clusters, as in the infinite-range case, which divide and separate as a result of the stochastic noise. Due to the finite interaction range, if a fragment at the edge of the population separates sufficiently far, it stabilizes as a new cluster, and the process begins anew. The deterministic cutoff model does not have this spreading for large interaction ranges, attesting to its purely stochastic origins. We show that this mode of spreading has an exponentially small mean spreading velocity, decaying with the range of the interaction kernel.
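The deterministic cutoff approximation described above can be sketched in one dimension: du/dt = D u_xx + u (1 - K*u) θ(u - u_c), with K*u a nonlocal average of u. All numbers below (grid, diffusivity, a top-hat kernel of half-width R, the cutoff u_c) are illustrative choices for a sketch, not the paper's values:

```python
def simulate(n=200, dx=0.5, dt=0.1, D=1.0, R=10, u_c=1e-3, steps=400):
    """Explicit finite-difference integration of the cutoff nonlocal Fisher
    equation; growth is switched off wherever u falls below the cutoff u_c."""
    u = [0.0] * n
    for i in range(10):
        u[i] = 1.0                       # seed the front at the left edge
    for _ in range(steps):
        # nonlocal competition: window average of u (top-hat kernel)
        conv = [sum(u[max(0, i - R):i + R + 1]) / (2 * R + 1) for i in range(n)]
        new = []
        for i in range(n):
            left = u[i - 1] if i > 0 else u[0]        # zero-flux boundaries
            right = u[i + 1] if i < n - 1 else u[n - 1]
            lap = (left - 2.0 * u[i] + right) / dx ** 2
            growth = u[i] * (1.0 - conv[i]) if u[i] > u_c else 0.0
            new.append(u[i] + dt * (D * lap + growth))
        u = new
    return u

u = simulate()
mass = sum(u) * 0.5      # integral of u; grows as the front invades the domain
```

In this sketch the seeded front invades the empty region at a finite speed; with a larger cutoff or different kernels, the same scheme exhibits the clustering behind the front discussed above.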

  2. Front propagation and clustering in the stochastic nonlocal Fisher equation.

    PubMed

    Ganan, Yehuda A; Kessler, David A

    2018-04-01

In this work, we study the problem of front propagation and pattern formation in the stochastic nonlocal Fisher equation. We find a crossover between two regimes: a steadily propagating regime for not-too-large interaction ranges and a stochastic punctuated spreading regime for larger ranges. We show that the former regime is well described by the heuristic approximation of the system by a deterministic system in which the linear growth term is cut off below some critical density. This deterministic system is seen not only to give the right front velocity, but also to predict the onset of clustering for interaction kernels which give rise to stable uniform states, such as the Gaussian kernel, for sufficiently large cutoff. Above the critical cutoff, distinct clusters emerge behind the front. These same features are present in the stochastic model for sufficiently small carrying capacity. In the latter (punctuated spreading) regime, the population is concentrated on clusters, as in the infinite-range case, which divide and separate as a result of the stochastic noise. Due to the finite interaction range, if a fragment at the edge of the population separates sufficiently far, it stabilizes as a new cluster, and the process begins anew. The deterministic cutoff model does not have this spreading for large interaction ranges, attesting to its purely stochastic origins. We show that this mode of spreading has an exponentially small mean spreading velocity, decaying with the range of the interaction kernel.

  3. Optimization Testbed Cometboards Extended into Stochastic Domain

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.

    2010-01-01

COMparative Evaluation Testbed of Optimization and Analysis Routines for the Design of Structures (CometBoards) is a multidisciplinary design optimization software package. It was originally developed for deterministic calculation and has now been extended into the stochastic domain for structural design problems. For deterministic problems, CometBoards is introduced through its subproblem solution strategy as well as the approximation concept in optimization. In the stochastic domain, a design is formulated as a function of the risk or reliability. The optimum solution, including the weight of the structure, is also obtained as a function of reliability. Weight versus reliability traces out an inverted-S-shaped graph. The center of the graph corresponds to 50 percent probability of success, or one failure in two samples. A heavy design with weight approaching infinity could be produced for a near-zero rate of failure, which corresponds to unity for reliability. Weight can be reduced to a small value for the most failure-prone design, with a compromised reliability approaching zero. The stochastic design optimization (SDO) capability for an industrial problem was obtained by combining three codes: the MSC/Nastran code was the deterministic analysis tool; the fast probabilistic integrator (FPI) module of the NESSUS software was the probabilistic calculator; and CometBoards became the optimizer. The SDO capability requires a finite element structural model, a material model, a load model, and a design model. The stochastic optimization concept is illustrated with an academic example and a real-life airframe component made of metallic and composite materials.

  4. Lyapunov exponents for infinite dimensional dynamical systems

    NASA Technical Reports Server (NTRS)

    Mhuiris, Nessan Mac Giolla

    1987-01-01

    Classically it was held that solutions to deterministic partial differential equations (i.e., ones with smooth coefficients and boundary data) could become random only through one mechanism, namely by the activation of more and more of the infinite number of degrees of freedom that are available to such a system. It is only recently that researchers have come to suspect that many infinite dimensional nonlinear systems may in fact possess finite dimensional chaotic attractors. Lyapunov exponents provide a tool for probing the nature of these attractors. This paper examines how these exponents might be measured for infinite dimensional systems.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coleman, Justin; Slaughter, Andrew; Veeraraghavan, Swetha

    Multi-hazard Analysis for STOchastic time-DOmaiN phenomena (MASTODON) is a finite element application that aims at analyzing the response of 3-D soil-structure systems to natural and man-made hazards such as earthquakes, floods and fire. MASTODON currently focuses on the simulation of seismic events and has the capability to perform extensive ‘source-to-site’ simulations including earthquake fault rupture, nonlinear wave propagation and nonlinear soil-structure interaction (NLSSI) analysis. MASTODON is being developed to be a dynamic probabilistic risk assessment framework that enables analysts to not only perform deterministic analyses, but also easily perform probabilistic or stochastic simulations for the purpose of risk assessment.

  6. Simulation and Experimental Studies on Grain Selection and Structure Design of the Spiral Selector for Casting Single Crystal Ni-Based Superalloy.

    PubMed

    Zhang, Hang; Xu, Qingyan

    2017-10-27

Grain selection is an important process in single-crystal turbine blade manufacturing. The selector structure is a control factor of grain selection, as is directional solidification (DS). In this study, the grain selection and structure design of the spiral selector were investigated through experimentation and simulation. A heat transfer model and a 3D microstructure growth model were established based on the cellular automaton-finite difference (CA-FD) method for the grain selector. Consequently, the temperature field, the microstructure and the grain orientation distribution were simulated and further verified. The average error of the temperature result was less than 1.5%. The grain selection mechanisms were further analyzed and validated through simulations. The structural design specifications of the selector were suggested based on the two grain selection effects. The structural parameters of the spiral selector, namely, the spiral tunnel diameter (dw), the spiral pitch (hb) and the spiral diameter (hs), were studied and the design criteria of these parameters were proposed. The experimental and simulation results demonstrated that the improved selector could accurately and efficiently produce a single crystal structure.

  7. Simulation and Experimental Studies on Grain Selection and Structure Design of the Spiral Selector for Casting Single Crystal Ni-Based Superalloy

    PubMed Central

    Zhang, Hang; Xu, Qingyan

    2017-01-01

Grain selection is an important process in single-crystal turbine blade manufacturing. The selector structure is a control factor of grain selection, as is directional solidification (DS). In this study, the grain selection and structure design of the spiral selector were investigated through experimentation and simulation. A heat transfer model and a 3D microstructure growth model were established based on the cellular automaton-finite difference (CA-FD) method for the grain selector. Consequently, the temperature field, the microstructure and the grain orientation distribution were simulated and further verified. The average error of the temperature result was less than 1.5%. The grain selection mechanisms were further analyzed and validated through simulations. The structural design specifications of the selector were suggested based on the two grain selection effects. The structural parameters of the spiral selector, namely, the spiral tunnel diameter (dw), the spiral pitch (hb) and the spiral diameter (hs), were studied and the design criteria of these parameters were proposed. The experimental and simulation results demonstrated that the improved selector could accurately and efficiently produce a single crystal structure. PMID:29077067

  8. Convergence Time and Phase Transition in a Non-monotonic Family of Probabilistic Cellular Automata

    NASA Astrophysics Data System (ADS)

    Ramos, A. D.; Leite, A.

    2017-08-01

    In dynamical systems, some of the most important questions are related to phase transitions and convergence time. We consider a one-dimensional probabilistic cellular automaton whose components assume two possible states, zero and one, and interact with their two nearest neighbors at each time step. Under the local interaction, if a component is in the same state as its two neighbors, it does not change its state. In the other cases, a component in state zero turns into a one with probability α, and a component in state one turns into a zero with probability 1-β. For certain values of α and β, we show that the process will always converge weakly to δ_0, the measure concentrated on the configuration where all the components are zeros. Moreover, the mean time of this convergence is finite, and we describe an upper bound in this case, which is a linear function of the initial distribution. We also demonstrate an application of our results to the percolation PCA. Finally, we use mean-field approximation and Monte Carlo simulations to show the coexistence of three distinct behaviours for some values of the parameters α and β.
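    The local rule quoted above is concrete enough to sketch. Below is a minimal, illustrative simulation of the two-state PCA, assuming synchronous updates and periodic boundary conditions (choices the abstract does not fix); for small α and small β, random initial configurations are quickly absorbed into the all-zeros state δ_0.

```python
import random

def step(config, alpha, beta, rng):
    """One synchronous update of the two-state PCA.

    A cell keeps its state if it matches both nearest neighbors
    (periodic boundary).  Otherwise a 0 becomes 1 with probability
    alpha, and a 1 becomes 0 with probability 1 - beta.
    """
    n = len(config)
    new = []
    for i in range(n):
        left, mid, right = config[i - 1], config[i], config[(i + 1) % n]
        if left == mid == right:
            new.append(mid)
        elif mid == 0:
            new.append(1 if rng.random() < alpha else 0)
        else:
            new.append(0 if rng.random() < 1 - beta else 1)
    return new

def time_to_all_zeros(n=50, alpha=0.05, beta=0.05, seed=1, max_steps=10000):
    """Steps until absorption into the all-zeros configuration."""
    rng = random.Random(seed)
    config = [rng.randint(0, 1) for _ in range(n)]
    for t in range(max_steps):
        if not any(config):
            return t
        config = step(config, alpha, beta, rng)
    return None
```

    Repeating `time_to_all_zeros` over many seeds and initial densities gives a crude empirical check of the linear bound on the mean convergence time stated in the abstract.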

  9. Algorithm for repairing the damaged images of grain structures obtained from the cellular automata and measurement of grain size

    NASA Astrophysics Data System (ADS)

    Ramírez-López, A.; Romero-Romo, M. A.; Muñoz-Negron, D.; López-Ramírez, S.; Escarela-Pérez, R.; Duran-Valencia, C.

    2012-10-01

    Computational models are developed to create grain structures using mathematical algorithms based on chaos theory, such as cellular automata, geometrical models, fractals, and stochastic methods. Because of the chaotic nature of grain structures, some of the most popular routines are based on the Monte Carlo method, statistical distributions, and random walk methods, which can be easily programmed and included in nested loops. Nevertheless, grain structures are often not well defined, as a result of computational errors and numerical inconsistencies in the mathematical methods. Owing to the finite representation of numbers and the numerical restrictions during the simulation of solidification, damaged images appear on the screen. These images must be repaired to obtain a good measurement of grain geometrical properties. In the present work, mathematical algorithms were developed to repair, measure, and characterize grain structures obtained from cellular automata. An appropriate measurement of grain size and the correct identification of interfaces and lengths are very important topics in materials science because they are the representation and validation of mathematical models with real samples. As a result, the developed algorithms are tested and proved to be appropriate and efficient for eliminating the errors and characterizing the grain structures.

  10. Numerical Simulation and Optimization of Directional Solidification Process of Single Crystal Superalloy Casting

    PubMed Central

    Zhang, Hang; Xu, Qingyan; Liu, Baicheng

    2014-01-01

    The rapid development of numerical modeling techniques has led to more accurate results in modeling metal solidification processes. In this study, the cellular automaton-finite difference (CA-FD) method was used to simulate the directional solidification (DS) process of single crystal (SX) superalloy blade samples. Experiments were carried out to validate the simulation results. Meanwhile, an intelligent model based on fuzzy control theory was built to optimize the complicated DS process. Several key parameters, such as the mushy zone width and the temperature difference at the cast-mold interface, were recognized as the input variables. The input variables were processed by a multivariable fuzzy rule to obtain the output adjustment of the withdrawal rate (v), a key technological parameter. The multivariable fuzzy rule was built based on structural features of the casting, such as the relationship between the section area and the delay time of the temperature response to changes in v, as well as the professional experience of the operator. The fuzzy control model coupled with the CA-FD method could then be used to optimize v in real time during the manufacturing process. The optimized process was proven to be more flexible and adaptive for a steady and stray-grain-free DS process. PMID:28788535

  11. Formalization, implementation, and modeling of institutional controllers for distributed robotic systems.

    PubMed

    Pereira, José N; Silva, Porfírio; Lima, Pedro U; Martinoli, Alcherio

    2014-01-01

    The work described is part of a long term program of introducing institutional robotics, a novel framework for the coordination of robot teams that stems from institutional economics concepts. Under the framework, institutions are cumulative sets of persistent artificial modifications made to the environment or to the internal mechanisms of a subset of agents, thought to be functional for the collective order. In this article we introduce a formal model of institutional controllers based on Petri nets. We define executable Petri nets-an extension of Petri nets that takes into account robot actions and sensing-to design, program, and execute institutional controllers. We use a generalized stochastic Petri net view of the robot team controlled by the institutional controllers to model and analyze the stochastic performance of the resulting distributed robotic system. The ability of our formalism to replicate results obtained using other approaches is assessed through realistic simulations of up to 40 e-puck robots. In particular, we model a robot swarm and its institutional controller with the goal of maintaining wireless connectivity, and successfully compare our model predictions and simulation results with previously reported results, obtained by using finite state automaton models and controllers.

  12. Effect of Secondary Cooling Conditions on Solidification Structure and Central Macrosegregation in Continuously Cast High-Carbon Rectangular Billet

    NASA Astrophysics Data System (ADS)

    Zeng, Jie; Chen, Weiqing

    2015-10-01

    Solidification structures of a high-carbon rectangular billet with a section size of 180 mm × 240 mm under different secondary cooling conditions were simulated using a cellular automaton-finite element (CAFE) coupling model. The adequacy of the model was assessed by comparing the simulated and actual macrostructures of 82B steel. The effects of the secondary cooling water intensity on solidification structures, including the equiaxed grain ratio and the equiaxed grain compactness, were discussed. It was shown that the equiaxed grain ratio and the equiaxed grain compactness changed in opposite directions at different secondary cooling water intensities. Increasing the secondary cooling water intensity from 0.9 or 1.1 to 1.3 L/kg could improve the equiaxed grain compactness but decrease the equiaxed grain ratio. In addition, an industrial test was conducted to investigate the effect of different secondary cooling water intensities on the center carbon macrosegregation of 82B steel. The optimum secondary cooling water intensity was 0.9 L/kg, at which the center carbon segregation degree was 1.10. The relationship between solidification structure and center carbon segregation was discussed based on the simulation results and the industrial test.

  13. Determination and controlling of grain structure of metals after laser incidence: Theoretical approach

    PubMed Central

    Dezfoli, Amir Reza Ansari; Hwang, Weng-Sing; Huang, Wei-Chin; Tsai, Tsung-Wen

    2017-01-01

    There are serious questions about the grain structure of metals after laser melting and the ways in which it can be controlled. In this regard, the current paper explains the grain structure of metals after laser melting using a new model based on a combination of 3D finite element (FE) and cellular automaton (CA) models, validated by experimental observation. Competitive grain growth, the relation between heat flow and grain orientation, and the effect of laser scanning speed on the final microstructure are discussed in detail. The grain structure after laser melting is found to be columnar, with a tilt angle toward the direction of laser movement. Furthermore, this investigation shows that the grain orientation is a function of the conduction heat flux at the molten pool boundary. Moreover, the secondary laser heat source (SLHS) is presented as a new approach to controlling the grain structure during laser melting. The results proved that the grain structure can be controlled and improved significantly using the SLHS. Using the SLHS, the grain orientation and uniformity can be changed easily. In fact, this method can help to produce materials with different local mechanical properties during laser processing, according to their application requirements. PMID:28134347

  14. A symplectic integration method for elastic filaments

    NASA Astrophysics Data System (ADS)

    Ladd, Tony; Misra, Gaurav

    2009-03-01

    Elastic rods are a ubiquitous coarse-grained model of semi-flexible biopolymers such as DNA, actin, and microtubules. The Worm-Like Chain (WLC) is the standard numerical model for semi-flexible polymers, but it is only a linearized approximation to the dynamics of an elastic rod, valid for small deflections; typically the torsional motion is neglected as well. In the standard finite-difference and finite-element formulations of an elastic rod, the continuum equations of motion are discretized in space and time, but it is then difficult to ensure that the Hamiltonian structure of the exact equations is preserved. Here we discretize the Hamiltonian itself, expressed as a line integral over the contour of the filament. This discrete representation of the continuum filament can then be integrated by one of the explicit symplectic integrators frequently used in molecular dynamics. The model systematically approximates the continuum partial differential equations, but has the same level of computational complexity as molecular dynamics and is constraint free. Numerical tests show that the algorithm is much more stable than a finite-difference formulation and can be used for high aspect ratio filaments, such as actin. We present numerical results for the deterministic and stochastic motion of single filaments.
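    The scheme described, discretizing the Hamiltonian and handing it to a standard explicit symplectic integrator, can be illustrated with a toy one-dimensional bead-spring chain (stretching energy only; the filament model above also carries bending and torsional terms, so this is a sketch, not the authors' method). The integrator below is velocity Verlet, one of the explicit symplectic integrators commonly used in molecular dynamics; the point of the exercise is near-conservation of the discrete energy over long runs.

```python
def leapfrog(q, p, force, dt, steps, mass=1.0):
    """Velocity-Verlet (symplectic) integration of H = |p|^2/2m + V(q)."""
    f = force(q)
    for _ in range(steps):
        p = [pi + 0.5 * dt * fi for pi, fi in zip(p, f)]   # half kick
        q = [qi + dt * pi / mass for qi, pi in zip(q, p)]  # drift
        f = force(q)
        p = [pi + 0.5 * dt * fi for pi, fi in zip(p, f)]   # half kick
    return q, p

def spring_force(q, k=100.0, rest=1.0):
    """Nearest-neighbour springs: a crude discrete stretching energy."""
    f = [0.0] * len(q)
    for i in range(len(q) - 1):
        ext = (q[i + 1] - q[i]) - rest
        f[i] += k * ext
        f[i + 1] -= k * ext
    return f

def energy(q, p, k=100.0, rest=1.0, mass=1.0):
    """Discrete Hamiltonian: kinetic plus spring stretching energy."""
    kin = sum(pi * pi for pi in p) / (2.0 * mass)
    pot = 0.5 * k * sum(((q[i + 1] - q[i]) - rest) ** 2
                        for i in range(len(q) - 1))
    return kin + pot
```

    Because the integrator is symplectic, the energy oscillates within a narrow band instead of drifting, which is what makes long stable runs possible for stiff, high-aspect-ratio filaments.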

  15. A Critical Theory Perspective on Accelerated Learning.

    ERIC Educational Resources Information Center

    Brookfield, Stephen D.

    2003-01-01

    Critically analyzes accelerated learning using concepts from Herbert Marcuse (rebellious subjectivity) and Erich Fromm (automaton conformity). Concludes that, by providing distance and separation, accelerated learning has more potential to stimulate critical autonomous thought. (SK)

  16. Resonance, criticality, and emergence in city traffic investigated in cellular automaton models.

    PubMed

    Varas, A; Cornejo, M D; Toledo, B A; Muñoz, V; Rogan, J; Zarama, R; Valdivia, J A

    2009-11-01

    The complex behavior that occurs when traffic lights are synchronized is studied for a row of interacting cars. The system is modeled through a cellular automaton. Two strategies are considered: all lights in phase and a "green wave" with a propagating green signal. It is found that the mean velocity near the resonant condition follows a critical scaling law. For the green wave, it is shown that the mean velocity scaling law holds even for random separation between traffic lights and is not dependent on the density. This independence on car density is broken when random perturbations are considered in the car velocity. Random velocity perturbations also have the effect of leading the system to an emergent state, where cars move in clusters, but with an average velocity which is independent of traffic light switching for large injection rates.
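    As a flavor of this kind of model (not the authors' exact rules, which include resonance analysis and random velocity perturbations), here is a drastically simplified single-lane CA with unit-speed cars and periodically switching lights. The `offset` parameter delays each successive light, playing the role of the green-wave phase; `offset=0` gives the all-in-phase strategy. All parameter values are illustrative.

```python
def simulate(road_len=100, n_lights=4, period=20, offset=0, steps=400):
    """Single-lane CA: a car advances one cell per step when the cell
    ahead is free and, at a light cell, only while the light is green.
    Lights are green for half of `period`; light i is shifted in time
    by i * offset.  Returns the mean velocity (moves per car per step).
    """
    spacing = road_len // n_lights
    lights = {spacing * (i + 1) - 1: i * offset for i in range(n_lights)}
    road = [i % 4 == 0 for i in range(road_len)]   # car density 1/4
    ncars, moves = sum(road), 0
    for t in range(steps):
        new = road[:]
        for i in range(road_len):
            j = (i + 1) % road_len
            if road[i] and not road[j]:
                shift = lights.get(i)
                green = shift is None or ((t - shift) // (period // 2)) % 2 == 0
                if green:
                    new[i], new[j] = False, True
                    moves += 1
        road = new
    return moves / (ncars * steps)
```

    Sweeping `period` against the light spacing in a toy model like this is the cheapest way to see the resonance effect the abstract describes: the mean velocity peaks when switching and travel times match.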

  17. A Cellular Automaton model for pedestrian counterflow with swapping

    NASA Astrophysics Data System (ADS)

    Tao, Y. Z.; Dong, L. Y.

    2017-06-01

    In this paper, we propose a new floor field Cellular Automaton (CA) model that considers the swapping behavior of pedestrians. Neighboring pedestrians moving in opposite directions swap positions with a probability determined by the linear density of the pedestrian flow. The swapping, which happens simultaneously with the normal movement, is introduced to eliminate gridlock in low-density regions. Numerical results show that the fundamental diagram is in good agreement with measured data. The model is then applied to investigate counterflow, and four typical states are found: free flow, lane, intermediate and congestion states. Particular attention is paid to the intermediate state, in which lane formation and local congestion alternate in an irregular manner. The swapping plays a vital role in reducing gridlock. Furthermore, the influence of corridor size and individual eyesight on the counterflow is discussed in detail.

  18. Time-spatial model on the dynamics of the proliferation of Aedes aegypti

    NASA Astrophysics Data System (ADS)

    Gouvêa, Maury Meirelles, Jr.

    2017-03-01

    Some complex physical systems, such as cellular regulation, ecosystems, and societies, can be represented by local interactions between agents, from which complex behaviors may emerge. A cellular automaton is a discrete dynamic system with these features. Among the several complex systems, the dynamics of epidemic diseases receive special attention from researchers. Understanding the behavior of an epidemic may well benefit a society; for instance, different proliferation scenarios may be produced and a prevention policy set. This paper presents a new method for simulating the time-spatial spread of the Dengue mosquito with a cellular automaton, making it possible to create different dissemination scenarios and corresponding preventive policies for several regions. Simulations were performed with different initial conditions and parameters, from which the behavior of the proposed method was characterized.

  19. Cellular automaton simulation examining progenitor hierarchy structure effects on mammary ductal carcinoma in situ.

    PubMed

    Bankhead, Armand; Magnuson, Nancy S; Heckendorn, Robert B

    2007-06-07

    A computer simulation is used to model ductal carcinoma in situ, a form of non-invasive breast cancer. The simulation uses known histological morphology, cell types, and stochastic cell proliferation to evolve tumorous growth within a duct. The ductal simulation is based on a hybrid cellular automaton design using genetic rules to determine each cell's behavior. The genetic rules are a mutable abstraction that demonstrate genetic heterogeneity in a population. Our goal was to examine the role (if any) that recently discovered mammary stem cell hierarchies play in genetic heterogeneity, DCIS initiation and aggressiveness. Results show that simpler progenitor hierarchies result in greater genetic heterogeneity and evolve DCIS significantly faster. However, the more complex progenitor hierarchy structure was able to sustain the rapid reproduction of a cancer cell population for longer periods of time.

  20. A Real Space Cellular Automaton Laboratory

    NASA Astrophysics Data System (ADS)

    Rozier, O.; Narteau, C.

    2013-12-01

    Investigations in geomorphology may benefit from computer modelling approaches that rely entirely on self-organization principles. In the vast majority of numerical models, instead, points in space are characterised by a variety of physical variables (e.g. sediment transport rate, velocity, temperature) recalculated over time according to some predetermined set of laws. However, there is not always a satisfactory theoretical framework from which we can quantify the overall dynamics of the system. For these reasons, we prefer to concentrate on interaction patterns using a basic cellular automaton modelling framework, the Real Space Cellular Automaton Laboratory (ReSCAL), a powerful and versatile generator of 3D stochastic models. The objective of this software suite released under a GNU license is to develop interdisciplinary research collaboration to investigate the dynamics of complex systems. The models in ReSCAL are essentially constructed from a small number of discrete states distributed on a cellular grid. An elementary cell is a real-space representation of the physical environment and pairs of nearest neighbour cells are called doublets. Each individual physical process is associated with a set of doublet transitions and characteristic transition rates. Using a modular approach, we can simulate and combine a wide range of physical, chemical and/or anthropological processes. Here, we present different ingredients of ReSCAL leading to applications in geomorphology: dune morphodynamics and landscape evolution. We also discuss how ReSCAL can be applied and developed across many disciplines in natural and human sciences.
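    In ReSCAL's terms, each physical process is reduced to a set of doublet transitions with characteristic rates. A toy 1D caricature of that machinery follows (ReSCAL itself is a 3D suite; the states and the single "transport" rule below are invented purely for illustration):

```python
import random

def doublet_ca(grid, transitions, steps, seed=0):
    """Repeatedly pick a random nearest-neighbour doublet and, with
    probability proportional to its rate, apply its rewrite rule.

    `transitions` maps a state pair (a, b) to ((a2, b2), rate).
    """
    rng = random.Random(seed)
    max_rate = max(rate for _, rate in transitions.values())
    for _ in range(steps):
        i = rng.randrange(len(grid) - 1)
        pair = (grid[i], grid[i + 1])
        if pair in transitions:
            new_pair, rate = transitions[pair]
            if rng.random() < rate / max_rate:
                grid[i], grid[i + 1] = new_pair
    return grid

# Invented example: wind-driven transport; a grain hops into air on its right.
rules = {("grain", "air"): (("air", "grain"), 1.0)}
final = doublet_ca(["grain", "grain", "air", "air", "air"], rules, steps=200)
```

    Adding further doublet rules (erosion, deposition, diffusion) is just a matter of extending the `transitions` table, which is the modularity the abstract emphasizes.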

  1. Bubonic plague: a metapopulation model of a zoonosis.

    PubMed Central

    Keeling, M J; Gilligan, C A

    2000-01-01

    Bubonic plague (Yersinia pestis) is generally thought of as a historical disease; however, it is still responsible for around 1000-3000 deaths each year worldwide. This paper expands the analysis of a model for bubonic plague that encompasses the disease dynamics in rat, flea and human populations. Some key variables of the deterministic model, including the force of infection to humans, are shown to be robust to changes in the basic parameters, although variation in the flea searching efficiency, and the movement rates of rats and fleas will be considered throughout the paper. The stochastic behaviour of the corresponding metapopulation model is discussed, with attention focused on the dynamics of rats and the force of infection at the local spatial scale. Short-lived local epidemics in rats govern the invasion of the disease and produce an irregular pattern of human cases similar to those observed. However, the endemic behaviour in a few rat subpopulations allows the disease to persist for many years. This spatial stochastic model is also used to identify the criteria for the spread to human populations in terms of the rat density. Finally, the full stochastic model is reduced to the form of a probabilistic cellular automaton, which allows the analysis of a large number of replicated epidemics in large populations. This simplified model enables us to analyse the spatial properties of rat epidemics and the effects of movement rates, and also to test whether the emergent metapopulation behaviour is a property of the local dynamics rather than the precise details of the model. PMID:11413636

  2. A Vertically Lagrangian Finite-Volume Dynamical Core for Global Models

    NASA Technical Reports Server (NTRS)

    Lin, Shian-Jiann

    2003-01-01

    A finite-volume dynamical core with a terrain-following Lagrangian control-volume discretization is described. The vertically Lagrangian discretization reduces the dimensionality of the physical problem from three to two, with the resulting dynamical system closely resembling that of the shallow water dynamical system. The 2D horizontal-to-Lagrangian-surface transport and dynamical processes are then discretized using the genuinely conservative flux-form semi-Lagrangian algorithm. Time marching is split-explicit, with a large time step for scalar transport and a small fractional time step for the Lagrangian dynamics, which permits the accurate propagation of fast waves. A mass, momentum, and total energy conserving algorithm is developed for mapping the state variables periodically from the floating Lagrangian control-volume to an Eulerian terrain-following coordinate for dealing with physical parameterizations and to prevent severe distortion of the Lagrangian surfaces. Deterministic baroclinic wave growth tests and long-term integrations using the Held-Suarez forcing are presented. Impact of the monotonicity constraint is discussed.

  3. Rare events in finite and infinite dimensions

    NASA Astrophysics Data System (ADS)

    Reznikoff, Maria G.

    Thermal noise introduces stochasticity into deterministic equations and makes possible events which are never seen in the zero temperature setting. The driving force behind the thesis work is a desire to bring analysis and probability to bear on a class of relevant and intriguing physical problems, and in so doing, to allow applications to drive the development of new mathematical theory. The unifying theme is the study of rare events under the influence of small, random perturbations, and the manifold mathematical problems which ensue. In the first part, we apply large deviation theory and prefactor estimates to a coherent rotation micromagnetic model in order to analyze thermally activated magnetic switching. We consider recent physical experiments and the mathematical questions "asked" by them. A stochastic resonance type phenomenon is discovered, leading to the definition of finite temperature astroids. Non-Arrhenius behavior is discussed. The analysis is extended to ramped astroids. In addition, we discover that for low damping and ultrashort pulses, deterministic effects can override thermal effects, in accord with very recent ultrashort pulse experiments. Even more interesting, perhaps, is the study of large deviations in the infinite dimensional context, i.e. in spatially extended systems. Inspired by recent numerical investigations, we study the stochastically perturbed Allen-Cahn and Cahn-Hilliard equations. For the Allen-Cahn equation, we study the action minimization problem (a deterministic variational problem) and prove the action scaling in four parameter regimes, via upper and lower bounds. The sharp interface limit is studied. We formally derive a reduced action functional which lends insight into the connection between action minimization and curvature flow. For the Cahn-Hilliard equation, we prove upper and lower bounds for the scaling of the energy barrier in the nucleation and growth regime. 
Finally, we consider rare events in large or infinite domains, in one spatial dimension. We introduce a natural reference measure through which to analyze the invariant measure of stochastically perturbed, nonlinear partial differential equations. Also, for noisy reaction diffusion equations with an asymmetric potential, we discover how to rescale space and time in order to map the dynamics in the zero temperature limit to the Poisson Model, a simple version of the Johnson-Mehl-Avrami-Kolmogorov model for nucleation and growth.

  4. Implementation of a polling protocol for predicting celiac disease in videocapsule analysis

    PubMed Central

    Ciaccio, Edward J; Tennyson, Christina A; Bhagat, Govind; Lewis, Suzanne K; Green, Peter H

    2013-01-01

    AIM: To investigate the presence of small intestinal villous atrophy in celiac disease patients from quantitative analysis of videocapsule image sequences. METHODS: Data from nine celiac patients with biopsy-proven villous atrophy and from seven control patients lacking villous atrophy were used for analysis. Celiacs had biopsy-proven disease with scores of Marsh II-IIIC except in the case of one hemophiliac patient. At four small intestinal levels (duodenal bulb, distal duodenum, jejunum, and ileum), video clips of length 200 frames (100 s) were analyzed. Twenty-four measurements were used for image characterization. These measurements were determined by quantitatively processing the videocapsule images via techniques for texture analysis, motility estimation, volumetric reconstruction using shape-from-shading principles, and image transformation. Each automated measurement method, or automaton, was polled as to whether or not villous atrophy was present in the small intestine, indicating celiac disease. Each automaton’s vote was determined based upon an optimized parameter threshold level, with the threshold levels being determined from prior data. A prediction of villous atrophy was made if it received a majority of votes (≥ 13), while no prediction was made for tie votes (12-12). Thus each set of images was classified as being from either a celiac disease patient or from a control patient. RESULTS: Separated by intestinal level, the overall sensitivity of automata polling for predicting villous atrophy and hence celiac disease was 83.9%, while the specificity was 92.9%, and the overall accuracy of automata-based polling was 88.1%. The method of image transformation yielded the highest sensitivity at 93.8%, while the method of texture analysis using subbands had the highest specificity at 76.0%. Similar results of prediction were observed at all four small intestinal locations, but there were more tie votes at location 4 (ileum). 
Incorrect predictions, which reduced sensitivity, occurred for two celiac patients with a Marsh type II pattern, which is characterized by crypt hyperplasia but normal villous architecture. Pooled from all levels, there was a mean of 14.31 ± 3.28 automaton votes for celiac vs 9.67 ± 3.31 automaton votes for control when celiac patient data were analyzed (P < 0.001). Pooled from all levels, there was a mean of 9.71 ± 2.81 automaton votes for celiac vs 14.32 ± 2.79 automaton votes for control when control patient data were analyzed (P < 0.001). CONCLUSION: Automata-based polling may be useful to indicate the presence of mucosal atrophy, indicative of celiac disease, across the entire small bowel, though this must be confirmed in a larger patient set. Since the method is quantitative and automated, it can potentially eliminate observer bias and enable the detection of subtle abnormality in patients lacking a clear diagnosis. Our paradigm was found to be more efficacious at proximal small intestinal locations, which may suggest a greater presence and severity of villous atrophy at proximal as compared with distal locations. PMID:23858375
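    The polling decision itself fits in a few lines. A hedged sketch, assuming each of the 24 automata compares its measurement to an optimized threshold with a per-measurement direction of abnormality (the function name and parameters here are illustrative, not from the paper):

```python
def poll_votes(measurements, thresholds, abnormal_if_greater):
    """Each of the 24 automata votes 'atrophy' when its measurement
    falls on the abnormal side of its optimized threshold; >= 13 votes
    predict celiac disease, <= 11 predict control, and a 12-12 tie
    yields no prediction."""
    votes = sum(
        1
        for m, t, greater in zip(measurements, thresholds, abnormal_if_greater)
        if (m > t) == greater
    )
    if votes >= 13:
        return "celiac"
    if votes <= 11:
        return "control"
    return None  # tie: no prediction
```

    In the study the thresholds were optimized on prior data; here they would simply be passed in as fitted constants.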

  5. Deterministic seismogenic scenarios based on asperities spatial distribution to assess tsunami hazard on northern Chile (18°S to 24°S)

    NASA Astrophysics Data System (ADS)

    González-Carrasco, J. F.

    2016-12-01

    The southern Peru and northern Chile coastal areas, extending between 12°S and 24°S, have been recognized as a mature seismic gap with a high seismogenic potential associated with the seismic moment deficit accumulated since 1877. An important scientific question, relevant from a hazard assessment perspective, is what the breaking pattern of a future megathrust earthquake will be. During the last decade, the occurrence of three major subduction earthquakes has made it possible to acquire outstanding geophysical and geological information on the behavior of these phenomena. An interesting result is the relationship between the maximum slip areas and the spatial distribution of asperities in subduction zones. In this contribution, we propose a methodology to identify a regional pattern of main asperities in order to construct reliable seismogenic scenarios in a seismic gap. We follow a deterministic approach to explore the distribution of asperity segmentation using geophysical and geodetic data such as trench-parallel gravity anomaly (TPGA), interseismic coupling (ISC), b-value, historical moment release, and residual bathymetric and gravity anomalies. The combined information represents physical constraints on short- and long-term suitable regions for future mega earthquakes. To illuminate the asperity distribution, we construct profiles of all proxies in fault coordinates, along-strike and down-dip, to define the boundaries of major asperities (> 100 km). The geometry of a major asperity is useful for defining a finite set of deterministic seismogenic scenarios to evaluate the tsunami hazard in the main cities of northern Chile (18°S to 24°S).

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khrennikov, Andrei; Volovich, Yaroslav

    We analyze dynamical consequences of a conjecture that there exists a fundamental (indivisible) quantum of time. In particular we study the problem of discrete energy levels of the hydrogen atom. We are able to reconstruct the potential which in the discrete time formalism leads to the energy levels of the unperturbed hydrogen atom. We also consider the linear energy levels of the quantum harmonic oscillator and show how they are produced in the discrete time formalism. More generally, we show that in the discrete time formalism finite motion in a central potential leads to a discrete energy spectrum, a property which is common in quantum mechanical theory. Thus deterministic (but discrete time!) dynamics is compatible with discrete energy levels.

  7. Theory of slightly fluctuating ratchets

    NASA Astrophysics Data System (ADS)

    Rozenbaum, V. M.; Shapochkina, I. V.; Lin, S. H.; Trakhtenberg, L. I.

    2017-04-01

    We consider a Brownian particle moving in a slightly fluctuating potential. Using the perturbation theory on small potential fluctuations, we derive a general analytical expression for the average particle velocity valid for both flashing and rocking ratchets with arbitrary, stochastic or deterministic, time dependence of potential energy fluctuations. The result is determined by the Green's function for diffusion in the time-independent part of the potential and by the features of correlations in the fluctuating part of the potential. The generality of the result allows describing complex ratchet systems with competing characteristic times; these systems are exemplified by the model of a Brownian photomotor with relaxation processes of finite duration.

  8. Probabilistic evaluation of SSME structural components

    NASA Astrophysics Data System (ADS)

    Rajagopal, K. R.; Newell, J. F.; Ho, H.

    1991-05-01

    The application of the Composite Load Spectra (CLS) and Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) families of computer codes to the probabilistic structural analysis of four Space Shuttle Main Engine (SSME) space propulsion system components is described. These components are subjected to environments that are influenced by many random variables. The applications consider a wide breadth of uncertainties encountered in practice, while simultaneously covering a wide area of structural mechanics. This has been done consistently with the primary design requirement for each component. The probabilistic application studies are discussed using finite element models that have typically been used in past deterministic analysis studies.

  9. Chaotic Ising-like dynamics in traffic signals

    PubMed Central

    Suzuki, Hideyuki; Imura, Jun-ichi; Aihara, Kazuyuki

    2013-01-01

    The green and red lights of a traffic signal can be viewed as the up and down states of an Ising spin. Moreover, traffic signals in a city interact with each other, if they are controlled in a decentralised way. In this paper, a simple model of such interacting signals on a finite-size two-dimensional lattice is shown to have Ising-like dynamics that undergoes a ferromagnetic phase transition. Probabilistic behaviour of the model is realised by chaotic billiard dynamics that arises from coupled non-chaotic elements. This purely deterministic model is expected to serve as a starting point for considering statistical mechanics of traffic signals. PMID:23350034

  10. Genetic Algorithm Calibration of Probabilistic Cellular Automata for Modeling Mining Permit Activity

    USGS Publications Warehouse

    Louis, S.J.; Raines, G.L.

    2003-01-01

    We use a genetic algorithm to calibrate a spatially and temporally resolved cellular automaton to model mining activity on public land in Idaho and western Montana. The genetic algorithm searches through a space of transition rule parameters of a two-dimensional cellular automaton model to find rule parameters that fit observed mining activity data. Previous work by one of the authors in calibrating the cellular automaton took weeks; the genetic algorithm takes a day and produces rules leading to about the same (or better) fit to observed data. These preliminary results indicate that genetic algorithms are a viable tool for calibrating cellular automata for this application. Experience gained during the calibration of this cellular automaton suggests that mineral resource information is a critical factor in the quality of the results. With automated calibration, further refinements of how the mineral-resource information is provided to the cellular automaton will probably improve our model.
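    A minimal sketch of this calibration loop, assuming rule parameters normalized to [0, 1] and a fitness function that scores agreement between the automaton's output and observed permit activity. The toy objective below merely stands in for that comparison; all names and operator choices (tournament selection, uniform crossover, Gaussian mutation) are illustrative, not taken from the paper.

```python
import random

def calibrate(fitness, n_params, pop_size=30, generations=60, seed=2):
    """Minimal real-coded GA: elitism, 3-way tournament selection,
    uniform crossover, Gaussian mutation clipped to [0, 1]."""
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=fitness, reverse=True)
        nxt = [ranked[0]]                                   # keep the best
        while len(nxt) < pop_size:
            a = max(rng.sample(ranked, 3), key=fitness)     # tournament
            b = max(rng.sample(ranked, 3), key=fitness)
            child = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]
            child = [min(1.0, max(0.0, g + rng.gauss(0.0, 0.1)))
                     if rng.random() < 0.2 else g for g in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Stand-in objective: recover fictitious "observed" rule parameters.
# A real run would instead score the CA's fit to mining-activity maps.
observed = [0.3, 0.7]
best = calibrate(lambda p: -sum((x - o) ** 2 for x, o in zip(p, observed)), 2)
```

    The expensive part in practice is the fitness call, which runs the cellular automaton once per candidate; the day-long runtime quoted above comes from that inner simulation, not from the GA bookkeeping.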

  11. A cellular automaton model for neurogenesis in Drosophila

    NASA Astrophysics Data System (ADS)

    Luthi, Pascal O.; Chopard, Bastien; Preiss, Anette; Ramsden, Jeremy J.

    1998-07-01

    A cellular automaton (CA) is constructed for the formation of the central nervous system of the Drosophila embryo. This is an experimentally well-studied system in which complex interactions between neighbouring cells appear to drive their differentiation into different types. It appears that all the cells initially have the potential to become neuroblasts, and all strive to this end, but those which differentiate first block their as yet undifferentiated neighbours from doing so. The CA makes use of observational evidence for a lateral inhibition mechanism involving signalling products S of the ‘proneural’ or neuralizing genes. The key concept of the model is that cells are continuously producing S, but the production rate is lowered by inhibitory signals received from neighbouring cells which have advanced further along the developmental pathway. Comparison with experimental data shows that the model accounts well for the observed proportion of neuroectodermal cells delaminating as neuroblasts.
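
    A minimal sketch of the lateral-inhibition idea, under illustrative assumptions that are not from the paper: a ring of cells, an invented production/inhibition rate, and permanent blocking of any cell adjacent to a differentiated neighbour.

```python
import random

random.seed(0)

N, THRESHOLD, STEPS = 12, 10.0, 60
s = [random.uniform(0.0, 3.0) for _ in range(N)]  # signalling product S per cell
neuroblast = [False] * N

for _ in range(STEPS):
    for i in range(N):
        if neuroblast[i]:
            continue
        left, right = (i - 1) % N, (i + 1) % N
        if neuroblast[left] or neuroblast[right]:
            continue  # a differentiated neighbour blocks this cell
        # Production slowed by neighbours further along the pathway.
        inhib = max(s[left], s[right]) - s[i]
        rate = max(0.0, 1.0 - 0.2 * max(0.0, inhib))
        s[i] += rate + random.uniform(0.0, 0.1)
        if s[i] >= THRESHOLD:
            neuroblast[i] = True

count = sum(neuroblast)
adjacent = any(neuroblast[i] and neuroblast[(i + 1) % N] for i in range(N))
print(count, adjacent)
```

The blocking rule guarantees that no two adjacent cells differentiate, which is the qualitative signature of lateral inhibition.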

  12. Molecular demultiplexer as a terminator automaton.

    PubMed

    Turan, Ilke S; Gunaydin, Gurcan; Ayan, Seylan; Akkaya, Engin U

    2018-02-23

    Molecular logic gates are expected to play an important role on the way to information-processing therapeutic agents, especially considering the wide variety of physical and chemical responses that they can elicit in response to the inputs applied. Here, we show that a 1:2 demultiplexer based on a Zn²⁺-terpyridine-Bodipy conjugate with a quenched fluorescent emission is efficient in photosensitized singlet oxygen generation, as inferred from trap compound experiments and cell culture data. However, once the singlet oxygen generated by photosensitization triggers an apoptotic response, the Zn²⁺ complex then interacts with the exposed phosphatidylserine lipids in the external leaflet of the membrane bilayer, autonomously switching off singlet oxygen generation and simultaneously switching on a bright emission response. This is the confirmatory signal of cancer cell death by the action of the molecular automaton and of the confinement of unintended damage by excessive singlet oxygen production.

  13. Cellular automaton model for molecular traffic jams

    NASA Astrophysics Data System (ADS)

    Belitsky, V.; Schütz, G. M.

    2011-07-01

    We consider the time evolution of an exactly solvable cellular automaton with random initial conditions both in the large-scale hydrodynamic limit and on the microscopic level. This model is a version of the totally asymmetric simple exclusion process with sublattice parallel update and thus may serve as a model for studying traffic jams in systems of self-driven particles. We study the emergence of shocks from the microscopic dynamics of the model. In particular, we introduce shock measures whose time evolution we can compute explicitly, both in the thermodynamic limit and for open boundaries where a boundary-induced phase transition driven by the motion of a shock occurs. The motion of the shock, which results from the collective dynamics of the exclusion particles, is a random walk with an internal degree of freedom that determines the jump direction. This type of hopping dynamics is reminiscent of some transport phenomena in biological systems.
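
    The sublattice-parallel update can be illustrated with a short deterministic exclusion-process sketch; the ring geometry and pair ordering here are illustrative assumptions. Even substeps update the disjoint pairs (0,1), (2,3), …; odd substeps update (1,2), (3,4), …; a particle hops right within its pair whenever the target site is empty.

```python
import random

def sublattice_step(sites, offset):
    """One sublattice-parallel substep: particles hop right within
    disjoint pairs (i, i+1) starting at `offset` (periodic ring)."""
    n = len(sites)
    out = sites[:]
    for i in range(offset, n, 2):
        j = (i + 1) % n
        if out[i] == 1 and out[j] == 0:
            out[i], out[j] = 0, 1
    return out

random.seed(3)
sites = [1 if random.random() < 0.4 else 0 for _ in range(10)]
n_particles = sum(sites)
for t in range(20):
    sites = sublattice_step(sites, t % 2)
print(sites)
```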

  14. Archaic man meets a marvellous automaton: posthumanism, social robots, archetypes.

    PubMed

    Jones, Raya

    2017-06-01

    Posthumanism is associated with critical explorations of how new technologies are rewriting our understanding of what it means to be human and how they might alter human existence itself. Intersections with analytical psychology vary depending on which technologies are held in focus. Social robotics promises to populate everyday settings with entities that have populated the imagination for millennia. A legend of A Marvellous Automaton appears as early as 350 B.C. in a book of Taoist teachings, and is joined by ancient and medieval legends of manmade humanoids coming to life, as well as the familiar robots of modern science fiction. However, while the robotics industry seems to be realizing an archetypal fantasy, the technology creates new social realities that generate distinctive issues of potential relevance for the theory and practice of analytical psychology. © 2017, The Society of Analytical Psychology.

  15. A High-Performance Cellular Automaton Model of Tumor Growth with Dynamically Growing Domains

    PubMed Central

    Poleszczuk, Jan; Enderling, Heiko

    2014-01-01

    Tumor growth from a single transformed cancer cell up to a clinically apparent mass spans many spatial and temporal orders of magnitude. Implementation of cellular automata simulations of such tumor growth can be straightforward but computing performance often counterbalances simplicity. Computationally convenient simulation times can be achieved by choosing appropriate data structures, memory and cell handling as well as domain setup. We propose a cellular automaton model of tumor growth with a domain that expands dynamically as the tumor population increases. We discuss memory access, data structures and implementation techniques that yield high-performance multi-scale Monte Carlo simulations of tumor growth. We discuss tumor properties that favor the proposed high-performance design and present simulation results of the tumor growth model. We estimate to which parameters the model is the most sensitive, and show that tumor volume depends on a number of parameters in a non-monotonic manner. PMID:25346862
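
    The dynamically growing domain can be sketched as follows: whenever the tumour reaches the lattice boundary, the array is padded with empty cells. The division rule and all parameters are invented for illustration and are far simpler than the paper's multi-scale Monte Carlo model.

```python
import numpy as np

rng = np.random.default_rng(0)
grid = np.zeros((5, 5), dtype=np.uint8)
grid[2, 2] = 1  # single transformed cell

def step(grid, p_divide=0.4):
    """Each tumour cell divides into a random neighbour site with
    probability p_divide if that site is empty (synchronous sweep)."""
    new = grid.copy()
    for x, y in zip(*np.nonzero(grid)):
        if rng.random() < p_divide:
            dx, dy = rng.integers(-1, 2, size=2)
            nx, ny = x + dx, y + dy
            if (0 <= nx < grid.shape[0] and 0 <= ny < grid.shape[1]
                    and new[nx, ny] == 0):
                new[nx, ny] = 1
    # Grow the domain whenever the tumour touches the current boundary.
    if new[0, :].any() or new[-1, :].any() or new[:, 0].any() or new[:, -1].any():
        new = np.pad(new, 2)
    return new

for _ in range(40):
    grid = step(grid)
print(grid.shape, int(grid.sum()))
```

Padding only on demand keeps memory proportional to the current tumour extent, which is the design point the abstract emphasizes.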

  16. The giant acoustic atom - a single quantum system with a deterministic time delay

    NASA Astrophysics Data System (ADS)

    Guo, Lingzhen; Grimsmo, Arne; Frisk Kockum, Anton; Pletyukhov, Mikhail; Johansson, Göran

    2017-04-01

    We investigate the quantum dynamics of a single transmon qubit coupled to surface acoustic waves (SAWs) via two distant connection points. Since the acoustic speed is five orders of magnitude slower than the speed of light, the travelling time between the two connection points needs to be taken into account. Therefore, we treat the transmon qubit as a giant atom with a deterministic time delay. We find that the spontaneous emission of the system, formed by the giant atom and the SAWs between its connection points, initially follows a polynomial decay law instead of an exponential one, as would be the case for a small atom. We obtain exact analytical results for the scattering properties of the giant atom up to two-phonon processes by using a diagrammatic approach. The time delay gives rise to novel features in the reflection, transmission, power spectra, and second-order correlation functions of the system. Furthermore, we find the short-time dynamics of the giant atom for arbitrary drive strength by a numerically exact method for open quantum systems with a finite-time-delay feedback loop. L. G. acknowledges financial support from Carl-Zeiss Stiftung (0563-2.8/508/2).

  17. Uncertainty Quantification of Non-linear Oscillation Triggering in a Multi-injector Liquid-propellant Rocket Combustion Chamber

    NASA Astrophysics Data System (ADS)

    Popov, Pavel; Sideris, Athanasios; Sirignano, William

    2014-11-01

    We examine the non-linear dynamics of the transverse modes of combustion-driven acoustic instability in a liquid-propellant rocket engine. Triggering can occur, whereby small perturbations from mean conditions decay, while larger disturbances grow to a limit-cycle of amplitude that may compare to the mean pressure. For a deterministic perturbation, the system is also deterministic, computed by coupled finite-volume solvers at low computational cost for a single realization. The randomness of the triggering disturbance is captured by treating the injector flow rates, local pressure disturbances, and sudden acceleration of the entire combustion chamber as random variables. The combustor chamber with its many sub-fields resulting from many injector ports may be viewed as a multi-scale complex system wherein the developing acoustic oscillation is the emergent structure. Numerical simulation of the resulting stochastic PDE system is performed using the polynomial chaos expansion method. The overall probability of unstable growth is assessed in different regions of the parameter space. We address, in particular, the seven-injector, rectangular Purdue University experimental combustion chamber. In addition to the novel geometry, new features include disturbances caused by engine acceleration and unsteady thruster nozzle flow.

  18. Scoping analysis of the Advanced Test Reactor using SN2ND

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wolters, E.; Smith, M.; SC)

    2012-07-26

    A detailed set of calculations was carried out for the Advanced Test Reactor (ATR) using the SN2ND solver of the UNIC code, which is part of the SHARP multi-physics code being developed under the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program in DOE-NE. The primary motivation of this work is to assess whether high fidelity deterministic transport codes can tackle coupled dynamics simulations of the ATR. The successful use of such codes in a coupled dynamics simulation can impact what experiments are performed and what power levels are permitted during those experiments at the ATR. The advantages of the SN2ND solver over comparable neutronics tools are its superior parallel performance and demonstrated accuracy on large scale homogeneous and heterogeneous reactor geometries. However, it should be noted that virtually no effort from this project was spent constructing a proper cross section generation methodology for the ATR usable in the SN2ND solver. While attempts were made to use cross section data derived from SCALE, only a minimal number of compositional cross-section sets was generated to be consistent with the reference Monte Carlo input specification. The accuracy of any deterministic transport solver is impacted by such an approach, and clearly it causes substantial errors in this work. The reasoning behind this decision is justified given the overall funding dedicated to the task (two months) and the real focus of the work: can modern deterministic tools actually treat complex facilities like the ATR with heterogeneous geometry modeling? SN2ND has been demonstrated to solve problems with upwards of one trillion degrees of freedom, which translates to tens of millions of finite elements, hundreds of angles, and hundreds of energy groups, resulting in a very high-fidelity model of the system unachievable by most deterministic transport codes today.
A space-angle convergence study was conducted to determine the meshing and angular cubature requirements for the ATR, and also to demonstrate the feasibility of performing this analysis with a deterministic transport code capable of modeling heterogeneous geometries. The work performed indicates that a minimum of 260,000 linear finite elements combined with an L3T11 cubature (96 angles on the sphere) is required for both eigenvalue and flux convergence of the ATR. A critical finding was that the fuel meat and water channels must each be meshed with at least 3 'radial zones' for accurate flux convergence. A small number of 3D calculations were also performed to show axial mesh and eigenvalue convergence for a full core problem. Finally, a brief analysis was performed with different cross-section sets generated from DRAGON and SCALE, and the findings show that more effort will be required to improve the multigroup cross section generation process. The total number of degrees of freedom for a converged 27 group, 2D ATR problem is ≈340 million. This number increases to ≈25 billion for a 3D ATR problem. This scoping study shows that both 2D and 3D calculations are well within the capabilities of the current SN2ND solver, given the availability of a large-scale computing center such as BlueGene/P. However, dynamics calculations are not realistic without the implementation of improvements in the solver.

  19. Modeling Emergent Macrophyte Distributions: Including Sub-dominant Species

    EPA Science Inventory

    Mixed stands of emergent vegetation are often present following drawdowns but models of wetland plant distributions fail to include subdominant species when predicting distributions. Three variations of a spatial plant distribution cellular automaton model were developed to explo...

  20. Coevolutionary dynamics in large, but finite populations

    NASA Astrophysics Data System (ADS)

    Traulsen, Arne; Claussen, Jens Christian; Hauert, Christoph

    2006-07-01

    Coevolving and competing species or game-theoretic strategies exhibit rich and complex dynamics for which a general theoretical framework based on finite populations is still lacking. Recently, an explicit mean-field description in the form of a Fokker-Planck equation was derived for frequency-dependent selection with two strategies in finite populations based on microscopic processes [A. Traulsen, J. C. Claussen, and C. Hauert, Phys. Rev. Lett. 95, 238701 (2005)]. Here we generalize this approach in a twofold way: First, we extend the framework to an arbitrary number of strategies and second, we allow for mutations in the evolutionary process. The deterministic limit of infinite population size of the frequency-dependent Moran process yields the adjusted replicator-mutator equation, which describes the combined effect of selection and mutation. For finite populations, we provide an extension taking random drift into account. In the limit of neutral selection, i.e., whenever the process is determined by random drift and mutations, the stationary strategy distribution is derived. This distribution forms the background for the coevolutionary process. In particular, a critical mutation rate u_c is obtained separating two scenarios: above u_c the population predominantly consists of a mixture of strategies whereas below u_c the population tends to be in homogeneous states. For one of the fundamental problems in evolutionary biology, the evolution of cooperation under Darwinian selection, we demonstrate that the analytical framework provides excellent approximations to individual based simulations even for rather small population sizes. This approach complements simulation results and provides a deeper, systematic understanding of coevolutionary dynamics.
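
    A minimal frequency-dependent Moran process with mutation can be sketched as below. The payoff matrix, selection intensity, and mutation rate are invented for illustration; the paper's analytical framework covers arbitrary strategy numbers.

```python
import random

random.seed(7)

# Payoff matrix for a two-strategy game (hypothetical values).
payoff = {("A", "A"): 3, ("A", "B"): 0, ("B", "A"): 5, ("B", "B"): 1}
N, MU, STEPS = 50, 0.01, 2000
pop = ["A"] * 25 + ["B"] * 25

def avg_payoff(s):
    """Average payoff of strategy s against the current population."""
    return sum(payoff[(s, other)] for other in pop) / len(pop)

for _ in range(STEPS):
    # Moran step: birth proportional to fitness, death uniform at random.
    weights = [1.0 + 0.1 * avg_payoff(s) for s in pop]  # weak selection
    parent = random.choices(pop, weights=weights)[0]
    # With probability MU the offspring mutates to a random strategy.
    child = parent if random.random() > MU else random.choice(["A", "B"])
    pop[random.randrange(N)] = child

print(pop.count("A"), pop.count("B"))
```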

  1. A SIMPLE CELLULAR AUTOMATON MODEL FOR HIGH-LEVEL VEGETATION DYNAMICS

    EPA Science Inventory

    We have produced a simple two-dimensional (ground-plan) cellular automaton model of vegetation dynamics specifically to investigate high-level community processes. The model is probabilistic, with individual plant behavior determined by physiologically-based rules derived from a w...

  2. Finite Element Analysis of Reverberation Chambers

    NASA Technical Reports Server (NTRS)

    Bunting, Charles F.; Nguyen, Duc T.

    2000-01-01

    The primary motivating factor behind the initiation of this work was to provide a deterministic means of establishing the validity of the statistical methods that are recommended for the determination of fields that interact in an avionics system. The application of finite element analysis to reverberation chambers is the initial step required to establish a reasonable course of inquiry in this particularly data-intensive study. The use of computational electromagnetics provides a high degree of control of the "experimental" parameters that can be utilized in a simulation of reverberating structures. As the work evolved, there were four primary focus areas: 1. The eigenvalue problem for the source free problem. 2. The development of a complex efficient eigensolver. 3. The application of a source for the TE and TM fields for statistical characterization. 4. The examination of shielding effectiveness in a reverberating environment. One early purpose of this work was to establish the utility of finite element techniques in the development of an extended low frequency statistical model for reverberation phenomena. By employing finite element techniques, structures of arbitrary complexity can be analyzed due to the use of triangular shape functions in the spatial discretization. The effects of both frequency stirring and mechanical stirring are presented. It is suggested that for low frequency operation the typical tuner size is inadequate to provide a sufficiently random field and that frequency stirring should be used. The results of the finite element analysis of the reverberation chamber illustrate the potential utility of a 2D representation for enhancing the basic statistical characteristics of the chamber when operating in a low frequency regime. The basic field statistics are verified for frequency stirring over a wide range of frequencies. Mechanical stirring is shown to provide an effective frequency deviation.

  3. Discrete Event Supervisory Control Applied to Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan S.; Shah, Neerav

    2005-01-01

    The theory of discrete event supervisory (DES) control was applied to the optimal control of a twin-engine aircraft propulsion system and demonstrated in a simulation. The supervisory control, which is implemented as a finite-state automaton, oversees the behavior of a system and manages it in such a way that it maximizes a performance criterion, similar to a traditional optimal control problem. DES controllers can be nested such that a high-level controller supervises multiple lower level controllers. This structure can be expanded to control huge, complex systems, providing optimal performance and increasing autonomy with each additional level. The DES control strategy for propulsion systems was validated using a distributed testbed consisting of multiple computers--each representing a module of the overall propulsion system--to simulate real-time hardware-in-the-loop testing. In the first experiment, DES control was applied to the operation of a nonlinear simulation of a turbofan engine (running in closed loop using its own feedback controller) to minimize engine structural damage caused by a combination of thermal and structural loads. This enables increased on-wing time for the engine through better management of the engine-component life usage. Thus, the engine-level DES acts as a life-extending controller through its interaction with and manipulation of the engine's operation.
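
    The supervisory idea, a finite-state automaton that disables events which would violate a damage or performance criterion, can be sketched as follows. The states, events, and load figures are hypothetical, not NASA's actual controller.

```python
# Hypothetical engine operating modes and controllable events; the
# supervisor permits only transitions keeping cumulative load in bounds.
TRANSITIONS = {
    ("idle", "throttle_up"): "cruise",
    ("cruise", "throttle_up"): "max_power",
    ("cruise", "throttle_down"): "idle",
    ("max_power", "throttle_down"): "cruise",
}
LOAD = {"idle": 0, "cruise": 1, "max_power": 3}  # invented damage cost per visit

def supervise(events, load_limit=10):
    """Run the automaton over an event stream, disabling any event that is
    undefined in the current state or would exceed the load limit."""
    state, load, accepted = "idle", 0, []
    for ev in events:
        nxt = TRANSITIONS.get((state, ev))
        if nxt is None or load + LOAD[nxt] > load_limit:
            continue  # supervisor disables this event
        state, load = nxt, load + LOAD[nxt]
        accepted.append(ev)
    return state, load, accepted

state, load, accepted = supervise(
    ["throttle_up", "throttle_up", "throttle_up", "throttle_down"])
print(state, load, accepted)
```

The third `throttle_up` is silently disabled because no transition is defined from `max_power`; this event-disabling is the core supervisory mechanism.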

  4. Anderson transition in a three-dimensional kicked rotor

    NASA Astrophysics Data System (ADS)

    Wang, Jiao; García-García, Antonio M.

    2009-03-01

    We investigate Anderson localization in a three-dimensional (3D) kicked rotor. By a finite-size scaling analysis we identify a mobility edge for a certain value of the kicking strength k = k_c. For k > k_c dynamical localization does not occur, all eigenstates are delocalized and the spectral correlations are well described by Wigner-Dyson statistics. This can be understood by mapping the kicked rotor problem onto a 3D Anderson model (AM) where a band of metallic states exists for sufficiently weak disorder. Around the critical region k ≈ k_c we carry out a detailed study of the level statistics and quantum diffusion. In agreement with the predictions of the one parameter scaling theory (OPT) and with previous numerical simulations, the number variance is linear, level repulsion is still observed, and quantum diffusion is anomalous with ⟨p²⟩ ∝ t^{2/3}. We note that in the 3D kicked rotor the dynamics is not random but deterministic. In order to estimate the differences between these two situations we have studied a 3D kicked rotor in which the kinetic term of the associated evolution matrix is random. A detailed numerical comparison shows that the differences between the two cases are relatively small. However in the deterministic case only a small set of irrational periods was used. A qualitative analysis of a much larger set suggests that deviations between the random and the deterministic kicked rotor can be important for certain choices of periods. Heuristically it is expected that localization effects will be weaker in a nonrandom potential since destructive interference will be less effective to arrest quantum diffusion. However we have found that certain choices of irrational periods enhance Anderson localization effects.

  5. Infinite horizon optimal impulsive control with applications to Internet congestion control

    NASA Astrophysics Data System (ADS)

    Avrachenkov, Konstantin; Habachi, Oussama; Piunovskiy, Alexey; Zhang, Yi

    2015-04-01

    We investigate infinite-horizon deterministic optimal control problems with both gradual and impulsive controls, where any finitely many impulses are allowed simultaneously. Both discounted and long-run time-average criteria are considered. We establish very general and at the same time natural conditions, under which the dynamic programming approach results in an optimal feedback policy. The established theoretical results are applied to the Internet congestion control, and by solving analytically and nontrivially the underlying optimal control problems, we obtain a simple threshold-based active queue management scheme, which takes into account the main parameters of the transmission control protocols, and improves the fairness among the connections in a given network.

  6. Intelligent Text Retrieval and Knowledge Acquisition from Texts for NASA Applications: Preprocessing Issues

    NASA Technical Reports Server (NTRS)

    2002-01-01

    A system that retrieves problem reports from a NASA database is described. The database is queried with natural language questions. Part-of-speech tags are first assigned to each word in the question using a rule-based tagger. A partial parse of the question is then produced with independent sets of deterministic finite-state automata. Using partial-parse information, a look-up strategy searches the database for problem reports relevant to the question. A bigram stemmer and irregular verb conjugates have been incorporated into the system to improve accuracy. The system is evaluated on a set of fifty-five questions posed by NASA engineers. A discussion of future research is also presented.
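
    A deterministic finite-state automaton for partial parsing can be written as a transition table over part-of-speech tags. The tag set and the noun-phrase pattern below (optional determiner, any adjectives, one or more nouns) are illustrative assumptions, not the system's actual grammar.

```python
# DFA over POS tags recognising a simple noun-phrase pattern:
# optional determiner (DT), any adjectives (JJ), one or more nouns (NN).
DFA = {
    ("q0", "DT"): "q1",
    ("q0", "JJ"): "q1",
    ("q0", "NN"): "q2",
    ("q1", "JJ"): "q1",
    ("q1", "NN"): "q2",
    ("q2", "NN"): "q2",
}
ACCEPT = {"q2"}

def accepts(tags):
    """Return True iff the tag sequence drives the DFA to an accept state."""
    state = "q0"
    for t in tags:
        state = DFA.get((state, t))
        if state is None:       # no transition: reject deterministically
            return False
    return state in ACCEPT

print(accepts(["DT", "JJ", "NN"]))   # e.g. "the faulty valve"
print(accepts(["DT", "JJ"]))         # incomplete phrase
```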

  7. Construction and comparison of parallel implicit kinetic solvers in three spatial dimensions

    NASA Astrophysics Data System (ADS)

    Titarev, Vladimir; Dumbser, Michael; Utyuzhnikov, Sergey

    2014-01-01

    The paper is devoted to the further development and systematic performance evaluation of a recent deterministic framework Nesvetay-3D for modelling three-dimensional rarefied gas flows. Firstly, a review of the existing discretization and parallelization strategies for solving numerically the Boltzmann kinetic equation with various model collision integrals is carried out. Secondly, a new parallelization strategy for the implicit time evolution method is implemented which improves scaling on large CPU clusters. Accuracy and scalability of the methods are demonstrated on a pressure-driven rarefied gas flow through a finite-length circular pipe as well as an external supersonic flow over a three-dimensional re-entry geometry of complicated aerodynamic shape.

  8. Energy-tunable sources of entangled photons: a viable concept for solid-state-based quantum relays.

    PubMed

    Trotta, Rinaldo; Martín-Sánchez, Javier; Daruka, Istvan; Ortix, Carmine; Rastelli, Armando

    2015-04-17

    We propose a new method of generating triggered entangled photon pairs with wavelength on demand. The method uses a microstructured semiconductor-piezoelectric device capable of dynamically reshaping the electronic properties of self-assembled quantum dots (QDs) via anisotropic strain engineering. Theoretical models based on k·p theory in combination with finite-element calculations show that the energy of the polarization-entangled photons emitted by QDs can be tuned in a range larger than 100 meV without affecting the degree of entanglement of the quantum source. These results pave the way towards the deterministic implementation of QD entanglement resources in all-electrically-controlled solid-state-based quantum relays.

  9. Energy-Tunable Sources of Entangled Photons: A Viable Concept for Solid-State-Based Quantum Relays

    NASA Astrophysics Data System (ADS)

    Trotta, Rinaldo; Martín-Sánchez, Javier; Daruka, Istvan; Ortix, Carmine; Rastelli, Armando

    2015-04-01

    We propose a new method of generating triggered entangled photon pairs with wavelength on demand. The method uses a microstructured semiconductor-piezoelectric device capable of dynamically reshaping the electronic properties of self-assembled quantum dots (QDs) via anisotropic strain engineering. Theoretical models based on k·p theory in combination with finite-element calculations show that the energy of the polarization-entangled photons emitted by QDs can be tuned in a range larger than 100 meV without affecting the degree of entanglement of the quantum source. These results pave the way towards the deterministic implementation of QD entanglement resources in all-electrically-controlled solid-state-based quantum relays.

  10. Finite-size scaling in the system of coupled oscillators with heterogeneity in coupling strength

    NASA Astrophysics Data System (ADS)

    Hong, Hyunsuk

    2017-07-01

    We consider a mean-field model of coupled phase oscillators with random heterogeneity in the coupling strength. The system that we investigate here is a minimal model that contains randomness in diverse values of the coupling strength, and it is found to return to the original Kuramoto model [Y. Kuramoto, Prog. Theor. Phys. Suppl. 79, 223 (1984), 10.1143/PTPS.79.223] when the coupling heterogeneity disappears. According to one recent paper [H. Hong, H. Chaté, L.-H. Tang, and H. Park, Phys. Rev. E 92, 022122 (2015), 10.1103/PhysRevE.92.022122], when the natural frequency of the oscillator in the system is "deterministically" chosen, with no randomness in it, the system is found to exhibit the finite-size scaling exponent ν̄ = 5/4. Also, the critical exponent for the dynamic fluctuation of the order parameter is found to be given by γ = 1/4, which is different from the critical exponents for the Kuramoto model with the natural frequencies randomly chosen. Originally, the unusual finite-size scaling behavior of the Kuramoto model was reported by Hong et al. [H. Hong, H. Chaté, H. Park, and L.-H. Tang, Phys. Rev. Lett. 99, 184101 (2007), 10.1103/PhysRevLett.99.184101], where the scaling behavior is found to be characterized by the unusual exponent ν̄ = 5/2. On the other hand, if the randomness in the natural frequency is removed, it is found that the finite-size scaling behavior is characterized by a different exponent, ν̄ = 5/4 [H. Hong, H. Chaté, L.-H. Tang, and H. Park, Phys. Rev. E 92, 022122 (2015), 10.1103/PhysRevE.92.022122]. Those findings brought about our curiosity and led us to explore the effects of the randomness on the finite-size scaling behavior. In this paper, we pay particular attention to investigating the finite-size scaling and dynamic fluctuation when the randomness in the coupling strength is considered.
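
    The mean-field model with heterogeneous coupling and deterministically chosen natural frequencies can be sketched with forward-Euler integration; the coupling range, time step, and system size below are illustrative choices, not the paper's.

```python
import cmath
import math
import random

random.seed(2)
N, DT, STEPS = 200, 0.05, 600
# Natural frequencies chosen deterministically on a uniform grid.
omega = [-0.5 + (i + 0.5) / N for i in range(N)]
# Random heterogeneity in the coupling strength, one K_i per oscillator.
K = [random.uniform(1.5, 2.5) for _ in range(N)]
theta = [random.uniform(0.0, 2.0 * math.pi) for _ in range(N)]

for _ in range(STEPS):
    z = sum(cmath.exp(1j * t) for t in theta) / N   # complex order parameter
    r, psi = abs(z), cmath.phase(z)
    # Mean-field Kuramoto update: dθ_i/dt = ω_i + K_i r sin(ψ - θ_i).
    theta = [theta[i] + DT * (omega[i] + K[i] * r * math.sin(psi - theta[i]))
             for i in range(N)]

r_final = abs(sum(cmath.exp(1j * t) for t in theta) / N)
print(round(r_final, 2))
```

With mean coupling well above the synchronization threshold for this frequency spread, the order parameter r settles near 1.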

  11. Intelligent traffic signals : extending the range of self-organization in the BML model.

    DOT National Transportation Integrated Search

    2013-04-01

    The two-dimensional traffic model of Biham, Middleton and Levine (Phys. Rev. A, 1992) is a simple cellular automaton that exhibits a wide range of complex behavior. It consists of both northbound and eastbound cars traveling on a rectangular arra...
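
    A minimal BML update rule can be sketched as below, assuming a periodic lattice with alternating east/north substeps; lattice size and density are arbitrary choices.

```python
import random

random.seed(4)
N = 12
# 0 = empty, 1 = eastbound car, 2 = northbound car (~40% occupancy).
grid = [[random.choice([0, 0, 0, 1, 2]) for _ in range(N)] for _ in range(N)]

def move(grid, kind, dr, dc):
    """All cars of one kind try to advance simultaneously; a car moves
    only if its target site was empty in the previous configuration."""
    new = [[0 if c == kind else c for c in row] for row in grid]
    for r in range(N):
        for c in range(N):
            if grid[r][c] == kind:
                nr, nc = (r + dr) % N, (c + dc) % N
                if grid[nr][nc] == 0 and new[nr][nc] == 0:
                    new[nr][nc] = kind
                else:
                    new[r][c] = kind  # blocked: stay put
    return new

cars = sum(c != 0 for row in grid for c in row)
for t in range(50):
    grid = move(grid, 1, 0, 1)   # even substep: eastbound cars move right
    grid = move(grid, 2, -1, 0)  # odd substep: northbound cars move up
print(cars)
```

Depending on density, runs of this rule either self-organize into free-flowing diagonal stripes or lock into a global jam, which is the phenomenon the record studies.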

  12. Steampunk: Full Steam Ahead

    ERIC Educational Resources Information Center

    Campbell, Heather M.

    2010-01-01

    Steam-powered machines, anachronistic technology, clockwork automatons, gas-filled airships, tentacled monsters, fob watches, and top hats--these are all elements of steampunk. Steampunk is both speculative fiction that imagines technology evolved from steam-powered cogs and gears--instead of from electricity and computers--and a movement that…

  13. 1/f Noise in the "Game of Life"

    NASA Astrophysics Data System (ADS)

    Andrecut, Mircea

    Conway's celebrated "Game of Life" cellular automaton possesses computational universality. The Fourier analysis reported here shows that the power spectra of the "Game of Life" exhibit 1/f noise. The obtained result suggests a connection between 1/f noise and computational universality.
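
    One way to reproduce this kind of analysis is to record the total activity of a Game of Life run and take the power spectrum of that time series; the grid size and run length below are arbitrary choices, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(5)
grid = rng.integers(0, 2, size=(32, 32), dtype=np.uint8)

def life_step(g):
    """One synchronous Game of Life update on a toroidal grid."""
    n = sum(np.roll(np.roll(g, dr, 0), dc, 1)
            for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0))
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return ((n == 3) | ((g == 1) & (n == 2))).astype(np.uint8)

activity = []
for _ in range(200):
    grid = life_step(grid)
    activity.append(int(grid.sum()))

# Power spectrum of the (mean-subtracted) activity signal.
spectrum = np.abs(np.fft.rfft(np.array(activity) - np.mean(activity))) ** 2
print(len(activity), spectrum.shape)
```

Fitting the low-frequency part of `spectrum` against 1/f on a log-log scale is then a short extra step.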

  14. [History of robotics: from Archytas of Tarentum until da Vinci robot. (Part I)].

    PubMed

    Sánchez Martín, F M; Millán Rodríguez, F; Salvador Bayarri, J; Palou Redorta, J; Rodríguez Escovar, F; Esquena Fernández, S; Villavicencio Mavrich, H

    2007-02-01

    Robotic surgery is the newest technological option in urology. To understand how the new robots work, it is interesting to know their history. The desire to design machines imitating humans has continued for more than 4000 years. There are references to King-su Tse (classical China) building automatons around 500 B.C. Archytas of Tarentum (around 400 B.C.) is considered the father of mechanical engineering and one of the classic Western references in robotics. Heron of Alexandria, Hsieh-Fec, Al-Jazari, Roger Bacon, Juanelo Turriano, Leonardo da Vinci, Vaucanson and von Kempelen were robot inventors of the Middle Ages, the Renaissance and classicism. In the 19th century, automaton production peaked and all branches of engineering underwent great development. In 1942 Asimov published the three laws of robotics, based on advances in mechanics, electronics and informatics. In the 20th century, robots able to perform very complex self-governing work were developed, such as the da Vinci Surgical System (Intuitive Surgical Inc., Sunnyvale, CA, USA), a very sophisticated robot to assist surgeons.

  15. A parallelized three-dimensional cellular automaton model for grain growth during additive manufacturing

    NASA Astrophysics Data System (ADS)

    Lian, Yanping; Lin, Stephen; Yan, Wentao; Liu, Wing Kam; Wagner, Gregory J.

    2018-05-01

    In this paper, a parallelized 3D cellular automaton computational model is developed to predict grain morphology for solidification of metal during the additive manufacturing process. Solidification phenomena are characterized by highly localized events, such as the nucleation and growth of multiple grains. As a result, parallelization requires careful treatment of load balancing between processors as well as interprocess communication in order to maintain a high parallel efficiency. We give a detailed summary of the formulation of the model, as well as a description of the communication strategies implemented to ensure parallel efficiency. Scaling tests on a representative problem with about half a billion cells demonstrate parallel efficiency of more than 80% on 8 processors and around 50% on 64; loss of efficiency is attributable to load imbalance due to near-surface grain nucleation in this test problem. The model is further demonstrated through an additive manufacturing simulation with resulting grain structures showing reasonable agreement with those observed in experiments.

  16. Emergent 1d Ising Behavior in AN Elementary Cellular Automaton Model

    NASA Astrophysics Data System (ADS)

    Kassebaum, Paul G.; Iannacchione, Germano S.

    The fundamental nature of an evolving one-dimensional (1D) Ising model is investigated with an elementary cellular automaton (CA) simulation. The emergent CA simulation employs an ensemble of cells in one spatial dimension, each cell capable of two microstates interacting with simple nearest-neighbor rules and incorporating an external field. The behavior of the CA model provides insight into the dynamics of coupled two-state systems not expressible by exact analytical solutions. For instance, state progression graphs show the causal dynamics of a system through time in relation to the system's entropy. Unique graphical analysis techniques are introduced through difference patterns, diffusion patterns, and state progression graphs of the 1D ensemble visualizing the evolution. All analyses are consistent with the known behavior of the 1D Ising system. The CA simulation and new pattern recognition techniques are scalable (in dimension, complexity, and size) and have many potential applications such as complex design of materials, control of agent systems, and evolutionary mechanism design.
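
    The abstract does not specify its CA rule in detail, so the sketch below uses a standard stochastic stand-in for the same physics: single-spin heat-bath (Glauber) dynamics for a 1D Ising chain with nearest-neighbor coupling and an external field. All parameter values are illustrative.

```python
import math
import random

random.seed(6)

N, J, H, T, STEPS = 100, 1.0, 0.2, 1.5, 20000
spins = [random.choice([-1, 1]) for _ in range(N)]

for _ in range(STEPS):
    i = random.randrange(N)
    # Local field: nearest neighbours (periodic chain) plus external field.
    h_loc = J * (spins[i - 1] + spins[(i + 1) % N]) + H
    # Heat-bath rule: set the spin up with its Boltzmann probability.
    p_up = 1.0 / (1.0 + math.exp(-2.0 * h_loc / T))
    spins[i] = 1 if random.random() < p_up else -1

m = sum(spins) / N  # magnetisation per spin
print(round(m, 2))
```

Recording `spins` after every sweep gives exactly the kind of state-progression data the paper visualizes with difference and diffusion patterns.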

  17. A parallelized three-dimensional cellular automaton model for grain growth during additive manufacturing

    NASA Astrophysics Data System (ADS)

    Lian, Yanping; Lin, Stephen; Yan, Wentao; Liu, Wing Kam; Wagner, Gregory J.

    2018-01-01

    In this paper, a parallelized 3D cellular automaton computational model is developed to predict grain morphology for solidification of metal during the additive manufacturing process. Solidification phenomena are characterized by highly localized events, such as the nucleation and growth of multiple grains. As a result, parallelization requires careful treatment of load balancing between processors as well as interprocess communication in order to maintain a high parallel efficiency. We give a detailed summary of the formulation of the model, as well as a description of the communication strategies implemented to ensure parallel efficiency. Scaling tests on a representative problem with about half a billion cells demonstrate parallel efficiency of more than 80% on 8 processors and around 50% on 64; loss of efficiency is attributable to load imbalance due to near-surface grain nucleation in this test problem. The model is further demonstrated through an additive manufacturing simulation with resulting grain structures showing reasonable agreement with those observed in experiments.
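
    The interprocess communication this kind of parallel CA requires is typically a ghost-cell (halo) exchange between subdomains. A minimal serial emulation of that idea in 1D (array sizes are illustrative; a real code would send the boundary layers via MPI):

```python
import numpy as np

def exchange_halos(subdomains):
    """Fill 1-cell ghost layers from the neighbouring subdomain (periodic)."""
    n = len(subdomains)
    for i, sub in enumerate(subdomains):
        left = subdomains[(i - 1) % n]
        right = subdomains[(i + 1) % n]
        sub[0] = left[-2]    # left ghost  <- neighbour's last interior cell
        sub[-1] = right[1]   # right ghost <- neighbour's first interior cell

# Split a 1D state of 8 interior cells across 2 "ranks", each holding
# 4 interior cells plus 2 ghost cells.
global_state = np.arange(8)
subs = [np.empty(6, dtype=int), np.empty(6, dtype=int)]
subs[0][1:-1] = global_state[:4]
subs[1][1:-1] = global_state[4:]
exchange_halos(subs)
```

    After the exchange each rank can apply the CA rule to its interior cells using only local data, which is what keeps communication cost low relative to computation.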

  18. Macroscopic Spatial Complexity of the Game of Life Cellular Automaton: A Simple Data Analysis

    NASA Astrophysics Data System (ADS)

    Hernández-Montoya, A. R.; Coronel-Brizio, H. F.; Rodríguez-Achach, M. E.

    In this chapter we present a simple data analysis of an ensemble of 20 time series, generated by averaging the spatial positions of the living cells for each state of the Game of Life cellular automaton (GoL). We show that the complexity properties of GoL are also present at the macroscopic level described by these time series, and that emergent properties typical of data extracted from complex systems, such as financial or economic ones, come out: variations of the generated time series follow an asymptotic power-law distribution; large fluctuations tend to be followed by large fluctuations, and small fluctuations by small ones; and linear correlations decay fast, while the correlations of the absolute variations exhibit long-range memory. Finally, a Detrended Fluctuation Analysis (DFA) of the generated time series indicates that the GoL spatial macrostates described by the time series are neither completely ordered nor random, in a measurable and very interesting way.
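
    The detrended fluctuation analysis used in the final step can be sketched as follows. This is textbook DFA-1 applied to a synthetic white-noise series (where the exponent should be near 0.5), not the authors' implementation:

```python
import numpy as np

def dfa_exponent(x, scales=(8, 16, 32, 64)):
    """DFA-1: integrate the series, remove a linear trend in windows of
    each size s, and regress log F(s) on log s to get the exponent."""
    y = np.cumsum(x - np.mean(x))
    fluct = []
    for s in scales:
        n_win = len(y) // s
        rms = []
        for w in range(n_win):
            seg = y[w * s:(w + 1) * s]
            t = np.arange(s)
            coeffs = np.polyfit(t, seg, 1)        # local linear trend
            rms.append(np.sqrt(np.mean((seg - np.polyval(coeffs, t)) ** 2)))
        fluct.append(np.mean(rms))
    return np.polyfit(np.log(scales), np.log(fluct), 1)[0]

rng = np.random.default_rng(1)
alpha_noise = dfa_exponent(rng.standard_normal(4096))  # ~0.5 for white noise
```

    Exponents near 0.5 indicate uncorrelated fluctuations; values between 0.5 and 1 indicate the kind of long-range memory reported for the absolute variations.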

  19. A cellular automaton method to simulate the microstructure and evolution of low-enriched uranium (LEU) U-Mo/Al dispersion type fuel plates

    NASA Astrophysics Data System (ADS)

    Drera, Saleem S.; Hofman, Gerard L.; Kee, Robert J.; King, Jeffrey C.

    2014-10-01

    Low-enriched uranium (LEU) fuel plates for high power materials test reactors (MTR) are composed of nominally spherical uranium-molybdenum (U-Mo) particles within an aluminum matrix. Fresh U-Mo particles typically range between 10 and 100 μm in diameter, with particle volume fractions up to 50%. As the fuel ages, reaction-diffusion processes cause the formation and growth of interaction layers that surround the fuel particles. The growth rate depends upon the temperature and radiation environment. The cellular automaton algorithm described in this paper can synthesize realistic random fuel-particle structures and simulate the growth of the intermetallic interaction layers. Examples in the present paper pack approximately 1000 particles into three-dimensional rectangular fuel structures that are approximately 1 mm on each side. The computational approach is designed to yield synthetic microstructures consistent with images from actual fuel plates and is validated by comparison with empirical data on actual fuel plates.

  20. Cellular automaton for migration in ecosystem: Application of traffic model to a predator-prey system

    NASA Astrophysics Data System (ADS)

    Nagatani, Takashi; Tainaka, Kei-ichi

    2018-01-01

    In most cases, physicists have studied the migration of biospecies using random walks. In the present article, we instead apply a cellular automaton based on a traffic model. For simplicity, we deal with an ecosystem containing one prey and one predator species, and use a one-dimensional lattice with two layers. Prey stay on the first layer, while predators move unidirectionally on the second layer. The spatial and temporal evolution is numerically explored. It is shown that migration has an important effect on the populations of both prey and predator. Without migration, a phase transition between a prey-only phase and a coexisting phase occurs. In contrast, this phase transition disappears with migration, because predators can survive by migrating. We find another phase transition for the spatial distribution: in one phase, prey and predators form a stripe pattern of condensation and rarefaction, while in the other, they are uniformly distributed. The self-organized stripes may be similar to migration patterns in real ecosystems.

  1. A New Cellular Automaton Method Coupled with a Rate-dependent (CARD) Model for Predicting Dynamic Recrystallization Behavior

    NASA Astrophysics Data System (ADS)

    Azarbarmas, M.; Aghaie-Khafri, M.

    2018-03-01

    A comprehensive cellular automaton (CA) model should be coupled with a rate-dependent (RD) model for analyzing the RD deformation of alloys at high temperatures. In the present study, a new CA technique coupled with an RD model, namely CARD, was developed. The proposed CARD model was used to simulate the dynamic recrystallization phenomenon during hot deformation of the Inconel 718 superalloy. This model is capable of calculating the mean grain size and the volume fraction of dynamically recrystallized grains, and of estimating the phenomenological flow behavior of the material. In the presented model, an actual orientation definition comprising three Euler angles was implemented using electron backscatter diffraction data. For calculating the lattice rotation of grains, it was assumed that all slip systems of the grains are active during high-temperature deformation because of the intrinsic rate dependency of the process. Moreover, the morphological changes in grains were obtained using a topological module.

  2. 3D simulation of friction stir welding based on movable cellular automaton method

    NASA Astrophysics Data System (ADS)

    Eremina, Galina M.

    2017-12-01

    The paper is devoted to a 3D computer simulation of the peculiarities of material flow in friction stir welding (FSW). The simulation was performed by the movable cellular automaton (MCA) method, a representative of particle methods in mechanics. Commonly, the flow of material in FSW is simulated using computational fluid mechanics, treating the material as a continuum and ignoring its structure. The MCA method instead considers a material as an ensemble of bonded particles. The rupture of interparticle bonds and the formation of new bonds enable simulations of crack nucleation and healing as well as mass mixing and microwelding. The simulation results showed that using pins of simple shape (cylinder, cone, and pyramid) without a shoulder results in small displacements of plasticized material in the workpiece thickness direction. Nevertheless, an optimal ratio of longitudinal velocity to rotational speed makes it possible to transport the welded material around the pin several times and to produce a joint of good quality.

  3. Computational complexity of symbolic dynamics at the onset of chaos

    NASA Astrophysics Data System (ADS)

    Lakdawala, Porus

    1996-05-01

    In a variety of studies of dynamical systems, the edge of order and chaos has been singled out as a region of complexity. It was suggested by Wolfram, on the basis of qualitative behavior of cellular automata, that the computational basis for modeling this region is the universal Turing machine. In this paper, following a suggestion of Crutchfield, we try to show that the Turing machine model may often be too powerful as a computational model to describe the boundary of order and chaos. In particular we study the region of the first accumulation of period doubling in unimodal and bimodal maps of the interval, from the point of view of language theory. We show that in relation to the ``extended'' Chomsky hierarchy, the relevant computational model in the unimodal case is the nested stack automaton or the related indexed languages, while the bimodal case is modeled by the linear bounded automaton or the related context-sensitive languages.

  4. On the limits of probabilistic forecasting in nonlinear time series analysis II: Differential entropy.

    PubMed

    Amigó, José M; Hirata, Yoshito; Aihara, Kazuyuki

    2017-08-01

    In a previous paper, the authors studied the limits of probabilistic prediction in nonlinear time series analysis in a perfect model scenario, i.e., in the ideal case that the uncertainty of an otherwise deterministic model is due to only the finite precision of the observations. The model consisted of the symbolic dynamics of a measure-preserving transformation with respect to a finite partition of the state space, and the quality of the predictions was measured by the so-called ignorance score, which is a conditional entropy. In practice, though, partitions are dispensed with by considering numerical and experimental data to be continuous, which prompts us to trade off in this paper the Shannon entropy for the differential entropy. Despite technical differences, we show that the core of the previous results also hold in this extended scenario for sufficiently high precision. The corresponding imperfect model scenario will be revisited too because it is relevant for the applications. The theoretical part and its application to probabilistic forecasting are illustrated with numerical simulations and a new prediction algorithm.

  5. Simulation of 2D rarefied gas flows based on the numerical solution of the Boltzmann equation

    NASA Astrophysics Data System (ADS)

    Poleshkin, Sergey O.; Malkov, Ewgenij A.; Kudryavtsev, Alexey N.; Shershnev, Anton A.; Bondar, Yevgeniy A.; Kohanchik, A. A.

    2017-10-01

    There are various methods for calculating rarefied gas flows, in particular statistical methods and deterministic methods based on finite-difference solutions of the nonlinear Boltzmann kinetic equation and on solutions of model kinetic equations. There is no universal method; each has its disadvantages in terms of efficiency or accuracy. The choice of method depends on the problem to be solved and on the parameters of the calculated flows. Qualitative theoretical arguments help to determine, for each method, the range of parameters over which it solves problems effectively; however, to confirm this reasoning quantitatively, it is advisable to perform comparative calculations of classical problems using different methods and different parameters. The paper provides the results of calculations performed by the authors with the Direct Simulation Monte Carlo method and with finite-difference methods for solving the Boltzmann equation and model kinetic equations. Based on this comparison, conclusions are drawn on selecting a particular method for flow simulations in various ranges of flow parameters.

  6. Figures of merit for detectors in digital radiography. II. Finite number of secondaries and structured backgrounds.

    PubMed

    Pineda, Angel R; Barrett, Harrison H

    2004-02-01

    The current paradigm for evaluating detectors in digital radiography relies on Fourier methods, which assume a shift-invariant and statistically stationary description of the imaging system. The theoretical justification for the use of Fourier methods is based on a uniform background fluence and an infinite detector. In practice, the background fluence is not uniform and the detector size is finite. We study the effect of stochastic blurring and structured backgrounds on the correlation between Fourier-based figures of merit and Hotelling detectability. A stochastic model of the blurring leads to behavior similar to what is observed by adding electronic noise to the deterministic blurring model. Background structure does away with shift invariance, and anatomical variation makes the covariance matrix of the data less amenable to Fourier methods by introducing long-range correlations. It is desirable to have figures of merit that can account for all the sources of variation, some of which are not stationary. For such cases, we show that the commonly used figures of merit based on the discrete Fourier transform can provide an inaccurate estimate of Hotelling detectability.
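
    The Hotelling detectability the comparison turns on is SNR^2 = s^T K^-1 s, where s is the mean signal difference and K the data covariance; long-range (anatomical) correlations enter through the off-diagonal terms of K. A small numerical sketch with an illustrative exponential-correlation background:

```python
import numpy as np

def hotelling_snr2(signal, cov):
    """Hotelling observer detectability SNR^2 = s^T K^{-1} s."""
    return float(signal @ np.linalg.solve(cov, signal))

n = 16
signal = np.zeros(n)
signal[n // 2] = 1.0                       # point-like signal difference
white = np.eye(n)                          # uncorrelated background
i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
correlated = 0.9 ** np.abs(i - j)          # long-range correlated background

snr2_white = hotelling_snr2(signal, white)
snr2_corr = hotelling_snr2(signal, correlated)
```

    The point is only that the two covariance models give different detectability for the same signal, which is why stationary (Fourier) figures of merit can mislead when K has structure.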

  7. Multidisciplinary insight into clonal expansion of HTLV-1-infected cells in adult T-cell leukemia via modeling by deterministic finite automata coupled with high-throughput sequencing.

    PubMed

    Farmanbar, Amir; Firouzi, Sanaz; Park, Sung-Joon; Nakai, Kenta; Uchimaru, Kaoru; Watanabe, Toshiki

    2017-01-31

    Clonal expansion of leukemic cells leads to onset of adult T-cell leukemia (ATL), an aggressive lymphoid malignancy with a very poor prognosis. Infection with human T-cell leukemia virus type-1 (HTLV-1) is the direct cause of ATL onset, and integration of HTLV-1 into the human genome is essential for clonal expansion of leukemic cells. Therefore, monitoring clonal expansion of HTLV-1-infected cells via isolation of integration sites assists in analyzing infected individuals from early infection to the final stage of ATL development. However, because of the complex nature of clonal expansion, the underlying mechanisms have yet to be clarified. Combining computational/mathematical modeling with experimental and clinical data of integration site-based clonality analysis derived from next generation sequencing technologies provides an appropriate strategy to achieve a better understanding of ATL development. As a comprehensively interdisciplinary project, this study combined three main aspects: wet laboratory experiments, in silico analysis and empirical modeling. We analyzed clinical samples from HTLV-1-infected individuals with a broad range of proviral loads using a high-throughput methodology that enables isolation of HTLV-1 integration sites and accurate measurement of the size of infected clones. We categorized clones into four size groups, "very small", "small", "big", and "very big", based on the patterns of clonal growth and observed clone sizes. We propose an empirical formal model based on deterministic finite state automata (DFA) analysis of real clinical samples to illustrate patterns of clonal expansion. Through the developed model, we have translated biological data of clonal expansion into the formal language of mathematics and represented the observed clonality data with DFA. Our data suggest that combining experimental data (absolute size of clones) with DFA can describe the clonality status of patients. 
This kind of modeling provides a basic understanding as well as a unique perspective for clarifying the mechanisms of clonal expansion in ATL.
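
    The modeling idea, a DFA whose states are the four observed size categories and whose inputs are per-interval growth observations, can be sketched as below. The state names, input alphabet, and transitions are hypothetical placeholders chosen only to show the formalism, not the authors' fitted automaton:

```python
# Four clone-size categories as DFA states, ordered small to large.
SIZES = ["very_small", "small", "big", "very_big"]

def transition(state, symbol):
    """Move one category up on 'grow', one down on 'shrink', else stay."""
    i = SIZES.index(state)
    if symbol == "grow":
        i = min(i + 1, len(SIZES) - 1)
    elif symbol == "shrink":
        i = max(i - 1, 0)
    return SIZES[i]

def run_dfa(symbols, start="very_small"):
    state = start
    for s in symbols:
        state = transition(state, s)
    return state

final = run_dfa(["grow", "grow", "stay", "grow"])   # an expanding clone
```

    Feeding each clone's observed trajectory through such an automaton yields a final state per clone, which is one way a clonality profile can be summarized in formal-language terms.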

  8. Automatic procedures generator for orbital rendezvous maneuver

    NASA Technical Reports Server (NTRS)

    Kohn, W.; Van Valkenburg, J. A.; Dunn, C. K.

    1985-01-01

    This paper describes the development of an expert system for defining and dynamically updating procedures for an orbital rendezvous maneuver. The product of the expert system is a procedure represented by a Moore automaton. The construction is recursive and driven by a simulation of the rendezvousing bodies.
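
    A Moore automaton's output depends only on the current state, which makes it a natural encoding for step-by-step procedures. A minimal sketch of the formalism follows; the states, inputs, and outputs are hypothetical placeholders, not the paper's actual rendezvous procedure:

```python
# state: (output emitted in that state, {input symbol: next state})
MOORE = {
    "coast": ("hold attitude",  {"go": "burn", "abort": "safe"}),
    "burn":  ("fire thrusters", {"done": "coast", "abort": "safe"}),
    "safe":  ("station-keep",   {}),
}

def run_moore(inputs, start="coast"):
    """Return the output sequence: one output per state visited."""
    state = start
    outputs = [MOORE[state][0]]          # Moore: emit on entering each state
    for sym in inputs:
        state = MOORE[state][1][sym]
        outputs.append(MOORE[state][0])
    return outputs

outs = run_moore(["go", "done", "go", "abort"])
```

    Because outputs are attached to states rather than transitions, the generated procedure reads directly as a list of actions, one per visited state.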

  9. Cellular Automata and the Humanities.

    ERIC Educational Resources Information Center

    Gallo, Ernest

    1994-01-01

    The use of cellular automata to analyze several pre-Socratic hypotheses about the evolution of the physical world is discussed. These hypotheses combine characteristics of both rigorous and metaphoric language. Since the computer demands explicit instructions for each step in the evolution of the automaton, such models can reveal conceptual…

  10. Academetron, Automaton, Phantom: Uncanny Digital Pedagogies

    ERIC Educational Resources Information Center

    Bayne, Sian

    2010-01-01

    This paper explores the possibility of an uncanny digital pedagogy. Drawing on theories of the uncanny from psychoanalysis, cultural studies and educational philosophy, it considers how being online defamiliarises teaching, asking us to question and consider anew established academic practices and conventions. It touches on recent thinking on…

  11. Comparing reactive and memory-one strategies of direct reciprocity

    NASA Astrophysics Data System (ADS)

    Baek, Seung Ki; Jeong, Hyeong-Chai; Hilbe, Christian; Nowak, Martin A.

    2016-05-01

    Direct reciprocity is a mechanism for the evolution of cooperation based on repeated interactions. When individuals meet repeatedly, they can use conditional strategies to enforce cooperative outcomes that would not be feasible in one-shot social dilemmas. Direct reciprocity requires that individuals keep track of their past interactions and find the right response. However, there are natural bounds on strategic complexity: Humans find it difficult to remember past interactions accurately, especially over long timespans. Given these limitations, it is natural to ask how complex strategies need to be for cooperation to evolve. Here, we study stochastic evolutionary game dynamics in finite populations to systematically compare the evolutionary performance of reactive strategies, which only respond to the co-player's previous move, and memory-one strategies, which take into account both their own and the co-player's previous move. In both cases, we compare deterministic and stochastic strategy spaces. For reactive strategies and small costs, we find that stochasticity benefits cooperation, because it allows for generous tit-for-tat. For memory-one strategies and small costs, we find that stochasticity does not increase the propensity for cooperation, because the deterministic rule of win-stay, lose-shift works best. For memory-one strategies and large costs, however, stochasticity can augment cooperation.
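
    The two strategy classes can be made concrete with a deterministic sketch: a reactive strategy (tit-for-tat) conditions only on the co-player's last move, while a memory-one strategy (win-stay, lose-shift) conditions on both players' last moves. Function names and the pairing below are illustrative, not the paper's simulation:

```python
def tit_for_tat(my_last, their_last):
    """Reactive: simply copy the co-player's previous move."""
    return their_last

def win_stay_lose_shift(my_last, their_last):
    """Memory-one: keep the previous move after a 'win' (the co-player
    cooperated), otherwise switch it."""
    if their_last == "C":
        return my_last
    return "D" if my_last == "C" else "C"

def play(strat_a, strat_b, rounds=6, start=("C", "C")):
    a, b = start
    history = [(a, b)]
    for _ in range(rounds - 1):
        a, b = strat_a(a, b), strat_b(b, a)
        history.append((a, b))
    return history

hist = play(win_stay_lose_shift, tit_for_tat)   # mutual cooperation persists
```

    Starting from mutual cooperation, both deterministic rules lock into all-C play; the paper's question is how noise (stochastic strategies) and errors change which class sustains cooperation.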

  12. Probabilistic Analysis of a Composite Crew Module

    NASA Technical Reports Server (NTRS)

    Mason, Brian H.; Krishnamurthy, Thiagarajan

    2011-01-01

    An approach for conducting reliability-based analysis (RBA) of a Composite Crew Module (CCM) is presented. The goal is to identify and quantify the benefits of probabilistic design methods for the CCM and future space vehicles. The coarse finite element model from a previous NASA Engineering and Safety Center (NESC) project is used as the baseline deterministic analysis model to evaluate the performance of the CCM using a strength-based failure index. The first step in the probabilistic analysis process is the determination of the uncertainty distributions for key parameters in the model. Analytical data from water landing simulations are used to develop an uncertainty distribution, but such data were unavailable for other load cases. The uncertainty distributions for the other load scale factors and the strength allowables are generated based on assumed coefficients of variation. Probability of first-ply failure is estimated using three methods: the first order reliability method (FORM), Monte Carlo simulation, and conditional sampling. Results for the three methods were consistent. The reliability is shown to be driven by first-ply failure in one region of the CCM at the high altitude abort load set. The final predicted probability of failure is on the order of 10^-11 due to the conservative nature of the factors of safety on the deterministic loads.
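
    A hedged sketch of the Monte Carlo step in such a strength-based analysis: failure occurs when a sampled load exceeds a sampled strength allowable, and a FORM-style reliability index follows from the means and coefficients of variation. The distributions and numbers below are illustrative, not the CCM values:

```python
import math
import random

random.seed(42)
n_samples = 200_000
mean_load, cov_load = 1.0, 0.10           # normalized load, 10% CoV (assumed)
mean_strength, cov_strength = 1.6, 0.08   # normalized allowable, 8% CoV (assumed)

failures = 0
for _ in range(n_samples):
    load = random.gauss(mean_load, mean_load * cov_load)
    strength = random.gauss(mean_strength, mean_strength * cov_strength)
    if load >= strength:
        failures += 1

p_fail = failures / n_samples
# FORM-style index for two independent normal variables:
beta = (mean_strength - mean_load) / math.hypot(
    mean_load * cov_load, mean_strength * cov_strength)
```

    Note the practical point the abstract implies: probabilities as small as 10^-11 are far beyond what plain Monte Carlo can resolve at feasible sample counts, which is why FORM and conditional sampling are used alongside it.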

  13. Modeling of Magnetoelastic Nanostructures with a Fully-coupled Mechanical-Micromagnetic Model and Its Applications

    NASA Astrophysics Data System (ADS)

    Liang, Cheng-Yen

    Micromagnetic simulations of magnetoelastic nanostructures traditionally rely on either the Stoner-Wohlfarth model or the Landau-Lifshitz-Gilbert (LLG) model assuming uniform strain (and/or uniform magnetization). While the uniform-strain assumption is reasonable when modeling magnetoelastic thin films, this constant-strain approach becomes increasingly inaccurate for smaller in-plane nanoscale structures. In this dissertation, a fully coupled finite element micromagnetic method is developed. The method treats the micromagnetics, elastodynamics, and piezoelectric effects, iteratively solving for the dynamics of magnetization, the non-uniform strain distribution, and the electric fields. This more sophisticated modeling technique is critical for guiding the design of nanoscale strain-mediated multiferroic elements such as those needed in multiferroic systems. In this dissertation, we study magnetic property changes (e.g., hysteresis, coercive field, and spin states) due to strain effects in nanostructures. In addition, a multiferroic memory device is studied: this work demonstrates simulations of electric-field-driven magnetization switching in a nickel memory device, achieved by applying voltage to patterned electrodes. The deterministic control law for magnetization switching in a nanoring with an electric field applied to the patterned electrodes is investigated. Using the patterned electrodes, we show that the strain-induced anisotropy can be controlled, which changes the magnetization deterministically in the nanoring.
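
    The LLG dynamics at the core of such solvers can be illustrated for a single macrospin (uniform magnetization) in the explicit form dm/dt = -gamma/(1+a^2) * (m x H + a * m x (m x H)). Parameters are in reduced units and purely illustrative; the dissertation's coupled finite element formulation is far richer than this sketch:

```python
import numpy as np

def llg_step(m, h, gamma=1.0, alpha=0.1, dt=0.01):
    """One explicit Euler step of the LLG equation for a unit macrospin,
    renormalized to keep |m| = 1."""
    mxh = np.cross(m, h)
    dmdt = -gamma / (1 + alpha**2) * (mxh + alpha * np.cross(m, mxh))
    m = m + dt * dmdt
    return m / np.linalg.norm(m)

m = np.array([1.0, 0.0, 0.1])
m /= np.linalg.norm(m)
h = np.array([0.0, 0.0, 1.0])        # effective field along z
for _ in range(5000):
    m = llg_step(m, h)               # damped precession toward the field
```

    With nonzero damping the magnetization precesses while relaxing toward the effective field; strain-mediated control enters through anisotropy terms added to that effective field.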

  14. Automatic design of synthetic gene circuits through mixed integer non-linear programming.

    PubMed

    Huynh, Linh; Kececioglu, John; Köppe, Matthias; Tagkopoulos, Ilias

    2012-01-01

    Automatic design of synthetic gene circuits poses a significant challenge to synthetic biology, primarily due to the complexity of biological systems, and the lack of rigorous optimization methods that can cope with the combinatorial explosion as the number of biological parts increases. Current optimization methods for synthetic gene design rely on heuristic algorithms that are usually not deterministic, deliver sub-optimal solutions, and provide no guarantees on convergence or error bounds. Here, we introduce an optimization framework for the problem of part selection in synthetic gene circuits that is based on mixed integer non-linear programming (MINLP), which is a deterministic method that finds the globally optimal solution and guarantees convergence in finite time. Given a synthetic gene circuit, a library of characterized parts, and user-defined constraints, our method can find the optimal selection of parts that satisfies the constraints and best approximates the objective function given by the user. We evaluated the proposed method in the design of three synthetic circuits (a toggle switch, a transcriptional cascade, and a band detector), with both experimentally constructed and synthetic promoter libraries. Scalability and robustness analysis shows that the proposed framework scales well with the library size and the solution space. The work described here is a step towards a unifying, realistic framework for the automated design of biological circuits.
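
    The shape of the part-selection problem can be shown with a tiny brute-force search in place of the MINLP solver; the library entries, "strength" numbers, target, and budget constraint below are all made up for illustration:

```python
import itertools

# Hypothetical characterized-part library: name -> relative strength.
library = {
    "promoter": {"pLow": 1.0, "pMed": 3.0, "pHigh": 9.0},
    "rbs":      {"rWeak": 0.5, "rStrong": 2.0},
}

target = 6.0        # desired expression level (arbitrary units)
budget = 10.0       # constraint: expression must stay under this bound

best = None
for prom, rbs in itertools.product(library["promoter"], library["rbs"]):
    expression = library["promoter"][prom] * library["rbs"][rbs]
    if expression > budget:
        continue                        # constraint violated, discard
    error = abs(expression - target)
    if best is None or error < best[0]:
        best = (error, prom, rbs)
```

    Exhaustive search is exact but explodes combinatorially with library size, which is precisely the motivation for casting the same objective and constraints as an MINLP.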

  15. Modelling uncertainty in incompressible flow simulation using Galerkin based generalized ANOVA

    NASA Astrophysics Data System (ADS)

    Chakraborty, Souvik; Chowdhury, Rajib

    2016-11-01

    This paper presents a new algorithm, referred to here as Galerkin based generalized analysis of variance decomposition (GG-ANOVA), for modelling input uncertainties and their propagation in incompressible fluid flow. The proposed approach utilizes ANOVA to represent the unknown stochastic response, and the unknown component functions of ANOVA are represented using the generalized polynomial chaos expansion (PCE). The resulting functional form obtained by coupling the ANOVA and PCE is substituted into the stochastic Navier-Stokes equation (NSE), and Galerkin projection is employed to decompose it into a set of coupled deterministic 'Navier-Stokes alike' equations. Temporal discretization of the set of coupled deterministic equations is performed by employing the Adams-Bashforth scheme for the convective term and the Crank-Nicolson scheme for the diffusion term; spatial discretization is performed by employing a finite difference scheme. Implementation of the proposed approach is illustrated by two examples. In the first example, a stochastic ordinary differential equation is considered; this example illustrates the performance of the proposed approach as the nature of the random variable changes, and the convergence characteristics of GG-ANOVA are also demonstrated. The second example investigates flow through a micro channel. Two case studies, namely the stochastic Kelvin-Helmholtz instability and the stochastic vortex dipole, have been investigated. For all the problems, results obtained using GG-ANOVA are in excellent agreement with benchmark solutions.
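
    The PCE building block can be illustrated in the simplest scalar setting: projecting a function of a standard normal input onto probabilists' Hermite polynomials, with coefficients c_k = E[f(X) He_k(X)] / k! computed by Gauss-Hermite quadrature. This is an illustrative one-variable case, not the coupled Navier-Stokes formulation:

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

# Gauss nodes/weights for the weight exp(-x^2/2); normalize to the
# standard normal density.
nodes, weights = He.hermegauss(20)
weights = weights / np.sqrt(2.0 * np.pi)

def pce_coeffs(f, order):
    """PCE coefficients of f(X), X ~ N(0,1), in probabilists' Hermite basis."""
    coeffs = []
    for k in range(order + 1):
        basis = He.hermeval(nodes, [0.0] * k + [1.0])   # He_k at the nodes
        coeffs.append(np.sum(weights * f(nodes) * basis) / math.factorial(k))
    return np.array(coeffs)

c = pce_coeffs(np.exp, 6)   # exp(X) has exact coefficients e^{1/2} / k!
```

    In a Galerkin scheme like GG-ANOVA, the same projection is applied to the residual of the governing equations, turning one stochastic PDE into coupled deterministic equations for such coefficients.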

  16. Comparing reactive and memory-one strategies of direct reciprocity

    PubMed Central

    Baek, Seung Ki; Jeong, Hyeong-Chai; Hilbe, Christian; Nowak, Martin A.

    2016-01-01

    Direct reciprocity is a mechanism for the evolution of cooperation based on repeated interactions. When individuals meet repeatedly, they can use conditional strategies to enforce cooperative outcomes that would not be feasible in one-shot social dilemmas. Direct reciprocity requires that individuals keep track of their past interactions and find the right response. However, there are natural bounds on strategic complexity: Humans find it difficult to remember past interactions accurately, especially over long timespans. Given these limitations, it is natural to ask how complex strategies need to be for cooperation to evolve. Here, we study stochastic evolutionary game dynamics in finite populations to systematically compare the evolutionary performance of reactive strategies, which only respond to the co-player's previous move, and memory-one strategies, which take into account both their own and the co-player's previous move. In both cases, we compare deterministic and stochastic strategy spaces. For reactive strategies and small costs, we find that stochasticity benefits cooperation, because it allows for generous tit-for-tat. For memory-one strategies and small costs, we find that stochasticity does not increase the propensity for cooperation, because the deterministic rule of win-stay, lose-shift works best. For memory-one strategies and large costs, however, stochasticity can augment cooperation. PMID:27161141

  17. Scaling relationships of channel networks at large scales: Examples from two large-magnitude watersheds in Brittany, France

    NASA Astrophysics Data System (ADS)

    Crave, A.; Davy, P.

    1997-01-01

    We present a statistical analysis of two watersheds in French Brittany whose drainage areas are about 10,000 and 2000 km2. The channel system was analysed from the digitised blue lines of the 1:100,000 map and from a 250-m DEM. Link lengths follow an exponential distribution, consistent with the Markovian model of channel branching proposed by Smart (1968). The departure from the exponential distribution for small lengths, which has been extensively discussed before, results from a statistical effect due to the finite number of channels and junctions. The Strahler topology applied to channels defines a self-similar organisation whose similarity dimension is about 1.7, clearly smaller than the value of 2 expected for a random organisation. The similarity dimension is consistent with an independent measurement of the Horton ratios of stream numbers and lengths. The variables defined by an upstream integral (drainage area, mainstream length, upstream length) follow power-law distributions limited at large scales by a finite size effect, due to the finite area of the watersheds. A special emphasis is given to the exponent of the drainage area, α_A, which has been previously discussed in the context of different aggregation models relevant to channel network growth. We show that α_A is consistent with 4/3, a value that was obtained and analytically demonstrated from directed random walk aggregating models, inspired by the model of Scheidegger (1967). The drainage density and mainstream length present no simple scaling with area, except at large areas where they tend to trivial values: constant density and square root of drainage area, respectively. These asymptotic limits necessarily imply that the space dimension of channel networks is 2, equal to the embedding space. The limits are reached for drainage areas larger than 100 km2.
For smaller areas, the asymptotic limit represents either a lower bound (drainage density) or an upper bound (mainstream length) of the distributions. Because the fluctuations of the drainage density slowly converge to a finite limit, the system could be adequately described as a fat fractal, where the average drainage density is the sum of a constant plus a fluctuation decreasing as a power law with integrating area. A fat fractal hypothesis could explain why the similarity dimension is not equal to the fractal capacity dimension, as it is for thin fractals. The physical consequences are not yet really understood, but we draw an analogy with a directed aggregating system where the growth process involves both stochastic and deterministic growth. These models are known to be fat fractals, and the deterministic growth, which constitutes a fundamental ingredient of these models, could be attributed in river systems to the role of terrestrial gravity.
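
    The consistency check between the similarity dimension and the Horton ratios mentioned above uses the standard relation D = ln(R_B) / ln(R_L), with R_B the bifurcation ratio (stream numbers) and R_L the length ratio. The ratio values below are hypothetical, chosen only to show a dimension near the reported 1.7:

```python
import math

R_B = 4.0    # hypothetical bifurcation ratio (stream-number Horton ratio)
R_L = 2.26   # hypothetical length Horton ratio

# Similarity dimension of a Strahler-ordered self-similar network.
D = math.log(R_B) / math.log(R_L)
```

    A random (space-filling) organisation would give D close to 2, so a value near 1.7 quantifies the departure from randomness discussed in the abstract.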

  18. Safety assessment of a shallow foundation using the random finite element method

    NASA Astrophysics Data System (ADS)

    Zaskórski, Łukasz; Puła, Wojciech

    2015-04-01

    A complex structure of soil and its random character are reasons why soil modeling is a cumbersome task. Heterogeneity of soil has to be considered even within a homogenous layer of soil. Therefore an estimation of shear strength parameters of soil for the purposes of a geotechnical analysis causes many problems. In applicable standards (Eurocode 7) there is not presented any explicit method of an evaluation of characteristic values of soil parameters. Only general guidelines can be found how these values should be estimated. Hence many approaches of an assessment of characteristic values of soil parameters are presented in literature and can be applied in practice. In this paper, the reliability assessment of a shallow strip footing was conducted using a reliability index β. Therefore some approaches of an estimation of characteristic values of soil properties were compared by evaluating values of reliability index β which can be achieved by applying each of them. Method of Orr and Breysse, Duncan's method, Schneider's method, Schneider's method concerning influence of fluctuation scales and method included in Eurocode 7 were examined. Design values of the bearing capacity based on these approaches were referred to the stochastic bearing capacity estimated by the random finite element method (RFEM). Design values of the bearing capacity were conducted for various widths and depths of a foundation in conjunction with design approaches DA defined in Eurocode. RFEM was presented by Griffiths and Fenton (1993). It combines deterministic finite element method, random field theory and Monte Carlo simulations. Random field theory allows to consider a random character of soil parameters within a homogenous layer of soil. For this purpose a soil property is considered as a separate random variable in every element of a mesh in the finite element method with proper correlation structure between points of given area. 
RFEM was applied to determine which theoretical probability distribution best fits the empirical probability distribution of the bearing capacity, based on 3000 realizations. The fitted probability distribution was then used to compute design values of the bearing capacity and the related reliability indices β. The analyses were carried out for a cohesive soil; hence the friction angle and the cohesion were treated as random parameters and characterized by two-dimensional random fields. The friction angle was described by a bounded distribution, as it varies within a limited range, while a lognormal distribution was applied for the cohesion. The remaining properties, Young's modulus, Poisson's ratio and unit weight, were taken as deterministic values because they have a negligible influence on the stochastic bearing capacity. Griffiths D. V., & Fenton G. A. (1993). Seepage beneath water retaining structures founded on spatially random soil. Géotechnique, 43(6), 577-587.
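The reliability-index calculation described above can be sketched with a plain Monte Carlo stand-in for the RFEM realizations: sample a lognormal bearing capacity, count failures against a design load, and convert the failure probability to β. The mean, coefficient of variation and design load below are invented for illustration and are not values from the study.

```python
import math
import random
from statistics import NormalDist

random.seed(0)

# Stand-in for the 3000 RFEM realizations: a lognormal bearing capacity.
# Mean, coefficient of variation and design load are illustrative only.
mean_q, cov_q = 800.0, 0.3                      # kPa
sigma_ln = math.sqrt(math.log(1.0 + cov_q ** 2))
mu_ln = math.log(mean_q) - 0.5 * sigma_ln ** 2
samples = [random.lognormvariate(mu_ln, sigma_ln) for _ in range(3000)]

design_load = 500.0                             # kPa; failure if capacity < load
p_f = sum(q < design_load for q in samples) / len(samples)

# Reliability index: beta = -Phi^{-1}(P_f)
beta = -NormalDist().inv_cdf(p_f)
print(f"failure probability ~ {p_f:.3f}, reliability index beta ~ {beta:.2f}")
```

In an actual RFEM study the samples would come from finite element analyses over correlated random fields rather than from a closed-form distribution; the conversion from failure probability to β is the same.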

  19. Generation of deterministic tsunami hazard maps in the Bay of Cadiz, south-west Spain

    NASA Astrophysics Data System (ADS)

    Álvarez-Gómez, J. A.; Otero, L.; Olabarrieta, M.; González, M.; Carreño, E.; Baptista, M. A.; Miranda, J. M.; Medina, R.; Lima, V.

    2009-04-01

The Bay of Cádiz is a densely populated and industrialized area, and an important centre of tourism which multiplies its population in the summer months. The bay is situated in the Gulf of Cádiz, on the south-west Atlantic margin of the Iberian Peninsula. From a tectonic point of view this area can be defined as a diffuse plate boundary, comprising the eastern edge of the Gloria and Tydeman transforms (where the deformation is mainly concentrated in these shear corridors), the Gorringe Bank, the Horseshoe Abyssal Plain, the Portimao and Guadalquivir banks, and the western termination of the arcuate Gibraltar Arc. This deformation zone is the eastern edge of the Azores-Gibraltar seismic zone, the present-day boundary between the Eurasian and African plates. The motion between the plates is mainly convergent in the Gulf of Cádiz, but gradually changes to almost pure transcurrent motion along the Gloria Fault; the relative motion between the two plates is of the order of 4-5 mm/yr. In order to define the different tsunamigenic zones and to characterize their worst-case tsunamigenic sources, we have used seismic, structural and geological data. The numerical model used to simulate wave propagation and coastal inundation is the C3 (Cantabria, COMCOT and Tsunami-Claw) model, a hybrid finite difference-finite volume method that balances efficiency and accuracy. For the offshore domain in deep water, the model applies an explicit finite difference (FD) scheme, which is computationally fast and accurate on large grids. For nearshore domains in coastal areas, it applies a finite volume scheme, which correctly resolves bore formation and propagation and is very effective for computing run-up and run-down.
A set of five worst-case tsunamigenic sources has been used with four different sea levels (minimum tide, most probable low tide, most probable high tide and maximum tide) in order to produce the following thematic maps with the C3 model: maximum free surface elevation, maximum water depth, maximum current speed, maximum Froude number and maximum impact forces (hydrostatic and dynamic forces). The fault rupture and sea bottom displacement have been computed by means of the Okada equations. As a result, a set of more than 100 deterministic thematic maps has been created in a GIS environment incorporating geographical data and high resolution orthorectified satellite images. These thematic maps form an atlas of inundation maps that will be distributed to different government authorities and civil protection and emergency agencies. The authors gratefully acknowledge the financial support provided by the EU under the frame of the European Project TRANSFER (Tsunami Risk And Strategies For the European Region), 6th Framework Programme.

  20. Self-organized criticality in forest-landscape evolution

    Treesearch

    J.C. Sprott; Janine Bolliger; David J. Mladenoff

    2002-01-01

    A simple cellular automaton replicates the fractal pattern of a natural forest landscape and predicts its evolution. Spatial distributions and temporal fluctuations in global quantities show power-law spectra, implying scale-invariance, characteristic of self-organized criticality. The evolution toward the SOC state and the robustness of that state to perturbations...
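A classic example of the kind of self-organized-critical automaton discussed here is the Drossel-Schwabl forest-fire model. The sketch below (grid size and probabilities are illustrative; this is a standard SOC model, not the cited study's landscape automaton) shows the grow/ignite/spread update rule.

```python
import random

random.seed(1)

# Toy Drossel-Schwabl forest-fire automaton, a standard self-organized
# criticality model (parameters are illustrative, not the cited study's).
SIZE, P_GROW, P_LIGHTNING = 50, 0.05, 0.0005
EMPTY, TREE, FIRE = 0, 1, 2
grid = [[EMPTY] * SIZE for _ in range(SIZE)]

def step(grid):
    new = [row[:] for row in grid]
    for i in range(SIZE):
        for j in range(SIZE):
            cell = grid[i][j]
            if cell == FIRE:
                new[i][j] = EMPTY            # a burning tree becomes empty
            elif cell == TREE:
                neighbor_on_fire = any(
                    grid[(i + di) % SIZE][(j + dj) % SIZE] == FIRE
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)))
                if neighbor_on_fire or random.random() < P_LIGHTNING:
                    new[i][j] = FIRE         # spread, or a lightning strike
            elif random.random() < P_GROW:
                new[i][j] = TREE             # regrowth on an empty site
    return new

for _ in range(150):
    grid = step(grid)
tree_fraction = sum(row.count(TREE) for row in grid) / SIZE ** 2
print(f"tree fraction after 150 steps: {tree_fraction:.2f}")
```

Tracking cluster-size statistics of the burned patches over long runs is what reveals the power-law spectra characteristic of the SOC state.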

  1. Research and applications: Artificial intelligence

    NASA Technical Reports Server (NTRS)

    Chaitin, L. J.; Duda, R. O.; Johanson, P. A.; Raphael, B.; Rosen, C. A.; Yates, R. A.

    1970-01-01

The program for developing techniques in artificial intelligence and their application to the control of mobile automatons carrying out tasks autonomously is reported. Visual scene analysis, short-term problem solving, and long-term problem solving are discussed along with the PDP-15 simulator, the LISP-FORTRAN-MACRO interface, resolution strategies, and cost effectiveness.

  2. Application of Non-Deterministic Methods to Assess Modeling Uncertainties for Reinforced Carbon-Carbon Debris Impacts

    NASA Technical Reports Server (NTRS)

    Lyle, Karen H.; Fasanella, Edwin L.; Melis, Matthew; Carney, Kelly; Gabrys, Jonathan

    2004-01-01

    The Space Shuttle Columbia Accident Investigation Board (CAIB) made several recommendations for improving the NASA Space Shuttle Program. An extensive experimental and analytical program has been developed to address two recommendations related to structural impact analysis. The objective of the present work is to demonstrate the application of probabilistic analysis to assess the effect of uncertainties on debris impacts on Space Shuttle Reinforced Carbon-Carbon (RCC) panels. The probabilistic analysis is used to identify the material modeling parameters controlling the uncertainty. A comparison of the finite element results with limited experimental data provided confidence that the simulations were adequately representing the global response of the material. Five input parameters were identified as significantly controlling the response.

  3. Optimization of Multi-Fidelity Computer Experiments via the EQIE Criterion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, Xu; Tuo, Rui; Jeff Wu, C. F.

Computer experiments based on mathematical models are powerful tools for understanding physical processes. This article addresses the problem of kriging-based optimization for deterministic computer experiments with tunable accuracy. Our approach is to use multi-fidelity computer experiments with increasing accuracy levels and a nonstationary Gaussian process model. We propose an optimization scheme that sequentially adds new computer runs by following two criteria. The first criterion, called EQI, scores candidate inputs at a given level of accuracy, and the second criterion, called EQIE, scores candidate combinations of inputs and accuracy. From simulation results and a real example using finite element analysis, our method outperforms the expected improvement (EI) criterion, which works for single-accuracy experiments.
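For contrast with the proposed EQI/EQIE criteria, the baseline expected improvement (EI) criterion for a single-accuracy experiment can be written down directly from a Gaussian-process posterior. The candidate posterior means and standard deviations below are made up for illustration.

```python
import math
from statistics import NormalDist

def expected_improvement(mu, sigma, f_best):
    """EI of a candidate input with GP posterior mean `mu` and standard
    deviation `sigma`, relative to the current best observed value
    `f_best` (minimization convention)."""
    if sigma <= 0.0:
        return max(f_best - mu, 0.0)
    z = (f_best - mu) / sigma
    nd = NormalDist()
    return (f_best - mu) * nd.cdf(z) + sigma * nd.pdf(z)

# Illustrative posterior values (mean, std) at three candidate inputs:
candidates = {"x1": (1.2, 0.1), "x2": (1.5, 0.8), "x3": (0.9, 0.3)}
f_best = 1.0
scores = {x: expected_improvement(mu, s, f_best)
          for x, (mu, s) in candidates.items()}
best = max(scores, key=scores.get)
print(best, {x: round(v, 4) for x, v in scores.items()})
```

Note how EI rewards both a promising mean (x3) and high uncertainty (x2); the EQI/EQIE criteria of the paper additionally score which accuracy level to run, which plain EI cannot express.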

  4. Optimal Protocols and Optimal Transport in Stochastic Thermodynamics

    NASA Astrophysics Data System (ADS)

    Aurell, Erik; Mejía-Monasterio, Carlos; Muratore-Ginanneschi, Paolo

    2011-06-01

    Thermodynamics of small systems has become an important field of statistical physics. Such systems are driven out of equilibrium by a control, and the question is naturally posed how such a control can be optimized. We show that optimization problems in small system thermodynamics are solved by (deterministic) optimal transport, for which very efficient numerical methods have been developed, and of which there are applications in cosmology, fluid mechanics, logistics, and many other fields. We show, in particular, that minimizing expected heat released or work done during a nonequilibrium transition in finite time is solved by the Burgers equation and mass transport by the Burgers velocity field. Our contribution hence considerably extends the range of solvable optimization problems in small system thermodynamics.
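The link stated above between optimal protocols and deterministic transport can be summarized schematically (a sketch following the abstract's claims; the symbols v for the auxiliary velocity field and ρ for the system's probability density are our notation):

```latex
% Sketch (our notation): the dissipation-optimal protocol is governed by an
% auxiliary velocity field v(x, t) solving the inviscid Burgers equation,
\[
  \partial_t v + (v \cdot \nabla) v = 0 ,
\]
% while the probability density \rho of the controlled system is transported
% by that field,
\[
  \partial_t \rho + \nabla \cdot (\rho\, v) = 0 ,
\]
% reducing the finite-time optimization to a Monge--Kantorovich mass-transport
% problem between the prescribed initial and final densities.
```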

  5. Optimization of Multi-Fidelity Computer Experiments via the EQIE Criterion

    DOE PAGES

    He, Xu; Tuo, Rui; Jeff Wu, C. F.

    2017-01-31

Computer experiments based on mathematical models are powerful tools for understanding physical processes. This article addresses the problem of kriging-based optimization for deterministic computer experiments with tunable accuracy. Our approach is to use multi-fidelity computer experiments with increasing accuracy levels and a nonstationary Gaussian process model. We propose an optimization scheme that sequentially adds new computer runs by following two criteria. The first criterion, called EQI, scores candidate inputs at a given level of accuracy, and the second criterion, called EQIE, scores candidate combinations of inputs and accuracy. From simulation results and a real example using finite element analysis, our method outperforms the expected improvement (EI) criterion, which works for single-accuracy experiments.

  6. Optimal protocols and optimal transport in stochastic thermodynamics.

    PubMed

    Aurell, Erik; Mejía-Monasterio, Carlos; Muratore-Ginanneschi, Paolo

    2011-06-24

    Thermodynamics of small systems has become an important field of statistical physics. Such systems are driven out of equilibrium by a control, and the question is naturally posed how such a control can be optimized. We show that optimization problems in small system thermodynamics are solved by (deterministic) optimal transport, for which very efficient numerical methods have been developed, and of which there are applications in cosmology, fluid mechanics, logistics, and many other fields. We show, in particular, that minimizing expected heat released or work done during a nonequilibrium transition in finite time is solved by the Burgers equation and mass transport by the Burgers velocity field. Our contribution hence considerably extends the range of solvable optimization problems in small system thermodynamics.

  7. Fracture Capabilities in Grizzly with the extended Finite Element Method (X-FEM)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dolbow, John; Zhang, Ziyu; Spencer, Benjamin

Efforts are underway to develop fracture mechanics capabilities in the Grizzly code to enable it to be used to perform deterministic fracture assessments of degraded reactor pressure vessels (RPVs). A capability was previously developed to calculate three-dimensional interaction integrals to extract mixed-mode stress-intensity factors. This capability requires the use of a finite element mesh that conforms to the crack geometry. The eXtended Finite Element Method (X-FEM) provides a means to represent a crack geometry without explicitly fitting the finite element mesh to it. This is effected by enhancing the element kinematics to represent jump discontinuities at arbitrary locations inside of the element, as well as by incorporating asymptotic near-tip fields to better capture crack singularities. In this work, use of only the discontinuous enrichment functions was examined to see how accurately stress intensity factors could still be calculated. This report documents the following work to enhance Grizzly's engineering fracture capabilities by introducing arbitrary jump discontinuities for prescribed crack geometries. X-FEM mesh cutting in 3D: to enhance the kinematics of elements that are intersected by arbitrary crack geometries, a mesh cutting algorithm was implemented in Grizzly; the algorithm introduces new virtual nodes, creates partial elements, and then creates a new mesh connectivity. Interaction integral modifications: the existing code for evaluating the interaction integral in Grizzly was based on the assumption of a mesh fitted to the crack geometry; modifications were made to allow for a crack front that passes arbitrarily through the mesh. Benchmarking for 3D fracture: the new capabilities were benchmarked against mixed-mode three-dimensional fracture problems with known analytical solutions.

  8. Teaching Note-Teaching Student Interviewing Competencies through Second Life

    ERIC Educational Resources Information Center

    Tandy, Cynthia; Vernon, Robert; Lynch, Darlene

    2017-01-01

    A prototype standardized client was created and programmed to respond to students in the 3D virtual world of Second Life. This automaton, called a "chatbot," was repeatedly interviewed by beginning MSW students in a practice course as a learning exercise. Initial results were positive and suggest the use of simulated clients in virtual…

  9. A Nanoflare-Based Cellular Automaton Model and the Observed Properties of the Coronal Plasma

    NASA Technical Reports Server (NTRS)

    Lopez-Fuentes, Marcelo; Klimchuk, James Andrew

    2016-01-01

We use the cellular automaton model described in López Fuentes and Klimchuk to study the evolution of coronal loop plasmas. The model, based on the idea of a critical misalignment angle in tangled magnetic fields, produces nanoflares of varying frequency with respect to the plasma cooling time. We compare the results of the model with active region (AR) observations obtained with the Hinode/XRT and SDO/AIA instruments. The comparison is based on the statistical properties of synthetic and observed loop light curves. Our results show that the model reproduces the main observational characteristics of the evolution of the plasma in AR coronal loops. The typical intensity fluctuations have amplitudes of 10%-15% both for the model and the observations. The sign of the skewness of the intensity distributions indicates the presence of cooling plasma in the loops. We also study the emission measure (EM) distribution predicted by the model and obtain slopes in log(EM) versus log(T) between 2.7 and 4.3, in agreement with published observational values.
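The skewness diagnostic mentioned above can be illustrated with a toy light curve: impulsive, nanoflare-like heating followed by slow cooling spends most of its time at low intensities with a tail of bright samples, giving a positively skewed intensity distribution. The sawtooth profile and its parameters below are invented for illustration and are not output of the cited model.

```python
import math

def skewness(xs):
    """Sample skewness: third central moment over the 3/2 power of the second."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    return m3 / m2 ** 1.5

# Hypothetical nanoflare-like light curve: fast linear rise, slow exponential
# decay, repeated over many heating events.
light_curve = []
for _ in range(20):
    light_curve += [1.0 + 0.5 * t for t in range(4)]               # fast rise
    light_curve += [2.5 * math.exp(-t / 10.0) for t in range(30)]  # slow decay

print(f"intensity-distribution skewness = {skewness(light_curve):.2f}")
```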

  10. Mapping Thermal Habitat of Ectotherms Based on Behavioral Thermoregulation in a Controlled Thermal Environment

    NASA Astrophysics Data System (ADS)

    Fei, T.; Skidmore, A.; Liu, Y.

    2012-07-01

The thermal environment is especially important to ectotherms because many physiological functions depend on body temperature. Behavioural thermoregulation makes use of the heterogeneity of the thermal properties within an individual's habitat to sustain the animal's physiological processes, and it links the spatial utilization and distribution of individual ectotherms with the thermal properties of the habitat (the thermal habitat). In this study we modelled the relationship between the two with a spatially explicit model that simulates the movements of a lizard in a controlled environment. The model incorporates a lizard's transient body temperatures into a cellular automaton algorithm as a way to link physiological knowledge of the animal with the spatial utilization of its microhabitat. On a larger spatial scale, a 'thermal roughness' of the habitat was defined and used to predict the habitat occupancy of the target species. The results showed that habitat occupancy can be modelled by the cellular automaton based algorithm at a smaller scale and by the thermal roughness index at a larger scale.

  11. An Asynchronous Recurrent Network of Cellular Automaton-Based Neurons and Its Reproduction of Spiking Neural Network Activities.

    PubMed

    Matsubara, Takashi; Torikai, Hiroyuki

    2016-04-01

Modeling and implementation approaches for the reproduction of input-output relationships in biological nervous tissues contribute to the development of engineering and clinical applications. However, because of high nonlinearity, traditional modeling and implementation approaches encounter difficulties in terms of generalization ability (i.e., performance when reproducing an unknown data set) and computational resources (i.e., computation time and circuit elements). To overcome these difficulties, asynchronous cellular automaton-based neuron (ACAN) models, which are described as special kinds of cellular automata that can be implemented as small asynchronous sequential logic circuits, have been proposed. This paper presents a novel type of such an ACAN and a theoretical analysis of its excitability. This paper also presents a novel network of such neurons, which can mimic the input-output relationships of biological and nonlinear ordinary differential equation model neural networks. Numerical analyses confirm that the presented network has a higher generalization ability than other major modeling and implementation approaches. In addition, Field-Programmable Gate Array implementations confirm that the presented network requires fewer computational resources.

  12. Simulation study of overtaking in pedestrian flow using floor field cellular automaton model

    NASA Astrophysics Data System (ADS)

    Fu, Zhijian; Xia, Liang; Yang, Hongtai; Liu, Xiaobo; Ma, Jian; Luo, Lin; Yang, Lizhong; Chen, Junmin

Properties of pedestrians may change along their moving paths, for example as a result of fatigue or injury, which has not been properly investigated in past research. This paper attempts to study tactical overtaking in pedestrian flow, which is difficult to model with a microscopic discrete model because of the complexity of the detailed overtaking behavior and the crossing/overlapping of pedestrian routes. Thus, a multi-velocity floor field cellular automaton model describing the detailed psychological process of the overtaking decision was proposed. A pedestrian can be either in the normal state or in the tactical overtaking state. Without a tactical decision, pedestrians in the normal state are driven by the floor field. Pedestrians make their tactical overtaking decisions by evaluating the walking environment around the overtaking route (the average velocity and density around the route, and the visual field of the pedestrian) and the obstructing conditions (the distance and velocity difference between the overtaking pedestrian and the obstructing pedestrian). The effects of the tactical overtaking ratio, free velocity dispersion, and visual range on the fundamental diagram, conflict density, and successful overtaking ratio were explored. In addition, a sensitivity analysis of the relative intensity of the route factor was performed.
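The floor field that drives normal-state pedestrians rests on a static field, commonly computed as each cell's walking distance to the nearest exit. A minimal sketch via breadth-first search (the grid, wall and exit positions are invented):

```python
from collections import deque

# Minimal static floor field: each cell's value is its walking distance to
# the nearest exit, computed by BFS (a common construction in floor field
# cellular automaton models; layout here is illustrative).
W, H = 6, 4
walls = {(2, 1), (2, 2)}
exits = [(5, 0)]

field = {(x, y): None for x in range(W) for y in range(H) if (x, y) not in walls}
queue = deque()
for e in exits:
    field[e] = 0
    queue.append(e)

while queue:
    x, y = queue.popleft()
    for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if (nx, ny) in field and field[(nx, ny)] is None:
            field[(nx, ny)] = field[(x, y)] + 1
            queue.append((nx, ny))

# Pedestrians in the normal state move toward the neighbor with the lowest value.
for y in range(H):
    print(" ".join(f"{field.get((x, y), '#'):>2}" for x in range(W)))
```

Dynamic floor fields, multiple walking speeds and the tactical overtaking state of the cited model are layered on top of this kind of static field.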

  13. A NANOFLARE-BASED CELLULAR AUTOMATON MODEL AND THE OBSERVED PROPERTIES OF THE CORONAL PLASMA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fuentes, Marcelo López; Klimchuk, James A., E-mail: lopezf@iafe.uba.ar

    2016-09-10

We use the cellular automaton model described in López Fuentes and Klimchuk to study the evolution of coronal loop plasmas. The model, based on the idea of a critical misalignment angle in tangled magnetic fields, produces nanoflares of varying frequency with respect to the plasma cooling time. We compare the results of the model with active region (AR) observations obtained with the Hinode/XRT and SDO/AIA instruments. The comparison is based on the statistical properties of synthetic and observed loop light curves. Our results show that the model reproduces the main observational characteristics of the evolution of the plasma in AR coronal loops. The typical intensity fluctuations have amplitudes of 10%–15% both for the model and the observations. The sign of the skewness of the intensity distributions indicates the presence of cooling plasma in the loops. We also study the emission measure (EM) distribution predicted by the model and obtain slopes in log(EM) versus log(T) between 2.7 and 4.3, in agreement with published observational values.

  14. A cellular automaton method to simulate the microstructure and evolution of low-enriched uranium (LEU) U–Mo/Al dispersion type fuel plates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Drera, Saleem S.; Hofman, Gerard L.; Kee, Robert J.

Low-enriched uranium (LEU) fuel plates for high power materials test reactors (MTR) are composed of nominally spherical uranium-molybdenum (U-Mo) particles within an aluminum matrix. Fresh U-Mo particles typically range between 10 and 100 μm in diameter, with particle volume fractions up to 50%. As the fuel ages, reaction-diffusion processes cause the formation and growth of interaction layers that surround the fuel particles. The growth rate depends upon the temperature and radiation environment. The cellular automaton algorithm described in this paper can synthesize realistic random fuel-particle structures and simulate the growth of the intermetallic interaction layers. Examples in the present paper pack approximately 1000 particles into three-dimensional rectangular fuel structures that are approximately 1 mm on each side. The computational approach is designed to yield synthetic microstructures consistent with images from actual fuel plates and is validated by comparison with empirical data on actual fuel plates.

  15. Larger than Life's Extremes: Rigorous Results for Simplified Rules and Speculation on the Phase Boundaries

    NASA Astrophysics Data System (ADS)

    Evans, Kellie Michele

Larger than Life (LtL) is a four-parameter family of two-dimensional cellular automata that generalizes John Conway's Game of Life (Life) to large neighborhoods and general birth and survival thresholds. LtL was proposed by David Griffeath in the early 1990s to explore whether Life might be a clue to a critical phase point in the threshold-range scaling limit. The LtL family of rules includes Life as well as a rich set of two-dimensional rules, some of which exhibit dynamics vastly different from Life. In this chapter we present rigorous results and conjectures about the ergodic classifications of several sets of "simplified" LtL rules, each of which has a property that makes the rule easier to analyze. For example, these include symmetric rules such as the threshold voter automaton and the anti-voter automaton, monotone rules such as the threshold growth models, and others. We also provide qualitative results and speculation about LtL rules on various phase boundaries and summarize results and open questions about our favorite "Life-like" LtL rules.
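A minimal sketch of an LtL-style rule may help: an outer-totalistic update with a neighborhood range r and inclusive birth/survival intervals. Note that this toy counts neighbors excluding the center cell, whereas the published LtL convention includes the center in the survival count; with r=1 and Life-like thresholds the toy reduces to Conway's Life.

```python
from collections import Counter

def ltl_step(alive, r, birth, survive):
    """One step of a Larger-than-Life-style outer-totalistic rule.

    `alive` is a set of (x, y) live cells, `r` the box-neighborhood range,
    and `birth`/`survive` are inclusive (low, high) intervals on the number
    of live neighbors (center excluded). With r=1, birth=(3, 3),
    survive=(2, 3) this reduces to Conway's Game of Life."""
    counts = Counter()
    for (x, y) in alive:
        for dx in range(-r, r + 1):
            for dy in range(-r, r + 1):
                if dx or dy:
                    counts[(x + dx, y + dy)] += 1
    new = set()
    for cell, n in counts.items():
        interval = survive if cell in alive else birth
        if interval[0] <= n <= interval[1]:
            new.add(cell)
    return new

# Under Life thresholds a blinker oscillates with period 2:
blinker = {(0, 0), (1, 0), (2, 0)}
flipped = ltl_step(blinker, 1, (3, 3), (2, 3))
print(sorted(flipped))
```

Raising r and widening the intervals proportionally is exactly the threshold-range scaling the chapter studies.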

  16. Towards self-correcting quantum memories

    NASA Astrophysics Data System (ADS)

    Michnicki, Kamil

This thesis presents a model of self-correcting quantum memories where quantum states are encoded using topological stabilizer codes and error correction is done using local measurements and local dynamics. Quantum noise poses a practical barrier to developing quantum memories. This thesis explores two types of models for suppressing noise. One model suppresses thermalizing noise energetically by engineering a Hamiltonian with a high energy barrier between code states. Thermalizing dynamics are modeled phenomenologically as a Markovian quantum master equation with only local generators. The second model suppresses stochastic noise with a cellular automaton that performs error correction using syndrome measurements and a local update rule. Several ways of visualizing and thinking about stabilizer codes are presented in order to design ones that have a high energy barrier: the non-local Ising model, the quasi-particle graph and the theory of welded stabilizer codes. I develop the theory of welded stabilizer codes and use it to construct a code with the highest known energy barrier in 3-d for spin Hamiltonians: the welded solid code. Although the welded solid code is not fully self-correcting, it has some self-correcting properties: it has an increased memory lifetime for an increased system size up to a temperature-dependent maximum. One strategy for increasing the energy barrier is by mediating an interaction with an external system. I prove a no-go theorem for a class of Hamiltonians where the interaction terms are local, of bounded strength, and commute with the stabilizer group. Under these conditions the energy barrier can only be increased by a multiplicative constant. I develop a cellular automaton to perform error correction on a state encoded using the toric code. The numerical evidence indicates that while there is no threshold, the model can extend the memory lifetime significantly.
While of less theoretical importance, this could be practical for real implementations of quantum memories. Numerical evidence also suggests that the cellular automaton could function as a decoder with a soft threshold.

  17. A WENO-solver combined with adaptive momentum discretization for the Wigner transport equation and its application to resonant tunneling diodes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dorda, Antonius, E-mail: dorda@tugraz.at; Schürrer, Ferdinand, E-mail: ferdinand.schuerrer@tugraz.at

    2015-03-01

We present a novel numerical scheme for the deterministic solution of the Wigner transport equation, especially suited to deal with situations in which strong quantum effects are present. The unique feature of the algorithm is the expansion of the Wigner function in local basis functions, similar to finite element or finite volume methods. This procedure yields a discretization of the pseudo-differential operator that conserves the particle density on arbitrarily chosen grids. The high flexibility in refining the grid spacing together with the weighted essentially non-oscillatory (WENO) scheme for the advection term allows for an accurate and well-resolved simulation of the phase space dynamics. A resonant tunneling diode is considered as test case and a detailed convergence study is given by comparing the results to a non-equilibrium Green's functions calculation. The impact of the considered domain size and of the grid spacing is analyzed. The obtained convergence of the results towards a quasi-exact agreement of the steady state Wigner and Green's functions computations demonstrates the accuracy of the scheme, as well as the high flexibility to adjust to different physical situations.

  18. Forecasting Epidemics Through Nonparametric Estimation of Time-Dependent Transmission Rates Using the SEIR Model.

    PubMed

    Smirnova, Alexandra; deCamp, Linda; Chowell, Gerardo

    2017-05-02

Deterministic and stochastic methods relying on early case incidence data for forecasting epidemic outbreaks have received increasing attention during the last few years. In mathematical terms, epidemic forecasting is an ill-posed problem due to instability of parameter identification and limited available data. While previous studies have largely estimated the time-dependent transmission rate by assuming specific functional forms (e.g., exponential decay) that depend on a few parameters, here we introduce a novel approach for the reconstruction of nonparametric time-dependent transmission rates by projecting onto a finite subspace spanned by Legendre polynomials. This approach enables us to effectively forecast future incidence cases, a clear advantage over recovering the transmission rate at finitely many grid points within the interval where the data are currently available. In our approach, we compare three regularization algorithms: variational (Tikhonov's) regularization, truncated singular value decomposition (TSVD), and modified TSVD in order to determine the stabilizing strategy that is most effective in terms of reliability of forecasting from limited data. We illustrate our methodology using simulated data as well as case incidence data for various epidemics including the 1918 influenza pandemic in San Francisco and the 2014-2015 Ebola epidemic in West Africa.
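The idea of a transmission rate expanded in a finite Legendre basis can be sketched as follows. The SEIR parameters, coefficients and the forward-Euler integrator are illustrative assumptions, not the paper's estimation procedure (which solves a regularized inverse problem for the coefficients).

```python
def legendre(k, x):
    """Legendre polynomial P_k(x) via the Bonnet recurrence."""
    p0, p1 = 1.0, x
    if k == 0:
        return p0
    for n in range(1, k):
        p0, p1 = p1, ((2 * n + 1) * x * p1 - n * p0) / (n + 1)
    return p1

def beta_t(t, t_max, coeffs):
    """Transmission rate as a truncated Legendre expansion on [0, t_max],
    clipped at zero to keep it physically meaningful."""
    x = 2.0 * t / t_max - 1.0  # map [0, t_max] onto [-1, 1]
    return max(0.0, sum(c * legendre(k, x) for k, c in enumerate(coeffs)))

def seir(coeffs, t_max=100.0, dt=0.05, sigma=0.2, gamma=1 / 7, N=1e6, I0=10.0):
    """Forward-Euler SEIR integration with a time-dependent beta(t).
    (Illustrative parameter values, not those estimated in the paper.)"""
    S, E, I, R = N - I0, 0.0, I0, 0.0
    cumulative, t = I0, 0.0
    while t < t_max:
        infection = beta_t(t, t_max, coeffs) * S * I / N * dt
        onset = sigma * E * dt
        recovery = gamma * I * dt
        S, E, I, R = S - infection, E + infection - onset, I + onset - recovery, R + recovery
        cumulative += onset
        t += dt
    return S, E, I, R, cumulative

# Three made-up Legendre coefficients encode a smoothly decaying beta(t):
S, E, I, R, cum = seir([0.4, -0.15, 0.02])
print(f"final susceptibles: {S:.0f}, cumulative cases: {cum:.0f}")
```

In the paper's setting, the forward map from Legendre coefficients to incidence is inverted from data under Tikhonov or (modified) TSVD regularization; the forward solver above is the piece such an inversion repeatedly evaluates.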

  19. Modeling Defects, Shape Evolution, and Programmed Auto-origami in Liquid Crystal Elastomers

    NASA Astrophysics Data System (ADS)

    Konya, Andrew; Gimenez-Pinto, Vianney; Selinger, Robin

    2016-06-01

    Liquid crystal elastomers represent a novel class of programmable shape-transforming materials whose shape change trajectory is encoded in the material’s nematic director field. Using three-dimensional nonlinear finite element elastodynamics simulation, we model a variety of different actuation geometries and device designs: thin films containing topological defects, patterns that induce formation of folds and twists, and a bas-relief structure. The inclusion of finite bending energy in the simulation model reveals features of actuation trajectory that may be absent when bending energy is neglected. We examine geometries with a director pattern uniform through the film thickness encoding multiple regions of positive Gaussian curvature. Simulations indicate that heating such a system uniformly produces a disordered state with curved regions emerging randomly in both directions due to the film’s up/down symmetry. By contrast, applying a thermal gradient by heating the material first on one side breaks up/down symmetry and results in a deterministic trajectory producing a more ordered final shape. We demonstrate that a folding zone design containing cut-out areas accommodates transverse displacements without warping or buckling; and demonstrate that bas-relief and more complex bent/twisted structures can be assembled by combining simple design motifs.

  20. Stochastic density functional theory at finite temperatures

    NASA Astrophysics Data System (ADS)

    Cytter, Yael; Rabani, Eran; Neuhauser, Daniel; Baer, Roi

    2018-03-01

Simulations in the warm dense matter regime using finite temperature Kohn-Sham density functional theory (FT-KS-DFT), while frequently used, are computationally expensive due to the partial occupation of a very large number of high-energy KS eigenstates, which are obtained from subspace diagonalization. We have developed a stochastic method for applying FT-KS-DFT that overcomes the bottleneck of calculating the occupied KS orbitals by directly obtaining the density from the KS Hamiltonian. The proposed algorithm scales as O(NT⁻¹) and is compared with the high-temperature limit scaling O(N³T³) of the deterministic approach.
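The key device behind such stochastic electronic-structure methods is replacing sums over eigenstates by stochastic traces over random vectors. This can be illustrated with Hutchinson's trace estimator; the toy 3x3 symmetric matrix below stands in for a function of the KS Hamiltonian and is invented for illustration.

```python
import random

random.seed(42)

# Toy symmetric matrix A (3x3); its exact trace is 1.0 + 2.0 + 3.0 = 6.0.
A = [[1.0, 0.2, 0.0],
     [0.2, 2.0, 0.3],
     [0.0, 0.3, 3.0]]

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def hutchinson_trace(A, n_samples):
    """Estimate tr(A) as the average of chi^T A chi over random Rademacher
    (+/-1) vectors chi -- the stochastic-trace idea that lets stochastic DFT
    avoid diagonalizing the KS Hamiltonian."""
    total = 0.0
    for _ in range(n_samples):
        chi = [random.choice((-1.0, 1.0)) for _ in A]
        total += sum(c * a for c, a in zip(chi, matvec(A, chi)))
    return total / n_samples

est = hutchinson_trace(A, 20000)
print(f"stochastic trace estimate: {est:.2f} (exact 6.00)")
```

Only matrix-vector products are needed, so the cost per sample scales with the cost of applying the Hamiltonian rather than with cubic diagonalization, which is the source of the favorable scaling claimed above.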

  1. A WENO-solver combined with adaptive momentum discretization for the Wigner transport equation and its application to resonant tunneling diodes

    PubMed Central

    Dorda, Antonius; Schürrer, Ferdinand

    2015-01-01

    We present a novel numerical scheme for the deterministic solution of the Wigner transport equation, especially suited to deal with situations in which strong quantum effects are present. The unique feature of the algorithm is the expansion of the Wigner function in local basis functions, similar to finite element or finite volume methods. This procedure yields a discretization of the pseudo-differential operator that conserves the particle density on arbitrarily chosen grids. The high flexibility in refining the grid spacing together with the weighted essentially non-oscillatory (WENO) scheme for the advection term allows for an accurate and well-resolved simulation of the phase space dynamics. A resonant tunneling diode is considered as test case and a detailed convergence study is given by comparing the results to a non-equilibrium Green's functions calculation. The impact of the considered domain size and of the grid spacing is analyzed. The obtained convergence of the results towards a quasi-exact agreement of the steady state Wigner and Green's functions computations demonstrates the accuracy of the scheme, as well as the high flexibility to adjust to different physical situations. PMID:25892748

  2. A WENO-solver combined with adaptive momentum discretization for the Wigner transport equation and its application to resonant tunneling diodes.

    PubMed

    Dorda, Antonius; Schürrer, Ferdinand

    2015-03-01

    We present a novel numerical scheme for the deterministic solution of the Wigner transport equation, especially suited to deal with situations in which strong quantum effects are present. The unique feature of the algorithm is the expansion of the Wigner function in local basis functions, similar to finite element or finite volume methods. This procedure yields a discretization of the pseudo-differential operator that conserves the particle density on arbitrarily chosen grids. The high flexibility in refining the grid spacing together with the weighted essentially non-oscillatory (WENO) scheme for the advection term allows for an accurate and well-resolved simulation of the phase space dynamics. A resonant tunneling diode is considered as test case and a detailed convergence study is given by comparing the results to a non-equilibrium Green's functions calculation. The impact of the considered domain size and of the grid spacing is analyzed. The obtained convergence of the results towards a quasi-exact agreement of the steady state Wigner and Green's functions computations demonstrates the accuracy of the scheme, as well as the high flexibility to adjust to different physical situations.

  3. Enhancements on the Convex Programming Based Powered Descent Guidance Algorithm for Mars Landing

    NASA Technical Reports Server (NTRS)

    Acikmese, Behcet; Blackmore, Lars; Scharf, Daniel P.; Wolf, Aron

    2008-01-01

    In this paper, we present enhancements on the powered descent guidance algorithm developed for Mars pinpoint landing. The guidance algorithm solves the powered descent minimum fuel trajectory optimization problem via a direct numerical method. Our main contribution is to formulate the trajectory optimization problem, which has nonconvex control constraints, as a finite dimensional convex optimization problem, specifically as a finite dimensional second order cone programming (SOCP) problem. SOCP is a subclass of convex programming, and there are efficient SOCP solvers with deterministic convergence properties. Hence, the resulting guidance algorithm can potentially be implemented onboard a spacecraft for real-time applications. Particularly, this paper discusses the algorithmic improvements obtained by: (i) Using an efficient approach to choose the optimal time-of-flight; (ii) Using a computationally inexpensive way to detect the feasibility/infeasibility of the problem due to the thrust-to-weight constraint; (iii) Incorporating the rotation rate of the planet into the problem formulation; (iv) Developing additional constraints on the position and velocity to guarantee no-subsurface flight between the time samples of the temporal discretization; (v) Developing a fuel-limited targeting algorithm; (vi) Initial results on developing an onboard table lookup method to obtain almost fuel optimal solutions in real-time.

  4. Probabilistic arithmetic automata and their applications.

    PubMed

    Marschall, Tobias; Herms, Inke; Kaltenbach, Hans-Michael; Rahmann, Sven

    2012-01-01

    We present a comprehensive review on probabilistic arithmetic automata (PAAs), a general model to describe chains of operations whose operands depend on chance, along with two algorithms to numerically compute the distribution of the results of such probabilistic calculations. PAAs provide a unifying framework to approach many problems arising in computational biology and elsewhere. We present five different applications, namely 1) pattern matching statistics on random texts, including the computation of the distribution of occurrence counts, waiting times, and clump sizes under hidden Markov background models; 2) exact analysis of window-based pattern matching algorithms; 3) sensitivity of filtration seeds used to detect candidate sequence alignments; 4) length and mass statistics of peptide fragments resulting from enzymatic cleavage reactions; and 5) read length statistics of 454 and IonTorrent sequencing reads. The diversity of these applications indicates the flexibility and unifying character of the presented framework. While the construction of a PAA depends on the particular application, we single out a frequently applicable construction method: We introduce deterministic arithmetic automata (DAAs) to model deterministic calculations on sequences, and demonstrate how to construct a PAA from a given DAA and a finite-memory random text model. This procedure is used for all five discussed applications and greatly simplifies the construction of PAAs. Implementations are available as part of the MoSDi package. Its application programming interface facilitates the rapid development of new applications based on the PAA framework.
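The DAA-plus-random-text-model construction described above can be sketched in miniature: a KMP-style pattern automaton supplies the deterministic states, and dynamic programming over (state, count) pairs under an i.i.d. letter model yields the exact distribution of occurrence counts. This is an illustrative sketch, not the MoSDi implementation; the pattern and letter probabilities are made-up.

```python
from collections import defaultdict

def step(q, a, pattern):
    """DFA transition: length of the longest pattern prefix that is a
    suffix of (matched prefix of length q) + a. Handles overlaps."""
    s = pattern[:q] + a
    while s and not pattern.startswith(s):
        s = s[1:]
    return len(s)

def count_distribution(pattern, probs, n):
    """Exact distribution of (overlapping) occurrence counts of `pattern`
    in a random text of length n with i.i.d. letter probabilities `probs`."""
    dist = {(0, 0): 1.0}                  # (automaton state, count) -> prob
    for _ in range(n):
        nxt = defaultdict(float)
        for (q, c), p in dist.items():
            for a, pa in probs.items():
                q2 = step(q, a, pattern)
                nxt[(q2, c + (q2 == len(pattern)))] += p * pa
        dist = nxt
    out = defaultdict(float)
    for (q, c), p in dist.items():
        out[c] += p
    return dict(out)

d = count_distribution("ab", {"a": 0.5, "b": 0.5}, 2)
```

For pattern "ab" over a uniform binary alphabet and n = 2, the only length-2 text containing the pattern is "ab" itself, so the count is 1 with probability 0.25 and 0 otherwise.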

  5. Automatic Design of Synthetic Gene Circuits through Mixed Integer Non-linear Programming

    PubMed Central

    Huynh, Linh; Kececioglu, John; Köppe, Matthias; Tagkopoulos, Ilias

    2012-01-01

    Automatic design of synthetic gene circuits poses a significant challenge to synthetic biology, primarily due to the complexity of biological systems, and the lack of rigorous optimization methods that can cope with the combinatorial explosion as the number of biological parts increases. Current optimization methods for synthetic gene design rely on heuristic algorithms that are usually not deterministic, deliver sub-optimal solutions, and provide no guarantees on convergence or error bounds. Here, we introduce an optimization framework for the problem of part selection in synthetic gene circuits that is based on mixed integer non-linear programming (MINLP), which is a deterministic method that finds the globally optimal solution and guarantees convergence in finite time. Given a synthetic gene circuit, a library of characterized parts, and user-defined constraints, our method can find the optimal selection of parts that satisfy the constraints and best approximates the objective function given by the user. We evaluated the proposed method in the design of three synthetic circuits (a toggle switch, a transcriptional cascade, and a band detector), with both experimentally constructed and synthetic promoter libraries. Scalability and robustness analysis shows that the proposed framework scales well with the library size and the solution space. The work described here is a step towards a unifying, realistic framework for the automated design of biological circuits. PMID:22536398
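To make the part-selection problem concrete, the toy sketch below does what the MINLP approach guarantees in the exact-but-exponential way: exhaustively enumerate all part combinations, discard those violating a constraint, and keep the one closest to a target expression level. The part names, strengths, target, and multiplicative expression model are all invented for illustration; they are not from the paper or its libraries.

```python
from itertools import product

# Hypothetical toy part library (arbitrary units).
promoters = {"pA": 0.2, "pB": 1.0, "pC": 5.0}
rbss      = {"r1": 0.5, "r2": 2.0}

target = 2.0    # desired steady-state expression level (a.u.)
budget = 4.0    # user constraint: promoter strength must not exceed this

def best_selection():
    """Exact part selection by brute force: feasible for 6 combinations,
    hopeless as libraries grow -- the combinatorial explosion motivating MINLP."""
    best = None
    for (pn, ps), (rn, re) in product(promoters.items(), rbss.items()):
        if ps > budget:
            continue                      # constraint violated, skip
        err = abs(ps * re - target)       # simplistic multiplicative model
        if best is None or err < best[0]:
            best = (err, pn, rn)
    return best
```

With these made-up numbers the search space has six feasible-or-not candidates; a real promoter/RBS/gene library multiplies out to far more, which is why a deterministic solver with pruning matters.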

  6. Stability analysis of a deterministic dose calculation for MRI-guided radiotherapy.

    PubMed

    Zelyak, O; Fallone, B G; St-Aubin, J

    2017-12-14

    Modern effort in radiotherapy to address the challenges of tumor localization and motion has led to the development of MRI guided radiotherapy technologies. Accurate dose calculations must properly account for the effects of the MRI magnetic fields. Previous work has investigated the accuracy of a deterministic linear Boltzmann transport equation (LBTE) solver that includes magnetic field, but not the stability of the iterative solution method. In this work, we perform a stability analysis of this deterministic algorithm including an investigation of the convergence rate dependencies on the magnetic field, material density, energy, and anisotropy expansion. The iterative convergence rate of the continuous and discretized LBTE including magnetic fields is determined by analyzing the spectral radius using Fourier analysis for the stationary source iteration (SI) scheme. The spectral radius is calculated when the magnetic field is included (1) as a part of the iteration source, and (2) inside the streaming-collision operator. The non-stationary Krylov subspace solver GMRES is also investigated as a potential method to accelerate the iterative convergence, and an angular parallel computing methodology is investigated as a method to enhance the efficiency of the calculation. SI is found to be unstable when the magnetic field is part of the iteration source, but unconditionally stable when the magnetic field is included in the streaming-collision operator. The discretized LBTE with magnetic fields using a space-angle upwind stabilized discontinuous finite element method (DFEM) was also found to be unconditionally stable, but the spectral radius rapidly reaches unity for very low-density media and increasing magnetic field strengths indicating arbitrarily slow convergence rates. However, GMRES is shown to significantly accelerate the DFEM convergence rate showing only a weak dependence on the magnetic field. In addition, the use of an angular parallel computing strategy is shown to potentially increase the efficiency of the dose calculation.
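The stability criterion driving this analysis has a compact numerical analogue: a stationary iteration x ← Mx + b converges if and only if the spectral radius of M is below one. The sketch below demonstrates that with arbitrary 2×2 matrices; the matrices are illustrative stand-ins, not the LBTE operators.

```python
def spectral_radius_2x2(M):
    """|largest eigenvalue| of a 2x2 matrix, via trace and determinant."""
    (a, b), (c, d) = M
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4.0 * det
    if disc >= 0.0:
        r = disc ** 0.5
        return max(abs((tr + r) / 2.0), abs((tr - r) / 2.0))
    return det ** 0.5                     # complex pair: |lambda| = sqrt(det)

def source_iterate(M, b, steps=200):
    """Stationary iteration x <- M x + b, the structure of source iteration."""
    x = (0.0, 0.0)
    for _ in range(steps):
        x = (M[0][0] * x[0] + M[0][1] * x[1] + b[0],
             M[1][0] * x[0] + M[1][1] * x[1] + b[1])
    return x

stable   = ((0.5, 0.1), (0.0, 0.4))   # spectral radius 0.5 -> converges
unstable = ((1.2, 0.0), (0.0, 0.3))   # spectral radius 1.2 -> diverges
x = source_iterate(stable, (1.0, 1.0))
```

A spectral radius just below one, as the abstract reports for low-density media, means the iteration is technically stable but contracts arbitrarily slowly; Krylov methods such as GMRES do not rely on the contraction factor, which is why they accelerate the DFEM solve.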

  7. Stability analysis of a deterministic dose calculation for MRI-guided radiotherapy

    NASA Astrophysics Data System (ADS)

    Zelyak, O.; Fallone, B. G.; St-Aubin, J.

    2018-01-01

    Modern effort in radiotherapy to address the challenges of tumor localization and motion has led to the development of MRI guided radiotherapy technologies. Accurate dose calculations must properly account for the effects of the MRI magnetic fields. Previous work has investigated the accuracy of a deterministic linear Boltzmann transport equation (LBTE) solver that includes magnetic field, but not the stability of the iterative solution method. In this work, we perform a stability analysis of this deterministic algorithm including an investigation of the convergence rate dependencies on the magnetic field, material density, energy, and anisotropy expansion. The iterative convergence rate of the continuous and discretized LBTE including magnetic fields is determined by analyzing the spectral radius using Fourier analysis for the stationary source iteration (SI) scheme. The spectral radius is calculated when the magnetic field is included (1) as a part of the iteration source, and (2) inside the streaming-collision operator. The non-stationary Krylov subspace solver GMRES is also investigated as a potential method to accelerate the iterative convergence, and an angular parallel computing methodology is investigated as a method to enhance the efficiency of the calculation. SI is found to be unstable when the magnetic field is part of the iteration source, but unconditionally stable when the magnetic field is included in the streaming-collision operator. The discretized LBTE with magnetic fields using a space-angle upwind stabilized discontinuous finite element method (DFEM) was also found to be unconditionally stable, but the spectral radius rapidly reaches unity for very low-density media and increasing magnetic field strengths indicating arbitrarily slow convergence rates. However, GMRES is shown to significantly accelerate the DFEM convergence rate showing only a weak dependence on the magnetic field. In addition, the use of an angular parallel computing strategy is shown to potentially increase the efficiency of the dose calculation.

  8. Corrigendum to "Stability analysis of a deterministic dose calculation for MRI-guided radiotherapy".

    PubMed

    Zelyak, Oleksandr; Fallone, B Gino; St-Aubin, Joel

    2018-03-12

    Modern effort in radiotherapy to address the challenges of tumor localization and motion has led to the development of MRI guided radiotherapy technologies. Accurate dose calculations must properly account for the effects of the MRI magnetic fields. Previous work has investigated the accuracy of a deterministic linear Boltzmann transport equation (LBTE) solver that includes magnetic field, but not the stability of the iterative solution method. In this work, we perform a stability analysis of this deterministic algorithm including an investigation of the convergence rate dependencies on the magnetic field, material density, energy, and anisotropy expansion. The iterative convergence rate of the continuous and discretized LBTE including magnetic fields is determined by analyzing the spectral radius using Fourier analysis for the stationary source iteration (SI) scheme. The spectral radius is calculated when the magnetic field is included (1) as a part of the iteration source, and (2) inside the streaming-collision operator. The non-stationary Krylov subspace solver GMRES is also investigated as a potential method to accelerate the iterative convergence, and an angular parallel computing methodology is investigated as a method to enhance the efficiency of the calculation. SI is found to be unstable when the magnetic field is part of the iteration source, but unconditionally stable when the magnetic field is included in the streaming-collision operator. The discretized LBTE with magnetic fields using a space-angle upwind stabilized discontinuous finite element method (DFEM) was also found to be unconditionally stable, but the spectral radius rapidly reaches unity for very low density media and increasing magnetic field strengths indicating arbitrarily slow convergence rates. However, GMRES is shown to significantly accelerate the DFEM convergence rate showing only a weak dependence on the magnetic field. In addition, the use of an angular parallel computing strategy is shown to potentially increase the efficiency of the dose calculation. © 2018 Institute of Physics and Engineering in Medicine.

  9. What controls the maximum magnitude of injection-induced earthquakes?

    NASA Astrophysics Data System (ADS)

    Eaton, D. W. S.

    2017-12-01

    Three different approaches for estimation of maximum magnitude are considered here, along with their implications for managing risk. The first approach is based on a deterministic limit for seismic moment proposed by McGarr (1976), which was originally designed for application to mining-induced seismicity. This approach has since been reformulated for earthquakes induced by fluid injection (McGarr, 2014). In essence, this method assumes that the upper limit for seismic moment release is constrained by the pressure-induced stress change. A deterministic limit is given by the product of shear modulus and the net injected fluid volume. This method is based on the assumptions that the medium is fully saturated and in a state of incipient failure. An alternative geometrical approach was proposed by Shapiro et al. (2011), who postulated that the rupture area for an induced earthquake falls entirely within the stimulated volume. This assumption reduces the maximum-magnitude problem to one of estimating the largest potential slip surface area within a given stimulated volume. Finally, van der Elst et al. (2016) proposed that the maximum observed magnitude, statistically speaking, is the expected maximum value for a finite sample drawn from an unbounded Gutenberg-Richter distribution. These three models imply different approaches for risk management. The deterministic method proposed by McGarr (2014) implies that a ceiling on the maximum magnitude can be imposed by limiting the net injected volume, whereas the approach developed by Shapiro et al. (2011) implies that the time-dependent maximum magnitude is governed by the spatial size of the microseismic event cloud. Finally, the sample-size hypothesis of van der Elst et al. (2016) implies that the best available estimate of the maximum magnitude is based upon observed seismicity rate. The latter two approaches suggest that real-time monitoring is essential for effective management of risk. A reliable estimate of maximum plausible magnitude would clearly be beneficial for quantitative risk assessment of injection-induced seismicity.
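The McGarr bound described above is a one-line formula: the maximum seismic moment is M0 = G·ΔV (shear modulus times net injected volume), converted to moment magnitude with the standard Hanks-Kanamori relation Mw = (2/3)(log10 M0 − 9.1) for M0 in N·m. The shear modulus and injected volume below are illustrative values, not figures from the abstract.

```python
import math

def mcgarr_max_mw(shear_modulus_pa=3.0e10, injected_volume_m3=1.0e5):
    """Upper-bound moment magnitude from the McGarr (2014) deterministic
    limit M0 = G * dV. Inputs here are illustrative round numbers."""
    m0 = shear_modulus_pa * injected_volume_m3        # seismic moment, N*m
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)       # Hanks-Kanamori Mw

mw = mcgarr_max_mw()   # G = 30 GPa, dV = 1e5 m^3  ->  M0 = 3e15 N*m
```

For these numbers the ceiling is about Mw 4.25, which illustrates the risk-management implication in the abstract: halving the injected volume lowers the bound by only 0.2 magnitude units, since Mw depends on the logarithm of ΔV.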

  10. SU-E-T-22: A Deterministic Solver of the Boltzmann-Fokker-Planck Equation for Dose Calculation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, X; Gao, H; Paganetti, H

    2015-06-15

    Purpose: The Boltzmann-Fokker-Planck equation (BFPE) accurately models the migration of photons/charged particles in tissues. While the Monte Carlo (MC) method is popular for solving BFPE in a statistical manner, we aim to develop a deterministic BFPE solver based on various state-of-the-art numerical acceleration techniques for rapid and accurate dose calculation. Methods: Our BFPE solver is based on the structured grid that is maximally parallelizable, with the discretization in energy, angle and space, and its cross section coefficients are derived or directly imported from the Geant4 database. The physical processes that are taken into account are Compton scattering, photoelectric effect, pair production for photons, and elastic scattering, ionization and bremsstrahlung for charged particles. While the spatial discretization is based on the diamond scheme, the angular discretization synergizes finite element method (FEM) and spherical harmonics (SH). Thus, SH is used to globally expand the scattering kernel and FEM is used to locally discretize the angular sphere. As a result, this hybrid method (FEM-SH) is both accurate in dealing with forward-peaking scattering via FEM, and efficient for multi-energy-group computation via SH. In addition, FEM-SH enables the analytical integration in energy variable of delta scattering kernel for elastic scattering with reduced truncation error from the numerical integration based on the classic SH-based multi-energy-group method. Results: The accuracy of the proposed BFPE solver was benchmarked against Geant4 for photon dose calculation. In particular, FEM-SH had improved accuracy compared to FEM, while both were within 2% of the results obtained with Geant4. Conclusion: A deterministic solver of the Boltzmann-Fokker-Planck equation is developed for dose calculation, and benchmarked against Geant4. Xiang Hong and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000) and the Shanghai Pujiang Talent Program (#14PJ1404500).

  11. A constant stress-drop model for producing broadband synthetic seismograms: Comparison with the next generation attenuation relations

    USGS Publications Warehouse

    Frankel, A.

    2009-01-01

    Broadband (0.1-20 Hz) synthetic seismograms for finite-fault sources were produced for a model where stress drop is constant with seismic moment to see if they can match the magnitude dependence and distance decay of response spectral amplitudes found in the Next Generation Attenuation (NGA) relations recently developed from strong-motion data of crustal earthquakes in tectonically active regions. The broadband synthetics were constructed for earthquakes of M 5.5, 6.5, and 7.5 by combining deterministic synthetics for plane-layered models at low frequencies with stochastic synthetics at high frequencies. The stochastic portion used a source model where the Brune stress drop of 100 bars is constant with seismic moment. The deterministic synthetics were calculated using an average slip velocity, and hence, dynamic stress drop, on the fault that is uniform with magnitude. One novel aspect of this procedure is that the transition frequency between the deterministic and stochastic portions varied with magnitude, so that the transition frequency is inversely related to the rise time of slip on the fault. The spectral accelerations at 0.2, 1.0, and 3.0 sec periods from the synthetics generally agreed with those from the set of NGA relations for M 5.5-7.5 for distances of 2-100 km. At distances of 100-200 km some of the NGA relations for 0.2 sec spectral acceleration were substantially larger than the values of the synthetics for M 7.5 and M 6.5 earthquakes because these relations do not have a term accounting for Q. At 3 and 5 sec periods, the synthetics for M 7.5 earthquakes generally had larger spectral accelerations than the NGA relations, although there was large scatter in the results from the synthetics. The synthetics showed a sag in response spectra at close-in distances for M 5.5 between 0.3 and 0.7 sec that is not predicted from the NGA relations.

  12. Parity bifurcations in trapped multistable phase locked exciton-polariton condensates

    NASA Astrophysics Data System (ADS)

    Tan, E. Z.; Sigurdsson, H.; Liew, T. C. H.

    2018-02-01

    We present a theoretical scheme for multistability in planar microcavity exciton-polariton condensates under nonresonant driving. Using an excitation profile resulting in a spatially patterned condensate, we observe organized phase locking which can abruptly reorganize as a result of pump induced instability made possible by nonlinear interactions. For π/2 symmetric systems this reorganization can be regarded as a parity transition and is found to be a fingerprint of multistable regimes existing over a finite range of excitation strengths. The natural degeneracy of the planar equations of motion gives rise to parity bifurcation points where the condensate, as a function of excitation intensity, bifurcates into one of two anisotropic degenerate solutions. Deterministic transitions between multistable states are made possible using controlled nonresonant pulses, perturbing the solution from one attractor to another.

  13. The influence of finite cavities on the sound insulation of double-plate structures.

    PubMed

    Brunskog, Jonas

    2005-06-01

    Lightweight walls are often designed as frameworks of studs with plates on each side--a double-plate structure. The studs constitute boundaries for the cavities, thereby both affecting the sound transmission directly by short-circuiting the plates, and indirectly by disturbing the sound field between the plates. The paper presents a deterministic prediction model for airborne sound insulation including both effects of the studs. A spatial transform technique is used, taking advantage of the periodicity. The acoustic field inside the cavities is expanded by means of cosine-series. The transmission coefficient (angle-dependent and diffuse) and transmission loss are studied. Numerical examples are presented and comparisons with measurement are performed. The result indicates that a reasonably good agreement between theory and measurement can be achieved.

  14. Comparison of Response Surface and Kriging Models for Multidisciplinary Design Optimization

    NASA Technical Reports Server (NTRS)

    Simpson, Timothy W.; Korte, John J.; Mauery, Timothy M.; Mistree, Farrokh

    1998-01-01

    In this paper, we compare and contrast the use of second-order response surface models and kriging models for approximating non-random, deterministic computer analyses. After reviewing the response surface method for constructing polynomial approximations, kriging is presented as an alternative approximation method for the design and analysis of computer experiments. Both methods are applied to the multidisciplinary design of an aerospike nozzle which consists of a computational fluid dynamics model and a finite-element model. Error analysis of the response surface and kriging models is performed along with a graphical comparison of the approximations, and four optimization problems are formulated and solved using both sets of approximation models. The second-order response surface models and kriging models (using a constant underlying global model and a Gaussian correlation function) yield comparable results.
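The kriging predictor with a Gaussian correlation function can be sketched compactly: correlate the sample sites into a matrix R, solve R·w = y, and predict by correlating the new point against the samples. This is a simplified zero-trend variant (ordinary kriging adds a constant trend term), with a hand-rolled linear solve so the sketch is self-contained; the sample data and correlation parameter theta are invented. Unlike a least-squares response surface, this predictor interpolates the training data exactly, which is the property the paper exploits for deterministic computer analyses.

```python
import math

def gauss_corr(x1, x2, theta=2.0):
    """Gaussian correlation function exp(-theta * |x1 - x2|^2)."""
    return math.exp(-theta * (x1 - x2) ** 2)

def solve(A, b):
    """Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def krige(xs, ys, x, theta=2.0):
    """Zero-trend kriging predictor: solve R w = y, then predict r(x) . w."""
    R = [[gauss_corr(a, b, theta) for b in xs] for a in xs]
    w = solve(R, ys)
    return sum(wi * gauss_corr(xi, x, theta) for wi, xi in zip(w, xs))

xs, ys = [0.0, 1.0, 2.0], [1.0, 3.0, 2.0]
yhat = krige(xs, ys, 1.0)
```

Evaluating at a training site returns the training value to rounding error, since the prediction at x_j is the j-th component of R·w = y.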

  15. Bounds on the entanglement entropy of droplet states in the XXZ spin chain

    NASA Astrophysics Data System (ADS)

    Beaud, V.; Warzel, S.

    2018-01-01

    We consider a class of one-dimensional quantum spin systems on the finite lattice Λ ⊂Z , related to the XXZ spin chain in its Ising phase. It includes in particular the so-called droplet Hamiltonian. The entanglement entropy of energetically low-lying states over a bipartition Λ = B ∪ Bc is investigated and proven to satisfy a logarithmic bound in terms of min{n, |B|, |Bc|}, where n denotes the maximal number of down spins in the considered state. Upon addition of any (positive) random potential, the bound becomes uniformly constant on average, thereby establishing an area law. The proof is based on spectral methods: a deterministic bound on the local (many-body integrated) density of states is derived from an energetically motivated Combes-Thomas estimate.

  16. Sonic boom interaction with turbulence

    NASA Technical Reports Server (NTRS)

    Rusak, Zvi; Giddings, Thomas E.

    1994-01-01

    A recently developed transonic small-disturbance model is used to analyze the interactions of random disturbances with a weak shock. The model equation has an extended form of the classic small-disturbance equation for unsteady transonic aerodynamics. It shows that diffraction effects, nonlinear steepening effects, focusing and caustic effects and random induced vorticity fluctuations interact simultaneously to determine the development of the shock wave in space and time and the pressure field behind it. A finite-difference algorithm to solve the mixed-type elliptic-hyperbolic flows around the shock wave is presented. Numerical calculations of shock wave interactions with various deterministic vorticity and temperature disturbances result in complicated shock wave structures and describe peaked as well as rounded pressure signatures behind the shock front, as were recorded in experiments of sonic booms running through atmospheric turbulence.

  17. Large-deviation probabilities for correlated Gaussian processes and intermittent dynamical systems

    NASA Astrophysics Data System (ADS)

    Massah, Mozhdeh; Nicol, Matthew; Kantz, Holger

    2018-05-01

    In its classical version, the theory of large deviations makes quantitative statements about the probability of outliers when estimating time averages, if time series data are identically independently distributed. We study large-deviation probabilities (LDPs) for time averages in short- and long-range correlated Gaussian processes and show that long-range correlations lead to subexponential decay of LDPs. A particular deterministic intermittent map can, depending on a control parameter, also generate long-range correlated time series. We illustrate numerically, in agreement with the mathematical literature, that this type of intermittency leads to a power law decay of LDPs. The power law decay holds irrespective of whether the correlation time is finite or infinite, and hence irrespective of whether the central limit theorem applies or not.
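The classical i.i.d. case mentioned above is easy to demonstrate numerically: for ±1 coin flips, the probability that the time average exceeds a threshold a decays exponentially in the series length n (whereas the abstract reports subexponential or power-law decay for long-range correlated data). The threshold, trial count, and seed below are arbitrary choices for the illustration.

```python
import random

def ldp_estimate(n, a=0.5, trials=20000, seed=1):
    """Monte Carlo estimate of the large-deviation probability
    P(mean of n i.i.d. +-1 flips > a)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        s = sum(rng.choice((-1, 1)) for _ in range(n))
        hits += (s / n) > a
    return hits / trials

e10 = ldp_estimate(10)   # exact value is 56/1024 ~ 0.055
e40 = ldp_estimate(40)   # orders of magnitude smaller: exponential decay
```

For n = 10 the event requires at least 8 of 10 heads, with exact probability (45 + 10 + 1)/1024; by n = 40 the estimate has dropped by roughly two orders of magnitude, the exponential decay that long-range correlations destroy.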

  18. The exit-time problem for a Markov jump process

    NASA Astrophysics Data System (ADS)

    Burch, N.; D'Elia, M.; Lehoucq, R. B.

    2014-12-01

    The purpose of this paper is to consider the exit-time problem for a finite-range Markov jump process, i.e, the distance the particle can jump is bounded independent of its location. Such jump diffusions are expedient models for anomalous transport exhibiting super-diffusion or nonstandard normal diffusion. We refer to the associated deterministic equation as a volume-constrained nonlocal diffusion equation. The volume constraint is the nonlocal analogue of a boundary condition necessary to demonstrate that the nonlocal diffusion equation is well-posed and is consistent with the jump process. A critical aspect of the analysis is a variational formulation and a recently developed nonlocal vector calculus. This calculus allows us to pose nonlocal backward and forward Kolmogorov equations, the former equation granting the various moments of the exit-time distribution.
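The probabilistic side of the exit-time problem above can be simulated directly: a particle on [0, L] takes jumps whose size is bounded independent of its location (here, uniform on [-h, h]) until it leaves the interval. The interval length, jump range, trial count, and seed are illustrative choices; the paper itself analyzes the deterministic, volume-constrained nonlocal formulation of this process.

```python
import random

def mean_exit_time(start, L=1.0, h=0.1, trials=4000, seed=7):
    """Monte Carlo mean exit time (in steps) of a finite-range jump
    process on [0, L] started at `start`."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        x, steps = start, 0
        while 0.0 <= x <= L:
            x += rng.uniform(-h, h)    # jump bounded by h everywhere
            steps += 1
        total += steps
    return total / trials

mid_t  = mean_exit_time(0.5)    # started mid-interval
edge_t = mean_exit_time(0.05)   # started near the boundary
```

As the diffusion limit suggests, the mean exit time scales like x(L − x) divided by the per-step variance, so starting mid-interval takes several times longer than starting near an edge.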

  19. AUTO: An Automation Simulator.

    ERIC Educational Resources Information Center

    Gold, Bennett Alan

    In order to devise an aid for the teaching of formal languages and automata theory, a system was developed which allows a student to design, test, and change automata in an interactive manner. This process permits the user to observe the step-by-step operation of a defined automaton and to correct or alter its operation. Thus, the need for lengthy…

  20. On the combined gradient-stochastic plasticity model: Application to Mo-micropillar compression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Konstantinidis, A. A., E-mail: akonsta@civil.auth.gr; Zhang, X., E-mail: zhangxu26@126.com; Aifantis, E. C., E-mail: mom@mom.gen.auth.gr

    2015-02-17

    A formulation for addressing heterogeneous material deformation is proposed. It is based on the use of a stochasticity-enhanced gradient plasticity model implemented through a cellular automaton. The specific application is on Mo-micropillar compression, for which the irregularities of the strain bursts observed have been experimentally measured and theoretically interpreted through Tsallis' q-statistics.

  1. A quantum Samaritan’s dilemma cellular automaton

    PubMed Central

    Situ, Haozhen

    2017-01-01

    The dynamics of a spatial quantum formulation of the iterated Samaritan’s dilemma game with variable entangling is studied in this work. The game is played in the cellular automata manner, i.e. with local and synchronous interaction. The game is assessed in fair and unfair contests, in noiseless scenarios and with disrupting quantum noise. PMID:28680654

  2. Deterministic chaotic dynamics of Raba River flow (Polish Carpathian Mountains)

    NASA Astrophysics Data System (ADS)

    Kędra, Mariola

    2014-02-01

    Is the underlying dynamics of river flow random or deterministic? If it is deterministic, is it deterministic chaotic? This issue is still controversial. The application of several independent methods, techniques and tools for studying daily river flow data gives consistent, reliable and clear-cut results to the question. The outcomes point out that the investigated discharge dynamics is not random but deterministic. Moreover, the results completely confirm the nonlinear deterministic chaotic nature of the studied process. The research was conducted on daily discharge from two selected gauging stations of the mountain river in southern Poland, the Raba River.

  3. FAST TRACK COMMUNICATION Solving the ultradiscrete KdV equation

    NASA Astrophysics Data System (ADS)

    Willox, Ralph; Nakata, Yoichi; Satsuma, Junkichi; Ramani, Alfred; Grammaticos, Basile

    2010-12-01

    We show that a generalized cellular automaton, exhibiting solitonic interactions, can be explicitly solved by means of techniques first introduced in the context of the scattering problem for the KdV equation. We apply this method to calculate the phase-shifts caused by interactions between the solitonic and non-solitonic parts into which arbitrary initial states separate in time.
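The basic soliton cellular automaton underlying the ultradiscrete KdV equation is the box-ball system, and its update rule fits in a few lines: at each time step, move every ball (leftmost first) to the nearest empty box on its right. A block of k consecutive balls then travels k cells per step, so blocks behave as solitons. This sketch shows only the basic system, not the generalized automaton or the scattering-based solution method of the paper.

```python
def bbs_step(state):
    """One time step of the box-ball system (ultradiscrete KdV):
    each ball, taken left to right, jumps to the nearest empty box
    on its right. `state` is a list of 0s (empty) and 1s (ball)."""
    s = list(state)
    for i in [j for j, b in enumerate(state) if b == 1]:
        j = i + 1
        while j < len(s) and s[j] == 1:   # scan for the nearest empty box
            j += 1
        if j == len(s):                    # extend lattice if needed
            s.append(0)
        s[i], s[j] = 0, 1
    return s

after = bbs_step([1, 1, 1, 0, 0, 0, 0, 0, 0])
```

A lone block of three balls advances exactly three cells per step; the ball count (the conserved "mass") never changes, the simplest of the system's conserved quantities.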

  4. Application of a hierarchical structure stochastic learning automaton

    NASA Technical Reports Server (NTRS)

    Neville, R. G.; Chrystall, M. S.; Mars, P.

    1979-01-01

    A hierarchical structure automaton was developed using a two-state stochastic learning automaton (SLA) in a time-shared model. Application of the hierarchical SLA to systems with multidimensional, multimodal performance criteria is described. Results of experiments performed with the hierarchical SLA using a performance index with a superimposed noise component of + or - delta distributed uniformly over the surface are discussed.

  5. A cellular automaton implementation of a quantum battle of the sexes game with imperfect information

    NASA Astrophysics Data System (ADS)

    Alonso-Sanz, Ramón

    2015-10-01

    The dynamics of a spatial quantum formulation of the iterated battle of the sexes game with imperfect information is studied in this work. The game is played with variable entangling in a cellular automata manner, i.e. with local and synchronous interaction. The effect of spatial structure is assessed in fair and unfair scenarios.

  6. The Game of Life Rules on Penrose Tilings: Still Life and Oscillators

    NASA Astrophysics Data System (ADS)

    Owens, Nick; Stepney, Susan

    John Horton Conway's Game of Life is a simple two-dimensional, two state cellular automaton (CA), remarkable for its complex behaviour. That behaviour is known to be very sensitive to a change in the CA rules. Here we continue our investigations into its sensitivity to changes in the lattice, by the use of an aperiodic Penrose tiling lattice.
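On the ordinary square lattice the rules that the Penrose-tiling study generalizes are easy to state in full: a live cell with two or three live neighbours survives, and a dead cell with exactly three live neighbours is born. A set-based implementation also makes the two object classes in the title concrete: a block is a still life and a blinker is a period-2 oscillator.

```python
from collections import Counter

def life_step(cells):
    """One Game of Life generation; `cells` is a set of live (x, y) cells."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in cells
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Born with exactly 3 neighbours, or survive with 2 or 3.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in cells)}

block   = {(0, 0), (0, 1), (1, 0), (1, 1)}   # still life: fixed point
blinker = {(0, 0), (1, 0), (2, 0)}           # oscillator: period 2
```

On an aperiodic Penrose lattice the neighbour count per cell varies, which is exactly why the still-life and oscillator censuses change, as the paper investigates.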

  7. 2D photonic crystal complete band gap search using a cyclic cellular automaton refination

    NASA Astrophysics Data System (ADS)

    González-García, R.; Castañón, G.; Hernández-Figueroa, H. E.

    2014-11-01

    We present a refinement method based on a cyclic cellular automaton (CCA) that simulates a crystallization-like process, aided by a heuristic evolutionary method called differential evolution (DE), used to perform an ordered search for full photonic band gaps (FPBGs) in a 2D photonic crystal (PC). The solution is posed as a combinatorial optimization of the elements in a binary array. These elements represent the existence or absence of a dielectric material surrounded by air, thus representing a general geometry whose search space is defined by the number of elements in such an array. A block-iterative frequency-domain method was used to compute the FPBGs of a PC, when present. DE has proved useful in combinatorial problems, and we also present an implementation feature that takes advantage of the periodic nature of PCs to enhance the convergence of the algorithm. Finally, we used this methodology to find a PC structure with a 19% bandgap-to-midgap ratio without requiring previous information on suboptimal configurations, and we made a statistical study of how it is affected by disorder at the borders of the structure, compared with a previous work that uses a genetic algorithm.
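
    The crystallization-like process can be sketched with a minimal one-dimensional cyclic cellular automaton; the coupling to differential evolution and the photonic-crystal evaluation are not reproduced, and the rule shown (advance to the successor state when a neighbour already holds it) is the textbook CCA update:

```python
# Minimal 1-D cyclic cellular automaton (CCA) with k states: a cell in
# state s advances to (s + 1) mod k when at least one neighbour already
# holds that state, producing crystallization-like waves.
def cca_step(row, k):
    n = len(row)
    nxt = list(row)
    for i, s in enumerate(row):
        succ = (s + 1) % k
        if row[(i - 1) % n] == succ or row[(i + 1) % n] == succ:
            nxt[i] = succ
    return nxt

row = [0, 0, 1, 0, 0, 2, 0, 0]
for _ in range(4):
    print(row)
    row = cca_step(row, k=3)
```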

  8. Complex traffic flow that allows as well as hampers lane-changing intrinsically contains social-dilemma structures

    NASA Astrophysics Data System (ADS)

    Iwamura, Yoshiro; Tanimoto, Jun

    2018-02-01

    To investigate whether social dilemma structures can be found in a realistic traffic flow reproduced by a model, we built a new microscopic model in which an intentional driver may attempt a lane change to move in front of other vehicles and may hamper others' lane changes. Our model consists of two parts: a cellular automaton emulating a real traffic flow, and evolutionary game theory implementing a driver's decision-making process. Numerical results reveal that a social dilemma like the multi-player chicken game or prisoner's dilemma game emerges depending on the traffic phase. This finding implies that a social dilemma, which has so far been investigated by applied mathematics, hides behind a traffic flow, which has been explored by fluid dynamics.

    Highlights:
    - A complex traffic-flow system accounting for the driver's decision-making process is considered.
    - A new model dovetailing a cellular automaton with game theory is established.
    - Statistical results from numerical simulations reveal a social-dilemma structure underlying traffic flow.
    - The social dilemma is triggered by a driver's egocentric acts of lane-changing and hampering others' lane changes.
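
    The paper's own lane-changing and game-theoretic rules are not specified here; as a hedged baseline, the kind of single-lane cellular automaton such models typically extend is the Nagel-Schreckenberg update (acceleration, braking, random slowdown, movement):

```python
import random

# Single-lane Nagel-Schreckenberg traffic CA on a ring road: the
# baseline that lane-changing / game-theoretic extensions build on.
def nasch_step(pos, vel, length, vmax=5, p_slow=0.3):
    order = sorted(range(len(pos)), key=lambda i: pos[i])
    pos_sorted = [pos[i] for i in order]
    new_pos, new_vel = list(pos), list(vel)
    for idx, i in enumerate(order):
        # gap to the next car ahead (periodic boundary)
        gap = (pos_sorted[(idx + 1) % len(order)] - pos_sorted[idx]) % length
        v = min(vel[i] + 1, vmax)          # acceleration
        v = min(v, gap - 1)                # braking to avoid collision
        if v > 0 and random.random() < p_slow:
            v -= 1                         # random slowdown
        new_vel[i] = max(v, 0)
        new_pos[i] = (pos[i] + new_vel[i]) % length
    return new_pos, new_vel

random.seed(0)
length, n_cars = 100, 20
pos = random.sample(range(length), n_cars)
vel = [0] * n_cars
for _ in range(50):
    pos, vel = nasch_step(pos, vel, length)
print(sum(vel) / n_cars)   # mean speed after relaxation
```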

  9. Four-phase or two-phase signal plan? A study on four-leg intersection by cellular automaton simulations

    NASA Astrophysics Data System (ADS)

    Jin, Cheng-Jie; Wang, Wei; Jiang, Rui

    2016-08-01

    The proper setting of traffic signals at signalized intersections is one of the most important tasks in traffic control and management. This paper has evaluated the four-phase traffic signal plans at a four-leg intersection via cellular automaton simulations. Each leg consists of three lanes, an exclusive left-turn lane, a through lane, and a through/right-turn lane. For a comparison, we also evaluate the two-phase signal plan. The diagram of the intersection states in the space of inflow rate versus turning ratio has been presented, which exhibits four regions: In region I/II/III, congestion will propagate upstream and laterally and result in queue spillover with both signal plans/two-phase signal plan/four-phase signal plan, respectively. Therefore, neither signal plan works in region I, and only the four-phase signal plan/two-phase signal plan works in region II/III. In region IV, both signal plans work, but two-phase signal plan performs better in terms of average delays of vehicles. Finally, we study the diagram of the intersection states and average delays in the asymmetrical configurations.

  10. Multiscale modeling of porous ceramics using movable cellular automaton method

    NASA Astrophysics Data System (ADS)

    Smolin, Alexey Yu.; Smolin, Igor Yu.; Smolina, Irina Yu.

    2017-10-01

    The paper presents a multiscale model for porous ceramics based on the movable cellular automaton method, which is a particle method in the novel computational mechanics of solids. The initial scale of the proposed approach corresponds to the characteristic size of the smallest pores in the ceramics. At this scale, we model uniaxial compression of several representative samples with an explicit account of pores of the same size but with unique positions in space. As a result, we get the average values of Young's modulus and strength, as well as the parameters of the Weibull distribution of these properties at the current scale level. These data allow us to describe the material behavior at the next scale level, where only the larger pores are considered explicitly, while the influence of small pores is included via the effective properties determined earlier. If the pore size distribution function of the material has N maxima, we need to perform computations for N-1 levels in order to get the properties step by step from the lowest scale up to the macroscale. The proposed approach was applied to modeling zirconia ceramics with a bimodal pore size distribution. The obtained results show correct behavior of the model sample at the macroscale.
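
    The step that passes properties between scale levels relies on Weibull statistics of the sample ensemble. A minimal sketch of one standard way to estimate Weibull parameters (probability-plot linear regression, on synthetic data rather than the paper's simulation results):

```python
import math, random

# Estimate Weibull parameters (shape m, scale s) from a sample of
# strengths by linear regression of ln(-ln(1 - F)) on ln(x), where F is
# the empirical CDF -- the step that turns per-sample results at one
# scale into a distribution for the next scale level.
def fit_weibull(samples):
    xs = sorted(samples)
    n = len(xs)
    pts = [(math.log(x), math.log(-math.log(1 - (i + 0.5) / n)))
           for i, x in enumerate(xs)]
    mx = sum(px for px, _ in pts) / n
    my = sum(py for _, py in pts) / n
    m = (sum((px - mx) * (py - my) for px, py in pts) /
         sum((px - mx) ** 2 for px, _ in pts))   # slope = shape
    s = math.exp(mx - my / m)                    # scale from intercept
    return m, s

random.seed(3)
true_m, true_s = 8.0, 300.0   # synthetic "strengths" (MPa), assumed values
data = [true_s * (-math.log(1 - random.random())) ** (1 / true_m)
        for _ in range(500)]
m, s = fit_weibull(data)
print(m, s)   # should be near (8, 300)
```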

  11. A new cellular automaton for signal controlled traffic flow based on driving behaviors

    NASA Astrophysics Data System (ADS)

    Wang, Yang; Chen, Yan-Yan

    2015-03-01

    The complexity of signal-controlled traffic largely stems from the various driving behaviors developed in response to the traffic signal. However, the existing models take only a few driving behaviors into account, and consequently the traffic dynamics has not been completely explored. Therefore, a new cellular automaton model, which incorporates the driving behaviors typically manifested during the different stages when vehicles are moving toward a traffic light, is proposed in this paper. Numerical simulations have demonstrated that the proposed model can reproduce spontaneous traffic breakdown and the dissolution of over-saturated traffic. Furthermore, the simulation results indicate that the slow-to-start behavior and the inch-forward behavior can foster traffic breakdown. In particular, it has been discovered that over-saturated traffic can revert to an under-saturated state when the slow-down behavior is activated after the spontaneous breakdown. Finally, the contributions of the driving behaviors to traffic breakdown have been examined. Project supported by the National Basic Research Program of China (Grant No. 2012CB723303) and the Beijing Committee of Science and Technology, China (Grant No. Z1211000003120100).

  12. The transition between immune and disease states in a cellular automaton model of clonal immune response

    NASA Astrophysics Data System (ADS)

    Bezzi, Michele; Celada, Franco; Ruffo, Stefano; Seiden, Philip E.

    1997-02-01

    In this paper we extend the Celada-Seiden (CS) model of the humoral immune response to include infectious viruses and killer T cells (cellular response). The model represents molecules and cells with bitstrings. The response of the system to virus involves a competition between the ability of the virus to kill the host cells and the host's ability to eliminate the virus. We find two basins of attraction in the dynamics of this system, one identified with disease and the other with the immune state. There is also an oscillating state that exists on the border of these two stable states. Fluctuations in the population of virus or antibody can end the oscillation and drive the system into one of the stable states. The introduction of mechanisms of cross-regulation between the two responses can bias the system towards one of them. We also study a mean field model, based on coupled maps, to investigate virus-like infections. This simple model reproduces the attractors for average populations observed in the cellular automaton. All the dynamical behavior connected to spatial extension is lost, as is the oscillating feature. Thus the mean field approximation introduced with coupled maps destroys oscillations.

  13. Formal verification of software-based medical devices considering medical guidelines.

    PubMed

    Daw, Zamira; Cleaveland, Rance; Vetter, Marcus

    2014-01-01

    Software-based devices have increasingly become an important part of several clinical scenarios. Due to their critical impact on human life, medical devices have very strict safety requirements. It is therefore necessary to apply verification methods to ensure that the safety requirements are met. Verification of software-based devices is commonly limited to the verification of their internal elements, without considering the interaction that these elements have with other devices or with the application environment in which they are used. Medical guidelines define clinical procedures, which contain the necessary information to completely verify medical devices. The objective of this work was to incorporate medical guidelines into the verification process in order to increase the reliability of software-based medical devices. Medical devices are developed using the model-driven method deterministic models for signal processing of embedded systems (DMOSES). This method uses unified modeling language (UML) models as a basis for the development of medical devices. The UML activity diagram is used to describe medical guidelines as workflows. The functionality of the medical devices is abstracted as a set of actions that is modeled within these workflows. In this paper, the UML models are verified using the UPPAAL model checker. For this purpose, a formalization approach for the UML models using timed automata (TA) is presented. A set of requirements is verified by the proposed approach for the navigation-guided biopsy. This shows the capability for identifying errors or optimization points both in the workflow and in the system design of the navigation device. In addition, an open-source Eclipse plug-in was developed for the automated transformation of UML models into TA models that are automatically verified using UPPAAL. The proposed method enables developers to model medical devices and their clinical environment using clinical workflows as one UML diagram. Additionally, the system design can be formally verified automatically.

  14. Reconstruction of DNA sequences using genetic algorithms and cellular automata: towards mutation prediction?

    PubMed

    Mizas, Ch; Sirakoulis, G Ch; Mardiris, V; Karafyllidis, I; Glykos, N; Sandaltzopoulos, R

    2008-04-01

    Change of DNA sequence that fuels evolution is, to a certain extent, a deterministic process because mutagenesis does not occur in an absolutely random manner. So far, it has not been possible to decipher the rules that govern DNA sequence evolution due to the extreme complexity of the entire process. In our attempt to approach this issue we focus solely on the mechanisms of mutagenesis and deliberately disregard the role of natural selection. Hence, in this analysis, evolution refers to the accumulation of genetic alterations that originate from mutations and are transmitted through generations without being subjected to natural selection. We have developed a software tool that allows modelling of a DNA sequence as a one-dimensional cellular automaton (CA) with four states per cell which correspond to the four DNA bases, i.e. A, C, T and G. The four states are represented by numbers of the quaternary number system. Moreover, we have developed genetic algorithms (GAs) in order to determine the rules of CA evolution that simulate the DNA evolution process. Linear evolution rules were considered and square matrices were used to represent them. If DNA sequences of different evolution steps are available, our approach allows the determination of the underlying evolution rule(s). Conversely, once the evolution rules are deciphered, our tool may reconstruct the DNA sequence in any previous evolution step for which the exact sequence information was unknown. The developed tool may be used to test various parameters that could influence evolution. We describe a paradigm relying on the assumption that mutagenesis is governed by a near-neighbour-dependent mechanism. Based on the satisfactory performance of our system in the deliberately simplified example, we propose that our approach could offer a starting point for future attempts to understand the mechanisms that govern evolution. The developed software is open-source and has a user-friendly graphical input interface.
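
    A minimal sketch of the encoding described above: the sequence as a 1-D CA over the quaternary numbers with a linear near-neighbour rule. The coefficients below are illustrative assumptions, not the rules determined by the paper's genetic algorithms:

```python
# DNA sequence as a 1-D cellular automaton over Z4 (A, C, T, G -> 0..3)
# with a linear nearest-neighbour evolution rule, in the spirit of the
# paper's quaternary-number encoding. Coefficients a, b, c are made up
# for illustration; the paper determines them with genetic algorithms.
BASES = "ACTG"

def encode(seq):
    return [BASES.index(b) for b in seq]

def decode(vals):
    return "".join(BASES[v] for v in vals)

def evolve(vals, a=1, b=1, c=2):
    # s'_i = (a*s_{i-1} + b*s_i + c*s_{i+1}) mod 4, periodic boundary
    n = len(vals)
    return [(a * vals[i - 1] + b * vals[i] + c * vals[(i + 1) % n]) % 4
            for i in range(n)]

state = encode("ACGTTGCA")
for _ in range(3):
    print(decode(state))
    state = evolve(state)
```

Because the rule is linear over Z4, it can be written as a matrix acting on the sequence, and invertible rule matrices allow earlier evolution steps to be reconstructed, as the paper exploits.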

  15. Modeling bed load transport and step-pool morphology with a reduced-complexity approach

    NASA Astrophysics Data System (ADS)

    Saletti, Matteo; Molnar, Peter; Hassan, Marwan A.; Burlando, Paolo

    2016-04-01

    Steep mountain channels are complex fluvial systems, where classical methods developed for lowland streams fail to capture the dynamics of sediment transport and bed morphology. Estimations of sediment transport based on average conditions have more than one order of magnitude of uncertainty because of the wide grain-size distribution of the bed material, the small relative submergence of coarse grains, the episodic character of sediment supply, and the complex boundary conditions. Most notably, bed load transport is modulated by the structure of the bed, where grains are imbricated in steps and similar bedforms and are therefore much more stable than predicted. In this work we propose a new model based on a reduced-complexity (RC) approach focused on the reproduction of step-pool morphology. In our 2-D cellular-automaton model, entrainment, transport and deposition of particles are handled via intuitive rules based on physical principles. A parsimonious set of parameters controls the behavior of the system, and the basic processes can be treated deterministically or stochastically. The probability of entrainment of grains (and, as a consequence, particle travel distances and resting times) is a function of flow conditions and bed topography. Sediment input is fed at the upper boundary of the channel at a constant or variable rate. Our model yields realistic results in terms of longitudinal bed profiles and sediment transport trends. Phases of aggradation and degradation can be observed in the channel even under a constant input, and the memory of the morphology can be quantified with long-range persistence indicators. Sediment yield at the channel outlet shows intermittency as observed in natural streams. Steps are self-formed in the channel and their stability is tested against the model parameters. Our results show the potential of RC models as complementary tools to more sophisticated models. They provide a realistic description of complex morphological systems and help to better identify the key physical principles that rule their dynamics.

  16. Optimal perturbations for nonlinear systems using graph-based optimal transport

    NASA Astrophysics Data System (ADS)

    Grover, Piyush; Elamvazhuthi, Karthik

    2018-06-01

    We formulate and solve a class of finite-time transport and mixing problems in the set-oriented framework. The aim is to obtain optimal discrete-time perturbations in nonlinear dynamical systems to transport a specified initial measure on the phase space to a final measure in finite time. The measure is propagated under system dynamics in between the perturbations via the associated transfer operator. Each perturbation is described by a deterministic map in the measure space that implements a version of Monge-Kantorovich optimal transport with quadratic cost. Hence, the optimal solution minimizes a sum of quadratic costs on phase space transport due to the perturbations applied at specified times. The action of the transport map is approximated by a continuous pseudo-time flow on a graph, resulting in a tractable convex optimization problem. This problem is solved via state-of-the-art solvers to global optimality. We apply this algorithm to a problem of transport between measures supported on two disjoint almost-invariant sets in a chaotic fluid system, and to a finite-time optimal mixing problem by choosing the final measure to be uniform. In both cases, the optimal perturbations are found to exploit the phase space structures, such as lobe dynamics, leading to efficient global transport. As the time-horizon of the problem is increased, the optimal perturbations become increasingly localized. Hence, by combining the transfer operator approach with ideas from the theory of optimal mass transportation, we obtain a discrete-time graph-based algorithm for optimal transport and mixing in nonlinear systems.
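
    The graph-based formulation of the paper is not reproduced here, but the underlying Monge-Kantorovich problem with quadratic cost can be illustrated in the simplest discrete setting: for two 1-D measures, the monotone (north-west corner) coupling on sorted supports is optimal for convex costs, so a tiny solver needs no LP machinery:

```python
# Discrete Monge-Kantorovich transport with quadratic cost between two
# 1-D measures. In 1-D with convex cost, the monotone (north-west
# corner) coupling on sorted supports is optimal.
def transport_1d(src, dst):
    """src, dst: lists of (location, mass) with equal total mass."""
    a = [list(p) for p in sorted(src)]
    b = [list(p) for p in sorted(dst)]
    i = j = 0
    plan, cost = [], 0.0
    while i < len(a) and j < len(b):
        m = min(a[i][1], b[j][1])           # move as much mass as possible
        plan.append((a[i][0], b[j][0], m))
        cost += m * (a[i][0] - b[j][0]) ** 2
        a[i][1] -= m
        b[j][1] -= m
        if a[i][1] == 0: i += 1
        if b[j][1] == 0: j += 1
    return plan, cost

plan, cost = transport_1d([(0.0, 0.5), (1.0, 0.5)],
                          [(2.0, 0.25), (3.0, 0.75)])
print(plan, cost)   # cost = 0.25*4 + 0.25*9 + 0.5*4 = 5.25
```

In the paper's setting, the measures live on a phase-space partition (a graph) rather than a line, which is what makes an optimization solver necessary.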

  17. Quantum cellular automata and free quantum field theory

    NASA Astrophysics Data System (ADS)

    D'Ariano, Giacomo Mauro; Perinotti, Paolo

    2017-02-01

    In a series of recent papers [1-4] it has been shown how free quantum field theory can be derived without using mechanical primitives (including space-time, special relativity, quantization rules, etc.), but only considering the easiest quantum algorithm encompassing a countable set of quantum systems whose network of interactions satisfies the simple principles of unitarity, homogeneity, locality, and isotropy. This has opened the route to extending the axiomatic information-theoretic derivation of the quantum theory of abstract systems [5, 6] to include quantum field theory. The inherent discrete nature of the informational axiomatization leads to an extension of quantum field theory to a quantum cellular automata theory, where the usual field theory is recovered in a regime where the discrete structure of the automata cannot be probed. A simple heuristic argument sets the scale of discreteness to the Planck scale, and the customary physical regime where discreteness is not visible is the relativistic one of small wavevectors. In this paper we provide a thorough derivation from principles showing that in the most general case the graph of the quantum cellular automaton is the Cayley graph of a finitely presented group, and we show how, in the case corresponding to a Euclidean emergent space (where the group reduces to an Abelian one), the automaton leads to the Weyl, Dirac and Maxwell field dynamics in the relativistic limit. We conclude with some perspectives towards the more general scenario of non-linear automata for interacting quantum field theory.

  18. Associative memory in an analog iterated-map neural network

    NASA Astrophysics Data System (ADS)

    Marcus, C. M.; Waugh, F. R.; Westervelt, R. M.

    1990-03-01

    The behavior of an analog neural network with parallel dynamics is studied analytically and numerically for two associative-memory learning algorithms, the Hebb rule and the pseudoinverse rule. Phase diagrams in the parameter space of analog gain β and storage ratio α are presented. For both learning rules, the networks have large "recall" phases in which retrieval states exist and convergence to a fixed point is guaranteed by a global stability criterion. We also demonstrate numerically that using a reduced analog gain increases the probability of recall starting from a random initial state. This phenomenon is comparable to thermal annealing used to escape local minima but has the advantage of being deterministic, and therefore easily implemented in electronic hardware. Similarities and differences between analog neural networks and networks with two-state neurons at finite temperature are also discussed.
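
    A minimal sketch of the network class studied: parallel dynamics x(t+1) = tanh(βWx(t)) with a Hebb-rule weight matrix. The gain, network size and noise level below are illustrative assumptions, not the paper's parameter values:

```python
import math, random

# Analog iterated-map associative memory: parallel dynamics
# x(t+1) = tanh(beta * W x(t)), Hebb-rule weights with zero diagonal.
N, beta = 100, 4.0
random.seed(2)
patterns = [[random.choice((-1, 1)) for _ in range(N)] for _ in range(3)]
W = [[0.0 if i == j else sum(p[i] * p[j] for p in patterns) / N
      for j in range(N)] for i in range(N)]

def step(x):
    return [math.tanh(beta * sum(W[i][j] * x[j] for j in range(N)))
            for i in range(N)]

# start from stored pattern 0 with ~10% of entries flipped
x = [float(s) if random.random() > 0.1 else float(-s) for s in patterns[0]]
for _ in range(20):
    x = step(x)
overlap = sum(xi * si for xi, si in zip(x, patterns[0])) / N
print(overlap)   # near 1 when the pattern is recalled
```

With storage ratio α = 3/100, the network sits deep in the recall phase, so the noisy start is expected to relax to a fixed point close to the stored pattern.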

  19. The exit-time problem for a Markov jump process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burch, N.; D'Elia, Marta; Lehoucq, Richard B.

    2014-12-15

    The purpose of our paper is to consider the exit-time problem for a finite-range Markov jump process, i.e., the distance the particle can jump is bounded independent of its location. Such jump diffusions are expedient models for anomalous transport exhibiting super-diffusion or nonstandard normal diffusion. We refer to the associated deterministic equation as a volume-constrained nonlocal diffusion equation. The volume constraint is the nonlocal analogue of a boundary condition necessary to demonstrate that the nonlocal diffusion equation is well-posed and is consistent with the jump process. A critical aspect of the analysis is a variational formulation and a recently developed nonlocal vector calculus. Furthermore, this calculus allows us to pose nonlocal backward and forward Kolmogorov equations, the former equation granting the various moments of the exit-time distribution.
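
    The exit-time distribution discussed above can also be sampled directly; a minimal Monte Carlo sketch for a finite-range jump process on (-1, 1), assuming uniformly distributed jumps (any bounded jump kernel would do):

```python
import random

# Monte Carlo estimate of the mean exit time from (-1, 1) for a
# finite-range Markov jump process: each jump is uniform on (-h, h),
# so the jump length is bounded independent of the particle's location.
def mean_exit_time(h=0.1, dt=1.0, n_walkers=2000, seed=7):
    random.seed(seed)
    total = 0
    for _ in range(n_walkers):
        x, steps = 0.0, 0
        while abs(x) < 1.0:           # exit on reaching the volume constraint
            x += random.uniform(-h, h)
            steps += 1
        total += steps
    return dt * total / n_walkers

print(mean_exit_time())
```

For comparison, the diffusion limit predicts roughly 1/Var(jump) = 3/h² = 300 steps from the centre, which the estimate should approach for small h.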

  20. Solution of the finite Milne problem in stochastic media with RVT Technique

    NASA Astrophysics Data System (ADS)

    Slama, Howida; El-Bedwhey, Nabila A.; El-Depsy, Alia; Selim, Mustafa M.

    2017-12-01

    This paper presents the solution to the Milne problem in the steady state with an isotropic scattering phase function. The properties of the medium are considered stochastic, with Gaussian or exponential distributions, and hence the problem is treated as a stochastic integro-differential equation. To get explicit forms for the radiant energy density, the linear extrapolation distance, the reflectivity and the transmissivity in the deterministic case, the problem is solved using the Pomraning-Eddington method. The obtained solution is found to depend on the optical space variable and the thickness of the medium, which are considered random variables. The random variable transformation (RVT) technique is used to find the first probability density function (1-PDF) of the solution process. Then the stochastic linear extrapolation distance, reflectivity and transmissivity are calculated. For illustration, numerical results with conclusions are provided.

  1. Modeling of edge effect in subaperture tool influence functions of computer controlled optical surfacing.

    PubMed

    Wan, Songlin; Zhang, Xiangchao; He, Xiaoying; Xu, Min

    2016-12-20

    Computer controlled optical surfacing requires an accurate tool influence function (TIF) for reliable path planning and deterministic fabrication. Near the edge of the workpieces, the TIF has a nonlinear removal behavior, which will cause a severe edge-roll phenomenon. In the present paper, a new edge pressure model is developed based on the finite element analysis results. The model is represented as the product of a basic pressure function and a correcting function. The basic pressure distribution is calculated according to the surface shape of the polishing pad, and the correcting function is used to compensate the errors caused by the edge effect. Practical experimental results demonstrate that the new model can accurately predict the edge TIFs with different overhang ratios. The relative error of the new edge model can be reduced to 15%.

  2. Dynamic Stability of Uncertain Laminated Beams Under Subtangential Loads

    NASA Technical Reports Server (NTRS)

    Goyal, Vijay K.; Kapania, Rakesh K.; Adelman, Howard (Technical Monitor); Horta, Lucas (Technical Monitor)

    2002-01-01

    Because of the inherent complexity of fiber-reinforced laminated composites, it can be challenging to manufacture composite structures according to their exact design specifications, resulting in unwanted material and geometric uncertainties. In this research, we focus on the deterministic and probabilistic stability analysis of laminated structures subject to subtangential loading, a combination of conservative and nonconservative tangential loads, using the dynamic criterion. Thus a shear-deformable laminated beam element, including warping effects, is derived to study the deterministic and probabilistic response of laminated beams. This twenty-one-degree-of-freedom element can be used for solving both static and dynamic problems. In the first-order shear deformable model used here we have employed a more accurate method to obtain the transverse shear correction factor. The dynamic version of the principle of virtual work for laminated composites is expressed in its nondimensional form, and the element tangent stiffness and mass matrices are obtained using analytical integration. The stability is studied by giving the structure a small disturbance about an equilibrium configuration and observing whether the resulting response remains small. In order to study the dynamic behavior with uncertainties included in the problem, three models were developed: Exact Monte Carlo Simulation, Sensitivity Based Monte Carlo Simulation, and Probabilistic FEA. These methods were integrated into the developed finite element analysis. Also, perturbation and sensitivity analysis have been used to study nonconservative problems, as well as the stability analysis, using the dynamic criterion.

  3. δ-exceedance records and random adaptive walks

    NASA Astrophysics Data System (ADS)

    Park, Su-Chan; Krug, Joachim

    2016-08-01

    We study a modified record process where the kth record in a series of independent and identically distributed random variables is defined recursively through the condition Y_k > Y_{k-1} - δ_{k-1}, with a deterministic sequence δ_k > 0 called the handicap. For constant δ_k ≡ δ and exponentially distributed random variables it has been shown in previous work that the process displays a phase transition as a function of δ between a normal phase where the mean record value increases indefinitely and a stationary phase where the mean record value remains bounded and a finite fraction of all entries are records (Park et al 2015 Phys. Rev. E 91 042707). Here we explore the behavior for general probability distributions and for decreasing and increasing sequences δ_k, focusing in particular on the case when δ_k matches the typical spacing between subsequent records in the underlying simple record process without handicap. We find that a continuous phase transition occurs only in the exponential case, but a novel kind of first-order transition emerges when δ_k is increasing. The problem is partly motivated by the dynamics of evolutionary adaptation in biological fitness landscapes, where δ_k corresponds to the change of the deterministic fitness component after k mutational steps. The results for the record process are used to compute the mean number of steps that a population performs in such a landscape before being trapped at a local fitness maximum.

  4. Brownian dynamics without Green's functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Delong, Steven; Donev, Aleksandar, E-mail: donev@courant.nyu.edu; Usabiaga, Florencio Balboa

    2014-04-07

    We develop a Fluctuating Immersed Boundary (FIB) method for performing Brownian dynamics simulations of confined particle suspensions. Unlike traditional methods which employ analytical Green's functions for Stokes flow in the confined geometry, the FIB method uses a fluctuating finite-volume Stokes solver to generate the action of the response functions "on the fly." Importantly, we demonstrate that both the deterministic terms necessary to capture the hydrodynamic interactions among the suspended particles, as well as the stochastic terms necessary to generate the hydrodynamically correlated Brownian motion, can be generated by solving the steady Stokes equations numerically only once per time step. This is accomplished by including a stochastic contribution to the stress tensor in the fluid equations consistent with fluctuating hydrodynamics. We develop novel temporal integrators that account for the multiplicative nature of the noise in the equations of Brownian dynamics and the strong dependence of the mobility on the configuration for confined systems. Notably, we propose a random finite difference approach to approximating the stochastic drift proportional to the divergence of the configuration-dependent mobility matrix. Through comparisons with analytical and existing computational results, we numerically demonstrate the ability of the FIB method to accurately capture both the static (equilibrium) and dynamic properties of interacting particles in flow.
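
    The random finite difference idea for the stochastic drift can be illustrated on a scalar toy mobility, where the divergence reduces to an ordinary derivative; a minimal sketch with an assumed M(q) = 1 + q², not the paper's confined-suspension mobility:

```python
import random

# Random finite difference (RFD) estimate of the divergence of a
# configuration-dependent mobility: E[(M(q + eps*W) - M(q)) W / eps]
# with W ~ N(0, 1) converges to M'(q) as eps -> 0. Toy M(q) = 1 + q^2.
def M(q):
    return 1.0 + q * q

def rfd_div(q, eps=1e-4, n=200000, seed=5):
    random.seed(seed)
    acc = 0.0
    for _ in range(n):
        w = random.gauss(0.0, 1.0)
        acc += (M(q + eps * w) - M(q)) / eps * w
    return acc / n

q = 0.7
print(rfd_div(q), 2 * q)   # RFD estimate vs the exact divergence 2q = 1.4
```

The point of the construction is that only evaluations of the mobility are needed, never its derivatives, which is what makes it attractive when M is defined implicitly by a Stokes solver.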

  5. Stochastic transfer of polarized radiation in finite cloudy atmospheric media with reflective boundaries

    NASA Astrophysics Data System (ADS)

    Sallah, M.

    2014-03-01

    The problem of monoenergetic radiative transfer in a finite planar stochastic atmospheric medium with polarized (vector) Rayleigh scattering is proposed. The solution is presented for arbitrary absorption and scattering cross sections. The extinction function of the medium is assumed to be a continuous random function of position, with fluctuations about the mean taken as Gaussian distributed. The joint probability distribution function of these Gaussian random variables is used to calculate the ensemble-averaged quantities, such as reflectivity and transmissivity, for an arbitrary correlation function. A modified Gaussian probability distribution function is also used to average the solution in order to exclude the probable negative values of the optical variable. The Pomraning-Eddington approximation is used first to obtain the deterministic analytical solution for both the total intensity and the difference function used to describe the polarized radiation. The problem is treated with specularly reflecting boundaries and an angular-dependent external flux incident upon the medium from one side, with no flux from the other side. For the sake of comparison, two different forms of the weight function, which is introduced to force the boundary conditions to be fulfilled, are used. Numerical results for the average reflectivity and average transmissivity are obtained for both Gaussian and modified Gaussian probability density functions at different degrees of polarization.

  6. Multi-dimensional Fokker-Planck equation analysis using the modified finite element method

    NASA Astrophysics Data System (ADS)

    Náprstek, J.; Král, R.

    2016-09-01

    The Fokker-Planck equation (FPE) is a frequently used tool for the solution of the cross probability density function (PDF) of a dynamic system response excited by a vector of random processes. FEM represents a very effective solution possibility, particularly when transition processes are investigated or a more detailed solution is needed. Existing papers deal with single-degree-of-freedom (SDOF) systems only, so the respective FPE includes only two independent space variables. Stepping beyond this limit to MDOF systems, a number of specific problems related to true multi-dimensionality must be overcome. Unlike earlier studies, multi-dimensional simplex elements in any arbitrary dimension should be deployed and rectangular (multi-brick) elements abandoned. Simple closed formulae for integration over a multi-dimensional domain have been derived. Another specific problem is the generation of a multi-dimensional finite element mesh. Assembly of the system global matrices must follow newly composed algorithms owing to the multi-dimensionality. The system matrices are quite full, and no advantage can be taken of sparsity, as is common in conventional FEM applications to 2D/3D problems. After verification of the partial algorithms, an illustrative example dealing with a 2DOF non-linear aeroelastic system in combination with random and deterministic excitations is discussed.

  7. An Investigation of Energy Transmission Due to Flexural Wave Propagation in Lightweight, Built-Up Structures. Thesis

    NASA Technical Reports Server (NTRS)

    Mickol, John Douglas; Bernhard, R. J.

    1986-01-01

    A technique to measure flexural structure-borne noise intensity is investigated. Two accelerometers serve as transducers in this cross-spectral technique. The structure-borne sound power is obtained by two different techniques and compared. In the first method, a contour integral of intensity is performed from the values provided by the two-accelerometer intensity technique. In the second method, input power is calculated directly from the output of force and acceleration transducers. A plate and two beams were the subjects of the sound power comparisons. Excitation for the structures was either band-limited white noise or a deterministic signal similar to a swept sine. The two-accelerometer method was found to be sharply limited by near-field effects and transducer spacing constraints. In addition, for the lightweight structures investigated, it was found that the probe inertia can have a significant influence on the power input to the structure. In addition to the experimental investigation of structure-borne sound energy, an extensive study of the harmonically point-forced, point-damped beam boundary value problem was performed to gain insight into measurements of this nature. The intensity formulations were also incorporated into the finite element method. Intensity mappings were obtained analytically via finite element modeling of simple structures.

  8. Fast cooling for a system of stochastic oscillators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yongxin, E-mail: chen2468@umn.edu; Georgiou, Tryphon T., E-mail: tryphon@umn.edu; Pavon, Michele, E-mail: pavon@math.unipd.it

    2015-11-15

    We study feedback control of coupled nonlinear stochastic oscillators in a force field. We first consider the problem of asymptotically driving the system to a desired steady state corresponding to reduced thermal noise. Among the feedback controls achieving the desired asymptotic transfer, we find that the most efficient one from an energy point of view is characterized by time-reversibility. We also extend the theory of Schrödinger bridges to this model, thereby steering the system in finite time and with minimum effort to a target steady-state distribution. The system can then be maintained in this state through the optimal steady-state feedback control. The solution, in the finite-horizon case, involves a space-time harmonic function φ, and −logφ plays the role of an artificial, time-varying potential in which the desired evolution occurs. This framework appears extremely general and flexible and can be viewed as a considerable generalization of existing active control strategies such as macromolecular cooling. In the case of a quadratic potential, the results assume a form particularly attractive from the algorithmic viewpoint as the optimal control can be computed via deterministic matricial differential equations. An example involving inertial particles illustrates both transient and steady state optimal feedback control.

  9. Probabilistic homogenization of random composite with ellipsoidal particle reinforcement by the iterative stochastic finite element method

    NASA Astrophysics Data System (ADS)

    Sokołowski, Damian; Kamiński, Marcin

    2018-01-01

    This study proposes a framework for the determination of the basic probabilistic characteristics of the orthotropic homogenized elastic properties of a periodic composite reinforced with ellipsoidal particles and having a high stiffness contrast between the reinforcement and the matrix. The homogenization problem, solved by the Iterative Stochastic Finite Element Method (ISFEM), is implemented according to the stochastic perturbation, Monte Carlo simulation and semi-analytical techniques with the use of a cubic Representative Volume Element (RVE) of this composite containing a single particle. The input Gaussian random variable is the Young's modulus of the matrix, while the 3D homogenization scheme is based on numerical determination of the strain energy of the RVE under uniform unit stretches carried out in the FEM system ABAQUS. An entire series of deterministic solutions with varying matrix Young's modulus serves for the Weighted Least Squares Method (WLSM) recovery of polynomial response functions, finally used in the stochastic Taylor expansions inherent to the ISFEM. A numerical example consists of High Density Polyurethane (HDPU) reinforced with a Carbon Black particle. It is numerically investigated (1) whether the resulting homogenized characteristics are also Gaussian and (2) how the uncertainty in the matrix Young's modulus affects the effective stiffness tensor components and their PDF (Probability Density Function).
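
    The WLSM-plus-perturbation pipeline described in the abstract can be sketched numerically. The snippet below is a minimal illustration, not the authors' ISFEM: the deterministic FEM solves are replaced by given (E, C_eff) sample pairs, and the helper names `response_surface` and `perturbation_moments` are hypothetical.

```python
import numpy as np

def response_surface(e_samples, c_samples, degree=3, weights=None):
    """Weighted least-squares fit of a polynomial response function
    C_eff(E) to a series of deterministic solutions (the WLSM step);
    returns polynomial coefficients, highest degree first."""
    return np.polyfit(e_samples, c_samples, degree, w=weights)

def perturbation_moments(coef, e_mean, e_std):
    """Second-order stochastic perturbation (Taylor) estimates of the
    mean and first-order variance of C_eff for Gaussian E."""
    d1 = np.polyval(np.polyder(coef, 1), e_mean)
    d2 = np.polyval(np.polyder(coef, 2), e_mean)
    mean = np.polyval(coef, e_mean) + 0.5 * d2 * e_std ** 2
    var = (d1 * e_std) ** 2
    return mean, var
```

    With an exactly polynomial response the fit is recovered exactly, so the perturbation moments can be checked against a hand calculation.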

  10. Random variable transformation for generalized stochastic radiative transfer in finite participating slab media

    NASA Astrophysics Data System (ADS)

    El-Wakil, S. A.; Sallah, M.; El-Hanbaly, A. M.

    2015-10-01

    The stochastic radiative transfer problem is studied in a participating planar finite continuously fluctuating medium. The problem is considered for specularly and diffusely reflecting boundaries with linear anisotropic scattering. The random variable transformation (RVT) technique is used to get the complete average of the solution functions, which are represented by the probability density function (PDF) of the solution process. In the RVT algorithm, a simple integral transformation is applied to the input stochastic process (the extinction function of the medium). This linear transformation enables us to rewrite the stochastic transport equations in terms of the optical random variable (x) and the optical random thickness (L). The transport equation is then solved deterministically to get a closed form for the solution as a function of x and L. This solution is used to obtain the PDF of the solution functions by applying the RVT technique between the input random variable (L) and the output process (the solution functions). The obtained averages of the solution functions are used to get complete analytical averages for some interesting physical quantities, namely, the reflectivity and transmissivity at the medium boundaries. In terms of the average reflectivity and transmissivity, the averages of the partial heat fluxes for the generalized problem with an internal source of radiation are obtained and represented graphically.

  11. Autonomous and Connected Vehicles: A Law Enforcement Primer

    DTIC Science & Technology

    2015-12-01

    CYBERSECURITY FOR AUTOMOBILES Intelligent Transportation Systems (ITS) that are emerging around the globe achieve that classification based on the convergence...Car Works," October 18, 2011, IEEE Spectrum, http://spectrum.ieee.org/automaton/robotics/artificial-intelligence/how-google-self-driving-car-works...whereby artificial intelligence acts on behalf of a human, but carries the same life or death consequences.435 States should encourage and engage in

  12. On the effect of memory in a quantum prisoner's dilemma cellular automaton

    NASA Astrophysics Data System (ADS)

    Alonso-Sanz, Ramón; Revuelta, Fabio

    2018-03-01

    The disrupting effect of quantum memory on the dynamics of a spatial quantum formulation of the iterated prisoner's dilemma game with variable entangling is studied. The game is played within a cellular automata framework, i.e., with local and synchronous interactions. The main findings of this work refer to the shrinking effect of memory on the disruption induced by noise.

  13. The Bayesian Learning Automaton — Empirical Evaluation with Two-Armed Bernoulli Bandit Problems

    NASA Astrophysics Data System (ADS)

    Granmo, Ole-Christoffer

    The two-armed Bernoulli bandit (TABB) problem is a classical optimization problem where an agent sequentially pulls one of two arms attached to a gambling machine, with each pull resulting either in a reward or a penalty. The reward probabilities of each arm are unknown, and thus one must balance between exploiting existing knowledge about the arms, and obtaining new information.
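
    The exploration-exploitation balance described above is what the Bayesian Learning Automaton resolves via conjugate Beta posteriors over the arm probabilities. A closely related, minimal Thompson-sampling sketch for the TABB is given below; the arm probabilities, horizon, and function name are illustrative, and the BLA's exact arm-selection rule differs in detail.

```python
import random

def thompson_tabb(true_p, horizon=10000, seed=0):
    """Thompson sampling for the two-armed Bernoulli bandit (TABB).

    Each arm keeps a Beta(wins+1, losses+1) posterior over its unknown
    reward probability; at every step one value is sampled from each
    posterior and the arm with the larger sample is pulled, which
    balances exploration and exploitation automatically."""
    rng = random.Random(seed)
    wins, losses = [0, 0], [0, 0]
    total_reward = 0
    for _ in range(horizon):
        # Draw one sample from each arm's Beta posterior.
        samples = [rng.betavariate(wins[a] + 1, losses[a] + 1) for a in (0, 1)]
        arm = 0 if samples[0] >= samples[1] else 1
        reward = 1 if rng.random() < true_p[arm] else 0
        wins[arm] += reward
        losses[arm] += 1 - reward
        total_reward += reward
    return total_reward, wins, losses
```

    As the posteriors sharpen, pulls concentrate on the better arm, so the accumulated reward approaches what the best fixed arm would give.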

  14. On Patterns in Affective Media

    NASA Astrophysics Data System (ADS)

    ADAMATZKY, ANDREW

    In computational experiments with cellular automaton models of affective solutions, where chemical species represent happiness, anger, fear, confusion and sadness, we study the spatio-temporal dynamics of emotions. We demonstrate the feasibility of the affective solution paradigm using the example of emotional abuse therapy. The results outlined in the present paper offer an unconventional but promising technique to design, analyze and interpret the spatio-temporal dynamics of mass moods in crowds.

  15. Simulating flaring events in complex active regions driven by observed magnetograms

    NASA Astrophysics Data System (ADS)

    Dimitropoulou, M.; Isliker, H.; Vlahos, L.; Georgoulis, M. K.

    2011-05-01

    Context. We interpret solar flares as events originating in active regions that have reached the self-organized critical state, using a refined cellular automaton model with initial conditions derived from observations. Aims: We investigate whether the system, with its imposed physical elements, reaches a self-organized critical state and whether well-known statistical properties of flares, such as scaling laws observed in the distribution functions of characteristic parameters, are reproduced after this state has been reached. Methods: To investigate whether the distribution functions of total energy, peak energy and event duration follow the expected scaling laws, we first apply a nonlinear force-free extrapolation that reconstructs the three-dimensional magnetic fields from two-dimensional vector magnetograms. We then locate magnetic discontinuities exceeding a threshold in the Laplacian of the magnetic field. These discontinuities are relaxed in local diffusion events, implemented in the form of cellular automaton evolution rules. Subsequent loading and relaxation steps lead the system to self-organized criticality, after which the statistical properties of the simulated events are examined. Physical requirements, such as the divergence-free condition for the magnetic field vector, are approximately imposed on all elements of the model. Results: Our results show that self-organized criticality is indeed reached when applying specific loading and relaxation rules. Power-law indices obtained from the distribution functions of the modeled flaring events are in good agreement with observations. Single power laws (peak and total flare energy) are obtained, as are power laws with exponential cutoff and double power laws (flare duration). The results are also compared with observational X-ray data from the GOES satellite for our active-region sample.
Conclusions: We conclude that well-known statistical properties of flares are reproduced after the system has reached self-organized criticality. A significant enhancement of our refined cellular automaton model is that it commences the simulation from observed vector magnetograms, thus facilitating energy calculation in physical units. The model described in this study remains consistent with fundamental physical requirements, and imposes physically meaningful driving and redistribution rules.

  16. Origins of Life: Open Questions and Debates

    NASA Astrophysics Data System (ADS)

    Brack, André

    2017-10-01

    Stanley Miller demonstrated in 1953 that it was possible to form amino acids from methane, ammonia, and hydrogen in water, thus launching the ambitious hope that chemists would be able to shed light on the origins of life by recreating a simple life form in a test tube. However, it must be acknowledged that the dream has not yet been accomplished, despite the great volume of effort and innovation put forward by the scientific community. At a minimum, primitive life can be defined as an open chemical system, fed with matter and energy, capable of self-reproduction (i.e., making more of itself by itself), and also capable of evolving. The concept of evolution implies that chemical systems would transfer their information fairly faithfully but make some random errors. If we compared the components of primitive life to the parts of a chemical automaton, we could conceive that, by chance, some parts self-assembled to generate an automaton capable of assembling other parts to produce a true copy. Sometimes, minor errors in the building generated a more efficient automaton, which then became the dominant species. Quite different scenarios and routes have been followed and tested in the laboratory to explain the origin of life. There are two schools of thought regarding the prebiotic supply of organics. The proponents of a metabolism-first scenario call for the spontaneous formation of simple molecules from carbon dioxide and water to rapidly generate life. In a second hypothesis, the primeval-soup scenario, it is proposed that rather complex organic molecules accumulated in a warm little pond prior to the emergence of life. The proponents of the primeval soup, or replication-first, approach are by far the more active. They have succeeded in reconstructing small-scale versions of proteins, membranes, and RNA. 
Quite different scenarios have been proposed for the inception of life: the RNA world, an origin within droplets, self-organization counteracting entropy, or a stochastic approach merging chemistry and geology. Understanding the emergence of a critical feature of life, its one-handedness, is a shared preoccupation in all these approaches.

  17. Structural Deterministic Safety Factors Selection Criteria and Verification

    NASA Technical Reports Server (NTRS)

    Verderaime, V.

    1992-01-01

    Though current deterministic safety factors are arbitrarily and unaccountably specified, their ratio is rooted in the resistive and applied stress probability distributions. This study approached the deterministic method from a probabilistic concept, leading to a more systematic and coherent philosophy and criterion for designing more uniform and reliable high-performance structures. The deterministic method was noted to consist of three safety factors: a standard deviation multiplier for the applied stress distribution; a K-factor for the A- or B-basis material ultimate stress; and the conventional safety factor to ensure that the applied stress does not operate in the inelastic zone of metallic materials. The conventional safety factor is specifically defined as the ratio of ultimate-to-yield stresses. A deterministic safety index of the combined safety factors was derived, from which the corresponding reliability proved that the deterministic method is not reliability sensitive. The bases for selecting safety factors are presented and verification requirements are discussed. The suggested deterministic approach is applicable to all NASA, DOD, and commercial high-performance structures under static stresses.

  18. Periodically-Scheduled Controller Analysis using Hybrid Systems Reachability and Continuization

    DTIC Science & Technology

    2015-12-01

    tools to verify specifications for hybrid automata do not perform well on such periodically scheduled models. This is due to a combination of the large...an additive nondeterministic input. Reachability tools for hybrid automata can better handle such systems. We further improve the analysis by...formally as a hybrid automaton. However, reachability tools to verify specifications for hybrid automata do not perform well on such periodically

  19. Cellular automata in photonic cavity arrays.

    PubMed

    Li, Jing; Liew, T C H

    2016-10-31

    We propose theoretically a photonic Turing machine based on cellular automata in arrays of nonlinear cavities coupled with artificial gauge fields. The state of the system is recorded making use of the bistability of driven cavities, in which losses are fully compensated by an external continuous drive. The sequential update of the automaton layers is achieved automatically, by the local switching of bistable states, without requiring any additional synchronization or temporal control.

  20. Game of Life on the Equal Degree Random Lattice

    NASA Astrophysics Data System (ADS)

    Shao, Zhi-Gang; Chen, Tao

    2010-12-01

    An effective matrix method is used to build the equal degree random (EDR) lattice, and a cellular automaton game of life on the EDR lattice is then studied by Monte Carlo (MC) simulation. The standard mean field approximation (MFA) is applied, and the density of live cells is found to be ρ=0.37017 by the MFA, which is consistent with the result ρ=0.37±0.003 from the MC simulation.
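
    For reference, the underlying Life update rule, written out here for an ordinary toroidal square lattice (fixed degree 8, mirroring the equal-degree property of the EDR lattice), can be sketched as below. The EDR lattice construction by the effective matrix method, which randomly rewires the neighbourhoods, is not reproduced, and the helper names are illustrative.

```python
def life_step(grid):
    """One synchronous Game of Life update on a toroidal square lattice.
    A dead cell with exactly 3 live neighbours is born; a live cell with
    2 or 3 live neighbours survives; every other cell dies or stays dead."""
    n, m = len(grid), len(grid[0])
    new = [[0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            live = sum(grid[(i + di) % n][(j + dj) % m]
                       for di in (-1, 0, 1) for dj in (-1, 0, 1)
                       if (di, dj) != (0, 0))
            new[i][j] = 1 if live == 3 or (grid[i][j] and live == 2) else 0
    return new

def density(grid):
    """Fraction of live cells, the quantity averaged in the MC study."""
    return sum(map(sum, grid)) / (len(grid) * len(grid[0]))
```

    Starting from a random soup and averaging `density` over many steps gives the MC estimate analogous to the one quoted in the abstract (there on the EDR lattice rather than this regular one).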

  1. A feasibility study of stateful automaton packet inspection for streaming application detection systems

    NASA Astrophysics Data System (ADS)

    Tseng, Kuo-Kun; Lo, Jiao; Liu, Yiming; Chang, Shih-Hao; Merabti, Madjid; Ng, Felix, C. K.; Wu, C. H.

    2017-10-01

    The rapid development of the internet has brought huge benefits and social impacts; however, internet security has also become a serious problem for users, and traditional approaches to packet classification cannot achieve satisfactory detection performance due to their low accuracy and efficiency. In this paper, a new stateful packet inspection method is introduced, which can be embedded in a network gateway and used by a streaming application detection system. This new detection method leverages an inexact automaton approach, using part of the header field and part of the application-layer data of a packet. Based on this approach, an advanced detection system is proposed for streaming applications. The workflow of the system involves two stages: the training stage and the detection stage. In the training stage, the system captures characteristic patterns from a set of application packet flows. After training is completed, the detection stage allows the user to detect the target application by capturing new application flows. The new detection approach is evaluated experimentally; the results of this analysis show that it not only simplifies the management of the state detection system, but also improves the accuracy of data flow detection, making it feasible for real-world network applications.

  2. Multiscale Simulation of Porous Ceramics Based on Movable Cellular Automaton Method

    NASA Astrophysics Data System (ADS)

    Smolin, A.; Smolin, I.; Eremina, G.; Smolina, I.

    2017-10-01

    The paper presents a model for simulating the mechanical behaviour of multiscale porous ceramics based on the movable cellular automaton method, a novel particle method in the computational mechanics of solids. The initial scale of the proposed approach corresponds to the characteristic size of the smallest pores in the ceramics. At this scale, we model uniaxial compression of several representative samples with an explicit account of pores of the same size but with random unique positions in space. As a result, we get the average values of Young's modulus and strength, as well as the parameters of the Weibull distribution of these properties at the current scale level. These data allow us to describe the material behaviour at the next scale level, where only the larger pores are considered explicitly, while the influence of small pores is included via the effective properties determined at the previous scale level. If the pore size distribution function of the material has N maxima, we need to perform computations for N - 1 levels in order to get the properties from the lowest scale up to the macroscale step by step. The proposed approach was applied to modelling zirconia ceramics with a bimodal pore size distribution. The obtained results show correct behaviour of the model sample at the macroscale.

  3. Simple cellular automaton model for traffic breakdown, highway capacity, and synchronized flow.

    PubMed

    Kerner, Boris S; Klenov, Sergey L; Schreckenberg, Michael

    2011-10-01

    We present a simple cellular automaton (CA) model for two-lane roads explaining the physics of traffic breakdown, highway capacity, and synchronized flow. The model consists of the rules "acceleration," "deceleration," "randomization," and "motion" of the Nagel-Schreckenberg CA model as well as "overacceleration through lane changing to the faster lane," "comparison of vehicle gap with the synchronization gap," and "speed adaptation within the synchronization gap" of Kerner's three-phase traffic theory. We show that these few rules of the CA model can appropriately simulate fundamental empirical features of traffic breakdown and highway capacity found in traffic data measured over years in different countries, such as the characteristics of synchronized flow, the existence of spontaneous and induced breakdowns at the same bottleneck, and the associated probabilistic features of traffic breakdown and highway capacity. Single-vehicle data derived in model simulations show that synchronized flow first occurs and then self-maintains due to a spatiotemporal competition between speed adaptation to a slower speed of the preceding vehicle and passing of this slower vehicle. We find that the application of simple dependences of the randomization probability and synchronization gap on the driving situation allows us to explain the physics of moving synchronized flow patterns and the pinch effect in synchronized flow as observed in real traffic data.
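
    The four Nagel-Schreckenberg base rules that the model builds on can be sketched for a single lane as below. The paper's additional two-lane and three-phase rules (overacceleration, synchronization gap, speed adaptation) are not included, and all parameter values are illustrative.

```python
import random

def nasch_step(positions, speeds, road_len, v_max=5, p_slow=0.3, rng=None):
    """One synchronous update of the single-lane Nagel-Schreckenberg CA
    on a periodic road: acceleration, deceleration to the gap ahead,
    random slowdown, and motion."""
    rng = rng or random.Random(0)
    n = len(positions)
    order = sorted(range(n), key=lambda k: positions[k])
    new_speeds = speeds[:]
    for idx, k in enumerate(order):
        ahead = order[(idx + 1) % n]                      # next car downstream
        gap = (positions[ahead] - positions[k] - 1) % road_len
        v = min(speeds[k] + 1, v_max)                     # rule 1: acceleration
        v = min(v, gap)                                   # rule 2: deceleration
        if v > 0 and rng.random() < p_slow:               # rule 3: randomization
            v -= 1
        new_speeds[k] = v
    new_positions = [(positions[k] + new_speeds[k]) % road_len
                     for k in range(n)]                   # rule 4: motion
    return new_positions, new_speeds
```

    With `p_slow = 0` the update is deterministic, which makes single steps easy to check by hand; the stochastic slowdown is what produces spontaneous jams in the full model.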

  4. A Cellular Automata-based Model for Simulating Restitution Property in a Single Heart Cell.

    PubMed

    Sabzpoushan, Seyed Hojjat; Pourhasanzade, Fateme

    2011-01-01

    Ventricular fibrillation is the cause of most sudden cardiac deaths. Restitution is one of the specific properties of the ventricular cell. Recent findings have clearly demonstrated the correlation between the slope of the restitution curve and ventricular fibrillation; the modeling of cellular restitution therefore gains high importance. A cellular automaton is a powerful tool for simulating complex phenomena in a simple language: a lattice of cells where the behavior of each cell is determined by the behavior of its neighboring cells as well as the automaton rule. In this paper, a simple model is depicted for the simulation of the restitution property in a single cardiac cell using cellular automata. First, two state variables, action potential and recovery, are introduced in the automaton model. Second, the automaton rule is determined, and the recovery variable is then defined in such a way that restitution develops. In order to evaluate the proposed model, the restitution curve generated in our study is compared with restitution curves from the experimental findings of established sources. Our findings indicate that the presented model is not only capable of simulating restitution in a cardiac cell, but also of regulating the restitution curve.

  5. An autonomous molecular computer for logical control of gene expression.

    PubMed

    Benenson, Yaakov; Gil, Binyamin; Ben-Dor, Uri; Adar, Rivka; Shapiro, Ehud

    2004-05-27

    Early biomolecular computer research focused on laboratory-scale, human-operated computers for complex computational problems. Recently, simple molecular-scale autonomous programmable computers were demonstrated allowing both input and output information to be in molecular form. Such computers, using biological molecules as input data and biologically active molecules as outputs, could produce a system for 'logical' control of biological processes. Here we describe an autonomous biomolecular computer that, at least in vitro, logically analyses the levels of messenger RNA species, and in response produces a molecule capable of affecting levels of gene expression. The computer operates at a concentration of close to a trillion computers per microlitre and consists of three programmable modules: a computation module, that is, a stochastic molecular automaton; an input module, by which specific mRNA levels or point mutations regulate software molecule concentrations, and hence automaton transition probabilities; and an output module, capable of controlled release of a short single-stranded DNA molecule. This approach might be applied in vivo to biochemical sensing, genetic engineering and even medical diagnosis and treatment. As a proof of principle we programmed the computer to identify and analyse mRNA of disease-related genes associated with models of small-cell lung cancer and prostate cancer, and to produce a single-stranded DNA molecule modelled after an anticancer drug.

  6. Effect of psychological tension on pedestrian counter flow via an extended cost potential field cellular automaton model

    NASA Astrophysics Data System (ADS)

    Li, Xingli; Guo, Fang; Kuang, Hua; Zhou, Huaguo

    2017-12-01

    Psychology tells us that different levels of tension may lead to different behavioral variations in individuals. In this paper, an extended cost potential field cellular automaton is proposed to simulate pedestrian counter flow under an emergency by considering the behavioral variation of pedestrians induced by psychological tension. A quantitative formula is introduced to describe the behavioral changes caused by psychological tension, which also lead to an increasing discomfort cost. The numerical simulations are performed under periodic boundary conditions and show that the presented model can capture some essential features of pedestrian counter flow, such as lane formation and the segregation phenomenon under normal conditions. Furthermore, an interesting feature is found: when pedestrians are in an extremely nervous state, stable lane formation is broken by a disordered mixture flow. Psychological nervousness under an emergency is not always negative for moving efficiency, and a moderate level of tension will delay the occurrence of the jamming phase. In addition, a larger asymmetrical ratio of left walkers to right walkers will raise the critical density related to the jamming phase and retard the occurrence of the completely jammed phase. These findings will be helpful in pedestrian control and management under an emergency.

  7. Stochastic switching in biology: from genotype to phenotype

    NASA Astrophysics Data System (ADS)

    Bressloff, Paul C.

    2017-03-01

    There has been a resurgence of interest in non-equilibrium stochastic processes in recent years, driven in part by the observation that the number of molecules (genes, mRNA, proteins) involved in gene expression is often of order 1-1000. This means that deterministic mass-action kinetics tends to break down, and one needs to take into account the discrete, stochastic nature of biochemical reactions. One of the major consequences of molecular noise is the occurrence of stochastic biological switching at both the genotypic and phenotypic levels. For example, individual gene regulatory networks can switch between graded and binary responses, exhibit translational/transcriptional bursting, and support metastability (noise-induced switching between states that are stable in the deterministic limit). If random switching persists at the phenotypic level then this can confer certain advantages to cell populations growing in a changing environment, as exemplified by bacterial persistence in response to antibiotics. Gene expression at the single-cell level can also be regulated by changes in cell density at the population level, a process known as quorum sensing. In contrast to noise-driven phenotypic switching, the switching mechanism in quorum sensing is stimulus-driven and thus noise tends to have a detrimental effect. A common approach to modeling stochastic gene expression is to assume a large but finite system and to approximate the discrete processes by continuous processes using a system-size expansion. However, there is a growing need to have some familiarity with the theory of stochastic processes that goes beyond the standard topics of chemical master equations, the system-size expansion, Langevin equations and the Fokker-Planck equation. Examples include stochastic hybrid systems (piecewise deterministic Markov processes), large deviations and the Wentzel-Kramers-Brillouin (WKB) method, adiabatic reductions, and queuing/renewal theory. 
The major aim of this review is to provide a self-contained survey of these mathematical methods, mainly within the context of biological switching processes at both the genotypic and phenotypic levels. However, applications to other examples of biological switching are also discussed, including stochastic ion channels, diffusion in randomly switching environments, bacterial chemotaxis, and stochastic neural networks.

  8. Large Deviations and Transitions Between Equilibria for Stochastic Landau-Lifshitz-Gilbert Equation

    NASA Astrophysics Data System (ADS)

    Brzeźniak, Zdzisław; Goldys, Ben; Jegaraj, Terence

    2017-11-01

    We study a stochastic Landau-Lifshitz equation on a bounded interval with finite-dimensional noise. We first show that there exists a pathwise unique solution to this equation and that this solution enjoys the maximal regularity property. Next, we prove the large deviations principle for the small-noise asymptotics of solutions using the weak convergence method. An essential ingredient of the proof is the compactness, or weak-to-strong continuity, of the solution map for a deterministic Landau-Lifshitz equation when considered as a transformation of external fields. We then apply this large deviations principle to show that small noise can cause magnetisation reversal. We also show the importance of the shape anisotropy parameter for reducing the disturbance of the solution caused by small noise. The problem is motivated by applications ranging from ferromagnetic nanowires to the fabrication of magnetic memories.

  9. Exact evaluation of the causal spectrum and localization properties of electronic states on a scale-free network

    NASA Astrophysics Data System (ADS)

    Xie, Pinchen; Yang, Bingjia; Zhang, Zhongzhi; Andrade, Roberto F. S.

    2018-07-01

    A deterministic network with tree structure is considered, for which the spectrum of its adjacency matrix can be exactly evaluated by a recursive renormalization approach. It amounts to a successively increasing number of contributions at any finite step of the construction of the tree, resulting in a causal chain. The resulting eigenvalues can be related to the full energy spectrum of a nearest-neighbor tight-binding model defined on this structure. Given this association, further properties of the eigenvectors can be evaluated, such as the degree of quantum localization of the tight-binding eigenstates, expressed by the inverse participation ratio (IPR). For the current model, the IPRs can also be expressed analytically in terms of the corresponding eigenvalue chain. The resulting IPR scaling behavior is likewise expressed through the tails of the eigenvalue chains.
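
    The IPR used above has the standard form IPR(ψ) = Σ_i |ψ_i|^4 / (Σ_i |ψ_i|^2)^2. A minimal numerical check, not the paper's analytic recursive evaluation, is sketched below; the small open chain is only a stand-in for the scale-free tree.

```python
import numpy as np

def ipr(psi):
    """Inverse participation ratio of an eigenvector:
    sum |psi_i|^4 / (sum |psi_i|^2)^2. Tends to 1 for a state localized
    on a single site and to 1/N for a state spread uniformly over N sites."""
    p2 = np.abs(np.asarray(psi, dtype=complex)) ** 2
    return float(np.sum(p2 ** 2) / np.sum(p2) ** 2)

# Eigenstates of a nearest-neighbour tight-binding chain (open ends):
# H is the adjacency matrix, and eigh returns eigenvectors as columns.
n = 8
H = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
energies, states = np.linalg.eigh(H)
iprs = [ipr(states[:, k]) for k in range(n)]
```

    On a chain all eigenstates are extended (IPR of order 1/N), whereas on the tree of the paper the IPRs distinguish localized from extended states.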

  10. The effect of the size of the system, aspect ratio and impurities concentration on the dynamic of emergent magnetic monopoles in artificial spin ice systems

    NASA Astrophysics Data System (ADS)

    León, Alejandro

    2013-08-01

    In this work we study the dynamical properties of a finite array of nanomagnets in artificial kagome spin ice at room temperature. The dynamic response of the array of nanomagnets is studied by implementing a "frustrated cellular automaton" (FCA), based on the charge model and the dipolar model. The FCA simulations allow us to study the dynamics of the system in real time and in a deterministic way, with minimal computational resources. The update function is defined according to the coordination number of the vertices in the system. Our results show that for a set of geometric parameters of the array of nanomagnets, the system exhibits a high density of Dirac strings and a high density of emergent magnetic monopoles. A study of the effect of disorder in the arrangement of nanomagnets is incorporated in this work.

  11. A partially reflecting random walk on spheres algorithm for electrical impedance tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maire, Sylvain, E-mail: maire@univ-tln.fr; Simon, Martin, E-mail: simon@math.uni-mainz.de

    2015-12-15

In this work, we develop a probabilistic estimator for the voltage-to-current map arising in electrical impedance tomography. This novel so-called partially reflecting random walk on spheres estimator enables Monte Carlo methods to compute the voltage-to-current map in an embarrassingly parallel manner, which is an important issue with regard to the corresponding inverse problem. Our method uses the well-known random walk on spheres algorithm inside subdomains where the diffusion coefficient is constant and employs replacement techniques motivated by finite difference discretization to deal with both mixed boundary conditions and interface transmission conditions. We analyze the global bias and the variance of the new estimator both theoretically and experimentally. Subsequently, the variance of the new estimator is considerably reduced via a novel control variate conditional sampling technique which yields a highly efficient hybrid forward solver coupling probabilistic and deterministic algorithms.
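The basic walk-on-spheres step underlying such estimators is easy to sketch. Below is a minimal, hypothetical Python illustration for the Laplace equation with Dirichlet data on the unit disk; it is not the authors' partially reflecting estimator (which additionally handles mixed boundary and interface conditions), and the boundary function and start point are illustrative assumptions:

```python
import math
import random

def wos_disk(p, g, eps=1e-4, rng=random):
    """One walk-on-spheres sample for Laplace's equation in the unit disk.

    Repeatedly jump to a uniform point on the largest circle centered at the
    current position that fits inside the domain; stop within eps of the
    boundary and return the Dirichlet data g there.
    """
    x, y = p
    while True:
        d = 1.0 - math.hypot(x, y)          # distance to the boundary circle
        if d < eps:
            r = math.hypot(x, y)            # project onto the boundary
            return g(x / r, y / r)
        t = rng.uniform(0.0, 2.0 * math.pi)
        x += d * math.cos(t)
        y += d * math.sin(t)

random.seed(7)
g = lambda x, y: x        # boundary data; its harmonic extension is u(x, y) = x
n = 20000
est = sum(wos_disk((0.5, 0.0), g) for _ in range(n)) / n   # estimates u(0.5, 0) = 0.5
```

Each sample is independent, which is what makes the method embarrassingly parallel.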

  12. Rewriting Modulo SMT

    NASA Technical Reports Server (NTRS)

    Rocha, Camilo; Meseguer, Jose; Munoz, Cesar A.

    2013-01-01

Combining symbolic techniques such as (i) SMT solving, (ii) rewriting modulo theories, and (iii) model checking can enable the analysis of infinite-state systems outside the scope of each such technique. This paper proposes rewriting modulo SMT as a new technique combining the powers of (i)-(iii) and ideally suited to model and analyze infinite-state open systems; that is, systems that interact with a non-deterministic environment. Such systems exhibit both internal non-determinism due to the system, and external non-determinism due to the environment. They are not amenable to finite-state model checking analysis because they typically are infinite-state. By being reducible to standard rewriting using reflective techniques, rewriting modulo SMT can both naturally model and analyze open systems without requiring any changes to rewriting-based reachability analysis techniques for closed systems. This is illustrated by the analysis of a real-time system beyond the scope of timed automata methods.

  13. Dynamics in hybrid complex systems of switches and oscillators

    NASA Astrophysics Data System (ADS)

    Taylor, Dane; Fertig, Elana J.; Restrepo, Juan G.

    2013-09-01

    While considerable progress has been made in the analysis of large systems containing a single type of coupled dynamical component (e.g., coupled oscillators or coupled switches), systems containing diverse components (e.g., both oscillators and switches) have received much less attention. We analyze large, hybrid systems of interconnected Kuramoto oscillators and Hopfield switches with positive feedback. In this system, oscillator synchronization promotes switches to turn on. In turn, when switches turn on, they enhance the synchrony of the oscillators to which they are coupled. Depending on the choice of parameters, we find theoretically coexisting stable solutions with either (i) incoherent oscillators and all switches permanently off, (ii) synchronized oscillators and all switches permanently on, or (iii) synchronized oscillators and switches that periodically alternate between the on and off states. Numerical experiments confirm these predictions. We discuss how transitions between these steady state solutions can be onset deterministically through dynamic bifurcations or spontaneously due to finite-size fluctuations.
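The oscillator half of such a hybrid system is commonly modeled with the mean-field Kuramoto equations. Below is a minimal sketch of plain Kuramoto dynamics only, without the Hopfield switches studied in the paper; the coupling strength, frequency spread, and integration parameters are illustrative assumptions:

```python
import math
import random

def kuramoto_order(n=50, coupling=2.0, dt=0.05, steps=1000, seed=1):
    """Euler-integrate the mean-field Kuramoto model and return the final
    order parameter r (r ~ 0: incoherent, r ~ 1: synchronized)."""
    rng = random.Random(seed)
    omega = [rng.gauss(0.0, 0.1) for _ in range(n)]            # natural frequencies
    theta = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
    for _ in range(steps):
        rx = sum(map(math.cos, theta)) / n
        ry = sum(map(math.sin, theta)) / n
        r, psi = math.hypot(rx, ry), math.atan2(ry, rx)
        # d(theta_i)/dt = omega_i + K * r * sin(psi - theta_i)
        theta = [t + dt * (w + coupling * r * math.sin(psi - t))
                 for t, w in zip(theta, omega)]
    rx = sum(map(math.cos, theta)) / n
    ry = sum(map(math.sin, theta)) / n
    return math.hypot(rx, ry)
```

With coupling well above the synchronization threshold the order parameter approaches 1; with zero coupling the phases stay incoherent.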

  14. Effect of a preventive vaccine on the dynamics of HIV transmission

    NASA Astrophysics Data System (ADS)

    Gumel, A. B.; Moghadas, S. M.; Mickens, R. E.

    2004-12-01

    A deterministic mathematical model for the transmission dynamics of HIV infection in the presence of a preventive vaccine is considered. Although the equilibria of the model could not be expressed in closed form, their existence and threshold conditions for their stability are theoretically investigated. It is shown that the disease-free equilibrium is locally-asymptotically stable if the basic reproductive number R<1 (thus, HIV disease can be eradicated from the community) and unstable if R>1 (leading to the persistence of HIV within the community). A robust, positivity-preserving, non-standard finite-difference method is constructed and used to solve the model equations. In addition to showing that the anti-HIV vaccine coverage level and the vaccine-induced protection are critically important in reducing the threshold quantity R, our study predicts the minimum threshold values of vaccine coverage and efficacy levels needed to eradicate HIV from the community.
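The flavor of a positivity-preserving non-standard finite-difference (NSFD) scheme can be shown on a toy logistic equation u' = u(1 - u); the HIV model itself is more elaborate, so this is only a sketch of the discretization idea. The nonlinear term is evaluated non-locally in time so that the update stays positive for any step size, unlike explicit Euler:

```python
def nsfd_step(u, h):
    """Mickens-style NSFD update for u' = u(1 - u):
    (u_{n+1} - u_n)/h = u_n - u_n * u_{n+1}
    which solves explicitly to a positive update for any u_n, h > 0."""
    return u * (1.0 + h) / (1.0 + h * u)

def euler_step(u, h):
    """Standard explicit Euler update, which can go negative for large h."""
    return u + h * u * (1.0 - u)

h = 2.0                           # deliberately large step size
u_nsfd = 0.01
for _ in range(200):
    u_nsfd = nsfd_step(u_nsfd, h)   # stays positive, converges to u* = 1

u_euler = euler_step(2.0, h)        # one large Euler step from u = 2 goes negative
```

The NSFD iterate converges monotonically to the correct equilibrium even at step sizes where Euler loses positivity.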

  15. Multi-objective robust design of energy-absorbing components using coupled process-performance simulations

    NASA Astrophysics Data System (ADS)

    Najafi, Ali; Acar, Erdem; Rais-Rohani, Masoud

    2014-02-01

    The stochastic uncertainties associated with the material, process and product are represented and propagated to process and performance responses. A finite element-based sequential coupled process-performance framework is used to simulate the forming and energy absorption responses of a thin-walled tube in a manner that both material properties and component geometry can evolve from one stage to the next for better prediction of the structural performance measures. Metamodelling techniques are used to develop surrogate models for manufacturing and performance responses. One set of metamodels relates the responses to the random variables whereas the other relates the mean and standard deviation of the responses to the selected design variables. A multi-objective robust design optimization problem is formulated and solved to illustrate the methodology and the influence of uncertainties on manufacturability and energy absorption of a metallic double-hat tube. The results are compared with those of deterministic and augmented robust optimization problems.

  16. Controlling the Topological Sector of Magnetic Solitons in Exfoliated Cr1/3NbS2 Crystals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Lin; Chepiga, N.; Ki, D. -K.

Here, we investigate manifestations of topological order in the monoaxial helimagnet Cr1/3NbS2 by performing transport measurements on ultrathin crystals. Upon sweeping the magnetic field perpendicularly to the helical axis, crystals thicker than one helix pitch (48 nm) but much thinner than the magnetic domain size (∼1 μm) are found to exhibit sharp and hysteretic resistance jumps. We also show that these phenomena originate from transitions between topological sectors with a different number of magnetic solitons. This is confirmed by measurements on crystals thinner than 48 nm, in which the topological sector cannot change: these do not exhibit any jump or hysteresis. These results show the ability to deterministically control the topological sector of finite-size Cr1/3NbS2 and to detect intersector transitions by transport measurements.

  17. The structure of evaporating and combusting sprays: Measurements and predictions

    NASA Technical Reports Server (NTRS)

    Shuen, J. S.; Solomon, A. S. P.; Faeth, F. M.

    1983-01-01

The structure of particle-laden jets and nonevaporating and evaporating sprays was measured in order to evaluate models of these processes. Three models are being evaluated: (1) a locally homogeneous flow model, where slip between the phases is neglected and the flow is assumed to be in local thermodynamic equilibrium; (2) a deterministic separated flow model, where slip and finite interphase transport rates are considered but effects of particle/drop dispersion by turbulence and effects of turbulence on interphase transport rates are ignored; and (3) a stochastic separated flow model, where effects of interphase slip, turbulent dispersion and turbulent fluctuations are considered using random sampling for turbulence properties in conjunction with random-walk computations for particle motion. All three models use a k-ε-g turbulence model. All testing and data reduction are completed for the particle-laden jets. Mean and fluctuating velocities of the continuous phase and mean mixture fraction were measured in the evaporating sprays.

  18. Controlling the Topological Sector of Magnetic Solitons in Exfoliated Cr1/3NbS2 Crystals

    DOE PAGES

    Wang, Lin; Chepiga, N.; Ki, D. -K.; ...

    2017-06-23

Here, we investigate manifestations of topological order in the monoaxial helimagnet Cr1/3NbS2 by performing transport measurements on ultrathin crystals. Upon sweeping the magnetic field perpendicularly to the helical axis, crystals thicker than one helix pitch (48 nm) but much thinner than the magnetic domain size (∼1 μm) are found to exhibit sharp and hysteretic resistance jumps. We also show that these phenomena originate from transitions between topological sectors with a different number of magnetic solitons. This is confirmed by measurements on crystals thinner than 48 nm, in which the topological sector cannot change: these do not exhibit any jump or hysteresis. These results show the ability to deterministically control the topological sector of finite-size Cr1/3NbS2 and to detect intersector transitions by transport measurements.

  19. NMR diffusion simulation based on conditional random walk.

    PubMed

    Gudbjartsson, H; Patz, S

    1995-01-01

The authors introduce here a new, very fast simulation method for free diffusion in a linear magnetic field gradient, which is an extension of the conventional Monte Carlo (MC) method or the convolution method described by Wong et al. (in 12th SMRM, New York, 1993, p.10). In earlier NMR-diffusion simulation methods, such as the finite difference (FD) method, the Monte Carlo method, and the deterministic convolution method, the outcome of the calculations depends on the simulation time step. In the authors' method, however, the results are independent of the time step, whereas in the convolution method the step size has to be adequate for spins to diffuse to adjacent grid points. By always selecting the largest possible time step, the computation time can therefore be reduced. Finally, the authors point out that in simple geometric configurations their simulation algorithm can be used to reduce computation time in the simulation of restricted diffusion.
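For context, the step-size-dependent baseline is a plain fixed-time-step Monte Carlo simulation, the kind whose time-step dependence the conditional-walk method removes. A minimal sketch for free diffusion in a constant gradient, where the attenuation has the closed form exp(-γ²g²DT³/3); all parameter values are illustrative, in arbitrary units:

```python
import math
import random

def mc_attenuation(gamma_g=1.0, D=0.5, T=1.0, steps=200, walkers=20000, seed=0):
    """Monte Carlo NMR signal attenuation for free diffusion in a constant
    gradient: each spin random-walks in 1-D and accumulates the phase
    phi = gamma * g * integral of x(t) dt; the signal is E[cos(phi)]."""
    rng = random.Random(seed)
    dt = T / steps
    sigma = math.sqrt(2.0 * D * dt)     # per-step std dev of Brownian motion
    total = 0.0
    for _ in range(walkers):
        x, phi = 0.0, 0.0
        for _ in range(steps):
            x += rng.gauss(0.0, sigma)
            phi += gamma_g * x * dt
        total += math.cos(phi)
    return total / walkers

est = mc_attenuation()
exact = math.exp(-1.0 ** 2 * 0.5 * 1.0 ** 3 / 3.0)   # exp(-gamma^2 g^2 D T^3 / 3)
```

The estimate approaches the analytic attenuation only as the number of time steps grows, which is exactly the dependence the article addresses.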

  20. Assortative mating without assortative preference

    PubMed Central

    Xie, Yu; Cheng, Siwei; Zhou, Xiang

    2015-01-01

    Assortative mating—marriage of a man and a woman with similar social characteristics—is a commonly observed phenomenon. In the existing literature in both sociology and economics, this phenomenon has mainly been attributed to individuals’ conscious preferences for assortative mating. In this paper, we show that patterns of assortative mating may arise from another structural source even if individuals do not have assortative preferences or possess complementary attributes: dynamic processes of marriages in a closed system. For a given cohort of youth in a finite population, as the percentage of married persons increases, unmarried persons who newly enter marriage are systematically different from those who married earlier, giving rise to the phenomenon of assortative mating. We use microsimulation methods to illustrate this dynamic process, using first the conventional deterministic Gale–Shapley model, then a probabilistic Gale–Shapley model, and then two versions of the encounter mating model. PMID:25918366
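The deterministic Gale-Shapley (deferred-acceptance) model mentioned above can be sketched in a few lines; the preference lists here are arbitrary illustrative data, not the paper's simulation inputs:

```python
def gale_shapley(men_prefs, women_prefs):
    """Men propose in preference order; each woman tentatively holds the best
    proposal received so far. Terminates with a stable matching."""
    n = len(men_prefs)
    # rank[w][m] = position of man m in woman w's preference list
    rank = [{m: r for r, m in enumerate(prefs)} for prefs in women_prefs]
    nxt = [0] * n            # index of the next woman each man will propose to
    wife = [None] * n
    husband = [None] * n
    free = list(range(n))
    while free:
        m = free.pop()
        w = men_prefs[m][nxt[m]]
        nxt[m] += 1
        if husband[w] is None:                     # w is unengaged
            husband[w], wife[m] = m, w
        elif rank[w][m] < rank[w][husband[w]]:     # w prefers m to her fiance
            free.append(husband[w])
            wife[husband[w]] = None
            husband[w], wife[m] = m, w
        else:                                      # w rejects m
            free.append(m)
    return wife

men = [[0, 1, 2], [1, 0, 2], [0, 1, 2]]     # illustrative preference lists
women = [[2, 1, 0], [0, 1, 2], [0, 1, 2]]
match = gale_shapley(men, women)            # wife of man i is match[i]
```

A probabilistic variant, as used in the paper, would replace the deterministic proposal order with random encounters.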

  1. Analysis of credit linked demand in an inventory model with varying ordering cost.

    PubMed

    Banu, Ateka; Mondal, Shyamal Kumar

    2016-01-01

In this paper, we have considered an economic order quantity model for deteriorating items with a two-level trade credit policy, in which a delay in payment is offered by a supplier to a retailer and another delay in payment is offered by the retailer to all of his/her customers. Here, it is proposed that the demand function depends on the length of the customer's credit period and also on the duration over which the credit period is offered. In this article, it is considered that the retailer's ordering cost per order depends on the number of replenishment cycles. The objective of this model is to establish a deterministic EOQ model of deteriorating items allowing the retailer to decide the position of the customer's credit period and the number of replenishment cycles in a finite time horizon such that the retailer obtains the maximum profit. The model is illustrated with the help of some numerical examples.
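For orientation, the classical no-credit, constant-cost EOQ baseline that models of this kind generalize has the closed form Q* = sqrt(2DK/h) for demand rate D, fixed ordering cost K, and holding cost h. A quick check with illustrative numbers (not taken from the paper):

```python
import math

def total_cost(Q, D, K, h):
    """Annual cost of ordering in lots of size Q: ordering cost plus
    average holding cost."""
    return D * K / Q + h * Q / 2.0

D, K, h = 1200.0, 50.0, 6.0             # illustrative demand, order, holding costs
q_star = math.sqrt(2.0 * D * K / h)     # Wilson EOQ formula
```

Perturbing Q away from q_star in either direction raises the total cost, confirming the optimum.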

  2. Properties of the Tent map for decimal fractions with fixed precision

    NASA Astrophysics Data System (ADS)

    Chetverikov, V. M.

    2018-01-01

The one-dimensional discrete Tent map is a well-known example of a map whose fixed points are all unstable on the segment [0,1]. This map leads to a positive Lyapunov exponent for the corresponding recurrent sequence. Therefore, in a situation of general position, this sequence must demonstrate the properties of deterministic chaos. However, if the first term of the recurrence sequence is taken as a decimal fraction with a fixed number k of digits after the decimal point and all calculations are carried out exactly, then the situation turns out to be completely different. In this case, first, the Tent map does not increase the number of significant digits in the terms of the sequence, and second, it demonstrates the existence of a finite number of eventually periodic orbits, which are attractors for all other decimal numbers with at most k significant digits.
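This finite-state behavior is easy to reproduce with exact rational arithmetic: with k decimal digits, both branches 2x and 2(1 - x) keep the denominator a divisor of 10^k, so the state space is finite and every orbit must be eventually periodic. A small sketch (the starting value is an arbitrary example):

```python
from fractions import Fraction

def tent(x):
    """Tent map: T(x) = 2x for x < 1/2, else 2(1 - x)."""
    return 2 * x if x < Fraction(1, 2) else 2 * (1 - x)

def transient_and_period(x0):
    """Iterate exactly until a state repeats; finiteness of the state space
    (denominators never grow past 10^k) guarantees termination."""
    seen = {}
    x, step = x0, 0
    while x not in seen:
        seen[x] = step
        x = tent(x)
        step += 1
    return seen[x], step - seen[x]

k = 3
transient, period = transient_and_period(Fraction(123, 10 ** k))   # x0 = 0.123
```

Because at most 10^k + 1 exact states exist in [0, 1], the orbit enters a cycle after at most that many steps.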

  3. Clock-Work Trade-Off Relation for Coherence in Quantum Thermodynamics

    NASA Astrophysics Data System (ADS)

    Kwon, Hyukjoon; Jeong, Hyunseok; Jennings, David; Yadin, Benjamin; Kim, M. S.

    2018-04-01

    In thermodynamics, quantum coherences—superpositions between energy eigenstates—behave in distinctly nonclassical ways. Here we describe how thermodynamic coherence splits into two kinds—"internal" coherence that admits an energetic value in terms of thermodynamic work, and "external" coherence that does not have energetic value, but instead corresponds to the functioning of the system as a quantum clock. For the latter form of coherence, we provide dynamical constraints that relate to quantum metrology and macroscopicity, while for the former, we show that quantum states exist that have finite internal coherence yet with zero deterministic work value. Finally, under minimal thermodynamic assumptions, we establish a clock-work trade-off relation between these two types of coherences. This can be viewed as a form of time-energy conjugate relation within quantum thermodynamics that bounds the total maximum of clock and work resources for a given system.

  4. Comparison of Response Surface and Kriging Models in the Multidisciplinary Design of an Aerospike Nozzle

    NASA Technical Reports Server (NTRS)

    Simpson, Timothy W.

    1998-01-01

Response surface models and kriging models are compared for approximating non-random, deterministic computer analyses. After discussing the traditional response surface approach for constructing polynomial models for approximation, kriging is presented as an alternative statistical-based approximation method for the design and analysis of computer experiments. Both approximation methods are applied to the multidisciplinary design and analysis of an aerospike nozzle which consists of a computational fluid dynamics model and a finite element analysis model. Error analysis of the response surface and kriging models is performed along with a graphical comparison of the approximations. Four optimization problems are formulated and solved using both approximation models. While neither approximation technique consistently outperforms the other in this example, the kriging models using only a constant for the underlying global model and a Gaussian correlation function perform as well as the second order polynomial response surface models.

  5. Wiener Chaos and Nonlinear Filtering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lototsky, S.V.

    2006-11-15

The paper discusses two algorithms for solving the Zakai equation in the time-homogeneous diffusion filtering model with possible correlation between the state process and the observation noise. Both algorithms rely on the Cameron-Martin version of the Wiener chaos expansion, so that the approximate filter is a finite linear combination of the chaos elements generated by the observation process. The coefficients in the expansion depend only on the deterministic dynamics of the state and observation processes. For real-time applications, computing the coefficients in advance improves the performance of the algorithms in comparison with most other existing methods of nonlinear filtering. The paper summarizes the main existing results about these Wiener chaos algorithms and resolves some open questions concerning the convergence of the algorithms in the noise-correlated setting. The presentation includes the necessary background on the Wiener chaos and optimal nonlinear filtering.

  6. Error bounds of adaptive dynamic programming algorithms for solving undiscounted optimal control problems.

    PubMed

    Liu, Derong; Li, Hongliang; Wang, Ding

    2015-06-01

    In this paper, we establish error bounds of adaptive dynamic programming algorithms for solving undiscounted infinite-horizon optimal control problems of discrete-time deterministic nonlinear systems. We consider approximation errors in the update equations of both value function and control policy. We utilize a new assumption instead of the contraction assumption in discounted optimal control problems. We establish the error bounds for approximate value iteration based on a new error condition. Furthermore, we also establish the error bounds for approximate policy iteration and approximate optimistic policy iteration algorithms. It is shown that the iterative approximate value function can converge to a finite neighborhood of the optimal value function under some conditions. To implement the developed algorithms, critic and action neural networks are used to approximate the value function and control policy, respectively. Finally, a simulation example is given to demonstrate the effectiveness of the developed algorithms.
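For undiscounted deterministic problems of this kind, exact value iteration on a small shortest-path instance shows the basic Bellman update the error bounds refer to; the graph here is an arbitrary toy example, and no approximation error is introduced:

```python
def value_iteration(edges, goal, n_states, sweeps=50):
    """Bellman update V(s) = min_a [ c(s, a) + V(next(s, a)) ] for a
    deterministic, undiscounted shortest-path problem with a cost-free goal."""
    INF = float("inf")
    V = [INF] * n_states
    V[goal] = 0.0
    for _ in range(sweeps):
        for s, actions in edges.items():
            V[s] = min(c + V[t] for t, c in actions)
    return V

# toy graph: state -> list of (successor, cost) pairs; state 3 is the goal
edges = {0: [(1, 1.0), (2, 4.0)], 1: [(2, 1.0), (3, 6.0)], 2: [(3, 1.0)]}
V = value_iteration(edges, goal=3, n_states=4)
```

In the approximate setting studied in the paper, each update is perturbed, and the cited bounds describe how far the resulting iterates can drift from these exact values.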

  7. Probabilistic vs. deterministic fiber tracking and the influence of different seed regions to delineate cerebellar-thalamic fibers in deep brain stimulation.

    PubMed

    Schlaier, Juergen R; Beer, Anton L; Faltermeier, Rupert; Fellner, Claudia; Steib, Kathrin; Lange, Max; Greenlee, Mark W; Brawanski, Alexander T; Anthofer, Judith M

    2017-06-01

    This study compared tractography approaches for identifying cerebellar-thalamic fiber bundles relevant to planning target sites for deep brain stimulation (DBS). In particular, probabilistic and deterministic tracking of the dentate-rubro-thalamic tract (DRTT) and differences between the spatial courses of the DRTT and the cerebello-thalamo-cortical (CTC) tract were compared. Six patients with movement disorders were examined by magnetic resonance imaging (MRI), including two sets of diffusion-weighted images (12 and 64 directions). Probabilistic and deterministic tractography was applied on each diffusion-weighted dataset to delineate the DRTT. Results were compared with regard to their sensitivity in revealing the DRTT and additional fiber tracts and processing time. Two sets of regions-of-interests (ROIs) guided deterministic tractography of the DRTT or the CTC, respectively. Tract distances to an atlas-based reference target were compared. Probabilistic fiber tracking with 64 orientations detected the DRTT in all twelve hemispheres. Deterministic tracking detected the DRTT in nine (12 directions) and in only two (64 directions) hemispheres. Probabilistic tracking was more sensitive in detecting additional fibers (e.g. ansa lenticularis and medial forebrain bundle) than deterministic tracking. Probabilistic tracking lasted substantially longer than deterministic. Deterministic tracking was more sensitive in detecting the CTC than the DRTT. CTC tracts were located adjacent but consistently more posterior to DRTT tracts. These results suggest that probabilistic tracking is more sensitive and robust in detecting the DRTT but harder to implement than deterministic approaches. Although sensitivity of deterministic tracking is higher for the CTC than the DRTT, targets for DBS based on these tracts likely differ. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  8. Deterministic earthquake scenario for the Basel area: Simulating strong motions and site effects for Basel, Switzerland

    NASA Astrophysics Data System (ADS)

    OpršAl, Ivo; FäH, Donat; Mai, P. Martin; Giardini, Domenico

    2005-04-01

    The Basel earthquake of 18 October 1356 is considered one of the most serious earthquakes in Europe in recent centuries (I0 = IX, M ≈ 6.5-6.9). In this paper we present ground motion simulations for earthquake scenarios for the city of Basel and its vicinity. The numerical modeling combines the finite extent pseudodynamic and kinematic source models with complex local structure in a two-step hybrid three-dimensional (3-D) finite difference (FD) method. The synthetic seismograms are accurate in the frequency band 0-2.2 Hz. The 3-D FD is a linear explicit displacement formulation using an irregular rectangular grid including topography. The finite extent rupture model is adjacent to the free surface because the fault has been recognized through trenching on the Reinach fault. We test two source models reminiscent of past earthquakes (the 1999 Athens and the 1989 Loma Prieta earthquake) to represent Mw ≈ 5.9 and Mw ≈ 6.5 events that occur approximately to the south of Basel. To compare the effect of the same wave field arriving at the site from other directions, we considered the same sources placed east and west of the city. The local structural model is determined from the area's recently established P and S wave velocity structure and includes topography. The selected earthquake scenarios show strong ground motion amplification with respect to a bedrock site, which is in contrast to previous 2-D simulations for the same area. In particular, we found that the edge effects from the 3-D structural model depend strongly on the position of the earthquake source within the modeling domain.
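The displacement-formulation finite-difference idea scales down to one dimension. Below is a leapfrog sketch for u_tt = c² u_xx with illustrative grid parameters; the paper's scheme is 3-D, irregular-grid, and includes topography, so this only conveys the time-stepping structure:

```python
import math

# 1-D acoustic wave equation u_tt = c^2 u_xx, explicit leapfrog scheme
n, c, dx, dt, steps = 401, 1.0, 1.0, 0.5, 100     # CFL number c*dt/dx = 0.5
r2 = (c * dt / dx) ** 2
mid = n // 2

# Gaussian displacement pulse, zero initial velocity
u_prev = [math.exp(-((i - mid) / 5.0) ** 2) for i in range(n)]
# second-order Taylor start consistent with zero initial velocity
u = [u_prev[i] + 0.5 * r2 * (u_prev[i + 1] - 2 * u_prev[i] + u_prev[i - 1])
     if 0 < i < n - 1 else 0.0
     for i in range(n)]

for _ in range(steps - 1):
    u_next = [0.0] * n            # fixed (zero) boundaries
    for i in range(1, n - 1):
        u_next[i] = 2 * u[i] - u_prev[i] + r2 * (u[i + 1] - 2 * u[i] + u[i - 1])
    u_prev, u = u, u_next
```

The stationary pulse splits into two half-amplitude pulses traveling at speed c, so after 100 steps (t = 50) the right-moving peak sits about 50 cells from the center.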

  9. Deterministic quantum dense coding networks

    NASA Astrophysics Data System (ADS)

    Roy, Saptarshi; Chanda, Titas; Das, Tamoghna; Sen(De), Aditi; Sen, Ujjwal

    2018-07-01

    We consider the scenario of deterministic classical information transmission between multiple senders and a single receiver, when they a priori share a multipartite quantum state - an attempt towards building a deterministic dense coding network. Specifically, we prove that in the case of two or three senders and a single receiver, generalized Greenberger-Horne-Zeilinger (gGHZ) states are not beneficial for sending classical information deterministically beyond the classical limit, except when the shared state is the GHZ state itself. On the other hand, three- and four-qubit generalized W (gW) states with specific parameters as well as the four-qubit Dicke states can provide a quantum advantage of sending the information in deterministic dense coding. Interestingly however, numerical simulations in the three-qubit scenario reveal that the percentage of states from the GHZ-class that are deterministic dense codeable is higher than that of states from the W-class.

  10. A deterministic and stochastic velocity model for the Salton Trough/Basin and Range transition zone and constraints on magmatism during rifting

    NASA Astrophysics Data System (ADS)

    Larkin, Steven P.; Levander, Alan; Okaya, David; Goff, John A.

    1996-12-01

    As a high resolution addition to the 1992 Pacific to Arizona Crustal Experiment (PACE), a 45-km-long deep crustal seismic reflection profile was acquired across the Chocolate Mountains in southeastern California to illuminate crustal structure in the transition between the Salton Trough and the Basin and Range province. The complex seismic data are analyzed for both large-scale (deterministic) and fine-scale (stochastic) crustal features. A low-fold near-offset common-midpoint (CMP) stacked section shows the northeastward lateral extent of a high-velocity lower crustal body which is centered beneath the Salton Trough. Off-end shots record a high-amplitude diffraction from the point where the high velocity lower crust pinches out at the Moho. Above the high-velocity lower crust, moderate-amplitude reflections occur at midcrustal levels. These reflections display the coherency and frequency characteristics of reflections backscattered from a heterogeneous velocity field, which we model as horizontal intrusions with a von Kármán (fractal) distribution. The effects of upper crustal scattering are included by combining the mapped surface geology and laboratory measurements of exposed rocks within the Chocolate Mountains to reproduce the upper crustal velocity heterogeneity in our crustal velocity model. Viscoelastic finite difference simulations indicate that the volume of mafic material within the reflective zone necessary to produce the observed backscatter is about 5%. The presence of wavelength-scale heterogeneity within the near-surface, upper, and middle crust also produces a 0.5-s-thick zone of discontinuous reflections from a crust-mantle interface which is actually a first-order discontinuity.

  11. Uniform and Multi-Grid Modeling of Acoustic Wave Propagation With Cellular Automaton Techniques

    DTIC Science & Technology

    2013-03-01


  12. New cellular automaton model for magnetohydrodynamics

    NASA Technical Reports Server (NTRS)

    Chen, Hudong; Matthaeus, William H.

    1987-01-01

A new type of two-dimensional cellular automaton method is introduced for the computation of magnetohydrodynamic fluid systems. The particle population is described by a 36-component tensor referred to a hexagonal lattice. By appropriate choice of the coefficients that control the modified streaming algorithm and of the definition of the macroscopic fields, it is possible to compute both Lorentz-force and magnetic-induction effects. The method is local in the microscopic space and is therefore suited to massively parallel computations.

  13. Cellular automaton formulation of passive scalar dynamics

    NASA Technical Reports Server (NTRS)

    Chen, Hudong; Matthaeus, William H.

    1987-01-01

Cellular automata modeling of the advection of a passive scalar in a two-dimensional flow is examined in the context of discrete lattice kinetic theory. It is shown that if the passive scalar is represented by tagging or 'coloring' automaton particles, a passive advection-diffusion equation emerges without the use of perturbation expansions. For the specific case of the hydrodynamic lattice gas model of Frisch et al. (1986), the diffusion coefficient is calculated by perturbation.

  14. Cellular Automata for Spatiotemporal Pattern Formation from Reaction-Diffusion Partial Differential Equations

    NASA Astrophysics Data System (ADS)

    Ohmori, Shousuke; Yamazaki, Yoshihiro

    2016-01-01

    Ultradiscrete equations are derived from a set of reaction-diffusion partial differential equations, and cellular automaton rules are obtained on the basis of the ultradiscrete equations. Some rules reproduce the dynamical properties of the original reaction-diffusion equations, namely, bistability and pulse annihilation. Furthermore, other rules bring about soliton-like preservation and periodic pulse generation with a pacemaker, which are not obtained from the original reaction-diffusion equations.
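Ultradiscretization rests on the limit ε log(e^{A/ε} + e^{B/ε}) → max(A, B) as ε → 0+, which turns addition into max (and multiplication into addition), yielding piecewise-linear and ultimately cellular-automaton dynamics. A numerical check of the limit, with arbitrary sample values:

```python
import math

def ud_add(a, b, eps):
    """Smoothed tropical addition: eps * log(exp(a/eps) + exp(b/eps)).
    Subtracting the max first keeps the exponentials from overflowing."""
    m = max(a, b)
    return m + eps * math.log(math.exp((a - m) / eps) + math.exp((b - m) / eps))

# as eps -> 0+ this recovers max(a, b); products x*y become sums A + B
approx = ud_add(1.0, 3.0, 1e-3)        # close to max(1, 3) = 3
at_tie = ud_add(2.0, 2.0, 0.1)         # exactly 2 + 0.1*log(2) at a tie
```

Applying this substitution term by term to a discretized reaction-diffusion equation is what produces the integer-valued update rules described in the abstract.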

  15. Greasy Automatons and The Horsey Set: The U.S. Cavalry and Mechanization, 1928 - 1940

    DTIC Science & Technology

    1995-05-01

provide. Faced with the unenviable task of holding together an institution under attack from without and torn apart within, the chiefs sacrificed the... protection. Still, the Superior Board concluded that tanks were an infantry auxiliary, incapable of independent action, and recommended that they be...the tank's role. It involved enhancing the power of combat units through doctrinal and organizational schemes that exploited the protection, firepower

  16. Human Factors Issues in the Use of Virtual and Augmented Reality for Military Purposes - USA

    DTIC Science & Technology

    2005-12-01

    and provide a means of output, MOVES has built a prototype system and continues research into the artificial intelligence and other factors required...role in any attempt to create automaton warriors. Indeed game-theoretic notions have been utilized in applications of artificial intelligence to...Review Board at the Defense Intelligence Agency (DIA). AFRL was notified that DIA will sponsor DTNG for Certification and Accreditation. Det 4 is expected

  17. Strategies for Human-Automaton Resource Entity Deployment (SHARED)

    DTIC Science & Technology

    2003-12-01

year 2004; however, the termination of MICA will render the status of this task as incomplete. 3.4 CPPP Development SOW-II.C.3.3.2 (c) Biomimicry of...Social Foraging for Cooperative Search/Engagement. Statement: The following aspects of biomimicry of social foraging will be studied in the...are studied extensively. The focus is on biomimicry of several organisms, including two kinds of bacteria (M. xanthus and E. coli) and one kind of

  18. Tsunami hazard assessment in El Salvador, Central America, from seismic sources through flooding numerical models

    NASA Astrophysics Data System (ADS)

    Álvarez-Gómez, J. A.; Aniel-Quiroga, Í.; Gutiérrez-Gutiérrez, O. Q.; Larreynaga, J.; González, M.; Castro, M.; Gavidia, F.; Aguirre-Ayerbe, I.; González-Riancho, P.; Carreño, E.

    2013-05-01

    El Salvador is the smallest and most densely populated country in Central America; its coast has approximately a length of 320 km, 29 municipalities and more than 700 000 inhabitants. In El Salvador there have been 15 recorded tsunamis between 1859 and 2012, 3 of them causing damages and hundreds of victims. The hazard assessment is commonly based on propagation numerical models for earthquake-generated tsunamis and can be approached from both Probabilistic and Deterministic Methods. A deterministic approximation has been applied in this study as it provides essential information for coastal planning and management. The objective of the research was twofold, on the one hand the characterization of the threat over the entire coast of El Salvador, and on the other the computation of flooding maps for the three main localities of the Salvadorian coast. For the latter we developed high resolution flooding models. For the former, due to the extension of the coastal area, we computed maximum elevation maps and from the elevation in the near-shore we computed an estimation of the run-up and the flooded area using empirical relations. We have considered local sources located in the Middle America Trench, characterized seismotectonically, and distant sources in the rest of Pacific basin, using historical and recent earthquakes and tsunamis. We used a hybrid finite differences - finite volumes numerical model in this work, based on the Linear and Non-linear Shallow Water Equations, to simulate a total of 24 earthquake generated tsunami scenarios. In the western Salvadorian coast, run-up values higher than 5 m are common, while in the eastern area, approximately from La Libertad to the Gulf of Fonseca, the run-up values are lower. The more exposed areas to flooding are the lowlands in the Lempa River delta and the Barra de Santiago Western Plains. 
The results of the empirical approximation used for the whole country are similar to those obtained with the high-resolution numerical modelling, making it a good and fast approximation for preliminary tsunami hazard estimations. In Acajutla and La Libertad, both important tourism centres under active development, flooding depths between 2 and 4 m are frequent, accompanied by high and very high person-instability hazard. Inside the Gulf of Fonseca the impact of the waves is almost negligible.

  19. Tsunami hazard assessment in El Salvador, Central America, from seismic sources through flooding numerical models.

    NASA Astrophysics Data System (ADS)

    Álvarez-Gómez, J. A.; Aniel-Quiroga, Í.; Gutiérrez-Gutiérrez, O. Q.; Larreynaga, J.; González, M.; Castro, M.; Gavidia, F.; Aguirre-Ayerbe, I.; González-Riancho, P.; Carreño, E.

    2013-11-01

El Salvador is the smallest and most densely populated country in Central America; its coast has an approximate length of 320 km, 29 municipalities and more than 700 000 inhabitants. In El Salvador there were 15 recorded tsunamis between 1859 and 2012, 3 of them causing damage and resulting in hundreds of victims. Hazard assessment is commonly based on propagation numerical models for earthquake-generated tsunamis and can be approached through both probabilistic and deterministic methods. A deterministic approximation has been applied in this study as it provides essential information for coastal planning and management. The objective of the research was twofold: on the one hand the characterization of the threat over the entire coast of El Salvador, and on the other the computation of flooding maps for the three main localities of the Salvadorian coast. For the latter we developed high-resolution flooding models. For the former, due to the extension of the coastal area, we computed maximum elevation maps, and from the elevation in the near shore we computed an estimation of the run-up and the flooded area using empirical relations. We have considered local sources located in the Middle America Trench, characterized seismotectonically, and distant sources in the rest of the Pacific Basin, using historical and recent earthquakes and tsunamis. We used a hybrid finite-difference/finite-volume numerical model in this work, based on the linear and non-linear shallow water equations, to simulate a total of 24 earthquake-generated tsunami scenarios. Our results show that on the western Salvadorian coast, run-up values higher than 5 m are common, while in the eastern area, approximately from La Libertad to the Gulf of Fonseca, the run-up values are lower. The areas most exposed to flooding are the lowlands in the Lempa River delta and the Barra de Santiago Western Plains. 
The results of the empirical approximation used for the whole country are similar to those obtained with the high-resolution numerical modelling, making it a good and fast approximation for preliminary tsunami hazard estimations. In Acajutla and La Libertad, both important tourism centres under active development, flooding depths between 2 and 4 m are frequent, accompanied by high and very high person-instability hazard. Inside the Gulf of Fonseca the impact of the waves is almost negligible.
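The abstract's fast empirical estimate of run-up from near-shore wave elevation can be illustrated with one widely used relation of this kind, Synolakis' solitary-wave run-up law for a plane beach. This is a sketch under stated assumptions: the law itself is standard in tsunami hazard work, but the wave height, water depth and beach slope below are hypothetical placeholders, not values from the study.

```python
import math

def synolakis_runup(H, d, beta_deg):
    """Run-up R of a non-breaking solitary wave of offshore height H (m)
    in water of depth d (m) on a plane beach of slope beta (degrees),
    using the empirical/analytical law R/d = 2.831 sqrt(cot beta) (H/d)^(5/4)."""
    cot_beta = 1.0 / math.tan(math.radians(beta_deg))
    return d * 2.831 * math.sqrt(cot_beta) * (H / d) ** 1.25

# Hypothetical near-shore values: 1.5 m wave in 10 m depth, 2 degree beach slope
print(round(synolakis_runup(1.5, 10.0, 2.0), 2))
```

Relations like this let a single offshore elevation map be converted into a country-scale run-up estimate without running a high-resolution inundation model at every point.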

  20. Impact of Uncertainty on the Porous Media Description in the Subsurface Transport Analysis

    NASA Astrophysics Data System (ADS)

    Darvini, G.; Salandin, P.

    2008-12-01

In the modelling of flow and transport phenomena in naturally heterogeneous media, the spatial variability of hydraulic properties, typically the hydraulic conductivity, is generally described by a variogram of constant sill and spatial correlation. While some analyses reported in the literature discuss spatial inhomogeneity related to a trend in the mean hydraulic conductivity, the effect on flow and transport of an inexact definition of the spatial statistical properties of the media has, to our knowledge, never been taken into account. The relevance of this topic is manifest: it is related to the uncertainty in the definition of the spatial moments of hydraulic log-conductivity from a (usually) small number of data, as well as to the modelling of flow and transport processes by the Monte Carlo technique, whose numerical fields have poor ergodic properties and are not strictly statistically homogeneous. In this work we investigate the effects of mean log-conductivity (logK) field behaviours that differ from the constant one due to different sources of inhomogeneity: i) a deterministic trend; ii) a deterministic sinusoidal pattern; iii) a random behaviour deriving from the hierarchical sedimentary architecture of porous formations; and iv) a conditioning procedure on available measurements of the hydraulic conductivity. These mean log-conductivity behaviours are superimposed on a correlated, weakly fluctuating logK field. The time evolution of the spatial moments of the plume driven by a statistically inhomogeneous steady-state random velocity field is analyzed in a 2-D finite domain, taking into account different sizes of the injection area. The problem is approached by both a classical Monte Carlo procedure and the SFEM (stochastic finite element method). In the latter, the moments are obtained by space-time integration of the velocity field covariance structure derived according to a first-order Taylor series expansion. 
Two different goals are foreseen: 1) from the results it will be possible to distinguish the contribution to plume dispersion of the uncertainty in the statistics of the medium hydraulic properties in all the cases considered, and 2) we will try to highlight the loss of performance that seems to affect first-order approaches in transport phenomena that take place in hierarchical architectures of porous formations.
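The Monte Carlo side of the analysis above amounts to tracking the second central spatial moment of the plume across many realizations of a random velocity field. The sketch below uses a deliberately toy field (one random uniform velocity per realization plus particle-scale white noise), not the correlated heterogeneous logK fields of the study; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def plume_second_moment(n_real=200, n_part=200, n_steps=50, dt=0.1,
                        u_mean=1.0, sigma_u=0.3, d_local=0.05):
    """Monte Carlo estimate of the second central spatial moment S_xx(t)
    of a plume from a point injection, pooling particles over all
    realizations of a toy random velocity field."""
    s1 = np.zeros(n_steps)          # running sum of positions per time step
    s2 = np.zeros(n_steps)          # running sum of squared positions
    for _ in range(n_real):
        u = u_mean + sigma_u * rng.standard_normal()   # realization velocity
        x = np.zeros(n_part)                           # point injection at x = 0
        for k in range(n_steps):
            x = x + u * dt + d_local * rng.standard_normal(n_part)
            s1[k] += x.sum()
            s2[k] += (x ** 2).sum()
    npts = n_real * n_part
    mean = s1 / npts
    return s2 / npts - mean ** 2

S = plume_second_moment()
print(S[0], S[-1])
```

The velocity uncertainty makes the pooled moment grow superlinearly in time, which is exactly the macrodispersion effect that one-realization simulations miss.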

  1. Deterministic figure correction of piezoelectrically adjustable slumped glass optics

    NASA Astrophysics Data System (ADS)

    DeRoo, Casey T.; Allured, Ryan; Cotroneo, Vincenzo; Hertz, Edward; Marquez, Vanessa; Reid, Paul B.; Schwartz, Eric D.; Vikhlinin, Alexey A.; Trolier-McKinstry, Susan; Walker, Julian; Jackson, Thomas N.; Liu, Tianning; Tendulkar, Mohit

    2018-01-01

    Thin x-ray optics with high angular resolution (≤ 0.5 arcsec) over a wide field of view enable the study of a number of astrophysically important topics and feature prominently in Lynx, a next-generation x-ray observatory concept currently under NASA study. In an effort to address this technology need, piezoelectrically adjustable, thin mirror segments capable of figure correction after mounting and on-orbit are under development. We report on the fabrication and characterization of an adjustable cylindrical slumped glass optic. This optic has realized 100% piezoelectric cell yield and employs lithographically patterned traces and anisotropic conductive film connections to address the piezoelectric cells. In addition, the measured responses of the piezoelectric cells are found to be in good agreement with finite-element analysis models. While the optic as manufactured is outside the range of absolute figure correction, simulated corrections using the measured responses of the piezoelectric cells are found to improve 5 to 10 arcsec mirrors to 1 to 3 arcsec [half-power diameter (HPD), single reflection at 1 keV]. Moreover, a measured relative figure change which would correct the figure of a representative slumped glass piece from 6.7 to 1.2 arcsec HPD is empirically demonstrated. We employ finite-element analysis-modeled influence functions to understand the current frequency limitations of the correction algorithm employed and identify a path toward achieving subarcsecond corrections.
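The figure-correction step described above is, in general adjustable-optics practice, a linear least-squares problem: given an influence-function matrix whose columns are the measured figure change per volt of each piezoelectric cell, choose the voltages that best cancel the measured figure error. The sketch below uses synthetic matrices and error maps as placeholders; the cell count, sample-point count and noise level are assumptions, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic influence-function matrix: column j is the figure change per
# volt of piezo cell j, sampled at 400 mirror-surface points (toy sizes).
n_points, n_cells = 400, 24
A = rng.standard_normal((n_points, n_cells))

# Synthetic measured figure error at the same sample points: a correctable
# part in the span of the influence functions plus small residual roughness.
error = A @ rng.standard_normal(n_cells) + 0.01 * rng.standard_normal(n_points)

# Choose cell voltages v minimizing ||A v + error||_2 (i.e. best cancellation).
v, *_ = np.linalg.lstsq(A, -error, rcond=None)

residual = error + A @ v
print(np.std(residual) < np.std(error))
```

Only the error component lying in the span of the influence functions can be removed, which is why the abstract distinguishes absolute figure correction from a demonstrated relative figure change.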

  2. Studying the effect of cracks on the ultrasonic wave propagation in a two dimensional gearbox finite element model

    NASA Astrophysics Data System (ADS)

    Ozevin, Didem; Fazel, Hossein; Cox, Justin; Hardman, William; Kessler, Seth S.; Timmons, Alan

    2014-04-01

Gearbox components of aerospace structures are typically made of brittle materials with high fracture toughness, but are susceptible to fatigue failure due to continuous cyclic loading. Structural Health Monitoring (SHM) methods are used to monitor crack growth in gearbox components. Damage detection methodologies developed in laboratory-scale experiments may not represent the actual gearbox structural configuration and are usually not applicable to real applications, as the vibration and wave properties depend on the material, structural layers and thicknesses. Also, the sensor types and locations are key factors for the frequency content of ultrasonic waves, which are essential features for pattern recognition algorithm development in noisy environments. Therefore, a deterministic damage detection methodology that considers all the variables influencing the waveform signature should be considered in the preliminary computation before any experimental test matrix. In order to achieve this goal, we developed two-dimensional finite element models of a gearbox cross section from the front view and the shaft section. The cross section model consists of steel revolving teeth, a thin layer of oil, and a retention plate. An ultrasonic wave up to 1 MHz frequency is generated, and waveform histories along the gearbox are recorded. The received waveforms under pristine and cracked conditions are compared in order to analyze the crack influence on wave propagation in the gearbox, which can be utilized by both active and passive SHM methods.

  3. Units of rotational information

    NASA Astrophysics Data System (ADS)

    Yang, Yuxiang; Chiribella, Giulio; Hu, Qinheping

    2017-12-01

Entanglement in angular momentum degrees of freedom is a precious resource for quantum metrology and control. Here we study the conversions of this resource, focusing on Bell pairs of spin-J particles, where one particle is used to probe unknown rotations and the other particle is used as reference. When a large number of pairs are given, we show that every rotated spin-J Bell state can be reversibly converted into an equivalent number of rotated spin one-half Bell states, at a rate determined by the quantum Fisher information. This result provides the foundation for the definition of an elementary unit of information about rotations in space, which we call the Cartesian refbit. In the finite copy scenario, we design machines that approximately break down Bell states of higher spins into Cartesian refbits, as well as machines that approximately implement the inverse process. In addition, we establish a quantitative link between the conversion of Bell states and the simulation of unitary gates, showing that the fidelity of probabilistic state conversion provides upper and lower bounds on the fidelity of deterministic gate simulation. The result holds not only for rotation gates, but also for all sets of gates that form finite-dimensional representations of compact groups. For rotation gates, we show how rotations on a system of given spin can simulate rotations on a system of different spin.

  4. Human skeletal muscle behavior in vivo: Finite element implementation, experiment, and passive mechanical characterization.

    PubMed

    Clemen, Christof B; Benderoth, Günther E K; Schmidt, Andreas; Hübner, Frank; Vogl, Thomas J; Silber, Gerhard

    2017-01-01

    In this study, useful methods for active human skeletal muscle material parameter determination are provided. First, a straightforward approach to the implementation of a transversely isotropic hyperelastic continuum mechanical material model in an invariant formulation is presented. This procedure is found to be feasible even if the strain energy is formulated in terms of invariants other than those predetermined by the software's requirements. Next, an appropriate experimental setup for the observation of activation-dependent material behavior, corresponding data acquisition, and evaluation is given. Geometry reconstruction based on magnetic resonance imaging of different deformation states is used to generate realistic, subject-specific finite element models of the upper arm. Using the deterministic SIMPLEX optimization strategy, a convenient quasi-static passive-elastic material characterization is pursued; the results of this approach used to characterize the behavior of human biceps in vivo indicate the feasibility of the illustrated methods to identify active material parameters comprising multiple loading modes. A comparison of a contact simulation incorporating the optimized parameters to a reconstructed deformed geometry of an indented upper arm shows the validity of the obtained results regarding deformation scenarios perpendicular to the effective direction of the nonactivated biceps. However, for a valid, activatable, general-purpose material characterization, the material model needs some modifications as well as a multicriteria optimization of the force-displacement data for different loading modes. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. Origin and Diversification Dynamics of Self-Incompatibility Haplotypes

    PubMed Central

    Gervais, Camille E.; Castric, Vincent; Ressayre, Adrienne; Billiard, Sylvain

    2011-01-01

    Self-incompatibility (SI) is a genetic system found in some hermaphrodite plants. Recognition of pollen by pistils expressing cognate specificities at two linked genes leads to rejection of self pollen and pollen from close relatives, i.e., to avoidance of self-fertilization and inbred matings, and thus increased outcrossing. These genes generally have many alleles, yet the conditions allowing the evolution of new alleles remain mysterious. Evolutionary changes are clearly necessary in both genes, since any mutation affecting only one of them would result in a nonfunctional self-compatible haplotype. Here, we study diversification at the S-locus (i.e., a stable increase in the total number of SI haplotypes in the population, through the incorporation of new SI haplotypes), both deterministically (by investigating analytically the fate of mutations in an infinite population) and by simulations of finite populations. We show that the conditions allowing diversification are far less stringent in finite populations with recurrent mutations of the pollen and pistil genes, suggesting that diversification is possible in a panmictic population. We find that new SI haplotypes emerge fastest in populations with few SI haplotypes, and we discuss some implications for empirical data on S-alleles. However, allele numbers in our simulations never reach values as high as observed in plants whose SI systems have been studied, and we suggest extensions of our models that may reconcile the theory and data. PMID:21515570

  6. The relationship between stochastic and deterministic quasi-steady state approximations.

    PubMed

    Kim, Jae Kyoung; Josić, Krešimir; Bennett, Matthew R

    2015-11-23

The quasi steady-state approximation (QSSA) is frequently used to reduce deterministic models of biochemical networks. The resulting equations provide a simplified description of the network in terms of non-elementary reaction functions (e.g. Hill functions). Such deterministic reductions are frequently a basis for heuristic stochastic models in which non-elementary reaction functions are used to define reaction propensities. Despite their popularity, it remains unclear when such stochastic reductions are valid. It is frequently assumed that the stochastic reduction can be trusted whenever its deterministic counterpart is accurate. However, a number of recent examples show that this is not necessarily the case. Here we explain the origin of these discrepancies, and demonstrate a clear relationship between the accuracy of the deterministic and the stochastic QSSA for examples widely used in biological systems. With an analysis of a two-state promoter model, and numerical simulations for a variety of other models, we find that the stochastic QSSA is accurate whenever its deterministic counterpart provides an accurate approximation over a range of initial conditions which cover the likely fluctuations from the quasi steady-state (QSS). We conjecture that this relationship provides a simple and computationally inexpensive way to test the accuracy of reduced stochastic models using deterministic simulations. The stochastic QSSA is one of the most popular multi-scale stochastic simulation methods. While the use of the QSSA and the resulting non-elementary functions has been justified in the deterministic case, it is not clear when their stochastic counterparts are accurate. In this study, we show how the accuracy of the stochastic QSSA can be tested using their deterministic counterparts, providing a concrete method to test when non-elementary rate functions can be used in stochastic simulations.
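The deterministic QSSA the abstract builds on can be illustrated with the classic Michaelis-Menten reduction of enzyme kinetics, where the full mass-action model E + S <-> C -> E + P is replaced by a single non-elementary rate Vmax*S/(Km+S). The rate constants below are arbitrary, chosen only so that total enzyme is much smaller than substrate (the regime where the QSSA is valid); this is a generic textbook example, not a model from the paper.

```python
# Full mass-action enzyme kinetics: E + S <-> C -> E + P (explicit Euler)
k1, km1, k2 = 100.0, 100.0, 10.0
E0, S0 = 1.0, 100.0          # total enzyme << substrate: QSSA regime

def full_model(t_end=2.0, dt=2e-5):
    S, C, P = S0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        E = E0 - C                       # enzyme conservation
        dS = -k1 * E * S + km1 * C
        dC = k1 * E * S - (km1 + k2) * C
        dP = k2 * C
        S, C, P = S + dS * dt, C + dC * dt, P + dP * dt
    return P

def qssa_model(t_end=2.0, dt=2e-5):
    Km, Vmax = (km1 + k2) / k1, k2 * E0  # QSSA lumps the fast complex dynamics
    S, P = S0, 0.0
    for _ in range(int(t_end / dt)):
        rate = Vmax * S / (Km + S)       # non-elementary Michaelis-Menten rate
        S, P = S - rate * dt, P + rate * dt
    return P

p_full, p_qssa = full_model(), qssa_model()
print(abs(p_full - p_qssa))
```

The two trajectories agree closely here; the abstract's point is that reusing the non-elementary rate as a stochastic propensity is only safe under extra conditions on the fluctuations around the QSS.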

  7. Lattice Boltzmann model for simulation of magnetohydrodynamics

    NASA Technical Reports Server (NTRS)

    Chen, Shiyi; Chen, Hudong; Martinez, Daniel; Matthaeus, William

    1991-01-01

    A numerical method, based on a discrete Boltzmann equation, is presented for solving the equations of magnetohydrodynamics (MHD). The algorithm provides advantages similar to the cellular automaton method in that it is local and easily adapted to parallel computing environments. Because of much lower noise levels and less stringent requirements on lattice size, the method appears to be more competitive with traditional solution methods. Examples show that the model accurately reproduces both linear and nonlinear MHD phenomena.

  8. Linear System Control Using Stochastic Learning Automata

    NASA Technical Reports Server (NTRS)

    Ziyad, Nigel; Cox, E. Lucien; Chouikha, Mohamed F.

    1998-01-01

    This paper explains the use of a Stochastic Learning Automata (SLA) to control switching between three systems to produce the desired output response. The SLA learns the optimal choice of the damping ratio for each system to achieve a desired result. We show that the SLA can learn these states for the control of an unknown system with the proper choice of the error criteria. The results of using a single automaton are compared to using multiple automata.
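A standard update scheme for such an automaton is the linear reward-inaction (L_R-I) rule: on a favorable environment response the probability of the chosen action is reinforced, and on an unfavorable response the probabilities are left unchanged. The three-action environment below, with one action rewarded most often, is a hypothetical stand-in for the paper's system-switching task, and the learning rate is arbitrary.

```python
import random

random.seed(42)

def l_ri(reward_probs, a=0.05, steps=5000):
    """Linear reward-inaction learning automaton over len(reward_probs) actions."""
    n = len(reward_probs)
    p = [1.0 / n] * n                      # start from a uniform action distribution
    for _ in range(steps):
        # sample an action from the current probability vector
        r, acc, action = random.random(), 0.0, n - 1
        for i, pi in enumerate(p):
            acc += pi
            if r < acc:
                action = i
                break
        if random.random() < reward_probs[action]:      # favorable response
            p = [pi + a * (1.0 - pi) if i == action else pi * (1.0 - a)
                 for i, pi in enumerate(p)]
        # unfavorable response: no update (the "inaction" part of L_R-I)
    return p

p = l_ri([0.2, 0.5, 0.8])                  # third "system" is rewarded most often
print(p)
```

The update conserves the probability simplex exactly, and with high probability the distribution absorbs on the action with the best reward rate, which is the learning behavior the abstract exploits for switching control.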

  9. Marine traffic model based on cellular automaton: Considering the change of the ship's velocity under the influence of the weather and sea

    NASA Astrophysics Data System (ADS)

    Qi, Le; Zheng, Zhongyi; Gang, Longhui

    2017-10-01

It was found that ships' velocity changes, which are influenced by the weather and sea, e.g., wind, sea waves, sea currents, tides, etc., are significant and must be considered in a marine traffic model. Therefore, a new marine traffic model based on a cellular automaton (CA) is proposed in this paper. The characteristics of the ship's velocity change are taken into account in the model. First, the acceleration of a ship was divided into two components: a regular component and a random component. Second, the mathematical functions and statistical distribution parameters of the two components were determined by spectral analysis, curve fitting and auto-correlation analysis methods. Third, by combining the two components, the acceleration was regenerated in the update rules for ships' movement. To test the performance of the model, the ship traffic flows in the Dover Strait, the Changshan Channel and the Qiongzhou Strait were studied and simulated. The results show that the characteristics of ships' velocities in the simulations are consistent with data measured by the Automatic Identification System (AIS). Although the characteristics of the traffic flow in different areas are different, the velocities of ships can be simulated correctly. This proves that the velocities of ships under the influence of weather and sea can be simulated successfully using the proposed model.
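The two-component acceleration in the update rules can be sketched as a deterministic sinusoid (the regular, e.g. wave-induced, part) plus an autocorrelated AR(1) random term. This is only a schematic reading of the abstract: the amplitudes, period, AR coefficient and noise level below are illustrative placeholders, not the functions or parameters fitted to the paper's AIS data.

```python
import math
import random

random.seed(7)

def simulate_velocity(v0=6.0, steps=600, dt=1.0,
                      amp=0.02, period=120.0,      # regular (sinusoidal) component
                      phi=0.9, sigma=0.01):        # AR(1) random component
    v, eps, out = v0, 0.0, []
    for k in range(steps):
        a_reg = amp * math.sin(2.0 * math.pi * k * dt / period)
        eps = phi * eps + sigma * random.gauss(0.0, 1.0)   # autocorrelated noise
        v = max(0.0, v + (a_reg + eps) * dt)               # CA-style velocity update
        out.append(v)
    return out

v = simulate_velocity()
```

Combining a periodic term with autocorrelated noise reproduces the key qualitative feature the abstract emphasizes: velocity fluctuations that are neither purely deterministic nor white.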

  10. Modeling the effect of microscopic driving behaviors on Kerner's time-delayed traffic breakdown at traffic signal using cellular automata

    NASA Astrophysics Data System (ADS)

    Wang, Yang; Chen, Yan-Yan

    2016-12-01

The signalized traffic is considerably complex due to the fact that various driving behaviors have emerged to respond to traffic signals. However, the existing cellular automaton models take the signal-vehicle interactions into account inadequately, resulting in a potential risk that vehicular traffic flow dynamics may not be completely explored. To remedy this defect, this paper proposes a more realistic cellular automaton model by incorporating a number of the driving behaviors typically observed when vehicles are approaching a traffic light. In particular, the anticipatory behavior proposed in this paper is realized with a perception factor designed by considering the vehicle speed implicitly and the gap to its preceding vehicle explicitly. Numerical simulations have been performed on a signal-controlled road which is partitioned into three sections according to the different reactions of drivers. The effects of microscopic driving behaviors on Kerner's time-delayed traffic breakdown at the signal (Kerner 2011, 2013) have been investigated with the assistance of spatiotemporal pattern and trajectory analysis. Furthermore, the contributions of the driving behaviors to the traffic breakdown have been statistically examined. Finally, with the activation of the anticipatory behavior, the influences of the other driving behaviors on the formation of platoons have been investigated in terms of the number of platoons, the averaged platoon size, and the averaged flow rate.

  11. An autonomous molecular computer for logical control of gene expression

    PubMed Central

    Benenson, Yaakov; Gil, Binyamin; Ben-Dor, Uri; Adar, Rivka; Shapiro, Ehud

    2013-01-01

Early biomolecular computer research focused on laboratory-scale, human-operated computers for complex computational problems [1–7]. Recently, simple molecular-scale autonomous programmable computers were demonstrated [8–15], allowing both input and output information to be in molecular form. Such computers, using biological molecules as input data and biologically active molecules as outputs, could produce a system for ‘logical’ control of biological processes. Here we describe an autonomous biomolecular computer that, at least in vitro, logically analyses the levels of messenger RNA species, and in response produces a molecule capable of affecting levels of gene expression. The computer operates at a concentration of close to a trillion computers per microlitre and consists of three programmable modules: a computation module, that is, a stochastic molecular automaton [12–17]; an input module, by which specific mRNA levels or point mutations regulate software molecule concentrations, and hence automaton transition probabilities; and an output module, capable of controlled release of a short single-stranded DNA molecule. This approach might be applied in vivo to biochemical sensing, genetic engineering and even medical diagnosis and treatment. As a proof of principle we programmed the computer to identify and analyse mRNA of disease-related genes [18–22] associated with models of small-cell lung cancer and prostate cancer, and to produce a single-stranded DNA molecule modelled after an anticancer drug. PMID:15116117

  12. Using exploratory regression to identify optimal driving factors for cellular automaton modeling of land use change.

    PubMed

    Feng, Yongjiu; Tong, Xiaohua

    2017-09-22

Defining transition rules is an important issue in cellular automaton (CA)-based land use modeling because these models incorporate highly correlated driving factors. Multicollinearity among correlated driving factors may produce negative effects that must be eliminated from the modeling. Using exploratory regression under pre-defined criteria, we identified all possible combinations of factors from the candidate factors affecting land use change. Three combinations that incorporate five driving factors meeting pre-defined criteria were assessed. With the selected combinations of factors, three logistic regression-based CA models were built to simulate dynamic land use change in Shanghai, China, from 2000 to 2015. For comparative purposes, a CA model with all candidate factors was also applied to simulate the land use change. Simulations using the three CA models with multicollinearity eliminated performed better (with accuracy improvements of about 3.6%) than the model incorporating all candidate factors. Our results showed that not all candidate factors are necessary for accurate CA modeling and the simulations were not sensitive to changes in statistically non-significant driving factors. We conclude that exploratory regression is an effective method to search for the optimal combinations of driving factors, leading to better land use change models that are devoid of multicollinearity. We suggest identification of dominant factors and elimination of multicollinearity before building land change models, making it possible to simulate more realistic outcomes.
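A standard diagnostic behind this kind of factor screening is the variance inflation factor, VIF_j = 1/(1 - R_j^2), obtained by regressing each driving factor on all the others; large values flag the multicollinearity the abstract eliminates. The synthetic "factors" below (one built as a near-copy of another) are illustrative stand-ins, not the Shanghai drivers.

```python
import numpy as np

rng = np.random.default_rng(3)

def vif(X):
    """Variance inflation factor of each column of X (n_samples x n_factors)."""
    n, p = X.shape
    out = []
    for j in range(p):
        y = X[:, j]
        Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)   # regress factor j on the rest
        resid = y - Z @ beta
        r2 = 1.0 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return out

slope = rng.standard_normal(500)
dist_road = rng.standard_normal(500)
dist_center = dist_road + 0.05 * rng.standard_normal(500)   # nearly collinear factor
print([round(x, 1) for x in vif(np.column_stack([slope, dist_road, dist_center]))])
```

A common rule of thumb drops or merges factors with VIF above about 10 before fitting the logistic transition rules.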

  13. Simple cellular automaton model for traffic breakdown, highway capacity, and synchronized flow

    NASA Astrophysics Data System (ADS)

    Kerner, Boris S.; Klenov, Sergey L.; Schreckenberg, Michael

    2011-10-01

    We present a simple cellular automaton (CA) model for two-lane roads explaining the physics of traffic breakdown, highway capacity, and synchronized flow. The model consists of the rules “acceleration,” “deceleration,” “randomization,” and “motion” of the Nagel-Schreckenberg CA model as well as “overacceleration through lane changing to the faster lane,” “comparison of vehicle gap with the synchronization gap,” and “speed adaptation within the synchronization gap” of Kerner's three-phase traffic theory. We show that these few rules of the CA model can appropriately simulate fundamental empirical features of traffic breakdown and highway capacity found in traffic data measured over years in different countries, like characteristics of synchronized flow, the existence of the spontaneous and induced breakdowns at the same bottleneck, and associated probabilistic features of traffic breakdown and highway capacity. Single-vehicle data derived in model simulations show that synchronized flow first occurs and then self-maintains due to a spatiotemporal competition between speed adaptation to a slower speed of the preceding vehicle and passing of this slower vehicle. We find that the application of simple dependences of randomization probability and synchronization gap on driving situation allows us to explain the physics of moving synchronized flow patterns and the pinch effect in synchronized flow as observed in real traffic data.
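The four base rules inherited from the Nagel-Schreckenberg model can be sketched for a single lane on a ring; the Kerner-Klenov-Schreckenberg additions (overacceleration through lane changing, synchronization gap, speed adaptation) are omitted, and the density, maximum speed and randomization probability below are arbitrary choices, not the paper's calibration.

```python
import random

random.seed(1)

def nasch_step(pos, vel, L, vmax=5, p=0.3):
    """One parallel update of the Nagel-Schreckenberg rules on a ring of L cells."""
    n = len(pos)
    order = sorted(range(n), key=lambda i: pos[i])
    for idx, i in enumerate(order):
        gap = (pos[order[(idx + 1) % n]] - pos[i] - 1) % L   # empty cells ahead
        v = min(vel[i] + 1, vmax)          # 1. acceleration
        v = min(v, gap)                    # 2. deceleration (no collisions)
        if v > 0 and random.random() < p:  # 3. randomization
            v -= 1
        vel[i] = v
    for i in range(n):                     # 4. motion
        pos[i] = (pos[i] + vel[i]) % L
    return pos, vel

L, n = 100, 20                             # density 0.2 on a 100-cell ring
pos = random.sample(range(L), n)
vel = [0] * n
for _ in range(200):
    pos, vel = nasch_step(pos, vel, L)
print(sum(vel) / n)                        # mean speed after relaxation
```

Even these four rules reproduce spontaneous jam formation at sufficient density; the abstract's point is that a handful of extra three-phase rules suffice to also capture synchronized flow and probabilistic breakdown.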

  14. A refined and dynamic cellular automaton model for pedestrian-vehicle mixed traffic flow

    NASA Astrophysics Data System (ADS)

    Liu, Mianfang; Xiong, Shengwu

    2016-12-01

Mixed traffic flow sharing the “same lane” and having no lane discipline on the road is a common phenomenon in developing countries. For example, motorized vehicles (m-vehicles) and nonmotorized vehicles (nm-vehicles) may share the m-vehicle lane or nm-vehicle lane, and pedestrians may share the nm-vehicle lane. Simulating pedestrian-vehicle mixed traffic flow consisting of three kinds of traffic objects (m-vehicles, nm-vehicles and pedestrians) can be a challenge because some erratic drivers or pedestrians fail to follow the lane disciplines. In this paper, we investigate various moving and interactive behaviors associated with mixed traffic flow, such as lateral drift (including illegal lane-changing and transverse crossing of different lanes), overtaking and forward movement, and propose some new moving and interactive rules for pedestrian-vehicle mixed traffic flow based on a refined and dynamic cellular automaton (CA) model. Simulation results indicate that the proposed model can be used to investigate the traffic flow characteristics of a mixed traffic flow system and corresponding complicated traffic problems, such as the moving characteristics of different traffic objects, interaction phenomena between different traffic objects, traffic jams, traffic conflicts, etc., which are consistent with the actual mixed traffic system. Therefore, the proposed model provides a solid foundation for the management, planning and evacuation of mixed traffic flow.

  15. Deterministic Walks with Choice

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beeler, Katy E.; Berenhaut, Kenneth S.; Cooper, Joshua N.

    2014-01-10

    This paper studies deterministic movement over toroidal grids, integrating local information, bounded memory and choice at individual nodes. The research is motivated by recent work on deterministic random walks, and applications in multi-agent systems. Several results regarding passing tokens through toroidal grids are discussed, as well as some open questions.

  16. Vibration mode shape recognition using image processing

    NASA Astrophysics Data System (ADS)

    Wang, Weizhuo; Mottershead, John E.; Mares, Cristinel

    2009-10-01

Currently the most widely used method for comparing mode shapes from finite elements and experimental measurements is the modal assurance criterion (MAC), which can be interpreted as the cosine of the angle between the numerical and measured eigenvectors. However, the eigenvectors only contain the displacement of discrete coordinates, so that the MAC index carries no explicit information on shape features. New techniques, based upon the well-developed philosophies of image processing (IP) and pattern recognition (PR), are considered in this paper. The Zernike moment descriptor (ZMD), Fourier descriptor (FD), and wavelet descriptor (WD) are the most popular shape descriptors due to their outstanding properties in IP and PR. These include (1) for the ZMD: rotational invariance, expression and computing efficiency, ease of reconstruction and robustness to noise; (2) for the FD: separation of the global shape and shape details by low- and high-frequency components, respectively, and invariance under geometric transformation; (3) for the WD: multi-scale representation and local feature detection. Once a shape descriptor has been adopted, the comparison of mode shapes is transformed to a comparison of multidimensional shape feature vectors. Deterministic and statistical methods are presented. The deterministic problem of measuring the degree of similarity between two mode shapes (possibly one from a vibration test and the other from a finite element model) may be carried out using Pearson's correlation. Similar shape feature vectors may be arranged in clusters separated by Euclidian distances in the feature space. In the statistical analysis we are typically concerned with the classification of a test mode shape according to clusters of shape feature vectors obtained from a randomised finite element model. The dimension of the statistical problem may often be reduced by principal component analysis. 
Then, in addition to the Euclidian distance, the Mahalanobis distance, defining the separation of the test point from the cluster in terms of its standard deviation, becomes an important measure. Bayesian decision theory may be applied to formally minimise the risk of misclassification of the test shape feature vector. In this paper the ZMD is applied to the problem of mode shape recognition for a circular plate. Results show that the ZMD has considerable advantages over the traditional MAC index when identifying the cyclically symmetric mode shapes that occur in axisymmetric structures at identical frequencies. Mode shape recognition of rectangular plates is carried out by the FD. Also, the WD is applied to the problem of recognising the mode shapes in the thin and thick regions of a plate with different thicknesses. It shows the benefit of using the WD to identify mode-shapes having both local and global components. The comparison and classification of mode shapes using IP and PR provides a 'toolkit' to complement the conventional MAC approach. The selection of a particular shape descriptor and classification method will depend upon the problem in hand and the experience of the analyst.
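The MAC index the abstract contrasts with can be stated compactly for real mode shapes: MAC(phi1, phi2) = |phi1^T phi2|^2 / ((phi1^T phi1)(phi2^T phi2)), the squared cosine of the angle between the two eigenvectors. A minimal sketch (the three-point mode vectors are made up for illustration):

```python
import numpy as np

def mac(phi1, phi2):
    """Modal assurance criterion: squared cosine of the angle between two
    mode-shape vectors; 1 = identical shape, 0 = orthogonal shapes."""
    num = abs(np.vdot(phi1, phi2)) ** 2
    return num / (np.vdot(phi1, phi1).real * np.vdot(phi2, phi2).real)

m1 = np.array([1.0, 2.0, 3.0])
print(mac(m1, 2.0 * m1))                      # scaling-invariant: 1.0
print(mac(m1, np.array([3.0, 0.0, -1.0])))    # orthogonal shapes: 0.0
```

Because the index collapses each shape to a single correlation number, two visually distinct shapes can share a high MAC value, which is the limitation the shape-descriptor approach addresses.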

  17. Nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates

    DOEpatents

    Melechko, Anatoli V. [Oak Ridge, TN]; McKnight, Timothy E.; Guillorn, Michael A.; Ilic, Bojan [Ithaca, NY]; Merkulov, Vladimir I. [Knoxville, TN]; Doktycz, Mitchel J. [Knoxville, TN]; Lowndes, Douglas H. [Knoxville, TN]; Simpson, Michael L. [Knoxville, TN]

    2011-05-17

    Methods, manufactures, machines and compositions are described for nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates. A method includes depositing a catalyst particle on a surface of a substrate to define a deterministically located position; growing an aligned elongated nanostructure on the substrate, an end of the aligned elongated nanostructure coupled to the substrate at the deterministically located position; coating the aligned elongated nanostructure with a conduit material; removing a portion of the conduit material to expose the catalyst particle; removing the catalyst particle; and removing the elongated nanostructure to define a nanoconduit.

  18. Human brain detects short-time nonlinear predictability in the temporal fine structure of deterministic chaotic sounds

    NASA Astrophysics Data System (ADS)

    Itoh, Kosuke; Nakada, Tsutomu

    2013-04-01

    Deterministic nonlinear dynamical processes are ubiquitous in nature. Chaotic sounds generated by such processes may appear irregular and random in waveform, but these sounds are mathematically distinguished from random stochastic sounds in that they contain deterministic short-time predictability in their temporal fine structures. We show that the human brain distinguishes deterministic chaotic sounds from spectrally matched stochastic sounds in neural processing and perception. Deterministic chaotic sounds, even without being attended to, elicited greater cerebral cortical responses than the surrogate control sounds after about 150 ms in latency after sound onset. Listeners also clearly discriminated these sounds in perception. The results support the hypothesis that the human auditory system is sensitive to the subtle short-time predictability embedded in the temporal fine structure of sounds.

  19. A deterministic particle method for one-dimensional reaction-diffusion equations

    NASA Technical Reports Server (NTRS)

    Mascagni, Michael

    1995-01-01

    We derive a deterministic particle method for the solution of nonlinear reaction-diffusion equations in one spatial dimension. This deterministic method is an analog of a Monte Carlo method for the solution of these problems that has been previously investigated by the author. The deterministic method leads to the consideration of a system of ordinary differential equations for the positions of suitably defined particles. We then consider explicit and implicit time discretizations for this system of ordinary differential equations, and we study Picard and Newton iterations for the solution of the implicit system. Next we solve this system numerically and study the discretization error both analytically and numerically. Numerical computation shows that this deterministic method is automatically adaptive to large gradients in the solution.
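
    The implicit-solve step mentioned above can be illustrated on a toy problem; below, a single backward-Euler step is solved by both a Picard (fixed-point) and a Newton iteration. The scalar ODE y' = -y^2 is an assumption for illustration only, not the paper's particle system:

```python
# Backward-Euler step y_new = y_old + h*f(y_new), solved two ways.
def f(y):  return -y * y
def fp(y): return -2.0 * y          # df/dy, needed by Newton

def picard_step(y_old, h, iters=50):
    y = y_old
    for _ in range(iters):          # fixed-point iteration y <- y_old + h f(y)
        y = y_old + h * f(y)
    return y

def newton_step(y_old, h, iters=8):
    y = y_old
    for _ in range(iters):          # Newton on g(y) = y - y_old - h f(y) = 0
        y -= (y - y_old - h * f(y)) / (1.0 - h * fp(y))
    return y

# Both iterations converge to the same implicit solution (~0.91608 here).
print(picard_step(1.0, 0.1), newton_step(1.0, 0.1))
```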

  20. Deterministic and Stochastic Analysis of a Prey-Dependent Predator-Prey System

    ERIC Educational Resources Information Center

    Maiti, Alakes; Samanta, G. P.

    2005-01-01

    This paper reports on studies of the deterministic and stochastic behaviours of a predator-prey system with prey-dependent response function. The first part of the paper deals with the deterministic analysis of uniform boundedness, permanence, stability and bifurcation. In the second part the reproductive and mortality factors of the prey and…

  1. ShinyGPAS: interactive genomic prediction accuracy simulator based on deterministic formulas.

    PubMed

    Morota, Gota

    2017-12-20

    Deterministic formulas for the accuracy of genomic predictions highlight the relationships between prediction accuracy and potential factors influencing prediction accuracy prior to performing computationally intensive cross-validation. Visualizing such deterministic formulas in an interactive manner may lead to a better understanding of how genetic factors control prediction accuracy. The software to simulate deterministic formulas for genomic prediction accuracy was implemented in R and encapsulated as a web-based Shiny application. Shiny genomic prediction accuracy simulator (ShinyGPAS) simulates various deterministic formulas and delivers dynamic scatter plots of prediction accuracy versus genetic factors impacting prediction accuracy, while requiring only mouse navigation in a web browser. ShinyGPAS is available at: https://chikudaisei.shinyapps.io/shinygpas/. ShinyGPAS is a Shiny-based interactive genomic prediction accuracy simulator using deterministic formulas. It can be used for interactively exploring potential factors that influence prediction accuracy in genome-enabled prediction, simulating achievable prediction accuracy prior to genotyping individuals, or supporting in-class teaching. ShinyGPAS is open-source software and it is hosted online as a freely available web-based resource with an intuitive graphical user interface.
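
    As a sketch of the kind of deterministic formula such simulators plot, the Daetwyler-type expression below relates expected accuracy to training-population size n, heritability h2, and the number of independent chromosome segments Me. All parameter values here are hypothetical, and this is one formula of several that ShinyGPAS covers:

```python
from math import sqrt

def accuracy(n, h2, me):
    """Expected genomic prediction accuracy for a training set of size n,
    trait heritability h2 and me independent chromosome segments."""
    return sqrt(n * h2 / (n * h2 + me))

# Accuracy grows with training-population size and heritability:
for n in (500, 2000, 10000):
    print(n, round(accuracy(n, h2=0.5, me=1000), 3))   # 0.447, 0.707, 0.913
```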

  2. A cellular automata model of Ebola virus dynamics

    NASA Astrophysics Data System (ADS)

    Burkhead, Emily; Hawkins, Jane

    2015-11-01

    We construct a stochastic cellular automaton (SCA) model for the spread of the Ebola virus (EBOV). We make substantial modifications to an existing SCA model used for HIV, introduced by others and studied by the authors. We give a rigorous analysis of the similarities between models due to the spread of virus and the typical immune response to it, and the differences which reflect the drastically different timing of the course of EBOV. We demonstrate output from the model and compare it with clinical data.
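
    A stochastic cellular automaton of this general flavour fits in a few lines; the update rule below (healthy/infected/dead states, with infection probability growing per infected neighbour) is a generic illustration of the SCA idea, not the authors' EBOV model:

```python
import random

# States: 0 = healthy, 1 = infected, 2 = dead.
def step(grid, p_inf, rng):
    n = len(grid)
    new = [row[:] for row in grid]
    for i in range(n):
        for j in range(n):
            if grid[i][j] == 0:                      # healthy cell
                infected = sum(grid[(i + di) % n][(j + dj) % n] == 1
                               for di in (-1, 0, 1) for dj in (-1, 0, 1)
                               if (di, dj) != (0, 0))
                # infection probability grows with each infected neighbour
                if rng.random() < 1.0 - (1.0 - p_inf) ** infected:
                    new[i][j] = 1
            elif grid[i][j] == 1:                    # infected cell dies
                new[i][j] = 2
    return new

rng = random.Random(0)
grid = [[0] * 20 for _ in range(20)]
grid[10][10] = 1                                     # seed a single infection
for _ in range(10):
    grid = step(grid, p_inf=0.4, rng=rng)
print(sum(cell != 0 for row in grid for cell in row))  # cells ever infected
```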

  3. A 3D Cellular Automaton for Cell Differentiation in a Solid Tumor with Plasticity

    NASA Astrophysics Data System (ADS)

    Margarit, David H.; Romanelli, Lilia; Fendrik, Alejandro J.

    A model with spherical symmetry is proposed. We analyze the appropriate parameters of cell differentiation for different kinds of cells (Cancer Stem Cells (CSC) and Differentiated Cells (DC)). The plasticity (the capacity of a DC to return to its previous CSC state) is taken into account. Following this hypothesis, the dissemination of CSCs to another organ is analyzed. The location of the cells in the tumor and the plasticity range for possible metastasis are discussed.

  4. The Computational Complexity of Two-Level Morphology.

    DTIC Science & Technology

    1985-11-01

    automaton component of a KIMMO system specified as in Gajek et al. (1983) and y is a string over the alphabet of the KIMMO system. An actual instance...as before, and D is the dictionary component of a KIMMO system described as specified in Gajek et al. (1983). An actual instance of KIMMO...the smaller machines (Karttunen, 1983:176). Gajek et al. (1983) use the terms DIGGMACHINE and DIGRMACHINE to refer to the generation and recognition

  5. A quantum relativistic battle of the sexes cellular automaton

    NASA Astrophysics Data System (ADS)

    Alonso-Sanz, Ramón; Situ, Haozhen

    2017-02-01

    The effect of variable entangling on the dynamics of a spatial quantum relativistic formulation of the iterated battle of the sexes game is studied in this work. The game is played in the cellular automata manner, i.e., with local and synchronous interaction. The game is assessed in fair and unfair contests. Despite the full range of quantum parameters initially accessible, they promptly converge into fairly stable configurations that often show rich spatial structures in simulations with non-negligible entanglement.

  6. Advanced Stirling Convertor Heater Head Durability and Reliability Quantification

    NASA Technical Reports Server (NTRS)

    Krause, David L.; Shah, Ashwin R.; Korovaichuk, Igor; Kalluri, Sreeramesh

    2008-01-01

    The National Aeronautics and Space Administration (NASA) has identified the high efficiency Advanced Stirling Radioisotope Generator (ASRG) as a candidate power source for long duration Science missions, such as lunar applications, Mars rovers, and deep space missions, that require reliable design lifetimes of up to 17 years. Resistance to creep deformation of the MarM-247 heater head (HH), a structurally critical component of the ASRG Advanced Stirling Convertor (ASC), under high temperatures (up to 850 C) is a key design driver for durability. Inherent uncertainties in the creep behavior of the thin-walled HH and the variations in the wall thickness, control temperature, and working gas pressure need to be accounted for in the life and reliability prediction. Due to the availability of very limited test data, assuring life and reliability of the HH is a challenging task. The NASA Glenn Research Center (GRC) has adopted an integrated approach combining available uniaxial MarM-247 material behavior testing, HH benchmark testing and advanced analysis in order to demonstrate the integrity, life and reliability of the HH under expected mission conditions. The proposed paper describes analytical aspects of the deterministic and probabilistic approaches and results. The deterministic approach involves development of the creep constitutive model for the MarM-247 (akin to the Oak Ridge National Laboratory master curve model used previously for Inconel 718 (Special Metals Corporation)) and nonlinear finite element analysis to predict the mean life. The probabilistic approach includes evaluation of the effect of design variable uncertainties in material creep behavior, geometry and operating conditions on life and reliability for the expected life. The sensitivity of the uncertainties in the design variables on the HH reliability is also quantified, and guidelines to improve reliability are discussed.

  7. Stochastic Seismic Inversion and Migration for Offshore Site Investigation in the Northern Gulf of Mexico

    NASA Astrophysics Data System (ADS)

    Son, J.; Medina-Cetina, Z.

    2017-12-01

    We discuss the comparison between deterministic and stochastic optimization approaches to the nonlinear geophysical full-waveform inverse problem, based on the seismic survey data from Mississippi Canyon in the Northern Gulf of Mexico. Since subsea engineering and offshore construction projects actively require reliable ground models from various site investigations, the primary goal of this study is to reconstruct accurate subsurface information on the soil and rock material profiles under the seafloor. The shallow sediment layers have naturally formed heterogeneous formations which may cause unwanted marine landslides or foundation failures of underwater infrastructure. We chose the quasi-Newton and simulated annealing methods as the deterministic and stochastic optimization algorithms, respectively. Seismic forward modeling, based on a finite difference method with absorbing boundary conditions, implements the iterative simulations in the inverse modeling. We briefly report on numerical experiments using synthetic data as an offshore ground model which contains shallow artificial target profiles of geomaterials under the seafloor. We apply seismic migration processing and generate a Voronoi tessellation on the two-dimensional space domain to improve the computational efficiency of the imaging and stratigraphic velocity model reconstruction. We then report on the details of a field data implementation, which shows the complex geologic structures in the Northern Gulf of Mexico. Lastly, we compare the new inverted image of subsurface site profiles in the space domain with the previously processed seismic image in the time domain at the same location. Overall, stochastic optimization for seismic inversion with migration and Voronoi tessellation shows significant promise to improve the subsurface imaging of ground models and the computational efficiency required for full waveform inversion. We anticipate that improving the inversion of shallow layers from geophysical data will better support offshore site investigation.

  8. A Scalable Computational Framework for Establishing Long-Term Behavior of Stochastic Reaction Networks

    PubMed Central

    Khammash, Mustafa

    2014-01-01

    Reaction networks are systems in which the populations of a finite number of species evolve through predefined interactions. Such networks are found as modeling tools in many biological disciplines such as biochemistry, ecology, epidemiology, immunology, systems biology and synthetic biology. It is now well-established that, for small population sizes, stochastic models for biochemical reaction networks are necessary to capture randomness in the interactions. The tools for analyzing such models, however, still lag far behind their deterministic counterparts. In this paper, we bridge this gap by developing a constructive framework for examining the long-term behavior and stability properties of the reaction dynamics in a stochastic setting. In particular, we address the problems of determining ergodicity of the reaction dynamics, which is analogous to having a globally attracting fixed point for deterministic dynamics. We also examine when the statistical moments of the underlying process remain bounded with time and when they converge to their steady state values. The framework we develop relies on a blend of ideas from probability theory, linear algebra and optimization theory. We demonstrate that the stability properties of a wide class of biological networks can be assessed from our sufficient theoretical conditions that can be recast as efficient and scalable linear programs, well-known for their tractability. It is notably shown that the computational complexity is often linear in the number of species. We illustrate the validity, the efficiency and the wide applicability of our results on several reaction networks arising in biochemistry, systems biology, epidemiology and ecology. The biological implications of the results as well as an example of a non-ergodic biological network are also discussed. PMID:24968191

  9. On spurious detection of linear response and misuse of the fluctuation-dissipation theorem in finite time series

    NASA Astrophysics Data System (ADS)

    Gottwald, Georg A.; Wormell, J. P.; Wouters, Jeroen

    2016-09-01

    Using a sensitive statistical test we determine whether or not one can detect the breakdown of linear response given observations of deterministic dynamical systems. A goodness-of-fit statistic is developed for a linear statistical model of the observations, based on results for central limit theorems for deterministic dynamical systems, and used to detect linear response breakdown. We apply the method to discrete maps which do not obey linear response and show that the successful detection of breakdown depends on the length of the time series, the magnitude of the perturbation and on the choice of the observable. We find that in order to reliably reject the assumption of linear response for typical observables, sufficiently large data sets are needed. Even for simple systems such as the logistic map, one needs of the order of 10^6 observations to reliably detect the breakdown with a confidence level of 95%; if fewer observations are available one may be falsely led to conclude that linear response theory is valid. The smaller the applied perturbation, the larger the amount of data required. For judiciously chosen observables the necessary amount of data can be drastically reduced, but this requires detailed a priori knowledge about the invariant measure, which is typically not available for complex dynamical systems. Furthermore we explore the use of the fluctuation-dissipation theorem (FDT) in cases with limited data length or coarse-graining of observations. The FDT, if applied naively to a system without linear response, is shown to be very sensitive to the details of the sampling method, resulting in erroneous predictions of the response.
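
    The data-length issue can be reproduced with the logistic map itself; the sketch below estimates the response of the orbit's long-time average to a small parameter perturbation. Parameter values are illustrative, and the parameter a is kept below 4 so orbits remain in [0, 1]:

```python
def orbit_mean(a, n, x0=0.3, burn=1000):
    x = x0
    for _ in range(burn):                 # discard the transient
        x = a * x * (1.0 - x)
    total = 0.0
    for _ in range(n):
        x = a * x * (1.0 - x)
        total += x
    return total / n

eps = 0.01
for n in (10**3, 10**5):
    response = (orbit_mean(3.99, n) - orbit_mean(3.99 - eps, n)) / eps
    print(n, response)                    # estimates fluctuate strongly at small n
```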

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Y M; Bush, K; Han, B

    Purpose: Accurate and fast dose calculation is a prerequisite of precision radiation therapy in modern photon and particle therapy. While Monte Carlo (MC) dose calculation provides high dosimetric accuracy, the drastically increased computational time hinders its routine use. Deterministic dose calculation methods are fast, but problematic in the presence of tissue density inhomogeneity. We leverage the useful features of deterministic methods and MC to develop a hybrid dose calculation platform with autonomous utilization of MC and deterministic calculation depending on the local geometry, for optimal accuracy and speed. Methods: Our platform utilizes a Geant4-based “localized Monte Carlo” (LMC) method that isolates MC dose calculations only to volumes that have potential for dosimetric inaccuracy. In our approach, additional structures are created encompassing heterogeneous volumes. Deterministic methods calculate dose and energy fluence up to the volume surfaces, where the energy fluence distribution is sampled into discrete histories and transported using MC. Histories exiting the volume are converted back into energy fluence, and transported deterministically. By matching boundary conditions at both interfaces, the deterministic dose calculation accounts for dose perturbations “downstream” of localized heterogeneities. Hybrid dose calculation was performed for water and anthropomorphic phantoms. Results: We achieved <1% agreement between deterministic and MC calculations in the water benchmark for photon and proton beams, and dose differences of 2%–15% could be observed in heterogeneous phantoms. The saving in computational time (a factor of ∼4–7 compared to a full Monte Carlo dose calculation) was found to be approximately proportional to the volume of the heterogeneous region. Conclusion: Our hybrid dose calculation approach takes advantage of the computational efficiency of deterministic methods and the accuracy of MC, providing a practical tool for high-performance dose calculation in modern RT. The approach is generalizable to all modalities where heterogeneities play a large role, notably particle therapy.

  11. Deterministic Tectonic Origin Tsunami Hazard Analysis for the Eastern Mediterranean and its Connected Seas

    NASA Astrophysics Data System (ADS)

    Necmioglu, O.; Meral Ozel, N.

    2014-12-01

    Accurate earthquake source parameters are essential for any tsunami hazard assessment and mitigation, including early warning systems. Complex tectonic setting makes the a priori accurate assumptions of earthquake source parameters difficult and characterization of the faulting type is a challenge. Information on tsunamigenic sources is of crucial importance in the Eastern Mediterranean and its Connected Seas, especially considering the short arrival times and lack of offshore sea-level measurements. In addition, the scientific community have had to abandon the paradigm of a ''maximum earthquake'' predictable from simple tectonic parameters (Ruff and Kanamori, 1980) in the wake of the 2004 Sumatra event (Okal, 2010) and one of the lessons learnt from the 2011 Tohoku event was that tsunami hazard maps may need to be prepared for infrequent gigantic earthquakes as well as more frequent smaller-sized earthquakes (Satake, 2011). We have initiated an extensive modeling study to perform a deterministic Tsunami Hazard Analysis for the Eastern Mediterranean and its Connected Seas. Characteristic earthquake source parameters (strike, dip, rake, depth, Mwmax) at each 0.5° x 0.5° size bin for 0-40 km depth (total of 310 bins) and for 40-100 km depth (total of 92 bins) in the Eastern Mediterranean, Aegean and Black Sea region (30°N-48°N and 22°E-44°E) have been assigned from the harmonization of the available databases and previous studies. These parameters have been used as input parameters for the deterministic tsunami hazard modeling. Nested Tsunami simulations of 6h duration with a coarse (2 arc-min) and medium (1 arc-min) grid resolution have been simulated at EC-JRC premises for Black Sea and Eastern and Central Mediterranean (30°N-41.5°N and 8°E-37°E) for each source defined using shallow water finite-difference SWAN code (Mader, 2004) for the magnitude range of 6.5 - Mwmax defined for that bin with a Mw increment of 0.1. 
Results show that not only earthquakes resembling well-known historical events such as AD 365 or AD 1303 in the Hellenic Arc, but also earthquakes with lower magnitudes contribute to the tsunami hazard in the study area.

  12. The past, present and future of cyber-physical systems: a focus on models.

    PubMed

    Lee, Edward A

    2015-02-26

    This paper is about better engineering of cyber-physical systems (CPSs) through better models. Deterministic models have historically proven extremely useful and arguably form the kingpin of the industrial revolution and the digital and information technology revolutions. Key deterministic models that have proven successful include differential equations, synchronous digital logic and single-threaded imperative programs. Cyber-physical systems, however, combine these models in such a way that determinism is not preserved. Two projects show that deterministic CPS models with faithful physical realizations are possible and practical. The first project is PRET, which shows that the timing precision of synchronous digital logic can be practically made available at the software level of abstraction. The second project is Ptides (programming temporally-integrated distributed embedded systems), which shows that deterministic models for distributed cyber-physical systems have practical faithful realizations. These projects are existence proofs that deterministic CPS models are possible and practical.

  13. The Past, Present and Future of Cyber-Physical Systems: A Focus on Models

    PubMed Central

    Lee, Edward A.

    2015-01-01

    This paper is about better engineering of cyber-physical systems (CPSs) through better models. Deterministic models have historically proven extremely useful and arguably form the kingpin of the industrial revolution and the digital and information technology revolutions. Key deterministic models that have proven successful include differential equations, synchronous digital logic and single-threaded imperative programs. Cyber-physical systems, however, combine these models in such a way that determinism is not preserved. Two projects show that deterministic CPS models with faithful physical realizations are possible and practical. The first project is PRET, which shows that the timing precision of synchronous digital logic can be practically made available at the software level of abstraction. The second project is Ptides (programming temporally-integrated distributed embedded systems), which shows that deterministic models for distributed cyber-physical systems have practical faithful realizations. These projects are existence proofs that deterministic CPS models are possible and practical. PMID:25730486

  14. Uncertainties in the 2004 Sumatra–Andaman source through nonlinear stochastic inversion of tsunami waves

    PubMed Central

    Venugopal, M.; Roy, D.; Rajendran, K.; Guillas, S.; Dias, F.

    2017-01-01

    Numerical inversions for earthquake source parameters from tsunami wave data usually incorporate subjective elements to stabilize the search. In addition, noisy and possibly insufficient data result in instability and non-uniqueness in most deterministic inversions, which are barely acknowledged. Here, we employ the satellite altimetry data for the 2004 Sumatra–Andaman tsunami event to invert the source parameters. We also include kinematic parameters that improve the description of tsunami generation and propagation, especially near the source. Using a finite fault model that represents the extent of rupture and the geometry of the trench, we perform a new type of nonlinear joint inversion of the slips, rupture velocities and rise times with minimal a priori constraints. Despite persistently good waveform fits, large uncertainties in the joint parameter distribution constitute a remarkable feature of the inversion. These uncertainties suggest that objective inversion strategies should incorporate more sophisticated physical models of seabed deformation in order to significantly improve the performance of early warning systems. PMID:28989311

  15. Making classical ground-state spin computing fault-tolerant.

    PubMed

    Crosson, I J; Bacon, D; Brown, K R

    2010-09-01

    We examine a model of classical deterministic computing in which the ground state of the classical system is a spatial history of the computation. This model is relevant to quantum dot cellular automata as well as to recent universal adiabatic quantum computing constructions. In its most primitive form, systems constructed in this model cannot compute in an error-free manner when working at nonzero temperature. However, by exploiting a mapping between the partition function for this model and probabilistic classical circuits we are able to show that it is possible to make this model effectively error-free. We achieve this by using techniques in fault-tolerant classical computing and the result is that the system can compute effectively error-free if the temperature is below a critical temperature. We further link this model to computational complexity and show that a certain problem concerning finite temperature classical spin systems is complete for the complexity class Merlin-Arthur. This provides an interesting connection between the physical behavior of certain many-body spin systems and computational complexity.

  16. Modelling the long-term fate of mercury in a lowland tidal river. I. Description of two finite segment models.

    PubMed

    Braga, M Cristina B; Birkett, Jason W; Lester, John N; Shaw, George

    2010-02-01

    Crucial determinants of the potential effects of mercury in aquatic ecosystems are the speciation, partitioning, and cycling of its various species. These processes are affected by site-specific factors, such as water chemistry, sediment transport, and hydrodynamics. This study presents two different approaches to the development of one-dimensional/dynamic-deterministic models for the evaluation and prediction of mercury contamination in a lowland tidal river, the River Yare (Norfolk, UK). The models described here were developed to encompass the entire river system and address the mass balance of mercury in a multicompartment system, including tidal reversal and saline limit. The models were focused on river systems, with the River Yare being used as a case study because previous modelling studies have been centred on lakes and wetlands whilst there is a paucity of information for rivers. Initial comparisons with actual measured water parameters (salinity and suspended solids) indicate that both models exhibit good agreement with the actual values.

  17. Probabilistic structural analysis methods for improving Space Shuttle engine reliability

    NASA Technical Reports Server (NTRS)

    Boyce, L.

    1989-01-01

    Probabilistic structural analysis methods are particularly useful in the design and analysis of critical structural components and systems that operate in very severe and uncertain environments. These methods have recently found application in space propulsion systems to improve the structural reliability of Space Shuttle Main Engine (SSME) components. A computer program, NESSUS, based on a deterministic finite-element program and a method of probabilistic analysis (fast probability integration) provides probabilistic structural analysis for selected SSME components. While computationally efficient, it considers both correlated and nonnormal random variables as well as an implicit functional relationship between independent and dependent variables. The program is used to determine the response of a nickel-based superalloy SSME turbopump blade. Results include blade tip displacement statistics due to the variability in blade thickness, modulus of elasticity, Poisson's ratio or density. Modulus of elasticity significantly contributed to blade tip variability while Poisson's ratio did not. Thus, a rational method for choosing parameters to be modeled as random is provided.

  18. Uncertainties in the 2004 Sumatra-Andaman source through nonlinear stochastic inversion of tsunami waves.

    PubMed

    Gopinathan, D; Venugopal, M; Roy, D; Rajendran, K; Guillas, S; Dias, F

    2017-09-01

    Numerical inversions for earthquake source parameters from tsunami wave data usually incorporate subjective elements to stabilize the search. In addition, noisy and possibly insufficient data result in instability and non-uniqueness in most deterministic inversions, which are barely acknowledged. Here, we employ the satellite altimetry data for the 2004 Sumatra-Andaman tsunami event to invert the source parameters. We also include kinematic parameters that improve the description of tsunami generation and propagation, especially near the source. Using a finite fault model that represents the extent of rupture and the geometry of the trench, we perform a new type of nonlinear joint inversion of the slips, rupture velocities and rise times with minimal a priori constraints. Despite persistently good waveform fits, large uncertainties in the joint parameter distribution constitute a remarkable feature of the inversion. These uncertainties suggest that objective inversion strategies should incorporate more sophisticated physical models of seabed deformation in order to significantly improve the performance of early warning systems.

  19. Study of system-size effects on the emergent magnetic monopoles and Dirac strings in artificial kagome spin ice

    NASA Astrophysics Data System (ADS)

    Leon, Alejandro

    2012-02-01

    In this work we study the dynamical properties of a finite array of nanomagnets in artificial kagome spin ice at room temperature. The dynamic response of the array of nanomagnets is studied by implementing a "frustrated cellular automaton" (FCA), based on the charge model. In this model, each dipole is replaced by a dumbbell of two opposite charges, which are situated at the neighbouring vertices of the honeycomb lattice. The FCA simulations allow us to study the dynamics of the system in a real-time and deterministic way, with minimal computational resources. The update function is defined according to the coordination number of the vertices in the system. Our results show that for a set of geometric parameters of the array of nanomagnets, the system exhibits a high density of Dirac strings and a high density of emergent magnetic monopoles. A study of the effect of disorder in the arrangement of the nanomagnets is also incorporated in this work.

  20. Non-monotonic temperature dependence of chaos-assisted diffusion in driven periodic systems

    NASA Astrophysics Data System (ADS)

    Spiechowicz, J.; Talkner, P.; Hänggi, P.; Łuczka, J.

    2016-12-01

    The spreading of a cloud of independent Brownian particles typically proceeds more effectively at higher temperatures, as it derives from the commonly known Sutherland-Einstein relation for systems in thermal equilibrium. Here, we report on a non-equilibrium situation in which the diffusion of a periodically driven Brownian particle moving in a periodic potential decreases with increasing temperature within a finite temperature window. We identify as the cause for this non-intuitive behaviour a dominant deterministic mechanism consisting of a few unstable periodic orbits embedded into a chaotic attractor together with thermal noise-induced dynamical changes upon varying temperature. The presented analysis is based on extensive numerical simulations of the corresponding Langevin equation describing the studied setup as well as on a simplified stochastic model formulated in terms of a three-state Markovian process. Because chaos exists in many natural as well as in artificial systems representing abundant areas of contemporary knowledge, the described mechanism may potentially be discovered in plentiful different contexts.
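
    A minimal Euler-Maruyama sketch of an overdamped, periodically driven Brownian particle in a cosine potential shows how a diffusion estimate can be extracted from an ensemble of trajectories. All parameters below are generic illustrations, not the values studied in the paper:

```python
import math, random

def diffusion_estimate(temp, n_paths=100, n_steps=1000, dt=0.01,
                       a=1.2, omega=1.0, seed=1):
    rng = random.Random(seed)
    xs = [0.0] * n_paths
    for k in range(n_steps):
        drive = a * math.cos(omega * k * dt)        # periodic driving force
        noise = math.sqrt(2.0 * temp * dt)          # thermal noise amplitude
        for i in range(n_paths):
            force = -math.sin(xs[i]) + drive        # -V'(x) + driving
            xs[i] += force * dt + noise * rng.gauss(0.0, 1.0)
    mean = sum(xs) / n_paths
    var = sum((x - mean) ** 2 for x in xs) / n_paths
    return var / (2.0 * n_steps * dt)               # crude estimate of D

print(diffusion_estimate(temp=0.1))
```

Mapping out the temperature window reported in the paper would require far longer runs and the authors' parameter set; this sketch only shows the estimation machinery.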

  1. A mathematical study of a model for childhood diseases with non-permanent immunity

    NASA Astrophysics Data System (ADS)

    Moghadas, S. M.; Gumel, A. B.

    2003-08-01

    Protecting children from diseases that can be prevented by vaccination is a primary goal of health administrators. Since vaccination is considered to be the most effective strategy against childhood diseases, the development of a framework that would predict the optimal vaccine coverage level needed to prevent the spread of these diseases is crucial. This paper provides this framework via qualitative and quantitative analysis of a deterministic mathematical model for the transmission dynamics of a childhood disease in the presence of a preventive vaccine that may wane over time. Using global stability analysis of the model, based on constructing a Lyapunov function, it is shown that the disease can be eradicated from the population if the vaccination coverage level exceeds a certain threshold value. It is also shown that the disease will persist within the population if the coverage level is below this threshold. These results are verified numerically by constructing, and then simulating, a robust semi-explicit second-order finite-difference method.
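The kind of deterministic model analyzed here can be illustrated with a minimal SVIR system with a waning vaccine. The compartment structure, parameter values, and the first-order forward-Euler scheme below are illustrative stand-ins (the paper constructs a more robust second-order semi-explicit finite-difference method):

```python
def simulate_svir(beta=0.8, gamma=0.2, mu=0.02, p=0.6, omega=0.1,
                  dt=0.01, t_end=200.0):
    """Forward-Euler integration of a generic SVIR model with a waning
    vaccine: p is the vaccination rate and omega the waning rate.
    Population fractions satisfy s + v + i + r = 1 at all times, since
    the right-hand sides sum to mu * (1 - total)."""
    s, v, i, r = 0.89, 0.0, 0.01, 0.1
    for _ in range(int(t_end / dt)):
        ds = mu - beta * s * i - (p + mu) * s + omega * v
        dv = p * s - (omega + mu) * v
        di = beta * s * i - (gamma + mu) * i
        dr = gamma * i - mu * r
        s += ds * dt
        v += dv * dt
        i += di * dt
        r += dr * dt
    return s, v, i, r
```

Raising `p` above the threshold implied by the model's reproduction number drives the infective fraction `i` toward zero, mirroring the eradication condition proved in the paper.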

  2. Strong SH-to-Love wave scattering off the Southern California Continental Borderland

    USGS Publications Warehouse

    Yu, Chunquan; Zhan, Zhongwen; Hauksson, Egill; Cochran, Elizabeth S.

    2017-01-01

    Seismic scattering is commonly observed and results from wave propagation in heterogeneous media. Yet, deterministic characterization of the scatterers associated with lateral heterogeneities remains challenging. In this study, we analyze broadband waveforms recorded by the Southern California Seismic Network and observe strongly scattered Love waves following the arrival of the teleseismic SH wave. These scattered Love waves travel approximately in the same (azimuthal) direction as the incident SH wave at a dominant period of ~10 s but at an apparent velocity of ~3.6 km/s, as compared to ~11 km/s for the SH wave. Back-projection suggests that this strong scattering is associated with pronounced bathymetric relief in the Southern California Continental Borderland, in particular the Patton Escarpment. Finite-difference simulations using a simplified 2-D bathymetric and crustal model are able to predict the arrival times and amplitudes of the major scatterers. The modeling suggests a relatively low shear wave velocity in the Continental Borderland.

  3. Continuous data assimilation for the three-dimensional Brinkman-Forchheimer-extended Darcy model

    NASA Astrophysics Data System (ADS)

    Markowich, Peter A.; Titi, Edriss S.; Trabelsi, Saber

    2016-04-01

    In this paper we introduce and analyze an algorithm for continuous data assimilation for a three-dimensional Brinkman-Forchheimer-extended Darcy (3D BFeD) model of porous media. This model is believed to be accurate when the flow velocity is too large for Darcy’s law to be valid, and additionally the porosity is not too small. The algorithm is inspired by ideas developed for designing finite-parameters feedback control for dissipative systems. It aims to obtain improved estimates of the state of the physical system by incorporating deterministic or noisy measurements and observations. Specifically, the algorithm involves a feedback control that nudges the large scales of the approximate solution toward those of the reference solution associated with the spatial measurements. In the first part of the paper, we present a few results of existence and uniqueness of weak and strong solutions of the 3D BFeD system. The second part is devoted to the convergence analysis of the data assimilation algorithm.
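The nudging idea, a feedback term that pulls the approximate solution toward the observed large scales, can be sketched on a small stand-in system. The code below applies it to the Lorenz-63 equations rather than the 3D BFeD model, observing only the x component; the coupling strength and step sizes are illustrative assumptions:

```python
import numpy as np

def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz-63 system."""
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def nudged_assimilation(mu=30.0, dt=0.001, steps=20_000, rng=None):
    """Feedback-nudging data assimilation: the assimilated state v is
    driven toward the reference ("true") state u through observations
    of u's x component only, via the term -mu * (v_x - u_x).
    Returns the final error ||v - u||."""
    rng = rng if rng is not None else np.random.default_rng(0)
    u = np.array([1.0, 1.0, 1.0])        # reference trajectory
    v = rng.standard_normal(3) * 5.0     # wrong initial guess
    for _ in range(steps):
        du = lorenz(u)
        dv = lorenz(v)
        dv[0] -= mu * (v[0] - u[0])      # nudge the observed component
        u = u + dt * du
        v = v + dt * dv
    return float(np.linalg.norm(v - u))
```

Despite starting from a wrong initial condition and seeing only one observed component, the nudged state synchronizes with the reference trajectory, which is the qualitative content of the convergence analysis in the paper.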

  4. Test-state approach to the quantum search problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sehrawat, Arun; Nguyen, Le Huy; Graduate School for Integrative Sciences and Engineering, National University of Singapore, Singapore 117597

    2011-05-15

    The search for 'a quantum needle in a quantum haystack' is a metaphor for the problem of finding out which one of a permissible set of unitary mappings - the oracles - is implemented by a given black box. Grover's algorithm solves this problem with quadratic speedup as compared with the analogous search for 'a classical needle in a classical haystack'. Since the outcome of Grover's algorithm is probabilistic - it gives the correct answer with high probability, not with certainty - the answer requires verification. For this purpose we introduce specific test states, one for each oracle. These test states can also be used to realize 'a classical search for the quantum needle' which is deterministic - it always gives a definite answer after a finite number of steps - and 3.41 times as fast as the purely classical search. Since the test-state search and Grover's algorithm look for the same quantum needle, the average number of oracle queries of the test-state search is the classical benchmark for Grover's algorithm.
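The purely classical benchmark is easy to make concrete: querying candidates one by one identifies the needle among n items in at most n - 1 and on average about n/2 oracle calls, versus roughly (pi/4) * sqrt(n) for Grover. The sketch below only reproduces this classical accounting; the paper's deterministic test-state construction is not reproduced here:

```python
import random

def classical_search(oracle, n):
    """Identify the marked item among n candidates by direct queries.
    After n - 1 failed queries the last candidate must be the needle,
    so at most n - 1 oracle calls are ever needed."""
    queries = 0
    for k in range(n - 1):
        queries += 1
        if oracle(k):
            return k, queries
    return n - 1, queries

def average_queries(n, trials=2000, seed=0):
    """Empirical average query count over uniformly random needles;
    approaches roughly n / 2 for large n."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        needle = rng.randrange(n)
        found, q = classical_search(lambda k, m=needle: k == m, n)
        assert found == needle
        total += q
    return total / trials
```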

  5. Interaction of the sonic boom with atmospheric turbulence

    NASA Technical Reports Server (NTRS)

    Rusak, Zvi; Cole, Julian D.

    1994-01-01

    Theoretical research was carried out to study the effect of free-stream turbulence on sonic boom pressure fields. A new transonic small-disturbance model to analyze the interactions of random disturbances with a weak shock was developed. The model equation has an extended form of the classic small-disturbance equation for unsteady transonic aerodynamics. An alternative approach shows that the pressure field may be described by an equation that has an extended form of the classic nonlinear acoustics equation that describes the propagation of sound beams with narrow angular spectrum. The model shows that diffraction effects, nonlinear steepening effects, focusing and caustic effects and random induced vorticity fluctuations interact simultaneously to determine the development of the shock wave in space and time and the pressure field behind it. A finite-difference algorithm to solve the mixed type elliptic-hyperbolic flows around the shock wave was also developed. Numerical calculations of shock wave interactions with various deterministic and random fluctuations will be presented in a future report.

  6. Nanoscale magnetic ratchets based on shape anisotropy

    NASA Astrophysics Data System (ADS)

    Cui, Jizhai; Keller, Scott M.; Liang, Cheng-Yen; Carman, Gregory P.; Lynch, Christopher S.

    2017-02-01

    Controlling magnetization using piezoelectric strain through the magnetoelectric effect offers several orders of magnitude reduction in energy consumption for spintronic applications. However strain is a uniaxial effect and, unlike directional magnetic field or spin-polarized current, cannot induce a full 180° reorientation of the magnetization vector when acting alone. We have engineered novel ‘peanut’ and ‘cat-eye’ shaped nanomagnets on piezoelectric substrates that undergo repeated deterministic 180° magnetization rotations in response to individual electric-field-induced strain pulses by breaking the uniaxial symmetry using shape anisotropy. This behavior can be likened to a magnetic ratchet, advancing magnetization clockwise with each piezostrain trigger. The results were validated using micromagnetics implemented in a multiphysics finite elements code to simulate the engineered spatial and temporal magnetic behavior. The engineering principles start from a target device function and proceed to the identification of shapes that produce the desired function. This approach opens a broad design space for next generation magnetoelectric spintronic devices.

  7. A Novel Deployment Scheme Based on Three-Dimensional Coverage Model for Wireless Sensor Networks

    PubMed Central

    Xiao, Fu; Yang, Yang; Wang, Ruchuan; Sun, Lijuan

    2014-01-01

    Coverage pattern and deployment strategy are directly related to the optimum allocation of a wireless sensor network's limited resources, such as node energy, communication bandwidth, and computing power, and they largely determine the achievable quality of the network. A three-dimensional coverage pattern and deployment scheme are proposed in this paper. Firstly, by analyzing regular polyhedron models in a three-dimensional scene, a coverage pattern based on cuboids is proposed, and the relationship between coverage and the sensor nodes' sensing radius is deduced; the minimum number of sensor nodes needed to maintain full coverage of the network area is also calculated. Finally, sensor nodes are deployed according to the coverage pattern after the monitored area is subdivided into a finite 3D grid. Experimental results show that, compared with the traditional random method, the number of sensor nodes is effectively reduced while the coverage rate of the monitored area is ensured using our coverage pattern and deterministic deployment scheme. PMID:25045747
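One illustrative reading of a cuboid-based coverage pattern: tile the monitored cuboid with cubes inscribed in each node's sensing sphere (side length 2r/sqrt(3)), one sensor at each cube centre, so every point of a cube lies within range of its sensor. The formula below is a sketch under that assumption, not necessarily the paper's exact derivation:

```python
import math

def nodes_for_full_coverage(dims, r):
    """Number of sensors when the monitored cuboid with edge lengths
    `dims` is tiled by cubes inscribed in the sensing sphere of radius
    r (cube side 2r/sqrt(3)); each sensor at a cube centre covers its
    entire cube, so the whole region is covered."""
    side = 2.0 * r / math.sqrt(3.0)
    return math.prod(math.ceil(d / side) for d in dims)
```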

  8. Noise-Driven Phenotypic Heterogeneity with Finite Correlation Time in Clonal Populations.

    PubMed

    Lee, UnJin; Skinner, John J; Reinitz, John; Rosner, Marsha Rich; Kim, Eun-Jin

    2015-01-01

    There has been increasing awareness in the wider biological community of the role of clonal phenotypic heterogeneity in phenomena such as cellular bet-hedging and decision making, as in the case of the phage-λ lysis/lysogeny and B. subtilis competence/vegetative pathways. Here, we report on the effect of stochasticity in growth rate, cellular memory/intermittency, and its relation to phenotypic heterogeneity. We first present a linear stochastic differential model with finite auto-correlation time, where a randomly fluctuating growth rate with a negative average is shown to result in exponential growth for sufficiently large fluctuations in growth rate. We then present a non-linear stochastic self-regulation model where the loss of coherent self-regulation and an increase in noise can induce a shift from bounded to unbounded growth. An important consequence of these models is that while the average change in phenotype may not differ for various parameter sets, the variance of the resulting distributions may considerably change. This demonstrates the necessity of understanding the influence of variance and heterogeneity within seemingly identical clonal populations, while providing a mechanism for varying functional consequences of such heterogeneity. Our results highlight the importance of a paradigm shift from a deterministic to a probabilistic view of clonality in understanding selection as an optimization problem on noise-driven processes, resulting in a wide range of biological implications, from robustness to environmental stress to the development of drug resistance.

  9. Does Geophysics Need "A new kind of Science"?

    NASA Astrophysics Data System (ADS)

    Turcotte, D. L.; Rundle, J. B.

    2002-12-01

    Stephen Wolfram's book "A New Kind of Science" has received a great deal of attention in the last six months, both positive and negative. The theme of the book is that "cellular automata", which arise from spatial and temporal coarse-graining of equations of motion, provide the foundations for a new nonlinear science of "complexity". The old science is the science of partial differential equations. Some of the major contributions of this old science have been in geophysics, i.e. gravity, magnetics, seismic waves, heat flow. The basis of the new science is the use of massive computing and numerical simulations. The new science is motivated by the observations that many physical systems display a vast multiplicity of space and time scales, and have hidden dynamics that in many cases are impossible to directly observe. An example would be molecular dynamics. Statistical physics derives continuum equations from the discrete interactions between atoms and molecules; in the modern world the continuum equations are then discretized using finite differences, finite elements, etc. in order to obtain numerical solutions. Examples of widely used cellular automata models include diffusion limited aggregation and site percolation, as well as the class of models said to exhibit self-organized criticality: the sand-pile model, the slider-block model, and the forest-fire model. Applications of these models include drainage networks, seismicity, distributions of minerals, and the evolution of landforms and coastlines. Simple deterministic maps, e.g. the logistic map, generate deterministic chaos.
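The logistic map mentioned above is the textbook example of deterministic chaos from a simple iterated rule. At r = 4 it shows sensitive dependence on initial conditions: two orbits started a billionth apart diverge to order one within a few dozen iterations, while at r = 2.5 the same rule settles onto the fixed point 1 - 1/r = 0.6:

```python
def logistic_orbit(r, x0=0.2, n=10):
    """Iterate the logistic map x -> r * x * (1 - x) for n steps
    and return the orbit."""
    x = x0
    orbit = []
    for _ in range(n):
        x = r * x * (1 - x)
        orbit.append(x)
    return orbit

# Chaotic regime: nearby initial conditions diverge exponentially.
a = logistic_orbit(4.0, x0=0.2, n=50)
b = logistic_orbit(4.0, x0=0.2 + 1e-9, n=50)
```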

  10. Stability analysis of multi-group deterministic and stochastic epidemic models with vaccination rate

    NASA Astrophysics Data System (ADS)

    Wang, Zhi-Gang; Gao, Rui-Mei; Fan, Xiao-Ming; Han, Qi-Xing

    2014-09-01

    We discuss in this paper a deterministic multi-group MSIR epidemic model with a vaccination rate. The basic reproduction number ℛ0, a key parameter in epidemiology, is a threshold which determines the persistence or extinction of the disease. By using Lyapunov function techniques, we show that if ℛ0 is greater than 1 and the deterministic model obeys some conditions, then the disease will prevail: the infective class persists and the endemic state is asymptotically stable in a feasible region. If ℛ0 is less than or equal to 1, then the infective class disappears and the disease dies out. In addition, stochastic noise around the endemic equilibrium is added to the deterministic MSIR model, extending the deterministic model to a system of stochastic ordinary differential equations. For the stochastic version, we carry out a detailed analysis of the asymptotic behavior of the model. Moreover, when the stochastic system obeys some conditions and ℛ0 is greater than 1, we deduce that the stochastic system is stochastically asymptotically stable. Finally, the deterministic and stochastic model dynamics are illustrated through computer simulations.

  11. Columnar and Equiaxed Solidification of Al-7 wt.% Si Alloys in Reduced Gravity in the Framework of the CETSOL Project

    NASA Astrophysics Data System (ADS)

    Zimmermann, G.; Sturz, L.; Nguyen-Thi, H.; Mangelinck-Noel, N.; Li, Y. Z.; Gandin, C.-A.; Fleurisson, R.; Guillemot, G.; McFadden, S.; Mooney, R. P.; Voorhees, P.; Roosz, A.; Ronaföldi, A.; Beckermann, C.; Karma, A.; Chen, C.-H.; Warnken, N.; Saad, A.; Grün, G.-U.; Grohn, M.; Poitrault, I.; Pehl, T.; Nagy, I.; Todt, D.; Minster, O.; Sillekens, W.

    2017-08-01

    During casting, often a dendritic microstructure is formed, resulting in a columnar or an equiaxed grain structure, or leading to a transition from columnar to equiaxed growth (CET). The detailed knowledge of the critical parameters for the CET is important because the microstructure affects materials properties. To provide unique data for testing of fundamental theories of grain and microstructure formation, solidification experiments in microgravity environment were performed within the European Space Agency Microgravity Application Promotion (ESA MAP) project Columnar-to-Equiaxed Transition in SOLidification Processing (CETSOL). Reduced gravity allows for purely diffusive solidification conditions, i.e., suppressing melt flow and sedimentation and floatation effects. On-board the International Space Station, Al-7 wt.% Si alloys with and without grain refiners were solidified in different temperature gradients and with different cooling conditions. Detailed analysis of the microstructure and the grain structure showed purely columnar growth for nonrefined alloys. The CET was detected only for refined alloys, either as a sharp CET in the case of a sudden increase in the solidification velocity or as a progressive CET in the case of a continuous decrease of the temperature gradient. The present experimental data were used for numerical modeling of the CET with three different approaches: (1) a front tracking model using an equiaxed growth model, (2) a three-dimensional (3D) cellular automaton-finite element model, and (3) a 3D dendrite needle network method. Each model allows for predicting the columnar dendrite tip undercooling and the growth rate with respect to time. Furthermore, the positions of CET and the spatial extent of the CET, being sharp or progressive, are in reasonably good quantitative agreement with experimental measurements.

  12. AmapSim: a structural whole-plant simulator based on botanical knowledge and designed to host external functional models.

    PubMed

    Barczi, Jean-François; Rey, Hervé; Caraglio, Yves; de Reffye, Philippe; Barthélémy, Daniel; Dong, Qiao Xue; Fourcaud, Thierry

    2008-05-01

    AmapSim is a tool that implements a structural plant growth model based on a botanical theory and simulates plant morphogenesis to produce accurate, complex and detailed plant architectures. This software is the result of more than a decade of research and development devoted to plant architecture. New advances in the software development have yielded plug-in external functions that open up the simulator to functional processes. The simulation of plant topology is based on the growth of a set of virtual buds whose activity is modelled using stochastic processes. The geometry of the resulting axes is modelled by simple descriptive functions. The potential growth of each bud is represented by means of a numerical value called physiological age, which controls the value for each parameter in the model. The set of possible values for physiological ages is called the reference axis. In order to mimic morphological and architectural metamorphosis, the value allocated for the physiological age of buds evolves along this reference axis according to an oriented finite state automaton whose occupation and transition law follows a semi-Markovian function. Simulations were performed on tomato plants to demonstrate how the AmapSim simulator can interface external modules, e.g. a GREENLAB growth model and a radiosity model. The algorithmic ability provided by AmapSim, e.g. the reference axis, enables unified control to be exercised over plant development parameter values, depending on the biological process target: how to affect the local pertinent process, i.e. the pertinent parameter(s), while keeping the rest unchanged. This opening up to external functions also offers a broadened field of applications and thus allows feedback between plant growth and the physical environment.
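The oriented finite state automaton for a bud's physiological age can be sketched as a forward-only chain along the reference axis. For brevity the dwell time in each state below is geometric (memoryless), whereas AmapSim's occupation law is semi-Markovian; the stage count and transition probabilities are illustrative:

```python
import random

def simulate_bud(n_stages, p_advance, steps, rng=None):
    """Oriented finite state automaton for a bud's physiological age:
    the state index can only move forward along the reference axis.
    p_advance[k] is the per-step probability of advancing from stage k.
    Returns the sequence of visited stage indices."""
    rng = rng if rng is not None else random.Random(1)
    state = 0
    path = [0]
    for _ in range(steps):
        if state < n_stages - 1 and rng.random() < p_advance[state]:
            state += 1
        path.append(state)
    return path
```

Because the automaton is oriented, the physiological age never decreases, which is what lets a single scalar index drive all model parameters through plant ontogeny.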

  13. AmapSim: A Structural Whole-plant Simulator Based on Botanical Knowledge and Designed to Host External Functional Models

    PubMed Central

    Barczi, Jean-François; Rey, Hervé; Caraglio, Yves; de Reffye, Philippe; Barthélémy, Daniel; Dong, Qiao Xue; Fourcaud, Thierry

    2008-01-01

    Background and Aims AmapSim is a tool that implements a structural plant growth model based on a botanical theory and simulates plant morphogenesis to produce accurate, complex and detailed plant architectures. This software is the result of more than a decade of research and development devoted to plant architecture. New advances in the software development have yielded plug-in external functions that open up the simulator to functional processes. Methods The simulation of plant topology is based on the growth of a set of virtual buds whose activity is modelled using stochastic processes. The geometry of the resulting axes is modelled by simple descriptive functions. The potential growth of each bud is represented by means of a numerical value called physiological age, which controls the value for each parameter in the model. The set of possible values for physiological ages is called the reference axis. In order to mimic morphological and architectural metamorphosis, the value allocated for the physiological age of buds evolves along this reference axis according to an oriented finite state automaton whose occupation and transition law follows a semi-Markovian function. Key Results Simulations were performed on tomato plants to demonstrate how the AmapSim simulator can interface external modules, e.g. a GREENLAB growth model and a radiosity model. Conclusions The algorithmic ability provided by AmapSim, e.g. the reference axis, enables unified control to be exercised over plant development parameter values, depending on the biological process target: how to affect the local pertinent process, i.e. the pertinent parameter(s), while keeping the rest unchanged. This opening up to external functions also offers a broadened field of applications and thus allows feedback between plant growth and the physical environment. PMID:17766310

  14. An immersed boundary-lattice Boltzmann model for biofilm growth and its impact on the NAPL dissolution in porous media

    NASA Astrophysics Data System (ADS)

    Benioug, M.; Yang, X.

    2017-12-01

    The evolution of microbial phase within porous medium is a complex process that involves growth, mortality, and detachment of the biofilm or attachment of moving cells. A better understanding of the interactions among biofilm growth, flow and solute transport and a rigorous modeling of such processes are essential for a more accurate prediction of the fate of pollutants (e.g. NAPLs) in soils. However, very few works are focused on the study of such processes in multiphase conditions (oil/water/biofilm systems). Our proposed numerical model takes into account the mechanisms that control bacterial growth and its impact on the dissolution of NAPL. An Immersed Boundary - Lattice Boltzmann Model (IB-LBM) is developed for flow simulations along with non-boundary conforming finite volume methods (volume of fluid and reconstruction methods) used for reactive solute transport. A sophisticated cellular automaton model is also developed to describe the spatial distribution of bacteria. A series of numerical simulations have been performed on complex porous media. A quantitative diagram representing the transitions between the different biofilm growth patterns is proposed. The bioenhanced dissolution of NAPL in the presence of biofilms is simulated at the pore scale. A uniform dissolution approach has been adopted to describe the temporal evolution of trapped blobs. Our simulations focus on the dissolution of NAPL in abiotic and biotic conditions. In abiotic conditions, we analyze the effect of the spatial distribution of NAPL blobs on the dissolution rate under different assumptions (blobs size, Péclet number). In biotic conditions, different conditions are also considered (spatial distribution, reaction kinetics, toxicity) and analyzed. The simulated results are consistent with those obtained from the literature.

  15. Bipedal gait model for precise gait recognition and optimal triggering in foot drop stimulator: a proof of concept.

    PubMed

    Shaikh, Muhammad Faraz; Salcic, Zoran; Wang, Kevin I-Kai; Hu, Aiguo Patrick

    2018-03-10

    Electrical stimulators are often prescribed to correct foot drop walking. However, commercial foot drop stimulators trigger inappropriately under certain non-gait scenarios. Past research addressed this limitation by defining stimulation control based on an automaton of the gait cycle executed by the affected (foot drop) limb only. Since gait is a collaborative activity of both feet, this research highlights the role of the normal foot for robust gait detection and stimulation triggering. A novel bipedal gait model is proposed in which the gait cycle is realized as an automaton based on concurrent gait sub-phases (states) from each foot. The input for state transitions is fused information from feet-worn pressure and inertial sensors. Thereafter, a bipedal gait model-based stimulation control algorithm is developed. As a feasibility study, the bipedal gait model and stimulation control are evaluated in a real-time simulation manner on normal and simulated foot drop gait measurements from 16 able-bodied participants with three speed variations, under inappropriate triggering scenarios and with foot drop rehabilitation exercises. Also, the stimulation control employed in commercial foot drop stimulators and in single-foot gait-based foot drop stimulators is compared alongside. Gait detection accuracy (98.9%) and precise triggering under all investigations demonstrate the reliability of the bipedal gait model. This suggests that gait detection leveraging bipedal periodicity is a promising strategy for rectifying the prevalent stimulation triggering deficiencies of commercial foot drop stimulators. Graphical abstract Bipedal information-based gait recognition and stimulation triggering.
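The bipedal idea, conditioning the trigger on the states of both feet, can be sketched as two concurrent state machines with a fused trigger rule. The phase names and the specific rule below are hypothetical illustrations of the approach, not the paper's actual automaton:

```python
# Hypothetical per-foot gait cycle: each foot steps through
# heel_strike -> flat -> heel_off -> swing and back.
AFFECTED_NEXT = {"heel_strike": "flat", "flat": "heel_off",
                 "heel_off": "swing", "swing": "heel_strike"}

def should_stimulate(affected_state, sound_state):
    """Fused trigger rule sketched from the bipedal idea: stimulate
    during the affected foot's swing phase, but only while the sound
    foot is in a stance sub-phase.  Requiring the sound foot to be
    loaded rules out non-gait movements such as swinging the leg while
    seated, where a single-foot automaton would trigger spuriously."""
    stance = {"heel_strike", "flat", "heel_off"}
    return affected_state == "swing" and sound_state in stance
```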

  16. A Hybrid Cellular Automaton Model of Clonal Evolution in Cancer: The Emergence of the Glycolytic Phenotype

    PubMed Central

    Gerlee, P.; Anderson, A.R.A.

    2009-01-01

    We present a cellular automaton model of clonal evolution in cancer aimed at investigating the emergence of the glycolytic phenotype. In the model each cell is equipped with a micro-environment response network that determines the behaviour or phenotype of the cell based on the local environment. The response network is modelled using a feed-forward neural network, which is subject to mutations when the cells divide. This implies that cells might react differently to the environment and when space and nutrients are limited only the fittest cells will survive. With this model we have investigated the impact of the environment on the growth dynamics of the tumour. In particular we have analysed the influence of the tissue oxygen concentration and extra-cellular matrix density on the dynamics of the model. We found that the environment influences both the growth and evolutionary dynamics of the tumour. For low oxygen concentration we observe tumours with a fingered morphology, while increasing the matrix density gives rise to more compact tumours with wider fingers. The distribution of phenotypes in the tumour is also affected, and we observe that the glycolytic phenotype is most likely to emerge in a poorly oxygenated tissue with a high matrix density. Our results suggest that it is the combined effect of the oxygen concentration and matrix density that creates an environment where the glycolytic phenotype has a growth advantage and consequently is most likely to appear. PMID:18068192
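The micro-environment response network can be sketched as a small feed-forward net whose weights are jittered when a cell divides. The two-input architecture, phenotype labels, and mutation scheme below are illustrative assumptions, not the paper's exact network:

```python
import math
import random

LABELS = ["proliferate", "quiescent", "apoptosis", "glycolytic"]

def respond(weights, env):
    """One pass through a tiny feed-forward response network mapping the
    local environment (two inputs, e.g. oxygen and matrix density) to a
    phenotype: tanh hidden layer, linear output, argmax decision."""
    hidden = [math.tanh(sum(w * x for w, x in zip(row, env)))
              for row in weights["hidden"]]
    scores = [sum(w * h for w, h in zip(row, hidden))
              for row in weights["out"]]
    return LABELS[max(range(len(scores)), key=scores.__getitem__)]

def mutated(weights, rate=0.05, scale=0.2, rng=None):
    """Copy of the network with each weight jittered with probability
    `rate` -- the mutation step applied at cell division, which is what
    lets daughter cells react differently to the same environment."""
    rng = rng if rng is not None else random.Random(0)
    return {key: [[w + scale * rng.gauss(0, 1) if rng.random() < rate
                   else w for w in row] for row in rows]
            for key, rows in weights.items()}
```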

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saha, Sourabh K.

    Although geometric imperfections have a detrimental effect on buckling, imperfection sensitivity has not been well studied in the past during design of sinusoidal micro and nano-scale structures via wrinkling of supported thin films. This is likely because one is more interested in predicting the shape/size of the resultant patterns than the buckling bifurcation onset strain during fabrication of such wrinkled structures. Herein, I have demonstrated that even modest geometric imperfections alter the final wrinkled mode shapes via the mode locking phenomenon wherein the imperfection mode grows in exclusion to the natural mode of the system. To study the effect of imperfections on mode locking, I have (i) developed a finite element mesh perturbation scheme to generate arbitrary geometric imperfections in the system and (ii) performed a parametric study via finite element methods to link the amplitude and period of the sinusoidal imperfections to the observed wrinkle mode shape and size. Based on this, a non-dimensional geometric parameter has been identified that characterizes the effect of imperfection on the mode locking phenomenon – the equivalent imperfection size. An upper limit for this equivalent imperfection size has been identified via a combination of analytical and finite element modeling. During compression of supported thin films, the system gets “locked” into the imperfection mode if its equivalent imperfection size is above this critical limit. For the polydimethylsiloxane/glass bilayer with a wrinkle period of 2 µm, this mode lock-in limit corresponds to an imperfection amplitude of 32 nm for an imperfection period of 5 µm and 8 nm for an imperfection period of 0.8 µm. Interestingly, when the non-dimensional critical imperfection size is scaled by the bifurcation onset strain, the scaled critical size depends solely on the ratio of the imperfection to natural periods.
Furthermore, the computational data generated here can be generalized beyond the specific natural periods and bilayer systems studied to enable deterministic design of a variety of wrinkled micro and nano-scale structures.

  18. A Markov model for the temporal dynamics of balanced random networks of finite size

    PubMed Central

    Lagzi, Fereshteh; Rotter, Stefan

    2014-01-01

    The balanced state of recurrent networks of excitatory and inhibitory spiking neurons is characterized by fluctuations of population activity about an attractive fixed point. Numerical simulations show that these dynamics are essentially nonlinear, and the intrinsic noise (self-generated fluctuations) in networks of finite size is state-dependent. Therefore, stochastic differential equations with additive noise of fixed amplitude cannot provide an adequate description of the stochastic dynamics. The noise model should, rather, result from a self-consistent description of the network dynamics. Here, we consider a two-state Markovian neuron model, where spikes correspond to transitions from the active state to the refractory state. Excitatory and inhibitory input to this neuron affects the transition rates between the two states. The corresponding nonlinear dependencies can be identified directly from numerical simulations of networks of leaky integrate-and-fire neurons, discretized at a time resolution in the sub-millisecond range. Deterministic mean-field equations, and a noise component that depends on the dynamic state of the network, are obtained from this model. The resulting stochastic model reflects the behavior observed in numerical simulations quite well, irrespective of the size of the network. In particular, a strong temporal correlation between the two populations, a hallmark of the balanced state in random recurrent networks, is well represented by our model. Numerical simulations of such networks show that a log-normal distribution of short-term spike counts is a property of balanced random networks with fixed in-degree that has not been considered before, and our model shares this statistical property. Furthermore, the reconstruction of the flow from simulated time series suggests that the mean-field dynamics of finite-size networks are essentially of Wilson-Cowan type.
We expect that this novel nonlinear stochastic model of the interaction between neuronal populations also opens new doors to analyze the joint dynamics of multiple interacting networks. PMID:25520644
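The two-state Markovian neuron model can be sketched at the population level with binomial updates. Constant transition probabilities are assumed below for brevity; in the paper the rates depend nonlinearly on the excitatory and inhibitory input:

```python
import random

def simulate_two_state(n=1000, p_spike=0.05, p_recover=0.2,
                       steps=500, rng=None):
    """Discrete-time population simulation of two-state Markov neurons:
    each active neuron spikes (enters the refractory state) with
    probability p_spike per step, each refractory neuron recovers with
    probability p_recover.  The mean-field fixed point for the active
    count is n * p_recover / (p_spike + p_recover); finite n produces
    state-dependent fluctuations around it."""
    rng = rng if rng is not None else random.Random(42)
    active = n
    trace = []
    for _ in range(steps):
        spikes = sum(rng.random() < p_spike for _ in range(active))
        recoveries = sum(rng.random() < p_recover
                         for _ in range(n - active))
        active += recoveries - spikes
        trace.append(active)
    return trace
```

With the defaults the mean-field prediction is 1000 * 0.2 / 0.25 = 800 active neurons, and the simulated trace fluctuates about that value.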

  19. Dynamic data-driven integrated flare model based on self-organized criticality

    NASA Astrophysics Data System (ADS)

    Dimitropoulou, M.; Isliker, H.; Vlahos, L.; Georgoulis, M. K.

    2013-05-01

    Context. We interpret solar flares as events originating in active regions that have reached the self-organized critical state. We describe them with a dynamic integrated flare model whose initial conditions and driving mechanism are derived from observations. Aims: We investigate whether well-known scaling laws observed in the distribution functions of characteristic flare parameters are reproduced after the self-organized critical state has been reached. Methods: To investigate whether the distribution functions of total energy, peak energy, and event duration follow the expected scaling laws, we first applied the previously reported static cellular automaton model to a time series of seven solar vector magnetograms of the NOAA active region 8210 recorded by the Imaging Vector Magnetograph on May 1 1998 between 18:59 UT and 23:16 UT until the self-organized critical state was reached. We then evolved the magnetic field between these processed snapshots through spline interpolation, mimicking a natural driver in our dynamic model. We identified magnetic discontinuities that exceeded a threshold in the Laplacian of the magnetic field after each interpolation step. These discontinuities were relaxed in local diffusion events, implemented in the form of cellular automaton evolution rules. Subsequent interpolation and relaxation steps covered all transitions until the end of the processed magnetograms' sequence. We additionally advanced each magnetic configuration that has reached the self-organized critical state (SOC configuration) by the static model until 50 more flares were triggered, applied the dynamic model again to the new sequence, and repeated the same process sufficiently often to generate adequate statistics. Physical requirements, such as the divergence-free condition for the magnetic field, were approximately imposed. 
Results: We obtain robust power laws in the distribution functions of the modeled flaring events with scaling indices that agree well with observations. Peak and total flare energy obey single power laws with indices -1.65 ± 0.11 and -1.47 ± 0.13, while the flare duration is best fitted with a double power law (-2.15 ± 0.15 and -3.60 ± 0.09 for the flatter and steeper parts, respectively). Conclusions: We conclude that well-known statistical properties of flares are reproduced after active regions reach the state of self-organized criticality. A significant enhancement of our refined cellular automaton model is that it initiates and further drives the simulation from observed evolving vector magnetograms, thus facilitating energy calculation in physical units, while a separation between MHD and kinetic timescales is possible by assigning distinct MHD timestamps to each interpolation step.

  20. Deterministic and stochastic CTMC models from Zika disease transmission

    NASA Astrophysics Data System (ADS)

    Zevika, Mona; Soewono, Edy

    2018-03-01

    Zika infection is one of the most important mosquito-borne diseases in the world. Zika virus (ZIKV) is transmitted by many Aedes-type mosquitoes including Aedes aegypti. Pregnant women with the Zika virus are at risk of having a fetus or infant with a congenital defect and suffering from microcephaly. Here, we formulate a Zika disease transmission model using two approaches, a deterministic model and a continuous-time Markov chain stochastic model. The basic reproduction ratio is constructed from a deterministic model. Meanwhile, the CTMC stochastic model yields an estimate of the probability of extinction and outbreaks of Zika disease. Dynamical simulations and analysis of the disease transmission are shown for the deterministic and stochastic models.
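The CTMC approach to extinction probabilities can be sketched with a Gillespie-style simulation of the embedded jump chain of a plain SIR model, a simplified stand-in for the paper's vector-host Zika chain; the parameters and outbreak threshold below are illustrative. With R0 = beta/gamma = 2 and one initial case, branching-process theory predicts extinction with probability about 1/R0 = 0.5:

```python
import random

def gillespie_sir(beta=0.5, gamma=0.25, s0=199, i0=1, rng=None):
    """Simulate the embedded jump chain of a CTMC SIR model: each event
    is an infection with probability rate_inf / (rate_inf + rate_rec),
    else a recovery.  Returns True if the outbreak takes off (infecteds
    reach a threshold) and False if the chain goes extinct first."""
    rng = rng if rng is not None else random.Random()
    n = s0 + i0
    s, i = s0, i0
    while i > 0:
        rate_inf = beta * s * i / n
        rate_rec = gamma * i
        if rng.random() < rate_inf / (rate_inf + rate_rec):
            s -= 1
            i += 1
        else:
            i -= 1
        if i >= 25:              # illustrative outbreak threshold
            return True
    return False

def extinction_probability(trials=2000, seed=0, **kw):
    """Monte Carlo estimate of the probability of early extinction."""
    rng = random.Random(seed)
    return sum(not gillespie_sir(rng=rng, **kw)
               for _ in range(trials)) / trials
```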
