Sample records for deterministic multivalued logic

  1. ASP-based method for the enumeration of attractors in non-deterministic synchronous and asynchronous multi-valued networks.

    PubMed

    Ben Abdallah, Emna; Folschette, Maxime; Roux, Olivier; Magnin, Morgan

    2017-01-01

    This paper addresses the problem of finding attractors in biological regulatory networks. We focus here on non-deterministic synchronous and asynchronous multi-valued networks, modeled using automata networks (AN). AN is a general formalism, well suited to the study of complex interactions between different components (genes, proteins, ...). An attractor is a minimal trap domain, that is, a part of the state-transition graph that cannot be escaped. Such structures are terminal components of the dynamics and take the form of steady states (singletons) or complex compositions of cycles (non-singletons). Studying the effect of a disease or a mutation on an organism requires finding the attractors of the model to understand the long-term behaviors. We present a computational logical method based on answer set programming (ASP) to identify all attractors. Performed without any network reduction, the method can be applied to any dynamical semantics. In this paper, we present the two most widespread non-deterministic semantics: the asynchronous and the synchronous updating modes. The logical approach performs a complete enumeration of the states of the network in order to find the attractors without having to construct the whole state-transition graph. We perform extensive computational experiments, which show good performance and fit the expected theoretical results in the literature. The originality of our approach lies in the exhaustive enumeration of all possible (sets of) states verifying the properties of an attractor, thanks to the use of ASP. Our method is applied to non-deterministic semantics in two different schemes (asynchronous and synchronous). The merits of our methods are illustrated by applying them to biological examples of various sizes and comparing the results with some existing approaches. It turns out that our approach succeeds in exhaustively enumerating, on a desktop computer, all attractors of a large model (100 components) up to a given size (20 states). This size is only limited by memory and computation time.
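
    The record's method is an ASP encoding; purely to illustrate the object being computed, the sketch below brute-forces the attractors (minimal trap domains) of a tiny two-component multivalued automata network under the asynchronous semantics. The network, its level ranges and its update targets are invented for this example.

    ```python
    from itertools import product

    # Hypothetical 2-component network: component a has levels {0,1,2}, b has {0,1}.
    def targets(state):
        a, b = state
        return (2 if b == 0 else 0,   # a is repressed by b
                1 if a >= 1 else 0)   # b is activated by a

    def successors(state):
        """Asynchronous semantics: one component moves one level toward its target."""
        succ = []
        for k, (cur, tgt) in enumerate(zip(state, targets(state))):
            if cur != tgt:
                nxt = list(state)
                nxt[k] = cur + (1 if tgt > cur else -1)
                succ.append(tuple(nxt))
        return succ or [state]        # steady states loop on themselves

    states = list(product(range(3), range(2)))

    def reach(s):
        seen, todo = {s}, [s]
        while todo:
            for t in successors(todo.pop()):
                if t not in seen:
                    seen.add(t)
                    todo.append(t)
        return seen

    R = {s: reach(s) for s in states}
    # A state lies in an attractor iff every state it reaches can reach it back;
    # its reachable set is then exactly the attractor (a minimal trap domain).
    attractors = {frozenset(R[s]) for s in states if all(s in R[t] for t in R[s])}
    print(attractors)
    ```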

  2. Trinary arithmetic and logic unit (TALU) using Savart plate and spatial light modulator (SLM) suitable for optical computation in multivalued logic

    NASA Astrophysics Data System (ADS)

    Ghosh, Amal K.; Bhattacharya, Animesh; Raul, Moumita; Basuray, Amitabha

    2012-07-01

    Arithmetic logic unit (ALU) is the most important unit in any computing system. Optical computing is becoming popular day by day because of its ultrahigh processing speed and huge data-handling capability. For such fast processing we need an optical TALU compatible with multivalued logic. In this regard we report a trinary arithmetic and logic unit (TALU) in the modified trinary number (MTN) system, which is suitable for optical computation and other applications in multivalued logic systems. Here, Savart plate and spatial light modulator (SLM) based optoelectronic circuits have been used to exploit the optical tree architecture (OTA) in an optical interconnection network.
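
    The MTN encoding is specific to this line of work; as a generic taste of the trinary arithmetic a TALU performs, here is balanced-ternary addition (digits -1, 0, 1). This is a sketch only, not the paper's number system.

    ```python
    def to_balanced_ternary(n):
        """Integer -> balanced-ternary digits (-1, 0, 1), least significant first."""
        digits = []
        while n:
            r = n % 3
            if r == 2:                # digit 2 becomes -1 with a carry
                digits.append(-1)
                n = n // 3 + 1
            else:
                digits.append(r)
                n //= 3
        return digits or [0]

    def from_balanced_ternary(digits):
        return sum(d * 3**i for i, d in enumerate(digits))

    def bt_add(x, y):
        """Digit-wise balanced-ternary addition with carry, like a TALU add stage."""
        out, carry = [], 0
        for i in range(max(len(x), len(y))):
            s = carry + (x[i] if i < len(x) else 0) + (y[i] if i < len(y) else 0)
            carry, d = 0, s
            if s > 1:
                carry, d = 1, s - 3
            elif s < -1:
                carry, d = -1, s + 3
            out.append(d)
        if carry:
            out.append(carry)
        return out

    a, b = to_balanced_ternary(47), to_balanced_ternary(-19)
    print(from_balanced_ternary(bt_add(a, b)))   # 28
    ```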

  3. Trinary flip-flops using Savart plate and spatial light modulator for optical computation in multivalued logic

    NASA Astrophysics Data System (ADS)

    Ghosh, Amal K.; Basuray, Amitabha

    2008-11-01

    Memory devices in multi-valued logic are of great significance in modern research. This paper deals with the implementation of basic memory devices in multi-valued logic using Savart plate and spatial light modulator (SLM) based optoelectronic circuits. Photons are used here as the carrier to speed up the operations. Optical tree architecture (OTA) has also been utilized in the optical interconnection network. We have exploited the advantages of Savart plates, SLMs and OTA and proposed SLM-based high-speed JK, D-type and T-type flip-flops in a trinary system.
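
    A minimal behavioral model of trinary storage cells, assuming nothing about the optical implementation; the T-type rule below is one plausible ternary generalization, not necessarily the paper's definition.

    ```python
    class TernaryD:
        """Ternary D-type flip-flop: on each clock tick it latches one of the
        three logic levels {0, 1, 2} present at input D."""
        def __init__(self):
            self.q = 0

        def tick(self, d):
            assert d in (0, 1, 2), "input must be a trit"
            self.q = d
            return self.q

    class TernaryT:
        """Hypothetical ternary T-type cell: the stored trit advances by T mod 3."""
        def __init__(self):
            self.q = 0

        def tick(self, t):
            self.q = (self.q + t) % 3
            return self.q

    d, t = TernaryD(), TernaryT()
    print([d.tick(v) for v in (2, 0, 1)])   # [2, 0, 1]
    print([t.tick(v) for v in (1, 1, 2)])   # [1, 2, 1]
    ```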

  4. Phosphorene/rhenium disulfide heterojunction-based negative differential resistance device for multi-valued logic

    NASA Astrophysics Data System (ADS)

    Shim, Jaewoo; Oh, Seyong; Kang, Dong-Ho; Jo, Seo-Hyeon; Ali, Muhammad Hasnain; Choi, Woo-Young; Heo, Keun; Jeon, Jaeho; Lee, Sungjoo; Kim, Minwoo; Song, Young Jae; Park, Jin-Hong

    2016-11-01

    Recently, negative differential resistance devices have attracted considerable attention due to their folded current-voltage characteristic, which presents multiple threshold voltage values. Because of this remarkable property, studies associated with the negative differential resistance devices have been explored for realizing multi-valued logic applications. Here we demonstrate a negative differential resistance device based on a phosphorene/rhenium disulfide (BP/ReS2) heterojunction that is formed by type-III broken-gap band alignment, showing high peak-to-valley current ratio values of 4.2 and 6.9 at room temperature and 180 K, respectively. Also, the carrier transport mechanism of the BP/ReS2 negative differential resistance device is investigated in detail by analysing the tunnelling and diffusion currents at various temperatures with the proposed analytic negative differential resistance device model. Finally, we demonstrate a ternary inverter as a multi-valued logic application. This study of a two-dimensional material heterojunction is a step forward toward future multi-valued logic device research.
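
    The demonstrated circuit is a ternary inverter. In multi-valued logic the standard ternary inverter (STI) maps level x to 2 - x, and the negative/positive variants (NTI/PTI) saturate the middle level; a truth-table sketch of these standard definitions:

    ```python
    LEVELS = (0, 1, 2)  # ternary logic levels

    def sti(x): return 2 - x                # standard ternary inverter
    def nti(x): return 2 if x == 0 else 0   # negative TI: middle level -> low
    def pti(x): return 0 if x == 2 else 2   # positive TI: middle level -> high

    for x in LEVELS:
        print(x, sti(x), nti(x), pti(x))
    ```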

  5. Phosphorene/rhenium disulfide heterojunction-based negative differential resistance device for multi-valued logic

    PubMed Central

    Shim, Jaewoo; Oh, Seyong; Kang, Dong-Ho; Jo, Seo-Hyeon; Ali, Muhammad Hasnain; Choi, Woo-Young; Heo, Keun; Jeon, Jaeho; Lee, Sungjoo; Kim, Minwoo; Song, Young Jae; Park, Jin-Hong

    2016-01-01

    Recently, negative differential resistance devices have attracted considerable attention due to their folded current–voltage characteristic, which presents multiple threshold voltage values. Because of this remarkable property, studies associated with the negative differential resistance devices have been explored for realizing multi-valued logic applications. Here we demonstrate a negative differential resistance device based on a phosphorene/rhenium disulfide (BP/ReS2) heterojunction that is formed by type-III broken-gap band alignment, showing high peak-to-valley current ratio values of 4.2 and 6.9 at room temperature and 180 K, respectively. Also, the carrier transport mechanism of the BP/ReS2 negative differential resistance device is investigated in detail by analysing the tunnelling and diffusion currents at various temperatures with the proposed analytic negative differential resistance device model. Finally, we demonstrate a ternary inverter as a multi-valued logic application. This study of a two-dimensional material heterojunction is a step forward toward future multi-valued logic device research. PMID:27819264

  6. SPECIAL ISSUE ON OPTICAL PROCESSING OF INFORMATION: Method of implementation of optoelectronic multiparametric signal processing systems based on multivalued-logic principles

    NASA Astrophysics Data System (ADS)

    Arestova, M. L.; Bykovskii, A. Yu

    1995-10-01

    An architecture is proposed for a specialised optoelectronic multivalued logic processor based on the Allen–Givone algebra. The processor is intended for multiparametric processing of data arriving from a large number of sensors or for tackling spectral analysis tasks. The processor architecture makes it possible to obtain an approximate general estimate of the state of an object being diagnosed on a p-level scale. Optoelectronic systems are proposed for MAXIMUM, MINIMUM, and LITERAL logic gates, based on optical-frequency encoding of logic levels. The corresponding logic gates form a complete set of logic functions in the Allen–Givone algebra.
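
    For readers unfamiliar with the Allen–Givone operators: MAXIMUM, MINIMUM and the LITERAL window function (together with constants) suffice to express any p-valued function. A small sketch, with an invented example function:

    ```python
    P = 4  # number of logic levels, 0 .. P-1

    def maximum(x, y): return max(x, y)
    def minimum(x, y): return min(x, y)

    def literal(x, a, b):
        """LITERAL window operator: full level P-1 when a <= x <= b, else 0."""
        return P - 1 if a <= x <= b else 0

    # Any p-valued function can be written as a MAX of MIN/LITERAL terms,
    # here a sum-of-products-style expansion of a single-input function:
    f_table = [1, 3, 0, 2]   # hypothetical function given by its value table
    def f(x):
        return max(minimum(literal(x, v, v), f_table[v]) for v in range(P))

    print([f(x) for x in range(P)] == f_table)   # True
    ```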

  7. Implementation of Multivariable Logic Functions in Parallel by Electrically Addressing a Molecule of Three Dopants in Silicon.

    PubMed

    Fresch, Barbara; Bocquel, Juanita; Hiluf, Dawit; Rogge, Sven; Levine, Raphael D; Remacle, Françoise

    2017-07-05

    To realize low-power, compact logic circuits, one can explore parallel operation on single nanoscale devices. An added incentive is to use multivalued (as distinct from Boolean) logic. Here, we theoretically demonstrate that the computation of all the possible outputs of a multivariate, multivalued logic function can be implemented in parallel by electrical addressing of a molecule made up of three interacting dopant atoms embedded in Si. The electronic states of the dopant molecule are addressed by pulsing a gate voltage. By simulating the time evolution of the non-stationary electronic density built by the gate voltage, we show that one can implement a molecular decision tree that provides in parallel all the outputs for all the inputs of the multivariate, multivalued logic function. The outputs are encoded in the populations and in the bond orders of the dopant molecule, which can be measured using an STM tip. We show that the implementation of the molecular logic tree is equivalent to a spectral function decomposition. The function that is evaluated can be field-programmed by changing the time profile of the pulsed gate voltage.

  8. Surmounting the Cartesian Cut Through Philosophy, Physics, Logic, Cybernetics, and Geometry: Self-reference, Torsion, the Klein Bottle, the Time Operator, Multivalued Logics and Quantum Mechanics

    NASA Astrophysics Data System (ADS)

    Rapoport, Diego L.

    2011-01-01

    In this transdisciplinary article, which stems from philosophical considerations (departing from phenomenology, after Merleau-Ponty, Heidegger and Rosen, and from Hegelian dialectics), we develop a conception based on topological (the Moebius surface and the Klein bottle) and geometrical considerations (torsion and non-orientability of manifolds) and on multivalued logics, which we develop into a unified world conception that surmounts the Cartesian cut and Aristotelian logic. The role of torsion appears in a self-referential construction of space and time, which will be further related to the commutator of the True and False operators of matrix logic, still with a quantum superposed state related to a Moebius surface, and as the physical field at the basis of Spencer-Brown's primitive distinction in the protologic of the calculus of distinction. In this setting, paradox, self-reference, depth, time and space, higher-order non-dual logic, perception, spin and a time operator, the Klein bottle, hypernumbers due to Musès which include non-trivial square roots of ±1 and in particular non-trivial nilpotents, quantum field operators, and the transformation of cognition to spin for two-state quantum systems are found to be keenly interwoven in a world conception compatible with the philosophical approach taken as the basis of this article. The Klein bottle is found not only to be the topological in-formation for self-reference and paradox, whose logical counterpart in the calculus of indications is the paradoxical imaginary time waves, but also a classical-quantum transformer (Hadamard's gate in quantum computation) which is indispensable for obtaining a complete multivalued logical system, and also for generating the matrix extension of classical connective Boolean logic. We further find that the multivalued logic that stems from considering the paradoxical equation in the calculus of distinctions, and in particular the imaginary solutions to this equation, generates the matrix logic which supersedes the classical logic of connectives and which has fuzzy and quantum logics as particular subtheories. Thus, from a primitive distinction in the vacuum plane and the axioms of the calculus of distinction, we can derive, by incorporating paradox, the world conception succinctly described above.

  9. Multi-valued logic gates based on ballistic transport in quantum point contacts.

    PubMed

    Seo, M; Hong, C; Lee, S-Y; Choi, H K; Kim, N; Chung, Y; Umansky, V; Mahalu, D

    2014-01-22

    Multi-valued logic gates, which can handle quaternary numbers as inputs, are developed by exploiting the ballistic transport properties of quantum point contacts in series. The principle of a logic gate that finds the minimum of two quaternary number inputs is demonstrated. The device is scalable to allow multiple inputs, which makes it possible to find the minimum of multiple inputs in a single gate operation. Also, the principle of a half-adder for quaternary number inputs is demonstrated. First, an adder that adds up two quaternary numbers and outputs the sum of the inputs is demonstrated. Second, a device to express the sum of the adder in two quaternary digits [Carry (first digit) and Sum (second digit)] is demonstrated. All the logic gates presented in this paper can in principle be extended to allow decimal number inputs with high-quality QPCs.
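
    A software truth-table analogue of the demonstrated gates (the device physics is of course not modeled): a multi-input quaternary MIN and a half-adder that splits a digit sum into Carry and Sum.

    ```python
    BASE = 4  # quaternary logic: digits 0..3

    def q_min(*inputs):
        """Multi-input MIN gate (the operation realized with series QPCs)."""
        return min(inputs)

    def q_half_adder(a, b):
        """Add two quaternary digits; split the result into (Carry, Sum)."""
        total = a + b
        return total // BASE, total % BASE

    print(q_min(3, 1, 2))       # 1
    print(q_half_adder(3, 2))   # (1, 1): 3 + 2 = 11 in base 4
    ```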

  10. Parallel and Multivalued Logic by the Two-Dimensional Photon-Echo Response of a Rhodamine–DNA Complex

    PubMed Central

    2015-01-01

    Implementing parallel and multivalued logic operations at the molecular scale has the potential to improve the miniaturization and efficiency of a new generation of nanoscale computing devices. Two-dimensional photon-echo spectroscopy is capable of resolving dynamical pathways on electronic and vibrational molecular states. We experimentally demonstrate the implementation of molecular decision trees, logic operations where all possible values of inputs are processed in parallel and the outputs are read simultaneously, by probing the laser-induced dynamics of populations and coherences in a rhodamine dye mounted on a short DNA duplex. The inputs are provided by the bilinear interactions between the molecule and the laser pulses, and the output values are read from the two-dimensional molecular response at specific frequencies. Our results highlight how ultrafast dynamics between multiple molecular states induced by light–matter interactions can be used as an advantage for performing complex logic operations in parallel, operations that are faster than electrical switching. PMID:25984269

  11. Minimally inconsistent reasoning in Semantic Web.

    PubMed

    Zhang, Xiaowang

    2017-01-01

    Reasoning with inconsistencies is an important issue for the Semantic Web, as imperfect information is unavoidable in real applications. To this end, different paraconsistent approaches, due to their capacity to draw nontrivial conclusions while tolerating inconsistencies, have been proposed to reason with inconsistent description logic knowledge bases. However, existing paraconsistent approaches are often criticized for being too skeptical. This paper therefore presents a non-monotonic paraconsistent version of description logic reasoning, called minimally inconsistent reasoning, in which the inconsistencies tolerated in the reasoning are minimized so that more reasonable conclusions can be inferred. Some desirable properties are studied, showing that the new semantics inherits advantages of both non-monotonic reasoning and paraconsistent reasoning. A complete and sound tableau-based algorithm, called multi-valued tableaux, is developed to capture minimally inconsistent reasoning. In fact, the tableau algorithm is designed as a framework for multi-valued DL, allowing for different underlying paraconsistent semantics that differ only in their clash conditions. Finally, the complexity of minimally inconsistent description logic reasoning is shown to be on the same level as that of (classical) description logic reasoning.
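
    One common multi-valued base for paraconsistent semantics is Belnap's four-valued logic, sketched below as an illustration of inconsistency-tolerant truth handling; it is not the record's exact tableau semantics.

    ```python
    # Belnap-style four-valued logic. A value is a pair (p, n):
    # p = "there is evidence for", n = "there is evidence against".
    T, F = (True, False), (False, True)
    BOTH, NEITHER = (True, True), (False, False)   # inconsistent / unknown

    def f_and(a, b): return (a[0] and b[0], a[1] or b[1])
    def f_or(a, b):  return (a[0] or b[0], a[1] and b[1])
    def f_not(a):    return (a[1], a[0])

    # Contradictory evidence does not explode into triviality:
    print(f_and(BOTH, T))   # (True, True) -> still BOTH, not everything
    print(f_or(BOTH, F))    # (True, True)
    ```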

  12. Minimally inconsistent reasoning in Semantic Web

    PubMed Central

    Zhang, Xiaowang

    2017-01-01

    Reasoning with inconsistencies is an important issue for the Semantic Web, as imperfect information is unavoidable in real applications. To this end, different paraconsistent approaches, due to their capacity to draw nontrivial conclusions while tolerating inconsistencies, have been proposed to reason with inconsistent description logic knowledge bases. However, existing paraconsistent approaches are often criticized for being too skeptical. This paper therefore presents a non-monotonic paraconsistent version of description logic reasoning, called minimally inconsistent reasoning, in which the inconsistencies tolerated in the reasoning are minimized so that more reasonable conclusions can be inferred. Some desirable properties are studied, showing that the new semantics inherits advantages of both non-monotonic reasoning and paraconsistent reasoning. A complete and sound tableau-based algorithm, called multi-valued tableaux, is developed to capture minimally inconsistent reasoning. In fact, the tableau algorithm is designed as a framework for multi-valued DL, allowing for different underlying paraconsistent semantics that differ only in their clash conditions. Finally, the complexity of minimally inconsistent description logic reasoning is shown to be on the same level as that of (classical) description logic reasoning. PMID:28750030

  13. Accomplishment Summary 1968-1969. Biological Computer Laboratory.

    ERIC Educational Resources Information Center

    Von Foerster, Heinz; And Others

    This report summarizes theoretical, applied, and experimental studies in the areas of computational principles in complex intelligent systems, cybernetics, multivalued logic, and the mechanization of cognitive processes. This work is summarized under the following topic headings: properties of complex dynamic systems; computers and the language…

  14. Molecular processors: from qubits to fuzzy logic.

    PubMed

    Gentili, Pier Luigi

    2011-03-14

    Single molecules or their assemblies are information processing devices. Herein it is demonstrated how it is possible to process different types of logic through molecules. As long as decoherence effects are kept far away from a pure quantum mechanical system, quantum logic can be processed. If the collapse of superposed or entangled wavefunctions is unavoidable, molecules can still be used to process either crisp (binary or multi-valued) or fuzzy logic. The way to implement fuzzy inference engines is described and supported by the examples of the molecular fuzzy logic systems devised so far. Fuzzy logic is drawing attention in the field of artificial intelligence, because it models human reasoning quite well. This ability may be due to some structural analogies between a fuzzy logic system and the human nervous system.

  15. Approximation Of Multi-Valued Inverse Functions Using Clustering And Sugeno Fuzzy Inference

    NASA Technical Reports Server (NTRS)

    Walden, Maria A.; Bikdash, Marwan; Homaifar, Abdollah

    1998-01-01

    Finding the inverse of a continuous function can be challenging and computationally expensive when the inverse function is multi-valued. Difficulties may be compounded when the function itself is difficult to evaluate. We show that we can use fuzzy-logic approximators such as Sugeno inference systems to compute the inverse on-line. To do so, a fuzzy clustering algorithm can be used in conjunction with a discriminating function to split the function data into branches for the different values of the forward function. These data sets are then fed into a recursive least-squares learning algorithm that finds the proper coefficients of the Sugeno approximators; each Sugeno approximator finds one value of the inverse function. Discussions about the accuracy of the approximation will be included.
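
    A toy version of the branch-splitting idea, with a crisp sign-based discriminator and ordinary polynomial least squares standing in for the fuzzy clustering and Sugeno/recursive-least-squares machinery of the record:

    ```python
    import numpy as np

    # Forward function y = x**2 on [-2, 2]; its inverse is two-valued (+/- sqrt).
    x = np.linspace(-2.0, 2.0, 401)
    y = x ** 2

    # Split the samples into inverse branches with a discriminating function
    # (here simply the sign of x; the paper uses fuzzy clustering instead).
    branches = {"neg": x < 0, "pos": x >= 0}

    # Fit each branch with a polynomial in y, a crude stand-in for a Sugeno
    # model trained by recursive least squares.
    models = {name: np.polyfit(y[m], x[m], deg=5) for name, m in branches.items()}

    def inverse(y_query):
        """Return one inverse value per branch."""
        return [float(np.polyval(c, y_query)) for c in models.values()]

    print(inverse(2.25))   # approximately [-1.5, +1.5]
    ```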

  16. Adiabatic pipelining: a key to ternary computing with quantum dots.

    PubMed

    Pečar, P; Ramšak, A; Zimic, N; Mraz, M; Lebar Bajec, I

    2008-12-10

    The quantum-dot cellular automaton (QCA), a processing platform based on interacting quantum dots, was introduced by Lent in the mid-1990s. What followed was an exhilarating period with the development of the line, the functionally complete set of logic functions, as well as more complex processing structures, all, however, in the realm of binary logic. Regardless of these achievements, it has to be acknowledged that the use of binary logic in computing systems is mainly the end result of the technological limitations the designers had to cope with in the early days of their design. The first advancement of QCAs to multi-valued (ternary) processing was performed by Lebar Bajec et al, with the argument that processing platforms of the future should not disregard the clear advantages of multi-valued logic. Some of the elementary ternary QCAs necessary for the construction of more complex processing entities, however, lead to a remarkable increase in size when compared to their binary counterparts. This somewhat negates the advantages gained by entering the ternary computing domain. As it turned out, even the binary QCA had its initial hiccups, which were solved by the introduction of adiabatic switching and the application of adiabatic pipeline approaches. We present here a study that introduces adiabatic switching into the ternary QCA and employs the adiabatic pipeline approach to successfully solve the issues of elementary ternary QCAs. What is more, the ternary QCAs presented here are sizewise comparable to binary QCAs. This, in our view, might speed their adoption.

  17. Surface-confined assemblies and polymers for molecular logic.

    PubMed

    de Ruiter, Graham; van der Boom, Milko E

    2011-08-16

    Stimuli-responsive materials are capable of mimicking the operation characteristics of logic gates such as AND, OR, NOR, and even flip-flops. Since the development of molecular sensors and the introduction of the first AND gate in solution by de Silva in 1993, Molecular (Boolean) Logic and Computing (MBLC) has become increasingly popular. In this Account, we present recent research activities that focus on MBLC with electrochromic polymers and metal polypyridyl complexes on a solid support. Metal polypyridyl complexes act as useful sensors for a variety of analytes in solution (i.e., H2O, Fe2+/3+, Cr6+, NO+) and in the gas phase (NOx in air). This information transfer (whether the analyte is present) is based on the reversible redox chemistry of the metal complexes, which are stable up to 200 °C in air. The concurrent changes in the optical properties are nondestructive and fast. In such a setup, the input is directly related to the output and, therefore, can be represented by one-input logic gates. These input-output relationships are extendable for mimicking the diverse functions of essential molecular logic gates and circuits within a set of Boolean algebraic operations. Such a molecular approach towards Boolean logic has yielded a series of proof-of-concept devices: logic gates, multiplexers, half-adders, and flip-flop logic circuits. MBLC is a versatile and potentially parallel approach to silicon circuits: assemblies of these molecular gates can perform a wide variety of logic tasks through reconfiguration of their inputs. Although these developments do not require a semiconductor blueprint, similar guidelines such as signal propagation, gate-to-gate communication, propagation delay, and combinatorial and sequential logic will play a critical role in allowing this field to mature. For instance, gate-to-gate communication by chemical wiring of the gates with metal ions as electron carriers results in the integration of stand-alone systems: the output of one gate is used as the input for another gate. Using the same setup, we were able to display both combinatorial and sequential logic. We have demonstrated MBLC by coupling electrochemical inputs with optical readout, which resulted in various logic architectures built on a redox-active, functionalized surface. Electrochemically operated sequential logic systems such as flip-flops, multivalued logic, and multistate memory could enhance computational power without increasing spatial requirements. Applying multivalued digits in data storage could exponentially increase memory capacity. Furthermore, we evaluate the pros and cons of MBLC and identify targets for future research in this Account.

  18. Canonical multi-valued input Reed-Muller trees and forms

    NASA Technical Reports Server (NTRS)

    Perkowski, M. A.; Johnson, P. D.

    1991-01-01

    There has recently been increased interest in logic synthesis using EXOR gates. The paper introduces the fundamental concept of Orthogonal Expansion, which generalizes the ring form of the Shannon expansion to logic with multiple-valued (mv) inputs. Based on this concept we are able to define a family of canonical tree circuits. Such circuits can be considered for binary and multiple-valued input cases. They can be multi-level (trees and DAGs) or flattened to two-level AND-EXOR circuits. Input decoders similar to those used in Sum of Products (SOP) PLAs are used in realizations of multiple-valued input functions. In the case of binary logic, the family of flattened AND-EXOR circuits includes several forms discussed by Davio and Green. For the case of logic with multiple-valued inputs, the family of flattened mv AND-EXOR circuits includes three expansions known from the literature and two new expansions.
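
    For the binary special case, the positive-polarity Reed-Muller (AND-EXOR) coefficients of a truth table follow from the standard XOR butterfly transform:

    ```python
    def reed_muller_coeffs(truth_table):
        """Positive-polarity Reed-Muller (ANF) coefficients of a Boolean function.
        truth_table[i] is f at input i; length must be 2**n. In-place butterfly."""
        c = list(truth_table)
        n = len(c).bit_length() - 1
        for k in range(n):
            step = 1 << k
            for i in range(len(c)):
                if i & step:
                    c[i] ^= c[i ^ step]   # XOR-accumulate the cofactor
        return c

    # f(x2, x1, x0) = majority of three bits, as a truth table:
    maj = [0, 0, 0, 1, 0, 1, 1, 1]
    print(reed_muller_coeffs(maj))  # 1s at monomials x0x1, x0x2, x1x2
    ```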

  19. The fuzzy cube and causal efficacy: representation of concomitant mechanisms in stroke.

    PubMed

    Jobe, Thomas H.; Helgason, Cathy M.

    1998-04-01

    Twentieth-century medical science has embraced nineteenth-century Boolean probability theory based upon two-valued Aristotelian logic. With the later addition of bit-based, von Neumann structured computational architectures, an epistemology based on randomness has led to a bivalent epidemiological methodology that dominates medical decision making. In contrast, fuzzy logic, based on twentieth-century multi-valued logic, and computational structures that are content-addressed and adaptively modified, has advanced a new scientific paradigm for the twenty-first century. Diseases such as stroke involve multiple concomitant causal factors that are difficult to represent using conventional statistical methods. We tested which paradigm best represented this complex multi-causal clinical phenomenon: stroke. We show that the fuzzy logic paradigm better represented clinical complexity in cerebrovascular disease than current probability-theory-based methodology. We believe this finding is generalizable to all of clinical science, since multiple concomitant causal factors are involved in nearly all known pathological processes.

  20. Fuzzy logic and causal reasoning with an 'n' of 1 for diagnosis and treatment of the stroke patient.

    PubMed

    Helgason, Cathy M; Jobe, Thomas H

    2004-03-01

    The current scientific model for clinical decision-making is founded on binary or Aristotelian logic, classical set theory and probability-based statistics. Evidence-based medicine has been established as the basis for clinical recommendations. There is a problem with this scientific model when the physician must diagnose and treat the individual patient. The problem is a paradox: the scientific model of evidence-based medicine is based upon hypotheses aimed at the group, and therefore its conclusions can be extrapolated to the individual patient only to a degree. This extrapolation is dependent upon the expertise of the physician. A multivalued fuzzy-logic-based scientific model allows this expertise to be numerically represented and solves the clinical paradox of evidence-based medicine.

  21. Plausible inference: A multi-valued logic for problem solving

    NASA Technical Reports Server (NTRS)

    Friedman, L.

    1979-01-01

    A new logic is developed which permits continuously variable strength of belief in the truth of assertions. Four inference rules result, with formal logic as a limiting case. Quantification of belief is defined. Propagation of belief to linked assertions results from dependency-based techniques of truth maintenance, so that local consistency is achieved or a contradiction is discovered in problem solving. Rules for combining, confirming, or disconfirming beliefs are given, and several heuristics are suggested that apply to revising already formed beliefs in the light of new evidence. The strength of belief that results from such revisions based on conflicting evidence is a highly subjective phenomenon; certain quantification rules appear to reflect an orderliness in the subjectivity. Several examples of reasoning by plausible inference are given, including a legal example and one from robot learning. Propagation of belief takes place in directions forbidden in formal logic, and this makes conclusions possible for a given set of assertions that formal logic could not reach.
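
    The abstract does not state Friedman's four rules; as a loosely related illustration of combining graded beliefs, here is the MYCIN-style certainty-factor rule (an assumption standing in for, not reproducing, the paper's scheme).

    ```python
    def combine(cf1, cf2):
        """Combine two belief strengths in [-1, 1] for the same assertion,
        MYCIN-style. Full belief (1.0) behaves like formal logic's True."""
        if cf1 >= 0 and cf2 >= 0:
            return cf1 + cf2 * (1 - cf1)           # confirmations accumulate
        if cf1 <= 0 and cf2 <= 0:
            return cf1 + cf2 * (1 + cf1)           # disconfirmations accumulate
        return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))  # conflicting evidence

    print(combine(0.6, 0.5))    # 0.8: two confirmations strengthen belief
    print(combine(0.6, -0.4))   # 0.33...: conflicting evidence weakens it
    ```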

  22. Multi-Valued Logic, Neutrosophy, and Schrödinger Equation

    NASA Astrophysics Data System (ADS)

    Smarandache, Florentin; Christianto, Victor

    2017-04-01

    We discuss some paradoxes in Quantum Mechanics from the viewpoint of multi-valued logic, pioneered by Lukasiewicz, and the recent concept of Neutrosophic Logic. Essentially, this new concept offers new insights on the idea of 'identity', which too often has been accepted as given. Neutrosophy itself was developed in an attempt to generalize fuzzy logic, introduced by L. Zadeh. The discussion is motivated by the observation that, despite almost eight decades, there are indications that some of the paradoxes known in Quantum Physics are not yet solved. To our knowledge, this is because the solution of those paradoxes requires a re-examination of the foundations of logic itself, in particular of the notion of identity and the multi-valuedness of entities. The discussion is also intended for young physicists who think that somewhere there should be a 'complete' explanation of these paradoxes in Quantum Mechanics. If it does not answer all of their questions, it is our hope that it at least offers a new alternative viewpoint on these old questions.

  23. Integrated logic circuits using single-atom transistors

    PubMed Central

    Mol, J. A.; Verduijn, J.; Levine, R. D.; Remacle, F.

    2011-01-01

    Scaling down the size of computing circuits is about to reach the limitations imposed by the discrete atomic structure of matter. Reducing the power requirements and thereby dissipation of integrated circuits is also essential. New paradigms are needed to sustain the rate of progress that society has become used to. Single-atom transistors, SATs, cascaded in a circuit are proposed as a promising route that is compatible with existing technology. We demonstrate the use of quantum degrees of freedom to perform logic operations in a complementary metal-oxide-semiconductor device. Each SAT performs multilevel logic by electrically addressing the electronic states of a dopant atom. A single electron transistor decodes the physical multivalued output into the conventional binary output. A robust scalable circuit of two concatenated full adders is reported, where by utilizing charge and quantum degrees of freedom, the functionality of the transistor is pushed far beyond that of a simple switch. PMID:21808050

  24. Parity generator and parity checker in the modified trinary number system using Savart plate and spatial light modulator

    NASA Astrophysics Data System (ADS)

    Ghosh, Amal K.

    2010-09-01

    Parity generators and checkers are among the most important circuits in communication systems. With the development of multi-valued logic (MVL), such circuits are much needed in the modified trinary number (MTN) system; the proposed system implements them with recently developed optoelectronic technology. The system also meets the tremendous need for speed by exploiting Savart plates and spatial light modulators (SLM) in an optical tree architecture (OTA).
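
    Generic mod-3 parity generation and checking (not necessarily the MTN scheme of the record) looks like this in software:

    ```python
    def trinary_parity(trits):
        """Parity-trit generator: makes the digit sum a multiple of 3."""
        return (-sum(trits)) % 3

    def check(word_with_parity):
        """Parity checker: True when the received word is consistent."""
        return sum(word_with_parity) % 3 == 0

    word = [2, 0, 1, 2]
    sent = word + [trinary_parity(word)]
    print(sent, check(sent))                 # parity trit 1, check passes
    received = sent[:2] + [(sent[2] + 1) % 3] + sent[3:]   # single-trit error
    print(check(received))                   # False: error detected
    ```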

  25. The past, present and future of cyber-physical systems: a focus on models.

    PubMed

    Lee, Edward A

    2015-02-26

    This paper is about better engineering of cyber-physical systems (CPSs) through better models. Deterministic models have historically proven extremely useful and arguably form the kingpin of the industrial revolution and the digital and information technology revolutions. Key deterministic models that have proven successful include differential equations, synchronous digital logic and single-threaded imperative programs. Cyber-physical systems, however, combine these models in such a way that determinism is not preserved. Two projects show that deterministic CPS models with faithful physical realizations are possible and practical. The first project is PRET, which shows that the timing precision of synchronous digital logic can be practically made available at the software level of abstraction. The second project is Ptides (programming temporally-integrated distributed embedded systems), which shows that deterministic models for distributed cyber-physical systems have practical faithful realizations. These projects are existence proofs that deterministic CPS models are possible and practical.

  26. The Past, Present and Future of Cyber-Physical Systems: A Focus on Models

    PubMed Central

    Lee, Edward A.

    2015-01-01

    This paper is about better engineering of cyber-physical systems (CPSs) through better models. Deterministic models have historically proven extremely useful and arguably form the kingpin of the industrial revolution and the digital and information technology revolutions. Key deterministic models that have proven successful include differential equations, synchronous digital logic and single-threaded imperative programs. Cyber-physical systems, however, combine these models in such a way that determinism is not preserved. Two projects show that deterministic CPS models with faithful physical realizations are possible and practical. The first project is PRET, which shows that the timing precision of synchronous digital logic can be practically made available at the software level of abstraction. The second project is Ptides (programming temporally-integrated distributed embedded systems), which shows that deterministic models for distributed cyber-physical systems have practical faithful realizations. These projects are existence proofs that deterministic CPS models are possible and practical. PMID:25730486

  27. Light-Triggered Ternary Device and Inverter Based on Heterojunction of van der Waals Materials.

    PubMed

    Shim, Jaewoo; Jo, Seo-Hyeon; Kim, Minwoo; Song, Young Jae; Kim, Jeehwan; Park, Jin-Hong

    2017-06-27

    Multivalued logic (MVL) devices/circuits have received considerable attention because the binary logic used in current Si complementary metal-oxide-semiconductor (CMOS) technology cannot handle the predicted information throughputs and energy demands of the future. To realize MVL, the conventional transistor platform needs to be redesigned to have two or more distinct threshold voltages (VTH). Here, we report a finding: the photoinduced drain current in graphene/WSe2 heterojunction transistors unusually decreases with increasing gate voltage under illumination, which we refer to as the light-induced negative differential transconductance (L-NDT) phenomenon. We also prove that the L-NDT phenomenon in specific bias ranges originates from a variable potential barrier at the graphene/WSe2 junction due to a gate-controllable graphene electrode. This finding allows us to conceive graphene/WSe2-based MVL circuits by using the ID-VG characteristics with two distinct VTH values. Based on this finding, we further demonstrate a light-triggered ternary inverter circuit with three stable logical states (ΔVout of each state < 0.05 V). Our study offers a pathway to realizing MVL systems.

  28. Origins of Chaos in Autonomous Boolean Networks

    NASA Astrophysics Data System (ADS)

    Socolar, Joshua; Cavalcante, Hugo; Gauthier, Daniel; Zhang, Rui

    2010-03-01

    Networks with nodes consisting of ideal Boolean logic gates are known to display either steady states, periodic behavior, or an ultraviolet catastrophe where the number of logic-transition events circulating in the network per unit time grows as a power-law. In an experiment, non-ideal behavior of the logic gates prevents the ultraviolet catastrophe and may lead to deterministic chaos. We identify certain non-ideal features of real logic gates that enable chaos in experimental networks. We find that short-pulse rejection and the asymmetry between the logic states tend to engender periodic behavior. On the other hand, a memory effect termed 'degradation' can generate chaos. Our results strongly suggest that deterministic chaos can be expected in a large class of experimental Boolean-like networks. Such devices may find application in a variety of technologies requiring fast complex waveforms or flat power spectra. The non-ideal effects identified here also have implications for the statistics of attractors in large complex networks.

  29. Auxiliary principle technique and iterative algorithm for a perturbed system of generalized multi-valued mixed quasi-equilibrium-like problems.

    PubMed

    Rahaman, Mijanur; Pang, Chin-Tzong; Ishtyak, Mohd; Ahmad, Rais

    2017-01-01

    In this article, we introduce a perturbed system of generalized mixed quasi-equilibrium-like problems involving multi-valued mappings in Hilbert spaces. To calculate the approximate solutions of the perturbed system of generalized multi-valued mixed quasi-equilibrium-like problems, firstly we develop a perturbed system of auxiliary generalized multi-valued mixed quasi-equilibrium-like problems, and then by using the celebrated Fan-KKM technique, we establish the existence and uniqueness of solutions of the perturbed system of auxiliary generalized multi-valued mixed quasi-equilibrium-like problems. By deploying an auxiliary principle technique and an existence result, we formulate an iterative algorithm for solving the perturbed system of generalized multi-valued mixed quasi-equilibrium-like problems. Lastly, we study the strong convergence analysis of the proposed iterative sequences under monotonicity and some mild conditions. These results are new and generalize some known results in this field.

  30. Therapeutic target discovery using Boolean network attractors: improvements of kali

    PubMed Central

    Guziolowski, Carito

    2018-01-01

    In a previous article, an algorithm for identifying therapeutic targets in Boolean networks modelling pathological mechanisms was introduced. In the present article, the improvements made on this algorithm, named kali, are described. These improvements are (i) the possibility to work on asynchronous Boolean networks, (ii) a finer assessment of therapeutic targets and (iii) the possibility to use multivalued logic. kali assumes that the attractors of a dynamical system, such as a Boolean network, are associated with the phenotypes of the modelled biological system. Given a logic-based model of pathological mechanisms, kali searches for therapeutic targets able to reduce the reachability of the attractors associated with pathological phenotypes, thus reducing their likeliness. kali is illustrated on an example network and used on a biological case study. The case study is a published logic-based model of bladder tumorigenesis from which kali returns consistent results. However, like any computational tool, kali can predict but cannot replace human expertise: it is a supporting tool for coping with the complexity of biological systems in the field of drug discovery. PMID:29515890
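
    A minimal sketch of the underlying idea, with an invented three-node network rather than the published bladder model: compare the attractors reachable with and without clamping a candidate target node.

    ```python
    from itertools import product

    # Toy 3-node Boolean network (hypothetical, not the published model).
    def step(state, knockout=None):
        a, b, c = state
        nxt = (b and not c, a or b, not a)
        if knockout is not None:      # candidate target: clamp one node to False
            nxt = tuple(False if i == knockout else v for i, v in enumerate(nxt))
        return nxt

    def attractor_from(state, knockout=None):
        """Follow the deterministic synchronous dynamics until a state repeats;
        the tail of the trajectory is the attractor."""
        seen, trace = {}, []
        while state not in seen:
            seen[state] = len(trace)
            trace.append(state)
            state = step(state, knockout)
        return frozenset(trace[seen[state]:])

    states = list(product((False, True), repeat=3))
    print({attractor_from(s) for s in states})               # three attractors
    print({attractor_from(s, knockout=1) for s in states})   # one survives
    ```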

  31. Non-Linear Effects in Knowledge Production

    NASA Astrophysics Data System (ADS)

    Purica, Ionut

    2007-04-01

    The generation of technological knowledge is paramount to our present development; the production of technological knowledge is governed by the same Cobb-Douglas type model, with the means of research and the intelligence level replacing capital and labor, respectively. We explore the basic behavior of present-day economies that produce technological knowledge along with the 'usual' industrial production, and determine a basic behavior that turns out to be a 'Henon attractor'. Measures are introduced for the gain of technological knowledge and for the information of technological sequences, based respectively on the underlying multi-valued modal logic of technological research and on nonlinear thermodynamic considerations.
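
    For reference, the Henon attractor the record refers to arises from iterating x_{n+1} = 1 - a*x_n^2 + y_n, y_{n+1} = b*x_n with the classic parameters:

    ```python
    import numpy as np

    a, b = 1.4, 0.3                   # classic Henon parameters
    x, y = 0.0, 0.0
    points = []
    for _ in range(10_000):
        x, y = 1.0 - a * x * x + y, b * x
        points.append((x, y))

    pts = np.array(points[100:])      # drop the transient
    print(pts.min(axis=0), pts.max(axis=0))  # orbit stays on a bounded fractal set
    ```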

  32. Large conditional single-photon cross-phase modulation

    NASA Astrophysics Data System (ADS)

    Beck, Kristin; Hosseini, Mahdi; Duan, Yiheng; Vuletic, Vladan

    2016-05-01

    Deterministic optical quantum logic requires a nonlinear quantum process that alters the phase of a quantum optical state by π through interaction with only one photon. Here, we demonstrate a large conditional cross-phase modulation between a signal field, stored inside an atomic quantum memory, and a control photon that traverses a high-finesse optical cavity containing the atomic memory. This approach avoids fundamental limitations associated with multimode effects for traveling optical photons. We measure a conditional cross-phase shift of up to π/3 between the retrieved signal and control photons, and confirm deterministic entanglement between the signal and control modes by extracting a positive concurrence. With a moderate improvement in cavity finesse, our system can reach a coherent phase shift of π at low loss, enabling deterministic and universal photonic quantum logic. Preprint: arXiv:1512.02166 [quant-ph]

  33. Large conditional single-photon cross-phase modulation

    PubMed Central

    Hosseini, Mahdi; Duan, Yiheng; Vuletić, Vladan

    2016-01-01

    Deterministic optical quantum logic requires a nonlinear quantum process that alters the phase of a quantum optical state by π through interaction with only one photon. Here, we demonstrate a large conditional cross-phase modulation between a signal field, stored inside an atomic quantum memory, and a control photon that traverses a high-finesse optical cavity containing the atomic memory. This approach avoids fundamental limitations associated with multimode effects for traveling optical photons. We measure a conditional cross-phase shift of π/6 (and up to π/3 by postselection on photons that remain in the system longer than average) between the retrieved signal and control photons, and confirm deterministic entanglement between the signal and control modes by extracting a positive concurrence. By upgrading to a state-of-the-art cavity, our system can reach a coherent phase shift of π at low loss, enabling deterministic and universal photonic quantum logic. PMID:27519798

  34. Certainty, Determinism, and Predictability in Theories of Instructional Design: Lessons from Science.

    ERIC Educational Resources Information Center

    Jonassen, David H.; And Others

    1997-01-01

    The strongly positivist beliefs on which traditional conceptions of instructional design (ID) are based derive from Aristotelian logic and oversimplify the world, reducing human learning and performance to a repertoire of manipulable behaviors. Reviews the cases against deterministic predictability and discusses hermeneutic, fuzzy logic, and chaos…

  35. What about Social Value?

    ERIC Educational Resources Information Center

    Beauvois, Jean-Leon; Depret, Eric

    2008-01-01

    We focus on three aspects of the articles of Reyna, of Perry, Stupnisky, Daniels and Haynes, and of Murdock, Beauchamp and Hinton. The first aspect is the logic of causal chain, a logic that we differentiate from a more deterministic approach. The second one is the mode of corrective action (attribution retraining) that is planned for students,…

  36. Multiple negative differential resistance devices with ultra-high peak-to-valley current ratio for practical multi-valued logic and memory applications

    NASA Astrophysics Data System (ADS)

    Shin, Sunhae; Rok Kim, Kyung

    2015-06-01

    In this paper, we propose a novel multiple negative differential resistance (NDR) device with an ultra-high peak-to-valley current ratio (PVCR) over 10⁶, obtained by combining a tunnel diode with a conventional MOSFET, which suppresses the valley current to the transistor off-leakage level. Band-to-band tunneling (BTBT) in the tunnel junction provides the first peak, and the second peak and valley are generated from the suppression of the diffusion current in the tunnel diode by the off-state MOSFET. The multiple NDR curves can be controlled by the doping concentration of the tunnel junction and the threshold voltage of the MOSFET. By using complementary multiple NDR devices, a five-state memory is demonstrated with only six transistors.
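
    Extracting a PVCR from a measured folded I-V trace is a simple computation; the curve below is synthetic, illustrative data only.

    ```python
    import numpy as np

    def pvcr(current):
        """Peak-to-valley current ratio of the first NDR region in an I-V sweep:
        first local maximum of the current divided by the minimum after it."""
        current = np.asarray(current, dtype=float)
        k = 1
        while k < len(current) - 1 and current[k + 1] >= current[k]:
            k += 1                     # climb to the first local peak
        return current[k] / current[k:].min()

    # Synthetic folded I-V curve with one NDR region (illustrative numbers):
    i = [0.0, 0.5, 2.0, 1.2, 0.4, 0.002, 0.01, 0.3, 1.5]
    print(pvcr(i))   # 2.0 / 0.002 = 1000.0
    ```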

  37. A multi-valued neutrosophic qualitative flexible approach based on likelihood for multi-criteria decision-making problems

    NASA Astrophysics Data System (ADS)

    Peng, Juan-juan; Wang, Jian-qiang; Yang, Wu-E.

    2017-01-01

    In this paper, multi-criteria decision-making (MCDM) problems based on the qualitative flexible multiple criteria method (QUALIFLEX), in which the criteria values are expressed by multi-valued neutrosophic information, are investigated. First, multi-valued neutrosophic sets (MVNSs), which allow the truth-membership function, indeterminacy-membership function and falsity-membership function to have a set of crisp values between zero and one, are introduced. Then the likelihood of multi-valued neutrosophic number (MVNN) preference relations is defined and the corresponding properties are also discussed. Finally, an extended QUALIFLEX approach based on likelihood is explored to solve MCDM problems where the assessments of alternatives are in the form of MVNNs; furthermore an example is provided to illustrate the application of the proposed method, together with a comparison analysis.
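
    The data structure is easy to picture: an MVNN carries sets of truth, indeterminacy and falsity values. The score below is a deliberately simple, hypothetical aggregate for ranking alternatives, not the likelihood-based comparison developed in the record.

    ```python
    from itertools import product

    def mvnn_score(T, I, F):
        """Average (t + (1 - i) + (1 - f)) / 3 over all combinations of the
        listed values. An illustrative aggregate only; the paper's QUALIFLEX
        procedure uses likelihoods of MVNN preference relations instead."""
        combos = list(product(T, I, F))
        return sum((t + (1 - i) + (1 - f)) / 3 for t, i, f in combos) / len(combos)

    # Two alternatives assessed as multi-valued neutrosophic numbers
    # (truth / indeterminacy / falsity value sets, each a subset of [0, 1]):
    a = ([0.6, 0.7], [0.1], [0.2])
    b = ([0.5], [0.2, 0.3], [0.3])
    print(mvnn_score(*a), mvnn_score(*b))   # a ranks above b
    ```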

  38. Evolutionary Oseen Model for Generalized Newtonian Fluid with Multivalued Nonmonotone Friction Law

    NASA Astrophysics Data System (ADS)

    Migórski, Stanisław; Dudek, Sylwia

    2018-03-01

    The paper deals with the non-stationary Oseen system of equations for the generalized Newtonian incompressible fluid with multivalued and nonmonotone frictional slip boundary conditions. First, we provide a result on existence of a unique solution to an abstract evolutionary inclusion involving the Clarke subdifferential term for a nonconvex function. We employ a method based on a surjectivity theorem for multivalued L-pseudomonotone operators. Then, we exploit the abstract result to prove the weak unique solvability of the Oseen system.

  39. An Investigation of Quantum Dot Super Lattice Use in Nonvolatile Memory and Transistors

    NASA Astrophysics Data System (ADS)

    Mirdha, P.; Parthasarathy, B.; Kondo, J.; Chan, P.-Y.; Heller, E.; Jain, F. C.

    2018-02-01

    Site-specific self-assembled colloidal quantum dots (QDs) deposit in two layers only on a p-type substrate to form a QD superlattice (QDSL). The QDSL structure has been integrated into the floating gate of a nonvolatile memory component and has demonstrated promising results in multi-bit storage, ease of fabrication, and memory retention. Additionally, multi-valued logic devices and circuits have been created using QDSL structures, demonstrating ternary and quaternary logic. With the increasing use of site-specific self-assembled QDSLs, a fundamental understanding of silicon and germanium QDSL charge-storage capability, self-assembly on specific surfaces, uniform distribution, and mini-band formation has to be achieved for successful implementation in devices. In this work, we investigate the differences in electron charge storage by building metal-oxide semiconductor (MOS) capacitors and using capacitance and voltage measurements to quantify the storage capabilities. The self-assembly process and the distribution density of the QDSL are characterized by atomic force microscopy (AFM) on line samples. Additionally, we present a summary of the theoretical density of states in each of the QDSLs.

  40. Transport Properties of a MoS2/WSe2 Heterojunction Transistor and Its Potential for Application.

    PubMed

    Nourbakhsh, Amirhasan; Zubair, Ahmad; Dresselhaus, Mildred S; Palacios, Tomás

    2016-02-10

    This paper studies band-to-band tunneling in the transverse and lateral directions of van der Waals MoS2/WSe2 heterojunctions. We observe room-temperature negative differential resistance (NDR) in a heterojunction diode comprised of few-layer WSe2 stacked on multilayer MoS2. The presence of NDR is attributed to the lateral band-to-band tunneling at the edge of the MoS2/WSe2 heterojunction. The backward tunneling diode shows an average conductance slope of 75 mV/dec with a high curvature coefficient of 62 V⁻¹. Associated with the tunnel-diode characteristics, a positive-to-negative transconductance in the MoS2/WSe2 heterojunction transistors is observed. The transition is induced by strong interlayer coupling between the films, which results in charge density and energy-band modulation. The sign change in transconductance is particularly useful for multivalued logic (MVL) circuits, and we therefore propose and demonstrate for the first time an MVL inverter that shows three levels of logic using one pair of p-type transistors.

  41. Bird's-eye view on noise-based logic.

    PubMed

    Kish, Laszlo B; Granqvist, Claes G; Horvath, Tamas; Klappenecker, Andreas; Wen, He; Bezrukov, Sergey M

    2014-01-01

    Noise-based logic is a practically deterministic logic scheme inspired by the randomness of neural spikes and uses a system of uncorrelated stochastic processes and their superposition to represent the logic state. We briefly discuss various questions such as (i) What does practical determinism mean? (ii) Is noise-based logic a Turing machine? (iii) Is there hope to beat (the dreams of) quantum computation by a classical physical noise-based processor, and what are the minimum hardware requirements for that? Finally, (iv) we address the problem of random number generators and show that the common belief that quantum number generators are superior to classical (thermal) noise-based generators is nothing but a myth.
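
    The core trick, sketched with invented parameters: logic values are independent reference noises, recovered by correlation, and their superposition carries several values at once.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    N = 100_000                      # samples per noise process

    # Two independent "reference noises" represent the logic values 0 and 1.
    ref = {bit: rng.choice((-1.0, 1.0), size=N) for bit in (0, 1)}

    def encode(bit):
        return ref[bit]              # the logic state is the noise process itself

    def decode(signal):
        """Correlate against both references; the matching one stands out."""
        corr = {bit: np.dot(signal, r) / N for bit, r in ref.items()}
        return max(corr, key=corr.get)

    # A superposition of the reference noises carries both values at once:
    superposed = ref[0] + ref[1]
    print(decode(encode(1)))                  # 1
    print(np.dot(superposed, ref[0]) / N)     # ~1: component 0 detectable
    print(np.dot(superposed, ref[1]) / N)     # ~1: component 1 detectable
    ```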

  42. Bird's-eye view on noise-based logic

    NASA Astrophysics Data System (ADS)

    Kish, Laszlo B.; Granqvist, Claes G.; Horvath, Tamas; Klappenecker, Andreas; Wen, He; Bezrukov, Sergey M.

    2014-09-01

    Noise-based logic is a practically deterministic logic scheme inspired by the randomness of neural spikes and uses a system of uncorrelated stochastic processes and their superposition to represent the logic state. We briefly discuss various questions such as (i) What does practical determinism mean? (ii) Is noise-based logic a Turing machine? (iii) Is there hope to beat (the dreams of) quantum computation by a classical physical noise-based processor, and what are the minimum hardware requirements for that? Finally, (iv) we address the problem of random number generators and show that the common belief that quantum number generators are superior to classical (thermal) noise-based generators is nothing but a myth.

  43. Quasilinear parabolic variational inequalities with multi-valued lower-order terms

    NASA Astrophysics Data System (ADS)

    Carl, Siegfried; Le, Vy K.

    2014-10-01

    In this paper, we provide an analytical framework for a multi-valued parabolic variational inequality in a cylindrical domain: find a function u in a closed and convex constraint set K, together with a selection of the multi-valued lower-order term evaluated at u, satisfying the variational inequality, where A is a time-dependent quasilinear elliptic operator and the multi-valued lower-order term is assumed to be upper semicontinuous only, so that Clarke's generalized gradient is included as a special case. Thus, parabolic variational-hemivariational inequalities are special cases of the problem considered here. The extension of parabolic variational-hemivariational inequalities to the general class of multi-valued problems considered in this paper is not only of disciplinary interest, but is motivated by the needs of applications. The main goals are as follows. First, we provide an existence theory for the above-stated problem under coercivity assumptions. Second, in the noncoercive case, we establish an appropriate sub-supersolution method that allows us to obtain existence, comparison, and enclosure results. Third, the order structure of the solution set enclosed by sub-supersolutions is revealed. In particular, it is shown that the solution set within the sector of sub-supersolutions is a directed set. As an application, a multi-valued parabolic obstacle problem is treated.

  44. Strange nonchaotic attractors for computation

    NASA Astrophysics Data System (ADS)

    Sathish Aravindh, M.; Venkatesan, A.; Lakshmanan, M.

    2018-05-01

    We investigate the response of quasiperiodically driven nonlinear systems exhibiting strange nonchaotic attractors (SNAs) to deterministic input signals. We show that if one uses two square waves in an aperiodic manner as input to a quasiperiodically driven double-well Duffing oscillator system, the response of the system can produce logical output controlled by such a forcing. Changing the threshold or biasing of the system changes the output to another logic operation and memory latch. The interplay of nonlinearity and quasiperiodic forcing yields logical behavior, and the emergent outcome of such a system is a logic gate. It is further shown that the logical behaviors persist even for an experimental noise floor. Thus the SNA turns out to be an efficient tool for computation.
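
    A drastically simplified, noise-free stand-in for the scheme (an overdamped bistable system rather than a quasiperiodically driven Duffing oscillator, with invented parameters) still shows the threshold read-out idea and the bias-controlled switch between gates:

    ```python
    def respond(inputs, bias, dt=0.01, steps=4000):
        """Drive the overdamped bistable system  x' = x - x**3 + I1 + I2 + bias
        from x = 0 and threshold the settled state. A toy analogue of the
        dynamical logic read-out, not the paper's SNA system."""
        x = 0.0
        drive = sum(0.5 if bit else -0.5 for bit in inputs) + bias
        for _ in range(steps):
            x += dt * (x - x**3 + drive)
        return 1 if x > 0 else 0

    # Changing the bias changes which gate the same system computes:
    for bias, name in ((0.4, "OR "), (-0.4, "AND")):
        table = [respond((a, b), bias) for a in (0, 1) for b in (0, 1)]
        print(name, table)   # OR  -> [0, 1, 1, 1], AND -> [0, 0, 0, 1]
    ```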

  45. [Fuzzy logic in urology. How to reason in inaccurate terms].

    PubMed

    Vírseda Chamorro, Miguel; Salinas Casado, Jesus; Vázquez Alba, David

    2004-05-01

    Occidental thinking is basically binary, based on opposites, and classical logic constitutes a systematization of this thinking. The methods of the pure sciences such as physics are based on systematic measurement, analysis and synthesis; in this way, nature is described by deterministic differential equations. Medical knowledge does not adjust well to the deterministic equations of physics, so probabilistic methods are employed. However, this method is not free of problems, both theoretical and practical, and it is often not possible even to know with certainty the probabilities of most events. On the other hand, the application of binary logic to medicine in general, and to urology in particular, meets serious difficulties, such as the imprecise character of the definition of most diseases and the uncertainty associated with most medical acts. These are responsible for the fact that many medical recommendations are made in literary language that is inaccurate, inconsistent and incoherent. Fuzzy logic is a way of reasoning coherently with inaccurate concepts. This logic was proposed by Lotfi Zadeh in 1965 and is based on two principles: the theory of fuzzy sets and the use of fuzzy rules. A fuzzy set is one whose elements have a degree of membership between 0 and 1; each fuzzy set is associated with an imprecise property or linguistic variable. Fuzzy rules use the principles of classical logic adapted to fuzzy sets, taking the degree of membership of each element in the reference fuzzy set as the truth value. Fuzzy logic makes it possible to give coherent urologic recommendations (e.g., in which patients is PSA testing indicated? what should be done about an elevated PSA?) and to perform diagnoses adapted to the uncertainty of diagnostic tests (e.g., data obtained from pressure-flow studies in females).

  46. Room-Temperature Quantum Ballistic Transport in Monolithic Ultrascaled Al-Ge-Al Nanowire Heterostructures.

    PubMed

    Sistani, Masiar; Staudinger, Philipp; Greil, Johannes; Holzbauer, Martin; Detz, Hermann; Bertagnolli, Emmerich; Lugstein, Alois

    2017-08-09

    Conductance quantization at room temperature is a key requirement for utilizing ballistic transport in, e.g., high-performance, low-power-dissipation transistors operating at the upper limit of "on"-state conductance, or in multivalued logic gates. So far, studies of conductance quantization have been restricted to high-mobility materials at ultralow temperatures and require sophisticated nanostructure formation techniques and precise lithography for contact formation. Utilizing a thermally induced exchange reaction between single-crystalline Ge nanowires and Al pads, we achieved monolithic Al-Ge-Al NW heterostructures with ultrasmall Ge segments contacted by self-aligned, quasi-one-dimensional crystalline Al leads. By integration in electrostatically modulated back-gated field-effect transistors, we demonstrate the first experimental observation of room-temperature quantum ballistic transport in Ge, favorable for integration in complementary metal-oxide-semiconductor platform technology.

  7. The Bilinear Product Model of Hysteresis Phenomena

    NASA Astrophysics Data System (ADS)

    Kádár, György

    1989-01-01

    In ferromagnetic materials, non-reversible magnetization processes are represented by rather complex hysteresis curves. The phenomenological description of such curves requires multi-valued, yet unambiguous, deterministic functions. The history-dependent calculation of consecutive Everett integrals of the two-variable Preisach function can account for the main features of hysteresis curves in uniaxial magnetic materials. The traditional Preisach model has recently been modified on the basis of population-dynamics considerations, removing the unrealistic congruency property of the model. The Preisach function was proposed to be a product of two factors of distinct physical significance: a magnetization-dependent function taking into account the overall magnetization state of the body, and a bilinear form of a single-variable, magnetic-field-dependent switching probability function. The most important statement of the bilinear product model is that the switching process of individual particles is to be separated from the book-keeping procedure of their states. This empirical model of hysteresis can easily be extended to other irreversible physical processes, such as first-order phase transitions.
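
    For readers unfamiliar with the Preisach construction, the following minimal Python sketch implements a plain discrete Preisach model with a uniform density of relay hysterons; it reproduces the history dependence and minor loops discussed above, but not the paper's bilinear product factorization, which would replace the uniform density with the proposed two-factor form.

```python
# Minimal sketch: discrete Preisach model with uniform hysteron density
# (illustrates history dependence; not the paper's bilinear product form).
import numpy as np

N = 60
thresholds = np.linspace(-1.0, 1.0, N)   # switching thresholds
state = -np.ones((N, N))                 # relay states on the alpha >= beta half-plane

def apply_field(h):
    """Update every relay hysteron for field h; return the mean magnetization."""
    for i, alpha in enumerate(thresholds):
        for j, beta in enumerate(thresholds[: i + 1]):   # keep alpha >= beta
            if h >= alpha:
                state[i, j] = +1.0       # switch up once h exceeds alpha
            elif h <= beta:
                state[i, j] = -1.0       # switch down once h falls below beta
    mask = np.tril(np.ones((N, N)))      # valid (alpha >= beta) triangle
    return float((state * mask).sum() / mask.sum())

for h in [1.0, -0.6, 0.4, -0.2]:         # a decaying field traces minor loops
    print(f"H = {h:+.1f}  ->  M = {apply_field(h):+.3f}")
```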

  8. Disentangling the Cosmic Web with Lagrangian Submanifold

    NASA Astrophysics Data System (ADS)

    Shandarin, Sergei F.; Medvedev, Mikhail V.

    2016-10-01

    The Cosmic Web is a complicated, highly entangled geometrical object. Remarkably, it has formed from practically Gaussian initial conditions, which may be regarded as the simplest departure from an exactly uniform universe, through a purely deterministic mapping. The full complexity of the web is revealed neither in configuration space nor in velocity space considered separately; it can be fully appreciated only in six-dimensional (6D) phase space. However, the study of phase space is complicated by the fact that every projection of it onto a three-dimensional (3D) space is multivalued and contains caustics. In addition, phase space is not a metric space, which complicates the study of geometry. We suggest using the Lagrangian submanifold, i.e., x = x(q), where both x and q are 3D vectors, instead of the phase space for studying the complexity of the cosmic web in cosmological N-body dark matter simulations. Being fully equivalent in the dynamical sense to the phase space, it has the advantage of being single-valued and also a metric space.

  9. A Pipelined Non-Deterministic Finite Automaton-Based String Matching Scheme Using Merged State Transitions in an FPGA

    PubMed Central

    Kim, HyunJin; Choi, Kang-Il

    2016-01-01

    This paper proposes a pipelined non-deterministic finite automaton (NFA)-based string matching scheme using field programmable gate array (FPGA) implementation. The characteristics of the NFA such as shared common prefixes and no failure transitions are considered in the proposed scheme. In the implementation of the automaton-based string matching using an FPGA, each state transition is implemented with a look-up table (LUT) for the combinational logic circuit between registers. In addition, multiple state transitions between stages can be performed in a pipelined fashion. In this paper, it is proposed that multiple one-to-one state transitions, called merged state transitions, can be performed with an LUT. By cutting down the number of used LUTs for implementing state transitions, the hardware overhead of combinational logic circuits is greatly reduced in the proposed pipelined NFA-based string matching scheme. PMID:27695114
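
    The FPGA scheme itself is hardware-specific, but the underlying idea of updating all active NFA states in one merged operation has a classic software analogue, the bit-parallel Shift-And algorithm, sketched below in Python (this illustrates NFA state-vector updating, not the paper's LUT pipeline).

```python
# Illustrative software analogue (not the paper's FPGA design): the classic
# bit-parallel Shift-And algorithm simulates a pattern-matching NFA by
# updating all active states with one word operation per input character.

def shift_and_search(pattern, text):
    # Precompute a bitmask per character: bit i is set iff pattern[i] == c.
    masks = {}
    for i, c in enumerate(pattern):
        masks[c] = masks.get(c, 0) | (1 << i)
    accept = 1 << (len(pattern) - 1)
    state = 0  # bit i set <=> NFA state i is active
    hits = []
    for pos, c in enumerate(text):
        # One "merged" transition for all states: shift, inject start, filter.
        state = ((state << 1) | 1) & masks.get(c, 0)
        if state & accept:
            hits.append(pos - len(pattern) + 1)
    return hits

print(shift_and_search("aba", "dababab"))  # [1, 3]
```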

  11. Single-photon non-linear optics with a quantum dot in a waveguide

    NASA Astrophysics Data System (ADS)

    Javadi, A.; Söllner, I.; Arcari, M.; Hansen, S. Lindskov; Midolo, L.; Mahmoodian, S.; Kiršanskė, G.; Pregnolato, T.; Lee, E. H.; Song, J. D.; Stobbe, S.; Lodahl, P.

    2015-10-01

    Strong non-linear interactions between photons enable logic operations for both classical and quantum-information technology. Unfortunately, non-linear interactions are usually feeble and therefore all-optical logic gates tend to be inefficient. A quantum emitter deterministically coupled to a propagating mode fundamentally changes the situation, since each photon inevitably interacts with the emitter, and highly correlated many-photon states may be created. Here we show that a single quantum dot in a photonic-crystal waveguide can be used as a giant non-linearity sensitive at the single-photon level. The non-linear response is revealed from the intensity and quantum statistics of the scattered photons, and contains contributions from an entangled photon-photon bound state. The quantum non-linearity will find immediate applications for deterministic Bell-state measurements and single-photon transistors and paves the way to scalable waveguide-based photonic quantum-computing architectures.

  12. Real-time logic modelling on SpaceWire

    NASA Astrophysics Data System (ADS)

    Zhou, Qiang; Ma, Yunpeng; Fei, Haidong; Wang, Xingyou

    2017-04-01

    SpaceWire is a standard for on-board satellite networks and the basis for future data-handling architectures. However, it cannot meet the determinism requirements of safety- and time-critical applications in spacecraft, where the delay of real-time (RT) message streams must be guaranteed. SpaceWire-D was therefore developed to provide deterministic delivery over a SpaceWire network. Formal analysis and verification of real-time systems is critical to their development and safe implementation, and is a prerequisite for obtaining their safety certification. Failure to meet specified timing constraints, such as deadlines in hard real-time systems, may lead to catastrophic results. In this paper, a formal verification method, Real-Time Logic (RTL), is proposed to specify and verify the timing properties of a SpaceWire-D network. Based on the principles of the SpaceWire-D protocol, we first analyze the timing properties of fundamental transactions, such as RMAP WRITE and RMAP READ. The RMAP WRITE transaction structure is then modeled in Real-Time Logic and Presburger arithmetic representations, and the associated constraint graph and safety analysis are provided. Finally, we suggest that the RTL method can be useful for protocol evaluation and for providing recommendations for further protocol evolution.
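
    RTL safety analysis of this kind reduces deadline questions to the satisfiability of Presburger-arithmetic constraints over event occurrence times. The sketch below (hypothetical timing bounds, not the paper's constraint set) checks a deadline with the Z3 solver: if the violation hypothesis is satisfiable the deadline can be missed, and unsatisfiability proves it is always met.

```python
# Minimal sketch with hypothetical bounds: an RTL/Presburger-style deadline
# check posed as a satisfiability problem and handed to the Z3 SMT solver.
from z3 import Ints, Solver, sat

req, cmd, reply, deadline = Ints("req cmd reply deadline")
s = Solver()
s.add(cmd - req >= 2, cmd - req <= 4)       # command delivery jitter bounds
s.add(reply - cmd >= 3, reply - cmd <= 5)   # target turnaround bounds
s.add(deadline == req + 10)                 # RT stream deadline after the request
s.add(reply > deadline)                     # violation hypothesis: reply too late
# sat -> a schedule missing the deadline exists; unsat -> deadline guaranteed.
print("deadline can be missed" if s.check() == sat else "deadline guaranteed")
```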

  13. Reconfigurable logic via gate controlled domain wall trajectory in magnetic network structure

    PubMed Central

    Murapaka, C.; Sethi, P.; Goolaup, S.; Lew, W. S.

    2016-01-01

    An all-magnetic logic scheme has the advantages of being non-volatile and energy efficient over the conventional transistor based logic devices. In this work, we present a reconfigurable magnetic logic device which is capable of performing all basic logic operations in a single device. The device exploits the deterministic trajectory of domain wall (DW) in ferromagnetic asymmetric branch structure for obtaining different output combinations. The programmability of the device is achieved by using a current-controlled magnetic gate, which generates a local Oersted field. The field generated at the magnetic gate influences the trajectory of the DW within the structure by exploiting its inherent transverse charge distribution. DW transformation from vortex to transverse configuration close to the output branch plays a pivotal role in governing the DW chirality and hence the output. By simply switching the current direction through the magnetic gate, two universal logic gate functionalities can be obtained in this device. Using magnetic force microscopy imaging and magnetoresistance measurements, all basic logic functionalities are demonstrated. PMID:26839036

  14. Dynamic clustering detection through multi-valued descriptors of dermoscopic images.

    PubMed

    Cozza, Valentina; Guarracino, Maria Rosario; Maddalena, Lucia; Baroni, Adone

    2011-09-10

    This paper introduces a dynamic clustering methodology based on multi-valued descriptors of dermoscopic images. The main idea is to support medical diagnosis by deciding whether pigmented skin lesions belonging to an uncertain set are nearer to malignant melanoma or to benign nevi. Melanoma is the most deadly skin cancer, and early diagnosis is a current challenge for clinicians. Most data analysis algorithms for skin lesion discrimination focus on segmentation and on the extraction of features of categorical or numerical type. As an alternative approach, this paper introduces two new concepts: first, it considers multi-valued data, in which lesions are described not only by scalar variables but also by interval or histogram variables; second, it introduces a dynamic clustering method based on the Wasserstein distance to compare multi-valued data. The overall strategy of analysis can be summarized in the following steps: first, a segmentation of the dermoscopic images allows us to identify a set of multi-valued descriptors; second, a discriminant analysis is performed on a set of images with an a priori classification, so that it is possible to detect which features discriminate benign from malignant lesions; and third, the proposed dynamic clustering method is applied to the uncertain cases, which need to be assigned to one of the two previously mentioned groups. Results based on clinical data show that the grading of specific descriptors associated with dermoscopic characteristics provides a novel way to characterize uncertain lesions that can help the dermatologist's diagnosis. Copyright © 2011 John Wiley & Sons, Ltd.
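
    The assignment step can be illustrated in a few lines: given histogram-valued descriptors, an uncertain lesion is assigned to the group whose prototype distribution is closer in Wasserstein distance. The sketch below uses synthetic one-dimensional data and is only a schematic of the assignment rule, not the paper's full dynamic clustering algorithm.

```python
# Minimal sketch (synthetic data, assumed 1-D histogram descriptors): assign an
# uncertain lesion to the nearer cluster prototype via the Wasserstein distance.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
benign_proto = rng.normal(0.3, 0.05, 500)      # prototype descriptor distributions
malignant_proto = rng.normal(0.6, 0.10, 500)
uncertain = rng.normal(0.55, 0.08, 200)        # histogram-valued lesion descriptor

d_benign = wasserstein_distance(uncertain, benign_proto)
d_malig = wasserstein_distance(uncertain, malignant_proto)
label = "malignant-like" if d_malig < d_benign else "benign-like"
print(f"d(benign) = {d_benign:.3f}, d(malignant) = {d_malig:.3f} -> {label}")
```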

  15. Optimal entangling operations between deterministic blocks of qubits encoded into single photons

    NASA Astrophysics Data System (ADS)

    Smith, Jake A.; Kaplan, Lev

    2018-01-01

    Here, we numerically simulate probabilistic elementary entangling operations between rail-encoded photons for the purpose of scalable universal quantum computation or communication. We propose grouping logical qubits into single-photon blocks wherein single-qubit rotations and the controlled-not (cnot) gate are fully deterministic and simple to implement. Interblock communication is then allowed through said probabilistic entangling operations. We find a promising trend in the increasing probability of successful interblock communication as we increase the number of optical modes operated on by our elementary entangling operations.

  16. ModelPlex: Verified Runtime Validation of Verified Cyber-Physical System Models

    DTIC Science & Technology

    2014-07-01

    Nondeterministic choice (〈∪〉), deterministic assignment (〈:=〉) and logical connectives (∧r etc.) replace current facts with simpler ones or branch... By sequent proof rule ∃r, this existentially quantified variable is instantiated with an arbitrary term θ, which is often a new logical variable that is implicitly existentially quantified [27]. Weakening (Wr) removes facts that are no longer necessary. Rule (〈∗〉): from ∃X 〈x := X〉φ infer 〈x := ∗〉φ; rule (∃r): Γ ⊢ φ(θ...

  17. Method of developing all-optical trinary JK, D-type, and T-type flip-flops using semiconductor optical amplifiers.

    PubMed

    Garai, Sisir Kumar

    2012-04-10

    To meet the demands of very fast and agile optical networks, the optical processors in a network system should have a very fast execution rate and large information-handling and information-storage capacities. Multivalued logic operations and multistate optical flip-flops are the basic building blocks of such fast optical computing and data processing systems. In the past two decades, many methods of implementing all-optical flip-flops have been proposed. Most of these suffer from speed limitations because of the slow switching response of active devices. The frequency encoding technique is used here because of its many advantages: a frequency-encoded signal preserves its identity throughout data communication irrespective of the loss of light energy due to reflection, refraction, attenuation, etc. Very fast polarization-rotation-based switching of semiconductor optical amplifiers increases the processing speed, while tristate optical flip-flops increase the information-handling capacity.

  18. Rewriting Logic Semantics of a Plan Execution Language

    NASA Technical Reports Server (NTRS)

    Dowek, Gilles; Munoz, Cesar A.; Rocha, Camilo

    2009-01-01

    The Plan Execution Interchange Language (PLEXIL) is a synchronous language developed by NASA to support autonomous spacecraft operations. In this paper, we propose a rewriting logic semantics of PLEXIL in Maude, a high-performance logical engine. The rewriting logic semantics is by itself a formal interpreter of the language and can be used as a semantic benchmark for the implementation of PLEXIL executives. The implementation in Maude has the additional benefit of making available to PLEXIL designers and developers all the formal analysis and verification tools provided by Maude. The formalization of the PLEXIL semantics in rewriting logic poses an interesting challenge due to the synchronous nature of the language and the prioritized rules defining its semantics. To overcome this difficulty, we propose a general procedure for simulating synchronous set relations in rewriting logic that is sound and, for deterministic relations, complete. We also report on the finding of two issues at the design level of the original PLEXIL semantics that were identified with the help of the executable specification in Maude.

  19. Periodic activation function and a modified learning algorithm for the multivalued neuron.

    PubMed

    Aizenberg, Igor

    2010-12-01

    In this paper, we consider a new periodic activation function for the multivalued neuron (MVN). The MVN is a neuron with complex-valued weights and inputs/output located on the unit circle. Although the MVN outperforms many other neurons and MVN-based neural networks have shown high potential, the MVN still has a limited capability of learning highly nonlinear functions. The periodic activation function introduced in this paper makes it possible to learn nonlinearly separable problems and non-threshold multiple-valued functions using a single multivalued neuron. We call this neuron a multivalued neuron with a periodic activation function (MVN-P). The MVN-P's functionality is much higher than that of the regular MVN, and the MVN-P is more efficient in solving various classification problems. A learning algorithm based on the error-correction rule for the MVN-P is also presented. It is shown that a single MVN-P can easily learn and solve benchmark classification problems that were considered unsolvable using a single neuron. It is also shown that a universal binary neuron, which can learn nonlinearly separable Boolean functions, and a regular MVN are particular cases of the MVN-P.
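
    For concreteness, the sketch below implements the standard discrete MVN activation, which maps the weighted sum to a k-th root of unity according to the angular sector in which it falls; the paper's periodic variant further subdivides and wraps these sectors, which is omitted here.

```python
# Minimal sketch of the standard discrete MVN activation: the complex-valued
# weighted sum is mapped to a k-th root of unity by its angular sector.
import numpy as np

def mvn_activation(z, k):
    """Map weighted sum z to the k-th root of unity of its angular sector."""
    sector = int(np.floor((np.angle(z) % (2 * np.pi)) / (2 * np.pi / k)))
    return np.exp(2j * np.pi * sector / k)

# Example: a 4-valued neuron with complex weights and unit-circle inputs.
k = 4
weights = np.array([0.5 + 0.2j, -0.3 + 0.8j])
inputs = np.exp(2j * np.pi * np.array([1, 3]) / k)  # logic values 1 and 3
z = np.dot(weights, inputs)
print(mvn_activation(z, k))  # (1+0j): the neuron outputs logic value 0
```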

  20. A random walk on water (Henry Darcy Medal Lecture)

    NASA Astrophysics Data System (ADS)

    Koutsoyiannis, D.

    2009-04-01

    Randomness and uncertainty had been well appreciated in hydrology and water resources engineering in their initial steps as scientific disciplines. However, this changed through the years and, following other geosciences, hydrology adopted a naïve view of randomness in natural processes. Such a view separates natural phenomena into two mutually exclusive types, random or stochastic, and deterministic. When a classification of a specific process into one of these two types fails, then a separation of the process into two different, usually additive, parts is typically devised, each of which may be further subdivided into subparts (e.g., deterministic subparts such as periodic and aperiodic or trends). This dichotomous logic is typically combined with a manichean perception, in which the deterministic part supposedly represents cause-effect relationships and thus is physics and science (the "good"), whereas randomness has little relationship with science and no relationship with understanding (the "evil"). Probability theory and statistics, which traditionally provided the tools for dealing with randomness and uncertainty, have been regarded by some as the "necessary evil" but not as an essential part of hydrology and geophysics. Some took a step further to banish them from hydrology, replacing them with deterministic sensitivity analysis and fuzzy-logic representations. Others attempted to demonstrate that irregular fluctuations observed in natural processes are au fond manifestations of underlying chaotic deterministic dynamics with low dimensionality, thus attempting to render probabilistic descriptions unnecessary. Some of the above recent developments are simply flawed because they make erroneous use of probability and statistics (which, remarkably, provide the tools for such analyses), whereas the entire underlying logic is just a false dichotomy. To see this, it suffices to recall that Pierre Simon Laplace, perhaps the most famous proponent of determinism in the history of philosophy of science (cf. Laplace's demon), is, at the same time, one of the founders of probability theory, which he regarded as "nothing but common sense reduced to calculation". This harmonizes with James Clerk Maxwell's view that "the true logic for this world is the calculus of Probabilities" and was more recently and epigrammatically formulated in the title of Edwin Thompson Jaynes's book "Probability Theory: The Logic of Science" (2003). Abandoning dichotomous logic, either on ontological or epistemic grounds, we can identify randomness or stochasticity with unpredictability. Admitting that (a) uncertainty is an intrinsic property of nature; (b) causality implies dependence of natural processes in time and thus suggests predictability; but, (c) even the tiniest uncertainty (e.g., in initial conditions) may result in unpredictability after a certain time horizon, we may shape a stochastic representation of natural processes that is consistent with Karl Popper's indeterministic world view. In this representation, probability quantifies uncertainty according to the Kolmogorov system, in which probability is a normalized measure, i.e., a function that maps sets (areas where the initial conditions or the parameter values lie) to real numbers (in the interval [0, 1]). 
In such a representation, predictability (suggested by deterministic laws) and unpredictability (randomness) coexist, are not separable or additive components, and it is a matter of specifying the time horizon of prediction to decide which of the two dominates. An elementary numerical example has been devised to illustrate the above ideas and demonstrate that they offer a pragmatic and useful guide for practice, rather than just pertaining to philosophical discussions. A chaotic model, with fully and a priori known deterministic dynamics and deterministic inputs (without any random agent), is assumed to represent the hydrological balance in an area partly covered by vegetation. Experimentation with this toy model demonstrates, inter alia, that: (1) for short time horizons the deterministic dynamics is able to give good predictions; but (2) these predictions become extremely inaccurate and useless for long time horizons; (3) for such horizons a naïve statistical prediction (average of past data) which fully neglects the deterministic dynamics is more skilful; and (4) if this statistical prediction, in addition to past data, is combined with the probability theory (the principle of maximum entropy, in particular), it can provide a more informative prediction. Also, the toy model shows that the trajectories of the system state (and derivative properties thereof) do not resemble a regular (e.g., periodic) deterministic process nor a purely random process, but exhibit patterns indicating anti-persistence and persistence (where the latter statistically complies with a Hurst-Kolmogorov behaviour). If the process is averaged over long time scales, the anti-persistent behaviour improves predictability, whereas the persistent behaviour substantially deteriorates it. A stochastic representation of this deterministic system, which incorporates dynamics, is not only possible, but also powerful as it provides good predictions for both short and long horizons and helps to decide on when the deterministic dynamics should be considered or neglected. Obviously, a natural system is extremely more complex than this simple toy model and hence unpredictability is naturally even more prominent in the former. In addition, in a complex natural system, we can never know the exact dynamics and we must infer it from past data, which implies additional uncertainty and an additional role of stochastics in the process of formulating the system equations and estimating the involved parameters. Data also offer the only solid grounds to test any hypothesis about the dynamics, and failure of performing such testing against evidence from data renders the hypothesised dynamics worthless. If this perception of natural phenomena is adequately plausible, then it may help in studying interesting fundamental questions regarding the current state and the trends of hydrological and water resources research and their promising future paths. For instance: (i) Will it ever be possible to achieve a fully "physically based" modelling of hydrological systems that will not depend on data or stochastic representations? (ii) To what extent can hydrological uncertainty be reduced and what are the effective means for such reduction? (iii) Are current stochastic methods in hydrology consistent with observed natural behaviours? What paths should we explore for their advancement? (iv) Can deterministic methods provide solid scientific grounds for water resources engineering and management? 
In particular, can there be risk-free hydraulic engineering and water management? (v) Is the current (particularly important) interface between hydrology and climate satisfactory? In particular, should hydrology rely on climate models that are not properly validated (i.e., for periods and scales not used in calibration)? In effect, is the evolution of climate and its impacts on water resources deterministically predictable?
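
    The predictability argument in points (1)-(3) is easy to reproduce with any chaotic map. The sketch below uses the logistic map rather than the lecture's hydrological toy model: a deterministic forecast from slightly perturbed initial conditions beats the climatological mean at short horizons and loses to it at long horizons.

```python
# Minimal sketch (our own toy system, not the lecture's): deterministic
# prediction vs. climatology on the chaotic logistic map x -> 4x(1-x).
import numpy as np

def logistic(x, n):
    """Iterate the chaotic logistic map n times (vectorized over x)."""
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
    return x

rng = np.random.default_rng(1)
truth0 = rng.uniform(0.1, 0.9, 2000)        # true initial conditions
obs0 = truth0 + rng.normal(0, 1e-6, 2000)   # tiny observation error
climatology = 0.5                           # long-run mean of the map

for horizon in [1, 5, 10, 20, 40]:
    x_true = logistic(truth0, horizon)
    x_det = logistic(obs0, horizon)         # deterministic forecast
    err_det = np.mean((x_true - x_det) ** 2)
    err_clim = np.mean((x_true - climatology) ** 2)
    print(f"h={horizon:2d}  det MSE={err_det:.4f}  climatology MSE={err_clim:.4f}")
```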

  1. Multiple positive solutions for a class of integral inclusions

    NASA Astrophysics Data System (ADS)

    Hong, Shihuang

    2008-04-01

    This paper deals with sufficient conditions for the existence of at least two positive solutions for a class of integral inclusions arising in traffic theory. To show our main results, we apply a norm-type expansion and compression fixed point theorem for multivalued maps due to Agarwal and O'Regan [A note on the existence of multiple fixed points for multivalued maps with applications, J. Differential Equations 160 (2000) 389-403].

  2. A solution to the biodiversity paradox by logical deterministic cellular automata.

    PubMed

    Kalmykov, Lev V; Kalmykov, Vyacheslav L

    2015-06-01

    The paradox of biological diversity is a key problem of theoretical ecology. The paradox consists in the contradiction between the competitive exclusion principle and the observed biodiversity. The principle is important as a basis for ecological theory. Using a relatively simple model, we show a mechanism of indefinite coexistence of complete competitors that violates the known formulations of the competitive exclusion principle. This mechanism is based on the timely recovery of limiting resources and their spatio-temporal allocation between competitors. Because of the limitations of black-box modeling, it has been difficult to formulate the exclusion principle correctly. Our white-box multiscale model of two-species competition is based on logical deterministic individual-based cellular automata. This approach provides automatic deductive inference on the basis of a system of axioms and gives direct insight into the mechanisms of the studied system; it is one of the most promising methods of artificial intelligence. We reformulate and generalize the competitive exclusion principle and explain why this formulation provides a solution to the biodiversity paradox. In addition, we propose a principle of competitive coexistence.
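
    The flavour of such logical deterministic cellular automata can be conveyed with a far simpler toy than the paper's model: in the Python sketch below (rules assumed by us), two identical species colonize a one-dimensional ring whose sites need one step of resource recovery after an individual dies; whether both species persist depends on the initial configuration.

```python
# Toy logical deterministic cellular automaton (assumed rules, far simpler than
# the paper's model): two identical competitors on a 1-D ring with one step of
# resource recovery after death. Only the initial configuration is random.
import numpy as np

EMPTY, RECOVERING, SP1, SP2 = 0, 1, 2, 3
rng = np.random.default_rng(42)
world = rng.choice([EMPTY, SP1, SP2], size=60, p=[0.8, 0.1, 0.1])

def step(w):
    new = w.copy()
    for i in range(len(w)):
        left, right = w[i - 1], w[(i + 1) % len(w)]
        if w[i] in (SP1, SP2):
            new[i] = RECOVERING            # individual dies; site must recover
        elif w[i] == RECOVERING:
            new[i] = EMPTY                 # limiting resource restored
        else:  # EMPTY: deterministic colonization, left neighbour has priority
            if left in (SP1, SP2):
                new[i] = left
            elif right in (SP1, SP2):
                new[i] = right
    return new

for t in range(100):
    world = step(world)
print("species 1:", (world == SP1).sum(), "| species 2:", (world == SP2).sum())
```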

  3. Resonant tunnelling and negative differential conductance in graphene transistors

    PubMed Central

    Britnell, L.; Gorbachev, R. V.; Geim, A. K.; Ponomarenko, L. A.; Mishchenko, A.; Greenaway, M. T.; Fromhold, T. M.; Novoselov, K. S.; Eaves, L.

    2013-01-01

    The chemical stability of graphene and other free-standing two-dimensional crystals means that they can be stacked in different combinations to produce a new class of functional materials, designed for specific device applications. Here we report resonant tunnelling of Dirac fermions through a boron nitride barrier, a few atomic layers thick, sandwiched between two graphene electrodes. The resonance occurs when the electronic spectra of the two electrodes are aligned. The resulting negative differential conductance in the device characteristics persists up to room temperature and is gate voltage-tuneable due to graphene’s unique Dirac-like spectrum. Although conventional resonant tunnelling devices comprising a quantum well sandwiched between two tunnel barriers are tens of nanometres thick, the tunnelling carriers in our devices cross only a few atomic layers, offering the prospect of ultra-fast transit times. This feature, combined with the multi-valued form of the device characteristics, has potential for applications in high-frequency and logic devices. PMID:23653206

  4. Polyhedral sweeping processes with unbounded nonconvex-valued perturbation

    NASA Astrophysics Data System (ADS)

    Tolstonogov, A. A.

    2017-12-01

    A polyhedral sweeping process with a multivalued perturbation whose values are nonconvex unbounded sets is studied in a separable Hilbert space. Polyhedral sweeping processes do not satisfy the traditional assumptions used to prove existence theorems for convex sweeping processes. We consider the polyhedral sweeping process as an evolution inclusion with subdifferential operators depending on time. The widely used assumption of Lipschitz continuity for the multivalued perturbation term is replaced by a weaker notion of (ρ - H) Lipschitzness. The existence of solutions is proved for this sweeping process.

  5. Methods for the computation of the multivalued Painlevé transcendents on their Riemann surfaces

    NASA Astrophysics Data System (ADS)

    Fasondini, Marco; Fornberg, Bengt; Weideman, J. A. C.

    2017-09-01

    We extend the numerical pole field solver (Fornberg and Weideman (2011) [12]) to enable the computation of the multivalued Painlevé transcendents, which are the solutions to the third, fifth and sixth Painlevé equations, on their Riemann surfaces. We display, for the first time, solutions to these equations on multiple Riemann sheets. We also provide numerical evidence for the existence of solutions to the sixth Painlevé equation that have pole-free sectors, known as tronquée solutions.

  6. Nanowire nanocomputer as a finite-state machine.

    PubMed

    Yao, Jun; Yan, Hao; Das, Shamik; Klemic, James F; Ellenbogen, James C; Lieber, Charles M

    2014-02-18

    Implementation of complex computer circuits assembled from the bottom up and integrated on the nanometer scale has long been a goal of electronics research. It requires a design and fabrication strategy that can address individual nanometer-scale electronic devices, while enabling large-scale assembly of those devices into highly organized, integrated computational circuits. We describe how such a strategy has led to the design, construction, and demonstration of a nanoelectronic finite-state machine. The system was fabricated using a design-oriented approach enabled by a deterministic, bottom-up assembly process that does not require individual nanowire registration. This methodology allowed construction of the nanoelectronic finite-state machine through modular design using a multitile architecture. Each tile/module consists of two interconnected crossbar nanowire arrays, with each cross-point consisting of a programmable nanowire transistor node. The nanoelectronic finite-state machine integrates 180 programmable nanowire transistor nodes in three tiles or six total crossbar arrays, and incorporates both sequential and arithmetic logic, with extensive intertile and intratile communication that exhibits rigorous input/output matching. Our system realizes the complete 2-bit logic flow and clocked control over state registration that are required for a finite-state machine or computer. The programmable multitile circuit was also reprogrammed to a functionally distinct 2-bit full adder with 32-set matched and complete logic output. These steps forward and the ability of our unique design-oriented deterministic methodology to yield more extensive multitile systems suggest that proposed general-purpose nanocomputers can be realized in the near future.

  8. A white-box model of S-shaped and double S-shaped single-species population growth

    PubMed Central

    Kalmykov, Lev V.

    2015-01-01

    Complex systems may be mechanistically modelled by white-box modeling using logical deterministic individual-based cellular automata. Mathematical models of complex systems are of three types: black-box (phenomenological), white-box (mechanistic, based on first principles) and grey-box (mixtures of phenomenological and mechanistic models). Most basic ecological models are of the black-box type, including the Malthusian, Verhulst and Lotka–Volterra models. In black-box models, the individual-based (mechanistic) mechanisms of population dynamics remain hidden. Here we mechanistically model the S-shaped and double S-shaped population growth of vegetatively propagated rhizomatous lawn grasses. Using purely logical deterministic individual-based cellular automata, we create a white-box model. From a general physical standpoint, the vegetative propagation of plants is an analogue of excitation propagation in excitable media. Using the Monte Carlo method, we investigate the role of different initial positionings of an individual in the habitat. We have investigated the mechanisms of single-species population growth limited by habitat size, intraspecific competition, regeneration time and fecundity of individuals, under two types of boundary conditions and two levels of fecundity. In addition, we have compared S-shaped and J-shaped population growth. We consider this white-box modeling approach a method of artificial intelligence that works as automatic hyper-logical inference from the first principles of the studied subject. This approach is promising for direct mechanistic insight into the nature of any complex system. PMID:26038717

  9. Abstract quantum computing machines and quantum computational logics

    NASA Astrophysics Data System (ADS)

    Chiara, Maria Luisa Dalla; Giuntini, Roberto; Sergioli, Giuseppe; Leporini, Roberto

    2016-06-01

    Classical and quantum parallelism are deeply different, although it is sometimes claimed that quantum Turing machines are nothing but special examples of classical probabilistic machines. We introduce the concepts of deterministic state machine, classical probabilistic state machine and quantum state machine. On this basis, we discuss the question: To what extent can quantum state machines be simulated by classical probabilistic state machines? Each state machine is devoted to a single task determined by its program. Real computers, however, behave differently, being able to solve different kinds of problems. This capacity can be modeled, in the quantum case, by the mathematical notion of abstract quantum computing machine, whose different programs determine different quantum state machines. The computations of abstract quantum computing machines can be linguistically described by the formulas of a particular form of quantum logic, termed quantum computational logic.

  10. Hardware implementation of fuzzy Petri net as a controller.

    PubMed

    Gniewek, Lesław; Kluska, Jacek

    2004-06-01

    The paper presents a new approach to the fuzzy Petri net (FPN) and its hardware implementation. The authors' motivation is as follows. Complex industrial processes can often be decomposed into many subprocesses working in parallel, which can, in turn, be modeled using Petri nets. If all the process variables (or events) are assumed to be two-valued signals, then it is possible to obtain a hardware or software control device that works according to the algorithm described by a conventional Petri net. However, the values of real signals lie in some bounded interval and can be interpreted as events that are not simply true or false, but rather true to some degree in the interval [0, 1]. Such a natural interpretation from the multivalued-logic (fuzzy-logic) point of view applies to sensor outputs, control signals, time expiration, etc. It leads to the idea of an FPN as a controller, which can be obtained rather simply and which is able to process both analog and binary signals. In the paper, both graphical and algebraic representations of the proposed FPN are given. The conditions under which transitions can be fired are described. The algebraic description of the net and a theorem enabling computation of a new marking in the net from the current marking are formulated. A hardware implementation of the FPN, which uses fuzzy JK flip-flops and fuzzy gates, is proposed. An example illustrating the usefulness of the proposed FPN for control-algorithm description and its synthesis as a controller device for a concrete production process is presented.
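
    A single marking update conveys the idea. The sketch below assumes the common min/max semantics (fuzzy AND over input places, fuzzy OR into output places) and a firing threshold; the paper's exact firing conditions and their flip-flop realization are more elaborate.

```python
# Minimal sketch (assumed min/max semantics, not the paper's exact rules):
# one firing of a fuzzy Petri net transition updating a fuzzy marking.
import numpy as np

# Net: places p0, p1 feed transition t0 (threshold 0.4), which marks p2.
marking = np.array([0.7, 0.5, 0.0])    # fuzzy tokens in [0, 1]
threshold = 0.4

def fire(m):
    degree = min(m[0], m[1])           # fuzzy AND over the input places
    if degree >= threshold:            # transition enabled above the threshold
        m = m.copy()
        m[0] = m[1] = 0.0              # consume the input tokens
        m[2] = max(m[2], degree)       # fuzzy OR into the output place
    return m

print(fire(marking))                   # [0.  0.  0.5]
```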

  11. Qualitative models and experimental investigation of chaotic NOR gates and set/reset flip-flops

    NASA Astrophysics Data System (ADS)

    Rahman, Aminur; Jordan, Ian; Blackmore, Denis

    2018-01-01

    It has been observed through experiments and SPICE simulations that logical circuits based upon Chua's circuit exhibit complex dynamical behaviour. This behaviour can be used to design analogues of more complex logic families and some properties can be exploited for electronics applications. Some of these circuits have been modelled as systems of ordinary differential equations. However, as the number of components in newer circuits increases so does the complexity. This renders continuous dynamical systems models impractical and necessitates new modelling techniques. In recent years, some discrete dynamical models have been developed using various simplifying assumptions. To create a robust modelling framework for chaotic logical circuits, we developed both deterministic and stochastic discrete dynamical models, which exploit the natural recurrence behaviour, for two chaotic NOR gates and a chaotic set/reset flip-flop. This work presents a complete applied mathematical investigation of logical circuits. Experiments on our own designs of the above circuits are modelled and the models are rigorously analysed and simulated showing surprisingly close qualitative agreement with the experiments. Furthermore, the models are designed to accommodate dynamics of similarly designed circuits. This will allow researchers to develop ever more complex chaotic logical circuits with a simple modelling framework.

  13. A modified two-layer iteration via a boundary point approach to generalized multivalued pseudomonotone mixed variational inequalities.

    PubMed

    Saddeek, Ali Mohamed

    2017-01-01

    Most mathematical models arising in stationary filtration processes as well as in the theory of soft shells can be described by single-valued or generalized multivalued pseudomonotone mixed variational inequalities with proper convex nondifferentiable functionals. Therefore, for finding the minimum norm solution of such inequalities, the current paper attempts to introduce a modified two-layer iteration via a boundary point approach and to prove its strong convergence. The results here improve and extend the corresponding recent results announced by Badriev, Zadvornov and Saddeek (Differ. Equ. 37:934-942, 2001).

  14. Developmental logics: Brain science, child welfare, and the ethics of engagement in Japan.

    PubMed

    Goldfarb, Kathryn E

    2015-10-01

    This article explores the unintended consequences of the ways scholars and activists take up the science of child development to critique the Japanese child welfare system. Since World War II, Japan has depended on a system of child welfare institutions (baby homes and children's homes) to care for state wards. Opponents of institutional care advocate instead for family foster care and adoption, and cite international research on the developmental harms of institutionalizing newborns and young children during the "critical period" of the first few years. The "critical period" is understood as the time during which the caregiving a child receives shapes neurological development and later capacity to build interpersonal relationships. These discourses appear to press compellingly for system reform, the proof resting on seemingly objective knowledge about child development. However, scientific evidence of harm is often mobilized in tandem with arguments that the welfare system is rooted in Japanese culture, suggesting durability and resistance to change. Further, reform efforts that use universalizing child science as "proof" of the need for change are prone to slip into deterministic language that pathologizes the experiences of people who grew up in the system. This article explores the reasons why deterministic models of child development, rather than more open-ended models like neuroplasticity, dominate activist rhetorics. It proposes a concept, "ethics of engagement," to advocate for attention to multiple scales and domains through which interpersonal ties are experienced and embodied over time. Finally, it suggests the possibility of child welfare reform movements that take seriously the need for caring and transformative relationships throughout life, beyond the first "critical years," that do not require deterministic logics of permanent delay or damage. Copyright © 2014 Elsevier Ltd. All rights reserved.

  15. Deterministic seismic hazard macrozonation of India

    NASA Astrophysics Data System (ADS)

    Kolathayar, Sreevalsa; Sitharam, T. G.; Vipin, K. S.

    2012-10-01

    Earthquakes are known to have occurred in the Indian subcontinent since ancient times. This paper presents the results of a seismic hazard analysis of India (6°-38°N and 68°-98°E) based on the deterministic approach using the latest seismicity data (up to 2010). The hazard analysis was done using two different source models (linear sources and point sources) and 12 well-recognized attenuation relations covering the varied tectonic provinces of the region. The earthquake data obtained from different sources were homogenized and declustered, and a total of 27,146 earthquakes of moment magnitude 4 and above were listed in the study area. The seismotectonic map of the study area was prepared by considering the faults, lineaments and shear zones associated with earthquakes of magnitude 4 and above. A new program was developed in MATLAB for smoothing of the point sources. For assessing the seismic hazard, the study area was divided into small grids of size 0.1° × 0.1° (approximately 10 × 10 km), and the hazard parameters were calculated at the center of each grid cell by considering all seismic sources within a radius of 300 to 400 km. Rock-level peak horizontal acceleration (PHA) and spectral accelerations for periods of 0.1 and 1 s were calculated for all grid points with a deterministic approach using a code written in MATLAB. Epistemic uncertainty in the hazard definition was handled within a logic-tree framework considering two types of sources and three attenuation models for each grid point. Hazard evaluation without the logic-tree approach was also carried out for comparison. Contour maps showing the spatial variation of the hazard values are presented in the paper.
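
    Schematically, the deterministic hazard at a grid point is the maximum ground motion produced by any admissible source within the cutoff radius. The Python sketch below shows this computation for a single grid cell; note that the paper's code is in MATLAB and the attenuation coefficients here are hypothetical, standing in for one of its 12 relations.

```python
# Minimal sketch, hypothetical attenuation coefficients (not one of the
# paper's 12 relations): deterministic PHA at one grid point as the maximum
# over all sources within the cutoff radius.
import numpy as np

def pha_g(magnitude, dist_km, c=(-2.0, 0.9, 1.2, 10.0)):
    """Generic GMPE form: ln(PHA) = c0 + c1*M - c2*ln(R + c3)."""
    c0, c1, c2, c3 = c
    return np.exp(c0 + c1 * magnitude - c2 * np.log(dist_km + c3))

# Sources near one 0.1 deg x 0.1 deg grid cell: (moment magnitude, distance km)
sources = [(6.5, 45.0), (5.2, 12.0), (7.1, 220.0)]
cutoff_km = 300.0

pha = max(pha_g(m, r) for m, r in sources if r <= cutoff_km)
print(f"deterministic PHA at grid point: {pha:.3f} g")
```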

  16. Fuzzy logic of Aristotelian forms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perlovsky, L.I.

    1996-12-31

    Model-based approaches to pattern recognition and machine vision have been proposed to overcome the exorbitant training requirements of earlier computational paradigms. However, uncertainties in data were found to lead to a combinatorial explosion of the computational complexity. This issue is related here to the roles of a priori knowledge vs. adaptive learning. What is the a priori knowledge representation that supports learning? I introduce Modeling Field Theory (MFT), a model-based neural network whose adaptive learning is based on a priori models. These models combine deterministic, fuzzy, and statistical aspects to account for a priori knowledge, its fuzzy nature, and data uncertainties. In the process of learning, a priori fuzzy concepts converge to crisp or probabilistic concepts. The MFT is a convergent dynamical system of only linear computational complexity. Fuzzy logic turns out to be essential for reducing the combinatorial complexity to a linear one. I will discuss the relationship of the new computational paradigm to two theories due to Aristotle: the theory of Forms and logic. While the theory of Forms argued that the mind cannot be based on ready-made a priori concepts, Aristotelian logic operated with just such concepts. I discuss an interpretation of MFT suggesting that its fuzzy logic, combining a-priority and adaptivity, implements the Aristotelian theory of Forms (theory of mind). Thus, 2300 years after Aristotle, a logic is developed suitable for his theory of mind.

  17. Cognitive foundations for model-based sensor fusion

    NASA Astrophysics Data System (ADS)

    Perlovsky, Leonid I.; Weijers, Bertus; Mutz, Chris W.

    2003-08-01

    Target detection, tracking, and sensor fusion are complicated problems that are usually performed sequentially: first detecting targets, then tracking, then fusing multiple sensors reduces computations. This procedure, however, is inapplicable to difficult targets that cannot be reliably detected using individual sensors on individual scans or frames. In such more complicated cases, the functions of fusing, tracking, and detecting have to be performed concurrently. This has often led to prohibitive combinatorial complexity and, as a consequence, to sub-optimal performance compared with the information-theoretic content of all the available data. It is well appreciated that the human mind is qualitatively far superior to existing mathematical methods of sensor fusion; however, the human mind is limited in the amount of information and speed of computation it can cope with. Research efforts have therefore been devoted to incorporating "biological lessons" into smart algorithms, yet success has been limited. Why is this so, and how can existing limitations be overcome? The fundamental reasons for current limitations are analyzed and a potentially breakthrough research and development effort is outlined. We utilize the way our mind combines emotions and concepts in the thinking process and present a mathematical approach to accomplishing this with current computer technology. The presentation will summarize the difficulties encountered by intelligent systems over the last 50 years related to combinatorial complexity, analyze the fundamental limitations of existing algorithms and neural networks, and relate them to the type of logic underlying the computational structure: formal, multivalued, and fuzzy logic. A new concept of dynamic logic will be introduced, along with algorithms capable of pulling together all the available information from multiple sources. This new mathematical technique, like our brain, combines conceptual understanding with emotional evaluation and overcomes the combinatorial complexity of concurrent fusion, tracking, and detection. The presentation will discuss examples of performance in which computational speedups of many orders of magnitude were attained, leading to performance improvements of up to 10 dB (and better).

  18. Quantum logic using correlated one-dimensional quantum walks

    NASA Astrophysics Data System (ADS)

    Lahini, Yoav; Steinbrecher, Gregory R.; Bookatz, Adam D.; Englund, Dirk

    2018-01-01

    Quantum walks are unitary processes describing the evolution of an initially localized wavefunction on a lattice potential. The complexity of the dynamics increases significantly when several indistinguishable quantum walkers propagate on the same lattice simultaneously, as these develop non-trivial spatial correlations that depend on the particles' quantum statistics, mutual interactions, initial positions, and the lattice potential. We show that even in the simplest case of a quantum walk on a one-dimensional graph, these correlations can be shaped to yield a complete set of compact quantum logic operations. We provide detailed recipes for implementing quantum logic on one-dimensional quantum walks in two general cases. For non-interacting bosons—such as photons in waveguide lattices—we find high-fidelity probabilistic quantum gates that could be integrated into linear optics quantum computation schemes. For interacting quantum walkers on a one-dimensional lattice—a situation that has recently been demonstrated using ultra-cold atoms—we find deterministic logic operations that are universal for quantum information processing. The suggested implementation requires minimal resources and a level of control that is within reach using recently demonstrated techniques. Further work is required to address error correction.
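
    For the non-interacting bosonic case, the spatial correlations in question follow from the single-particle walk unitary via the standard two-boson interference formula Γ_qr = |U_qa U_rb + U_qb U_ra|²/2. The sketch below evaluates it for a one-dimensional waveguide-lattice walk; it illustrates the correlation resource only, not the paper's specific gate constructions.

```python
# Minimal sketch (standard waveguide-lattice model, not the paper's gates):
# two-boson output correlations of a 1-D continuous-time quantum walk.
import numpy as np
from scipy.linalg import expm

n, t = 11, 2.0                       # lattice sites, evolution time
H = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)  # hopping Hamiltonian
U = expm(-1j * t * H)                # single-particle walk unitary

a, b = 4, 6                          # input sites of two indistinguishable bosons
# Gamma[q, r] = |U_qa U_rb + U_qb U_ra|^2 / 2 (bosonic two-particle interference)
Gamma = 0.5 * np.abs(np.outer(U[:, a], U[:, b]) + np.outer(U[:, b], U[:, a])) ** 2
print(f"total probability: {Gamma.sum():.6f}")   # ~1.0, a unitarity sanity check
print(f"bunching vs. coincidence: {Gamma[a, a]:.4f} vs {Gamma[a, b]:.4f}")
```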

  19. The Non-Signalling theorem in generalizations of Bell's theorem

    NASA Astrophysics Data System (ADS)

    Walleczek, J.; Grössing, G.

    2014-04-01

    Does "epistemic non-signalling" ensure the peaceful coexistence of special relativity and quantum nonlocality? The possibility of an affirmative answer is of great importance to deterministic approaches to quantum mechanics given recent developments towards generalizations of Bell's theorem. By generalizations of Bell's theorem we here mean efforts that seek to demonstrate the impossibility of any deterministic theories to obey the predictions of Bell's theorem, including not only local hidden-variables theories (LHVTs) but, critically, of nonlocal hidden-variables theories (NHVTs) also, such as de Broglie-Bohm theory. Naturally, in light of the well-established experimental findings from quantum physics, whether or not a deterministic approach to quantum mechanics, including an emergent quantum mechanics, is logically possible, depends on compatibility with the predictions of Bell's theorem. With respect to deterministic NHVTs, recent attempts to generalize Bell's theorem have claimed the impossibility of any such approaches to quantum mechanics. The present work offers arguments showing why such efforts towards generalization may fall short of their stated goal. In particular, we challenge the validity of the use of the non-signalling theorem as a conclusive argument in favor of the existence of free randomness, and therefore reject the use of the non-signalling theorem as an argument against the logical possibility of deterministic approaches. We here offer two distinct counter-arguments in support of the possibility of deterministic NHVTs: one argument exposes the circularity of the reasoning which is employed in recent claims, and a second argument is based on the inconclusive metaphysical status of the non-signalling theorem itself. We proceed by presenting an entirely informal treatment of key physical and metaphysical assumptions, and of their interrelationship, in attempts seeking to generalize Bell's theorem on the basis of an ontic, foundational interpretation of the non-signalling theorem. We here argue that the non-signalling theorem must instead be viewed as an epistemic, operational theorem i.e. one that refers exclusively to what epistemic agents can, or rather cannot, do. That is, we emphasize that the non-signalling theorem is a theorem about the operational inability of epistemic agents to signal information. In other words, as a proper principle, the non-signalling theorem may only be employed as an epistemic, phenomenological, or operational principle. Critically, our argument emphasizes that the non-signalling principle must not be used as an ontic principle about physical reality as such, i.e. as a theorem about the nature of physical reality independently of epistemic agents e.g. human observers. One major reason in favor of our conclusion is that any definition of signalling or of non-signalling invariably requires a reference to epistemic agents, and what these agents can actually measure and report. Otherwise, the non-signalling theorem would equal a general "no-influence" theorem. In conclusion, under the assumption that the non-signalling theorem is epistemic (i.e. "epistemic non-signalling"), the search for deterministic approaches to quantum mechanics, including NHVTs and an emergent quantum mechanics, continues to be a viable research program towards disclosing the foundations of physical reality at its smallest dimensions.

  20. The Systems Biology Markup Language (SBML) Level 3 Package: Qualitative Models, Version 1, Release 1.

    PubMed

    Chaouiya, Claudine; Keating, Sarah M; Berenguier, Duncan; Naldi, Aurélien; Thieffry, Denis; van Iersel, Martijn P; Le Novère, Nicolas; Helikar, Tomáš

    2015-09-04

    Quantitative methods for modelling biological networks require an in-depth knowledge of the biochemical reactions and their stoichiometric and kinetic parameters. In many practical cases, this knowledge is missing. This has led to the development of several qualitative modelling methods using information such as, for example, gene expression data coming from functional genomic experiments. The SBML Level 3 Version 1 Core specification does not provide a mechanism for explicitly encoding qualitative models, but it does provide a mechanism for SBML packages to extend the Core specification and add additional syntactical constructs. The SBML Qualitative Models package for SBML Level 3 adds features so that qualitative models can be directly and explicitly encoded. The approach taken in this package is essentially based on the definition of regulatory or influence graphs. The SBML Qualitative Models package defines the structure and syntax necessary to describe qualitative models that associate discrete levels of activities with entity pools and the transitions between states that describe the processes involved. This is particularly suited to logical models (Boolean or multi-valued) and some classes of Petri net models can be encoded with the approach.
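
    As a schematic of what the qual package encodes, the sketch below steps a hypothetical three-component logical model with one multi-valued species, where transitions are given by influence rules rather than kinetic rate laws. This illustrates the modelling style only; encoding such a model in SBML would use the package's qualitative-species and transition constructs.

```python
# Hypothetical three-gene logical model (not from the specification): Boolean
# species A, B and a multi-valued species C with levels 0..2, updated
# synchronously by influence rules instead of kinetic rate laws.

def update(state):
    a, b, c = state["A"], state["B"], state["C"]
    return {
        "A": int(c == 0),               # A is active unless C represses it
        "B": int(a == 1),               # B follows A
        "C": min(2, c + 1) if b else 0, # B drives C up to level 2; else C decays
    }

state = {"A": 1, "B": 0, "C": 0}
for step in range(6):                   # the trajectory settles into a cycle
    print(step, state)
    state = update(state)
```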

  1. Research in Atomic, Ionic and Photonic Systems for Scalable Deterministic Quantum Logic

    DTIC Science & Technology

    2005-11-17

    1. Ion Trap Project (DL, ANS, DS), Year 1. The “pushing gate” that we intend to use to entangle ions was thoroughly studied theoretically (milestone 1)... allow more complex experimental sequences (e.g. Raman sideband cooling). We achieved important goals on the way to implementing an entangling gate in... for a two-ion entangling gate (in the method of [3]), we applied the same force to a single ion. When applied to a spin superposition state, the...

  2. High efficiency coherent optical memory with warm rubidium vapour

    PubMed Central

    Hosseini, M.; Sparkes, B.M.; Campbell, G.; Lam, P.K.; Buchler, B.C.

    2011-01-01

    By harnessing aspects of quantum mechanics, communication and information processing could be radically transformed. Promising forms of quantum information technology include optical quantum cryptographic systems and computing using photons for quantum logic operations. As with current information processing systems, some form of memory will be required. Quantum repeaters, which are required for long distance quantum key distribution, require quantum optical memory as do deterministic logic gates for optical quantum computing. Here, we present results from a coherent optical memory based on warm rubidium vapour and show 87% efficient recall of light pulses, the highest efficiency measured to date for any coherent optical memory suitable for quantum information applications. We also show storage and recall of up to 20 pulses from our system. These results show that simple warm atomic vapour systems have clear potential as a platform for quantum memory. PMID:21285952

  3. High efficiency coherent optical memory with warm rubidium vapour.

    PubMed

    Hosseini, M; Sparkes, B M; Campbell, G; Lam, P K; Buchler, B C

    2011-02-01

    By harnessing aspects of quantum mechanics, communication and information processing could be radically transformed. Promising forms of quantum information technology include optical quantum cryptographic systems and computing using photons for quantum logic operations. As with current information processing systems, some form of memory will be required. Quantum repeaters, which are required for long distance quantum key distribution, require quantum optical memory as do deterministic logic gates for optical quantum computing. Here, we present results from a coherent optical memory based on warm rubidium vapour and show 87% efficient recall of light pulses, the highest efficiency measured to date for any coherent optical memory suitable for quantum information applications. We also show storage and recall of up to 20 pulses from our system. These results show that simple warm atomic vapour systems have clear potential as a platform for quantum memory.

  4. Clocked Magnetostriction-Assisted Spintronic Device Design and Simulation

    NASA Astrophysics Data System (ADS)

    Mousavi Iraei, Rouhollah; Kani, Nickvash; Dutta, Sourav; Nikonov, Dmitri E.; Manipatruni, Sasikanth; Young, Ian A.; Heron, John T.; Naeemi, Azad

    2018-05-01

    We propose a heterostructure device comprised of magnets and piezoelectrics that significantly improves the delay and the energy dissipation of an all-spin logic (ASL) device. This paper studies and models the physics of the device, illustrates its operation, and benchmarks its performance using SPICE simulations. We show that the proposed device maintains low voltage operation, non-reciprocity, non-volatility, cascadability, and thermal reliability of the original ASL device. Moreover, by utilizing the deterministic switching of a magnet from the saddle point of the energy profile, the device is more efficient in terms of energy and delay and is robust to thermal fluctuations. The results of simulations show that compared to ASL devices, the proposed device achieves 21x shorter delay and 27x lower energy dissipation per bit for a 32-bit arithmetic-logic unit (ALU).

  5. From statistical proofs of the Kochen-Specker theorem to noise-robust noncontextuality inequalities

    NASA Astrophysics Data System (ADS)

    Kunjwal, Ravi; Spekkens, Robert W.

    2018-05-01

    The Kochen-Specker theorem rules out models of quantum theory wherein projective measurements are assigned outcomes deterministically and independently of context. This notion of noncontextuality is not applicable to experimental measurements because these are never free of noise and thus never truly projective. For nonprojective measurements, therefore, one must drop the requirement that an outcome be assigned deterministically in the model and merely require that it be assigned a distribution over outcomes in a manner that is context-independent. By demanding context independence in the representation of preparations as well, one obtains a generalized principle of noncontextuality that also supports a quantum no-go theorem. Several recent works have shown how to derive inequalities on experimental data which, if violated, demonstrate the impossibility of finding a generalized-noncontextual model of this data. That is, these inequalities do not presume quantum theory and, in particular, they make sense without requiring an operational analog of the quantum notion of projectiveness. We here describe a technique for deriving such inequalities starting from arbitrary proofs of the Kochen-Specker theorem. It significantly extends previous techniques, which worked only for logical proofs (based on sets of projective measurements that fail to admit of any deterministic noncontextual assignment), to the case of statistical proofs (based on sets of projective measurements that do admit of some deterministic noncontextual assignments, but not enough to explain the quantum statistics).

  6. Cell-to-Cell Communication Circuits: Quantitative Analysis of Synthetic Logic Gates

    PubMed Central

    Hoffman-Sommer, Marta; Supady, Adriana; Klipp, Edda

    2012-01-01

    One of the goals in the field of synthetic biology is the construction of cellular computation devices that could function in a manner similar to electronic circuits. To this end, attempts are made to create biological systems that function as logic gates. In this work we present a theoretical quantitative analysis of a synthetic cellular logic-gates system, which has been implemented in cells of the yeast Saccharomyces cerevisiae (Regot et al., 2011). It exploits endogenous MAP kinase signaling pathways. The novelty of the system lies in the compartmentalization of the circuit where all basic logic gates are implemented in independent single cells that can then be cultured together to perform complex logic functions. We have constructed kinetic models of the multicellular IDENTITY, NOT, OR, and IMPLIES logic gates, using both deterministic and stochastic frameworks. All necessary model parameters are taken from literature or estimated based on published kinetic data, in such a way that the resulting models correctly capture important dynamic features of the included mitogen-activated protein kinase pathways. We analyze the models in terms of parameter sensitivity and we discuss possible ways of optimizing the system, e.g., by tuning the culture density. We apply a stochastic modeling approach, which simulates the behavior of whole populations of cells and allows us to investigate the noise generated in the system; we find that the gene expression units are the major sources of noise. Finally, the model is used for the design of system modifications: we show how the current system could be transformed to operate on three discrete values. PMID:22934039
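
    A minimal deterministic sketch of how one such gate can be modelled (Hill-type activation kinetics; all parameter values are invented, not taken from the paper): the output protein is produced when either input signal is present, giving OR behaviour at steady state.

        # Deterministic ODE sketch of an OR-type gate (hypothetical parameters).
        import numpy as np
        from scipy.integrate import odeint

        def or_gate(y, t, s1, s2, k_syn=1.0, k_deg=0.5, K=0.3, n=2):
            # Either input activates synthesis of the output (logical OR).
            act = (s1**n + s2**n) / (K**n + s1**n + s2**n)
            return [k_syn * act - k_deg * y[0]]

        t = np.linspace(0, 20, 200)
        for s1, s2 in [(0, 0), (1, 0), (0, 1), (1, 1)]:
            y = odeint(or_gate, [0.0], t, args=(s1, s2))
            print(f"inputs ({s1},{s2}) -> steady-state output {y[-1, 0]:.2f}")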

  7. Picturing Data With Uncertainty

    NASA Technical Reports Server (NTRS)

    Kao, David; Love, Alison; Dungan, Jennifer L.; Pang, Alex

    2004-01-01

    NASA is in the business of creating maps for scientific purposes to represent important biophysical or geophysical quantities over space and time. For example, maps of surface temperature over the globe tell scientists where and when the Earth is heating up; regional maps of the greenness of vegetation tell scientists where and when plants are photosynthesizing. There is always uncertainty associated with each value in any such map due to various factors. When uncertainty is fully modeled, instead of a single value at each map location, there is a distribution expressing a set of possible outcomes at each location. We consider such distribution data as multi-valued data since it consists of a collection of values about a single variable. Thus, a multi-valued data set represents both the map and its uncertainty. We have been working on ways to visualize spatial multi-valued data sets effectively for fields with regularly spaced units or grid cells such as those in NASA's Earth science applications. A new way to display distributions at multiple grid locations is to project the distributions from an individual row, column, or other user-selectable straight transect of the 2D domain. First, at each grid cell in a given slice (row, column, or transect), we compute a smooth density estimate from the underlying data. Such a density estimate of the probability density function (PDF) is generally more useful than a histogram, which is a classic density estimate. Then, the collection of PDFs along a given slice is presented vertically above the slice, forming a wall. To minimize occlusion of intersecting slices, the corresponding walls are positioned at the far edges of the boundary. The PDF wall depicts the shapes of the distributions very clearly, since peaks represent the modes (or bumps) in the PDFs. We have defined roughness as the number of peaks in the distribution; roughness is another useful piece of summary information for multimodal distributions. The uncertainty of the multi-valued data can also be interpreted from the number of peaks and the widths of the peaks, as shown by the PDF walls.
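
    A small sketch of the per-cell computation described above, with a synthetic bimodal sample standing in for real map data: estimate a smooth PDF, then count its peaks to obtain the roughness measure.

        # Density estimate and peak-count "roughness" for one grid cell
        # (synthetic data; the real inputs would be per-cell outcome sets).
        import numpy as np
        from scipy.stats import gaussian_kde

        rng = np.random.default_rng(0)
        samples = np.concatenate([rng.normal(0, 1, 200), rng.normal(4, 0.5, 200)])

        kde = gaussian_kde(samples)              # smooth PDF estimate
        x = np.linspace(samples.min(), samples.max(), 500)
        pdf = kde(x)

        # Roughness = number of interior local maxima of the PDF.
        peaks = np.flatnonzero((pdf[1:-1] > pdf[:-2]) & (pdf[1:-1] > pdf[2:])) + 1
        print("roughness (number of peaks):", len(peaks))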

  8. A pilot level decision analysis of thermionic reactor development strategy for nuclear electric propulsion

    NASA Technical Reports Server (NTRS)

    Menke, M. M.; Judd, B. R.

    1973-01-01

    The development policy for thermionic reactors to provide electric propulsion and power for space exploration was analyzed to develop a logical procedure for selecting development alternatives that reflect the technical feasibility, JPL/NASA project objectives, and the economic environment of the project. The partial evolution of a decision model from the underlying philosophy of decision analysis to a deterministic pilot phase is presented, and the general manner in which this decision model can be employed to examine propulsion development alternatives is illustrated.

  9. Recognition of rotated images using the multi-valued neuron and rotation-invariant 2D Fourier descriptors

    NASA Astrophysics Data System (ADS)

    Aizenberg, Evgeni; Bigio, Irving J.; Rodriguez-Diaz, Eladio

    2012-03-01

    The Fourier descriptors paradigm is a well-established approach for affine-invariant characterization of shape contours. In the work presented here, we extend this method to images and obtain a 2D Fourier representation that is invariant to image rotation. The proposed technique retains phase uniqueness, so structural image information is not lost. Rotation-invariant phase coefficients were used to train a single multi-valued neuron (MVN) to recognize satellite and human-face images rotated by a wide range of angles. Experiments yielded 100% and 96.43% classification rates for the two data sets, respectively. Recognition performance was additionally evaluated under the effects of lossy JPEG compression and additive Gaussian noise. Preliminary results show that the derived rotation-invariant features combined with the MVN provide a promising scheme for efficient recognition of rotated images.

  10. Blur identification by multilayer neural network based on multivalued neurons.

    PubMed

    Aizenberg, Igor; Paliy, Dmitriy V; Zurada, Jacek M; Astola, Jaakko T

    2008-05-01

    A multilayer neural network based on multivalued neurons (MLMVN) is a neural network with a traditional feedforward architecture. At the same time, this network has a number of specific distinguishing features. Its backpropagation learning algorithm is derivative-free. The functionality of MLMVN is superior to that of traditional feedforward neural networks and of a variety of kernel-based networks. Its higher flexibility and faster adaptation to the target mapping enable complex problems to be modeled using simpler networks. In this paper, the MLMVN is used to identify both the type and the parameters of the point spread function, whose precise identification is of crucial importance for image deblurring. The simulation results show the high efficiency of the proposed approach. It is confirmed that the MLMVN is a powerful tool for solving classification problems, especially multiclass ones.

  11. Thermodynamics of quasideterministic digital computers

    NASA Astrophysics Data System (ADS)

    Chu, Dominique

    2018-02-01

    A central result of stochastic thermodynamics is that irreversible state transitions of Markovian systems entail a cost in terms of an infinite entropy production. A corollary of this is that strictly deterministic computation is not possible. Using a thermodynamically consistent model, we show that quasideterministic computation can be achieved at finite, and indeed modest, cost with accuracies that are indistinguishable from deterministic behavior for all practical purposes. Concretely, we consider the entropy production of stochastic (Markovian) systems that behave like AND and NOT gates. Combinations of these gates can implement any logical function. We require that these gates return the correct result with a probability that is very close to 1 and, additionally, that they do so within finite time. The central component of the model is a machine that can read and write binary tapes. We find that the error probability of the computation of these gates falls off as a power of the system size, whereas the cost increases only linearly with the system size.

  12. Markov Logic Networks in the Analysis of Genetic Data

    PubMed Central

    Sakhanenko, Nikita A.

    2010-01-01

    Complex, non-additive genetic interactions are common and can be critical in determining phenotypes. Genome-wide association studies (GWAS) and similar statistical studies of linkage data, however, assume additive models of gene interactions in looking for genotype-phenotype associations. These statistical methods view the compound effects of multiple genes on a phenotype as a sum of influences of each gene and often miss a substantial part of the heritable effect. Such methods do not use any biological knowledge about underlying mechanisms. Modeling approaches from the artificial intelligence (AI) field that incorporate deterministic knowledge into models to perform statistical analysis can be applied to include prior knowledge in genetic analysis. We chose to use the most general such approach, Markov Logic Networks (MLNs), for combining deterministic knowledge with statistical analysis. Using simple, logistic regression-type MLNs we can replicate the results of traditional statistical methods, but we also show that we are able to go beyond finding independent markers linked to a phenotype by using joint inference without an independence assumption. The method is applied to genetic data on yeast sporulation, a complex phenotype with gene interactions. In addition to detecting all of the previously identified loci associated with sporulation, our method identifies four loci with smaller effects. Since their effect on sporulation is small, these four loci were not detected with methods that do not account for dependence between markers due to gene interactions. We show how gene interactions can be detected using more complex models, which can be used as a general framework for incorporating systems biology with genetics. PMID:20958249

  13. High-performance gas sensors with temperature measurement

    PubMed Central

    Zhang, Yong; Li, Shengtao; Zhang, Jingyuan; Pan, Zhigang; Min, Daomin; Li, Xin; Song, Xiaoping; Liu, Junhua

    2013-01-01

    There are a number of gas ionization sensors using carbon nanotubes as cathode or anode. Unfortunately, their applications are greatly limited by their multi-valued sensitivity, with one output value corresponding to several measured concentration values. Here we describe a triple-electrode structure featuring two electric fields with opposite directions, which enables us to overcome the multi-valued sensitivity problem at 1 atm over a wide range of gas concentrations. We used a carbon nanotube array as the first electrode, and the two electric fields between the upper and the lower interelectrode gaps were designed to extract positive ions generated in the upper gap, significantly reducing positive-ion bombardment of the nanotube electrode. This allowed us to maintain a high electric field near the nanotube tips, leading to a single-valued sensitivity and a long nanotube life. We have demonstrated detection of various gases with simultaneous temperature monitoring, and a potential for applications. PMID:23405281

  14. Expert Knowledge-Based Automatic Sleep Stage Determination by Multi-Valued Decision Making Method

    NASA Astrophysics Data System (ADS)

    Wang, Bei; Sugi, Takenao; Kawana, Fusae; Wang, Xingyu; Nakamura, Masatoshi

    In this study, an expert knowledge-based automatic sleep stage determination system based on a multi-valued decision making method is developed. Visual inspection by a qualified clinician is adopted to obtain the expert knowledge database. The expert knowledge database consists of probability density functions of parameters for the various sleep stages. Sleep stages are then determined automatically according to the conditional probability. In total, four subjects participated. The automatic sleep stage determination results showed close agreement with visual inspection for the stages of awake, REM (rapid eye movement), light sleep, and deep sleep. The constructed expert knowledge database reflects the distributions of the characteristic parameters and can adapt to the variable sleep data encountered in hospitals. The developed automatic determination technique based on expert knowledge of visual inspection can serve as an assistant tool enabling further inspection of sleep disorder cases in clinical practice.
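
    The decision rule below is a simplified reading of this approach (a single characteristic parameter, Gaussian densities, and equal priors are our assumptions): per-stage probability densities play the role of the expert knowledge database, and the stage with the highest conditional probability is selected.

        # Multi-valued stage decision from per-stage densities (illustrative only).
        from scipy.stats import norm

        stages = {  # stage: (mean, std) of one normalized parameter (invented)
            "awake":       (0.80, 0.15),
            "REM":         (0.50, 0.10),
            "light sleep": (0.35, 0.10),
            "deep sleep":  (0.15, 0.08),
        }

        def classify(x):
            likes = {s: norm.pdf(x, mu, sd) for s, (mu, sd) in stages.items()}
            total = sum(likes.values())
            probs = {s: v / total for s, v in likes.items()}  # conditional probabilities
            return max(probs, key=probs.get), probs

        stage, probs = classify(0.40)
        print(stage, {s: round(p, 2) for s, p in probs.items()})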

  15. Monotone viable trajectories for functional differential inclusions

    NASA Astrophysics Data System (ADS)

    Haddad, Georges

    This paper is a study of functional differential inclusions with memory, which represent the multivalued version of retarded functional differential equations. The main result gives a necessary and sufficient condition ensuring the existence of viable trajectories, that is, trajectories remaining in a given nonempty closed convex set defined by the constraints the system must satisfy to be viable. Some motivations for this paper can be found in control theory, where F(t, φ) = {f(t, φ, u)}_{u ∈ U} is the set of possible velocities of the system at time t, depending on the past history represented by the function φ and on a control u ranging over a set U of controls. Other motivations can be found in planning procedures in microeconomics and in biological evolution, where problems with memory effectively appear in a multivalued version. All these models require viability constraints represented by a closed convex set.

  16. Statistical Energy Analysis for Designers. Part 1. Basic Theory

    DTIC Science & Technology

    1974-09-01

    deterministic system. That is a possible answer, but it may not be the most useful one. The most glaring deficiency of SEA is its inability to deal with...present whether this represents the "next logical step" in the chain that we spoke of, but it bears examination. A second deficiency of SEA is its...undamped string, ρ = lineal density, r = 0, and A = -T(∂/∂x)². Thus, Eq. (2.3.2) becomes Tk² = ρω², or k = ±ω/c, (2.3.3) where c = √(T/ρ) is the speed of

  17. 77 FR 58828 - Northern Indiana Public Service Company; Notice of Petition for Declaratory Order

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-24

    ... kV transmission line from the Reynolds substation to the Greentown substation and (2) substation upgrades at Reynolds substation, including a 765 kV/345kV transformer, a Multi-Value Project approved under...

  18. Hybrid quantum logic and a test of Bell's inequality using two different atomic isotopes.

    PubMed

    Ballance, C J; Schäfer, V M; Home, J P; Szwer, D J; Webster, S C; Allcock, D T C; Linke, N M; Harty, T P; Aude Craik, D P L; Stacey, D N; Steane, A M; Lucas, D M

    2015-12-17

    Entanglement is one of the most fundamental properties of quantum mechanics, and is the key resource for quantum information processing (QIP). Bipartite entangled states of identical particles have been generated and studied in several experiments, and post-selected or heralded entangled states involving pairs of photons, single photons and single atoms, or different nuclei in the solid state, have also been produced. Here we use a deterministic quantum logic gate to generate a 'hybrid' entangled state of two trapped-ion qubits held in different isotopes of calcium, perform full tomography of the state produced, and make a test of Bell's inequality with non-identical atoms. We use a laser-driven two-qubit gate, whose mechanism is insensitive to the qubits' energy splittings, to produce a maximally entangled state of one ⁴⁰Ca⁺ qubit and one ⁴³Ca⁺ qubit, held 3.5 micrometres apart in the same ion trap, with 99.8 ± 0.6 per cent fidelity. We test the CHSH (Clauser-Horne-Shimony-Holt) version of Bell's inequality for this novel entangled state and find that it is violated by 15 standard deviations; in this test, we close the detection loophole but not the locality loophole. Mixed-species quantum logic is a powerful technique for the construction of a quantum computer based on trapped ions, as it allows protection of memory qubits while other qubits undergo logic operations or are used as photonic interfaces to other processing units. The entangling gate mechanism used here can also be applied to qubits stored in different atomic elements; this would allow both memory and logic gate errors caused by photon scattering to be reduced below the levels required for fault-tolerant quantum error correction, which is an essential prerequisite for general-purpose quantum computing.

  19. Implementation and characterization of active feed-forward for deterministic linear optics quantum computing

    NASA Astrophysics Data System (ADS)

    Böhi, P.; Prevedel, R.; Jennewein, T.; Stefanov, A.; Tiefenbacher, F.; Zeilinger, A.

    2007-12-01

    In general, quantum computer architectures that are based on the dynamical evolution of quantum states also require the processing of classical information, obtained by measurements of the actual qubits that make up the computer. This classical processing involves fast, active adaptation of subsequent measurements and real-time error correction (feed-forward), so that quantum gates and algorithms can be executed in a deterministic and hence error-free fashion. This is also true in the linear optical regime, where the quantum information is stored in the polarization state of photons. The adaptation of the photon's polarization can be achieved very quickly by employing electro-optical modulators (EOMs), which change the polarization of a passing photon upon application of a high voltage. In this paper we discuss techniques for implementing fast, active feed-forward at the single-photon level and present their application in the context of photonic quantum computing. This includes the working principles and characterization of the EOMs as well as a description of the switching logic, both of which allow quantum computation at an unprecedented speed.

  20. Faithful deterministic secure quantum communication and authentication protocol based on hyperentanglement against collective noise

    NASA Astrophysics Data System (ADS)

    Chang, Yan; Zhang, Shi-Bin; Yan, Li-Li; Han, Gui-Hua

    2015-08-01

    Higher channel capacity and security are difficult to achieve over a noisy channel. The loss of photons and the distortion of the qubit state are caused by noise. To solve these problems, in our study a hyperentangled Bell state is used to design a faithful deterministic secure quantum communication and authentication protocol over collective-rotation and collective-dephasing noisy channels, which doubles the channel capacity compared with using an ordinary Bell state as a carrier; a logical hyperentangled Bell state immune to collective-rotation and collective-dephasing noise is constructed. The secret message is divided into several parts for transmission, while the identity strings of Alice and Bob are reused. Unitary operations are not used. Project supported by the National Natural Science Foundation of China (Grant No. 61402058), the Science and Technology Support Project of Sichuan Province, China (Grant No. 2013GZX0137), the Fund for Young Persons Project of Sichuan Province, China (Grant No. 12ZB017), and the Foundation of Cyberspace Security Key Laboratory of Sichuan Higher Education Institutions, China (Grant No. szjj2014-074).

  1. Fault detection of helicopter gearboxes using the multi-valued influence matrix method

    NASA Technical Reports Server (NTRS)

    Chin, Hsinyung; Danai, Kourosh; Lewicki, David G.

    1993-01-01

    In this paper we investigate the effectiveness of a pattern classifying fault detection system that is designed to cope with the variability of fault signatures inherent in helicopter gearboxes. For detection, the measurements are monitored on-line and flagged upon the detection of abnormalities, so that they can be attributed to a faulty or normal case. As such, the detection system is composed of two components, a quantization matrix to flag the measurements, and a multi-valued influence matrix (MVIM) that represents the behavior of measurements during normal operation and at fault instances. Both the quantization matrix and influence matrix are tuned during a training session so as to minimize the error in detection. To demonstrate the effectiveness of this detection system, it was applied to vibration measurements collected from a helicopter gearbox during normal operation and at various fault instances. The results indicate that the MVIM method provides excellent results when the full range of faults effects on the measurements are included in the training set.
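
    The sketch below illustrates the two-stage structure described above with invented thresholds and matrix entries: a quantization step flags the measurements, and the resulting flag pattern is scored against the columns of an influence matrix, which act as fault signatures.

        # Quantization + multi-valued influence matrix detection (toy values).
        import numpy as np

        thresholds = np.array([1.2, 0.8, 2.0])   # quantization: one per measurement
        influence = np.array([                    # rows: measurements, cols: faults
            [1, 0, 1],
            [0, 1, 1],
            [1, 1, 0],
        ])
        fault_names = ["gear fault", "bearing fault", "shaft fault"]

        def detect(measurements):
            flags = (np.abs(measurements) > thresholds).astype(int)
            scores = influence.T @ flags          # agreement with each signature
            return fault_names[int(np.argmax(scores))], flags

        fault, flags = detect(np.array([1.5, 0.2, 2.4]))
        print("flags:", flags, "-> most likely:", fault)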

  2. Micro and MACRO Fractals Generated by Multi-Valued Dynamical Systems

    NASA Astrophysics Data System (ADS)

    Banakh, T.; Novosad, N.

    2014-08-01

    Given a multi-valued function Φ : X ⊸ X on a topological space X, we study the properties of its fixed fractal ✠_Φ, which is defined as the closure of the orbit Φ^ω(*_Φ) = ⋃_{n∈ω} Φ^n(*_Φ) of the set *_Φ = {x ∈ X : x ∈ Φ(x)} of fixed points of Φ. Special attention is paid to the duality between micro-fractals and macro-fractals, which are the fixed fractals ✠_Φ and ✠_{Φ⁻¹} of a contracting compact-valued function Φ : X ⊸ X on a complete metric space X. With the help of algorithms (described in this paper) we generate various images of macro-fractals which are dual to some well-known micro-fractals like the fractal cross, the Sierpiński triangle, the Sierpiński carpet, the Koch curve, or the fractal snowflakes. The obtained images show that macro-fractals have a large-scale fractal structure, which becomes clearly visible after a suitable zooming.
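
    As a concrete instance, the sketch below approximates the fixed fractal of a contracting multi-valued map Φ(x) = {f1(x), f2(x), f3(x)} whose three branches contract toward the vertices of a triangle; the closure of the orbit is the Sierpiński triangle (the random sampling of branches is our rendering choice, not part of the paper).

        # Approximate the fixed fractal of a multi-valued contraction by
        # following a random branch of Phi at each step (chaos-game style).
        import random

        verts = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.866)]

        def phi_branch(p):
            # One branch of Phi: contract halfway toward a chosen vertex.
            vx, vy = random.choice(verts)
            return ((p[0] + vx) / 2, (p[1] + vy) / 2)

        p, points = (0.0, 0.0), []   # (0, 0) is a fixed point of one branch
        for _ in range(20000):
            p = phi_branch(p)
            points.append(p)
        print(len(points), "orbit points approximating the fixed fractal")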

  3. Band structure effects on resonant tunneling in III-V quantum wells versus two-dimensional vertical heterostructures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campbell, Philip M., E-mail: philip.campbell@gatech.edu; Electronic Systems Laboratory, Georgia Tech Research Institute, Atlanta, Georgia 30332; Tarasov, Alexey

    Since the invention of the Esaki diode, resonant tunneling devices have been of interest for applications including multi-valued logic and communication systems. These devices are characterized by the presence of negative differential resistance in the current-voltage characteristic, resulting from lateral momentum conservation during the tunneling process. While a large amount of research has focused on III-V material systems, such as the GaAs/AlGaAs system, for resonant tunneling devices, poor device performance and device-to-device variability have limited widespread adoption. Recently, the symmetric field-effect transistor (symFET) was proposed as a resonant tunneling device incorporating symmetric 2-D materials, such as transition metal dichalcogenides (TMDs), separated by an interlayer barrier, such as hexagonal boron nitride. The achievable peak-to-valley ratio for TMD symFETs has been predicted to be higher than has been observed for III-V resonant tunneling devices. This work examines the effect that band structure differences between III-V devices and TMDs have on device performance. It is shown that tunneling between the quantized subbands in III-V devices increases the valley current and decreases device performance, while the interlayer barrier height has a negligible impact on performance for barrier heights greater than approximately 0.5 eV.

  4. Automated Database Schema Design Using Mined Data Dependencies.

    ERIC Educational Resources Information Center

    Wong, S. K. M.; Butz, C. J.; Xiang, Y.

    1998-01-01

    Describes a bottom-up procedure for discovering multivalued dependencies in observed data without knowing a priori the relationships among the attributes. The proposed algorithm is an application of technique designed for learning conditional independencies in probabilistic reasoning; a prototype system for automated database schema design has…

  5. [Productive restructuring and the reallocation of work and employment: a survey of the "new" forms of social inequality].

    PubMed

    Marques, Ana Paula Pereira

    2013-06-01

    The scope of this paper is to question the inevitability of the processes of segmentation and increased precariousness of the relations of labor and employment, which are responsible for the introduction of "new" forms of social inequality that underpin the current model of development of economies and societies. It seeks to criticize the limits of global financial and economic logic, which constitute a "new spirit of capitalism," namely a kind of reverence for the natural order of things. It is therefore necessary to conduct an analytical survey of the ongoing changes in the labor market, accompanied by epistemological vigilance which makes it possible to see neoliberal (di)visions and dominant techno-deterministic theses in context. The enunciation of scenarios on the future of work will conclude this survey and will make it possible to draw attention to both the historical and temporal constraints and to the urgent need to unveil what is ideological and political in the prevailing logic of rationalization and processes to reinstate work and employment as a "central social experience" in contemporary times.

  6. Evolution of learning strategies in temporally and spatially variable environments: A review of theory

    PubMed Central

    Aoki, Kenichi; Feldman, Marcus W.

    2013-01-01

    The theoretical literature from 1985 to the present on the evolution of learning strategies in variable environments is reviewed, with the focus on deterministic dynamical models that are amenable to local stability analysis, and on deterministic models yielding evolutionarily stable strategies. Individual learning, unbiased and biased social learning, mixed learning, and learning schedules are considered. A rapidly changing environment or frequent migration in a spatially heterogeneous environment favors individual learning over unbiased social learning. However, results are not so straightforward in the context of learning schedules or when biases in social learning are introduced. The three major methods of modeling temporal environmental change – coevolutionary, two-timescale, and information decay – are compared and shown to sometimes yield contradictory results. The so-called Rogers’ paradox is inherent in the two-timescale method as originally applied to the evolution of pure strategies, but is often eliminated when the other methods are used. Moreover, Rogers’ paradox is not observed for the mixed learning strategies and learning schedules that we review. We believe that further theoretical work is necessary on learning schedules and biased social learning, based on models that are logically consistent and empirically pertinent. PMID:24211681

  7. Evolution of learning strategies in temporally and spatially variable environments: a review of theory.

    PubMed

    Aoki, Kenichi; Feldman, Marcus W

    2014-02-01

    The theoretical literature from 1985 to the present on the evolution of learning strategies in variable environments is reviewed, with the focus on deterministic dynamical models that are amenable to local stability analysis, and on deterministic models yielding evolutionarily stable strategies. Individual learning, unbiased and biased social learning, mixed learning, and learning schedules are considered. A rapidly changing environment or frequent migration in a spatially heterogeneous environment favors individual learning over unbiased social learning. However, results are not so straightforward in the context of learning schedules or when biases in social learning are introduced. The three major methods of modeling temporal environmental change--coevolutionary, two-timescale, and information decay--are compared and shown to sometimes yield contradictory results. The so-called Rogers' paradox is inherent in the two-timescale method as originally applied to the evolution of pure strategies, but is often eliminated when the other methods are used. Moreover, Rogers' paradox is not observed for the mixed learning strategies and learning schedules that we review. We believe that further theoretical work is necessary on learning schedules and biased social learning, based on models that are logically consistent and empirically pertinent. Copyright © 2013 Elsevier Inc. All rights reserved.

  8. Solution of the equations for one-dimensional, two-phase, immiscible flow by geometric methods

    NASA Astrophysics Data System (ADS)

    Boronin, Ivan; Shevlyakov, Andrey

    2018-03-01

    Buckley-Leverett equations describe non-viscous, immiscible, two-phase filtration, which is often of interest in the modelling of oil production. For many parameters and initial conditions, the solutions of these equations exhibit non-smooth behaviour, namely discontinuities in the form of shock waves. In this paper we obtain a novel method for the solution of Buckley-Leverett equations which is based on the geometry of differential equations. This method is fast, accurate, stable, and describes non-smooth phenomena. The main idea of the method is that classical discontinuous solutions correspond to continuous surfaces in the space of jets - the so-called multi-valued solutions (Bocharov et al., Symmetries and conservation laws for differential equations of mathematical physics. American Mathematical Society, Providence, 1998). A mapping of multi-valued solutions from the jet space onto the plane of the independent variables is constructed. This mapping is not one-to-one, and its singular points form a curve on the plane of the independent variables, called the caustic. The real shock occurs at points close to the caustic and is determined by the Rankine-Hugoniot conditions.
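
    For reference, the standard form of the problem discussed above (notation ours): the water saturation s(x, t) obeys a scalar conservation law with a non-convex flux, and the physical shock is selected by the Rankine-Hugoniot condition.

        \[
          \frac{\partial s}{\partial t} + \frac{\partial f(s)}{\partial x} = 0,
          \qquad
          f(s) = \frac{s^{2}}{s^{2} + M\,(1 - s)^{2}},
        \]
        where M is the mobility ratio. The characteristic construction makes s
        multi-valued; the admissible shock between states s_L and s_R moves with speed
        \[
          \sigma = \frac{f(s_L) - f(s_R)}{s_L - s_R}.
        \]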

  9. Medical privacy protection based on granular computing.

    PubMed

    Wang, Da-Wei; Liau, Churn-Jung; Hsu, Tsan-Sheng

    2004-10-01

    Based on granular computing methodology, we propose two criteria to quantitatively measure privacy invasion. The total cost criterion measures the effort needed for a data recipient to find private information. The average benefit criterion measures the benefit a data recipient obtains when he received the released data. These two criteria remedy the inadequacy of the deterministic privacy formulation proposed in Proceedings of Asia Pacific Medical Informatics Conference, 2000; Int J Med Inform 2003;71:17-23. Granular computing methodology provides a unified framework for these quantitative measurements and previous bin size and logical approaches. These two new criteria are implemented in a prototype system Cellsecu 2.0. Preliminary system performance evaluation is conducted and reviewed.

  10. Geometrization and Generalization of the Kowalevski Top

    NASA Astrophysics Data System (ADS)

    Dragović, Vladimir

    2010-08-01

    A new view of the Kowalevski top and the Kowalevski integration procedure is presented. For more than a century, the Kowalevski case of 1889 has attracted the full attention of a wide community as the highlight of the classical theory of integrable systems. Despite hundreds of papers on the subject, the Kowalevski integration is still understood as a magic recipe, an unbelievable sequence of skillful tricks, unexpected identities, and smart changes of variables. The novelty of our present approach is based on four observations. The first is that the so-called fundamental Kowalevski equation is an instance of a pencil equation from the theory of conics, which leads us to a new geometric interpretation of the Kowalevski variables w, x_1, x_2 as the pencil parameter and the Darboux coordinates, respectively. The second is the observation of a key algebraic property of the pencil equation, which is followed by the introduction and study of a new class of discriminantly separable polynomials. All steps of the Kowalevski integration procedure are then derived as easy and transparent logical consequences of our theory of discriminantly separable polynomials. The third observation connects the Kowalevski integration and the pencil equation with the theory of multi-valued groups: the Kowalevski change of variables is recognized as an example of a two-valued group operation and its action. The final observation is the surprising equivalence of the associativity of the two-valued group operation and its action to the n = 3 case of the Great Poncelet Theorem for pencils of conics.

  11. The Teacher in a Multivalue Society.

    ERIC Educational Resources Information Center

    Shaver, James P.

    Given the general recognition that what we do is influenced as much or more by our value commitments as by our factual knowledge, it is ironic that social studies, the area of the curriculum supposedly focused on citizenship education, has paid so little attention to values. There are many reasons for this, but one of them, the author believes, is…

  12. Data Aggregation in Multi-Agent Systems in the Presence of Hybrid Faults

    ERIC Educational Resources Information Center

    Srinivasan, Satish Mahadevan

    2010-01-01

    Data Aggregation (DA) is a set of functions that provide components of a distributed system access to global information for purposes of network management and user services. With the diverse new capabilities that networks can provide, applicability of DA is growing. DA is useful in dealing with multi-value domain information and often requires…

  13. On an Integral with Two Branch Points

    ERIC Educational Resources Information Center

    de Oliveira, E. Capelas; Chiacchio, Ary O.

    2006-01-01

    The paper considers a class of real integrals performed by using a convenient integral in the complex plane. A complex integral containing a multi-valued function with two branch points is transformed into another integral containing a pole and a unique branch point. As a by-product we obtain a new class of integrals which can be calculated in a…

  14. Sensitivity Analysis for Multivalued Treatment Effects: An Example of a Cross-Country Study of Teacher Participation and Job Satisfaction

    ERIC Educational Resources Information Center

    Chang, Chi

    2015-01-01

    It is known that interventions are hard to assign randomly to subjects in social psychological studies, because randomized control is difficult to implement strictly and precisely. Thus, in nonexperimental studies and observational studies, controlling the impact of covariates on the dependent variables and addressing the robustness of the…

  15. Deterministic quantum teleportation with feed-forward in a solid state system.

    PubMed

    Steffen, L; Salathe, Y; Oppliger, M; Kurpiers, P; Baur, M; Lang, C; Eichler, C; Puebla-Hellmann, G; Fedorov, A; Wallraff, A

    2013-08-15

    Engineered macroscopic quantum systems based on superconducting electronic circuits are attractive for experimentally exploring diverse questions in quantum information science. At the current state of the art, quantum bits (qubits) are fabricated, initialized, controlled, read out and coupled to each other in simple circuits. This enables the realization of basic logic gates, the creation of complex entangled states and the demonstration of algorithms or error correction. Using different variants of low-noise parametric amplifiers, dispersive quantum non-demolition single-shot readout of single-qubit states with high fidelity has enabled continuous and discrete feedback control of single qubits. Here we realize full deterministic quantum teleportation with feed-forward in a chip-based superconducting circuit architecture. We use a set of two parametric amplifiers for both joint two-qubit and individual qubit single-shot readout, combined with flexible real-time digital electronics. Our device uses a crossed quantum bus technology that allows us to create complex networks with arbitrary connecting topology in a planar architecture. The deterministic teleportation process succeeds with order unit probability for any input state, as we prepare maximally entangled two-qubit states as a resource and distinguish all Bell states in a single two-qubit measurement with high efficiency and high fidelity. We teleport quantum states between two macroscopic systems separated by 6 mm at a rate of 10⁴ s⁻¹, exceeding other reported implementations. The low transmission loss of superconducting waveguides is likely to enable the range of this and other schemes to be extended to significantly larger distances, enabling tests of non-locality and the realization of elements for quantum communication at microwave frequencies. The demonstrated feed-forward may also find application in error correction schemes.

  16. A method of transition conflict resolving in hierarchical control

    NASA Astrophysics Data System (ADS)

    Łabiak, Grzegorz

    2016-09-01

    The paper concerns the problem of automatically resolving transition conflicts in hierarchical concurrent state machines (also known as UML state machines). Preparing a formal specification of a behaviour free from conflicts can be very complex for the designer. This paper proposes a method for resolving conflicts through the modification of transition predicates. Partially specified predicates in the nondeterministic diagram are transformed into a symbolic Boolean space whose points encode all possible valuations of the transition predicates. Next, all valuations admitted by the partial specification are logically multiplied by a function which represents all possible orthogonal predicate valuations. The result of this operation contains all collections of predicates which, under the given partial specification, make the original diagram conflict-free and deterministic.
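
    The sketch below illustrates the underlying idea on a toy case (the encoding and predicates are invented): enumerate the Boolean completions of a partially specified transition predicate and keep only those that are orthogonal to the other transition's predicate, i.e. those that make the diagram deterministic.

        # Resolve a toy transition conflict by predicate completion (illustrative).
        from itertools import product

        def t2(x, y):                  # fully specified competing transition
            return x and y

        completions = {                # candidate completions of the partially
            "y":     lambda x, y: x and y,         # specified predicate t1
            "not y": lambda x, y: x and (not y),
        }

        conflict_free = [
            name for name, t1 in completions.items()
            if all(not (t1(x, y) and t2(x, y))     # never enabled together
                   for x, y in product([False, True], repeat=2))
        ]
        print("conflict-free completions of t1:", conflict_free)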

  17. Generation of an arbitrary concatenated Greenberger-Horne-Zeilinger state with single photons

    NASA Astrophysics Data System (ADS)

    Chen, Shan-Shan; Zhou, Lan; Sheng, Yu-Bo

    2017-02-01

    The concatenated Greenberger-Horne-Zeilinger (C-GHZ) state is a new kind of logic-qubit entangled state, which may have extensive applications in future quantum communication. In this letter, we propose a protocol for constructing an arbitrary C-GHZ state with single photons. We exploit the cross-Kerr nonlinearity for this purpose. This protocol has some advantages over previous protocols. First, it only requires two kinds of cross-Kerr nonlinearities to generate single phase shifts  ±θ. Second, it is not necessary to use sophisticated m-photon Toffoli gates. Third, this protocol is deterministic and can be used to generate an arbitrary C-GHZ state. This protocol may be useful in future quantum information processing based on the C-GHZ state.

  18. CRAX/Cassandra Reliability Analysis Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robinson, D.

    1999-02-10

    Over the past few years Sandia National Laboratories has been moving toward an increased dependence on model- or physics-based analyses as a means to assess the impact of long-term storage on the nuclear weapons stockpile. These deterministic models have also been used to evaluate replacements for aging systems, often involving commercial off-the-shelf (COTS) components. In addition, the models have been used to assess the performance of replacement components manufactured via unique, small-lot production runs. In either case, the limited amount of available test data dictates that the only logical course of action to characterize the reliability of these components is to specifically consider the uncertainties in material properties, operating environment, etc. within the physics-based (deterministic) model. This not only provides the ability to statistically characterize the expected performance of the component or system, but also provides direction regarding the benefits of additional testing on specific components within the system. An effort was therefore initiated to evaluate the capabilities of existing probabilistic methods and, if required, to develop new analysis methods to support the inclusion of uncertainty in the classical design tools used by analysts and design engineers at Sandia. The primary result of this effort is the CMX (Cassandra Exoskeleton) reliability analysis software.

  19. Analyzing simulation-based PRA data through traditional and topological clustering: A BWR station blackout case study

    DOE PAGES

    Maljovec, D.; Liu, S.; Wang, B.; ...

    2015-07-14

    Here, dynamic probabilistic risk assessment (DPRA) methodologies couple system simulator codes (e.g., RELAP and MELCOR) with simulation controller codes (e.g., RAVEN and ADAPT). Whereas system simulator codes model system dynamics deterministically, simulation controller codes introduce both deterministic (e.g., system control logic and operating procedures) and stochastic (e.g., component failures and parameter uncertainties) elements into the simulation. Typically, a DPRA is performed by sampling values of a set of parameters and simulating the system behavior for that specific set of parameter values. For complex systems, a major challenge in using DPRA methodologies is to analyze the large number of scenarios generated, where clustering techniques are typically employed to better organize and interpret the data. In this paper, we focus on the analysis of two nuclear simulation datasets that are part of the risk-informed safety margin characterization (RISMC) boiling water reactor (BWR) station blackout (SBO) case study. We provide the domain experts a software tool that encodes traditional and topological clustering techniques within an interactive analysis and visualization environment, for understanding the structures of such high-dimensional nuclear simulation datasets. We demonstrate through our case study that both types of clustering techniques complement each other for enhanced structural understanding of the data.

  20. Single-photon three-qubit quantum logic using spatial light modulators.

    PubMed

    Kagalwala, Kumel H; Di Giuseppe, Giovanni; Abouraddy, Ayman F; Saleh, Bahaa E A

    2017-09-29

    The information-carrying capacity of a single photon can be vastly expanded by exploiting its multiple degrees of freedom: spatial, temporal, and polarization. Although multiple qubits can be encoded per photon, to date only two-qubit single-photon quantum operations have been realized. Here, we report an experimental demonstration of three-qubit single-photon, linear, deterministic quantum gates that exploit photon polarization and the two-dimensional spatial-parity-symmetry of the transverse single-photon field. These gates are implemented using a polarization-sensitive spatial light modulator that provides a robust, non-interferometric, versatile platform for implementing controlled unitary gates. Polarization here represents the control qubit for either separable or entangling unitary operations on the two spatial-parity target qubits. Such gates help generate maximally entangled three-qubit Greenberger-Horne-Zeilinger and W states, which is confirmed by tomographical reconstruction of single-photon density matrices. This strategy provides access to a wide range of three-qubit states and operations for use in few-qubit quantum information processing protocols. Photons are essential for quantum information processing, but to date only two-qubit single-photon operations have been realized. Here the authors demonstrate experimentally a three-qubit single-photon linear deterministic quantum gate by exploiting polarization along with spatial-parity symmetry.

  1. Universal quantum computation using all-optical hybrid encoding

    NASA Astrophysics Data System (ADS)

    Guo, Qi; Cheng, Liu-Yong; Wang, Hong-Fu; Zhang, Shou

    2015-04-01

    By employing displacement operations, single-photon subtractions, and weak cross-Kerr nonlinearity, we propose an alternative way of implementing several universal quantum logic gates for all-optical hybrid qubits encoded in both single-photon polarization states and coherent states. Since these schemes can be implemented straightforwardly using only local operations, without a teleportation procedure, they require fewer physical resources and simpler operations than existing schemes. With the help of displacement operations, a large phase shift of the coherent state can be obtained via currently available tiny cross-Kerr nonlinearities. Thus, all of these schemes are nearly deterministic and feasible under current technology, which makes them suitable for large-scale quantum computing. Project supported by the National Natural Science Foundation of China (Grant Nos. 61465013, 11465020, and 11264042).

  2. The EPR paradox, Bell's inequality, and the question of locality

    NASA Astrophysics Data System (ADS)

    Blaylock, Guy

    2010-01-01

    Most physicists agree that the Einstein-Podolsky-Rosen-Bell paradox exemplifies much of the strange behavior of quantum mechanics, but argument persists about what assumptions underlie the paradox. To clarify what the debate is about, we employ a simple and well-known thought experiment involving two correlated photons to help us focus on the logical assumptions needed to construct the EPR and Bell arguments. The view presented in this paper is that the minimal assumptions behind Bell's inequality are locality and counterfactual definiteness but not scientific realism, determinism, or hidden variables as are often suggested. We further examine the resulting constraints on physical theory with an illustration from the many-worlds interpretation of quantum mechanics—an interpretation that we argue is deterministic, local, and realist but that nonetheless violates the Bell inequality.

  3. Safety Evaluation of an Automated Remote Monitoring System for Heart Failure in an Urban, Indigent Population.

    PubMed

    Gross-Schulman, Sandra; Sklaroff, Laura Myerchin; Hertz, Crystal Coyazo; Guterman, Jeffrey J

    2017-12-01

    Heart Failure (HF) is the most expensive preventable condition, regardless of patient ethnicity, race, socioeconomic status, sex, and insurance status. Remote telemonitoring with timely outpatient care can significantly reduce avoidable HF hospitalizations. Human outreach, the traditional method used for remote monitoring, is effective but costly. Automated systems can potentially provide positive clinical, fiscal, and satisfaction outcomes in chronic disease monitoring. The authors implemented a telephonic HF automated remote monitoring system that utilizes deterministic decision tree logic to identify patients who are at risk of clinical decompensation. This safety study evaluated the degree of clinical concordance between the automated system and traditional human monitoring. This study focused on a broad underserved population and demonstrated a safe, reliable, and inexpensive method of monitoring patients with HF.
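
    A purely hypothetical sketch of deterministic decision-tree triage logic of the kind described (the branch questions and thresholds below are invented, not the study's protocol):

        # Toy decision-tree logic for HF telemonitoring responses (hypothetical).
        def triage(weight_gain_lb, breathing_worse, swelling_worse):
            if weight_gain_lb >= 3:               # rapid fluid-retention signal
                return "alert: clinician calls patient today"
            if breathing_worse or swelling_worse:
                return "follow-up: schedule outpatient visit"
            return "stable: continue routine monitoring"

        print(triage(4, False, False))
        print(triage(0, True, False))
        print(triage(0, False, False))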

  4. Teleportation of a Toffoli gate among distant solid-state qubits with quantum dots embedded in optical microcavities

    PubMed Central

    Hu, Shi; Cui, Wen-Xue; Wang, Dong-Yang; Bai, Cheng-Hua; Guo, Qi; Wang, Hong-Fu; Zhu, Ai-Dong; Zhang, Shou

    2015-01-01

    Teleportation of unitary operations can be viewed as a quantum remote control. The remote realization of robust multiqubit logic gates among distant long-lived qubit registers is a key challenge for quantum computation and quantum information processing. Here we propose a simple and deterministic scheme for teleportation of a Toffoli gate among three spatially separated electron spin qubits in optical microcavities by using local linear optical operations, an auxiliary electron spin, two circularly-polarized entangled photon pairs, photon measurements, and classical communication. We assess the feasibility of the scheme and show that the scheme can be achieved with high average fidelity under the current technology. The scheme opens promising perspectives for constructing long-distance quantum communication and quantum computation networks with solid-state qubits. PMID:26225781

  5. Teleportation of a Toffoli gate among distant solid-state qubits with quantum dots embedded in optical microcavities.

    PubMed

    Hu, Shi; Cui, Wen-Xue; Wang, Dong-Yang; Bai, Cheng-Hua; Guo, Qi; Wang, Hong-Fu; Zhu, Ai-Dong; Zhang, Shou

    2015-07-30

    Teleportation of unitary operations can be viewed as a quantum remote control. The remote realization of robust multiqubit logic gates among distant long-lived qubit registers is a key challenge for quantum computation and quantum information processing. Here we propose a simple and deterministic scheme for teleportation of a Toffoli gate among three spatially separated electron spin qubits in optical microcavities by using local linear optical operations, an auxiliary electron spin, two circularly-polarized entangled photon pairs, photon measurements, and classical communication. We assess the feasibility of the scheme and show that the scheme can be achieved with high average fidelity under the current technology. The scheme opens promising perspectives for constructing long-distance quantum communication and quantum computation networks with solid-state qubits.

  6. Predictive Lateral Logic for Numerical Entry Guidance Algorithms

    NASA Technical Reports Server (NTRS)

    Smith, Kelly M.

    2016-01-01

    Recent entry guidance algorithm development [1-3] has tended to focus on onboard numerical integration of trajectories in order to evaluate candidate bank profiles. Such methods enjoy benefits such as flexibility across varying mission profiles and improved robustness to large dispersions. A common element across many of these modern entry guidance algorithms is a reliance upon the Apollo-heritage concept of lateral-error (or azimuth-error) deadbands, in which the number of bank reversals to be performed is non-deterministic. This paper presents a closed-loop bank reversal method that operates with a fixed number of bank reversals defined prior to flight. However, this number of bank reversals can be modified at any point, including in flight, based on contingencies such as fuel leaks where propellant usage must be minimized.

  7. Parallel Photonic Quantum Computation Assisted by Quantum Dots in One-Side Optical Microcavities

    PubMed Central

    Luo, Ming-Xing; Wang, Xiaojun

    2014-01-01

    Universal quantum logic gates are important elements for a quantum computer. In contrast to previous constructions on one degree of freedom (DOF) of quantum systems, we investigate the possibility of parallel quantum computations dependent on two DOFs of photon systems. We construct deterministic hyper-controlled-not (hyper-CNOT) gates operating on the spatial-mode and the polarization DOFs of two-photon or one-photon systems by exploring the giant optical circular birefringence induced by quantum-dot spins in one-sided optical microcavities. These hyper-CNOT gates show that the quantum states of two DOFs can be viewed as independent qubits without requiring auxiliary DOFs in theory. This result can reduce the quantum resources by half for quantum applications with large qubit systems, such as the quantum Shor algorithm. PMID:25030424

  8. Parallel photonic quantum computation assisted by quantum dots in one-side optical microcavities.

    PubMed

    Luo, Ming-Xing; Wang, Xiaojun

    2014-07-17

    Universal quantum logic gates are important elements for a quantum computer. In contrast to previous constructions on one degree of freedom (DOF) of quantum systems, we investigate the possibility of parallel quantum computations dependent on two DOFs of photon systems. We construct deterministic hyper-controlled-not (hyper-CNOT) gates operating on the spatial-mode and the polarization DOFs of two-photon or one-photon systems by exploring the giant optical circular birefringence induced by quantum-dot spins in one-sided optical microcavities. These hyper-CNOT gates show that the quantum states of two DOFs can be viewed as independent qubits without requiring auxiliary DOFs in theory. This result can reduce the quantum resources by half for quantum applications with large qubit systems, such as the quantum Shor algorithm.

  9. fissioncore: A desktop-computer simulation of a fission-bomb core

    NASA Astrophysics Data System (ADS)

    Cameron Reed, B.; Rohe, Klaus

    2014-10-01

    A computer program, fissioncore, has been developed to deterministically simulate the growth of the number of neutrons within an exploding fission-bomb core. The program allows users to explore the dependence of criticality conditions on parameters such as nuclear cross-sections, core radius, number of secondary neutrons liberated per fission, and the distance between nuclei. Simulations clearly illustrate the existence of a critical radius given a particular set of parameter values, as well as how the exponential growth of the neutron population (the condition that characterizes criticality) depends on these parameters. No understanding of neutron diffusion theory is necessary to appreciate the logic of the program or the results. The code is freely available in FORTRAN, C, and Java and is configured so that modifications to accommodate more refined physical conditions are possible.
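    The generation-by-generation logic of such a simulation can be conveyed in a few lines of Python (a deliberately crude sketch with made-up default parameters; fissioncore itself tracks geometry, cross-sections, and inter-nucleus spacing properly):

    ```python
    import math

    def neutron_growth(radius_cm, mfp_cm=3.6, nu=2.6, generations=20):
        """Toy deterministic core model: assume a neutron born in a bare
        sphere fails to escape with probability ~ 1 - exp(-radius/mfp),
        and each retained neutron causes a fission releasing nu
        secondaries. A growth factor > 1 per generation marks a
        supercritical radius."""
        p_stay = 1.0 - math.exp(-radius_cm / mfp_cm)
        n, history = 1.0, [1.0]
        for _ in range(generations):
            n *= nu * p_stay              # deterministic generation update
            history.append(n)
        return history

    # crude critical radius: nu * p_stay = 1  =>  r = -mfp * ln(1 - 1/nu)
    print(-3.6 * math.log(1.0 - 1.0 / 2.6))   # ~1.7 cm in this toy model
    ```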

  10. Dynamics and Stability of Acoustic Wavefronts in the Ocean

    DTIC Science & Technology

    2012-09-30

    has been developed to solve the eikonal equation and calculate wavefront and ray trajectory displacements, which are required to be small over a...solution of the eikonal equation lies in the eikonal (and acoustic travel time) being a multi-valued function of position. A number of computational...approaches to solve the eikonal equation without ray tracing have been developed in mathematical and seismological communities (Vidale, 1990; Sava and

  11. Dynamics and Stability of Acoustic Wavefronts in the Ocean

    DTIC Science & Technology

    2011-09-01

    developed to solve the eikonal equation and calculate wavefront and ray trajectory displacements, which are required to be small over a correlation length...with a direct modeling of acoustic wavefronts in the ocean through numerical solution of the eikonal equation lies in the eikonal (and acoustic...travel time) being a multi-valued function of position. A number of computational approaches to solve the eikonal equation without ray tracing have been

  12. RMS roughness-independent tuning of surface wettability by tailoring silver nanoparticles with a fluorocarbon plasma polymer.

    PubMed

    Choukourov, A; Kylián, O; Petr, M; Vaidulych, M; Nikitin, D; Hanuš, J; Artemenko, A; Shelemin, A; Gordeev, I; Kolská, Z; Solař, P; Khalakhan, I; Ryabov, A; Májek, J; Slavínská, D; Biederman, H

    2017-02-16

    A layer of 14 nm-sized Ag nanoparticles undergoes complex transformation when overcoated by thin films of a fluorocarbon plasma polymer. Two regimes of surface evolution are identified, both with invariable RMS roughness. In the early regime, the plasma polymer penetrates between and beneath the nanoparticles, raising them above the substrate and maintaining the multivalued character of the surface roughness. The growth (β) and the dynamic (1/z) exponents are close to zero and the interface bears the features of self-affinity. The presence of inter-particle voids leads to heterogeneous wetting with an apparent water contact angle θa = 135°. The multivalued nanotopography results in two possible positions for the water droplet meniscus, yet strong water adhesion indicates that the meniscus is located at the lower part of the spherical nanofeatures. In the late regime, the inter-particle voids become filled and the interface acquires a single-valued character. The plasma polymer proceeds to grow on the thus-roughened surface whereas the nanoparticles keep emerging away from the substrate. The RMS roughness remains invariable and lateral correlations propagate with 1/z = 0.27. The surface features multiaffinity, which arises from the different evolution of the length scales associated with the nanoparticles and with the plasma polymer. The wettability turns to the homogeneous wetting state.

  13. D-VASim: an interactive virtual laboratory environment for the simulation and analysis of genetic circuits.

    PubMed

    Baig, Hasan; Madsen, Jan

    2017-01-15

    Simulation and behavioral analysis of genetic circuits is a standard approach of functional verification prior to their physical implementation. Many software tools have been developed to perform in silico analysis for this purpose, but none of them allow users to interact with the model during runtime. The runtime interaction gives the user a feeling of being in the lab performing a real-world experiment. In this work, we present a user-friendly software tool named D-VASim (Dynamic Virtual Analyzer and Simulator), which provides a virtual laboratory environment to simulate and analyze the behavior of genetic logic circuit models represented in the Systems Biology Markup Language (SBML). Hence, SBML models developed in other software environments can be analyzed and simulated in D-VASim. D-VASim offers deterministic as well as stochastic simulation and differs from other software tools by being able to extract and validate the Boolean logic from the SBML model. D-VASim is also capable of analyzing the threshold value and propagation delay of a genetic circuit model. D-VASim is available for Windows and Mac OS and can be downloaded from bda.compute.dtu.dk/downloads/. haba@dtu.dk, jama@dtu.dk.
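    The threshold idea can be made concrete with a hedged sketch of trace binarization (our own simplified recipe in Python; `simulate` is a user-supplied model run, and the steady-state window is an assumption, not D-VASim's internals):

    ```python
    import itertools
    import numpy as np

    def infer_truth_table(simulate, threshold, n_inputs=2, window=100):
        """Drive each Boolean input combination, run the model to an
        (assumed) steady state, and binarize the mean output level
        against a threshold to recover the circuit's logic function."""
        table = {}
        for bits in itertools.product([0, 1], repeat=n_inputs):
            trace = simulate(bits)               # hypothetical model runner
            table[bits] = int(np.mean(trace[-window:]) > threshold)
        return table
    ```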

  14. Random noise effects in pulse-mode digital multilayer neural networks.

    PubMed

    Kim, Y C; Shanblatt, M A

    1995-01-01

    A pulse-mode digital multilayer neural network (DMNN) based on stochastic computing techniques is implemented with simple logic gates as basic computing elements. The pulse-mode signal representation and the use of simple logic gates for neural operations lead to a massively parallel yet compact and flexible network architecture, well suited for VLSI implementation. Algebraic neural operations are replaced by stochastic processes using pseudorandom pulse sequences. The distributions of the results from the stochastic processes are approximated using the hypergeometric distribution. Synaptic weights and neuron states are represented as probabilities and estimated as average pulse occurrence rates in corresponding pulse sequences. A statistical model of the noise (error) is developed to estimate the relative accuracy associated with stochastic computing in terms of mean and variance. Computational differences are then explained by comparison to deterministic neural computations. DMNN feedforward architectures are modeled in VHDL using character recognition problems as testbeds. Computational accuracy is analyzed, and the results of the statistical model are compared with the actual simulation results. Experiments show that the calculations performed in the DMNN are more accurate than those anticipated when Bernoulli sequences are assumed, as is common in the literature. Furthermore, the statistical model successfully predicts the accuracy of the operations performed in the DMNN.
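    The core arithmetic trick of pulse-mode stochastic computing fits in a few lines (Python sketch; we use the simpler Bernoulli pulse model that, as the paper notes, is commonly assumed in the literature, rather than the hypergeometric refinement the authors analyze):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def stochastic_multiply(p, q, n_pulses=4096):
        """Encode p and q in [0, 1] as average pulse rates of random bit
        streams; a single AND gate then computes their product, read back
        by counting pulses. Accuracy improves like 1/sqrt(n_pulses)."""
        a = rng.random(n_pulses) < p      # pulse sequence with rate p
        b = rng.random(n_pulses) < q      # pulse sequence with rate q
        return (a & b).mean()             # AND gate + pulse counting ~ p*q

    print(stochastic_multiply(0.6, 0.5))  # ~0.30 plus stochastic noise
    ```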

  15. A photon-photon quantum gate based on a single atom in an optical resonator.

    PubMed

    Hacker, Bastian; Welte, Stephan; Rempe, Gerhard; Ritter, Stephan

    2016-08-11

    That two photons pass each other undisturbed in free space is ideal for the faithful transmission of information, but prohibits an interaction between the photons. Such an interaction is, however, required for a plethora of applications in optical quantum information processing. The long-standing challenge here is to realize a deterministic photon-photon gate, that is, a mutually controlled logic operation on the quantum states of the photons. This requires an interaction so strong that each of the two photons can shift the other's phase by π radians. For polarization qubits, this amounts to the conditional flipping of one photon's polarization to an orthogonal state. So far, only probabilistic gates based on linear optics and photon detectors have been realized, because "no known or foreseen material has an optical nonlinearity strong enough to implement this conditional phase shift". Meanwhile, tremendous progress in the development of quantum-nonlinear systems has opened up new possibilities for single-photon experiments. Platforms range from Rydberg blockade in atomic ensembles to single-atom cavity quantum electrodynamics. Applications such as single-photon switches and transistors, two-photon gateways, nondestructive photon detectors, photon routers and nonlinear phase shifters have been demonstrated, but none of them with the ideal information carriers: optical qubits in discriminable modes. Here we use the strong light-matter coupling provided by a single atom in a high-finesse optical resonator to realize the Duan-Kimble protocol of a universal controlled phase flip (π phase shift) photon-photon quantum gate. We achieve an average gate fidelity of (76.2 ± 3.6) per cent and specifically demonstrate the capability of conditional polarization flipping as well as entanglement generation between independent input photons. This photon-photon quantum gate is a universal quantum logic element, and therefore could perform most existing two-photon operations. The demonstrated feasibility of deterministic protocols for the optical processing of quantum information could lead to new applications in which photons are essential, especially long-distance quantum communication and scalable quantum computing.

  16. Short Note on Complexity of Multi-Value Byzantine Agreement

    DTIC Science & Technology

    2010-07-27

    which lead to nBl/D bits over the whole algorithm. Broadcasts in extended step: In the extended step, every node broadcasts D bits. Thus nDB bits... bits, as: (n−1)l + n(n−1)(k + D/k)l/D + nBl/D + nDBt(t+1) = (n−1)l + O(n²kl/D + n²l/k + nBl/D + n³BD). Notice that broadcast algorithm of

  17. A β-Ta system for current induced magnetic switching in the absence of external magnetic field

    NASA Astrophysics Data System (ADS)

    Chen, Wenzhe; Qian, Lijuan; Xiao, Gang

    2018-05-01

    Magnetic switching via Giant Spin Hall Effect (GSHE) has received great interest for its role in developing future spintronics logic or memory devices. In this work, a new material system (i.e. a transition metal sandwiched between two ferromagnetic layers) with interlayer exchange coupling is introduced to realize deterministic field-free perpendicular magnetic switching. This system uses β-Ta as the GSHE agent to generate a spin current and as the interlayer exchange coupling medium to generate an internal field. The critical switching current density at zero field is on the order of 10⁶ A/cm² due to the large spin Hall angle of β-Ta. The internal field, along with switching efficiency, depends strongly on the orthogonal magnetization states of the two ferromagnetic coupling layers in this system.

  18. Optical π phase shift created with a single-photon pulse.

    PubMed

    Tiarks, Daniel; Schmidt, Steffen; Rempe, Gerhard; Dürr, Stephan

    2016-04-01

    A deterministic photon-photon quantum logic gate is a long-standing goal. Building such a gate becomes possible if a light pulse containing only one photon imprints a phase shift of π onto another light field. We experimentally demonstrate the generation of such a π phase shift with a single-photon pulse. A first light pulse containing less than one photon on average is stored in an atomic gas. Rydberg blockade combined with electromagnetically induced transparency creates a phase shift for a second light pulse, which propagates through the medium. We measure the π phase shift of the second pulse when we postselect the data upon the detection of a retrieved photon from the first pulse. This demonstrates a crucial step toward a photon-photon gate and offers a variety of applications in the field of quantum information processing.

  19. Elements of decisional dynamics: An agent-based approach applied to artificial financial market

    NASA Astrophysics Data System (ADS)

    Lucas, Iris; Cotsaftis, Michel; Bertelle, Cyrille

    2018-02-01

    This paper introduces an original mathematical description for describing agents' decision-making process in the case of problems affected by both individual and collective behaviors in systems characterized by nonlinear, path dependent, and self-organizing interactions. An application to artificial financial markets is proposed by designing a multi-agent system based on the proposed formalization. In this application, agents' decision-making process is based on fuzzy logic rules and the price dynamics is purely deterministic according to the basic matching rules of a central order book. Finally, while putting most parameters under evolutionary control, the computational agent-based system is able to replicate several stylized facts of financial time series (distributions of stock returns showing a heavy tail with positive excess kurtosis, absence of autocorrelations in stock returns, and volatility clustering phenomenon).

  20. Elements of decisional dynamics: An agent-based approach applied to artificial financial market.

    PubMed

    Lucas, Iris; Cotsaftis, Michel; Bertelle, Cyrille

    2018-02-01

    This paper introduces an original mathematical description for describing agents' decision-making process in the case of problems affected by both individual and collective behaviors in systems characterized by nonlinear, path dependent, and self-organizing interactions. An application to artificial financial markets is proposed by designing a multi-agent system based on the proposed formalization. In this application, agents' decision-making process is based on fuzzy logic rules and the price dynamics is purely deterministic according to the basic matching rules of a central order book. Finally, while putting most parameters under evolutionary control, the computational agent-based system is able to replicate several stylized facts of financial time series (distributions of stock returns showing a heavy tail with positive excess kurtosis, absence of autocorrelations in stock returns, and volatility clustering phenomenon).
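    The deterministic matching layer such simulations rest on is itself compact (a toy price-time-priority central order book in Python; the class and its simplifications are our illustration, not the authors' code):

    ```python
    import heapq

    class OrderBook:
        """Minimal central limit order book: incoming orders trade against
        the best opposite quotes (price priority, then arrival time), and
        any unfilled remainder rests in the book."""
        def __init__(self):
            self.bids, self.asks, self.seq = [], [], 0

        def submit(self, side, price, qty):
            self.seq += 1
            trades = []
            if side == "buy":
                while qty and self.asks and self.asks[0][0] <= price:
                    p, s, q = heapq.heappop(self.asks)
                    fill = min(qty, q)
                    trades.append((p, fill))      # trade at resting price
                    qty -= fill
                    if q > fill:
                        heapq.heappush(self.asks, (p, s, q - fill))
                if qty:
                    heapq.heappush(self.bids, (-price, self.seq, qty))
            else:
                while qty and self.bids and -self.bids[0][0] >= price:
                    np_, s, q = heapq.heappop(self.bids)
                    fill = min(qty, q)
                    trades.append((-np_, fill))
                    qty -= fill
                    if q > fill:
                        heapq.heappush(self.bids, (np_, s, q - fill))
                if qty:
                    heapq.heappush(self.asks, (price, self.seq, qty))
            return trades

    book = OrderBook()
    book.submit("sell", 101, 5)
    print(book.submit("buy", 102, 3))   # [(101, 3)]: matched deterministically
    ```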

  1. Solving deterministic non-linear programming problem using Hopfield artificial neural network and genetic programming techniques

    NASA Astrophysics Data System (ADS)

    Vasant, P.; Ganesan, T.; Elamvazuthi, I.

    2012-11-01

    Fairly reasonable results were obtained for non-linear engineering problems in the past using optimization techniques such as neural networks, genetic algorithms, and fuzzy logic independently. Increasingly, hybrid techniques are being used to solve non-linear problems to obtain better output. This paper discusses the use of a neuro-genetic hybrid technique to optimize geological structure mapping, which is known as seismic survey. It involves the minimization of an objective function subject to geophysical and operational constraints. In this work, the optimization was initially performed using genetic programming, followed by hybrid neuro-genetic programming approaches. Comparative studies and analysis were then carried out on the optimized results. The results indicate that the hybrid neuro-genetic technique produced better results than the stand-alone genetic programming method.

  2. Stochastic p -Bits for Invertible Logic

    NASA Astrophysics Data System (ADS)

    Camsari, Kerem Yunus; Faria, Rafatul; Sutton, Brian M.; Datta, Supriyo

    2017-07-01

    Conventional semiconductor-based logic and nanomagnet-based memory devices are built out of stable, deterministic units such as standard metal-oxide semiconductor transistors, or nanomagnets with energy barriers in excess of ≈ 40-60 kT. In this paper, we show that unstable, stochastic units, which we call "p-bits," can be interconnected to create robust correlations that implement precise Boolean functions with impressive accuracy, comparable to standard digital circuits. At the same time, they are invertible, a unique property that is absent in standard digital circuits. When operated in the direct mode, the input is clamped, and the network provides the correct output. In the inverted mode, the output is clamped, and the network fluctuates among all possible inputs that are consistent with that output. First, we present a detailed implementation of an invertible gate to bring out the key role of a single three-terminal transistorlike building block to enable the construction of correlated p-bit networks. The results for this specific, CMOS-assisted nanomagnet-based hardware implementation agree well with those from a universal model for p-bits, showing that p-bits need not be magnet based: any three-terminal tunable random bit generator should be suitable. We present a general algorithm for designing a Boltzmann machine (BM) with a symmetric connection matrix [J] (J_ij = J_ji) that implements a given truth table with p-bits. The [J] matrices are relatively sparse with a few unique weights for convenient hardware implementation. We then show how BM full adders can be interconnected in a partially directed manner (J_ij ≠ J_ji) to implement large logic operations such as 32-bit binary addition. Hundreds of stochastic p-bits get precisely correlated such that the correct answer out of 2³³ (≈ 8 × 10⁹) possibilities can be extracted by looking at the statistical mode or majority vote of a number of time samples. With perfect directivity (J_ji = 0) a small number of samples is enough, while for less directed connections more samples are needed, but even in the former case logical invertibility is largely preserved. This combination of digital accuracy and logical invertibility is enabled by the hybrid design that uses bidirectional BM units to construct circuits with partially directed interunit connections. We establish this key result with extensive examples including a 4-bit multiplier which in inverted mode functions as a factorizer.
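    A universal-model p-bit network is short enough to sketch in software (Python; the update rule follows the paper's published p-bit equations in spirit, but the function, parameters, and sequential update schedule here are our own simplification, not the hardware design):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def run_pbits(J, h, beta=2.0, steps=20000):
        """J: (n, n) coupling matrix, h: length-n bias vector. Each p-bit
        fluctuates as m_i = sgn(tanh(beta*I_i) - r) with r ~ uniform(-1, 1)
        and input I_i = h_i + sum_j J_ij m_j. Clamping inputs via h runs a
        gate forward; clamping the output instead makes the network
        fluctuate over the inputs consistent with it (invertibility)."""
        n = len(h)
        m = rng.choice(np.array([-1, 1]), size=n)
        samples = []
        for _ in range(steps):
            i = rng.integers(n)                   # random sequential update
            I = h[i] + J[i] @ m
            m[i] = 1 if np.tanh(beta * I) > rng.uniform(-1.0, 1.0) else -1
            samples.append(m.copy())
        return np.array(samples)   # read the answer off the statistical mode
    ```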

  3. N-state random switching based on quantum tunnelling

    NASA Astrophysics Data System (ADS)

    Bernardo Gavito, Ramón; Jiménez Urbanos, Fernando; Roberts, Jonathan; Sexton, James; Astbury, Benjamin; Shokeir, Hamzah; McGrath, Thomas; Noori, Yasir J.; Woodhead, Christopher S.; Missous, Mohamed; Roedig, Utz; Young, Robert J.

    2017-08-01

    In this work, we show how the hysteretic behaviour of resonant tunnelling diodes (RTDs) can be exploited for new functionalities. In particular, the RTDs exhibit a stochastic 2-state switching mechanism that could be useful for random number generation and cryptographic applications. This behaviour can be scaled to N-bit switching by connecting various RTDs in series. The InGaAs/AlAs RTDs used in our experiments display very sharp negative differential resistance (NDR) peaks at room temperature which show hysteresis cycles that, rather than having a fixed switching threshold, show a probability distribution about a central value. We propose to use this intrinsic uncertainty, emerging from the quantum nature of the RTDs, as a source of randomness. We show that a combination of two RTDs in series results in devices with three-state outputs and discuss the possibility of scaling to N-state devices by subsequent series connections of RTDs, which we demonstrate for up to the 4-state case. The intrinsic uncertainty in the conduction paths of resonant tunnelling diodes can thus be integrated into current electronics to produce on-chip true random number generators. The N-shaped I-V characteristic of RTDs results in a two-level random voltage output when driven with current pulse trains. Electrical characterisation and randomness testing of the devices was conducted in order to determine the validity of the true randomness assumption. Based on the results obtained for the single-RTD case, we suggest the possibility of using multi-well devices to generate N-state random switching devices for use in random number generation or multi-valued logic devices.

  4. Memory Applications Using Resonant Tunneling Diodes

    NASA Astrophysics Data System (ADS)

    Shieh, Ming-Huei

    Resonant tunneling diodes (RTDs) producing unique folding current-voltage (I-V) characteristics have attracted considerable research attention due to their promising application in signal processing and multi-valued logic. The negative differential resistance of RTDs renders the operating points self-latching and stable. We have proposed a multiple-dimensional multiple-state RTD-based static random-access memory (SRAM) cell in which the number of stable states can be increased significantly to (N+1)^m or more for m N-peak RTDs connected in series. The proposed cells take advantage of the hysteresis and folding I-V characteristics of RTDs. Several cell designs are presented and evaluated. A two-dimensional nine-state memory cell has been implemented and demonstrated by a breadboard circuit using two 2-peak RTDs. The hysteresis phenomenon in a series of RTDs is also further analyzed. The switch model provided in SPICE 3 can be utilized to simulate the hysteretic I-V characteristics of RTDs. A simple macro-circuit is described to model the hysteretic I-V characteristic of an RTD for circuit simulation. A new scheme for storing word-wide multiple-bit information very efficiently in a single memory cell using RTDs is proposed. An efficient and inexpensive periphery circuit to read from and write into the cell is also described. Simulation results on the design of a 3-bit memory cell scheme using one-peak RTDs are also presented. Finally, a binary transistor-less memory cell which is composed of only a pair of RTDs and an ordinary rectifier diode is presented and investigated. A simple means for reading and writing information from or into the memory cell is also discussed.

  5. Sensitivity analysis of primary resonances and bifurcations of a controlled piecewise-smooth system with negative stiffness

    NASA Astrophysics Data System (ADS)

    Huang, Dongmei; Xu, Wei

    2017-11-01

    In this paper, the combination of cubic nonlinearity and time delay is proposed to improve the performance of a piecewise-smooth (PWS) system with negative stiffness. Dynamical properties, feedback control performance and symmetry-breaking bifurcation are mainly considered for a PWS system with negative stiffness under nonlinear position and velocity feedback control. For the free vibration system, the homoclinic-like orbits are first derived. Then, the amplitude-frequency response of the controlled system is obtained analytically by means of the Lindstedt-Poincaré method and the method of multiple scales, and verified against numerical results. A softening-type behavior, which directly leads to multi-valued responses, is illustrated under negative position feedback. In particular, five-valued responses, three branches of which are stable, are found. Complex multi-valued characteristics are also observed in the force-amplitude responses. Furthermore, to explain the effectiveness of the feedback control, the equivalent damping and stiffness are introduced. The sensitivity of the system response to the feedback gain and time delay is comprehensively considered and interesting dynamical properties are found. From the perspective of suppressing the maximum amplitude and controlling the resonance stability, the selection of the feedback parameters is discussed. Finally, the symmetry-breaking bifurcation and chaotic motion are considered.

  6. MV-OPES: Multivalued-Order Preserving Encryption Scheme: A Novel Scheme for Encrypting Integer Value to Many Different Values

    NASA Astrophysics Data System (ADS)

    Kadhem, Hasan; Amagasa, Toshiyuki; Kitagawa, Hiroyuki

    Encryption can provide strong security for sensitive data against inside and outside attacks. This is especially true in the "Database as Service" model, where confidentiality and privacy are important issues for the client. In fact, existing encryption approaches are vulnerable to statistical attack because each value is encrypted to another fixed value. This paper presents a novel database encryption scheme called MV-OPES (Multivalued-Order Preserving Encryption Scheme), which allows privacy-preserving queries over encrypted databases with an improved security level. Our idea is to encrypt a value to multiple different values to prevent statistical attacks. At the same time, MV-OPES preserves the order of the integer values to allow comparison operations to be applied directly on encrypted data. Using a calculated distance (range), we propose a novel method that allows a join query between relations based on inequality over encrypted values. We also present techniques to offload query execution load to a database server as much as possible, thereby making better use of server resources in a database outsourcing environment. Our scheme can easily be integrated with current database systems as it is designed to work with existing indexing structures. It is robust against statistical attack and the estimation of true values. MV-OPES experiments show that security for sensitive data can be achieved with reasonable overhead, establishing the practicability of the scheme.
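    The core idea fits in a few lines of Python (a deliberately naive sketch: fixed, public interval boundaries, whereas MV-OPES derives its interval structure from a secret key):

    ```python
    import random

    WIDTH, SPREAD = 1000, 400   # interval stride and sampling range (SPREAD < WIDTH)

    def encrypt(v):
        """Map plaintext integer v into its own interval
        [v*WIDTH, v*WIDTH + SPREAD). Re-encrypting the same v gives
        different ciphertexts (resisting frequency analysis), yet order is
        preserved because intervals of distinct plaintexts never overlap."""
        return v * WIDTH + random.randrange(SPREAD)

    # comparisons run directly on ciphertexts:
    assert encrypt(3) < encrypt(4)     # always holds, since 3399 < 4000
    ```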

  7. Medical image processing using neural networks based on multivalued and universal binary neurons

    NASA Astrophysics Data System (ADS)

    Aizenberg, Igor N.; Aizenberg, Naum N.; Gotko, Eugen S.; Sochka, Vladimir A.

    1998-06-01

    Cellular Neural Networks (CNNs) have become a very good means of solving different kinds of image processing problems. CNNs based on multi-valued neurons (CNN-MVN) and CNNs based on universal binary neurons (CNN-UBN) are specific kinds of CNN. MVN and UBN are neurons with complex-valued weights and complex internal arithmetic. Their main feature is the ability to implement an arbitrary mapping between inputs and output described by the MVN, and an arbitrary (not only threshold) Boolean function (UBN). A great advantage of the CNN is the ability to implement any linear and many non-linear filters in the spatial domain. Together with noise removal using CNNs, it is possible to implement filters that amplify high and medium frequencies. These filters are a very good means for solving the enhancement problem and the problem of extracting details against a complex background. Thus, CNNs make it possible to organize the entire processing chain from filtering to extraction of the important details. The organization of this process for medical image processing is considered in the paper. Major attention is concentrated on the processing of X-ray and ultrasound images corresponding to different oncological (or close to oncological) pathologies. Additionally, we consider a new neural network structure for the problem of differential diagnostics of breast cancer.
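    For readers unfamiliar with the MVN, its discrete activation is worth a short sketch (Python/numpy, following the standard Aizenberg-style formulation; variable names are ours):

    ```python
    import numpy as np

    def mvn_activation(z, k):
        """Map the complex weighted sum z onto a k-th root of unity: the
        complex plane is cut into k equal angular sectors, and the output
        is the root of unity labeling the sector containing arg(z)."""
        sector = int(np.floor(k * (np.angle(z) % (2 * np.pi)) / (2 * np.pi)))
        return np.exp(2j * np.pi * sector / k)

    def mvn_output(weights, inputs, k=8):
        """Multi-valued neuron: complex weights and inputs, with
        weights[0] acting as the bias term."""
        z = weights[0] + np.dot(weights[1:], inputs)
        return mvn_activation(z, k)
    ```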

  8. Metabolic Free Energy and Biological Codes: A 'Data Rate Theorem' Aging Model.

    PubMed

    Wallace, Rodrick

    2015-06-01

    A famous argument by Maturana and Varela (Autopoiesis and cognition. Reidel, Dordrecht, 1980) holds that the living state is cognitive at every scale and level of organization. Since it is possible to associate many cognitive processes with 'dual' information sources, pathologies can sometimes be addressed using statistical models based on the Shannon Coding, the Shannon-McMillan Source Coding, the Rate Distortion, and the Data Rate Theorems, which impose necessary conditions on information transmission and system control. Deterministic-but-for-error biological codes do not directly invoke cognition, but may be essential subcomponents within larger cognitive processes. A formal argument, however, places such codes within a similar framework, with metabolic free energy serving as a 'control signal' stabilizing biochemical code-and-translator dynamics in the presence of noise. Demand beyond available energy supply triggers punctuated destabilization of the coding channel, affecting essential biological functions. Aging, normal or prematurely driven by psychosocial or environmental stressors, must interfere with the routine operation of such mechanisms, initiating the chronic diseases associated with senescence. Amyloid fibril formation, intrinsically disordered protein logic gates, and cell surface glycan/lectin 'kelp bed' logic gates are reviewed from this perspective. The results generalize beyond coding machineries having easily recognizable symmetry modes, and strip a layer of mathematical complication from the study of phase transitions in nonequilibrium biological systems.

  9. Maximal incompatibility of locally classical behavior and global causal order in multiparty scenarios

    NASA Astrophysics Data System (ADS)

    Baumeler, Ämin; Feix, Adrien; Wolf, Stefan

    2014-10-01

    Quantum theory in a global spacetime gives rise to nonlocal correlations, which cannot be explained causally in a satisfactory way; this motivates the study of theories with reduced global assumptions. Oreshkov, Costa, and Brukner [Nat. Commun. 3, 1092 (2012), 10.1038/ncomms2076] proposed a framework in which quantum theory is valid locally but where, at the same time, no global spacetime, i.e., predefined causal order, is assumed beyond the absence of logical paradoxes. It was shown for the two-party case, however, that a global causal order always emerges in the classical limit. Quite naturally, it has been conjectured that the same also holds in the multiparty setting. We show that, counter to this belief, classical correlations locally compatible with classical probability theory exist that allow for deterministic signaling between three or more parties incompatible with any predefined causal order.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harvey-Collard, Patrick; Jacobson, N. Tobias; Rudolph, Martin

    Individual donors in silicon chips are used as quantum bits with extremely low error rates. However, physical realizations have been limited to one donor because their atomic size causes fabrication challenges. Quantum dot qubits, in contrast, are highly adjustable using electrical gate voltages. This adjustability could be leveraged to deterministically couple donors to quantum dots in arrays of qubits. In this work, we demonstrate the coherent interaction of a 31P donor electron with the electron of a metal-oxide-semiconductor quantum dot. We form a logical qubit encoded in the spin singlet and triplet states of the two-electron system. We show that the donor nuclear spin drives coherent rotations between the electronic qubit states through the contact hyperfine interaction. This provides every key element for compact two-electron spin qubits requiring only a single dot and no additional magnetic field gradients, as well as a means to interact with the nuclear spin qubit.

  11. Rewriting Modulo SMT and Open System Analysis

    NASA Technical Reports Server (NTRS)

    Rocha, Camilo; Meseguer, Jose; Munoz, Cesar

    2014-01-01

    This paper proposes rewriting modulo SMT, a new technique that combines the power of SMT solving, rewriting modulo theories, and model checking. Rewriting modulo SMT is ideally suited to model and analyze infinite-state open systems, i.e., systems that interact with a non-deterministic environment. Such systems exhibit both internal non-determinism, which is proper to the system, and external non-determinism, which is due to the environment. In a reflective formalism, such as rewriting logic, rewriting modulo SMT can be reduced to standard rewriting. Hence, rewriting modulo SMT naturally extends rewriting-based reachability analysis techniques, which are available for closed systems, to open systems. The proposed technique is illustrated with the formal analysis of: (i) a real-time system that is beyond the scope of timed-automata methods and (ii) automatic detection of reachability violations in a synchronous language developed to support autonomous spacecraft operations.
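    The combination is easy to caricature in a few lines (Python with the z3 SMT solver; this toy guarded-rewrite loop is our illustration of the idea, not the paper's rewriting-logic machinery):

    ```python
    from z3 import Int, Solver, sat

    def rewrite_modulo_smt(state, rules):
        """Apply each rewrite rule whose symbolic guard is satisfiable in
        the current state; the SMT solver decides guard satisfiability,
        standing in for the non-deterministic environment."""
        x = Int('x')
        successors = []
        for guard, step in rules:
            s = Solver()
            s.add(guard(x), x == state)
            if s.check() == sat:
                successors.append(step(state))
        return successors

    # two guarded rules over an integer counter
    rules = [(lambda x: x > 0, lambda v: v - 1),
             (lambda x: x < 10, lambda v: v + 1)]
    print(rewrite_modulo_smt(5, rules))   # both guards hold: [4, 6]
    ```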

  12. Correlation between spin structure oscillations and domain wall velocities

    PubMed Central

    Bisig, André; Stärk, Martin; Mawass, Mohamad-Assaad; Moutafis, Christoforos; Rhensius, Jan; Heidler, Jakoba; Büttner, Felix; Noske, Matthias; Weigand, Markus; Eisebitt, Stefan; Tyliszczak, Tolek; Van Waeyenberge, Bartel; Stoll, Hermann; Schütz, Gisela; Kläui, Mathias

    2013-01-01

    Magnetic sensing and logic devices based on the motion of magnetic domain walls rely on the precise and deterministic control of the position and the velocity of individual magnetic domain walls in curved nanowires. Varying domain wall velocities have been predicted to result from intrinsic effects such as oscillating domain wall spin structure transformations and extrinsic pinning due to imperfections. Here we use direct dynamic imaging of the nanoscale spin structure that allows us for the first time to directly check these predictions. We find a new regime of oscillating domain wall motion even below the Walker breakdown correlated with periodic spin structure changes. We show that the extrinsic pinning from imperfections in the nanowire only affects slow domain walls and we identify the magnetostatic energy, which scales with the domain wall velocity, as the energy reservoir for the domain wall to overcome the local pinning potential landscape. PMID:23978905

  13. Antiferromagnetic CuMnAs multi-level memory cell with microelectronic compatibility

    NASA Astrophysics Data System (ADS)

    Olejník, K.; Schuler, V.; Marti, X.; Novák, V.; Kašpar, Z.; Wadley, P.; Campion, R. P.; Edmonds, K. W.; Gallagher, B. L.; Garces, J.; Baumgartner, M.; Gambardella, P.; Jungwirth, T.

    2017-05-01

    Antiferromagnets offer a unique combination of properties including radiation and magnetic field hardness, the absence of stray magnetic fields, and a spin-dynamics frequency scale in the terahertz range. Recent experiments have demonstrated that relativistic spin-orbit torques can provide the means for an efficient electric control of antiferromagnetic moments. Here we show that elementary-shape memory cells fabricated from a single-layer antiferromagnet CuMnAs deposited on a III-V or Si substrate have deterministic multi-level switching characteristics. They allow for counting and recording thousands of input pulses and responding to pulses of lengths downscaled to hundreds of picoseconds. To demonstrate the compatibility with common microelectronic circuitry, we implemented the antiferromagnetic bit cell in a standard printed circuit board managed and powered at ambient conditions by a computer via a USB interface. Our results open a path towards specialized embedded memory-logic applications and ultra-fast components based on antiferromagnets.

  14. Processing multiphoton states through operation on a single photon: Methods and applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin Qing; He Bing; Bergou, Janos A.

    2009-10-15

    Multiphoton states are widely applied in quantum information technology. By the methods presented in this paper, the structure of a multiphoton state in the form of multiple single-photon qubit products can be mapped to a single-photon qudit, which could also be in a separable product with other photons. This makes possible the manipulation of such multiphoton states by processing single-photon states. The optical realization of unknown qubit discrimination [B. He, J. A. Bergou, and Y.-H. Ren, Phys. Rev. A 76, 032301 (2007)] is simplified with the transformation methods. Another application is the construction of quantum logic gates, where the inverse transformations back to the input state spaces are also necessary. We especially show that the modified setups to implement the transformations can realize the deterministic multicontrol gates (including the Toffoli gate) operating directly on the products of single-photon qubits.

  15. On the structure of the set of coincidence points

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arutyunov, A V; Gel'man, B D

    2015-03-31

    We consider the set of coincidence points for two maps between metric spaces. Cardinality, metric and topological properties of the coincidence set are studied. We obtain conditions which guarantee that this set (a) consists of at least two points; (b) consists of at least n points; (c) contains a countable subset; (d) is uncountable. The results are applied to study the structure of the double point set and the fixed point set for multivalued contractions. Bibliography: 12 titles.

  16. Resolving the issue of branched Hamiltonian in modified Lanczos-Lovelock gravity

    NASA Astrophysics Data System (ADS)

    Ruz, Soumendranath; Mandal, Ranajit; Debnath, Subhra; Sanyal, Abhik Kumar

    2016-07-01

    The Hamiltonian constraint H_c = NH = 0 defines a diffeomorphic structure on spatial manifolds through the lapse function N in the general theory of relativity. However, it is not manifest in Lanczos-Lovelock gravity, since the expression for the velocity in terms of the momentum is multivalued, and thus the Hamiltonian is a branched function of the momentum. Here we propose an extended theory of Lanczos-Lovelock gravity to construct a unique Hamiltonian in its minisuperspace version, which results in manifest diffeomorphic invariance and canonical quantization.

  17. Untenable nonstationarity: An assessment of the fitness for purpose of trend tests in hydrology

    NASA Astrophysics Data System (ADS)

    Serinaldi, Francesco; Kilsby, Chris G.; Lombardo, Federico

    2018-01-01

    The detection and attribution of long-term patterns in hydrological time series have been important research topics for decades. A significant portion of the literature regards such patterns as 'deterministic components' or 'trends' even though the complexity of hydrological systems does not allow easy deterministic explanations and attributions. Consequently, trend estimation techniques have been developed to make and justify statements about tendencies in the historical data, which are often used to predict future events. Testing trend hypothesis on observed time series is widespread in the hydro-meteorological literature mainly due to the interest in detecting consequences of human activities on the hydrological cycle. This analysis usually relies on the application of some null hypothesis significance tests (NHSTs) for slowly-varying and/or abrupt changes, such as Mann-Kendall, Pettitt, or similar, to summary statistics of hydrological time series (e.g., annual averages, maxima, minima, etc.). However, the reliability of this application has seldom been explored in detail. This paper discusses misuse, misinterpretation, and logical flaws of NHST for trends in the analysis of hydrological data from three different points of view: historic-logical, semantic-epistemological, and practical. Based on a review of NHST rationale, and basic statistical definitions of stationarity, nonstationarity, and ergodicity, we show that even if the empirical estimation of trends in hydrological time series is always feasible from a numerical point of view, it is uninformative and does not allow the inference of nonstationarity without assuming a priori additional information on the underlying stochastic process, according to deductive reasoning. This prevents the use of trend NHST outcomes to support nonstationary frequency analysis and modeling. We also show that the correlation structures characterizing hydrological time series might easily be underestimated, further compromising the attempt to draw conclusions about trends spanning the period of records. Moreover, even though adjusting procedures accounting for correlation have been developed, some of them are insufficient or are applied only to some tests, while some others are theoretically flawed but still widely applied. In particular, using 250 unimpacted stream flow time series across the conterminous United States (CONUS), we show that the test results can dramatically change if the sequences of annual values are reproduced starting from daily stream flow records, whose larger sizes enable a more reliable assessment of the correlation structures.
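    To keep the object of critique concrete, this is what the plain Mann-Kendall test computes (textbook form in Python, assuming no ties and applying no correction for the serial correlation the paper warns about):

    ```python
    import numpy as np
    from scipy.stats import norm

    def mann_kendall(x):
        """Classical Mann-Kendall trend statistic S, its no-ties variance,
        and a two-sided p-value. Positive serial correlation inflates the
        apparent evidence for a trend, which is exactly the misuse
        discussed above."""
        x = np.asarray(x, dtype=float)
        n = len(x)
        s = sum(np.sign(x[j] - x[i])
                for i in range(n - 1) for j in range(i + 1, n))
        var_s = n * (n - 1) * (2 * n + 5) / 18.0
        z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
        return s, z, 2.0 * (1.0 - norm.cdf(abs(z)))
    ```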

  18. A plausible neural circuit for decision making and its formation based on reinforcement learning.

    PubMed

    Wei, Hui; Dai, Dawei; Bu, Yijie

    2017-06-01

    A human's, or a lower insect's, behavior is dominated by its nervous system. Each stable behavior has its own inner steps and control rules, and is regulated by a neural circuit. Understanding how the brain influences perception, thought, and behavior is a central mandate of neuroscience. The phototactic flight of insects is a widely observed deterministic behavior. Since its movement is not stochastic, the behavior should be dominated by a neural circuit. Based on the basic firing characteristics of biological neurons and the constitution of neural circuits, we designed a plausible neural circuit for this phototactic behavior from a logic perspective. The circuit's output layer, which generates a stable spike firing rate to encode flight commands, controls the insect's angular velocity when flying. The firing pattern and connection type of excitatory and inhibitory neurons are considered in this computational model. We simulated the circuit's information processing using a distributed PC array, and used the real-time average firing rate of output neuron clusters to drive a flying behavior simulation. We also explored how a correct neural decision circuit is generated, from a network-flow view, through a bee behavior experiment based on a reward and punishment feedback mechanism. The significance of this study: firstly, we designed a neural circuit to achieve the behavioral logic rules by strictly following the electrophysiological characteristics of biological neurons and anatomical facts. Secondly, our circuit's generality permits the design and implementation of behavioral logic rules based on the most general information processing and activity mode of biological neurons. Thirdly, through computer simulation, we achieved new understanding of the cooperative conditions under which multiple neurons achieve behavioral control. Fourthly, this study aims at understanding the information encoding mechanism and how neural circuits achieve behavior control. Finally, this study also helps establish a transitional bridge between the microscopic activity of the nervous system and macroscopic animal behavior.

  19. Modal and polarization qubits in Ti:LiNbO3 photonic circuits for a universal quantum logic gate.

    PubMed

    Saleh, Mohammed F; Di Giuseppe, Giovanni; Saleh, Bahaa E A; Teich, Malvin Carl

    2010-09-13

    Lithium niobate photonic circuits have the salutary property of permitting the generation, transmission, and processing of photons to be accommodated on a single chip. Compact photonic circuits such as these, with multiple components integrated on a single chip, are crucial for efficiently implementing quantum information processing schemes. We present a set of basic transformations that are useful for manipulating modal qubits in Ti:LiNbO3 photonic quantum circuits. These include the mode analyzer, a device that separates the even and odd components of a state into two separate spatial paths; the mode rotator, which rotates the state by an angle in mode space; and modal Pauli spin operators that effect related operations. We also describe the design of a deterministic, two-qubit, single-photon, CNOT gate, a key element in certain sets of universal quantum logic gates. It is implemented as a Ti:LiNbO3 photonic quantum circuit in which the polarization and mode number of a single photon serve as the control and target qubits, respectively. It is shown that the effects of dispersion in the CNOT circuit can be mitigated by augmenting it with an additional path. The performance of all of these components is confirmed by numerical simulations. The implementation of these transformations relies on selective and controllable power coupling among single- and two-mode waveguides, as well as the polarization sensitivity of the Pockels coefficients in LiNbO3.

  20. Supervisory Control of Discrete Event Systems Modeled by Mealy Automata with Nondeterministic Output Functions

    NASA Astrophysics Data System (ADS)

    Ushio, Toshimitsu; Takai, Shigemasa

    Supervisory control is a general framework for the logical control of discrete event systems. A supervisor assigns a set of control-disabled controllable events based on observed events so that the controlled discrete event system generates specified languages. In conventional supervisory control, it is assumed that observed events are determined by internal events deterministically. However, this assumption does not hold in a discrete event system with sensor errors or in a mobile system, where each observed event depends not only on an internal event but also on the state just before the occurrence of the internal event. In this paper, we model such a discrete event system by a Mealy automaton with a nondeterministic output function. We introduce two kinds of supervisors: one assigns each control action based on a permissive policy and the other based on an anti-permissive one. We show necessary and sufficient conditions for the existence of each supervisor. Moreover, we discuss the relationship between the supervisors in the case where the output function is deterministic.

  1. RNA secondary structure prediction using soft computing.

    PubMed

    Ray, Shubhra Sankar; Pal, Sankar K

    2013-01-01

    Prediction of RNA structure is invaluable in creating new drugs and understanding genetic diseases. Several deterministic algorithms and soft computing-based techniques have been developed for more than a decade to determine the structure from a known RNA sequence. Soft computing gained importance with the need to get approximate solutions for RNA sequences by considering the issues related to kinetic effects, cotranscriptional folding, and estimation of certain energy parameters. A brief description of some of the soft computing-based techniques, developed for RNA secondary structure prediction, is presented along with their relevance. The basic concepts of RNA and its different structural elements like helix, bulge, hairpin loop, internal loop, and multiloop are described. These are followed by different methodologies, employing genetic algorithms, artificial neural networks, and fuzzy logic. The role of various metaheuristics, like simulated annealing, particle swarm optimization, ant colony optimization, and tabu search is also discussed. A relative comparison among different techniques, in predicting 12 known RNA secondary structures, is presented, as an example. Future challenging issues are then mentioned.

  2. Intracellular integration of synthetic nanostructures with viable cells for controlled biochemical manipulation

    NASA Astrophysics Data System (ADS)

    McKnight, Timothy E.; Melechko, Anatoli V.; Griffin, Guy D.; Guillorn, Michael A.; Merkulov, Vladimir I.; Serna, Francisco; Hensley, Dale K.; Doktycz, Mitchel J.; Lowndes, Douglas H.; Simpson, Michael L.

    2003-05-01

    We demonstrate the integration of vertically aligned carbon nanofibre (VACNF) elements with the intracellular domains of viable cells for controlled biochemical manipulation. Deterministically synthesized VACNFs were modified with either adsorbed or covalently-linked plasmid DNA and were subsequently inserted into cells. Post-insertion viability of the cells was demonstrated by continued proliferation of the interfaced cells and long-term (>22 day) expression of the introduced plasmid. Adsorbed plasmids were typically desorbed in the intracellular domain and segregated to progeny cells. Covalently bound plasmids remained tethered to nanofibres and were expressed in interfaced cells but were not partitioned into progeny, and gene expression ceased when the nanofibre was no longer retained. This provides a method for achieving a genetic modification that is non-inheritable and whose extent in time can be directly and precisely controlled. These results demonstrate the potential of VACNF arrays as an intracellular interface for monitoring and controlling subcellular and molecular phenomena within viable cells for applications including biosensors, in vivo diagnostics, and in vivo logic devices.

  3. Scalable quantum computing based on stationary spin qubits in coupled quantum dots inside double-sided optical microcavities

    NASA Astrophysics Data System (ADS)

    Wei, Hai-Rui; Deng, Fu-Guo

    2014-12-01

    Quantum logic gates are the key elements in quantum computing. Here we investigate the possibility of achieving a scalable and compact quantum computing based on stationary electron-spin qubits, by using the giant optical circular birefringence induced by quantum-dot spins in double-sided optical microcavities as a result of cavity quantum electrodynamics. We design the compact quantum circuits for implementing universal and deterministic quantum gates for electron-spin systems, including the two-qubit CNOT gate and the three-qubit Toffoli gate. They are compact and economic, and they do not require additional electron-spin qubits. Moreover, our devices have good scalability and are attractive as they both are based on solid-state quantum systems and the qubits are stationary. They are feasible with the current experimental technology, and both high fidelity and high efficiency can be achieved when the ratio of the side leakage to the cavity decay is low.

  4. Scalable quantum computing based on stationary spin qubits in coupled quantum dots inside double-sided optical microcavities.

    PubMed

    Wei, Hai-Rui; Deng, Fu-Guo

    2014-12-18

    Quantum logic gates are the key elements in quantum computing. Here we investigate the possibility of achieving a scalable and compact quantum computing based on stationary electron-spin qubits, by using the giant optical circular birefringence induced by quantum-dot spins in double-sided optical microcavities as a result of cavity quantum electrodynamics. We design the compact quantum circuits for implementing universal and deterministic quantum gates for electron-spin systems, including the two-qubit CNOT gate and the three-qubit Toffoli gate. They are compact and economic, and they do not require additional electron-spin qubits. Moreover, our devices have good scalability and are attractive as they both are based on solid-state quantum systems and the qubits are stationary. They are feasible with the current experimental technology, and both high fidelity and high efficiency can be achieved when the ratio of the side leakage to the cavity decay is low.

  5. Coherent coupling between a quantum dot and a donor in silicon

    DOE PAGES

    Harvey-Collard, Patrick; Jacobson, N. Tobias; Rudolph, Martin; ...

    2017-10-18

    Individual donors in silicon chips are used as quantum bits with extremely low error rates. However, physical realizations have been limited to one donor because their atomic size causes fabrication challenges. Quantum dot qubits, in contrast, are highly adjustable using electrical gate voltages. This adjustability could be leveraged to deterministically couple donors to quantum dots in arrays of qubits. In this work, we demonstrate the coherent interaction of a 31P donor electron with the electron of a metal-oxide-semiconductor quantum dot. We form a logical qubit encoded in the spin singlet and triplet states of the two-electron system. We show that the donor nuclear spin drives coherent rotations between the electronic qubit states through the contact hyperfine interaction. This provides every key element for compact two-electron spin qubits requiring only a single dot and no additional magnetic field gradients, as well as a means to interact with the nuclear spin qubit.

  6. Chirality of nanophotonic waveguide with embedded quantum emitter for unidirectional spin transfer

    NASA Astrophysics Data System (ADS)

    Coles, R. J.; Price, D. M.; Dixon, J. E.; Royall, B.; Clarke, E.; Kok, P.; Skolnick, M. S.; Fox, A. M.; Makhonin, M. N.

    2016-03-01

    Scalable quantum technologies may be achieved by faithful conversion between matter qubits and photonic qubits in integrated circuit geometries. Within this context, quantum dots possess well-defined spin states (matter qubits), which couple efficiently to photons. By embedding them in nanophotonic waveguides, they provide a promising platform for quantum technology implementations. In this paper, we demonstrate that the naturally occurring electromagnetic field chirality that arises in nanobeam waveguides leads to unidirectional photon emission from quantum dot spin states, with resultant in-plane transfer of matter-qubit information. The chiral behaviour occurs despite the non-chiral geometry and material of the waveguides. Using dot registration techniques, we achieve a quantum emitter deterministically positioned at a chiral point and realize spin-path conversion by design. We further show that the chiral phenomena are much more tolerant to dot position than in standard photonic crystal waveguides, exhibit spin-path readout up to 95±5% and have potential to serve as the basis of spin-logic and network implementations.

  7. Chirality of nanophotonic waveguide with embedded quantum emitter for unidirectional spin transfer

    PubMed Central

    Coles, R. J.; Price, D. M.; Dixon, J. E.; Royall, B.; Clarke, E.; Kok, P.; Skolnick, M. S.; Fox, A. M.; Makhonin, M. N.

    2016-01-01

    Scalable quantum technologies may be achieved by faithful conversion between matter qubits and photonic qubits in integrated circuit geometries. Within this context, quantum dots possess well-defined spin states (matter qubits), which couple efficiently to photons. By embedding them in nanophotonic waveguides, they provide a promising platform for quantum technology implementations. In this paper, we demonstrate that the naturally occurring electromagnetic field chirality that arises in nanobeam waveguides leads to unidirectional photon emission from quantum dot spin states, with resultant in-plane transfer of matter-qubit information. The chiral behaviour occurs despite the non-chiral geometry and material of the waveguides. Using dot registration techniques, we achieve a quantum emitter deterministically positioned at a chiral point and realize spin-path conversion by design. We further show that the chiral phenomena are much more tolerant to dot position than in standard photonic crystal waveguides, exhibit spin-path readout up to 95±5% and have potential to serve as the basis of spin-logic and network implementations. PMID:27029961

  8. Chirality of nanophotonic waveguide with embedded quantum emitter for unidirectional spin transfer.

    PubMed

    Coles, R J; Price, D M; Dixon, J E; Royall, B; Clarke, E; Kok, P; Skolnick, M S; Fox, A M; Makhonin, M N

    2016-03-31

    Scalable quantum technologies may be achieved by faithful conversion between matter qubits and photonic qubits in integrated circuit geometries. Within this context, quantum dots possess well-defined spin states (matter qubits), which couple efficiently to photons. By embedding them in nanophotonic waveguides, they provide a promising platform for quantum technology implementations. In this paper, we demonstrate that the naturally occurring electromagnetic field chirality that arises in nanobeam waveguides leads to unidirectional photon emission from quantum dot spin states, with resultant in-plane transfer of matter-qubit information. The chiral behaviour occurs despite the non-chiral geometry and material of the waveguides. Using dot registration techniques, we achieve a quantum emitter deterministically positioned at a chiral point and realize spin-path conversion by design. We further show that the chiral phenomena are much more tolerant to dot position than in standard photonic crystal waveguides, exhibit spin-path readout up to 95±5% and have potential to serve as the basis of spin-logic and network implementations.

  9. Symbolic inversion of control relationships in model-based expert systems

    NASA Technical Reports Server (NTRS)

    Thomas, Stan

    1988-01-01

    Symbolic inversion is examined from several perspectives. First, a number of symbolic algebra and mathematical tool packages were studied in order to evaluate their capabilities and methods, specifically with respect to symbolic inversion. Second, the KATE system (without hardware interface) was ported to a Zenith Z-248 microcomputer running Golden Common Lisp. A notable feature of the port is that it allows the user to have measurements vary and components fail in a non-deterministic manner, based upon random values drawn from probability distributions. Third, INVERT was studied as currently implemented in KATE, its operation documented, some of its weaknesses identified, and corrections made to it. The corrections and enhancements are primarily in the way that logical conditions involving ANDs, ORs, and inequalities are processed. In addition, the capability to handle equalities was added. Suggestions were also made regarding the handling of ranges in INVERT. Last, other approaches to the inversion process were studied and recommendations were made as to how future versions of KATE should perform symbolic inversion.

  10. Evolving autonomous learning in cognitive networks.

    PubMed

    Sheneman, Leigh; Hintze, Arend

    2017-12-01

    There are two common approaches for optimizing the performance of a machine: genetic algorithms and machine learning. A genetic algorithm is applied over many generations, whereas machine learning works by applying feedback until the system meets a performance threshold. These methods have been previously combined, particularly in artificial neural networks using an external objective feedback mechanism. We adapt this approach to Markov Brains, which are evolvable networks of probabilistic and deterministic logic gates. Prior to this work, Markov Brains could only adapt from one generation to the next, so we introduce feedback gates, which augment their ability to learn during their lifetime. We show that Markov Brains can incorporate these feedback gates in such a way that they do not rely on an external objective feedback signal, but instead can generate internal feedback that is then used to learn. This results in a more biologically accurate model of the evolution of learning, which will enable us to study the interplay between evolution and learning and could be another step towards autonomously learning machines.
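
    The gate mechanism described here is easy to prototype. Below is a minimal Python sketch, assuming a toy reward signal and a 2-input probabilistic gate; all names, parameters, and the reward rule are hypothetical illustrations, not the authors' implementation. The gate's probability table is nudged toward outputs that receive positive feedback.

    ```python
    import random

    class FeedbackGate:
        """Toy probabilistic 2-input logic gate whose probability table is
        nudged by a feedback signal (a stand-in for the internally generated
        feedback described in the abstract)."""

        def __init__(self):
            # p[(a, b)] = probability of emitting 1 for inputs (a, b)
            self.p = {(a, b): 0.5 for a in (0, 1) for b in (0, 1)}

        def fire(self, a, b):
            return 1 if random.random() < self.p[(a, b)] else 0

        def feedback(self, a, b, out, reward, rate=0.1):
            # Shift the table toward the emitted output for positive reward,
            # away from it for negative reward.
            target = out if reward > 0 else 1 - out
            self.p[(a, b)] += rate * abs(reward) * (target - self.p[(a, b)])

    gate = FeedbackGate()
    for _ in range(500):
        a, b = random.randint(0, 1), random.randint(0, 1)
        out = gate.fire(a, b)
        # Toy feedback signal: reward outputs matching XOR.
        gate.feedback(a, b, out, reward=1.0 if out == (a ^ b) else -1.0)
    print(gate.p)  # probabilities drift toward the XOR truth table
    ```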

  11. Atomic Source of Single Photons in the Telecom Band

    NASA Astrophysics Data System (ADS)

    Dibos, A. M.; Raha, M.; Phenicie, C. M.; Thompson, J. D.

    2018-06-01

    Single atoms and atomlike defects in solids are ideal quantum light sources and memories for quantum networks. However, most atomic transitions are in the ultraviolet-visible portion of the electromagnetic spectrum, where propagation losses in optical fibers are prohibitively large. Here, we observe for the first time the emission of single photons from a single Er3+ ion in a solid-state host, whose optical transition at 1.5 μm is in the telecom band, allowing for low-loss propagation in optical fiber. This is enabled by integrating Er3+ ions with silicon nanophotonic structures, which results in an enhancement of the photon emission rate by a factor of more than 650. Dozens of distinct ions can be addressed in a single device, and the splitting of the lines in a magnetic field confirms that the optical transitions are coupled to the electronic spin of the Er3+ ions. These results are a significant step towards long-distance quantum networks and deterministic quantum logic for photons based on a scalable silicon nanophotonics architecture.

  12. Efficient fault diagnosis of helicopter gearboxes

    NASA Technical Reports Server (NTRS)

    Chin, H.; Danai, K.; Lewicki, D. G.

    1993-01-01

    Application of a diagnostic system to a helicopter gearbox is presented. The diagnostic system is a nonparametric pattern classifier that uses a multi-valued influence matrix (MVIM) as its diagnostic model and benefits from a fast learning algorithm that enables it to estimate its diagnostic model from a small number of measurement-fault data. To test this diagnostic system, vibration measurements were collected from a helicopter gearbox test stand during accelerated fatigue tests and at various fault instances. The diagnostic results indicate that the MVIM system can accurately detect and diagnose various gearbox faults so long as they are included in training.

  13. Deterministic chaotic dynamics of Raba River flow (Polish Carpathian Mountains)

    NASA Astrophysics Data System (ADS)

    Kędra, Mariola

    2014-02-01

    Is the underlying dynamics of river flow random or deterministic? If it is deterministic, is it deterministic chaotic? This issue is still controversial. The application of several independent methods, techniques and tools for studying daily river flow data gives consistent, reliable and clear-cut answers to the question. The outcomes point out that the investigated discharge dynamics is not random but deterministic. Moreover, the results completely confirm the nonlinear deterministic chaotic nature of the studied process. The research was conducted on daily discharge data from two selected gauging stations on the Raba River, a mountain river in southern Poland.

  14. Shock formation in the dispersionless Kadomtsev-Petviashvili equation

    NASA Astrophysics Data System (ADS)

    Grava, T.; Klein, C.; Eggers, J.

    2016-04-01

    The dispersionless Kadomtsev-Petviashvili (dKP) equation (u_t + u u_x)_x = u_yy is one of the simplest nonlinear wave equations describing two-dimensional shocks. To solve the dKP equation numerically we use a coordinate transformation inspired by the method of characteristics for the one-dimensional Hopf equation u_t + u u_x = 0. We show numerically that the solution of the transformed equation stays regular for longer times than the solution of the dKP equation. This permits us to extend the dKP solution as the graph of a multivalued function beyond the critical time when the gradients blow up. This overturned solution is multivalued in a lip-shaped region in the (x, y) plane, where the solution of the dKP equation exists in a weak sense only, and a shock front develops. A local expansion reveals the universal scaling structure of the shock, which after a suitable change of coordinates corresponds to a generic cusp catastrophe. We provide a heuristic derivation of the shock front position near the critical point for the solution of the dKP equation, and study the solution of the dKP equation when a small amount of dissipation is added. Using multiple-scale analysis, we show that in the limit of small dissipation and near the critical point of the dKP solution, the solution of the dissipative dKP equation converges to a Pearcey integral. We test and illustrate our results by detailed comparisons with numerical simulations of the regularized equation, the dKP equation, and the asymptotic description given in terms of the Pearcey integral.
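
    For reference, the characteristics construction for the Hopf equation that motivates the coordinate transformation is standard textbook material (not specific to this paper): solutions are constant along straight-line characteristics, and the solution becomes multivalued once those characteristics cross.

    ```latex
    % Characteristics of the 1-D Hopf equation u_t + u u_x = 0 with u(x,0) = u_0(x):
    % the solution is constant along straight lines, so
    \[
      u(x,t) = u_0(\xi), \qquad x = \xi + t\, u_0(\xi).
    \]
    % The map \xi \mapsto x folds over (the solution overturns) when
    \[
      \frac{\partial x}{\partial \xi} = 1 + t\, u_0'(\xi) = 0
      \quad\Longrightarrow\quad
      t_c = \frac{-1}{\min_{\xi} u_0'(\xi)} .
    \]
    ```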

  15. On the formulation of the aerodynamic characteristics in aircraft dynamics

    NASA Technical Reports Server (NTRS)

    Tobak, M.; Schiff, L. B.

    1976-01-01

    The theory of functionals is used to reformulate the notions of aerodynamic indicial functions and superposition. Integral forms for the aerodynamic response to arbitrary motions are derived that are free of dependence on a linearity assumption. Simplifications of the integral forms lead to practicable nonlinear generalizations of the linear superpositions and stability derivative formulations. Applied to arbitrary nonplanar motions, the generalization yields a form for the aerodynamic response that can be compounded of the contributions from a limited number of well-defined characteristic motions, in principle reproducible in the wind tunnel. Further generalizations that would enable the consideration of random fluctuations and multivalued aerodynamic responses are indicated.

  16. A Class of time-fractional hemivariational inequalities with application to frictional contact problem

    NASA Astrophysics Data System (ADS)

    Zeng, Shengda; Migórski, Stanisław

    2018-03-01

    In this paper a class of elliptic hemivariational inequalities involving the time-fractional order integral operator is investigated. Exploiting the Rothe method and using the surjectivity of multivalued pseudomonotone operators, a result on existence of solution to the problem is established. Then, this abstract result is applied to provide a theorem on the weak solvability of a fractional viscoelastic contact problem. The process is quasistatic and the constitutive relation is modeled with the fractional Kelvin-Voigt law. The friction and contact conditions are described by the Clarke generalized gradient of nonconvex and nonsmooth functionals. The variational formulation of this problem leads to a fractional hemivariational inequality.

  17. A class of fractional differential hemivariational inequalities with application to contact problem

    NASA Astrophysics Data System (ADS)

    Zeng, Shengda; Liu, Zhenhai; Migorski, Stanislaw

    2018-04-01

    In this paper, we study a class of generalized differential hemivariational inequalities of parabolic type involving the time fractional order derivative operator in Banach spaces. We use the Rothe method combined with surjectivity of multivalued pseudomonotone operators and properties of the Clarke generalized gradient to establish existence of solution to the abstract inequality. As an illustrative application, a frictional quasistatic contact problem for viscoelastic materials with adhesion is investigated, in which the friction and contact conditions are described by the Clarke generalized gradient of nonconvex and nonsmooth functionals, and the constitutive relation is modeled by the fractional Kelvin-Voigt law.

  18. Relaxation in control systems of subdifferential type

    NASA Astrophysics Data System (ADS)

    Tolstonogov, A. A.

    2006-02-01

    In a separable Hilbert space we consider a control system with evolution operators that are subdifferentials of a proper convex lower semicontinuous function depending on time. The constraint on the control is given by a multivalued function with non-convex values that is lower semicontinuous with respect to the variable states. Along with the original system we consider the system in which the constraint on the control is the upper semicontinuous convex-valued regularization of the original constraint. We study relations between the solution sets of these systems. As an application we consider a control variational inequality. We give an example of a control system of parabolic type with an obstacle.

  19. Experimental testing of the noise-canceling processor.

    PubMed

    Collins, Michael D; Baer, Ralph N; Simpson, Harry J

    2011-09-01

    Signal-processing techniques for localizing an acoustic source buried in noise are tested in a tank experiment. Noise is generated using a discrete source, a bubble generator, and a sprinkler. The experiment has essential elements of a realistic scenario in matched-field processing, including complex source and noise time series in a waveguide with water, sediment, and multipath propagation. The noise-canceling processor is found to outperform the Bartlett processor and provide the correct source range for signal-to-noise ratios below -10 dB. The multivalued Bartlett processor is found to outperform the Bartlett processor but not the noise-canceling processor. © 2011 Acoustical Society of America

  20. Structural Deterministic Safety Factors Selection Criteria and Verification

    NASA Technical Reports Server (NTRS)

    Verderaime, V.

    1992-01-01

    Though current deterministic safety factors are arbitrarily and unaccountably specified, their ratio is rooted in resistive and applied stress probability distributions. This study approached the deterministic method from a probabilistic concept, leading to a more systematic and coherent philosophy and criterion for designing more uniform and reliable high-performance structures. The deterministic method was noted to consist of three safety factors: a standard deviation multiplier of the applied stress distribution; a K-factor for the A- or B-basis material ultimate stress; and the conventional safety factor to ensure that the applied stress does not operate in the inelastic zone of metallic materials. The conventional safety factor is specifically defined as the ratio of ultimate-to-yield stresses. A deterministic safety index of the combined safety factors was derived, from which the corresponding reliability proved that the deterministic method is not reliability sensitive. The bases for selecting safety factors are presented and verification requirements are discussed. The suggested deterministic approach is applicable to all NASA, DOD, and commercial high-performance structures under static stresses.

  1. Probabilistic vs. deterministic fiber tracking and the influence of different seed regions to delineate cerebellar-thalamic fibers in deep brain stimulation.

    PubMed

    Schlaier, Juergen R; Beer, Anton L; Faltermeier, Rupert; Fellner, Claudia; Steib, Kathrin; Lange, Max; Greenlee, Mark W; Brawanski, Alexander T; Anthofer, Judith M

    2017-06-01

    This study compared tractography approaches for identifying cerebellar-thalamic fiber bundles relevant to planning target sites for deep brain stimulation (DBS). In particular, probabilistic and deterministic tracking of the dentate-rubro-thalamic tract (DRTT) and differences between the spatial courses of the DRTT and the cerebello-thalamo-cortical (CTC) tract were compared. Six patients with movement disorders were examined by magnetic resonance imaging (MRI), including two sets of diffusion-weighted images (12 and 64 directions). Probabilistic and deterministic tractography was applied to each diffusion-weighted dataset to delineate the DRTT. Results were compared with regard to sensitivity in revealing the DRTT and additional fiber tracts, and with regard to processing time. Two sets of regions-of-interest (ROIs) guided deterministic tractography of the DRTT or the CTC, respectively. Tract distances to an atlas-based reference target were compared. Probabilistic fiber tracking with 64 orientations detected the DRTT in all twelve hemispheres. Deterministic tracking detected the DRTT in nine (12 directions) and in only two (64 directions) hemispheres. Probabilistic tracking was more sensitive in detecting additional fibers (e.g. ansa lenticularis and medial forebrain bundle) than deterministic tracking. Probabilistic tracking also took substantially longer than deterministic tracking. Deterministic tracking was more sensitive in detecting the CTC than the DRTT. CTC tracts were located adjacent but consistently more posterior to DRTT tracts. These results suggest that probabilistic tracking is more sensitive and robust in detecting the DRTT but harder to implement than deterministic approaches. Although the sensitivity of deterministic tracking is higher for the CTC than the DRTT, targets for DBS based on these tracts likely differ. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  2. Synchronization Analysis of Master-Slave Probabilistic Boolean Networks.

    PubMed

    Lu, Jianquan; Zhong, Jie; Li, Lulu; Ho, Daniel W C; Cao, Jinde

    2015-08-28

    In this paper, we analyze the synchronization problem of master-slave probabilistic Boolean networks (PBNs). The master Boolean network (BN) is a deterministic BN, while the slave BN is determined by a series of possible logical functions, each applied with a certain probability at each discrete time point. We first define the synchronization of master-slave PBNs with probability one, and then investigate synchronization with probability one. By resorting to a new approach called the semi-tensor product (STP), the master-slave PBNs are expressed in equivalent algebraic forms. Based on the algebraic form, some necessary and sufficient criteria are derived to guarantee synchronization with probability one. Further, we study the synchronization of master-slave PBNs in probability. Synchronization in probability implies that, for any initial states, the master BN can be synchronized by the slave BN with a certain probability, while synchronization with probability one implies that the master BN can be synchronized by the slave BN with probability one. Based on the equivalent algebraic form, some efficient conditions are derived to guarantee synchronization in probability. Finally, several numerical examples are presented to show the effectiveness of the main results.
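
    The distinction between synchronization "in probability" and "with probability one" can be illustrated with a toy Monte Carlo experiment. The sketch below simulates a hypothetical one-node master-slave pair; the update rules and probabilities are invented for illustration, and the paper's actual analysis uses the semi-tensor product algebraic form, not simulation.

    ```python
    import random

    # Toy master-slave PBN: master is the deterministic BN x' = NOT x;
    # the slave copies the master's next state with probability 0.8 and
    # otherwise applies a rival rule y' = y.
    def step(x, y):
        x_next = 1 - x
        y_next = x_next if random.random() < 0.8 else y
        return x_next, y_next

    def synchronized_by(x, y, horizon=50):
        """Return True if slave matches master within `horizon` steps."""
        for _ in range(horizon):
            x, y = step(x, y)
            if x == y:
                return True
        return False

    trials = 10_000
    hits = sum(synchronized_by(random.randint(0, 1), random.randint(0, 1))
               for _ in range(trials))
    print(f"estimated synchronization probability: {hits / trials:.3f}")
    ```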

  3. Fingerprint identification: advances since the 2009 National Research Council report

    PubMed Central

    Champod, Christophe

    2015-01-01

    This paper will discuss the major developments in the area of fingerprint identification that followed the publication of the National Research Council (NRC, of the US National Academies of Sciences) report in 2009 entitled: Strengthening Forensic Science in the United States: A Path Forward. The report portrayed an image of a field of expertise used for decades without the necessary scientific research-based underpinning. The advances since the report and the needs in selected areas of fingerprinting will be detailed. These include the measurement of the accuracy, reliability, repeatability and reproducibility of the conclusions offered by fingerprint experts. The paper will also pay attention to the development of statistical models allowing assessment of fingerprint comparisons. As a corollary of these developments, the next challenge is to reconcile a traditional practice dominated by deterministic conclusions with the probabilistic logic of any statistical model. There is a call for greater candour, and fingerprint experts will need to communicate differently on the strengths and limitations of their findings. Their testimony will have to go beyond the blunt assertion of the uniqueness of fingerprints or an opinion delivered ipse dixit. PMID:26101284

  4. Synchronization Analysis of Master-Slave Probabilistic Boolean Networks

    PubMed Central

    Lu, Jianquan; Zhong, Jie; Li, Lulu; Ho, Daniel W. C.; Cao, Jinde

    2015-01-01

    In this paper, we analyze the synchronization problem of master-slave probabilistic Boolean networks (PBNs). The master Boolean network (BN) is a deterministic BN, while the slave BN is determined by a series of possible logical functions, each applied with a certain probability at each discrete time point. We first define the synchronization of master-slave PBNs with probability one, and then investigate synchronization with probability one. By resorting to a new approach called the semi-tensor product (STP), the master-slave PBNs are expressed in equivalent algebraic forms. Based on the algebraic form, some necessary and sufficient criteria are derived to guarantee synchronization with probability one. Further, we study the synchronization of master-slave PBNs in probability. Synchronization in probability implies that, for any initial states, the master BN can be synchronized by the slave BN with a certain probability, while synchronization with probability one implies that the master BN can be synchronized by the slave BN with probability one. Based on the equivalent algebraic form, some efficient conditions are derived to guarantee synchronization in probability. Finally, several numerical examples are presented to show the effectiveness of the main results. PMID:26315380

  5. Constructor theory of information

    PubMed Central

    Deutsch, David; Marletto, Chiara

    2015-01-01

    We propose a theory of information expressed solely in terms of which transformations of physical systems are possible and which are impossible—i.e. in constructor-theoretic terms. It includes conjectured, exact laws of physics expressing the regularities that allow information to be physically instantiated. Although these laws are directly about information, independently of the details of particular physical instantiations, information is not regarded as an a priori mathematical or logical concept, but as something whose nature and properties are determined by the laws of physics alone. This theory solves a problem at the foundations of existing information theory, namely that information and distinguishability are each defined in terms of the other. It also explains the relationship between classical and quantum information, and reveals the single, constructor-theoretic property underlying the most distinctive phenomena associated with the latter, including the lack of in-principle distinguishability of some states, the impossibility of cloning, the existence of pairs of variables that cannot simultaneously have sharp values, the fact that measurement processes can be both deterministic and unpredictable, the irreducible perturbation caused by measurement, and locally inaccessible information (as in entangled systems). PMID:25663803

  6. The Evolution of Software and Its Impact on Complex System Design in Robotic Spacecraft Embedded Systems

    NASA Technical Reports Server (NTRS)

    Butler, Roy

    2013-01-01

    The growth in computer hardware performance, coupled with reduced energy requirements, has led to a rapid expansion of the resources available to software systems, driving them towards greater logical abstraction, flexibility, and complexity. This shift in focus from compacting functionality into a limited field towards developing layered, multi-state architectures in a grand field has both driven and been driven by the history of embedded processor design in the robotic spacecraft industry. The combinatorial growth of interprocess conditions is accompanied by benefits (concurrent development, situational autonomy, and evolution of goals) and drawbacks (late integration, non-deterministic interactions, and multifaceted anomalies) in achieving mission success, as illustrated by the case of the Mars Reconnaissance Orbiter. Approaches to optimizing the benefits while mitigating the drawbacks have taken the form of the formalization of requirements, modular design practices, extensive system simulation, and spacecraft data trend analysis. The growth of hardware capability and software complexity can be expected to continue, with future directions including stackable commodity subsystems, computer-generated algorithms, runtime reconfigurable processors, and greater autonomy.

  7. Spatially and time-resolved magnetization dynamics driven by spin-orbit torques

    NASA Astrophysics Data System (ADS)

    Baumgartner, Manuel; Garello, Kevin; Mendil, Johannes; Avci, Can Onur; Grimaldi, Eva; Murer, Christoph; Feng, Junxiao; Gabureac, Mihai; Stamm, Christian; Acremann, Yves; Finizio, Simone; Wintz, Sebastian; Raabe, Jörg; Gambardella, Pietro

    2017-10-01

    Current-induced spin-orbit torques are one of the most effective ways to manipulate the magnetization in spintronic devices, and hold promise for fast switching applications in non-volatile memory and logic units. Here, we report the direct observation of spin-orbit-torque-driven magnetization dynamics in Pt/Co/AlOx dots during current pulse injection. Time-resolved X-ray images with 25 nm spatial and 100 ps temporal resolution reveal that switching is achieved within the duration of a subnanosecond current pulse by the fast nucleation of an inverted domain at the edge of the dot and propagation of a tilted domain wall across the dot. The nucleation point is deterministic and alternates between the four dot quadrants depending on the sign of the magnetization, current and external field. Our measurements reveal how the magnetic symmetry is broken by the concerted action of the damping-like and field-like spin-orbit torques and the Dzyaloshinskii-Moriya interaction, and show that reproducible switching events can be obtained for over 10¹² reversal cycles.

  8. Fast BPM data distribution for global orbit feedback using commercial gigabit ethernet technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hulsart, R.; Cerniglia, P.; Michnoff, R.

    2011-03-28

    In order to correct beam perturbations in RHIC around 10 Hz, a new fast data distribution network was required to deliver BPM position data at rates several orders of magnitude above the capability of the existing system. The urgency of the project limited the amount of custom hardware that could be developed, which dictated the use of as much commercially available equipment as possible. The selected architecture uses a custom hardware interface to the existing RHIC BPM electronics together with commercially available Gigabit Ethernet switches to distribute position data to devices located around the collider ring. Using the minimum Ethernet packet size and field programmable gate array (FPGA) based state machine logic instead of a software based driver, real-time and deterministic data delivery is possible using Ethernet. The method of adapting this protocol for low latency data delivery, bench testing of Ethernet hardware, and the logic to construct Ethernet packets using FPGA hardware will be discussed. A robust communications system using almost all commercial off-the-shelf equipment was developed in under a year, which enabled retrofitting of the existing RHIC BPM system to provide 10 kHz data delivery for a global orbit feedback scheme using 72 BPMs. Total latencies from data acquisition at the BPMs to delivery at the controller modules, including very long transmission distances, were kept under 100 μs, which introduces very little phase error in correcting the 10 Hz oscillations. Leveraging the speed of Gigabit Ethernet and the wide availability of Ethernet products enabled this solution to be fully implemented in a much shorter time and at lower cost than if a similar network had been developed using a proprietary method.
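
    As a rough illustration of the minimum-packet-size idea, the following Python sketch packs one BPM reading into a payload padded to Ethernet's 46-byte minimum. The field layout (ID, turn counter, two position floats) is hypothetical; the abstract does not specify the actual RHIC frame format.

    ```python
    import struct

    # Hypothetical field layout, invented for illustration. The minimum
    # Ethernet payload is 46 bytes, so a small BPM record is zero-padded
    # up to that size rather than waiting to batch more data.
    def pack_bpm_frame(bpm_id: int, turn: int, x_um: float, y_um: float) -> bytes:
        payload = struct.pack("<HIff", bpm_id, turn, x_um, y_um)  # 14 bytes
        return payload.ljust(46, b"\x00")  # pad to minimum Ethernet payload

    frame = pack_bpm_frame(bpm_id=7, turn=123456, x_um=12.5, y_um=-3.2)
    assert len(frame) == 46
    print(frame.hex())
    ```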

  9. Polynomial-time solution of prime factorization and NP-complete problems with digital memcomputing machines

    NASA Astrophysics Data System (ADS)

    Traversa, Fabio L.; Di Ventra, Massimiliano

    2017-02-01

    We introduce a class of digital machines, which we name Digital Memcomputing Machines (DMMs), able to solve a wide range of problems, including Non-deterministic Polynomial (NP) ones, with polynomial resources (in time, space, and energy). An abstract DMM with this power must satisfy a set of compatible mathematical constraints underlying its practical realization. We prove this by making a connection with dynamical systems theory. This leads us to a set of physical constraints for poly-resource resolvability. Once the mathematical requirements have been assessed, we propose a practical scheme to solve the above class of problems based on the novel concept of self-organizing logic gates and circuits (SOLCs). These are logic gates and circuits able to accept input signals from any terminal, without distinction between conventional input and output terminals. They can solve Boolean problems by self-organizing into their solution. They can be fabricated either with circuit elements with memory (such as memristors) and/or standard MOS technology. Using tools of functional analysis, we prove mathematically the following constraints for poly-resource resolvability: (i) SOLCs possess a global attractor; (ii) their only equilibrium points are the solutions of the problems to solve; (iii) the system converges exponentially fast to the solutions; (iv) the equilibrium convergence rate scales at most polynomially with input size. We finally provide arguments that periodic orbits and strange attractors cannot coexist with equilibria. As examples, we show how to solve prime factorization and the search version of the NP-complete subset-sum problem. Since DMMs map integers into integers, they are robust against noise and hence scalable. We finally discuss the implications of the DMM realization through SOLCs for the NP = P question related to constraints of poly-resource resolvability.
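
    For readers unfamiliar with the benchmark, the search version of subset-sum asks for a subset of given integers summing to a target. The brute-force baseline below only defines the problem; it is emphatically not the memcomputing approach, which replaces this exponential search with the dynamics of self-organizing logic circuits.

    ```python
    from itertools import combinations

    # Conventional exponential-time baseline for the subset-sum search
    # problem, shown only to state the problem the SOLCs target.
    def subset_sum(nums, target):
        for r in range(1, len(nums) + 1):
            for combo in combinations(nums, r):
                if sum(combo) == target:
                    return combo
        return None

    print(subset_sum([3, 34, 4, 12, 5, 2], 9))  # e.g. (4, 5)
    ```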

  10. Deterministic quantum dense coding networks

    NASA Astrophysics Data System (ADS)

    Roy, Saptarshi; Chanda, Titas; Das, Tamoghna; Sen(De), Aditi; Sen, Ujjwal

    2018-07-01

    We consider the scenario of deterministic classical information transmission between multiple senders and a single receiver, when they a priori share a multipartite quantum state - an attempt towards building a deterministic dense coding network. Specifically, we prove that in the case of two or three senders and a single receiver, generalized Greenberger-Horne-Zeilinger (gGHZ) states are not beneficial for sending classical information deterministically beyond the classical limit, except when the shared state is the GHZ state itself. On the other hand, three- and four-qubit generalized W (gW) states with specific parameters as well as the four-qubit Dicke states can provide a quantum advantage of sending the information in deterministic dense coding. Interestingly, however, numerical simulations in the three-qubit scenario reveal that the percentage of deterministically dense-codeable states from the GHZ class is higher than that from the W class.

  11. Induced seismicity risk assessment for the 2006 Basel, Switzerland, Enhanced Geothermal System (EGS) project: Role of parameter uncertainty

    NASA Astrophysics Data System (ADS)

    Mignan, Arnaud; Landtwing, Delano; Mena, Banu; Wiemer, Stefan

    2013-04-01

    A project to exploit the geothermal potential of the crystalline rocks below the city of Basel, Switzerland, was abandoned in recent years due to unacceptable risk associated with increased seismic activity during and following hydraulic stimulation. The largest induced earthquake (Mw = 3.2, 8 December 2006) was widely felt by the local population and provoked slight non-structural damage to buildings. Here we present a probabilistic risk assessment analysis for the 2006 Basel EGS project, including uncertainty linked to the following parameters: induced seismicity forecast model, maximum magnitude, intensity prediction equation, site amplification or not, vulnerability index and cost function. Uncertainty is implemented using a logic tree composed of a total of 324 branches. Exposure is defined from the Basel area building stock of Baisch et al. (2009) (SERIANEX study). We first generate deterministic loss curves, defined as the insured value loss (IVL) as a function of earthquake magnitude. We calibrate the vulnerability curves for low EMS-98 intensities (using the input parameters fixed in the SERIANEX study) such that we match the real loss value, which has been estimated at 3 million CHF (lower than the paid value) for the Mw = 3.2 event. Coupling the deterministic loss curves with seismic hazard curves using the short-term earthquake risk (STEER) method, we obtain site-specific probabilistic loss curves (PLC, i.e., probability of exceeding a given IVL) for the 79 settlements considered. We then integrate over the different PLCs to calculate the most probable IVL. Based on the proposed logic tree, we find considerable variations in the most probable IVL, with lower values for the 6-day injection period than for the first 6 days of the post-injection period. This difference is due to a b-value significantly lower in the second period than in the first one, yielding a higher likelihood of larger earthquakes in the post-injection phase. Based on tornado diagrams, we show that the variability in the most probable IVL is mostly due to the choice of the vulnerability index, followed by the choice of including or not site amplification. The choice of the cost function comes in third place. Based on these results, we finally provide guidelines for decision-making. To the best of our knowledge, this study is the first to consider uncertainties at the hazard and risk level in a systematic way in the scope of induced seismicity regimes. The proposed method is transferable to other EGS projects as well as to earthquake sequences triggered by wastewater disposal, carbon capture and sequestration.
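
    The logic-tree bookkeeping behind such an analysis is straightforward to reproduce. The sketch below enumerates branch combinations and their weights with itertools.product; the parameters and weights are purely illustrative stand-ins for the study's uncertain inputs and 324 branches.

    ```python
    from itertools import product

    # Hypothetical logic tree in the spirit of the study: each uncertain
    # parameter gets alternative branches with weights (values invented).
    tree = {
        "max_magnitude": [(3.7, 0.5), (5.0, 0.5)],
        "vulnerability": [("low", 0.3), ("mid", 0.4), ("high", 0.3)],
        "amplification": [(False, 0.5), (True, 0.5)],
    }

    branches = []
    for combo in product(*tree.values()):
        values = [v for v, _ in combo]
        weight = 1.0
        for _, w in combo:
            weight *= w
        branches.append((dict(zip(tree, values)), weight))

    # Branch weights multiply along the tree and sum to one overall.
    print(len(branches), "branches; total weight =",
          round(sum(w for _, w in branches), 6))
    ```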

  12. Saturation: An efficient iteration strategy for symbolic state-space generation

    NASA Technical Reports Server (NTRS)

    Ciardo, Gianfranco; Luettgen, Gerald; Siminiceanu, Radu; Bushnell, Dennis M. (Technical Monitor)

    2001-01-01

    This paper presents a novel algorithm for generating state spaces of asynchronous systems using Multi-valued Decision Diagrams. In contrast to related work, the next-state function of a system is not encoded as a single Boolean function, but as cross-products of integer functions. This permits the application of various iteration strategies to build a system's state space. In particular, this paper introduces a new elegant strategy, called saturation, and implements it in the tool SMART. On top of usually performing several orders of magnitude faster than existing BDD-based state-space generators, the algorithm's required peak memory is often close to the final memory needed for storing the overall state spaces.

  13. Phase portraits analysis of a barothropic system: The initial value problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuetche, Victor Kamgang, E-mail: vkuetche@yahoo.fr; Department of Physics, Faculty of Science, University of Yaounde I, P.O. Box 812, Yaounde; The Abdus Salam International Center for Theoretical Physics, Strada Costiera 11, 34014 Trieste

    2014-05-15

    In this paper, we investigate the phase portrait features of a barothropic relaxing medium under pressure perturbations. As a starting point, we show within a third order of accuracy that the previous system is modeled by a “dissipative” cubic nonlinear evolution equation. Paying particular attention to high-frequency perturbations of the system, we solve the initial value problem of the system both analytically and numerically while unveiling the existence of localized multivalued waveguide channels. Accordingly, we find that the “dissipative” term with a “dissipative” parameter less than some limit value does not destroy the ambiguous solutions. We address some physical implications of the results obtained previously.

  14. An alternative extragradient projection method for quasi-equilibrium problems.

    PubMed

    Chen, Haibin; Wang, Yiju; Xu, Yi

    2018-01-01

    For the quasi-equilibrium problem, where the players' costs and their strategies both depend on the rivals' decisions, an alternative extragradient projection method is designed. Different from the classical extragradient projection method, whose generated sequence has the contraction property with respect to the solution set, the newly designed method possesses an expansion property with respect to a given initial point. The global convergence of the method is established under the assumptions of pseudomonotonicity of the equilibrium function and continuity of the underlying multi-valued mapping. Furthermore, we show that the generated sequence converges to the point in the solution set nearest to the initial point. Numerical experiments show the efficiency of the method.
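
    For contrast with the "alternative" method studied here, the classical extragradient projection iteration it modifies takes a predictor and a corrector projection step per iteration. Below is a minimal sketch on a box-constrained monotone problem; the test problem and step size are invented for illustration and this is the textbook baseline, not the paper's scheme.

    ```python
    import numpy as np

    # Classical extragradient iteration for a variational inequality
    # <F(x*), x - x*> >= 0 over a feasible set C (here a box [0, 1]^n).
    def project_box(x, lo=0.0, hi=1.0):
        return np.clip(x, lo, hi)

    def extragradient(F, x0, tau=0.1, iters=200):
        x = x0.copy()
        for _ in range(iters):
            y = project_box(x - tau * F(x))   # predictor step
            x = project_box(x - tau * F(y))   # corrector step
        return x

    A = np.array([[2.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    F = lambda x: A @ x - b                   # monotone, since A is SPD
    print(extragradient(F, np.zeros(2)))      # converges to [0.2, 0.6]
    ```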

  15. The relationship between stochastic and deterministic quasi-steady state approximations.

    PubMed

    Kim, Jae Kyoung; Josić, Krešimir; Bennett, Matthew R

    2015-11-23

    The quasi steady-state approximation (QSSA) is frequently used to reduce deterministic models of biochemical networks. The resulting equations provide a simplified description of the network in terms of non-elementary reaction functions (e.g. Hill functions). Such deterministic reductions are frequently a basis for heuristic stochastic models in which non-elementary reaction functions are used to define reaction propensities. Despite their popularity, it remains unclear when such stochastic reductions are valid. It is frequently assumed that the stochastic reduction can be trusted whenever its deterministic counterpart is accurate. However, a number of recent examples show that this is not necessarily the case. Here we explain the origin of these discrepancies, and demonstrate a clear relationship between the accuracy of the deterministic and the stochastic QSSA for examples widely used in biological systems. With an analysis of a two-state promoter model, and numerical simulations for a variety of other models, we find that the stochastic QSSA is accurate whenever its deterministic counterpart provides an accurate approximation over a range of initial conditions which cover the likely fluctuations from the quasi steady state (QSS). We conjecture that this relationship provides a simple and computationally inexpensive way to test the accuracy of reduced stochastic models using deterministic simulations. The stochastic QSSA is one of the most popular multi-scale stochastic simulation methods. While the use of the QSSA and the resulting non-elementary functions has been justified in the deterministic case, it is not clear when their stochastic counterparts are accurate. In this study, we show how the accuracy of the stochastic QSSA can be tested using its deterministic counterpart, providing a concrete method to test when non-elementary rate functions can be used in stochastic simulations.
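
    The heuristic stochastic reduction under discussion amounts to plugging a deterministic QSSA rate law, such as a Hill function, directly into a Gillespie simulation as a propensity. A minimal sketch for a birth-death gene-expression model follows; all parameter values are illustrative, not taken from the paper.

    ```python
    import random

    # Birth-death model where the non-elementary Hill function obtained
    # from a deterministic QSSA is reused directly as a propensity.
    def gillespie_hill(x0=0, t_end=100.0, k=10.0, K=20.0, n=2, gamma=0.1):
        t, x = 0.0, x0
        while t < t_end:
            a_birth = k * K**n / (K**n + x**n)   # reduced Hill propensity
            a_death = gamma * x
            a_total = a_birth + a_death
            t += random.expovariate(a_total)     # exponential waiting time
            if random.random() < a_birth / a_total:
                x += 1
            else:
                x -= 1
        return x

    samples = [gillespie_hill() for _ in range(200)]
    # Compare the stochastic mean against the deterministic fixed point.
    print(sum(samples) / len(samples))
    ```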

  16. Deterministic Walks with Choice

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beeler, Katy E.; Berenhaut, Kenneth S.; Cooper, Joshua N.

    2014-01-10

    This paper studies deterministic movement over toroidal grids, integrating local information, bounded memory and choice at individual nodes. The research is motivated by recent work on deterministic random walks, and applications in multi-agent systems. Several results regarding passing tokens through toroidal grids are discussed, as well as some open questions.

  17. Development of a software tool using deterministic logic for the optimization of cochlear implant processor programming.

    PubMed

    Govaerts, Paul J; Vaerenberg, Bart; De Ceulaer, Geert; Daemers, Kristin; De Beukelaer, Carina; Schauwers, Karen

    2010-08-01

    An intelligent agent, Fitting to Outcomes eXpert, was developed to optimize and automate cochlear implant (CI) programming. The current article describes the rationale, development, and features of this tool. Cochlear implant fitting is a time-consuming procedure to define the values of a subset of the available electric parameters based primarily on behavioral responses. It is comfort-driven, with high intraindividual and interindividual variability with respect both to the patient and to the clinician. Its validity in terms of process control can be questioned. Good clinical practice would require an outcome-driven approach. An intelligent agent may help solve the complexity of addressing more electric parameters based on a range of outcome measures. A software application was developed that consists of deterministic rules that analyze the map settings in the processor together with psychoacoustic test results (audiogram, A§E phoneme discrimination, A§E loudness scaling, speech audiogram) obtained with that map. The rules were based on the daily clinical practice and the expertise of the CI programmers. The data transfer to and from this agent is either manual or through seamless digital communication with the CI fitting database and the psychoacoustic test suite. It recommends and executes modifications to the map settings to improve the outcome. Fitting to Outcomes eXpert is an operational intelligent agent, the principles of which are described. Its development and modes of operation are outlined, and a case example is given. Fitting to Outcomes eXpert has been in use for more than a year now and seems to be capable of improving the measured outcome. It is argued that this novel tool allows a systematic approach focusing on outcome, reducing the fitting time, and improving the quality of fitting. It introduces principles of artificial intelligence into the process of CI fitting.

  18. Nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates

    DOEpatents

    Melechko, Anatoli V [Oak Ridge, TN]; McKnight, Timothy E.; Guillorn, Michael A.; Ilic, Bojan [Ithaca, NY]; Merkulov, Vladimir I [Knoxville, TN]; Doktycz, Mitchel J [Knoxville, TN]; Lowndes, Douglas H [Knoxville, TN]; Simpson, Michael L [Knoxville, TN]

    2011-05-17

    Methods, manufactures, machines and compositions are described for nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates. A method includes depositing a catalyst particle on a surface of a substrate to define a deterministically located position; growing an aligned elongated nanostructure on the substrate, an end of the aligned elongated nanostructure coupled to the substrate at the deterministically located position; coating the aligned elongated nanostructure with a conduit material; removing a portion of the conduit material to expose the catalyst particle; removing the catalyst particle; and removing the elongated nanostructure to define a nanoconduit.

  19. Human brain detects short-time nonlinear predictability in the temporal fine structure of deterministic chaotic sounds

    NASA Astrophysics Data System (ADS)

    Itoh, Kosuke; Nakada, Tsutomu

    2013-04-01

    Deterministic nonlinear dynamical processes are ubiquitous in nature. Chaotic sounds generated by such processes may appear irregular and random in waveform, but these sounds are mathematically distinguished from random stochastic sounds in that they contain deterministic short-time predictability in their temporal fine structures. We show that the human brain distinguishes deterministic chaotic sounds from spectrally matched stochastic sounds in neural processing and perception. Deterministic chaotic sounds, even without being attended to, elicited greater cerebral cortical responses than the surrogate control sounds after about 150 ms in latency after sound onset. Listeners also clearly discriminated these sounds in perception. The results support the hypothesis that the human auditory system is sensitive to the subtle short-time predictability embedded in the temporal fine structure of sounds.

  20. A deterministic particle method for one-dimensional reaction-diffusion equations

    NASA Technical Reports Server (NTRS)

    Mascagni, Michael

    1995-01-01

    We derive a deterministic particle method for the solution of nonlinear reaction-diffusion equations in one spatial dimension. This deterministic method is an analog of a Monte Carlo method for the solution of these problems that has been previously investigated by the author. The deterministic method leads to the consideration of a system of ordinary differential equations for the positions of suitably defined particles. We then consider explicit and implicit time-stepping methods for this system of ordinary differential equations, and we study Picard and Newton iterations for the solution of the implicit system. Next we solve this system numerically and study the discretization error both analytically and numerically. Numerical computation shows that this deterministic method is automatically adaptive to large gradients in the solution.
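
    The time-stepping discussion can be made concrete for a generic ODE system standing in for the particle-position equations. The sketch below contrasts explicit Euler with an implicit Euler update solved by Picard (fixed-point) iteration; the Newton variant and the actual particle dynamics are omitted, and the test problem is invented for illustration.

    ```python
    import numpy as np

    # Generic ODE system x' = f(x), a stand-in for the particle positions.
    def explicit_euler(f, x, dt):
        return x + dt * f(x)

    def implicit_euler_picard(f, x, dt, sweeps=50):
        # Picard iteration for the implicit update y = x + dt * f(y);
        # converges when dt times the Lipschitz constant of f is < 1.
        y = x.copy()
        for _ in range(sweeps):
            y = x + dt * f(y)
        return y

    f = lambda x: -10.0 * x           # stiff linear test problem
    x = np.array([1.0])
    print(explicit_euler(f, x, 0.05), implicit_euler_picard(f, x, 0.05))
    ```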

  1. System and method for constructing filters for detecting signals whose frequency content varies with time

    DOEpatents

    Qian, S.; Dunham, M.E.

    1996-11-12

    A system and method are disclosed for constructing a bank of filters which detect the presence of signals whose frequency content varies with time. The present invention includes a novel system and method for developing one or more time templates designed to match the received signals of interest, and the bank of matched filters uses the one or more time templates to detect the received signals. Each matched filter compares the received signal x(t) with a respective, unique time template that has been designed to approximate a form of the signals of interest. The robust time domain template is assumed to be of the form w(t) = A(t)cos(2πφ(t)), and the present invention uses the trajectory of a joint time-frequency representation of x(t) as an approximation of the instantaneous frequency function φ′(t). First, numerous data samples of the received signal x(t) are collected. A joint time-frequency representation is then applied to represent the signal, preferably using the time-frequency distribution series. The joint time-frequency transformation represents the analyzed signal energy at time t and frequency f, P(t,f), which is a three-dimensional plot of time vs. frequency vs. signal energy. Then P(t,f) is reduced to a multivalued function f(t), a two-dimensional plot of time vs. frequency, using a thresholding process. Curve fitting steps are then performed on the time/frequency plot, preferably using Levenberg-Marquardt curve fitting techniques, to derive a general instantaneous frequency function φ′(t) which best fits the multivalued function f(t). Integrating φ′(t) along t yields φ(t), which is then inserted into the form of the time template equation. A suitable amplitude A(t) is also preferably determined. Once the time template has been determined, one or more filters are developed which each use a version or form of the time template. 7 figs.
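
    The template construction is easy to mimic for the special case of a linear chirp, where φ′(t) = f₀ + kt integrates to φ(t) = f₀t + kt²/2. The Python sketch below builds such a template with constant amplitude and recovers a known delay by correlation; it stands in for, and greatly simplifies, the time-frequency estimation pipeline described above.

    ```python
    import numpy as np

    # Linear-chirp time template: phi'(t) = f0 + k*t, so
    # phi(t) = f0*t + k*t**2/2, and w(t) = A(t) cos(2*pi*phi(t)) with A = 1.
    fs, T = 1000.0, 1.0
    t = np.arange(0, T, 1 / fs)
    f0, k = 50.0, 100.0
    phi = f0 * t + 0.5 * k * t**2
    template = np.cos(2 * np.pi * phi)

    # Received signal: the template delayed by 200 samples, buried in noise.
    rng = np.random.default_rng(0)
    delayed = np.concatenate([np.zeros(200), template])
    received = delayed + rng.normal(0.0, 1.0, delayed.size)

    # Matched filtering by cross-correlation; the peak marks the delay.
    score = np.correlate(received, template, mode="valid")
    print("estimated delay:", score.argmax(), "samples")  # ~200
    ```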

  2. Deterministic and Stochastic Analysis of a Prey-Dependent Predator-Prey System

    ERIC Educational Resources Information Center

    Maiti, Alakes; Samanta, G. P.

    2005-01-01

    This paper reports on studies of the deterministic and stochastic behaviours of a predator-prey system with prey-dependent response function. The first part of the paper deals with the deterministic analysis of uniform boundedness, permanence, stability and bifurcation. In the second part the reproductive and mortality factors of the prey and…

  3. ShinyGPAS: interactive genomic prediction accuracy simulator based on deterministic formulas.

    PubMed

    Morota, Gota

    2017-12-20

    Deterministic formulas for the accuracy of genomic predictions highlight the relationships among prediction accuracy and potential factors influencing prediction accuracy, prior to performing computationally intensive cross-validation. Visualizing such deterministic formulas in an interactive manner may lead to a better understanding of how genetic factors control prediction accuracy. The software to simulate deterministic formulas for genomic prediction accuracy was implemented in R and encapsulated as a web-based Shiny application. The Shiny genomic prediction accuracy simulator (ShinyGPAS) simulates various deterministic formulas and delivers dynamic scatter plots of prediction accuracy versus genetic factors impacting prediction accuracy, while requiring only mouse navigation in a web browser. ShinyGPAS is available at: https://chikudaisei.shinyapps.io/shinygpas/. ShinyGPAS is a Shiny-based interactive genomic prediction accuracy simulator using deterministic formulas. It can be used for interactively exploring potential factors that influence prediction accuracy in genome-enabled prediction, simulating achievable prediction accuracy prior to genotyping individuals, or supporting in-class teaching. ShinyGPAS is open source software and is hosted online as a freely available web-based resource with an intuitive graphical user interface.
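
    As an example of the kind of deterministic formula such a simulator visualizes, a widely used expected-accuracy expression (of the Daetwyler et al. type) relates accuracy to training size N, heritability h², and the number of independent chromosome segments Me. The sketch below uses illustrative values and is not ShinyGPAS code.

    ```python
    import math

    # Daetwyler-type deterministic expected accuracy:
    #   r = sqrt(N * h2 / (N * h2 + Me))
    def expected_accuracy(n_train: int, h2: float, me: float) -> float:
        return math.sqrt(n_train * h2 / (n_train * h2 + me))

    # Accuracy rises with training population size, all else fixed.
    for n in (1000, 5000, 20000):
        print(n, round(expected_accuracy(n, h2=0.5, me=3000.0), 3))
    ```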

  4. Well-posedness for a class of doubly nonlinear stochastic PDEs of divergence type

    NASA Astrophysics Data System (ADS)

    Scarpa, Luca

    2017-08-01

    We prove well-posedness for doubly nonlinear parabolic stochastic partial differential equations of the form dX_t − div γ(∇X_t) dt + β(X_t) dt ∋ B(t, X_t) dW_t, where γ and β are the two nonlinearities, assumed to be multivalued maximal monotone operators everywhere defined on R^d and R respectively, and W is a cylindrical Wiener process. Using variational techniques, suitable uniform estimates (both pathwise and in expectation) and some compactness results, well-posedness is proved under the classical Leray-Lions conditions on γ and with no restrictive smoothness or growth assumptions on β. The operator B is assumed to be Hilbert-Schmidt and to satisfy some classical Lipschitz conditions in the second variable.

  5. Full thermomechanical coupling in modelling of micropolar thermoelasticity

    NASA Astrophysics Data System (ADS)

    Murashkin, E. V.; Radayev, Y. N.

    2018-04-01

    The present paper is devoted to plane harmonic waves of displacements and microrotations propagating in fully coupled thermoelastic continua. The analysis is carried out in the framework of the linear conventional thermoelastic micropolar continuum model. The reduced energy balance equation and the special form of the Helmholtz free energy are discussed. The constitutive constants providing full coupling of the equations of motion and heat conduction are considered. The dispersion equation is derived and analysed as a product of bi-cubic and bi-quadratic polynomials. The equations are analyzed with the computer algebra system Mathematica. Algebraic forms expressed by complex multivalued square and cubic radicals are obtained for the wavenumbers of transverse and longitudinal waves. The exact forms of the wavenumbers of plane harmonic coupled thermoelastic waves are computed.

  6. Defect-phase-dynamics approach to statistical domain-growth problem of clock models

    NASA Technical Reports Server (NTRS)

    Kawasaki, K.

    1985-01-01

    The growth of statistical domains in quenched Ising-like p-state clock models with p = 3 or more is investigated theoretically, reformulating the analysis of Ohta et al. (1982) in terms of a phase variable and studying the dynamics of defects introduced into the phase field when the phase variable becomes multivalued. The resulting defect/phase domain-growth equation is applied to the interpretation of Monte Carlo simulations in two dimensions (Kaski and Gunton, 1983; Grest and Srolovitz, 1984), and problems encountered in the analysis of related Potts models are discussed. In the two-dimensional case, the problem is essentially that of a purely dissipative Coulomb gas, with a √t growth law complicated by vertex-pinning effects at small t.

  7. Scanner imaging systems, aircraft

    NASA Technical Reports Server (NTRS)

    Ungar, S. G.

    1982-01-01

    The causes and effects of distortion in aircraft scanner data are reviewed, and an approach to reducing distortions by modelling the effect of aircraft motion on the scanner scene is discussed. With the advent of advanced satellite-borne scanner systems, the geometric and radiometric correction of aircraft scanner data has become increasingly important. Corrections are needed to reliably simulate observations obtained by such systems for purposes of evaluation. It is found that if sufficient navigational information is available, aircraft scanner coordinates may be related very precisely to planimetric ground coordinates. However, the potential for a multivalued remapping transformation (i.e., scan lines crossing each other) adds an inherent uncertainty to any radiometric resampling scheme that is dependent on the precise geometry of the scan and ground pattern.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Y M; Bush, K; Han, B

    Purpose: Accurate and fast dose calculation is a prerequisite of precision radiation therapy in modern photon and particle therapy. While Monte Carlo (MC) dose calculation provides high dosimetric accuracy, the drastically increased computational time hinders its routine use. Deterministic dose calculation methods are fast, but problematic in the presence of tissue density inhomogeneity. We leverage the useful features of deterministic methods and MC to develop a hybrid dose calculation platform with autonomous utilization of MC and deterministic calculation depending on the local geometry, for optimal accuracy and speed. Methods: Our platform utilizes a Geant4 based “localized Monte Carlo” (LMC) method that isolates MC dose calculations only to volumes that have potential for dosimetric inaccuracy. In our approach, additional structures are created encompassing heterogeneous volumes. Deterministic methods calculate dose and energy fluence up to the volume surfaces, where the energy fluence distribution is sampled into discrete histories and transported using MC. Histories exiting the volume are converted back into energy fluence, and transported deterministically. By matching boundary conditions at both interfaces, the deterministic dose calculation accounts for dose perturbations “downstream” of localized heterogeneities. Hybrid dose calculation was performed for water and anthropomorphic phantoms. Results: We achieved <1% agreement between deterministic and MC calculations in the water benchmark for photon and proton beams, and dose differences of 2%–15% could be observed in heterogeneous phantoms. The saving in computational time (a factor ∼4–7 compared to a full Monte Carlo dose calculation) was found to be approximately proportional to the volume of the heterogeneous region. Conclusion: Our hybrid dose calculation approach takes advantage of the computational efficiency of deterministic methods and the accuracy of MC, providing a practical tool for high performance dose calculation in modern RT. The approach is generalizable to all modalities where heterogeneities play a large role, notably particle therapy.

  9. Gödel, Tarski, Turing and the Conundrum of Free Will

    NASA Astrophysics Data System (ADS)

    Nayakar, Chetan S. Mandayam; Srikanth, R.

    2014-07-01

    The problem of defining and locating free will (FW) in physics is studied. On the basis of logical paradoxes, we argue that FW has a metatheoretic character, like the concept of truth in Tarski's undefinability theorem. Free will exists relative to a base theory if there is freedom to deviate from the deterministic or indeterministic dynamics in the theory, with the deviations caused by parameters (representing will) in the meta-theory. By contrast, determinism and indeterminism do not require meta-theoretic considerations in their formalization, making FW a fundamentally new causal primitive. FW exists relative to the meta-theory if there is freedom for deviation, due to higher-order causes. Absolute free will, which corresponds to our intuitive introspective notion of free will, exists if this meta-theoretic hierarchy is infinite. We argue that this hierarchy corresponds to higher levels of uncomputability. In other words, at any finitely high order in the hierarchy, there are uncomputable deviations from the law at that order. Applied to the human condition, the hierarchy corresponds to deeper levels of the subconscious or unconscious mind. Possible ramifications of our model for physics, neuroscience and artificial intelligence (AI) are briefly considered.

  10. Epidemiological causality.

    PubMed

    Morabia, Alfredo

    2005-01-01

    Epidemiological methods, which combine population thinking and group comparisons, can primarily identify causes of disease in populations. There is therefore a tension between our intuitive notion of a cause, which we want to be deterministic and invariant at the individual level, and the epidemiological notion of causes, which are invariant only at the population level. Epidemiologists have heretofore given a pragmatic solution to this tension. Causal inference in epidemiology consists in checking the logical coherence of a causality statement and determining whether what has been found grossly contradicts what we think we already know: how strong is the association? Is there a dose-response relationship? Does the cause precede the effect? Is the effect biologically plausible? Etc. This approach to causal inference can be traced back to the English philosophers David Hume and John Stuart Mill. On the other hand, the mode of establishing causality devised by Jakob Henle and Robert Koch, which has been fruitful in bacteriology, requires that in every instance the effect invariably follows the cause (e.g., inoculation of the Koch bacillus and tuberculosis). This is incompatible with epidemiological causality, which has to deal with probabilistic effects (e.g., smoking and lung cancer), and is therefore invariant only for the population.

  11. Novel physical constraints on implementation of computational processes

    NASA Astrophysics Data System (ADS)

    Wolpert, David; Kolchinsky, Artemy

    Non-equilibrium statistical physics permits us to analyze computational processes, i.e., ways to drive a physical system such that its coarse-grained dynamics implements some desired map. It is now known how to implement any such desired computation without dissipating work, and what the minimal (dissipationless) work is that such a computation will require (the so-called 'generalized Landauer bound'). We consider how these analyses change if we impose realistic constraints on the computational process. First, we analyze how many degrees of freedom of the system must be controlled, in addition to the ones specifying the information-bearing degrees of freedom, in order to avoid dissipating work during a given computation, when local detailed balance holds. We analyze this issue for deterministic computations, deriving a state-space vs. speed trade-off, and use our results to motivate a measure of the complexity of a computation. Second, we consider computations that are implemented with logic circuits, in which only a small number of degrees of freedom are coupled at a time. We show that the way a computation is implemented using circuits affects its minimal work requirements, and relate these minimal work requirements to information-theoretic measures of complexity.
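
    For orientation, the bound referenced here takes the following textbook form (a standard result, not a formula quoted from this abstract): mapping an input distribution p_in to an output distribution p_out requires dissipated work

        W_{\mathrm{diss}} \;\ge\; k_B T \left[ S(p_{\mathrm{in}}) - S(p_{\mathrm{out}}) \right], \qquad S(p) = -\sum_i p_i \ln p_i,

    which reduces to the familiar W >= k_B T ln 2 for erasure of a uniformly random bit.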

  12. Fuzzy adaptive interacting multiple model nonlinear filter for integrated navigation sensor fusion.

    PubMed

    Tseng, Chien-Hao; Chang, Chih-Wen; Jwo, Dah-Jing

    2011-01-01

    In this paper, the application of the fuzzy interacting multiple model unscented Kalman filter (FUZZY-IMMUKF) approach to integrated navigation processing for the maneuvering vehicle is presented. The unscented Kalman filter (UKF) employs a set of sigma points through deterministic sampling, such that a linearization process is not necessary, and therefore the errors caused by linearization as in the traditional extended Kalman filter (EKF) can be avoided. Nonlinear filters naturally suffer, to some extent, from the same problem as the EKF: uncertainty in the process noise and measurement noise will degrade performance. As a structural adaptation (model switching) mechanism, the interacting multiple model (IMM), which describes a set of switching models, can be utilized for determining the adequate value of process noise covariance. The fuzzy logic adaptive system (FLAS) is employed to determine the lower and upper bounds of the system noise through the fuzzy inference system (FIS). The resulting sensor fusion strategy can efficiently deal with the nonlinear problem for the vehicle navigation. The proposed FUZZY-IMMUKF algorithm shows remarkable improvement in the navigation estimation accuracy as compared to the relatively conventional approaches such as the UKF and IMMUKF.
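
    The sigma-point construction at the heart of the UKF can be sketched in a few lines (a generic textbook unscented transform with illustrative tuning constants, not the paper's FUZZY-IMMUKF implementation):

        import numpy as np

        def sigma_points(x, P, alpha=1.0, beta=2.0, kappa=1.0):
            """Unscented-transform sigma points in a generic textbook form
            (illustrative tuning parameters, not the paper's settings)."""
            n = x.size
            lam = alpha**2 * (n + kappa) - n
            S = np.linalg.cholesky((n + lam) * P)
            pts = np.vstack([x, x + S.T, x - S.T])          # 2n + 1 points
            wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
            wc = wm.copy()
            wm[0] = lam / (n + lam)
            wc[0] = wm[0] + (1.0 - alpha**2 + beta)
            return pts, wm, wc

        # Propagate a Gaussian through a nonlinearity without any Jacobian:
        f = lambda s: np.array([s[0] + 0.1 * s[1], s[1] + 0.5 * np.sin(s[0])])
        pts, wm, wc = sigma_points(np.array([0.2, 1.0]), 0.1 * np.eye(2))
        prop = np.array([f(p) for p in pts])
        mean = wm @ prop
        cov = (prop - mean).T @ np.diag(wc) @ (prop - mean)
        print("predicted mean:", mean, "\npredicted covariance:\n", cov)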

  13. Stability analysis of multi-group deterministic and stochastic epidemic models with vaccination rate

    NASA Astrophysics Data System (ADS)

    Wang, Zhi-Gang; Gao, Rui-Mei; Fan, Xiao-Ming; Han, Qi-Xing

    2014-09-01

    We discuss in this paper a deterministic multi-group MSIR epidemic model with a vaccination rate. The basic reproduction number ℛ0, a key parameter in epidemiology, is a threshold which determines the persistence or extinction of the disease. By using Lyapunov function techniques, we show that if ℛ0 is greater than 1 and the deterministic model obeys some conditions, then the disease will prevail: the infective persists and the endemic state is asymptotically stable in a feasible region. If ℛ0 is less than or equal to 1, then the infective disappears and the disease dies out. In addition, stochastic noises around the endemic equilibrium will be added to the deterministic MSIR model in order that the deterministic model is extended to a system of stochastic ordinary differential equations. In the stochastic version, we carry out a detailed analysis on the asymptotic behavior of the stochastic model. In addition, regarding the value of ℛ0, when the stochastic system obeys some conditions and ℛ0 is greater than 1, we deduce that the stochastic system is stochastically asymptotically stable. Finally, the deterministic and stochastic model dynamics are illustrated through computer simulations.
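
    A single-group reduction of such a model illustrates the ℛ0 threshold numerically; the structure and parameters below are our own simplifying assumptions, not the paper's multi-group system.

        import numpy as np
        from scipy.integrate import odeint

        # Single-group MSIR sketch with vaccination: M = maternally immune
        # newborns, S = susceptible, I = infective, R = recovered.
        mu, delta, beta, gamma, p = 0.02, 0.1, 0.5, 0.2, 0.3   # p = vaccination rate

        def msir(y, t):
            M, S, I, R = y
            dM = mu * (1 - p) - (delta + mu) * M      # unvaccinated births; maternal
            dS = delta * M - beta * S * I - mu * S    # immunity wanes at rate delta
            dI = beta * S * I - (gamma + mu) * I
            dR = mu * p + gamma * I - mu * R          # vaccinated births enter R
            return [dM, dS, dI, dR]

        # Threshold: R0 = beta * S* / (gamma + mu), with disease-free susceptible
        # level S* = delta * (1 - p) / (delta + mu).
        R0 = beta * delta * (1 - p) / ((delta + mu) * (gamma + mu))
        y = odeint(msir, [0.1, 0.8, 0.05, 0.05], np.linspace(0, 400, 2001))
        print(f"R0 = {R0:.2f}; final infective fraction = {y[-1, 2]:.4f}")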

  14. Tsunami Hazard Assessment of the Northern Oregon Coast: A Multi-Deterministic Approach Tested at Cannon Beach, Oregon

    NASA Astrophysics Data System (ADS)

    Priest, G. R.; Goldfinger, C.; Wang, K.; Witter, R. C.; Zhang, Y.; Baptista, A.

    2008-12-01

    To update the tsunami hazard assessment method for Oregon, we (1) evaluate geologically reasonable variability of the earthquake rupture process on the Cascadia megathrust, (2) compare those scenarios to geological and geophysical evidence for plate locking, (3) specify 25 deterministic earthquake sources, and (4) use the resulting vertical coseismic deformations as initial conditions for simulation of Cascadia tsunami inundation at Cannon Beach, Oregon. Because of the Cannon Beach focus, the north-south extent of source scenarios is limited to Neah Bay, Washington to Florence, Oregon. We use the marine paleoseismic record to establish recurrence bins from the 10,000 year event record and select representative coseismic slips from these data. Assumed slips on the megathrust are 8.4 m (290 years of convergence), 15.2 m (525 years of convergence), 21.6 m (748 years of convergence), and 37.5 m (1298 years of convergence) which, if the sources were extended to the entire Cascadia margin, give Mw varying from approximately 8.3 to 9.3. Additional parameters explored by these scenarios characterize ruptures with a buried megathrust versus splay faulting, local versus regional slip patches, and seaward skewed versus symmetrical slip distribution. By assigning variable weights to the 25 source scenarios using a logic tree approach, we derived percentile inundation lines that express the confidence level (percentage) that a Cascadia tsunami will NOT exceed the line. Lines of 50, 70, 90, and 99 percent confidence correspond to maximum runup of 8.9, 10.5, 13.2, and 28.4 m (NAVD88). The tsunami source with highest logic tree weight (preferred scenario) involved rupture of a splay fault with 15.2 m slip that produced tsunami inundation near the 70 percent confidence line. Minimum inundation consistent with the inland extent of three Cascadia tsunami sand layers deposited east of Cannon Beach within the last 1000 years suggests a minimum of 15.2 m slip on buried megathrust ruptures. The largest tsunami run-up at the 99 percent isoline was from 37.5 m slip partitioned to a splay fault. This type of extreme event is considered to be very rare, perhaps once in 10,000 years based on offshore paleoseismic evidence, but it can produce waves rivaling the 2004 Indian Ocean tsunami. Cascadia coseismic deformation most similar to the Indian Ocean earthquake produced generally smaller tsunamis than in the Indian Ocean, due mostly to the 1 km shallower water depth on the Cascadia margin. Inundation from distant tsunami sources was assessed by simulation of only two Mw 9.2 earthquakes in the Gulf of Alaska, a hypothetical worst-case developed by the Tsunami Pilot Study Working Group (2006) and a historical worst case, the 1964 Prince William Sound Earthquake; maximum runups were, respectively, 12.4 m and 7.5 m.
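
    The logic-tree weighting step behind the percentile lines can be illustrated as follows; the runups and weights below are invented placeholders, not the study's 25 actual sources.

        import numpy as np

        # Hypothetical scenario runups (m) and logic-tree weights summing to 1.
        runup = np.array([8.4, 9.1, 10.5, 12.0, 13.2, 15.8, 21.0, 28.4])
        weight = np.array([0.25, 0.20, 0.18, 0.12, 0.10, 0.08, 0.05, 0.02])

        order = np.argsort(runup)
        cdf = np.cumsum(weight[order])   # P(runup <= value) under the logic tree

        for conf in (0.50, 0.70, 0.90, 0.99):
            line = runup[order][np.searchsorted(cdf, conf)]
            print(f"{conf:.0%} confidence of non-exceedance: runup <= {line:.1f} m")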

  15. Deterministic and stochastic CTMC models from Zika disease transmission

    NASA Astrophysics Data System (ADS)

    Zevika, Mona; Soewono, Edy

    2018-03-01

    Zika infection is one of the most important mosquito-borne diseases in the world. Zika virus (ZIKV) is transmitted by many Aedes-type mosquitoes including Aedes aegypti. Pregnant women with the Zika virus are at risk of having a fetus or infant with a congenital defect and suffering from microcephaly. Here, we formulate a Zika disease transmission model using two approaches, a deterministic model and a continuous-time Markov chain stochastic model. The basic reproduction ratio is constructed from a deterministic model. Meanwhile, the CTMC stochastic model yields an estimate of the probability of extinction and outbreaks of Zika disease. Dynamical simulations and analysis of the disease transmission are shown for the deterministic and stochastic models.
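
    The deterministic-versus-CTMC contrast can be sketched with a minimal embedded-chain (Gillespie-type) SIR stand-in; the parameters and the "major outbreak" criterion below are illustrative, not the paper's vector-host model.

        import numpy as np

        rng = np.random.default_rng(1)

        beta, gamma, N, I0 = 0.6, 0.25, 500, 2

        def major_outbreak():
            S, I, cum = N - I0, I0, I0
            while I > 0:
                infect = beta * S * I / N
                if rng.uniform() * (infect + gamma * I) < infect:
                    S -= 1; I += 1; cum += 1   # infection event
                else:
                    I -= 1                     # recovery event
                if cum > 0.2 * N:
                    return True                # epidemic took off
            return False                       # stochastic extinction

        trials = 2000
        p_major = sum(major_outbreak() for _ in range(trials)) / trials
        # Branching-process approximation: P(extinction) ~ (gamma/beta)^I0
        print(f"simulated P(major outbreak) = {p_major:.3f}; "
              f"branching approx = {1 - (gamma / beta) ** I0:.3f}")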

  16. Distinguishing between stochasticity and determinism: Examples from cell cycle duration variability.

    PubMed

    Pearl Mizrahi, Sivan; Sandler, Oded; Lande-Diner, Laura; Balaban, Nathalie Q; Simon, Itamar

    2016-01-01

    We describe a recent approach for distinguishing between stochastic and deterministic sources of variability, focusing on the mammalian cell cycle. Variability between cells is often attributed to stochastic noise, although it may be generated by deterministic components. Interestingly, lineage information can be used to distinguish between variability and determinism. Analysis of correlations within a lineage of the mammalian cell cycle duration revealed its deterministic nature. Here, we discuss the sources of such variability and the possibility that the underlying deterministic process is due to the circadian clock. Finally, we discuss the "kicked cell cycle" model and its implication on the study of the cell cycle in healthy and cancerous tissues. © 2015 WILEY Periodicals, Inc.

  17. Disentangling Mechanisms That Mediate the Balance Between Stochastic and Deterministic Processes in Microbial Succession

    DOE PAGES

    Dini-Andreote, Francisco; Stegen, James C.; van Elsas, Jan D.; ...

    2015-03-17

    Despite growing recognition that deterministic and stochastic factors simultaneously influence bacterial communities, little is known about mechanisms shifting their relative importance. To better understand underlying mechanisms, we developed a conceptual model linking ecosystem development during primary succession to shifts in the stochastic/deterministic balance. To evaluate the conceptual model we coupled spatiotemporal data on soil bacterial communities with environmental conditions spanning 105 years of salt marsh development. At the local scale there was a progression from stochasticity to determinism due to Na accumulation with increasing ecosystem age, supporting a main element of the conceptual model. At the regional scale, soil organic matter (SOM) governed the relative influence of stochasticity and the type of deterministic ecological selection, suggesting scale-dependency in how deterministic ecological selection is imposed. Analysis of a new ecological simulation model supported these conceptual inferences. Looking forward, we propose an extended conceptual model that integrates primary and secondary succession in microbial systems.

  18. An efficient deterministic-probabilistic approach to modeling regional groundwater flow: 2. Application to Owens Valley, California

    USGS Publications Warehouse

    Guymon, Gary L.; Yen, Chung-Cheng

    1990-01-01

    The applicability of a deterministic-probabilistic model for predicting water tables in southern Owens Valley, California, is evaluated. The model is based on a two-layer deterministic model that is cascaded with a two-point probability model. To reduce the potentially large number of uncertain variables in the deterministic model, lumping of uncertain variables was evaluated by sensitivity analysis to reduce the total number of uncertain variables to three variables: hydraulic conductivity, storage coefficient or specific yield, and source-sink function. Results demonstrate that lumping of uncertain parameters reduces computational effort while providing sufficient precision for the case studied. Simulated spatial coefficients of variation for water table temporal position are small in most of the basin, which suggests that deterministic models can predict water tables in these areas with good precision. However, in several important areas where pumping occurs or the geology is complex, the simulated spatial coefficients of variation are overestimated by the two-point probability method.
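
    The two-point probability step can be sketched as a Rosenblueth-style point estimate; the response function and all numbers below are hypothetical stand-ins for the actual two-layer groundwater model.

        import numpy as np
        from itertools import product

        # Two-point estimate: evaluate the model at mu +/- sigma of each of the
        # three uncertain parameters and summarize the 2^3 outcomes.
        def water_table(K, S, Q):
            return 100.0 - Q / (K * S)       # hypothetical head response, m

        means  = {"K": 10.0, "S": 0.15, "Q": 5.0}   # conductivity, storage, source-sink
        sigmas = {"K": 2.0,  "S": 0.03, "Q": 1.0}

        outs = []
        for signs in product((-1, 1), repeat=3):     # 2^3 = 8 model evaluations
            args = {k: means[k] + s * sigmas[k] for k, s in zip(means, signs)}
            outs.append(water_table(**args))

        outs = np.array(outs)
        print(f"mean head = {outs.mean():.2f} m, "
              f"coefficient of variation = {outs.std() / abs(outs.mean()):.4f}")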

  19. An efficient deterministic-probabilistic approach to modeling regional groundwater flow: 2. Application to Owens Valley, California

    NASA Astrophysics Data System (ADS)

    Guymon, Gary L.; Yen, Chung-Cheng

    1990-07-01

    The applicability of a deterministic-probabilistic model for predicting water tables in southern Owens Valley, California, is evaluated. The model is based on a two-layer deterministic model that is cascaded with a two-point probability model. To reduce the potentially large number of uncertain variables in the deterministic model, lumping of uncertain variables was evaluated by sensitivity analysis to reduce the total number of uncertain variables to three variables: hydraulic conductivity, storage coefficient or specific yield, and source-sink function. Results demonstrate that lumping of uncertain parameters reduces computational effort while providing sufficient precision for the case studied. Simulated spatial coefficients of variation for water table temporal position are small in most of the basin, which suggests that deterministic models can predict water tables in these areas with good precision. However, in several important areas where pumping occurs or the geology is complex, the simulated spatial coefficients of variation are overestimated by the two-point probability method.

  20. Statistics of Delta v magnitude for a trajectory correction maneuver containing deterministic and random components

    NASA Technical Reports Server (NTRS)

    Bollman, W. E.; Chadwick, C.

    1982-01-01

    A number of interplanetary missions now being planned involve placing deterministic maneuvers along the flight path to alter the trajectory. Lee and Boain (1973) examined the statistics of trajectory correction maneuver (TCM) magnitude with no deterministic ('bias') component. The Delta v vector magnitude statistics were generated for several values of random Delta v standard deviations using expansions in terms of infinite hypergeometric series. The present investigation uses a different technique (Monte Carlo simulation) to generate Delta v magnitude statistics for a wider selection of random Delta v standard deviations and also extends the analysis to the case of nonzero deterministic Delta v's. These Delta v magnitude statistics are plotted parametrically. The plots are useful in assisting the analyst in quickly answering questions about the statistics of Delta v magnitude for single TCMs consisting of both a deterministic and a random component. The plots provide quick insight into the nature of the Delta v magnitude distribution for the TCM.
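
    The Monte Carlo technique is straightforward to reproduce in outline; the bias vector and per-axis standard deviation below are illustrative values, not mission data.

        import numpy as np

        rng = np.random.default_rng(42)

        # |Delta v| statistics for a TCM with a deterministic (bias) component
        # plus an isotropic Gaussian random component.
        bias = np.array([2.0, 0.0, 0.0])   # deterministic Delta v, m/s (assumed)
        sigma = 0.5                        # per-axis std of random part, m/s (assumed)

        n = 200_000
        dv = bias + rng.normal(0.0, sigma, size=(n, 3))
        mag = np.linalg.norm(dv, axis=1)

        print(f"mean |dv| = {mag.mean():.3f} m/s")
        for q in (0.50, 0.90, 0.99):
            print(f"{q:.0%} quantile: {np.quantile(mag, q):.3f} m/s")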

  1. Simultaneous estimation of deterministic and fractal stochastic components in non-stationary time series

    NASA Astrophysics Data System (ADS)

    García, Constantino A.; Otero, Abraham; Félix, Paulo; Presedo, Jesús; Márquez, David G.

    2018-07-01

    In the past few decades, it has been recognized that 1/f fluctuations are ubiquitous in nature. The most widely used mathematical models to capture the long-term memory properties of 1/f fluctuations have been stochastic fractal models. However, physical systems do not usually consist of just stochastic fractal dynamics, but they often also show some degree of deterministic behavior. The present paper proposes a model based on fractal stochastic and deterministic components that can provide a valuable basis for the study of complex systems with long-term correlations. The fractal stochastic component is assumed to be a fractional Brownian motion process and the deterministic component is assumed to be a band-limited signal. We also provide a method that, under the assumptions of this model, is able to characterize the fractal stochastic component and to provide an estimate of the deterministic components present in a given time series. The method is based on a Bayesian wavelet shrinkage procedure that exploits the self-similar properties of the fractal processes in the wavelet domain. This method has been validated over simulated signals and over real signals of economic and biological origin. Real examples illustrate how our model may be useful for exploring the deterministic-stochastic duality of complex systems, and uncovering interesting patterns present in time series.

  2. Inferring Fitness Effects from Time-Resolved Sequence Data with a Delay-Deterministic Model

    PubMed Central

    Nené, Nuno R.; Dunham, Alistair S.; Illingworth, Christopher J. R.

    2018-01-01

    A common challenge arising from the observation of an evolutionary system over time is to infer the magnitude of selection acting upon a specific genetic variant, or variants, within the population. The inference of selection may be confounded by the effects of genetic drift in a system, leading to the development of inference procedures to account for these effects. However, recent work has suggested that deterministic models of evolution may be effective in capturing the effects of selection even under complex models of demography, suggesting the more general application of deterministic approaches to inference. Responding to this literature, we here note a case in which a deterministic model of evolution may give highly misleading inferences, resulting from the nondeterministic properties of mutation in a finite population. We propose an alternative approach that acts to correct for this error, and which we denote the delay-deterministic model. Applying our model to a simple evolutionary system, we demonstrate its performance in quantifying the extent of selection acting within that system. We further consider the application of our model to sequence data from an evolutionary experiment. We outline scenarios in which our model may produce improved results for the inference of selection, noting that such situations can be easily identified via the use of a regular deterministic model. PMID:29500183

  3. Automatic mesh adaptivity for hybrid Monte Carlo/deterministic neutronics modeling of difficult shielding problems

    DOE PAGES

    Ibrahim, Ahmad M.; Wilson, Paul P.H.; Sawan, Mohamed E.; ...

    2015-06-30

    The CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques dramatically increase the efficiency of neutronics modeling, but their use in the accurate design analysis of very large and geometrically complex nuclear systems has been limited by the large number of processors and memory requirements for their preliminary deterministic calculations and final Monte Carlo calculation. Three mesh adaptivity algorithms were developed to reduce the memory requirements of CADIS and FW-CADIS without sacrificing their efficiency improvement. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility. Using these algorithms resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation and, additionally, increased the efficiency of the Monte Carlo simulation by a factor of at least 3.4. The three algorithms enabled this difficult calculation to be accurately solved using an FW-CADIS simulation on a regular computer cluster, eliminating the need for a world-class super computer.

  4. Multicellular Computing Using Conjugation for Wiring

    PubMed Central

    Goñi-Moreno, Angel; Amos, Martyn; de la Cruz, Fernando

    2013-01-01

    Recent efforts in synthetic biology have focussed on the implementation of logical functions within living cells. One aim is to facilitate both internal “re-programming” and external control of cells, with potential applications in a wide range of domains. However, fundamental limitations on the degree to which single cells may be re-engineered have led to a growth of interest in multicellular systems, in which a “computation” is distributed over a number of different cell types, in a manner analogous to modern computer networks. Within this model, individual cell type perform specific sub-tasks, the results of which are then communicated to other cell types for further processing. The manner in which outputs are communicated is therefore of great significance to the overall success of such a scheme. Previous experiments in distributed cellular computation have used global communication schemes, such as quorum sensing (QS), to implement the “wiring” between cell types. While useful, this method lacks specificity, and limits the amount of information that may be transferred at any one time. We propose an alternative scheme, based on specific cell-cell conjugation. This mechanism allows for the direct transfer of genetic information between bacteria, via circular DNA strands known as plasmids. We design a multi-cellular population that is able to compute, in a distributed fashion, a Boolean XOR function. Through this, we describe a general scheme for distributed logic that works by mixing different strains in a single population; this constitutes an important advantage of our novel approach. Importantly, the amount of genetic information exchanged through conjugation is significantly higher than the amount possible through QS-based communication. We provide full computational modelling and simulation results, using deterministic, stochastic and spatially-explicit methods. These simulations explore the behaviour of one possible conjugation-wired cellular computing system under different conditions, and provide baseline information for future laboratory implementations. PMID:23840385
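
    One way to see the logical skeleton of such a scheme (our own toy abstraction, not the authors' genetic design): two sender strains each transfer a plasmid only when their input bit is 1, and a receiver strain reports XOR by producing output only when exactly one plasmid arrives.

        # Toy conjugation-wired XOR: receiver outputs 1 iff exactly one plasmid
        # is received (the "two plasmids repress output" rule is assumed here
        # for illustration).
        def receiver(plasmids):
            return int(sum(plasmids) == 1)

        for a in (0, 1):
            for b in (0, 1):
                print(f"XOR({a}, {b}) = {receiver([a, b])}")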

  5. Improving ground-penetrating radar data in sedimentary rocks using deterministic deconvolution

    USGS Publications Warehouse

    Xia, J.; Franseen, E.K.; Miller, R.D.; Weis, T.V.; Byrnes, A.P.

    2003-01-01

    Resolution is key to confidently identifying unique geologic features using ground-penetrating radar (GPR) data. Source wavelet "ringing" (related to bandwidth) in a GPR section limits resolution because of wavelet interference, and can smear reflections in time and/or space. The resultant potential for misinterpretation limits the usefulness of GPR. Deconvolution offers the ability to compress the source wavelet and improve temporal resolution. Unlike statistical deconvolution, deterministic deconvolution is mathematically simple and stable while providing the highest possible resolution because it uses the source wavelet unique to the specific radar equipment. Source wavelets generated in, transmitted through and acquired from air allow successful application of deterministic approaches to wavelet suppression. We demonstrate the validity of using a source wavelet acquired in air as the operator for deterministic deconvolution in a field application using "400-MHz" antennas at a quarry site characterized by interbedded carbonates with shale partings. We collected GPR data on a bench adjacent to cleanly exposed quarry faces in which we placed conductive rods to provide conclusive groundtruth for this approach to deconvolution. The best deconvolution results, which are confirmed by the conductive rods for the 400-MHz antenna tests, were observed for wavelets acquired when the transmitter and receiver were separated by 0.3 m. Applying deterministic deconvolution to GPR data collected in sedimentary strata at our study site resulted in an improvement in resolution (50%) and improved spatial location (0.10-0.15 m) of geologic features compared to the same data processed without deterministic deconvolution. The effectiveness of deterministic deconvolution for increased resolution and spatial accuracy of specific geologic features is further demonstrated by comparing results of deconvolved data with nondeconvolved data acquired along a 30-m transect immediately adjacent to a fresh quarry face. The results at this site support using deterministic deconvolution, which incorporates the GPR instrument's unique source wavelet, as a standard part of routine GPR data processing. © 2003 Elsevier B.V. All rights reserved.
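
    The frequency-domain core of deterministic deconvolution can be sketched as below; the Ricker stand-in for the air-acquired source wavelet, the sampling interval, and the water-level constant are all our assumptions, not values from the study.

        import numpy as np

        rng = np.random.default_rng(0)

        def ricker(f0, dt, n):
            """Synthetic stand-in for a measured source wavelet."""
            t = (np.arange(n) - n // 2) * dt
            a = (np.pi * f0 * t) ** 2
            return (1 - 2 * a) * np.exp(-a)

        dt, n = 2.5e-10, 512                 # 0.25 ns sampling (assumed)
        wavelet = ricker(400e6, dt, n)       # "400 MHz" source wavelet

        reflectivity = np.zeros(n)
        reflectivity[[150, 180, 300]] = [1.0, -0.6, 0.4]
        trace = np.convolve(reflectivity, wavelet, mode="same")
        trace += 0.01 * rng.normal(size=n)   # measurement noise

        # Spectral division by the known wavelet, stabilized by a water level.
        W = np.fft.rfft(np.roll(wavelet, -n // 2))   # zero-phase alignment
        D = np.fft.rfft(trace)
        water = 0.01 * np.abs(W).max()
        R = np.fft.irfft(D * np.conj(W) / (np.abs(W) ** 2 + water ** 2), n)
        print("largest recovered spike locations:", np.sort(np.argsort(np.abs(R))[-3:]))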

  6. Expansion or extinction: deterministic and stochastic two-patch models with Allee effects.

    PubMed

    Kang, Yun; Lanchier, Nicolas

    2011-06-01

    We investigate the impact of Allee effect and dispersal on the long-term evolution of a population in a patchy environment. Our main focus is on whether a population already established in one patch either successfully invades an adjacent empty patch or undergoes a global extinction. Our study is based on the combination of analytical and numerical results for both a deterministic two-patch model and a stochastic counterpart. The deterministic model has either two, three or four attractors. The existence of a regime with exactly three attractors only appears when patches have distinct Allee thresholds. In the presence of weak dispersal, the analysis of the deterministic model shows that a high-density and a low-density populations can coexist at equilibrium in nearby patches, whereas the analysis of the stochastic model indicates that this equilibrium is metastable, thus leading after a large random time to either a global expansion or a global extinction. Up to some critical dispersal, increasing the intensity of the interactions leads to an increase of both the basin of attraction of the global extinction and the basin of attraction of the global expansion. Above this threshold, for both the deterministic and the stochastic models, the patches tend to synchronize as the intensity of the dispersal increases. This results in either a global expansion or a global extinction. For the deterministic model, there are only two attractors, while the stochastic model no longer exhibits a metastable behavior. In the presence of strong dispersal, the limiting behavior is entirely determined by the value of the Allee thresholds as the global population size in the deterministic and the stochastic models evolves as dictated by their single-patch counterparts. For all values of the dispersal parameter, Allee effects promote global extinction in terms of an expansion of the basin of attraction of the extinction equilibrium for the deterministic model and an increase of the probability of extinction for the stochastic model.
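
    A generic two-patch cubic (Allee) model with symmetric dispersal reproduces the coexistence of a high-density and a low-density patch under weak dispersal described above; the equations and parameters below are a common generic form, not necessarily the paper's exact system.

        import numpy as np
        from scipy.integrate import odeint

        # du/dt = u(1-u)(u-a1) + D(v-u);  dv/dt = v(1-v)(v-a2) + D(u-v)
        # a1, a2 are distinct Allee thresholds; D is the dispersal intensity.
        a1, a2, D = 0.2, 0.3, 0.01

        def rhs(y, t):
            u, v = y
            return [u * (1 - u) * (u - a1) + D * (v - u),
                    v * (1 - v) * (v - a2) + D * (u - v)]

        t = np.linspace(0, 500, 5000)
        traj = odeint(rhs, [0.9, 0.05], t)   # patch 1 established, patch 2 near-empty
        print(f"final state: u = {traj[-1, 0]:.3f}, v = {traj[-1, 1]:.3f}")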

  7. Estimating the epidemic threshold on networks by deterministic connections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Kezan, E-mail: lkzzr@sohu.com; Zhu, Guanghu; Fu, Xinchu

    2014-12-15

    For many epidemic networks some connections between nodes are treated as deterministic, while the remainder are random and have different connection probabilities. By applying spectral analysis to several constructed models, we find that one can estimate the epidemic thresholds of these networks by investigating information from only the deterministic connections. Nonetheless, in these models, generic nonuniform stochastic connections and heterogeneous community structure are also considered. The estimation of epidemic thresholds is achieved via inequalities with upper and lower bounds, which are found to be in very good agreement with numerical simulations. Since these deterministic connections are easier to detect than those stochastic connections, this work provides a feasible and effective method to estimate the epidemic thresholds in real epidemic networks.
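
    The spectral intuition can be sketched numerically: for SIS-type spreading the threshold scales as 1/lambda_max of the adjacency matrix, and entrywise monotonicity of the spectral radius for nonnegative matrices lets the deterministic links alone bound the threshold from above. The construction below is a toy of our own, not the paper's models or inequalities.

        import numpy as np

        rng = np.random.default_rng(3)

        n = 200
        det = np.triu((rng.random((n, n)) < 0.02).astype(float), 1)
        det = det + det.T                         # deterministic adjacency
        prob = np.full((n, n), 0.005)             # probabilities of random links
        np.fill_diagonal(prob, 0.0)

        lam_det = np.linalg.eigvalsh(det)[-1]
        lam_full = np.linalg.eigvalsh(det + prob)[-1]   # expected full adjacency
        print(f"{1.0 / lam_full:.4f} <= tau_c <= {1.0 / lam_det:.4f}")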

  8. Experimental demonstration on the deterministic quantum key distribution based on entangled photons.

    PubMed

    Chen, Hua; Zhou, Zhi-Yuan; Zangana, Alaa Jabbar Jumaah; Yin, Zhen-Qiang; Wu, Juan; Han, Yun-Guang; Wang, Shuang; Li, Hong-Wei; He, De-Yong; Tawfeeq, Shelan Khasro; Shi, Bao-Sen; Guo, Guang-Can; Chen, Wei; Han, Zheng-Fu

    2016-02-10

    As an important resource, entanglement light sources have been used in developing quantum information technologies, such as quantum key distribution (QKD). There are few experiments implementing entanglement-based deterministic QKD protocols since the security of existing protocols may be compromised in lossy channels. In this work, we report on a loss-tolerant deterministic QKD experiment which follows a modified "Ping-Pong" (PP) protocol. The experiment results demonstrate for the first time that a secure deterministic QKD session can be fulfilled in a channel with an optical loss of 9 dB, based on a telecom-band entangled photon source. This exhibits a conceivable prospect of utilizing entanglement light sources in real-life fiber-based quantum communications.

  9. Experimental demonstration on the deterministic quantum key distribution based on entangled photons

    PubMed Central

    Chen, Hua; Zhou, Zhi-Yuan; Zangana, Alaa Jabbar Jumaah; Yin, Zhen-Qiang; Wu, Juan; Han, Yun-Guang; Wang, Shuang; Li, Hong-Wei; He, De-Yong; Tawfeeq, Shelan Khasro; Shi, Bao-Sen; Guo, Guang-Can; Chen, Wei; Han, Zheng-Fu

    2016-01-01

    As an important resource, entanglement light sources have been used in developing quantum information technologies, such as quantum key distribution (QKD). There are few experiments implementing entanglement-based deterministic QKD protocols since the security of existing protocols may be compromised in lossy channels. In this work, we report on a loss-tolerant deterministic QKD experiment which follows a modified “Ping-Pong” (PP) protocol. The experiment results demonstrate for the first time that a secure deterministic QKD session can be fulfilled in a channel with an optical loss of 9 dB, based on a telecom-band entangled photon source. This exhibits a conceivable prospect of utilizing entanglement light sources in real-life fiber-based quantum communications. PMID:26860582

  10. Non-convex dissipation potentials in multiscale non-equilibrium thermodynamics

    NASA Astrophysics Data System (ADS)

    Janečka, Adam; Pavelka, Michal

    2018-04-01

    Reformulating a constitutive relation in terms of gradient dynamics (being the derivative of a dissipation potential) brings additional information on stability, metastability and instability of the dynamics with respect to perturbations of the constitutive relation, called CR-stability. CR-instability is connected to the loss of convexity of the dissipation potential, which makes the Legendre-conjugate dissipation potential multivalued and causes dissipative phase transitions that are not induced by non-convexity of free energy, but by non-convexity of the dissipation potential. CR-stability of the constitutive relation with respect to perturbations is then manifested by constructing evolution equations for the perturbations in a thermodynamically sound way (CR-extension). As a result, interesting experimental observations of behavior of complex fluids under shear flow and supercritical boiling curve can be explained.
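
    In standard notation (our summary of the gradient-dynamics framework, not a formula quoted from the paper), the constitutive relation expresses the thermodynamic flux J as the derivative of a dissipation potential with respect to the conjugate force X:

        J = \frac{\partial \Xi^*(X)}{\partial X}, \qquad \Xi(J) = \sup_X \left( J X - \Xi^*(X) \right).

    When \Xi^* is convex, the Legendre conjugate \Xi is single-valued; loss of convexity makes the conjugate potential multivalued, which is precisely the CR-instability and dissipative phase transition discussed above.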

  11. Nonlinear propagation of vector extremely short pulses in a medium of symmetric and asymmetric molecules

    NASA Astrophysics Data System (ADS)

    Sazonov, S. V.; Ustinov, N. V.

    2017-02-01

    The nonlinear propagation of extremely short electromagnetic pulses in a medium of symmetric and asymmetric molecules placed in static magnetic and electric fields is theoretically studied. Asymmetric molecules differ in that they have nonzero permanent dipole moments in stationary quantum states. A system of wave equations is derived for the ordinary and extraordinary components of pulses. It is shown that this system can be reduced in some cases to a system of coupled Ostrovsky equations and to equations integrable by the inverse scattering transform method, including the vector version of the Ostrovsky-Vakhnenko equation. Different types of solutions of this system are considered. Only solutions representing the superposition of periodic solutions are single-valued, whereas soliton and breather solutions are multivalued.
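
    For reference, the scalar Ostrovsky equation in one common scaling (quoted from the general literature, not from this paper) reads

        \left( u_t + c\, u_x + \alpha\, u u_x + \beta\, u_{xxx} \right)_x = \gamma u,

    and the Vakhnenko (reduced Ostrovsky) case corresponds to dropping the high-frequency dispersion term (\beta = 0). The paper's system is, as stated above, a coupled vector generalization of equations of this type.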

  12. Enhanced sensitivity of a passive optical cavity by an intracavity dispersive medium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, David D.; Department of Physics, University of Alabama in Huntsville, Huntsville, Alabama 35899; Myneni, Krishna

    2009-07-15

    The pushing of the modes of a Fabry-Perot cavity by an intracavity rubidium cell is measured. The scale factor of the modes is increased by the anomalous dispersion and is inversely proportional to the sum of the effective group index and an additional cavity delay factor that arises from the variation of the Rb absorption over a free spectral range. This additional positive feedback further increases the effect of the anomalous dispersion and goes to zero at the lasing threshold. The mode width does not grow as fast as the scale factor as the intracavity absorption is increased, resulting in enhanced measurement sensitivities. For absorptions larger than the scale factor pole, the atom-cavity response is multivalued and mode splitting occurs.

  13. A longitudinal analysis of the relationship between fertility timing and schooling.

    PubMed

    Stange, Kevin

    2011-08-01

    This article quantifies the contribution of pre-treatment dynamic selection to the relationship between fertility timing and postsecondary attainment, after controlling for a rich set of predetermined characteristics. Eventual mothers and nonmothers are matched using their predicted birth hazard rate, which shares the desirable properties of a propensity score but in a multivalued treatment setting. I find that eventual mothers and matched nonmothers enter college at the same rate, but their educational paths diverge well before the former become pregnant. This pre-pregnancy divergence creates substantial differences in ultimate educational attainment that cannot possibly be due to the childbirth itself. Controls for predetermined characteristics and fixed effects do not address this form of dynamic selection bias. A dynamic model of the simultaneous childbirth-education sequencing decision is necessary to address it.

  14. Characterization of normality of chaotic systems including prediction and detection of anomalies

    NASA Astrophysics Data System (ADS)

    Engler, Joseph John

    Accurate prediction and control pervades domains such as engineering, physics, chemistry, and biology. Often, it is discovered that the systems under consideration cannot be well represented by linear, periodic, or random data. It has been shown that these systems exhibit deterministic chaos behavior. Deterministic chaos describes systems which are governed by deterministic rules but whose data appear to be random or quasi-periodic distributions. Deterministically chaotic systems characteristically exhibit sensitive dependence upon initial conditions manifested through rapid divergence of states initially close to one another. Due to this characterization, it has been deemed impossible to accurately predict future states of these systems for longer time scales. Fortunately, the deterministic nature of these systems allows for accurate short term predictions, given the dynamics of the system are well understood. This fact has been exploited in the research community and has resulted in various algorithms for short term predictions. Detection of normality in deterministically chaotic systems is critical in understanding the system sufficiently to be able to predict future states. Due to the sensitivity to initial conditions, the detection of normal operational states for a deterministically chaotic system can be challenging. The addition of small perturbations to the system, which may result in bifurcation of the normal states, further complicates the problem. The detection of anomalies and prediction of future states of the chaotic system allows for greater understanding of these systems. The goal of this research is to produce methodologies for determining states of normality for deterministically chaotic systems, detection of anomalous behavior, and the more accurate prediction of future states of the system. Additionally, the ability to detect subtle system state changes is discussed. The dissertation addresses these goals by proposing new representational techniques and novel prediction methodologies. The value and efficiency of these methods are explored in various case studies. Presented is an overview of chaotic systems with examples taken from the real world. A representation schema for rapid understanding of the various states of deterministically chaotic systems is presented. This schema is then used to detect anomalies and system state changes. Additionally, a novel prediction methodology which utilizes Lyapunov exponents to facilitate longer term prediction accuracy is presented and compared with other nonlinear prediction methodologies. These novel methodologies are then demonstrated on applications such as wind energy, cyber security and classification of social networks.
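
    As a minimal illustration of quantifying the sensitive dependence described here, the largest Lyapunov exponent of the logistic map cleanly separates periodic from chaotic regimes (a standard textbook computation, unrelated to the dissertation's data sets):

        import numpy as np

        # Largest Lyapunov exponent of x -> r x (1 - x): average log of the
        # local stretching factor |f'(x)| = |r (1 - 2x)| along the orbit.
        def lyapunov(r, x0=0.3, n=100_000, burn=1000):
            x, acc = x0, 0.0
            for i in range(n + burn):
                x = r * x * (1 - x)
                if i >= burn:
                    acc += np.log(abs(r * (1 - 2 * x)))
            return acc / n

        for r in (3.2, 3.8):   # periodic vs. chaotic regimes
            print(f"r = {r}: lambda = {lyapunov(r):.3f}")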

  15. Dynamic Event Tree advancements and control logic improvements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alfonsi, Andrea; Rabiti, Cristian; Mandelli, Diego

    The RAVEN code has been under development at the Idaho National Laboratory since 2012. Its main goal is to create a multi-purpose platform for deploying all the capabilities needed for Probabilistic Risk Assessment, uncertainty quantification, data mining analysis and optimization studies. RAVEN is currently equipped with three different sampling categories: Forward samplers (Monte Carlo, Latin Hyper Cube, Stratified, Grid Sampler, Factorials, etc.), Adaptive Samplers (Limit Surface search, Adaptive Polynomial Chaos, etc.) and Dynamic Event Tree (DET) samplers (Deterministic and Adaptive Dynamic Event Trees). The main subject of this document is to report the activities that have been done in order to: start the migration of the RAVEN/RELAP-7 control logic system into MOOSE, and develop advanced dynamic sampling capabilities based on the Dynamic Event Tree approach. In order to provide to all MOOSE-based applications a control logic capability, in this Fiscal Year an initial migration activity has been initiated, moving the control logic system, designed for RELAP-7 by the RAVEN team, into the MOOSE framework. In this document, a brief explanation of what has been done is reported. The second and most important subject of this report is the development of a Dynamic Event Tree (DET) sampler named “Hybrid Dynamic Event Tree” (HDET) and its Adaptive variant “Adaptive Hybrid Dynamic Event Tree” (AHDET). As other authors have already reported, among the different types of uncertainties, it is possible to discern two principal types: aleatory and epistemic uncertainties. The classical Dynamic Event Tree is in charge of treating the first class (aleatory) of uncertainties; the dependence of the probabilistic risk assessment and analysis on the epistemic uncertainties is treated by an initial Monte Carlo sampling (MCDET). From each Monte Carlo sample, a DET analysis is run (in total, N trees). The Monte Carlo employs a pre-sampling of the input space characterized by epistemic uncertainties. The consequent Dynamic Event Tree performs the exploration of the aleatory space. In the RAVEN code, a more general approach has been developed, not limiting the exploration of the epistemic space to a Monte Carlo method but using all the forward sampling strategies RAVEN currently employs. The user can combine Latin Hyper Cube, Grid, Stratified and Monte Carlo sampling in order to explore the epistemic space, without any limitation. From this pre-sampling, the Dynamic Event Tree sampler starts its aleatory space exploration. As reported by the authors, the Dynamic Event Tree is a good fit for developing a goal-oriented sampling strategy. The DET is used to drive a Limit Surface search. The methodology developed by the authors last year performs a Limit Surface search in the aleatory space only. This report documents how this approach has been extended in order to consider the epistemic space interacting with the Hybrid Dynamic Event Tree methodology.

  16. Inferring Fitness Effects from Time-Resolved Sequence Data with a Delay-Deterministic Model.

    PubMed

    Nené, Nuno R; Dunham, Alistair S; Illingworth, Christopher J R

    2018-05-01

    A common challenge arising from the observation of an evolutionary system over time is to infer the magnitude of selection acting upon a specific genetic variant, or variants, within the population. The inference of selection may be confounded by the effects of genetic drift in a system, leading to the development of inference procedures to account for these effects. However, recent work has suggested that deterministic models of evolution may be effective in capturing the effects of selection even under complex models of demography, suggesting the more general application of deterministic approaches to inference. Responding to this literature, we here note a case in which a deterministic model of evolution may give highly misleading inferences, resulting from the nondeterministic properties of mutation in a finite population. We propose an alternative approach that acts to correct for this error, and which we denote the delay-deterministic model. Applying our model to a simple evolutionary system, we demonstrate its performance in quantifying the extent of selection acting within that system. We further consider the application of our model to sequence data from an evolutionary experiment. We outline scenarios in which our model may produce improved results for the inference of selection, noting that such situations can be easily identified via the use of a regular deterministic model. Copyright © 2018 Nené et al.

  17. Controllability of Deterministic Networks with the Identical Degree Sequence

    PubMed Central

    Ma, Xiujuan; Zhao, Haixing; Wang, Binghong

    2015-01-01

    Controlling a complex network is an essential problem in network science and engineering. Recent advances indicate that the controllability of a complex network is dependent on the network's topology. Liu, Barabási, et al. speculated that the degree distribution was one of the most important factors affecting controllability for arbitrary complex directed networks with random link weights. In this paper, we analysed the effect of the degree distribution on the controllability of deterministic networks that are unweighted and undirected. We introduce a class of deterministic networks with identical degree sequence, called (x,y)-flower. We analysed the controllability of the two deterministic networks ((1, 3)-flower and (2, 2)-flower) by exact controllability theory in detail and give accurate results for the minimum number of driver nodes for the two networks. In simulation, we compare the controllability of (x,y)-flower networks. Our results show that the family of (x,y)-flower networks have the same degree sequence, but their controllability is totally different. So the degree distribution itself is not sufficient to characterize the controllability of unweighted, undirected deterministic networks. PMID:26020920
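
    In exact controllability theory, the minimum number of driver nodes for an undirected network equals the maximum eigenvalue multiplicity of its adjacency matrix. The sketch below applies that criterion to a small star graph of our own choosing, not to an (x,y)-flower.

        import numpy as np

        # Star K_{1,3}: center node 0 connected to three leaves. The repeated
        # zero eigenvalue (multiplicity 2) forces two driver nodes.
        A = np.array([[0, 1, 1, 1],
                      [1, 0, 0, 0],
                      [1, 0, 0, 0],
                      [1, 0, 0, 0]], dtype=float)

        eigvals = np.linalg.eigvalsh(A)           # symmetric: geometric = algebraic
        unique = np.unique(np.round(eigvals, 8))
        max_mult = max(int(np.sum(np.isclose(eigvals, lam))) for lam in unique)
        print(f"minimum number of driver nodes: {max(1, max_mult)}")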

  18. Inverse kinematic problem for a random gradient medium in geometric optics approximation

    NASA Astrophysics Data System (ADS)

    Petersen, N. V.

    1990-03-01

    Scattering at random inhomogeneities in a gradient medium results in systematic deviations of the rays and travel times of refracted body waves from those corresponding to the deterministic velocity component. The character of the difference depends on the parameters of the deterministic and random velocity component. However, at great distances to the source, independently of the velocity parameters (weakly or strongly inhomogeneous medium), the most probable depth of the ray turning point is smaller than that corresponding to the deterministic velocity component, the most probable travel times also being lower. The relative uncertainty in the deterministic velocity component, derived from the mean travel times using methods developed for laterally homogeneous media (for instance, the Herglotz-Wiechert method), is systematic in character, but does not exceed the contrast of velocity inhomogeneities by magnitude. The gradient of the deterministic velocity component has a significant effect on the travel-time fluctuations. The variance at great distances to the source is mainly controlled by shallow inhomogeneities. The travel-time fluctuations are studied only for weakly inhomogeneous media.

  19. Quasi-Static Probabilistic Structural Analyses Process and Criteria

    NASA Technical Reports Server (NTRS)

    Goldberg, B.; Verderaime, V.

    1999-01-01

    Current deterministic structural methods are easily applied to substructures and components, and analysts have built great design insight and confidence in them over the years. However, deterministic methods cannot support systems risk analyses, and it was recently reported that deterministic treatment of statistical data is inconsistent with error propagation laws, which can result in unevenly conservative structural predictions. Assuming normal distributions and using statistical data formats throughout prevailing deterministic stress processes leads to a safety factor in statistical format which, integrated into the safety index, provides a safety factor and first-order reliability relationship. The embedded safety factor in the safety index expression allows a historically based risk to be determined and verified over a variety of quasi-static metallic substructures, consistent with traditional safety factor methods and NASA Std. 5001 criteria.
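
    In its standard first-order form (a textbook relationship consistent with, but not quoted from, this abstract), the safety index for normally distributed strength R and stress S is

        \beta = \frac{\mu_R - \mu_S}{\sqrt{\sigma_R^2 + \sigma_S^2}}, \qquad P_f = \Phi(-\beta),

    and writing \mu_R = SF \cdot \mu_S exposes the deterministic safety factor SF embedded in \beta, which is the kind of safety factor / reliability relationship described above.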

  20. Effect of Uncertainty on Deterministic Runway Scheduling

    NASA Technical Reports Server (NTRS)

    Gupta, Gautam; Malik, Waqar; Jung, Yoon C.

    2012-01-01

    Active runway scheduling involves scheduling departures for takeoffs and arrivals for runway crossing subject to numerous constraints. This paper evaluates the effect of uncertainty on a deterministic runway scheduler. The evaluation is done against a first-come-first-serve scheme. In particular, the sequence from a deterministic scheduler is frozen and the times adjusted to satisfy all separation criteria; this approach is tested against FCFS. The comparison is done for both system performance (throughput and system delay) and predictability, and varying levels of congestion are considered. The modeling of uncertainty is done in two ways: as equal uncertainty in availability at the runway as for all aircraft, and as increasing uncertainty for later aircraft. Results indicate that the deterministic approach consistently performs better than first-come-first-serve in both system performance and predictability.
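
    The first-come-first-serve baseline is simple to state in code; the single separation constant and ready times below are illustrative assumptions, not the paper's scenario data.

        # FCFS baseline sketch: serve operations in ready-time order, pushing
        # each start time to honor a required runway separation.
        SEP = 90  # required separation between consecutive operations, s (assumed)

        def fcfs_schedule(ready_times):
            """Schedule in ready-time order; each start >= previous start + SEP."""
            sched, prev = [], float("-inf")
            for t in sorted(ready_times):
                start = max(t, prev + SEP)
                sched.append(start)
                prev = start
            return sched

        ready = [0, 30, 45, 200, 210, 500]
        sched = fcfs_schedule(ready)
        delay = sum(s - r for s, r in zip(sched, sorted(ready)))
        print(f"schedule: {sched}, total delay: {delay} s")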

  1. Numerical Approach to Spatial Deterministic-Stochastic Models Arising in Cell Biology.

    PubMed

    Schaff, James C; Gao, Fei; Li, Ye; Novak, Igor L; Slepchenko, Boris M

    2016-12-01

    Hybrid deterministic-stochastic methods provide an efficient alternative to a fully stochastic treatment of models which include components with disparate levels of stochasticity. However, general-purpose hybrid solvers for spatially resolved simulations of reaction-diffusion systems are not widely available. Here we describe fundamentals of a general-purpose spatial hybrid method. The method generates realizations of a spatially inhomogeneous hybrid system by appropriately integrating capabilities of a deterministic partial differential equation solver with a popular particle-based stochastic simulator, Smoldyn. Rigorous validation of the algorithm is detailed, using a simple model of calcium 'sparks' as a testbed. The solver is then applied to a deterministic-stochastic model of spontaneous emergence of cell polarity. The approach is general enough to be implemented within biologist-friendly software frameworks such as Virtual Cell.

  2. System and method for constructing filters for detecting signals whose frequency content varies with time

    DOEpatents

    Qian, Shie; Dunham, Mark E.

    1996-01-01

    A system and method for constructing a bank of filters which detect the presence of signals whose frequency content varies with time. The present invention includes a novel system and method for developing one or more time templates designed to match the received signals of interest, and the bank of matched filters uses the one or more time templates to detect the received signals. Each matched filter compares the received signal x(t) with a respective, unique time template that has been designed to approximate a form of the signals of interest. The robust time domain template is assumed to be of the form w(t) = A(t)cos{2πφ(t)}, and the present invention uses the trajectory of a joint time-frequency representation of x(t) as an approximation of the instantaneous frequency function φ'(t). First, numerous data samples of the received signal x(t) are collected. A joint time-frequency representation is then applied to represent the signal, preferably using the time-frequency distribution series (also known as the Gabor spectrogram). The joint time-frequency transformation represents the analyzed signal energy at time t and frequency f, P(t,f), which is a three-dimensional plot of time vs. frequency vs. signal energy. Then P(t,f) is reduced to a multivalued function f(t), a two-dimensional plot of time vs. frequency, using a thresholding process. Curve fitting steps are then performed on the time/frequency plot, preferably using Levenberg-Marquardt curve fitting techniques, to derive a general instantaneous frequency function φ'(t) which best fits the multivalued function f(t), a trajectory of the joint time-frequency domain representation of x(t). Integrating φ'(t) along t yields φ(t), which is then inserted into the form of the time template equation. A suitable amplitude A(t) is also preferably determined. Once the time template has been determined, one or more filters are developed which each use a version or form of the time template.
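
    The template-construction pipeline maps naturally onto a few lines of SciPy; the ridge data below are synthetic, and scipy.optimize.curve_fit with method="lm" is Levenberg-Marquardt, matching the patent's preferred fitting technique (this is our sketch, not the patented implementation).

        import numpy as np
        from scipy.optimize import curve_fit

        # Synthetic time-frequency ridge f(t) standing in for the thresholded
        # Gabor-spectrogram trajectory.
        t = np.linspace(0.0, 1.0, 200)
        f_ridge = 50.0 + 120.0 * t**2 + np.random.default_rng(0).normal(0, 2, t.size)

        chirp = lambda t, a, b, c: a + b * t + c * t**2      # model for phi'(t)
        (a, b, c), _ = curve_fit(chirp, t, f_ridge, method="lm")

        phi = a * t + b * t**2 / 2 + c * t**3 / 3            # integral of phi'(t)
        template = np.cos(2 * np.pi * phi)                   # w(t) with A(t) = 1
        print(f"fitted phi'(t) = {a:.1f} + {b:.1f} t + {c:.1f} t^2")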

  3. Efficient room-temperature source of polarized single photons

    DOEpatents

    Lukishova, Svetlana G.; Boyd, Robert W.; Stroud, Carlos R.

    2007-08-07

    An efficient technique for producing deterministically polarized single photons uses liquid-crystal hosts of either monomeric or oligomeric/polymeric form to preferentially align the single emitters for maximum excitation efficiency. Deterministic molecular alignment also provides deterministically polarized output photons; using planar-aligned cholesteric liquid crystal hosts as 1-D photonic-band-gap microcavities tunable to the emitter fluorescence band to increase source efficiency, using liquid crystal technology to prevent emitter bleaching. Emitters comprise soluble dyes, inorganic nanocrystals or trivalent rare-earth chelates.

  4. Cb-LIKE - Thunderstorm forecasts up to six hours with fuzzy logic

    NASA Astrophysics Data System (ADS)

    Köhler, Martin; Tafferner, Arnold

    2016-04-01

    Thunderstorms with their accompanying effects like heavy rain, hail, or downdrafts cause delays and flight cancellations and therefore high additional cost for airlines and airport operators. A reliable thunderstorm forecast up to several hours could provide more time for decision makers in air traffic for an appropriate reaction on possible storm cells and initiation of adequate counteractions. To provide the required forecasts Cb-LIKE (Cumulonimbus-LIKElihood) has been developed at the DLR (Deutsches Zentrum für Luft- und Raumfahrt) Institute of Atmospheric Physics. The new algorithm is an automated system which designates areas with possible thunderstorm development by using model data of the COSMO-DE weather model, which is operated by the German Meteorological Service (DWD). A newly developed "Best-Member-Selection" method allows the automatic selection of that particular model run of a time-lagged COSMO-DE model ensemble which best matches the current thunderstorm situation. This ensures that the best available data basis is used for the calculation of the thunderstorm forecasts by Cb-LIKE. Altogether there are four different modes for the selection of the best member. Four atmospheric parameters (CAPE, vertical wind velocity, radar reflectivity and cloud top temperature) of the model output are used within the algorithm. A newly developed fuzzy logic system enables the subsequent combination of the model parameters and the calculation of a thunderstorm indicator within a value range of 12 to 88 for each grid point of the model domain for the following six hours in one-hour intervals. The higher the indicator value, the more the model parameters imply the development of thunderstorms. The quality of the Cb-LIKE thunderstorm forecasts was evaluated by a substantial verification using a neighborhood verification approach and multi-event contingency tables. The verification was performed for the whole summer period of 2012. On the basis of a deterministic object comparison with heavy precipitation cells observed by the radar-based thunderstorm tracking algorithm Rad-TRAM, several verification scores like BIAS, POD, FAR and CSI were calculated to identify possible advantages of the new algorithm. The presentation illustrates in detail the concept of the Cb-LIKE algorithm with regard to the fuzzy logic system and the Best-Member-Selection. Additionally some case studies and the most important results of the verification will be shown. The implementation of the forecasts into the DLR WxFUSION system, a user-oriented forecasting system for air traffic, will also be included.
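
    A minimal fuzzy-inference sketch shows how four model fields could be combined into an indicator on Cb-LIKE's 12-88 scale; the membership ramps, thresholds, and aggregation rule below are invented for illustration and are not DLR's actual rule base.

        import numpy as np

        def membership(x, lo, hi):
            """Linear ramp from 0 (x <= lo) to 1 (x >= hi)."""
            return np.clip((x - lo) / (hi - lo), 0.0, 1.0)

        def cb_indicator(cape, w, refl, ctt):
            m = np.array([
                membership(cape, 300.0, 2500.0),   # CAPE, J/kg
                membership(w, 0.5, 5.0),           # updraft velocity, m/s
                membership(refl, 20.0, 50.0),      # radar reflectivity, dBZ
                membership(-ctt, 30.0, 60.0),      # colder cloud tops score higher
            ])
            degree = m.mean()                      # simple aggregation rule
            return 12.0 + 76.0 * degree            # map [0, 1] onto [12, 88]

        print(f"indicator: {cb_indicator(1800.0, 3.2, 42.0, -52.0):.1f}")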

   5. Etude experimentale et optimisation d'un systeme hybride hydraulique pour camions a ordures et amelioration des performances par raffinement de sa logique de controle [Experimental study and optimization of a hydraulic hybrid system for refuse trucks and performance improvement through refinement of its control logic]

    NASA Astrophysics Data System (ADS)

    Lacroix, Benoit

    The aim of this thesis is to demonstrate experimentally the operation of a hydraulic hybrid system specifically dedicated to the application of refuse trucks in addition to proposing solutions to improve its control strategy. The developed hybrid system recovers the vehicle's kinetic energy during braking. A variable displacement hydraulic motor then uses the energy stored in a hydraulic accumulator to assist the internal combustion engine (ICE) at suitable times. The particular aspect of this system is that assistance to the ICE can occur when it operates at idle and drives the auxiliary hydraulic equipment of the refuse truck. Essentially, the control strategy initially developed maximizes the recovery of braking energy and uses that energy to minimize the solicitation of the ICE at idle. The experimental results obtained with two prototypes tested in real operating conditions show that the hybrid system can recover a significant portion of braking energy. In addition, the results show that it is possible to reduce the load on the ICE during idle with the application of an assisting torque. However, the advantage of assisting the ICE in specific areas of the operating range is slim since the ICE's gross efficiency varies only slightly depending on conditions of operation. This is confirmed by the optimization of the control logic using deterministic dynamic programming. Indeed, by managing the pressure in the accumulator to maximize the amount of energy recovered during braking and by dosing the assistance to the ICE in an ideal fashion, the optimal control only managed to improve fuel savings by 6% in comparison to the original control. Therefore, since the efforts that would be required to emulate the ideal behavior in real time are significant for a relatively small and uncertain gain, the initial control logic is considered near optimal. Finally, this thesis proposes an improved version of the torque assisting hybrid system that could shut down the ICE when the vehicle is stopped while maintaining functional the auxiliary hydraulic equipment. An optimization of the control logic indicates that proper management of the pressure in the accumulator would allow turning off the ICE most of the time at stop and thus, would increase the fuel savings by over 40% compared to the original system. The simulation of a basic control strategy shows that such pressure management may be feasible in real time and that the potential gain in fuel savings is achievable. Keywords: hybrid system, hydraulic, control, refuse truck.

  6. SELENA - An open-source tool for seismic risk and loss assessment using a logic tree computation procedure

    NASA Astrophysics Data System (ADS)

    Molina, S.; Lang, D. H.; Lindholm, C. D.

    2010-03-01

    The era of earthquake risk and loss estimation basically began with the seminal paper on hazard by Allin Cornell in 1968. Following the 1971 San Fernando earthquake, the first studies placed strong emphasis on the prediction of human losses (the numbers of casualties and injured, used to estimate the needs in terms of health care and shelters in the immediate aftermath of a strong event). In contrast to these early risk modeling efforts, later studies have focused on the disruption of the serviceability of roads, telecommunications and other important lifeline systems. In the 1990s, the National Institute of Building Sciences (NIBS) developed a tool (HAZUS®99) for the Federal Emergency Management Agency (FEMA), whose goal was to incorporate the best quantitative methodology into earthquake loss estimates. Herein, the current version of the open-source risk and loss estimation software SELENA v4.1 is presented. Using the spectral displacement-based approach (capacity spectrum method), this fully self-contained tool analytically computes the degree of damage for specific building typologies as well as the associated economic losses and number of casualties. The earthquake ground shaking estimates for SELENA v4.1 can be calculated or provided in three different ways: deterministic, probabilistic or based on near-real-time data. The main distinguishing feature of SELENA compared to other risk estimation software tools is that it is implemented in a 'logic tree' computation scheme which accounts for uncertainties in any input (e.g., scenario earthquake parameters, ground-motion prediction equations, soil models) or inventory data (e.g., building typology, capacity curves and fragility functions). Each dataset used in the analysis is assigned a decimal weighting factor defining the weight of the respective branch of the logic tree. The weighting of the input parameters accounts for the epistemic and aleatory uncertainties that will always accompany the necessary parameterization of the different types of input data. Like previous SELENA versions, SELENA v4.1 is coded in MATLAB, which allows for easy dissemination among the scientific-technical community. Furthermore, any user has access to the source code in order to adapt, improve or refine the tool according to his or her particular needs. The handling of SELENA's current version and the provision of input data are customized for an academic environment, but the results can then support decision-makers of local, state and regional governmental agencies in estimating possible losses from future earthquakes.
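
    The logic-tree weighting described above can be illustrated with a minimal sketch: each branch combination carries the product of its decimal weights, and the branch results are combined into a weighted estimate. The model names and loss values below are hypothetical, not SELENA internals.

```python
# Sketch of SELENA-style logic-tree weighting over hypothetical alternatives.

from itertools import product

gmpes = [("GMPE-A", 0.6), ("GMPE-B", 0.4)]          # ground-motion models
soils = [("soil-model-1", 0.7), ("soil-model-2", 0.3)]

def estimate_loss(gmpe, soil):
    """Placeholder for the capacity-spectrum loss computation."""
    table = {("GMPE-A", "soil-model-1"): 120.0,
             ("GMPE-A", "soil-model-2"): 150.0,
             ("GMPE-B", "soil-model-1"): 100.0,
             ("GMPE-B", "soil-model-2"): 135.0}
    return table[(gmpe, soil)]

expected = 0.0
for (g, wg), (s, ws) in product(gmpes, soils):
    w = wg * ws                      # branch weight = product of weights
    expected += w * estimate_loss(g, s)

print("weighted mean loss estimate:", expected)     # arbitrary units
```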

  7. ClusterCAD: a computational platform for type I modular polyketide synthase design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eng, Clara H.; Backman, Tyler W H; Bailey, Constance B.

    Here, we present ClusterCAD, a web-based toolkit designed to leverage the collinear structure and deterministic logic of type I modular polyketide synthases (PKSs) for synthetic biology applications. The unique organization of these megasynthases, combined with the diversity of their catalytic domain building blocks, has fueled an interest in harnessing the biosynthetic potential of PKSs for the microbial production of both novel natural product analogs and industrially relevant small molecules. However, a limited theoretical understanding of the determinants of PKS fold and function poses a substantial barrier to the design of active variants, and identifying strategies to reliably construct functional PKS chimeras remains an active area of research. In this work, we formalize a paradigm for the design of PKS chimeras and introduce ClusterCAD as a computational platform to streamline and simplify the process of designing experiments to test strategies for engineering PKS variants. ClusterCAD provides chemical structures with stereochemistry for the intermediates generated by each PKS module, as well as sequence- and structure-based search tools that allow users to identify modules based either on amino acid sequence or on the chemical structure of the cognate polyketide intermediate. ClusterCAD can be accessed at https://clustercad.jbei.org and at http://clustercad.igb.uci.edu.

  8. Engineering Photon-Photon Interactions within Rubidium-Filled Waveguides

    NASA Astrophysics Data System (ADS)

    Perrella, C.; Light, P. S.; Vahid, S. Afshar; Benabid, F.; Luiten, A. N.

    2018-04-01

    Strong photon-photon interactions are a required ingredient for deterministic two-photon optical quantum logic gates. Multiphoton transitions in dense atomic vapors have been shown to be a promising avenue for producing such interactions. The strength of a multiphoton interaction can be enhanced by conducting the interaction in highly confined geometries such as small-cross-section optical waveguides. We demonstrate, both experimentally and theoretically, that the strength of such interactions scales only with the optical mode diameter, d, not d² as might be initially expected. This weakening of the interaction arises from atomic motion inside the waveguides. We create an interaction between two optical signals, at 780 and 776 nm, using the 5S1/2 → 5D5/2 two-photon transition in rubidium vapor within a range of hollow-core fibers with different core sizes. The interaction strength is characterized by observing the absorption and phase shift induced on the 780-nm beam, which is in close agreement with theoretical modeling that accounts for the atomic motion inside the fibers. These observations demonstrate that transit-time effects upon multiphoton transitions are of key importance when engineering photon-photon interactions within small-cross-section waveguides that might otherwise be thought to lead to enhanced optical nonlinearity through increased intensities.

  9. Excited-State Spin Manipulation and Intrinsic Nuclear Spin Memory using Single Nitrogen-Vacancy Centers in Diamond

    NASA Astrophysics Data System (ADS)

    Fuchs, Gregory

    2011-03-01

    Nitrogen vacancy (NV) center spins in diamond have emerged as a promising solid-state system for quantum information processing and precision metrology at room temperature. Understanding and developing the built-in resources of this defect center for quantum logic and memory is critical to achieving these goals. In the first case, we use nanosecond duration microwave manipulation to study the electronic spin of single NV centers in their orbital excited-state (ES). We demonstrate ES Rabi oscillations and use multi-pulse resonant control to differentiate between phonon-induced dephasing, orbital relaxation, and coherent electron-nuclear interactions. A second resource, the nuclear spin of the intrinsic nitrogen atom, may be an ideal candidate for a quantum memory due to both the long coherence of nuclear spins and their deterministic presence. We investigate coherent swaps between the NV center electronic spin state and the nuclear spin state of nitrogen using Landau-Zener transitions performed outside the asymptotic regime. The swap gates are generated using lithographically fabricated waveguides that form a high-bandwidth, two-axis vector magnet on the diamond substrate. These experiments provide tools for coherently manipulating and storing quantum information in a scalable solid-state system at room temperature. We gratefully acknowledge support from AFOSR, ARO, and DARPA.

  10. Experiences of the use of FOX, an intelligent agent, for programming cochlear implant sound processors in new users.

    PubMed

    Vaerenberg, Bart; Govaerts, Paul J; de Ceulaer, Geert; Daemers, Kristin; Schauwers, Karen

    2011-01-01

    This report describes the application of the software tool "Fitting to Outcomes eXpert" (FOX) in programming the cochlear implant (CI) processor in new users. FOX is an intelligent agent to assist in the programming of CI processors. The concept of FOX is to modify maps on the basis of specific outcome measures, using heuristic logic based on a set of deterministic "rules". A prospective study was conducted on eight consecutive CI users with a follow-up of three months. Eight adult subjects with postlingual deafness were implanted with the Advanced Bionics HiRes90k device. The implants were programmed using FOX, running a set of rules known as the Eargroup's EG0910 advice, which features a set of "automaps". The protocol employed for the initial three months is presented, with a description of the map modifications generated by FOX and the corresponding psychoacoustic test results. The 3-month median results show a pure tone average (PTA) of 25 dB HL, phoneme scores of 77% (at 55 dB SPL) and 71% (at 70 dB SPL) on speech audiometry, and loudness scaling in or near the normal zone at different frequencies. It is concluded that this approach is a feasible way to start up CI fitting and yields good outcomes.
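
    A heavily simplified sketch of the outcome-driven, rule-based map adjustment that the abstract attributes to FOX follows. The outcome measure, target value and adjustment step are all hypothetical; the real EG0910 rule set is far richer.

```python
# Toy outcome-driven fitting rule in the style described for FOX.
# Targets and step sizes below are hypothetical, not clinical values.

def adjust_map(m_levels, pta_by_freq, target_db=30):
    """Raise stimulation levels at frequencies whose aided threshold
    (PTA) is worse (higher) than the target."""
    new_levels = dict(m_levels)
    for freq, pta in pta_by_freq.items():
        if pta > target_db:                        # outcome below expectation
            new_levels[freq] = m_levels[freq] + 5  # hypothetical +5 CU step
    return new_levels

m = {250: 180, 1000: 185, 4000: 175}   # current M-levels (current units)
pta = {250: 25, 1000: 40, 4000: 35}    # measured aided thresholds (dB HL)
print(adjust_map(m, pta))              # only 1000 and 4000 Hz are raised
```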

  11. ClusterCAD: a computational platform for type I modular polyketide synthase design

    DOE PAGES

    Eng, Clara H.; Backman, Tyler W H; Bailey, Constance B.; ...

    2017-10-11

    Here, we present ClusterCAD, a web-based toolkit designed to leverage the collinear structure and deterministic logic of type I modular polyketide synthases (PKSs) for synthetic biology applications. The unique organization of these megasynthases, combined with the diversity of their catalytic domain building blocks, has fueled an interest in harnessing the biosynthetic potential of PKSs for the microbial production of both novel natural product analogs and industrially relevant small molecules. However, a limited theoretical understanding of the determinants of PKS fold and function poses a substantial barrier to the design of active variants, and identifying strategies to reliably construct functional PKS chimeras remains an active area of research. In this work, we formalize a paradigm for the design of PKS chimeras and introduce ClusterCAD as a computational platform to streamline and simplify the process of designing experiments to test strategies for engineering PKS variants. ClusterCAD provides chemical structures with stereochemistry for the intermediates generated by each PKS module, as well as sequence- and structure-based search tools that allow users to identify modules based either on amino acid sequence or on the chemical structure of the cognate polyketide intermediate. ClusterCAD can be accessed at https://clustercad.jbei.org and at http://clustercad.igb.uci.edu.

  12. Information flow and causality as rigorous notions ab initio

    NASA Astrophysics Data System (ADS)

    Liang, X. San

    2016-11-01

    Information flow, or information transfer, the widely applicable general physics notion, can be rigorously derived from first principles, rather than axiomatically proposed as an ansatz. Its logical association with causality is firmly rooted in the dynamical system that lies beneath. The principle of nil causality, which reads that an event is not causal to another if the evolution of the latter is independent of the former, and which transfer entropy analysis and the Granger causality test fail to verify in many situations, turns out to be a proven theorem here. Established in this study are the information flows among the components of time-discrete mappings and time-continuous dynamical systems, both deterministic and stochastic. They have been obtained explicitly in closed form, and applied to benchmark systems such as the Kaplan-Yorke map, the Rössler system, the baker transformation, the Hénon map, and a stochastic potential flow. Besides unraveling the causal relations expected from the respective systems, some of the applications show that the information flow structure underlying a complex trajectory pattern can be tractable. For linear systems, the resulting remarkably concise formula asserts analytically that causation implies correlation, while correlation does not imply causation, providing a mathematical basis for the long-standing philosophical debate over causation versus correlation.
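
    For two observed time series, a closed-form linear estimator of the information flow exists in Liang's related work; the sketch below implements that formula as quoted from memory, so treat the expression itself, and the synthetic example, as assumptions to be checked against the original papers before use.

```python
# Sketch of the linear information-flow estimator T(2->1) between two time
# series (formula reproduced from memory -- verify against Liang's papers).

import numpy as np

def info_flow_2_to_1(x1, x2, dt=1.0):
    dx1 = (x1[1:] - x1[:-1]) / dt          # Euler forward difference of x1
    x1, x2 = x1[:-1], x2[:-1]
    C = np.cov(np.vstack([x1, x2]))        # sample covariances
    c11, c12, c22 = C[0, 0], C[0, 1], C[1, 1]
    c1d1 = np.cov(x1, dx1)[0, 1]           # cov(x1, dx1/dt)
    c2d1 = np.cov(x2, dx1)[0, 1]           # cov(x2, dx1/dt)
    return (c11 * c12 * c2d1 - c12**2 * c1d1) / (c11**2 * c22 - c11 * c12**2)

rng = np.random.default_rng(0)
x2 = np.cumsum(rng.normal(size=5000)) * 0.01
x1 = np.zeros_like(x2)
for t in range(1, len(x2)):                # x2 drives x1, not vice versa
    x1[t] = 0.9 * x1[t - 1] + 0.5 * x2[t - 1] + 0.1 * rng.normal()

print("T(2->1) =", info_flow_2_to_1(x1, x2))   # expected: clearly nonzero
print("T(1->2) =", info_flow_2_to_1(x2, x1))   # expected: near zero
```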

  13. Formal analysis and evaluation of the back-off procedure in IEEE802.11P VANET

    NASA Astrophysics Data System (ADS)

    Jin, Li; Zhang, Guoan; Zhu, Xiaojun

    2017-07-01

    The back-off procedure is one of the medium access control technologies in the IEEE 802.11p communication protocol. It plays an important role in avoiding message collisions and allocating channel resources. Formal methods are effective approaches for studying the performance of communication systems. In this paper, we establish a discrete-time model for the back-off procedure. We use Markov decision processes (MDPs) to model the non-deterministic and probabilistic behaviors of the procedure, and use the probabilistic computation tree logic (PCTL) language to express different properties, which ensure that the discrete-time model performs its basic functionality. Based on the model and the PCTL specifications, we study the effect of the contention window length on the number of senders in the neighborhood of given receivers, and on the station's expected cost required by the back-off procedure to successfully send packets. The variation of the window length may increase or decrease the maximum probability of correct transmissions within a time contention unit. We propose to use the PRISM model checker to describe our proposed back-off procedure for the IEEE 802.11p protocol in vehicular networks, and define different probability property formulas to automatically verify the model and derive numerical results. The obtained results are helpful for justifying the values of the time contention unit.
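
    The following toy model captures the flavor of the quantities one would check with PCTL over such a back-off model: the probability of delivery within the retry limit, and the expected slot cost. The constant per-attempt collision probability is a strong simplification of the paper's MDP and purely an assumption.

```python
# Toy binary-exponential-backoff model in the spirit of the properties the
# paper verifies with PRISM/PCTL. All constants are assumptions.

W0, WMAX, RETRIES = 16, 1024, 7
P_COLLIDE = 0.3        # assumed probability that any single attempt collides

def success_prob(max_attempts=RETRIES):
    """Probability of delivering within the retry limit."""
    return 1.0 - P_COLLIDE ** max_attempts

def expected_cost():
    """Expected slots spent: mean uniform backoff + 1 tx slot per attempt,
    weighted by the probability of reaching that attempt."""
    cost, p_reach, w = 0.0, 1.0, W0
    for _ in range(RETRIES):
        cost += p_reach * ((w - 1) / 2.0 + 1.0)
        p_reach *= P_COLLIDE               # next attempt happens on failure
        w = min(2 * w, WMAX)               # window doubling up to the cap
    return cost

print(f"P(success within {RETRIES} attempts) = {success_prob():.4f}")
print(f"expected slots spent = {expected_cost():.1f}")
```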

  14. Heralded entanglement between solid-state qubits separated by three metres.

    PubMed

    Bernien, H; Hensen, B; Pfaff, W; Koolstra, G; Blok, M S; Robledo, L; Taminiau, T H; Markham, M; Twitchen, D J; Childress, L; Hanson, R

    2013-05-02

    Quantum entanglement between spatially separated objects is one of the most intriguing phenomena in physics. The outcomes of independent measurements on entangled objects show correlations that cannot be explained by classical physics. As well as being of fundamental interest, entanglement is a unique resource for quantum information processing and communication. Entangled quantum bits (qubits) can be used to share private information or implement quantum logical gates. Such capabilities are particularly useful when the entangled qubits are spatially separated, providing the opportunity to create highly connected quantum networks or extend quantum cryptography to long distances. Here we report entanglement of two electron spin qubits in diamond with a spatial separation of three metres. We establish this entanglement using a robust protocol based on creation of spin-photon entanglement at each location and a subsequent joint measurement of the photons. Detection of the photons heralds the projection of the spin qubits onto an entangled state. We verify the resulting non-local quantum correlations by performing single-shot readout on the qubits in different bases. The long-distance entanglement reported here can be combined with recently achieved initialization, readout and entanglement operations on local long-lived nuclear spin registers, paving the way for deterministic long-distance teleportation, quantum repeaters and extended quantum networks.

  15. Nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates

    DOEpatents

    Melechko, Anatoli V [Oak Ridge, TN; McKnight, Timothy E [Greenback, TN; Guillorn, Michael A [Ithaca, NY; Ilic, Bojan [Ithaca, NY; Merkulov, Vladimir I [Knoxville, TN; Doktycz, Mitchel J [Knoxville, TN; Lowndes, Douglas H [Knoxville, TN; Simpson, Michael L [Knoxville, TN

    2011-08-23

    Methods, manufactures, machines and compositions are described for nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates. An apparatus, includes a substrate and a nanoreplicant structure coupled to a surface of the substrate.

  16. Numerical Approach to Spatial Deterministic-Stochastic Models Arising in Cell Biology

    PubMed Central

    Gao, Fei; Li, Ye; Novak, Igor L.; Slepchenko, Boris M.

    2016-01-01

    Hybrid deterministic-stochastic methods provide an efficient alternative to a fully stochastic treatment of models which include components with disparate levels of stochasticity. However, general-purpose hybrid solvers for spatially resolved simulations of reaction-diffusion systems are not widely available. Here we describe fundamentals of a general-purpose spatial hybrid method. The method generates realizations of a spatially inhomogeneous hybrid system by appropriately integrating capabilities of a deterministic partial differential equation solver with a popular particle-based stochastic simulator, Smoldyn. Rigorous validation of the algorithm is detailed, using a simple model of calcium ‘sparks’ as a testbed. The solver is then applied to a deterministic-stochastic model of spontaneous emergence of cell polarity. The approach is general enough to be implemented within biologist-friendly software frameworks such as Virtual Cell. PMID:27959915

  17. Stochasticity and determinism in models of hematopoiesis.

    PubMed

    Kimmel, Marek

    2014-01-01

    This chapter represents a novel view of modeling in hematopoiesis, synthesizing both deterministic and stochastic approaches. Whereas the stochastic models work in situations where chance dominates, for example when the number of cells is small, or under random mutations, the deterministic models are more important for large-scale, normal hematopoiesis. New types of models are on the horizon. These models attempt to account for distributed environments such as hematopoietic niches and their impact on dynamics. Mixed effects of such structures and chance events are largely unknown and constitute both a challenge and promise for modeling. Our discussion is presented under the separate headings of deterministic and stochastic modeling; however, the connections between both are frequently mentioned. Four case studies are included to elucidate important examples. We also include a primer of deterministic and stochastic dynamics for the reader's use.

  18. Hybrid deterministic/stochastic simulation of complex biochemical systems.

    PubMed

    Lecca, Paola; Bagagiolo, Fabio; Scarpa, Marina

    2017-11-21

    In a biological cell, cellular functions and the genetic regulatory apparatus are implemented and controlled by complex networks of chemical reactions involving genes, proteins, and enzymes. Accurate computational models are indispensable means for understanding the mechanisms behind the evolution of a complex system, which are not always explorable with wet-lab experiments. To serve their purpose, computational models should be able to describe and simulate the complexity of a biological system in many of its aspects. Moreover, they should be implemented by efficient algorithms requiring the shortest possible execution time, to avoid excessively lengthening the time between data analysis and any subsequent experiment. Besides the features of their topological structure, the complexity of biological networks also lies in their dynamics, which are often non-linear and stiff. The stiffness is due to the presence of molecular species whose abundances fluctuate by many orders of magnitude. A fully stochastic simulation of a stiff system is computationally expensive. On the other hand, continuous models are less costly, but they fail to capture the stochastic behaviour of small populations of molecular species. We introduce a new efficient hybrid stochastic-deterministic computational model and the software tool MoBioS (MOlecular Biology Simulator) implementing it. The mathematical model of MoBioS uses continuous differential equations to describe the deterministic reactions and a Gillespie-like algorithm to describe the stochastic ones. Unlike the majority of current hybrid methods, the MoBioS algorithm divides the reaction set into fast reactions, moderate reactions, and slow reactions, and implements a hysteresis switching between the stochastic model and the deterministic model. Fast reactions are approximated as continuous-deterministic processes and modelled by deterministic rate equations. Moderate reactions are those whose reaction waiting time is greater than the fast-reaction waiting time but smaller than the slow-reaction waiting time. A moderate reaction is approximated as a stochastic (deterministic) process if it was classified as a stochastic (deterministic) process at the time at which it crossed the threshold of low (high) waiting time. A Gillespie first reaction method is implemented to select and execute the slow reactions. The performance of MoBioS was tested on a typical example of hybrid dynamics: DNA transcription regulation. The simulated dynamic profiles of the reagents' abundances and the estimate of the error introduced by the fully deterministic approach were used to evaluate the consistency of the computational model and of the software tool.
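
    The partition with hysteresis described above can be sketched compactly: reactions are classified by their expected waiting time 1/a_j for propensity a_j, with two thresholds bounding a "moderate" band in which a reaction keeps its previous class. The thresholds and propensities below are illustrative, not MoBioS defaults.

```python
# Sketch of a fast/moderate/slow reaction partition with hysteresis,
# as used by MoBioS-like hybrid solvers. All numbers are illustrative.

TAU_FAST, TAU_SLOW = 0.01, 1.0   # waiting-time thresholds (hysteresis band)

def classify(propensity, previous):
    """Expected waiting time of a reaction with propensity a is 1/a."""
    tau = float("inf") if propensity == 0 else 1.0 / propensity
    if tau < TAU_FAST:
        return "deterministic"   # fast: treat with rate equations
    if tau > TAU_SLOW:
        return "stochastic"      # slow: treat with a Gillespie-type method
    return previous              # moderate zone: keep the previous class

state = {"R1": "stochastic", "R2": "deterministic", "R3": "stochastic"}
propensities = {"R1": 500.0, "R2": 5.0, "R3": 0.1}
for rxn, a in propensities.items():
    state[rxn] = classify(a, state[rxn])
print(state)   # R1 -> deterministic, R2 keeps its class, R3 -> stochastic
```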

  19. Failed rib region prediction in a human body model during crash events with precrash braking.

    PubMed

    Guleyupoglu, B; Koya, B; Barnard, R; Gayzik, F S

    2018-02-28

    The objective of this study is twofold. We used a validated human body finite element model to study the predicted chest injury (focusing on rib fracture as a function of element strain) based on varying levels of simulated precrash braking, and we compare deterministic and probabilistic methods of rib injury prediction in the computational model. The Global Human Body Models Consortium (GHBMC) M50-O model was gravity settled in the driver position of a generic interior equipped with an advanced 3-point belt and airbag. Twelve cases were investigated with permutations of failure method, precrash braking system, and crash severity. The severities used were median (17 kph), severe (34 kph), and New Car Assessment Program (NCAP; 56.4 kph). Cases with failure enabled removed rib cortical bone elements once 1.8% effective plastic strain was exceeded. Alternatively, a probabilistic framework found in the literature was used to predict rib failure. Both the probabilistic and deterministic methods take location (anterior, lateral, and posterior) into consideration. The deterministic method is based on a rubric that defines failed rib regions dependent on a threshold for contiguous failed elements. The probabilistic method depends on age-based strain and failure functions. Kinematics between both methods were similar (peak max deviation: ΔX head = 17 mm; ΔZ head = 4 mm; ΔX thorax = 5 mm; ΔZ thorax = 1 mm). Seat belt forces at the time of probabilistic failed region initiation were lower than those at deterministic failed region initiation. The probabilistic method predicted more failed regions in the rib (an analog for fracture) than the deterministic method in all but one case, where they were equal. The failed region patterns between models are similar; however, differences arise because element elimination reduces stress, which causes probabilistic failed regions to continue to accumulate after no further deterministic failed regions would be predicted. Both the probabilistic and deterministic methods indicate similar trends with regard to the effect of precrash braking; however, there are tradeoffs. The deterministic failed region method is more spatially sensitive to failure and more sensitive to belt loads. The probabilistic failed region method allows for increased postprocessing capability with respect to age, and it predicted more failed regions than the deterministic method due to differences in force distribution.
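
    To make the contrast concrete, here is a minimal sketch of the two failure criteria: the deterministic 1.8% effective-plastic-strain threshold quoted above, against a hypothetical age-dependent logistic failure-probability curve standing in for the literature functions the authors used.

```python
# Deterministic vs. probabilistic rib-failure criteria. The 1.8% threshold
# comes from the abstract; the logistic curve and its parameters are
# hypothetical illustrations only.

import math

def deterministic_failed(plastic_strain):
    return plastic_strain > 0.018            # element eliminated past 1.8%

def failure_probability(strain, age):
    """Hypothetical logistic curve: older ribs fail at lower strains."""
    strain50 = 0.025 - 0.0001 * (age - 25)   # strain at 50% failure prob.
    return 1.0 / (1.0 + math.exp(-(strain - strain50) / 0.003))

for eps in (0.010, 0.018, 0.025):
    print(eps, deterministic_failed(eps),
          round(failure_probability(eps, age=50), 3))
```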

  20. Pro Free Will Priming Enhances “Risk-Taking” Behavior in the Iowa Gambling Task, but Not in the Balloon Analogue Risk Task: Two Independent Priming Studies

    PubMed Central

    Schrag, Yann; Tremea, Alessandro; Lagger, Cyril; Ohana, Noé; Mohr, Christine

    2016-01-01

    Studies indicated that people behave less responsibly after exposure to information containing deterministic statements as compared to free will statements or neutral statements. Thus, deterministic primes should lead to enhanced risk-taking behavior. We tested this prediction in two studies with healthy participants. In experiment 1, we tested 144 students (24 men) in the laboratory using the Iowa Gambling Task. In experiment 2, we tested 274 participants (104 men) online using the Balloon Analogue Risk Task. In the Iowa Gambling Task, the free will priming condition resulted in more risky decisions than both the deterministic and neutral priming conditions. We observed no priming effects on risk-taking behavior in the Balloon Analogue Risk Task. To explain these unpredicted findings, we consider the somatic marker hypothesis, a gain frequency approach, as well as attention to gains and/or inattention to losses. In addition, we highlight the necessity to consider both pro free will and deterministic priming conditions in future studies. Importantly, our and previous results indicate that the effects of pro free will and deterministic priming do not oppose each other on a frequently assumed continuum. PMID:27018854

  1. Pro Free Will Priming Enhances "Risk-Taking" Behavior in the Iowa Gambling Task, but Not in the Balloon Analogue Risk Task: Two Independent Priming Studies.

    PubMed

    Schrag, Yann; Tremea, Alessandro; Lagger, Cyril; Ohana, Noé; Mohr, Christine

    2016-01-01

    Studies indicated that people behave less responsibly after exposure to information containing deterministic statements as compared to free will statements or neutral statements. Thus, deterministic primes should lead to enhanced risk-taking behavior. We tested this prediction in two studies with healthy participants. In experiment 1, we tested 144 students (24 men) in the laboratory using the Iowa Gambling Task. In experiment 2, we tested 274 participants (104 men) online using the Balloon Analogue Risk Task. In the Iowa Gambling Task, the free will priming condition resulted in more risky decisions than both the deterministic and neutral priming conditions. We observed no priming effects on risk-taking behavior in the Balloon Analogue Risk Task. To explain these unpredicted findings, we consider the somatic marker hypothesis, a gain frequency approach, as well as attention to gains and/or inattention to losses. In addition, we highlight the necessity to consider both pro free will and deterministic priming conditions in future studies. Importantly, our and previous results indicate that the effects of pro free will and deterministic priming do not oppose each other on a frequently assumed continuum.

  2. Nonlinear propagation of vector extremely short pulses in a medium of symmetric and asymmetric molecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sazonov, S. V., E-mail: sazonov.sergey@gmail.com; Ustinov, N. V., E-mail: n-ustinov@mail.ru

    The nonlinear propagation of extremely short electromagnetic pulses in a medium of symmetric and asymmetric molecules placed in static magnetic and electric fields is theoretically studied. Asymmetric molecules differ in that they have nonzero permanent dipole moments in stationary quantum states. A system of wave equations is derived for the ordinary and extraordinary components of pulses. It is shown that this system can be reduced in some cases to a system of coupled Ostrovsky equations and to equations integrable by the inverse scattering transform method, including the vector version of the Ostrovsky-Vakhnenko equation. Different types of solutions of this system are considered. Only solutions representing the superposition of periodic solutions are single-valued, whereas soliton and breather solutions are multivalued.

  3. Efficient level set methods for constructing wavefronts in three spatial dimensions

    NASA Astrophysics Data System (ADS)

    Cheng, Li-Tien

    2007-10-01

    Wavefront construction in geometrical optics has long faced the twin difficulties of dealing with multi-valued forms and resolution of wavefront surfaces. A recent change in viewpoint, however, has demonstrated that working in phase space on bicharacteristic strips using Eulerian methods can bypass both difficulties. The level set method for interface dynamics is a suitable choice for the Eulerian method. Unfortunately, in three-dimensional space, the setting of interest for most practical applications, the advantages of this method are largely offset by a new problem: the high dimension of phase space. In this work, we present new types of level set algorithms that remove this obstacle and demonstrate their ability to accurately construct wavefronts under high resolution. These results propel the level set method forward significantly as a competitive approach in geometrical optics under realistic conditions.
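
    For a minimal flavor of the underlying machinery, the sketch below applies first-order Godunov upwinding to the level set equation phi_t + F|grad phi| = 0 with constant normal speed on a 2-D grid. The paper's phase-space algorithms are considerably more elaborate; this only shows the basic Eulerian front-propagation step.

```python
# First-order level set evolution: phi_t + F |grad phi| = 0 with F > 0.
# A circle of radius 0.1 expands outward at unit normal speed.

import numpy as np

n, h, F, steps = 101, 1.0 / 100, 1.0, 50
dt = 0.4 * h                               # CFL-safe time step
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
phi = np.sqrt((X - 0.5) ** 2 + (Y - 0.5) ** 2) - 0.1  # signed distance

for _ in range(steps):
    # one-sided differences (Godunov upwinding for F > 0)
    dmx = np.diff(phi, axis=0, prepend=phi[:1]) / h      # backward d/dx
    dpx = np.diff(phi, axis=0, append=phi[-1:]) / h      # forward  d/dx
    dmy = np.diff(phi, axis=1, prepend=phi[:, :1]) / h
    dpy = np.diff(phi, axis=1, append=phi[:, -1:]) / h
    grad = np.sqrt(np.maximum(dmx, 0) ** 2 + np.minimum(dpx, 0) ** 2 +
                   np.maximum(dmy, 0) ** 2 + np.minimum(dpy, 0) ** 2)
    phi = phi - dt * F * grad

# the zero level set has grown from radius 0.1 to about 0.1 + F*dt*steps
print("expected front radius:", 0.1 + F * dt * steps)
```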

  4. Graphical representation of robot grasping quality measures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Varma, V.; Tasch, U.

    1993-11-01

    When an object is held by a multi-fingered hand, the values of the contact forces can be multivalued. An objective function, when used in conjunction with the frictional and geometric constraints of the grasp, can, however, give a unique set of finger force values. The selection of the objective function for determining the finger forces depends on the type of grasp required, the material properties of the object, and the limitations of the robot fingers. In this paper several optimization functions are studied and their merits highlighted. A graphical representation of the finger force values and the objective function is introduced that enables one to select and compare various grasping configurations. The impending motion of the object at different torque and finger force values is determined by observing the normalized coefficient-of-friction plots.

  5. Van der Waals model for phase transitions in thermoresponsive surface films.

    PubMed

    McCoy, John D; Curro, John G

    2009-05-21

    Phase transitions in polymeric surface films are studied with a simple model based on the van der Waals equation of state. Each chain is modeled by a single bead attached to the surface by an entropic-Hooke's law spring. The surface coverage is controlled by adjusting the chemical potential, and the equilibrium density profile is calculated with density functional theory. The interesting feature of this model is the multivalued nature of the density profile seen at low temperature. This van der Waals loop behavior is resolved with a Maxwell construction between a high-density phase near the wall and a low-density phase in a "vertical" phase transition. Signatures of the phase transition in experimentally measurable quantities are then found. Numerical calculations are presented for isotherms of surface pressure, for the Poisson ratio, and for the swelling ratio.
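
    The Maxwell construction invoked above can be illustrated on the generic reduced van der Waals fluid (not the paper's tethered-bead surface model): find the pressure at which the two areas of the multivalued loop cancel.

```python
# Equal-area Maxwell construction for the reduced van der Waals isotherm.

import numpy as np
from scipy.optimize import brentq

T = 0.9  # reduced temperature, below the critical point T = 1

def p(v):
    """Reduced van der Waals equation of state."""
    return 8.0 * T / (3.0 * v - 1.0) - 3.0 / v**2

def area_mismatch(P):
    """Integral of (p(v) - P) between the outer intersections of the
    isotherm with the horizontal line p = P; zero when areas balance."""
    vs = np.linspace(0.45, 6.0, 200001)
    f = p(vs) - P
    crossings = np.where(np.diff(np.sign(f)) != 0)[0]
    roots = [brentq(lambda v: p(v) - P, vs[i], vs[i + 1]) for i in crossings]
    vv = np.linspace(roots[0], roots[-1], 20001)
    g = p(vv) - P
    return float(np.sum((g[1:] + g[:-1]) / 2.0) * (vv[1] - vv[0]))

# coexistence pressure where the two loop areas cancel (about 0.65 here)
P_coex = brentq(area_mismatch, 0.50, 0.70)
print("Maxwell coexistence pressure at T = 0.9:", round(P_coex, 4))
```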

  6. A computational exploration of the McCoy-Tracy-Wu solutions of the third Painlevé equation

    NASA Astrophysics Data System (ADS)

    Fasondini, Marco; Fornberg, Bengt; Weideman, J. A. C.

    2018-01-01

    The method recently developed by the authors for the computation of the multivalued Painlevé transcendents on their Riemann surfaces (Fasondini et al., 2017) is used to explore families of solutions to the third Painlevé equation that were identified by McCoy et al. (1977) and which contain a pole-free sector. Limiting cases, in which the solutions are singular functions of the parameters, are also investigated and it is shown that a particular set of limiting solutions is expressible in terms of special functions. Solutions that are single-valued, logarithmically (infinitely) branched and algebraically branched, with any number of distinct sheets, are encountered. The algebraically branched solutions have multiple pole-free sectors on their Riemann surfaces that are accounted for by using asymptotic formulae and Bäcklund transformations.

  7. Future directions: Integrated resource planning

    NASA Astrophysics Data System (ADS)

    Bauer, D. C.; Eto, J.

    Integrated resource planning, or IRP, is the process for integrating supply- and demand-side resources to provide energy services at a cost that balances the interests of all stakeholders. It is now the resource planning process used by electric utilities in over 30 states. The goals of IRP have evolved from least-cost planning and encouragement of demand-side management to broader, more complex issues including core competitive business activity, risk management and sharing, accounting for externalities, and fuel switching between gas and electricity. IRP processes are being extended to other interior regions of the country, to non-investor-owned utilities, to regional (rather than individual utility) planning bases, and to other fuels (natural gas). The comprehensive, multi-valued, and public reasoning characteristics of IRP could also be extended to applications beyond energy, e.g., transportation, surface water management, and health care, in the ways suggested here.

  8. Ion implantation for deterministic single atom devices

    NASA Astrophysics Data System (ADS)

    Pacheco, J. L.; Singh, M.; Perry, D. L.; Wendt, J. R.; Ten Eyck, G.; Manginell, R. P.; Pluym, T.; Luhman, D. R.; Lilly, M. P.; Carroll, M. S.; Bielejec, E.

    2017-12-01

    We demonstrate a capability of deterministic doping at the single atom level using a combination of direct write focused ion beam and solid-state ion detectors. The focused ion beam system can position a single ion to within 35 nm of a targeted location and the detection system is sensitive to single low energy heavy ions. This platform can be used to deterministically fabricate single atom devices in materials where the nanostructure and ion detectors can be integrated, including donor-based qubits in Si and color centers in diamond.

  9. Counterfactual Quantum Deterministic Key Distribution

    NASA Astrophysics Data System (ADS)

    Zhang, Sheng; Wang, Jian; Tang, Chao-Jing

    2013-01-01

    We propose a new counterfactual quantum cryptography protocol concerned with distributing a deterministic key. By adding a controlled blocking operation module to the original protocol [T.G. Noh, Phys. Rev. Lett. 103 (2009) 230501], the correlation between the polarizations of the two parties, Alice and Bob, is extended; therefore, one can distribute both deterministic keys and random ones using our protocol. We also give a simple proof of the security of our protocol using the technique we previously applied to the original protocol. Most importantly, our analysis produces a bound tighter than the existing ones.

  10. Ion implantation for deterministic single atom devices

    DOE PAGES

    Pacheco, J. L.; Singh, M.; Perry, D. L.; ...

    2017-12-04

    Here, we demonstrate a capability of deterministic doping at the single atom level using a combination of direct write focused ion beam and solid-state ion detectors. The focused ion beam system can position a single ion to within 35 nm of a targeted location and the detection system is sensitive to single low energy heavy ions. This platform can be used to deterministically fabricate single atom devices in materials where the nanostructure and ion detectors can be integrated, including donor-based qubits in Si and color centers in diamond.

  11. Deterministic quantum splitter based on time-reversed Hong-Ou-Mandel interference

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Jun; Lee, Kim Fook; Kumar, Prem

    2007-09-15

    By utilizing a fiber-based indistinguishable photon-pair source in the 1.55 μm telecommunications band [J. Chen et al., Opt. Lett. 31, 2798 (2006)], we present the first, to the best of our knowledge, deterministic quantum splitter based on the principle of time-reversed Hong-Ou-Mandel quantum interference. The indistinguishability of the deterministically separated identical photons is then verified by using a conventional Hong-Ou-Mandel quantum interference, which exhibits a near-unity dip visibility of 94±1%, making this quantum splitter useful for various quantum information processing applications.

  12. Changing contributions of stochastic and deterministic processes in community assembly over a successional gradient.

    PubMed

    Måren, Inger Elisabeth; Kapfer, Jutta; Aarrestad, Per Arild; Grytnes, John-Arvid; Vandvik, Vigdis

    2018-01-01

    Successional dynamics in plant community assembly may result from both deterministic and stochastic ecological processes. The relative importance of different ecological processes is expected to vary over the successional sequence, between different plant functional groups, and with the disturbance levels and land-use management regimes of the successional systems. We evaluate the relative importance of stochastic and deterministic processes in bryophyte and vascular plant community assembly after fire in grazed and ungrazed anthropogenic coastal heathlands in Northern Europe. A replicated series of post-fire successions (n = 12) were initiated under grazed and ungrazed conditions, and vegetation data were recorded in permanent plots over 13 years. We used redundancy analysis (RDA) to test for deterministic successional patterns in species composition repeated across the replicate successional series and analyses of co-occurrence to evaluate to what extent species respond synchronously along the successional gradient. Change in species co-occurrences over succession indicates stochastic successional dynamics at the species level (i.e., species equivalence), whereas constancy in co-occurrence indicates deterministic dynamics (successional niche differentiation). The RDA shows high and deterministic vascular plant community compositional change, especially early in succession. Co-occurrence analyses indicate stochastic species-level dynamics the first two years, which then give way to more deterministic replacements. Grazed and ungrazed successions are similar, but the early stage stochasticity is higher in ungrazed areas. Bryophyte communities in ungrazed successions resemble vascular plant communities. In contrast, bryophytes in grazed successions showed consistently high stochasticity and low determinism in both community composition and species co-occurrence. In conclusion, stochastic and individualistic species responses early in succession give way to more niche-driven dynamics in later successional stages. Grazing reduces predictability in both successional trends and species-level dynamics, especially in plant functional groups that are not well adapted to disturbance. © 2017 The Authors. Ecology, published by Wiley Periodicals, Inc., on behalf of the Ecological Society of America.

  13. Deterministic multidimensional nonuniform gap sampling.

    PubMed

    Worley, Bradley; Powers, Robert

    2015-12-01

    Born from empirical observations in nonuniformly sampled multidimensional NMR data relating to gaps between sampled points, the Poisson-gap sampling method has enjoyed widespread use in biomolecular NMR. While the majority of nonuniform sampling schemes are fully randomly drawn from probability densities that vary over a Nyquist grid, the Poisson-gap scheme employs constrained random deviates to minimize the gaps between sampled grid points. We describe a deterministic gap sampling method, based on the average behavior of Poisson-gap sampling, which performs comparably to its random counterpart with the additional benefit of completely deterministic behavior. We also introduce a general algorithm for multidimensional nonuniform sampling based on a gap equation, and apply it to yield a deterministic sampling scheme that combines burst-mode sampling features with those of Poisson-gap schemes. Finally, we derive a relationship between stochastic gap equations and the expectation value of their sampling probability densities. Copyright © 2015 Elsevier Inc. All rights reserved.
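
    A sketch of sinusoidally weighted gap sampling on a one-dimensional Nyquist grid follows; the deterministic variant, in the spirit of the paper, replaces each Poisson deviate with its rounded expectation. The weighting profile is a commonly used choice, and the tuning loop needed to hit an exact point count is omitted, so treat the details as assumptions.

```python
# Sinusoidally weighted gap sampling: random (Poisson-gap style) vs. a
# deterministic variant that uses the expected gap instead of a deviate.

import numpy as np

def gap_schedule(grid_size, target_points, deterministic=True, seed=0):
    rng = np.random.default_rng(seed)
    lam = grid_size / target_points - 1.0    # average extra gap per point
    points, pos = [], 0
    while pos < grid_size:
        points.append(pos)
        theta = np.pi * pos / grid_size      # larger gaps late in the grid
        mean_gap = lam * np.sin(theta / 2.0)
        extra = mean_gap if deterministic else rng.poisson(mean_gap)
        pos += 1 + int(round(extra))
    return np.array(points)

sched = gap_schedule(256, 64)
print(len(sched), sched[:10])   # in practice, lam is tuned to hit 64 points
```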

  14. A Comparison of Probabilistic and Deterministic Campaign Analysis for Human Space Exploration

    NASA Technical Reports Server (NTRS)

    Merrill, R. Gabe; Andraschko, Mark; Stromgren, Chel; Cirillo, Bill; Earle, Kevin; Goodliff, Kandyce

    2008-01-01

    Human space exploration is by its very nature an uncertain endeavor. Vehicle reliability, technology development risk, budgetary uncertainty, and launch uncertainty all contribute to stochasticity in an exploration scenario. However, traditional strategic analysis has been done in a deterministic manner, analyzing and optimizing the performance of a series of planned missions. History has shown that exploration scenarios rarely follow such a planned schedule. This paper describes a methodology to integrate deterministic and probabilistic analysis of scenarios in support of human space exploration. Probabilistic strategic analysis is used to simulate "possible" scenario outcomes, based upon the likelihood of occurrence of certain events and a set of pre-determined contingency rules. The results of the probabilistic analysis are compared to the nominal results from the deterministic analysis to evaluate the robustness of the scenario to adverse events and to test and optimize contingency planning.
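
    A toy version of the probabilistic side of such an analysis: simulate many "possible" campaign outcomes under launch-failure risk with one simple contingency rule, then compare with the deterministic plan. All probabilities, mission counts and the contingency rule itself are hypothetical.

```python
# Monte Carlo sketch of probabilistic campaign analysis vs. a deterministic
# plan. Parameters and the contingency rule are illustrative assumptions.

import random

MISSIONS, P_LAUNCH_OK, TRIALS = 10, 0.95, 100_000

def simulate_campaign(rng):
    launches = 0
    for _ in range(MISSIONS):
        launches += 1
        if rng.random() > P_LAUNCH_OK:   # failure: contingency rule fires
            launches += 1                # one replacement launch
            if rng.random() > P_LAUNCH_OK:
                return launches, False   # second failure ends the campaign
    return launches, True

rng = random.Random(42)
results = [simulate_campaign(rng) for _ in range(TRIALS)]
p_complete = sum(ok for _, ok in results) / TRIALS
mean_launches = sum(n for n, _ in results) / TRIALS
print(f"deterministic plan: {MISSIONS} launches, always complete")
print(f"probabilistic: P(complete) = {p_complete:.3f}, "
      f"mean launches = {mean_launches:.2f}")
```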

  15. First Order Reliability Application and Verification Methods for Semistatic Structures

    NASA Technical Reports Server (NTRS)

    Verderaime, Vincent

    1994-01-01

    Escalating risks of aerostructures, stimulated by increasing size, complexity, and cost, should no longer be ignored by conventional deterministic safety design methods. The deterministic pass-fail concept is incompatible with probability and risk assessments, its stress audits are shown to be arbitrary and incomplete, and it compromises the performance of high-strength materials. A reliability method is proposed which combines first-order reliability principles with deterministic design variables and conventional test techniques to surmount current deterministic stress design and audit deficiencies. Accumulative and propagation design uncertainty errors are defined and appropriately implemented into the classical safety index expression. The application is reduced to solving for a factor that satisfies the specified reliability and compensates for uncertainty errors, and then using this factor as, and instead of, the conventional safety factor in stress analyses. The resulting method is consistent with current analytical skills and verification practices, with the culture of most designers, and with the pace of semistatic structural designs.
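
    For reference, the classical first-order safety index that the abstract builds on, for normally distributed strength R and load S; the accumulative and propagation uncertainty-error terms the paper adds are not reproduced here.

```latex
\[
  \beta = \frac{\mu_R - \mu_S}{\sqrt{\sigma_R^2 + \sigma_S^2}},
  \qquad
  P_f = \Phi(-\beta)
\]
% where \Phi is the standard normal CDF; the proposed method solves this
% expression for a factor that replaces the conventional safety factor.
```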

  16. Apparatus for fixing latency

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hall, David R; Bartholomew, David B; Moon, Justin

    2009-09-08

    An apparatus for fixing computational latency within a deterministic region on a network comprises a network interface modem, a high priority module and at least one deterministic peripheral device. The network interface modem is in communication with the network. The high priority module is in communication with the network interface modem. The at least one deterministic peripheral device is connected to the high priority module. The high priority module comprises a packet assembler/disassembler, and hardware for performing at least one operation. Also disclosed is an apparatus for executing at least one instruction on a downhole device within a deterministic region, the apparatus comprising a control device, a downhole network, and a downhole device. The control device is near the surface of a downhole tool string. The downhole network is integrated into the tool string. The downhole device is in communication with the downhole network.

  17. Stochastic Petri Net extension of a yeast cell cycle model.

    PubMed

    Mura, Ivan; Csikász-Nagy, Attila

    2008-10-21

    This paper presents the definition, solution and validation of a stochastic model of the budding yeast cell cycle, based on stochastic Petri nets (SPNs). A specific family of SPNs is selected for building a stochastic version of a well-established deterministic model. We describe the procedure followed in defining the SPN model from the deterministic ODE model, a procedure that can be largely automated. The validation of the SPN model is conducted with respect to both the results provided by the deterministic model and the experimental results available in the literature. The SPN model captures the behavior of wild-type budding yeast cells and of a variety of mutants. We show that the stochastic model matches some characteristics of budding yeast cells that cannot be found with the deterministic model. The SPN model fine-tunes the simulation results, enriching the breadth and the quality of its outcome.

  18. Effect of sample volume on metastable zone width and induction time

    NASA Astrophysics Data System (ADS)

    Kubota, Noriaki

    2012-04-01

    The metastable zone width (MSZW) and the induction time, measured for a large sample (say, >0.1 L), are reproducible and deterministic, while, for a small sample (say, <1 mL), these values are irreproducible and stochastic. Such behaviors of MSZW and induction time were discussed theoretically with both stochastic and deterministic models. Equations for the distributions of stochastic MSZW and induction time were derived. The average values of stochastic MSZW and induction time both decreased with an increase in sample volume, while the deterministic MSZW and induction time remained unchanged. Such different behaviors with variation in sample volume were explained in terms of the detection sensitivity of crystallization events. The average values of MSZW and induction time in the stochastic model were compared with the deterministic MSZW and induction time, respectively. Literature data reported for aqueous paracetamol solution were explained theoretically with the presented models.
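
    The stochastic behavior described here is commonly modeled with the textbook single-nucleation-event law, quoted below as an assumption consistent with the volume dependence the abstract reports rather than as the paper's exact model.

```latex
% With a stationary nucleation rate J per unit volume in sample volume V,
% the induction time t is distributed as
\[
  P(t \le \tau) = 1 - e^{-JV\tau},
  \qquad
  \langle t \rangle = \frac{1}{JV},
\]
% so the mean induction time decreases as V grows, while detection in
% large samples becomes effectively deterministic.
```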

  19. Fencing network direct memory access data transfers in a parallel active messaging interface of a parallel computer

    DOEpatents

    Blocksome, Michael A.; Mamidala, Amith R.

    2015-07-07

    Fencing direct memory access (`DMA`) data transfers in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI including data communications endpoints, each endpoint including specifications of a client, a context, and a task, the endpoints coupled for data communications through the PAMI and through DMA controllers operatively coupled to a deterministic data communications network through which the DMA controllers deliver data communications deterministically, including initiating execution through the PAMI of an ordered sequence of active DMA instructions for DMA data transfers between two endpoints, effecting deterministic DMA data transfers through a DMA controller and the deterministic data communications network; and executing through the PAMI, with no FENCE accounting for DMA data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all DMA instructions initiated prior to execution of the FENCE instruction for DMA data transfers between the two endpoints.

  20. Fencing network direct memory access data transfers in a parallel active messaging interface of a parallel computer

    DOEpatents

    Blocksome, Michael A.; Mamidala, Amith R.

    2015-07-14

    Fencing direct memory access (`DMA`) data transfers in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI including data communications endpoints, each endpoint including specifications of a client, a context, and a task, the endpoints coupled for data communications through the PAMI and through DMA controllers operatively coupled to a deterministic data communications network through which the DMA controllers deliver data communications deterministically, including initiating execution through the PAMI of an ordered sequence of active DMA instructions for DMA data transfers between two endpoints, effecting deterministic DMA data transfers through a DMA controller and the deterministic data communications network; and executing through the PAMI, with no FENCE accounting for DMA data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all DMA instructions initiated prior to execution of the FENCE instruction for DMA data transfers between the two endpoints.

  1. Realistic Simulation for Body Area and Body-To-Body Networks

    PubMed Central

    Alam, Muhammad Mahtab; Ben Hamida, Elyes; Ben Arbia, Dhafer; Maman, Mickael; Mani, Francesco; Denis, Benoit; D’Errico, Raffaele

    2016-01-01

    In this paper, we present an accurate and realistic simulation for body area networks (BAN) and body-to-body networks (BBN) using deterministic and semi-deterministic approaches. First, in the semi-deterministic approach, a real-time measurement campaign is performed, which is further characterized through statistical analysis. It is able to generate link-correlated and time-varying realistic traces (i.e., with consistent mobility patterns) for on-body and body-to-body shadowing and fading, including body orientations and rotations, by means of stochastic channel models. The full deterministic approach is particularly targeted to enhance IEEE 802.15.6 proposed channel models by introducing space and time variations (i.e., dynamic distances) through biomechanical modeling. In addition, it helps to accurately model the radio link by identifying the link types and corresponding path loss factors for line of sight (LOS) and non-line of sight (NLOS). This approach is particularly important for links that vary over time due to mobility. It is also important to add that the communication and protocol stack, including the physical (PHY), medium access control (MAC) and networking models, is developed for BAN and BBN, and the IEEE 802.15.6 compliance standard is provided as a benchmark for future research works of the community. Finally, the two approaches are compared in terms of the successful packet delivery ratio, packet delay and energy efficiency. The results show that the semi-deterministic approach is the best option; however, for the diversity of the mobility patterns and scenarios applicable, biomechanical modeling and the deterministic approach are better choices. PMID:27104537

  2. Realistic Simulation for Body Area and Body-To-Body Networks.

    PubMed

    Alam, Muhammad Mahtab; Ben Hamida, Elyes; Ben Arbia, Dhafer; Maman, Mickael; Mani, Francesco; Denis, Benoit; D'Errico, Raffaele

    2016-04-20

    In this paper, we present an accurate and realistic simulation for body area networks (BAN) and body-to-body networks (BBN) using deterministic and semi-deterministic approaches. First, in the semi-deterministic approach, a real-time measurement campaign is performed, which is further characterized through statistical analysis. It is able to generate link-correlated and time-varying realistic traces (i.e., with consistent mobility patterns) for on-body and body-to-body shadowing and fading, including body orientations and rotations, by means of stochastic channel models. The full deterministic approach is particularly targeted to enhance IEEE 802.15.6 proposed channel models by introducing space and time variations (i.e., dynamic distances) through biomechanical modeling. In addition, it helps to accurately model the radio link by identifying the link types and corresponding path loss factors for line of sight (LOS) and non-line of sight (NLOS). This approach is particularly important for links that vary over time due to mobility. It is also important to add that the communication and protocol stack, including the physical (PHY), medium access control (MAC) and networking models, is developed for BAN and BBN, and the IEEE 802.15.6 compliance standard is provided as a benchmark for future research works of the community. Finally, the two approaches are compared in terms of the successful packet delivery ratio, packet delay and energy efficiency. The results show that the semi-deterministic approach is the best option; however, for the diversity of the mobility patterns and scenarios applicable, biomechanical modeling and the deterministic approach are better choices.

  3. Minimum energy dissipation required for a logically irreversible operation

    NASA Astrophysics Data System (ADS)

    Takeuchi, Naoki; Yoshikawa, Nobuyuki

    2018-01-01

    According to Landauer's principle, the minimum heat emission required for computing is linked to logical entropy, or logical reversibility. The validity of Landauer's principle has been investigated for several decades and was finally demonstrated in recent experiments by showing that the minimum heat emission is associated with the reduction in logical entropy during a logically irreversible operation. Although the relationship between minimum heat emission and logical reversibility is being revealed, it is not clear how much free energy must be dissipated for a logically irreversible operation. In the present study, in order to reveal the connection between logical reversibility and free energy dissipation, we numerically demonstrated logically irreversible protocols using adiabatic superconductor logic. Calculations of the work performed during the protocol showed that, while the minimum heat emission conforms to Landauer's principle, the free energy dissipation can be arbitrarily reduced by performing the protocol quasistatically. These results show that logical reversibility is not associated with thermodynamic reversibility, and that heat is not only emitted from logic devices but can also be absorbed by them. We also formulated the heat emission from adiabatic superconductor logic during a logically irreversible operation at a finite operation speed.
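
    For orientation, the Landauer bound referenced throughout:

```latex
% Erasing one bit of logical entropy must emit at least
\[
  Q_{\min} = k_B T \ln 2 \approx 2.9 \times 10^{-21}\ \mathrm{J}
  \quad (T = 300\ \mathrm{K}),
\]
% while, as the abstract argues, the free energy dissipated in a logically
% irreversible operation can still be made arbitrarily small quasistatically.
```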

  4. The Solar System large planets' influence on a new Maunder Minimum

    NASA Astrophysics Data System (ADS)

    Yndestad, Harald; Solheim, Jan-Erik

    2016-04-01

    In the 1890s, G. Spörer and E. W. Maunder (1890) reported that solar activity had stopped during a 70-year period from 1645 to 1715. A later reconstruction of solar activity confirms the grand minima of Maunder (1640-1720), Spörer (1390-1550) and Wolf (1270-1340), and the minima of Oort (1010-1070) and Dalton (1785-1810), since the year 1000 A.D. (Usoskin et al. 2007). These minimum periods have been associated with less irradiation from the Sun and cold climate periods on Earth. The identification of three grand Maunder-type periods and two Dalton-type periods over a thousand years indicates that sooner or later there will be a colder climate on Earth from a new Maunder- or Dalton-type period. The causes of these minimum periods are not well understood. An expectation of a new Maunder-type period must be based on the properties of solar variability: if the solar variability has a deterministic element, we can better estimate a new Maunder grand minimum, whereas a purely random solar variability can only explain the past. This investigation is based on the simple idea that if the solar variability has a deterministic property, it must have a deterministic source as a first cause; if this deterministic source is known, we can compute better estimates of the next expected Maunder grand minimum period. The study is based on a TSI ACRIM data series from 1700, a TSI ACRIM data series from 1000 A.D., a sunspot data series from 1611 and a solar barycenter orbit data series from 1000. The analysis method is based on wavelet spectrum analysis, to identify stationary periods, coincidence periods and their phase relations. The results show that the TSI variability and the sunspot variability have deterministic oscillations, controlled by the large planets Jupiter, Uranus and Neptune as the first cause. A deterministic model of TSI variability and sunspot variability confirms the known minimum and grand minimum periods since 1000. From this deterministic model we may expect a new Maunder-type sunspot minimum period from about 2018 to 2055. The deterministic model of the TSI ACRIM data series from 1700 computes a new Maunder-type grand minimum period from 2015 to 2071, and a model of the longer TSI ACRIM data series from 1000 computes a new Dalton-to-Maunder-type minimum irradiation period from 2047 to 2068.

  5. Method and Apparatus for Simultaneous Processing of Multiple Functions

    NASA Technical Reports Server (NTRS)

    Stoica, Adrian (Inventor); Andrei, Radu (Inventor)

    2017-01-01

    Electronic logic gates that operate using N logic state levels, where N is greater than 2, and methods of operating such gates. The electronic logic gates operate according to truth tables. At least two input signals each having a logic state that can range over more than two logic states are provided to the logic gates. The logic gates each provide an output signal that can have one of N logic states. Examples of gates described include NAND/NAND gates having two inputs A and B and NAND/NAND gates having three inputs A, B, and C, where A, B and C can take any of four logic states. Systems using such gates are described, and their operation illustrated. Optical logic gates that operate using N logic state levels are also described.
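
    The patent's own truth tables are not reproduced in this abstract; as a hedged illustration of how a four-state gate can operate by table, the sketch below uses one common textbook generalization of NAND to N-valued logic (the complement of MIN), which may differ from the patented gates.

      N = 4  # number of logic state levels, 0..3

      def mv_nand(a: int, b: int, n: int = N) -> int:
          """One common generalization of NAND to n-valued logic: the
          complement (n-1 - x) of MIN, the usual multivalued AND.
          Illustrative only -- not necessarily the patented truth table."""
          return (n - 1) - min(a, b)

      # Full 4-valued truth table for the two-input gate.
      print("A B | out")
      for a in range(N):
          for b in range(N):
              print(f"{a} {b} |  {mv_nand(a, b)}")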

  6. Method and Apparatus for Simultaneous Processing of Multiple Functions

    NASA Technical Reports Server (NTRS)

    Stoica, Adrian (Inventor); Andrei, Radu (Inventor); Zhu, David (Inventor); Mojarradi, Mohammad Mehdi (Inventor); Vo, Tuan A. (Inventor)

    2015-01-01

    Electronic logic gates that operate using N logic state levels, where N is greater than 2, and methods of operating such gates. The electronic logic gates operate according to truth tables. At least two input signals each having a logic state that can range over more than two logic states are provided to the logic gates. The logic gates each provide an output signal that can have one of N logic states. Examples of gates described include NAND/NAND gates having two inputs A and B and NAND/NAND gates having three inputs A, B, and C, where A, B and C can take any of four logic states. Systems using such gates are described, and their operation illustrated. Optical logic gates that operate using N logic state levels are also described.

  7. Using Abductive Research Logic: "The Logic of Discovery", to Construct a Rigorous Explanation of Amorphous Evaluation Findings

    ERIC Educational Resources Information Center

    Levin-Rozalis, Miri

    2010-01-01

    Background: Two kinds of research logic prevail in scientific research: deductive research logic and inductive research logic. However, both fail in the field of evaluation, especially evaluation conducted in unfamiliar environments. Purpose: In this article I wish to suggest the application of a research logic--"abduction"--"the logic of…

  8. Application of linear logic to simulation

    NASA Astrophysics Data System (ADS)

    Clarke, Thomas L.

    1998-08-01

    Linear logic, since its introduction by Girard in 1987, has proven expressive and powerful. Linear logic has provided natural encodings of Turing machines, Petri nets and other computational models. Linear logic is also capable of naturally modeling resource dependent aspects of reasoning. The distinguishing characteristic of linear logic is that it accounts for resources; two instances of the same variable are considered differently from a single instance. Linear logic thus must obey a form of the linear superposition principle. A proposition can be reasoned with only once, unless a special operator is applied. Informally, linear logic distinguishes two kinds of conjunction, two kinds of disjunction, and also introduces a modal storage operator that explicitly indicates propositions that can be reused. This paper discusses the application of linear logic to simulation. A wide variety of logics have been developed; in addition to classical logic, there are fuzzy logics, affine logics, quantum logics, etc. All of these have found application in simulations of one sort or another. The special characteristics of linear logic and its benefits for simulation will be discussed. Of particular interest is a connection that can be made between linear logic and simulated dynamics by using the concepts of Lie algebras and Lie groups. Lie groups provide the connection between the exponential modal storage operators of linear logic and the eigenfunctions of dynamic differential operators. Particularly suggestive are possible relations between complexity results for linear logic and non-computability results for dynamical systems.

  9. Deterministic Computer-Controlled Polishing Process for High-Energy X-Ray Optics

    NASA Technical Reports Server (NTRS)

    Khan, Gufran S.; Gubarev, Mikhail; Speegle, Chet; Ramsey, Brian

    2010-01-01

    A deterministic computer-controlled polishing process for large X-ray mirror mandrels is presented. Using the tool's influence function and material removal rates extracted from polishing experiments, design considerations for polishing laps and optimized operating parameters are discussed.

  10. Solving difficult problems creatively: a role for energy optimised deterministic/stochastic hybrid computing

    PubMed Central

    Palmer, Tim N.; O’Shea, Michael

    2015-01-01

    How is the brain configured for creativity? What is the computational substrate for ‘eureka’ moments of insight? Here we argue that creative thinking arises ultimately from a synergy between low-energy stochastic and energy-intensive deterministic processing, and is a by-product of a nervous system whose signal-processing capability per unit of available energy has become highly energy optimised. We suggest that the stochastic component has its origin in thermal (ultimately quantum decoherent) noise affecting the activity of neurons. Without this component, deterministic computational models of the brain are incomplete. PMID:26528173

  11. Deterministic and efficient quantum cryptography based on Bell's theorem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Zengbing; Pan, Jianwei

    2006-05-15

    We propose a double-entanglement-based quantum cryptography protocol that is both efficient and deterministic. The proposal uses photon pairs entangled both in polarization and in time degrees of freedom; each measurement in which both of the two communicating parties register a photon can establish one and only one perfect correlation, and thus deterministically creates a key bit. Eavesdropping can be detected by violation of local realism. A variation of the protocol shows higher security, similar to that of the six-state protocol, under individual attacks. Our scheme allows a robust implementation under current technology.

  12. Heart rate variability as determinism with jump stochastic parameters.

    PubMed

    Zheng, Jiongxuan; Skufca, Joseph D; Bollt, Erik M

    2013-08-01

    We use measured heart rate information (RR intervals) to develop a one-dimensional nonlinear map that describes short-term deterministic behavior in the data. Our study suggests that there is a stochastic parameter with persistence which causes the heart rate and rhythm system to wander about a bifurcation point. We propose a modified circle map with a jump-process noise term as a model that can qualitatively capture this behavior of low-dimensional transient determinism with occasional (stochastically defined) jumps from one deterministic system to another within a one-parameter family of deterministic systems.
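
    A hedged sketch of such a model: a standard circle map whose frequency parameter persists but occasionally jumps, switching the trajectory between members of a one-parameter family of deterministic maps. All parameter values are illustrative, not fitted to RR-interval data.

      import numpy as np

      def jump_circle_map(n_steps, k=1.0, jump_prob=0.01, omega0=0.35, omega_sd=0.05, seed=0):
          """Iterate a circle map whose frequency parameter omega persists but
          occasionally jumps (a jump-process parameter), moving the trajectory
          between members of a one-parameter family of deterministic maps.
          All values are illustrative, not fitted to RR-interval data."""
          rng = np.random.default_rng(seed)
          theta, omega = 0.1, omega0
          out = np.empty(n_steps)
          for i in range(n_steps):
              if rng.random() < jump_prob:  # rare stochastic parameter jump
                  omega = omega0 + omega_sd * rng.standard_normal()
              theta = (theta + omega + (k / (2 * np.pi)) * np.sin(2 * np.pi * theta)) % 1.0
              out[i] = theta
          return out

      print(jump_circle_map(5000)[:5])  # fully deterministic between jumps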

  13. Deterministic bead-in-droplet ejection utilizing an integrated plug-in bead dispenser for single bead-based applications

    NASA Astrophysics Data System (ADS)

    Kim, Hojin; Choi, In Ho; Lee, Sanghyun; Won, Dong-Joon; Oh, Yong Suk; Kwon, Donghoon; Sung, Hyung Jin; Jeon, Sangmin; Kim, Joonwon

    2017-04-01

    This paper presents a deterministic bead-in-droplet ejection (BIDE) technique that regulates the precise distribution of microbeads in an ejected droplet. The deterministic BIDE was realized through the effective integration of a microfluidic single-particle handling technique with a liquid dispensing system. The integrated bead dispenser facilitates the transfer of the desired number of beads into a dispensing volume and the on-demand ejection of bead-encapsulated droplets. Single bead-encapsulated droplets were ejected every 3 s without any failure. Multiple-bead dispensing with deterministic control of the number of beads was demonstrated to emphasize the originality and quality of the proposed dispensing technique. The dispenser was mounted using a plug-socket type connection, and the dispensing process was completely automated using a programmed sequence without any microscopic observation. To demonstrate a potential application of the technique, a bead-based streptavidin-biotin binding assay in an evaporating droplet was conducted using ultralow numbers of beads. The results evidenced that the number of beads in the droplet crucially influences the reliability of the assay. Therefore, the proposed deterministic bead-in-droplet technology can be utilized to deliver desired beads onto a reaction site, particularly to reliably and efficiently enrich and detect target biomolecules.

  14. Stochastic assembly in a subtropical forest chronosequence: evidence from contrasting changes of species, phylogenetic and functional dissimilarity over succession.

    PubMed

    Mi, Xiangcheng; Swenson, Nathan G; Jia, Qi; Rao, Mide; Feng, Gang; Ren, Haibao; Bebber, Daniel P; Ma, Keping

    2016-09-07

    Deterministic and stochastic processes jointly determine the community dynamics of forest succession. However, it has been widely held in previous studies that deterministic processes dominate forest succession. Furthermore, inference of mechanisms for community assembly may be misleading if based on a single axis of diversity alone. In this study, we evaluated the relative roles of deterministic and stochastic processes along a disturbance gradient by integrating species, functional, and phylogenetic beta diversity in a subtropical forest chronosequence in Southeastern China. We found a general pattern of increasing species turnover, but little-to-no change in phylogenetic and functional turnover over succession at two spatial scales. Meanwhile, the phylogenetic and functional beta diversity were not significantly different from random expectation. This result suggested a dominance of stochastic assembly, contrary to the general expectation that deterministic processes dominate forest succession. On the other hand, we found significant interactions of environment and disturbance and limited evidence for significant deviations of phylogenetic or functional turnover from random expectations for different size classes. This result provided weak evidence of deterministic processes over succession. Stochastic assembly of forest succession suggests that post-disturbance restoration may be largely unpredictable and difficult to control in subtropical forests.

  15. Deterministic bead-in-droplet ejection utilizing an integrated plug-in bead dispenser for single bead-based applications.

    PubMed

    Kim, Hojin; Choi, In Ho; Lee, Sanghyun; Won, Dong-Joon; Oh, Yong Suk; Kwon, Donghoon; Sung, Hyung Jin; Jeon, Sangmin; Kim, Joonwon

    2017-04-10

    This paper presents a deterministic bead-in-droplet ejection (BIDE) technique that regulates the precise distribution of microbeads in an ejected droplet. The deterministic BIDE was realized through the effective integration of a microfluidic single-particle handling technique with a liquid dispensing system. The integrated bead dispenser facilitates the transfer of the desired number of beads into a dispensing volume and the on-demand ejection of bead-encapsulated droplets. Single bead-encapsulated droplets were ejected every 3 s without any failure. Multiple-bead dispensing with deterministic control of the number of beads was demonstrated to emphasize the originality and quality of the proposed dispensing technique. The dispenser was mounted using a plug-socket type connection, and the dispensing process was completely automated using a programmed sequence without any microscopic observation. To demonstrate a potential application of the technique, a bead-based streptavidin-biotin binding assay in an evaporating droplet was conducted using ultralow numbers of beads. The results evidenced that the number of beads in the droplet crucially influences the reliability of the assay. Therefore, the proposed deterministic bead-in-droplet technology can be utilized to deliver desired beads onto a reaction site, particularly to reliably and efficiently enrich and detect target biomolecules.

  16. Discrete-State Stochastic Models of Calcium-Regulated Calcium Influx and Subspace Dynamics Are Not Well-Approximated by ODEs That Neglect Concentration Fluctuations

    PubMed Central

    Weinberg, Seth H.; Smith, Gregory D.

    2012-01-01

    Cardiac myocyte calcium signaling is often modeled using deterministic ordinary differential equations (ODEs) and mass-action kinetics. However, spatially restricted “domains” associated with calcium influx are small enough (e.g., 10−17 liters) that local signaling may involve 1–100 calcium ions. Is it appropriate to model the dynamics of subspace calcium using deterministic ODEs or, alternatively, do we require stochastic descriptions that account for the fundamentally discrete nature of these local calcium signals? To address this question, we constructed a minimal Markov model of a calcium-regulated calcium channel and associated subspace. We compared the expected value of fluctuating subspace calcium concentration (a result that accounts for the small subspace volume) with the corresponding deterministic model (an approximation that assumes large system size). When subspace calcium did not regulate calcium influx, the deterministic and stochastic descriptions agreed. However, when calcium binding altered channel activity in the model, the continuous deterministic description often deviated significantly from the discrete stochastic model, unless the subspace volume is unrealistically large and/or the kinetics of the calcium binding are sufficiently fast. This principle was also demonstrated using a physiologically realistic model of calmodulin regulation of L-type calcium channels introduced by Yue and coworkers. PMID:23509597

  17. Deterministic bead-in-droplet ejection utilizing an integrated plug-in bead dispenser for single bead–based applications

    PubMed Central

    Kim, Hojin; Choi, In Ho; Lee, Sanghyun; Won, Dong-Joon; Oh, Yong Suk; Kwon, Donghoon; Sung, Hyung Jin; Jeon, Sangmin; Kim, Joonwon

    2017-01-01

    This paper presents a deterministic bead-in-droplet ejection (BIDE) technique that regulates the precise distribution of microbeads in an ejected droplet. The deterministic BIDE was realized through the effective integration of a microfluidic single-particle handling technique with a liquid dispensing system. The integrated bead dispenser facilitates the transfer of the desired number of beads into a dispensing volume and the on-demand ejection of bead-encapsulated droplets. Single bead–encapsulated droplets were ejected every 3 s without any failure. Multiple-bead dispensing with deterministic control of the number of beads was demonstrated to emphasize the originality and quality of the proposed dispensing technique. The dispenser was mounted using a plug-socket type connection, and the dispensing process was completely automated using a programmed sequence without any microscopic observation. To demonstrate a potential application of the technique, a bead-based streptavidin–biotin binding assay in an evaporating droplet was conducted using ultralow numbers of beads. The results evidenced that the number of beads in the droplet crucially influences the reliability of the assay. Therefore, the proposed deterministic bead-in-droplet technology can be utilized to deliver desired beads onto a reaction site, particularly to reliably and efficiently enrich and detect target biomolecules. PMID:28393911

  18. Mixing Single Scattering Properties in Vector Radiative Transfer for Deterministic and Stochastic Solutions

    NASA Astrophysics Data System (ADS)

    Mukherjee, L.; Zhai, P.; Hu, Y.; Winker, D. M.

    2016-12-01

    Among the primary factors which determine the polarized radiation field of a turbid medium are the single scattering properties of the medium. When multiple types of scatterers are present, their single scattering properties need to be properly mixed in order to find solutions to the vector radiative transfer (VRT) theory. VRT solvers can be divided into two types: deterministic and stochastic. A deterministic solver can only accept one set of single scattering properties in its smallest discretized spatial volume; when the medium contains more than one kind of scatterer, their single scattering properties are averaged and then used as input for the deterministic solver. A stochastic solver can work with different kinds of scatterers explicitly. In this work, two different mixing schemes are studied using the Successive Order of Scattering (SOS) and Monte Carlo (MC) methods: one scheme is used for the deterministic method and the other for the stochastic Monte Carlo method. It is found that the solutions from the two VRT solvers using the two different mixing schemes agree with each other extremely well. This confirms the equivalence of the two mixing schemes and also provides a benchmark VRT solution for the medium studied.

  19. Logic Models for Program Design, Implementation, and Evaluation: Workshop Toolkit. REL 2015-057

    ERIC Educational Resources Information Center

    Shakman, Karen; Rodriguez, Sheila M.

    2015-01-01

    The Logic Model Workshop Toolkit is designed to help practitioners learn the purpose of logic models, the different elements of a logic model, and the appropriate steps for developing and using a logic model for program evaluation. Topics covered in the sessions include an overview of logic models, the elements of a logic model, an introduction to…

  20. Pass-transistor very large scale integration

    NASA Technical Reports Server (NTRS)

    Maki, Gary K. (Inventor); Bhatia, Prakash R. (Inventor)

    2004-01-01

    Logic elements are provided that permit reductions in layout size and avoidance of hazards. Such logic elements may be included in libraries of logic cells. A logical function to be implemented by the logic element is decomposed about logical variables to identify factors corresponding to combinations of the logical variables and their complements. A pass transistor network is provided for implementing the pass network function in accordance with this decomposition. The pass transistor network includes ordered arrangements of pass transistors that correspond to the combinations of variables and complements resulting from the logical decomposition. The logic elements may act as selection circuits and be integrated with memory and buffer elements.
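
    The decomposition described above is, in the Boolean case, Shannon's expansion f = x·f(x=1) + x'·f(x=0), whose cofactors map directly onto a pass-transistor selection network. The sketch below verifies this identity for a 3-input majority function; it illustrates the decomposition principle, not the patented cell library itself.

      from itertools import product

      def cofactors(f, var_index):
          """Shannon cofactors of a Boolean function about one variable:
          f = x*f1 + (not x)*f0, the identity that maps directly onto a
          two-way pass-transistor selection network."""
          def fix(value):
              def g(*rest):
                  args = list(rest)
                  args.insert(var_index, value)
                  return f(*args)
              return g
          return fix(0), fix(1)

      # Example: 3-input majority function decomposed about its first input.
      maj = lambda a, b, c: (a & b) | (a & c) | (b & c)
      f0, f1 = cofactors(maj, 0)

      for a, b, c in product((0, 1), repeat=3):
          recomposed = f1(b, c) if a else f0(b, c)  # "pass" f1 when a=1, f0 when a=0
          assert recomposed == maj(a, b, c)
      print("Shannon decomposition verified for all 8 input combinations")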

  1. Deterministic models for traffic jams

    NASA Astrophysics Data System (ADS)

    Nagel, Kai; Herrmann, Hans J.

    1993-10-01

    We study several deterministic one-dimensional traffic models. For integer positions and velocities we find the typical high- and low-density phases separated by a simple transition. If positions and velocities are continuous variables, the model shows self-organized criticality driven by the slowest car.
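
    A minimal sketch of a deterministic integer traffic model of this kind is easy to state; the rules below are Nagel-Schreckenberg-style updates with the random braking removed, and the road length, density and v_max are illustrative rather than the paper's settings.

      import numpy as np

      def step(pos, vel, road_len, v_max=5):
          """One parallel update of a deterministic 1D traffic model on a ring:
          accelerate by 1 up to v_max, then brake to the gap ahead
          (Nagel-Schreckenberg-style rules without the random braking step)."""
          gaps = (np.roll(pos, -1) - pos - 1) % road_len  # empty cells ahead
          vel = np.minimum(np.minimum(vel + 1, v_max), gaps)
          pos = (pos + vel) % road_len
          return pos, vel

      road_len, n_cars = 100, 35  # density 0.35 > 1/(v_max+1), so jams form
      rng = np.random.default_rng(1)
      pos = np.sort(rng.choice(road_len, n_cars, replace=False))
      vel = np.zeros(n_cars, dtype=int)
      for _ in range(200):
          pos, vel = step(pos, vel, road_len)
      print("mean speed:", vel.mean())  # stays below v_max in the jammed phase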

  2. People Like Logical Truth: Testing the Intuitive Detection of Logical Value in Basic Propositions.

    PubMed

    Nakamura, Hiroko; Kawaguchi, Jun

    2016-01-01

    Recent studies on logical reasoning have suggested that people are intuitively aware of the logical validity of syllogisms or that they intuitively detect conflict between heuristic responses and logical norms via slight changes in their feelings. According to logical intuition studies, logically valid or heuristic logic no-conflict reasoning is fluently processed and induces positive feelings without conscious awareness. One criticism states that such effects of logicality disappear when confounding factors such as the content of syllogisms are controlled. The present study used abstract propositions and tested whether people intuitively detect logical value. Experiment 1 presented four logical propositions (conjunctive, biconditional, conditional, and material implications) regarding a target case and asked the participants to rate the extent to which they liked the statement. Experiment 2 tested the effects of matching bias, as well as intuitive logic, on the reasoners' feelings by manipulating whether the antecedent or consequent (or both) of the conditional was affirmed or negated. The results showed that both logicality and matching bias affected the reasoners' feelings, and people preferred logically true targets over logically false ones for all forms of propositions. These results suggest that people intuitively detect what is true from what is false during abstract reasoning. Additionally, a Bayesian mixed model meta-analysis of conditionals indicated that people's intuitive interpretation of the conditional "if p then q" fits better with the conditional probability, q given p.

  3. Soil pH mediates the balance between stochastic and deterministic assembly of bacteria

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tripathi, Binu M.; Stegen, James C.; Kim, Mincheol

    Little is known about the factors affecting the relative influence of stochastic and deterministic processes that govern the assembly of microbial communities in successional soils. Here, we conducted a meta-analysis of bacterial communities using six different successional soil data sets, scattered across different regions, with different pH conditions in early and late successional soils. We found that soil pH was the best predictor of bacterial community assembly and the relative importance of stochastic and deterministic processes along successional soils. Extreme acidic or alkaline pH conditions lead to the assembly of phylogenetically more clustered bacterial communities through deterministic processes, whereas pH conditions close to neutral lead to phylogenetically less clustered bacterial communities with more stochasticity. We suggest that the influence of pH, rather than successional age, is the main driving force producing trends in the phylogenetic assembly of bacteria, and that pH also influences the relative balance of stochastic and deterministic processes along successional soils. Given that pH had a much stronger association with community assembly than did successional age, we evaluated whether the inferred influence of pH was maintained when studying globally-distributed samples collected without regard for successional age. This dataset confirmed the strong influence of pH, suggesting that the influence of soil pH on community assembly processes occurs globally. Extreme pH conditions likely exert more stringent limits on survival and fitness, imposing strong selective pressures through ecological and evolutionary time. Taken together, these findings suggest that the degree to which stochastic vs. deterministic processes shape soil bacterial community assembly is a consequence of soil pH rather than successional age.

  4. The meta-Gaussian Bayesian Processor of forecasts and associated preliminary experiments

    NASA Astrophysics Data System (ADS)

    Chen, Fajing; Jiao, Meiyan; Chen, Jing

    2013-04-01

    Public weather services are trending toward providing users with probabilistic weather forecasts in place of traditional deterministic forecasts. Probabilistic forecasting techniques are continually being improved to optimize available forecasting information. The Bayesian Processor of Forecasts (BPF), a new statistical method for probabilistic forecasting, can transform a deterministic forecast into a probabilistic forecast according to the historical statistical relationship between observations and forecasts generated by that forecasting system. This technique accounts for the typical forecasting performance of a deterministic forecasting system in quantifying the forecast uncertainty. The meta-Gaussian likelihood model is suitable for a variety of stochastic dependence structures with monotone likelihood ratios. The meta-Gaussian BPF adopting this kind of likelihood model can therefore be applied across many fields, including meteorology and hydrology. Bayes' theorem for two continuous random variables and the normal-linear BPF are briefly introduced. The meta-Gaussian BPF for a continuous predictand using a single predictor is then presented and discussed. The performance of the meta-Gaussian BPF is tested in a preliminary experiment. Control forecasts of daily surface temperature at 0000 UTC at Changsha and Wuhan stations are used as the deterministic forecast data. These control forecasts are taken from ensemble predictions with a 96-h lead time generated by the National Meteorological Center of the China Meteorological Administration, the European Centre for Medium-Range Weather Forecasts, and the US National Centers for Environmental Prediction during January 2008. The results of the experiment show that the meta-Gaussian BPF can transform a deterministic control forecast of surface temperature from any one of the three ensemble predictions into a useful probabilistic forecast of surface temperature. These probabilistic forecasts quantify the uncertainty of the control forecast; accordingly, the performance of the probabilistic forecasts differs based on the source of the underlying deterministic control forecasts.
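
    For the simpler normal-linear case mentioned above, the BPF update is a conjugate Gaussian calculation. The sketch below implements that textbook case with invented statistics; the meta-Gaussian BPF additionally passes variables through normal quantile transforms, which is not reproduced here.

      import math

      def normal_linear_bpf(prior_mean, prior_var, a, b, lik_var, forecast):
          """Posterior N(mean, var) for predictand W given forecast X = x, when
          W ~ N(prior_mean, prior_var) a priori and X | W=w ~ N(a*w + b, lik_var).
          This is the conjugate normal-linear BPF case; all numbers below are
          invented for illustration, not the paper's fitted statistics."""
          post_var = 1.0 / (1.0 / prior_var + a * a / lik_var)
          post_mean = post_var * (prior_mean / prior_var + a * (forecast - b) / lik_var)
          return post_mean, post_var

      # Climatological prior for surface temperature plus one control forecast.
      mean, var = normal_linear_bpf(prior_mean=2.0, prior_var=16.0,
                                    a=0.9, b=0.5, lik_var=4.0, forecast=5.0)
      print(f"posterior: N({mean:.2f}, {math.sqrt(var):.2f}^2)")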

  5. Deterministic Factors Overwhelm Stochastic Environmental Fluctuations as Drivers of Jellyfish Outbreaks.

    PubMed

    Benedetti-Cecchi, Lisandro; Canepa, Antonio; Fuentes, Veronica; Tamburello, Laura; Purcell, Jennifer E; Piraino, Stefano; Roberts, Jason; Boero, Ferdinando; Halpin, Patrick

    2015-01-01

    Jellyfish outbreaks are increasingly viewed as a deterministic response to escalating levels of environmental degradation and climate extremes. However, a comprehensive understanding of the influence of deterministic drivers and stochastic environmental variations favouring population renewal processes has remained elusive. This study quantifies the deterministic and stochastic components of environmental change that lead to outbreaks of the jellyfish Pelagia noctiluca in the Mediterranean Sea. Using data on jellyfish abundance collected at 241 sites along the Catalan coast from 2007 to 2010 we: (1) tested hypotheses about the influence of time-varying and spatial predictors of jellyfish outbreaks; (2) evaluated the relative importance of stochastic vs. deterministic forcing of outbreaks through the environmental bootstrap method; and (3) quantified return times of extreme events. Outbreaks were common in May and June and less likely in other summer months, which resulted in a negative relationship between outbreaks and SST. Cross- and along-shore advection by geostrophic flow were important concentrating forces of jellyfish, but most outbreaks occurred in the proximity of two canyons in the northern part of the study area. This result supported the recent hypothesis that canyons can funnel P. noctiluca blooms towards shore during upwelling. This can be a general, yet unappreciated, mechanism leading to outbreaks of holoplanktonic jellyfish species. The environmental bootstrap indicated that stochastic environmental fluctuations have negligible effects on return times of outbreaks. Our analysis emphasized the importance of deterministic processes leading to jellyfish outbreaks compared to the stochastic component of environmental variation. A better understanding of how environmental drivers affect demographic and population processes in jellyfish species will increase the ability to anticipate jellyfish outbreaks in the future.

  6. Cognitive Diagnostic Analysis Using Hierarchically Structured Skills

    ERIC Educational Resources Information Center

    Su, Yu-Lan

    2013-01-01

    This dissertation proposes two modified cognitive diagnostic models (CDMs), the deterministic, inputs, noisy, "and" gate with hierarchy (DINA-H) model and the deterministic, inputs, noisy, "or" gate with hierarchy (DINO-H) model. Both models incorporate the hierarchical structures of the cognitive skills in the model estimation…
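
    For reference, the standard DINA item response function on which the hierarchical variant builds can be stated in a few lines; the Q-matrix row and slip/guess values below are invented for illustration.

      import numpy as np

      def dina_p_correct(alpha, q_row, slip, guess):
          """P(correct answer) under the DINA model: eta = 1 iff the examinee
          masters every skill the item requires (per its Q-matrix row); then
          P = (1 - slip)**eta * guess**(1 - eta). A hierarchy, as in DINA-H,
          would further restrict which alpha patterns are admissible."""
          eta = int(np.all(alpha >= q_row))
          return (1 - slip) ** eta * guess ** (1 - eta)

      alpha = np.array([1, 1, 0])  # examinee masters skills 1 and 2, not 3
      q_row = np.array([1, 1, 0])  # item requires skills 1 and 2
      print(dina_p_correct(alpha, q_row, slip=0.1, guess=0.2))  # -> 0.9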

  7. Deterministic Mean-Field Ensemble Kalman Filtering

    DOE PAGES

    Law, Kody J. H.; Tembine, Hamidou; Tempone, Raul

    2016-05-03

    The proof of convergence of the standard ensemble Kalman filter (EnKF) from Le Gland, Monbet, and Tran [Large sample asymptotics for the ensemble Kalman filter, in The Oxford Handbook of Nonlinear Filtering, Oxford University Press, Oxford, UK, 2011, pp. 598--631] is extended to non-Gaussian state-space models. In this paper, a density-based deterministic approximation of the mean-field limit EnKF (DMFEnKF) is proposed, consisting of a PDE solver and a quadrature rule. Given a certain minimal order of convergence κ between the two, this extends to the deterministic filter approximation, which is therefore asymptotically superior to standard EnKF for dimension d

  8. Active temporal multiplexing of indistinguishable heralded single photons

    PubMed Central

    Xiong, C.; Zhang, X.; Liu, Z.; Collins, M. J.; Mahendra, A.; Helt, L. G.; Steel, M. J.; Choi, D. -Y.; Chae, C. J.; Leong, P. H. W.; Eggleton, B. J.

    2016-01-01

    It is a fundamental challenge in quantum optics to deterministically generate indistinguishable single photons through non-deterministic nonlinear optical processes, due to the intrinsic coupling of single- and multi-photon-generation probabilities in these processes. Actively multiplexing photons generated in many temporal modes can decouple these probabilities, but key issues are to minimize resource requirements to allow scalability, and to ensure indistinguishability of the generated photons. Here we demonstrate the multiplexing of photons from four temporal modes solely using fibre-integrated optics and off-the-shelf electronic components. We show a 100% enhancement to the single-photon output probability without introducing additional multi-photon noise. Photon indistinguishability is confirmed by a fourfold Hong–Ou–Mandel quantum interference with a 91±16% visibility after subtracting multi-photon noise due to high pump power. Our demonstration paves the way for scalable multiplexing of many non-deterministic photon sources to a single near-deterministic source, which will be of benefit to future quantum photonic technologies. PMID:26996317
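
    The headline enhancement follows from simple probability arithmetic: with per-mode heralding probability p, N multiplexed temporal modes succeed with probability 1-(1-p)^N while the pump power, and hence the multi-photon noise per mode, is unchanged. A minimal sketch with an illustrative p:

      def herald_prob(p_single: float, n_modes: int) -> float:
          """Probability that at least one of n temporal modes heralds a photon,
          assuming independent modes with per-mode success probability p_single."""
          return 1.0 - (1.0 - p_single) ** n_modes

      p = 0.10  # per-mode heralding probability (illustrative)
      for n in (1, 2, 4):
          print(f"{n} mode(s): {herald_prob(p, n):.4f}")
      # 4 modes give 0.3439 versus 0.1000 for one mode -- well over a 100%
      # enhancement without raising the pump power (and noise) per mode.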

  9. Recent progress in the assembly of nanodevices and van der Waals heterostructures by deterministic placement of 2D materials.

    PubMed

    Frisenda, Riccardo; Navarro-Moratalla, Efrén; Gant, Patricia; Pérez De Lara, David; Jarillo-Herrero, Pablo; Gorbachev, Roman V; Castellanos-Gomez, Andres

    2018-01-02

    Designer heterostructures can now be assembled layer-by-layer with unmatched precision thanks to the recently developed deterministic placement methods to transfer two-dimensional (2D) materials. This possibility constitutes the birth of a very active research field on the so-called van der Waals heterostructures. Moreover, these deterministic placement methods also open the door to fabricate complex devices, which would be otherwise very difficult to achieve by conventional bottom-up nanofabrication approaches, and to fabricate fully-encapsulated devices with exquisite electronic properties. The integration of 2D materials with existing technologies such as photonic and superconducting waveguides and fiber optics is another exciting possibility. Here, we review the state-of-the-art of the deterministic placement methods, describing and comparing the different alternative methods available in the literature, and we illustrate their potential to fabricate van der Waals heterostructures, to integrate 2D materials into complex devices and to fabricate artificial bilayer structures where the layers present a user-defined rotational twisting angle.

  10. First-order reliability application and verification methods for semistatic structures

    NASA Astrophysics Data System (ADS)

    Verderaime, V.

    1994-11-01

    Escalating risks of aerostructures stimulated by increasing size, complexity, and cost should no longer be ignored in conventional deterministic safety design methods. The deterministic pass-fail concept is incompatible with probability and risk assessments; stress audits are shown to be arbitrary and incomplete, and the concept compromises the performance of high-strength materials. A reliability method is proposed that combines first-order reliability principles with deterministic design variables and conventional test techniques to surmount current deterministic stress design and audit deficiencies. Accumulative and propagation design uncertainty errors are defined and appropriately implemented into the classical safety-index expression. The application is reduced to solving for a design factor that satisfies the specified reliability and compensates for uncertainty errors, and then using this design factor as, and instead of, the conventional safety factor in stress analyses. The resulting method is consistent with current analytical skills and verification practices, the culture of most designers, and the development of semistatic structural designs.
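
    A minimal sketch of the first-order reliability computation underlying such methods, assuming normally distributed strength R and stress S (the classical safety-index expression; the numbers are illustrative, not from the paper):

      import math

      def safety_index(mu_r, sd_r, mu_s, sd_s):
          """First-order safety index for normally distributed strength R and
          stress S: beta = (mu_R - mu_S) / sqrt(sd_R**2 + sd_S**2)."""
          return (mu_r - mu_s) / math.hypot(sd_r, sd_s)

      def reliability(beta):
          """P(R > S) = Phi(beta), the standard normal CDF."""
          return 0.5 * (1.0 + math.erf(beta / math.sqrt(2.0)))

      beta = safety_index(mu_r=60.0, sd_r=3.0, mu_s=40.0, sd_s=4.0)  # illustrative
      print(f"beta = {beta:.2f}, reliability = {reliability(beta):.6f}")
      # beta = 4.00 -> reliability ~ 0.999968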

  11. Fault Detection for Nonlinear Process With Deterministic Disturbances: A Just-In-Time Learning Based Data Driven Method.

    PubMed

    Yin, Shen; Gao, Huijun; Qiu, Jianbin; Kaynak, Okyay

    2017-11-01

    Data-driven fault detection plays an important role in industrial systems due to its applicability in case of unknown physical models. In fault detection, disturbances must be taken into account as an inherent characteristic of processes. Nevertheless, fault detection for nonlinear processes with deterministic disturbances still receives little attention, especially in the data-driven field. To solve this problem, a just-in-time learning-based data-driven (JITL-DD) fault detection method for nonlinear processes with deterministic disturbances is proposed in this paper. JITL-DD employs a JITL scheme for process description with local model structures to cope with process dynamics and nonlinearity. The proposed method provides a data-driven fault detection solution for nonlinear processes with deterministic disturbances, and offers inherent online adaptation and high fault detection accuracy. Two nonlinear systems, i.e., a numerical example and a sewage treatment process benchmark, are employed to show the effectiveness of the proposed method.

  12. Deterministic Mean-Field Ensemble Kalman Filtering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Law, Kody J. H.; Tembine, Hamidou; Tempone, Raul

    The proof of convergence of the standard ensemble Kalman filter (EnKF) from Le Gland, Monbet, and Tran [Large sample asymptotics for the ensemble Kalman filter, in The Oxford Handbook of Nonlinear Filtering, Oxford University Press, Oxford, UK, 2011, pp. 598--631] is extended to non-Gaussian state-space models. In this paper, a density-based deterministic approximation of the mean-field limit EnKF (DMFEnKF) is proposed, consisting of a PDE solver and a quadrature rule. Given a certain minimal order of convergence κ between the two, this extends to the deterministic filter approximation, which is therefore asymptotically superior to standard EnKF for dimension d

  13. Fuzzy Versions of Epistemic and Deontic Logic

    NASA Technical Reports Server (NTRS)

    Gounder, Ramasamy S.; Esterline, Albert C.

    1998-01-01

    Epistemic and deontic logics are modal logics, respectively, of knowledge and of the normative concepts of obligation, permission, and prohibition. Epistemic logic is useful in formalizing systems of communicating processes and knowledge and belief in AI (Artificial Intelligence). Deontic logic is useful in computer science wherever we must distinguish between actual and ideal behavior, as in fault tolerance and database integrity constraints. We here discuss fuzzy versions of these logics. In the crisp versions, various axioms correspond to various properties of the structures used in defining the semantics of the logics. Thus, any axiomatic theory will be characterized not only by its axioms but also by the set of properties holding of the corresponding semantic structures. Fuzzy logic does not proceed with axiomatic systems, but fuzzy versions of the semantic properties exist and can be shown to correspond to some of the axioms for the crisp systems in special ways that support dependency networks among assertions in a modal domain. This in turn allows one to implement truth maintenance systems. To our knowledge, we are the first to address fuzzy epistemic and fuzzy deontic logic explicitly and to consider the different systems and semantic properties available. We give the syntax and semantics of epistemic logic and discuss the correspondence between axioms of epistemic logic and properties of semantic structures; the same topics are covered for deontic logic. We then discuss the relationship between axioms and semantic properties for the fuzzy versions of these logics. Our results can be exploited in truth maintenance systems.

  14. A new approach to the effect of sound on vortex dynamics

    NASA Technical Reports Server (NTRS)

    Lund, Fernando; Zabusky, Norman J.

    1987-01-01

    Analytical results are presented on the effect of acoustic radiation on three-dimensional vortex motions in a homogeneous, slightly compressible, inviscid fluid. The flow is considered as linear and irrotational everywhere except inside a very thin cylindrical core region around the vortex filament. In the outside region, a velocity potential is introduced that must be multivalued, and it is shown how to compute this scalar potential if the motion of the vortex filament is prescribed. To find the motion of this singularity in an external potential flow, a variational principle involving a volume integral that must exclude the singular region is considered. A functional of the external potential and vortex filament position is obtained whose extrema give equations to determine the sought-after evolution. Thus, a generalization of the Biot-Savart law to flows with constant sound speed at low Mach number is obtained.

  15. Extension of the root-locus method to a certain class of fractional-order systems.

    PubMed

    Merrikh-Bayat, Farshad; Afshar, Mahdi; Karimi-Ghartemani, Masoud

    2009-01-01

    In this paper, the well-known root-locus method is developed for the special subset of linear time-invariant systems commonly known as fractional-order systems. Transfer functions of these systems are rational functions with polynomials of rational powers of the Laplace variable s. Such systems are defined on a Riemann surface because of their multi-valued nature. A set of rules for plotting the root loci on the first Riemann sheet is presented. The important features of the classical root-locus method such as asymptotes, roots condition on the real axis and breakaway points are extended to the fractional case. It is also shown that the proposed method can assess the closed-loop stability of fractional-order systems in the presence of a varying gain in the loop. Moreover, the effect of perturbation on the root loci is discussed. Three illustrative examples are presented to confirm the effectiveness of the proposed algorithm.

  16. Anonymous voting for multi-dimensional CV quantum system

    NASA Astrophysics Data System (ADS)

    Rong-Hua, Shi; Yi, Xiao; Jin-Jing, Shi; Ying, Guo; Moon-Ho, Lee

    2016-06-01

    We investigate the design of anonymous voting protocols, CV-based binary-valued ballot and CV-based multi-valued ballot with continuous variables (CV) in a multi-dimensional quantum cryptosystem to ensure the security of voting procedure and data privacy. The quantum entangled states are employed in the continuous variable quantum system to carry the voting information and assist information transmission, which takes the advantage of the GHZ-like states in terms of improving the utilization of quantum states by decreasing the number of required quantum states. It provides a potential approach to achieve the efficient quantum anonymous voting with high transmission security, especially in large-scale votes. Project supported by the National Natural Science Foundation of China (Grant Nos. 61272495, 61379153, and 61401519), the Research Fund for the Doctoral Program of Higher Education of China (Grant No. 20130162110012), and the MEST-NRF of Korea (Grant No. 2012-002521).

  17. Blocked Force and Loading Calculations for LaRC THUNDER Actuators

    NASA Technical Reports Server (NTRS)

    Campbell, Joel F.

    2007-01-01

    An analytic approach is developed to predict the performance of LaRC THUNDER actuators under load and under blocked conditions. The problem is treated with the von Karman non-linear analysis combined with a simple Rayleigh-Ritz calculation. From this, shape and displacement under combined load and voltage are calculated. A method is found to calculate the blocked force vs. voltage and the spring force vs. distance. It is found that under certain conditions the blocked force and displacement are almost linear with voltage. It is also found that the spring force is multivalued and has at least one bifurcation point. This bifurcation point is where the device collapses under load and locks to a different bending solution, which occurs at a particular critical load. It is shown that this other bending solution has a reduced amplitude, proportional to the original amplitude times the square of the aspect ratio.

  18. Non-Market Values in a Cost-Benefit World: Evidence from a Choice Experiment.

    PubMed

    Eppink, Florian V; Winden, Matthew; Wright, Will C C; Greenhalgh, Suzie

    2016-01-01

    In support of natural resource and ecosystem service policy, monetary value estimates are often presented to decision makers along with other types of information. There is some evidence that, presented with such 'mixed' information, people prioritise monetary over non-monetary information. We conduct a discrete choice experiment among New Zealand decision makers in which we manipulate the information presented to participants. We find that providing explicit monetary information strengthens the pursuit of economic benefits as well as the avoidance of environmental damage. Cultural impacts, of which we provided only qualitative descriptions, did not affect respondents' choices. Our study provides further evidence that concerns regarding the use of monetary information in decisions with complex, multi-value impacts are valid. Further research is needed to validate our results and find ways to reduce any bias in monetary and non-market information.

  19. Time-delayed autosynchronous swarm control.

    PubMed

    Biggs, James D; Bennet, Derek J; Dadzie, S Kokou

    2012-01-01

    In this paper a general Morse potential model of self-propelling particles is considered in the presence of a time-delayed term and a spring potential. It is shown that the emergent swarm behavior is dependent on the delay term and weights of the time-delayed function, which can be set to induce a stationary swarm, a rotating swarm with uniform translation, and a rotating swarm with a stationary center of mass. An analysis of the mean field equations shows that without a spring potential the motion of the center of mass is determined explicitly by a multivalued function. For a nonzero spring potential the swarm converges to a vortex formation about a stationary center of mass, except at discrete bifurcation points where the center of mass will periodically trace an ellipse. The analytical results defining the behavior of the center of mass are shown to correspond with the numerical swarm simulations.

  20. Non-Market Values in a Cost-Benefit World: Evidence from a Choice Experiment

    PubMed Central

    Eppink, Florian V.; Winden, Matthew; Wright, Will C. C.; Greenhalgh, Suzie

    2016-01-01

    In support of natural resource and ecosystem service policy, monetary value estimates are often presented to decision makers along with other types of information. There is some evidence that, presented with such ‘mixed’ information, people prioritise monetary over non-monetary information. We conduct a discrete choice experiment among New Zealand decision makers in which we manipulate the information presented to participants. We find that providing explicit monetary information strengthens the pursuit of economic benefits as well as the avoidance of environmental damage. Cultural impacts, of which we provided only qualitative descriptions, did not affect respondents’ choices. Our study provides further evidence that concerns regarding the use of monetary information in decisions with complex, multi-value impacts are valid. Further research is needed to validate our results and find ways to reduce any bias in monetary and non-market information. PMID:27783657

  1. Nonlinear hyperbolic theory of thermal waves in metals

    NASA Technical Reports Server (NTRS)

    Wilhelm, H. E.; Choi, S. H.

    1975-01-01

    A closed-form solution for cylindrical thermal waves in metals is given based on the nonlinear hyperbolic system of energy-conservation and heat-flux relaxation equations. It is shown that heat released from a line source propagates radially outward with finite speed in the form of a thermal wave which exhibits a discontinuous wave front. Unique nonlinear thermal-wave solutions exist up to a critical amount of driving energy, i.e., for larger energy releases, the thermal flow becomes multivalued (occurrence of shock waves). By comparison, it is demonstrated that the parabolic thermal-wave theory gives, in general, a misleading picture of the profile and propagation of thermal waves and leads to physical (infinite speed of heat propagation) and mathematical (divergent energy integrals) difficulties. Attention is drawn to the importance of temporal heat-flux relaxation for the physical understanding of fast transient processes such as thermal waves and more general explosions and implosions.

  2. Complex-valued multistate associative memory with nonlinear multilevel functions for gray-level image reconstruction.

    PubMed

    Tanaka, Gouhei; Aihara, Kazuyuki

    2009-09-01

    A widely used complex-valued activation function for complex-valued multistate Hopfield networks is revealed to be essentially based on a multilevel step function. By replacing the multilevel step function with other multilevel characteristics, we present two alternative complex-valued activation functions. One is based on a multilevel sigmoid function, while the other on a characteristic of a multistate bifurcating neuron. Numerical experiments show that both modifications to the complex-valued activation function bring about improvements in network performance for a multistate associative memory. The advantage of the proposed networks over the complex-valued Hopfield networks with the multilevel step function is more outstanding when a complex-valued neuron represents a larger number of multivalued states. Further, the performance of the proposed networks in reconstructing noisy 256 gray-level images is demonstrated in comparison with other recent associative memories to clarify their advantages and disadvantages.
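
    As a sketch of the baseline activation discussed above, the function below snaps a complex post-synaptic sum onto one of K unit-magnitude phase states (a multilevel step in the phase); the rounding convention and the value of K are illustrative choices, not the exact functions proposed in the paper.

      import numpy as np

      def multilevel_step(z: complex, k: int) -> complex:
          """Multilevel step activation for complex-valued multistate networks:
          quantize arg(z) to one of k equal phase sectors and return the
          unit-magnitude state exp(1j*2*pi*m/k). Rounding to the nearest
          sector center is one common convention among several."""
          m = int(np.floor(k * (np.angle(z) % (2 * np.pi)) / (2 * np.pi) + 0.5)) % k
          return np.exp(1j * 2 * np.pi * m / k)

      K = 256  # one phase state per gray level
      z = 0.8 * np.exp(1j * 0.7)  # a noisy post-synaptic sum
      state = multilevel_step(z, K)
      print(np.angle(state), abs(state))  # quantized phase, unit magnitude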

  3. On the Dynamical Regimes of Pattern-Accelerated Electroconvection.

    PubMed

    Davidson, Scott M; Wessling, Matthias; Mani, Ali

    2016-03-03

    Recent research has established that electroconvection can enhance ion transport at polarized surfaces such as membranes and electrodes where it would otherwise be limited by diffusion. The onset of such overlimiting transport can be influenced by the surface topology of the ion selective membranes as well as inhomogeneities in their electrochemical properties. However, there is little knowledge regarding the mechanisms through which these surface variations promote transport. We use high-resolution direct numerical simulations to develop a comprehensive analysis of electroconvective flows generated by geometric patterns of impermeable stripes and investigate their potential to regularize electrokinetic instabilities. Counterintuitively, we find that reducing the permeable area of an ion exchange membrane, with appropriate patterning, increases the overall ion transport rate by up to 80%. In addition, we present analysis of nonpatterned membranes, and find a novel regime of electroconvection where a multivalued current is possible due to the coexistence of multiple convective states.

  4. Automatic Sleep Stage Determination by Multi-Valued Decision Making Based on Conditional Probability with Optimal Parameters

    NASA Astrophysics Data System (ADS)

    Wang, Bei; Sugi, Takenao; Wang, Xingyu; Nakamura, Masatoshi

    Data for human sleep studies may be affected by internal and external influences. Recorded sleep data contain complex and stochastic factors, which increase the difficulty of applying computerized sleep stage determination techniques in clinical practice. The aim of this study is to develop an automatic sleep stage determination system which is optimized for variable sleep data. The main methodology includes two modules: expert knowledge database construction and automatic sleep stage determination. Visual inspection by a qualified clinician is utilized to obtain the probability density functions of the parameters during the learning process of expert knowledge database construction. Parameter selection is introduced in order to make the algorithm flexible. Automatic sleep stage determination is performed based on conditional probability. The results showed close agreement with the visual inspection by the clinician. The developed system can meet the customized requirements of hospitals and institutions.

  5. Equation of state for shock compression of distended solids

    NASA Astrophysics Data System (ADS)

    Grady, Dennis; Fenton, Gregg; Vogler, Tracy

    2014-05-01

    Shock Hugoniot data for full-density and porous compounds of boron carbide, silicon dioxide, tantalum pentoxide, uranium dioxide and playa alluvium are investigated for the purpose of equation-of-state representation of intense shock compression. Complications of the multivalued Hugoniot behavior characteristic of highly distended solids are addressed through the application of enthalpy-based equations of state of the form originally proposed by Rice and Walsh in the late 1950s. The additive measures of cold and thermal pressure intrinsic to the Mie-Gruneisen EOS framework are replaced by isobaric additive functions of the cold and thermal specific volume components in the enthalpy-based formulation. Additionally, experimental evidence reveals enhancement of shock-induced phase transformation on the Hugoniot with increasing levels of initial distension for silicon dioxide, uranium dioxide and possibly boron carbide. Methods for addressing this experimentally observed feature of the shock compression are incorporated into the EOS model.

  6. Equation of State for Shock Compression of High Distension Solids

    NASA Astrophysics Data System (ADS)

    Grady, Dennis

    2013-06-01

    Shock Hugoniot data for full-density and porous compounds of boron carbide, silicon dioxide, tantalum pentoxide, uranium dioxide and playa alluvium are investigated for the purpose of equation-of-state representation of intense shock compression. Complications of multivalued Hugoniot behavior characteristic of highly distended solids are addressed through the application of enthalpy-based equations of state of the form originally proposed by Rice and Walsh in the late 1950's. Additivity of cold and thermal pressure intrinsic to the Mie-Gruneisen EOS framework is replaced by isobaric additive functions of the cold and thermal specific volume components in the enthalpy-based formulation. Additionally, experimental evidence supports acceleration of shock-induced phase transformation on the Hugoniot with increasing levels of initial distention for silicon dioxide, uranium dioxide and possibly boron carbide. Methods for addressing this experimentally observed facet of the shock compression are introduced into the EOS model.

  7. Improving Deterministic Reserve Requirements for Security Constrained Unit Commitment and Scheduling Problems in Power Systems

    NASA Astrophysics Data System (ADS)

    Wang, Fengyu

    Traditional deterministic reserve requirements rely on ad-hoc, rule-of-thumb methods to determine the reserve adequate to ensure a reliable unit commitment. Since congestion and uncertainties exist in the system, both the quantity and the location of reserves are essential to ensure system reliability and market efficiency. The modeling of operating reserves in existing deterministic reserve requirements acquires the operating reserves on a zonal basis and does not fully capture the impact of congestion. The purpose of a reserve zone is to ensure that operating reserves are spread across the network. Operating reserves are shared inside each reserve zone, but intra-zonal congestion may block the deliverability of operating reserves within a zone. Thus, improving reserve policies such as reserve zones may improve the location and deliverability of reserve. As more non-dispatchable renewable resources are integrated into the grid, it will become increasingly difficult to predict the transfer capabilities and the network congestion. At the same time, renewable resources require operators to acquire more operating reserves. With existing deterministic reserve requirements unable to ensure optimal reserve locations, the importance of reserve location and reserve deliverability will increase. While stochastic programming can be used to determine reserve by explicitly modeling uncertainties, there are still scalability and pricing issues. Therefore, new methods to improve existing deterministic reserve requirements are desired. One key barrier to improving existing deterministic reserve requirements is their potential market impacts. A metric, quality of service, is proposed in this thesis to evaluate the price signal and market impacts of proposed hourly reserve zones. The three main goals of this thesis are: 1) to develop a theoretical and mathematical model to better locate reserve while maintaining the deterministic unit commitment and economic dispatch structure, especially with the consideration of renewables; 2) to develop a market settlement scheme for the proposed dynamic reserve policies such that market efficiency is improved; and 3) to evaluate the market impacts and price signal of the proposed dynamic reserve policies.

  8. People Like Logical Truth: Testing the Intuitive Detection of Logical Value in Basic Propositions

    PubMed Central

    2016-01-01

    Recent studies on logical reasoning have suggested that people are intuitively aware of the logical validity of syllogisms or that they intuitively detect conflict between heuristic responses and logical norms via slight changes in their feelings. According to logical intuition studies, logically valid or heuristic logic no-conflict reasoning is fluently processed and induces positive feelings without conscious awareness. One criticism states that such effects of logicality disappear when confounding factors such as the content of syllogisms are controlled. The present study used abstract propositions and tested whether people intuitively detect logical value. Experiment 1 presented four logical propositions (conjunctive, biconditional, conditional, and material implications) regarding a target case and asked the participants to rate the extent to which they liked the statement. Experiment 2 tested the effects of matching bias, as well as intuitive logic, on the reasoners’ feelings by manipulating whether the antecedent or consequent (or both) of the conditional was affirmed or negated. The results showed that both logicality and matching bias affected the reasoners’ feelings, and people preferred logically true targets over logically false ones for all forms of propositions. These results suggest that people intuitively detect what is true from what is false during abstract reasoning. Additionally, a Bayesian mixed model meta-analysis of conditionals indicated that people’s intuitive interpretation of the conditional “if p then q” fits better with the conditional probability, q given p. PMID:28036402

  9. Parameter Estimation in Epidemiology: from Simple to Complex Dynamics

    NASA Astrophysics Data System (ADS)

    Aguiar, Maíra; Ballesteros, Sebastién; Boto, João Pedro; Kooi, Bob W.; Mateus, Luís; Stollenwerk, Nico

    2011-09-01

    We revisit the parameter estimation framework for population biological dynamical systems, and apply it to calibrate various models in epidemiology with empirical time series, namely influenza and dengue fever. When it comes to more complex models, like the multi-strain dynamics describing the virus-host interaction in dengue fever, even the most recently developed parameter estimation techniques, such as maximum likelihood iterated filtering, reach their computational limits. However, the first results of parameter estimation with data on dengue fever from Thailand indicate a subtle interplay between stochasticity and the deterministic skeleton. The deterministic system on its own already displays complex dynamics, up to deterministic chaos and coexistence of multiple attractors.
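
    As a minimal illustration of the calibration step (synthetic data and a plain least-squares objective, far simpler than multi-strain dengue models or iterated filtering), an SIR model can be fitted to a noisy prevalence series as follows:

        import numpy as np
        from scipy.integrate import odeint
        from scipy.optimize import minimize

        def sir(y, t, beta, gamma):
            s, i, _ = y
            return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

        t = np.linspace(0.0, 30.0, 31)
        y0 = [0.99, 0.01, 0.0]

        # Synthetic "observed" prevalence from known parameters plus noise
        rng = np.random.default_rng(1)
        obs = odeint(sir, y0, t, args=(0.5, 0.2))[:, 1] + rng.normal(0, 0.002, 31)

        def sse(params):                     # least-squares calibration objective
            beta, gamma = params
            if beta <= 0 or gamma <= 0:
                return 1e9                   # keep the search in the valid region
            model = odeint(sir, y0, t, args=(beta, gamma))[:, 1]
            return np.sum((model - obs) ** 2)

        fit = minimize(sse, x0=[0.3, 0.1], method="Nelder-Mead")
        print("estimated beta, gamma:", fit.x)   # close to the true (0.5, 0.2)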

  10. Inherent Conservatism in Deterministic Quasi-Static Structural Analysis

    NASA Technical Reports Server (NTRS)

    Verderaime, V.

    1997-01-01

    The cause of the long-suspected excessive conservatism in the prevailing structural deterministic safety factor has been identified as an inherent violation of the error propagation laws when reducing statistical data to deterministic values and then combining them algebraically through successive structural computational processes. These errors are restricted to the applied stress computations, and because mean and variations of the tolerance limit format are added, the errors are positive, serially cumulative, and excessively conservative. Reliability methods circumvent these errors and provide more efficient and uniform safe structures. The document is a tutorial on the deficiencies and nature of the current safety factor and of its improvement and transition to absolute reliability.
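
    A short numeric example shows the inflation the abstract describes: reducing each of two independent stress contributions to a mean-plus-3-sigma tolerance limit and adding the limits is more conservative than applying the error propagation law to the sum (all numbers below are hypothetical):

        import math

        mu_a, sd_a = 100.0, 10.0    # two independent stress contributions
        mu_b, sd_b = 80.0, 8.0      # (hypothetical means and deviations)

        # Deterministic practice: reduce each input to a tolerance limit
        # (mean + 3 sigma) and combine the limits algebraically.
        serial = (mu_a + 3 * sd_a) + (mu_b + 3 * sd_b)

        # Error propagation law: variances of independent terms add, so the
        # 3-sigma limit of the sum uses the root-sum-square of the deviations.
        rss = (mu_a + mu_b) + 3 * math.sqrt(sd_a ** 2 + sd_b ** 2)

        print(serial, rss, serial - rss)   # serial stacking is more conservative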

  11. Radiation tolerant combinational logic cell

    NASA Technical Reports Server (NTRS)

    Maki, Gary R. (Inventor); Whitaker, Sterling (Inventor); Gambles, Jody W. (Inventor)

    2009-01-01

    A system has a reduced sensitivity to Single Event Upset and/or Single Event Transient(s) compared to traditional logic devices. In a particular embodiment, the system includes an input, a logic block, a bias stage, a state machine, and an output. The logic block is coupled to the input. The logic block is for implementing a logic function, receiving a data set via the input, and generating a result f by applying the data set to the logic function. The bias stage is coupled to the logic block. The bias stage is for receiving the result from the logic block and presenting it to the state machine. The state machine is coupled to the bias stage. The state machine is for receiving, via the bias stage, the result generated by the logic block. The state machine is configured to retain a state value for the system. The state value is typically based on the result generated by the logic block. The output is coupled to the state machine. The output is for providing the value stored by the state machine. Some embodiments of the invention produce dual rail outputs Q and Q'. The logic block typically contains combinational logic and is similar, in size and transistor configuration, to a conventional CMOS combinational logic design. However, only a very small portion of the circuits of these embodiments is sensitive to Single Event Upset and/or Single Event Transients.

  12. Reversible logic gates on Physarum Polycephalum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schumann, Andrew

    2015-03-10

    In this paper, we consider possibilities for implementing asynchronous sequential logic gates and quantum-style reversible logic gates with Physarum polycephalum motions. We show that in asynchronous sequential logic gates we can erase information because of uncertainty in the direction of plasmodium propagation. Therefore, quantum-style reversible logic gates are preferable for designing logic circuits on Physarum polycephalum.
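
    Reversibility here means the gate's input/output map is a bijection, so no information is erased. A minimal sketch with two classic reversible gates (generic textbook definitions, not the Physarum implementation):

        from itertools import product

        def fredkin(c, a, b):
            """Controlled swap: swaps a and b when the control c is 1."""
            return (c, b, a) if c else (c, a, b)

        def toffoli(a, b, t):
            """Controlled-controlled-NOT: flips t only when both controls are 1."""
            return (a, b, t ^ (a & b))

        # Bijectivity check: 8 distinct inputs map to 8 distinct outputs,
        # so the input can always be recovered and no information is erased.
        for gate in (fredkin, toffoli):
            outputs = {gate(*bits) for bits in product([0, 1], repeat=3)}
            assert len(outputs) == 8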

  13. Applications of Logic Coverage Criteria and Logic Mutation to Software Testing

    ERIC Educational Resources Information Center

    Kaminski, Garrett K.

    2011-01-01

    Logic is an important component of software. Thus, software logic testing has enjoyed significant research over a period of decades, with renewed interest in the last several years. One approach to detecting logic faults is to create and execute tests that satisfy logic coverage criteria. Another approach to detecting faults is to perform mutation…

  14. Sampled-Data Consensus of Linear Multi-agent Systems With Packet Losses.

    PubMed

    Zhang, Wenbing; Tang, Yang; Huang, Tingwen; Kurths, Jurgen

    In this paper, the consensus problem is studied for a class of multi-agent systems with sampled data and packet losses, where random and deterministic packet losses are considered, respectively. For random packet losses, a Bernoulli-distributed white sequence is used to describe packet dropouts among agents in a stochastic way. For deterministic packet losses, a switched system with stable and unstable subsystems is employed to model packet dropouts in a deterministic way. The purpose of this paper is to derive consensus criteria, such that linear multi-agent systems with sampled-data and packet losses can reach consensus. By means of the Lyapunov function approach and the decomposition method, the design problem of a distributed controller is solved in terms of convex optimization. The interplay among the allowable bound of the sampling interval, the probability of random packet losses, and the rate of deterministic packet losses are explicitly derived to characterize consensus conditions. The obtained criteria are closely related to the maximum eigenvalue of the Laplacian matrix versus the second minimum eigenvalue of the Laplacian matrix, which reveals the intrinsic effect of communication topologies on consensus performance. Finally, simulations are given to show the effectiveness of the proposed results.
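
    A toy simulation of the Bernoulli packet-loss case conveys the flavor of the model (a 4-agent path graph with made-up parameters; the paper's switched-system analysis and controller design are not reproduced):

        import numpy as np

        rng = np.random.default_rng(0)
        L = np.array([[ 1., -1.,  0.,  0.],     # Laplacian of a 4-agent path graph
                      [-1.,  2., -1.,  0.],
                      [ 0., -1.,  2., -1.],
                      [ 0.,  0., -1.,  1.]])
        x = rng.normal(size=4)                  # initial agent states
        h, p_loss = 0.1, 0.3                    # sampling interval, loss probability

        for _ in range(500):
            # Bernoulli packet losses: each link survives with probability 1 - p_loss
            up = np.triu(rng.random(L.shape) > p_loss, 1)
            A = (-L) * (up + up.T)              # surviving (symmetric) adjacency
            L_eff = np.diag(A.sum(axis=1)) - A  # Laplacian of the surviving graph
            x = x - h * L_eff @ x               # sampled-data consensus update

        print(x)   # the states cluster around a common value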

  15. Comparison of space radiation calculations for deterministic and Monte Carlo transport codes

    NASA Astrophysics Data System (ADS)

    Lin, Zi-Wei; Adams, James; Barghouty, Abdulnasser; Randeniya, Sharmalee; Tripathi, Ram; Watts, John; Yepes, Pablo

    For space radiation protection of astronauts or electronic equipment, it is necessary to develop and use accurate radiation transport codes. Radiation transport codes include deterministic codes, such as HZETRN from NASA and UPROP from the Naval Research Laboratory, and Monte Carlo codes such as FLUKA, the Geant4 toolkit and HETC-HEDS. The deterministic codes and Monte Carlo codes complement each other in that deterministic codes are very fast while Monte Carlo codes are more elaborate. Therefore it is important to investigate how well the results of deterministic codes compare with those of Monte Carlo transport codes and where they differ. In this study we evaluate these different codes in their space radiation applications by comparing their output results in the same given space radiation environments, shielding geometry and material. Typical space radiation environments such as the 1977 solar minimum galactic cosmic ray environment are used as the well-defined input, and simple geometries made of aluminum, water and/or polyethylene are used to represent the shielding material. We then compare various outputs of these codes, such as the dose-depth curves and the flux spectra of different fragments and other secondary particles. These comparisons enable us to learn more about the main differences between these space radiation transport codes. At the same time, they help us to learn the qualitative and quantitative features that these transport codes have in common.

  16. Extraction of angle deterministic signals in the presence of stationary speed fluctuations with cyclostationary blind source separation

    NASA Astrophysics Data System (ADS)

    Delvecchio, S.; Antoni, J.

    2012-02-01

    This paper addresses the use of a cyclostationary blind source separation algorithm (namely RRCR) to extract angle-deterministic signals from mechanical rotating machines in the presence of stationary speed fluctuations. This means that only phase fluctuations while the machine is running in steady-state conditions are considered, while run-up or run-down speed variations are not taken into account. The machine is also supposed to run in idle conditions, so non-stationary phenomena due to the load are not considered. It is theoretically assessed that in such operating conditions the deterministic (periodic) signal in the angle domain becomes cyclostationary at first and second orders in the time domain. This fact justifies the use of the RRCR algorithm, which is able to directly extract the angle-deterministic signal from the time domain without performing any kind of interpolation. This is particularly valuable when angular resampling fails because of uncontrolled speed fluctuations. The capability of the proposed approach is verified by means of simulated and actual vibration signals captured on a pneumatic screwdriver handle. In this particular case, not only can the angle-deterministic part be extracted, but the main sources of excitation (i.e. motor shaft imbalance, epicycloidal gear meshing and air pressure forces) affecting the user's hand during operation can also be separated.

  17. Northern Hemisphere glaciation and the evolution of Plio-Pleistocene climate noise

    NASA Astrophysics Data System (ADS)

    Meyers, Stephen R.; Hinnov, Linda A.

    2010-08-01

    Deterministic orbital controls on climate variability are commonly inferred to dominate across timescales of 10⁴-10⁶ years, although some studies have suggested that stochastic processes may be of equal or greater importance. Here we explicitly quantify changes in deterministic orbital processes (forcing and/or pacing) versus stochastic climate processes during the Plio-Pleistocene, via time-frequency analysis of two prominent foraminifera oxygen isotopic stacks. Our results indicate that development of the Northern Hemisphere ice sheet is paralleled by an overall amplification of both deterministic and stochastic climate energy, but their relative dominance is variable. The progression from a more stochastic early Pliocene to a strongly deterministic late Pleistocene is primarily accommodated during two transitory phases of Northern Hemisphere ice sheet growth. This long-term trend is punctuated by “stochastic events,” which we interpret as evidence for abrupt reorganization of the climate system at the initiation and termination of the mid-Pleistocene transition and at the onset of Northern Hemisphere glaciation. In addition to highlighting a complex interplay between deterministic and stochastic climate change during the Plio-Pleistocene, our results support an early onset for Northern Hemisphere glaciation (between 3.5 and 3.7 Ma) and reveal some new characteristics of the orbital signal response, such as the puzzling emergence of 100 ka and 400 ka cyclic climate variability during theoretical eccentricity nodes.

  18. Tag-mediated cooperation with non-deterministic genotype-phenotype mapping

    NASA Astrophysics Data System (ADS)

    Zhang, Hong; Chen, Shu

    2016-01-01

    Tag-mediated cooperation provides a helpful framework for resolving evolutionary social dilemmas. However, most of the previous studies have not taken into account the genotype-phenotype distinction in tags, which may play an important role in the process of evolution. To take this into consideration, we introduce non-deterministic genotype-phenotype mapping into a tag-based model with spatial prisoner's dilemma. By our definition, the similarity between genotypic tags does not directly imply the similarity between phenotypic tags. We find that the non-deterministic mapping from genotypic tag to phenotypic tag has non-trivial effects on tag-mediated cooperation. Although we observe that high levels of cooperation can be established under a wide variety of conditions, especially when the decisiveness is moderate, the uncertainty in the determination of phenotypic tags may have a detrimental effect on the tag mechanism by disturbing the homophilic interaction structure, which can explain the promotion of cooperation in tag systems. Furthermore, the non-deterministic mapping may undermine the robustness of the tag mechanism with respect to various factors such as the structure of the tag space and the tag flexibility. This observation warns us about the danger of applying the classical tag-based models to the analysis of empirical phenomena if the genotype-phenotype distinction is significant in the real world. Non-deterministic genotype-phenotype mapping thus provides a new perspective on the understanding of tag-mediated cooperation.

  19. Boolean logic tree of graphene-based chemical system for molecular computation and intelligent molecular search query.

    PubMed

    Huang, Wei Tao; Luo, Hong Qun; Li, Nian Bing

    2014-05-06

    The most serious, and yet unsolved, problem in constructing molecular computing devices consists in connecting all of these molecular events into a usable device. This report demonstrates the use of a Boolean logic tree for analyzing the chemical event network based on graphene, organic dye, thrombin aptamer, and the Fenton reaction, organizing and connecting these basic chemical events. This chemical event network can then be utilized to implement fluorescent combinatorial logic (including basic logic gates and complex integrated logic circuits) and fuzzy logic computing. On the basis of the Boolean logic tree analysis and logic computing, these basic chemical events can be considered as programmable "words" and chemical interactions as "syntax" logic rules to construct a molecular search engine for performing intelligent molecular search queries. Our approach is helpful in developing advanced logic programs based on molecules for application in biosensing, nanotechnology, and drug delivery.

  20. A Unit on Deterministic Chaos for Student Teachers

    ERIC Educational Resources Information Center

    Stavrou, D.; Assimopoulos, S.; Skordoulis, C.

    2013-01-01

    A unit aiming to introduce pre-service teachers of primary education to the limited predictability of deterministic chaotic systems is presented. The unit is based on a commercial chaotic pendulum system connected with a data acquisition interface. The capabilities and difficulties in understanding the notion of limited predictability of 18…

  1. A Deterministic Annealing Approach to Clustering AIRS Data

    NASA Technical Reports Server (NTRS)

    Guillaume, Alexandre; Braverman, Amy; Ruzmaikin, Alexander

    2012-01-01

    We will examine the validity of means and standard deviations as a basis for climate data products. We will explore the conditions under which these two simple statistics are inadequate summaries of the underlying empirical probability distributions by contrasting them with a nonparametric method called the Deterministic Annealing technique.
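
    A compact sketch of the deterministic annealing idea follows: cluster prototypes are fitted with soft (Gibbs) assignments at a fictitious temperature that is gradually lowered. The 1-D synthetic data are an invented stand-in for AIRS observations:

        import numpy as np

        rng = np.random.default_rng(0)
        # Synthetic 1-D data from two clusters (stand-in for real observations)
        x = np.concatenate([rng.normal(-2, 0.5, 100),
                            rng.normal( 3, 0.5, 100)])[:, None]

        K, T = 2, 10.0                                   # prototypes, start temperature
        centers = x[rng.choice(len(x), K, replace=False)]

        while T > 1e-3:
            d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
            w = np.exp(-(d2 - d2.min(axis=1, keepdims=True)) / T)   # Gibbs weights
            p = w / w.sum(axis=1, keepdims=True)                    # soft assignments
            centers = (p.T @ x) / p.sum(axis=0)[:, None]            # re-estimate
            T *= 0.9                                                # anneal

        print(centers.ravel())   # approaches the two cluster means as T -> 0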

  2. INTEGRATED PROBABILISTIC AND DETERMINISTIC MODELING TECHNIQUES IN ESTIMATING EXPOSURE TO WATER-BORNE CONTAMINANTS: PART 2 PHARMACOKINETIC MODELING

    EPA Science Inventory

    The Total Exposure Model (TEM) uses deterministic and stochastic methods to estimate the exposure of a person performing daily activities of eating, drinking, showering, and bathing. There were 250 time histories generated, by subject with activities, for the three exposure ro...

  3. Integrability and Chaos: The Classical Uncertainty

    ERIC Educational Resources Information Center

    Masoliver, Jaume; Ros, Ana

    2011-01-01

    In recent years there has been a considerable increase in the publishing of textbooks and monographs covering what was formerly known as random or irregular deterministic motion, now referred to as deterministic chaos. There is still substantial interest in a matter that is included in many graduate and even undergraduate courses on classical…

  4. The development of the deterministic nonlinear PDEs in particle physics to stochastic case

    NASA Astrophysics Data System (ADS)

    Abdelrahman, Mahmoud A. E.; Sohaly, M. A.

    2018-06-01

    In the present work, an accurate method called the Riccati-Bernoulli sub-ODE technique is used for solving the deterministic and stochastic cases of the Phi-4 equation and the nonlinear foam drainage equation. The control of the randomness input is also studied for the stability of the stochastic process solution.

  5. Contemporary Genetics for Gender Researchers: Not Your Grandma's Genetics Anymore

    ERIC Educational Resources Information Center

    Salk, Rachel H.; Hyde, Janet S.

    2012-01-01

    Over the past century, much of genetics was deterministic, and feminist researchers framed justified criticisms of genetics research. However, over the past two decades, genetics research has evolved remarkably and has moved far from earlier deterministic approaches. Our article provides a brief primer on modern genetics, emphasizing contemporary…

  6. Technological Utopia, Dystopia and Ambivalence: Teaching with Social Media at a South African University

    ERIC Educational Resources Information Center

    Rambe, Patient; Nel, Liezel

    2015-01-01

    The discourse of social media adoption in higher education has often been funnelled through utopian and dystopian perspectives, which are polarised but determinist theorisations of human engagement with educational technologies. Consequently, these determinist approaches have obscured a broadened grasp of the situated, socially constructed nature…

  7. Empirical Data Fusion for Convective Weather Hazard Nowcasting

    NASA Astrophysics Data System (ADS)

    Williams, J.; Ahijevych, D.; Steiner, M.; Dettling, S.

    2009-09-01

    This paper describes a statistical analysis approach to developing an automated convective weather hazard nowcast system suitable for use by aviation users in strategic route planning and air traffic management. The analysis makes use of numerical weather prediction model fields and radar, satellite, and lightning observations and derived features along with observed thunderstorm evolution data, which are aligned using radar-derived motion vectors. Using a dataset collected during the summers of 2007 and 2008 over the eastern U.S., the predictive contributions of the various potential predictor fields are analyzed for various spatial scales, lead-times and scenarios using a technique called random forests (RFs). A minimal, skillful set of predictors is selected for each scenario requiring distinct forecast logic, and RFs are used to construct an empirical probabilistic model for each. The resulting data fusion system, which ran in real-time at the National Center for Atmospheric Research during the summer of 2009, produces probabilistic and deterministic nowcasts of the convective weather hazard and assessments of the prediction uncertainty. The nowcasts' performance and results for several case studies are presented to demonstrate the value of this approach. This research has been funded by the U.S. Federal Aviation Administration to support the development of the Consolidated Storm Prediction for Aviation (CoSPA) system, which is intended to provide convective hazard nowcasts and forecasts for the U.S. Next Generation Air Transportation System (NextGen).
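
    In outline, the empirical model is a classifier trained on aligned predictor fields whose class probabilities serve as the nowcast. The sketch below uses scikit-learn with synthetic stand-ins for the radar, satellite, lightning and NWP predictors; none of the CoSPA data or feature set is reproduced:

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)
        # Synthetic stand-ins for aligned predictor fields at each grid point
        # (e.g. reflectivity, brightness temperature, lightning density, CAPE)
        X = rng.normal(size=(5000, 4))
        # Synthetic "convective hazard occurred" label, loosely tied to X
        y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(0, 0.5, 5000) > 1.0).astype(int)

        rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
        prob = rf.predict_proba(X[:5])[:, 1]   # probabilistic nowcast
        det = (prob > 0.5).astype(int)         # deterministic product by thresholding
        print(prob, det)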

  8. Preparation and measurement of three-qubit entanglement in a superconducting circuit.

    PubMed

    Dicarlo, L; Reed, M D; Sun, L; Johnson, B R; Chow, J M; Gambetta, J M; Frunzio, L; Girvin, S M; Devoret, M H; Schoelkopf, R J

    2010-09-30

    Traditionally, quantum entanglement has been central to foundational discussions of quantum mechanics. The measurement of correlations between entangled particles can have results at odds with classical behaviour. These discrepancies grow exponentially with the number of entangled particles. With the ample experimental confirmation of quantum mechanical predictions, entanglement has evolved from a philosophical conundrum into a key resource for technologies such as quantum communication and computation. Although entanglement in superconducting circuits has been limited so far to two qubits, the extension of entanglement to three, eight and ten qubits has been achieved among spins, ions and photons, respectively. A key question for solid-state quantum information processing is whether an engineered system could display the multi-qubit entanglement necessary for quantum error correction, which starts with tripartite entanglement. Here, using a circuit quantum electrodynamics architecture, we demonstrate deterministic production of three-qubit Greenberger-Horne-Zeilinger (GHZ) states with fidelity of 88 per cent, measured with quantum state tomography. Several entanglement witnesses detect genuine three-qubit entanglement by violating biseparable bounds by 830 ± 80 per cent. We demonstrate the first step of basic quantum error correction, namely the encoding of a logical qubit into a manifold of GHZ-like states using a repetition code. The integration of this encoding with decoding and error-correcting steps in a feedback loop will be the next step for quantum computing with integrated circuits.
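
    The ideal gate sequence behind a GHZ state is short: a Hadamard followed by two CNOTs. The numpy state-vector sketch below shows the ideal mathematics only, not the circuit-QED implementation or its tomography:

        import numpy as np

        H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
        I2 = np.eye(2)
        CNOT = np.array([[1, 0, 0, 0],
                         [0, 1, 0, 0],
                         [0, 0, 0, 1],
                         [0, 0, 1, 0]])

        # Basis ordering: index = 4*q0 + 2*q1 + q2 (qubit 0 most significant)
        state = np.zeros(8)
        state[0] = 1.0                                   # start in |000>
        state = np.kron(np.kron(H, I2), I2) @ state      # Hadamard on qubit 0
        state = np.kron(CNOT, I2) @ state                # CNOT, control 0 -> target 1
        perm = [0, 1, 2, 3, 5, 4, 7, 6]                  # CNOT, control 0 -> target 2
        state = np.eye(8)[:, perm] @ state

        print(np.round(state, 3))   # 1/sqrt(2) on |000> and |111>: a GHZ state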

  9. On the notion of free will in the Free Will Theorem

    NASA Astrophysics Data System (ADS)

    Landsman, Klaas

    2017-02-01

    The (Strong) Free Will Theorem (FWT) of Conway and Kochen (2009) on the one hand follows from uncontroversial parts of modern physics and elementary mathematical and logical reasoning, but on the other hand seems predicated on an undefined notion of free will (allowing physicists to "freely choose" the settings of their experiments). This makes the theorem philosophically vulnerable, especially if it is construed as a proof of indeterminism or even of libertarian free will (as Conway & Kochen suggest). However, Cator and Landsman (Foundations of Physics 44, 781-791, 2014) previously gave a reformulation of the FWT that does not presuppose indeterminism, but rather assumes a mathematically specific form of such "free choices" even in a deterministic world (based on a non-probabilistic independence assumption). In the present paper, which is a philosophical sequel to the one just mentioned, I argue that the concept of free will used in the latter version of the FWT is essentially the one proposed by Lewis (1981), also known as 'local miracle compatibilism' (of which I give a mathematical interpretation that might be of some independent interest also beyond its application to the FWT). As such, the (reformulated) FWT in my view challenges compatibilist free will à la Lewis (albeit in a contrived way via bipartite EPR-type experiments), falling short of supporting libertarian free will.

  10. Reconfigurable Optical Directed-Logic Circuits

    DTIC Science & Technology

    2015-11-20

    Final report AFRL-AFOSR-VA-TR-2016-0053 (20 November 2015) for AFOSR grant FA9550-12-1-0261, "Reconfigurable Optical Directed-Logic Circuits", by Jacob T. Robinson and Qianfan Xu, William Marsh Rice University, Houston, TX.

  11. Rule-Based Runtime Verification

    NASA Technical Reports Server (NTRS)

    Barringer, Howard; Goldberg, Allen; Havelund, Klaus; Sen, Koushik

    2003-01-01

    We present a rule-based framework for defining and implementing finite trace monitoring logics, including future and past time temporal logic, extended regular expressions, real-time logics, interval logics, forms of quantified temporal logics, and so on. Our logic, EAGLE, is implemented as a Java library and involves novel techniques for rule definition, manipulation and execution. Monitoring is done on a state-by-state basis, without storing the execution trace.
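
    The flavor of state-by-state monitoring without storing the trace can be shown with a small hand-rolled monitor for the property "every p is eventually followed by q" (plain Python; EAGLE itself is a Java library, and its rule syntax is not reproduced):

        def monitor(trace):
            """Track one pending obligation instead of storing the trace."""
            pending = False
            for i, state in enumerate(trace):
                if state.get("p"):
                    pending = True              # obligation: a q must still come
                if state.get("q"):
                    pending = False             # obligation discharged
                print(f"state {i}: {'obligation pending' if pending else 'ok'}")
            return not pending                  # verdict at the end of the trace

        trace = [{"p": True}, {}, {"q": True}, {"p": True}]
        print("satisfied at end of trace:", monitor(trace))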

  12. Specifying real-time systems with interval logic

    NASA Technical Reports Server (NTRS)

    Rushby, John

    1988-01-01

    Pure temporal logic makes no reference to time. An interval temporal logic and an extension to that logic which includes real-time constraints are described. The application of this logic is demonstrated by giving a specification for the well-known lift (elevator) example. It is shown how interval logic can be extended to include a notion of process. How the specification language and verification environment of EHDM could be enhanced to support this logic is described. A specification of the alternating bit protocol in this extended version of the specification language of EHDM is given.

  13. Low delay and area efficient soft error correction in arbitration logic

    DOEpatents

    Sugawara, Yutaka

    2013-09-10

    There is provided an arbitration logic device for controlling access to a shared resource. The arbitration logic device comprises at least one storage element, a winner selection logic device, and an error detection logic device. The storage element stores a plurality of requestors' information. The winner selection logic device selects a winner requestor among the requestors based on the requestors' information received from a plurality of requestors. The winner selection logic device selects the winner requestor without checking whether there is a soft error in the winner requestor's information.

  14. Spintronic logic: from switching devices to computing systems

    NASA Astrophysics Data System (ADS)

    Friedman, Joseph S.

    2017-09-01

    Though numerous spintronic switching devices have been proposed or demonstrated, there has been significant difficulty in translating these advances into practical computing systems. The challenge of cascading has impeded the integration of multiple devices into a logic family, and several proposed solutions potentially overcome these challenges. Here, the cascading techniques by which the output of each spintronic device can drive the input of another device are described for several logic families, including spin-diode logic (in particular, all-carbon spin logic), complementary magnetic tunnel junction logic (CMAT), and emitter-coupled spin-transistor logic (ECSTL).

  15. Enzymatic AND logic gates operated under conditions characteristic of biomedical applications.

    PubMed

    Melnikov, Dmitriy; Strack, Guinevere; Zhou, Jian; Windmiller, Joshua Ray; Halámek, Jan; Bocharova, Vera; Chuang, Min-Chieh; Santhosh, Padmanabhan; Privman, Vladimir; Wang, Joseph; Katz, Evgeny

    2010-09-23

    Experimental and theoretical analyses of the lactate dehydrogenase and glutathione reductase based enzymatic AND logic gates in which the enzymes and their substrates serve as logic inputs are performed. These two systems are examples of the novel, previously unexplored class of biochemical logic gates that illustrate potential biomedical applications of biochemical logic. They are characterized by input concentrations at logic 0 and 1 states corresponding to normal and pathophysiological conditions. Our analysis shows that the logic gates under investigation have similar noise characteristics. Both significantly amplify random noise present in inputs; however, we establish that for realistic widths of the input noise distributions, it is still possible to differentiate between the logic 0 and 1 states of the output. This indicates that reliable detection of pathophysiological conditions is indeed possible with such enzyme logic systems.

  16. Fuzzy logic controller optimization

    DOEpatents

    Sepe, Jr., Raymond B; Miller, John Michael

    2004-03-23

    A method is provided for optimizing a rotating induction machine system fuzzy logic controller. The fuzzy logic controller has at least one input and at least one output. Each input accepts a machine system operating parameter. Each output produces at least one machine system control parameter. The fuzzy logic controller generates each output based on at least one input and on fuzzy logic decision parameters. Optimization begins by obtaining a set of data relating each control parameter to at least one operating parameter for each machine operating region. A model is constructed for each machine operating region based on the machine operating region data obtained. The fuzzy logic controller is simulated with at least one created model in a feedback loop from a fuzzy logic output to a fuzzy logic input. Fuzzy logic decision parameters are optimized based on the simulation.
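
    A minimal sketch of the kind of controller being tuned: one input, three triangular membership functions, singleton outputs, and weighted-average defuzzification. The memberships and output levels stand in for the "fuzzy logic decision parameters" that the patented method optimizes against the fitted machine models; all values here are hypothetical:

        import numpy as np

        def tri(x, a, b, c):
            """Triangular membership function peaking at b."""
            return float(np.maximum(np.minimum((x - a) / (b - a),
                                               (c - x) / (c - b)), 0.0))

        def fuzzy_controller(error, out_levels=(0.0, 5.0, 10.0)):
            """One-input controller with singleton consequents (Sugeno-style)."""
            w = np.array([tri(error, -1, 0, 1),    # "low" firing strength
                          tri(error,  0, 1, 2),    # "medium"
                          tri(error,  1, 2, 3)])   # "high"
            # Weighted-average defuzzification of the singleton outputs
            return float(w @ np.array(out_levels) / max(w.sum(), 1e-9))

        print(fuzzy_controller(0.5), fuzzy_controller(1.5))

    The optimization described above would wrap such a controller in a simulated feedback loop and adjust the membership breakpoints and output levels against the machine-region models.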

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sadowski, Greg

    In one form, a logic circuit includes an asynchronous logic circuit, a synchronous logic circuit, and an interface circuit coupled between the asynchronous logic circuit and the synchronous logic circuit. The asynchronous logic circuit has a plurality of asynchronous outputs for providing a corresponding plurality of asynchronous signals. The synchronous logic circuit has a plurality of synchronous inputs corresponding to the plurality of asynchronous outputs, a stretch input for receiving a stretch signal, and a clock output for providing a clock signal. The synchronous logic circuit provides the clock signal as a periodic signal but prolongs a predetermined state of the clock signal while the stretch signal is active. The asynchronous interface detects whether metastability could occur when latching any of the plurality of the asynchronous outputs of the asynchronous logic circuit using said clock signal, and activates the stretch signal while the metastability could occur.

  18. Deterministic chaos in an ytterbium-doped mode-locked fiber laser

    NASA Astrophysics Data System (ADS)

    Mélo, Lucas B. A.; Palacios, Guillermo F. R.; Carelli, Pedro V.; Acioli, Lúcio H.; Rios Leite, José R.; de Miranda, Marcio H. G.

    2018-05-01

    We experimentally study the nonlinear dynamics of a femtosecond ytterbium-doped mode-locked fiber laser. With the laser operating in the pulsed regime, a route to chaos is presented, starting from stable mode-locking and passing through period-two, period-four, chaotic and period-three regimes. Return maps and bifurcation diagrams were extracted from time series for each regime. The analysis of the time series with the laser operating in the quasi mode-locked regime presents deterministic chaos described by a one-dimensional Rössler map. A positive Lyapunov exponent λ = 0.14 confirms the deterministic chaos of the system. We suggest an explanation of the observed map by relating gain saturation and intra-cavity loss.

  19. The viability of ADVANTG deterministic method for synthetic radiography generation

    NASA Astrophysics Data System (ADS)

    Bingham, Andrew; Lee, Hyoung K.

    2018-07-01

    Fast simulation techniques to generate synthetic radiographic images of high resolution are helpful when new radiation imaging systems are designed. However, the standard stochastic approach requires lengthy run times, with poorer statistics at higher resolution. The viability of a deterministic approach to synthetic radiography image generation was therefore explored, with the aim of quantifying the computational time decrease over the stochastic method. ADVANTG was compared to MCNP in multiple scenarios, including a small radiography system prototype, to simulate high-resolution radiography images. By using the ADVANTG deterministic code to simulate radiography images, the computational time was found to decrease by a factor of 10 to 13 compared to the MCNP stochastic approach while retaining image quality.

  20. Exploring the institutional logics of health professions education scholarship units.

    PubMed

    Varpio, Lara; O'Brien, Bridget; Hu, Wendy; Ten Cate, Olle; Durning, Steven J; van der Vleuten, Cees; Gruppen, Larry; Irby, David; Humphrey-Murto, Susan; Hamstra, Stanley J

    2017-07-01

    Although health professions education scholarship units (HPESUs) share a commitment to the production and dissemination of rigorous educational practices and research, they are situated in many different contexts and have a wide range of structures and functions. In this study, the authors explore the institutional logics common across HPESUs, and how these logics influence the organisation and activities of HPESUs. The authors analysed interviews with HPESU leaders in Canada (n = 12), Australia (n = 21), New Zealand (n = 3) and the USA (n = 11). Using an iterative process, they engaged in inductive and deductive analyses to identify institutional logics across all participating HPESUs. They explored the contextual factors that influence how these institutional logics impact each HPESU's structure and function. Participants identified three institutional logics influencing the organisational structure and functions of an HPESU: (i) the logic of financial accountability; (ii) the logic of a cohesive education continuum, and (iii) the logic of academic research, service and teaching. Although most HPESUs embodied all three logics, the power of the logics varied among units. The relative power of each logic influenced leaders' decisions about how members of the unit allocate their time, and what kinds of scholarly contribution and product are valued by the HPESU. Identifying the configuration of these three logics within and across HPESUs provides insights into the reasons why individual units are structured and function in particular ways. Having a common language in which to discuss these logics can enhance transparency, facilitate evaluation, and help leaders select appropriate indicators of HPESU success. © 2017 John Wiley & Sons Ltd and The Association for the Study of Medical Education.

  1. The Temporal Logic of the Tower Chief System

    NASA Technical Reports Server (NTRS)

    Hazelton, Lyman R., Jr.

    1990-01-01

    The purpose is to describe the logic used in the reasoning scheme employed in the Tower Chief system, a runway configuration management system. First, a review of classical logic is given. Defeasible logics, truth maintenance, default logic, temporally dependent propositions, and resource allocation and planning are discussed.

  2. Digital design using selection operations

    NASA Technical Reports Server (NTRS)

    Miles, Lowell H. (Inventor); Whitaker, Sterling R. (Inventor); Cameron, Eric G. (Inventor)

    2004-01-01

    A digital integrated circuit chip is designed by identifying a logical structure to be implemented. This logical structure is represented in terms of logical operations, at least 5% of which include selection operations. A determination is made of logic cells that correspond to an implementation of these logical operations.

  3. A DETERMINISTIC GEOMETRIC REPRESENTATION OF TEMPORAL RAINFALL: SENSITIVITY ANALYSIS FOR A STORM IN BOSTON. (R824780)

    EPA Science Inventory

    In an earlier study, Puente and Obregón [Water Resour. Res. 32(1996)2825] reported on the usage of a deterministic fractal–multifractal (FM) methodology to faithfully describe an 8.3 h high-resolution rainfall time series in Boston, gathered every 15 s ...

  4. Seed availability constrains plant species sorting along a soil fertility gradient

    Treesearch

    Bryan L. Foster; Erin J. Questad; Cathy D. Collins; Cheryl A. Murphy; Timothy L. Dickson; Val H. Smith

    2011-01-01

    1. Spatial variation in species composition within and among communities may be caused by deterministic, niche-based species sorting in response to underlying environmental heterogeneity as well as by stochastic factors such as dispersal limitation and variable species pools. An important goal in ecology is to reconcile deterministic and stochastic perspectives of...

  5. The Role of Probability and Intentionality in Preschoolers' Causal Generalizations

    ERIC Educational Resources Information Center

    Sobel, David M.; Sommerville, Jessica A.; Travers, Lea V.; Blumenthal, Emily J.; Stoddard, Emily

    2009-01-01

    Three experiments examined whether preschoolers recognize that the causal properties of objects generalize to new members of the same set given either deterministic or probabilistic data. Experiment 1 found that 3- and 4-year-olds were able to make such a generalization given deterministic data but were at chance when they observed probabilistic…

  6. Service-Oriented Architecture (SOA) Instantiation within a Hard Real-Time, Deterministic Combat System Environment

    ERIC Educational Resources Information Center

    Moreland, James D., Jr

    2013-01-01

    This research investigates the instantiation of a Service-Oriented Architecture (SOA) within a hard real-time (stringent time constraints), deterministic (maximum predictability) combat system (CS) environment. There are numerous stakeholders across the U.S. Department of the Navy who are affected by this development, and therefore the system…

  7. CPT-based probabilistic and deterministic assessment of in situ seismic soil liquefaction potential

    USGS Publications Warehouse

    Moss, R.E.S.; Seed, R.B.; Kayen, R.E.; Stewart, J.P.; Der Kiureghian, A.; Cetin, K.O.

    2006-01-01

    This paper presents a complete methodology for both probabilistic and deterministic assessment of seismic soil liquefaction triggering potential based on the cone penetration test (CPT). A comprehensive worldwide set of CPT-based liquefaction field case histories were compiled and back analyzed, and the data then used to develop probabilistic triggering correlations. Issues investigated in this study include improved normalization of CPT resistance measurements for the influence of effective overburden stress, and adjustment to CPT tip resistance for the potential influence of "thin" liquefiable layers. The effects of soil type and soil character (i.e., "fines" adjustment) for the new correlations are based on a combination of CPT tip and sleeve resistance. To quantify probability for performance-based engineering applications, Bayesian "regression" methods were used, and the uncertainties of all variables comprising both the seismic demand and the liquefaction resistance were estimated and included in the analysis. The resulting correlations were developed using a Bayesian framework and are presented in both probabilistic and deterministic formats. The results are compared to previous probabilistic and deterministic correlations. © 2006 ASCE.

  8. Comparison of Deterministic and Probabilistic Radial Distribution Systems Load Flow

    NASA Astrophysics Data System (ADS)

    Gupta, Atma Ram; Kumar, Ashwani

    2017-12-01

    The distribution system network today faces the challenge of meeting increased load demands from the industrial, commercial and residential sectors. The pattern of load is highly dependent on consumer behavior and on temporal factors such as season of the year, day of the week or time of day. In deterministic radial distribution load flow studies the load is taken as constant, but load varies continually and with a high degree of uncertainty, so there is a need to model probable realistic load. Monte Carlo simulation is used to model the probable realistic load by generating random values of active and reactive power load from the mean and standard deviation of the load, and a deterministic radial load flow is solved with these values. The probabilistic solution is then reconstructed from the deterministic data obtained for each simulation. The main contributions of the work are: finding the impact of probable realistic ZIP load modeling on balanced radial distribution load flow; finding the impact of probable realistic ZIP load modeling on unbalanced radial distribution load flow; and comparing the voltage profile and losses with probable realistic ZIP load modeling for balanced and unbalanced radial distribution load flow.
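
    The reconstruction step can be sketched in a few lines: sample loads from the assumed distribution, run a deterministic load flow per sample, then take statistics over the runs. The two-bus feeder and its numbers below are invented for illustration; a real study would use a full radial network and ZIP load models:

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy two-bus feeder: source at V0, one line (R, X), one PQ load (pu)
        R, X, V0 = 0.02, 0.04, 1.0
        mu_p, sd_p, mu_q, sd_q = 0.8, 0.08, 0.3, 0.03   # invented load statistics

        def radial_load_flow(p, q, iters=20):
            """Deterministic fixed-point sweep for the receiving-end voltage."""
            v = V0
            for _ in range(iters):
                v = V0 - (p * R + q * X) / v            # approximate voltage drop
            return v

        # Sample loads, run a deterministic load flow per sample, then take
        # statistics of the runs to reconstruct the probabilistic solution.
        v = [radial_load_flow(rng.normal(mu_p, sd_p), rng.normal(mu_q, sd_q))
             for _ in range(5000)]
        print(np.mean(v), np.std(v), np.quantile(v, 0.05))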

  9. Unsteady Flows in a Single-Stage Transonic Axial-Flow Fan Stator Row. Ph.D. Thesis - Iowa State Univ.

    NASA Technical Reports Server (NTRS)

    Hathaway, Michael D.

    1986-01-01

    Measurements of the unsteady velocity field within the stator row of a transonic axial-flow fan were acquired using a laser anemometer. Measurements were obtained on axisymmetric surfaces located at 10 and 50 percent span from the shroud, with the fan operating at maximum efficiency at design speed. The ensemble-average and variance of the measured velocities are used to identify rotor-wake-generated (deterministic) unsteadiness and turbulence, respectively. Correlations of both deterministic and turbulent velocity fluctuations provide information on the characteristics of unsteady interactions within the stator row. These correlations are derived from the Navier-Stokes equation in a manner similar to deriving the Reynolds stress terms, whereby various averaging operators are used to average the aperiodic, deterministic, and turbulent velocity fluctuations which are known to be present in multistage turbomachines. The correlations of deterministic and turbulent velocity fluctuations throughout the axial fan stator row are presented. In particular, amplification and attenuation of both types of unsteadiness are shown to occur within the stator blade passage.

  10. Precision production: enabling deterministic throughput for precision aspheres with MRF

    NASA Astrophysics Data System (ADS)

    Maloney, Chris; Entezarian, Navid; Dumas, Paul

    2017-10-01

    Aspherical lenses offer advantages over spherical optics by improving image quality or reducing the number of elements necessary in an optical system. Aspheres are no longer being used exclusively by high-end optical systems but are now replacing spherical optics in many applications. The need for a method of production-manufacturing of precision aspheres has emerged and is part of the reason that the optics industry is shifting away from artisan-based techniques towards more deterministic methods. Not only does Magnetorheological Finishing (MRF) empower deterministic figure correction for the most demanding aspheres but it also enables deterministic and efficient throughput for series production of aspheres. The Q-flex MRF platform is designed to support batch production in a simple and user friendly manner. Thorlabs routinely utilizes the advancements of this platform and has provided results from using MRF to finish a batch of aspheres as a case study. We have developed an analysis notebook to evaluate necessary specifications for implementing quality control metrics. MRF brings confidence to optical manufacturing by ensuring high throughput for batch processing of aspheres.

  11. Down to the roughness scale assessment of piston-ring/liner contacts

    NASA Astrophysics Data System (ADS)

    Checo, H. M.; Jaramillo, A.; Ausas, R. F.; Jai, M.; Buscaglia, G. C.

    2017-02-01

    The effects of surface roughness in hydrodynamic bearings have been accounted for through several approaches, the most widely used being averaging or stochastic techniques. With these, the surface is not treated “as it is” but by means of an assumed probability distribution for the roughness. The so-called direct, deterministic, or measured-surface simulations solve the lubrication problem with realistic surfaces down to the roughness scale. This leads to expensive computational problems. Most researchers have tackled this problem considering non-moving surfaces and neglecting the ring dynamics to reduce the computational burden. What is proposed here is to solve the fully deterministic simulation both in space and in time, so that the actual movement of the surfaces and the ring dynamics are taken into account. This simulation is much more complex than previous ones, as it is intrinsically transient. The feasibility of these fully deterministic simulations is illustrated in two cases: fully deterministic simulation of liner surfaces with diverse finishings (honed and coated bores) with constant piston velocity and load on the ring, and also in real engine conditions.

  12. Discrete-Time Deterministic Q-Learning: A Novel Convergence Analysis.

    PubMed

    Wei, Qinglai; Lewis, Frank L; Sun, Qiuye; Yan, Pengfei; Song, Ruizhuo

    2017-05-01

    In this paper, a novel discrete-time deterministic Q-learning algorithm is developed. In each iteration of the developed Q-learning algorithm, the iterative Q function is updated for all the state and control spaces, instead of for a single state and a single control as in the traditional Q-learning algorithm. A new convergence criterion is established to guarantee that the iterative Q function converges to the optimum, where the convergence criterion on the learning rates for traditional Q-learning algorithms is simplified. During the convergence analysis, the upper and lower bounds of the iterative Q function are analyzed to obtain the convergence criterion, instead of analyzing the iterative Q function itself. For convenience of analysis, the convergence properties for the undiscounted case of the deterministic Q-learning algorithm are first developed. Then, considering the discount factor, the convergence criterion for the discounted case is established. Neural networks are used to approximate the iterative Q function and compute the iterative control law, respectively, to facilitate the implementation of the deterministic Q-learning algorithm. Finally, simulation results and comparisons are given to illustrate the performance of the developed algorithm.
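
    The defining feature, updating the iterative Q function over the whole state and control spaces each iteration, can be sketched on a deterministic toy problem (a 5-state line world with cost minimization; the dynamics are invented, and tables replace the paper's neural-network approximators):

        import numpy as np

        # Deterministic toy system: 5 states on a line, actions move left/right;
        # reaching the rightmost state costs nothing, every other step costs 1.
        n_states, n_actions, gamma = 5, 2, 0.9

        def step(s, a):
            s_next = min(max(s + (1 if a == 1 else -1), 0), n_states - 1)
            return s_next, 0.0 if s_next == n_states - 1 else 1.0

        Q = np.zeros((n_states, n_actions))
        for _ in range(200):
            Q_new = np.empty_like(Q)
            # Update the iterative Q function for ALL states and controls at once
            for s in range(n_states):
                for a in range(n_actions):
                    s_next, cost = step(s, a)
                    Q_new[s, a] = cost + gamma * Q[s_next].min()
            Q = Q_new

        print(Q.argmin(axis=1))   # greedy policy: always move right, to the goal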

  13. Cascading of molecular logic gates for advanced functions: a self-reporting, activatable photosensitizer.

    PubMed

    Erbas-Cakmak, Sundus; Akkaya, Engin U

    2013-10-18

    Logical progress: Independent molecular logic gates have been designed and characterized. Then, the individual molecular logic gates were coerced to work together within a micelle. Information relay between the two logic gates was achieved through the intermediacy of singlet oxygen. Working together, these concatenated logic gates result in a self-reporting and activatable photosensitizer. GSH=glutathione. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Construction of a fuzzy and Boolean logic gates based on DNA.

    PubMed

    Zadegan, Reza M; Jepsen, Mette D E; Hildebrandt, Lasse L; Birkedal, Victoria; Kjems, Jørgen

    2015-04-17

    Logic gates are devices that can perform logical operations by transforming a set of inputs into a predictable single detectable output. The hybridization properties, structure, and function of nucleic acids can be used to make DNA-based logic gates. These devices are important modules in molecular computing and biosensing. The ideal logic gate system should provide a wide selection of logical operations, and be integrable in multiple copies into more complex structures. Here we show the successful construction of a small DNA-based logic gate complex that produces fluorescent outputs corresponding to the operation of the six Boolean logic gates AND, NAND, OR, NOR, XOR, and XNOR. The logic gate complex is shown to work also when implemented in a three-dimensional DNA origami box structure, where it controlled the position of the lid in a closed or open position. Implementation of multiple microRNA sensitive DNA locks on one DNA origami box structure enabled fuzzy logical operation that allows biosensing of complex molecular signals. Integrating logic gates with DNA origami systems opens a vast avenue to applications in the fields of nanomedicine for diagnostics and therapeutics. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Spatial scaling patterns and functional redundancies in a changing boreal lake landscape

    USGS Publications Warehouse

    Angeler, David G.; Allen, Craig R.; Uden, Daniel R.; Johnson, Richard K.

    2015-01-01

    Global transformations extend beyond local habitats; therefore, larger-scale approaches are needed to assess community-level responses and resilience to unfolding environmental changes. Using long-term data (1996–2011), we evaluated spatial patterns and functional redundancies in the littoral invertebrate communities of 85 Swedish lakes, with the objective of assessing their potential resilience to environmental change at regional scales (that is, spatial resilience). Multivariate spatial modeling was used to differentiate groups of invertebrate species exhibiting spatial patterns in composition and abundance (that is, deterministic species) from those lacking spatial patterns (that is, stochastic species). We then determined the functional feeding attributes of the deterministic and stochastic invertebrate species, to infer resilience. Between one and three distinct spatial patterns in invertebrate composition and abundance were identified in approximately one-third of the species; the remainder were stochastic. We observed substantial differences in metrics between deterministic and stochastic species. Functional richness and diversity decreased over time in the deterministic group, suggesting a loss of resilience in regional invertebrate communities. However, taxon richness and redundancy increased monotonically in the stochastic group, indicating the capacity of regional invertebrate communities to adapt to change. Our results suggest that a refined picture of spatial resilience emerges if patterns of both the deterministic and stochastic species are accounted for. Spatially extensive monitoring may help increase our mechanistic understanding of community-level responses and resilience to regional environmental change, insights that are critical for developing management and conservation agendas in this current period of rapid environmental transformation.

  16. Asymmetrical Damage Partitioning in Bacteria: A Model for the Evolution of Stochasticity, Determinism, and Genetic Assimilation

    PubMed Central

    Chao, Lin; Rang, Camilla Ulla; Proenca, Audrey Menegaz; Chao, Jasper Ubirajara

    2016-01-01

    Non-genetic phenotypic variation is common in biological organisms. The variation is potentially beneficial if the environment is changing. If the benefit is large, selection can favor the evolution of genetic assimilation, the process by which the expression of a trait is transferred from environmental to genetic control. Genetic assimilation is an important evolutionary transition, but it is poorly understood because the fitness costs and benefits of variation are often unknown. Here we show that the partitioning of damage by a mother bacterium to its two daughters can evolve through genetic assimilation. Bacterial phenotypes are also highly variable. Because gene-regulating elements can have low copy numbers, the variation is attributed to stochastic sampling. Extant Escherichia coli partition asymmetrically and deterministically more damage to the old daughter, the one receiving the mother’s old pole. By modeling in silico damage partitioning in a population, we show that deterministic asymmetry is advantageous because it increases fitness variance and hence the efficiency of natural selection. However, we find that symmetrical but stochastic partitioning can be similarly beneficial. To examine why bacteria evolved deterministic asymmetry, we modeled the effect of damage anchored to the mother’s old pole. While anchored damage strengthens selection for asymmetry by creating additional fitness variance, it has the opposite effect on symmetry. The difference results because anchored damage reinforces the polarization of partitioning in asymmetric bacteria. In symmetric bacteria, it dilutes the polarization. Thus, stochasticity alone may have protected early bacteria from damage, but deterministic asymmetry has evolved to be equally important in extant bacteria. We estimate that 47% of damage partitioning is deterministic in E. coli. We suggest that the evolution of deterministic asymmetry from stochasticity offers an example of Waddington’s genetic assimilation. Our model is able to quantify the evolution of the assimilation because it characterizes the fitness consequences of variation. PMID:26761487

  18. Impact of refining the assessment of dietary exposure to cadmium in the European adult population.

    PubMed

    Ferrari, Pietro; Arcella, Davide; Heraud, Fanny; Cappé, Stefano; Fabiansson, Stefan

    2013-01-01

    Exposure assessment constitutes an important step in any risk assessment of potentially harmful substances present in food. The European Food Safety Authority (EFSA) first assessed dietary exposure to cadmium in Europe using a deterministic framework, resulting in mean values of exposure in the range of health-based guidance values. Since then, the characterisation of foods has been refined to better match occurrence and consumption data, and a new strategy to handle left-censoring in occurrence data was devised. A probabilistic assessment was performed and compared with deterministic estimates, using occurrence values at the European level and consumption data from 14 national dietary surveys. Mean estimates in the probabilistic assessment ranged from 1.38 (95% CI = 1.35-1.44) to 2.08 (1.99-2.23) µg kg⁻¹ bodyweight (bw) week⁻¹ across the different surveys, which were less than 10% lower than deterministic (middle bound) mean values that ranged from 1.50 to 2.20 µg kg⁻¹ bw week⁻¹. Probabilistic 95th percentile estimates of dietary exposure ranged from 2.65 (2.57-2.72) to 4.99 (4.62-5.38) µg kg⁻¹ bw week⁻¹, which were, with the exception of one survey, between 3% and 17% higher than middle-bound deterministic estimates. Overall, the proportion of subjects exceeding the tolerable weekly intake of 2.5 µg kg⁻¹ bw ranged from 14.8% (13.6-16.0%) to 31.2% (29.7-32.5%) according to the probabilistic assessment. The results of this work indicate that mean values of dietary exposure to cadmium in the European population were of similar magnitude using deterministic or probabilistic assessments. For higher exposure levels, probabilistic estimates were almost consistently larger than deterministic counterparts, thus reflecting the impact of using the full distribution of occurrence values to determine exposure levels. It is considered prudent to use probabilistic methodology should exposure estimates be close to or exceeding health-based guidance values.
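
    Schematically, a probabilistic assessment of this kind draws occurrence and consumption values from their distributions and propagates them to an exposure distribution, whereas the deterministic counterpart multiplies point estimates. The sketch below uses invented lognormal distributions, a crude middle-bound floor for censored values, and only two food groups; the tolerable weekly intake of 2.5 µg kg⁻¹ bw is taken from the abstract:

        import numpy as np

        rng = np.random.default_rng(0)
        n = 100_000                      # simulated individuals (hypothetical)

        # Cadmium occurrence in two food groups (ug per kg food); censored
        # values are floored at a middle-bound stand-in of 0.5 ug/kg.
        occ_cereal = np.maximum(rng.lognormal(3.3, 0.6, n), 0.5)
        occ_veg    = np.maximum(rng.lognormal(2.7, 0.6, n), 0.5)

        # Consumption (kg food per kg bodyweight per week)
        cons_cereal = rng.lognormal(-2.9, 0.3, n)
        cons_veg    = rng.lognormal(-3.5, 0.3, n)

        exposure = occ_cereal * cons_cereal + occ_veg * cons_veg  # ug/kg bw/week
        twi = 2.5
        print("mean exposure:", exposure.mean())
        print("95th percentile:", np.quantile(exposure, 0.95))
        print("share above TWI:", (exposure > twi).mean())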

  19. Cooperation Among Theorem Provers

    NASA Technical Reports Server (NTRS)

    Waldinger, Richard J.

    1998-01-01

    This is a final report, which supports NASA's PECSEE (Persistent Cognizant Software Engineering Environment) effort and complements the Kestrel Institute project "Inference System Integration via Logic Morphism". The ultimate purpose of the project is to develop a superior logical inference mechanism by combining the diverse abilities of multiple cooperating theorem provers. In many years of research, a number of powerful theorem-proving systems have arisen with differing capabilities and strengths. Resolution theorem provers (such as Kestrel's KITP or SRI's SNARK) deal with first-order logic with equality but not the principle of mathematical induction. The Boyer-Moore theorem prover excels at proof by induction but cannot deal with full first-order logic. Both are highly automated but cannot accept user guidance easily. The PVS system (from SRI) is only automatic within decidable theories, but it has well-designed interactive capabilities; furthermore, it includes higher-order logic, not just first-order logic. The NuPRL system from Cornell University and the STeP system from Stanford University have facilities for constructive logic and temporal logic, respectively; both are interactive. It is often suggested, for example in the anonymous "QED Manifesto", that we should pool the resources of all these theorem provers into a single system, so that the strengths of one can compensate for the weaknesses of others, and so that effort will not be duplicated. However, there is no straightforward way of doing this, because each system relies on its own language and logic for its success. Thus, SNARK uses ordinary first-order logic with equality, PVS uses higher-order logic, and NuPRL uses constructive logic. The purpose of this project, and the companion project at Kestrel, has been to use the category-theoretic notion of logic morphism to combine systems with different logics and languages. Kestrel's SPECWARE system has been the vehicle for the implementation.

  20. Aboveground and belowground arthropods experience different relative influences of stochastic versus deterministic community assembly processes following disturbance

    PubMed Central

    Martinez, Alexander S.; Faist, Akasha M.

    2016-01-01

    Background Understanding patterns of biodiversity is a longstanding challenge in ecology. Similar to other biotic groups, arthropod community structure can be shaped by deterministic and stochastic processes, with limited understanding of what moderates the relative influence of these processes. Disturbances have been noted to alter the relative influence of deterministic and stochastic processes on community assembly in various study systems, implicating ecological disturbances as a potential moderator of these forces. Methods Using a disturbance gradient along a 5-year chronosequence of insect-induced tree mortality in a subalpine forest of the southern Rocky Mountains, Colorado, USA, we examined changes in community structure and relative influences of deterministic and stochastic processes in the assembly of aboveground (surface and litter-active species) and belowground (species active in organic and mineral soil layers) arthropod communities. Arthropods were sampled for all years of the chronosequence via pitfall traps (aboveground community) and modified Winkler funnels (belowground community) and sorted to morphospecies. The structure of both communities was assessed via comparisons of morphospecies abundance, diversity, and composition. Assembly processes were inferred from a mixture of linear models and matrix correlations testing for community associations with environmental properties, and from null-deviation models comparing observed vs. expected levels of species turnover (beta diversity) among samples. Results Tree mortality altered community structure in both aboveground and belowground arthropod communities, but null models suggested that aboveground communities experienced greater relative influences of deterministic processes, while the relative influence of stochastic processes increased for belowground communities. Additionally, Mantel tests and linear regression models revealed significant associations between the aboveground arthropod communities and vegetation and soil properties, but no significant association between belowground arthropod communities and environmental factors. Discussion Our results suggest context-dependent influences of stochastic and deterministic community assembly processes across different fractions of a spatially co-occurring ground-dwelling arthropod community following disturbance. This variation in assembly may be linked to contrasting ecological strategies and dispersal rates within above- and below-ground communities. Our findings add to a growing body of evidence indicating concurrent influences of stochastic and deterministic processes in community assembly, and highlight the need to consider potential variation across different fractions of biotic communities when testing community ecology theory and considering conservation strategies. PMID:27761333
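
    The null-deviation logic described in the Methods can be sketched as follows (a deliberately simple randomization; the published analysis uses ecological null models with stricter constraints, and the community matrix here is synthetic):

      import numpy as np

      rng = np.random.default_rng(2)

      def bray_curtis(a, b):
          return np.abs(a - b).sum() / (a + b).sum()

      def mean_turnover(mat):
          n = len(mat)
          return np.mean([bray_curtis(mat[i], mat[j])
                          for i in range(n) for j in range(i + 1, n)])

      def null_deviation(mat, n_null=999):
          # Observed beta diversity minus its null expectation; each null
          # shuffles abundances within samples, keeping sample totals fixed.
          obs = mean_turnover(mat)
          nulls = [mean_turnover(np.array([rng.permutation(r) for r in mat]))
                   for _ in range(n_null)]
          return obs - np.mean(nulls)

      community = rng.poisson(2.0, size=(10, 50))   # samples x morphospecies
      print("null deviation:", null_deviation(community))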

  1. EAGLE can do Efficient LTL Monitoring

    NASA Technical Reports Server (NTRS)

    Barringer, Howard; Goldberg, Allen; Havelund, Klaus; Sen, Koushik

    2003-01-01

    We briefly present a rule-based framework, called EAGLE, that has been shown to be capable of defining and implementing finite trace monitoring logics, including future and past time temporal logic, extended regular expressions, real-time logics, interval logics, forms of quantified temporal logics, and so on. In this paper we show how EAGLE can do linear temporal logic (LTL) monitoring in an efficient way. We give an upper bound on the space and time complexity of this monitoring.
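
    EAGLE itself is rule-based and is not reproduced here, but the flavor of finite-trace LTL monitoring can be sketched with the standard formula-progression technique (rewriting the formula against each observed state). Everything below is a generic illustration under that assumption, not EAGLE's implementation:

      # Formulas: True/False, ('ap', name), ('not', f), ('and', f, g),
      # ('or', f, g), ('next', f), ('until', f, g).
      def f_and(a, b):
          if a is False or b is False: return False
          if a is True: return b
          if b is True: return a
          return ('and', a, b)

      def f_or(a, b):
          if a is True or b is True: return True
          if a is False: return b
          if b is False: return a
          return ('or', a, b)

      def progress(f, state):
          # Rewrite formula f against one observed state (a set of atoms).
          if isinstance(f, bool): return f
          op = f[0]
          if op == 'ap':   return f[1] in state
          if op == 'not':
              r = progress(f[1], state)
              return (not r) if isinstance(r, bool) else ('not', r)
          if op == 'and':  return f_and(progress(f[1], state), progress(f[2], state))
          if op == 'or':   return f_or(progress(f[1], state), progress(f[2], state))
          if op == 'next': return f[1]
          if op == 'until':  # f1 U f2  ==  f2 or (f1 and X(f1 U f2))
              return f_or(progress(f[2], state),
                          f_and(progress(f[1], state), f))
          raise ValueError(op)

      def monitor(f, trace):
          for state in trace:
              f = progress(f, state)
              if isinstance(f, bool): return f
          return f   # verdict still open at the end of the finite trace

      eventually_p = ('until', True, ('ap', 'p'))
      print(monitor(eventually_p, [{'q'}, {'q'}, {'p'}]))   # True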

  2. Comparison of probabilistic and deterministic fiber tracking of cranial nerves.

    PubMed

    Zolal, Amir; Sobottka, Stephan B; Podlesek, Dino; Linn, Jennifer; Rieger, Bernhard; Juratli, Tareq A; Schackert, Gabriele; Kitzler, Hagen H

    2017-09-01

    OBJECTIVE The depiction of cranial nerves (CNs) using diffusion tensor imaging (DTI) is of great interest in skull base tumor surgery and DTI used with deterministic tracking methods has been reported previously. However, there are still no good methods usable for the elimination of noise from the resulting depictions. The authors have hypothesized that probabilistic tracking could lead to more accurate results, because it more efficiently extracts information from the underlying data. Moreover, the authors have adapted a previously described technique for noise elimination using gradual threshold increases to probabilistic tracking. To evaluate the utility of this new approach, this work compares the gradual threshold increase method in probabilistic and deterministic tracking of CNs. METHODS Both tracking methods were used to depict CNs II, III, V, and the VII+VIII bundle. Depiction of 240 CNs was attempted with each of the above methods in 30 healthy subjects, whose data were obtained from 2 public databases: the Kirby repository (KR) and the Human Connectome Project (HCP). Elimination of erroneous fibers was attempted by gradually increasing the respective thresholds (fractional anisotropy [FA] and probabilistic index of connectivity [PICo]). The results were compared with predefined ground truth images based on corresponding anatomical scans. Two label overlap measures (false-positive error and Dice similarity coefficient) were used to evaluate the success of both methods in depicting the CN. Moreover, the differences between these parameters obtained from the KR and HCP (with higher angular resolution) databases were evaluated. Additionally, visualization of 10 CNs in 5 clinical cases was attempted with both methods and evaluated by comparing the depictions with intraoperative findings. RESULTS Maximum Dice similarity coefficients were significantly higher with probabilistic tracking (p < 0.001; Wilcoxon signed-rank test). The false-positive error of the last obtained depiction was also significantly lower in probabilistic than in deterministic tracking (p < 0.001). The HCP data yielded significantly better results in terms of the Dice coefficient in probabilistic tracking (p < 0.001, Mann-Whitney U-test) and in deterministic tracking (p = 0.02). The false-positive errors were smaller in HCP data in deterministic tracking (p < 0.001) and showed a strong trend toward significance in probabilistic tracking (p = 0.06). In the clinical cases, the probabilistic method visualized 7 of 10 attempted CNs accurately, compared with 3 correct depictions with deterministic tracking. CONCLUSIONS High angular resolution DTI scans are preferable for the DTI-based depiction of the cranial nerves. Probabilistic tracking with a gradual PICo threshold increase is more effective for this task than the previously described deterministic tracking with a gradual FA threshold increase and might represent a method that is useful for depicting cranial nerves with DTI since it eliminates the erroneous fibers without manual intervention.
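
    The two overlap measures used above are standard and easy to state in code; this sketch assumes binary masks for the tracked depiction and the ground truth (the definitions may differ in detail from the paper's):

      import numpy as np

      def dice(pred, truth):
          # Dice similarity coefficient between two binary masks.
          pred, truth = pred.astype(bool), truth.astype(bool)
          return 2.0 * (pred & truth).sum() / (pred.sum() + truth.sum())

      def false_positive_error(pred, truth):
          # Fraction of the depiction lying outside the ground truth.
          pred, truth = pred.astype(bool), truth.astype(bool)
          return (pred & ~truth).sum() / pred.sum()

      truth = np.zeros((64, 64), dtype=bool); truth[20:40, 20:40] = True
      pred = np.zeros_like(truth);            pred[25:45, 22:42] = True
      print("Dice:", dice(pred, truth))
      print("FP error:", false_positive_error(pred, truth))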

  3. Nonmonotonic Logic for Use in Information Retrieval: An Exploratory Paper.

    ERIC Educational Resources Information Center

    Hurt, C. D.

    1998-01-01

    Monotonic logic requires reexamination of the entire logic string when there is a contradiction. Nonmonotonic logic allows the user to withdraw conclusions in the face of contradiction without harm to the logic string, which has considerable application to the field of information searching. Artificial intelligence models and neural networks based…

  4. Fuzzy Logic Engine

    NASA Technical Reports Server (NTRS)

    Howard, Ayanna

    2005-01-01

    The Fuzzy Logic Engine is a software package that enables users to embed fuzzy-logic modules into their application programs. Fuzzy logic is useful as a means of formulating human expert knowledge and translating it into software to solve problems. Fuzzy logic provides flexibility for modeling relationships between input and output information and is distinguished by its robustness with respect to noise and variations in system parameters. In addition, linguistic fuzzy sets and conditional statements allow systems to make decisions based on imprecise and incomplete information. The user of the Fuzzy Logic Engine need not be an expert in fuzzy logic: it suffices to have a basic understanding of how linguistic rules can be applied to the user's problem. The Fuzzy Logic Engine is divided into two modules: (1) a graphical-interface software tool for creating linguistic fuzzy sets and conditional statements and (2) a fuzzy-logic software library for embedding fuzzy processing capability into current application programs. The graphical-interface tool was developed using the Tcl/Tk programming language. The fuzzy-logic software library was written in the C programming language.
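
    The package's API is not given in this abstract, so the following is only a generic illustration of the linguistic-rule idea it describes: triangular fuzzy sets, rule firing strengths, and a weighted-average defuzzification (all set shapes, rule outputs, and variable names are invented):

      def tri(x, a, b, c):
          # Triangular membership function rising on [a, b], falling on [b, c].
          if x <= a or x >= c: return 0.0
          return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

      cold = lambda t: tri(t, -10.0, 0.0, 15.0)
      warm = lambda t: tri(t, 10.0, 20.0, 30.0)
      hot  = lambda t: tri(t, 25.0, 35.0, 50.0)

      def fan_speed(t):
          # IF cold THEN slow; IF warm THEN medium; IF hot THEN fast.
          rules = [(cold(t), 0.2), (warm(t), 0.5), (hot(t), 0.9)]
          num = sum(w * out for w, out in rules)
          den = sum(w for w, _ in rules)
          return num / den if den else 0.0   # weighted-average defuzzification

      print(fan_speed(28.0))   # blends the "warm" and "hot" rules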

  5. Formal Logic and Flowchart for Diagnosis Validity Verification and Inclusion in Clinical Decision Support Systems

    NASA Astrophysics Data System (ADS)

    Sosa, M.; Grundel, L.; Simini, F.

    2016-04-01

    Logical reasoning has been part of medical practice since its origins. Modern Medicine has included information-intensive tools to refine diagnostics and treatment protocols. We are introducing formal logic teaching in Medical School prior to Clinical Internship, to foster medical practice. Two simple examples (Acute Myocardial Infarction and Diabetes Mellitus) are given in terms of formal logic expressions and truth tables. Flowcharts of both diagnostic processes help understand the procedures and validate them logically. The particularity of medical information is that it is often accompanied by “missing data”, which suggests adapting formal logic to a “three-state” logic in the future. Medical Education must include formal logic to understand complex protocols and best practices, prone to mutual interactions.
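
    The “three-state” logic suggested at the end is usually formalized as Kleene's strong three-valued logic, with an explicit unknown value for missing findings; a minimal sketch (using Python's None for "unknown"):

      # Kleene's strong three-valued logic over {True, None, False},
      # where None stands for a missing clinical finding.
      def k_not(a):
          return None if a is None else (not a)

      def k_and(a, b):
          if a is False or b is False: return False
          if a is None or b is None: return None
          return True

      def k_or(a, b):
          if a is True or b is True: return True
          if a is None or b is None: return None
          return False

      # Truth table for AND, including the unknown value:
      for a in (True, None, False):
          for b in (True, None, False):
              print(a, "AND", b, "=", k_and(a, b))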

  6. Reprogrammable Logic Gate and Logic Circuit Based on Multistimuli-Responsive Raspberry-like Micromotors.

    PubMed

    Zhang, Lina; Zhang, Hui; Liu, Mei; Dong, Bin

    2016-06-22

    In this paper, we report a polymer-based raspberry-like micromotor. Interestingly, the resulting micromotor exhibits multistimuli-responsive motion behavior. Its on-off-on motion can be regulated by the application of stimuli such as H2O2, near-infrared light, NH3, or their combinations. Because of this versatility in motion control, the current micromotor has great potential for logic gate and logic circuit applications. Using different stimuli as the inputs and the micromotor motion as the output, reprogrammable OR and INHIBIT logic gates or a logic circuit consisting of OR, NOT, and AND logic gates can be achieved.

  7. Noise-Aided Logic in an Electronic Analog of Synthetic Genetic Networks

    PubMed Central

    Hellen, Edward H.; Dana, Syamal K.; Kurths, Jürgen; Kehler, Elizabeth; Sinha, Sudeshna

    2013-01-01

    We report the experimental verification of noise-enhanced logic behaviour in an electronic analog of a synthetic genetic network, composed of two repressors and two constitutive promoters. We observe good agreement between circuit measurements and numerical prediction, with the circuit allowing for robust logic operations in an optimal window of noise. Namely, the input-output characteristics of a logic gate are reproduced faithfully under moderate noise, which is a manifestation of the phenomenon known as Logical Stochastic Resonance. The two dynamical variables in the system yield complementary logic behaviour simultaneously. The system is easily morphed from AND/NAND to OR/NOR logic. PMID:24124531
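
    Logical Stochastic Resonance can be illustrated with the textbook double-well model (a generic sketch with invented parameters, not the authors' genetic-network circuit): the input levels tilt a bistable potential, the state starts in the "wrong" well, and typically only a moderate noise level reliably kicks it into the well encoding the correct OR output.

      import numpy as np

      rng = np.random.default_rng(3)

      def frac_logic_one(I, D, T=200.0, dt=0.01):
          # Integrate x' = x - x**3 + I + sqrt(2D)*xi from the negative well
          # and return the fraction of time spent in the positive well.
          x, pos = -1.0, 0
          n = int(T / dt)
          for _ in range(n):
              x += (x - x**3 + I) * dt + np.sqrt(2 * D * dt) * rng.normal()
              pos += x > 0.0
          return pos / n

      level = {0: -0.4, 1: +0.4}       # hypothetical input encoding
      bias = 0.2                        # tilts the potential toward OR
      for D in (0.01, 0.2, 2.0):        # low / moderate / high noise
          outs = ["1" if frac_logic_one(level[a] + level[b] + bias, D) > 0.5
                  else "0" for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]]
          # The OR table (0, 1, 1, 1) is typically reproduced only in a
          # moderate-noise window.
          print(f"D={D}: outputs {outs}")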

  8. Quantum design rules for single molecule logic gates.

    PubMed

    Renaud, N; Hliwa, M; Joachim, C

    2011-08-28

    Recent publications have demonstrated how to implement a NOR logic gate with a single molecule using its interaction with two surface atoms as logical inputs [W. Soe et al., ACS Nano, 2011, 5, 1436]. We demonstrate here how this NOR logic gate belongs to the general family of quantum logic gates, where the Boolean truth table results from full control of the quantum trajectory of the electron transfer process through the molecule by very local and classical inputs applied to the molecule. A new molecular OR gate is proposed in which the logical inputs are also single metal atoms, one per logical input.

  9. SPECIAL ISSUE ON OPTICAL PROCESSING OF INFORMATION: Reversible logic elements as a new field of application of optical solitons

    NASA Astrophysics Data System (ADS)

    Maimistov, Andrei I.

    1995-10-01

    An analysis is made of the fundamental concepts of conservative logic. It is shown that the existing optical soliton switches can be converted into logic gates which act as conservative logic elements. A logic device of this type, based on a nonlinear fibre-optic directional coupler, is considered. Polarised solitons are used in this coupler. This use of solitons leads in a natural way to the desirability of developing conservative triple-valued logic.
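
    The canonical conservative logic element is the Fredkin (controlled-swap) gate, which preserves the number of 1s and is its own inverse; a quick sketch of the Boolean behaviour such optical elements would have to realize (binary case only; the triple-valued extension mentioned above is not shown, and the gate choice here is illustrative rather than taken from the paper):

      def fredkin(c, a, b):
          # Controlled swap: if c is 1, a and b trade places.
          # Reversible (its own inverse) and conservative (1s are preserved).
          return (c, b, a) if c else (c, a, b)

      # AND falls out conservatively: the third output of (x, y, 0) is x AND y,
      # while the second output carries the "garbage" needed to conserve 1s.
      for x in (0, 1):
          for y in (0, 1):
              print(x, y, "->", fredkin(x, y, 0)[2])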

  10. Fuzzy forecasting based on two-factors second-order fuzzy-trend logical relationship groups and the probabilities of trends of fuzzy logical relationships.

    PubMed

    Chen, Shyi-Ming; Chen, Shen-Wen

    2015-03-01

    In this paper, we present a new method for fuzzy forecasting based on two-factors second-order fuzzy-trend logical relationship groups and the probabilities of trends of fuzzy-trend logical relationships. Firstly, the proposed method fuzzifies the historical training data of the main factor and the secondary factor into fuzzy sets, respectively, to form two-factors second-order fuzzy logical relationships. Then, it groups the obtained two-factors second-order fuzzy logical relationships into two-factors second-order fuzzy-trend logical relationship groups. Then, it calculates the probability of the "down-trend," the probability of the "equal-trend" and the probability of the "up-trend" of the two-factors second-order fuzzy-trend logical relationships in each two-factors second-order fuzzy-trend logical relationship group, respectively. Finally, it performs the forecasting based on the probabilities of the down-trend, the equal-trend, and the up-trend of the two-factors second-order fuzzy-trend logical relationships in each two-factors second-order fuzzy-trend logical relationship group. We also apply the proposed method to forecast the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX) and the NTD/USD exchange rates. The experimental results show that the proposed method outperforms the existing methods.
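
    A single-factor, much-simplified sketch of the trend-probability step (the actual method is two-factor and works with fuzzy-trend logical relationship groups; the interval fuzzification and the data below are invented):

      from collections import Counter, defaultdict

      def fuzzify(series, n_sets=7):
          # Map each value to a fuzzy-set index via equal-width intervals.
          lo, hi = min(series), max(series)
          return [min(int((v - lo) / (hi - lo + 1e-9) * n_sets), n_sets - 1)
                  for v in series]

      def trend(a, b):
          return "up" if b > a else "down" if b < a else "equal"

      def trend_probabilities(series):
          # Group second-order trend patterns and estimate the probability
          # of the next trend being down, equal, or up.
          f = fuzzify(series)
          trends = [trend(a, b) for a, b in zip(f, f[1:])]
          groups = defaultdict(Counter)
          for t1, t2, nxt in zip(trends, trends[1:], trends[2:]):
              groups[(t1, t2)][nxt] += 1
          return {k: {t: c / sum(v.values()) for t, c in v.items()}
                  for k, v in groups.items()}

      prices = [100, 102, 101, 104, 103, 105, 107, 106, 108, 107]
      for pattern, probs in trend_probabilities(prices).items():
          print(pattern, probs)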

  11. DEMONSTRATION BULLETIN: THE ECO LOGIC THERMAL DESORPTION UNIT - MIDDLEGROUND LANDFILL - BAY CITY, MI - ELI ECO LOGIC INTERNATIONAL, INC.

    EPA Science Inventory

    ECO Logic has developed a thermal desorption unit (TDU) for the treatment of soils contaminated with hazardous organic contaminants. This TDU has been designed to be used in conjunction with Eco Logic's patented gas-phase chemical reduction reactor. The Eco Logic reactor is the s...

  12. Aspen succession in the Intermountain West: A deterministic model

    Treesearch

    Dale L. Bartos; Frederick R. Ward; George S. Innis

    1983-01-01

    A deterministic model of succession in aspen forests was developed using existing data and intuition. The degree of uncertainty, which was determined by allowing the parameter values to vary at random within limits, was larger than desired. This report presents results of an analysis of model sensitivity to changes in parameter values. These results have indicated...

  13. Using stochastic models to incorporate spatial and temporal variability [Exercise 14

    Treesearch

    Carolyn Hull Sieg; Rudy M. King; Fred Van Dyke

    2003-01-01

    To this point, our analysis of population processes and viability in the western prairie fringed orchid has used only deterministic models. In this exercise, we conduct a similar analysis, using a stochastic model instead. This distinction is of great importance to population biology in general and to conservation biology in particular. In deterministic models,...

  14. Taking Control: Stealth Assessment of Deterministic Behaviors within a Game-Based System

    ERIC Educational Resources Information Center

    Snow, Erica L.; Likens, Aaron D.; Allen, Laura K.; McNamara, Danielle S.

    2016-01-01

    Game-based environments frequently afford students the opportunity to exert agency over their learning paths by making various choices within the environment. The combination of log data from these systems and dynamic methodologies may serve as a stealth means to assess how students behave (i.e., deterministic or random) within these learning…

  15. Guidelines 13 and 14—Prediction uncertainty

    USGS Publications Warehouse

    Hill, Mary C.; Tiedeman, Claire

    2005-01-01

    An advantage of using optimization for model development and calibration is that optimization provides methods for evaluating and quantifying prediction uncertainty. Both deterministic and statistical methods can be used. Guideline 13 discusses using regression and post-audits, which we classify as deterministic methods. Guideline 14 discusses inferential statistics and Monte Carlo methods, which we classify as statistical methods.

  16. Deterministic switching of hierarchy during wrinkling in quasi-planar bilayers

    DOE PAGES

    Saha, Sourabh K.; Culpepper, Martin L.

    2016-04-25

    Emergence of hierarchy during compression of quasi-planar bilayers is preceded by a mode-locked state during which the quasi-planar form persists. Transition to hierarchy is determined entirely by geometrically observable parameters. This results in a universal transition phase diagram that enables one to deterministically tune hierarchy even with limited knowledge about material properties.

  17. Stochastic and deterministic models for agricultural production networks.

    PubMed

    Bai, P; Banks, H T; Dediu, S; Govan, A Y; Last, M; Lloyd, A L; Nguyen, H K; Olufsen, M S; Rempala, G; Slenning, B D

    2007-07-01

    An approach to modeling the impact of disturbances in an agricultural production network is presented. A stochastic model and its approximate deterministic model for averages over sample paths of the stochastic system are developed. Simulations, sensitivity and generalized sensitivity analyses are given. Finally, it is shown how diseases may be introduced into the network and corresponding simulations are discussed.
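
    A toy version of the stochastic-model/deterministic-mean pairing described above, for a two-stage transfer chain (rates and sizes are invented; the paper's network is more elaborate): a Gillespie simulation averaged over sample paths should track the mean-field ODE solution.

      import numpy as np

      rng = np.random.default_rng(4)

      K1, K2, X0 = 0.5, 0.2, 200   # transfer rates and initial stock

      def gillespie(t_max):
          # One sample path of the chain x --K1--> y --K2--> out.
          t, x, y = 0.0, X0, 0
          path = [(0.0, x, y)]
          while t < t_max:
              r1, r2 = K1 * x, K2 * y
              if r1 + r2 == 0: break
              t += rng.exponential(1.0 / (r1 + r2))
              if rng.random() < r1 / (r1 + r2): x, y = x - 1, y + 1
              else: y -= 1
              path.append((t, x, y))
          return path

      def ode_mean(t):
          # Closed-form mean-field solution of the same chain.
          x = X0 * np.exp(-K1 * t)
          y = X0 * K1 / (K2 - K1) * (np.exp(-K1 * t) - np.exp(-K2 * t))
          return x, y

      t_probe = 5.0
      states = []
      for _ in range(200):
          path = gillespie(t_probe)
          states.append([s for s in path if s[0] <= t_probe][-1][1:])
      print("stochastic mean:", np.mean(states, axis=0))
      print("deterministic:  ", ode_mean(t_probe))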

  18. Taking Control: Stealth Assessment of Deterministic Behaviors within a Game-Based System

    ERIC Educational Resources Information Center

    Snow, Erica L.; Likens, Aaron D.; Allen, Laura K.; McNamara, Danielle S.

    2015-01-01

    Game-based environments frequently afford students the opportunity to exert agency over their learning paths by making various choices within the environment. The combination of log data from these systems and dynamic methodologies may serve as a stealth means to assess how students behave (i.e., deterministic or random) within these learning…

  19. Probabilistic direct counterfactual quantum communication

    NASA Astrophysics Data System (ADS)

    Zhang, Sheng

    2017-02-01

    It is striking that the quantum Zeno effect can be used to launch a direct counterfactual communication between two spatially separated parties, Alice and Bob. So far, existing protocols of this type only provide a deterministic counterfactual communication service. However, this counterfactuality comes at a price. Firstly, the transmission time is much longer than that of a classical transmission. Secondly, the chained-cycle structure makes such protocols more sensitive to channel noise. Here, we extend the idea of counterfactual communication and present a probabilistic-counterfactual quantum communication protocol, which is proved to have advantages over the deterministic ones. Moreover, the presented protocol could evolve to a deterministic one solely by adjusting the parameters of the beam splitters. Project supported by the National Natural Science Foundation of China (Grant No. 61300203).

  20. Positive dwell time algorithm with minimum equal extra material removal in deterministic optical surfacing technology.

    PubMed

    Li, Longxiang; Xue, Donglin; Deng, Weijie; Wang, Xu; Bai, Yang; Zhang, Feng; Zhang, Xuejun

    2017-11-10

    In deterministic computer-controlled optical surfacing, accurate dwell time execution by computer numeric control machines is crucial in guaranteeing a high-convergence ratio for the optical surface error. It is necessary to consider the machine dynamics limitations in the numerical dwell time algorithms. In this paper, these constraints on dwell time distribution are analyzed, and a model of the equal extra material removal is established. A positive dwell time algorithm with minimum equal extra material removal is developed. Results of simulations based on deterministic magnetorheological finishing demonstrate the necessity of considering machine dynamics performance and illustrate the validity of the proposed algorithm. Indeed, the algorithm effectively facilitates the determinacy of sub-aperture optical surfacing processes.
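
    Dwell time in such processes is commonly obtained by deconvolving the tool influence function under a non-negativity constraint; the sketch below shows only that standard core (a 1-D toy with an invented Gaussian influence function and target), not the paper's equal-extra-removal algorithm or its machine-dynamics constraints:

      import numpy as np
      from scipy.optimize import nnls

      x = np.linspace(-1.0, 1.0, 81)             # surface positions
      target = 0.5 + 0.3 * np.cos(np.pi * x)     # desired removal depth
      sigma = 0.15                               # influence-function width
      A = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * sigma**2))

      dwell, resid = nnls(A, target)             # dwell >= 0 enforced
      extra = A @ dwell - target                 # pointwise extra removal
      print("max dwell:", dwell.max(), "fit residual:", resid)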

  1. Flow injection analysis simulations and diffusion coefficient determination by stochastic and deterministic optimization methods.

    PubMed

    Kucza, Witold

    2013-07-25

    Stochastic and deterministic simulations of dispersion in cylindrical channels in Poiseuille flow have been presented. The random walk (stochastic) and the uniform dispersion (deterministic) models have been used for computations of flow injection analysis responses. These methods, coupled with the genetic algorithm and the Levenberg-Marquardt optimization methods, respectively, have been applied for determination of diffusion coefficients. The diffusion coefficients of fluorescein sodium, potassium hexacyanoferrate and potassium dichromate have been determined by means of the presented methods and FIA responses that are available in the literature. The best-fit results agree with each other and with experimental data, thus validating both presented approaches. Copyright © 2013 The Author. Published by Elsevier B.V. All rights reserved.
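
    The random-walk (stochastic) model has a compact form: particles are advected by the parabolic Poiseuille profile while taking Brownian steps across the channel, and the simulated residence-time distribution is what would be fitted against the measured FIA response. The sketch below uses invented channel parameters and omits the fitting loop:

      import numpy as np

      rng = np.random.default_rng(5)

      R, L, U_MAX, D = 1e-4, 0.05, 0.02, 7e-10   # m, m, m/s, m^2/s
      dt, n = 1e-3, 2000

      # Start particles uniformly over the cross-section at z = 0.
      theta = 2 * np.pi * rng.random(n)
      rad = R * np.sqrt(rng.random(n))
      xp, yp = rad * np.cos(theta), rad * np.sin(theta)
      z = np.zeros(n)
      arrival = np.full(n, np.nan)

      t = 0.0
      while np.isnan(arrival).any() and t < 60.0:
          t += dt
          rad = np.hypot(xp, yp)
          z += U_MAX * (1.0 - (rad / R) ** 2) * dt   # parabolic advection
          step = np.sqrt(2 * D * dt)
          xp += step * rng.normal(size=n)
          yp += step * rng.normal(size=n)
          rad = np.hypot(xp, yp)
          over = rad > R                             # reflect at the wall
          scale = (2 * R - rad[over]) / rad[over]
          xp[over] *= scale
          yp[over] *= scale
          arrival[np.isnan(arrival) & (z >= L)] = t

      print("fraction arrived:", np.isfinite(arrival).mean())
      print("mean residence time (s):", np.nanmean(arrival))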

  2. Integrated deterministic and probabilistic safety analysis for safety assessment of nuclear power plants

    DOE PAGES

    Di Maio, Francesco; Zio, Enrico; Smith, Curtis; ...

    2015-07-06

    The present special issue contains an overview of the research in the field of Integrated Deterministic and Probabilistic Safety Assessment (IDPSA) of Nuclear Power Plants (NPPs). Traditionally, safety regulation for NPP design and operation has been based on Deterministic Safety Assessment (DSA) methods to verify criteria that assure plant safety in a number of postulated Design Basis Accident (DBA) scenarios. Referring to such criteria, it is also possible to identify those plant Structures, Systems, and Components (SSCs) and activities that are most important for safety within those postulated scenarios. Then, the design, operation, and maintenance of these “safety-related” SSCs and activities are controlled through regulatory requirements and supported by Probabilistic Safety Assessment (PSA).

  3. "Glitch Logic" and Applications to Computing and Information Security

    NASA Technical Reports Server (NTRS)

    Stoica, Adrian; Katkoori, Srinivas

    2009-01-01

    This paper introduces a new method of information processing in digital systems, and discusses its potential benefits to computing and information security. The new method exploits glitches caused by delays in logic circuits for carrying and processing information. Glitch processing is hidden from conventional logic analyses and undetectable by traditional reverse engineering techniques. It enables the creation of new logic design methods that allow for an additional controllable "glitch logic" processing layer embedded into conventional synchronous digital circuits as a hidden/covert information flow channel. The combination of synchronous logic with specific glitch logic design acting as an additional computing channel reduces the number of equivalent logic designs resulting from synthesis, thus implicitly reducing the possibility of modification and/or tampering with the design. The hidden information channel produced by the glitch logic can be used: 1) for covert computing/communication, 2) to prevent reverse engineering, tampering, and alteration of design, and 3) to act as a channel for information infiltration/exfiltration and propagation of viruses/spyware/Trojan horses.

  4. Identifying a largest logical plane from a plurality of logical planes formed of compute nodes of a subcommunicator in a parallel computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, Kristan D.; Faraj, Daniel A.

    In a parallel computer, a largest logical plane from a plurality of logical planes formed of compute nodes of a subcommunicator may be identified by: identifying, by each compute node of the subcommunicator, all logical planes that include the compute node; calculating, by each compute node for each identified logical plane that includes the compute node, an area of the identified logical plane; initiating, by a root node of the subcommunicator, a gather operation; receiving, by the root node from each compute node of the subcommunicator, each node's calculated areas as contribution data to the gather operation; and identifying, by the root node in dependence upon the received calculated areas, a logical plane of the subcommunicator having the greatest area.

  5. Design of a Ferroelectric Programmable Logic Gate Array

    NASA Technical Reports Server (NTRS)

    MacLeod, Todd C.; Ho, Fat Duen

    2003-01-01

    A programmable logic gate array has been designed utilizing ferroelectric field effect transistors. The design has only a small number of gates, but this could be scaled up to a more useful size. Using FFETs in a logic array gives several advantages. First, it gives the array real-time programmability for high-speed reconfiguration. It also allows the array to be configured a nearly unlimited number of times, unlike a FLASH FPGA. Finally, the Ferroelectric Programmable Logic Gate Array (FPLGA) can be implemented using a smaller number of transistors because of the inherent logic characteristics of an FFET. The device was only designed and modeled using Spice models of the circuit, including the FFET; the actual device was not produced. The design consists of a small array of NAND and NOR logic gates. Other gates could easily be produced. They are linked by FFETs that control the logic flow. Timing and logic tables have been produced showing that the array can produce a variety of logic combinations at usable, real-time speeds. This device could be a prototype for logic arrays embedded in systems that need both the speed of hardware-implemented logic and the flexibility to change the logic algorithm. Because of the non-volatile nature of the FFET, it would also be useful in situations that require programming a logic array once and using it repeatedly after the power has been shut off.

  6. Nonvolatile “AND,” “OR,” and “NOT” Boolean logic gates based on phase-change memory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Y.; Zhong, Y. P.; Deng, Y. F.

    2013-12-21

    Electronic devices or circuits that can implement both logic and memory functions are regarded as the building blocks for future massively parallel computing beyond the von Neumann architecture. Here we propose phase-change memory (PCM)-based nonvolatile logic gates capable of AND, OR, and NOT Boolean logic operations, verified in SPICE simulations and circuit experiments. The logic operations are performed in parallel, and the results can be stored directly in the states of the logic gates, facilitating the combination of computing and memory in the same circuit. These results are encouraging for ultralow-power and high-speed nonvolatile logic circuit design based on novel memory devices.

  7. Multi-input and binary reproducible, high bandwidth floating point adder in a collective network

    DOEpatents

    Chen, Dong; Eisley, Noel A.; Heidelberger, Philip; Steinmacher-Burow, Burkhard

    2016-11-15

    To add floating point numbers in a parallel computing system, a collective logic device receives the floating point numbers from computing nodes. The collective logic device converts the floating point numbers to integer numbers. The collective logic device adds the integer numbers and generates a summation of the integer numbers. The collective logic device converts the summation to a floating point number. The collective logic device performs the receiving, the converting of the floating point numbers, the adding, the generating, and the converting of the summation in one pass. One pass indicates that the computing nodes send inputs only once to the collective logic device and receive outputs only once from the collective logic device.
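
    A software sketch of the fixed-point idea behind this patent (the real device performs it in collective-network hardware, in one pass): every addend is quantized onto a common grid tied to the largest magnitude, and integer addition is exact and associative, so the result is bit-identical for any summation order. The grid resolution chosen here is an invented parameter.

      import random
      from math import frexp

      def reproducible_sum(values, scale_bits=40):
          # Quantize all addends onto one fixed-point grid, add as exact
          # integers, then convert the total back to floating point.
          _, top_exp = frexp(max(abs(v) for v in values))
          shift = scale_bits - top_exp
          total = sum(round(v * 2.0**shift) for v in values)
          return total / 2.0**shift

      vals = [0.1, 1e8, -1e8, 0.2, 0.3, 1e-9]
      for _ in range(3):
          random.shuffle(vals)
          print(reproducible_sum(vals))   # identical output every time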

  8. Automating Access Control Logics in Simple Type Theory with LEO-II

    NASA Astrophysics Data System (ADS)

    Benzmüller, Christoph

    Garg and Abadi recently proved that prominent access control logics can be translated in a sound and complete way into modal logic S4. We have previously outlined how normal multimodal logics, including monomodal logics K and S4, can be embedded in simple type theory, and we have demonstrated that the higher-order theorem prover LEO-II can automate reasoning in and about them. In this paper we combine these results and describe a sound (and complete) embedding of different access control logics in simple type theory. Employing this framework we show that the off-the-shelf theorem prover LEO-II can be applied to automate reasoning in and about prominent access control logics.

  9. Boolean Logic Tree of Label-Free Dual-Signal Electrochemical Aptasensor System for Biosensing, Three-State Logic Computation, and Keypad Lock Security Operation.

    PubMed

    Lu, Jiao Yang; Zhang, Xin Xing; Huang, Wei Tao; Zhu, Qiu Yan; Ding, Xue Zhi; Xia, Li Qiu; Luo, Hong Qun; Li, Nian Bing

    2017-09-19

    The most serious and yet unsolved problems of molecular logic computing consist in how to connect molecular events in complex systems into a usable device with specific functions and how to selectively control branchy logic processes from the cascading logic systems. This report demonstrates that a Boolean logic tree is utilized to organize and connect "plug and play" chemical events (DNA, nanomaterials, organic dye, biomolecule, and denaturant) for developing a dual-signal electrochemical evolution aptasensor system with good resettability for amplification detection of thrombin, controllable and selectable three-state logic computation, and keypad lock security operation. The aptasensor system combines the merits of DNA-functionalized nanoamplification architecture and the simple dual-signal electroactive dye brilliant cresyl blue for sensitive and selective detection of thrombin with a wide linear response range of 0.02-100 nM and a detection limit of 1.92 pM. By using these aforementioned chemical events as inputs and the differential pulse voltammetry current changes at different voltages as dual outputs, a resettable three-input biomolecular keypad lock based on sequential logic is established. Moreover, the first example of controllable and selectable three-state molecular logic computation with active-high and active-low logic functions can be implemented, allowing the output ports to assume a high-impedance, or nothing (Z), state in addition to the 0 and 1 logic levels, effectively controlling subsequent branchy logic computation processes. Our approach is helpful in developing advanced controllable and selectable logic computing and sensing systems in large-scale integration circuits for application in biomedical engineering, intelligent sensing, and control.

  10. Nonvolatile reconfigurable sequential logic in a HfO2 resistive random access memory array.

    PubMed

    Zhou, Ya-Xiong; Li, Yi; Su, Yu-Ting; Wang, Zhuo-Rui; Shih, Ling-Yi; Chang, Ting-Chang; Chang, Kuan-Chang; Long, Shi-Bing; Sze, Simon M; Miao, Xiang-Shui

    2017-05-25

    Resistive random access memory (RRAM) based reconfigurable logic provides a temporally programmable dimension for realizing Boolean logic functions and is regarded as a promising route toward non-von Neumann computing architectures. In this work, a reconfigurable operation method is proposed to perform nonvolatile sequential logic in a HfO2-based RRAM array. Eight kinds of Boolean logic functions can be implemented within the same hardware fabric. During the logic computing processes, the RRAM devices in an array are flexibly configured in a bipolar or complementary structure. The validity was demonstrated by experimentally implemented NAND and XOR logic functions and a theoretically designed 1-bit full adder. With the trade-off between temporal and spatial computing complexity, our method makes better use of limited computing resources and thus provides an attractive scheme for the construction of logic-in-memory systems.
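
    As a plain-Boolean reference for the kind of function the array computes, here is a 1-bit full adder expressed purely with NAND (a software check of the classic logic decomposition, not the paper's device-level operation method):

      def nand(a, b):
          return 1 - (a & b)

      def full_adder(a, b, cin):
          # Classic 9-NAND full adder: two XOR stages plus the carry.
          t1 = nand(a, b)
          s1 = nand(nand(a, t1), nand(b, t1))    # a XOR b
          t4 = nand(s1, cin)
          s = nand(nand(s1, t4), nand(cin, t4))  # (a XOR b) XOR cin
          cout = nand(t1, t4)                    # (a AND b) OR ((a XOR b) AND cin)
          return s, cout

      for a in (0, 1):
          for b in (0, 1):
              for cin in (0, 1):
                  print(a, b, cin, "->", full_adder(a, b, cin))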

  11. Gate-Controlled BP-WSe2 Heterojunction Diode for Logic Rectifiers and Logic Optoelectronics.

    PubMed

    Li, Dong; Wang, Biao; Chen, Mingyuan; Zhou, Jun; Zhang, Zengxing

    2017-06-01

    p-n junctions play an important role in modern semiconductor electronics and optoelectronics, and field-effect transistors are often used for logic circuits. Here, gate-controlled logic rectifiers and logic optoelectronic devices based on stacked black phosphorus (BP) and tungsten diselenide (WSe2) heterojunctions are reported. The gate-tunable ambipolar charge carriers in BP and WSe2 enable a flexible, dynamic, and wide modulation on the heterojunctions as isotype (p-p and n-n) and anisotype (p-n) diodes, which exhibit disparate rectifying and photovoltaic properties. Based on such characteristics, it is demonstrated that BP-WSe2 heterojunction diodes can be developed for high-performance logic rectifiers and logic optoelectronic devices. Logic optoelectronic devices can convert a light signal to an electric one by applied gate voltages. This work should be helpful to expand the applications of 2D crystals. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Interconnect-free parallel logic circuits in a single mechanical resonator

    PubMed Central

    Mahboob, I.; Flurin, E.; Nishiguchi, K.; Fujiwara, A.; Yamaguchi, H.

    2011-01-01

    In conventional computers, wiring between transistors is required to enable the execution of Boolean logic functions. This has resulted in processors in which billions of transistors are physically interconnected, which limits integration densities, gives rise to huge power consumption and restricts processing speeds. A method to eliminate wiring amongst transistors by condensing Boolean logic into a single active element is thus highly desirable. Here, we demonstrate a novel logic architecture using only a single electromechanical parametric resonator into which multiple channels of binary information are encoded as mechanical oscillations at different frequencies. The parametric resonator can mix these channels, resulting in new mechanical oscillation states that enable the construction of AND, OR and XOR logic gates as well as multibit logic circuits. Moreover, the mechanical logic gates and circuits can be executed simultaneously, giving rise to the prospect of a parallel logic processor in just a single mechanical resonator. PMID:21326230

  13. Interconnect-free parallel logic circuits in a single mechanical resonator.

    PubMed

    Mahboob, I; Flurin, E; Nishiguchi, K; Fujiwara, A; Yamaguchi, H

    2011-02-15

    In conventional computers, wiring between transistors is required to enable the execution of Boolean logic functions. This has resulted in processors in which billions of transistors are physically interconnected, which limits integration densities, gives rise to huge power consumption and restricts processing speeds. A method to eliminate wiring amongst transistors by condensing Boolean logic into a single active element is thus highly desirable. Here, we demonstrate a novel logic architecture using only a single electromechanical parametric resonator into which multiple channels of binary information are encoded as mechanical oscillations at different frequencies. The parametric resonator can mix these channels, resulting in new mechanical oscillation states that enable the construction of AND, OR and XOR logic gates as well as multibit logic circuits. Moreover, the mechanical logic gates and circuits can be executed simultaneously, giving rise to the prospect of a parallel logic processor in just a single mechanical resonator.

  14. Tyramine Hydrochloride Based Label-Free System for Operating Various DNA Logic Gates and a DNA Caliper for Base Number Measurements.

    PubMed

    Fan, Daoqing; Zhu, Xiaoqing; Dong, Shaojun; Wang, Erkang

    2017-07-05

    DNA is believed to be a promising candidate for molecular logic computation, and the fluorogenic/colorimetric substrates of G-quadruplex DNAzyme (G4zyme) are broadly used as label-free output reporters of DNA logic circuits. Herein, for the first time, tyramine-HCl (a fluorogenic substrate of G4zyme) is applied to DNA logic computation, and a series of label-free DNA-input logic gates, including elementary AND, OR, and INHIBIT logic gates, as well as a two-to-one encoder, are constructed. Furthermore, a DNA caliper that can measure the base number of target DNA as low as three bases is also fabricated. This DNA caliper can also perform concatenated AND-AND logic computation to fulfil the requirements of sophisticated logic computing. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Constructing a logical, regular axis topology from an irregular topology

    DOEpatents

    Faraj, Daniel A.

    2014-07-22

    Constructing a logical regular topology from an irregular topology including, for each axial dimension and recursively, for each compute node in a subcommunicator until returning to a first node: adding to a logical line of the axial dimension a neighbor specified in a nearest neighbor list; calling the added compute node; determining, by the called node, whether any neighbor in the node's nearest neighbor list is available to add to the logical line; if a neighbor in the called compute node's nearest neighbor list is available to add to the logical line, adding, by the called compute node to the logical line, any neighbor in the called compute node's nearest neighbor list for the axial dimension not already added to the logical line; and, if no neighbor in the called compute node's nearest neighbor list is available to add to the logical line, returning to the calling compute node.

  16. Constructing a logical, regular axis topology from an irregular topology

    DOEpatents

    Faraj, Daniel A.

    2014-07-01

    Constructing a logical regular topology from an irregular topology including, for each axial dimension and recursively, for each compute node in a subcommunicator until returning to a first node: adding to a logical line of the axial dimension a neighbor specified in a nearest neighbor list; calling the added compute node; determining, by the called node, whether any neighbor in the node's nearest neighbor list is available to add to the logical line; if a neighbor in the called compute node's nearest neighbor list is available to add to the logical line, adding, by the called compute node to the logical line, any neighbor in the called compute node's nearest neighbor list for the axial dimension not already added to the logical line; and, if no neighbor in the called compute node's nearest neighbor list is available to add to the logical line, returning to the calling compute node.

  17. Generalized Majority Logic Criterion to Analyze the Statistical Strength of S-Boxes

    NASA Astrophysics Data System (ADS)

    Hussain, Iqtadar; Shah, Tariq; Gondal, Muhammad Asif; Mahmood, Hasan

    2012-05-01

    The majority logic criterion is applicable in the evaluation process of substitution boxes used in the advanced encryption standard (AES). The performance of modified or advanced substitution boxes is predicted by processing the results of statistical analyses with the majority logic criterion. In this paper, we use the majority logic criterion to analyze some popular and prevailing substitution boxes used in encryption processes. In particular, the majority logic criterion is applied to AES, affine power affine (APA), Gray, Lui J, residue prime, S8 AES, Skipjack, and Xyi substitution boxes. The majority logic criterion is further extended into a generalized majority logic criterion, which has a broader spectrum for analyzing the effectiveness of substitution boxes in image encryption applications. The integral components of the statistical analyses used for the generalized majority logic criterion are derived from results of entropy analysis, contrast analysis, correlation analysis, homogeneity analysis, energy analysis, and mean of absolute deviation (MAD) analysis.
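
    In code, the criterion reduces to a majority vote across the individual statistical analyses; the sketch below invents all numbers and the better-is-higher conventions purely for illustration (the actual criterion fixes the direction per analysis):

      # Whether a larger value counts as better is an assumption here.
      HIGHER_IS_BETTER = {"entropy": True, "contrast": True,
                          "correlation": False, "homogeneity": False,
                          "energy": False, "MAD": True}

      def majority_verdict(candidate, reference):
          # The candidate S-box passes if it beats the reference in a
          # majority of the statistical analyses.
          wins = 0
          for k, up in HIGHER_IS_BETTER.items():
              wins += candidate[k] > reference[k] if up else candidate[k] < reference[k]
          return wins, wins > len(HIGHER_IS_BETTER) // 2

      aes = {"entropy": 7.95, "contrast": 8.2, "correlation": 0.08,
             "homogeneity": 0.42, "energy": 0.021, "MAD": 28.0}
      candidate = {"entropy": 7.89, "contrast": 7.6, "correlation": 0.12,
                   "homogeneity": 0.47, "energy": 0.025, "MAD": 25.0}
      print(majority_verdict(candidate, aes))   # (wins, passes_majority)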

  18. Fuzzy branching temporal logic.

    PubMed

    Moon, Seong-ick; Lee, Kwang H; Lee, Doheon

    2004-04-01

    Intelligent systems require a systematic way to represent and handle temporal information containing uncertainty. In particular, a logical framework is needed that can represent uncertain temporal information and its relationships with logical formulae. Fuzzy linear temporal logic (FLTL), a generalization of propositional linear temporal logic (PLTL) with fuzzy temporal events and fuzzy temporal states defined on a linear time model, was previously proposed for this purpose. However, many systems are best represented by branching time models in which each state can have more than one possible future path. In this paper, fuzzy branching temporal logic (FBTL) is proposed to address this problem. FBTL adopts and generalizes concurrent tree logic (CTL*), which is a classical branching temporal logic. The temporal model of FBTL is capable of representing fuzzy temporal events and fuzzy temporal states, and the order relation among them is represented as a directed graph. The utility of FBTL is demonstrated using a fuzzy job shop scheduling problem as an example.

  19. Invited Review: A review of deterministic effects in cyclic variability of internal combustion engines

    DOE PAGES

    Finney, Charles E.; Kaul, Brian C.; Daw, C. Stuart; ...

    2015-02-18

    Here we review developments in the understanding of cycle-to-cycle variability in internal combustion engines, with a focus on spark-ignited and premixed combustion conditions. Much of the research on cyclic variability has focused on stochastic aspects, that is, features that can be modeled as inherently random with no short-term predictability. In some cases, models of this type appear to work very well at describing experimental observations, but the lack of predictability limits control options. Also, even when the statistical properties of the stochastic variations are known, it can be very difficult to discern their underlying physical causes and thus mitigate them. Some recent studies have demonstrated that under some conditions, cyclic combustion variations can have a relatively high degree of low-dimensional deterministic structure, which implies some degree of predictability and potential for real-time control. These deterministic effects are typically more pronounced near critical stability limits (e.g., near tipping points associated with ignition or flame propagation), such as during highly dilute fueling or near the onset of homogeneous charge compression ignition. We review recent progress in experimental and analytical characterization of cyclic variability where low-dimensional, deterministic effects have been observed. We describe some theories about the sources of these dynamical features and discuss prospects for interactive control and improved engine designs. Taken as a whole, the research summarized here implies that the deterministic component of cyclic variability will become a pivotal issue (and potential opportunity) as engine manufacturers strive to meet aggressive emissions and fuel economy regulations in the coming decades.

  20. Disentangling mechanisms that mediate the balance between stochastic and deterministic processes in microbial succession.

    PubMed

    Dini-Andreote, Francisco; Stegen, James C; van Elsas, Jan Dirk; Salles, Joana Falcão

    2015-03-17

    Ecological succession and the balance between stochastic and deterministic processes are two major themes within microbial ecology, but these conceptual domains have mostly developed independent of each other. Here we provide a framework that integrates shifts in community assembly processes with microbial primary succession to better understand mechanisms governing the stochastic/deterministic balance. Synthesizing previous work, we devised a conceptual model that links ecosystem development to alternative hypotheses related to shifts in ecological assembly processes. Conceptual model hypotheses were tested by coupling spatiotemporal data on soil bacterial communities with environmental conditions in a salt marsh chronosequence spanning 105 years of succession. Analyses within successional stages showed community composition to be initially governed by stochasticity, but as succession proceeded, there was a progressive increase in deterministic selection correlated with increasing sodium concentration. Analyses of community turnover among successional stages--which provide a larger spatiotemporal scale relative to within stage analyses--revealed that changes in the concentration of soil organic matter were the main predictor of the type and relative influence of determinism. Taken together, these results suggest scale-dependency in the mechanisms underlying selection. To better understand mechanisms governing these patterns, we developed an ecological simulation model that revealed how changes in selective environments cause shifts in the stochastic/deterministic balance. Finally, we propose an extended--and experimentally testable--conceptual model integrating ecological assembly processes with primary and secondary succession. This framework provides a priori hypotheses for future experiments, thereby facilitating a systematic approach to understand assembly and succession in microbial communities across ecosystems.

  1. Efficient Algorithms for Handling Nondeterministic Automata

    NASA Astrophysics Data System (ADS)

    Vojnar, Tomáš

    Finite (word, tree, or omega) automata play an important role in different areas of computer science, including, for instance, formal verification. Often, deterministic automata are used, for which traditional algorithms for important operations such as minimisation and inclusion checking are available. However, the use of deterministic automata implies a need to determinise nondeterministic automata that often arise during various computations, even when the computations start with deterministic automata. Unfortunately, determinisation is a very expensive step since deterministic automata may be exponentially bigger than the original nondeterministic automata. That is why it appears advantageous to avoid determinisation and work directly with nondeterministic automata. This, however, brings a need to implement operations traditionally done on deterministic automata on nondeterministic automata instead. In particular, this is the case for inclusion checking and minimisation (or rather reduction of the size of automata). In the talk, we review several recently proposed techniques for inclusion checking on nondeterministic finite word and tree automata as well as Büchi automata. These techniques are based on using so-called antichains, possibly combined with the use of suitable simulation relations (and, in the case of Büchi automata, the so-called Ramsey-based or rank-based approaches). Further, we discuss techniques for reducing the size of nondeterministic word and tree automata using quotienting based on the recently proposed notion of mediated equivalences. The talk is based on several common works with Parosh Aziz Abdulla, Ahmed Bouajjani, Yu-Fang Chen, Peter Habermehl, Lisa Kaati, Richard Mayr, Tayssir Touili, Lorenzo Clemente, Lukáš Holík, and Chih-Duo Hong.
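
    A compact sketch of the antichain idea for one of the problems mentioned, NFA universality (a special case of inclusion checking): explore the subset construction, but keep only subset-minimal macro-states, since any superset can reach a rejecting macro-state only if some kept subset can. This is the textbook algorithm in miniature, not any of the cited implementations:

      from collections import deque

      def post(states, a, delta):
          return frozenset(q for s in states for q in delta.get((s, a), ()))

      def is_universal(initial, accepting, delta, alphabet):
          # Does the NFA accept every word? Search for a reachable
          # macro-state disjoint from the accepting set, pruning any
          # macro-state that is a superset of one already kept.
          start = frozenset(initial)
          if not (start & accepting):
              return False
          antichain, queue = [start], deque([start])
          while queue:
              S = queue.popleft()
              for a in alphabet:
                  T = post(S, a, delta)
                  if not (T & accepting):
                      return False               # rejecting macro-state found
                  if any(m <= T for m in antichain):
                      continue                   # dominated by a subset: prune
                  antichain = [m for m in antichain if not T <= m] + [T]
                  queue.append(T)
          return True

      # Two-state NFA over {0, 1} in which every state is accepting:
      delta = {("p", "0"): {"p"}, ("p", "1"): {"p", "q"},
               ("q", "0"): {"p"}, ("q", "1"): {"q"}}
      print(is_universal({"p"}, {"p", "q"}, delta, "01"))   # True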

  2. Deterministic influences exceed dispersal effects on hydrologically-connected microbiomes: Deterministic assembly of hyporheic microbiomes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Graham, Emily B.; Crump, Alex R.; Resch, Charles T.

    2017-03-28

    Subsurface zones of groundwater and surface water mixing (hyporheic zones) are regions of enhanced rates of biogeochemical cycling, yet ecological processes governing hyporheic microbiome composition and function through space and time remain unknown. We sampled attached and planktonic microbiomes in the Columbia River hyporheic zone across seasonal hydrologic change, and employed statistical null models to infer mechanisms generating temporal changes in microbiomes within three hydrologically-connected, physicochemically-distinct geographic zones (inland, nearshore, river). We reveal that microbiomes remain dissimilar through time across all zones and habitat types (attached vs. planktonic) and that deterministic assembly processes regulate microbiome composition in all data subsets. The consistent presence of heterotrophic taxa and members of the Planctomycetes-Verrucomicrobia-Chlamydiae (PVC) superphylum nonetheless suggests common selective pressures for physiologies represented in these groups. Further, co-occurrence networks were used to provide insight into taxa most affected by deterministic assembly processes. We identified network clusters to represent groups of organisms that correlated with seasonal and physicochemical change. Extended network analyses identified keystone taxa within each cluster that we propose are central in microbiome composition and function. Finally, the abundance of one network cluster of nearshore organisms exhibited a seasonal shift from heterotrophic to autotrophic metabolisms and correlated with microbial metabolism, possibly indicating an ecological role for these organisms as foundational species in driving biogeochemical reactions within the hyporheic zone. Taken together, our research demonstrates a predominant role for deterministic assembly across highly-connected environments and provides insight into niche dynamics associated with seasonal changes in hyporheic microbiome composition and metabolism.

  3. Theory and applications of a deterministic approximation to the coalescent model

    PubMed Central

    Jewett, Ethan M.; Rosenberg, Noah A.

    2014-01-01

    Under the coalescent model, the random number nt of lineages ancestral to a sample is nearly deterministic as a function of time when nt is moderate to large in value, and it is well approximated by its expectation E[nt]. In turn, this expectation is well approximated by simple deterministic functions that are easy to compute. Such deterministic functions have been applied to estimate allele age, effective population size, and genetic diversity, and they have been used to study properties of models of infectious disease dynamics. Although a number of simple approximations of E[nt] have been derived and applied to problems of population-genetic inference, the theoretical accuracy of the formulas and the inferences obtained using these approximations is not known, and the range of problems to which they can be applied is not well understood. Here, we demonstrate general procedures by which the approximation nt ≈ E[nt] can be used to reduce the computational complexity of coalescent formulas, and we show that the resulting approximations converge to their true values under simple assumptions. Such approximations provide alternatives to exact formulas that are computationally intractable or numerically unstable when the number of sampled lineages is moderate or large. We also extend an existing class of approximations of E[nt] to the case of multiple populations of time-varying size with migration among them. Our results facilitate the use of the deterministic approximation nt ≈ E[nt] for deriving functionally simple, computationally efficient, and numerically stable approximations of coalescent formulas under complicated demographic scenarios. PMID:24412419
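
    The workhorse approximation the abstract builds on can be shown in a few lines: treat the lineage count as the solution of dn/dt = -n(n-1)/2 in coalescent time units and compare it with the average of simulated coalescent paths (a generic illustration; the paper's formulas, including the multi-population extension, are richer):

      import numpy as np

      rng = np.random.default_rng(6)

      def n_deterministic(t, n0):
          # Closed-form solution of dn/dt = -n(n-1)/2 with n(0) = n0.
          g = (n0 - 1) / n0 * np.exp(-t / 2.0)
          return 1.0 / (1.0 - g)

      def n_simulated(t, n0, reps=20_000):
          # Average the exact coalescent lineage count at time t: with
          # n lineages, the next coalescence waits Exp(n(n-1)/2).
          counts = []
          for _ in range(reps):
              n, clock = n0, 0.0
              while n > 1:
                  clock += rng.exponential(2.0 / (n * (n - 1)))
                  if clock > t:
                      break
                  n -= 1
              counts.append(n)
          return np.mean(counts)

      t, n0 = 0.5, 40
      print("deterministic:", n_deterministic(t, n0))
      print("simulated E[n_t]:", n_simulated(t, n0))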

  4. Disentangling mechanisms that mediate the balance between stochastic and deterministic processes in microbial succession

    PubMed Central

    Dini-Andreote, Francisco; Stegen, James C.; van Elsas, Jan Dirk; Salles, Joana Falcão

    2015-01-01

    Ecological succession and the balance between stochastic and deterministic processes are two major themes within microbial ecology, but these conceptual domains have mostly developed independent of each other. Here we provide a framework that integrates shifts in community assembly processes with microbial primary succession to better understand mechanisms governing the stochastic/deterministic balance. Synthesizing previous work, we devised a conceptual model that links ecosystem development to alternative hypotheses related to shifts in ecological assembly processes. Conceptual model hypotheses were tested by coupling spatiotemporal data on soil bacterial communities with environmental conditions in a salt marsh chronosequence spanning 105 years of succession. Analyses within successional stages showed community composition to be initially governed by stochasticity, but as succession proceeded, there was a progressive increase in deterministic selection correlated with increasing sodium concentration. Analyses of community turnover among successional stages—which provide a larger spatiotemporal scale relative to within stage analyses—revealed that changes in the concentration of soil organic matter were the main predictor of the type and relative influence of determinism. Taken together, these results suggest scale-dependency in the mechanisms underlying selection. To better understand mechanisms governing these patterns, we developed an ecological simulation model that revealed how changes in selective environments cause shifts in the stochastic/deterministic balance. Finally, we propose an extended—and experimentally testable—conceptual model integrating ecological assembly processes with primary and secondary succession. This framework provides a priori hypotheses for future experiments, thereby facilitating a systematic approach to understand assembly and succession in microbial communities across ecosystems. PMID:25733885

  5. Ordinal optimization and its application to complex deterministic problems

    NASA Astrophysics Data System (ADS)

    Yang, Mike Shang-Yu

    1998-10-01

    We present in this thesis a new perspective to approach a general class of optimization problems characterized by large deterministic complexities. Many problems of real-world concern today lack analyzable structures and almost always involve a high level of difficulty and complexity in the evaluation process. Advances in computer technology allow us to build computer models to simulate the evaluation process through numerical means, but the burden of high complexity remains, taxing the simulation with an exorbitant computing cost for each evaluation. Such a resource requirement makes local fine-tuning of a known design difficult under most circumstances, let alone global optimization. Kolmogorov equivalence of complexity and randomness in computation theory is introduced to resolve this difficulty by converting the complex deterministic model to a stochastic pseudo-model composed of a simple deterministic component and a white-noise-like stochastic term. The resulting randomness is then dealt with by a noise-robust approach called Ordinal Optimization. Ordinal Optimization utilizes Goal Softening and Ordinal Comparison to achieve an efficient and quantifiable selection of designs in the initial search process. The approach is substantiated by a case study of the turbine blade manufacturing process. The problem involves the optimization of the manufacturing process of the integrally bladed rotor in the turbine engines of U.S. Air Force fighter jets. The intertwining interactions among the material, thermomechanical, and geometrical changes make the current FEM approach prohibitively uneconomical in the optimization process. The generalized OO approach to complex deterministic problems is applied here with great success. Empirical results indicate a saving of nearly 95% in the computing cost.
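
    The selection logic of ordinal comparison plus goal softening is easy to illustrate. In the hedged sketch below (all numbers invented), a cheap noisy model ranks 1000 candidate designs; goal softening asks only that the observed top-s set contain some of the true top-g designs, a requirement that survives surprisingly large evaluation noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setting: 1000 candidate designs with unknown true costs.
true_cost = rng.uniform(0.0, 1.0, size=1000)
noisy_cost = true_cost + rng.normal(0.0, 0.3, size=1000)  # crude, cheap model

g, s = 20, 50  # "good enough" set (true top-g) and selected set (observed top-s)
good = set(np.argsort(true_cost)[:g])
selected = set(np.argsort(noisy_cost)[:s])

# Goal softening: we only require that the selected set contain SOME good designs.
overlap = len(good & selected)
print(f"{overlap} of the true top-{g} designs survive in the observed top-{s}")
```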

  6. Magnetic tunnel junction based spintronic logic devices

    NASA Astrophysics Data System (ADS)

    Lyle, Andrew Paul

    The International Technology Roadmap for Semiconductors (ITRS) predicts that complementary metal-oxide-semiconductor (CMOS) based technologies will hit their last generation on or near the 16 nm node, which we expect to reach by the year 2025. Thus future advances in computational power will not be realized from ever-shrinking device sizes, but rather by 'outside the box' designs and new physics, including molecular or DNA-based computation, organics, magnonics, or spintronics. This dissertation investigates magnetic logic devices for post-CMOS computation. Three different architectures were studied, each relying on a different magnetic mechanism to compute logic functions. Each design has its benefits and challenges that must be overcome. This dissertation focuses on pushing each design from the drawing board to a realistic logic technology. The first logic architecture is based on electrically connected magnetic tunnel junctions (MTJs) that allow direct communication between elements without intermediate sensing amplifiers. Two- and three-input logic gates, consisting of two and three MTJs connected in parallel, respectively, were fabricated and compared. The direct communication is realized by electrically connecting the output in series with the input and applying a voltage across the series connection. The logic gates rely on the fact that a change in resistance at the input modulates the voltage that is needed to supply the critical current for spin-transfer-torque switching of the output. The change in resistance at the input resulted in a voltage margin of 50--200 mV and 250--300 mV between the closest input states for the three- and two-input designs, respectively. The two-input logic gate realizes the AND, NAND, NOR, and OR logic functions. The three-input logic gate realizes the Majority, AND, NAND, NOR, and OR logic operations. The second logic architecture utilizes magnetostatically coupled nanomagnets to compute logic functions, which is the basis of Magnetic Quantum Cellular Automata (MQCA). MQCA has the potential to be thousands of times more energy efficient than CMOS technology. While interesting, these systems remain academic unless they can be interfaced with current technologies. This dissertation pushed past a major hurdle by experimentally demonstrating a spintronic input/output (I/O) interface for the magnetostatically coupled nanomagnets by incorporating MTJs. This spintronic interface allows individual nanomagnets to be programmed using spin transfer torque and read out via magnetoresistance. Additionally, the spintronic interface allows statistical data on the reliability of the magnetic coupling utilized for data propagation to be measured easily. The integration of spintronics and MQCA into an electrical interface yields a low-power magnetic logic device that is a competitive post-CMOS candidate. The final logic architecture that was studied used MTJs to compute logic functions and magnetic domain walls to communicate between gates. Simulations were used to optimize the design of this architecture. Spin transfer torque was used to compute the logic function at each MTJ gate and to drive the domain walls. The design demonstrated that multiple nanochannels can be connected to each MTJ to realize fan-out from the logic gates. As a result, this logic scheme eliminates the need for intermediate reads and conversions to pass information from one logic gate to another.
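
    A toy calculation conveys the first gate's principle: the input MTJs' parallel/antiparallel resistances set the series resistance seen by the source, and hence the voltage needed to drive the critical switching current through the output junction. All component values below are invented for illustration; they are not the dissertation's device parameters.

```python
from itertools import product

# Hypothetical resistances (ohms) and critical current; illustrative only.
R_P, R_AP = 1_000.0, 2_000.0   # parallel / antiparallel MTJ resistance
I_C = 100e-6                   # critical current for STT switching the output
R_OUT = 1_500.0                # output MTJ resistance

def parallel(rs):
    return 1.0 / sum(1.0 / r for r in rs)

# Two-input gate: each input state sets the series resistance, and hence the
# source voltage needed to push I_C through the output junction.
for state in product((R_P, R_AP), repeat=2):
    v_crit = I_C * (parallel(state) + R_OUT)
    labels = ["P" if r == R_P else "AP" for r in state]
    print(f"inputs {labels}: switching threshold ~ {v_crit * 1e3:.1f} mV")
```

    An applied voltage placed between two of these thresholds switches the output for some input states but not others, which is what lets one bias point implement AND/OR-type functions.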

  7. A Classification of Designated Logic Systems

    DTIC Science & Technology

    1988-02-01

    Introduction to Logic. New York: Macmillan Publishing, 1972. Grimaldi, Ralph P. Discrete and Combinatorial Mathematics. Reading: Addison-Wesley. ... sound basis for understanding non-classical logic systems. I would like to thank the Air Force Institute of Technology for funding this research. ... [Front-matter figure titles: "Two Classifications of Designated Logic Systems"; "Two Partitions of Two-valued Logic Systems"]

  8. Counter Unmanned Aerial System Decision-Aid Logic Process (C-UAS DALP)

    DTIC Science & Technology

    ...a decision-aid or logic process that bridges the middle elements of the kill chain between detection and countermeasure response. ...of use, location, general logic process, and reference mission. This is the framework for the IDEF0 functional architecture diagrams, decision-aid diagrams, logic process, and modeling and simulation. ...This capstone project creates the logic for a decision process that transitions from the...

  9. A Conditional Criterion for Identity, Leading to a Fourth Law of Logic

    DTIC Science & Technology

    1979-06-01

    Keywords: Aristotle, Aristotelian logic, axiom, axioms of logic, change, Charles Muses, chronotopology, collapse of the wave function... of perception, merely accounting for the spatial aspects. In other words, Aristotelian logic is a synthesis of primitive observation, which has been... parameter, not an observable. Hence measurement/detection (observables) deal with primitive observation and Aristotelian logic (topology), while total...

  10. Fuzzy Logic and Education: Teaching the Basics of Fuzzy Logic through an Example (By Way of Cycling)

    ERIC Educational Resources Information Center

    Sobrino, Alejandro

    2013-01-01

    Fuzzy logic dates back to 1965 and it is related not only to current areas of knowledge, such as Control Theory and Computer Science, but also to traditional ones, such as Philosophy and Linguistics. Like any logic, fuzzy logic is concerned with argumentation, but unlike other modalities, which focus on the crisp reasoning of Mathematics, it deals…
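
    For readers who want the flavor of such a teaching example, here is a minimal, self-contained fuzzy-inference sketch in the cycling spirit of the article. The membership functions, rules, and numbers are invented for illustration and are not taken from the paper.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def cycling_effort(speed_kmh, slope_pct):
    """Toy Mamdani-style rule base (hypothetical):
    IF speed is fast AND slope is steep THEN effort is high, etc."""
    slow = tri(speed_kmh, -1, 0, 15); fast = tri(speed_kmh, 10, 25, 40)
    flat = tri(slope_pct, -1, 0, 4);  steep = tri(slope_pct, 2, 8, 15)
    # Rule firing strengths: min for AND, max for OR.
    high = max(min(fast, steep), min(slow, steep))
    low = min(slow, flat)
    # Crude defuzzification: weighted average of the rule consequents.
    return (0.2 * low + 0.9 * high) / (low + high + 1e-9)

print(cycling_effort(20.0, 6.0))  # degree of "effort" in [0, 1]
```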

  11. A stochastic model for correlated protein motions

    NASA Astrophysics Data System (ADS)

    Karain, Wael I.; Qaraeen, Nael I.; Ajarmah, Basem

    2006-06-01

    A one-dimensional Langevin-type stochastic difference equation is used to find the deterministic and Gaussian contributions of time series representing the projections of a Bovine Pancreatic Trypsin Inhibitor (BPTI) protein molecular dynamics simulation along different eigenvector directions determined using principal component analysis. The deterministic part shows a distinct nonlinear behavior only for eigenvectors contributing significantly to the collective protein motion.
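
    A generic version of this decomposition can be sketched in a few lines: given a time series x(t), the deterministic part f(x) of a Langevin-type difference equation x[t+1] = x[t] + f(x[t]) + noise is estimated as the binned conditional mean of the increments. The sketch below uses synthetic data with a known cubic drift; it is a generic estimator, not necessarily the authors' exact scheme.

```python
import numpy as np

def drift_estimate(x, n_bins=30):
    """Estimate the deterministic part f(x) as the binned conditional
    mean of increments dx given the current state x."""
    dx = np.diff(x)
    edges = np.linspace(x.min(), x.max(), n_bins + 1)
    idx = np.clip(np.digitize(x[:-1], edges) - 1, 0, n_bins - 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    f = np.array([dx[idx == b].mean() if np.any(idx == b) else np.nan
                  for b in range(n_bins)])
    return centers, f

# Synthetic check: cubic (nonlinear) drift plus Gaussian noise
rng = np.random.default_rng(1)
x = np.zeros(100_000)
for t in range(len(x) - 1):
    x[t + 1] = x[t] + 0.05 * (x[t] - x[t]**3) + 0.1 * rng.normal()
centers, f = drift_estimate(x)  # f should trace ~0.05 * (x - x^3)
```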

  12. Values in Science: Making Sense of Biology Doctoral Students' Critical Examination of a Deterministic Claim in a Media Article

    ERIC Educational Resources Information Center

    Raveendran, Aswathy; Chunawala, Sugra

    2015-01-01

    Several educators have emphasized that students need to understand science as a human endeavor that is not value free. In the exploratory study reported here, we investigated how doctoral students of biology understand the intersection of values and science in the context of genetic determinism. Deterministic research claims have been critiqued…

  13. The dual reading of general conditionals: The influence of abstract versus concrete contexts.

    PubMed

    Wang, Moyun; Yao, Xinyun

    2018-04-01

    A central current issue concerning conditionals is whether the meaning of general conditionals (e.g., If a card is red, then it is round) is deterministic (exceptionless) or probabilistic (exception-tolerating). To resolve the issue, two experiments examined the influence of conditional contexts (with vs. without frequency information about truth-table cases) on the reading of general conditionals. Experiment 1 examined the direct reading of general conditionals in a possibility judgment task. Experiment 2 examined the indirect reading of general conditionals in a truth judgment task. Both the direct and the indirect reading of general conditionals exhibited a duality: a predominant deterministic semantic reading of conditionals without frequency information, and a predominant probabilistic pragmatic reading of conditionals with frequency information. The context of a general conditional determined its predominant reading. There were obvious individual differences in the reading of general conditionals with frequency information. The meaning of general conditionals is relative, depending on conditional contexts. The reading of general conditionals is so flexible and complex that no simple deterministic or probabilistic account can explain it. The present findings go beyond the extant deterministic and probabilistic accounts of conditionals.

  14. Habitat connectivity and in-stream vegetation control temporal variability of benthic invertebrate communities.

    PubMed

    Huttunen, K-L; Mykrä, H; Oksanen, J; Astorga, A; Paavola, R; Muotka, T

    2017-05-03

    One of the key challenges to understanding patterns of β diversity is to disentangle deterministic patterns from stochastic ones. Stochastic processes may mask the influence of deterministic factors on community dynamics, hindering identification of the mechanisms causing variation in community composition. We studied temporal β diversity (among-year dissimilarity) of macroinvertebrate communities in near-pristine boreal streams across 14 years. To assess whether the observed β diversity deviates from that expected by chance, and to identify processes (deterministic vs. stochastic) through which different explanatory factors affect community variability, we used a null model approach. We observed that at the majority of sites temporal β diversity was low indicating high community stability. When stochastic variation was unaccounted for, connectivity was the only variable explaining temporal β diversity, with weakly connected sites exhibiting higher community variability through time. After accounting for stochastic effects, connectivity lost importance, suggesting that it was related to temporal β diversity via random colonization processes. Instead, β diversity was best explained by in-stream vegetation, community variability decreasing with increasing bryophyte cover. These results highlight the potential of stochastic factors to dampen the influence of deterministic processes, affecting our ability to understand and predict changes in biological communities through time.

  15. Characterizing Uncertainty and Variability in PBPK Models ...

    EPA Pesticide Factsheets

    Mode-of-action based risk and safety assessments can rely upon tissue dosimetry estimates in animals and humans obtained from physiologically-based pharmacokinetic (PBPK) modeling. However, risk assessment also increasingly requires characterization of uncertainty and variability; such characterization for PBPK model predictions represents a continuing challenge to both modelers and users. Current practices show significant progress in specifying deterministic biological models and the non-deterministic (often statistical) models, estimating their parameters using diverse data sets from multiple sources, and using them to make predictions and characterize uncertainty and variability. The International Workshop on Uncertainty and Variability in PBPK Models, held Oct 31-Nov 2, 2006, sought to identify the state-of-the-science in this area and recommend priorities for research and changes in practice and implementation. For the short term, these include: (1) multidisciplinary teams to integrate deterministic and non-deterministic/statistical models; (2) broader use of sensitivity analyses, including for structural and global (rather than local) parameter changes; and (3) enhanced transparency and reproducibility through more complete documentation of the model structure(s) and parameter values, the results of sensitivity and other analyses, and supporting, discrepant, or excluded data. Longer-term needs include: (1) theoretical and practical methodological improvements…

  16. Deterministic generation of remote entanglement with active quantum feedback

    DOE PAGES

    Martin, Leigh; Motzoi, Felix; Li, Hanhan; ...

    2015-12-10

    We develop and study protocols for deterministic remote entanglement generation using quantum feedback, without relying on an entangling Hamiltonian. In order to formulate the most effective experimentally feasible protocol, we introduce the notion of average-sense locally optimal feedback protocols, which do not require real-time quantum state estimation, a difficult component of real-time quantum feedback control. We use this notion of optimality to construct two protocols that can deterministically create maximal entanglement: a semiclassical feedback protocol for low-efficiency measurements and a quantum feedback protocol for high-efficiency measurements. The latter reduces to direct feedback in the continuous-time limit, whose dynamics can be modeled by a Wiseman-Milburn feedback master equation, which yields an analytic solution in the limit of unit measurement efficiency. Our formalism can smoothly interpolate between continuous-time and discrete-time descriptions of feedback dynamics and we exploit this feature to derive a superior hybrid protocol for arbitrary nonunit measurement efficiency that switches between quantum and semiclassical protocols. Lastly, we show using simulations incorporating experimental imperfections that deterministic entanglement of remote superconducting qubits may be achieved with current technology using the continuous-time feedback protocol alone.

  17. Application of deterministic deconvolution of ground-penetrating radar data in a study of carbonate strata

    USGS Publications Warehouse

    Xia, J.; Franseen, E.K.; Miller, R.D.; Weis, T.V.

    2004-01-01

    We successfully applied deterministic deconvolution to real ground-penetrating radar (GPR) data by using, as the operator, the source wavelet that was generated in and transmitted through air. The GPR data were collected with 400-MHz antennas on a bench adjacent to a cleanly exposed quarry face. The quarry site is characterized by horizontally bedded carbonate strata with shale partings. In order to provide ground truth for this deconvolution approach, 23 conductive rods were drilled into the quarry face at key locations. The steel rods provided critical information for: (1) correlation between reflections on GPR data and geologic features exposed in the quarry face, (2) GPR resolution limits, (3) accuracy of velocities calculated from common midpoint data and (4) identifying any multiples. Comparing the results of deconvolved data with non-deconvolved data demonstrates the effectiveness of deterministic deconvolution in low dielectric-loss media for increased accuracy of velocity models (improved at least 10-15% in our study after deterministic deconvolution), increased vertical and horizontal resolution of specific geologic features, and more accurate representation of geologic features, as confirmed by detailed study of the adjacent quarry wall. © 2004 Elsevier B.V. All rights reserved.
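
    Deterministic deconvolution with a measured source wavelet is, in essence, stabilized spectral division. The sketch below shows the generic textbook scheme on synthetic data; the water-level constant, wavelet, and reflector positions are illustrative, not those of the study.

```python
import numpy as np

def deterministic_deconvolve(trace, wavelet, water_level=0.01):
    """Frequency-domain deconvolution of a trace by a known source wavelet,
    stabilized with a water level to avoid division blow-up at notches."""
    n = len(trace)
    T = np.fft.rfft(trace, n)
    W = np.fft.rfft(wavelet, n)
    denom = np.abs(W) ** 2
    denom = np.maximum(denom, water_level * denom.max())
    return np.fft.irfft(T * np.conj(W) / denom, n)

# Synthetic example: two reflectors convolved with a Ricker-like wavelet
t = np.linspace(-1, 1, 64)
wavelet = (1 - 2 * (np.pi * t) ** 2) * np.exp(-(np.pi * t) ** 2)
reflectivity = np.zeros(512); reflectivity[[100, 140]] = [1.0, -0.6]
trace = np.convolve(reflectivity, wavelet)[:512]
recovered = deterministic_deconvolve(trace, np.pad(wavelet, (0, 512 - 64)))
```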

  18. An overview of engineering concepts and current design algorithms for probabilistic structural analysis

    NASA Technical Reports Server (NTRS)

    Duffy, S. F.; Hu, J.; Hopkins, D. A.

    1995-01-01

    The article begins by examining the fundamentals of traditional deterministic design philosophy. The initial section outlines the concepts of failure criteria and limit state functions, two traditional notions that are embedded in deterministic design philosophy. This is followed by a discussion regarding safety factors (a possible limit state function) and the common utilization of statistical concepts in deterministic engineering design approaches. Next, the fundamental aspects of a probabilistic failure analysis are explored, and it is shown that the deterministic design concepts mentioned in the initial portion of the article are embedded in probabilistic design methods. For components fabricated from ceramic materials (and other similarly brittle materials), the probabilistic design approach yields the widely used Weibull analysis after suitable assumptions are incorporated. The authors point out that Weibull analysis provides the rare instance where closed-form solutions are available for a probabilistic failure analysis. Since numerical methods are usually required to evaluate component reliabilities, a section on Monte Carlo methods is included to introduce the concept. The article concludes with a presentation of the technical aspects that support the numerical method known as fast probability integration (FPI). This includes a discussion of the Hasofer-Lind and Rackwitz-Fiessler approximations.
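
    The Monte Carlo concept the article introduces can be stated in a few lines: sample the random variables, evaluate the limit state function g, and estimate the failure probability as the fraction of samples with g < 0. The sketch below uses an invented linear limit state with normal variables, for which the Hasofer-Lind reliability index is known exactly and can be checked against the simulation.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative limit state g = R - S: failure when load S exceeds resistance R.
# Distribution parameters are invented for the sketch.
n = 1_000_000
R = rng.normal(loc=300.0, scale=30.0, size=n)   # resistance
S = rng.normal(loc=200.0, scale=40.0, size=n)   # load effect

pf = np.mean(R - S < 0.0)                       # Monte Carlo failure probability
beta = (300.0 - 200.0) / np.hypot(30.0, 40.0)   # exact reliability index here
print(f"P_f ~ {pf:.2e}, reliability index beta = {beta:.2f}")  # beta = 2.0
```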

  19. Implementation speed of deterministic population passages compared to that of Rabi pulses

    NASA Astrophysics Data System (ADS)

    Chen, Jingwei; Wei, L. F.

    2015-02-01

    The fast Rabi π-pulse technique has been widely applied to various coherent quantum manipulations, although it requires precise design of the pulse areas. Relaxing the requirement of precise pulse design, various rapid adiabatic passage (RAP) approaches have been utilized instead to implement population passages deterministically. However, the usual RAP protocol cannot be implemented as fast as desired, since the relevant adiabatic condition must be robustly satisfied throughout the passage. Here, we propose a modified shortcut to adiabaticity (STA) technique to significantly accelerate the desired deterministic population passages of quantum states. This transitionless technique goes beyond the usual rotating wave approximation (RWA) performed in recent STA protocols, and thus can be applied to deliver fast quantum evolutions in which the relevant counter-rotating effects cannot be neglected. The proposal is demonstrated specifically with driven two- and three-level systems. Numerical results show that with the present STA technique beyond the RWA, the usual Stark-chirped RAPs and stimulated Raman adiabatic passages can be significantly sped up; the deterministic population passages can be implemented as fast as the widely used Rabi π pulses, yet are insensitive to the applied pulse areas.

  20. Shielding Calculations on Waste Packages - The Limits and Possibilities of different Calculation Methods by the example of homogeneous and inhomogeneous Waste Packages

    NASA Astrophysics Data System (ADS)

    Adams, Mike; Smalian, Silva

    2017-09-01

    For nuclear waste packages, the expected dose rates and nuclide inventory are calculated in advance. Depending on the packaging of the nuclear waste, deterministic programs like MicroShield® provide a range of results for each type of packaging. Stochastic programs like the "Monte-Carlo N-Particle Transport Code System" (MCNP®), on the other hand, provide reliable results for complex geometries. However, this type of program requires a fully trained operator, and calculations are time consuming. The problem here is to choose an appropriate program for a specific geometry. We therefore compared the results of deterministic programs like MicroShield® and stochastic programs like MCNP®. These comparisons enable us to make a statement about the applicability of the various programs for chosen types of containers. As a conclusion, we found that for thin-walled geometries, deterministic programs like MicroShield® are well suited to calculating the dose rate. For cylindrical containers with inner shielding, however, deterministic programs hit their limits. Furthermore, we investigated the effect of an inhomogeneous material and activity distribution on the results. The calculations are still ongoing. Results will be presented in the final abstract.
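
    In its simplest slab form, the deterministic (point-kernel) approach used by codes such as MicroShield® reduces to exponential attenuation times a buildup factor. The sketch below is a toy version with placeholder coefficients (the rough mu value and the linear buildup form are assumptions, not validated data); it conveys why thin-walled geometries are easy for deterministic codes.

```python
import math

def dose_rate_behind_slab(d0, mu, t, buildup_a=1.0, buildup_b=0.5):
    """Point-kernel style estimate: unshielded dose rate d0 times a crude
    linear buildup factor B = a + b*mu*t, times exponential attenuation
    through a slab of thickness t (cm) with attenuation coefficient mu (1/cm).
    Coefficients are illustrative placeholders only."""
    B = buildup_a + buildup_b * mu * t
    return d0 * B * math.exp(-mu * t)

# e.g., ~662 keV gammas through concrete, mu ~ 0.2 1/cm (rough assumption)
for t in (0, 5, 10, 20):
    print(f"{t:2d} cm: {dose_rate_behind_slab(100.0, 0.2, t):.2f}")
```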

  1. Calculating complete and exact Pareto front for multiobjective optimization: a new deterministic approach for discrete problems.

    PubMed

    Hu, Xiao-Bing; Wang, Ming; Di Paolo, Ezequiel

    2013-06-01

    Searching the Pareto front for multiobjective optimization problems usually involves the use of a population-based search algorithm or of a deterministic method with a set of different single aggregate objective functions. The results are, in fact, only approximations of the real Pareto front. In this paper, we propose a new deterministic approach capable of fully determining the real Pareto front for those discrete problems for which it is possible to construct optimization algorithms to find the k best solutions to each of the single-objective problems. To this end, two theoretical conditions are given to guarantee the finding of the actual Pareto front rather than its approximation. Then, a general methodology for designing a deterministic search procedure is proposed. A case study is conducted, where by following the general methodology, a ripple-spreading algorithm is designed to calculate the complete exact Pareto front for multiobjective route optimization. When compared with traditional Pareto front search methods, the obvious advantage of the proposed approach is its unique capability of finding the complete Pareto front. This is illustrated by the simulation results in terms of both solution quality and computational efficiency.

  2. Multi-input and binary reproducible, high bandwidth floating point adder in a collective network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Dong; Eisley, Noel A; Heidelberger, Philip

    To add floating point numbers in a parallel computing system, a collective logic device receives the floating point numbers from computing nodes. The collective logic device converts the floating point numbers to integer numbers. The collective logic device adds the integer numbers and generates a summation of the integer numbers. The collective logic device converts the summation back to a floating point number. The collective logic device performs the receiving, the converting of the floating point numbers, the adding, the generating, and the converting of the summation in one pass. One pass indicates that the computing nodes send inputs only once to the collective logic device and receive outputs only once from the collective logic device.
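
    The reproducibility trick here is that integer addition is exact and order-independent, while floating point addition is not. A minimal software analogue of the idea is sketched below; the fixed binary exponent is a simplification of the hardware's alignment step, chosen for illustration.

```python
import random

def reproducible_sum(xs, scale_bits=60):
    """Order-independent float summation in the spirit of the scheme above:
    convert each float to a fixed-point integer, add exactly (Python ints
    are arbitrary precision), then convert back. A fixed exponent is a
    simplification; real hardware would align to the data's exponents."""
    scale = 1 << scale_bits
    total = sum(int(round(x * scale)) for x in xs)  # exact integer addition
    return total / scale

xs = [random.uniform(-1, 1) for _ in range(10_000)]
shuffled = xs[:]
random.shuffle(shuffled)
# Bitwise identical regardless of summation order:
assert reproducible_sum(xs) == reproducible_sum(shuffled)
print(reproducible_sum(xs), sum(xs))  # the naive sum may vary with ordering
```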

  3. Feasible logic Bell-state analysis with linear optics

    PubMed Central

    Zhou, Lan; Sheng, Yu-Bo

    2016-01-01

    We describe a feasible logic Bell-state analysis protocol that employs logic entanglement in the form of the robust concatenated Greenberger-Horne-Zeilinger (C-GHZ) state. This protocol uses only polarization beam splitters and half-wave plates, which are available in current experimental technology. We can conveniently identify two of the logic Bell states. This protocol can be easily generalized to arbitrary C-GHZ state analysis. We can also distinguish two N-logic-qubit C-GHZ states. As previous theory and experiment have both shown that the C-GHZ state is robust, this logic Bell-state analysis and C-GHZ state analysis may be essential for linear-optical quantum computation protocols whose building blocks are logic-qubit entangled states. PMID:26877208

  4. Feasible logic Bell-state analysis with linear optics.

    PubMed

    Zhou, Lan; Sheng, Yu-Bo

    2016-02-15

    We describe a feasible logic Bell-state analysis protocol that employs logic entanglement in the form of the robust concatenated Greenberger-Horne-Zeilinger (C-GHZ) state. This protocol uses only polarization beam splitters and half-wave plates, which are available in current experimental technology. We can conveniently identify two of the logic Bell states. This protocol can be easily generalized to arbitrary C-GHZ state analysis. We can also distinguish two N-logic-qubit C-GHZ states. As previous theory and experiment have both shown that the C-GHZ state is robust, this logic Bell-state analysis and C-GHZ state analysis may be essential for linear-optical quantum computation protocols whose building blocks are logic-qubit entangled states.

  5. Evidence that logical reasoning depends on conscious processing.

    PubMed

    DeWall, C Nathan; Baumeister, Roy F; Masicampo, E J

    2008-09-01

    Humans, unlike other animals, are equipped with a powerful brain that permits conscious awareness and reflection. A growing trend in psychological science has questioned the benefits of consciousness, however. Testing a hypothesis advanced by [Lieberman, M. D., Gaunt, R., Gilbert, D. T., & Trope, Y. (2002). Reflection and reflexion: A social cognitive neuroscience approach to attributional inference. Advances in Experimental Social Psychology, 34, 199-249], four studies suggested that the conscious, reflective processing system is vital for logical reasoning. Substantial decrements in logical reasoning were found when a cognitive load manipulation preoccupied conscious processing, while hampering the nonconscious system with consciously suppressed thoughts failed to impair reasoning (Experiment 1). Nonconscious activation (priming) of the idea of logical reasoning increased the activation of logic-relevant concepts, but failed to improve logical reasoning performance (Experiments 2a-2c) unless the logical conclusions were largely intuitive and thus not reliant on logical reasoning (Experiment 3). Meanwhile, stimulating the conscious goal of reasoning well led to improvements in reasoning performance (Experiment 4). These findings offer evidence that logical reasoning is aided by the conscious, reflective processing system.

  6. Enzyme-based logic gates and circuits-analytical applications and interfacing with electronics.

    PubMed

    Katz, Evgeny; Poghossian, Arshak; Schöning, Michael J

    2017-01-01

    The paper is an overview of enzyme-based logic gates and the short circuits built from them, with specific examples of Boolean AND and OR gates, and concatenated logic gates composed of multi-step enzyme-biocatalyzed reactions. Noise formation in the biocatalytic reactions and its reduction by adding a "filter" system, which converts a convex response function into a sigmoid one, are discussed. Despite the fact that enzyme-based logic gates are primarily considered as components of future biomolecular computing systems, their biosensing applications are promising for immediate practical use. Analytical use of enzyme logic systems in biomedical and forensic applications is discussed and exemplified with the logic analysis of biomarkers of various injuries, e.g., liver injury, and with the analysis of biomarkers characteristic of different ethnicities found in blood samples at a crime scene. Interfacing of enzyme logic systems with modified electrodes and semiconductor devices is discussed, with particular attention given to interfaces functionalized with signal-responsive materials. Future perspectives in the design of biomolecular logic systems and their applications are discussed in the conclusion. Graphical Abstract: Various applications and signal-transduction methods are reviewed for enzyme-based logic systems.

  7. Price-Dynamics of Shares and Bohmian Mechanics: Deterministic or Stochastic Model?

    NASA Astrophysics Data System (ADS)

    Choustova, Olga

    2007-02-01

    We apply the mathematical formalism of Bohmian mechanics to describe the dynamics of shares. The main distinguishing feature of the financial Bohmian model is the possibility to take into account market psychology by describing the expectations of traders by the pilot wave. We also discuss some objections (coming from the conventional financial mathematics of stochastic processes) against the deterministic Bohmian model, in particular the objection that such a model contradicts the efficient market hypothesis, which is the cornerstone of modern market ideology. Another objection is of a purely mathematical nature: it is related to the quadratic variation of price trajectories. One possibility to reply to this critique is to consider the stochastic Bohm-Vigier model instead of the deterministic one. We do this in the present note.

  8. Illustrated structural application of universal first-order reliability method

    NASA Technical Reports Server (NTRS)

    Verderaime, V.

    1994-01-01

    The general application of the proposed first-order reliability method was achieved through the universal normalization of engineering probability distribution data. The method superimposes prevailing deterministic techniques and practices on the first-order reliability method to surmount deficiencies of the deterministic method and provide benefits of reliability techniques and predictions. A reliability design factor is derived from the reliability criterion to satisfy a specified reliability and is analogous to the deterministic safety factor. Its application is numerically illustrated on several practical structural design and verification cases with interesting results and insights. Two concepts of reliability selection criteria are suggested. Though the method was developed to support affordable structures for access to space, the method should also be applicable for most high-performance air and surface transportation systems.

  9. Observations, theoretical ideas and modeling of turbulent flows: Past, present and future

    NASA Technical Reports Server (NTRS)

    Chapman, G. T.; Tobak, M.

    1985-01-01

    Turbulence was analyzed in a historical context featuring the interactions between observations, theoretical ideas, and modeling within three successive movements. These are identified as predominantly statistical, structural and deterministic. The statistical movement is criticized for its failure to deal with the structural elements observed in turbulent flows. The structural movement is criticized for its failure to embody observed structural elements within a formal theory. The deterministic movement is described as having the potential of overcoming these deficiencies by allowing structural elements to exhibit chaotic behavior that is nevertheless embodied within a theory. Four major ideas of this movement are described: bifurcation theory, strange attractors, fractals, and the renormalization group. A framework for the future study of turbulent flows is proposed, based on the premises of the deterministic movement.

  10. An interval logic for higher-level temporal reasoning

    NASA Technical Reports Server (NTRS)

    Schwartz, R. L.; Melliar-Smith, P. M.; Vogt, F. H.; Plaisted, D. A.

    1983-01-01

    Prior work explored temporal logics, based on classical modal logics, as a framework for specifying and reasoning about concurrent programs, distributed systems, and communications protocols, and reported on efforts using temporal reasoning primitives to express very high level abstract requirements that a program or system is to satisfy. Based on experience with those primitives, this report describes an Interval Logic that is more suitable for expressing such higher level temporal properties. The report provides a formal semantics for the Interval Logic, and several examples of its use. A description of decision procedures for the logic is also included.

  11. An Arbitrary First Order Theory Can Be Represented by a Program: A Theorem

    NASA Technical Reports Server (NTRS)

    Hosheleva, Olga

    1997-01-01

    How can we represent knowledge inside a computer? For formalized knowledge, classical logic seems to be the most adequate tool. Classical logic is behind all formalisms of classical mathematics, and behind many formalisms used in Artificial Intelligence. There is only one serious problem with classical logic: due to the famous Gödel's theorem, classical logic is algorithmically undecidable; as a result, when knowledge is represented in the form of logical statements, it is very difficult to check whether, based on these statements, a given query is true or not. To make knowledge representations more algorithmic, a special field of logic programming was invented. An important portion of logic programming is algorithmically decidable. To cover knowledge that cannot be represented in this portion, several extensions of the decidable fragments have been proposed. In the spirit of logic programming, these extensions are usually introduced in such a way that even if a general algorithm is not available, good heuristic methods exist. It is important to check whether the already proposed extensions are sufficient, or whether further extensions are necessary. In the present paper, we show that one particular extension, namely logic programming with classical negation, introduced by M. Gelfond and V. Lifschitz, can represent (in some reasonable sense) an arbitrary first-order logical theory.

  12. An Institutional Perspective on Accountable Care Organizations.

    PubMed

    Goodrick, Elizabeth; Reay, Trish

    2016-12-01

    We employ aspects of institutional theory to explore how Accountable Care Organizations (ACOs) can effectively manage the multiplicity of ideas and pressures within which they are embedded and consequently better serve patients and their communities. More specifically, we draw on the concept of institutional logics to highlight the importance of understanding the conflicting principles upon which ACOs were founded. Based on previous research conducted both inside and outside health care settings, we argue that ACOs can combine attention to these principles (or institutional logics) in different ways; the options fall on a continuum from (a) segregating the effects of multiple logics from each other by compartmentalizing responses to multiple logics to (b) fully hybridizing the different logics. We suggest that the most productive path for ACOs is to situate their approach between the two extremes of "segregating" and "fully hybridizing." This strategic approach allows ACOs to develop effective responses that combine logics without fully integrating them. We identify three ways that ACOs can embrace institutional complexity short of fully hybridizing disparate logics: (1) reinterpreting practices to make them compatible with other logics; (2) engaging in strategies that take advantage of existing synergy between conflicting logics; (3) creating opportunities for people at frontline to develop innovative ways of working that combine multiple logics. © The Author(s) 2016.

  13. An electrically reconfigurable logic gate intrinsically enabled by spin-orbit materials.

    PubMed

    Kazemi, Mohammad

    2017-11-10

    The spin degree of freedom in magnetic devices has been discussed widely for computing, since it could significantly reduce energy dissipation, might enable beyond Von Neumann computing, and could have applications in quantum computing. For spin-based computing to become widespread, however, energy efficient logic gates comprising as few devices as possible are required. Considerable recent progress has been reported in this area. However, proposals for spin-based logic either require ancillary charge-based devices and circuits in each individual gate or adopt principals underlying charge-based computing by employing ancillary spin-based devices, which largely negates possible advantages. Here, we show that spin-orbit materials possess an intrinsic basis for the execution of logic operations. We present a spin-orbit logic gate that performs a universal logic operation utilizing the minimum possible number of devices, that is, the essential devices required for representing the logic operands. Also, whereas the previous proposals for spin-based logic require extra devices in each individual gate to provide reconfigurability, the proposed gate is 'electrically' reconfigurable at run-time simply by setting the amplitude of the clock pulse applied to the gate. We demonstrate, analytically and numerically with experimentally benchmarked models, that the gate performs logic operations and simultaneously stores the result, realizing the 'stateful' spin-based logic scalable to ultralow energy dissipation.

  14. Adequacy assessment of composite generation and transmission systems incorporating wind energy conversion systems

    NASA Astrophysics Data System (ADS)

    Gao, Yi

    The development and utilization of wind energy for satisfying electrical demand has received considerable attention in recent years due to its tremendous environmental, social and economic benefits, together with public support and government incentives. Electric power generation from wind energy behaves quite differently from that of conventional sources. The fundamentally different operating characteristics of wind energy facilities therefore affect power system reliability in a different manner than those of conventional systems. The reliability impact of such a highly variable energy source is an important aspect that must be assessed when the wind power penetration is significant. The focus of the research described in this thesis is on the utilization of state sampling Monte Carlo simulation in wind integrated bulk electric system reliability analysis and the application of these concepts in system planning and decision making. Load forecast uncertainty is an important factor in long range planning and system development. This thesis describes two approximate approaches developed to reduce the number of steps in a load duration curve which includes load forecast uncertainty, and to provide reasonably accurate generating and bulk system reliability index predictions. The developed approaches are illustrated by application to two composite test systems. A method of generating correlated random numbers with uniform distributions and a specified correlation coefficient in the state sampling method is proposed and used to conduct adequacy assessment in generating systems and in bulk electric systems containing correlated wind farms in this thesis. The studies described show that it is possible to use the state sampling Monte Carlo simulation technique to quantitatively assess the reliability implications associated with adding wind power to a composite generation and transmission system including the effects of multiple correlated wind sites. This is an important development as it permits correlated wind farms to be incorporated in large practical system studies without requiring excessive increases in computer solution time. The procedures described in this thesis for creating monthly and seasonal wind farm models should prove useful in situations where time period models are required to incorporate scheduled maintenance of generation and transmission facilities. There is growing interest in combining deterministic considerations with probabilistic assessment in order to evaluate the quantitative system risk and conduct bulk power system planning. A relatively new approach that incorporates deterministic and probabilistic considerations in a single risk assessment framework has been designated as the joint deterministic-probabilistic approach. The research work described in this thesis illustrates that the joint deterministic-probabilistic approach can be effectively used to integrate wind power in bulk electric system planning. The studies described in this thesis show that the application of the joint deterministic-probabilistic method provides more stringent results for a system with wind power than the traditional deterministic N-1 method because the joint deterministic-probabilistic technique is driven by the deterministic N-1 criterion with an added probabilistic perspective which recognizes the power output characteristics of a wind turbine generator.
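
    The correlated-uniform construction mentioned here can be sketched with a Gaussian copula: draw correlated normals, then map them to uniforms through the normal CDF. The adjustment below targets a Spearman rank correlation; this is a generic construction under stated assumptions and may differ in detail from the thesis's own method.

```python
import numpy as np
from scipy import stats

def correlated_uniforms(rho, size, seed=None):
    """Draw pairs of uniform(0,1) variates with (approximately) a target
    Spearman rank correlation rho, via a Gaussian copula."""
    rng = np.random.default_rng(seed)
    # Adjust the normal correlation so the uniforms hit the target Spearman rho.
    rho_n = 2.0 * np.sin(np.pi * rho / 6.0)
    cov = [[1.0, rho_n], [rho_n, 1.0]]
    z = rng.multivariate_normal([0.0, 0.0], cov, size=size)
    return stats.norm.cdf(z)  # probability integral transform -> uniforms

u = correlated_uniforms(0.8, 100_000, seed=7)
print(stats.spearmanr(u[:, 0], u[:, 1]).correlation)  # ~ 0.8
```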

  15. OncoLogic™

    EPA Science Inventory

    OncoLogic™ - A Computer System to Evaluate the Carcinogenic Potential of Chemicals
    OncoLogic™ is a software program that evaluates the likelihood that a chemical may cause cancer. OncoLogic™ has been peer reviewed and is being rele...

  16. All optical programmable logic array (PLA)

    NASA Astrophysics Data System (ADS)

    Hiluf, Dawit

    2018-03-01

    A programmable logic array (PLA) is an integrated circuit (IC) logic device that can be reconfigured to implement various kinds of combinational logic circuits. The device has a number of AND and OR gates which are linked together to give an output, or further combined with more gates or logic circuits. This work presents the realization of PLAs via the physics of a three-level system interacting with light. A programmable logic array is designed such that a number of different logical functions can be combined in sum-of-products or product-of-sums form. We present an all-optical PLA realized with laser light and observables of quantum systems, where the encoded information can be regarded as a memory chip. The dynamics of the physical system is investigated using a Lie-algebra approach.

  17. Coupling induced logical stochastic resonance

    NASA Astrophysics Data System (ADS)

    Aravind, Manaoj; Murali, K.; Sinha, Sudeshna

    2018-06-01

    In this work we demonstrate the following result: when two coupled bistable sub-systems are each driven separately by an external logic input signal, the coupled system yields outputs that can be mapped to specific logic gate operations in a robust manner, in an optimal window of noise. So, though the individual systems receive only one logic input each, due to the interplay of coupling, nonlinearity and noise, they cooperatively respond to give a logic output that is a function of both inputs. Thus the emergent collective response of the system, due to the inherent coupling, in the presence of a noise floor, maps consistently to the logic output of the two inputs, a phenomenon we term coupling-induced logical stochastic resonance. Lastly, we demonstrate our idea in proof-of-principle circuit experiments.
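
    A minimal simulation makes the idea concrete: two coupled overdamped bistable units, each biased by one logic input, are integrated with Euler-Maruyama steps, and the collective state is read from a single unit. All parameter values below are illustrative, not the paper's; with the bias chosen here the printed truth table matches an OR gate on most runs inside a suitable noise window.

```python
import numpy as np

rng = np.random.default_rng(3)

def coupled_gate(i1, i2, noise=0.5, c=0.6, bias=0.2, dt=0.01, steps=20_000):
    """Two coupled overdamped bistable units, each driven by ONE logic input.
    Euler-Maruyama integration; parameters are illustrative only."""
    x = y = -1.0
    for _ in range(steps):
        dW1, dW2 = rng.normal(size=2) * np.sqrt(dt)
        x += (x - x**3 + c * (y - x) + i1 + bias) * dt + noise * dW1
        y += (y - y**3 + c * (x - y) + i2 + bias) * dt + noise * dW2
    return 1 if x > 0 else 0  # read the collective state from one unit

# Logic 0 -> input -0.5, logic 1 -> input +0.5; with bias = +0.2 this
# typically behaves as an OR gate in the optimal noise window.
for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", coupled_gate(-0.5 + a, -0.5 + b))
```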

  18. Avoiding Deontic Explosion by Contextually Restricting Aggregation

    NASA Astrophysics Data System (ADS)

    Meheus, Joke; Beirlaen, Mathieu; van de Putte, Frederik

    In this paper, we present an adaptive logic for deontic conflicts, called P2.1r, which is based on Goble's logic SDLaPe, a bimodal extension of Goble's logic P that invalidates aggregation for all prima facie obligations. The logic P2.1r has several advantages with respect to SDLaPe. For consistent sets of obligations it yields the same results as Standard Deontic Logic, and for inconsistent sets of obligations it validates aggregation "as much as possible". It thus leads to a richer consequence set than SDLaPe. The logic P2.1r avoids Goble's criticisms against other non-adjunctive systems of deontic logic. Moreover, it can handle all the 'toy examples' from the literature as well as more complex ones.

  19. THRESHOLD LOGIC SYNTHESIS OF SEQUENTIAL MACHINES.

    DTIC Science & Technology

    The application of threshold logic to the design of sequential machines is the subject of this research. A single layer of threshold logic units in... advantages of fewer components because of the use of threshold logic, along with very high-speed operation resulting from the use of only a single layer of... logic. In some instances, namely for asynchronous machines, the only delay need be the natural delay of the single layer of threshold elements. It is...

  20. Repressor logic modules assembled by rolling circle amplification platform to construct a set of logic gates

    PubMed Central

    Wei, Hua; Hu, Bo; Tang, Suming; Zhao, Guojie; Guan, Yifu

    2016-01-01

    Small molecule metabolites and their allosterically regulated repressors play an important role in many gene expression and metabolic disorder processes. These natural sensors, though valuable as good logic switches, have rarely been employed without transcription machinery in cells. Here, two pairs of repressors, which function in opposite ways, were cloned, purified and used to control DNA replication in rolling circle amplification (RCA) in vitro. By using metabolites and repressors as inputs, RCA signals as outputs, four basic logic modules were constructed successfully. To achieve various logic computations based on these basic modules, we designed series and parallel strategies of circular templates, which can further assemble these repressor modules in an RCA platform to realize twelve two-input Boolean logic gates and a three-input logic gate. The RCA-output and RCA-assembled platform was proved to be easy and flexible for complex logic processes and might have application potential in molecular computing and synthetic biology. PMID:27869177

  1. Logic reversibility and thermodynamic irreversibility demonstrated by DNAzyme-based Toffoli and Fredkin logic gates.

    PubMed

    Orbach, Ron; Remacle, Françoise; Levine, R D; Willner, Itamar

    2012-12-26

    The Toffoli and Fredkin gates were suggested as a means to exhibit logic reversibility and thereby reduce energy dissipation associated with logic operations in dense computing circuits. We present a construction of the logically reversible Toffoli and Fredkin gates by implementing a library of predesigned Mg(2+)-dependent DNAzymes and their respective substrates. Although the logical reversibility, for which each set of inputs uniquely correlates to a set of outputs, is demonstrated, the systems manifest thermodynamic irreversibility originating from two quite distinct and nonrelated phenomena. (i) The physical readout of the gates is by fluorescence that depletes the population of the final state of the machine. This irreversible, heat-releasing process is needed for the generation of the output. (ii) The DNAzyme-powered logic gates are made to operate at a finite rate by invoking downhill energy-releasing processes. Even though the three bits of Toffoli's and Fredkin's logically reversible gates manifest thermodynamic irreversibility, we suggest that these gates could have important practical implication in future nanomedicine.
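
    The logical reversibility in question is just bijectivity on the three-bit state space, which is easy to verify exhaustively. The plain truth-table check below is independent of the DNAzyme implementation and simply illustrates the property the gates are designed to exhibit.

```python
from itertools import product

def toffoli(a, b, c):
    """CCNOT: flips the target bit c iff both control bits are 1."""
    return a, b, c ^ (a & b)

def fredkin(c, x, y):
    """CSWAP: swaps x and y iff the control bit c is 1."""
    return (c, y, x) if c else (c, x, y)

for gate in (toffoli, fredkin):
    outputs = {gate(*bits) for bits in product((0, 1), repeat=3)}
    # Logical reversibility = the gate is a bijection on the 8 input states.
    print(gate.__name__, "is reversible:", len(outputs) == 8)
    # Each gate is also its own inverse:
    assert all(gate(*gate(*bits)) == bits for bits in product((0, 1), repeat=3))
```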

  2. A biochemical logic gate using an enzyme and its inhibitor. Part II: The logic gate.

    PubMed

    Sivan, Sarit; Tuchman, Samuel; Lotan, Noah

    2003-06-01

    Enzyme-Based Logic Gates (ENLOGs) are key components in bio-molecular systems for information processing. This report and the previous one in this series address the characterization of two bio-molecular switching elements, namely the alpha-chymotrypsin (alphaCT) derivative p-phenylazobenzoyl-alpha-chymotrypsin (PABalphaCT) and its inhibitor (proflavine), as well as their assembly into a logic gate. The experimental output of the proposed system is expressed in terms of enzymic activity, which is translated into a logic output (i.e. "1" or "0") relative to a predetermined threshold value. We have found that a univalent link exists between the dominant isomers of PABalphaCT (cis or trans), the dominant form of either acridine (proflavine) or acridan, and the logic output of the system. Thus, of all possible combinations, only the trans-PABalphaCT and the acridan lead to an enzymic activity that can be defined as logic output "1". The system operates under the rules of Boolean algebra and performs as an "AND" logic gate.

  3. Logic regression and its extensions.

    PubMed

    Schwender, Holger; Ruczinski, Ingo

    2010-01-01

    Logic regression is an adaptive classification and regression procedure, initially developed to reveal interacting single nucleotide polymorphisms (SNPs) in genetic association studies. In general, this approach can be used in any setting with binary predictors, when the interaction of these covariates is of primary interest. Logic regression searches for Boolean (logic) combinations of binary variables that best explain the variability in the outcome variable, and thus, reveals variables and interactions that are associated with the response and/or have predictive capabilities. The logic expressions are embedded in a generalized linear regression framework, and thus, logic regression can handle a variety of outcome types, such as binary responses in case-control studies, numeric responses, and time-to-event data. In this chapter, we provide an introduction to the logic regression methodology, list some applications in public health and medicine, and summarize some of the direct extensions and modifications of logic regression that have been proposed in the literature. Copyright © 2010 Elsevier Inc. All rights reserved.

  4. Toward a phenomenology of trance logic in posttraumatic stress disorder.

    PubMed

    Beshai, J A

    2004-04-01

    Some induction procedures result in trance logic as an essential feature of hypnosis. Trance logic is a voluntary state of acceptance of suggestions without the critical evaluation that would destroy the validity of the meaningfulness of the suggestion. Induction procedures in real and simulated conditions induce a conflict between two contradictory messages in experimental hypnosis. In military induction the conflict is much more subtle involving society's need for security and its need for ethics. Such conflicts are often construed by the subject as trance logic. Trance logic provides an opportunity for therapists using the phenomenology of "presence" to deal with the objectified concepts of "avoidance," "numbing" implicit in this kind of dysfunctional thinking in Posttraumatic Stress Disorder. An individual phenomenology of induction procedures and suggestions, which trigger trance logic, may lead to a resolution of logical fallacies and recurring painful memories. It invites a reconciliation of conflicting messages implicit in phobias and avoidance traumas. Such a phenomenological analysis of trance logic may well be a novel approach to restructure the meaning of trauma.

  5. Control of electrochemical signals from quantum dots conjugated to organic materials by using DNA structure in an analog logic gate.

    PubMed

    Chen, Qi; Yoo, Si-Youl; Chung, Yong-Ho; Lee, Ji-Young; Min, Junhong; Choi, Jeong-Woo

    2016-10-01

    Various bio-logic gates have been studied intensively to overcome the rigidity of single-function silicon-based logic devices arising from combinations of various gates. Here, a simple control tool using electrochemical signals from quantum dots (QDs) was constructed using DNA and organic materials for multiple logic functions. The electrochemical redox current generated from QDs was controlled by the DNA structure. DNA structure, in turn, was dependent on the components (organic materials) and the input signal (pH). Independent electrochemical signals from two different logic units containing QDs were merged into a single analog-type logic gate, which was controlled by two inputs. We applied this electrochemical biodevice to a simple logic system and achieved various logic functions from the controlled pH input sets. This could be further improved by choosing QDs, ionic conditions, or DNA sequences. This research provides a feasible method for fabricating an artificial intelligence system. Copyright © 2016 Elsevier B.V. All rights reserved.

  6. Reproducibility in a multiprocessor system

    DOEpatents

    Bellofatto, Ralph A; Chen, Dong; Coteus, Paul W; Eisley, Noel A; Gara, Alan; Gooding, Thomas M; Haring, Rudolf A; Heidelberger, Philip; Kopcsay, Gerard V; Liebsch, Thomas A; Ohmacht, Martin; Reed, Don D; Senger, Robert M; Steinmacher-Burow, Burkhard; Sugawara, Yutaka

    2013-11-26

    Fixing a problem is usually greatly aided if the problem is reproducible. To ensure reproducibility of a multiprocessor system, the following aspects are proposed: a deterministic system start state, a single system clock, phase alignment of clocks in the system, system-wide synchronization events, reproducible execution of system components, deterministic chip interfaces, zero-impact communication with the system, precise stop of the system, and a scan of the system state.

  7. Stochastic Processes in Physics: Deterministic Origins and Control

    NASA Astrophysics Data System (ADS)

    Demers, Jeffery

    Stochastic processes are ubiquitous in the physical sciences and engineering. While often used to model imperfections and experimental uncertainties in the macroscopic world, stochastic processes can attain deeper physical significance when used to model the seemingly random and chaotic nature of the underlying microscopic world. Nowhere is this notion more prevalent than in the field of stochastic thermodynamics - a modern systematic framework used to describe mesoscale systems in strongly fluctuating thermal environments, which has revolutionized our understanding of, for example, molecular motors, DNA replication, far-from-equilibrium systems, and the laws of macroscopic thermodynamics as they apply to the mesoscopic world. With progress, however, come further challenges and deeper questions, most notably in the thermodynamics of information processing and feedback control. Here it is becoming increasingly apparent that, due to divergences and subtleties of interpretation, the deterministic foundations of the stochastic processes themselves must be explored and understood. This thesis presents a survey of stochastic processes in physical systems, the deterministic origins of their emergence, and the subtleties associated with controlling them. First, we study time-dependent billiards in the quivering limit - a limit where a billiard system is indistinguishable from a stochastic system, and where the simplified stochastic system allows us to view issues associated with deterministic time-dependent billiards in a new light and address some long-standing problems. Then, we embark on an exploration of the deterministic microscopic Hamiltonian foundations of non-equilibrium thermodynamics, and we find that important results from mesoscopic stochastic thermodynamics have simple microscopic origins which would not be apparent without the benefit of both the micro and meso perspectives. Finally, we study the problem of stabilizing a stochastic Brownian particle with feedback control, and we find that in order to avoid paradoxes involving the first law of thermodynamics, we need a model for the fine details of the thermal driving noise. The underlying theme of this thesis is the argument that the deterministic microscopic perspective and the stochastic mesoscopic perspective are both important and useful, and when used together, we can more deeply and satisfyingly understand the physics occurring over either scale.

  8. C code generation from Petri-net-based logic controller specification

    NASA Astrophysics Data System (ADS)

    Grobelny, Michał; Grobelna, Iwona; Karatkevich, Andrei

    2017-08-01

    The article focuses on the programming of logic controllers. It is important that the program code of a logic controller execute flawlessly according to the primary specification. In the presented approach we generate C code for an AVR microcontroller from a rule-based logical model of a control process derived from a control-interpreted Petri net. The same logical model is also used for formal verification of the specification by means of the model checking technique. The proposed rule-based logical model and formal transformation rules ensure that the obtained implementation is consistent with the already verified specification. The approach is validated by practical experiments.
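
    The generation step can be caricatured in a few lines: each rule of the logical model ("if the input places are marked and the guard holds, clear the inputs and mark the outputs") maps to a C conditional. The sketch below is a toy illustration of that idea only; the place names, guards, and emitted skeleton are invented, and the authors' actual transformation additionally guarantees consistency with the model-checked specification.

```python
# Toy rule-to-C generator: each rule is (transition, input places,
# output places, guard expression over the C function's inputs).
rules = [
    ("t1", ["p0"], ["p1"], "START"),
    ("t2", ["p1"], ["p2"], "SENSOR_OK"),
    ("t3", ["p2"], ["p0"], "1"),
]

def emit_c(rules, places=("p0", "p1", "p2")):
    lines = ["#include <stdint.h>", ""]
    # One marking flag per place; p0 holds the initial token.
    lines += [f"static uint8_t {p} = {int(p == 'p0')};" for p in places]
    lines += ["", "void step(uint8_t START, uint8_t SENSOR_OK) {"]
    for name, ins, outs, guard in rules:
        cond = " && ".join(ins + [f"({guard})"])
        body = "".join(f" {p} = 0;" for p in ins) + "".join(f" {p} = 1;" for p in outs)
        lines.append(f"    if ({cond}) {{{body} }}  /* {name} */")
    lines += ["}", ""]
    return "\n".join(lines)

print(emit_c(rules))
```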

  9. Parallel logic gates in synthetic gene networks induced by non-Gaussian noise.

    PubMed

    Xu, Yong; Jin, Xiaoqin; Zhang, Huiqing

    2013-11-01

    The recent idea of logical stochastic resonance is verified in synthetic gene networks induced by non-Gaussian noise. We realize the switching between two kinds of logic gates under optimal moderate noise intensity by varying two different tunable parameters in a single gene network. Furthermore, in order to obtain more logic operations and thus additional information-processing capacity, we obtain two complementary logic gates in a two-dimensional toggle-switch model and realize the transformation between the two gates by changing different parameters. These simulation results contribute to improving the computational power and functionality of the networks.
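
    The mechanism can be illustrated with the textbook logical-stochastic-resonance setup: a noisy bistable system whose well occupancy encodes the logic output. The sketch below is a generic illustration with assumed parameters, not the paper's gene-network equations:

      import numpy as np

      # dx = (x - x^3 + b + I1 + I2) dt + sqrt(2D) dW; with moderate noise D
      # the sign of x reliably tracks a logic function of the inputs, and the
      # bias b switches the gate between AND-like and OR-like behaviour.
      rng = np.random.default_rng(1)

      def gate_output(bit1, bit2, b=-0.3, D=0.2, dt=0.01, T=100.0):
          I1 = 0.5 if bit1 else -0.5
          I2 = 0.5 if bit2 else -0.5
          x, above = 0.0, 0
          n = int(T / dt)
          for step in range(n):
              x += (x - x**3 + b + I1 + I2) * dt \
                   + np.sqrt(2 * D * dt) * rng.standard_normal()
              if step > n // 2:
                  above += x > 0
          return int(above > n // 4)   # majority vote over the second half

      for bits in [(0, 0), (0, 1), (1, 0), (1, 1)]:
          print(bits, "->", gate_output(*bits))   # AND-like for b = -0.3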

  10. Verifying the Modal Logic Cube Is an Easy Task (For Higher-Order Automated Reasoners)

    NASA Astrophysics Data System (ADS)

    Benzmüller, Christoph

    Prominent logics, including quantified multimodal logics, can be elegantly embedded in simple type theory (classical higher-order logic). Furthermore, off-the-shelf reasoning systems for simple type theory exist that can be uniformly employed for reasoning within and about embedded logics. In this paper we focus on reasoning about modal logics and exploit our framework for the automated verification of inclusion and equivalence relations between them. Related work has applied first-order automated theorem provers to the task. Our solution achieves significant improvements, most notably with respect to the elegance and simplicity of the problem encodings as well as with respect to automation performance.
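
    The embedding that makes this uniform treatment possible is, in essence, the standard shallow semantic embedding of modal operators in higher-order logic; a sketch (with worlds of type $\mu$, modal propositions of type $\mu \to o$, and accessibility relation $R$):

      \Box\varphi = \lambda w.\,\forall v.\,(R\,w\,v \rightarrow \varphi\,v), \qquad
      \Diamond\varphi = \lambda w.\,\exists v.\,(R\,w\,v \wedge \varphi\,v), \qquad
      \mathrm{valid}\,\varphi = \forall w.\,\varphi\,w

    Inclusion between two modal logics then becomes a higher-order conjecture quantifying over $\varphi$ and the frame conditions on $R$, which is exactly the form of problem an off-the-shelf higher-order prover can attack.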

  11. Size reduction techniques for vital compliant VHDL simulation models

    DOEpatents

    Rich, Marvin J.; Misra, Ashutosh

    2006-08-01

    A method and system select delay values from a VHDL standard delay file that correspond to an instance of a logic gate in a logic model. The system then collects all the delay values of the selected instance and builds super generics for the rise time and the fall time of that instance. This process is repeated for every delay value in the standard delay file (310) that corresponds to every instance of every logic gate in the logic model. The system then outputs a reduced-size standard delay file (314) containing the super generics for every instance of every logic gate in the logic model.
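
    The patent abstract does not specify how the collected delays are aggregated into a super generic; one plausible reading is a worst-case (maximum) rise/fall pair per instance. A hedged Python sketch of that reading, with hypothetical data:

      from collections import defaultdict

      def build_super_generics(sdf_entries):
          """sdf_entries: iterable of (instance, rise_delay, fall_delay) taken
          from a standard delay file; returns one worst-case pair per instance."""
          worst = defaultdict(lambda: [0.0, 0.0])
          for instance, rise, fall in sdf_entries:
              worst[instance][0] = max(worst[instance][0], rise)
              worst[instance][1] = max(worst[instance][1], fall)
          return dict(worst)

      entries = [("u1/nand2", 0.12, 0.10), ("u1/nand2", 0.15, 0.09),
                 ("u2/inv", 0.05, 0.07)]
      print(build_super_generics(entries))
      # {'u1/nand2': [0.15, 0.10], 'u2/inv': [0.05, 0.07]}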

  12. Local rollback for fault-tolerance in parallel computing systems

    DOEpatents

    Blumrich, Matthias A [Yorktown Heights, NY; Chen, Dong [Yorktown Heights, NY; Gara, Alan [Yorktown Heights, NY; Giampapa, Mark E [Yorktown Heights, NY; Heidelberger, Philip [Yorktown Heights, NY; Ohmacht, Martin [Yorktown Heights, NY; Steinmacher-Burow, Burkhard [Boeblingen, DE; Sugavanam, Krishnan [Yorktown Heights, NY

    2012-01-24

    A control logic device performs a local rollback in a parallel supercomputing system. The supercomputing system includes at least one cache memory device. The control logic device determines a local rollback interval and runs at least one instruction in that interval. It evaluates whether an unrecoverable condition occurs while running the at least one instruction during the local rollback interval, and checks whether an error occurs during the local rollback. The control logic device restarts the local rollback interval if the error occurs and the unrecoverable condition does not occur during the interval.
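
    The control flow described above amounts to a checkpoint-and-retry loop. A minimal sketch under stated assumptions (toy state, randomly injected soft errors; all helper names hypothetical):

      import random

      state = {"mem": 0}
      def take_checkpoint(): return dict(state)
      def restore_checkpoint(snap): state.clear(); state.update(snap)
      def execute(chunk):
          for op in chunk:
              state["mem"] += op
          return False          # toy run: no unrecoverable condition occurs
      def error_detected(): return random.random() < 0.1   # injected errors

      def run_with_local_rollback(instructions, interval_len):
          pc = 0
          while pc < len(instructions):
              snap = take_checkpoint()
              end = min(pc + interval_len, len(instructions))
              unrecoverable = execute(instructions[pc:end])
              if error_detected():
                  if unrecoverable:
                      raise RuntimeError("fall back to global recovery")
                  restore_checkpoint(snap)   # retry this interval locally
              else:
                  pc = end                   # interval committed; advance

      random.seed(0)
      run_with_local_rollback([1] * 100, interval_len=10)
      print(state["mem"])   # 100: every increment applied exactly once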

  13. The Quantum Logical Challenge: Peter Mittelstaedt's Contributions to Logic and Philosophy of Science

    NASA Astrophysics Data System (ADS)

    Beltrametti, E.; Dalla Chiara, M. L.; Giuntini, R.

    2017-12-01

    Peter Mittelstaedt's contributions to quantum logic and to the foundational problems of quantum theory have significantly realized the most authentic spirit of the International Quantum Structures Association: original research on hard technical problems, which are often "entangled" with the emergence of important changes in our general world-conceptions. During a time when both the logical and the physical communities often showed a skeptical attitude towards Birkhoff and von Neumann's quantum logic, Mittelstaedt brought to light the deeply innovative features of a quantum logical thinking that allows us to overcome some strong and unrealistic assumptions of classical logical arguments. Later on, his intense research on the unsharp approach to quantum theory and to the measurement problem stimulated increasing interest in unsharp forms of quantum logic, creating a fruitful interaction between the work of quantum logicians and that of many-valued logicians. Mittelstaedt's general views about quantum logic and quantum theory seem to be inspired by a conjecture that is today more and more confirmed: there is something universal in the quantum theoretic formalism that goes beyond the limits of microphysics, giving rise to interesting applications in a number of different fields.

  14. Bouncing cosmologies via modified gravity in the ADM formalism: Application to loop quantum cosmology

    NASA Astrophysics Data System (ADS)

    de Haro, Jaume; Amorós, Jaume

    2018-03-01

    We consider the Arnowitt-Deser-Misner formalism as a tool to build bouncing cosmologies. In this approach, the foliation of the spacetime has to be fixed in order to go beyond general relativity modifying the gravitational sector. Once a preferred slicing, which we choose based on the matter content of the Universe following the spirit of Weyl's postulate, has been fixed, f theories depending on the extrinsic and intrinsic curvature of the slicing are covariant for all the reference frames preserving the foliation; i.e., the constraint and dynamical equations have the same form for all these observers. Moreover, choosing multivalued f functions, bouncing backgrounds emerge in a natural way. In fact, the simplest is the one corresponding to holonomy corrected loop quantum cosmology. The final goal of this work is to provide the equations of perturbations which, unlike the full equations, become gauge invariant in this theory, and apply them to the so-called matter bounce scenario.
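
    For reference, the holonomy-corrected loop quantum cosmology background that emerges as the simplest case is governed by the well-known modified Friedmann equation, quoted here for orientation:

      H^2 = \frac{8\pi G}{3}\,\rho\left(1 - \frac{\rho}{\rho_c}\right)

    so that $H = 0$ at the critical density $\rho = \rho_c$, which is where the bounce replaces the classical big-bang singularity.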

  15. Low-order modelling of a drop on a highly-hydrophobic substrate: statics and dynamics

    NASA Astrophysics Data System (ADS)

    Wray, Alexander W.; Matar, Omar K.; Davis, Stephen H.

    2017-11-01

    We analyse the behaviour of droplets resting on highly-hydrophobic substrates. This problem is of practical interest due to its appearance in many physical contexts involving the spreading, wetting, and dewetting of fluids on solid substrates. In mathematical terms, it exhibits an interesting challenge as the interface is multi-valued as a function of the natural Cartesian co-ordinates, presenting a stumbling block to typical low-order modelling techniques. Nonetheless, we show that in the static case, the interfacial shape is governed by the Young-Laplace equation, which may be solved explicitly in terms of elliptic functions. We present simple low-order expressions that faithfully reproduce the shapes. We then consider the dynamic case, showing that the predictions of our low-order model compare favourably with those obtained from direct numerical simulations. We also examine the characteristic flow regimes of interest. EPSRC, UK, MEMPHIS program Grant (EP/K003976/1), RAEng Research Chair (OKM).
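
    The multivaluedness noted above is what rules out a graph representation $z = h(x)$; an arc-length parametrization of the interface sidesteps it. One standard form of the axisymmetric Young-Laplace balance (a sketch; the paper's exact formulation may differ), with $(x(s), z(s))$ the interface, $\phi$ its inclination angle, $\sigma$ the surface tension, and $\Delta\rho\,g$ the gravitational term, is

      \sigma\left(\frac{d\phi}{ds} + \frac{\sin\phi}{x}\right) = \Delta p_0 - \Delta\rho\, g\, z

    Because $s$ is arc length, the description remains single-valued even when the drop overhangs, as it does on highly hydrophobic substrates.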

  16. Testability analysis on a hydraulic system in a certain equipment based on simulation model

    NASA Astrophysics Data System (ADS)

    Zhang, Rui; Cong, Hua; Liu, Yuanhong; Feng, Fuzhou

    2018-03-01

    To address the problems of structural complexity and the shortage of fault statistics in hydraulic systems, a multi-valued testability analysis method based on a simulation model is proposed. Using a simulation model built in AMESim, the method injects simulated faults and records the variation of test parameters, such as pressure and flow rate, at each test point compared with those under normal conditions. A multi-valued fault-test dependency matrix is thus established. The fault detection rate (FDR) and fault isolation rate (FIR) are then calculated from the dependency matrix. Finally, the testability and fault diagnosis capability of the system are analyzed and evaluated; they reach only 54% (FDR) and 23% (FIR). To improve the testability of the system, the number and positions of the test points are optimized. Results show that the proposed test-placement scheme can reduce the difficulty, inefficiency, and high cost of system maintenance.
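
    The FDR/FIR computation from a multi-valued dependency matrix can be made concrete with a small sketch (toy matrix and one common convention; the paper's exact definitions may differ):

      import numpy as np

      # D[i, j]: symptom of fault i at test j (0 = normal, 1 = low, 2 = high).
      D = np.array([[0, 1, 2],
                    [0, 1, 2],    # same signature as fault 0 -> detectable, not isolable
                    [2, 0, 0],
                    [0, 0, 0]])   # no symptom at any test -> undetectable

      detected = np.any(D != 0, axis=1)
      FDR = detected.mean()                      # fraction of faults detected

      signatures = [tuple(row) for row in D]
      isolated = [detected[i] and signatures.count(signatures[i]) == 1
                  for i in range(len(D))]
      FIR = np.mean(isolated)                    # here taken over all faults

      print(f"FDR = {FDR:.0%}, FIR = {FIR:.0%}")  # FDR = 75%, FIR = 25%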

  17. Defect in the Joint Spectrum of Hydrogen due to Monodromy.

    PubMed

    Dullin, Holger R; Waalkens, Holger

    2018-01-12

    In addition to the well-known case of spherical coordinates, the Schrödinger equation of the hydrogen atom separates in three further coordinate systems. Separating in a particular coordinate system defines a system of three commuting operators. We show that the joint spectrum of the Hamilton operator, the z component of the angular momentum, and an operator involving the z component of the quantum Laplace-Runge-Lenz vector obtained from separation in prolate spheroidal coordinates has quantum monodromy for energies sufficiently close to the ionization threshold. The precise value of the energy above which monodromy is observed depends on the distance of the focus points of the spheroidal coordinates. The presence of monodromy means that one cannot globally assign quantum numbers to the joint spectrum. Whereas the principal quantum number n and the magnetic quantum number m correspond to the Bohr-Sommerfeld quantization of globally defined classical actions, a third quantum number cannot be globally defined because the third action is globally multivalued.

  18. Periodicity and global exponential stability of generalized Cohen-Grossberg neural networks with discontinuous activations and mixed delays.

    PubMed

    Wang, Dongshu; Huang, Lihong

    2014-03-01

    In this paper, we investigate the periodic dynamical behaviors of a class of general Cohen-Grossberg neural networks with discontinuous right-hand sides and time-varying and distributed delays. By means of retarded differential inclusion theory and the fixed point theorem for multi-valued maps, the existence of periodic solutions for the neural networks is obtained. We then derive some sufficient conditions for the global exponential stability and convergence of the neural networks, using nonsmooth analysis and a generalized Lyapunov approach. Our results remain valid without assuming boundedness (or a growth condition) or monotonicity of the discontinuous neuron activation functions. Moreover, our results extend previous works not only on discrete time-varying and distributed delayed neural networks with continuous or even Lipschitz continuous activations, but also on discrete time-varying and distributed delayed neural networks with discontinuous activations. We give some numerical examples to show the applicability and effectiveness of our main results. Copyright © 2013 Elsevier Ltd. All rights reserved.
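
    For orientation, a generic Cohen-Grossberg network with time-varying and distributed delays takes a form along these lines (a sketch; the paper's model may include further terms):

      \dot{x}_i(t) = -a_i(x_i(t))\Big[\, b_i(x_i(t))
          - \sum_{j=1}^{n} c_{ij}\, f_j(x_j(t))
          - \sum_{j=1}^{n} d_{ij}\, f_j\big(x_j(t-\tau_{ij}(t))\big)
          - \sum_{j=1}^{n} e_{ij} \int_{t-\sigma}^{t} f_j(x_j(s))\, ds - I_i \Big]

    where the activations $f_j$ may be discontinuous, which is what forces the differential-inclusion (Filippov) treatment.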

  19. Logics of Business Education for Sustainability

    ERIC Educational Resources Information Center

    Andersson, Pernilla; Öhman, Johan

    2016-01-01

    This paper explores various kinds of logics of "business education for sustainability" and how these "logics" position the subject business person, based on eight teachers' reasoning of their own practices. The concept of logics developed within a discourse theoretical framework is employed to analyse the teachers' reasoning.…

  20. Electronics. Module 3: Digital Logic Application. Instructor's Guide.

    ERIC Educational Resources Information Center

    Carter, Ed; Murphy, Mark

    This guide contains instructor's materials for a 10-unit secondary school course on digital logic application. The units are introduction to digital, logic gates, digital integrated circuits, combination logic, flip-flops, counters and shift registers, encoders and decoders, arithmetic circuits, memory, and analog/digital and digital/analog…

  1. Logic feels so good-I like it! Evidence for intuitive detection of logicality in syllogistic reasoning.

    PubMed

    Morsanyi, Kinga; Handley, Simon J

    2012-05-01

    When people evaluate syllogisms, their judgments of validity are often biased by the believability of the conclusions of the problems. Thus, it has been suggested that syllogistic reasoning performance is based on an interplay between a conscious and effortful evaluation of logicality and an intuitive appreciation of the believability of the conclusions (e.g., Evans, Newstead, Allen, & Pollard, 1994). However, logic effects in syllogistic reasoning emerge even when participants are unlikely to carry out a full logical analysis of the problems (e.g., Shynkaruk & Thompson, 2006). There is also evidence that people can implicitly detect the conflict between their beliefs and the validity of the problems, even if they are unable to consciously produce a logical response (e.g., De Neys, Moyens, & Vansteenwegen, 2010). In 4 experiments we demonstrate that people intuitively detect the logicality of syllogisms, and this effect emerges independently of participants' conscious mindset and their cognitive capacity. This logic effect is also unrelated to the superficial structure of the problems. Additionally, we provide evidence that the logicality of the syllogisms is detected through slight changes in participants' affective states. In fact, subliminal affective priming had an effect on participants' subjective evaluations of the problems. Finally, when participants misattributed their emotional reactions to background music, this significantly reduced the logic effect. (c) 2012 APA, all rights reserved.

  2. Adaptive logical stochastic resonance in time-delayed synthetic genetic networks

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Zheng, Wenbin; Song, Aiguo

    2018-04-01

    In this paper, the concept of logical stochastic resonance is applied to implement logic operation and latch operation in time-delayed synthetic genetic networks derived from a bacteriophage λ. Clear logic operation and latch operation can be obtained when the network is tuned by a modulated periodic force and time delay. In contrast with previous synthetic genetic networks based on logical stochastic resonance, the proposed system has two advantages. On one hand, adding a modulated periodic force to the background noise increases the length of the optimal noise plateau over which the desired logic response is obtained and makes the system adapt to varying noise intensity. On the other hand, tuning the time delay extends the optimal noise plateau to a larger range. These results may help in designing new genetic regulatory network paradigms based on logical stochastic resonance.

  3. Do rational numbers play a role in selection for stochasticity?

    PubMed

    Sinclair, Robert

    2014-01-01

    When a given tissue must, to be able to perform its various functions, consist of different cell types, each fairly evenly distributed and with specific probabilities, then there are at least two quite different developmental mechanisms which might achieve the desired result. Let us begin with the case of two cell types, and first imagine that the proportion of numbers of cells of these types should be 1:3. Clearly, a regular structure composed of repeating units of four cells, three of which are of the dominant type, will easily satisfy the requirements, and a deterministic mechanism may lend itself to the task. What if, however, the proportion should be 10:33? The same simple, deterministic approach would now require a structure of repeating units of 43 cells, and this certainly seems to require a far more complex and potentially prohibitive deterministic developmental program. Stochastic development, replacing regular units with random distributions of given densities, might not be evolutionarily competitive in comparison with the deterministic program when the proportions should be 1:3, but it has the property that, whatever developmental mechanism underlies it, its complexity does not need to depend very much upon target cell densities at all. We are immediately led to speculate that proportions which correspond to fractions with large denominators (such as the 33 of 10/33) may be more easily achieved by stochastic developmental programs than by deterministic ones, and this is the core of our thesis: that stochastic development may tend to occur more often in cases involving rational numbers with large denominators. To be imprecise: that simple rationality and determinism belong together, as do irrationality and randomness.

  4. Comparative study on neutronics characteristics of a 1500 MWe metal fuel sodium-cooled fast reactor

    DOE PAGES

    Ohgama, Kazuya; Aliberti, Gerardo; Stauff, Nicolas E.; ...

    2017-02-28

    Under the cooperative effort of the Civil Nuclear Energy R&D Working Group within the framework of the U.S.-Japan bilateral, Argonne National Laboratory (ANL) and the Japan Atomic Energy Agency (JAEA) have been performing a benchmark study using the Japan Sodium-cooled Fast Reactor (JSFR) design with metal fuel. In this benchmark study, core characteristic parameters at the beginning of cycle were evaluated by the best-estimate deterministic and stochastic methodologies of ANL and JAEA. The results obtained by both institutions show good agreement, with less than 200 pcm of discrepancy on the neutron multiplication factor and less than 3% of discrepancy on the sodium void reactivity, Doppler reactivity, and control rod worth. The results of the stochastic and deterministic approaches were compared by each party to investigate the impact of the deterministic approximations and to understand potential variations in the results due to the different calculation methodologies employed. From the detailed analysis of methodologies, it was found that the good agreement in multiplication factor from the deterministic calculations comes from the cancellation of the differences in methodology (0.4%) and nuclear data (0.6%). The different treatment of reflector cross-section generation was estimated to be the major cause of the discrepancy between the multiplication factors obtained by the JAEA and ANL deterministic methodologies. The impact of the nuclear data libraries was also investigated using a sensitivity analysis methodology. Furthermore, the differences in the inelastic scattering cross sections of U-238, the ν values and fission cross sections of Pu-239, and the μ-average of Na-23 are the major contributors to the difference in the multiplication factors.

  5. A Comparison of Deterministic and Stochastic Modeling Approaches for Biochemical Reaction Systems: On Fixed Points, Means, and Modes.

    PubMed

    Hahl, Sayuri K; Kremling, Andreas

    2016-01-01

    In the mathematical modeling of biochemical reactions, a convenient standard approach is to use ordinary differential equations (ODEs) that follow the law of mass action. However, this deterministic ansatz is based on simplifications; in particular, it neglects noise, which is inherent to biological processes. In contrast, the stochasticity of reactions is captured in detail by the discrete chemical master equation (CME). Therefore, the CME is frequently applied to mesoscopic systems, where copy numbers of involved components are small and random fluctuations are thus significant. Here, we compare these two common modeling approaches, aiming at identifying parallels and discrepancies between deterministic variables and possible stochastic counterparts such as the mean or modes of the state space probability distribution. To that end, a mathematically flexible reaction scheme of autoregulatory gene expression is translated into the corresponding ODE and CME formulations. We show that in the thermodynamic limit, deterministic stable fixed points usually correspond well to the modes in the stationary probability distribution. However, this connection might be disrupted in small systems. The discrepancies are characterized and systematically traced back to the magnitude of the stoichiometric coefficients and to the presence of nonlinear reactions. These factors are found to synergistically promote large and highly asymmetric fluctuations. As a consequence, bistable but unimodal, and monostable but bimodal, systems can emerge. This clearly challenges the role of ODE modeling in the description of cellular signaling and regulation, where some of the involved components usually occur in low copy numbers. Nevertheless, systems whose bimodality originates from deterministic bistability are found to sustain a more robust separation of the two states compared to bimodal but monostable systems. In regulatory circuits that require precise coordination, ODE modeling is thus still expected to provide relevant indications on the underlying dynamics.
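
    The linear regime where the two descriptions agree is easy to exhibit: for simple birth-death expression (production $k$, degradation $g$), the ODE fixed point and the mode of the CME stationary distribution nearly coincide. A sketch with illustrative parameters:

      import numpy as np

      k, g, N = 20.5, 1.0, 200          # production, degradation, state cutoff

      # ODE: dn/dt = k - g*n  ->  fixed point n* = k/g
      n_star = k / g

      # CME stationary distribution via detailed balance for birth-death:
      # p(n+1)/p(n) = k / (g*(n+1))  ->  Poisson with mean k/g
      p = np.zeros(N)
      p[0] = 1.0
      for n in range(N - 1):
          p[n + 1] = p[n] * k / (g * (n + 1))
      p /= p.sum()

      print("ODE fixed point:", n_star, "  CME mode:", int(np.argmax(p)))
      # 20.5 vs 20: agreement up to discreteness.  With nonlinear propensities
      # and small copy numbers, fixed points and modes can decouple, as the
      # paper shows (bistable-but-unimodal and monostable-but-bimodal cases).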

  6. Comparative study on neutronics characteristics of a 1500 MWe metal fuel sodium-cooled fast reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ohgama, Kazuya; Aliberti, Gerardo; Stauff, Nicolas E.

    Under the cooperative effort of the Civil Nuclear Energy R&D Working Group within the framework of the U.S.-Japan bilateral, Argonne National Laboratory (ANL) and Japan Atomic Energy Agency (JAEA) have been performing benchmark study using Japan Sodium-cooled Fast Reactor (JSFR) design with metal fuel. In this benchmark study, core characteristic parameters at the beginning of cycle were evaluated by the best estimate deterministic and stochastic methodologies of ANL and JAEA. The results obtained by both institutions show a good agreement with less than 200 pcm of discrepancy on the neutron multiplication factor, and less than 3% of discrepancy on themore » sodium void reactivity, Doppler reactivity, and control rod worth. The results by the stochastic and deterministic approaches were compared in each party to investigate impacts of the deterministic approximation and to understand potential variations in the results due to different calculation methodologies employed. From the detailed analysis of methodologies, it was found that the good agreement in multiplication factor from the deterministic calculations comes from the cancellation of the differences on the methodology (0.4%) and nuclear data (0.6%). The different treatment in reflector cross section generation was estimated as the major cause of the discrepancy between the multiplication factors by the JAEA and ANL deterministic methodologies. Impacts of the nuclear data libraries were also investigated using a sensitivity analysis methodology. Furthermore, the differences on the inelastic scattering cross sections of U-238, ν values and fission cross sections of Pu-239 and µ-average of Na-23 are the major contributors to the difference on the multiplication factors.« less

  7. Study on the evaluation method for fault displacement based on characterized source model

    NASA Astrophysics Data System (ADS)

    Tonagi, M.; Takahama, T.; Matsumoto, Y.; Inoue, N.; Irikura, K.; Dalguer, L. A.

    2016-12-01

    IAEA Specific Safety Guide (SSG) 9 describes that probabilistic methods for evaluating fault displacement should be used if the deterministic methodology provides no sufficient basis to decide conclusively that the fault is not capable. In addition, the International Seismic Safety Centre has compiled an annex to SSG-9 on seismic hazard for nuclear facilities, showing the utility of deterministic and probabilistic evaluation methods for fault displacement. In Japan, important nuclear facilities are required to be established on ground where fault displacement will not arise when earthquakes occur in the future. Under these circumstances, and based on these requirements, we need to develop evaluation methods for fault displacement to enhance the safety of nuclear facilities. We are studying deterministic and probabilistic methods through tentative analyses using observed records, such as surface fault displacements and near-fault strong ground motions, of inland crustal earthquakes in which fault displacements arose. In this study, we introduce the concept of the evaluation methods for fault displacement. We then show parts of the tentative analysis results for the deterministic method as follows: (1) For the 1999 Chi-Chi earthquake, referring to the slip distribution estimated by waveform inversion, we construct a characterized source model (Miyake et al., 2003, BSSA) which can explain the observed near-fault broadband strong ground motions. (2) Referring to the characterized source model constructed in (1), we study an evaluation method for surface fault displacement using a hybrid method that combines the particle method and the distinct element method. Finally, we suggest a deterministic method to evaluate fault displacement based on the characterized source model. This research was part of the 2015 research project 'Development of evaluating method for fault displacement' by the Secretariat of the Nuclear Regulation Authority (S/NRA), Japan.

  8. Relative Roles of Deterministic and Stochastic Processes in Driving the Vertical Distribution of Bacterial Communities in a Permafrost Core from the Qinghai-Tibet Plateau, China.

    PubMed

    Hu, Weigang; Zhang, Qi; Tian, Tian; Li, Dingyao; Cheng, Gang; Mu, Jing; Wu, Qingbai; Niu, Fujun; Stegen, James C; An, Lizhe; Feng, Huyuan

    2015-01-01

    Understanding the processes that influence the structure of biotic communities is one of the major ecological topics, and both stochastic and deterministic processes are expected to be at work simultaneously in most communities. Here, we investigated the vertical distribution patterns of bacterial communities in a 10-m-long soil core taken within permafrost of the Qinghai-Tibet Plateau. To get a better understanding of the forces that govern these patterns, we examined the diversity and structure of bacterial communities, and the change in community composition along the vertical distance (spatial turnover), from both taxonomic and phylogenetic perspectives. Measures of taxonomic and phylogenetic beta diversity revealed that bacterial community composition changed continuously along the soil core and showed a vertical distance-decay relationship. Multiple stepwise regression analysis suggested that bacterial alpha diversity and phylogenetic structure were strongly correlated with soil conductivity and pH but weakly correlated with depth. There was evidence that deterministic and stochastic processes collectively drove the vertically structured pattern of the bacterial communities. Bacterial communities in five soil horizons (two from the active layer and three from permafrost) of the permafrost core were phylogenetically random, an indicator of stochastic processes. However, we found a stronger effect of deterministic processes related to soil pH, conductivity, and organic carbon content that were structuring the bacterial communities. We therefore conclude that the vertical distribution of bacterial communities was governed primarily by deterministic ecological selection, although stochastic processes were also at work. Furthermore, the strong impact of environmental conditions (for example, soil physicochemical parameters and seasonal freeze-thaw cycles) on these communities underlines the sensitivity of permafrost microorganisms to climate change and potentially subsequent permafrost thaw.

  9. Relative Roles of Deterministic and Stochastic Processes in Driving the Vertical Distribution of Bacterial Communities in a Permafrost Core from the Qinghai-Tibet Plateau, China

    PubMed Central

    Tian, Tian; Li, Dingyao; Cheng, Gang; Mu, Jing; Wu, Qingbai; Niu, Fujun; Stegen, James C.; An, Lizhe; Feng, Huyuan

    2015-01-01

    Understanding the processes that influence the structure of biotic communities is one of the major ecological topics, and both stochastic and deterministic processes are expected to be at work simultaneously in most communities. Here, we investigated the vertical distribution patterns of bacterial communities in a 10-m-long soil core taken within permafrost of the Qinghai-Tibet Plateau. To get a better understanding of the forces that govern these patterns, we examined the diversity and structure of bacterial communities, and the change in community composition along the vertical distance (spatial turnover), from both taxonomic and phylogenetic perspectives. Measures of taxonomic and phylogenetic beta diversity revealed that bacterial community composition changed continuously along the soil core and showed a vertical distance-decay relationship. Multiple stepwise regression analysis suggested that bacterial alpha diversity and phylogenetic structure were strongly correlated with soil conductivity and pH but weakly correlated with depth. There was evidence that deterministic and stochastic processes collectively drove the vertically structured pattern of the bacterial communities. Bacterial communities in five soil horizons (two from the active layer and three from permafrost) of the permafrost core were phylogenetically random, an indicator of stochastic processes. However, we found a stronger effect of deterministic processes related to soil pH, conductivity, and organic carbon content that were structuring the bacterial communities. We therefore conclude that the vertical distribution of bacterial communities was governed primarily by deterministic ecological selection, although stochastic processes were also at work. Furthermore, the strong impact of environmental conditions (for example, soil physicochemical parameters and seasonal freeze-thaw cycles) on these communities underlines the sensitivity of permafrost microorganisms to climate change and potentially subsequent permafrost thaw. PMID:26699734

  10. [Study on Abnormal Topological Properties of Structural Brain Networks of Patients with Depression Comorbid with Anxiety].

    PubMed

    Wu, Xiuyong; Wu, Xiaoming; Peng, Hongjun; Ning, Yuping; Wu, Kai

    2016-06-01

    This paper aims to analyze the topological properties of structural brain networks in depressive patients with and without anxiety, and to explore the neuropathological mechanisms of depression comorbid with anxiety. Diffusion tensor imaging and deterministic tractography were applied to map the white matter structural networks. We collected 20 depressive patients with anxiety (DPA), 18 depressive patients without anxiety (DP), and 28 normal controls (NC) as comparative groups. The global and nodal properties of the structural brain networks in the three groups were analyzed with graph-theoretical methods. The results showed that: (1) the structural brain networks in all three groups showed small-world properties and highly connected global hubs predominantly in association cortices; (2) the DP group showed lower local efficiency and global efficiency compared to the NC group, whereas the DPA group showed higher local efficiency and global efficiency compared to the NC group; (3) significant differences in network properties (clustering coefficient, characteristic path length, local efficiency, global efficiency) were found between the DPA and DP groups; (4) the DP group showed significant changes of nodal efficiency in brain areas primarily in the temporal lobe and bilateral frontal gyri, compared to the DPA and NC groups. The analysis indicated that the DP and DPA groups showed altered nodal properties of the structural brain networks compared to the NC group; moreover, the two diseased groups showed opposite trends in the network properties. The results of this study may provide a new imaging index for the clinical diagnosis of depression comorbid with anxiety.
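
    The group comparison rests on standard graph metrics; for orientation, a sketch of how they are computed (a toy small-world graph stands in for a DTI-derived structural network, and the 90-node size is an assumption):

      import networkx as nx

      G = nx.connected_watts_strogatz_graph(n=90, k=6, p=0.1, seed=1)

      metrics = {
          "clustering coefficient":      nx.average_clustering(G),
          "characteristic path length":  nx.average_shortest_path_length(G),
          "global efficiency":           nx.global_efficiency(G),
          "local efficiency":            nx.local_efficiency(G),
      }
      for name, value in metrics.items():
          print(f"{name:27s} {value:.3f}")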

  11. Forecasting of hourly load by pattern recognition in a small area power system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dehdashti-Shahrokh, A.

    1982-01-01

    An intuitive, logical, simple, and efficient method of forecasting hourly load in a small-area power system is presented. A pattern recognition approach is used in developing the forecasting model. Pattern recognition techniques are powerful tools in the field of artificial intelligence (cybernetics) and simulate the way the human brain makes decisions. Pattern recognition is generally used in the analysis of processes where the total physical nature behind the process variation is unknown but specific kinds of measurements explain its behavior. In this research, basic multivariate analyses, in conjunction with pattern recognition techniques, are used to develop a linear deterministic model to forecast hourly load. The method assumes that load patterns in the same geographical area are direct results of climatological changes (weather-sensitive load) and have occurred in the past as a result of similar climatic conditions. The algorithm described here searches a seasonal library of load and weather data for the best possible pattern when forecasting hourly load. To accommodate the unpredictability of weather and the resulting load, the basic twenty-four-hour load pattern was divided into eight three-hour intervals. This division makes the model adaptive to sudden climatic changes. The proposed method offers flexible lead times of one to twenty-four hours. Testing on actual data indicated that the proposed method is computationally efficient and highly adaptive, with acceptable data storage requirements and accuracy comparable to many other existing methods.
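
    The core of the method is a nearest-pattern lookup: match today's weather against a seasonal library and reuse the load of the best-matching day. A sketch with synthetic data (the feature choice and library layout are assumptions):

      import numpy as np

      rng = np.random.default_rng(0)
      hist_weather = rng.normal(size=(365, 4))                      # e.g. temp, humidity, wind, cloud cover
      hist_load = rng.normal(loc=100.0, scale=10.0, size=(365, 3))  # one three-hour load block per day

      def forecast_block(today_weather):
          """Return the load block of the historical day with the closest weather."""
          dist = np.linalg.norm(hist_weather - today_weather, axis=1)
          return hist_load[np.argmin(dist)]

      print(forecast_block(rng.normal(size=4)))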

  12. 15 CFR 970.601 - Logical mining unit.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... ENVIRONMENTAL DATA SERVICE DEEP SEABED MINING REGULATIONS FOR EXPLORATION LICENSES Resource Development Concepts § 970.601 Logical mining unit. (a) In the case of an exploration license, a logical mining unit is an... 15 Commerce and Foreign Trade 3 2014-01-01 2014-01-01 false Logical mining unit. 970.601 Section...

  13. 15 CFR 970.601 - Logical mining unit.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... ENVIRONMENTAL DATA SERVICE DEEP SEABED MINING REGULATIONS FOR EXPLORATION LICENSES Resource Development Concepts § 970.601 Logical mining unit. (a) In the case of an exploration license, a logical mining unit is an... 15 Commerce and Foreign Trade 3 2013-01-01 2013-01-01 false Logical mining unit. 970.601 Section...

  14. 15 CFR 970.601 - Logical mining unit.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... ENVIRONMENTAL DATA SERVICE DEEP SEABED MINING REGULATIONS FOR EXPLORATION LICENSES Resource Development Concepts § 970.601 Logical mining unit. (a) In the case of an exploration license, a logical mining unit is an... 15 Commerce and Foreign Trade 3 2012-01-01 2012-01-01 false Logical mining unit. 970.601 Section...

  15. Logic system aids in evaluation of project readiness

    NASA Technical Reports Server (NTRS)

    Maris, S. J.; Obrien, T. J.

    1966-01-01

    Measurement Operational Readiness Requirements /MORR/ assignment logic is used for determining the readiness of a complex project to go forward as planned. The system uses a logic network which assigns qualities to all important criteria in a project and establishes a logical sequence of measurements to determine the prevailing conditions.

  16. A New Approach to Teaching Mathematics

    DTIC Science & Technology

    1994-02-01

    We propose a new approach to teaching discrete math: First, teach logic as a powerful and versatile tool for discovering and communicating truths...using logic in other areas of study. Our experiences in teaching discrete math at Cornell show that such success is possible. Propositional logic, Predicate logic, Discrete mathematics.

  17. Multiple and configurable optical logic systems based on layered double hydroxides and chromophore assemblies.

    PubMed

    Shi, Wenying; Fu, Yi; Li, Zhixiong; Wei, Min

    2015-01-14

    Multiple and configurable fluorescence logic gates were fabricated via self-assembly of layered double hydroxides and various chromophores. These logic gates are operated by observing different emissions at the same excitation wavelength, achieving YES, NOT, AND, INH and INHIBIT logic operations.

  18. Institutional Logics, Indie Software Developers and Platform Governance

    ERIC Educational Resources Information Center

    Qiu, Yixin

    2013-01-01

    This two-essay dissertation aims to study institutional logics in the context of Apple's independent third-party software developers. In essay 1, I investigate the embedded agency aspect of the institutional logics theory. It builds on the premise that logics constrain preferences, interests and behaviors of individuals and organizations, thereby…

  19. 15 CFR 970.601 - Logical mining unit.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... ENVIRONMENTAL DATA SERVICE DEEP SEABED MINING REGULATIONS FOR EXPLORATION LICENSES Resource Development Concepts § 970.601 Logical mining unit. (a) In the case of an exploration license, a logical mining unit is an... 15 Commerce and Foreign Trade 3 2010-01-01 2010-01-01 false Logical mining unit. 970.601 Section...

  20. 15 CFR 970.601 - Logical mining unit.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... ENVIRONMENTAL DATA SERVICE DEEP SEABED MINING REGULATIONS FOR EXPLORATION LICENSES Resource Development Concepts § 970.601 Logical mining unit. (a) In the case of an exploration license, a logical mining unit is an... 15 Commerce and Foreign Trade 3 2011-01-01 2011-01-01 false Logical mining unit. 970.601 Section...
