Sample records for parallel logic programming

  1. Ordering Traces Logically to Identify Lateness in Message Passing Programs

    DOE PAGES

    Isaacs, Katherine E.; Gamblin, Todd; Bhatele, Abhinav; ...

    2015-03-30

    Event traces are valuable for understanding the behavior of parallel programs. However, automatically analyzing a large parallel trace is difficult, especially without a specific objective. We aid this endeavor by extracting a trace's logical structure, an ordering of trace events derived from happened-before relationships, while taking into account developer intent. Using this structure, we can calculate an operation's delay relative to its peers on other processes. The logical structure also serves as a platform for comparing and clustering processes as well as highlighting communication patterns in a trace visualization. We present an algorithm for determining this idealized logical structure from traces of message passing programs, and we develop metrics to quantify delays and differences among processes. We implement our techniques in Ravel, a parallel trace visualization tool that displays both logical and physical timelines. Rather than showing the duration of each operation, we display where delays begin and end, and how they propagate. Finally, we apply our approach to the traces of several message passing applications, demonstrating the accuracy of our extracted structure and its utility in analyzing these codes.
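
    A minimal sketch (not Ravel's actual algorithm) of the two ideas in this abstract: logical steps assigned by longest happened-before chains, and lateness measured against peers at the same step. The trace below is invented for illustration.

    ```python
    # Minimal sketch: assign logical steps to trace events from happened-before
    # edges, then measure per-step lateness (illustration only).
    from collections import defaultdict

    # Hypothetical trace: event -> (process, physical_time); program order within
    # a process plus send->recv edges define happened-before.
    events = {
        "a0": ("P0", 0.0), "a1": ("P0", 1.0), "a2": ("P0", 2.0),
        "b0": ("P1", 0.0), "b1": ("P1", 2.5), "b2": ("P1", 3.0),
    }
    edges = [("a0", "a1"), ("a1", "a2"),          # program order on P0
             ("b0", "b1"), ("b1", "b2"),          # program order on P1
             ("a1", "b1")]                        # message send -> receive

    preds = defaultdict(list)
    for src, dst in edges:
        preds[dst].append(src)

    def logical_step(e, memo={}):
        """Longest happened-before chain ending at e (the event's logical step)."""
        if e not in memo:
            memo[e] = 1 + max((logical_step(p) for p in preds[e]), default=0)
        return memo[e]

    steps = {e: logical_step(e) for e in events}

    # Lateness: how far an event's physical time lags the earliest peer event
    # at the same logical step.
    by_step = defaultdict(list)
    for e, s in steps.items():
        by_step[s].append(e)
    for s, group in sorted(by_step.items()):
        earliest = min(events[e][1] for e in group)
        for e in group:
            proc, t = events[e]
            print(f"step {s}: {e} on {proc} lateness = {t - earliest:.1f}")
    ```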

  2. Putting time into proof outlines

    NASA Technical Reports Server (NTRS)

    Schneider, Fred B.; Bloom, Bard; Marzullo, Keith

    1991-01-01

    A logic for reasoning about timing of concurrent programs is presented. The logic is based on proof outlines and can handle maximal parallelism as well as resource-constrained execution environments. The correctness proof for a mutual exclusion protocol that uses execution timings in a subtle way illustrates the logic in action.

  3. MELD: A Logical Approach to Distributed and Parallel Programming

    DTIC Science & Technology

    2012-03-01

    Indexed excerpts from the report documentation page and references (no abstract text recovered): authors Seth Copen Goldstein and Flavio Cruz; program element number 61101E; project number BI20. Cited reference: P. López, F. Pfenning, J. Polakow, and K. Watkins, "Monadic concurrent linear logic programming."

  4. Implementation of Multivariable Logic Functions in Parallel by Electrically Addressing a Molecule of Three Dopants in Silicon.

    PubMed

    Fresch, Barbara; Bocquel, Juanita; Hiluf, Dawit; Rogge, Sven; Levine, Raphael D; Remacle, Françoise

    2017-07-05

    To realize low-power, compact logic circuits, one can explore parallel operation on single nanoscale devices. An added incentive is to use multivalued (as distinct from Boolean) logic. Here, we theoretically demonstrate that the computation of all the possible outputs of a multivariate, multivalued logic function can be implemented in parallel by electrical addressing of a molecule made up of three interacting dopant atoms embedded in Si. The electronic states of the dopant molecule are addressed by pulsing a gate voltage. By simulating the time evolution of the nonstationary electronic density built by the gate voltage, we show that one can implement a molecular decision tree that provides in parallel all the outputs for all the inputs of the multivariate, multivalued logic function. The outputs are encoded in the populations and in the bond orders of the dopant molecule, which can be measured using an STM tip. We show that the implementation of the molecular logic tree is equivalent to a spectral function decomposition. The function that is evaluated can be field-programmed by changing the time profile of the pulsed gate voltage.

  5. Genetic Parallel Programming: design and implementation.

    PubMed

    Cheang, Sin Man; Leung, Kwong Sak; Lee, Kin Hong

    2006-01-01

    This paper presents a novel Genetic Parallel Programming (GPP) paradigm for evolving parallel programs running on a Multi-Arithmetic-Logic-Unit (Multi-ALU) Processor (MAP). The MAP is a Multiple Instruction-streams, Multiple Data-streams (MIMD), general-purpose register machine that can be implemented on modern Very Large-Scale Integrated Circuits (VLSIs) in order to evaluate genetic programs at high speed. For human programmers, writing parallel programs is more difficult than writing sequential programs. However, experimental results show that GPP evolves parallel programs with less computational effort than that of their sequential counterparts. It creates a new approach to evolving a feasible problem solution in parallel program form and then serializes it into a sequential program if required. The effectiveness and efficiency of GPP are investigated using a suite of 14 well-studied benchmark problems. Experimental results show that GPP speeds up evolution substantially.

  6. A constraint logic programming approach to associate 1D and 3D structural components for large protein complexes.

    PubMed

    Dal Palù, Alessandro; Pontelli, Enrico; He, Jing; Lu, Yonggang

    2007-01-01

    The paper describes a novel framework, constructed using Constraint Logic Programming (CLP) and parallelism, to determine the association between parts of the primary sequence of a protein and alpha-helices extracted from 3D low-resolution descriptions of large protein complexes. The association is determined by extracting constraints from the 3D information, regarding length, relative position and connectivity of helices, and solving these constraints with the guidance of a secondary structure prediction algorithm. Parallelism is employed to enhance performance on large proteins. The framework provides a fast, inexpensive alternative to determine the exact tertiary structure of unknown proteins.

  7. Putting time into proof outlines

    NASA Technical Reports Server (NTRS)

    Schneider, Fred B.; Bloom, Bard; Marzullo, Keith

    1993-01-01

    A logic for reasoning about timing properties of concurrent programs is presented. The logic is based on Hoare-style proof outlines and can handle maximal parallelism as well as certain resource-constrained execution environments. The correctness proof for a mutual exclusion protocol that uses execution timings in a subtle way illustrates the logic in action. A soundness proof using structural operational semantics is outlined in the appendix.

  8. Program For Parallel Discrete-Event Simulation

    NASA Technical Reports Server (NTRS)

    Beckman, Brian C.; Blume, Leo R.; Geiselman, John S.; Presley, Matthew T.; Wedel, John J., Jr.; Bellenot, Steven F.; Diloreto, Michael; Hontalas, Philip J.; Reiher, Peter L.; Weiland, Frederick P.

    1991-01-01

    User does not have to add any special logic to aid in synchronization. Time Warp Operating System (TWOS) computer program is special-purpose operating system designed to support parallel discrete-event simulation. Complete implementation of Time Warp mechanism. Supports only simulations and other computations designed for virtual time. Time Warp Simulator (TWSIM) subdirectory contains sequential simulation engine interface-compatible with TWOS. TWOS and TWSIM written in, and support simulations in, C programming language.

  9. Parallel Logic Programming and Parallel Systems Software and Hardware

    DTIC Science & Technology

    1989-07-29

    Indexed excerpts from the report (abstract only partially recovered): "Tools were provided for software development using artificial intelligence techniques. AI software for massively parallel architectures was started. 1. Introduction: We describe research conducted…" Cited reference: [Rous75] Roussel, P., "PROLOG: Manuel de Référence et d'Utilisation," Groupe d'Intelligence Artificielle, Université d…; conference, Dallas, TX, January 1985.

  10. Functional and space programming.

    PubMed

    Hayward, C

    1988-01-01

    In this article, the author expands the earlier stated case for functional and space programming based on objective evidence of user needs. It provides an in-depth examination of the logic and processes of programming as a continuum which precedes, then parallels, architectural design.

  11. Implementation and performance of parallel Prolog interpreter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wei, S.; Kale, L.V.; Balkrishna, R.

    1988-01-01

    In this paper, the authors discuss the implementation of a parallel Prolog interpreter on different parallel machines. The implementation is based on the REDUCE-OR process model, which exploits both AND and OR parallelism in logic programs. It is machine independent as it runs on top of the chare kernel, a machine-independent parallel programming system. The authors also give the performance of the interpreter running a diverse set of benchmark programs on parallel machines, including shared memory systems (an Alliant FX/8, a Sequent, and a MultiMax) and a non-shared memory system (an Intel iPSC/32 hypercube), in addition to its performance on a multiprocessor simulation system.
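
    The OR parallelism mentioned here can be illustrated with a toy propositional interpreter. This is only a sketch of the idea, not the REDUCE-OR model or the chare-kernel system, and the example program is hypothetical.

    ```python
    # Toy illustration of OR-parallelism in a logic program: alternative clauses
    # for the top-level goal are explored concurrently; subgoals within a clause
    # body are solved sequentially.
    from concurrent.futures import ThreadPoolExecutor

    # Hypothetical propositional program: goal -> list of clause bodies.
    program = {
        "path_ab": [["edge_ab"], ["edge_ac", "path_cb"]],
        "path_cb": [["edge_cb"]],
        "edge_ac": [[]],   # facts have an empty body
        "edge_cb": [[]],
    }

    def solve(goal):
        """Sequential SLD-style solver for one goal."""
        for body in program.get(goal, []):
            if all(solve(sub) for sub in body):
                return True
        return False

    def solve_or_parallel(goal):
        """Explore the alternative clauses of the top-level goal in parallel."""
        bodies = program.get(goal, [])
        with ThreadPoolExecutor(max_workers=len(bodies) or 1) as pool:
            results = pool.map(lambda body: all(solve(sub) for sub in body), bodies)
            return any(results)

    print(solve_or_parallel("path_ab"))   # True, via the second clause
    ```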

  12. Verification and Planning Based on Coinductive Logic Programming

    NASA Technical Reports Server (NTRS)

    Bansal, Ajay; Min, Richard; Simon, Luke; Mallya, Ajay; Gupta, Gopal

    2008-01-01

    Coinduction is a powerful technique for reasoning about unfounded sets, unbounded structures, infinite automata, and interactive computations [6]. Whereas induction corresponds to least fixed point semantics, coinduction corresponds to greatest fixed point semantics. Recently coinduction has been incorporated into logic programming and an elegant operational semantics developed for it [11, 12]. This operational semantics is the greatest fixed point counterpart of SLD resolution (SLD resolution imparts operational semantics to least fixed point based computations) and is termed co-SLD resolution. In co-SLD resolution, a predicate goal p(t) succeeds if it unifies with one of its ancestor calls. In addition, rational infinite terms are allowed as arguments of predicates. Infinite terms are represented as solutions to unification equations, and the occurs check is omitted during the unification process. Coinductive Logic Programming (Co-LP) and co-SLD resolution can be used to elegantly perform model checking and planning. A combined SLD and co-SLD resolution based LP system forms the common basis for planning, scheduling, verification, model checking, and constraint solving [9, 4]. This is achieved by amalgamating SLD resolution, co-SLD resolution, and constraint logic programming [13] in a single logic programming system. Given that parallelism in logic programs can be implicitly exploited [8], complex, compute-intensive applications (planning, scheduling, model checking, etc.) can be executed in parallel on multi-core machines. Parallel execution can result in speed-ups as well as in larger instances of the problems being solved. In the remainder we elaborate on (i) how planning can be elegantly and efficiently performed under real-time constraints, (ii) how real-time systems can be elegantly and efficiently model-checked, and (iii) how hybrid systems can be verified in a combined system with both co-SLD and SLD resolution. Implementations of co-SLD resolution as well as preliminary implementations of the planning and verification applications have been developed [4]. Co-LP and model checking: the vast majority of properties to be verified can be classified into safety properties and liveness properties. It is well known within model checking that safety properties can be verified by reachability analysis, i.e., if a counter-example to the property exists, it can be finitely determined by enumerating all the reachable states of the Kripke structure.
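
    The co-SLD success rule described above (a goal succeeds if it unifies with one of its ancestor calls) can be sketched for the propositional case; this is an illustration under simplifying assumptions, not the cited implementation.

    ```python
    # Propositional sketch of co-SLD resolution: a goal succeeds if it equals one
    # of its ancestor calls, so definitions that would loop forever under SLD can
    # succeed coinductively.
    program = {
        "stream_ok": [["bit_ok", "stream_ok"]],   # infinite behaviour: s :- b, s.
        "bit_ok": [[]],                           # fact
    }

    def co_sld(goal, ancestors=()):
        if goal in ancestors:          # coinductive hypothesis: ancestor call
            return True
        for body in program.get(goal, []):
            if all(co_sld(sub, ancestors + (goal,)) for sub in body):
                return True
        return False

    print(co_sld("stream_ok"))   # True under co-SLD; plain SLD would not terminate
    ```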

  13. A Programming Environment for Parallel Vision Algorithms

    DTIC Science & Technology

    1990-04-11

    Indexed excerpts from the report (abstract not recovered): "…industrial arm on the market, while the unique head was designed by Rochester's Computer Science and Mechanical Engineering Departments. 4.1 Introduction…" Cited reference: "Constraining-Unification and the Programming Language Unicorn," in Logic Programming: Functions, Relations, and Equations, DeGroot and Lindstrom (eds.).

  14. Japanese project aims at supercomputer that executes 10 gflops

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burskey, D.

    1984-05-03

    Dubbed supercom by its multicompany design team, the decade-long project's goal is an engineering supercomputer that can execute 10 billion floating-point operations/s, about 20 times faster than today's supercomputers. The project, guided by Japan's Ministry of International Trade and Industry (MITI) and the Agency of Industrial Science and Technology, encompasses three parallel research programs, all aimed at some angle of the supercomputer. One program should lead to superfast logic and memory circuits, another to a system architecture that will afford the best performance, and the last to the software that will ultimately control the computer. The work on logic and memory chips is based on GaAs circuits, Josephson junction devices, and high-electron-mobility transistor structures. The architecture will involve parallel processing.

  15. Parallel Adaptive Mesh Refinement Library

    NASA Technical Reports Server (NTRS)

    Mac-Neice, Peter; Olson, Kevin

    2005-01-01

    Parallel Adaptive Mesh Refinement Library (PARAMESH) is a package of Fortran 90 subroutines designed to provide a computer programmer with an easy route to extension of (1) a previously written serial code that uses a logically Cartesian structured mesh into (2) a parallel code with adaptive mesh refinement (AMR). Alternatively, in its simplest use, and with minimal effort, PARAMESH can operate as a domain-decomposition tool for users who want to parallelize their serial codes but who do not wish to utilize adaptivity. The package builds a hierarchy of sub-grids to cover the computational domain of a given application program, with spatial resolution varying to satisfy the demands of the application. The sub-grid blocks form the nodes of a tree data structure (a quad-tree in two or an oct-tree in three dimensions). Each grid block has a logically Cartesian mesh. The package supports one-, two- and three-dimensional models.
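
    A minimal sketch of the block/quad-tree idea behind such a package (not the PARAMESH API or its Fortran 90 interface); the refinement criterion below is invented.

    ```python
    # Minimal quad-tree AMR sketch: each block covers a rectangle of the domain,
    # and a block is split into four children wherever a refinement criterion is met.
    class Block:
        def __init__(self, x0, y0, size, level):
            self.x0, self.y0, self.size, self.level = x0, y0, size, level
            self.children = []

        def refine(self, needs_refinement, max_level):
            if self.level < max_level and needs_refinement(self):
                half = self.size / 2
                self.children = [Block(self.x0 + dx * half, self.y0 + dy * half,
                                       half, self.level + 1)
                                 for dx in (0, 1) for dy in (0, 1)]
                for child in self.children:
                    child.refine(needs_refinement, max_level)

        def leaves(self):
            if not self.children:
                return [self]
            return [leaf for c in self.children for leaf in c.leaves()]

    # Hypothetical criterion: refine blocks that contain the point (0.7, 0.3).
    def near_feature(block):
        return (block.x0 <= 0.7 < block.x0 + block.size and
                block.y0 <= 0.3 < block.y0 + block.size)

    root = Block(0.0, 0.0, 1.0, level=0)
    root.refine(near_feature, max_level=3)
    print(f"{len(root.leaves())} leaf blocks")  # finer blocks cluster near the feature
    ```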

  16. Parallel Logic Programming Architecture

    DTIC Science & Technology

    1990-04-01

    Indexed excerpts from the report (abstract not recovered): "3.1. A Static Allocation Scheme (SAS). Methods that have been used for decomposing distributed problems in artificial intelligence… multiple agents, knowledge organization and allocation, and cooperative parallel execution. These difficulties are common to distributed artificial intelligence… First, intelligent backtracking requires much more bookkeeping and is therefore more costly during consult time and during…"

  17. Architecture and data processing alternatives for the TSE computer. Volume 3: Execution of a parallel counting algorithm using array logic (Tse) devices

    NASA Technical Reports Server (NTRS)

    Metcalfe, A. G.; Bodenheimer, R. E.

    1976-01-01

    A parallel algorithm for counting the number of logic-1 elements in a binary array or image, developed during preliminary investigation of the Tse concept, is described. The counting algorithm is implemented using a basic combinational structure. Modifications which improve the efficiency of the basic structure are also presented. A programmable Tse computer structure is proposed, along with a hardware control unit, Tse instruction set, and software program for execution of the counting algorithm. Finally, a comparison is made between the different structures in terms of their more important characteristics.
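
    The counting task itself can be sketched in software as a log-depth parallel reduction, which is the combinational idea behind such counters; this is an illustration, not the Tse hardware design.

    ```python
    # Sketch of counting logic-1 elements with a log-depth parallel reduction:
    # partial counts are combined pairwise in successive rounds.
    from concurrent.futures import ThreadPoolExecutor
    import random

    image = [random.randint(0, 1) for _ in range(1024)]   # binary array

    # Round 0: each "processing element" counts one chunk.
    chunk = 64
    partials = [sum(image[i:i + chunk]) for i in range(0, len(image), chunk)]

    # Later rounds: combine pairs of partial counts until one total remains.
    with ThreadPoolExecutor() as pool:
        while len(partials) > 1:
            pairs = [(partials[i], partials[i + 1] if i + 1 < len(partials) else 0)
                     for i in range(0, len(partials), 2)]
            partials = list(pool.map(lambda ab: ab[0] + ab[1], pairs))

    print(partials[0] == sum(image))   # True
    ```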

  18. Role of PROLOG (Programming and Logic) in natural-language processing. Report for September-December 1987

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McHale, M.L.

    The field of artificial intelligence strives to produce computer programs that exhibit intelligent behavior. One of the areas of interest is the processing of natural language. This report discusses the role of the computer language PROLOG in Natural Language Processing (NLP), from both theoretical and pragmatic viewpoints. The reasons for using PROLOG for NLP are numerous. First, linguists can write natural-language grammars almost directly as PROLOG programs; this allows fast prototyping of NLP systems and facilitates analysis of NLP theories. Second, semantic representations of natural-language texts that use logic formalisms are readily produced in PROLOG because of PROLOG's logical foundations. Third, PROLOG's built-in inferencing mechanisms are often sufficient for inferences on the logical forms produced by NLP systems. Fourth, the logical, declarative nature of PROLOG may make it the language of choice for parallel computing systems. Finally, the fact that PROLOG has a de facto standard (Edinburgh) makes the porting of code from one computer system to another virtually trouble free. Perhaps the strongest tie one could make between NLP and PROLOG was stated by John Stuart Mill in his inaugural address at St. Andrews: "The structure of every sentence is a lesson in logic."

  19. Proceedings of the workshop on Compilation of (Symbolic) Languages for Parallel Computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Foster, I.; Tick, E.

    1991-11-01

    This report comprises the abstracts and papers for the talks presented at the Workshop on Compilation of (Symbolic) Languages for Parallel Computers, held October 31--November 1, 1991, in San Diego. These unrefereed contributions were provided by the participants for the purpose of this workshop; many of them will be published elsewhere in peer-reviewed conferences and publications. Our goal in planning this workshop was to bring together researchers from different disciplines with common problems in compilation. In particular, we wished to encourage interaction between researchers working in compilation of symbolic languages and those working on compilation of conventional, imperative languages. The fundamental problems facing researchers interested in compilation of logic, functional, and procedural programming languages for parallel computers are essentially the same. However, differences in the basic programming paradigms have led to different communities emphasizing different species of the parallel compilation problem. For example, parallel logic and functional languages provide dataflow-like formalisms in which control dependencies are unimportant. Hence, a major focus of research in compilation has been on techniques that try to infer when sequential control flow can safely be imposed. Granularity analysis for scheduling is a related problem. The single-assignment property leads to a need for analysis of memory use in order to detect opportunities for reuse. Much of the work in each of these areas relies on the use of abstract interpretation techniques.

  20. Abstract quantum computing machines and quantum computational logics

    NASA Astrophysics Data System (ADS)

    Chiara, Maria Luisa Dalla; Giuntini, Roberto; Sergioli, Giuseppe; Leporini, Roberto

    2016-06-01

    Classical and quantum parallelism are deeply different, although it is sometimes claimed that quantum Turing machines are nothing but special examples of classical probabilistic machines. We introduce the concepts of deterministic state machine, classical probabilistic state machine and quantum state machine. On this basis, we discuss the question: To what extent can quantum state machines be simulated by classical probabilistic state machines? Each state machine is devoted to a single task determined by its program. Real computers, however, behave differently, being able to solve different kinds of problems. This capacity can be modeled, in the quantum case, by the mathematical notion of abstract quantum computing machine, whose different programs determine different quantum state machines. The computations of abstract quantum computing machines can be linguistically described by the formulas of a particular form of quantum logic, termed quantum computational logic.

  1. What is "the patient perspective" in patient engagement programs? Implicit logics and parallels to feminist theories.

    PubMed

    Rowland, Paula; McMillan, Sarah; McGillicuddy, Patti; Richards, Joy

    2017-01-01

    Public and patient involvement (PPI) in health care may refer to many different processes, ranging from participating in decision-making about one's own care to participating in health services research, health policy development, or organizational reforms. Across these many forms of public and patient involvement, the conceptual and theoretical underpinnings remain poorly articulated. Instead, most public and patient involvement programs rely on policy initiatives as their conceptual frameworks. This lack of conceptual clarity participates in dilemmas of program design, implementation, and evaluation. This study contributes to the development of theoretical understandings of public and patient involvement. In particular, we focus on the deployment of patient engagement programs within health service organizations. To develop a deeper understanding of the conceptual underpinnings of these programs, we examined the concept of "the patient perspective" as used by patient engagement practitioners and participants. Specifically, we focused on the way this phrase was used in the singular: "the" patient perspective or "the" patient voice. From qualitative analysis of interviews with 20 patient advisers and 6 staff members within a large urban health network in Canada, we argue that "the patient perspective" is referred to as a particular kind of situated knowledge, specifically an embodied knowledge of vulnerability. We draw parallels between this logic of patient perspective and the logic of early feminist theory, including the concepts of standpoint theory and strong objectivity. We suggest that champions of patient engagement may learn much from the way feminist theorists have constructed their arguments and addressed critique.

  2. Parallelizing serial code for a distributed processing environment with an application to high frequency electromagnetic scattering

    NASA Astrophysics Data System (ADS)

    Work, Paul R.

    1991-12-01

    This thesis investigates the parallelization of existing serial programs in computational electromagnetics for use in a parallel environment. Existing algorithms for calculating the radar cross section of an object are covered, and a ray-tracing code is chosen for implementation on a parallel machine. Current parallel architectures are introduced and a suitable parallel machine is selected for the implementation of the chosen ray-tracing algorithm. The standard techniques for the parallelization of serial codes are discussed, including load balancing and decomposition considerations, and appropriate methods for the parallelization effort are selected. A load balancing algorithm is modified to increase the efficiency of the application, and a high level design of the structure of the serial program is presented. A detailed design of the modifications for the parallel implementation is also included, with both the high level and the detailed design specified in a high level design language called UNITY. The correctness of the design is proven using UNITY and standard logic operations. The theoretical and empirical results show that it is possible to achieve an efficient parallel application for a serial computational electromagnetic program where the characteristics of the algorithm and the target architecture critically influence the development of such an implementation.

  3. Malleable architecture generator for FPGA computing

    NASA Astrophysics Data System (ADS)

    Gokhale, Maya; Kaba, James; Marks, Aaron; Kim, Jang

    1996-10-01

    The malleable architecture generator (MARGE) is a tool set that translates high-level parallel C to configuration bit streams for field-programmable logic based computing systems. MARGE creates an application-specific instruction set and generates the custom hardware components required to perform exactly those computations specified by the C program. In contrast to traditional fixed-instruction processors, MARGE's dynamic instruction set creation provides for efficient use of hardware resources. MARGE processes intermediate code in which each operation is annotated by the bit lengths of the operands. Each basic block (sequence of straight line code) is mapped into a single custom instruction which contains all the operations and logic inherent in the block. A synthesis phase maps the operations comprising the instructions into register transfer level structural components and control logic which have been optimized to exploit functional parallelism and function unit reuse. As a final stage, commercial technology-specific tools are used to generate configuration bit streams for the desired target hardware. Technology- specific pre-placed, pre-routed macro blocks are utilized to implement as much of the hardware as possible. MARGE currently supports the Xilinx-based Splash-2 reconfigurable accelerator and National Semiconductor's CLAy-based parallel accelerator, MAPA. The MARGE approach has been demonstrated on systolic applications such as DNA sequence comparison.

  4. Nonvolatile “AND,” “OR,” and “NOT” Boolean logic gates based on phase-change memory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Y.; Zhong, Y. P.; Deng, Y. F.

    2013-12-21

    Electronic devices or circuits that can implement both logic and memory functions are regarded as the building blocks for future massively parallel computing beyond the von Neumann architecture. Here we propose phase-change memory (PCM)-based nonvolatile logic gates capable of AND, OR, and NOT Boolean logic operations, verified in SPICE simulations and circuit experiments. The logic operations are performed in parallel, and the results can be stored directly in the states of the logic gates, facilitating the combination of computing and memory in the same circuit. These results are encouraging for ultralow-power and high-speed nonvolatile logic circuit design based on novel memory devices.

  5. Local rollback for fault-tolerance in parallel computing systems

    DOEpatents

    Blumrich, Matthias A [Yorktown Heights, NY; Chen, Dong [Yorktown Heights, NY; Gara, Alan [Yorktown Heights, NY; Giampapa, Mark E [Yorktown Heights, NY; Heidelberger, Philip [Yorktown Heights, NY; Ohmacht, Martin [Yorktown Heights, NY; Steinmacher-Burow, Burkhard [Boeblingen, DE; Sugavanam, Krishnan [Yorktown Heights, NY

    2012-01-24

    A control logic device performs a local rollback in a parallel supercomputing system. The supercomputing system includes at least one cache memory device. The control logic device determines a local rollback interval. The control logic device runs at least one instruction in the local rollback interval. The control logic device evaluates whether an unrecoverable condition occurs while running the at least one instruction during the local rollback interval. The control logic device checks whether an error occurs during the local rollback. The control logic device restarts the local rollback interval if the error occurs and the unrecoverable condition does not occur during the local rollback interval.

  6. Studies in optical parallel processing. [All-optical and electro-optic approaches]

    NASA Technical Reports Server (NTRS)

    Lee, S. H.

    1978-01-01

    Threshold and A/D devices for converting a gray scale image into a binary one were investigated for all-optical and opto-electronic approaches to parallel processing. Integrated optical logic circuits (IOC) and optical parallel logic devices (OPAL) were studied as an approach to processing optical binary signals. In the IOC logic scheme, a single row of an optical image is coupled into the IOC substrate at a time through an array of optical fibers. Parallel processing is carried out on each image element of these rows in the IOC substrate, and the resulting output exits via a second array of optical fibers. The OPAL system for parallel processing, which uses a Fabry-Perot interferometer for image thresholding and analog-to-digital conversion, achieves a higher degree of parallel processing than is possible with IOC.

  7. Broadcasting a message in a parallel computer

    DOEpatents

    Berg, Jeremy E [Rochester, MN; Faraj, Ahmad A [Rochester, MN

    2011-08-02

    Methods, systems, and products are disclosed for broadcasting a message in a parallel computer. The parallel computer includes a plurality of compute nodes connected together using a data communications network. The data communications network is optimized for point-to-point data communications and is characterized by at least two dimensions. The compute nodes are organized into at least one operational group of compute nodes for collective parallel operations of the parallel computer. One compute node of the operational group is assigned to be a logical root. Broadcasting a message in a parallel computer includes: establishing a Hamiltonian path along all of the compute nodes in at least one plane of the data communications network and in the operational group; and broadcasting, by the logical root to the remaining compute nodes, the logical root's message along the established Hamiltonian path.
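
    Under the simplifying assumption that the relevant plane of nodes is a full 2-D grid, a serpentine ordering gives one Hamiltonian path, and the broadcast can be sketched as forwarding along it; this is an illustration, not the patented implementation.

    ```python
    # Sketch: build a serpentine Hamiltonian path over a 2-D grid of node
    # coordinates and forward the root's message hop by hop along it.
    def serpentine_path(rows, cols):
        """One Hamiltonian path over a rows x cols grid of node coordinates."""
        path = []
        for r in range(rows):
            cs = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
            path.extend((r, c) for c in cs)
        return path

    def broadcast(message, rows, cols, root=(0, 0)):
        path = serpentine_path(rows, cols)
        assert path[0] == root, "root assumed to sit at the start of the path"
        received = {root: message}
        for prev, nxt in zip(path, path[1:]):   # each node forwards to its successor
            received[nxt] = received[prev]
        return received

    print(broadcast("hello", rows=3, cols=4)[(2, 0)])   # every node ends up with "hello"
    ```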

  8. Performing an allreduce operation on a plurality of compute nodes of a parallel computer

    DOEpatents

    Faraj, Ahmad [Rochester, MN

    2012-04-17

    Methods, apparatus, and products are disclosed for performing an allreduce operation on a plurality of compute nodes of a parallel computer. Each compute node includes at least two processing cores. Each processing core has contribution data for the allreduce operation. Performing an allreduce operation on a plurality of compute nodes of a parallel computer includes: establishing one or more logical rings among the compute nodes, each logical ring including at least one processing core from each compute node; performing, for each logical ring, a global allreduce operation using the contribution data for the processing cores included in that logical ring, yielding a global allreduce result for each processing core included in that logical ring; and performing, for each compute node, a local allreduce operation using the global allreduce results for each processing core on that compute node.
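
    The two-level structure described here (a global allreduce around each logical ring, then a local allreduce on each node) can be simulated in a few lines; a sketch only, not the patented method.

    ```python
    # Simulation sketch of the two-level allreduce: one logical ring per core
    # index, a global sum around each ring, then a local combine on every node.
    nodes, cores_per_node = 4, 2
    # contribution[node][core]: that core's scalar contribution
    contribution = [[10 * n + c for c in range(cores_per_node)] for n in range(nodes)]

    # Global allreduce per logical ring: pass a running sum around the ring of
    # nodes; after visiting every node, each member of ring c knows the ring total.
    ring_total = []
    for c in range(cores_per_node):
        acc = 0
        for n in range(nodes):              # hop n -> n+1 around the ring
            acc += contribution[n][c]
        ring_total.append(acc)

    # Local allreduce on each node: combine the ring results held by its cores.
    grand_total = sum(ring_total)
    expected = sum(sum(row) for row in contribution)
    print(grand_total == expected)          # True: every core can now hold the full sum
    ```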

  9. A DNA network as an information processing system.

    PubMed

    Santini, Cristina Costa; Bath, Jonathan; Turberfield, Andrew J; Tyrrell, Andy M

    2012-01-01

    Biomolecular systems that can process information are sought for computational applications, because of their potential for parallelism and miniaturization and because their biocompatibility also makes them suitable for future biomedical applications. DNA has been used to design machines, motors, finite automata, logic gates, reaction networks and logic programs, amongst many other structures and dynamic behaviours. Here we design and program a synthetic DNA network to implement computational paradigms abstracted from cellular regulatory networks. These show information processing properties that are desirable in artificial, engineered molecular systems, including robustness of the output in relation to different sources of variation. We show the results of numerical simulations of the dynamic behaviour of the network and preliminary experimental analysis of its main components.

  10. Fundamental physics issues of multilevel logic in developing a parallel processor.

    NASA Astrophysics Data System (ADS)

    Bandyopadhyay, Anirban; Miki, Kazushi

    2007-06-01

    In the last century, on and off physical switches were equated with the two decisions 0 and 1, so that all information could be expressed in binary digits and physically realized by switches connected in a circuit. Apart from increasing memory density significantly, allowing more possible choices in a particular space makes pattern-logic a reality, and manipulating the pattern would allow logic to be controlled, generating a new kind of processor. Von Neumann's computer is based on sequential logic, processing bits one by one. But since pattern-logic is generated on a surface, viewing the whole pattern at a time is truly parallel processing. Following von Neumann's and Shannon's fundamental thermodynamical approaches, we have built a compatible model based on a series of single-molecule-based multibit logic systems of 4-12 bits in a UHV-STM. Multilevel communication and pattern formation on their monolayer are verified experimentally. Furthermore, the developed intelligent monolayer is trained by an artificial neural network. Therefore, the fundamental weak interactions needed to build a truly parallel processor are explored here physically and theoretically.

  11. Identifying a largest logical plane from a plurality of logical planes formed of compute nodes of a subcommunicator in a parallel computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, Kristan D.; Faraj, Daniel A.

    In a parallel computer, a largest logical plane from a plurality of logical planes formed of compute nodes of a subcommunicator may be identified by: identifying, by each compute node of the subcommunicator, all logical planes that include the compute node; calculating, by each compute node for each identified logical plane that includes the compute node, an area of the identified logical plane; initiating, by a root node of the subcommunicator, a gather operation; receiving, by the root node from each compute node of the subcommunicator, each node's calculated areas as contribution data to the gather operation; and identifying, by the root node in dependence upon the received calculated areas, a logical plane of the subcommunicator having the greatest area.

  12. Demonstration of an optoelectronic interconnect architecture for a parallel modified signed-digit adder and subtracter

    NASA Astrophysics Data System (ADS)

    Sun, Degui; Wang, Na-Xin; He, Li-Ming; Weng, Zhao-Heng; Wang, Daheng; Chen, Ray T.

    1996-06-01

    A space-position-logic-encoding scheme is proposed and demonstrated. This encoding scheme not only makes the best use of the convenience of binary logic operation, but is also suitable for the trinary property of modified signed- digit (MSD) numbers. Based on the space-position-logic-encoding scheme, a fully parallel modified signed-digit adder and subtractor is built using optoelectronic switch technologies in conjunction with fiber-multistage 3D optoelectronic interconnects. Thus an effective combination of a parallel algorithm and a parallel architecture is implemented. In addition, the performance of the optoelectronic switches used in this system is experimentally studied and verified. Both the 3-bit experimental model and the experimental results of a parallel addition and a parallel subtraction are provided and discussed. Finally, the speed ratio between the MSD adder and binary adders is discussed and the advantage of the MSD in operating speed is demonstrated.
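
    For readers unfamiliar with modified signed-digit numbers, a small sketch of the digit set {-1, 0, 1} and its redundancy may help; it only evaluates MSD words and does not reproduce the paper's space-position-logic encoding.

    ```python
    # Small illustration of modified signed-digit (MSD) numbers, whose digits are
    # drawn from {-1, 0, 1}: the representation is redundant, which is what allows
    # carry-free (fully parallel) addition schemes like the one described above.
    def msd_value(digits):
        """digits[0] is the most significant digit; each digit is -1, 0 or 1."""
        value = 0
        for d in digits:
            value = 2 * value + d
        return value

    # Two different MSD words for the same value, 3:
    print(msd_value([0, 1, 1]))     # 0*4 + 1*2 + 1 = 3
    print(msd_value([1, 0, -1]))    # 1*4 + 0*2 - 1 = 3
    ```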

  13. Evolution of a minimal parallel programming model

    DOE PAGES

    Lusk, Ewing; Butler, Ralph; Pieper, Steven C.

    2017-04-30

    Here, we take a historical approach to our presentation of self-scheduled task parallelism, a programming model with its origins in early irregular and nondeterministic computations encountered in automated theorem proving and logic programming. We show how an extremely simple task model has evolved into a system, asynchronous dynamic load balancing (ADLB), and a scalable implementation capable of supporting sophisticated applications on today’s (and tomorrow’s) largest supercomputers; and we illustrate the use of ADLB with a Green’s function Monte Carlo application, a modern, mature nuclear physics code in production use. Our lesson is that by surrendering a certain amount of generality and thus applicability, a minimal programming model (in terms of its basic concepts and the size of its application programmer interface) can achieve extreme scalability without introducing complexity.
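
    The self-scheduling idea can be sketched with a shared task pool from which workers pull as they become free; this illustrates the spirit of ADLB, not its actual API.

    ```python
    # Minimal sketch of self-scheduled task parallelism: workers repeatedly pull
    # whatever task is available, so load balances itself without a fixed schedule.
    import multiprocessing as mp

    def work(task):
        # stand-in for an irregular unit of work (e.g., one Monte Carlo walker)
        return sum(i * i for i in range(task))

    if __name__ == "__main__":
        tasks = [50_000 + 1_000 * i for i in range(64)]     # deliberately uneven sizes
        with mp.Pool(processes=4) as pool:
            # imap_unordered hands out tasks as workers become free (self-scheduling)
            results = list(pool.imap_unordered(work, tasks))
        print(len(results), "tasks completed")
    ```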

  14. Interconnect-free parallel logic circuits in a single mechanical resonator

    PubMed Central

    Mahboob, I.; Flurin, E.; Nishiguchi, K.; Fujiwara, A.; Yamaguchi, H.

    2011-01-01

    In conventional computers, wiring between transistors is required to enable the execution of Boolean logic functions. This has resulted in processors in which billions of transistors are physically interconnected, which limits integration densities, gives rise to huge power consumption and restricts processing speeds. A method to eliminate wiring amongst transistors by condensing Boolean logic into a single active element is thus highly desirable. Here, we demonstrate a novel logic architecture using only a single electromechanical parametric resonator into which multiple channels of binary information are encoded as mechanical oscillations at different frequencies. The parametric resonator can mix these channels, resulting in new mechanical oscillation states that enable the construction of AND, OR and XOR logic gates as well as multibit logic circuits. Moreover, the mechanical logic gates and circuits can be executed simultaneously, giving rise to the prospect of a parallel logic processor in just a single mechanical resonator. PMID:21326230

  15. Interconnect-free parallel logic circuits in a single mechanical resonator.

    PubMed

    Mahboob, I; Flurin, E; Nishiguchi, K; Fujiwara, A; Yamaguchi, H

    2011-02-15

    In conventional computers, wiring between transistors is required to enable the execution of Boolean logic functions. This has resulted in processors in which billions of transistors are physically interconnected, which limits integration densities, gives rise to huge power consumption and restricts processing speeds. A method to eliminate wiring amongst transistors by condensing Boolean logic into a single active element is thus highly desirable. Here, we demonstrate a novel logic architecture using only a single electromechanical parametric resonator into which multiple channels of binary information are encoded as mechanical oscillations at different frequencies. The parametric resonator can mix these channels, resulting in new mechanical oscillation states that enable the construction of AND, OR and XOR logic gates as well as multibit logic circuits. Moreover, the mechanical logic gates and circuits can be executed simultaneously, giving rise to the prospect of a parallel logic processor in just a single mechanical resonator.

  16. Executing a gather operation on a parallel computer

    DOEpatents

    Archer, Charles J [Rochester, MN; Ratterman, Joseph D [Rochester, MN

    2012-03-20

    Methods, apparatus, and computer program products are disclosed for executing a gather operation on a parallel computer according to embodiments of the present invention. Embodiments include configuring, by the logical root, a result buffer on the logical root, the result buffer having positions, each position corresponding to a ranked node in the operational group and for storing contribution data gathered from that ranked node. Embodiments also include, repeatedly for each position in the result buffer: determining, by each compute node of an operational group, whether the current position in the result buffer corresponds with the rank of the compute node; if the current position in the result buffer corresponds with the rank of the compute node, contributing, by that compute node, the compute node's contribution data; if the current position in the result buffer does not correspond with the rank of the compute node, contributing, by that compute node, a value of zero for the contribution data; and storing, by the logical root in the current position in the result buffer, results of a bitwise OR operation of all the contribution data by all compute nodes of the operational group for the current position, the results received through the global combining network.
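
    The per-position contribution-and-OR scheme described here can be simulated directly; a sketch, not the patented implementation.

    ```python
    # Simulation sketch: for each result-buffer position, every node contributes
    # its data if the position matches its rank and zero otherwise, and the root
    # combines the contributions with a bitwise OR.
    num_nodes = 4
    node_data = [0b0001 << rank for rank in range(num_nodes)]   # each node's contribution

    result_buffer = []
    for position in range(num_nodes):
        contributions = [node_data[rank] if rank == position else 0
                         for rank in range(num_nodes)]
        combined = 0
        for c in contributions:             # the combining network's bitwise OR
            combined |= c
        result_buffer.append(combined)

    print([bin(x) for x in result_buffer])  # root holds every node's data in rank order
    ```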

  17. Parallel and Multivalued Logic by the Two-Dimensional Photon-Echo Response of a Rhodamine–DNA Complex

    PubMed Central

    2015-01-01

    Implementing parallel and multivalued logic operations at the molecular scale has the potential to improve the miniaturization and efficiency of a new generation of nanoscale computing devices. Two-dimensional photon-echo spectroscopy is capable of resolving dynamical pathways on electronic and vibrational molecular states. We experimentally demonstrate the implementation of molecular decision trees, logic operations where all possible values of inputs are processed in parallel and the outputs are read simultaneously, by probing the laser-induced dynamics of populations and coherences in a rhodamine dye mounted on a short DNA duplex. The inputs are provided by the bilinear interactions between the molecule and the laser pulses, and the output values are read from the two-dimensional molecular response at specific frequencies. Our results highlight how ultrafast dynamics between multiple molecular states induced by light–matter interactions can be used as an advantage for performing complex logic operations in parallel, operations that are faster than electrical switching. PMID:25984269

  18. Development of an optical parallel logic device and a half-adder circuit for digital optical processing

    NASA Technical Reports Server (NTRS)

    Athale, R. A.; Lee, S. H.

    1978-01-01

    The paper describes the fabrication and operation of an optical parallel logic (OPAL) device which performs Boolean algebraic operations on binary images. Several logic operations on two input binary images were demonstrated using an 8 x 8 device with a CdS photoconductor and a twisted nematic liquid crystal. Two such OPAL devices can be interconnected to form a half-adder circuit which is one of the essential components of a CPU in a digital signal processor.

  19. Spin wave based parallel logic operations for binary data coded with domain walls

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Urazuka, Y.; Oyabu, S.; Chen, H.

    2014-05-07

    We numerically investigate the feasibility of spin wave (SW) based parallel logic operations, where the phase of the SW packet (SWP) is exploited as a state variable and the phase shift caused by the interaction with a domain wall (DW) is utilized as a logic inversion functionality. A designed functional element consists of parallel ferromagnetic nanowires (6 nm-thick, 36 nm-width, 5120 nm-length, and 200 nm separation) with perpendicular magnetization and sub-μm scale overlaid conductors. The logic outputs for binary data, coded with the existence (“1”) or absence (“0”) of the DW, are inductively read out from the interferometric aspect of the superposed SWPs, one of them propagating through the stored data area. A practical exclusive-or operation, based on 2π periodicity in the phase logic, is demonstrated for the individual nanowire, with an order of different output voltage V_out depending on the logic output for the stored data. The inductive output from the two nanowires exhibits three well-defined signal levels, corresponding to the information distance (Hamming distance) between the 2-bit data stored in the multiple nanowires.

  20. Microfluidic Pneumatic Logic Circuits and Digital Pneumatic Microprocessors for Integrated Microfluidic Systems

    PubMed Central

    Rhee, Minsoung

    2010-01-01

    We have developed pneumatic logic circuits and microprocessors built with microfluidic channels and valves in polydimethylsiloxane (PDMS). The pneumatic logic circuits perform various combinational and sequential logic calculations with binary pneumatic signals (atmosphere and vacuum), producing cascadable outputs based on Boolean operations. A complex microprocessor is constructed from combinations of various logic circuits and receives pneumatically encoded serial commands at a single input line. The device then decodes the temporal command sequence by spatial parallelization, computes necessary logic calculations between parallelized command bits, stores command information for signal transportation and maintenance, and finally executes the command for the target devices. Thus, such pneumatic microprocessors will function as a universal on-chip control platform to perform complex parallel operations for large-scale integrated microfluidic devices. To demonstrate the working principles, we have built 2-bit, 3-bit, 4-bit, and 8-bit microprocessors to control various target devices for applications such as four color dye mixing, and multiplexed channel fluidic control. By significantly reducing the need for external controllers, the digital pneumatic microprocessor can be used as a universal on-chip platform to autonomously manipulate microfluids in a high throughput manner. PMID:19823730

  1. Estimation of CO2 reduction by parallel hard-type power hybridization for gasoline and diesel vehicles.

    PubMed

    Oh, Yunjung; Park, Junhong; Lee, Jong Tae; Seo, Jigu; Park, Sungwook

    2017-10-01

    The purpose of this study is to investigate possible improvements in ICEVs by implementing fuzzy logic-based parallel hard-type power hybrid systems. Two types of conventional ICEVs (gasoline and diesel) and two types of HEVs (gasoline-electric, diesel-electric) were generated using vehicle and powertrain simulation tools and a Matlab-Simulink application programming interface. For gasoline and gasoline-electric HEV vehicles, the prediction accuracy for four types of LDV models was validated by conducting comparative analysis with the chassis dynamometer and OBD test data. The predicted results show strong correlation with the test data. The operating points of internal combustion engines and electric motors are well controlled in the high efficiency region, and battery SOC was well controlled within ±1.6%. However, for diesel vehicles, we generated a virtual diesel-electric HEV because no available vehicle has engine and vehicle specifications similar to the ICE vehicle. Using a fuzzy logic-based parallel hybrid system in conventional ICEVs demonstrated that HEVs show superior performance in terms of fuel consumption and CO2 emission in most driving modes.
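
    A toy fuzzy-logic torque-split rule base illustrates the kind of controller described here; the membership functions, rules, and thresholds below are invented for illustration, not those of the study.

    ```python
    # Toy sketch of a fuzzy-logic power split for a parallel hybrid (illustrative
    # only; membership functions and rule thresholds are invented).
    def low(x, a, b):        # membership: 1 below a, fading to 0 at b
        return max(0.0, min(1.0, (b - x) / (b - a)))

    def high(x, a, b):       # membership: 0 below a, rising to 1 at b
        return 1.0 - low(x, a, b)

    def motor_share(soc, demand_kw):
        """Fraction of the driver's power demand assigned to the electric motor."""
        # Rules: IF SOC is high AND demand is low THEN use the motor;
        #        IF SOC is low OR demand is high THEN favour the engine.
        use_motor = min(high(soc, 0.4, 0.7), low(demand_kw, 20.0, 60.0))
        use_engine = max(low(soc, 0.3, 0.5), high(demand_kw, 40.0, 80.0))
        return use_motor / (use_motor + use_engine + 1e-9)

    for soc, demand in [(0.8, 10.0), (0.5, 50.0), (0.35, 70.0)]:
        print(f"SOC={soc:.2f} demand={demand:4.0f} kW -> motor share {motor_share(soc, demand):.2f}")
    ```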

  2. Efficient Thread Labeling for Monitoring Programs with Nested Parallelism

    NASA Astrophysics Data System (ADS)

    Ha, Ok-Kyoon; Kim, Sun-Sook; Jun, Yong-Kee

    It is difficult and cumbersome to detect data races that occur in an execution of parallel programs. Any on-the-fly race detection technique using Lamport's happened-before relation needs a thread labeling scheme for generating unique identifiers which maintain logical concurrency information for the parallel threads. NR labeling is an efficient thread labeling scheme for the fork-join program model with nested parallelism, because its efficiency depends only on the nesting depth for every fork and join operation. This paper presents an improved NR labeling, called e-NR labeling, in which every thread generates its label by inheriting the pointer to its ancestor list from the parent threads or by updating the pointer in a constant amount of time and space. This labeling is more efficient than NR labeling, because its efficiency does not depend on the nesting depth for every fork and join operation. Some experiments were performed with OpenMP programs having nesting depths of three or four and maximum parallelism varying from 10,000 to 1,000,000. The results show that e-NR is 5 times faster than NR labeling and 4.3 times faster than OS labeling in the average time for creating and maintaining the thread labels. In the average space required for labeling, it is 3.5 times smaller than NR labeling and 3 times smaller than OS labeling.

  3. Photorefractive optical fuzzy-logic processor based on grating degeneracy

    NASA Astrophysics Data System (ADS)

    Wu, Weishu; Yang, Changxi; Campbell, Scott; Yeh, Pochi

    1995-04-01

    A novel optical fuzzy-logic processor using light-induced gratings in photorefractive crystals is proposed and demonstrated. By exploiting grating degeneracy, one can easily implement parallel fuzzy-logic functions in disjunctive normal form.

  4. Heuristic and analytic processes in reasoning: an event-related potential study of belief bias.

    PubMed

    Banks, Adrian P; Hope, Christopher

    2014-03-01

    Human reasoning involves both heuristic and analytic processes. This study of belief bias in relational reasoning investigated whether the two processes occur serially or in parallel. Participants evaluated the validity of problems in which the conclusions were either logically valid or invalid and either believable or unbelievable. Problems in which the conclusions presented a conflict between the logically valid response and the believable response elicited a more positive P3 than problems in which there was no conflict. This shows that P3 is influenced by the interaction of belief and logic rather than either of these factors on its own. These findings indicate that belief and logic influence reasoning at the same time, supporting models in which belief-based and logical evaluations occur in parallel but not theories in which belief-based heuristic evaluations precede logical analysis.

  5. When fast logic meets slow belief: Evidence for a parallel-processing model of belief bias.

    PubMed

    Trippas, Dries; Thompson, Valerie A; Handley, Simon J

    2017-05-01

    Two experiments pitted the default-interventionist account of belief bias against a parallel-processing model. According to the former, belief bias occurs because a fast, belief-based evaluation of the conclusion pre-empts a working-memory demanding logical analysis. In contrast, according to the latter both belief-based and logic-based responding occur in parallel. Participants were given deductive reasoning problems of variable complexity and instructed to decide whether the conclusion was valid on half the trials or to decide whether the conclusion was believable on the other half. When belief and logic conflict, the default-interventionist view predicts that it should take less time to respond on the basis of belief than logic, and that the believability of a conclusion should interfere with judgments of validity, but not the reverse. The parallel-processing view predicts that beliefs should interfere with logic judgments only if the processing required to evaluate the logical structure exceeds that required to evaluate the knowledge necessary to make a belief-based judgment, and vice versa otherwise. Consistent with this latter view, for the simplest reasoning problems (modus ponens), judgments of belief resulted in lower accuracy than judgments of validity, and believability interfered more with judgments of validity than the converse. For problems of moderate complexity (modus tollens and single-model syllogisms), the interference was symmetrical, in that validity interfered with belief judgments to the same degree that believability interfered with validity judgments. For the most complex (three-term multiple-model syllogisms), conclusion believability interfered more with judgments of validity than vice versa, in spite of the significant interference from conclusion validity on judgments of belief.

  6. Adaptive parallel logic networks

    NASA Technical Reports Server (NTRS)

    Martinez, Tony R.; Vidal, Jacques J.

    1988-01-01

    Adaptive, self-organizing concurrent systems (ASOCS) that combine self-organization with massive parallelism for such applications as adaptive logic devices, robotics, process control, and system malfunction management, are presently discussed. In ASOCS, an adaptive network composed of many simple computing elements operating in combinational and asynchronous fashion is used and problems are specified by presenting if-then rules to the system in the form of Boolean conjunctions. During data processing, which is a different operational phase from adaptation, the network acts as a parallel hardware circuit.

  7. Simultaneous G-Quadruplex DNA Logic.

    PubMed

    Bader, Antoine; Cockroft, Scott L

    2018-04-03

    A fundamental principle of digital computer operation is Boolean logic, where inputs and outputs are described by binary integer voltages. Similarly, inputs and outputs may be processed on the molecular level as exemplified by synthetic circuits that exploit the programmability of DNA base-pairing. Unlike modern computers, which execute large numbers of logic gates in parallel, most implementations of molecular logic have been limited to single computing tasks, or sensing applications. This work reports three G-quadruplex-based logic gates that operate simultaneously in a single reaction vessel. The gates respond to unique Boolean DNA inputs by undergoing topological conversion from duplex to G-quadruplex states that were resolved using a thioflavin T dye and gel electrophoresis. The modular, addressable, and label-free approach could be incorporated into DNA-based sensors, or used for resolving and debugging parallel processes in DNA computing applications.

  8. Performing an allreduce operation on a plurality of compute nodes of a parallel computer

    DOEpatents

    Faraj, Ahmad

    2013-02-12

    Methods, apparatus, and products are disclosed for performing an allreduce operation on a plurality of compute nodes of a parallel computer, each node including at least two processing cores, that include: performing, for each node, a local reduction operation using allreduce contribution data for the cores of that node, yielding, for each node, a local reduction result for one or more representative cores for that node; establishing one or more logical rings among the nodes, each logical ring including only one of the representative cores from each node; performing, for each logical ring, a global allreduce operation using the local reduction result for the representative cores included in that logical ring, yielding a global allreduce result for each representative core included in that logical ring; and performing, for each node, a local broadcast operation using the global allreduce results for each representative core on that node.

  9. Identifying logical planes formed of compute nodes of a subcommunicator in a parallel computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, Kristan D.; Faraj, Daniel

    In a parallel computer, a plurality of logical planes formed of compute nodes of a subcommunicator may be identified by: for each compute node of the subcommunicator and for a number of dimensions beginning with a first dimension: establishing, by a plane building node, in a positive direction of the first dimension, all logical planes that include the plane building node and compute nodes of the subcommunicator in a positive direction of a second dimension, where the second dimension is orthogonal to the first dimension; and establishing, by the plane building node, in a negative direction of the first dimension, all logical planes that include the plane building node and compute nodes of the subcommunicator in the positive direction of the second dimension.

  10. The PASM Parallel Processing System: Hardware Design and Intelligent Operating System Concepts

    DTIC Science & Technology

    1986-07-01

  11. Generalized Philosophy of Alerting with Applications for Parallel Approach Collision Prevention

    NASA Technical Reports Server (NTRS)

    Winder, Lee F.; Kuchar, James K.

    2000-01-01

    The goal of the research was to develop formal guidelines for the design of hazard avoidance systems. An alerting system is automation designed to reduce the likelihood of undesirable outcomes that are due to rare failures in a human-controlled system. It accomplishes this by monitoring the system, and issuing warning messages to the human operators when thought necessary to head off a problem. On examination of existing and recently proposed logics for alerting, it appears that few commonly accepted principles guide the design process. Different logics intended to address the same hazards may take disparate forms and emphasize different aspects of performance, because each reflects the intuitive priorities of a different designer. Because performance must be satisfactory to all users of an alerting system (implying a universal meaning of acceptable performance) and not just one designer, a proposed logic often undergoes significant piecemeal modification before gaining general acceptance. This report is an initial attempt to clarify the common performance goals by which an alerting system is ultimately judged. A better understanding of these goals will hopefully allow designers to reach the final logic in a quicker, more direct and repeatable manner. As a case study, this report compares three alerting logics for collision prevention during independent approaches to parallel runways, and outlines a fourth alternative incorporating elements of the first three, but satisfying stated requirements. Three existing logics for parallel approach alerting are described. Each follows from different intuitive principles. The logics are presented as examples of three "philosophies" of alerting system design.

  12. Parallel software support for computational structural mechanics

    NASA Technical Reports Server (NTRS)

    Jordan, Harry F.

    1987-01-01

    The application of the parallel programming methodology known as the Force was conducted. Two application issues were addressed. The first involves the efficiency of the implementation and its completeness in terms of satisfying the needs of other researchers implementing parallel algorithms. Support for, and interaction with, other Computational Structural Mechanics (CSM) researchers using the Force was the main issue, but some independent investigation of the Barrier construct, which is extremely important to overall performance, was also undertaken. Another efficiency issue which was addressed was that of relaxing the strong synchronization condition imposed on the self-scheduled parallel DO loop. The Force was extended by the addition of logical conditions to the cases of a parallel case construct and by the inclusion of a self-scheduled version of this construct. The second issue involved applying the Force to the parallelization of finite element codes such as those found in the NICE/SPAR testbed system. One of the more difficult problems encountered is the determination of what information in COMMON blocks is actually used outside of a subroutine and when a subroutine uses a COMMON block merely as scratch storage for internal temporary results.

  13. Two-step digit-set-restricted modified signed-digit addition-subtraction algorithm and its optoelectronic implementation.

    PubMed

    Qian, F; Li, G; Ruan, H; Jing, H; Liu, L

    1999-09-10

    A novel, to our knowledge, two-step digit-set-restricted modified signed-digit (MSD) addition-subtraction algorithm is proposed. With the introduction of the reference digits, the operand words are mapped into an intermediate carry word with all digits restricted to the set {-1, 0} and an intermediate sum word with all digits restricted to the set {0, 1}, which can be summed to form the final result without carry generation. The operation can be performed in parallel by use of binary logic. An optical system that utilizes an electron-trapping device is suggested for accomplishing the required binary logic operations. By programming of the illumination of data arrays, any complex logic operations of multiple variables can be realized without additional temporal latency of the intermediate results. This technique has a high space-bandwidth product and signal-to-noise ratio. The main structure can be stacked to construct a compact optoelectronic MSD adder-subtracter.
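
    As a point of reference, the sketch below (plain Python, illustrative digit choices) shows the MSD digit set {-1, 0, 1} and checks that an intermediate carry word with digits in {-1, 0} and an intermediate sum word with digits in {0, 1} can together carry the value of a sum; it does not reproduce the paper's two-step mapping rules.

```python
# Minimal sketch of the modified signed-digit (MSD) representation assumed above:
# each digit d_i is in {-1, 0, 1} and the value is sum(d_i * 2**i).
def msd_value(digits):
    """digits[0] is the least-significant digit; each digit is -1, 0, or 1."""
    return sum(d * (2 ** i) for i, d in enumerate(digits))

# 5 and 3 in MSD (one of several possible encodings each):
a = [1, 0, 1]        # 1 + 4 = 5
b = [-1, 0, 1]       # -1 + 4 = 3

# One valid intermediate pair for the sum 8 (illustrative, not the paper's mapping):
carry = [0, 0, 0, 0]    # digits restricted to {-1, 0}, value 0
s     = [0, 0, 0, 1]    # digits restricted to {0, 1},  value 8

assert msd_value(a) + msd_value(b) == msd_value(carry) + msd_value(s) == 8
```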

  14. BioMake: a GNU make-compatible utility for declarative workflow management.

    PubMed

    Holmes, Ian H; Mungall, Christopher J

    2017-11-01

    The Unix 'make' program is widely used in bioinformatics pipelines, but suffers from problems that limit its application to large analysis datasets. These include reliance on file modification times to determine whether a target is stale, lack of support for parallel execution on clusters, and restricted flexibility to extend the underlying logic program. We present BioMake, a make-like utility that is compatible with most features of GNU Make and adds support for popular cluster-based job-queue engines, MD5 signatures as an alternative to timestamps, and logic programming extensions in Prolog. BioMake is available for Mac OS X and Linux systems from https://github.com/evoldoers/biomake under the BSD3 license. The only dependency is SWI-Prolog (version 7), available from http://www.swi-prolog.org/. Contact: ihholmes+biomake@gmail.com or cmungall+biomake@gmail.com. Supplementary information: a feature table comparing BioMake to similar tools; supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  15. Photochromic molecular implementations of universal computation.

    PubMed

    Chaplin, Jack C; Krasnogor, Natalio; Russell, Noah A

    2014-12-01

    Unconventional computing is an area of research in which novel materials and paradigms are utilised to implement computation. Previously we have demonstrated how registers, logic gates and logic circuits can be implemented, unconventionally, with a biocompatible molecular switch, NitroBIPS, embedded in a polymer matrix. NitroBIPS and related molecules have been shown elsewhere to be capable of modifying many biological processes in a manner that is dependent on its molecular form. Thus, one possible application of this type of unconventional computing is to embed computational processes into biological systems. Here we expand on our earlier proof-of-principle work and demonstrate that universal computation can be implemented using NitroBIPS. We have previously shown that spatially localised computational elements, including registers and logic gates, can be produced. We explain how parallel registers can be implemented, then demonstrate an application of parallel registers in the form of Turing machine tapes, and demonstrate both parallel registers and logic circuits in the form of elementary cellular automata. The Turing machines and elementary cellular automata utilise the same samples and same hardware to implement their registers, logic gates and logic circuits; and both represent examples of universal computing paradigms. This shows that homogeneous photochromic computational devices can be dynamically repurposed without invasive reconfiguration. The result represents an important, necessary step towards demonstrating the general feasibility of interfacial computation embedded in biological systems or other unconventional materials and environments. Copyright © 2014 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.

  16. Embedding global and collective in a torus network with message class map based tree path selection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Dong; Coteus, Paul W.; Eisley, Noel A.

    Embodiments of the invention provide a method, system and computer program product for embedding a global barrier and global interrupt network in a parallel computer system organized as a torus network. The computer system includes a multitude of nodes. In one embodiment, the method comprises taking inputs from a set of receivers of the nodes, dividing the inputs from the receivers into a plurality of classes, combining the inputs of each of the classes to obtain a result, and sending said result to a set of senders of the nodes. Embodiments of the invention provide a method, system and computer program product for embedding a collective network in a parallel computer system organized as a torus network. In one embodiment, the method comprises adding to a torus network a central collective logic to route messages among at least a group of nodes in a tree structure.

  17. A programmable computational image sensor for high-speed vision

    NASA Astrophysics Data System (ADS)

    Yang, Jie; Shi, Cong; Long, Xitian; Wu, Nanjian

    2013-08-01

    In this paper we present a programmable computational image sensor for high-speed vision. This computational image sensor contains four main blocks: an image pixel array, a massively parallel processing element (PE) array, a row processor (RP) array and a RISC core. The pixel-parallel PE array transfers, stores and processes raw image data in a SIMD fashion with its own programming language. The RP array is a one-dimensional array of simplified RISC cores that can carry out complex arithmetic and logic operations. The PE array and RP array can complete a large amount of computation in a few instruction cycles and therefore satisfy low- and middle-level high-speed image processing requirements. The RISC core controls the whole system operation and executes some high-level image processing algorithms. We utilize a simplified AHB bus as the system bus to connect the major components. A programming language and corresponding tool chain for this computational image sensor have also been developed.

  18. Hierarchical Fuzzy Control Applied to Parallel Connected UPS Inverters Using Average Current Sharing Scheme

    NASA Astrophysics Data System (ADS)

    Singh, Santosh Kumar; Ghatak Choudhuri, Sumit

    2018-05-01

    Parallel connection of UPS inverters to enhance power rating is a widely accepted practice. Inter-modular circulating currents appear when multiple inverter modules are connected in parallel to supply a variable critical load. Interfacing of modules therefore requires an intensive design using a proper control strategy. The potential of intuitive Fuzzy Logic (FL) control with an imprecise system model is well known and can be exploited in parallel-connected UPS systems. A conventional FL controller is computationally intensive, especially with a higher number of input variables. This paper proposes the application of Hierarchical Fuzzy Logic control to a parallel-connected multi-modular inverter system to reduce the computational burden on the processor for a given switching frequency. Simulated results in the MATLAB environment and experimental verification using a Texas Instruments TMS320F2812 DSP are included to demonstrate the feasibility of the proposed control scheme.
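
    The computational saving that motivates the hierarchical approach can be illustrated with the standard rule-count argument below (a sketch; the paper's specific controller layout may differ).

```python
# Why a hierarchical fuzzy controller lightens the rule base: standard combinatorial
# argument comparing a flat controller against a cascade of two-input controllers.
def flat_rule_count(n_inputs: int, m_sets: int) -> int:
    """A single fuzzy controller with n inputs and m membership functions per input."""
    return m_sets ** n_inputs

def hierarchical_rule_count(n_inputs: int, m_sets: int) -> int:
    """A cascade of two-input controllers: (n - 1) layers of m*m rules each."""
    return (n_inputs - 1) * m_sets ** 2

for n in (2, 3, 4, 5):
    print(n, flat_rule_count(n, 5), hierarchical_rule_count(n, 5))
# e.g. 4 inputs with 5 membership functions each: 625 rules flat vs 75 rules hierarchical
```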

  19. PARAMESH: A Parallel Adaptive Mesh Refinement Community Toolkit

    NASA Technical Reports Server (NTRS)

    MacNeice, Peter; Olson, Kevin M.; Mobarry, Clark; deFainchtein, Rosalinda; Packer, Charles

    1999-01-01

    In this paper, we describe a community toolkit which is designed to provide parallel support with adaptive mesh capability for a large and important class of computational models, those using structured, logically cartesian meshes. The package of Fortran 90 subroutines, called PARAMESH, is designed to provide an application developer with an easy route to extend an existing serial code which uses a logically cartesian structured mesh into a parallel code with adaptive mesh refinement. Alternatively, in its simplest use, and with minimal effort, it can operate as a domain decomposition tool for users who want to parallelize their serial codes, but who do not wish to use adaptivity. The package can provide them with an incremental evolutionary path for their code, converting it first to uniformly refined parallel code, and then later if they so desire, adding adaptivity.

  20. Computer sciences

    NASA Technical Reports Server (NTRS)

    Smith, Paul H.

    1988-01-01

    The Computer Science Program provides advanced concepts, techniques, system architectures, algorithms, and software for both space and aeronautics information sciences and computer systems. The overall goal is to provide the technical foundation within NASA for the advancement of computing technology in aerospace applications. The research program is improving the state of knowledge of fundamental aerospace computing principles and advancing computing technology in space applications such as software engineering and information extraction from data collected by scientific instruments in space. The program includes the development of special algorithms and techniques to exploit the computing power provided by high performance parallel processors and special purpose architectures. Research is being conducted in the fundamentals of data base logic and improvement techniques for producing reliable computing systems.

  1. GPU COMPUTING FOR PARTICLE TRACKING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nishimura, Hiroshi; Song, Kai; Muriki, Krishna

    2011-03-25

    This is a feasibility study of using a modern Graphics Processing Unit (GPU) to parallelize an accelerator particle tracking code. To demonstrate the massive parallelization features provided by GPU computing, a simplified TracyGPU program is developed for dynamic aperture calculation. Performance, issues, and challenges of introducing GPUs are also discussed. General-Purpose computation on Graphics Processing Units (GPGPU) brings massive parallel computing capability to numerical calculation. However, the unique architecture of the GPU requires a comprehensive understanding of the hardware and programming model in order to optimize existing applications well. In the field of accelerator physics, the dynamic aperture calculation of a storage ring, which is often the most time-consuming part of accelerator modeling and simulation, can benefit from the GPU due to its embarrassingly parallel nature, which fits well with the GPU programming model. In this paper, we use the Tesla C2050 GPU, which consists of 14 multi-processors (MP) with 32 cores per MP, for a total of 448 cores, to host thousands of threads dynamically. A thread is a logical execution unit of the program on the GPU. In the GPU programming model, threads are grouped into a collection of blocks. Within each block, multiple threads share the same code and up to 48 KB of shared memory. Multiple thread blocks form a grid, which is executed as a GPU kernel. A simplified code that is a subset of Tracy++ [2] is developed to demonstrate the possibility of using the GPU to speed up the dynamic aperture calculation by having each thread track a particle.
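
    As a CPU-side analogy to the "one thread per particle" mapping described above (and not the TracyGPU code itself), a sketch using Python's multiprocessing pool:

```python
# CPU-side analogy of the per-particle parallelism: each worker advances one particle
# independently, which is what makes the dynamic-aperture scan embarrassingly parallel.
from multiprocessing import Pool

def track(initial):
    """Toy map iterated for a fixed number of turns; a real tracker applies the ring's elements."""
    x, px = initial
    for _ in range(1000):
        x, px = x + 0.01 * px, px - 0.01 * x   # placeholder linear map, not accelerator physics
    return x, px

if __name__ == "__main__":
    particles = [(0.001 * i, 0.0) for i in range(448)]   # one independent task per particle
    with Pool() as pool:
        finals = pool.map(track, particles)
    print(finals[0], finals[-1])
```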

  2. AFL-1: A programming Language for Massively Concurrent Computers.

    DTIC Science & Technology

    1986-11-01

    Bibliography excerpt (partially recovered from the scanned record): Ackley, D.H., Hinton, G.E., Sejnowski, T.J., "A Learning Algorithm for Boltzmann Machines", Cognitive Science, 1985, 9, 147-169; Agre, P.E., "Routines", Memo 828, MIT AI Laboratory, May 1985; Ballard, D.H., Hayes, P.J., "Parallel Logical Inference", Conference of the Cognitive Science Society; Collins, "Experiments on Semantic Memory and Language Comprehension", in L.W. Gregg (Ed.), Cognition in Learning and Memory, New York, Wiley, 1972.

  3. A tristate optical logic system

    NASA Astrophysics Data System (ADS)

    Basuray, A.; Mukhopadhyay, S.; Kumar Ghosh, Hirak; Datta, A. K.

    1991-09-01

    A method is described to represent data in a tristate logic system, in which the data are subsequently replaced by Modified Trinary Numbers (MTN). This system is advantageous in parallel processing, since carry- and borrow-free operations in arithmetic computation are possible. The logical operations are also modified according to the three states available. A possible practical application using polarized light is also suggested.

  4. Multi-input and binary reproducible, high bandwidth floating point adder in a collective network

    DOEpatents

    Chen, Dong; Eisley, Noel A.; Heidelberger, Philip; Steinmacher-Burow, Burkhard

    2016-11-15

    To add floating point numbers in a parallel computing system, a collective logic device receives the floating point numbers from computing nodes. The collective logic device converts the floating point numbers to integer numbers, adds the integer numbers and generates a summation of the integer numbers, and then converts the summation back to a floating point number. The collective logic device performs the receiving, the converting of the floating point numbers, the adding, the generating and the converting of the summation in one pass. One pass indicates that the computing nodes send inputs only once to the collective logic device and receive outputs only once from the collective logic device.
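
    A minimal software sketch of the reproducibility idea, assuming fixed-point scaling as a stand-in for the hardware's exponent alignment:

```python
# Reproducible summation by integer conversion: integer addition is exact, so the
# result is independent of the order in which contributions arrive.
def reproducible_sum(values, fractional_bits=40):
    scale = 1 << fractional_bits
    total = sum(int(round(v * scale)) for v in values)   # exact integer addition
    return total / scale

vals = [0.1, 0.2, 0.3, 1e-9]
# The integer-based sum is identical for any ordering of the inputs...
assert reproducible_sum(vals) == reproducible_sum(list(reversed(vals)))
# ...whereas naive float addition can depend on the order of the operands.
print(sum(vals), sum(reversed(vals)), reproducible_sum(vals))
```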

  5. Parallel logic gates in synthetic gene networks induced by non-Gaussian noise.

    PubMed

    Xu, Yong; Jin, Xiaoqin; Zhang, Huiqing

    2013-11-01

    The recent idea of logical stochastic resonance is verified in synthetic gene networks induced by non-Gaussian noise. We realize switching between two kinds of logic gates under an optimal, moderate noise intensity by varying two different tunable parameters in a single gene network. Furthermore, in order to obtain more logic operations, and thus provide additional information-processing capacity, we obtain two complementary logic gates in a two-dimensional toggle switch model and realize the transformation between the two logic gates by changing different parameters. These simulation results help improve the computational power and functionality of the networks.

  6. Graphical approach for multiple values logic minimization

    NASA Astrophysics Data System (ADS)

    Awwal, Abdul Ahad S.; Iftekharuddin, Khan M.

    1999-03-01

    Multiple valued logic (MVL) is sought for designing high complexity, highly compact, parallel digital circuits. However, the practical realization of an MVL-based system is dependent on optimization of cost, which directly affects the optical setup. We propose a minimization technique for MVL logic optimization based on graphical visualization, such as a Karnaugh map. The proposed method is utilized to solve signed-digit binary and trinary logic minimization problems. The usefulness of the minimization technique is demonstrated for the optical implementation of MVL circuits.

  7. Parallel database search and prime factorization with magnonic holographic memory devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khitun, Alexander

    In this work, we describe the capabilities of Magnonic Holographic Memory (MHM) for parallel database search and prime factorization. MHM is a type of holographic device which utilizes spin waves for data transfer and processing. Its operation is based on the correlation between the phases and the amplitudes of the input spin waves and the output inductive voltage. The input of MHM is provided by a phased array of spin wave generating elements, allowing the production of phase patterns of arbitrary form. The latter makes it possible to code logic states into the phases of propagating waves and exploit wave superposition for parallel data processing. We present the results of numerical modeling illustrating parallel database search and prime factorization. The results of numerical simulations on the database search are in agreement with the available experimental data. The use of classical wave interference may result in a significant speedup over conventional digital logic circuits in special-task data processing (e.g., √n in database search). Potentially, magnonic holographic devices can be implemented as complementary logic units to digital processors. Physical limitations and technological constraints of the spin wave approach are also discussed.

  8. Parallel database search and prime factorization with magnonic holographic memory devices

    NASA Astrophysics Data System (ADS)

    Khitun, Alexander

    2015-12-01

    In this work, we describe the capabilities of Magnonic Holographic Memory (MHM) for parallel database search and prime factorization. MHM is a type of holographic device which utilizes spin waves for data transfer and processing. Its operation is based on the correlation between the phases and the amplitudes of the input spin waves and the output inductive voltage. The input of MHM is provided by a phased array of spin wave generating elements, allowing the production of phase patterns of arbitrary form. The latter makes it possible to code logic states into the phases of propagating waves and exploit wave superposition for parallel data processing. We present the results of numerical modeling illustrating parallel database search and prime factorization. The results of numerical simulations on the database search are in agreement with the available experimental data. The use of classical wave interference may result in a significant speedup over conventional digital logic circuits in special-task data processing (e.g., √n in database search). Potentially, magnonic holographic devices can be implemented as complementary logic units to digital processors. Physical limitations and technological constraints of the spin wave approach are also discussed.

  9. Parallel image logical operations using cross correlation

    NASA Technical Reports Server (NTRS)

    Strong, J. P., III

    1972-01-01

    Methods are presented for counting areas in an image in a parallel manner using noncoherent optical techniques. The techniques presented include the Levialdi algorithm for counting, optical techniques for binary operations, and cross-correlation.

  10. Multi-input and binary reproducible, high bandwidth floating point adder in a collective network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Dong; Eisley, Noel A; Heidelberger, Philip

    To add floating point numbers in a parallel computing system, a collective logic device receives the floating point numbers from computing nodes. The collective logic device converts the floating point numbers to integer numbers, adds the integer numbers and generates a summation of the integer numbers, and then converts the summation back to a floating point number. The collective logic device performs the receiving, the converting of the floating point numbers, the adding, the generating and the converting of the summation in one pass. One pass indicates that the computing nodes send inputs only once to the collective logic device and receive outputs only once from the collective logic device.

  11. A CS1 pedagogical approach to parallel thinking

    NASA Astrophysics Data System (ADS)

    Rague, Brian William

    Almost all collegiate programs in Computer Science offer an introductory course in programming primarily devoted to communicating the foundational principles of software design and development. The ACM designates this introduction to computer programming course for first-year students as CS1, during which methodologies for solving problems within a discrete computational context are presented. Logical thinking is highlighted, guided primarily by a sequential approach to algorithm development and made manifest by typically using the latest, commercially successful programming language. In response to the most recent developments in accessible multicore computers, instructors of these introductory classes may wish to include training on how to design workable parallel code. Novel issues arise when programming concurrent applications which can make teaching these concepts to beginning programmers a seemingly formidable task. Student comprehension of design strategies related to parallel systems should be monitored to ensure an effective classroom experience. This research investigated the feasibility of integrating parallel computing concepts into the first-year CS classroom. To quantitatively assess student comprehension of parallel computing, an experimental educational study using a two-factor mixed group design was conducted to evaluate two instructional interventions in addition to a control group: (1) topic lecture only, and (2) topic lecture with laboratory work using a software visualization Parallel Analysis Tool (PAT) specifically designed for this project. A new evaluation instrument developed for this study, the Perceptions of Parallelism Survey (PoPS), was used to measure student learning regarding parallel systems. The results from this educational study show a statistically significant main effect among the repeated measures, implying that student comprehension levels of parallel concepts as measured by the PoPS improve immediately after the delivery of any initial three-week CS1 level module when compared with student comprehension levels just prior to starting the course. Survey results measured during the ninth week of the course reveal that performance levels remained high compared to pre-course performance scores. A second result produced by this study reveals no statistically significant interaction effect between the intervention method and student performance as measured by the evaluation instrument over three separate testing periods. However, visual inspection of survey score trends and the low p-value generated by the interaction analysis (0.062) indicate that further studies may verify improved concept retention levels for the lecture w/PAT group.

  12. Collective communications apparatus and method for parallel systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knies, Allan D.; Keppel, David Pardo; Woo, Dong Hyuk

    A collective communication apparatus and method for parallel computing systems. For example, one embodiment of an apparatus comprises a plurality of processor elements (PEs); collective interconnect logic to dynamically form a virtual collective interconnect (VCI) between the PEs at runtime without global communication among all of the PEs, the VCI defining a logical topology between the PEs in which each PE is directly communicatively coupled to only a subset of the remaining PEs; and execution logic to execute collective operations across the PEs, wherein one or more of the PEs receive first results from a first portion of the subset of the remaining PEs, perform a portion of the collective operations, and provide second results to a second portion of the subset of the remaining PEs.

  13. Design of neurophysiologically motivated structures of time-pulse coded neurons

    NASA Astrophysics Data System (ADS)

    Krasilenko, Vladimir G.; Nikolsky, Alexander I.; Lazarev, Alexander A.; Lobodzinska, Raisa F.

    2009-04-01

    A common methodology for a biologically motivated concept of building sensor processing systems with parallel input, picture-operand processing and time-pulse coding is described in this paper. Advantages of such coding for creating parallel programmable 2D-array structures for next-generation digital computers, which require untraditional numerical systems for processing analog, digital, hybrid and neuro-fuzzy operands, are shown. Simulation and implementation results for optoelectronic time-pulse coded intelligent neural elements (OETPCINE) realizing a wide set of neuro-fuzzy logic operations are considered. The simulation results confirm the engineering advantages, intelligence and circuit flexibility of OETPCINE for creating advanced 2D structures. The developed equivalentor-nonequivalentor neural element has a power consumption of 10 mW and a processing time of about 10-100 us.

  14. Effecting a broadcast with an allreduce operation on a parallel computer

    DOEpatents

    Almasi, Gheorghe; Archer, Charles J.; Ratterman, Joseph D.; Smith, Brian E.

    2010-11-02

    A parallel computer comprises a plurality of compute nodes organized into at least one operational group for collective parallel operations. Each compute node is assigned a unique rank and is coupled for data communications through a global combining network. One compute node is assigned to be a logical root. A send buffer and a receive buffer are configured. Each element of a contribution of the logical root in the send buffer is contributed. One or more zeros corresponding to the size of the element are injected. An allreduce operation with a bitwise OR using the element and the injected zeros is performed. The result of the allreduce operation is determined and stored in each receive buffer.
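
    A minimal sketch of the broadcast-as-allreduce idea described in this patent abstract, with plain Python lists standing in for the combining network and the OR reduction:

```python
# Broadcast via allreduce: every rank contributes zeros except the logical root,
# and a bitwise-OR reduction reproduces the root's buffer at every rank.
from functools import reduce

def broadcast_via_or_allreduce(contributions):
    """contributions[rank] is that rank's send buffer (integers)."""
    combined = reduce(lambda a, b: [x | y for x, y in zip(a, b)], contributions)
    return [combined[:] for _ in contributions]      # every rank receives the result

logical_root = 2
message = [0xDEAD, 0xBEEF, 42]
send_buffers = [[0, 0, 0] for _ in range(4)]          # non-root ranks inject zeros
send_buffers[logical_root] = message
print(broadcast_via_or_allreduce(send_buffers))       # every rank now holds the message
```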

  15. Regulatory logic of pan-neuronal gene expression in C. elegans

    PubMed Central

    Stefanakis, Nikolaos; Carrera, Ines; Hobert, Oliver

    2015-01-01

    While neuronal cell types display an astounding degree of phenotypic diversity, most if not all neuron types share a core panel of terminal features. However, little is known about how pan-neuronal expression patterns are genetically programmed. Through an extensive analysis of the cis-regulatory control regions of a battery of pan-neuronal C. elegans genes, including genes involved in synaptic vesicle biology and neuropeptide signaling, we define a common organizational principle in the regulation of pan-neuronal genes in the form of a surprisingly complex array of seemingly redundant, parallel-acting cis-regulatory modules that direct expression to broad, overlapping domains throughout the nervous system. These parallel-acting cis-regulatory modules are responsive to a multitude of distinct trans-acting factors. Neuronal gene expression programs therefore fall into two fundamentally distinct classes. Neuron type-specific genes are generally controlled by discrete and non-redundantly acting regulatory inputs, while pan-neuronal gene expression is controlled by diverse, coincident and seemingly redundant regulatory inputs. PMID:26291158

  16. Embedding global barrier and collective in torus network with each node combining input from receivers according to class map for output to senders

    DOEpatents

    Chen, Dong; Coteus, Paul W; Eisley, Noel A; Gara, Alan; Heidelberger, Philip; Senger, Robert M; Salapura, Valentina; Steinmacher-Burow, Burkhard; Sugawara, Yutaka; Takken, Todd E

    2013-08-27

    Embodiments of the invention provide a method, system and computer program product for embedding a global barrier and global interrupt network in a parallel computer system organized as a torus network. The computer system includes a multitude of nodes. In one embodiment, the method comprises taking inputs from a set of receivers of the nodes, dividing the inputs from the receivers into a plurality of classes, combining the inputs of each of the classes to obtain a result, and sending said result to a set of senders of the nodes. Embodiments of the invention provide a method, system and computer program product for embedding a collective network in a parallel computer system organized as a torus network. In one embodiment, the method comprises adding to a torus network a central collective logic to route messages among at least a group of nodes in a tree structure.

  17. DNA-mediated gold nanoparticle signal transducers for combinatorial logic operations and heavy metal ions sensing.

    PubMed

    Zhang, Yuhuan; Liu, Wei; Zhang, Wentao; Yu, Shaoxuan; Yue, Xiaoyue; Zhu, Wenxin; Zhang, Daohong; Wang, Yanru; Wang, Jianlong

    2015-10-15

    Herein, the structure of two DNA strands, which are complementary except for fourteen T-T and C-C mismatches, was programmed for the design of a combinatorial logic operation by utilizing the different protective capacities of single-stranded DNA, part-hybridized DNA and completely hybridized DNA on unmodified gold nanoparticles. In the presence of either Hg(2+) or Ag(+), T-Hg(2+)-T or C-Ag(+)-C coordination chemistry leads to the formation of part-hybridized DNA, which keeps the gold nanoparticles from aggregating after the addition of 40 μL of 0.2 M NaClO4 solution, but this protection is screened by 120 μL of 0.2 M NaClO4 solution. When Hg(2+) and Ag(+) coexist, completely hybridized DNA forms and the protection of the gold nanoparticles is lost in both the 40 μL and the 120 μL NaClO4 solutions. Benefiting from sharing the same inputs of Hg(2+) and Ag(+), OR and AND logic gates were easily integrated into a simple colorimetric combinatorial logic operation in one system, which makes it possible to execute logic gates in parallel to mimic arithmetic calculations on a binary digit. Furthermore, two other logic gates, INHIBIT1 and INHIBIT2, were realized and integrated with the OR logic gate for simultaneous qualitative discrimination and quantitative determination of Hg(2+) and Ag(+). The results indicate that the developed logic system, based on the different protective capacities of DNA structures on gold nanoparticles, provides a new pathway for the design of combinatorial logic operations in one system and presents a useful strategy for the development of advanced sensors, which may have potential applications in multiplex chemical analysis and molecular-scale computer design. Copyright © 2015 Elsevier B.V. All rights reserved.
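
    For orientation, the truth tables of the gates named above can be tabulated as follows (a sketch; the mapping of salt conditions and colours to logic levels follows the paper and is not modeled here):

```python
# Truth-table sketch of the combinatorial logic described above: the same two ionic
# inputs (Hg2+ = a, Ag+ = b) feed OR, AND and INHIBIT gates executed in parallel.
def or_gate(a, b):
    return int(a or b)

def and_gate(a, b):
    return int(a and b)

def inhibit(a, b):          # 1 only when a is present and b is absent
    return int(a and not b)

print(" a  b | OR AND INH(a,b) INH(b,a)")
for a in (0, 1):
    for b in (0, 1):
        print(f" {a}  {b} |  {or_gate(a, b)}   {and_gate(a, b)}      {inhibit(a, b)}        {inhibit(b, a)}")
```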

  18. ProperCAD: A portable object-oriented parallel environment for VLSI CAD

    NASA Technical Reports Server (NTRS)

    Ramkumar, Balkrishna; Banerjee, Prithviraj

    1993-01-01

    Most parallel algorithms for VLSI CAD proposed to date have one important drawback: they work efficiently only on machines that they were designed for. As a result, algorithms designed to date are dependent on the architecture for which they are developed and do not port easily to other parallel architectures. A new project under way to address this problem is described. A Portable object-oriented parallel environment for CAD algorithms (ProperCAD) is being developed. The objectives of this research are (1) to develop new parallel algorithms that run in a portable object-oriented environment (CAD algorithms using a general purpose platform for portable parallel programming called CARM is being developed and a C++ environment that is truly object-oriented and specialized for CAD applications is also being developed); and (2) to design the parallel algorithms around a good sequential algorithm with a well-defined parallel-sequential interface (permitting the parallel algorithm to benefit from future developments in sequential algorithms). One CAD application that has been implemented as part of the ProperCAD project, flat VLSI circuit extraction, is described. The algorithm, its implementation, and its performance on a range of parallel machines are discussed in detail. It currently runs on an Encore Multimax, a Sequent Symmetry, Intel iPSC/2 and i860 hypercubes, a NCUBE 2 hypercube, and a network of Sun Sparc workstations. Performance data for other applications that were developed are provided: namely test pattern generation for sequential circuits, parallel logic synthesis, and standard cell placement.

  19. AMS data production facilities at science operations center at CERN

    NASA Astrophysics Data System (ADS)

    Choutko, V.; Egorov, A.; Eline, A.; Shan, B.

    2017-10-01

    The Alpha Magnetic Spectrometer (AMS) is a high energy physics experiment on board the International Space Station (ISS). This paper presents the hardware and software facilities of the Science Operations Center (SOC) at CERN. Data production is built around a production server, a scalable distributed service which links together a set of different programming modules for science data transformation and reconstruction. The server has the capacity to manage 1000 parallel job producers, i.e. up to 32K logical processors. A monitoring and management tool with a production GUI is also described.

  20. Tensor Arithmetic, Geometric and Mathematic Principles of Fluid Mechanics in Implementation of Direct Computational Experiments

    NASA Astrophysics Data System (ADS)

    Bogdanov, Alexander; Khramushin, Vasily

    2016-02-01

    The architecture of a digital computing system determines the technical foundation of a unified mathematical language for exact arithmetic-logical description of phenomena and laws of continuum mechanics for applications in fluid mechanics and theoretical physics. The deep parallelization of the computing processes results in functional programming at a new technological level, providing traceability of the computing processes with automatic application of multiscale hybrid circuits and adaptive mathematical models for the true reproduction of the fundamental laws of physics and continuum mechanics.

  1. Enhancing programming logic thinking using analogy mapping

    NASA Astrophysics Data System (ADS)

    Sukamto, R. A.; Megasari, R.

    2018-05-01

    Programming logic thinking is the most important competence for computer science students. However, programming is one of the most difficult subjects in a computer science program. This paper reports our work on enhancing students' programming logic thinking using Analogy Mapping in a basic programming course. Analogy Mapping is a computer application which converts source code into analogy images. This research used a time-series evaluation, and the results showed that Analogy Mapping can enhance students' programming logic thinking.

  2. Devil is in the details: Using logic models to investigate program process.

    PubMed

    Peyton, David J; Scicchitano, Michael

    2017-12-01

    Theory-based logic models are commonly developed as part of requirements for grant funding. As a tool for communicating complex social programs, theory-based logic models are an effective form of visual communication. However, after initial development, theory-based logic models are often abandoned and remain in their initial form despite changes in the program process. This paper examines the potential benefits of committing time and resources to revising the initial theory-driven logic model and developing detailed logic models that describe key activities, so as to accurately reflect the program and assist in effective program management. The authors use a funded special education teacher preparation program to exemplify the utility of drill-down logic models. The paper concludes with lessons learned from the iterative revision process and suggests how the process can lead to more flexible and calibrated program management. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Designing a Software Tool for Fuzzy Logic Programming

    NASA Astrophysics Data System (ADS)

    Abietar, José M.; Morcillo, Pedro J.; Moreno, Ginés

    2007-12-01

    Fuzzy Logic Programming is an interesting and still growing research area that brings together efforts to introduce fuzzy logic into logic programming (LP), in order to incorporate more expressive resources into such languages for dealing with uncertainty and approximate reasoning. The multi-adjoint logic programming approach is a recent and extremely flexible fuzzy logic paradigm for which, unfortunately, we have not found practical tools implemented so far. In this work, we describe a prototype system which is able to directly translate fuzzy logic programs into Prolog code in order to safely execute these residual programs inside any standard Prolog interpreter in a completely transparent way for the final user. We think that the development of such fuzzy languages and programming tools might play an important role in the design of advanced software applications for computational physics, chemistry, mathematics, medicine, industrial control and so on.

  4. Logic models as a tool for sexual violence prevention program development.

    PubMed

    Hawkins, Stephanie R; Clinton-Sherrod, A Monique; Irvin, Neil; Hart, Laurie; Russell, Sarah Jane

    2009-01-01

    Sexual violence is a growing public health problem, and there is an urgent need to develop sexual violence prevention programs. Logic models have emerged as a vital tool in program development. The Centers for Disease Control and Prevention funded an empowerment evaluation designed to work with programs focused on the prevention of first-time male perpetration of sexual violence, and it included as one of its goals, the development of program logic models. Two case studies are presented that describe how significant positive changes can be made to programs as a result of their developing logic models that accurately describe desired outcomes. The first case study describes how the logic model development process made an organization aware of the importance of a program's environmental context for program success; the second case study demonstrates how developing a program logic model can elucidate gaps in organizational programming and suggest ways to close those gaps.

  5. Generalized look-ahead number conversion from signed digit to complement representation with optical logic operations

    NASA Astrophysics Data System (ADS)

    Qian, Feng; Li, Guoqiang

    2001-12-01

    In this paper a generalized look-ahead logic algorithm for number conversion from signed-digit to its complement representation is developed. By properly encoding the signed digits, all the operations are performed by binary logic, and unified logical expressions can be obtained for conversion from modified-signed-digit (MSD) to 2's complement, trinary signed-digit (TSD) to 3's complement, and quaternary signed-digit (QSD) to 4's complement. For optical implementation, a parallel logical array module using electron-trapping device is employed, which is suitable for realizing complex logic functions in the form of sum-of-product. The proposed algorithm and architecture are compatible with a general-purpose optoelectronic computing system.
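
    A baseline illustration of the conversion itself, assuming the usual positive-part/negative-part decomposition of a signed-digit word; it does not reproduce the paper's look-ahead logic:

```python
# Baseline illustration (not the look-ahead scheme) of converting a modified
# signed-digit word to 2's complement: subtract the word's negative part from its
# positive part in ordinary binary arithmetic, then mask to the target width.
def msd_to_twos_complement(digits, width=8):
    """digits[0] is the least-significant digit; each digit is -1, 0, or 1."""
    pos = sum((d == 1) << i for i, d in enumerate(digits))
    neg = sum((d == -1) << i for i, d in enumerate(digits))
    return format((pos - neg) & ((1 << width) - 1), f"0{width}b")

print(msd_to_twos_complement([1, -1, 1]))    # value 1 - 2 + 4 = 3  -> 00000011
print(msd_to_twos_complement([-1, 0, -1]))   # value -1 - 4 = -5    -> 11111011
```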

  6. Computing single step operators of logic programming in radial basis function neural networks

    NASA Astrophysics Data System (ADS)

    Hamadneh, Nawaf; Sathasivam, Saratha; Choon, Ong Hong

    2014-07-01

    Logic programming is the process that leads from an original formulation of a computing problem to executable programs. A normal logic program consists of a finite set of clauses. A valuation I of a logic program is a mapping from ground atoms to false or true. The single-step operator of any logic program is defined as a function Tp: I -> I. Logic programming is well suited to building artificial intelligence systems. In this study, we established a new technique to compute the single-step operators of logic programs in radial basis function neural networks. To do that, we proposed a new technique to generate the training data sets for the single-step operators. The training data sets are used to build the neural networks. We used recurrent radial basis function neural networks to reach the steady state (the fixed point of the operators). To improve the performance of the neural networks, we used the particle swarm optimization algorithm to train the networks.
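
    A minimal propositional sketch of the single-step (immediate consequence) operator that serves as the training target; the RBF network and particle swarm training are not reproduced here:

```python
# Single-step (immediate consequence) operator T_P for a propositional normal logic
# program: map an interpretation to the set of clause heads whose bodies it satisfies.
def tp(program, interpretation):
    """program: list of (head, [positive body atoms], [negated body atoms]).
    interpretation: set of atoms currently assigned True."""
    return {head for head, pos, neg in program
            if all(a in interpretation for a in pos)
            and all(a not in interpretation for a in neg)}

prog = [("a", [], []),            # a.
        ("b", ["a"], []),         # b :- a.
        ("c", ["b"], ["d"])]      # c :- b, not d.

i = set()
for _ in range(4):                # iterate T_P to its fixed point
    i = tp(prog, i)
print(sorted(i))                  # ['a', 'b', 'c']
```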

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lusk, Ewing; Butler, Ralph; Pieper, Steven C.

    Here, we take a historical approach to our presentation of self-scheduled task parallelism, a programming model with its origins in early irregular and nondeterministic computations encountered in automated theorem proving and logic programming. We show how an extremely simple task model has evolved into a system, asynchronous dynamic load balancing (ADLB), and a scalable implementation capable of supporting sophisticated applications on today's (and tomorrow's) largest supercomputers; and we illustrate the use of ADLB with a Green's function Monte Carlo application, a modern, mature nuclear physics code in production use. Our lesson is that by surrendering a certain amount of generality and thus applicability, a minimal programming model (in terms of its basic concepts and the size of its application programmer interface) can achieve extreme scalability without introducing complexity.
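
    In the spirit of the self-scheduled task model described above (though not the ADLB API), a minimal Python sketch where idle workers pull the next task from a shared pool:

```python
# Self-scheduled task parallelism in miniature: workers grab the next task only when
# they finish the previous one, which balances irregular task costs automatically.
from concurrent.futures import ProcessPoolExecutor

def work(task):
    """Stand-in for an irregular task whose cost varies with the input."""
    total = 0
    for k in range(1, task * 100_000):
        total += k % 7
    return task, total

if __name__ == "__main__":
    tasks = list(range(1, 33))                      # irregular task sizes
    with ProcessPoolExecutor() as pool:
        # chunksize=1 means each worker pulls one task at a time from the shared pool,
        # which is the essence of self-scheduling for load balance.
        for task, result in pool.map(work, tasks, chunksize=1):
            pass
    print("all tasks completed")
```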

  8. Logic Models for Program Design, Implementation, and Evaluation: Workshop Toolkit. REL 2015-057

    ERIC Educational Resources Information Center

    Shakman, Karen; Rodriguez, Sheila M.

    2015-01-01

    The Logic Model Workshop Toolkit is designed to help practitioners learn the purpose of logic models, the different elements of a logic model, and the appropriate steps for developing and using a logic model for program evaluation. Topics covered in the sessions include an overview of logic models, the elements of a logic model, an introduction to…

  9. Parallel eigenanalysis of finite element models in a completely connected architecture

    NASA Technical Reports Server (NTRS)

    Akl, F. A.; Morel, M. R.

    1989-01-01

    A parallel algorithm is presented for the solution of the generalized eigenproblem in linear elastic finite element analysis, (K)(phi) = (M)(phi)(omega), where (K) and (M) are of order N, and (omega) is of order q. The concurrent solution of the eigenproblem is based on the multifrontal/modified subspace method and is achieved in a completely connected parallel architecture in which each processor is allowed to communicate with all other processors. The algorithm was successfully implemented on a tightly coupled multiple-instruction multiple-data parallel processing machine, the Cray X-MP. A finite element model is divided into m domains, each of which is assumed to contain n elements. Each domain is then assigned to a processor, or to a logical processor (task) if the number of domains exceeds the number of physical processors. The macrotasking library routines are used in mapping each domain to a user task. Computational speed-up and efficiency are used to determine the effectiveness of the algorithm. The effects of the number of domains, the number of degrees of freedom located along the global fronts and the dimension of the subspace on the performance of the algorithm are investigated. A parallel finite element dynamic analysis program, p-feda, is documented and the performance of its subroutines in a parallel environment is analyzed.
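
    A serial reference computation of the generalized eigenproblem, using scipy on toy matrices, clarifies what the parallel multifrontal/subspace solver computes (an illustrative sketch, not the p-feda code):

```python
# Serial reference for the generalized eigenproblem K*phi = M*phi*omega that the
# parallel algorithm targets; scipy stands in for the finite element solver and the
# matrices are toy data.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
a = rng.standard_normal((6, 6))
K = a @ a.T + 6 * np.eye(6)                  # symmetric positive definite "stiffness"
M = np.diag(rng.uniform(1.0, 2.0, 6))        # lumped (diagonal) "mass" matrix

omega, phi = eigh(K, M)                      # solves K @ phi = M @ phi @ diag(omega)
q = 3                                        # retain the q lowest eigenpairs, as in subspace iteration
print(omega[:q])
residual = K @ phi[:, :q] - M @ phi[:, :q] @ np.diag(omega[:q])
print(np.max(np.abs(residual)))              # close to machine precision
```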

  10. Programmed optoelectronic time-pulse coded relational processor as base element for sorting neural networks

    NASA Astrophysics Data System (ADS)

    Krasilenko, Vladimir G.; Bardachenko, Vitaliy F.; Nikolsky, Alexander I.; Lazarev, Alexander A.

    2007-04-01

    In this paper we show that the biologically motivated concept of time-pulse encoding offers a number of advantages (a single methodological basis, universality, simplicity of tuning, training and programming, among others) when creating and designing sensor systems with parallel input-output and processing, and 2D structures for next-generation hybrid and neuro-fuzzy neurocomputers. We show the principles of construction of programmable relational optoelectronic time-pulse coded processors and the continuous logic, order logic and temporal wave processes that underlie them. We consider a structure that extracts the analog signal of a given grade (order) and sorts analog and time-pulse coded variables. We offer an optoelectronic realization of such basic relational elements of order logic, consisting of time-pulse coded phototransformers (pulse-width and pulse-phase modulators) with direct and complementary outputs, a sorting network built on logical elements, and programmable commutation blocks. We estimate the basic technical parameters of such devices and processors by simulation and experimental research: optical input signal power of 0.2-20 μW, processing time of microseconds, supply voltage of 1.5-10 V, power consumption of hundreds of microwatts per element, extended functionality, and training possibilities. We discuss possible rules and principles of training and of programmable tuning to a required function or relational operation, and the realization of hardware blocks for modifications of such processors. We show how, on the basis of such quasi-universal hardware blocks and flexible programmable tuning, it is possible to create sorting machines, neural networks and hybrid data-processing systems with untraditional numerical systems and picture operands.

  11. Deciding Full Branching Time Logic by Program Transformation

    NASA Astrophysics Data System (ADS)

    Pettorossi, Alberto; Proietti, Maurizio; Senni, Valerio

    We present a method based on logic program transformation for verifying Computation Tree Logic (CTL*) properties of finite state reactive systems. The finite state systems and the CTL* properties we want to verify are encoded as logic programs on infinite lists. Our verification method consists of two steps. In the first step we transform the logic program that encodes the given system and the given property into a monadic ω-program, that is, a stratified program defining nullary or unary predicates on infinite lists. This transformation is performed by applying unfold/fold rules that preserve the perfect model of the initial program. In the second step we verify the property of interest by using a proof method for monadic ω-programs.

  12. Multi-variants synthesis of Petri nets for FPGA devices

    NASA Astrophysics Data System (ADS)

    Bukowiec, Arkadiusz; Doligalski, Michał

    2015-09-01

    A new method for the synthesis of application-specific logic controllers for FPGA devices is presented. The control algorithm is specified with a control-interpreted Petri net (PT type), which allows parallel processes to be specified easily. The Petri net is decomposed into state-machine-type subnets, each representing one parallel process; algorithms for coloring Petri nets are applied for this purpose. Two approaches to such decomposition are presented: with doublers of macroplaces or with one global wait place. Next, the subnets are implemented as a two-level logic circuit of the controller. The levels of the logic circuit are obtained by architectural decomposition: the first-level combinational circuit generates the next places, and the second-level decoder generates the output symbols. Two variants of such circuits are worked out: with one shared operational memory or with many flexible distributed memories as the decoder. Variants of Petri net decomposition and logic circuit structures can be combined without restriction, which leads to four variants of multi-variant synthesis.

  13. A 32-bit Ultrafast Parallel Correlator using Resonant Tunneling Devices

    NASA Technical Reports Server (NTRS)

    Kulkarni, Shriram; Mazumder, Pinaki; Haddad, George I.

    1995-01-01

    An ultrafast 32-bit pipelined correlator has been implemented using resonant tunneling diodes (RTDs) and heterojunction bipolar transistors (HBTs). The negative differential resistance (NDR) characteristic of RTDs is the basis of logic gates with a self-latching property that eliminates the pipeline area and delay overheads which limit throughput in conventional technologies. The circuit topology also allows threshold logic functions such as minority/majority to be implemented in a compact manner, resulting in a reduction of the overall complexity and delay of arbitrary logic circuits. The parallel correlator is an essential component in code-division multiple-access (CDMA) transceivers, used for the continuous calculation of the correlation between an incoming data stream and a PN sequence. Simulation results show that a nano-pipelined correlator can provide an effective throughput of one 32-bit correlation every 100 picoseconds, using minimal hardware, with a power dissipation of 1.5 watts. RTD plus HBT logic gates have been fabricated, and the RTD plus HBT correlator is compared with state-of-the-art complementary metal oxide semiconductor (CMOS) implementations.
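
    What the correlator computes can be sketched in software as a sliding correlation between a +/-1 chip stream and a 32-chip PN sequence (numpy sketch; the RTD/HBT pipelining is a hardware property not modeled here):

```python
# Software form of the 32-bit correlation: one 32-tap dot product per input offset,
# with a peak of 32 where the embedded PN sequence begins.
import numpy as np

rng = np.random.default_rng(1)
pn = rng.choice([-1, 1], size=32)              # reference PN sequence
stream = np.concatenate([rng.choice([-1, 1], size=100),
                         pn,                    # the PN code embedded at offset 100
                         rng.choice([-1, 1], size=100)])

corr = np.correlate(stream, pn, mode="valid")   # sliding correlation over all offsets
peak = int(np.argmax(corr))
print(peak, corr[peak])                         # peak value 32 at the embedding offset
```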

  14. Code conversion from signed-digit to complement representation based on look-ahead optical logic operations

    NASA Astrophysics Data System (ADS)

    Li, Guoqiang; Qian, Feng

    2001-11-01

    We present, for the first time to our knowledge, a generalized look-ahead logic algorithm for number conversion from signed-digit to complement representation. By properly encoding the signed digits, all the operations are performed by binary logic, and unified logical expressions can be obtained for conversion from modified signed-digit (MSD) to 2's complement, trinary signed-digit (TSD) to 3's complement, and quaternary signed-digit (QSD) to 4's complement. For optical implementation, a parallel logical array module using an electron-trapping device is employed, and experimental results are shown. This optical module is suitable for implementing complex logic functions in the form of a sum of products. The algorithm and architecture are compatible with a general-purpose optoelectronic computing system.

  15. The myocardial microangiopathy in human and experimental diabetes mellitus. (A microscopic, ultrastructural, morphometric and computer-assisted symbolic-logic analysis).

    PubMed

    Taşcă, C; Stefăneanu, L; Vasilescu, C

    1986-01-01

    The following microscopical aspects were found in the small intramural arteries in the myocardium of 30 diabetic patients: endothelial proliferation with focal protuberances leading to partial narrowing of the lumen; increased thickness of the arterial wall due to fibrosis and accumulation of neutral mucopolysaccharides; and alteration of elastic fibres. Morphometrically, the arterial wall thickness and the arterial diameter were increased, whereas the arterial density decreased in the diabetic heart. In 25 rats with streptozotocin-induced diabetes, the small intramyocardial arteries were investigated at 11 to 40 weeks of the diabetic state. Using morphometrical analysis, a constant increase of arterial wall thickness paralleling the diabetes duration was found. Microscopically, the lesions consist of endothelial proliferation with bridging across the vascular lumen and slight perivascular and diffuse fibrosis. Ultrastructurally, the capillary basal lamina was thickened in the diabetic myocardium. In order to investigate the morphometrical data we used symbolic logic as a decision method, applying an original computer program based on the Quine-McCluskey algorithm. All our results, together with the final symbolic-logic expression, suggest that damage to the small intramyocardial arteries plays an important role in the pathogenesis of diabetic cardiomyopathy.

  16. Field-Programmable Gate Array Computer in Structural Analysis: An Initial Exploration

    NASA Technical Reports Server (NTRS)

    Singleterry, Robert C., Jr.; Sobieszczanski-Sobieski, Jaroslaw; Brown, Samuel

    2002-01-01

    This paper reports on an initial assessment of using a Field-Programmable Gate Array (FPGA) computational device as a new tool for solving structural mechanics problems. A FPGA is an assemblage of binary gates arranged in logical blocks that are interconnected via software in a manner dependent on the algorithm being implemented and can be reprogrammed thousands of times per second. In effect, this creates a computer specialized for the problem that automatically exploits all the potential for parallel computing intrinsic in an algorithm. This inherent parallelism is the most important feature of the FPGA computational environment. It is therefore important that if a problem offers a choice of different solution algorithms, an algorithm of a higher degree of inherent parallelism should be selected. It is found that in structural analysis, an 'analog computer' style of programming, which solves problems by direct simulation of the terms in the governing differential equations, yields a more favorable solution algorithm than current solution methods. This style of programming is facilitated by a 'drag-and-drop' graphic programming language that is supplied with the particular type of FPGA computer reported in this paper. Simple examples in structural dynamics and statics illustrate the solution approach used. The FPGA system also allows linear scalability in computing capability. As the problem grows, the number of FPGA chips can be increased with no loss of computing efficiency due to data flow or algorithmic latency that occurs when a single problem is distributed among many conventional processors that operate in parallel. This initial assessment finds the FPGA hardware and software to be in their infancy in regard to the user conveniences; however, they have enormous potential for shrinking the elapsed time of structural analysis solutions if programmed with algorithms that exhibit inherent parallelism and linear scalability. This potential warrants further development of FPGA-tailored algorithms for structural analysis.
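
    The "analog computer" style of programming mentioned above can be sketched for an assumed one-degree-of-freedom spring-mass-damper: each term of the governing equation is evaluated directly and integrated explicitly, a dataflow an FPGA can evaluate fully in parallel every time step.

```python
# Direct simulation of the terms of m*a + c*v + k*x = f(t) by explicit time stepping
# (an assumed example; parameters and the forcing are illustrative).
m, c, k = 1.0, 0.1, 40.0
dt, steps = 1e-3, 5000
x, v = 0.01, 0.0                       # initial displacement and velocity

for n in range(steps):
    f = 0.0                            # external force (free vibration here)
    a = (f - c * v - k * x) / m        # acceleration from the governing equation
    v += a * dt                        # integrate acceleration -> velocity
    x += v * dt                        # integrate velocity -> displacement

print(x, v)                            # lightly damped oscillation about zero
```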

  17. Performance bounds on parallel self-initiating discrete-event simulations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.

    1990-01-01

    The use of massively parallel architectures to execute discrete-event simulations of what are termed self-initiating models is considered. A logical process in a self-initiating model schedules its own state re-evaluation times, independently of any other logical process, and sends its new state to other logical processes following the re-evaluation. The interest is in the effects of that communication on synchronization. The performance of various synchronization protocols is considered by deriving upper and lower bounds on optimal performance, upper bounds on Time Warp's performance, and lower bounds on the performance of a new conservative protocol. The analysis of Time Warp includes the overhead costs of state-saving and rollback. The analysis points out sufficient conditions for the conservative protocol to outperform Time Warp. The analysis also quantifies the sensitivity of performance to message fan-out, lookahead ability, and the probability distributions underlying the simulation.

  18. Encoding Schemes For A Digital Optical Multiplier Using The Modified Signed-Digit Number Representation

    NASA Astrophysics Data System (ADS)

    Lasher, Mark E.; Henderson, Thomas B.; Drake, Barry L.; Bocker, Richard P.

    1986-09-01

    The modified signed-digit (MSD) number representation offers fully parallel, carry-free addition. An MSD adder has been described by the authors. This paper describes how the adder can be used in a tree structure to implement an optical multiply algorithm. Three different optical schemes, involving position, polarization, and intensity encoding, are proposed for realizing the trinary logic system. When configured in the generic multiplier architecture, these schemes yield the combinatorial logic necessary to carry out the multiplication algorithm. The optical systems are essentially three-dimensional arrangements composed of modular units. This modularity is important for design considerations, while the parallelism and noninterfering communication channels of optical systems are important from the standpoint of reduced complexity. The authors have also designed electronic hardware to demonstrate and model the combinatorial logic required to carry out the algorithm. The electronic and proposed optical systems are compared in terms of complexity and speed.
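
    For readers unfamiliar with the MSD representation, the sketch below illustrates the two-step, carry-free addition on which such a multiplier tree is built: every digit position first forms a transfer and an interim sum by looking only at its own and the next-lower digit pair, and all final sum digits are then formed simultaneously, so no carry ever propagates more than one position. The digit-selection rules follow the standard MSD scheme and are not copied from the paper.

    ```python
    # Modified signed-digit (MSD) addition: radix 2, digits in {-1, 0, 1}.
    # Two fully parallel steps; a transfer never propagates more than one position.

    def msd_add(a, b):
        """a, b: lists of MSD digits, least-significant first, equal length."""
        n = len(a)
        t = [0] * (n + 1)   # transfer digits (t[i+1] produced at position i)
        w = [0] * (n + 1)   # interim sum digits
        for i in range(n):                       # step 1: all positions in parallel
            s = a[i] + b[i]
            lower = (a[i - 1] + b[i - 1]) if i > 0 else 0   # reference digit pair
            if s == 2:    t[i + 1], w[i] = 1, 0
            elif s == -2: t[i + 1], w[i] = -1, 0
            elif s == 1:  t[i + 1], w[i] = (1, -1) if lower >= 0 else (0, 1)
            elif s == -1: t[i + 1], w[i] = (0, -1) if lower >= 0 else (-1, 1)
            # s == 0 leaves (0, 0)
        return [w[i] + t[i] for i in range(n + 1)]   # step 2: carry-free final sum

    def msd_to_int(d):
        return sum(digit * (2 ** i) for i, digit in enumerate(d))

    a = [1, -1, 0, 1]    # 1 - 2 + 8 = 7
    b = [1, 0, 1, -1]    # 1 + 4 - 8 = -3
    s = msd_add(a, b)
    assert msd_to_int(s) == msd_to_int(a) + msd_to_int(b) == 4
    print("sum digits (LSB first):", s)
    ```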

  19. Broadcasting a message in a parallel computer

    DOEpatents

    Archer, Charles J; Faraj, Ahmad A

    2013-04-16

    Methods, systems, and products are disclosed for broadcasting a message in a parallel computer that includes: transmitting, by the logical root to all of the nodes directly connected to the logical root, a message; and for each node except the logical root: receiving the message; if that node is the physical root, then transmitting the message to all of the child nodes except the child node from which the message was received; if that node received the message from a parent node and if that node is not a leaf node, then transmitting the message to all of the child nodes; and if that node received the message from a child node and if that node is not the physical root, then transmitting the message to all of the child nodes except the child node from which the message was received and transmitting the message to the parent node.
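
    A minimal sketch (assuming a simple binary tree and synchronous message delivery, neither of which is specified in the record above) of the forwarding rules just quoted: the logical root starts the broadcast, and every other node applies the three conditional rules when it receives the message.

    ```python
    from collections import deque

    # Toy simulation of the broadcast rules: a tree rooted at the "physical root",
    # with the broadcast originating at an arbitrary "logical root".
    children = {0: [1, 2], 1: [3, 4], 2: [5, 6], 3: [], 4: [], 5: [], 6: []}
    parent = {c: p for p, cs in children.items() for c in cs}
    PHYSICAL_ROOT, LOGICAL_ROOT = 0, 4

    received = {LOGICAL_ROOT}
    queue = deque()

    # The logical root transmits to all nodes directly connected to it.
    neighbors = children[LOGICAL_ROOT] + ([parent[LOGICAL_ROOT]] if LOGICAL_ROOT != PHYSICAL_ROOT else [])
    for nbr in neighbors:
        queue.append((nbr, LOGICAL_ROOT))          # (receiver, sender)

    while queue:
        node, sender = queue.popleft()
        received.add(node)
        if node == PHYSICAL_ROOT:
            # physical root: forward to all children except the one it heard from
            targets = [c for c in children[node] if c != sender]
        elif sender == parent.get(node):
            # received from the parent: forward to all children (if not a leaf)
            targets = children[node]
        else:
            # received from a child (and not the physical root):
            # forward to the other children and to the parent
            targets = [c for c in children[node] if c != sender] + [parent[node]]
        for tgt in targets:
            queue.append((tgt, node))

    assert received == set(children)               # every node got the message
    print("all", len(received), "nodes received the broadcast from node", LOGICAL_ROOT)
    ```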

  20. Broadcasting a message in a parallel computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    Methods, systems, and products are disclosed for broadcasting a message in a parallel computer that includes: transmitting, by the logical root to all of the nodes directly connected to the logical root, a message; and for each node except the logical root: receiving the message; if that node is the physical root, then transmitting the message to all of the child nodes except the child node from which the message was received; if that node received the message from a parent node and if that node is not a leaf node, then transmitting the message to all of the child nodes; and if that node received the message from a child node and if that node is not the physical root, then transmitting the message to all of the child nodes except the child node from which the message was received and transmitting the message to the parent node.

  1. Visual-area coding technique (VACT): optical parallel implementation of fuzzy logic and its visualization with the digital-halftoning process

    NASA Astrophysics Data System (ADS)

    Konishi, Tsuyoshi; Tanida, Jun; Ichioka, Yoshiki

    1995-06-01

    A novel technique, the visual-area coding technique (VACT), for the optical implementation of fuzzy logic with the capability of visualization of the results is presented. This technique is based on the microfont method and is considered to be an instance of digitized analog optical computing. Huge amounts of data can be processed in fuzzy logic with the VACT. In addition, real-time visualization of the processed result can be accomplished.

  2. Program logic: a framework for health program design and evaluation - the Pap nurse in general practice program.

    PubMed

    Hallinan, Christine M

    2010-01-01

    In this paper, program logic is used to 'map out' the planning, development and evaluation of the Pap nurse program in the Australian general practice arena. The incorporation of program logic into the evaluative process supports a greater appreciation of the theoretical assumptions and external influences that underpin general practice Pap nurse activity. The creation of a program logic model is a conscious strategy that results in an explicit understanding of the challenges ahead, the resources available and the time frames for outcomes. Program logic also enables recognition that all players in the general practice arena need to be acknowledged by policy makers, bureaucrats and program designers when addressing, through policy, issues relating to equity and accessibility of health initiatives. Logic modelling allows decision makers to consider the complexities of causal associations when developing health care proposals and programs. It enables the Pap nurse in general practice program to be represented diagrammatically by linking outcomes (short, medium and long term) with both the program activities and program assumptions. The research methodology used in the evaluation of the Pap nurse in general practice program includes a descriptive study design and the incorporation of program logic, with a retrospective analysis of Australian data from 2001 to 2009. For the purposes of gaining both empirical and contextual data for this paper, a data set analysis and literature review were performed. The application of program logic as an evaluative tool for analysis of the Pap PN incentive program facilitates a greater understanding of complex general practice activity triggers, and allows this greater understanding to be incorporated into policy to facilitate Pap PN activity, increase general practice cervical smear rates and ultimately decrease the burden of disease.

  3. Electronic logic to enhance switch reliability in detecting openings and closures of redundant switches

    DOEpatents

    Cooper, James A.

    1986-01-01

    A logic circuit is used to enhance redundant switch reliability. Two or more switches are monitored for logical high or low output. The output for the logic circuit produces a redundant and failsafe representation of the switch outputs. When both switch outputs are high, the output is high. Similarly, when both switch outputs are low, the logic circuit's output is low. When the output states of the two switches do not agree, the circuit resolves the conflict by memorizing the last output state which both switches were simultaneously in and produces the logical complement of this output state. Thus, the logic circuit of the present invention allows the redundant switches to be treated as if they were in parallel when the switches are open and as if they were in series when the switches are closed. A failsafe system having maximum reliability is thereby produced.
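
    A behavioral sketch (ours, not the patent's circuit) of the resolution rule described above: when the two switch readings agree, that value is the output and is remembered; when they disagree, the output is the complement of the last agreed state.

    ```python
    # Behavioral model of the redundant-switch logic: agree -> pass through and
    # remember; disagree -> output the complement of the last agreed state.
    class RedundantSwitchLogic:
        def __init__(self, initial_state=False):
            self.last_agreed = initial_state

        def output(self, s1: bool, s2: bool) -> bool:
            if s1 == s2:
                self.last_agreed = s1      # both switches agree: remember and pass through
                return s1
            return not self.last_agreed    # conflict: complement of the last agreement

    logic = RedundantSwitchLogic()
    print(logic.output(True, True))    # True  (both closed/high)
    print(logic.output(True, False))   # False (conflict after an agreed True -> complement)
    print(logic.output(False, False))  # False (both open/low)
    print(logic.output(True, False))   # True  (conflict after an agreed False -> complement)
    ```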

  4. Electronic logic for enhanced switch reliability

    DOEpatents

    Cooper, J.A.

    1984-01-20

    A logic circuit is used to enhance redundant switch reliability. Two or more switches are monitored for logical high or low output. The output for the logic circuit produces a redundant and fail-safe representation of the switch outputs. When both switch outputs are high, the output is high. Similarly, when both switch outputs are low, the logic circuit's output is low. When the output states of the two switches do not agree, the circuit resolves the conflict by memorizing the last output state which both switches were simultaneously in and produces the logical complement of this output state. Thus, the logic circuit of the present invention allows the redundant switches to be treated as if they were in parallel when the switches are open and as if they were in series when the switches are closed. A failsafe system having maximum reliability is thereby produced.

  5. Use of program logic models in the Southern Rural Access Program evaluation.

    PubMed

    Pathman, Donald; Thaker, Samruddhi; Ricketts, Thomas C; Albright, Jennifer B

    2003-01-01

    The Southern Rural Access Program (SRAP) evaluation team used program logic models to clarify grantees' activities, objectives, and timelines. This information was used to benchmark data from grantees' progress reports to assess the program's successes. This article presents a brief background on the use of program logic models--essentially charts or diagrams specifying a program's planned activities, objectives, and goals--for evaluating and managing a program. It discusses the structure of the logic models chosen for the SRAP and how the model concept was introduced to the grantees to promote acceptance and use of the models. The article describes how the models helped clarify the program's objectives and helped lead agencies plan and manage the many program initiatives and subcontractors in their states. Models also provided a framework for grantees to report their progress to the National Program Office and evaluators and promoted the evaluators' visibility and acceptance by the grantees. Program logics, however, increased grantees' reporting requirements and demanded substantial time of the evaluators. Program logic models, on balance, proved their merit in the SRAP through their contributions to its management and evaluation and by providing a better understanding of the program's initiatives, successes, and potential impact.

  6. Fuzzy Logic Engine

    NASA Technical Reports Server (NTRS)

    Howard, Ayanna

    2005-01-01

    The Fuzzy Logic Engine is a software package that enables users to embed fuzzy-logic modules into their application programs. Fuzzy logic is useful as a means of formulating human expert knowledge and translating it into software to solve problems. Fuzzy logic provides flexibility for modeling relationships between input and output information and is distinguished by its robustness with respect to noise and variations in system parameters. In addition, linguistic fuzzy sets and conditional statements allow systems to make decisions based on imprecise and incomplete information. The user of the Fuzzy Logic Engine need not be an expert in fuzzy logic: it suffices to have a basic understanding of how linguistic rules can be applied to the user's problem. The Fuzzy Logic Engine is divided into two modules: (1) a graphical-interface software tool for creating linguistic fuzzy sets and conditional statements and (2) a fuzzy-logic software library for embedding fuzzy processing capability into current application programs. The graphical-interface tool was developed using the Tcl/Tk programming language. The fuzzy-logic software library was written in the C programming language.
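
    The Fuzzy Logic Engine's API is not reproduced here, so the sketch below only illustrates the general style of linguistic rules and fuzzy inference such a library supports: triangular membership functions, rule evaluation, and a weighted-average defuzzification. All names, rules, and numbers are illustrative assumptions, not the NASA library's interface.

    ```python
    # Minimal fuzzy inference sketch (illustrative, not the Fuzzy Logic Engine API):
    # rule base: IF error IS negative THEN output IS high
    #            IF error IS positive THEN output IS low

    def tri(x, a, b, c):
        """Triangular membership function peaking at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def infer(error):
        # fuzzify the crisp input against the linguistic sets
        mu_neg = tri(error, -2.0, -1.0, 0.0)
        mu_pos = tri(error,  0.0,  1.0, 2.0)
        # each rule weights its output value; defuzzify by a weighted average
        out_high, out_low = 1.0, 0.0
        weights = mu_neg + mu_pos
        if weights == 0.0:
            return 0.5                     # no rule fires: neutral output
        return (mu_neg * out_high + mu_pos * out_low) / weights

    for e in (-1.0, -0.25, 0.0, 0.5, 1.0):
        print(f"error={e:+.2f} -> output={infer(e):.2f}")
    ```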

  7. The effectiveness of web-programming module based on scientific approach to train logical thinking ability for students in vocational high school

    NASA Astrophysics Data System (ADS)

    Nashiroh, Putri Khoirin; Kamdi, Waras; Elmunsyah, Hakkun

    2017-09-01

    Web programming is a basic subject in Computer and Informatics Engineering, a program of study in vocational high schools. It requires logical thinking ability in its learning activities. The purposes of this research were (1) to develop a web-programming module that implements a scientific approach to improve the logical thinking ability of students in vocational high school; and (2) to test the effectiveness of the web-programming module based on the scientific approach in training students' logical thinking ability. The result of this research was a web-programming module that applies the scientific approach in learning activities to improve the logical thinking ability of students in vocational high school. The effectiveness test led to the conclusion that the module was very effective for training logical thinking ability and improving learning results, supported by the following: (1) the students' average posttest score, 79.91, exceeded the minimum criterion value; (2) the average percentage of students' logical thinking score was 82.98%; and (3) the average percentage of students' positive responses to the web-programming module was 81.86%.

  8. An Arbitrary First Order Theory Can Be Represented by a Program: A Theorem

    NASA Technical Reports Server (NTRS)

    Hosheleva, Olga

    1997-01-01

    How can we represent knowledge inside a computer? For formalized knowledge, classical logic seems to be the most adequate tool. Classical logic is behind all formalisms of classical mathematics, and behind many formalisms used in Artificial Intelligence. There is only one serious problem with classical logic: due to the famous Godel's theorem, classical logic is algorithmically undecidable; as a result, when knowledge is represented in the form of logical statements, it is very difficult to check whether, based on these statements, a given query is true or not. To make knowledge representations more algorithmic, a special field of logic programming was invented. An important portion of logic programming is algorithmically decidable. To cover knowledge that cannot be represented in this portion, several extensions of the decidable fragments have been proposed. In the spirit of logic programming, these extensions are usually introduced in such a way that even if a general algorithm is not available, good heuristic methods exist. It is important to check whether the already proposed extensions are sufficient, or whether further extensions are necessary. In the present paper, we show that one particular extension, namely logic programming with classical negation, introduced by M. Gelfond and V. Lifschitz, can represent (in some reasonable sense) an arbitrary first-order logical theory.

  9. Defining, illustrating and reflecting on logic analysis with an example from a professional development program.

    PubMed

    Tremblay, Marie-Claude; Brousselle, Astrid; Richard, Lucie; Beaudet, Nicole

    2013-10-01

    Program designers and evaluators should make a point of testing the validity of a program's intervention theory before investing either in implementation or in any type of evaluation. In this context, logic analysis can be a particularly useful option, since it can be used to test the plausibility of a program's intervention theory using scientific knowledge. Professional development in public health is one field among several that would truly benefit from logic analysis, as it appears to be generally lacking in theorization and evaluation. This article presents the application of this analysis method to an innovative public health professional development program, the Health Promotion Laboratory. More specifically, this paper aims to (1) define the logic analysis approach and differentiate it from similar evaluative methods; (2) illustrate the application of this method by a concrete example (logic analysis of a professional development program); and (3) reflect on the requirements of each phase of logic analysis, as well as on the advantages and disadvantages of such an evaluation method. Using logic analysis to evaluate the Health Promotion Laboratory showed that, generally speaking, the program's intervention theory appeared to have been well designed. By testing and critically discussing logic analysis, this article also contributes to further improving and clarifying the method. Copyright © 2013 Elsevier Ltd. All rights reserved.

  10. Discovering Knowledge from Noisy Databases Using Genetic Programming.

    ERIC Educational Resources Information Center

    Wong, Man Leung; Leung, Kwong Sak; Cheng, Jack C. Y.

    2000-01-01

    Presents a framework that combines Genetic Programming and Inductive Logic Programming, two approaches in data mining, to induce knowledge from noisy databases. The framework is based on a formalism of logic grammars and is implemented as a data mining system called LOGENPRO (Logic Grammar-based Genetic Programming System). (Contains 34…

  11. An Asynchronous Many-Task Implementation of In-Situ Statistical Analysis using Legion.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pebay, Philippe Pierre; Bennett, Janine Camille

    2015-11-01

    In this report, we propose a framework for the design and implementation of in-situ analyses using an asynchronous many-task (AMT) model, using the Legion programming model together with the MiniAero mini-application as a surrogate for full-scale parallel scientific computing applications. The bulk of this work consists of converting the Learn/Derive/Assess model, which we had initially developed for parallel statistical analysis using MPI [PTBM11], from a SPMD to an AMT model. To this end, we propose an original use of the concept of Legion logical regions as a replacement for the parallel communication schemes used for the only operation of the statistics engines that requires explicit communication. We then evaluate this proposed scheme in a shared memory environment, using the Legion port of MiniAero as a proxy for a full-scale scientific application, as a means to provide input data sets of variable size for the in-situ statistical analyses in an AMT context. We demonstrate in particular that the approach has merit and warrants further investigation, in collaboration with ongoing efforts to improve the overall parallel performance of the Legion system.

  12. Repressor logic modules assembled by rolling circle amplification platform to construct a set of logic gates

    PubMed Central

    Wei, Hua; Hu, Bo; Tang, Suming; Zhao, Guojie; Guan, Yifu

    2016-01-01

    Small molecule metabolites and their allosterically regulated repressors play an important role in many gene expression and metabolic disorder processes. These natural sensors, though valuable as good logic switches, have rarely been employed without transcription machinery in cells. Here, two pairs of repressors, which function in opposite ways, were cloned, purified and used to control DNA replication in rolling circle amplification (RCA) in vitro. By using metabolites and repressors as inputs, RCA signals as outputs, four basic logic modules were constructed successfully. To achieve various logic computations based on these basic modules, we designed series and parallel strategies of circular templates, which can further assemble these repressor modules in an RCA platform to realize twelve two-input Boolean logic gates and a three-input logic gate. The RCA-output and RCA-assembled platform was proved to be easy and flexible for complex logic processes and might have application potential in molecular computing and synthetic biology. PMID:27869177

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamadneh, Nawaf; Sathasivam, Saratha; Choon, Ong Hong

    Logic programming is the process that leads from an original formulation of a computing problem to executable programs. A normal logic program consists of a finite set of clauses. A valuation I of a logic program is a mapping from ground atoms to false or true. The single-step operator of a logic program is defined as a function T_P: I → I. Logic programming is well suited to building artificial intelligence systems. In this study, we established a new technique to compute the single-step operators of logic programming in radial basis function neural networks. To do that, we proposed a new technique to generate the training data sets of single-step operators. The training data sets are used to build the neural networks. We used recurrent radial basis function neural networks to reach the steady state (the fixed point of the operators). To improve the performance of the neural networks, we used the particle swarm optimization algorithm to train the networks.
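
    As background for the abstract above, the single-step (immediate-consequence) operator T_P of a propositional logic program maps an interpretation to the set of atoms whose clause bodies are satisfied by it; iterating from the empty interpretation reaches the least fixed point. The training data described in the abstract are input/output pairs of exactly this mapping. A plain sketch, independent of the paper's neural-network implementation, with an illustrative program:

    ```python
    # Immediate-consequence operator T_P for a propositional logic program.
    # Each clause is (head, [body atoms]); facts have an empty body.
    program = [
        ("p", []),              # p.
        ("q", ["p"]),           # q :- p.
        ("r", ["p", "q"]),      # r :- p, q.
        ("s", ["t"]),           # s :- t.   (never fires: t is not derivable)
    ]

    def tp(interpretation):
        """One application of T_P: heads of clauses whose bodies hold in I."""
        return {head for head, body in program
                if all(atom in interpretation for atom in body)}

    # Iterate from the empty interpretation up to the least fixed point.
    current = set()
    while True:
        nxt = tp(current)
        if nxt == current:
            break
        current = nxt

    print("least fixed point:", sorted(current))   # -> ['p', 'q', 'r']
    ```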

  14. Jeagle: a JAVA Runtime Verification Tool

    NASA Technical Reports Server (NTRS)

    DAmorim, Marcelo; Havelund, Klaus

    2005-01-01

    We introduce the temporal logic Jeagle and its supporting tool for runtime verification of Java programs. A monitor for a Jeagle formula checks whether a finite trace of program events satisfies the formula. Jeagle is a programming-oriented extension of the powerful rule-based Eagle logic, which has been shown to be capable of defining and implementing a range of finite-trace monitoring logics, including future and past time temporal logic, real-time and metric temporal logics, interval logics, forms of quantified temporal logics, and so on. Monitoring is achieved on a state-by-state basis, avoiding any need to store the input trace. Jeagle extends Eagle with constructs for capturing parameterized program events such as method calls and method returns. Parameters can be the objects that methods are called upon, arguments to methods, and return values. Jeagle allows one to refer to these in formulas. The tool performs automated program instrumentation using AspectJ. We show the transformational semantics of Jeagle.
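
    Jeagle formulas themselves are beyond a short example, but the state-by-state monitoring idea (no stored trace) can be illustrated with a simple past-time property: "every response is preceded by a request". The monitor below keeps only a small summary state and updates it on each event; the event names and the property are made up, not taken from the paper.

    ```python
    # State-by-state runtime monitor for the past-time property
    # "a 'response' event may only occur if a 'request' has occurred before it".
    # Only O(1) summary state is kept -- the trace itself is never stored.
    class PrecededByMonitor:
        def __init__(self):
            self.seen_request = False
            self.ok = True

        def step(self, event: str) -> bool:
            if event == "request":
                self.seen_request = True
            elif event == "response" and not self.seen_request:
                self.ok = False            # violation: response with no prior request
            return self.ok

    monitor = PrecededByMonitor()
    for e in ["init", "request", "response", "response"]:
        print(e, "->", "OK" if monitor.step(e) else "VIOLATION")

    monitor2 = PrecededByMonitor()
    print(monitor2.step("response"))       # False: violates the property immediately
    ```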

  15. Logic Programming: PROLOG.

    ERIC Educational Resources Information Center

    Lopez, Antonio M., Jr.

    1989-01-01

    Provides background material on logic programing and presents PROLOG as a high-level artificial intelligence programing language that borrows its basic constructs from logic. Suggests the language is one which will help the educator to achieve various goals, particularly the promotion of problem solving ability. (MVL)

  16. Array processor architecture connection network

    NASA Technical Reports Server (NTRS)

    Barnes, George H. (Inventor); Lundstrom, Stephen F. (Inventor); Shafer, Philip E. (Inventor)

    1982-01-01

    A connection network is disclosed for use between a parallel array of processors and a parallel array of memory modules for establishing non-conflicting data communications paths between requested memory modules and requesting processors. The connection network includes a plurality of switching elements interposed between the processor array and the memory modules array in an Omega networking architecture. Each switching element includes a first and a second processor side port, a first and a second memory module side port, and control logic circuitry for providing data connections between the first and second processor ports and the first and second memory module ports. The control logic circuitry includes strobe logic for examining data arriving at the first and the second processor ports to indicate when the data arriving is requesting data from a requesting processor to a requested memory module. Further, connection circuitry is associated with the strobe logic for examining requesting data arriving at the first and the second processor ports for providing a data connection therefrom to the first and the second memory module ports in response thereto when the data connection so provided does not conflict with a pre-established data connection currently in use.
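
    For context, in an Omega network with N = 2^n inputs a request can be self-routed: each of the n stages performs a perfect shuffle, and then each 2x2 switching element forwards the request straight or exchanged according to one bit of the requested memory module's address, most significant bit first. The sketch below models that routing rule only, not the patented control and conflict-resolution logic.

    ```python
    # Destination-tag routing through an Omega network with N = 2**n ports.
    # Each stage: perfect shuffle (rotate the wire index left), then the 2x2 switch
    # sets the low-order bit of the wire index to the current destination bit.
    def omega_route(src, dst, n):
        N = 1 << n
        pos, path = src, [src]
        for stage in range(n):
            pos = ((pos << 1) | (pos >> (n - 1))) & (N - 1)   # perfect shuffle
            bit = (dst >> (n - 1 - stage)) & 1                # MSB-first destination bit
            pos = (pos & ~1) | bit                            # upper (0) or lower (1) output
            path.append(pos)
        return path

    n = 3                                   # 8 processors / 8 memory modules
    for src, dst in [(5, 2), (0, 7), (6, 6)]:
        path = omega_route(src, dst, n)
        assert path[-1] == dst              # the request always ends at the requested module
        print(f"processor {src} -> module {dst}: wire indices per stage {path}")
    ```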

  17. CADAT network translator

    NASA Technical Reports Server (NTRS)

    Pitts, E. R.

    1981-01-01

    Program converts cell-net data into logic-gate models for use in test and simulation programs. Input consists of either Place, Route, and Fold (PRF) or Place-and-Route-in-Two-Dimensions (PR2D) layout data deck. Output consists of either Test Pattern Generator (TPG) or Logic-Simulation (LOGSIM) logic circuitry data deck. Designer needs to build only logic-gate-model circuit description since program acts as translator. Language is FORTRAN IV.

  18. Parallel Transport Quantum Logic Gates with Trapped Ions.

    PubMed

    de Clercq, Ludwig E; Lo, Hsiang-Yu; Marinelli, Matteo; Nadlinger, David; Oswald, Robin; Negnevitsky, Vlad; Kienzler, Daniel; Keitch, Ben; Home, Jonathan P

    2016-02-26

    We demonstrate single-qubit operations by transporting a beryllium ion with a controlled velocity through a stationary laser beam. We use these to perform coherent sequences of quantum operations, and to perform parallel quantum logic gates on two ions in different processing zones of a multiplexed ion trap chip using a single recycled laser beam. For the latter, we demonstrate individually addressed single-qubit gates by local control of the speed of each ion. The fidelities we observe are consistent with operations performed using standard methods involving static ions and pulsed laser fields. This work therefore provides a path to scalable ion trap quantum computing with reduced requirements on the optical control complexity.

  19. Logic operations based on magnetic-vortex-state networks.

    PubMed

    Jung, Hyunsung; Choi, Youn-Seok; Lee, Ki-Suk; Han, Dong-Soo; Yu, Young-Sang; Im, Mi-Young; Fischer, Peter; Kim, Sang-Koog

    2012-05-22

    Logic operations based on coupled magnetic vortices were experimentally demonstrated. We utilized a simple chain structure consisting of three physically separated but dipolar-coupled vortex-state Permalloy disks as well as two electrodes for application of the logical inputs. We directly monitored the vortex gyrations in the middle disk, as the logical output, by time-resolved full-field soft X-ray microscopy measurements. By manipulating the relative polarization configurations of both end disks, two different logic operations are programmable: the XOR operation for the parallel polarization and the OR operation for the antiparallel polarization. This work paves the way for new-type programmable logic gates based on the coupled vortex-gyration dynamics achievable in vortex-state networks. The advantages are as follows: a low-power input signal by means of resonant vortex excitation, low-energy dissipation during signal transportation by selection of low-damping materials, and a simple patterned-array structure.

  20. Research on teacher education programs: logic model approach.

    PubMed

    Newton, Xiaoxia A; Poon, Rebecca C; Nunes, Nicole L; Stone, Elisa M

    2013-02-01

    Teacher education programs in the United States face increasing pressure to demonstrate their effectiveness through pupils' learning gains in classrooms where program graduates teach. The link between teacher candidates' learning in teacher education programs and pupils' learning in K-12 classrooms implicit in the policy discourse suggests a one-to-one correspondence. However, the logical steps leading from what teacher candidates have learned in their programs to what they are doing in classrooms that may contribute to their pupils' learning are anything but straightforward. In this paper, we argue that the logic model approach from scholarship on evaluation can enhance research on teacher education by making explicit the logical links between program processes and intended outcomes. We demonstrate the usefulness of the logic model approach through our own work on designing a longitudinal study that focuses on examining the process and impact of an undergraduate mathematics and science teacher education program. Copyright © 2012 Elsevier Ltd. All rights reserved.

  1. Trinary Associative Memory Would Recognize Machine Parts

    NASA Technical Reports Server (NTRS)

    Liu, Hua-Kuang; Awwal, Abdul Ahad S.; Karim, Mohammad A.

    1991-01-01

    Trinary associative memory combines the merits and overcomes the major deficiencies of unipolar and bipolar logic by combining them in a three-valued logic that reverts to unipolar or bipolar binary selectively, as needed to perform specific tasks. The advantage of an associative memory is that one obtains access to all parts of it simultaneously on the basis of the content, rather than the address, of the data. Consequently, it can be used to fully exploit the parallelism and speed of optical computing.

  2. Three-input majority logic gate and multiple input logic circuit based on DNA strand displacement.

    PubMed

    Li, Wei; Yang, Yang; Yan, Hao; Liu, Yan

    2013-06-12

    In biomolecular programming, the properties of biomolecules such as proteins and nucleic acids are harnessed for computational purposes. The field has gained considerable attention due to the possibility of exploiting the massive parallelism that is inherent in natural systems to solve computational problems. DNA has already been used to build complex molecular circuits, where the basic building blocks are logic gates that produce single outputs from one or more logical inputs. We designed and experimentally realized a three-input majority gate based on DNA strand displacement. One of the key features of a three-input majority gate is that the three inputs have equal priority, and the output will be true if any two of the inputs are true. Our design consists of a central, circular DNA strand with three unique domains between which are identical joint sequences. Before inputs are introduced to the system, each domain and half of each joint is protected by one complementary ssDNA that displays a toehold for subsequent displacement by the corresponding input. With this design the relationship between any two domains is analogous to the relationship between inputs in a majority gate. Displacing two or more of the protection strands will expose at least one complete joint and return a true output; displacing none or only one of the protection strands will not expose a complete joint and will return a false output. Further, we designed and realized a complex five-input logic gate based on the majority gate described here. By controlling two of the five inputs the complex gate can realize every combination of OR and AND gates of the other three inputs.
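
    The behavior described above can be checked with a plain Boolean model: a three-input majority gate is true when at least two inputs are true, and in a five-input majority gate, fixing two "control" inputs selects between AND, OR, and majority of the remaining three inputs. This is a logical model only, not the DNA strand-displacement chemistry.

    ```python
    from itertools import product

    def majority(*inputs):
        """True when more than half of the Boolean inputs are True."""
        return sum(inputs) > len(inputs) // 2

    # Three-input majority: true whenever at least two inputs are true.
    assert majority(True, True, False) and not majority(True, False, False)

    # Five-input majority with two control inputs c1, c2 acting on data bits a, b, c:
    #   c1 = c2 = True  -> OR(a, b, c)    (one additional true input suffices)
    #   c1 = c2 = False -> AND(a, b, c)   (all three data bits must be true)
    #   c1 = True, c2 = False -> MAJ(a, b, c)
    for a, b, c in product([False, True], repeat=3):
        assert majority(a, b, c, True, True) == (a or b or c)
        assert majority(a, b, c, False, False) == (a and b and c)
        assert majority(a, b, c, True, False) == majority(a, b, c)
    print("five-input majority reproduces OR, AND and MAJ of the three data inputs")
    ```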

  3. 2013/2014 Eco-Logical program annual report

    DOT National Transportation Integrated Search

    2014-12-01

    The Eco-Logical approach offers an ecosystem-based framework for integrated infrastructure and natural resource planning, project development, and delivery. The 2013/2014 Eco-Logical Program Annual Report provides updates on the Federal Highway Admin...

  4. A Logic Programming Testbed for Inductive Thought and Specification.

    ERIC Educational Resources Information Center

    Neff, Norman D.

    This paper describes applications of logic programming technology to the teaching of the inductive method in computer science and mathematics. It discusses the nature of inductive thought and its place in those fields of inquiry, arguing that a complete logic programming system for supporting inductive inference is not only feasible but necessary.…

  5. Photonic content-addressable memory system that uses a parallel-readout optical disk

    NASA Astrophysics Data System (ADS)

    Krishnamoorthy, Ashok V.; Marchand, Philippe J.; Yayla, Gökçe; Esener, Sadik C.

    1995-11-01

    We describe a high-performance associative-memory system that can be implemented by means of an optical disk modified for parallel readout and a custom-designed silicon integrated circuit with parallel optical input. The system can achieve associative recall on 128 × 128 bit images and also on variable-size subimages. The system's behavior and performance are evaluated on the basis of experimental results on a motionless-head parallel-readout optical-disk system, logic simulations of the very-large-scale integrated chip, and a software emulation of the overall system.

  6. Magnetic tunnel junction based spintronic logic devices

    NASA Astrophysics Data System (ADS)

    Lyle, Andrew Paul

    The International Technology Roadmap for Semiconductors (ITRS) predicts that complementary metal oxide semiconductor (CMOS) based technologies will hit their last generation on or near the 16 nm node, which we expect to reach by the year 2025. Thus future advances in computational power will not be realized from ever-shrinking device sizes, but rather by 'outside the box' designs and new physics, including molecular or DNA-based computation, organics, magnonics, or spintronics. This dissertation investigates magnetic logic devices for post-CMOS computation. Three different architectures were studied, each relying on a different magnetic mechanism to compute logic functions. Each design has its benefits and challenges that must be overcome. This dissertation focuses on pushing each design from the drawing board to a realistic logic technology. The first logic architecture is based on electrically connected magnetic tunnel junctions (MTJs) that allow direct communication between elements without intermediate sensing amplifiers. Two- and three-input logic gates, which consist of two and three MTJs connected in parallel, respectively, were fabricated and are compared. The direct communication is realized by electrically connecting the output in series with the input and applying a voltage across the series connection. The logic gates rely on the fact that a change in resistance at the input modulates the voltage that is needed to supply the critical current for spin-transfer-torque switching of the output. The change in resistance at the input resulted in a voltage margin of 50-200 mV and 250-300 mV for the closest input states for the three- and two-input designs, respectively. The two-input logic gate realizes the AND, NAND, NOR, and OR logic functions. The three-input logic gate realizes the majority, AND, NAND, NOR, and OR logic operations. The second logic architecture utilizes magnetostatically coupled nanomagnets to compute logic functions, which is the basis of magnetic quantum cellular automata (MQCA). MQCA has the potential to be thousands of times more energy efficient than CMOS technology. While interesting, these systems remain academic unless they can be interfaced with current technologies. This dissertation pushed past a major hurdle by experimentally demonstrating a spintronic input/output (I/O) interface for the magnetostatically coupled nanomagnets by incorporating MTJs. This spintronic interface allows individual nanomagnets to be programmed using spin transfer torque and read using a magnetoresistance structure. Additionally, the spintronic interface allows statistical data on the reliability of the magnetic coupling utilized for data propagation to be easily measured. The integration of spintronics and MQCA as an electrical interface achieves a low-power magnetic logic device that is a competitive post-CMOS candidate. The final logic architecture studied used MTJs to compute logic functions and magnetic domain walls to communicate between gates. Simulations were used to optimize the design of this architecture. Spin transfer torque was used both to compute the logic function at each MTJ gate and to drive the domain walls. The design demonstrated that multiple nanochannels can be connected to each MTJ to realize fan-out from the logic gates. As a result, this logic scheme eliminates the need for intermediate reads and conversions to pass information from one logic gate to another.

  7. Complementary spin transistor using a quantum well channel.

    PubMed

    Park, Youn Ho; Choi, Jun Woo; Kim, Hyung-Jun; Chang, Joonyeon; Han, Suk Hee; Choi, Heon-Jin; Koo, Hyun Cheol

    2017-04-20

    In order to utilize the spin field effect transistor in logic applications, the development of two types of complementary transistors, which play roles of the n- and p-type conventional charge transistors, is an essential prerequisite. In this research, we demonstrate complementary spin transistors consisting of two types of devices, namely parallel and antiparallel spin transistors using InAs based quantum well channels and exchange-biased ferromagnetic electrodes. In these spin transistors, the magnetization directions of the source and drain electrodes are parallel or antiparallel, respectively, depending on the exchange bias field direction. Using this scheme, we also realize a complementary logic operation purely with spin transistors controlled by the gate voltage, without any additional n- or p-channel transistor.

  8. A sample theory-based logic model to improve program development, implementation, and sustainability of Farm to School programs.

    PubMed

    Ratcliffe, Michelle M

    2012-08-01

    Farm to School programs hold promise to address childhood obesity. These programs may increase students’ access to healthier foods, increase students’ knowledge of and desire to eat these foods, and increase their consumption of them. Implementing Farm to School programs requires the involvement of multiple people, including nutrition services, educators, and food producers. Because these groups have not traditionally worked together and each has different goals, it is important to demonstrate how Farm to School programs that are designed to decrease childhood obesity may also address others’ objectives, such as academic achievement and economic development. A logic model is an effective tool to help articulate a shared vision for how Farm to School programs may work to accomplish multiple goals. Furthermore, there is evidence that programs based on theory are more likely to be effective at changing individuals’ behaviors. Logic models based on theory may help to explain how a program works, aid in efficient and sustained implementation, and support the development of a coherent evaluation plan. This article presents a sample theory-based logic model for Farm to School programs. The presented logic model is informed by the polytheoretical model for food and garden-based education in school settings (PMFGBE). The logic model has been applied to multiple settings, including Farm to School program development and evaluation in urban and rural school districts. This article also includes a brief discussion on the development of the PMFGBE, a detailed explanation of how Farm to School programs may enhance the curricular, physical, and social learning environments of schools, and suggestions for the applicability of the logic model for practitioners, researchers, and policy makers.

  9. A Framework for Understanding Community Colleges' Organizational Capacity for Data Use: A Convergent Parallel Mixed Methods Study

    ERIC Educational Resources Information Center

    Kerrigan, Monica Reid

    2014-01-01

    This convergent parallel design mixed methods case study of four community colleges explores the relationship between organizational capacity and implementation of data-driven decision making (DDDM). The article also illustrates purposive sampling using replication logic for cross-case analysis and the strengths and weaknesses of quantitizing…

  10. Procedural and Logic Programming: A Comparison.

    ERIC Educational Resources Information Center

    Watkins, Will; And Others

    1988-01-01

    Examines the similarities and fundamental differences between procedural programing and logic programing by comparing LogoWriter and PROLOG. Suggests that PROLOG may be a good first programing language for students to learn. (MVL)

  11. The Effects of Learning a Computer Programming Language on the Logical Reasoning of School Children.

    ERIC Educational Resources Information Center

    Seidman, Robert H.

    The research reported in this paper explores the syntactical and semantic link between computer programming statements and logical principles, and addresses the effects of learning a programming language on logical reasoning ability. Fifth grade students in a public school in Syracuse, New York, were randomly selected as subjects, and then…

  12. Development of a Logic Model to Guide Evaluations of the ASCA National Model for School Counseling Programs

    ERIC Educational Resources Information Center

    Martin, Ian; Carey, John

    2014-01-01

    A logic model was developed based on an analysis of the 2012 American School Counselor Association (ASCA) National Model in order to provide direction for program evaluation initiatives. The logic model identified three outcomes (increased student achievement/gap reduction, increased school counseling program resources, and systemic change and…

  13. Development of a program logic model and evaluation plan for a participatory ergonomics intervention in construction.

    PubMed

    Jaegers, Lisa; Dale, Ann Marie; Weaver, Nancy; Buchholz, Bryan; Welch, Laura; Evanoff, Bradley

    2014-03-01

    Intervention studies in participatory ergonomics (PE) are often difficult to interpret due to limited descriptions of program planning and evaluation. In an ongoing PE program with floor layers, we developed a logic model to describe our program plan, and process and summative evaluations designed to describe the efficacy of the program. The logic model was a useful tool for describing the program elements and subsequent modifications. The process evaluation measured how well the program was delivered as intended, and revealed the need for program modifications. The summative evaluation provided early measures of the efficacy of the program as delivered. Inadequate information on program delivery may lead to erroneous conclusions about intervention efficacy due to Type III error. A logic model guided the delivery and evaluation of our intervention and provides useful information to aid interpretation of results. © 2013 Wiley Periodicals, Inc.

  14. Development of a Program Logic Model and Evaluation Plan for a Participatory Ergonomics Intervention in Construction

    PubMed Central

    Jaegers, Lisa; Dale, Ann Marie; Weaver, Nancy; Buchholz, Bryan; Welch, Laura; Evanoff, Bradley

    2013-01-01

    Background Intervention studies in participatory ergonomics (PE) are often difficult to interpret due to limited descriptions of program planning and evaluation. Methods In an ongoing PE program with floor layers, we developed a logic model to describe our program plan, and process and summative evaluations designed to describe the efficacy of the program. Results The logic model was a useful tool for describing the program elements and subsequent modifications. The process evaluation measured how well the program was delivered as intended, and revealed the need for program modifications. The summative evaluation provided early measures of the efficacy of the program as delivered. Conclusions Inadequate information on program delivery may lead to erroneous conclusions about intervention efficacy due to Type III error. A logic model guided the delivery and evaluation of our intervention and provides useful information to aid interpretation of results. PMID:24006097

  15. Parsing with logical variables (logic-based programming systems)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Finin, T.W.; Stone Palmer, M.

    1983-01-01

    Logic-based programming systems have enjoyed an increasing popularity in applied AI work in the last few years. One of the contributions to computational linguistics made by the logic programming paradigm has been the definite clause grammar (DCG). In comparing DCGs with previous parsing mechanisms such as ATNs, certain clear advantages are seen. The authors feel that the most important of these advantages are due to the use of logical variables with unification as the fundamental operation on them. To illustrate the power of the logical variable, they have implemented an experimental ATN system which treats ATN registers as logical variables and provides a unification operation over them. They aim to simultaneously encourage the use of the powerful mechanisms available in DCGs and demonstrate that some of these techniques can be captured without reference to a resolution theorem prover. 14 references.
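
    Since the abstract's central point is the power of logical variables with unification, a compact sketch of first-order unification over nested terms may help readers unfamiliar with it. Terms are tuples, variables are capitalized strings, and the occurs check is omitted for brevity; this is generic textbook unification, not the authors' ATN system.

    ```python
    # Textbook unification of first-order terms.
    # Representation: a variable is a capitalized string ("X"); a compound term is a
    # tuple (functor, arg1, arg2, ...); anything else is a constant.
    def is_var(t):
        return isinstance(t, str) and t[:1].isupper()

    def walk(t, subst):
        while is_var(t) and t in subst:
            t = subst[t]
        return t

    def unify(t1, t2, subst=None):
        """Return a substitution unifying t1 and t2, or None if they don't unify."""
        if subst is None:
            subst = {}
        t1, t2 = walk(t1, subst), walk(t2, subst)
        if t1 == t2:
            return subst
        if is_var(t1):
            return {**subst, t1: t2}
        if is_var(t2):
            return {**subst, t2: t1}
        if isinstance(t1, tuple) and isinstance(t2, tuple) and len(t1) == len(t2):
            for a, b in zip(t1, t2):
                subst = unify(a, b, subst)
                if subst is None:
                    return None
            return subst
        return None

    # Parse-like example: binding "registers" (logical variables) to structures.
    print(unify(("np", "Det", ("noun", "dog")), ("np", "the", "N")))
    # -> {'Det': 'the', 'N': ('noun', 'dog')}
    ```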

  16. A distributed parallel storage architecture and its potential application within EOSDIS

    NASA Technical Reports Server (NTRS)

    Johnston, William E.; Tierney, Brian; Feuquay, Jay; Butzer, Tony

    1994-01-01

    We describe the architecture, implementation, and use of a scalable, high-performance, distributed-parallel data storage system developed in the ARPA-funded MAGIC gigabit testbed. A collection of wide-area distributed disk servers operate in parallel to provide logical block-level access to large data sets. Operated primarily as a network-based cache, the architecture supports cooperation among independently owned resources to provide fast, large-scale, on-demand storage to support data handling, simulation, and computation.

  17. An Efficient Fuzzy Controller Design for Parallel Connected Induction Motor Drives

    NASA Astrophysics Data System (ADS)

    Usha, S.; Subramani, C.

    2018-04-01

    Generally, induction motors are highly non-linear and have complex time-varying dynamics, which makes their speed control a challenging issue in industry. Recent advances in power electronic devices and intelligent controllers, however, allow the speed of an induction motor to be controlled despite these non-linear characteristics. Conventionally, a single inverter is used to run one induction motor in industry. In traction applications, two or more induction motors are operated in parallel to reduce the size and cost of the drive; in this application, the parallel-connected induction motors can be driven by a single inverter unit. Stability problems may arise in parallel operation under low-speed operating conditions, so speed deviations should be reduced with the help of suitable controllers. The speed control of the parallel-connected system is performed by a PID controller and by a fuzzy logic controller. In this paper the speed responses of the induction motor, rated 1 HP, 1440 rpm, and 50 Hz, with these controllers are compared in terms of time-domain specifications. A stability analysis of the system is also performed under low speed using the MATLAB platform. A hardware model is developed for speed control using the fuzzy logic controller, which exhibited superior performance over the other controller.

  18. An interval logic for higher-level temporal reasoning

    NASA Technical Reports Server (NTRS)

    Schwartz, R. L.; Melliar-Smith, P. M.; Vogt, F. H.; Plaisted, D. A.

    1983-01-01

    Prior work explored temporal logics, based on classical modal logics, as a framework for specifying and reasoning about concurrent programs, distributed systems, and communications protocols, and reported on efforts using temporal reasoning primitives to express very high level abstract requirements that a program or system is to satisfy. Based on experience with those primitives, this report describes an Interval Logic that is more suitable for expressing such higher level temporal properties. The report provides a formal semantics for the Interval Logic, and several examples of its use. A description of decision procedures for the logic is also included.

  19. Logic gates realized by nonvolatile GeTe/Sb2Te3 super lattice phase-change memory with a magnetic field input

    NASA Astrophysics Data System (ADS)

    Lu, Bin; Cheng, Xiaomin; Feng, Jinlong; Guan, Xiawei; Miao, Xiangshui

    2016-07-01

    Nonvolatile memory devices or circuits that can implement both storage and computation are a crucial requirement for improving the efficiency of modern computers. In this work, we realize logic functions using a [GeTe/Sb2Te3]n super lattice phase-change memory (PCM) cell in which a higher threshold voltage is needed for the phase change when a magnetic field is applied. First, the [GeTe/Sb2Te3]n super lattice cells were fabricated and the R-V curve was measured. Then we designed the logic circuits with the super lattice PCM cell, verified by HSPICE simulation and experiments. Seven basic logic functions are first demonstrated in this letter; then several multi-input logic gates are presented. The proposed logic devices offer the advantages of simple structures and low power consumption, indicating that the super lattice PCM has potential for future nonvolatile central processing unit design, facilitating the development of massively parallel computing architectures.

  20. Computer programming in the UK undergraduate mathematics curriculum

    NASA Astrophysics Data System (ADS)

    Sangwin, Christopher J.; O'Toole, Claire

    2017-11-01

    This paper reports a study which investigated the extent to which undergraduate mathematics students in the United Kingdom are currently taught to programme a computer as a core part of their mathematics degree programme. We undertook an online survey, with significant follow-up correspondence, to gather data on current curricula and received replies from 46 (63%) of the departments who teach a BSc mathematics degree. We found that 78% of BSc degree courses in mathematics included computer programming in a compulsory module but 11% of mathematics degree programmes do not teach programming to all their undergraduate mathematics students. In 2016, programming is most commonly taught to undergraduate mathematics students through imperative languages, notably MATLAB, using numerical analysis as the underlying (or parallel) mathematical subject matter. Statistics is a very popular choice in optional courses, using the package R. Computer algebra systems appear to be significantly less popular for compulsory first-year courses than a decade ago, and there was no mention of logic programming, functional programming or automatic theorem proving software. The modal form of assessment of computing modules is entirely by coursework (i.e. no examination).

  1. Assessment of Evidence-based Management Training Program: Application of a Logic Model.

    PubMed

    Guo, Ruiling; Farnsworth, Tracy J; Hermanson, Patrick M

    2016-06-01

    The purposes of this study were to apply a logic model to plan and implement an evidence-based management (EBMgt) educational training program for healthcare administrators and to examine whether a logic model is a useful tool for evaluating the outcomes of the educational program. The logic model was used as a conceptual framework to guide the investigators in developing an EBMgt educational training program and evaluating the outcomes of the program. The major components of the logic model were constructed as inputs, outputs, and outcomes/impacts. The investigators delineated the logic model based on the results of the needs assessment survey. Two 3-hour training workshops were delivered to 30 participants. To assess the outcomes of the EBMgt educational program, pre- and post-tests and self-reflection surveys were conducted. The data were collected and analyzed descriptively and inferentially, using the IBM Statistical Package for the Social Sciences (SPSS) 22.0. A paired sample t-test was performed to compare the differences in participants' EBMgt knowledge and skills prior to and after the training. The assessment results showed that there was a statistically significant difference in participants' EBMgt knowledge and information searching skills before and after the training (p < 0.001). Participants' confidence in using the EBMgt approach for decision-making was significantly increased after the training workshops (p < 0.001). Eighty-three percent of participants indicated that the knowledge and skills they gained through the training program could be used for future management decision-making in their healthcare organizations. The overall evaluation results of the program were positive. It is suggested that the logic model is a useful tool for program planning, implementation, and evaluation, and it also improves the outcomes of the educational program.

  2. Implementation of digital equality comparator circuit on memristive memory crossbar array using material implication logic

    NASA Astrophysics Data System (ADS)

    Haron, Adib; Mahdzair, Fazren; Luqman, Anas; Osman, Nazmie; Junid, Syed Abdul Mutalib Al

    2018-03-01

    One of the most significant constraints of the von Neumann architecture is the limited bandwidth between memory and processor. The cost of moving data back and forth between memory and processor is considerably higher than that of the computation in the processor itself. This significantly impacts big data and data-intensive applications, such as DNA analysis and comparison, which spend most of their processing time moving data. Recently, the in-memory processing concept was proposed, which is based on the capability to perform logic operations on the physical memory structure using a crossbar topology and non-volatile resistive-switching memristor technology. This paper proposes a scheme to map a digital equality comparator circuit onto a memristive memory crossbar array. The 2-bit, 4-bit, 8-bit, 16-bit, 32-bit, and 64-bit equality comparator circuits are mapped onto the memristive memory crossbar array using material implication logic in both sequential and parallel methods. The simulation results show that, for the 64-bit word size, the parallel mapping exhibits 2.8× better performance in total execution time than the sequential mapping but has a trade-off in terms of energy consumption and area utilization. Meanwhile, the total crossbar area can be reduced by 1.2× for sequential mapping and 1.5× for parallel mapping, both by using the overlapping technique.
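
    At the logic level, ignoring the actual memristor state sequencing and crossbar mapping described in the paper, an equality comparator can be expressed with only the material-implication (IMPLY) and FALSE primitives: bitwise XNOR is (a→b) AND (b→a), AND itself is built from IMPLY and FALSE, and the word-level result is the AND of all bit results. A sketch under those assumptions:

    ```python
    # Logic-level sketch of an equality comparator using only material implication
    # (IMPLY) and the constant FALSE, the two primitives of memristive IMPLY logic.
    def imp(p, q):            # material implication: p -> q  ==  (not p) or q
        return (not p) or q

    FALSE = False

    def not_(p):              # NOT p  ==  p -> FALSE
        return imp(p, FALSE)

    def and_(p, q):           # p AND q  ==  NOT(p -> NOT q)
        return not_(imp(p, not_(q)))

    def xnor(a, b):           # bit equality: (a -> b) AND (b -> a)
        return and_(imp(a, b), imp(b, a))

    def equal(word_a, word_b):
        """n-bit equality: AND of the bitwise XNOR results."""
        result = True
        for a, b in zip(word_a, word_b):
            result = and_(result, xnor(a, b))
        return result

    a = [True, False, True, True]
    b = [True, False, True, True]
    c = [True, False, False, True]
    print(equal(a, b), equal(a, c))      # True False
    ```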

  3. Design and simulation of programmable relational optoelectronic time-pulse coded processors as base elements for sorting neural networks

    NASA Astrophysics Data System (ADS)

    Krasilenko, Vladimir G.; Nikolsky, Alexander I.; Lazarev, Alexander A.; Lazareva, Maria V.

    2010-05-01

    In this paper we show that the biologically motivated concept of time-pulse encoding offers a set of advantages (a single methodological basis, universality, simplicity of tuning, learning and programming, among others) in the creation and design of sensor systems with parallel input-output and processing for 2D-structure hybrid and next-generation neuro-fuzzy neurocomputers. We present design principles for programmable relational optoelectronic time-pulse coded processors based on continuous logic, order logic, and temporal wave processes. We consider a structure that performs analog signal extraction and the sorting of analog and time-pulse coded variables. We propose an optoelectronic realization of such a basic relational order-logic element, consisting of time-pulse coded photoconverters (pulse-width and pulse-phase modulators) with direct and complementary outputs, a sorting network of logic elements, and programmable commutation blocks. From simulation and experimental research we estimate the technical parameters of devices and processors built on such base elements: optical input signal power of 0.2-20 uW, processing time of 1-10 us, supply voltage of 1-3 V, power consumption of 10-100 uW, extended functional possibilities, and learning capability. We discuss possible rules and principles for learning and for programmable tuning to a required function or relational operation, and the realization of hardware blocks for modifications of such processors. We show that it is possible to create sorting machines, neural networks, and hybrid data-processing systems with nontraditional number systems and picture operands on the basis of such quasi-universal, simple hardware blocks with flexible programmable tuning.
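
    The sorting-network idea underlying these processors can be illustrated with a plain odd-even transposition network: each stage applies independent min/max compare-exchange elements (the basic relational order-logic operation) in parallel, and n stages suffice for n inputs. This software sketch shows only the logic, not the time-pulse optoelectronic encoding, and the input values are arbitrary.

    ```python
    # Odd-even transposition sorting network: n parallel stages of independent
    # compare-exchange (min/max) elements, the basic relational order-logic operation.
    def compare_exchange(values, i, j):
        lo, hi = min(values[i], values[j]), max(values[i], values[j])
        values[i], values[j] = lo, hi

    def odd_even_transposition_sort(values):
        values = list(values)
        n = len(values)
        for stage in range(n):                      # n stages guarantee a sorted output
            start = stage % 2                       # alternate odd / even pairings
            for i in range(start, n - 1, 2):        # all comparators in a stage are independent
                compare_exchange(values, i, i + 1)
        return values

    print(odd_even_transposition_sort([7, 3, 9, 1, 5, 2, 8, 4]))
    # -> [1, 2, 3, 4, 5, 7, 8, 9]
    ```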

  4. A learnable parallel processing architecture towards unity of memory and computing

    NASA Astrophysics Data System (ADS)

    Li, H.; Gao, B.; Chen, Z.; Zhao, Y.; Huang, P.; Ye, H.; Liu, L.; Liu, X.; Kang, J.

    2015-08-01

    Developing energy-efficient parallel information processing systems beyond von Neumann architecture is a long-standing goal of modern information technologies. The widely used von Neumann computer architecture separates memory and computing units, which leads to energy-hungry data movement when computers work. In order to meet the need of efficient information processing for the data-driven applications such as big data and Internet of Things, an energy-efficient processing architecture beyond von Neumann is critical for the information society. Here we show a non-von Neumann architecture built of resistive switching (RS) devices named “iMemComp”, where memory and logic are unified with single-type devices. Leveraging nonvolatile nature and structural parallelism of crossbar RS arrays, we have equipped “iMemComp” with capabilities of computing in parallel and learning user-defined logic functions for large-scale information processing tasks. Such architecture eliminates the energy-hungry data movement in von Neumann computers. Compared with contemporary silicon technology, adder circuits based on “iMemComp” can improve the speed by 76.8% and the power dissipation by 60.3%, together with a 700 times aggressive reduction in the circuit area.

  5. A learnable parallel processing architecture towards unity of memory and computing.

    PubMed

    Li, H; Gao, B; Chen, Z; Zhao, Y; Huang, P; Ye, H; Liu, L; Liu, X; Kang, J

    2015-08-14

Developing energy-efficient parallel information processing systems beyond von Neumann architecture is a long-standing goal of modern information technologies. The widely used von Neumann computer architecture separates memory and computing units, which leads to energy-hungry data movement when computers work. In order to meet the need of efficient information processing for data-driven applications such as big data and the Internet of Things, an energy-efficient processing architecture beyond von Neumann is critical for the information society. Here we show a non-von Neumann architecture built of resistive switching (RS) devices named "iMemComp", where memory and logic are unified with single-type devices. Leveraging the nonvolatile nature and structural parallelism of crossbar RS arrays, we have equipped "iMemComp" with capabilities of computing in parallel and learning user-defined logic functions for large-scale information processing tasks. Such an architecture eliminates the energy-hungry data movement in von Neumann computers. Compared with contemporary silicon technology, adder circuits based on "iMemComp" can improve speed by 76.8% and reduce power dissipation by 60.3%, together with a 700-fold reduction in circuit area.

  6. Logic via Computer Programming.

    ERIC Educational Resources Information Center

    Wieschenberg, Agnes A.

This paper poses the question "How do we teach logical thinking and sophisticated mathematics to unsophisticated college students?" One answer among many is through the writing of computer programs. The writing of computer algorithms is mathematical problem solving and logic in disguise, and it may attract students who would otherwise stop…

  7. Chip architecture - A revolution brewing

    NASA Astrophysics Data System (ADS)

    Guterl, F.

    1983-07-01

Techniques being explored by microchip designers and manufacturers to both speed up memory access and instruction execution while protecting memory are discussed. Attention is given to hardwiring control logic, pipelining for parallel processing, devising orthogonal instruction sets for interchangeable instruction fields, and the development of hardware for implementation of virtual memory and multiuser systems to provide memory management and protection. The inclusion of microcode in mainframes eliminated logic circuits that control timing and gating of the CPU. However, improvements in memory architecture have reduced access time to below that needed for instruction execution. Hardwiring the functions as a virtual memory enhances memory protection. Parallelism involves a redundant architecture, which allows identical operations to be performed simultaneously, and can be directed with microcode to avoid aborting intermediate instructions once one set of instructions has been completed.

  8. Control and protection system for paralleled modular static inverter-converter systems

    NASA Technical Reports Server (NTRS)

    Birchenough, A. G.; Gourash, F.

    1973-01-01

    A control and protection system was developed for use with a paralleled 2.5-kWe-per-module static inverter-converter system. The control and protection system senses internal and external fault parameters such as voltage, frequency, current, and paralleling current unbalance. A logic system controls contactors to isolate defective power conditioners or loads. The system sequences contactor operation to automatically control parallel operation, startup, and fault isolation. Transient overload protection and fault checking sequences are included. The operation and performance of a control and protection system, with detailed circuit descriptions, are presented.

  9. The importance of context in logic model construction for a multi-site community-based Aboriginal driver licensing program.

    PubMed

    Cullen, Patricia; Clapham, Kathleen; Byrne, Jake; Hunter, Kate; Senserrick, Teresa; Keay, Lisa; Ivers, Rebecca

    2016-08-01

Evidence indicates that Aboriginal people are underrepresented among driver licence holders in New South Wales, which has been attributed to licensing barriers for Aboriginal people. The Driving Change program was developed to provide culturally responsive licensing services that engage Aboriginal communities and build local capacity. This paper outlines the formative evaluation of the program, including logic model construction and exploration of contextual factors. Purposive sampling was used to identify key informants (n=12) from a consultative committee of key stakeholders and program staff. Semi-structured interviews were transcribed and thematically analysed. Data from interviews informed development of the logic model. Participants demonstrated a high level of support for the program and reported that it filled an important gap. The program context revealed systemic barriers to licensing that were correspondingly targeted by specific program outputs in the logic model. Addressing underlying assumptions of the program involved managing local capacity and support to strengthen implementation. This formative evaluation highlights the importance of exploring program context as a crucial first step in logic model construction. The consultation process assisted in clarifying program goals and ensuring that the program was responding to underlying systemic factors that contribute to inequitable licensing access for Aboriginal people. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Extended Logic Intelligent Processing System for a Sensor Fusion Processor Hardware

    NASA Technical Reports Server (NTRS)

    Stoica, Adrian; Thomas, Tyson; Li, Wei-Te; Daud, Taher; Fabunmi, James

    2000-01-01

The paper presents the hardware implementation and initial tests of a low-power, high-speed reconfigurable sensor fusion processor. The Extended Logic Intelligent Processing System (ELIPS) is described, which combines rule-based systems, fuzzy logic, and neural networks to achieve parallel fusion of sensor signals in compact low-power VLSI. The ELIPS concept is being developed to demonstrate interceptor functionality, which particularly underscores the high-speed and low-power requirements. The hardware programmability allows the processor to reconfigure into different machines, taking the most efficient hardware implementation during each phase of information processing. Processing speeds of microseconds have been demonstrated using our test hardware.

  11. C code generation from Petri-net-based logic controller specification

    NASA Astrophysics Data System (ADS)

    Grobelny, Michał; Grobelna, Iwona; Karatkevich, Andrei

    2017-08-01

The article focuses on the programming of logic controllers. It is important that the program code of a logic controller execute flawlessly according to the primary specification. In the presented approach we generate C code for an AVR microcontroller from a rule-based logical model of a control process derived from a control interpreted Petri net. The same logical model is also used for formal verification of the specification by means of the model checking technique. The proposed rule-based logical model and formal rules of transformation ensure that the obtained implementation is consistent with the already verified specification. The approach is validated by practical experiments.
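
    As a rough, hypothetical illustration of this kind of transformation (a sketch under our own assumptions, not the authors' generator), the Python fragment below turns a toy rule-based model, rules of the form "if these places are marked, fire this transition", into a fragment of C for a bare microcontroller control loop; all rule and place names are invented for the example.

        # Hypothetical rule-based model: each rule fires a transition when its
        # input places are marked, clearing the inputs and marking the outputs.
        RULES = [
            {"name": "t1", "inputs": ["p_start", "p_ready"], "outputs": ["p_run"]},
            {"name": "t2", "inputs": ["p_run", "p_stop_btn"], "outputs": ["p_idle"]},
        ]

        def emit_c(rules):
            places = sorted({p for r in rules for p in r["inputs"] + r["outputs"]})
            lines = ["#include <stdint.h>", ""]
            lines += [f"static uint8_t {p};" for p in places]
            lines += ["", "void controller_step(void) {"]
            for r in rules:
                cond = " && ".join(r["inputs"])
                body = "".join(f" {p} = 0;" for p in r["inputs"]) + \
                       "".join(f" {p} = 1;" for p in r["outputs"])
                lines.append(f"    if ({cond}) {{{body} }}  /* transition {r['name']} */")
            lines.append("}")
            return "\n".join(lines)

        print(emit_c(RULES))   # emits a C step function ready to call from a main loop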

  12. An adaptive maneuvering logic computer program for the simulation of one-on-one air-to-air combat. Volume 1: General description

    NASA Technical Reports Server (NTRS)

    Burgin, G. H.; Fogel, L. J.; Phelps, J. P.

    1975-01-01

A technique for computer simulation of air combat is described. Volume 1 describes the computer program and its development in general terms. Two versions of the program exist. Both incorporate a logic for selecting and executing air combat maneuvers with performance models of specific fighter aircraft. In the batch processing version the flight paths of two aircraft engaged in interactive aerial combat and controlled by the same logic are computed. The real-time version permits human pilots to fly air-to-air combat against the adaptive maneuvering logic (AML) in the Langley Differential Maneuvering Simulator (DMS). Volume 2 consists of a detailed description of the computer programs.

  13. SITE PROGRAM DEMONSTRATION ECO LOGIC INTERNATIONAL GAS-PHASE CHEMICAL REDUCTION PROCESS, BAY CITY, MICHIGAN TECHNOLOGY EVALUATION REPORT

    EPA Science Inventory

    The SITE Program funded a field demonstration to evaluate the Eco Logic Gas-Phase Chemical Reduction Process developed by ELI Eco Logic International Inc. (ELI), Ontario, Canada. The Demonstration took place at the Middleground Landfill in Bay City, Michigan using landfill wa...

  14. The Application of Logic Programming to Communication Education.

    ERIC Educational Resources Information Center

    Sanford, David L.

Recommending that communication students be required to learn to use computers not merely as number crunchers, word processors, data bases, and graphics generators, but also as logical inference makers, this paper examines the recently developed technology of logic programming in computer languages. It presents two syllogisms and shows how they…

  15. 77 FR 35107 - Petition for Waiver of Compliance

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-12

    ... devices. CSX requests relief from 49 CFR 236.109 as it applies to variable timers within the program logic... program logic of the operating software. However, CSX notes that some microprocessor-based equipment have.../check sum/universal control number of the existing location specific application logic to the previously...

  16. A logic-based method for integer programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hooker, J.; Natraj, N.R.

    1994-12-31

We propose a logic-based approach to integer programming that replaces traditional branch-and-cut techniques with logical analogs. Integer variables are regarded as atomic propositions. The constraints give rise to logical formulas that are analogous to separating cuts. No continuous relaxation is used. Rather, the cuts are selected so that they can be easily solved as a discrete relaxation. (In fact, defining a relaxation and generating cuts are best seen as the same problem.) We experiment with relaxations that have a k-tree structure and can be solved by nonserial dynamic programming. We also present logic-based analogs of facet-defining cuts, Chvátal rank, etc. We conclude with some preliminary computational results.
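
    One way to picture a logic-based analog of a cut (our illustrative reconstruction, not the authors' procedure) is as a clause over 0-1 variables that is implied by a linear constraint. The sketch below enumerates the feasible 0-1 points of a small, made-up knapsack constraint and verifies that a candidate clause holds at all of them, i.e. that the clause is a valid discrete relaxation.

        from itertools import product

        # Hypothetical 0-1 constraint: 2*x1 + 3*x2 + 4*x3 >= 5
        def feasible(x1, x2, x3):
            return 2 * x1 + 3 * x2 + 4 * x3 >= 5

        # Candidate logic cut: the clause (x2 OR x3). Every feasible 0-1 point must
        # satisfy it, so it can stand in for the constraint as a discrete relaxation.
        def clause(x1, x2, x3):
            return x2 == 1 or x3 == 1

        valid = all(clause(*p) for p in product((0, 1), repeat=3) if feasible(*p))
        print("clause (x2 or x3) is a valid logic cut:", valid)   # -> True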

  17. Logic design for dynamic and interactive recovery.

    NASA Technical Reports Server (NTRS)

    Carter, W. C.; Jessep, D. C.; Wadia, A. B.; Schneider, P. R.; Bouricius, W. G.

    1971-01-01

Recovery in a fault-tolerant computer means the continuation of system operation with data integrity after an error occurs. This paper delineates two parallel concepts embodied in the hardware and software functions required for recovery: detection, diagnosis, and reconfiguration for the hardware; data integrity, checkpointing, and restart for the software. The hardware relies on the recovery variable set, checking circuits, and diagnostics, and the software relies on the recovery information set, audit, and reconstruct routines, to characterize the system state and assist in recovery when required. Of particular utility is a hardware unit, the recovery control unit, which serves as an interface between error detection and software recovery programs in the supervisor and provides dynamic interactive recovery.

  18. Fostering and Inspiring Research Engagement (FIRE): program logic of a research incubator scheme for allied health students.

    PubMed

    Ziviani, Jenny; Feeney, Rachel; Schabrun, Siobhan; Copland, David; Hodges, Paul

    2014-08-01

    The purpose of this study was to present the application of a logic model in depicting the underlying theory of an undergraduate research scheme for occupational therapy, physiotherapy, and speech pathology university students in Queensland, Australia. Data gathered from key written documents on the goals and intended operation of the research incubator scheme were used to create a draft (unverified) logic model. The major components of the logic model were inputs and resources, activities/outputs, and outcomes (immediate/learning, intermediate/action, and longer term/impacts). Although immediate and intermediate outcomes chiefly pertained to students' participation in honours programs, longer-term outcomes (impacts) concerned their subsequent participation in research higher-degree programs and engagement in research careers. Program logic provided an effective means of clarifying program objectives and the mechanisms by which the research incubator scheme was designed to achieve its intended outcomes. This model was developed as the basis for evaluation of the effectiveness of the scheme in achieving its stated goals.

  19. A hybrid nanomemristor/transistor logic circuit capable of self-programming

    PubMed Central

    Borghetti, Julien; Li, Zhiyong; Straznicky, Joseph; Li, Xuema; Ohlberg, Douglas A. A.; Wu, Wei; Stewart, Duncan R.; Williams, R. Stanley

    2009-01-01

    Memristor crossbars were fabricated at 40 nm half-pitch, using nanoimprint lithography on the same substrate with Si metal-oxide-semiconductor field effect transistor (MOS FET) arrays to form fully integrated hybrid memory resistor (memristor)/transistor circuits. The digitally configured memristor crossbars were used to perform logic functions, to serve as a routing fabric for interconnecting the FETs and as the target for storing information. As an illustrative demonstration, the compound Boolean logic operation (A AND B) OR (C AND D) was performed with kilohertz frequency inputs, using resistor-based logic in a memristor crossbar with FET inverter/amplifier outputs. By routing the output signal of a logic operation back onto a target memristor inside the array, the crossbar was conditionally configured by setting the state of a nonvolatile switch. Such conditional programming illuminates the way for a variety of self-programmed logic arrays, and for electronic synaptic computing. PMID:19171903

  20. A hybrid nanomemristor/transistor logic circuit capable of self-programming.

    PubMed

    Borghetti, Julien; Li, Zhiyong; Straznicky, Joseph; Li, Xuema; Ohlberg, Douglas A A; Wu, Wei; Stewart, Duncan R; Williams, R Stanley

    2009-02-10

    Memristor crossbars were fabricated at 40 nm half-pitch, using nanoimprint lithography on the same substrate with Si metal-oxide-semiconductor field effect transistor (MOS FET) arrays to form fully integrated hybrid memory resistor (memristor)/transistor circuits. The digitally configured memristor crossbars were used to perform logic functions, to serve as a routing fabric for interconnecting the FETs and as the target for storing information. As an illustrative demonstration, the compound Boolean logic operation (A AND B) OR (C AND D) was performed with kilohertz frequency inputs, using resistor-based logic in a memristor crossbar with FET inverter/amplifier outputs. By routing the output signal of a logic operation back onto a target memristor inside the array, the crossbar was conditionally configured by setting the state of a nonvolatile switch. Such conditional programming illuminates the way for a variety of self-programmed logic arrays, and for electronic synaptic computing.

  1. Conceptual Modeling via Logic Programming

    DTIC Science & Technology

    1990-01-01

Recoverable fragments of the record outline the project tasks: define a user interface and query language; define procedures for specifying output; select a logic programming language; develop a baseline change model for sessions and baselines; and develop a methodology for C3I users (Conceptual Modeling via Logic Programming, Marina del Rey, Calif.).

  2. Logic Models in Out-of-School Time Programs: What Are They and Why Are They Important? Research-to-Results Brief. Publication #2007-01

    ERIC Educational Resources Information Center

    Hamilton, Jenny; Bronte-Tinkew, Jacinta

    2007-01-01

    A logic model, also called a conceptual model and theory-of-change model, is a visual representation of how a program is expected to "work." It relates resources, activities, and the intended changes or impacts that a program is expected to create. Typically, logic models are diagrams or flow charts with illustrations, text, and arrows that…

  3. Development of a Logic Model for a Physical Activity–Based Employee Wellness Program for Mass Transit Workers

    PubMed Central

    Petruzzello, Steven J.; Ryan, Katherine E.

    2014-01-01

    Transportation workers, who constitute a large sector of the workforce, have worksite factors that harm their health. Worksite wellness programs must target this at-risk population. Although physical activity is often a component of worksite wellness logic models, we consider it the cornerstone for improving the health of mass transit employees. Program theory was based on in-person interviews and focus groups of employees. We identified 4 short-term outcome categories, which provided a chain of responses based on the program activities that should lead to the desired end results. This logic model may have significant public health impact, because it can serve as a framework for other US mass transit districts and worksite populations that face similar barriers to wellness, including truck drivers, railroad employees, and pilots. The objective of this article is to discuss the development of a logic model for a physical activity–based mass-transit employee wellness program by describing the target population, program theory, the components of the logic model, and the process of its development. PMID:25032838

  4. Development of a logic model for a physical activity-based employee wellness program for mass transit workers.

    PubMed

    Das, Bhibha M; Petruzzello, Steven J; Ryan, Katherine E

    2014-07-17

    Transportation workers, who constitute a large sector of the workforce, have worksite factors that harm their health. Worksite wellness programs must target this at-risk population. Although physical activity is often a component of worksite wellness logic models, we consider it the cornerstone for improving the health of mass transit employees. Program theory was based on in-person interviews and focus groups of employees. We identified 4 short-term outcome categories, which provided a chain of responses based on the program activities that should lead to the desired end results. This logic model may have significant public health impact, because it can serve as a framework for other US mass transit districts and worksite populations that face similar barriers to wellness, including truck drivers, railroad employees, and pilots. The objective of this article is to discuss the development of a logic model for a physical activity-based mass-transit employee wellness program by describing the target population, program theory, the components of the logic model, and the process of its development.

  5. THRESHOLD LOGIC IN ARTIFICIAL INTELLIGENCE

    DTIC Science & Technology

COMPUTER LOGIC, ARTIFICIAL INTELLIGENCE, BIONICS, GEOMETRY, INPUT OUTPUT DEVICES, LINEAR PROGRAMMING, MATHEMATICAL LOGIC, MATHEMATICAL PREDICTION, NETWORKS, PATTERN RECOGNITION, PROBABILITY, SWITCHING CIRCUITS, SYNTHESIS

  6. Adding Resistances and Capacitances in Introductory Electricity

    NASA Astrophysics Data System (ADS)

    Efthimiou, C. J.; Llewellyn, R. A.

    2005-09-01

    All introductory physics textbooks, with or without calculus, cover the addition of both resistances and capacitances in series and in parallel as discrete summations. However, none includes problems that involve continuous versions of resistors in parallel or capacitors in series. This paper introduces a method for solving the continuous problems that is logical, straightforward, and within the mathematical preparation of students at the introductory level.
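
    As a hedged illustration of the kind of continuous problem meant here (our own example, not necessarily the one worked in the paper): consider a slab of length L and thickness t whose resistivity rho(x) varies across its width w. Cutting it into strips of width dx, each strip is a resistor connected in parallel with the others, so the conductances add as an integral:

        \[ \frac{1}{R_{\mathrm{eq}}} = \int_{0}^{w} \frac{t\,dx}{\rho(x)\,L}, \qquad \text{e.g. } \rho(x) = \rho_{0}\left(1 + \frac{x}{w}\right) \;\Rightarrow\; \frac{1}{R_{\mathrm{eq}}} = \frac{t\,w\,\ln 2}{\rho_{0}\,L}. \]

    The continuous series-capacitor case follows the same pattern, with the reciprocal capacitances (elastances) adding along the integration variable.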

  7. A Novel Approach to Realize of All Optical Frequency Encoded Dibit Based XOR and XNOR Logic Gates Using Optical Switches with Simulated Verification

    NASA Astrophysics Data System (ADS)

    Ghosh, B.; Hazra, S.; Haldar, N.; Roy, D.; Patra, S. N.; Swarnakar, J.; Sarkar, P. P.; Mukhopadhyay, S.

    2018-03-01

Over the last few decades, optics has proved its strong potential for conducting parallel logic, arithmetic and algebraic operations, owing to its super-fast speed in communication and computation. Many different logical and sequential operations using the all-optical frequency encoding technique have been proposed by several authors. Here, we adopt the all-optical dibit representation technique, which has the advantages of high-speed operation as well as reducing the bit-error problem. Exploiting this approach, we propose all-optical frequency-encoded dibit-based XOR and XNOR logic gates using optical switches such as the add/drop multiplexer (ADM) and the reflected semiconductor optical amplifier (RSOA). The operations of these gates have been verified through simulation using MATLAB (R2008a).

  8. Interpretation of Logical Words in Mandarin-Speaking Children with Autism Spectrum Disorders: Uncovering Knowledge of Semantics and Pragmatics.

    PubMed

    Su, Yi Esther; Su, Lin-Yan

    2015-07-01

    This study investigated the interpretation of the logical words 'some' and 'every…or…' in 4-15-year-old high-functioning Mandarin-speaking children with autism spectrum disorders (ASD). Children with ASD performed similarly to typical controls in demonstrating semantic knowledge of simple sentences with 'some', and they had delayed knowledge of the complex sentences with 'every…or…'. Interestingly, the children with ASD had pragmatic knowledge of the scalar implicatures of these logical words, parallel to those of the typical controls. Taken together, the interpretation of logical words may be a relative strength in children with ASD. It is possible that some aspects of semantics and pragmatics may be selectively spared in ASD, due to the contribution the language faculty makes to language acquisition in the ASD population.

  9. Scalable DB+IR Technology: Processing Probabilistic Datalog with HySpirit.

    PubMed

    Frommholz, Ingo; Roelleke, Thomas

    2016-01-01

Probabilistic Datalog (PDatalog, proposed in 1995) is a probabilistic variant of Datalog and a nice conceptual idea for modelling Information Retrieval in a logical, rule-based programming paradigm. Making PDatalog work in real-world applications requires more than probabilistic facts and rules and the semantics associated with the evaluation of the programs. We report in this paper some of the key features of the HySpirit system required to scale the execution of PDatalog programs. Firstly, there is the requirement to express probability estimation in PDatalog. Secondly, fuzzy-like predicates are required to model vague predicates (e.g. vague match of attributes such as age or price). Thirdly, to handle large data sets there are scalability issues to be addressed, and therefore HySpirit provides probabilistic relational indexes and parallel and distributed processing. The main contribution of this paper is a consolidated view of the methods of the HySpirit system to make PDatalog applicable in real-scale applications that involve a wide range of requirements typical for data (information) management and analysis.
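
    A minimal sketch of the flavour of probabilistic-rule evaluation (our own toy example with an explicit independence assumption; neither HySpirit's syntax nor its evaluation strategy): when a goal can be derived through several probabilistic facts, its probability under independence is one minus the product of the failure probabilities of the derivations.

        # Toy probabilistic facts, roughly "0.7 term(apple, d1)." and "0.5 term(fruit, d1)."
        facts = {("apple", "d1"): 0.7, ("fruit", "d1"): 0.5}

        # Illustrative rule: about(D) :- term(T, D).
        # Every matching fact is one independent derivation of about(D).
        def prob_about(doc):
            fail = 1.0
            for (_term, d), p in facts.items():
                if d == doc:
                    fail *= (1.0 - p)      # independence assumption across derivations
            return 1.0 - fail

        print(round(prob_about("d1"), 2))  # 0.85 = 1 - (1 - 0.7) * (1 - 0.5)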

  10. Runtime Analysis of Linear Temporal Logic Specifications

    NASA Technical Reports Server (NTRS)

    Giannakopoulou, Dimitra; Havelund, Klaus

    2001-01-01

This report presents an approach to checking a running program against its Linear Temporal Logic (LTL) specifications. LTL is a widely used logic for expressing properties of programs viewed as sets of executions. Our approach consists of translating LTL formulae to finite-state automata, which are used as observers of the program behavior. The translation algorithm we propose modifies standard LTL to Büchi automata conversion techniques to generate automata that check finite program traces. The algorithm has been implemented in a tool, which has been integrated with the generic JPaX framework for runtime analysis of Java programs.
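
    For the flavour of finite-trace checking (a toy sketch, not the automata-based JPaX implementation described above), the response property G(p -> F q) can be read over a finite trace as "every step containing p is followed, at that step or later, by a step containing q":

        # Each trace event is the set of propositions observed at that step.
        trace = [{"p"}, {"r"}, {"q"}, {"p"}, {"q"}]

        def check_response(trace, p="p", q="q"):
            """Finite-trace reading of G(p -> F q)."""
            for i, step in enumerate(trace):
                if p in step and not any(q in later for later in trace[i:]):
                    return False
            return True

        print(check_response(trace))             # True
        print(check_response([{"p"}, {"r"}]))    # False: p is never answered by q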

  11. Serial DNA relay in DNA logic gates by electrical fusion and mechanical splitting of droplets

    PubMed Central

    Kawano, Ryuji; Takinoue, Masahiro; Osaki, Toshihisa; Kamiya, Koki; Miki, Norihisa

    2017-01-01

DNA logic circuits utilizing DNA hybridization and/or enzymatic reactions have drawn increasing attention for their potential applications in the diagnosis and treatment of cellular diseases. The compartmentalization of such a system into a microdroplet considerably helps to precisely regulate local interactions and reactions between molecules. In this study, we introduced a relay approach for enabling the transfer of DNA from one droplet to another to implement multi-step sequential logic operations. We proposed electrical fusion and mechanical splitting of droplets to facilitate the DNA flow at the inputs, logic operation, output, and serial connection between two logic gates. We developed a Negative-OR operation through the serial connection of an OR gate and a NOT gate incorporated in a series of droplets. The four types of input defined by the presence/absence of DNA in the input droplet pair were correctly reflected in the readout at the Negative-OR gate. The proposed approach potentially allows for serial and parallel logic operations that could be used for complex diagnostic applications. PMID:28700641

  12. Reflections on writing hydrologic reports

    USGS Publications Warehouse

    Olcott, Perry G.

    1987-01-01

Reporting of scientific work should be characterized by a logical argument that is developed through presentation of the problem, tabulation and display of data pertinent to the problem, and testing and interpretation of the data to prove hypotheses that address the problem. Organization of the report is vital to developing this logical argument: it provides structure, continuity, logic, and emphasis to the presentation. Each part of the report serves a specific function and each is linked by a connecting logic, the logical argument of the report. Each scientific report normally has a title, table of contents, abstract, introduction, body (of the report), and summary and/or conclusions. Organization of sections within the body of the report is exactly parallel to overall organization; subjects presented in the section title are developed by logical subdivisions and pertinent discussion. The summary and/or conclusions section culminates the logical argument of the report by drawing together and quantitatively reiterating the principal conclusions developed in the discussion. Supplemental information on report content, background of the study, additional data or details on procedures, and other information of interest to the reader is presented in the foreword or preface, list of illustrations or tables, glossaries, and appendixes. (Lantz-PTT)

  13. ARACHNE: A neural-neuroglial network builder with remotely controlled parallel computing

    PubMed Central

    Rusakov, Dmitri A.; Savtchenko, Leonid P.

    2017-01-01

Creating and running realistic models of neural networks has hitherto been a task for computing professionals rather than experimental neuroscientists. This is mainly because such networks usually engage substantial computational resources, the handling of which requires specific programming skills. Here we put forward a newly developed simulation environment, ARACHNE: it enables an investigator to build and explore cellular networks of arbitrary biophysical and architectural complexity using the logic of NEURON and a simple interface on a local computer or a mobile device. The interface can control, through the internet, an optimized computational kernel installed on a remote computer cluster. ARACHNE can combine neuronal (wired) and astroglial (extracellular volume-transmission driven) network types and adopt realistic cell models from the NEURON library. The program and documentation (current version) are available at GitHub repository https://github.com/LeonidSavtchenko/Arachne under the MIT License (MIT). PMID:28362877

  14. Kip, Version 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Staley, Martin

    2017-09-20

This high-performance ray tracing library provides very fast rendering; compact code; type flexibility through C++ "generic programming" techniques; and ease of use via an application programming interface (API) that operates independently of any GUI, on-screen display, or other enclosing application. Kip supports constructive solid geometry (CSG) models based on a wide variety of built-in shapes and logical operators, and also allows for user-defined shapes and operators to be provided. Additional features include basic texturing; input/output of models using a simple human-readable file format and with full error checking and detailed diagnostics; and support for shared data parallelism. Kip is written in pure, ANSI standard C++; is entirely platform independent; and is very easy to use. As a C++ "header only" library, it requires no build system, configuration or installation scripts, wizards, non-C++ preprocessing, makefiles, shell scripts, or external libraries.

  15. Families of Graph Algorithms: SSSP Case Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kanewala Appuhamilage, Thejaka Amila Jay; Zalewski, Marcin J.; Lumsdaine, Andrew

    2017-08-28

Single-Source Shortest Paths (SSSP) is a well-studied graph problem. Examples of SSSP algorithms include the original Dijkstra's algorithm and the parallel Δ-stepping and KLA-SSSP algorithms. In this paper, we use a novel Abstract Graph Machine (AGM) model to show that all these algorithms share a common logic and differ from one another by the order in which they perform work. We use the AGM model to thoroughly analyze the family of algorithms that arises from the common logic. We start with the basic algorithm without any ordering (Chaotic), and then we derive the existing and new algorithms by methodically exploring semantic and spatial ordering of work. Our experimental results show that the newly derived algorithms show better performance than the existing distributed-memory parallel algorithms, especially at higher scales.
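
    For orientation, here is a minimal Dijkstra sketch (our own, not the AGM formulation); Δ-stepping and KLA-SSSP can be seen as relaxing the strict priority order of the heap below into buckets or bounded asynchrony, which is precisely the ordering dimension the AGM model isolates.

        import heapq

        def dijkstra(graph, source):
            """graph: dict mapping node -> list of (neighbor, weight)."""
            dist = {source: 0}
            heap = [(0, source)]
            while heap:
                d, u = heapq.heappop(heap)
                if d > dist.get(u, float("inf")):
                    continue                      # stale heap entry
                for v, w in graph.get(u, []):
                    nd = d + w
                    if nd < dist.get(v, float("inf")):
                        dist[v] = nd
                        heapq.heappush(heap, (nd, v))
            return dist

        g = {"a": [("b", 2), ("c", 5)], "b": [("c", 1)], "c": []}
        print(dijkstra(g, "a"))   # {'a': 0, 'b': 2, 'c': 3}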

  16. Program Theory Evaluation: Logic Analysis

    ERIC Educational Resources Information Center

    Brousselle, Astrid; Champagne, Francois

    2011-01-01

    Program theory evaluation, which has grown in use over the past 10 years, assesses whether a program is designed in such a way that it can achieve its intended outcomes. This article describes a particular type of program theory evaluation--logic analysis--that allows us to test the plausibility of a program's theory using scientific knowledge.…

  17. Parallel pumping for magnon spintronics: Amplification and manipulation of magnon spin currents on the micron-scale

    NASA Astrophysics Data System (ADS)

    Brächer, T.; Pirro, P.; Hillebrands, B.

    2017-06-01

    Magnonics and magnon spintronics aim at the utilization of spin waves and magnons, their quanta, for the construction of wave-based logic networks via the generation of pure all-magnon spin currents and their interfacing with electric charge transport. The promise of efficient parallel data processing and low power consumption renders this field one of the most promising research areas in spintronics. In this context, the process of parallel parametric amplification, i.e., the conversion of microwave photons into magnons at one half of the microwave frequency, has proven to be a versatile tool to excite and to manipulate spin waves. Its beneficial and unique properties such as frequency and mode-selectivity, the possibility to excite spin waves in a wide wavevector range and the creation of phase-correlated wave pairs, have enabled the achievement of important milestones like the magnon Bose-Einstein condensation and the cloning and trapping of spin-wave packets. Parallel parametric amplification, which allows for the selective amplification of magnons while conserving their phase is, thus, one of the key methods of spin-wave generation and amplification. The application of parallel parametric amplification to CMOS-compatible micro- and nano-structures is an important step towards the realization of magnonic networks. This is motivated not only by the fact that amplifiers are an important tool for the construction of any extended logic network but also by the unique properties of parallel parametric amplification. In particular, the creation of phase-correlated wave pairs allows for rewarding alternative logic operations such as a phase-dependent amplification of the incident waves. Recently, the successful application of parallel parametric amplification to metallic microstructures has been reported which constitutes an important milestone for the application of magnonics in practical devices. It has been demonstrated that parametric amplification provides an excellent tool to generate and to amplify spin waves in these systems in a wide wavevector range. In particular, the amplification greatly benefits from the discreteness of the spin-wave spectra since the size of the microstructures is comparable to the spin-wave wavelength. This opens up new, interesting routes of spin-wave amplification and manipulation. In this review, we will give an overview over the recent developments and achievements in this field.
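
    The kinematics behind parallel parametric amplification can be summarized compactly (standard textbook relations, not results specific to this review): a microwave pump photon of frequency \(\omega_p\) splits into a magnon pair subject to energy and momentum conservation,

        \[ \omega_{p} = \omega_{1} + \omega_{2}, \qquad \mathbf{k}_{1} + \mathbf{k}_{2} \approx 0, \]

    and in the degenerate case emphasized above both magnons appear at half the pump frequency, \(\omega_{1} = \omega_{2} = \omega_{p}/2\), with opposite wavevectors; this is the origin of the phase-correlated wave pairs mentioned in the abstract.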

  18. Pecan Research and Outreach in New Mexico: Logic Model Development and Change in Communication Paradigms

    ERIC Educational Resources Information Center

    Sammis, Theodore W.; Shukla, Manoj K.; Mexal, John G.; Wang, Junming; Miller, David R.

    2013-01-01

    Universities develop strategic planning documents, and as part of that planning process, logic models are developed for specific programs within the university. This article examines the long-standing pecan program at New Mexico State University and the deficiencies and successes in the evolution of its logic model. The university's agricultural…

  19. A retrospective review of the Honduras AIN-C program guided by a community health worker performance logic model.

    PubMed

    Rodríguez, Daniela C; Peterson, Lauren A

    2016-05-06

    Factors that influence performance of community health workers (CHWs) delivering health services are not well understood. A recent logic model proposed categories of support from both health sector and communities that influence CHW performance and program outcomes. This logic model has been used to review a growth monitoring program delivered by CHWs in Honduras, known as Atención Integral a la Niñez en la Comunidad (AIN-C). A retrospective review of AIN-C was conducted through a document desk review and supplemented with in-depth interviews. Documents were systematically coded using the categories from the logic model, and gaps were addressed through interviews. Authors reviewed coded data for each category to analyze program details and outcomes as well as identify potential issues and gaps in the logic model. Categories from the logic model were inconsistently represented, with more information available for health sector than community. Context and input activities were not well documented. Information on health sector systems-level activities was available for governance but limited for other categories, while not much was found for community systems-level activities. Most available information focused on program-level activities with substantial data on technical support. Output, outcome, and impact data were drawn from various resources and suggest mixed results of AIN-C on indicators of interest. Assessing CHW performance through a desk review left gaps that could not be addressed about the relationship of activities and performance. There were critical characteristics of program design that made it contextually appropriate; however, it was difficult to identify clear links between AIN-C and malnutrition indicators. Regarding the logic model, several categories were too broad (e.g., technical support, context) and some aspects of AIN-C did not fit neatly in logic model categories (e.g., political commitment, equity, flexibility in implementation). The CHW performance logic model has potential as a tool for program planning and evaluation but would benefit from additional supporting tools and materials to facilitate and operationalize its use.

  20. A Self-Paced Introductory Programming Course

    ERIC Educational Resources Information Center

    Gill, T. Grandon; Holton, Carolyn F.

    2006-01-01

    In this paper, a required introductory programming course being taught to MIS undergraduates using the C++ programming language is described. Two factors make the objectives of the course--which are to provide students with an exposure to the logical organization of the computer in addition to teaching them basic programming logic--particularly…

  1. An Overview of the Runtime Verification Tool Java PathExplorer

    NASA Technical Reports Server (NTRS)

    Havelund, Klaus; Rosu, Grigore; Clancy, Daniel (Technical Monitor)

    2002-01-01

    We present an overview of the Java PathExplorer runtime verification tool, in short referred to as JPAX. JPAX can monitor the execution of a Java program and check that it conforms with a set of user provided properties formulated in temporal logic. JPAX can in addition analyze the program for concurrency errors such as deadlocks and data races. The concurrency analysis requires no user provided specification. The tool facilitates automated instrumentation of a program's bytecode, which when executed will emit an event stream, the execution trace, to an observer. The observer dispatches the incoming event stream to a set of observer processes, each performing a specialized analysis, such as the temporal logic verification, the deadlock analysis and the data race analysis. Temporal logic specifications can be formulated by the user in the Maude rewriting logic, where Maude is a high-speed rewriting system for equational logic, but here extended with executable temporal logic. The Maude rewriting engine is then activated as an event driven monitoring process. Alternatively, temporal specifications can be translated into efficient automata, which check the event stream. JPAX can be used during program testing to gain increased information about program executions, and can potentially furthermore be applied during operation to survey safety critical systems.

  2. Complete all-optical processing polarization-based binary logic gates and optical processors.

    PubMed

    Zaghloul, Y A; Zaghloul, A R M

    2006-10-16

We present a complete all-optical-processing polarization-based binary-logic system, by which any logic gate or processor can be implemented. Following the new polarization-based logic presented in [Opt. Express 14, 7253 (2006)], we develop a new parallel processing technique that allows for the creation of all-optical-processing gates that produce a unique output, either logic 1 or 0, only once in a truth table, and those that do not. This representation allows for the implementation of simple unforced OR, AND, XOR, XNOR, inverter, and more importantly NAND and NOR gates that can be used independently to represent any Boolean expression or function. In addition, the concept of a generalized gate is presented, which opens the door for reconfigurable optical processors and programmable optical logic gates. Furthermore, the new design is completely compatible with the old one presented in [Opt. Express 14, 7253 (2006)], and with current semiconductor-based devices. The gates can be cascaded, where the information is always on the laser beam. The polarization of the beam, and not its intensity, carries the information. The new methodology allows for the creation of multiple-input-multiple-output processors that implement, by themselves, any Boolean function, such as specialized or non-specialized microprocessors. Three all-optical architectures are presented: an orthoparallel optical logic architecture for all known and unknown binary gates, a single-branch architecture for only XOR and XNOR gates, and the railroad (RR) architecture for polarization optical processors (POP). All the control inputs are applied simultaneously, leading to a single time lag, which results in a very fast and glitch-immune POP. A simple and easy-to-follow step-by-step algorithm is provided for the POP, and design reduction methodologies are briefly discussed. The algorithm lends itself systematically to software programming and computer-assisted design. As examples, designs of all binary gates, multiple-input gates, and sequential and non-sequential Boolean expressions are presented and discussed. The operation of each design is simply understood by a bullet train traveling at the speed of light on a railroad system preconditioned by the crossover states predetermined by the control inputs. The presented designs allow for optical processing of the information, eliminating the need to convert it back and forth to an electronic signal for processing purposes. All gates with a truth table, including for example Fredkin, Toffoli, testable reversible logic, and threshold logic gates, can be designed and implemented using the railroad architecture. That includes any future gates not known today. Those designs and the quantum gates are not discussed in this paper.

  3. Artificial Intelligence (AI) Based Tactical Guidance for Fighter Aircraft

    NASA Technical Reports Server (NTRS)

    McManus, John W.; Goodrich, Kenneth H.

    1990-01-01

A research program investigating the use of Artificial Intelligence (AI) techniques to aid in the development of a Tactical Decision Generator (TDG) for Within Visual Range (WVR) air combat engagements is discussed. The application of AI programming and problem solving methods in the development and implementation of the Computerized Logic For Air-to-Air Warfare Simulations (CLAWS), a second generation TDG, is presented. The Knowledge-Based Systems used by CLAWS to aid in the tactical decision-making process are outlined in detail, and the results of tests to evaluate the performance of CLAWS versus a baseline TDG developed in FORTRAN to run in real time in the Langley Differential Maneuvering Simulator (DMS) are presented. To date, these test results have shown significant performance gains with respect to the TDG baseline in one-versus-one air combat engagements, and the AI-based TDG software has proven to be much easier to modify and maintain than the baseline FORTRAN TDG programs. Alternate computing environments and programming approaches, including the use of parallel algorithms and heterogeneous computer networks are discussed, and the design and performance of a prototype concurrent TDG system are presented.

  4. OncoLogicTM

    EPA Science Inventory

    OncoLogicTM - A Computer System to Evaluate the Carcinogenic Potential of Chemicals
    OncoLogicTM is a software program that evaluates the likelihood that a chemical may cause cancer. OncoLogicTM has been peer reviewed and is being rele...

  5. Hand-Held Calculator Algorithms for Coastal Engineering.

    DTIC Science & Technology

    1982-01-01

Water depth at the structure toe, ds, and the development of the equation are given on the solution sheet included with program 104R. The remaining fragments are table-of-contents entries: limited design breaking wave height at structure (AOS logic); 105R wave transmission, Fuchs' equation (RPN logic); 105A wave transmission, Fuchs' equation (AOS logic); and an appendix of blank program forms.

  6. Coinductive Logic Programming with Negation

    NASA Astrophysics Data System (ADS)

    Min, Richard; Gupta, Gopal

We introduce negation into coinductive logic programming (co-LP) via what we term Coinductive SLDNF (co-SLDNF) resolution. We present the declarative and operational semantics of co-SLDNF resolution and show their equivalence under the restriction of rationality. Co-LP with co-SLDNF resolution provides a powerful, practical and efficient operational semantics for Fitting's Kripke-Kleene three-valued logic with the restriction of rationality. Further, applications of co-SLDNF resolution are discussed and illustrated, showing that it allows one to develop elegant implementations of modal logics. Moreover, it provides the capability of non-monotonic inference (e.g., predicate Answer Set Programming) that can be used to develop novel and effective first-order modal non-monotonic inference engines.

  7. LOGSIM user's manual. [Logic Simulation Program for computer aided design of logic circuits

    NASA Technical Reports Server (NTRS)

    Mitchell, C. L.; Taylor, J. F.

    1972-01-01

The user's manual for the LOGSIM Program is presented. All program options are explained and a detailed definition of the format of each input card is given. LOGSIM Program operations and the preparation of LOGSIM input data are discussed, along with data card formats, postprocessor data cards, and output interpretation.

  8. Logic Models: A Tool for Designing and Monitoring Program Evaluations. REL 2014-007

    ERIC Educational Resources Information Center

    Lawton, Brian; Brandon, Paul R.; Cicchinelli, Louis; Kekahio, Wendy

    2014-01-01

This introduction to logic models as a tool for designing program evaluations defines the major components of education programs--resources, activities, outputs, and short-, mid-, and long-term outcomes--and uses an example to demonstrate the relationships among them. This quick…

  9. A Logical Analysis of Quantum Voting Protocols

    NASA Astrophysics Data System (ADS)

    Rad, Soroush Rafiee; Shirinkalam, Elahe; Smets, Sonja

    2017-12-01

    In this paper we provide a logical analysis of the Quantum Voting Protocol for Anonymous Surveying as developed by Horoshko and Kilin in (Phys. Lett. A 375, 1172-1175 2011). In particular we make use of the probabilistic logic of quantum programs as developed in (Int. J. Theor. Phys. 53, 3628-3647 2014) to provide a formal specification of the protocol and to derive its correctness. Our analysis is part of a wider program on the application of quantum logics to the formal verification of protocols in quantum communication and quantum computation.

  10. Executing scatter operation to parallel computer nodes by repeatedly broadcasting content of send buffer partition corresponding to each node upon bitwise OR operation

    DOEpatents

    Archer, Charles J [Rochester, MN; Ratterman, Joseph D [Rochester, MN

    2009-11-06

    Executing a scatter operation on a parallel computer includes: configuring a send buffer on a logical root, the send buffer having positions, each position corresponding to a ranked node in an operational group of compute nodes and for storing contents scattered to that ranked node; and repeatedly for each position in the send buffer: broadcasting, by the logical root to each of the other compute nodes on a global combining network, the contents of the current position of the send buffer using a bitwise OR operation, determining, by each compute node, whether the current position in the send buffer corresponds with the rank of that compute node, if the current position corresponds with the rank, receiving the contents and storing the contents in a reception buffer of that compute node, and if the current position does not correspond with the rank, discarding the contents.
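
    A hedged mpi4py emulation of the mechanism just described (an illustration of the idea, not the patented hardware path through the global combining network): for each send-buffer position, the logical root contributes that slice and every other node contributes zeros, the contributions are combined with a bitwise OR, and only the node whose rank equals the current position keeps the result.

        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()
        root, chunk = 0, 4

        # The send buffer exists only on the logical root: one chunk per rank.
        send = np.arange(size * chunk, dtype=np.int64) if rank == root else None
        recv = np.zeros(chunk, dtype=np.int64)

        for pos in range(size):
            if rank == root:
                contrib = send[pos * chunk:(pos + 1) * chunk].copy()
            else:
                contrib = np.zeros(chunk, dtype=np.int64)
            combined = np.zeros(chunk, dtype=np.int64)
            comm.Allreduce(contrib, combined, op=MPI.BOR)   # zeros OR data == data
            if pos == rank:
                recv[:] = combined                          # keep only "my" chunk

        print(rank, recv)   # run under mpirun, e.g. mpirun -n 4 python scatter_or.py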

  11. Storage of sparse files using parallel log-structured file system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bent, John M.; Faibish, Sorin; Grider, Gary

A sparse file is stored without holes by storing a data portion of the sparse file using a parallel log-structured file system; and generating an index entry for the data portion, the index entry comprising a logical offset, physical offset and length of the data portion. The holes can be restored to the sparse file upon a reading of the sparse file. The data portion can be stored at a logical end of the sparse file. Additional storage efficiency can optionally be achieved by (i) detecting a write pattern for a plurality of the data portions and generating a single patterned index entry for the plurality of the patterned data portions; and/or (ii) storing the patterned index entries for a plurality of the sparse files in a single directory, wherein each entry in the single directory comprises an identifier of a corresponding sparse file.
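
    A simplified sketch of the index idea (our own toy model, not the actual parallel log-structured file system code): only data portions are appended to a log, each with an index entry holding its logical offset, physical offset, and length; reads consult the index and fill uncovered ranges with zeros, which restores the holes.

        from dataclasses import dataclass

        @dataclass
        class IndexEntry:
            logical_off: int   # offset in the sparse file
            physical_off: int  # offset in the packed log
            length: int

        log = bytearray()
        index = []

        def write(logical_off, data):
            index.append(IndexEntry(logical_off, len(log), len(data)))
            log.extend(data)                       # data lands at the log's end

        def read(logical_off, length):
            out = bytearray(length)                # holes read back as zeros
            for e in index:
                lo = max(logical_off, e.logical_off)
                hi = min(logical_off + length, e.logical_off + e.length)
                if lo < hi:
                    src = e.physical_off + (lo - e.logical_off)
                    out[lo - logical_off:hi - logical_off] = log[src:src + (hi - lo)]
            return bytes(out)

        write(0, b"head")
        write(100, b"tail")            # logical hole between offsets 4 and 100
        print(read(0, 8), read(96, 8)) # b'head\x00\x00\x00\x00' b'\x00\x00\x00\x00tail'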

  12. Constraint Logic Programming approach to protein structure prediction.

    PubMed

    Dal Palù, Alessandro; Dovier, Agostino; Fogolari, Federico

    2004-11-30

The protein structure prediction problem is one of the most challenging problems in biological sciences. Many approaches have been proposed using database information and/or simplified protein models. The protein structure prediction problem can be cast in the form of an optimization problem. Notwithstanding its importance, the problem has very seldom been tackled by Constraint Logic Programming, a declarative programming paradigm suitable for solving combinatorial optimization problems. Constraint Logic Programming techniques have been applied to the protein structure prediction problem on the face-centered cube lattice model. Molecular dynamics techniques, endowed with the notion of constraint, have been also exploited. Even using a very simplified model, Constraint Logic Programming on the face-centered cube lattice model allowed us to obtain acceptable results for a few small proteins. As a test implementation, the proteins' (known) secondary structure and the presence of disulfide bridges are used as constraints. Simplified structures obtained in this way have been converted to all-atom models with plausible structure. Results have been compared with a similar approach using a well-established technique, molecular dynamics. The results obtained on small proteins show that Constraint Logic Programming techniques can be employed for studying simplified protein models, which can be converted into realistic all-atom models. The advantage of Constraint Logic Programming over other, much more explored, methodologies resides in the rapid software prototyping, in the easy way of encoding heuristics, and in exploiting all the advances made in this research area, e.g. in constraint propagation and its use for pruning the huge search space.

  13. (Re) Making the Procrustean Bed? Standardization and Customization as Competing Logics in Healthcare

    PubMed Central

    Mannion, Russell; Exworthy, Mark

    2017-01-01

    Recent years have witnessed a parallel and seemingly contradictory trend towards both the standardization and the customization of healthcare and medical treatment. Here, we explore what is meant by ‘standardization’ and ‘customization’ in healthcare settings and explore the implications of these changes for healthcare delivery. We frame the paradox of these divergent and opposing factors in terms of institutional logics – the socially constructed rules, practices and beliefs which perpetuate institutional behaviour. As the tension between standardization and customization is fast becoming a critical fault-line within many health systems, there remains an urgent need for more sustained work exploring how these competing logics are articulated, adapted, resisted and co-exist on the front line of care delivery. PMID:28812821

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, Kristan D.; Faraj, Daniel A.

In a parallel computer, a plurality of logical planes formed of compute nodes of a subcommunicator may be identified by: for each compute node of the subcommunicator and for a number of dimensions beginning with a first dimension: establishing, by a plane building node, in a positive direction of the first dimension, all logical planes that include the plane building node and compute nodes of the subcommunicator in a positive direction of a second dimension, where the second dimension is orthogonal to the first dimension; and establishing, by the plane building node, in a negative direction of the first dimension, all logical planes that include the plane building node and compute nodes of the subcommunicator in the positive direction of the second dimension.

  15. Proposal for nanoscale cascaded plasmonic majority gates for non-Boolean computation.

    PubMed

    Dutta, Sourav; Zografos, Odysseas; Gurunarayanan, Surya; Radu, Iuliana; Soree, Bart; Catthoor, Francky; Naeemi, Azad

    2017-12-19

Surface-plasmon-polariton waves propagating at the interface between a metal and a dielectric hold the key to future high-bandwidth, dense on-chip integrated logic circuits overcoming the diffraction limitation of photonics. While recent advances in plasmonic logic have witnessed the demonstration of basic and universal logic gates, these CMOS-oriented digital logic gates cannot fully utilize the expressive power of this novel technology. Here, we aim at unraveling the true potential of plasmonics by exploiting an enhanced native functionality, the majority voter. Contrary to the state-of-the-art plasmonic logic devices, we use the phase of the wave instead of the intensity as the state or computational variable. We propose and demonstrate, via numerical simulations, a comprehensive scheme for building a nanoscale cascadable plasmonic majority logic gate along with a novel referencing scheme that can directly translate the information encoded in the amplitude and phase of the wave into electric field intensity at the output. Our MIM-based 3-input majority gate displays a highly improved overall area of only 0.636 μm² for a single stage compared with previous works on plasmonic logic. The proposed device demonstrates non-Boolean computational capability and can find direct utility in highly parallel real-time signal processing applications like pattern recognition.
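
    The appeal of the majority primitive is easy to see with plain Boolean algebra (generic facts, not the plasmonic device model): a 3-input majority gate collapses to AND or OR when one input is pinned to a constant, which is why majority plus inversion is functionally complete.

        def maj(a, b, c):
            """3-input majority: true when at least two inputs are true."""
            return (a + b + c) >= 2

        for a in (0, 1):
            for b in (0, 1):
                assert maj(a, b, 0) == (a and b)   # input tied to 0 -> AND
                assert maj(a, b, 1) == (a or b)    # input tied to 1 -> OR
        print("MAJ(a, b, 0) = AND, MAJ(a, b, 1) = OR")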

  16. Pressure driven digital logic in PDMS based microfluidic devices fabricated by multilayer soft lithography.

    PubMed

    Devaraju, Naga Sai Gopi K; Unger, Marc A

    2012-11-21

    Advances in microfluidics now allow an unprecedented level of parallelization and integration of biochemical reactions. However, one challenge still faced by the field has been the complexity and cost of the control hardware: one external pressure signal has been required for each independently actuated set of valves on chip. Using a simple post-modification to the multilayer soft lithography fabrication process, we present a new implementation of digital fluidic logic fully analogous to electronic logic with significant performance advances over the previous implementations. We demonstrate a novel normally closed static gain valve capable of modulating pressure signals in a fashion analogous to an electronic transistor. We utilize these valves to build complex fluidic logic circuits capable of arbitrary control of flows by processing binary input signals (pressure (1) and atmosphere (0)). We demonstrate logic gates and devices including NOT, NAND and NOR gates, bi-stable flip-flops, gated flip-flops (latches), oscillators, self-driven peristaltic pumps, delay flip-flops, and a 12-bit shift register built using static gain valves. This fluidic logic shows cascade-ability, feedback, programmability, bi-stability, and autonomous control capability. This implementation of fluidic logic yields significantly smaller devices, higher clock rates, simple designs, easy fabrication, and integration into MSL microfluidics.

  17. Using a source-to-source transformation to introduce multi-threading into the AliRoot framework for a parallel event reconstruction

    NASA Astrophysics Data System (ADS)

    Lohn, Stefan B.; Dong, Xin; Carminati, Federico

    2012-12-01

Chip-Multiprocessors are going to support massive parallelism by many additional physical and logical cores. Improving performance can no longer be obtained by increasing the clock frequency, because the technical limits are almost reached. Instead, parallel execution must be used to gain performance. Resources like main memory, the cache hierarchy, bandwidth of the memory bus or links between cores and sockets are not going to improve as fast. Hence, parallelism can only result in performance gains if memory usage is optimized and the communication between threads is minimized. Besides, concurrent programming has become a domain for experts: implementing multi-threading is error prone and labor-intensive. A full reimplementation of the whole AliRoot source code is unaffordable. This paper describes the effort to evaluate the adaptation of AliRoot to the needs of multi-threading and to provide the capability of parallel processing by using a semi-automatic source-to-source transformation to address the problems described before and to provide a straightforward way of parallelization with almost no interference between threads. This makes the approach simple and reduces the required manual changes in the code. In a first step, unconditional thread-safety will be introduced to bring the original sequential and thread-unaware source code into a position to utilize multi-threading. Afterwards, further investigations have to be performed to point out candidate classes that are useful to share among threads. Then, in a second step, the transformation has to change the code to share these classes and finally to verify that no invalid interference between threads remains.

  18. A Parallel Framework with Block Matrices of a Discrete Fourier Transform for Vector-Valued Discrete-Time Signals.

    PubMed

    Soto-Quiros, Pablo

    2015-01-01

    This paper presents a parallel implementation of a kind of discrete Fourier transform (DFT): the vector-valued DFT. The vector-valued DFT is a novel tool to analyze the spectra of vector-valued discrete-time signals. This parallel implementation is developed in terms of a mathematical framework with a set of block matrix operations. These block matrix operations contribute to the analysis, design, and implementation of parallel algorithms on multicore processors. In this work, an implementation and experimental investigation of the mathematical framework are performed using MATLAB with the Parallel Computing Toolbox. We found that there is an advantage to using multicore processors and a parallel computing environment to reduce the high execution time. Additionally, the speedup increases as the number of logical processors and the length of the signal increase.
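
    As a rough illustration of the block-matrix viewpoint (a sketch only; the paper's MATLAB framework is not reproduced here), one common formalization applies the scalar N-point DFT to each component of the vector-valued signal, i.e. the block operator F_N ⊗ I_m, which is what makes per-component parallelization natural. The equivalence is checked numerically below.

```python
# Sketch of a block-matrix view of a vector-valued DFT, assuming the common
# formalization (F_N kron I_m) acting on N samples of m-dimensional vectors.
# The per-column parallel mapping is illustrative, not the paper's MATLAB code.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def dft_matrix(n: int) -> np.ndarray:
    k = np.arange(n)
    return np.exp(-2j * np.pi * np.outer(k, k) / n)

def vector_valued_dft(x: np.ndarray) -> np.ndarray:
    """x has shape (N, m): N time samples, each an m-dimensional vector."""
    n, m = x.shape
    F = np.kron(dft_matrix(n), np.eye(m))          # block DFT matrix
    return (F @ x.reshape(n * m)).reshape(n, m)

def componentwise_dft(x: np.ndarray) -> np.ndarray:
    """Equivalent per-component DFTs, mapped over the columns in parallel."""
    with ThreadPoolExecutor() as pool:
        cols = list(pool.map(np.fft.fft, (x[:, j] for j in range(x.shape[1]))))
    return np.stack(cols, axis=1)

x = np.random.default_rng(0).standard_normal((8, 3))
assert np.allclose(vector_valued_dft(x), componentwise_dft(x))
```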

  19. A Comparison of Linear and Systems Thinking Approaches for Program Evaluation Illustrated Using the Indiana Interdisciplinary GK-12

    ERIC Educational Resources Information Center

    Dyehouse, Melissa; Bennett, Deborah; Harbor, Jon; Childress, Amy; Dark, Melissa

    2009-01-01

    Logic models are based on linear relationships between program resources, activities, and outcomes, and have been used widely to support both program development and evaluation. While useful in describing some programs, the linear nature of the logic model makes it difficult to capture the complex relationships within larger, multifaceted…

  20. Logic integer programming models for signaling networks.

    PubMed

    Haus, Utz-Uwe; Niermann, Kathrin; Truemper, Klaus; Weismantel, Robert

    2009-05-01

    We propose a static and a dynamic approach to model biological signaling networks, and show how each can be used to answer relevant biological questions. For this, we use the two different mathematical tools of propositional logic and integer programming. The power of discrete mathematics for handling qualitative as well as quantitative data has so far not been exploited in molecular biology, which is mostly driven by experimental research relying on first-order or statistical models. The arising logic statements and integer programs are analyzed and can be solved with standard software. For a restricted class of problems, the logic models reduce to a satisfiability problem solvable in polynomial time. Additionally, a more dynamic model enables the enumeration of possible time resolutions in poly-logarithmic time. Computational experiments are included.
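
    To make the connection between propositional rules and integer programs concrete, the sketch below shows the textbook 0-1 linear encodings of AND and OR and brute-force checks them against the truth tables; the paper's signaling-network models are of course richer than this.

```python
# Standard 0-1 linear encodings of simple logic gates, of the kind used when
# turning propositional rules into integer programs.
from itertools import product

def and_constraints(x1: int, x2: int, y: int) -> bool:
    # y = x1 AND x2  <=>  y <= x1, y <= x2, y >= x1 + x2 - 1
    return y <= x1 and y <= x2 and y >= x1 + x2 - 1

def or_constraints(x1: int, x2: int, y: int) -> bool:
    # y = x1 OR x2   <=>  y >= x1, y >= x2, y <= x1 + x2
    return y >= x1 and y >= x2 and y <= x1 + x2

for x1, x2 in product((0, 1), repeat=2):
    for y in (0, 1):
        assert and_constraints(x1, x2, y) == (y == (x1 and x2))
        assert or_constraints(x1, x2, y) == (y == (x1 or x2))
print("0-1 encodings match the AND/OR truth tables")
```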

  1. Circuit for high resolution decoding of multi-anode microchannel array detectors

    NASA Technical Reports Server (NTRS)

    Kasle, David B. (Inventor)

    1995-01-01

    A circuit for high resolution decoding of multi-anode microchannel array detectors consisting of input registers accepting transient inputs from the anode array; anode encoding logic circuits connected to the input registers; midpoint pipeline registers connected to the anode encoding logic circuits; and pixel decoding logic circuits connected to the midpoint pipeline registers is described. A high resolution algorithm circuit operates in parallel with the pixel decoding logic circuit and computes a high resolution least significant bit to enhance the multianode microchannel array detector's spatial resolution by halving the pixel size and doubling the number of pixels in each axis of the anode array. A multiplexer is connected to the pixel decoding logic circuit and allows a user selectable pixel address output according to the actual multi-anode microchannel array detector anode array size. An output register concatenates the high resolution least significant bit onto the standard ten bit pixel address location to provide an eleven bit pixel address, and also stores the full eleven bit pixel address. A timing and control state machine is connected to the input registers, the anode encoding logic circuits, and the output register for managing the overall operation of the circuit.

  2. Parallel reduced-instruction-set-computer architecture for real-time symbolic pattern matching

    NASA Astrophysics Data System (ADS)

    Parson, Dale E.

    1991-03-01

    This report discusses ongoing work on a parallel reduced-instruction-set-computer (RISC) architecture for automatic production matching. The PRIOPS compiler takes advantage of the memoryless character of automatic processing by translating a program's collection of automatic production tests into an equivalent combinational circuit: a digital circuit without memory whose outputs are immediate functions of its inputs. The circuit provides a highly parallel, fine-grain model of automatic matching. The compiler then maps the combinational circuit onto RISC hardware. The heart of the processor is an array of comparators capable of testing production conditions in parallel. Each comparator attaches to private memory that contains virtual circuit nodes, records of the current state of nodes and busses in the combinational circuit. All comparator memories hold identical information, allowing simultaneous update for a single changing circuit node and simultaneous retrieval of different circuit nodes by different comparators. Along with the comparator-based logic unit is a sequencer that determines the current combination of production-derived comparisons to try, based on the combined success and failure of previous combinations of comparisons. The memoryless nature of automatic matching allows the compiler to designate invariant memory addresses for virtual circuit nodes and to generate the most effective sequences of comparison test combinations. The result is maximal utilization of parallel hardware, indicating speed increases and scalability beyond those found for coarse-grain, multiprocessor approaches to concurrent Rete matching. Future work will consider application of this RISC architecture to the standard (controlled) Rete algorithm, where search through memory dominates portions of matching.

  3. Automation of Underground Cable Laying Equipment Using PLC and Hmi

    NASA Astrophysics Data System (ADS)

    Mal Kothari, Kesar; Samba, Vishweshwar; Tania, Kinza; Udayakumar, R., Dr; Karthikeyan, Ram, Dr

    2018-04-01

    Underground cable laying is an alternative to overhead cable laying for telecommunication and power transmission lines. It has become very popular in recent times because of some of its advantages over overhead cable laying. This type of cable laying is mostly practiced in developed countries because it is more expensive than overhead cable laying. Underground cable laying is more suitable when land is not available, and it also improves aesthetics. This paper implements automation on a manually operated cable-pulling winch machine using a programmable logic controller (PLC). Winch machines are useful in underground cable laying. The main aim of the project is to replace all mechanical functions with electrical controls operated through a touch screen (HMI). The idea is that the machine should shift between the parallel and series circuit automatically based on the sensed pressure, instead of the solenoid valve being operated manually. The traditional means of throttling the engine using a lever and wire is replaced with a linear actuator. Sensors such as proximity, pressure, and load sensors are used to provide the input to the system. The HMI displays the speed, length, and tension of the rope being wound. Ladder logic is used to program the PLC.

  4. Programming Programmable Logic Controller. High-Technology Training Module.

    ERIC Educational Resources Information Center

    Lipsky, Kevin

    This training module on programming programmable logic controllers (PLC) is part of the memory structure and programming unit used in a packaging systems equipment control course. In the course, students assemble, install, maintain, and repair industrial machinery used in industry. The module contains description, objectives, content outline,…

  5. The Logic of Evaluation.

    ERIC Educational Resources Information Center

    Welty, Gordon A.

    The logic of the evaluation of educational and other action programs is discussed from a methodological viewpoint. However, no attempt is made to develop methods of evaluating programs. In Part I, the structure of an educational program is viewed as a system with three components--inputs, transformation of inputs into outputs, and outputs. Part II…

  6. A String Search Marketing Application Using Visual Programming

    ERIC Educational Resources Information Center

    Chin, Jerry M.; Chin, Mary H.; Van Landuyt, Cathryn

    2013-01-01

    This paper demonstrates the use of programming software that provides the student programmer with visual cues for constructing the code for a student programming assignment. This method does not disregard or minimize the syntax or the required logical constructs. The student can concentrate more on the logic and less on the language itself.

  7. Implementing Eco-Logical 2014-2015 Annual Report

    DOT National Transportation Integrated Search

    2015-12-01

    The Eco-Logical approach offers an ecosystem-based framework for integrated infrastructure and natural resource planning, project development, and delivery. The 2014/2015 Implementing Eco-Logical Program Annual Report provides updates on the Federal ...

  8. Implementing and analyzing the multi-threaded LP-inference

    NASA Astrophysics Data System (ADS)

    Bolotova, S. Yu; Trofimenko, E. V.; Leschinskaya, M. V.

    2018-03-01

    The logical production equations provide new possibilities for backward inference optimization in intelligent production-type systems. The strategy of relevant backward inference is aimed at minimizing the number of queries to an external information source (either a database or an interactive user). The idea of the method is based on computing the set of initial preimages and searching for the true preimage. The execution of each stage can be organized independently and in parallel, and the actual work at a given stage can also be distributed between parallel computers. This paper is devoted to parallel algorithms for relevant inference based on an advanced "pipeline" scheme of parallel computation, which increases the degree of parallelism. The paper also provides some details of the LP-structures implementation.

  9. Implementation of logic functions and computations by chemical kinetics

    NASA Astrophysics Data System (ADS)

    Hjelmfelt, A.; Ross, J.

    We review our work on the computational functions of the kinetics of chemical networks. We examine spatially homogeneous networks which are based on prototypical reactions occurring in living cells and show the construction of logic gates and sequential and parallel networks. This work motivates the study of an important biochemical pathway, glycolysis, and we demonstrate that the switch that controls the flux in the direction of glycolysis or gluconeogenesis may be described as a fuzzy AND operator. We also study a spatially inhomogeneous network which shares features of theoretical and biological neural networks.
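
    For readers unfamiliar with fuzzy logic operators, the short sketch below shows the two most common fuzzy AND (t-norm) choices on inputs in [0, 1]; which functional form best matches the glycolytic switch is not specified here, so both are shown only as assumptions.

```python
# Illustrative fuzzy AND operators on membership degrees in [0, 1].
def fuzzy_and_min(a: float, b: float) -> float:
    return min(a, b)        # Goedel / minimum t-norm

def fuzzy_and_product(a: float, b: float) -> float:
    return a * b            # product t-norm

for a, b in [(0.0, 1.0), (0.3, 0.8), (1.0, 1.0)]:
    print(a, b, "min:", fuzzy_and_min(a, b), "product:", fuzzy_and_product(a, b))
```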

  10. Post optimization paradigm in maximum 3-satisfiability logic programming

    NASA Astrophysics Data System (ADS)

    Mansor, Mohd. Asyraf; Sathasivam, Saratha; Kasihmuddin, Mohd Shareduwan Mohd

    2017-08-01

    Maximum 3-Satisfiability (MAX-3SAT) is a counterpart of the Boolean satisfiability problem that can be treated as a constraint optimization problem. It deals with the problem of finding the maximum number of satisfiable clauses in a particular 3-SAT formula. This paper presents the implementation of an enhanced Hopfield network to accelerate Maximum 3-Satisfiability (MAX-3SAT) logic programming. Four post-optimization techniques are investigated: the Elliot symmetric activation function, the Gaussian activation function, the Wavelet activation function, and the Hyperbolic tangent activation function. The performance of these post-optimization techniques in accelerating MAX-3SAT logic programming is discussed in terms of the ratio of maximum satisfied clauses, the Hamming distance, and the computation time. Dev-C++ was used as the platform for training, testing, and validating the proposed techniques. The results show that the Hyperbolic tangent activation function and the Elliot symmetric activation function can be used for MAX-3SAT logic programming.
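
    The sketch below writes out the four activation functions in the forms commonly used for such networks, together with the ratio-of-satisfied-clauses measure the paper evaluates; the exact parameterizations used in the paper and its Dev-C++ implementation may differ, so these definitions should be read as assumptions.

```python
# Commonly used forms of the four activation functions, plus the fraction of
# satisfied 3-SAT clauses; exact parameterizations in the paper may differ.
import math

def tanh_act(x: float) -> float:
    return math.tanh(x)

def elliott_act(x: float) -> float:
    return x / (1.0 + abs(x))                       # Elliot symmetric activation

def gaussian_act(x: float) -> float:
    return math.exp(-x * x)

def wavelet_act(x: float) -> float:
    return (1.0 - x * x) * math.exp(-x * x / 2.0)   # "Mexican hat" wavelet

def satisfied_ratio(clauses, assignment) -> float:
    """Fraction of clauses satisfied; literals are signed variable indices."""
    sat = sum(any((lit > 0) == assignment[abs(lit)] for lit in c) for c in clauses)
    return sat / len(clauses)

clauses = [(1, -2, 3), (-1, 2, -3), (2, 3, -1)]
assignment = {1: True, 2: False, 3: True}
print(satisfied_ratio(clauses, assignment))          # 2 of 3 clauses satisfied
```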

  11. Eco-logical successes : January 2011

    DOT National Transportation Integrated Search

    2011-01-01

    This document identifies and explains each Eco-Logical signatory agency's strategic environmental programs, projects, and efforts that are either directly related to or share the vision set forth in Eco-Logical. A brief description of an agency's key...

  12. Is abstinence education theory based? The underlying logic of abstinence education programs in Texas.

    PubMed

    Goodson, Patricia; Pruitt, B E; Suther, Sandy; Wilson, Kelly; Buhi, Eric

    2006-04-01

    Authors examined the logic (or the implicit theory) underlying 16 abstinence-only-until-marriage programs in Texas (50% of all programs funded under the federal welfare reform legislation during 2001 and 2002). Defined as a set of propositions regarding the relationship between program activities and their intended outcomes, program staff's implicit theories were summarized and compared to (a) data from studies on adolescent sexual behavior, (b) a theory-based model of youth abstinent behavior, and (c) preliminary findings from the national evaluation of Title V programs. Authors interviewed 62 program directors and instructors and employed selected principles of grounded theory to analyze interview data. Findings indicated that abstinence education staff could clearly articulate the logic guiding program activity choices. Comparisons between interview data and a theory-based model of adolescent sexual behavior revealed striking similarities. Implications of these findings for conceptualizing and evaluating abstinence-only-until-marriage (or similar) programs are examined.

  13. Digital Optical Circuit Technology.

    DTIC Science & Technology

    1985-03-01

    ... computers and data-distribution systems that are at once digital, entirely optical, very fast, and immune to interference and ... F. A. Hopf. SESSION 11 - OPTICAL LOGIC: Prospects for parallel nonlinear optical signal processing using GaAs etalons and ZnS interference filters, by ... (talks 1, 8, and 9) interference filters for room-temperature parallel processing. If one imposes a maximum heat load of 100 W/cm^2, consistent with

  14. Two autowire versions for CDC-3200 and IBM-360

    NASA Technical Reports Server (NTRS)

    Billingsley, J. B.

    1972-01-01

    A microelectronics program was initiated to evaluate the circuitry, packaging methods, and fabrication approaches necessary to produce a completely procured logic system. Two autowire programs were developed, for the CDC-3200 and IBM-360 computers, for use in designing logic systems.

  15. How Young Children Learn to Program with Sensor, Action, and Logic Blocks

    ERIC Educational Resources Information Center

    Wyeth, Peta

    2008-01-01

    Electronic Blocks are a new programming environment designed specifically for children aged between 3 and 8 years. These physical, stackable blocks include sensor blocks, action blocks, and logic blocks. By connecting these blocks, children can program a wide variety of structures that interact with one another and the environment. Electronic…

  16. Metalevel programming in robotics: Some issues

    NASA Technical Reports Server (NTRS)

    Kumarn, A.; Parameswaran, N.

    1987-01-01

    Computing in robotics has two important requirements: efficiency and flexibility. Algorithms for robot actions are implemented usually in procedural languages such as VAL and AL. But, since their excessive bindings create inflexible structures of computation, it is proposed that Logic Programming is a more suitable language for robot programming due to its non-determinism, declarative nature, and provision for metalevel programming. Logic Programming, however, results in inefficient computations. As a solution to this problem, researchers discuss a framework in which controls can be described to improve efficiency. They have divided controls into: (1) in-code and (2) metalevel and discussed them with reference to selection of rules and dataflow. Researchers illustrated the merit of Logic Programming by modelling the motion of a robot from one point to another avoiding obstacles.

  17. Parallel grid library for rapid and flexible simulation development

    NASA Astrophysics Data System (ADS)

    Honkonen, I.; von Alfthan, S.; Sandroos, A.; Janhunen, P.; Palmroth, M.

    2013-04-01

    We present an easy to use and flexible grid library for developing highly scalable parallel simulations. The distributed cartesian cell-refinable grid (dccrg) supports adaptive mesh refinement and allows an arbitrary C++ class to be used as cell data. The amount of data in grid cells can vary both in space and time, allowing dccrg to be used in very different types of simulations, for example in fluid and particle codes. Dccrg transfers the data between neighboring cells on different processes transparently and asynchronously, allowing one to overlap computation and communication. This enables excellent scalability at least up to 32k cores in magnetohydrodynamic tests, depending on the problem and hardware. In the version of dccrg presented here, part of the mesh metadata is replicated between MPI processes, reducing the scalability of adaptive mesh refinement (AMR) to between 200 and 600 processes. Dccrg is free software that anyone can use, study and modify and is available at https://gitorious.org/dccrg. Users are also kindly requested to cite this work when publishing results obtained with dccrg.
    Catalogue identifier: AEOM_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOM_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: GNU Lesser General Public License version 3
    No. of lines in distributed program, including test data, etc.: 54975
    No. of bytes in distributed program, including test data, etc.: 974015
    Distribution format: tar.gz
    Programming language: C++
    Computer: PC, cluster, supercomputer
    Operating system: POSIX; the code has been parallelized using MPI and tested with 1-32768 processes
    RAM: 10 MB-10 GB per process
    Classification: 4.12, 4.14, 6.5, 19.3, 19.10, 20
    External routines: MPI-2 [1], boost [2], Zoltan [3], sfc++ [4]
    Nature of problem: Grid library supporting arbitrary data in grid cells, parallel adaptive mesh refinement, transparent remote neighbor data updates and load balancing.
    Solution method: The simulation grid is represented by an adjacency list (graph) with vertices stored in a hash table and edges in contiguous arrays. The Message Passing Interface standard is used for parallelization. Cell data is given as a template parameter when instantiating the grid.
    Restrictions: Logically cartesian grid.
    Running time: Running time depends on the hardware, the problem, and the solution method. Small problems can be solved in under a minute and very large problems can take weeks. The examples and tests provided with the package take less than about one minute using default options. In the version of dccrg presented here, the speed of adaptive mesh refinement is at most of the order of 10^6 total created cells per second.
    [1] http://www.mpi-forum.org/
    [2] http://www.boost.org/
    [3] K. Devine, E. Boman, R. Heaphy, B. Hendrickson, C. Vaughan, Zoltan data management services for parallel dynamic applications, Comput. Sci. Eng. 4 (2002) 90-97. http://dx.doi.org/10.1109/5992.988653
    [4] https://gitorious.org/sfc++
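
    The stated solution method (cells kept in a hash table, neighbors found from the logically Cartesian indexing) can be illustrated with a few lines of Python; this is a conceptual analogue only, not the dccrg C++ interface.

```python
# Conceptual analogue of a logically Cartesian grid whose cells live in a hash
# table keyed by cell index, with neighbor indices computed for data exchange.
from typing import Any, Dict, Iterator, Tuple

class Grid:
    def __init__(self, nx: int, ny: int):
        self.nx, self.ny = nx, ny
        self.cells: Dict[Tuple[int, int], Any] = {
            (i, j): None for i in range(nx) for j in range(ny)
        }

    def neighbors(self, cell: Tuple[int, int]) -> Iterator[Tuple[int, int]]:
        i, j = cell
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < self.nx and 0 <= nj < self.ny:
                yield (ni, nj)

grid = Grid(4, 4)
grid.cells[(1, 1)] = {"density": 1.0}      # arbitrary user data per cell
print(list(grid.neighbors((1, 1))))
```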

  18. Using logic models in a community-based agricultural injury prevention project.

    PubMed

    Helitzer, Deborah; Willging, Cathleen; Hathorn, Gary; Benally, Jeannie

    2009-01-01

    The National Institute for Occupational Safety and Health has long promoted the logic model as a useful tool in an evaluator's portfolio. Because a logic model supports a systematic approach to designing interventions, it is equally useful for program planners. Undertaken with community stakeholders, a logic model process articulates the underlying foundations of a particular programmatic effort and enhances program design and evaluation. Most often presented as sequenced diagrams or flow charts, logic models demonstrate relationships among the following components: statement of a problem, various causal and mitigating factors related to that problem, available resources to address the problem, theoretical foundations of the selected intervention, intervention goals and planned activities, and anticipated short- and long-term outcomes. This article describes a case example of how a logic model process was used to help community stakeholders on the Navajo Nation conceive, design, implement, and evaluate agricultural injury prevention projects.

  19. The Nature of Quantum Truth: Logic, Set Theory, & Mathematics in the Context of Quantum Theory

    NASA Astrophysics Data System (ADS)

    Frey, Kimberly

    The purpose of this dissertation is to construct a radically new type of mathematics whose underlying logic differs from the ordinary classical logic used in standard mathematics, and which we feel may be more natural for applications in quantum mechanics. Specifically, we begin by constructing a first order quantum logic, the development of which closely parallels that of ordinary (classical) first order logic --- the essential differences are in the nature of the logical axioms, which, in our construction, are motivated by quantum theory. After showing that the axiomatic first order logic we develop is sound and complete (with respect to a particular class of models), this logic is then used as a foundation on which to build (axiomatic) mathematical systems --- and we refer to the resulting new mathematics as "quantum mathematics." As noted above, the hope is that this form of mathematics is more natural than classical mathematics for the description of quantum systems, and will enable us to address some foundational aspects of quantum theory which are still troublesome --- e.g. the measurement problem --- as well as possibly even inform our thinking about quantum gravity. After constructing the underlying logic, we investigate properties of several mathematical systems --- e.g. axiom systems for abstract algebras, group theory, linear algebra, etc. --- in the presence of this quantum logic. In the process, we demonstrate that the resulting quantum mathematical systems have some strange, but very interesting features, which indicates a richness in the structure of mathematics that is classically inaccessible. Moreover, some of these features do indeed suggest possible applications to foundational questions in quantum theory. We continue our investigation of quantum mathematics by constructing an axiomatic quantum set theory, which we show satisfies certain desirable criteria. Ultimately, we hope that such a set theory will lead to a foundation for quantum mathematics in a sense which parallels the foundational role of classical set theory in classical mathematics. One immediate application of the quantum set theory we develop is to provide a foundation on which to construct quantum natural numbers, which are the quantum analog of the classical counting numbers. It turns out that in a special class of models, there exists a 1-1 correspondence between the quantum natural numbers and bounded observables in quantum theory whose eigenvalues are (ordinary) natural numbers. This 1-1 correspondence is remarkably satisfying, and not only gives us great confidence in our quantum set theory, but indicates the naturalness of such models for quantum theory itself. We go on to develop a Peano-like arithmetic for these new "numbers," as well as consider some of its consequences. Finally, we conclude by summarizing our results, and discussing directions for future work.

  20. Framework for analysis of guaranteed QOS systems

    NASA Astrophysics Data System (ADS)

    Chaudhry, Shailender; Choudhary, Alok

    1997-01-01

    Multimedia data is isochronous in nature and entails managing and delivering high volumes of data. Multiprocessors, with their large processing power, vast memory, and fast interconnects, are an ideal candidate for the implementation of multimedia applications. Initially, multiprocessors were designed to execute scientific programs, and thus their architecture was optimized to provide low message latency and to efficiently support regular communication patterns. Hence, they have a regular network topology and most use wormhole routing. The design offers the benefits of a simple router, small buffer size, and network latency that is almost independent of path length. Among the various multimedia applications, a video on demand (VOD) server is well suited for implementation on parallel multiprocessors. Logical models for VOD servers are mapped onto multiprocessors. Our paper provides a framework for calculating bounds on the utilization of system resources with which QoS parameters for each isochronous stream can be guaranteed. The effects of the multiprocessor architecture, and the efficiency of various logical models and mappings onto particular architectures, can be investigated within our framework. Our framework is based on rigorous proofs and provides tight bounds. The results obtained may be used as the basis for admission control tests. To illustrate the versatility of our framework, we provide bounds on utilization for various logical models applied to mesh-connected architectures for a video on demand server. Our results show that wormhole routing can lead to packets waiting for the transmission of other packets that apparently share no common resources, a situation analogous to head-of-the-line blocking. We find that providing multiple VCs per link and multiple flit buffers improves utilization (even under guaranteed QoS parameters), which is analogous to parallel iterative matching.

  1. Controllability of switched singular mix-valued logical control networks with constraints

    NASA Astrophysics Data System (ADS)

    Deng, Lei; Gong, Mengmeng; Zhu, Peiyong

    2018-03-01

    The present paper investigates the controllability problem of switched singular mix-valued logical control networks (SSMLCNs) with constraints on states and controls. First, using the semi-tensor product (STP) of matrices, the SSMLCN is expressed in an algebraic form, based on which a necessary and sufficient condition is given for the uniqueness of the solution of SSMLCNs. Second, a necessary and sufficient criterion is derived for the controllability of constrained SSMLCNs, by converting a constrained SSMLCN into a parallel constrained switched mix-valued logical control network. Third, an algorithm is presented to design a proper switching sequence and a control scheme that force a state to a reachable state. Finally, a numerical example is given to demonstrate the efficiency of the results obtained in this paper.

  2. An adaptive maneuvering logic computer program for the simulation of one-to-one air-to-air combat. Volume 2: Program description

    NASA Technical Reports Server (NTRS)

    Burgin, G. H.; Owens, A. J.

    1975-01-01

    A detailed description of the computer programs is presented in order to provide an understanding of the mathematical and geometrical relationships as implemented in the programs. The individual subroutines and their underlying mathematical relationships are described, and the required input data and the output provided by the program are explained. The relationship of the adaptive maneuvering logic program with the program that drives the differential maneuvering simulator is discussed.

  3. A parallel algorithm for multi-level logic synthesis using the transduction method. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Lim, Chieng-Fai

    1991-01-01

    The Transduction Method has been shown to be a powerful tool for the optimization of multilevel networks. Many tools, such as the SYLON synthesis system (X90), (CM89), (LM90), have been developed based on this method. A parallel implementation of SYLON-XTRANS (XM89) on an eight-processor Encore Multimax shared-memory multiprocessor is presented. It minimizes multilevel networks consisting of simple gates through parallel pruning, gate substitution, gate merging, generalized gate substitution, and gate input reduction. This implementation, called Parallel TRANSduction (PTRANS), also uses partitioning to break up large circuits and performs inter- and intra-partition dynamic load balancing. With this, good speedups and high processor efficiencies are achievable without sacrificing the resulting circuit quality.

  4. A computer program for the generation of logic networks from task chart data

    NASA Technical Reports Server (NTRS)

    Herbert, H. E.

    1980-01-01

    The Network Generation Program (NETGEN), which creates logic networks from task chart data is presented. NETGEN is written in CDC FORTRAN IV (Extended) and runs in a batch mode on the CDC 6000 and CYBER 170 series computers. Data is input via a two-card format and contains information regarding the specific tasks in a project. From this data, NETGEN constructs a logic network of related activities with each activity having unique predecessor and successor nodes, activity duration, descriptions, etc. NETGEN then prepares this data on two files that can be used in the Project Planning Analysis and Reporting System Batch Network Scheduling program and the EZPERT graphics program.

  5. Developing and Using a Logic Model for Evaluation and Assessment of University Student Affairs Programming: A Case Study

    ERIC Educational Resources Information Center

    Cooper, Jeff

    2009-01-01

    This dissertation addresses theory and practice of evaluation and assessment in university student affairs, by applying logic modeling/program theory to a case study. I intend to add knowledge to ongoing dialogue among evaluation scholars and practitioners on student affairs program planning and improvement as integral considerations that serve…

  6. Student Perceptions of Instructional Tools in Programming Logic: A Comparison of Traditional versus Alice Teaching Environments

    ERIC Educational Resources Information Center

    Schultz, Leah

    2011-01-01

    This research investigates the implementation of the programming language Alice to teach computer programming logic to computer information systems students. Alice has been implemented in other university settings and has been reported to have many benefits including object-oriented concepts and an engaging and fun learning environment. In this…

  7. IT0: Discrete Math and Programming Logic Topics as a Hybrid Alternative to CS0

    ERIC Educational Resources Information Center

    Martin, Nancy L.

    2015-01-01

    This paper describes the development of a hybrid introductory course for students in their first or second year of an information systems technologies degree program at a large Midwestern university. The course combines topics from discrete mathematics and programming logic and design, a unique twist on most introductory courses. The objective of…

  8. Using the Logic Model to Plan Extension and Outreach Program Development and Scholarship

    ERIC Educational Resources Information Center

    Corbin, Marilyn; Kiernan, Nancy Ellen; Koble, Margaret A.; Watson, Jack; Jackson, Daney

    2004-01-01

    In searching for a process to help program teams of campus-based faculty and field-based educators develop five-year and annual statewide program plans, cooperative extension administrators and specialists in Penn State's College of Agricultural Sciences discovered that the use of the logic model process can influence the successful design of…

  9. Teaching and Learning Logic Programming in Virtual Worlds Using Interactive Microworld Representations

    ERIC Educational Resources Information Center

    Vosinakis, Spyros; Anastassakis, George; Koutsabasis, Panayiotis

    2018-01-01

    Logic Programming (LP) follows the declarative programming paradigm, which novice students often find hard to grasp. The limited availability of visual teaching aids for LP can lead to low motivation for learning. In this paper, we present a platform for teaching and learning Prolog in Virtual Worlds, which enables the visual interpretation and…

  10. TRANSMISSION NETWORK PLANNING METHOD FOR COMPARATIVE STUDIES (JOURNAL VERSION)

    EPA Science Inventory

    An automated transmission network planning method for comparative studies is presented. This method employs logical steps that may closely parallel those taken in practice by the planning engineers. Use is made of a sensitivity matrix to simulate the engineers' experience in sele...

  11. Single flux quantum voltage amplifiers

    NASA Astrophysics Data System (ADS)

    Golomidov, Vladimir; Kaplunenko, Vsevolod; Khabipov, Marat; Koshelets, Valery; Kaplunenko, Olga

    The novel elements of the Rapid Single Flux Quantum (RSFQ) logic family, quasi-digital voltage parallel and series amplifiers (QDVA), have been computer simulated, designed, and experimentally investigated. The parallel QDVA consists of six stages and provides multiplication of the input voltage by a factor of five. The output resistance of the QDVA is five times larger than the input resistance, so this amplifier appears to be a good matching stage between RSFQ logic and conventional semiconductor electronics. The series QDVA provides a gain factor of four and involves two doublers connected by a transmission line. The proposed parallel QDVA can be integrated on the same chip with a SQUID sensor.

  12. Modified Method of Adaptive Artificial Viscosity for Solution of Gas Dynamics Problems on Parallel Computer Systems

    NASA Astrophysics Data System (ADS)

    Popov, Igor; Sukov, Sergey

    2018-02-01

    A modification of the adaptive artificial viscosity (AAV) method is considered. This modification is based on a one-stage time approximation and is adapted to the calculation of gas dynamics problems on unstructured grids with an arbitrary type of grid element. The proposed numerical method has simplified logic, better performance, and better parallel efficiency compared to the implementation of the original AAV method. Computer experiments demonstrate the robustness and convergence of the method to a difference solution.

  13. Parallel algorithms for simulating continuous time Markov chains

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Heidelberger, Philip

    1992-01-01

    We have previously shown that the mathematical technique of uniformization can serve as the basis of synchronization for the parallel simulation of continuous-time Markov chains. This paper reviews the basic method and compares five different methods based on uniformization, evaluating their strengths and weaknesses as a function of problem characteristics. The methods vary in their use of optimism, logical aggregation, communication management, and adaptivity. Performance evaluation is conducted on the Intel Touchstone Delta multiprocessor, using up to 256 processors.
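
    For reference, the uniformization technique the abstract builds on can be written down in a few lines: pick a rate Λ at least as large as every exit rate, form the uniformized DTMC P = I + Q/Λ, and weight its powers by Poisson probabilities. The sketch below is a sequential illustration of that mathematical basis, not of the five parallel synchronization methods compared in the paper.

```python
# Textbook uniformization of a CTMC with generator Q:
#   pi(t) = sum_k exp(-Lam*t) (Lam*t)^k / k! * pi0 @ P^k,   P = I + Q/Lam.
import numpy as np

def transient_distribution(Q: np.ndarray, pi0: np.ndarray, t: float,
                           eps: float = 1e-12) -> np.ndarray:
    lam = np.max(-np.diag(Q))               # uniformization rate >= all exit rates
    P = np.eye(Q.shape[0]) + Q / lam         # uniformized DTMC
    term = pi0.copy()
    result = np.zeros_like(pi0)
    k, weight, total = 0, np.exp(-lam * t), 0.0
    while total < 1.0 - eps:
        result += weight * term
        total += weight
        k += 1
        weight *= lam * t / k                # next Poisson weight, recursively
        term = term @ P                      # next power of P applied to pi0
    return result

# Two-state birth-death example: rate 2 from state 0 to 1, rate 1 back.
Q = np.array([[-2.0, 2.0], [1.0, -1.0]])
pi0 = np.array([1.0, 0.0])
print(transient_distribution(Q, pi0, t=5.0))  # approaches [1/3, 2/3]
```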

  14. MIRAP, microcomputer reliability analysis program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jehee, J.N.T.

    1989-01-01

    A program for a microcomputer is outlined that can determine minimal cut sets from a specified fault tree logic. The speed and memory limitations of the microcomputers on which the program is implemented (Atari ST and IBM) are addressed by reducing the fault tree's size and by storing the cut set data on disk. Extensive, well proven fault tree restructuring techniques, such as the identification of sibling events and of independent gate events, reduce the fault tree's size but do not alter its logic. New methods are used for the Boolean reduction of the fault tree logic. Special criteria for combining events in the 'AND' and 'OR' logic avoid the creation of many subsuming cut sets which would all be cancelled out by existing cut sets. Figures and tables illustrate these methods. 4 refs., 5 tabs.
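
    As a small illustration of the kind of computation involved (not MIRAP's disk-based implementation or its restructuring steps), the sketch below expands an AND/OR fault tree top-down into cut sets and removes subsumed ones; gate and event names are made up.

```python
# Top-down cut set generation for an AND/OR fault tree, with removal of
# subsumed (non-minimal) cut sets.  Illustrative only.
def cut_sets(gate, gates):
    """gates maps a gate name to ('AND'|'OR', [inputs]); leaves are basic events."""
    if gate not in gates:
        return [frozenset([gate])]
    kind, inputs = gates[gate]
    child_sets = [cut_sets(g, gates) for g in inputs]
    if kind == 'OR':
        result = [cs for sets in child_sets for cs in sets]
    else:  # AND: combine one cut set from each child
        result = [frozenset()]
        for sets in child_sets:
            result = [a | b for a in result for b in sets]
    # keep only minimal cut sets (drop any proper superset of another)
    minimal = [s for s in result if not any(other < s for other in result)]
    return list(dict.fromkeys(minimal))     # de-duplicate, keep order

tree = {
    'TOP': ('OR',  ['G1', 'E3']),
    'G1':  ('AND', ['E1', 'E2orE3']),
    'E2orE3': ('OR', ['E2', 'E3']),
}
print(cut_sets('TOP', tree))   # {E1, E3} is subsumed by {E3}: result {E1, E2}, {E3}
```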

  15. [Documenting a rehabilitation program using a logic model: an advantage to the assessment process].

    PubMed

    Poncet, Frédérique; Swaine, Bonnie; Pradat-Diehl, Pascale

    2017-03-06

    Cognitive and behavioral disorders after brain injury can result in severe limitations of activities and restrictions of participation. An interdisciplinary rehabilitation program was developed in physical medicine and rehabilitation at the Pitié-Salpêtrière Hospital, Paris, France. Clinicians believe this program decreases activity limitations and improves participation in patients, but the program's effectiveness had never been assessed. To do this, we first had to define and describe the program. Rehabilitation programs, however, are holistic and thus complex, making them difficult to describe. Therefore, to facilitate the evaluation of complex programs, including those for rehabilitation, we illustrate the use of a theoretical logic model, as proposed by Champagne, through the process of documenting a specific complex and interdisciplinary rehabilitation program. Through participatory/collaborative research, the rehabilitation program was analyzed using three "submodels" of the logic model of intervention: the causal model, the intervention model, and the program theory model. This should facilitate the evaluation of programs, including those for rehabilitation.

  16. Logic Design Pathology and Space Flight Electronics

    NASA Technical Reports Server (NTRS)

    Katz, Richard B.; Barto, Rod L.; Erickson, Ken

    1999-01-01

    This paper presents a look at logic design from early in the US Space Program and examines faults in recent logic designs. Most examples are based on flight hardware failures and analysis of new tools and techniques. The paper is presented in viewgraph form.

  17. A high-speed on-chip pseudo-random binary sequence generator for multi-tone phase calibration

    NASA Astrophysics Data System (ADS)

    Gommé, Liesbeth; Vandersteen, Gerd; Rolain, Yves

    2011-07-01

    An on-chip reference generator is conceived by adopting the technique of decimating a pseudo-random binary sequence (PRBS) signal into parallel sequences. This is of great benefit when high-speed generation of PRBS and PRBS-derived signals is the objective. The design is implemented in standard CMOS logic available in commercial libraries to provide the logic functions for the generator. The design allows the user to select the periodicity of the PRBS and the PRBS-derived signals. The characterization of the on-chip generator establishes its performance and reveals promising specifications.
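
    A software analogue of the approach, with an illustrative polynomial and register width (not the chip's actual configuration): generate a PRBS with a linear-feedback shift register and then decimate it into parallel sub-sequences.

```python
# PRBS7 (x^7 + x^6 + 1) generated with a software LFSR, then split into
# parallel sub-sequences; taps and widths are illustrative only.
def prbs7(n_bits: int, seed: int = 0x7F):
    state = seed & 0x7F
    out = []
    for _ in range(n_bits):
        newbit = ((state >> 6) ^ (state >> 5)) & 1   # feedback from bits 7 and 6
        out.append(state & 1)
        state = ((state << 1) | newbit) & 0x7F
    return out

def decimate(seq, phases: int):
    """Split one serial sequence into `phases` parallel sub-sequences."""
    return [seq[p::phases] for p in range(phases)]

serial = prbs7(127)                 # one full period of the maximal-length sequence
parallel = decimate(serial, 4)
assert sum(len(p) for p in parallel) == len(serial)
print(len(serial), [len(p) for p in parallel])
```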

  18. Exploring the Feasibility of a DNA Computer: Design of an ALU Using Sticker-Based DNA Model.

    PubMed

    Sarkar, Mayukh; Ghosal, Prasun; Mohanty, Saraju P

    2017-09-01

    Since its inception, DNA computing has advanced to offer an extremely powerful, energy-efficient emerging technology for solving hard computational problems with its inherent massive parallelism and extremely high data density. It would be much more powerful and general purpose when combined with well-known algorithmic solutions that exist for conventional computing architectures, using a suitable ALU. Thus, a specifically designed DNA arithmetic and logic unit (ALU) that can address operations suitable for both domains can bridge the gap between the two. An ALU must be able to perform all basic logic operations (NOT, OR, AND, XOR, NOR, NAND, and XNOR), comparison and shifting, and integer and floating point arithmetic operations (addition, subtraction, multiplication, and division). In this paper, the design of an ALU using a sticker-based DNA model is proposed, with an experimental feasibility analysis. The novelties of this paper are manifold. First, the integer arithmetic operations performed here use 2's complement arithmetic, and the floating point operations follow the IEEE 754 floating point format, closely resembling a conventional ALU. Also, the output of each operation can be reused for any subsequent operation, so any algorithm or program logic that users can think of can be implemented directly on the DNA computer without modification. Second, once the basic operations of the sticker model are automated, the implementations proposed in this paper become highly suitable for designing a fully automated ALU. Third, the proposed approaches are easy to implement. Finally, these approaches can work on sufficiently large binary numbers.
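
    The number formats the proposed ALU follows are the standard ones; the snippet below illustrates the 2's complement convention on ordinary integers (the DNA sticker-model operations themselves are not modeled here).

```python
# 2's complement encode/decode and modular addition on n-bit words.
def to_twos_complement(value: int, bits: int) -> int:
    return value & ((1 << bits) - 1)

def from_twos_complement(word: int, bits: int) -> int:
    sign = 1 << (bits - 1)
    return (word ^ sign) - sign       # reinterpret the top bit as a sign bit

def add_twos_complement(a: int, b: int, bits: int) -> int:
    return (a + b) & ((1 << bits) - 1)

a = to_twos_complement(-5, 8)         # 0xFB
b = to_twos_complement(3, 8)          # 0x03
print(hex(a), hex(b),
      from_twos_complement(add_twos_complement(a, b, 8), 8))   # -2
```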

  19. Hardware Algorithm Implementation for Mission Specific Processing

    DTIC Science & Technology

    2008-03-01

    knowledge about VLSI technology and an understanding of VHDL, scripting, and integrating the script into the Cadence software or ModelSim. The main ... possible to have a trade-off between parallel and serial logic design for the circuit. Power can be saved by using parallelization, pipelining, or a

  20. Questionnaire Construction Manual

    DTIC Science & Technology

    1976-07-01

    (2) All questionnaire items should be grammatically correct. (3) All ... kept in mind: a. All response alternatives should follow the stem both grammatically and logically, and if possible, be parallel in structure. b

  1. Exploration of picture grammars, grammar learning, and inductive logic programming for image understanding

    NASA Astrophysics Data System (ADS)

    Ducksbury, P. G.; Kennedy, C.; Lock, Z.

    2003-09-01

    Grammars have been used for the formal specification of programming languages, and there are a number of commercial products which now use grammars. However, these have tended to focus mainly on flow-control type applications. In this paper, we consider the potential use of picture grammars and inductive logic programming in generic image understanding applications, such as object recognition. A number of issues are considered, such as what type of grammar needs to be used, how to construct the grammar with its associated attributes, and the difficulties encountered in parsing grammars, followed by issues of automatically learning grammars using a genetic algorithm. The concept of inductive logic programming is then introduced as a method that can overcome some of the earlier difficulties.

  2. Logic programming and metadata specifications

    NASA Technical Reports Server (NTRS)

    Lopez, Antonio M., Jr.; Saacks, Marguerite E.

    1992-01-01

    Artificial intelligence (AI) ideas and techniques are critical to the development of intelligent information systems that will be used to collect, manipulate, and retrieve the vast amounts of space data produced by 'Missions to Planet Earth.' Natural language processing, inference, and expert systems are at the core of this space application of AI. This paper presents logic programming as an AI tool that can support inference (the ability to draw conclusions from a set of complicated and interrelated facts). It reports on the use of logic programming in the study of metadata specifications for a small problem domain of airborne sensors, and the dataset characteristics and pointers that are needed for data access.

  3. Reasoning on Weighted Delegatable Authorizations

    NASA Astrophysics Data System (ADS)

    Ruan, Chun; Varadharajan, Vijay

    This paper studies logic based methods for representing and evaluating complex access control policies needed by modern database applications. In our framework, authorization and delegation rules are specified in a Weighted Delegatable Authorization Program (WDAP) which is an extended logic program. We show how extended logic programs can be used to specify complex security policies which support weighted administrative privilege delegation, weighted positive and negative authorizations, and weighted authorization propagations. We also propose a conflict resolution method that enables flexible delegation control by considering priorities of authorization grantors and weights of authorizations. A number of rules are provided to achieve delegation depth control, conflict resolution, and authorization and delegation propagations.

  4. Compiled MPI: Cost-Effective Exascale Applications Development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bronevetsky, G; Quinlan, D; Lumsdaine, A

    2012-04-10

    The complexity of petascale and exascale machines makes it increasingly difficult to develop applications that can take advantage of them. Future systems are expected to feature billion-way parallelism, complex heterogeneous compute nodes and poor availability of memory (Peter Kogge, 2008). This new challenge for application development is motivating a significant amount of research and development on new programming models and runtime systems designed to simplify large-scale application development. Unfortunately, DoE has significant multi-decadal investment in a large family of mission-critical scientific applications. Scaling these applications to exascale machines will require a significant investment that will dwarf the costs of hardware procurement. A key reason for the difficulty in transitioning today's applications to exascale hardware is their reliance on explicit programming techniques, such as the Message Passing Interface (MPI) programming model to enable parallelism. MPI provides a portable and high performance message-passing system that enables scalable performance on a wide variety of platforms. However, it also forces developers to lock the details of parallelization together with application logic, making it very difficult to adapt the application to significant changes in the underlying system. Further, MPI's explicit interface makes it difficult to separate the application's synchronization and communication structure, reducing the amount of support that can be provided by compiler and run-time tools. This is in contrast to the recent research on more implicit parallel programming models such as Chapel, OpenMP and OpenCL, which promise to provide significantly more flexibility at the cost of reimplementing significant portions of the application. We are developing CoMPI, a novel compiler-driven approach to enable existing MPI applications to scale to exascale systems with minimal modifications that can be made incrementally over the application's lifetime. It includes: (1) A new set of source code annotations, inserted either manually or automatically, that will clarify the application's use of MPI to the compiler infrastructure, enabling greater accuracy where needed; (2) A compiler transformation framework that leverages these annotations to transform the original MPI source code to improve its performance and scalability; (3) Novel MPI runtime implementation techniques that will provide a rich set of functionality extensions to be used by applications that have been transformed by our compiler; and (4) A novel compiler analysis that leverages simple user annotations to automatically extract the application's communication structure and synthesize most complex code annotations.

  5. Using RUFDATA to guide a logic model for a quality assurance process in an undergraduate university program.

    PubMed

    Sherman, Paul David

    2016-04-01

    This article presents a framework to identify key mechanisms for developing a logic model blueprint that can be used for an impending comprehensive evaluation of an undergraduate degree program in a Canadian university. The evaluation is a requirement of a comprehensive quality assurance process mandated by the university. A modified RUFDATA (Saunders, 2000) evaluation model is applied as an initiating framework to assist in decision making to provide a guide for conceptualizing a logic model for the quality assurance process. This article will show how an educational evaluation is strengthened by employing a RUFDATA reflective process in exploring key elements of the evaluation process, and then translating this information into a logic model format that could serve to offer a more focussed pathway for the quality assurance activities. Using preliminary program evaluation data from two key stakeholders of the undergraduate program as well as an audit of the curriculum's course syllabi, a case is made for, (1) the importance of inclusivity of key stakeholders participation in the design of the evaluation process to enrich the authenticity and accuracy of program participants' feedback, and (2) the diversification of data collection methods to ensure that stakeholders' narrative feedback is given ample exposure. It is suggested that the modified RUFDATA/logic model framework be applied to all academic programs at the university undergoing the quality assurance process at the same time so that economies of scale may be realized. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. Improving the human readability of Arden Syntax medical logic modules using a concept-oriented terminology and object-oriented programming expressions.

    PubMed

    Choi, Jeeyae; Bakken, Suzanne; Lussier, Yves A; Mendonça, Eneida A

    2006-01-01

    Medical logic modules are a procedural representation for sharing task-specific knowledge for decision support systems. Based on the premise that clinicians may perceive object-oriented expressions as easier to read than procedural rules in Arden Syntax-based medical logic modules, we developed a method for improving the readability of medical logic modules. Two approaches were applied: exploiting the concept-oriented features of the Medical Entities Dictionary and building an executable Java program to replace Arden Syntax procedural expressions. The usability evaluation showed that 66% of participants successfully mapped all Arden Syntax rules to Java methods. These findings suggest that these approaches can play an essential role in the creation of human readable medical logic modules and can potentially increase the number of clinical experts who are able to participate in the creation of medical logic modules. Although our approaches are broadly applicable, we specifically discuss the relevance to concept-oriented nursing terminologies and automated processing of task-specific nursing knowledge.

  7. Scalable Triadic Analysis of Large-Scale Graphs: Multi-Core vs. Multi-Processor vs. Multi-Threaded Shared Memory Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chin, George; Marquez, Andres; Choudhury, Sutanay

    2012-09-01

    Triadic analysis encompasses a useful set of graph mining methods that is centered on the concept of a triad, which is a subgraph of three nodes and the configuration of directed edges across the nodes. Such methods are often applied in the social sciences as well as many other diverse fields. Triadic methods commonly operate on a triad census that counts the number of triads of every possible edge configuration in a graph. Like other graph algorithms, triadic census algorithms do not scale well when graphs reach tens of millions to billions of nodes. To enable the triadic analysis of large-scale graphs, we developed and optimized a triad census algorithm to efficiently execute on shared memory architectures. We will retrace the development and evolution of a parallel triad census algorithm. Over the course of several versions, we continually adapted the code's data structures and program logic to expose more opportunities to exploit parallelism on shared memory that would translate into improved computational performance. We will recall the critical steps and modifications that occurred during code development and optimization. Furthermore, we will compare the performances of triad census algorithm versions on three specific systems: Cray XMT, HP Superdome, and AMD multi-core NUMA machine. These three systems have shared memory architectures but with markedly different hardware capabilities to manage parallelism.
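
    To fix ideas, the sketch below performs a much-simplified triad count on a tiny directed graph, keying each 3-node subgraph only by how many of its six possible directed edges are present rather than by the full set of isomorphism classes; it only illustrates the O(n^3) enumeration that the parallel versions are designed to speed up.

```python
# Simplified triad counting on a small directed graph.  A real triad census
# distinguishes all 16 triad isomorphism classes; here triads are keyed only
# by their number of directed edges.
from collections import Counter
from itertools import combinations

edges = {(1, 2), (2, 3), (3, 1), (1, 3), (2, 4)}
nodes = {n for e in edges for n in e}

census = Counter()
for a, b, c in combinations(sorted(nodes), 3):
    present = sum((u, v) in edges
                  for u in (a, b, c) for v in (a, b, c) if u != v)
    census[present] += 1
print(dict(census))      # e.g. {4: 1, 2: 3} for this toy graph
```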

  8. Compact modeling of CRS devices based on ECM cells for memory, logic and neuromorphic applications.

    PubMed

    Linn, E; Menzel, S; Ferch, S; Waser, R

    2013-09-27

    Dynamic physics-based models of resistive switching devices are of great interest for the realization of complex circuits required for memory, logic and neuromorphic applications. Here, we apply such a model of an electrochemical metallization (ECM) cell to complementary resistive switches (CRSs), which are favorable devices to realize ultra-dense passive crossbar arrays. Since a CRS consists of two resistive switching devices, it is straightforward to apply the dynamic ECM model for CRS simulation with MATLAB and SPICE, enabling study of the device behavior in terms of sweep rate and series resistance variations. Furthermore, typical memory access operations as well as basic implication logic operations can be analyzed, revealing requirements for proper spike and level read operations. This basic understanding facilitates applications of massively parallel computing paradigms required for neuromorphic applications.

  9. Pausing and activating thread state upon pin assertion by external logic monitoring polling loop exit time condition

    DOEpatents

    Chen, Dong; Giampapa, Mark; Heidelberger, Philip; Ohmacht, Martin; Satterfield, David L; Steinmacher-Burow, Burkhard; Sugavanam, Krishnan

    2013-05-21

    A system and method for enhancing performance of a computer which includes a computer system including a data storage device. The computer system includes a program stored in the data storage device and steps of the program are executed by a processor. The processor processes instructions from the program. A wait state in the processor waits for receiving specified data. A thread in the processor has a pause state wherein the processor waits for specified data. A pin in the processor initiates a return to an active state from the pause state for the thread. A logic circuit is external to the processor, and the logic circuit is configured to detect a specified condition. The pin initiates a return to the active state of the thread when the specified condition is detected using the logic circuit.

  10. Answer Sets in a Fuzzy Equilibrium Logic

    NASA Astrophysics Data System (ADS)

    Schockaert, Steven; Janssen, Jeroen; Vermeir, Dirk; de Cock, Martine

    Since its introduction, answer set programming has been generalized in many directions, to cater to the needs of real-world applications. As one of the most general “classical” approaches, answer sets of arbitrary propositional theories can be defined as models in the equilibrium logic of Pearce. Fuzzy answer set programming, on the other hand, extends answer set programming with the capability of modeling continuous systems. In this paper, we combine the expressiveness of both approaches, and define answer sets of arbitrary fuzzy propositional theories as models in a fuzzification of equilibrium logic. We show that the resulting notion of answer set is compatible with existing definitions, when the syntactic restrictions of the corresponding approaches are met. We furthermore locate the complexity of the main reasoning tasks at the second level of the polynomial hierarchy. Finally, as an illustration of its modeling power, we show how fuzzy equilibrium logic can be used to find strong Nash equilibria.

  11. Monitoring Java Programs with Java PathExplorer

    NASA Technical Reports Server (NTRS)

    Havelund, Klaus; Rosu, Grigore; Clancy, Daniel (Technical Monitor)

    2001-01-01

    We present recent work on the development of Java PathExplorer (JPAX), a tool for monitoring the execution of Java programs. JPAX can be used during program testing to gain increased information about program executions, and can potentially also be applied during operation to survey safety-critical systems. The tool facilitates automated instrumentation of a program's byte code, which will then emit events to an observer during its execution. The observer checks the events against user-provided high level requirement specifications, for example temporal logic formulae, and against lower level error detection procedures, for example concurrency-related algorithms such as deadlock and data race detection. High level requirement specifications together with their underlying logics are defined in the Maude rewriting logic, and can then either be directly checked using the Maude rewriting engine, or be first translated to efficient data structures and then checked in Java.
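
    As a toy illustration of the monitoring idea (JPAX itself specifies properties in Maude-based temporal logic; the property and event names here are made up), the following finite-trace check verifies that every 'request' event is eventually followed by a 'response'.

```python
# Toy runtime monitor over an emitted event stream: every "request" must be
# discharged by a later "response".  Illustration only, not the JPAX observer.
def monitor_response(trace) -> bool:
    pending = 0
    for event in trace:
        if event == "request":
            pending += 1
        elif event == "response" and pending > 0:
            pending -= 1
    return pending == 0      # no request left without a later response

print(monitor_response(["request", "work", "response"]))       # True
print(monitor_response(["request", "response", "request"]))    # False
```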

  12. Logic and Simulation.

    ERIC Educational Resources Information Center

    Straumanis, Joan

    A major problem in teaching symbolic logic is that of providing individualized and early feedback to students who are learning to do proofs. To overcome this difficulty, a computer program was developed which functions as a line-by-line proof checker in Sentential Calculus. The program, DEMON, first evaluates any statement supplied by the student…

  13. Psyche/Logos: Mapping the Terrains of Mind and Rhetoric.

    ERIC Educational Resources Information Center

    Baumlin, James S.; Baumlin, Tita French

    1989-01-01

    Discusses rhetoric as mirroring psychology. Examines Aristotle's three "pisteis"--the pathetic, logical, and ethical proofs, paralleling them to Freud's id, ego, and super-ego. Explores an adequate feminine psychology and a corresponding rhetoric. Outlines two models of persuasive discourse, the rational world paradigm and the narrative…

  14. Parallel, Asynchronous Executive (PAX): System concepts, facilities, and architecture

    NASA Technical Reports Server (NTRS)

    Jones, W. H.

    1983-01-01

    The Parallel, Asynchronous Executive (PAX) is a software operating system simulation that allows many computers to work on a single problem at the same time. PAX is currently implemented on a UNIVAC 1100/42 computer system. Independent UNIVAC runstreams are used to simulate independent computers. Data are shared among independent UNIVAC runstreams through shared mass-storage files. PAX has achieved the following: (1) applied several computing processes simultaneously to a single, logically unified problem; (2) resolved most parallel processor conflicts by careful work assignment; (3) resolved by means of worker requests to PAX all conflicts not resolved by work assignment; (4) provided fault isolation and recovery mechanisms to meet the problems of an actual parallel, asynchronous processing machine. Additionally, one real-life problem has been constructed for the PAX environment. This is CASPER, a collection of aerodynamic and structural dynamic problem simulation routines. CASPER is not discussed in this report except to provide examples of parallel-processing techniques.

  15. Logic Programming in LISP.

    DTIC Science & Technology

    1981-01-01

    …work in the area of artificial intelligence and those used in general program development into a… …logic programming with LISP for implementing intelligent data base query systems. Continued developments will allow for enhancements to be made to the…

  16. Logic Models: A Tool for Effective Program Planning, Collaboration, and Monitoring. REL 2014-025

    ERIC Educational Resources Information Center

    Kekahio, Wendy; Lawton, Brian; Cicchinelli, Louis; Brandon, Paul R.

    2014-01-01

    A logic model is a visual representation of the assumptions and theory of action that underlie the structure of an education program. A program can be a strategy for instruction in a classroom, a training session for a group of teachers, a grade-level curriculum, a building-level intervention, or a district-or statewide initiative. This guide, an…

  17. Programmable Logic Controllers. Teacher Edition.

    ERIC Educational Resources Information Center

    Rauh, Bob; Kaltwasser, Stan

    These materials were developed for a seven-unit secondary or postsecondary education course on programmable logic controllers (PLCs) that treats most of the skills needed to work effectively with PLCs as programming skills. The seven units of the course cover the following topics: fundamentals of programmable logic controllers; contacts, timers,…

  18. UTP and Temporal Logic Model Checking

    NASA Astrophysics Data System (ADS)

    Anderson, Hugh; Ciobanu, Gabriel; Freitas, Leo

    In this paper we give an additional perspective to the formal verification of programs through temporal logic model checking, which uses Hoare and He Unifying Theories of Programming (UTP). Our perspective emphasizes the use of UTP designs, an alphabetised relational calculus expressed as a pre/post condition pair of relations, to verify state or temporal assertions about programs. The temporal model checking relation is derived from a satisfaction relation between the model and its properties. The contribution of this paper is that it shows a UTP perspective to temporal logic model checking. The approach includes the notion of efficiency found in traditional model checkers, which reduced a state explosion problem through the use of efficient data structures

  19. Boolean logic tree of graphene-based chemical system for molecular computation and intelligent molecular search query.

    PubMed

    Huang, Wei Tao; Luo, Hong Qun; Li, Nian Bing

    2014-05-06

    The most serious, and yet unsolved, problem of constructing molecular computing devices consists in connecting all of these molecular events into a usable device. This report demonstrates the use of a Boolean logic tree for analyzing the chemical event network based on graphene, organic dye, thrombin aptamer, and the Fenton reaction, organizing and connecting these basic chemical events. This chemical event network can be utilized to implement fluorescent combinatorial logic (including basic logic gates and complex integrated logic circuits) and fuzzy logic computing. On the basis of the Boolean logic tree analysis and logic computing, these basic chemical events can be considered as programmable "words" and chemical interactions as "syntax" logic rules to construct a molecular search engine for performing intelligent molecular search queries. Our approach is helpful in developing advanced logic programs based on molecules for application in biosensing, nanotechnology, and drug delivery.

  20. Constrained Subjective Assessment of Student Learning

    ERIC Educational Resources Information Center

    Saliu, Sokol

    2005-01-01

    Student learning is a complex incremental cognitive process; assessment needs to parallel this, reporting the results in similar terms. Application of fuzzy sets and logic to the criterion-referenced assessment of student learning is considered here. The constrained qualitative assessment (CQA) system was designed, and then applied in assessing a…

  1. Sight Application Analysis Tool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bronevetsky, G.

    2014-09-17

    The scale and complexity of scientific applications makes it very difficult to optimize, debug and extend them to support new capabilities. We have developed a tool that supports developers’ efforts to understand the logical flow of their applications and interactions between application components and hardware in a way that scales with application complexity and parallelism.

  2. Drawing Analogies between Logic Programming and Natural Language Argumentation Texts to Scaffold Learners' Understanding

    ERIC Educational Resources Information Center

    Ragonis, Noa; Shilo, Gila

    2014-01-01

    The paper presents a theoretical investigational study of the potential advantages that secondary school learners may gain from learning two different subjects, namely, logic programming within computer science studies and argumentation texts within linguistics studies. The study suggests drawing an analogy between the two subjects since they both…

  3. Application of Logic Models in a Large Scientific Research Program

    ERIC Educational Resources Information Center

    O'Keefe, Christine M.; Head, Richard J.

    2011-01-01

    It is the purpose of this article to discuss the development and application of a logic model in the context of a large scientific research program within the Commonwealth Scientific and Industrial Research Organisation (CSIRO). CSIRO is Australia's national science agency and is a publicly funded part of Australia's innovation system. It conducts…

  4. Semi-Structured Interview Protocol for Constructing Logic Models

    ERIC Educational Resources Information Center

    Gugiu, P. Cristian; Rodriguez-Campos, Liliana

    2007-01-01

    This paper details a semi-structured interview protocol that evaluators can use to develop a logic model of a program's services and outcomes. The protocol presents a series of questions, which evaluators can ask of specific program informants, that are designed to: (1) identify key informants' basic background and contextual information, (2)…

  5. Implementing a Knowledge-Based Library Information System with Typed Horn Logic.

    ERIC Educational Resources Information Center

    Ait-Kaci, Hassan; And Others

    1990-01-01

    Describes a prototype library expert system called BABEL which uses a new programing language, LOGIN, that combines the idea of attribute inheritance with logic programing. Use of hierarchical classification of library objects to build a knowledge base for a library information system is explained, and further research is suggested. (11…

  6. Evaluating bacterial gene-finding HMM structures as probabilistic logic programs.

    PubMed

    Mørk, Søren; Holmes, Ian

    2012-03-01

    Probabilistic logic programming offers a powerful way to describe and evaluate structured statistical models. To investigate the practicality of probabilistic logic programming for structure learning in bioinformatics, we undertook a simplified bacterial gene-finding benchmark in PRISM, a probabilistic dialect of Prolog. We evaluate Hidden Markov Model structures for bacterial protein-coding gene potential, including a simple null model structure, three structures based on existing bacterial gene finders and two novel model structures. We test standard versions as well as ADPH length modeling and three-state versions of the five model structures. The models are all represented as probabilistic logic programs and evaluated using the PRISM machine learning system in terms of statistical information criteria and gene-finding prediction accuracy, in two bacterial genomes. Neither of our implementations of the two currently most used model structures is the best performing in terms of statistical information criteria or prediction performances, suggesting that better-fitting models might be achievable. The source code of all PRISM models, data and additional scripts are freely available for download at: http://github.com/somork/codonhmm. Supplementary data are available at Bioinformatics online.
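
    To make the evaluation criteria concrete, the statistical score of a candidate model structure combines its data log-likelihood with a penalty for its number of free parameters; a minimal Python sketch of such scoring for a plain HMM (toy parameters, outside any probabilistic logic programming system such as PRISM):

        import numpy as np

        def forward_loglik(obs, pi, A, B):
            """Scaled forward algorithm: log P(obs | pi, A, B)."""
            alpha = pi * B[:, obs[0]]
            loglik = np.log(alpha.sum())
            alpha /= alpha.sum()
            for o in obs[1:]:
                alpha = (alpha @ A) * B[:, o]
                loglik += np.log(alpha.sum())
                alpha /= alpha.sum()
            return loglik

        def bic(loglik, n_free_params, n_obs):
            """Bayesian information criterion: lower is better."""
            return -2.0 * loglik + n_free_params * np.log(n_obs)

        # toy two-state (coding / non-coding) model over a 0/1-encoded sequence
        pi = np.array([0.5, 0.5])
        A = np.array([[0.9, 0.1], [0.2, 0.8]])
        B = np.array([[0.7, 0.3], [0.4, 0.6]])
        obs = [0, 1, 1, 0, 1]
        print(bic(forward_loglik(obs, pi, A, B),
                  n_free_params=5, n_obs=len(obs)))   # 1 initial + 2 transition + 2 emission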

  7. The Effect of Scratch- and Lego Mindstorms Ev3-Based Programming Activities on Academic Achievement, Problem-Solving Skills and Logical-Mathematical Thinking Skills of Students

    ERIC Educational Resources Information Center

    Korkmaz, Özgen

    2016-01-01

    The aim of this study was to investigate the effect of the Scratch and Lego Mindstorms Ev3 programming activities on academic achievement with respect to computer programming, and on the problem-solving and logical-mathematical thinking skills of students. This study was a semi-experimental, pretest-posttest study with two experimental groups and…

  8. Efficient dynamic optimization of logic programs

    NASA Technical Reports Server (NTRS)

    Laird, Phil

    1992-01-01

    A summary is given of the dynamic optimization approach to speed up learning for logic programs. The problem is to restructure a recursive program into an equivalent program whose expected performance is optimal for an unknown but fixed population of problem instances. We define the term 'optimal' relative to the source of input instances and sketch an algorithm that can come within a logarithmic factor of optimal with high probability. Finally, we show that finding high-utility unfolding operations (such as EBG) can be reduced to clause reordering.
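
    A toy illustration of the clause-reordering flavor of the problem (not the paper's algorithm): if mutually exclusive clauses have costs and success probabilities estimated from a sample of problem instances, ordering them by probability-to-cost ratio minimizes the expected matching cost under those simplifying assumptions.

        def reorder_clauses(clauses):
            """clauses: list of (name, cost, success_probability) tuples,
            assumed mutually exclusive; returns an order minimizing the
            expected cost of finding the matching clause."""
            return sorted(clauses, key=lambda c: c[2] / c[1], reverse=True)

        clauses = [("base_case", 1.0, 0.05),
                   ("unfolded_rule", 3.0, 0.60),
                   ("general_rule", 5.0, 0.35)]
        print([name for name, _, _ in reorder_clauses(clauses)])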

  9. DESIGN METHODOLOGIES AND TOOLS FOR SINGLE-FLUX QUANTUM LOGIC CIRCUITS

    DTIC Science & Technology

    2017-10-01

    University of Southern California, October 2017, final report; contract number FA8750-15-C-0203. …of this project was to investigate the state-of-the-art in design and optimization of single-flux quantum (SFQ) logic circuits, e.g., RSFQ and ERSFQ…

  10. A Current Logical Framework: The Propositional Fragment

    DTIC Science & Technology

    2003-01-01

    Under the Curry-Howard isomorphism, M can also be read as a proof term, and A as a proposition of intuitionistic linear logic in its formulation as DILL… …the obligation to ensure that the underlying logic (via the Curry-Howard isomorphism, if you like) is sensible. In particular, the principles of… …Proceedings of the International Logic Programming Symposium (ILPS'95), pages 51-65, Portland, Oregon, December 1995. MIT Press. 6. G. Bellin and P. J…

  11. Petri nets SM-cover-based on heuristic coloring algorithm

    NASA Astrophysics Data System (ADS)

    Tkacz, Jacek; Doligalski, Michał

    2015-09-01

    In the paper, a heuristic coloring algorithm for interpreted Petri nets is presented. Coloring is used to determine the State Machine (SM) subnets. The algorithm reduces the Petri net in order to reduce the computational complexity and finds one of its possible State Machine covers. The proposed algorithm uses elements of the interpretation of Petri nets. The obtained result may not be the best, but it is sufficient for use in rapid prototyping of logic controllers. The found SM-cover will also be used in the development of algorithms for decomposition, and for modular synthesis and implementation of parallel logic controllers. The correctness of the developed heuristic algorithm was verified using a Gentzen formal reasoning system.

  12. Modeling of single event transients with dual double-exponential current sources: Implications for logic cell characterization

    DOE PAGES

    Black, Dolores Archuleta; Robinson, William H.; Wilcox, Ian Zachary; ...

    2015-08-07

    Single event effects (SEE) are a reliability concern for modern microelectronics. Bit corruptions can be caused by single event upsets (SEUs) in the storage cells or by sampling single event transients (SETs) from a logic path. Likewise, an accurate prediction of soft error susceptibility from SETs requires good models to convert collected charge into compact descriptions of the current injection process. This paper describes a simple, yet effective, method to model the current waveform resulting from a charge collection event for SET circuit simulations. The model uses two double-exponential current sources in parallel, and the results illustrate why a conventional model based on one double-exponential source can be incomplete. Furthermore, a small set of logic cells with varying input conditions, drive strength, and output loading are simulated to extract the parameters for the dual double-exponential current sources. As a result, the parameters are based upon both the node capacitance and the restoring current (i.e., drive strength) of the logic cell.
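
    In generic notation (not the paper's parameter names), a single double-exponential source injects a current of the form below, and the dual-source model simply superposes two such terms with independently fitted amplitudes and time constants:

        I(t) = I_{1}\left(e^{-t/\tau_{f1}} - e^{-t/\tau_{r1}}\right)
             + I_{2}\left(e^{-t/\tau_{f2}} - e^{-t/\tau_{r2}}\right)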

  13. General purpose programmable accelerator board

    DOEpatents

    Robertson, Perry J.; Witzke, Edward L.

    2001-01-01

    A general purpose accelerator board and acceleration method comprising use of: one or more programmable logic devices; a plurality of memory blocks; bus interface for communicating data between the memory blocks and devices external to the board; and dynamic programming capabilities for providing logic to the programmable logic device to be executed on data in the memory blocks.

  14. California Geriatric Education Center Logic Model: An Evaluation and Communication Tool

    ERIC Educational Resources Information Center

    Price, Rachel M.; Alkema, Gretchen E.; Frank, Janet C.

    2009-01-01

    A logic model is a communications tool that graphically represents a program's resources, activities, priority target audiences for change, and the anticipated outcomes. This article describes the logic model development process undertaken by the California Geriatric Education Center in spring 2008. The CGEC is one of 48 Geriatric Education…

  15. Learning Probabilistic Logic Models from Probabilistic Examples

    PubMed Central

    Chen, Jianzhong; Muggleton, Stephen; Santos, José

    2009-01-01

    We revisit an application developed originally using abductive Inductive Logic Programming (ILP) for modeling inhibition in metabolic networks. The example data was derived from studies of the effects of toxins on rats using Nuclear Magnetic Resonance (NMR) time-trace analysis of their biofluids together with background knowledge representing a subset of the Kyoto Encyclopedia of Genes and Genomes (KEGG). We now apply two Probabilistic ILP (PILP) approaches - abductive Stochastic Logic Programs (SLPs) and PRogramming In Statistical modeling (PRISM) to the application. Both approaches support abductive learning and probability predictions. Abductive SLPs are a PILP framework that provides possible worlds semantics to SLPs through abduction. Instead of learning logic models from non-probabilistic examples as done in ILP, the PILP approach applied in this paper is based on a general technique for introducing probability labels within a standard scientific experimental setting involving control and treated data. Our results demonstrate that the PILP approach provides a way of learning probabilistic logic models from probabilistic examples, and the PILP models learned from probabilistic examples lead to a significant decrease in error accompanied by improved insight from the learned results compared with the PILP models learned from non-probabilistic examples. PMID:19888348

  16. Learning Probabilistic Logic Models from Probabilistic Examples.

    PubMed

    Chen, Jianzhong; Muggleton, Stephen; Santos, José

    2008-10-01

    We revisit an application developed originally using abductive Inductive Logic Programming (ILP) for modeling inhibition in metabolic networks. The example data was derived from studies of the effects of toxins on rats using Nuclear Magnetic Resonance (NMR) time-trace analysis of their biofluids together with background knowledge representing a subset of the Kyoto Encyclopedia of Genes and Genomes (KEGG). We now apply two Probabilistic ILP (PILP) approaches - abductive Stochastic Logic Programs (SLPs) and PRogramming In Statistical modeling (PRISM) to the application. Both approaches support abductive learning and probability predictions. Abductive SLPs are a PILP framework that provides possible worlds semantics to SLPs through abduction. Instead of learning logic models from non-probabilistic examples as done in ILP, the PILP approach applied in this paper is based on a general technique for introducing probability labels within a standard scientific experimental setting involving control and treated data. Our results demonstrate that the PILP approach provides a way of learning probabilistic logic models from probabilistic examples, and the PILP models learned from probabilistic examples lead to a significant decrease in error accompanied by improved insight from the learned results compared with the PILP models learned from non-probabilistic examples.

  17. Automata-Based Verification of Temporal Properties on Running Programs

    NASA Technical Reports Server (NTRS)

    Giannakopoulou, Dimitra; Havelund, Klaus; Lan, Sonie (Technical Monitor)

    2001-01-01

    This paper presents an approach to checking a running program against its Linear Temporal Logic (LTL) specifications. LTL is a widely used logic for expressing properties of programs viewed as sets of executions. Our approach consists of translating LTL formulae to finite-state automata, which are used as observers of the program behavior. The translation algorithm we propose modifies standard LTL to Buchi automata conversion techniques to generate automata that check finite program traces. The algorithm has been implemented in a tool, which has been integrated with the generic JPaX framework for runtime analysis of Java programs.

  18. An Introduction to Logic Control Systems for the Behavioral Scientist, Part I, Text.

    ERIC Educational Resources Information Center

    Larsen, Lawrence A.

    This programed instruction course gives a basic introduction to solid state programing equipment. Course objectives include giving the student (1) a working knowledge of the various types of units used in building digital logic control systems and (2) an idea of how they interconnect to perform different functions. The course has no prerequisites…

  19. The Application of LOGO! in Control System of a Transmission and Sorting Mechanism

    NASA Astrophysics Data System (ADS)

    Liu, Jian; Lv, Yuan-Jun

    Logic programming with the general logic control module LOGO! is applied to the control of a transmission and sorting mechanism. First, the structure and operating principle of the mechanism are introduced. Then the pneumatic loop of the mechanism is plotted in the FluidSIM-P software. Finally, the pneumatic loop and the motors are controlled by LOGO!, which makes the control process simple and clear, in contrast to the complicated control achieved with ordinary relays. LOGO! can realize the complicated interlock control otherwise composed of intermediate relays and time relays. In the control process, the logic control function of LOGO! is fully used in the logic programming so that the system realizes control of the air cylinder and motor. The result is a reliable and adjustable mechanism.

  20. Fuzzy logic and neural network technologies

    NASA Technical Reports Server (NTRS)

    Villarreal, James A.; Lea, Robert N.; Savely, Robert T.

    1992-01-01

    Applications of fuzzy logic technologies in NASA projects are reviewed to examine their advantages in the development of neural networks for aerospace and commercial expert systems and control. Examples of fuzzy-logic applications include a 6-DOF spacecraft controller, collision-avoidance systems, and reinforcement-learning techniques. The commercial applications examined include a fuzzy autofocusing system, an air conditioning system, and an automobile transmission application. The practical use of fuzzy logic is set in the theoretical context of artificial neural systems (ANSs) to give the background for an overview of ANS research programs at NASA. The research and application programs include the Network Execution and Training Simulator and faster training algorithms such as the Difference Optimized Training Scheme. The networks are well suited for pattern-recognition applications such as predicting sunspots, controlling posture maintenance, and conducting adaptive diagnoses.

  1. A Tutorial on Parallel and Concurrent Programming in Haskell

    NASA Astrophysics Data System (ADS)

    Peyton Jones, Simon; Singh, Satnam

    This practical tutorial introduces the features available in Haskell for writing parallel and concurrent programs. We first describe how to write semi-explicit parallel programs by using annotations to express opportunities for parallelism and to help control the granularity of parallelism for effective execution on modern operating systems and processors. We then describe the mechanisms provided by Haskell for writing explicitly parallel programs with a focus on the use of software transactional memory to help share information between threads. Finally, we show how nested data parallelism can be used to write deterministically parallel programs, allowing programmers to use rich data types in data-parallel programs that are automatically transformed into flat data-parallel versions for efficient execution on multi-core processors.

  2. Simulated Laboratory in Digital Logic.

    ERIC Educational Resources Information Center

    Cleaver, Thomas G.

    Design of computer circuits used to be a pencil and paper task followed by laboratory tests, but logic circuit design can now be done in half the time as the engineer accesses a program which simulates the behavior of real digital circuits, and does all the wiring and testing on his computer screen. A simulated laboratory in digital logic has been…

  3. A Public Service-Dominant Logic for the Executive Education of Public Managers

    ERIC Educational Resources Information Center

    Hiedemann, Alexander M.; Nasi, Greta; Saporito, Raffaella

    2017-01-01

    Building on the concept of Public Service-Dominant Logic (PSDL), this article aims to apply the public service-dominant logic to executive education. We argue that fit-for-purpose and effective executive master programs for public managers (EMPA) need to be designed from a public service perspective. Framing executive education as a service…

  4. Reinforcing Geometric Properties with Shapedoku Puzzles

    ERIC Educational Resources Information Center

    Wanko, Jeffrey J.; Nickell, Jennifer V.

    2013-01-01

    Shapedoku is a new type of puzzle that combines logic and spatial reasoning with understanding of basic geometric concepts such as slope, parallelism, perpendicularity, and properties of shapes. Shapedoku can be solved by individuals and, as demonstrated here, can form the basis of a review for geometry students as they create their own. In this…

  5. A Practical Methodology for the Systematic Development of Multiple Choice Tests.

    ERIC Educational Resources Information Center

    Blumberg, Phyllis; Felner, Joel

    Using Guttman's facet design analysis, four parallel forms of a multiple-choice test were developed. A mapping sentence, logically representing the universe of content of a basic cardiology course, specified the facets of the course and the semantic structural units linking them. The facets were: cognitive processes, disease priority, specific…

  6. Versatile solid-state relay

    NASA Technical Reports Server (NTRS)

    Fox, D. A.

    1977-01-01

    Solid-state relay (SSR), containing multinode control logic, is operated as normally open, normally closed, or latched. Moreover, several can be paralleled to form two-pole or double-throw relays. The versatile unit ends the need to design a custom control circuit for every relay application. The technique can be extended to incorporate selectable time delay, on operation or release, or pulsed output.

  7. First CLIPS Conference Proceedings, volume 2

    NASA Technical Reports Server (NTRS)

    1990-01-01

    The topics of volume 2 of the First CLIPS Conference are associated with the following applications: quality control; intelligent data bases and networks; Space Station Freedom; Space Shuttle and satellite; user interface; artificial neural systems and fuzzy logic; parallel and distributed processing; enhancements to CLIPS; aerospace; simulation and defense; advisory systems and tutors; and intelligent control.

  8. Fuzzy Logic Based Autonomous Parallel Parking System with Kalman Filtering

    NASA Astrophysics Data System (ADS)

    Panomruttanarug, Benjamas; Higuchi, Kohji

    This paper presents an emulation of fuzzy logic control schemes for an autonomous parallel parking system in a backward maneuver. There are four infrared sensors sending the distance data to a microcontroller for generating an obstacle-free parking path. Two of them mounted on the front and rear wheels on the parking side are used as the inputs to the fuzzy rules to calculate a proper steering angle while backing. The other two attached to the front and rear ends serve for avoiding collision with other cars along the parking space. At the end of parking processes, the vehicle will be in line with other parked cars and positioned in the middle of the free space. Fuzzy rules are designed based upon a wall following process. Performance of the infrared sensors is improved using Kalman filtering. The design method needs extra information from ultrasonic sensors. Starting from modeling the ultrasonic sensor in 1-D state space forms, one makes use of the infrared sensor as a measurement to update the predicted values. Experimental results demonstrate the effectiveness of sensor improvement.
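
    A minimal sketch of the scalar predict/update cycle involved in such sensor fusion (the variable names and noise variances are illustrative, not the paper's tuning; the prediction is assumed to come from the ultrasonic-sensor model and the measurement from the infrared sensor):

        def kalman_1d(d_pred, p_pred, z_infrared, q=0.01, r=0.25):
            """One predict/update cycle of a scalar Kalman filter on a distance estimate."""
            p_pred = p_pred + q                          # predict: uncertainty grows
            k = p_pred / (p_pred + r)                    # Kalman gain
            d_est = d_pred + k * (z_infrared - d_pred)   # correct with the measurement
            p_est = (1.0 - k) * p_pred
            return d_est, p_est

        print(kalman_1d(d_pred=0.50, p_pred=0.04, z_infrared=0.46))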

  9. Using Weighted Constraints to Diagnose Errors in Logic Programming--The Case of an Ill-Defined Domain

    ERIC Educational Resources Information Center

    Le, Nguyen-Thinh; Menzel, Wolfgang

    2009-01-01

    In this paper, we introduce logic programming as a domain that exhibits some characteristics of being ill-defined. In order to diagnose student errors in such a domain, we need a means to hypothesise the student's intention, that is the strategy underlying her solution. This is achieved by weighting constraints, so that hypotheses about solution…

  10. Improving Running Times for the Determination of Fractional Snow-Covered Area from Landsat TM/ETM+ via Utilization of the CUDA® Programming Paradigm

    NASA Astrophysics Data System (ADS)

    McGibbney, L. J.; Rittger, K.; Painter, T. H.; Selkowitz, D.; Mattmann, C. A.; Ramirez, P.

    2014-12-01

    As part of a JPL-USGS collaboration to expand distribution of essential climate variables (ECV) to include on-demand fractional snow cover, we describe our experience and implementation of a shift towards the use of NVIDIA's CUDA® parallel computing platform and programming model. In particular, the on-demand aspect of this work involves the improvement (via faster processing and a reduction in overall running times) of the determination of fractional snow-covered area (fSCA) from Landsat TM/ETM+. Our observations indicate that processing tasks associated with remote sensing, including the Snow Covered Area and Grain Size Model (SCAG) applied to MODIS or Landsat TM/ETM+, are computationally intensive. We believe the shift to the CUDA programming paradigm represents a significant improvement in the ability to more quickly assert the outcomes of such activities, and we use the TMSCAG model as our subject to highlight this argument. We do this by describing how we can ingest a Landsat surface reflectance image (typically provided in HDF format) and perform spectral mixture analysis to produce land cover fractions including snow, vegetation and rock/soil, while greatly reducing running time for such tasks. Within the scope of this work, we first document the original workflow used to assert fSCA for Landsat TM and its primary shortcomings. We then introduce the logic and justification behind the switch to the CUDA paradigm for running single as well as batch jobs on the GPU in order to achieve parallel processing. Finally, we share lessons learned from the consolidation of a myriad of existing algorithms into a single set of code in a single target language, as well as the benefits this ultimately provides scientists at the USGS.

  11. Massively parallel processor computer

    NASA Technical Reports Server (NTRS)

    Fung, L. W. (Inventor)

    1983-01-01

    An apparatus for processing multidimensional data with strong spatial characteristics, such as raw image data, characterized by a large number of parallel data streams in an ordered array is described. It comprises a large number (e.g., 16,384 in a 128 x 128 array) of parallel processing elements operating simultaneously and independently on single bit slices of a corresponding array of incoming data streams under control of a single set of instructions. Each of the processing elements comprises a bidirectional data bus in communication with a register for storing single bit slices together with a random access memory unit and associated circuitry, including a binary counter/shift register device, for performing logical and arithmetical computations on the bit slices, and an I/O unit for interfacing the bidirectional data bus with the data stream source. The massively parallel processor architecture enables very high speed processing of large amounts of ordered parallel data, including spatial translation by shifting or sliding of bits vertically or horizontally to neighboring processing elements.

  12. Support for non-locking parallel reception of packets belonging to a single memory reception FIFO

    DOEpatents

    Chen, Dong [Yorktown Heights, NY; Heidelberger, Philip [Yorktown Heights, NY; Salapura, Valentina [Yorktown Heights, NY; Senger, Robert M [Yorktown Heights, NY; Steinmacher-Burow, Burkhard [Boeblingen, DE; Sugawara, Yutaka [Yorktown Heights, NY

    2011-01-27

    A method and apparatus for distributed parallel messaging in a parallel computing system. A plurality of DMA engine units are configured in a multiprocessor system to operate in parallel, one DMA engine unit for transferring a current packet received at a network reception queue to a memory location in a memory FIFO (rmFIFO) region of a memory. A control unit implements logic to determine whether any prior received packet destined for that rmFIFO is still in a process of being stored in the associated memory by another DMA engine unit of the plurality, and prevent the one DMA engine unit from indicating completion of storing the current received packet in the reception memory FIFO (rmFIFO) until all prior received packets destined for that rmFIFO are completely stored by the other DMA engine units. Thus, there is provided non-locking support so that multiple packets destined for a single rmFIFO are transferred and stored in parallel to predetermined locations in a memory.

  13. Eigensolution of finite element problems in a completely connected parallel architecture

    NASA Technical Reports Server (NTRS)

    Akl, F.; Morel, M.

    1989-01-01

    A parallel algorithm is presented for the solution of the generalized eigenproblem in linear elastic finite element analysis. The algorithm is based on a completely connected parallel architecture in which each processor is allowed to communicate with all other processors. The algorithm is successfully implemented on a tightly coupled MIMD parallel processor. A finite element model is divided into m domains each of which is assumed to process n elements. Each domain is then assigned to a processor or to a logical processor (task) if the number of domains exceeds the number of physical processors. The effect of the number of domains, the number of degrees-of-freedom located along the global fronts, and the dimension of the subspace on the performance of the algorithm is investigated. For a 64-element rectangular plate, speed-ups of 1.86, 3.13, 3.18, and 3.61 are achieved on two, four, six, and eight processors, respectively.

  14. A tool for simulating parallel branch-and-bound methods

    NASA Astrophysics Data System (ADS)

    Golubeva, Yana; Orlov, Yury; Posypkin, Mikhail

    2016-01-01

    The Branch-and-Bound method is known as one of the most powerful but very resource-consuming global optimization methods. Parallel and distributed computing can efficiently cope with this issue. The major difficulty in the parallel B&B method is the need for dynamic load redistribution. Therefore, the design and study of load balancing algorithms is a separate and very important research topic. This paper presents a tool for simulating the parallel Branch-and-Bound method. The simulator allows one to run load balancing algorithms with various numbers of processors, sizes of the search tree, and characteristics of the supercomputer's interconnect, thereby fostering deep study of load distribution strategies. The process of resolution of the optimization problem by the B&B method is replaced by a stochastic branching process. Data exchanges are modeled using the concept of logical time. The user-friendly graphical interface to the simulator provides efficient visualization and convenient performance analysis.
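
    The logical time used to model data exchanges can be pictured as a Lamport-style clock; a minimal Python sketch of the bookkeeping (the class and method names are illustrative, not the simulator's interface):

        class Worker:
            """Carries a scalar logical clock updated on local events and messages."""
            def __init__(self):
                self.clock = 0

            def local_event(self):
                self.clock += 1

            def send(self):
                self.clock += 1
                return self.clock                 # timestamp travels with the message

            def receive(self, msg_clock):
                self.clock = max(self.clock, msg_clock) + 1

        a, b = Worker(), Worker()
        a.local_event()
        b.receive(a.send())                       # b's clock jumps past a's send time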

  15. 10-Qubit Entanglement and Parallel Logic Operations with a Superconducting Circuit

    NASA Astrophysics Data System (ADS)

    Song, Chao; Xu, Kai; Liu, Wuxin; Yang, Chui-ping; Zheng, Shi-Biao; Deng, Hui; Xie, Qiwei; Huang, Keqiang; Guo, Qiujiang; Zhang, Libo; Zhang, Pengfei; Xu, Da; Zheng, Dongning; Zhu, Xiaobo; Wang, H.; Chen, Y.-A.; Lu, C.-Y.; Han, Siyuan; Pan, Jian-Wei

    2017-11-01

    Here we report on the production and tomography of genuinely entangled Greenberger-Horne-Zeilinger states with up to ten qubits connecting to a bus resonator in a superconducting circuit, where the resonator-mediated qubit-qubit interactions are used to controllably entangle multiple qubits and to operate on different pairs of qubits in parallel. The resulting 10-qubit density matrix is probed by quantum state tomography, with a fidelity of 0.668 ±0.025 . Our results demonstrate the largest entanglement created so far in solid-state architectures and pave the way to large-scale quantum computation.

  16. 10-Qubit Entanglement and Parallel Logic Operations with a Superconducting Circuit.

    PubMed

    Song, Chao; Xu, Kai; Liu, Wuxin; Yang, Chui-Ping; Zheng, Shi-Biao; Deng, Hui; Xie, Qiwei; Huang, Keqiang; Guo, Qiujiang; Zhang, Libo; Zhang, Pengfei; Xu, Da; Zheng, Dongning; Zhu, Xiaobo; Wang, H; Chen, Y-A; Lu, C-Y; Han, Siyuan; Pan, Jian-Wei

    2017-11-03

    Here we report on the production and tomography of genuinely entangled Greenberger-Horne-Zeilinger states with up to ten qubits connecting to a bus resonator in a superconducting circuit, where the resonator-mediated qubit-qubit interactions are used to controllably entangle multiple qubits and to operate on different pairs of qubits in parallel. The resulting 10-qubit density matrix is probed by quantum state tomography, with a fidelity of 0.668±0.025. Our results demonstrate the largest entanglement created so far in solid-state architectures and pave the way to large-scale quantum computation.

  17. Perspectives on an education in computational biology and medicine.

    PubMed

    Rubinstein, Jill C

    2012-09-01

    The mainstream application of massively parallel, high-throughput assays in biomedical research has created a demand for scientists educated in Computational Biology and Bioinformatics (CBB). In response, formalized graduate programs have rapidly evolved over the past decade. Concurrently, there is increasing need for clinicians trained to oversee the responsible translation of CBB research into clinical tools. Physician-scientists with dedicated CBB training can facilitate such translation, positioning themselves at the intersection between computational biomedical research and medicine. This perspective explores key elements of the educational path to such a position, specifically addressing: 1) evolving perceptions of the role of the computational biologist and the impact on training and career opportunities; 2) challenges in and strategies for obtaining the core skill set required of a biomedical researcher in a computational world; and 3) how the combination of CBB with medical training provides a logical foundation for a career in academic medicine and/or biomedical research.

  18. Two-dimensional radiant energy array computers and computing devices

    NASA Technical Reports Server (NTRS)

    Schaefer, D. H.; Strong, J. P., III (Inventor)

    1976-01-01

    Two dimensional digital computers and computer devices operate in parallel on rectangular arrays of digital radiant energy optical signal elements which are arranged in ordered rows and columns. Logic gate devices receive two input arrays and provide an output array having digital states dependent only on the digital states of the signal elements of the two input arrays at corresponding row and column positions. The logic devices include an array of photoconductors responsive to at least one of the input arrays for either selectively accelerating electrons to a phosphor output surface, applying potentials to an electroluminescent output layer, exciting an array of discrete radiant energy sources, or exciting a liquid crystal to influence crystal transparency or reflectivity.

  19. Structured Analysis/Design - LSA Task 101, Early Logistic Support Analysis Strategy, Subtask 101.2.1, Develop Early LSA Strategy

    DTIC Science & Technology

    1990-07-01

    replacing "logic diagrams" or "flow charts") to aid in coordinating the functions to be performed by a computer program and its associated Inputs...ADDRESS (City, State, and ZIP Code) 10. SOURCE OF FUNDING NUMBERS PROGRAM PROJECT ITASK IWORK UNIT ELEMENT NO. NO. NO. ACCESSION NO. 11. TITLE...the analysis. Both the logical model and detailed procedures are used to develop the application software programs which will be provided to Government

  20. A system for programming experiments and for recording and analyzing data automatically

    PubMed Central

    Herrick, Robert M.; Denelsbeck, John S.

    1963-01-01

    A system designed for use in complex operant conditioning experiments is described. Some of its key features are: (a) plugboards that permit the experimenter to change either from one program to another or from one analysis to another in less than a minute, (b) time-sharing of permanently-wired, electronic logic components, (c) recordings suitable for automatic analyses. Included are flow diagrams of the system and sample logic diagrams for programming experiments and for analyzing data. PMID:14055967

  1. Comparison between four dissimilar solar panel configurations

    NASA Astrophysics Data System (ADS)

    Suleiman, K.; Ali, U. A.; Yusuf, Ibrahim; Koko, A. D.; Bala, S. I.

    2017-12-01

    Several studies on photovoltaic systems have focused on how they operate and the energy required to operate them. Little attention has been paid to their configurations, to modeling of mean time to system failure, availability and cost benefit, and to comparisons of parallel and series-parallel designs. In this research work, four system configurations were studied. Configuration I consists of two sub-components arranged in parallel with 24 V each, configuration II consists of four sub-components arranged logically in parallel with 12 V each, configuration III consists of four sub-components arranged in series-parallel with 8 V each, and configuration IV has six sub-components with 6 V each arranged in series-parallel. Comparative analysis was made using the Chapman-Kolmogorov method. Explicit expressions for the mean time to system failure, steady-state availability and cost benefit were derived for the comparison. A ranking method was used to determine the optimal configuration of the systems. The analytical and numerical solutions for system availability and mean time to system failure were determined, and it was found that configuration I is the optimal configuration.

  2. Optical programmable Boolean logic unit.

    PubMed

    Chattopadhyay, Tanay

    2011-11-10

    Logic units are the building blocks of many important computational operations like arithmetic, multiplexer-demultiplexer, radix conversion, parity checker cum generator, etc. Multifunctional logic operation is very much essential in this respect. Here a programmable Boolean logic unit is proposed that can perform 16 Boolean logical operations from a single optical input according to the programming input, without changing the circuit design. This circuit has two outputs, one complementary to the other, hence no loss of data can occur. The circuit is basically designed around a 2×2 polarization-independent optical crossbar switch. Performance of the proposed circuit has been evaluated through numerical simulations. The binary logical states (0,1) are represented by the absence of light (null) and the presence of light, respectively.
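
    The programmability amounts to selecting one of the 16 two-input Boolean functions with a 4-bit program word; a truth-table sketch of that selection logic in Python (abstracting away the optical crossbar implementation entirely):

        def boolean_unit(a, b, program):
            """program lists the desired outputs for inputs (a, b) =
            (0,0), (0,1), (1,0), (1,1); the second return value is the
            complementary output, mirroring the two-output design."""
            out = program[2 * a + b]
            return out, 1 - out

        AND, XOR, NAND = (0, 0, 0, 1), (0, 1, 1, 0), (1, 1, 1, 0)
        print(boolean_unit(1, 1, XOR))    # (0, 1)
        print(boolean_unit(1, 0, NAND))   # (1, 0)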

  3. Automatic Generation of Directive-Based Parallel Programs for Shared Memory Parallel Systems

    NASA Technical Reports Server (NTRS)

    Jin, Hao-Qiang; Yan, Jerry; Frumkin, Michael

    2000-01-01

    The shared-memory programming model is a very effective way to achieve parallelism on shared memory parallel computers. As great progress was made in hardware and software technologies, performance of parallel programs with compiler directives has demonstrated large improvement. The introduction of OpenMP directives, the industrial standard for shared-memory programming, has minimized the issue of portability. Due to its ease of programming and its good performance, the technique has become very popular. In this study, we have extended CAPTools, a computer-aided parallelization toolkit, to automatically generate directive-based, OpenMP, parallel programs. We outline techniques used in the implementation of the tool and present test results on the NAS parallel benchmarks and ARC3D, a CFD application. This work demonstrates the great potential of using computer-aided tools to quickly port parallel programs and also achieve good performance.

  4. Federal Highway Administration research and technology evaluation final report : Eco-Logical

    DOT National Transportation Integrated Search

    2018-03-01

    This report documents an evaluation of the Federal Highway Administration's (FHWA) Research and Technology Program activities on the implementation of the Eco-Logical approach by State transportation departments and metropolitan planning organizati...

  5. Logic programming to infer complex RNA expression patterns from RNA-seq data.

    PubMed

    Weirick, Tyler; Militello, Giuseppe; Ponomareva, Yuliya; John, David; Döring, Claudia; Dimmeler, Stefanie; Uchida, Shizuka

    2018-03-01

    To meet the increasing demand in the field, numerous long noncoding RNA (lncRNA) databases are available. Given many lncRNAs are specifically expressed in certain cell types and/or time-dependent manners, most lncRNA databases fall short of providing such profiles. We developed a strategy using logic programming to handle the complex organization of organs, their tissues and cell types as well as gender and developmental time points. To showcase this strategy, we introduce 'RenalDB' (http://renaldb.uni-frankfurt.de), a database providing expression profiles of RNAs in major organs focusing on kidney tissues and cells. RenalDB uses logic programming to describe complex anatomy, sample metadata and logical relationships defining expression, enrichment or specificity. We validated the content of RenalDB with biological experiments and functionally characterized two long intergenic noncoding RNAs: LOC440173 is important for cell growth or cell survival, whereas PAXIP1-AS1 is a regulator of cell death. We anticipate RenalDB will be used as a first step toward functional studies of lncRNAs in the kidney.

  6. Load influence on gear noise. [mathematical model for determining acoustic pressure level as function of load

    NASA Technical Reports Server (NTRS)

    Merticaru, V.

    1974-01-01

    An original mathematical model is proposed to derive equations for calculation of gear noise. These equations permit the acoustic pressure level to be determined as a function of load. Application of this method to three parallel gears is reported. The logical calculation scheme is given, as well as the results obtained.

  7. Identifying Interacting Genetic Variations by Fish-Swarm Logic Regression

    PubMed Central

    Yang, Aiyuan; Yan, Chunxia; Zhu, Feng; Zhao, Zhongmeng; Cao, Zhi

    2013-01-01

    Understanding associations between genotypes and complex traits is a fundamental problem in human genetics. A major open problem in mapping phenotypes is that of identifying a set of interacting genetic variants, which might contribute to complex traits. Logic regression (LR) is a powerful multivariant association tool. Several LR-based approaches have been successfully applied to different datasets. However, these approaches are not adequate with regard to accuracy and efficiency. In this paper, we propose a new LR-based approach, called fish-swarm logic regression (FSLR), which improves the logic regression process by incorporating swarm optimization. In our approach, a school of fish agents are conducted in parallel. Each fish agent holds a regression model, while the school searches for better models through various preset behaviors. A swarm algorithm improves the accuracy and the efficiency by speeding up the convergence and preventing it from dropping into local optimums. We apply our approach on a real screening dataset and a series of simulation scenarios. Compared to three existing LR-based approaches, our approach outperforms them by having lower type I and type II error rates, being able to identify more preset causal sites, and performing at faster speeds. PMID:23984382

  8. Application of logic models in a large scientific research program.

    PubMed

    O'Keefe, Christine M; Head, Richard J

    2011-08-01

    It is the purpose of this article to discuss the development and application of a logic model in the context of a large scientific research program within the Commonwealth Scientific and Industrial Research Organisation (CSIRO). CSIRO is Australia's national science agency and is a publicly funded part of Australia's innovation system. It conducts mission-driven scientific research focussed on delivering results with relevance and impact for Australia, where impact is defined and measured in economic, environmental and social terms at the national level. The Australian Government has recently signalled an increasing emphasis on performance assessment and evaluation, which in the CSIRO context implies an increasing emphasis on ensuring and demonstrating the impact of its research programs. CSIRO continues to develop and improve its approaches to impact planning and evaluation, including conducting a trial of a program logic approach in the CSIRO Preventative Health National Research Flagship. During the trial, improvements were observed in clarity of the research goals and path to impact, as well as in alignment of science and support function activities with national challenge goals. Further benefits were observed in terms of communication of the goals and expected impact of CSIRO's research programs both within CSIRO and externally. The key lesson learned was that significant value was achieved through the process itself, as well as the outcome. Recommendations based on the CSIRO trial may be of interest to managers of scientific research considering developing similar logic models for their research projects. The CSIRO experience has shown that there are significant benefits to be gained, especially if the project participants have a major role in the process of developing the logic model. Copyright © 2011 Elsevier Ltd. All rights reserved.

  9. Flight Design System-1 System Design Document. Volume 9: Executive logic flow, program design language

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The detailed logic flow for the Flight Design System Executive is presented. The system is designed to provide the hardware/software capability required for operational support of shuttle flight planning.

  10. Molecular implementation of simple logic programs.

    PubMed

    Ran, Tom; Kaplan, Shai; Shapiro, Ehud

    2009-10-01

    Autonomous programmable computing devices made of biomolecules could interact with a biological environment and be used in future biological and medical applications. Biomolecular implementations of finite automata and logic gates have already been developed. Here, we report an autonomous programmable molecular system based on the manipulation of DNA strands that is capable of performing simple logical deductions. Using molecular representations of facts such as Man(Socrates) and rules such as Mortal(X) <-- Man(X) (Every Man is Mortal), the system can answer molecular queries such as Mortal(Socrates)? (Is Socrates Mortal?) and Mortal(X)? (Who is Mortal?). This biomolecular computing system compares favourably with previous approaches in terms of expressive power, performance and precision. A compiler translates facts, rules and queries into their molecular representations and subsequently operates a robotic system that assembles the logical deductions and delivers the result. This prototype is the first simple programming language with a molecular-scale implementation.
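
    The style of deduction being implemented can be pictured with a few lines of conventional code; the Python sketch below mimics the Socrates example with ordinary backward chaining (it says nothing about the DNA-strand encoding itself, and the rule format is deliberately restricted to single-literal bodies):

        facts = {("man", "socrates")}
        rules = [(("mortal", "X"), ("man", "X"))]     # head :- body

        def holds(goal):
            """Answer ground queries such as ('mortal', 'socrates')."""
            if goal in facts:
                return True
            pred, arg = goal
            for (head_pred, _), (body_pred, _) in rules:
                if head_pred == pred and holds((body_pred, arg)):
                    return True
            return False

        print(holds(("mortal", "socrates")))                      # Mortal(Socrates)? -> True
        print([a for (_, a) in facts if holds(("mortal", a))])    # Mortal(X)? -> ['socrates']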

  11. Development, implementation, and evaluation of a community- and hospital-based respiratory syncytial virus prophylaxis program.

    PubMed

    Bracht, Marianne; Heffer, Michael; O'Brien, Karel

    2005-02-01

    To implement and deliver a respiratory syncytial virus prophylaxis (RSVP) program in response to the Canadian Pediatric Society recommendations. A novel program was designed to provide inpatient RSVP for at-risk infants cared for in 1 tertiary care newborn intensive care unit (NICU). This inpatient program was part of a coordinated approach to RSVP, designed and implemented by 3 hospitals. An RSVP program logic model was created and used by a multidisciplinary team to evaluate the in-house program and identify areas of program activity requiring improvement. Following the 2000 to 2001 RSV season, a compliance and outcomes audit was performed in the tertiary center; 193 infants were enrolled in the RSVP program and 162 infants had received RSVP in the NICU [Mean = 1.64 doses]. Telephone follow-up with the parents of discharged infants identified that 159 infants (98%) had successfully completed their full course of RSVP. Using the RSVP program logic model, 5 areas for program improvement were identified including infant recruitment, patient transfer/discharge processes, product procurement, preparation/distribution/administration of doses, and healthcare team communication. Interdisciplinary collaboration is an important factor in the success of the RSVP program and has supported a consistent model of care for the delivery of RSVP. The program logic model provided a useful structure to systematically review the RSVP program in this organization.

  12. Methods for identifying SNP interactions: a review on variations of Logic Regression, Random Forest and Bayesian logistic regression.

    PubMed

    Chen, Carla Chia-Ming; Schwender, Holger; Keith, Jonathan; Nunkesser, Robin; Mengersen, Kerrie; Macrossan, Paula

    2011-01-01

    Due to advancements in computational ability, enhanced technology and a reduction in the price of genotyping, more data are being generated for understanding genetic associations with diseases and disorders. However, with the availability of large data sets comes the inherent challenges of new methods of statistical analysis and modeling. Considering a complex phenotype may be the effect of a combination of multiple loci, various statistical methods have been developed for identifying genetic epistasis effects. Among these methods, logic regression (LR) is an intriguing approach incorporating tree-like structures. Various methods have built on the original LR to improve different aspects of the model. In this study, we review four variations of LR, namely Logic Feature Selection, Monte Carlo Logic Regression, Genetic Programming for Association Studies, and Modified Logic Regression-Gene Expression Programming, and investigate the performance of each method using simulated and real genotype data. We contrast these with another tree-like approach, namely Random Forests, and a Bayesian logistic regression with stochastic search variable selection.

  13. A fast and low-cost genotyping method for hepatitis B virus based on pattern recognition in point-of-care settings

    PubMed Central

    Qiu, Xianbo; Song, Liuwei; Yang, Shuo; Guo, Meng; Yuan, Quan; Ge, Shengxiang; Min, Xiaoping; Xia, Ningshao

    2016-01-01

    A fast and low-cost method for HBV genotyping, especially for genotypes A, B, C and D, was developed and tested. A classifier was used to detect and analyze a one-step immunoassay lateral flow strip functionalized with genotype-specific monoclonal antibodies (mAbs) on multiple capture lines in the form of pattern recognition for point-of-care (POC) diagnostics. The fluorescent signals from the capture lines and the background of the strip were collected via multiple optical channels in parallel. A digital HBV genotyping model, whose inputs are the fluorescent signals and outputs are a group of genotype-specific digital binary codes (0/1), was developed based on the HBV genotyping strategy. Meanwhile, a companion decoding table was established to cover all possible pairing cases between the states of a group of genotype-specific digital binary codes and the HBV genotyping results. A logical analyzing module was constructed to process the detected signals in parallel without program control, and its outputs were used to drive a set of LED indicators, which determine the HBV genotype. Compared to nucleic acid analysis of HBV, much faster genotyping with significantly lower cost can be obtained with the developed method. PMID:27306485

  14. Right-Brain/Left-Brain Integrated Associative Processor Employing Convertible Multiple-Instruction-Stream Multiple-Data-Stream Elements

    NASA Astrophysics Data System (ADS)

    Hayakawa, Hitoshi; Ogawa, Makoto; Shibata, Tadashi

    2005-04-01

    A very large scale integrated circuit (VLSI) architecture for a multiple-instruction-stream multiple-data-stream (MIMD) associative processor has been proposed. The processor employs an architecture that enables seamless switching from associative operations to arithmetic operations. The MIMD element is convertible to a regular central processing unit (CPU) while maintaining its high performance as an associative processor. Therefore, the MIMD associative processor can perform not only on-chip perception, i.e., searching for the vector most similar to an input vector throughout the on-chip cache memory, but also arithmetic and logic operations similar to those in ordinary CPUs, both simultaneously in parallel processing. Three key technologies have been developed to generate the MIMD element: associative-operation-and-arithmetic-operation switchable calculation units, a versatile register control scheme within the MIMD element for flexible operations, and a short instruction set for minimizing the memory size for program storage. Key circuit blocks were designed and fabricated using 0.18 μm complementary metal-oxide-semiconductor (CMOS) technology. As a result, the full-featured MIMD element is estimated to be 3 mm2, showing the feasibility of an 8-parallel-MIMD-element associative processor in a single chip of 5 mm× 5 mm.

  15. Programmable logic construction kits for hyper-real-time neuronal modeling.

    PubMed

    Guerrero-Rivera, Ruben; Morrison, Abigail; Diesmann, Markus; Pearce, Tim C

    2006-11-01

    Programmable logic designs are presented that achieve exact integration of leaky integrate-and-fire soma and dynamical synapse neuronal models and incorporate spike-time dependent plasticity and axonal delays. Highly accurate numerical performance has been achieved by modifying simpler forward-Euler-based circuitry requiring minimal circuit allocation, which, as we show, behaves equivalently to exact integration. These designs have been implemented and simulated at the behavioral and physical device levels, demonstrating close agreement with both numerical and analytical results. By exploiting finely grained parallelism and single clock cycle numerical iteration, these designs achieve simulation speeds at least five orders of magnitude faster than the nervous system, termed here hyper-real-time operation, when deployed on commercially available field-programmable gate array (FPGA) devices. Taken together, our designs form a programmable logic construction kit of commonly used neuronal model elements that supports the building of large and complex architectures of spiking neuron networks for real-time neuromorphic implementation, neurophysiological interfacing, or efficient parameter space investigations.
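    As context for the exact-integration claim, the sketch below contrasts the closed-form per-step update of a leaky integrate-and-fire membrane with a plain forward-Euler update. It is an illustrative software model only, not the paper's FPGA circuitry, and the threshold, reset and time-constant values are assumed for the example.

```python
import math

# Sketch: exact integration of dv/dt = (-v + R*i_in)/tau over one step of constant input,
# versus a plain forward-Euler update of the same equation. The paper's point is that a
# slightly modified Euler-style circuit can behave equivalently to the exact form.
tau, R, dt = 20e-3, 1.0, 1e-3      # membrane time constant (s), resistance, time step
v_th, v_reset = 1.0, 0.0           # hypothetical threshold and reset values
decay = math.exp(-dt / tau)        # precomputable per-step decay factor

def step_exact(v, i_in):
    """Exact per-step solution for constant input over the step."""
    v = R * i_in + (v - R * i_in) * decay
    return (v_reset, True) if v >= v_th else (v, False)

def step_euler(v, i_in):
    """Plain forward-Euler update (less accurate for large dt)."""
    v = v + dt / tau * (-v + R * i_in)
    return (v_reset, True) if v >= v_th else (v, False)

v_exact = v_euler = 0.0
for _ in range(100):               # drive both models with the same constant input current
    v_exact, _ = step_exact(v_exact, 1.2)
    v_euler, _ = step_euler(v_euler, 1.2)
print(round(v_exact, 4), round(v_euler, 4))
```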

  16. A.I.-based real-time support for high performance aircraft operations

    NASA Technical Reports Server (NTRS)

    Vidal, J. J.

    1985-01-01

    Artificial intelligence (AI) based software and hardware concepts are applied to the handling of system malfunctions during flight tests. A representation of malfunction procedure logic using Boolean normal forms is presented. The representation facilitates the automation of malfunction procedures and provides easy testing of the embedded rules. It also forms a potential basis for a parallel implementation in logic hardware. The extraction of logic control rules from dynamic simulation, and their adaptive revision after partial failure, are examined using a simplified two-dimensional aircraft model with a controller that adaptively extracts control rules for directional thrust so as to satisfy a navigational goal without exceeding pre-established position and velocity limits. Failure recovery (rule adjustment) is examined after partial actuator failure. While this experiment was performed with primitive aircraft and mission models, it illustrates an important paradigm and provides complexity extrapolations for the proposed extraction of expertise from simulation, as discussed. The use of relaxation and inexact reasoning in expert systems was also investigated.
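    To make the Boolean-normal-form idea concrete, here is a hypothetical sketch of a malfunction rule written in disjunctive normal form and evaluated against boolean condition flags; the condition names and the rule itself are invented for illustration and are not taken from the flight-test procedures.

```python
# Hypothetical illustration of representing a malfunction procedure rule in disjunctive
# normal form (DNF) over boolean condition flags, so the embedded rule can be tested
# automatically. Conditions and rule are invented placeholders.

# Rule: flag a fuel-system fault if (low_pressure AND pump_on) OR (flow_zero AND pump_on)
RULE_DNF = [("low_pressure", "pump_on"), ("flow_zero", "pump_on")]

def fires(rule_dnf, flags):
    """A DNF rule fires if every literal in at least one conjunct is true."""
    return any(all(flags.get(lit, False) for lit in conjunct) for conjunct in rule_dnf)

print(fires(RULE_DNF, {"low_pressure": True, "pump_on": True}))   # True
print(fires(RULE_DNF, {"flow_zero": True, "pump_on": False}))     # False
```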

  17. (Re) Making the Procrustean Bed? Standardization and Customization as Competing Logics in Healthcare.

    PubMed

    Mannion, Russell; Exworthy, Mark

    2017-03-28

    Recent years have witnessed a parallel and seemingly contradictory trend towards both the standardization and the customization of healthcare and medical treatment. Here, we explore what is meant by 'standardization' and 'customization' in healthcare settings and explore the implications of these changes for healthcare delivery. We frame the paradox of these divergent and opposing factors in terms of institutional logics - the socially constructed rules, practices and beliefs which perpetuate institutional behaviour. As the tension between standardization and customization is fast becoming a critical fault-line within many health systems, there remains an urgent need for more sustained work exploring how these competing logics are articulated, adapted, resisted and co-exist on the front line of care delivery. © 2017 The Author(s); Published by Kerman University of Medical Sciences. This is an open-access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

  18. Implementation and Evaluation of Microcomputer Systems for the Republic of Turkey’s Naval Ships.

    DTIC Science & Technology

    1986-03-01

    important database design tool for both logical and physical database design, such as flowcharts or pseudocodes are used for program design. Logical...string manipulation in FORTRAN is difficult but not impossible. BASIC (Beginners All-Purpose Symbolic Instruction Code): Basic is currently the most...

  19. Experimental demonstration of programmable multi-functional spin logic cell based on spin Hall effect

    NASA Astrophysics Data System (ADS)

    Zhang, X.; Wan, C. H.; Yuan, Z. H.; Fang, C.; Kong, W. J.; Wu, H.; Zhang, Q. T.; Tao, B. S.; Han, X. F.

    2017-04-01

    Confronted with the gigantic volume of data produced every day, raising integration density by reducing the size of devices is becoming harder and harder as a way to meet the ever-increasing demand for high-performance computers. One feasible path is to actualize more logic functions in one cell. In this respect, we experimentally demonstrate a prototype spin-orbit-torque-based spin logic cell integrated with five frequently used logic functions (AND, OR, NOT, NAND and NOR). The cell can be easily programmed and reprogrammed to perform the desired function. Furthermore, the information stored in cells is symmetry-protected, making it possible to expand into a logic gate array where a cell can be manipulated one by one without changing the information stored in the other cells. This work provides a prospective example of a multi-functional spin logic cell with reprogrammability and nonvolatility, which will advance the application of spin logic devices.
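    The following is a purely behavioral sketch of what "programming" such a multi-functional cell means at the logic level; it models none of the spin-orbit-torque physics, and the interface names are invented for illustration.

```python
# Behavioral sketch of a reprogrammable logic cell: a program word selects which of the
# five functions the same cell computes, in the spirit of the multi-functional
# spin-orbit-torque cell described above (device physics not modeled).

FUNCTIONS = {
    "AND":  lambda a, b: a & b,
    "OR":   lambda a, b: a | b,
    "NOT":  lambda a, b: 1 - a,          # unary: second input ignored
    "NAND": lambda a, b: 1 - (a & b),
    "NOR":  lambda a, b: 1 - (a | b),
}

class LogicCell:
    def __init__(self, program="AND"):
        self.program = program           # reprogrammable at any time

    def reprogram(self, program):
        self.program = program

    def compute(self, a, b=0):
        return FUNCTIONS[self.program](a, b)

cell = LogicCell("NAND")
print(cell.compute(1, 1))   # 0
cell.reprogram("OR")
print(cell.compute(1, 0))   # 1
```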

  20. Surface-confined assemblies and polymers for molecular logic.

    PubMed

    de Ruiter, Graham; van der Boom, Milko E

    2011-08-16

    Stimuli responsive materials are capable of mimicking the operation characteristics of logic gates such as AND, OR, NOR, and even flip-flops. Since the development of molecular sensors and the introduction of the first AND gate in solution by de Silva in 1993, Molecular (Boolean) Logic and Computing (MBLC) has become increasingly popular. In this Account, we present recent research activities that focus on MBLC with electrochromic polymers and metal polypyridyl complexes on a solid support. Metal polypyridyl complexes act as useful sensors to a variety of analytes in solution (i.e., H(2)O, Fe(2+/3+), Cr(6+), NO(+)) and in the gas phase (NO(x) in air). This information transfer, whether the analyte is present, is based on the reversible redox chemistry of the metal complexes, which are stable up to 200 °C in air. The concurrent changes in the optical properties are nondestructive and fast. In such a setup, the input is directly related to the output and, therefore, can be represented by one-input logic gates. These input-output relationships are extendable for mimicking the diverse functions of essential molecular logic gates and circuits within a set of Boolean algebraic operations. Such a molecular approach towards Boolean logic has yielded a series of proof-of-concept devices: logic gates, multiplexers, half-adders, and flip-flop logic circuits. MBLC is a versatile and, potentially, a parallel approach to silicon circuits: assemblies of these molecular gates can perform a wide variety of logic tasks through reconfiguration of their inputs. Although these developments do not require a semiconductor blueprint, similar guidelines such as signal propagation, gate-to-gate communication, propagation delay, and combinatorial and sequential logic will play a critical role in allowing this field to mature. For instance, gate-to-gate communication by chemical wiring of the gates with metal ions as electron carriers results in the integration of stand-alone systems: the output of one gate is used as the input for another gate. Using the same setup, we were able to display both combinatorial and sequential logic. We have demonstrated MBLC by coupling electrochemical inputs with optical readout, which resulted in various logic architectures built on a redox-active, functionalized surface. Electrochemically operated sequential logic systems such as flip-flops, multivalued logic, and multistate memory could enhance computational power without increasing spatial requirements. Applying multivalued digits in data storage could exponentially increase memory capacity. Furthermore, we evaluate the pros and cons of MBLC and identify targets for future research in this Account. © 2011 American Chemical Society
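    As a purely software illustration of the sequential-logic aspect mentioned above (flip-flops remember state, unlike stateless one-input gates), here is a minimal set/reset latch sketch; it models the logical behavior only, not the surface-confined chemistry.

```python
# Behavioral sketch of sequential logic: a set/reset latch whose output depends on input
# history, which is what distinguishes flip-flop-style molecular memory from the
# stateless one-input gates described earlier in the Account.

class SRLatch:
    def __init__(self):
        self.q = 0                      # stored bit

    def update(self, set_in, reset_in):
        if set_in and not reset_in:
            self.q = 1
        elif reset_in and not set_in:
            self.q = 0
        # neither input asserted: hold the previous state (sequential behavior)
        return self.q

latch = SRLatch()
print(latch.update(1, 0))   # 1 (set)
print(latch.update(0, 0))   # 1 (held: output depends on history, not just current inputs)
print(latch.update(0, 1))   # 0 (reset)
```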

  1. Optical reversible programmable Boolean logic unit.

    PubMed

    Chattopadhyay, Tanay

    2012-07-20

    Computing with reversibility is the only way to avoid the dissipation of energy associated with bit erasure, so a reversible microprocessor is required for future computing. In this paper, a design of a simple all-optical reversible programmable processor is proposed using a polarizing beam splitter, liquid crystal phase spatial light modulators, a half-wave plate, and plane mirrors. The circuit can perform 16 logical operations according to three programming inputs, and the inputs can be easily recovered from the outputs. It is named the "reversible programmable Boolean logic unit (RPBLU)." The logic unit is the basic building block of many complex computational operations, hence the significance of the design. Two orthogonally polarized lights are defined here as the two logical states.
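    The reversibility idea, namely that inputs remain recoverable from outputs, can be illustrated with the Toffoli gate, a standard reversible primitive. The sketch below is only a generic illustration of reversible, programmable logic; it is not the optical RPBLU design itself.

```python
# Hedged illustration of reversibility only (not the optical design in the paper): the
# Toffoli gate maps (a, b, c) -> (a, b, c XOR (a AND b)). It is its own inverse, so the
# inputs can always be recovered from the outputs, and presetting c "programs" the gate
# (c = 0 yields AND on the third wire, c = 1 yields NAND).

def toffoli(a, b, c):
    return a, b, c ^ (a & b)

for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            out = toffoli(a, b, c)
            assert toffoli(*out) == (a, b, c)        # reversible: applying twice undoes it
            if c == 0:
                assert out[2] == (a & b)             # programmed as AND
            else:
                assert out[2] == 1 - (a & b)         # programmed as NAND
```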

  2. Bayesian Logic Programs for Plan Recognition and Machine Reading

    DTIC Science & Technology

    2012-12-01

    models is that they can handle both uncertainty and structured/relational data. As a result, they are widely used in domains like social network analysis, biological data analysis, and natural language processing. Bayesian...the Story Understanding data set. (b) The logical representation of the observations. (c) The set of ground rules obtained from logical abduction

  3. Synthesizing Dynamic Programming Algorithms from Linear Temporal Logic Formulae

    NASA Technical Reports Server (NTRS)

    Rosu, Grigore; Havelund, Klaus

    2001-01-01

    The problem of testing a linear temporal logic (LTL) formula on a finite execution trace of events, generated by an executing program, occurs naturally in runtime analysis of software. We present an algorithm which takes an LTL formula and generates an efficient dynamic programming algorithm. The generated algorithm tests whether the LTL formula is satisfied by a finite trace of events given as input. The generated algorithm runs in linear time, its constant depending on the size of the LTL formula. The memory needed is constant, also depending on the size of the formula.
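    A minimal software sketch of the underlying dynamic-programming idea is shown below: sweep the finite trace once, keeping for each subformula only its value at the following position, which gives time linear in the trace length and memory independent of the trace. The tuple-based formula encoding and the mini-evaluator are hypothetical illustrations, not the generated algorithms described in the paper.

```python
# Sketch of dynamic programming for finite-trace LTL: evaluate backwards over the trace,
# storing per subformula only the value at the next position.
# Formulas are tuples, e.g. ("G", ("->", "p", ("F", "q"))).

def evaluate(formula, trace):
    def subformulas(f):
        if isinstance(f, str):
            return [f]
        return [s for arg in f[1:] for s in subformulas(arg)] + [f]

    subs = []
    for s in subformulas(formula):       # deduplicate, children listed before parents
        if s not in subs:
            subs.append(s)

    nxt = {}                             # values at position i+1 (empty past the end)
    for i in range(len(trace) - 1, -1, -1):
        cur = {}
        for f in subs:
            if isinstance(f, str):
                cur[f] = f in trace[i]
            elif f[0] == "!":
                cur[f] = not cur[f[1]]
            elif f[0] == "->":
                cur[f] = (not cur[f[1]]) or cur[f[2]]
            elif f[0] == "F":            # eventually: now, or F holds at the next step
                cur[f] = cur[f[1]] or nxt.get(f, False)
            elif f[0] == "G":            # globally: now, and G holds at the next step
                cur[f] = cur[f[1]] and nxt.get(f, True)
        nxt = cur
    return nxt[formula]

trace = [{"p"}, {"q"}, set(), {"p", "q"}]
print(evaluate(("G", ("->", "p", ("F", "q"))), trace))   # True
```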

  4. Parallel computation for biological sequence comparison: comparing a portable model to the native model for the Intel Hypercube.

    PubMed

    Nadkarni, P M; Miller, P L

    1991-01-01

    A parallel program for inter-database sequence comparison was developed on the Intel Hypercube using two models of parallel programming. One version was built using machine-specific Hypercube parallel programming commands. The other version was built using Linda, a machine-independent parallel programming language. The two versions of the program provide a case study comparing these two approaches to parallelization in an important biological application area. Benchmark tests with both programs gave comparable results with a small number of processors. As the number of processors was increased, the Linda version was somewhat less efficient. The Linda version was also run without change on Network Linda, a virtual parallel machine running on a network of desktop workstations.
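    For readers unfamiliar with the master/worker style that both versions of the program embody, the sketch below distributes query/target comparisons across a pool of worker processes. It is a generic Python illustration with a placeholder scoring function, not the Hypercube or Linda code from the study.

```python
from multiprocessing import Pool

# Generic master/worker sketch of inter-database sequence comparison: the master farms out
# query/target pairs and gathers scores. The scoring function is a stand-in placeholder.

def score(pair):
    query, target = pair
    # Placeholder similarity: count of matching characters at aligned positions.
    return query, target, sum(a == b for a, b in zip(query, target))

if __name__ == "__main__":
    queries = ["ACGTAC", "TTGACA"]
    targets = ["ACGTTC", "TTGGCA", "AAAAAA"]
    work = [(q, t) for q in queries for t in targets]
    with Pool(processes=4) as pool:          # workers pull tasks, like tuples from a space
        for q, t, s in pool.map(score, work):
            print(f"{q} vs {t}: score {s}")
```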

  5. Integrated electronics for time-resolved array of single-photon avalanche diodes

    NASA Astrophysics Data System (ADS)

    Acconcia, G.; Crotti, M.; Rech, I.; Ghioni, M.

    2013-12-01

    The Time Correlated Single Photon Counting (TCSPC) technique has reached a prominent position among analytical methods employed in a great variety of fields, from medicine and biology (fluorescence spectroscopy) to telemetry (laser ranging) and communication (quantum cryptography). Nevertheless, the development of TCSPC acquisition systems featuring both a high number of parallel channels and very high performance is still an open challenge: to satisfy the tight requirements set by the applications, a fully parallel acquisition system requires not only high-efficiency single-photon detectors but also read-out electronics specifically designed to obtain the highest performance in conjunction with these sensors. To this aim, three main blocks have been designed: a gigahertz-bandwidth front-end stage to directly read the avalanche current of the custom-technology SPAD array, a reconfigurable logic to route the detector output signals to the acquisition chain, and an array of time measurement circuits capable of recording photon arrival times with picosecond time resolution and very high linearity. An innovative architecture based on these three blocks combines a very high number of detectors, enabling truly parallel spatial or spectral analysis, with a smaller number of high-performance time-to-amplitude converters offering a very high conversion frequency while limiting area occupation and power dissipation. The routing logic makes the dynamic connection between the two arrays possible in order to guarantee that no information gets lost.

  6. The Father Friendly Initiative within Families: Using a logic model to develop program theory for a father support program.

    PubMed

    Gervais, Christine; de Montigny, Francine; Lacharité, Carl; Dubeau, Diane

    2015-10-01

    The transition to fatherhood, with its numerous challenges, has been well documented. Likewise, fathers' relationships with health and social services have also begun to be explored. Yet despite the problems fathers experience in interactions with healthcare services, few programs have been developed for them. To explain this, some authors point to the difficulty practitioners encounter in developing and structuring the theory of programs they are trying to create to promote and support father involvement (Savaya, R., & Waysman, M. (2005). Administration in Social Work, 29(2), 85), even when such theory is key to a program's effectiveness (Chen, H.-T. (2005). Practical program evaluation. Thousand Oaks, CA: Sage Publications). The objective of the present paper is to present a tool, the logic model, to bridge this gap and to equip practitioners for structuring program theory. This paper addresses two questions: (1) What would be a useful instrument for structuring the development of program theory in interventions for fathers? (2) How would the concepts of a father involvement program best be organized? The case of the Father Friendly Initiative within Families (FFIF) program is used to present and illustrate six simple steps for developing a logic model that are based on program theory and demonstrate its relevance. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.

  7. Parallel programming with Easy Java Simulations

    NASA Astrophysics Data System (ADS)

    Esquembre, F.; Christian, W.; Belloni, M.

    2018-01-01

    Nearly all of today's processors are multicore, and ideally programming and algorithm development utilizing the entire processor should be introduced early in the computational physics curriculum. Parallel programming is often not introduced because it requires a new programming environment and uses constructs that are unfamiliar to many teachers. We describe how we decrease the barrier to parallel programming by using a Java-based programming environment to treat problems in the usual undergraduate curriculum. We use the Easy Java Simulations programming and authoring tool to create the program's graphical user interface together with objects based on those developed by Kaminsky [Building Parallel Programs (Course Technology, Boston, 2010)] to handle common parallel programming tasks. Shared-memory parallel implementations of physics problems, such as time evolution of the Schrödinger equation, are available as source code and as ready-to-run programs from the AAPT-ComPADRE digital library.

  8. Scaling up digital circuit computation with DNA strand displacement cascades.

    PubMed

    Qian, Lulu; Winfree, Erik

    2011-06-03

    To construct sophisticated biochemical circuits from scratch, one needs to understand how simple the building blocks can be and how robustly such circuits can scale up. Using a simple DNA reaction mechanism based on a reversible strand displacement process, we experimentally demonstrated several digital logic circuits, culminating in a four-bit square-root circuit that comprises 130 DNA strands. These multilayer circuits include thresholding and catalysis within every logical operation to perform digital signal restoration, which enables fast and reliable function in large circuits with roughly constant switching time and linear signal propagation delays. The design naturally incorporates other crucial elements for large-scale circuitry, such as general debugging tools, parallel circuit preparation, and an abstraction hierarchy supported by an automated circuit compiler.
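    As an illustration of the digital function such a circuit computes, the sketch below expresses the four-bit square root (two output bits giving floor(sqrt(n))) using AND/OR/NOT only. This is just one Boolean realization used to check the truth table; it says nothing about the dual-rail encoding or gate-level decomposition used in the DNA implementation.

```python
from math import isqrt

# One Boolean realization of the 4-bit square-root function computed by the circuit:
# inputs x3..x0 (MSB first), outputs (y1, y0) with 2*y1 + y0 == floor(sqrt(n)).

def sqrt4(x3, x2, x1, x0):
    y1 = x3 | x2
    y0 = ((1 - x3) & (1 - x2) & (x1 | x0)) | (x3 & (x2 | x1 | x0))
    return y1, y0

for n in range(16):
    bits = [(n >> k) & 1 for k in (3, 2, 1, 0)]
    y1, y0 = sqrt4(*bits)
    assert 2 * y1 + y0 == isqrt(n)        # exhaustive check of all 16 input patterns
```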

  9. VAC: Versatile Advection Code

    NASA Astrophysics Data System (ADS)

    Tóth, Gábor; Keppens, Rony

    2012-07-01

    The Versatile Advection Code (VAC) is a freely available general hydrodynamic and magnetohydrodynamic simulation software that works in 1, 2 or 3 dimensions on Cartesian and logically Cartesian grids. VAC runs on any Unix/Linux system with a Fortran 90 (or 77) compiler and Perl interpreter. VAC can run on parallel machines using either the Message Passing Interface (MPI) library or a High Performance Fortran (HPF) compiler.

  10. Runtime verification of embedded real-time systems.

    PubMed

    Reinbacher, Thomas; Függer, Matthias; Brauer, Jörg

    We present a runtime verification framework that allows on-line monitoring of past-time Metric Temporal Logic (ptMTL) specifications in a discrete time setting. We design observer algorithms for the time-bounded modalities of ptMTL, which take advantage of the highly parallel nature of hardware designs. The algorithms can be translated into efficient hardware blocks, which are designed for reconfigurability, thus, facilitate applications of the framework in both a prototyping and a post-deployment phase of embedded real-time systems. We provide formal correctness proofs for all presented observer algorithms and analyze their time and space complexity. For example, for the most general operator considered, the time-bounded Since operator, we obtain a time complexity that is doubly logarithmic both in the point in time the operator is executed and the operator's time bounds. This result is promising with respect to a self-contained, non-interfering monitoring approach that evaluates real-time specifications in parallel to the system-under-test. We implement our framework on a Field Programmable Gate Array platform and use extensive simulation and logic synthesis runs to assess the benefits of the approach in terms of resource usage and operating frequency.
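    To fix intuition for the time-bounded Since modality that the observers monitor, here is a naive software sketch of its discrete-time semantics. It simply keeps a queue of candidate witnesses, so it does not achieve the doubly logarithmic complexity of the hardware observers described in the paper; all names are illustrative.

```python
from collections import deque

# Minimal sketch of a past-time bounded "Since" observer over discrete time steps.
# phi1 S[a,b] phi2 holds now iff phi2 held at some step m with (now - m) in [a, b] and
# phi1 has held at every step after m up to and including now.

class BoundedSince:
    def __init__(self, a, b):
        self.a, self.b = a, b
        self.now = -1
        self.witnesses = deque()   # timestamps where phi2 held and phi1 has held since

    def step(self, phi1, phi2):
        self.now += 1
        if not phi1:
            self.witnesses.clear()       # a gap in phi1 invalidates all older witnesses
        if phi2:
            self.witnesses.append(self.now)
        while self.witnesses and self.now - self.witnesses[0] > self.b:
            self.witnesses.popleft()     # drop witnesses that fell out of the window
        return any(self.a <= self.now - m <= self.b for m in self.witnesses)

mon = BoundedSince(a=2, b=5)
for p1, p2 in [(1, 1), (1, 0), (1, 0), (1, 0)]:
    print(mon.step(p1, p2))              # False, False, True, True
```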

  11. Strategic Mobility 21: Modeling, Simulation, and Analysis

    DTIC Science & Technology

    2010-04-14

    using AnyLogic, which is a Java-programmed, multi-method simulation modeling tool developed by XJ Technologies. The last section examines the academic... simulation model from an Arena platform to an AnyLogic-based Web Service. MATLAB is useful for small problems with few nodes, but GAMS/CPLEX is better... Transportation Modeling Studio™. The SCASN modeling and simulation program was designed to be generic in nature to allow for use by both commercial and

  12. Logic Programming as an Inference Engine for Non-Monotonic Reasoning

    DTIC Science & Technology

    1991-11-11

    Mathematical Sciences, University of Texas at El Paso, El Paso, TX 79968-0514 (teodor@math.ep.utexas.edu), November 11, 1991. ...Przymusinska, L. Pereira and D.S. Warren. Significant progress has been made towards both theoretical and algorithmic foundations of a non-monotonic reasoning system based on logic programming. An implementation of such a system, limited to circumscriptive theories, has also been completed.

  13. Difference to Inference: teaching logical and statistical reasoning through on-line interactivity.

    PubMed

    Malloy, T E

    2001-05-01

    Difference to Inference is an on-line Java program that simulates theory testing and falsification through research design and data collection in a game format. The program, based on cognitive and epistemological principles, is designed to support learning of the thinking skills underlying deductive and inductive logic and statistical reasoning. Difference to Inference has database connectivity so that game scores can be counted as part of course grades.

  14. Logical Form as a Determinant of Cognitive Processes

    NASA Astrophysics Data System (ADS)

    van Lambalgen, Michiel

    We discuss a research program on reasoning patterns in subjects with autism, showing that they fail to engage in certain forms of non-monotonic reasoning that come naturally to neurotypical subjects. The striking reasoning patterns of autists occur both in verbal and in non-verbal tasks. Upon formalising the relevant non-verbal tasks, one sees that their logical form is the same as that of the verbal tasks. This suggests that logical form can play a causal role in cognitive processes, and we suggest that this logical form is actually embodied in the cognitive capacity called 'executive function'.

  15. Software Safety Assurance of Programmable Logic

    NASA Technical Reports Server (NTRS)

    Berens, Kalynnda

    2002-01-01

    Programmable Logic (PLC, FPGA, ASIC) devices are hybrids - hardware devices that are designed and programmed like software. As such, they fall in an assurance gray area. Programmable Logic is usually tested and verified as hardware, and the software aspects are ignored, potentially leading to safety or mission success concerns. The objective of this proposal is to first determine where and how Programmable Logic (PL) is used within NASA and document the current methods of assurance. Once that is known, the goal is to raise awareness of the PL software aspects within the NASA engineering community and provide guidance for the use and assurance of PL from a software perspective.

  16. Catalytic nucleic acids (DNAzymes) as functional units for logic gates and computing circuits: from basic principles to practical applications.

    PubMed

    Orbach, Ron; Willner, Bilha; Willner, Itamar

    2015-03-11

    This feature article addresses the implementation of catalytic nucleic acids as functional units for the construction of logic gates and computing circuits, and discusses the future applications of these systems. The assembly of computational modules composed of DNAzymes has led to the operation of a universal set of logic gates, to field programmable logic gates and computing circuits, to the development of multiplexers/demultiplexers, and to full-adder systems. Also, DNAzyme cascades operating as logic gates and computing circuits were demonstrated. DNAzyme logic systems find important practical applications. These include the use of DNAzyme-based systems for sensing and multiplexed analyses, for the development of controlled release and drug delivery systems, for regulating intracellular biosynthetic pathways, and for the programmed synthesis and operation of cascades.

  17. Towards programming languages for genetic engineering of living cells

    PubMed Central

    Pedersen, Michael; Phillips, Andrew

    2009-01-01

    Synthetic biology aims at producing novel biological systems to carry out some desired and well-defined functions. An ultimate dream is to design these systems at a high level of abstraction using engineering-based tools and programming languages, press a button, and have the design translated to DNA sequences that can be synthesized and put to work in living cells. We introduce such a programming language, which allows logical interactions between potentially undetermined proteins and genes to be expressed in a modular manner. Programs can be translated by a compiler into sequences of standard biological parts, a process that relies on logic programming and prototype databases that contain known biological parts and protein interactions. Programs can also be translated to reactions, allowing simulations to be carried out. While current limitations on available data prevent full use of the language in practical applications, the language can be used to develop formal models of synthetic systems, which are otherwise often presented by informal notations. The language can also serve as a concrete proposal on which future language designs can be discussed, and can help to guide the emerging standard of biological parts which so far has focused on biological, rather than logical, properties of parts. PMID:19369220

  18. Towards programming languages for genetic engineering of living cells.

    PubMed

    Pedersen, Michael; Phillips, Andrew

    2009-08-06

    Synthetic biology aims at producing novel biological systems to carry out some desired and well-defined functions. An ultimate dream is to design these systems at a high level of abstraction using engineering-based tools and programming languages, press a button, and have the design translated to DNA sequences that can be synthesized and put to work in living cells. We introduce such a programming language, which allows logical interactions between potentially undetermined proteins and genes to be expressed in a modular manner. Programs can be translated by a compiler into sequences of standard biological parts, a process that relies on logic programming and prototype databases that contain known biological parts and protein interactions. Programs can also be translated to reactions, allowing simulations to be carried out. While current limitations on available data prevent full use of the language in practical applications, the language can be used to develop formal models of synthetic systems, which are otherwise often presented by informal notations. The language can also serve as a concrete proposal on which future language designs can be discussed, and can help to guide the emerging standard of biological parts which so far has focused on biological, rather than logical, properties of parts.

  19. Parallel computation for biological sequence comparison: comparing a portable model to the native model for the Intel Hypercube.

    PubMed Central

    Nadkarni, P. M.; Miller, P. L.

    1991-01-01

    A parallel program for inter-database sequence comparison was developed on the Intel Hypercube using two models of parallel programming. One version was built using machine-specific Hypercube parallel programming commands. The other version was built using Linda, a machine-independent parallel programming language. The two versions of the program provide a case study comparing these two approaches to parallelization in an important biological application area. Benchmark tests with both programs gave comparable results with a small number of processors. As the number of processors was increased, the Linda version was somewhat less efficient. The Linda version was also run without change on Network Linda, a virtual parallel machine running on a network of desktop workstations. PMID:1807632

  20. Effectiveness of a Computer-Based Training Program of Attention and Memory in Patients with Acquired Brain Damage

    PubMed Central

    Fernandez, Elizabeth; Bergado Rosado, Jorge A.; Rodriguez Perez, Daymi; Salazar Santana, Sonia; Torres Aguilar, Maydane; Bringas, Maria Luisa

    2017-01-01

    Many training programs have been designed using modern software to restore the impaired cognitive functions in patients with acquired brain damage (ABD). The objective of this study was to evaluate the effectiveness of a computer-based training program of attention and memory in patients with ABD, using a two-armed parallel group design, where the experimental group (n = 50) received cognitive stimulation using RehaCom software, and the control group (n = 30) received the standard (non-computerized) cognitive stimulation for eight weeks. In order to assess possible cognitive changes after the treatment, a pre-post experimental design was employed using the following neuropsychological tests: the Wechsler Memory Scale (WMS) and Trail Making Test A and B. The effectiveness of the training procedure was statistically significant (p < 0.05) when performance on these scales was compared before and after the training period, both within each patient and between the two groups. The training group had statistically significant (p < 0.001) changes in focused attention (Trail A), two subtests (digit span and logical memory), and the overall score of the WMS. Finally, we discuss the advantages of computerized training rehabilitation and further directions of this line of work. PMID:29301194

  1. Bilingual parallel programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Foster, I.; Overbeek, R.

    1990-01-01

    Numerous experiments have demonstrated that computationally intensive algorithms support adequate parallelism to exploit the potential of large parallel machines. Yet successful parallel implementations of serious applications are rare. The limiting factor is clearly programming technology. None of the approaches to parallel programming that have been proposed to date -- whether parallelizing compilers, language extensions, or new concurrent languages -- seem to adequately address the central problems of portability, expressiveness, efficiency, and compatibility with existing software. In this paper, we advocate an alternative approach to parallel programming based on what we call bilingual programming. We present evidence that this approach provides an effective solution to parallel programming problems. The key idea in bilingual programming is to construct the upper levels of applications in a high-level language while coding selected low-level components in low-level languages. This approach permits the advantages of a high-level notation (expressiveness, elegance, conciseness) to be obtained without the cost in performance normally associated with high-level approaches. In addition, it provides a natural framework for reusing existing code.

  2. Parallel Anisotropic Tetrahedral Adaptation

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Darmofal, David L.

    2008-01-01

    An adaptive method that robustly produces high aspect ratio tetrahedra to a general 3D metric specification without introducing hybrid semi-structured regions is presented. The elemental operators and higher-level logic are described with their respective domain-decomposed parallelizations. An anisotropic tetrahedral grid adaptation scheme is demonstrated for 1000:1 stretching for a simple cube geometry. This form of adaptation is applicable to more complex domain boundaries via a cut-cell approach, as demonstrated by a parallel 3D supersonic simulation of a complex fighter aircraft. To avoid the assumptions and approximations required to form a metric to specify adaptation, an approach is introduced that directly evaluates interpolation error. The grid is adapted to reduce and equidistribute this interpolation error calculation without the use of an intervening anisotropic metric. Direct interpolation error adaptation is illustrated for 1D and 3D domains.

  3. A survey of advancements in nucleic acid-based logic gates and computing for applications in biotechnology and biomedicine.

    PubMed

    Wu, Cuichen; Wan, Shuo; Hou, Weijia; Zhang, Liqin; Xu, Jiehua; Cui, Cheng; Wang, Yanyue; Hu, Jun; Tan, Weihong

    2015-03-04

    Nucleic acid-based logic devices were first introduced in 1994. Since then, science has seen the emergence of new logic systems for mimicking mathematical functions, diagnosing disease and even imitating biological systems. The unique features of nucleic acids, such as facile and high-throughput synthesis, Watson-Crick complementary base pairing, and predictable structures, together with the aid of programming design, have led to the widespread applications of nucleic acids (NA) for logic gate and computing in biotechnology and biomedicine. In this feature article, the development of in vitro NA logic systems will be discussed, as well as the expansion of such systems using various input molecules for potential cellular, or even in vivo, applications.

  4. A Survey of Advancements in Nucleic Acid-based Logic Gates and Computing for Applications in Biotechnology and biomedicine

    PubMed Central

    Wu, Cuichen; Wan, Shuo; Hou, Weijia; Zhang, Liqin; Xu, Jiehua; Cui, Cheng; Wang, Yanyue; Hu, Jun

    2015-01-01

    Nucleic acid-based logic devices were first introduced in 1994. Since then, science has seen the emergence of new logic systems for mimicking mathematical functions, diagnosing disease and even imitating biological systems. The unique features of nucleic acids, such as facile and high-throughput synthesis, Watson-Crick complementary base pairing, and predictable structures, together with the aid of programming design, have led to the widespread applications of nucleic acids (NA) for logic gating and computing in biotechnology and biomedicine. In this feature article, the development of in vitro NA logic systems will be discussed, as well as the expansion of such systems using various input molecules for potential cellular, or even in vivo, applications. PMID:25597946

  5. Programming Cell Adhesion for On-Chip Sequential Boolean Logic Functions.

    PubMed

    Qu, Xiangmeng; Wang, Shaopeng; Ge, Zhilei; Wang, Jianbang; Yao, Guangbao; Li, Jiang; Zuo, Xiaolei; Shi, Jiye; Song, Shiping; Wang, Lihua; Li, Li; Pei, Hao; Fan, Chunhai

    2017-08-02

    Programmable remodelling of cell surfaces enables high-precision regulation of cell behavior. In this work, we developed in vitro constructed DNA-based chemical reaction networks (CRNs) to program on-chip cell adhesion. We found that the RGD-functionalized DNA CRNs are entirely noninvasive when interfaced with the fluid mosaic membrane of living cells. DNA toeholds of different lengths could tunably alter the release kinetics of cells, showing release within minutes with the use of a 6-base toehold. We further demonstrated the realization of Boolean logic functions by using DNA strand displacement reactions, which include multi-input and sequential cell logic gates (AND, OR, XOR, and AND-OR). This study provides a highly generic tool for self-organization of biological systems.

  6. Execution of Educational Mechanical Production Programs for School Children

    NASA Astrophysics Data System (ADS)

    Itoh, Nobuhide; Itoh, Goroh; Shibata, Takayuki

    The authors are conducting experience-based engineering educational programs for elementary and junior high school students with the aim of providing a chance for them to experience mechanical production. As part of this endeavor, we planned and conducted a program called “Fabrication of Original Magnet Plates by Casting” for elementary school students. This program included a course for teaching natural laws and logical thinking methods. Prior to the program, a preliminary version was run with school teachers to gather comments, and the program was modified accordingly. The children responded excellently to the production process that realizes their ideas, but it was found that the course on natural laws and logical methods needs to be improved to draw their interest and attention. We will continue to plan more effective programs, deepening ties with the local community.

  7. An efficient annealing in Boltzmann machine in Hopfield neural network

    NASA Astrophysics Data System (ADS)

    Kin, Teoh Yeong; Hasan, Suzanawati Abu; Bulot, Norhisam; Ismail, Mohammad Hafiz

    2012-09-01

    This paper proposes and implements a Boltzmann machine in a Hopfield neural network for doing logic programming based on energy minimization. Temperature scheduling in the Boltzmann machine enhances the performance of logic programming in the Hopfield neural network. The best temperature is determined by observing the ratio of global solutions and the final Hamming distance in computer simulations. The study shows that the Boltzmann machine model is more stable and more capable in terms of representing and solving difficult combinatorial problems.
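    A rough software sketch of the general idea, with clauses encoded as an energy function and a Boltzmann-style acceptance rule driving the state toward a minimum-energy (all-clauses-satisfied) assignment, is given below. The clause set, schedule, and parameters are invented for illustration and are not the authors' network or energy function.

```python
import random, math

# Hedged sketch: encode logic-program clauses (CNF) as an energy counting unsatisfied
# clauses, then anneal with Boltzmann acceptance so the state settles into a minimum.
CLAUSES = [[("A", True), ("B", False)],          # A or not B
           [("B", True), ("C", True)],           # B or C
           [("A", False), ("C", False)]]         # not A or not C

def energy(state):
    """Number of unsatisfied clauses for a truth assignment."""
    return sum(not any(state[v] == s for v, s in clause) for clause in CLAUSES)

def anneal(steps=2000, t0=2.0, cooling=0.995):
    state = {v: random.choice([True, False]) for v in "ABC"}
    t = t0
    for _ in range(steps):
        v = random.choice("ABC")
        flipped = dict(state, **{v: not state[v]})
        dE = energy(flipped) - energy(state)
        if dE <= 0 or random.random() < math.exp(-dE / t):   # Boltzmann acceptance
            state = flipped
        t *= cooling                                          # temperature schedule
    return state, energy(state)

print(anneal())   # expected: an assignment with energy 0 (all clauses satisfied)
```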

  8. Detecting Payload Attacks on Programmable Logic Controllers (PLCs)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Huan

    Programmable logic controllers (PLCs) play critical roles in industrial control systems (ICS). Providing hardware peripherals and firmware support for control programs (i.e., a PLC’s “payload”) written in languages such as ladder logic, PLCs directly receive sensor readings and control ICS physical processes. An attacker with access to PLC development software (e.g., by compromising an engineering workstation) can modify the payload program and cause severe physical damage to the ICS. To protect critical ICS infrastructure, we propose to model runtime behaviors of legitimate PLC payload programs and use runtime behavior monitoring in PLC firmware to detect payload attacks. By monitoring the I/O access patterns, network access patterns, and payload program timing characteristics, our proposed firmware-level detection mechanism can detect abnormal runtime behaviors of malicious PLC payloads. Using our proof-of-concept implementation, we evaluate the memory and execution time overhead of implementing our proposed method and find that it is feasible to incorporate our method into existing PLC firmware. In addition, our evaluation results show that a wide variety of payload attacks can be effectively detected by our proposed approach. The proposed firmware-level payload attack detection scheme complements existing bump-in-the-wire solutions (e.g., external temporal-logic-based model checkers) in that it can detect payload attacks that violate real-time requirements of ICS operations and does not require any additional apparatus.
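    The sketch below illustrates, in plain Python, the kind of runtime-behavior check the abstract describes: comparing a scan cycle's I/O accesses and timing against a learned profile. The profile values and function names are hypothetical; the actual detection in the paper runs inside PLC firmware.

```python
# Illustrative sketch only: flag deviations of a payload's runtime behavior from a
# learned profile of legal I/O addresses and an expected scan-time bound.

LEGAL_IO_ADDRESSES = {"%I0.0", "%I0.1", "%Q0.0", "%Q0.1"}   # hypothetical learned profile
MAX_SCAN_TIME_MS = 10.0                                     # hypothetical timing bound

def check_scan(io_accesses, scan_time_ms):
    """Return a list of alerts raised by one scan cycle of the payload program."""
    alerts = []
    illegal = set(io_accesses) - LEGAL_IO_ADDRESSES
    if illegal:
        alerts.append(f"unexpected I/O access: {sorted(illegal)}")
    if scan_time_ms > MAX_SCAN_TIME_MS:
        alerts.append(f"scan time {scan_time_ms} ms exceeds bound {MAX_SCAN_TIME_MS} ms")
    return alerts

print(check_scan(["%I0.0", "%Q0.7"], scan_time_ms=12.3))   # two alerts expected
```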

  9. Application Portable Parallel Library

    NASA Technical Reports Server (NTRS)

    Cole, Gary L.; Blech, Richard A.; Quealy, Angela; Townsend, Scott

    1995-01-01

    Application Portable Parallel Library (APPL) computer program is subroutine-based message-passing software library intended to provide consistent interface to variety of multiprocessor computers on market today. Minimizes effort needed to move application program from one computer to another. User develops application program once and then easily moves application program from parallel computer on which created to another parallel computer. ("Parallel computer" also includes heterogeneous collection of networked computers). Written in C language with one FORTRAN 77 subroutine for UNIX-based computers and callable from application programs written in C language or FORTRAN 77.

  10. Use of LOGIC to support lidar operations

    NASA Astrophysics Data System (ADS)

    Davis-Lunde, Kimberley; Jugan, Laurie A.; Shoemaker, J. Todd

    1999-10-01

    The Naval Oceanographic Office (NAVOCEANO) and Planning Systems Incorporated are developing the Littoral Optics Geospatial Integrated Capability (LOGIC). LOGIC supports NAVOCEANO's directive to assess the impact of the environment on Fleet systems in areas of operational interest. LOGIC is based in the Geographic Information System (GIS) ARC/INFO and offers a method to view and manipulate optics and ancillary data to support emerging Fleet lidar systems. LOGIC serves as a processing (as required) and quality-checking mechanism for data entering NAVOCEANO's Data Warehouse and handles both remotely sensed and in-water data. LOGIC provides a link between these data and the GIS-based Graphical User Interface, allowing the user to select data manipulation routines and/or system support products. The results of individual modules are displayed via the GIS to provide such products as lidar system performance, laser penetration depth, and asset vulnerability to a lidar threat. LOGIC is being developed for integration into other NAVOCEANO programs, most notably the Comprehensive Environmental Assessment System, an established tool supporting sonar-based systems. The prototype for LOGIC was developed for the Yellow Sea, focusing on a diver visibility support product.

  11. Research in mathematical theory of computation. [computer programming applications

    NASA Technical Reports Server (NTRS)

    Mccarthy, J.

    1973-01-01

    Research progress in the following areas is reviewed: (1) a new version of the computer program LCF (logic for computable functions), including a facility to search for proofs automatically; (2) the description of the language PASCAL in terms of both LCF and first-order logic; (3) discussion of LISP semantics in LCF and an attempt to prove the correctness of the London compilers in a formal way; (4) design of both special-purpose and domain-independent proof procedures with program correctness specifically in mind; (5) design of languages for describing such proof procedures; and (6) the embedding of these ideas in the first-order checker.

  12. Monte Carlo simulation of the nuclear-electromagnetic cascade development and the energy response of ionization spectrometers

    NASA Technical Reports Server (NTRS)

    Jones, W. V.

    1973-01-01

    Modifications to the basic computer program for performing the simulations are reported. The major changes include: (1) extension of the calculations to include the development of cascades initiated by heavy nuclei, (2) improved treatment of the nuclear disintegrations which occur during the interactions of hadrons in heavy absorbers, (3) incorporation of accurate multi-pion final-state cross sections for various interactions at accelerator energies, (4) restructuring of the program logic so that calculations can be made for sandwich-type detectors, and (5) logic modifications related to execution of the program.

  13. Program Monitoring with LTL in EAGLE

    NASA Technical Reports Server (NTRS)

    Barringer, Howard; Goldberg, Allen; Havelund, Klaus; Sen, Koushik

    2004-01-01

    We briefly present a rule-based framework called EAGLE, shown to be capable of defining and implementing finite-trace monitoring logics, including future- and past-time temporal logic, extended regular expressions, real-time and metric temporal logics (MTL), interval logics, forms of quantified temporal logics, and so on. In this paper we focus on a linear temporal logic (LTL) specialization of EAGLE. For an initial formula of size m, we establish upper bounds of O(m^2 · 2^m · log m) and O(m^4 · 2^(2m) · log^2 m) for the space and time complexity, respectively, of single-step evaluation over an input trace. This bound is close to the lower bound O(2^√m) presented for future-time LTL. EAGLE has been successfully used, in both LTL and metric LTL forms, to test a real-time controller of an experimental NASA planetary rover.

  14. The Necessity of Functional Analysis for Space Exploration Programs

    NASA Technical Reports Server (NTRS)

    Morris, A. Terry; Breidenthal, Julian C.

    2011-01-01

    As NASA moves toward expanded commercial spaceflight within its human exploration capability, there is increased emphasis on how to allocate responsibilities between government and commercial organizations to achieve coordinated program objectives. The practice of program-level functional analysis offers an opportunity for improved understanding of collaborative functions among heterogeneous partners. Functional analysis is contrasted with the physical analysis more commonly done at the program level, and is shown to provide theoretical performance, risk, and safety advantages beneficial to a government-commercial partnership. Performance advantages include faster convergence to acceptable system solutions; discovery of superior solutions with higher commonality, greater simplicity and greater parallelism by substituting functional for physical redundancy to achieve robustness and safety goals; and greater organizational cohesion around program objectives. Risk advantages include avoidance of rework by revelation of some kinds of architectural and contractual mismatches before systems are specified, designed, constructed, or integrated; avoidance of cost and schedule growth by more complete and precise specifications of cost and schedule estimates; and higher likelihood of successful integration on the first try. Safety advantages include effective delineation of must-work and must-not-work functions for integrated hazard analysis, the ability to formally demonstrate completeness of safety analyses, and provably correct logic for certification of flight readiness. The key mechanism for realizing these benefits is the development of an inter-functional architecture at the program level, which reveals relationships between top-level system requirements that would otherwise be invisible using only a physical architecture. This paper describes the advantages and pitfalls of functional analysis as a means of coordinating the actions of large heterogeneous organizations for space exploration programs.

  15. Evaluation of a Postdischarge Call System Using the Logic Model.

    PubMed

    Frye, Timothy C; Poe, Terri L; Wilson, Marisa L; Milligan, Gary

    2018-02-01

    This mixed-method study was conducted to evaluate a postdischarge call program for congestive heart failure patients at a major teaching hospital in the southeastern United States. The program was implemented based on the premise that it would improve patient outcomes and overall quality of life, but it had never been evaluated for effectiveness. The Logic Model was used to evaluate the input of key staff members to determine whether the outputs and results of the program matched the expectations of the organization. Interviews, online surveys, reviews of existing patient outcome data, and reviews of publicly available program marketing materials were used to ascertain current program output. After analyzing both qualitative and quantitative data from the evaluation, recommendations were made to the organization to improve the effectiveness of the program.

  16. ICASE Semiannual Report, 1 April 1990 - 30 September 1990

    DTIC Science & Technology

    1990-11-01

    underlies parallel simulation protocols that synchronize based on logical time (all known approaches). This framework describes a sufficient set of...conducted primarily by visiting scientists from universities and from industry, who have resident appointments for limited periods of time, and by consultants...wave equation with point sources and semireflecting impedance boundary conditions. For sources that are piecewise polynomial in time we get a finite

  17. A Community Health Worker "logic model": towards a theory of enhanced performance in low- and middle-income countries.

    PubMed

    Naimoli, Joseph F; Frymus, Diana E; Wuliji, Tana; Franco, Lynne M; Newsome, Martha H

    2014-10-02

    There has been a resurgence of interest in national Community Health Worker (CHW) programs in low- and middle-income countries (LMICs). A lack of strong research evidence persists, however, about the most efficient and effective strategies to ensure optimal, sustained performance of CHWs at scale. To facilitate learning and research to address this knowledge gap, the authors developed a generic CHW logic model that proposes a theoretical causal pathway to improved performance. The logic model draws upon available research and expert knowledge on CHWs in LMICs. Construction of the model entailed a multi-stage, inductive, two-year process. It began with the planning and implementation of a structured review of the existing research on community and health system support for enhanced CHW performance. It continued with a facilitated discussion of review findings with experts during a two-day consultation. The process culminated with the authors' review of consultation-generated documentation, additional analysis, and production of multiple iterations of the model. The generic CHW logic model posits that optimal CHW performance is a function of high quality CHW programming, which is reinforced, sustained, and brought to scale by robust, high-performing health and community systems, both of which mobilize inputs and put in place processes needed to fully achieve performance objectives. Multiple contextual factors can influence CHW programming, system functioning, and CHW performance. The model is a novel contribution to current thinking about CHWs. It places CHW performance at the center of the discussion about CHW programming, recognizes the strengths and limitations of discrete, targeted programs, and is comprehensive, reflecting the current state of both scientific and tacit knowledge about support for improving CHW performance. The model is also a practical tool that offers guidance for continuous learning about what works. Despite the model's limitations and several challenges in translating the potential for learning into tangible learning, the CHW generic logic model provides a solid basis for exploring and testing a causal pathway to improved performance.

  18. Modifications to the streamtube curvature program. Volume 1: Program modifications and user's manual. [user manuals (computer programs) for transonic flow of nacelles and intake systems of turbofan engines

    NASA Technical Reports Server (NTRS)

    Ferguson, D. R.; Keith, J. S.

    1975-01-01

    The improvements which have been incorporated in the Streamtube Curvature Program to enhance both its computational and diagnostic capabilities are described. Detailed descriptions are given of the revisions incorporated to more reliably handle the jet stream-external flow interaction at trailing edges. Also presented are the augmented boundary layer procedures and a variety of other program changes relating to program diagnostics and extended solution capabilities. An updated User's Manual, that includes information on the computer program operation, usage, and logical structure, is presented. User documentation includes an outline of the general logical flow of the program and detailed instructions for program usage and operation. From the standpoint of the programmer, the overlay structure is described. The input data, output formats, and diagnostic printouts are covered in detail and illustrated with three typical test cases.

  19. Peptide Logic Circuits Based on Chemoenzymatic Ligation for Programmable Cell Apoptosis.

    PubMed

    Li, Yong; Sun, Sujuan; Fan, Lin; Hu, Shanfang; Huang, Yan; Zhang, Ke; Nie, Zhou; Yao, Shouzhou

    2017-11-20

    A novel and versatile peptide-based bio-logic system capable of regulating cell function is developed using sortase A (SrtA), a peptide ligation enzyme, as a generic processor. By modular peptide design, we demonstrate that mammalian cells apoptosis can be programmed by peptide-based logic operations, including binary and combination gates (AND, INHIBIT, OR, and AND-INHIBIT), and a complex sequential logic circuit (multi-input keypad lock). Moreover, a proof-of-concept peptide regulatory circuit was developed to analyze the expression profile of cell-secreted protein biomarkers and trigger cancer-cell-specific apoptosis. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Design automation techniques for custom LSI arrays

    NASA Technical Reports Server (NTRS)

    Feller, A.

    1975-01-01

    The standard cell design automation technique is described as an approach for generating random logic PMOS, CMOS or CMOS/SOS custom large scale integration arrays with low initial nonrecurring costs and quick turnaround time or design cycle. The system is composed of predesigned circuit functions or cells and computer programs capable of automatic placement and interconnection of the cells in accordance with an input data net list. The program generates a set of instructions to drive an automatic precision artwork generator. A series of support design automation and simulation programs are described, including programs for verifying correctness of the logic on the arrays, performing dc and dynamic analysis of MOS devices, and generating test sequences.

  1. A Deductive Approach to Computer Programming.

    DTIC Science & Technology

    1986-01-01

    [82] K. L. Clark and S.-A. Tärnlund (editors), Logic Programming, Academic Press (1982), A.P.I.C. Studies in Data Processing No. 16. ...Goguen and... R. S. Boyer and J. S. Moore, A Computational Logic, Academic Press, New York, N.Y., 1979. [75] D. Brand, Proving theorems with the modification

  2. A Logical Design of a Session Services Control Layer of a Distributed Network Architecture for SPLICE (Stock Point Logistics Integrated Communication Environment).

    DTIC Science & Technology

    1984-06-01

    Each stock point is autonomous with respect to how it implements data processing support, as long as it accommodates the Navy Supply Systems Command...has its own data elements, files, programs, transactions, users, reports, and some have additional hardware. To augment them all and not force redesign... programs are written to request session establishments among them using only logical addressing names (mailboxes) which are independent from physical

  3. Playing Tic-Tac-Toe with a Sugar-Based Molecular Computer.

    PubMed

    Elstner, M; Schiller, A

    2015-08-24

    Today, molecules can perform Boolean operations and implement circuits at a higher level of complexity. However, concatenation of logic gates and inhomogeneous inputs and outputs are still challenging tasks. Novel approaches for logic gate integration become possible when chemical programming and software programming are combined. Here it is shown that a molecular finite automaton, based on the concatenated implication function (IMP) of a fluorescent two-component sugar probe via a wiring algorithm, is able to play tic-tac-toe.
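    The implication (IMP) function that the probe implements, and the way concatenating IMP gates builds further logic, can be sketched as follows; the chemistry and the tic-tac-toe wiring algorithm are of course not modeled here.

```python
# Sketch of the implication (IMP) function and its concatenation into other logic, which
# is the software-side idea behind wiring the sugar-probe gates.

def IMP(a, b):
    """Material implication: output is 0 only when a = 1 and b = 0."""
    return int((not a) or b)

def NOT(a):
    return IMP(a, 0)                 # IMP with a constant-0 second input acts as NOT

def OR(a, b):
    return IMP(NOT(a), b)            # two concatenated IMP gates give OR

assert [IMP(a, b) for a, b in ((0, 0), (0, 1), (1, 0), (1, 1))] == [1, 1, 0, 1]
assert [OR(a, b) for a, b in ((0, 0), (0, 1), (1, 0), (1, 1))] == [0, 1, 1, 1]
```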

  4. Airport Landside. Volume IV. Appendix A. ALSIM AUXILIARY and MAIN Programs.

    DOT National Transportation Integrated Search

    1982-06-01

    This Appendix describes the Program Logic of the Airport Landside Simulation Model (ALSIM) AUXILIARY and MAIN Programs. Both programs are written in GPSS-V. The AUXILIARY program is operated prior to the MAIN Program to create GPSS transactions repre...

  5. Programmable logic controller performance enhancement by field programmable gate array based design.

    PubMed

    Patel, Dhruv; Bhatt, Jignesh; Trivedi, Sanjay

    2015-01-01

    The PLC, the core element of modern automation systems, exhibits limitations such as low speed and poor scan time because of its serial execution. An improved PLC design using an FPGA, based on a parallel execution mechanism, has been proposed to enhance performance and flexibility. ModelSim was used as the simulation platform and VHDL to translate, integrate and implement the logic circuit in the FPGA. A Xilinx Spartan kit was used for implementation and testing, and VB for GUI development. Salient merits of the design include cost-effectiveness, miniaturization, user-friendliness and simplicity, along with lower power consumption, shorter scan time and higher speed. Various functionalities and applications, such as a typical PLC and an industrial alarm annunciator, have been developed and successfully tested. Results of simulation, design and implementation are reported. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  6. Multidimensional Simulation Applied to Water Resources Management

    NASA Astrophysics Data System (ADS)

    Camara, A. S.; Ferreira, F. C.; Loucks, D. P.; Seixas, M. J.

    1990-09-01

    A framework for an integrated decision aiding simulation (IDEAS) methodology using numerical, linguistic, and pictorial entities and operations is introduced. IDEAS relies upon traditional numerical formulations, logical rules to handle linguistic entities with linguistic values, and a set of pictorial operations. Pictorial entities are defined by their shape, size, color, and position. Pictorial operators include reproduction (copy of a pictorial entity), mutation (expansion, rotation, translation, change in color), fertile encounters (intersection, reunion), and sterile encounters (absorption). Interaction between numerical, linguistic, and pictorial entities is handled through logical rules or a simplified vector calculus operation. This approach is shown to be applicable to various environmental and water resources management analyses using a model to assess the impacts of an oil spill. Future developments, including IDEAS implementation on parallel processing machines, are also discussed.
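
    To make the pictorial formalism concrete, the sketch below models a pictorial entity by its shape, size, color, and position and implements two of the operators named above, reproduction and mutation. It is a hypothetical Python rendering for illustration, not the IDEAS implementation.

    ```python
    # Illustrative sketch (not the IDEAS implementation) of pictorial entities
    # defined by shape, size, color, and position, with two of the operators
    # described in the abstract: reproduction (copy) and mutation (expansion,
    # translation, change in color).
    from dataclasses import dataclass, replace
    import copy

    @dataclass
    class PictorialEntity:
        shape: str        # e.g. polygon outline of an oil slick
        size: float       # characteristic area
        color: str        # rendered color
        position: tuple   # (x, y) location

    def reproduce(entity):
        """Reproduction: an exact copy of a pictorial entity."""
        return copy.deepcopy(entity)

    def mutate(entity, scale=1.0, dx=0.0, dy=0.0, color=None):
        """Mutation: expansion, translation, and/or change in color."""
        x, y = entity.position
        return replace(entity,
                       size=entity.size * scale,
                       position=(x + dx, y + dy),
                       color=color or entity.color)

    # Example: an oil-spill patch drifting and spreading over one time step.
    spill = PictorialEntity("polygon", 10.0, "black", (0.0, 0.0))
    spill_next = mutate(reproduce(spill), scale=1.2, dx=0.5, color="grey")
    print(spill_next)
    ```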

  7. Using a logic model to evaluate the Kids Together early education inclusion program for children with disabilities and additional needs.

    PubMed

    Clapham, Kathleen; Manning, Claire; Williams, Kathryn; O'Brien, Ginger; Sutherland, Margaret

    2017-04-01

    Despite clear evidence that learning and social opportunities for children with disabilities and special needs are more effective in inclusive rather than segregated settings, there are few known effective inclusion programs available to children with disabilities, their families or teachers in the early years within Australia. The Kids Together program was developed to support children with disabilities/additional needs aged 0-8 years attending mainstream early learning environments. Using a key worker transdisciplinary team model, the program aligns with the individualised package approach of the National Disability Insurance Scheme (NDIS). This paper reports on the use of a logic model to underpin the process, outcomes and impact evaluation of the Kids Together program. The research team worked across 15 Early Childhood Education and Care (ECEC) centres and in home and community settings. A realist evaluation using mixed methods was undertaken to understand what works, for whom and in what contexts. The development of a logic model provided a structured way to explore how the program was implemented and achieved short, medium and long term outcomes within a complex community setting. Kids Together was shown to be a highly effective and innovative model for supporting the inclusion of children with disabilities/additional needs in a range of environments central for early childhood learning and development. The use of a logic model provided a visual representation of the Kids Together model and its component parts and enabled a theory of change to be inferred, showing how a coordinated and collaborative approach can work across multiple environments. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. Collective network for computer structures

    DOEpatents

    Blumrich, Matthias A; Coteus, Paul W; Chen, Dong; Gara, Alan; Giampapa, Mark E; Heidelberger, Philip; Hoenicke, Dirk; Takken, Todd E; Steinmacher-Burow, Burkhard D; Vranas, Pavlos M

    2014-01-07

    A system and method for enabling high-speed, low-latency global collective communications among interconnected processing nodes. The global collective network optimally enables collective reduction operations to be performed during parallel algorithm operations executing in a computer structure having a plurality of the interconnected processing nodes. Router devices are included that interconnect the nodes of the network via links to facilitate performance of low-latency global processing operations at nodes of the virtual network. The global collective network may be configured to provide global barrier and interrupt functionality in asynchronous or synchronized manner. When implemented in a massively-parallel supercomputing structure, the global collective network is physically and logically partitionable according to the needs of a processing algorithm.
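
    The core operation such a network accelerates is a global reduction, combined pairwise up a tree of nodes. The sketch below simulates that combining pattern in plain software; it illustrates the data flow only and is not the patented hardware design.

    ```python
    # Software illustration of a global collective reduction performed over a
    # binary tree of processing nodes, the kind of operation the collective
    # network above accelerates in hardware.  Purely a simulation.
    import operator

    def tree_reduce(values, op=operator.add):
        """Combine one value per node pairwise, level by level, as a router
        tree would, returning the single reduced result at the root."""
        level = list(values)
        while len(level) > 1:
            nxt = []
            for i in range(0, len(level) - 1, 2):
                nxt.append(op(level[i], level[i + 1]))
            if len(level) % 2:            # odd node forwarded upward unchanged
                nxt.append(level[-1])
            level = nxt
        return level[0]

    contributions = [3, 1, 4, 1, 5, 9, 2, 6]      # one value per node
    assert tree_reduce(contributions) == sum(contributions)
    print(tree_reduce(contributions, op=max))     # global max, another reduction
    ```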

  9. A Design Verification of the Parallel Pipelined Image Processings

    NASA Astrophysics Data System (ADS)

    Wasaki, Katsumi; Harai, Toshiaki

    2008-11-01

    This paper presents a case study of the design and verification of a parallel and pipelined image processing unit based on an extended Petri net called a Logical Colored Petri Net (LCPN). LCPN is suitable for flexible manufacturing system (FMS) modeling and for discussing structural properties. It is another family of colored place/transition nets (CPNs) with the addition of the following features: integer-valued marks, firing conditions expressed as formulae over mark values, and output procedures coupled to transition firing. To study the behavior of a system modeled with this net, we provide a means of searching the reachability tree of markings.
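
    A minimal software sketch of this kind of net, assuming integer-valued marks and firing conditions written as predicates over those values, is given below; it searches the reachability tree by breadth-first search. The place and transition names are invented, and the sketch is not the LCPN formalism itself.

    ```python
    # Minimal place/transition-net sketch with integer-valued marks and firing
    # conditions given as predicates on those values, loosely in the spirit of
    # the LCPN features listed above (names are illustrative).
    from collections import deque

    # Marking: place name -> integer mark value (0 means "no mark").
    initial = {"in": 2, "buf": 0, "out": 0}

    # Each transition: (guard on the marking, update producing the next marking).
    transitions = {
        "load":  (lambda m: m["in"] > 0,
                  lambda m: {**m, "in": m["in"] - 1, "buf": m["buf"] + 1}),
        "store": (lambda m: m["buf"] > 0,
                  lambda m: {**m, "buf": m["buf"] - 1, "out": m["out"] + 1}),
    }

    def reachable(marking):
        """Breadth-first search of the reachability tree of markings."""
        seen, queue = set(), deque([marking])
        while queue:
            m = queue.popleft()
            key = tuple(sorted(m.items()))
            if key in seen:
                continue
            seen.add(key)
            for guard, fire in transitions.values():
                if guard(m):
                    queue.append(fire(m))
        return seen

    print(len(reachable(initial)))   # number of reachable markings
    ```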

  10. Collective network for computer structures

    DOEpatents

    Blumrich, Matthias A [Ridgefield, CT; Coteus, Paul W [Yorktown Heights, NY; Chen, Dong [Croton On Hudson, NY; Gara, Alan [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Hoenicke, Dirk [Ossining, NY; Takken, Todd E [Brewster, NY; Steinmacher-Burow, Burkhard D [Wernau, DE; Vranas, Pavlos M [Bedford Hills, NY

    2011-08-16

    A system and method for enabling high-speed, low-latency global collective communications among interconnected processing nodes. The global collective network optimally enables collective reduction operations to be performed during parallel algorithm operations executing in a computer structure having a plurality of the interconnected processing nodes. Router devices are included that interconnect the nodes of the network via links to facilitate performance of low-latency global processing operations at nodes of the virtual network and class structures. The global collective network may be configured to provide global barrier and interrupt functionality in asynchronous or synchronized manner. When implemented in a massively-parallel supercomputing structure, the global collective network is physically and logically partitionable according to the needs of a processing algorithm.

  11. Universal Pin Electronics.

    DTIC Science & Technology

    1982-11-03

    define the maximum count for the pattern defined by the first 3 bits. Since there are 11 bits involved it is possible to define patterns up to 2048...applied to the UUT directly through the driver for any count up to 2048. Any one of the 7 clocks may be selected under program control and applied to any...one level for the driver (VD1), the logic zero level for the driver (VD0), the logic one level for the receiver (VR1), and the logic zero level for the

  12. Design of a Ferroelectric Programmable Logic Gate Array

    NASA Technical Reports Server (NTRS)

    MacLeod, Todd C.; Ho, Fat Duen

    2003-01-01

    A programmable logic gate array has been designed utilizing ferroelectric field effect transistors (FFETs). The design has only a small number of gates, but it could be scaled up to a more useful size. Using FFETs in a logic array gives several advantages. First, it allows real-time programmability of the array for high-speed reconfiguration. It also allows the array to be configured a nearly unlimited number of times, unlike a FLASH FPGA. Finally, the Ferroelectric Programmable Logic Gate Array (FPLGA) can be implemented using a smaller number of transistors because of the inherent logic characteristics of an FFET. The device was only designed and modeled using Spice models of the circuit, including the FFET; the actual device was not produced. The design consists of a small array of NAND and NOR logic gates. Other gates could easily be produced. They are linked by FFETs that control the logic flow. Timing and logic tables have been produced showing that the array can produce a variety of logic combinations at a usable real-time speed. This device could be a prototype for devices in embedded systems that need both the speed of a hardware implementation of logic and the flexibility to change the logic algorithm. Because of the non-volatile nature of the FFET, it would also be useful in situations where a logic array must be programmed once and used repeatedly after the power has been shut off.

  13. Users manual for Streamtube Curvature Analysis: Analytical method for predicting the pressure distribution about a nacelle at transonic speeds, volume 1

    NASA Technical Reports Server (NTRS)

    Keith, J. S.; Ferguson, D. R.; Heck, P. H.

    1972-01-01

    The computer program, Streamtube Curvature Analysis, is described for the engineering user and for the programmer. The user-oriented documentation includes a description of the mathematical governing equations, their use in the solution, and the method of solution. The general logical flow of the program is outlined and detailed instructions for program usage and operation are explained. General procedures for program use and the program capabilities and limitations are described. From the standpoint of the programmer, the overlay structure of the program is described. The various storage tables are defined and their uses explained. The input and output are discussed in detail. The program listing includes numerous comments so that the logical flow within the program is easily followed. A test case showing input data and output format is included as well as an error printout description.

  14. [Styles of programming 1952-1972].

    PubMed

    van den Bogaard, Adrienne

    2008-01-01

    In the field of history of computing, the construction of the early computers has received much scholarly attention. However, these machines have not only been important because of their logical design and their engineering, but also because of the programming practices that emerged around these first machines. This article compares two styles of programming that developed around Dutch 'first computers'. The first style is represented by Edsger Wybe Dijkstra (1930-2002), who would receive the Turing Award for his work in 1972. Dijkstra developed a mathematical style of programming--a program was something you should be able to design mathematically and prove it logically. The second style is represented by Willem Louis van der Poel (born 1926). For him, programming is 'trickology'. A program is primarily a technical artefact that should work: a program is something you play with, comparable to the way one solves a puzzle.

  15. Programmable computing with a single magnetoresistive element

    NASA Astrophysics Data System (ADS)

    Ney, A.; Pampuch, C.; Koch, R.; Ploog, K. H.

    2003-10-01

    The development of transistor-based integrated circuits for modern computing is a story of great success. However, the proved concept for enhancing computational power by continuous miniaturization is approaching its fundamental limits. Alternative approaches consider logic elements that are reconfigurable at run-time to overcome the rigid architecture of the present hardware systems. Implementation of parallel algorithms on such `chameleon' processors has the potential to yield a dramatic increase of computational speed, competitive with that of supercomputers. Owing to their functional flexibility, `chameleon' processors can be readily optimized with respect to any computer application. In conventional microprocessors, information must be transferred to a memory to prevent it from getting lost, because electrically processed information is volatile. Therefore the computational performance can be improved if the logic gate is additionally capable of storing the output. Here we describe a simple hardware concept for a programmable logic element that is based on a single magnetic random access memory (MRAM) cell. It combines the inherent advantage of a non-volatile output with flexible functionality which can be selected at run-time to operate as an AND, OR, NAND or NOR gate.
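
    At a purely behavioural level, the concept is a single element whose logic function is selected at run time and whose output persists when the inputs are removed. The toy sketch below captures only that behaviour; it is an illustration, not a model of the MRAM cell physics.

    ```python
    # Toy functional sketch of a run-time programmable logic element whose
    # output persists ("non-volatile") between evaluations, echoing the
    # MRAM-cell concept described above; purely illustrative.

    GATES = {
        "AND":  lambda a, b: a and b,
        "OR":   lambda a, b: a or b,
        "NAND": lambda a, b: not (a and b),
        "NOR":  lambda a, b: not (a or b),
    }

    class ProgrammableCell:
        def __init__(self):
            self.function = "AND"   # selectable at run time
            self.stored = False     # output kept until overwritten

        def program(self, name: str):
            self.function = name

        def evaluate(self, a: bool, b: bool) -> bool:
            self.stored = GATES[self.function](a, b)
            return self.stored

    cell = ProgrammableCell()
    cell.program("NOR")
    cell.evaluate(False, False)
    print(cell.stored)   # True; the result remains available without inputs
    ```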

  16. Partitioning problems in parallel, pipelined and distributed computing

    NASA Technical Reports Server (NTRS)

    Bokhari, S.

    1985-01-01

    The problem of optimally assigning the modules of a parallel program over the processors of a multiple computer system is addressed. A Sum-Bottleneck path algorithm is developed that permits the efficient solution of many variants of this problem under some constraints on the structure of the partitions. In particular, the following problems are solved optimally for a single-host, multiple satellite system: partitioning multiple chain structured parallel programs, multiple arbitrarily structured serial programs and single tree structured parallel programs. In addition, the problems of partitioning chain structured parallel programs across chain connected systems and across shared memory (or shared bus) systems are also solved under certain constraints. All solutions for parallel programs are equally applicable to pipelined programs. These results extend prior research in this area by explicitly taking concurrency into account and permit the efficient utilization of multiple computer architectures for a wide range of problems of practical interest.
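
    The chain-partitioning objective, assigning contiguous blocks of a chain-structured program to processors so that the heaviest block is as light as possible, can be illustrated with a small dynamic program, as in the sketch below. This is an illustration under that assumption only, not Bokhari's Sum-Bottleneck path algorithm itself.

    ```python
    # Simple dynamic program that splits a chain of module costs into k
    # contiguous blocks so the heaviest block (the bottleneck) is as light as
    # possible -- an illustration of the chain-partitioning objective.
    from functools import lru_cache

    def min_bottleneck(costs, k):
        n = len(costs)
        prefix = [0]
        for c in costs:
            prefix.append(prefix[-1] + c)

        @lru_cache(maxsize=None)
        def solve(i, parts):
            """Best bottleneck for modules i..n-1 using `parts` processors."""
            if parts == 1:
                return prefix[n] - prefix[i]
            best = float("inf")
            for j in range(i + 1, n - parts + 2):   # first block = i..j-1
                block = prefix[j] - prefix[i]
                best = min(best, max(block, solve(j, parts - 1)))
            return best

        return solve(0, k)

    print(min_bottleneck([4, 1, 3, 2, 5, 2], 3))   # -> 7, e.g. [4,1] [3,2] [5,2]
    ```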

  17. High speed CMOS/SOS standard cell notebook

    NASA Technical Reports Server (NTRS)

    1978-01-01

    The NASA/MSFC high speed CMOS/SOS standard cell family, designed to be compatible with the PR2D (Place, Route in 2-Dimensions) automatic layout program, is described. Standard cell data sheets show the logic diagram, the schematic, the truth table, and propagation delays for each logic cell.

  18. Online Collaboration for Programming: Assessing Students' Cognitive Abilities

    ERIC Educational Resources Information Center

    Othman, Mahfudzah; Muhd Zain, Nurzaid

    2015-01-01

    This study is primarily focused on assessing the students' logical thinking and cognitive levels in an online collaborative environment. The aim is to investigate whether the online collaboration has significant impact to the students' cognitive abilities. The assessment of the logical thinking involved the use of the online Group Assessment…

  19. Automatic Generation of OpenMP Directives and Its Application to Computational Fluid Dynamics Codes

    NASA Technical Reports Server (NTRS)

    Yan, Jerry; Jin, Haoqiang; Frumkin, Michael; Yan, Jerry (Technical Monitor)

    2000-01-01

    The shared-memory programming model is a very effective way to achieve parallelism on shared-memory parallel computers. As great progress has been made in hardware and software technologies, the performance of parallel programs using compiler directives has improved substantially. The introduction of OpenMP directives, the industry standard for shared-memory programming, has minimized the issue of portability. In this study, we have extended CAPTools, a computer-aided parallelization toolkit, to automatically generate OpenMP-based parallel programs with nominal user assistance. We outline the techniques used in the implementation of the tool and discuss its application to the NAS Parallel Benchmarks and several computational fluid dynamics codes. This work demonstrates the great potential of using the tool to quickly parallelize programs and to achieve good performance that exceeds that of some commercial tools.

  20. Parallel Transport with Sheath and Collisional Effects in Global Electrostatic Turbulent Transport in FRCs

    NASA Astrophysics Data System (ADS)

    Bao, Jian; Lau, Calvin; Kuley, Animesh; Lin, Zhihong; Fulton, Daniel; Tajima, Toshiki; Tri Alpha Energy, Inc. Team

    2017-10-01

    Collisional and turbulent transport in a field-reversed configuration (FRC) is studied in global particle simulations using GTC (the gyrokinetic toroidal code). The global FRC geometry is incorporated in GTC by using a field-aligned mesh in cylindrical coordinates, which enables global simulations coupling the core and the scrape-off layer (SOL) across the separatrix. Furthermore, fully kinetic ions are implemented in GTC to treat the magnetic-null point in the FRC core. Both global simulations coupling the core and SOL regions and independent SOL-region simulations have been carried out to study turbulence. In this work, the ``logical sheath boundary condition'' is implemented to study parallel transport in the SOL. This method relaxes the time and spatial steps by avoiding the need to resolve the electron plasma frequency and the Debye length, which enables turbulent transport simulations with sheath effects. We will study collisional and turbulent SOL parallel transport with mirror geometry and the sheath boundary condition in the C-2W divertor.

  1. Architecture for distributed actuation and sensing using smart piezoelectric elements

    NASA Astrophysics Data System (ADS)

    Etienne-Cummings, Ralph; Pourboghrat, Farzad; Maruboyina, Hari K.; Abrate, Serge; Dhali, Shirshak K.

    1998-07-01

    We discuss vibration control of a cantilevered plate with multiple sensors and actuators. An architecture is chosen to minimize the number of control and sensing wires required. A custom VLSI chip, integrated with the sensor/actuator elements, controls the local behavior of the plate. All the actuators are addressed in parallel; local decode logic selects which actuator is stimulated. Downloaded binary data controls the applied voltage and modulation frequency for each actuator, and High Voltage MOSFETs are used to activate them. The sensors, which are independent adjacent piezoelectric ceramic elements, can be accessed in a random or sequential manner. An A/D card and GPIB interconnected test equipment allow a PC to read the sensors' outputs and dictate the actuation procedure. A visual programming environment is used to integrate the sensors, controller and actuators. Based on the constitutive relations for the piezoelectric material, simple models for the sensors and actuators are derived. A two level hierarchical robust controller is derived for motion control and for damping of vibrations.

  2. Rapid and highly integrated FPGA-based Shack-Hartmann wavefront sensor for adaptive optics system

    NASA Astrophysics Data System (ADS)

    Chen, Yi-Pin; Chang, Chia-Yuan; Chen, Shean-Jen

    2018-02-01

    In this study, a field programmable gate array (FPGA)-based Shack-Hartmann wavefront sensor (SHWS) programmed in LabVIEW can be highly integrated into customized applications, such as an adaptive optics system (AOS), for performing real-time wavefront measurement. Further, a Camera Link frame grabber with an embedded FPGA is adopted to enhance the speed at which the sensor reacts to variation, given its advantage of the highest data transmission bandwidth. Instead of waiting for a full frame image to be captured by the FPGA, the Shack-Hartmann algorithm is implemented in parallel processing blocks so that the image data transmission is synchronized with the wavefront reconstruction. On the other hand, we design a mechanism to control the deformable mirror in the same FPGA and verify the Shack-Hartmann sensor speed by controlling the frequency of the deformable mirror's dynamic surface deformation. Currently, this FPGA-based SHWS design can achieve a 266 Hz cyclic speed, limited by the camera frame rate, while leaving 40% of the logic slices free for additional flexible design.
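
    The per-subaperture computation at the heart of a Shack-Hartmann sensor is an intensity-weighted centroid of each lenslet spot. The sketch below performs that step offline with NumPy, assuming a square subaperture grid; the streaming FPGA/LabVIEW design described above computes the same quantity in parallel blocks as pixels arrive, but the sketch is only a software stand-in.

    ```python
    # Software sketch of the per-subaperture centroid step of a Shack-Hartmann
    # wavefront sensor, computed offline with NumPy on a full frame.
    import numpy as np

    def subaperture_centroids(image, n_sub):
        """Split `image` into an n_sub x n_sub grid and return the intensity-
        weighted spot centroid (x, y) inside each subaperture."""
        h, w = image.shape
        sh, sw = h // n_sub, w // n_sub
        ys, xs = np.mgrid[0:sh, 0:sw]
        centroids = np.zeros((n_sub, n_sub, 2))
        for i in range(n_sub):
            for j in range(n_sub):
                block = image[i*sh:(i+1)*sh, j*sw:(j+1)*sw].astype(float)
                total = block.sum() or 1.0          # avoid division by zero
                centroids[i, j] = (np.sum(xs * block) / total,
                                   np.sum(ys * block) / total)
        return centroids

    # Example: a synthetic 64x64 frame split into 8x8 subapertures.
    frame = np.random.rand(64, 64)
    print(subaperture_centroids(frame, 8).shape)   # (8, 8, 2)
    ```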

  3. A performance comparison of the IBM RS/6000 and the Astronautics ZS-1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, W.M.; Abraham, S.G.; Davidson, E.S.

    1991-01-01

    Concurrent uniprocessor architectures, of which vector and superscalar are two examples, are designed to capitalize on fine-grain parallelism. The authors have developed a performance evaluation method for comparing and improving these architectures, and in this article they present the methodology and a detailed case study of two machines. The runtime of many programs is dominated by time spent in loop constructs - for example, Fortran Do-loops. Loops generally comprise two logical processes: The access process generates addresses for memory operations while the execute process operates on floating-point data. Memory access patterns typically can be generated independently of the data in the execute process. This independence allows the access process to slip ahead, thereby hiding memory latency. The IBM 360/91 was designed in 1967 to achieve slip dynamically, at runtime. One CPU unit executes integer operations while another handles floating-point operations. Other machines, including the VAX 9000 and the IBM RS/6000, use a similar approach.

  4. The nondeterministic divide

    NASA Technical Reports Server (NTRS)

    Charlesworth, Arthur

    1990-01-01

    The nondeterministic divide partitions a vector into two non-empty slices by allowing the point of division to be chosen nondeterministically. Support for high-level divide-and-conquer programming provided by the nondeterministic divide is investigated. A diva algorithm is a recursive divide-and-conquer sequential algorithm on one or more vectors of the same range, whose division point for a new pair of recursive calls is chosen nondeterministically before any computation is performed and whose recursive calls are made immediately after the choice of division point; also, access to vector components is only permitted during activations in which the vector parameters have unit length. The notion of diva algorithm is formulated precisely as a diva call, a restricted call on a sequential procedure. Diva calls are proven to be intimately related to associativity. Numerous applications of diva calls are given and strategies are described for translating a diva call into code for a variety of parallel computers. Thus diva algorithms separate logical correctness concerns from implementation concerns.
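
    A small sketch of a diva-style call is given below: the split point of each recursive step is chosen arbitrarily (here at random), and vector components are read only when a slice has unit length. The example sums a vector, whose result is independent of the split points precisely because addition is associative, the property the paper ties diva calls to; the function name is illustrative.

    ```python
    # Sketch of a diva-style divide-and-conquer call: the division point is
    # chosen "nondeterministically" (random here), and vector components are
    # touched only when a slice has unit length.  The summation result does
    # not depend on the split points because addition is associative.
    import random

    def diva_sum(v, lo, hi):
        """Sum v[lo:hi] (hi exclusive, slice assumed non-empty)."""
        if hi - lo == 1:
            return v[lo]                       # unit-length access only
        mid = random.randint(lo + 1, hi - 1)   # nondeterministic divide
        return diva_sum(v, lo, mid) + diva_sum(v, mid, hi)

    data = [3, 1, 4, 1, 5, 9]
    assert all(diva_sum(data, 0, len(data)) == sum(data) for _ in range(10))
    print(diva_sum(data, 0, len(data)))
    ```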

  5. Heliocentric interplanetary low thrust trajectory optimization program, supplement 1, part 2

    NASA Technical Reports Server (NTRS)

    Mann, F. I.; Horsewood, J. L.

    1978-01-01

    The improvements made to the HILTOP electric propulsion trajectory computer program are described. A more realistic propulsion system model was implemented in which the various thrust subsystem efficiencies and the specific impulse are modeled as functions of the power available to the propulsion system. The number of operating thrusters is staged, and the beam voltage is selected from a set of five (or fewer) constant voltages, based upon the application of variational calculus. The constant beam voltages may be optimized individually or collectively. The propulsion system logic is activated by a single program input key in such a manner as to preserve the original HILTOP logic. An analysis describing these features, a complete description of program input quantities, and sample cases of computer output illustrating the program capabilities are presented.

  6. Using a systems orientation and foundational theory to enhance theory-driven human service program evaluations.

    PubMed

    Wasserman, Deborah L

    2010-05-01

    This paper offers a framework for using a systems orientation and "foundational theory" to enhance theory-driven evaluations and logic models. The framework guides the process of identifying and explaining operative relationships and perspectives within human service program systems. Self-Determination Theory exemplifies how a foundational theory can be used to support the framework in a wide range of program evaluations. Two examples illustrate how applications of the framework have improved the evaluators' abilities to observe and explain program effects. In both exemplars, the improvements came from organizing into a single logic model evaluation issues that had previously seemed disparate: valuing (by whose values), the role of organizational and program context, and evaluation anxiety and utilization. Copyright 2009 Elsevier Ltd. All rights reserved.

  7. HTMT-class Latency Tolerant Parallel Architecture for Petaflops Scale Computation

    NASA Technical Reports Server (NTRS)

    Sterling, Thomas; Bergman, Larry

    2000-01-01

    Computational Aero Sciences and other numerically intensive computation disciplines demand computing throughputs substantially greater than the Teraflops-scale systems only now becoming available. The related fields of fluids, structures, thermal, combustion, and dynamic controls are among the interdisciplinary areas that, in combination with sufficient resolution and advanced adaptive techniques, may force performance requirements towards Petaflops. This will be especially true for compute-intensive models such as Navier-Stokes, or when such system models are only part of a larger design optimization computation involving many design points. Yet recent experience with conventional MPP configurations comprising commodity processing and memory components has shown that larger scale frequently results in higher programming difficulty and lower system efficiency. While important advances in system software and algorithmic techniques have had some impact on efficiency and programmability for certain classes of problems, in general it is unlikely that software alone will resolve the challenges to higher scalability. As in the past, future generations of high-end computers may require a combination of hardware architecture and system software advances to enable efficient operation at a Petaflops level. The NASA-led HTMT project has engaged the talents of a broad interdisciplinary team to develop a new strategy in high-end system architecture to deliver petaflops-scale computing in the 2004/5 timeframe. The Hybrid-Technology, MultiThreaded parallel computer architecture incorporates several advanced technologies in combination with an innovative dynamic adaptive scheduling mechanism to provide unprecedented performance and efficiency within practical constraints of cost, complexity, and power consumption. The emerging superconductor Rapid Single Flux Quantum electronics can operate at 100 GHz (the record is 770 GHz) and at one percent of the power required by conventional semiconductor logic. Wave Division Multiplexing optical communications can approach a peak per-fiber bandwidth of 1 Tbps, and the new Data Vortex network topology employing this technology can connect tens of thousands of ports, providing a bi-section bandwidth on the order of a Petabyte per second with latencies well below 100 nanoseconds, even under heavy loads. Processor-in-Memory (PIM) technology combines logic and memory on the same chip, exposing the internal bandwidth of the memory row buffers at low latency. And holographic photorefractive storage technologies provide high-density memory with access times a thousand times faster than conventional disk technologies. Together these technologies enable a new class of shared-memory system architecture with a peak performance in the range of a Petaflops but size and power requirements comparable to today's largest Teraflops-scale systems. To achieve high sustained performance, HTMT combines an advanced multithreading processor architecture with a memory-driven coarse-grained latency management strategy called "percolation", yielding high efficiency while reducing much of the parallel programming burden. This paper will present the basic system architecture characteristics made possible through this series of advanced technologies and then give a detailed description of the new percolation approach to runtime latency management.

  8. Logical qubit fusion

    NASA Astrophysics Data System (ADS)

    Moussa, Jonathan; Ryan-Anderson, Ciaran

    The canonical modern plan for universal quantum computation is a Clifford+T gate set implemented in a topological error-correcting code. This plan has the basic disparity that logical Clifford gates are natural for codes in two spatial dimensions while logical T gates are natural in three. Recent progress has reduced this disparity by proposing logical T gates in two dimensions with doubled, stacked, or gauge color codes, but these proposals lack an error threshold. An alternative universal gate set is Clifford+F, where a fusion (F) gate converts two logical qubits into a logical qudit. We show that logical F gates can be constructed by identifying compatible pairs of qubit and qudit codes that stabilize the same logical subspace, much like the original Bravyi-Kitaev construction of magic state distillation. The simplest example of high-distance compatible codes results in a proposal that is very similar to the stacked color code with the key improvement of retaining an error threshold. Sandia National Labs is a multi-program laboratory managed and operated by Sandia Corp, a wholly owned subsidiary of Lockheed Martin Corp, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  9. Quantum probabilistic logic programming

    NASA Astrophysics Data System (ADS)

    Balu, Radhakrishnan

    2015-05-01

    We describe a quantum mechanics based logic programming language that supports Horn clauses, random variables, and covariance matrices to express and solve problems in probabilistic logic. The Horn clauses of the language wrap random variables, including infinitely valued ones, to express probability distributions and statistical correlations, a powerful feature to capture relationships between distributions that are not independent. The expressive power of the language is based on a mechanism to implement statistical ensembles and to solve the underlying SAT instances using quantum mechanical machinery. We exploit the fact that classical random variables have quantum decompositions to build the Horn clauses. We establish the semantics of the language in a rigorous fashion by considering an existing probabilistic logic language called PRISM, with classical probability measures defined on the Herbrand base, and extending it to the quantum context. In the classical case, H-interpretations form the sample space and probability measures defined on them lead to a consistent definition of probabilities for well-formed formulae. In the quantum counterpart, we define probability amplitudes on H-interpretations, facilitating model generation and verification via quantum mechanical superpositions and entanglements. We cast the well-formed formulae of the language as quantum mechanical observables, thus providing an elegant interpretation for their probabilities. We discuss several examples that combine statistical ensembles and predicates of first-order logic to reason about situations involving uncertainty.

  10. Using a logic model to relate the strategic to the tactical in program planning and evaluation: an illustration based on social norms interventions.

    PubMed

    Keller, Adrienne; Bauerle, Jennifer A

    2009-01-01

    Logic models are a ubiquitous tool for specifying the tactics--including implementation and evaluation--of interventions in the public health, health and social behaviors arenas. Similarly, social norms interventions are a common strategy, particularly in college settings, to address hazardous drinking and other dangerous or asocial behaviors. This paper illustrates an extension of logic models to include strategic as well as tactical components, using a specific example developed for social norms interventions. Placing the evaluation of projects within the context of this kind of logic model addresses issues related to the lack of a research design to evaluate effectiveness.

  11. Logic Design Pathology and Space Flight Electronics

    NASA Technical Reports Server (NTRS)

    Katz, Richard; Barto, Rod L.; Erickson, K.

    1997-01-01

    Logic design errors have been observed in space flight missions and the final stages of ground test. The technologies used by designers and their design/analysis methodologies will be analyzed. This will give insight into the root causes of the failures. These technologies include discrete integrated-circuit-based systems, systems based on field- and mask-programmable logic, and the use of computer-aided engineering (CAE) systems. State-of-the-art (SOTA) design tools and methodologies will be analyzed with respect to high-reliability spacecraft design, and potential pitfalls are discussed. Case studies of faults from large expensive programs to "smaller, faster, cheaper" missions will be used to explore the fundamental reasons for logic design problems.

  12. An interactive parallel programming environment applied in atmospheric science

    NASA Technical Reports Server (NTRS)

    vonLaszewski, G.

    1996-01-01

    This article introduces an interactive parallel programming environment (IPPE) that simplifies the generation and execution of parallel programs. One of the tasks of the environment is to generate message-passing parallel programs for homogeneous and heterogeneous computing platforms. The parallel programs are represented by using visual objects. This is accomplished with the help of a graphical programming editor that is implemented in Java and enables portability to a wide variety of computer platforms. In contrast to other graphical programming systems, reusable parts of the programs can be stored in a program library to support rapid prototyping. In addition, runtime performance data on different computing platforms is collected in a database. A selection process determines dynamically the software and the hardware platform to be used to solve the problem in minimal wall-clock time. The environment is currently being tested on a Grand Challenge problem, the NASA four-dimensional data assimilation system.

  13. Support for Debugging Automatically Parallelized Programs

    NASA Technical Reports Server (NTRS)

    Hood, Robert; Jost, Gabriele; Biegel, Bryan (Technical Monitor)

    2001-01-01

    This viewgraph presentation provides information on the technical aspects of debugging computer code that has been automatically converted for use in a parallel computing system. Shared memory parallelization and distributed memory parallelization entail separate and distinct challenges for a debugging program. A prototype system has been developed which integrates various tools for the debugging of automatically parallelized programs including the CAPTools Database which provides variable definition information across subroutines as well as array distribution information.

  14. A DNAzyme-mediated logic gate for programming molecular capture and release on DNA origami.

    PubMed

    Li, Feiran; Chen, Haorong; Pan, Jing; Cha, Tae-Gon; Medintz, Igor L; Choi, Jong Hyun

    2016-06-28

    Here we design a DNA origami-based site-specific molecular capture and release platform operated by a DNAzyme-mediated logic gate process. We show the programmability and versatility of this platform with small molecules, proteins, and nanoparticles, which may also be controlled by external light signals.

  15. Rosalie Wolf Memorial Lecture: A logic model to measure the impacts of World Elder Abuse Awareness Day.

    PubMed

    Stein, Karen

    2016-01-01

    This commentary discusses the need to evaluate the impact of World Elder Abuse Awareness Day activities, the elder abuse field's most sustained public awareness initiative. A logic model is proposed with measures for short-term, medium-term, and long-term outcomes for community-based programs.

  16. Sign-And-Magnitude Up/Down Counter

    NASA Technical Reports Server (NTRS)

    Cole, Steven W.

    1991-01-01

    Magnitude-and-sign counter includes conventional up/down counter for magnitude part and special additional circuitry for sign part. Negative numbers indicated more directly. Counter implemented by programming erasable programmable logic device (EPLD) or programmable logic array (PLA). Used in place of conventional up/down counter to provide sign and magnitude values directly to other circuits.
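
    As a behavioural illustration of the idea, the sketch below keeps a magnitude register and a separate sign bit so that negative counts are reported directly rather than in two's complement. It mimics the counter's behaviour only and is not the EPLD/PLA logic equations.

    ```python
    # Behavioural sketch of a sign-and-magnitude up/down counter: a magnitude
    # register plus a separate sign bit, so negative counts are available
    # directly.  Illustrative only, not the device's programmed logic.

    class SignMagnitudeCounter:
        def __init__(self):
            self.sign = 0        # 0 = non-negative, 1 = negative
            self.magnitude = 0

        def count(self, up: bool):
            if up:
                if self.sign and self.magnitude:
                    self.magnitude -= 1                      # toward zero
                    self.sign = 0 if self.magnitude == 0 else 1
                else:
                    self.sign, self.magnitude = 0, self.magnitude + 1
            else:
                if self.sign or self.magnitude == 0:
                    self.sign, self.magnitude = 1, self.magnitude + 1
                else:
                    self.magnitude -= 1

        def value(self):
            return -self.magnitude if self.sign else self.magnitude

    c = SignMagnitudeCounter()
    for step in (False, False, True, True, True):   # down, down, up, up, up
        c.count(step)
    print(c.value())   # -> 1
    ```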

  17. Teaching Machines to Think Fuzzy

    ERIC Educational Resources Information Center

    Technology Teacher, 2004

    2004-01-01

    Fuzzy logic programs for computers make them more human. Computers can then think through messy situations and make smart decisions. It makes computers able to control things the way people do. Fuzzy logic has been used to control subway trains, elevators, washing machines, microwave ovens, and cars. Pretty much all the human has to do is push one…

  18. An effective XML based name mapping mechanism within StoRM

    NASA Astrophysics Data System (ADS)

    Corso, E.; Forti, A.; Ghiselli, A.; Magnoni, L.; Zappi, R.

    2008-07-01

    In a Grid environment the naming capability allows users to refer to specific data resources in a physical storage system using a high-level logical identifier. This logical identifier is typically organized in a file-system-like structure, a hierarchical tree of names. Storage Resource Manager (SRM) services map the logical identifier to the physical location of data by evaluating a set of parameters such as the desired quality of service and the VOMS attributes specified in the requests. StoRM is an SRM service developed by INFN and ICTP-EGRID to manage files and space on standard POSIX and high-performing parallel and cluster file systems. An upcoming requirement in the Grid data scenario is the orthogonality of the logical name and the physical location of data, in order to refer, with the same identifier, to different copies of data archived in various storage areas with different qualities of service. The mapping mechanism proposed in StoRM is based on an XML document that represents the different storage components managed by the service, the storage areas defined by the site administrator, the quality of service they provide and the Virtual Organizations that want to use the storage area. An appropriate directory tree is realized in each storage component reflecting the XML schema. In this scenario StoRM is able to identify the physical location of requested data by evaluating the logical identifier and the specified attributes following the XML schema, without querying any database service. This paper presents the namespace schema defined, the different entities represented and the technical details of the StoRM implementation.
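
    Conceptually, resolution walks an XML namespace document that maps request attributes (VO, quality of service) to a storage-area root and then appends the logical name. The sketch below illustrates that lookup; the element and attribute names are invented for the example and are not StoRM's actual schema.

    ```python
    # Conceptual sketch of resolving a logical identifier to a physical path
    # from an XML namespace description.  Element and attribute names are
    # invented for illustration, not taken from StoRM.
    import xml.etree.ElementTree as ET

    NAMESPACE_XML = """
    <namespace>
      <storage-area name="atlas-hot"  vo="atlas" quality="disk" root="/gpfs/atlas/hot"/>
      <storage-area name="atlas-cold" vo="atlas" quality="tape" root="/castor/atlas"/>
    </namespace>
    """

    def resolve(logical_name, vo, quality):
        """Map a logical name plus request attributes to a physical path."""
        root = ET.fromstring(NAMESPACE_XML)
        for area in root.findall("storage-area"):
            if area.get("vo") == vo and area.get("quality") == quality:
                return area.get("root") + "/" + logical_name.lstrip("/")
        raise LookupError("no storage area matches the request attributes")

    print(resolve("/data/run123/file.root", vo="atlas", quality="disk"))
    # -> /gpfs/atlas/hot/data/run123/file.root
    ```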

  19. Programmable bioelectronics in a stimuli-encoded 3D graphene interface

    NASA Astrophysics Data System (ADS)

    Parlak, Onur; Beyazit, Selim; Tse-Sum-Bui, Bernadette; Haupt, Karsten; Turner, Anthony P. F.; Tiwari, Ashutosh

    2016-05-01

    The ability to program and mimic the dynamic microenvironment of living organisms is a crucial step towards the engineering of advanced bioelectronics. Here, we report for the first time a design for programmable bioelectronics, with `built-in' switchable and tunable bio-catalytic performance that responds simultaneously to appropriate stimuli. The designed bio-electrodes comprise light and temperature responsive compartments, which allow the building of Boolean logic gates (i.e. ``OR'' and ``AND'') based on enzymatic communications to deliver logic operations.

  20. DNA-programmed dynamic assembly of quantum dots for molecular computation.

    PubMed

    He, Xuewen; Li, Zhi; Chen, Muzi; Ma, Nan

    2014-12-22

    Despite the widespread use of quantum dots (QDs) for biosensing and bioimaging, QD-based bio-interfaceable and reconfigurable molecular computing systems have not yet been realized. DNA-programmed dynamic assembly of multi-color QDs is presented for the construction of a new class of fluorescence resonance energy transfer (FRET)-based QD computing systems. A complete set of seven elementary logic gates (OR, AND, NOR, NAND, INH, XOR, XNOR) are realized using a series of binary and ternary QD complexes operated by strand displacement reactions. The integration of different logic gates into a half-adder circuit for molecular computation is also demonstrated. This strategy is quite versatile and straightforward for logical operations and would pave the way for QD-biocomputing-based intelligent molecular diagnostics. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
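
    At the Boolean level, the half-adder mentioned above composes two of the elementary gates: the sum bit is an XOR and the carry bit is an AND. The sketch below shows only that logical composition, not the QD/FRET strand-displacement chemistry.

    ```python
    # Boolean-level illustration of a half-adder assembled from two of the
    # elementary gates listed above (sum = XOR, carry = AND).

    def half_adder(a: int, b: int):
        s = a ^ b          # XOR gate
        carry = a & b      # AND gate
        return s, carry

    for a in (0, 1):
        for b in (0, 1):
            s, c = half_adder(a, b)
            print(f"a={a} b={b} -> sum={s} carry={c}")
    ```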

  1. Architecture Adaptive Computing Environment

    NASA Technical Reports Server (NTRS)

    Dorband, John E.

    2006-01-01

    Architecture Adaptive Computing Environment (aCe) is a software system that includes a language, compiler, and run-time library for parallel computing. aCe was developed to enable programmers to write programs, more easily than was previously possible, for a variety of parallel computing architectures. Heretofore, it has been perceived to be difficult to write parallel programs for parallel computers and more difficult to port the programs to different parallel computing architectures. In contrast, aCe is supportable on all high-performance computing architectures. Currently, it is supported on LINUX clusters. aCe uses parallel programming constructs that facilitate writing of parallel programs. Such constructs were used in single-instruction/multiple-data (SIMD) programming languages of the 1980s, including Parallel Pascal, Parallel Forth, C*, *LISP, and MasPar MPL. In aCe, these constructs are extended and implemented for both SIMD and multiple- instruction/multiple-data (MIMD) architectures. Two new constructs incorporated in aCe are those of (1) scalar and virtual variables and (2) pre-computed paths. The scalar-and-virtual-variables construct increases flexibility in optimizing memory utilization in various architectures. The pre-computed-paths construct enables the compiler to pre-compute part of a communication operation once, rather than computing it every time the communication operation is performed.

  2. Knowledge discovery from structured mammography reports using inductive logic programming.

    PubMed

    Burnside, Elizabeth S; Davis, Jesse; Costa, Victor Santos; Dutra, Inês de Castro; Kahn, Charles E; Fine, Jason; Page, David

    2005-01-01

    The development of large mammography databases provides an opportunity for knowledge discovery and data mining techniques to recognize patterns not previously appreciated. Using a database from a breast imaging practice containing patient risk factors, imaging findings, and biopsy results, we tested whether inductive logic programming (ILP) could discover interesting hypotheses that could subsequently be tested and validated. The ILP algorithm discovered two hypotheses from the data that were 1) judged as interesting by a subspecialty trained mammographer and 2) validated by analysis of the data itself.

  3. Data Quality Objectives Supporting the Environmental Soil Monitoring Program for the Idaho National Laboratory Site

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haney, Thomas Jay

    This document describes the process used to develop data quality objectives for the Idaho National Laboratory (INL) Environmental Soil Monitoring Program in accordance with U.S. Environmental Protection Agency guidance. This document also develops and presents the logic that was used to determine the specific number of soil monitoring locations at the INL Site, at locations bordering the INL Site, and at locations in the surrounding regional area. The monitoring location logic follows the guidance from the U.S. Department of Energy for environmental surveillance of its facilities.

  4. Development of multiple user AMTRAN on the Datacraft DC6024

    NASA Technical Reports Server (NTRS)

    Austin, S. L.

    1973-01-01

    The implementation of a multiple-user version of AMTRAN on the Datacraft DC6024 computer is reported. The major portion of the multiple-user logic is incorporated in the main program, which remains in core during all AMTRAN processes. A detailed flowchart of the main program is provided as documentation of the multiple-user capability. Activities are directed toward perfecting this capability, providing new features in response to user needs and requests, providing a two-dimensional array AMTRAN containing multiple-user logic, and providing documentation as the tasks progress.

  5. NASA Lewis F100 engine testing

    NASA Technical Reports Server (NTRS)

    Werner, R. A.; Willoh, R. G., Jr.; Abdelwahab, M.

    1984-01-01

    Two builds of an F100 engine model derivative (EMD) engine were evaluated for improvements in engine components and digital electronic engine control (DEEC) logic. Two DEEC flight logics were verified throughout the flight envelope in support of flight clearance for the F100 engine model derivative program (EMDP). A nozzle instability and a faster augmentor transient capability were investigated in support of the F-15 DEEC flight program. Off-schedule coupled-system-mode fan flutter, DEEC nose-boom pressure correlation, DEEC station six pressure comparison, and a new fan inlet variable vane (CIVV) schedule are identified.

  6. The Automatic Parallelisation of Scientific Application Codes Using a Computer Aided Parallelisation Toolkit

    NASA Technical Reports Server (NTRS)

    Ierotheou, C.; Johnson, S.; Leggett, P.; Cross, M.; Evans, E.; Jin, Hao-Qiang; Frumkin, M.; Yan, J.; Biegel, Bryan (Technical Monitor)

    2001-01-01

    The shared-memory programming model is a very effective way to achieve parallelism on shared memory parallel computers. Historically, the lack of a programming standard for using directives and the rather limited performance due to scalability issues have affected the take-up of this programming model approach. Significant progress has been made in hardware and software technologies, and as a result the performance of parallel programs with compiler directives has also improved. The introduction of an industrial standard for shared-memory programming with directives, OpenMP, has also addressed the issue of portability. In this study, we have extended the computer aided parallelization toolkit (developed at the University of Greenwich) to automatically generate OpenMP based parallel programs with nominal user assistance. We outline the way in which loop types are categorized and how efficient OpenMP directives can be defined and placed using the in-depth interprocedural analysis that is carried out by the toolkit. We also discuss the application of the toolkit to the NAS Parallel Benchmarks and a number of real-world application codes. This work not only demonstrates the great potential of using the toolkit to quickly parallelize serial programs but also the good performance achievable on up to 300 processors for hybrid message passing and directive-based parallelizations.

  7. The Construction of Impossibility: A Logic-Based Analysis of Conjuring Tricks

    PubMed Central

    Smith, Wally; Dignum, Frank; Sonenberg, Liz

    2016-01-01

    Psychologists and cognitive scientists have long drawn insights and evidence from stage magic about human perceptual and attentional errors. We present a complementary analysis of conjuring tricks that seeks to understand the experience of impossibility that they produce. Our account is first motivated by insights about the constructional aspects of conjuring drawn from magicians' instructional texts. A view is then presented of the logical nature of impossibility as an unresolvable contradiction between a perception-supported belief about a situation and a memory-supported expectation. We argue that this condition of impossibility is constructed not simply through misperceptions and misattentions, but rather it is an outcome of a trick's whole structure of events. This structure is conceptualized as two parallel event sequences: an effect sequence that the spectator is intended to believe; and a method sequence that the magician understands as happening. We illustrate the value of this approach through an analysis of a simple close-up trick, Martin Gardner's Turnabout. A formalism called propositional dynamic logic is used to describe some of its logical aspects. This elucidates the nature and importance of the relationship between a trick's effect sequence and its method sequence, characterized by the careful arrangement of four evidence relationships: similarity, perceptual equivalence, structural equivalence, and congruence. The analysis further identifies two characteristics of magical apparatus that enable the construction of apparent impossibility: substitutable elements and stable occlusion. PMID:27378959

  8. POLE.VAULT: A Semantic Framework for Health Policy Evaluation and Logical Testing.

    PubMed

    Shaban-Nejad, Arash; Okhmatovskaia, Anya; Shin, Eun Kyong; Davis, Robert L; Buckeridge, David L

    2017-01-01

    The major goal of our study is to provide an automatic evaluation framework that aligns the results generated through semantic reasoning with the best available evidence regarding effective interventions to support the logical evaluation of public health policies. To this end, we have designed the POLicy EVAlUation & Logical Testing (POLE.VAULT) Framework to assist different stakeholders and decision-makers in making informed decisions about different health-related interventions, programs and ultimately policies, based on the contextual knowledge and the best available evidence at both individual and aggregate levels.

  9. Parallel Quantum Circuit in a Tunnel Junction

    NASA Astrophysics Data System (ADS)

    Faizy Namarvar, Omid; Dridi, Ghassen; Joachim, Christian; GNS theory Group Team

    In between 2 metallic nanopads, adding identical and independent electron transfer paths in parallel increases the effective electronic coupling between the 2 nanopads through the quantum circuit defined by those paths. Measuring this increase of effective coupling using the tunnelling current intensity leads, for example for 2 paths in parallel, to the now standard conductance superposition law G = G1 + G2 + 2√(G1·G2) (1). This is only valid in the tunnelling regime (2). For large electronic coupling to the nanopads (or at resonance), G can saturate and even decay as a function of the number of parallel paths added in the quantum circuit (3). We provide here the explanation of this phenomenon: the measurement of the effective Rabi oscillation frequency using the current intensity is constrained by the normalization principle of quantum mechanics. This limits the quantum conductance G, for example to G0 when there is only one channel per metallic nanopad. This effect has important consequences for the design of Boolean logic gates at the atomic scale using atomic-scale or intramolecular circuits. This work was financially supported by the European PAMS project.
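
    For reference, the superposition law quoted above can be written in display form; the second line extends it to N identical, independent paths in the tunnelling regime, an extrapolation stated here only to make explicit the quadratic growth that the normalization argument must eventually cap, not a result taken from the abstract.

    ```latex
    % Two-path conductance superposition law, and its extension to N identical
    % paths (the extension is an illustrative assumption, not quoted from the
    % abstract).
    \begin{align}
      G   &= G_1 + G_2 + 2\sqrt{G_1 G_2} \\
      G_N &= N^2\, G_1 \qquad \text{for } N \text{ identical paths, } G_i = G_1
    \end{align}
    ```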

  10. Evaluation and Strategic Planning for the GLOBE Program

    NASA Astrophysics Data System (ADS)

    Geary, E. E.; Williams, V. L.

    2010-12-01

    The Global Learning and Observations to Benefit the Environment (GLOBE) Program is an international environmental education program. It unites educators, students and scientists worldwide to collaborate on inquiry based investigations of the environment and Earth system science. Evaluation of the GLOBE program has been challenging because of its broad reach, diffuse models of implementation, and multiple stakeholders. In an effort to guide current evaluation efforts, a logic model was developed that provides a visual display of how the GLOBE program operates. Using standard elements of inputs, activities, outputs, customers and outcomes, this model describes how the program operates to achieve its goals. The template used to develop this particular logic model aligns the GLOBE program operations with its program strategy, thus ensuring that what the program is doing supports the achievement of long-term, intermediate and annual goals. It also provides a foundation for the development of key programmatic metrics that can be used to gauge progress toward the achievement of strategic goals.

  11. Human Memory Organization for Computer Programs.

    ERIC Educational Resources Information Center

    Norcio, A. F.; Kerst, Stephen M.

    1983-01-01

    Results of study investigating human memory organization in processing of computer programming languages indicate that algorithmic logic segments form a cognitive organizational structure in memory for programs. Statement indentation and internal program documentation did not enhance organizational process of recall of statements in five Fortran…

  12. Index to Computer Assisted Instruction.

    ERIC Educational Resources Information Center

    Lekan, Helen A., Ed.

    The computer assisted instruction (CAI) programs and projects described in this index are listed by subject matter. The index gives the program name, author, source, description, prerequisites, level of instruction, type of student, average completion time, logic and program, purpose for which program was designed, supplementary…

  13. QUARTERLY TECHNICAL PROGRESS REPORT, JULY, AUGUST, SEPTEMBER 1966.

    DTIC Science & Technology

    Contents: Circuit research program; Hardware systems research; Software systems research program; Numerical methods, computer arithmetic and...artificial languages; Library automation; Illiac II service, use, and program development; IBM service, use, and program development; Problem specifications; Switching theory and logical design; General laboratory information.

  14. Using OpenMP vs. Threading Building Blocks for Medical Imaging on Multi-cores

    NASA Astrophysics Data System (ADS)

    Kegel, Philipp; Schellmann, Maraike; Gorlatch, Sergei

    We compare two parallel programming approaches for multi-core systems: the well-known OpenMP and the recently introduced Threading Building Blocks (TBB) library by Intel®. The comparison is made using the parallelization of a real-world numerical algorithm for medical imaging. We develop several parallel implementations, and compare them w.r.t. programming effort, programming style and abstraction, and runtime performance. We show that TBB requires a considerable program re-design, whereas with OpenMP simple compiler directives are sufficient. While TBB appears to be less appropriate for parallelizing existing implementations, it fosters a good programming style and higher abstraction level for newly developed parallel programs. Our experimental measurements on a dual quad-core system demonstrate that OpenMP slightly outperforms TBB in our implementation.

  15. Effects of Combined Physical and Cognitive Exercises on Cognition and Mobility in Patients With Mild Cognitive Impairment: A Randomized Clinical Trial.

    PubMed

    Shimada, Hiroyuki; Makizako, Hyuma; Doi, Takehiko; Park, Hyuntae; Tsutsumimoto, Kota; Verghese, Joe; Suzuki, Takao

    2017-11-17

    Although participation in physical and cognitive activities is encouraged to reduce the risk of dementia, the preventive efficacy of these activities for patients with mild cognitive impairment is unestablished. To compare the cognitive and mobility effects of a 40-week program of combined cognitive and physical activity with those of a health education program. A randomized, parallel, single-blind controlled trial. A population-based study of participants recruited from Obu, a residential suburb of Nagoya, Japan. Between August 2011 and February 2012, we evaluated 945 adults 65 years or older with mild cognitive impairment, enrolled 308, and randomly assigned them to the combined activity group (n = 154) or the health education control group (n = 154). The combined activity program involved weekly 90-minute sessions for 40 weeks focused on physical and cognitive activities. The control group attended 90-minute health promotion classes thrice during the 40-week trial period. The outcome measures were assessed at the study's beginning and end by personnel blinded to mild cognitive impairment subtype and group. The primary endpoints were postintervention changes in scores on (1) the Mini-Mental State Examination as a measure of general cognitive status and memory, (2) the Wechsler Memory Scale-Revised-Logical Memory II, and (3) the Rey Auditory Verbal Learning Test. We applied mobility assessments and assessed brain atrophy with magnetic resonance imaging. Compared with the control group, the combined activity group showed significantly greater improvements in scores on the Mini-Mental State Examination (difference = 0.8 points, P = .012) and Wechsler Memory Scale-Revised-Logical Memory II (difference = 1.0, P = .004), significant improvements in mobility and the nonmemory domains, and reduced left medial temporal lobe atrophy in amnestic mild cognitive impairment (Z-score difference = -31.3, P < .05). Combined physical and cognitive activity improves or maintains cognitive and physical performance in older adults with mild cognitive impairment, especially the amnestic type. Copyright © 2017 AMDA – The Society for Post-Acute and Long-Term Care Medicine. Published by Elsevier Inc. All rights reserved.

  16. An object-oriented approach to nested data parallelism

    NASA Technical Reports Server (NTRS)

    Sheffler, Thomas J.; Chatterjee, Siddhartha

    1994-01-01

    This paper describes an implementation technique for integrating nested data parallelism into an object-oriented language. Data-parallel programming employs sets of data called 'collections' and expresses parallelism as operations performed over the elements of a collection. When the elements of a collection are also collections, then there is the possibility for 'nested data parallelism.' Few current programming languages, however, support nested data parallelism. In an object-oriented framework, a collection is a single object. Its type defines the parallel operations that may be applied to it. Our goal is to design and build an object-oriented data-parallel programming environment supporting nested data parallelism. Our initial approach is built upon three fundamental additions to C++. We add new parallel base types by implementing them as classes, and add a new parallel collection type called a 'vector' that is implemented as a template. Only one new language feature is introduced: the 'foreach' construct, which is the basis for exploiting elementwise parallelism over collections. The strength of the method lies in the compilation strategy, which translates nested data-parallel C++ into ordinary C++. Extracting the potential parallelism in nested 'foreach' constructs is called 'flattening' nested parallelism. We show how to flatten 'foreach' constructs using a simple program transformation. Our prototype system produces vector code which has been successfully run on workstations, a CM-2, and a CM-5.
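
    A minimal sketch of the flattening idea (not the paper's actual C++ extension or compiler transformation): a nested foreach over a collection of collections can be turned into one flat elementwise loop by concatenating the inner collections and recording segment boundaries. All names below are illustrative assumptions.

```cpp
#include <cstddef>
#include <vector>

// A "collection of collections": each inner vector may have a different length.
using Nested = std::vector<std::vector<float>>;

// Flattened form: one value array plus segment offsets (a segment descriptor).
struct Flat {
    std::vector<float>  values;   // all inner elements, concatenated
    std::vector<size_t> offsets;  // offsets[i] = start of segment i; last entry = total size
};

Flat flatten(const Nested& nested) {
    Flat f;
    f.offsets.push_back(0);
    for (const auto& seg : nested) {
        f.values.insert(f.values.end(), seg.begin(), seg.end());
        f.offsets.push_back(f.values.size());
    }
    return f;
}

// The nested "foreach segment { foreach element { x *= 2 } }" becomes a single
// flat elementwise loop, which a data-parallel backend can execute directly.
void scale_all(Flat& f) {
    for (size_t i = 0; i < f.values.size(); ++i)
        f.values[i] *= 2.0f;
}
```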

  17. Generating and executing programs for a floating point single instruction multiple data instruction set architecture

    DOEpatents

    Gschwind, Michael K

    2013-04-16

    Mechanisms for generating and executing programs for a floating point (FP) only single instruction multiple data (SIMD) instruction set architecture (ISA) are provided. A computer program product comprising a computer recordable medium having a computer readable program recorded thereon is provided. The computer readable program, when executed on a computing device, causes the computing device to receive one or more instructions and execute the one or more instructions using logic in an execution unit of the computing device. The logic implements a floating point (FP) only single instruction multiple data (SIMD) instruction set architecture (ISA), based on data stored in a vector register file of the computing device. The vector register file is configured to store both scalar and floating point values as vectors having a plurality of vector elements.

  18. The BLAZE language: A parallel language for scientific programming

    NASA Technical Reports Server (NTRS)

    Mehrotra, P.; Vanrosendale, J.

    1985-01-01

    A Pascal-like scientific programming language, Blaze, is described. Blaze contains array arithmetic, forall loops, and APL-style accumulation operators, which allow natural expression of fine grained parallelism. It also employs an applicative or functional procedure invocation mechanism, which makes it easy for compilers to extract coarse grained parallelism using machine specific program restructuring. Thus Blaze should allow one to achieve highly parallel execution on multiprocessor architectures, while still providing the user with conceptually sequential control flow. A central goal in the design of Blaze is portability across a broad range of parallel architectures. The multiple levels of parallelism present in Blaze code, in principle, allow a compiler to extract the types of parallelism appropriate for the given architecture while neglecting the remainder. The features of Blaze are described, and it is shown how this language would be used in typical scientific programming.
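
    Blaze itself is Pascal-like, but the flavor of a forall loop combined with an accumulation can be suggested in C++ (kept consistent with the other sketches in this collection), with OpenMP standing in for the parallelism a Blaze compiler would extract. This is an illustrative analogue under that assumption, not Blaze syntax.

```cpp
#include <vector>
#include <omp.h>

// Analogue of "forall i: c[i] := a[i] + b[i]" followed by a sum accumulation
// over the result. 'c' is assumed to be pre-sized to a.size() == b.size().
double add_and_accumulate(const std::vector<double>& a,
                          const std::vector<double>& b,
                          std::vector<double>& c) {
    double total = 0.0;
    const long n = static_cast<long>(a.size());
    #pragma omp parallel for reduction(+ : total)
    for (long i = 0; i < n; ++i) {
        c[i] = a[i] + b[i];   // fine-grained elementwise parallelism
        total += c[i];        // APL-style "+/"-like accumulation
    }
    return total;
}
```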

  19. Relevance of Piagetian cross-cultural psychology to the humanities and social sciences.

    PubMed

    Oesterdiekhoff, Georg W

    2013-01-01

    Jean Piaget held views according to which there are parallels between ontogeny and the historical development of culture, sciences, and reason. His books are full of remarks and considerations about these parallels, with reference to many logical, physical, social, and moral phenomena. This article explains that Piagetian cross-cultural psychology has delivered the decisive data needed to extend the research interests of Piaget. These data provide a basis for reconstructing not only the history of sciences but also the history of religion, politics, morals, culture, philosophy, and social change and the emergence of industrial society. Thus, it is possible to develop Piagetian theory as a historical anthropology in order to provide a basis for the humanities and social sciences.

  20. A low-complexity Reed-Solomon decoder using new key equation solver

    NASA Astrophysics Data System (ADS)

    Xie, Jun; Yuan, Songxin; Tu, Xiaodong; Zhang, Chongfu

    2006-09-01

    This paper presents a low-complexity parallel Reed-Solomon (RS) (255,239) decoder architecture using a novel pipelined variable-stages recursive Modified Euclidean (ME) algorithm for optical communication. A pipelined four-parallel syndrome generator is proposed. Time-multiplexing and resource-sharing schemes are used in the novel recursive ME algorithm to reduce the logic gate count. The new key equation solver can be shared by two decoder macros. A new Chien search cell that does not need initialization is proposed in the paper. The proposed decoder can be used for 2.5 Gb/s data-rate devices. The decoder is implemented in an Altera Stratix II device. The resource utilization is reduced by about 40% compared to the conventional method.

  1. Method and system for selecting data sampling phase for self timed interface logic

    DOEpatents

    Hoke, Joseph Michael; Ferraiolo, Frank D.; Lo, Tin-Chee; Yarolin, John Michael

    2005-01-04

    An exemplary embodiment of the present invention is a method for transmitting data among processors over a plurality of parallel data lines and a clock signal line. A receiver processor receives both data and a clock signal from a sender processor. At the receiver processor a bit of the data is phase aligned with the transmitted clock signal. The phase aligning includes selecting a data phase from a plurality of data phases in a delay chain and then adjusting the selected data phase to compensate for a round-off error. Additional embodiments include a system and storage medium for transmitting data among processors over a plurality of parallel data lines and a clock signal line.

  2. Gray scale operation of a multichannel optical convolver using the Semetex magnetooptic spatial light modulator

    NASA Technical Reports Server (NTRS)

    Davis, Jeffrey A.; Day, Timothy; Lilly, Roger A.; Taber, Donald B.; Liu, Hua-Kuang

    1988-01-01

    A new multichannel optical correlator/convolver architecture which uses an acoustooptic light modulator for the input channel and a Semetex magnetooptic spatial light modulator (MOSLM) for the set of parallel reference channels is presented. Details of the anamorphic optical system are discussed. Experimental results illustrate the use of the system as a convolver for performing digital multiplication by analog convolution (DMAC). A limited gray scale capability for data stored by the MOSLM is demonstrated by implementing this DMAC algorithm with trinary logic. Use of the MOSLM allows the number of parallel channels for the convolver to be increased significantly compared with previously reported techniques while retaining the capability for updating both channels at high speeds.

  3. Gray Scale Operation Of A Multichannel Optical Convolver Using The Semetex Magnetooptic Spatial Light Modulator

    NASA Astrophysics Data System (ADS)

    Davis, Jeffrey A.; Day, Timothy; Lilly, Roger A.; Taber, Donald B.; Liu, Hua-Kuang

    1988-02-01

    We present a new multichannel optical correlator/convolver architecture which uses an acoustooptic light modulator (AOLM) for the input channel and a Semetex magnetooptic spatial light modulator (MOSLM) for the set of parallel reference channels. Details of the anamorphic optical system are discussed. Experimental results illustrate use of the system as a convolver for performing digital multiplication by analog convolution (DMAC). A limited gray scale capability for data stored by the MOSLM is demonstrated by implementing this DMAC algorithm with trinary logic. Use of the MOSLM allows the number of parallel channels for the convolver to be increased significantly compared with previously reported techniques while retaining the capability for updating both channels at high speeds.

  4. Gray scale operation of a multichannel optical convolver using the Semetex magnetooptic spatial light modulator

    NASA Astrophysics Data System (ADS)

    Davis, Jeffrey A.; Day, Timothy; Lilly, Roger A.; Taber, Donald B.; Liu, Hua-Kuang

    A new multichannel optical correlator/convolver architecture which uses an acoustooptic light modulator for the input channel and a Semetex magnetooptic spatial light modulator (MOSLM) for the set of parallel reference channels is presented. Details of the anamorphic optical system are discussed. Experimental results illustrate the use of the system as a convolver for performing digital multiplication by analog convolution (DMAC). A limited gray scale capability for data stored by the MOSLM is demonstrated by implementing this DMAC algorithm with trinary logic. Use of the MOSLM allows the number of parallel channels for the convolver to be increased significantly compared with previously reported techniques while retaining the capability for updating both channels at high speeds.

  5. DNAzyme-Based Logic Gate-Mediated DNA Self-Assembly.

    PubMed

    Zhang, Cheng; Yang, Jing; Jiang, Shuoxing; Liu, Yan; Yan, Hao

    2016-01-13

    Controlling DNA self-assembly processes using rationally designed logic gates is a major goal of DNA-based nanotechnology and programming. Such controls could facilitate the hierarchical engineering of complex nanopatterns responding to various molecular triggers or inputs. Here, we demonstrate the use of a series of DNAzyme-based logic gates to control DNA tile self-assembly onto a prescribed DNA origami frame. Logic systems such as "YES," "OR," "AND," and "logic switch" are implemented based on DNAzyme-mediated tile recognition with the DNA origami frame. DNAzyme is designed to play two roles: (1) as an intermediate messenger to motivate downstream reactions and (2) as a final trigger to report fluorescent signals, enabling information relay between the DNA origami-framed tile assembly and fluorescent signaling. The results of this study demonstrate the plausibility of DNAzyme-mediated hierarchical self-assembly and provide new tools for generating dynamic and responsive self-assembly systems.

  6. Adapting high-level language programs for parallel processing using data flow

    NASA Technical Reports Server (NTRS)

    Standley, Hilda M.

    1988-01-01

    EASY-FLOW, a very high-level data flow language, is introduced for the purpose of adapting programs written in a conventional high-level language to a parallel environment. The level of parallelism provided is of the large-grained variety in which parallel activities take place between subprograms or processes. A program written in EASY-FLOW is a set of subprogram calls as units, structured by iteration, branching, and distribution constructs. A data flow graph may be deduced from an EASY-FLOW program.

  7. Evaluating community and campus environmental public health programs.

    PubMed

    Pettibone, Kristianna G; Parras, Juan; Croisant, Sharon Petronella; Drew, Christina H

    2014-01-01

    The National Institute of Environmental Health Sciences' (NIEHS) Partnerships for Environmental Public Health (PEPH) program created the Evaluation Metrics Manual as a tool to help grantees understand how to map out their programs using a logic model, and to identify measures for documenting their achievements in environmental public health research. This article provides an overview of the manual, describing how grantees and community partners contributed to the manual, and how the basic components of a logic model can be used to identify metrics. We illustrate how the approach can be implemented, using a real-world case study from the University of Texas Medical Branch, where researchers worked with community partners to develop a network to address environmental justice issues.

  8. SEE Sensitivity Analysis of 180 nm NAND CMOS Logic Cell for Space Applications

    NASA Astrophysics Data System (ADS)

    Sajid, Muhammad

    2016-07-01

    This paper focuses on Single Event Effects caused by energetic particle strikes on sensitive locations in a CMOS NAND logic cell designed in a 180 nm technology node to be operated in the space radiation environment. The generation of SE transients as well as upsets as a function of the LET of the incident particle has been determined for logic devices onboard LEO and GEO satellites. The minimum pulse magnitude and pulse width for the threshold LET were determined to estimate the vulnerability/susceptibility of the device to a heavy ion strike. The impact of temperature, strike location, and logic state of the NAND circuit on the total SEU/SET rate was estimated with physical mechanism simulations using the Visual TCAD, Genius, runSEU, and Crad computer codes.

  9. Constructing and Verifying Program Theory Using Source Documentation

    ERIC Educational Resources Information Center

    Renger, Ralph

    2010-01-01

    Making the program theory explicit is an essential first step in Theory Driven Evaluation (TDE). Once explicit, the program logic can be established, making the necessary links between the program theory, activities, and outcomes. Despite its importance, evaluators often encounter situations where the program theory is not explicitly stated. Under such…

  10. Logic Modeling as a Tool to Prepare to Evaluate Disaster and Emergency Preparedness, Response, and Recovery in Schools

    ERIC Educational Resources Information Center

    Zantal-Wiener, Kathy; Horwood, Thomas J.

    2010-01-01

    The authors propose a comprehensive evaluation framework to prepare for evaluating school emergency management programs. This framework involves a logic model that incorporates Government Performance and Results Act (GPRA) measures as a foundation for comprehensive evaluation that complements performance monitoring used by the U.S. Department of…

  11. Programme Costing - A Logical Step Toward Improved Management.

    ERIC Educational Resources Information Center

    McDougall, Ronald N.

    The analysis of costs of university activities from a functional or program point of view, rather than an organizational unit basis, is not only an imperative for the planning and management of universities, but also a logical method of examining the costs of university operations. A task force of the Committee of Finance Officers-Universities of…

  12. A Project-Based Learning Approach to Programmable Logic Design and Computer Architecture

    ERIC Educational Resources Information Center

    Kellett, C. M.

    2012-01-01

    This paper describes a course in programmable logic design and computer architecture as it is taught at the University of Newcastle, Australia. The course is designed around a major design project and has two supplemental assessment tasks that are also described. The context of the Computer Engineering degree program within which the course is…

  13. Teaching Semantic Tableaux Method for Propositional Classical Logic with a CAS

    ERIC Educational Resources Information Center

    Aguilera-Venegas, Gabriel; Galán-García, José Luis; Galán-García, María Ángeles; Rodríguez-Cielos, Pedro

    2015-01-01

    Automated theorem proving (ATP) for Propositional Classical Logic is an algorithm to check the validity of a formula. It is a very well-known problem which is decidable but co-NP-complete. There are many algorithms for this problem. In this paper, an educationally oriented implementation of Semantic Tableaux method is described. The program has…

  14. Submicron Systems Architecture Project

    DTIC Science & Technology

    1981-11-01

    This project is concerned with the architecture, design, and testing of VLSI Systems. The principal activities in this report period include: The Tree Machine; COPE, The Homogeneous Machine; Computational Arrays; Switch-Level Model for MOS Logic Design; Testing; Local Network and Designer Workstations; Self-timed Systems; Characterization of Deadlock Free Resource Contention; Concurrency Algebra; Language Design and Logic for Program Verification.

  15. "Modeling" Youth Work: Logic Models, Neoliberalism, and Community Praxis

    ERIC Educational Resources Information Center

    Carpenter, Sara

    2016-01-01

    This paper examines the use of logic models in the development of community initiatives within the AmeriCorps program. AmeriCorps is the civilian national service programme in the U.S., operating as a grants programme to local governments and not-for-profit organisations and providing low-cost labour to address pressing issues of social…

  16. [New horizons in medicine. The application of "fuzzy logic" in clinical and experimental medicine].

    PubMed

    Guarini, G

    1994-06-01

    In medicine, the study of physiological and physiopathological problems is generally programmed by elaborating models which respond to the principles of formal logic. This gives the advantage of favouring the transformation of the formal model into a mathematical reference model which responds to the principles of set theory. All this reflects the utopian wish to obtain, as a result of each piece of research, a clear-cut answer, whether positive or negative, according to the Aristotelian principle of tertium non datur. Taking this into consideration, the author briefly traces the principles of modal logic and, in particular, those of fuzzy logic, proposing that the current definition of the latter as "logic with more truth values" be replaced by the perhaps more pertinent "logic of conditioned possibilities". After a brief synthesis of the state of the art in the application of fuzzy logic, the author reports an example of the graphic expression of fuzzy logic by demonstrating how basic glycemic data (expressed by the magnitudes of vectors) recorded in a sample of healthy individuals constituted, on the whole, an unbroken continuous stream of partial sets. The author calls attention to fuzzy logic as a useful instrument for elaborating, in a new way, scenario analyses suited to acquiring the information needed to single out the critical points which characterize the potential development of any biological phenomenon.

  17. Data Processing: Fifteen Suggestions for Computer Training in Your Business Education Classes.

    ERIC Educational Resources Information Center

    Barr, Lowell L.

    1980-01-01

    Presents 15 suggestions for training business education students in the use of computers. Suggestions involve computer language, method of presentation, laboratory time, programming assignments, instructions and handouts, problem solving, deadlines, reviews, programming concepts, programming logic, documentation, and defensive programming. (CT)

  18. Program to Optimize Simulated Trajectories (POST). Volume 3: Programmer's manual

    NASA Technical Reports Server (NTRS)

    Brauer, G. L.; Cornick, D. E.; Habeger, A. R.; Petersen, F. M.; Stevenson, R.

    1975-01-01

    Information pertinent to the programmer and relating to the program to optimize simulated trajectories (POST) is presented. Topics discussed include: program structure and logic, subroutine listings and flow charts, and internal FORTRAN symbols. The POST core requirements are summarized along with program macrologic.

  19. Space shuttle atmospheric revitalization subsystem/active thermal control subsystem computer program (users manual)

    NASA Technical Reports Server (NTRS)

    1973-01-01

    A shuttle atmosphere revitalization subsystem (ARS)/active thermal control subsystem (ATCS) performance routine was developed. This computer program is adapted from the Shuttle EC/LSS Design Computer Program. The program was upgraded in three noteworthy areas: (1) the functional ARS/ATCS schematic has been revised to accurately synthesize the shuttle baseline system definition; (2) the program logic has been improved to provide a more accurate prediction of the integrated ARS/ATCS system performance, and the logic has been expanded to model all components and thermal loads in the ARS/ATCS system; (3) the program is designed to be used on the NASA JSC crew system division's programmable calculator system. As written, the new computer routine has an average running time of five minutes. The use of desk-top calculation equipment and the rapid response of the program provide NASA with an analytical tool for trade studies to refine the system definition and for test support of the RSECS or integrated Shuttle ARS/ATCS test programs.

  20. Collectively loading programs in a multiple program multiple data environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aho, Michael E.; Attinella, John E.; Gooding, Thomas M.

    Techniques are disclosed for loading programs efficiently in a parallel computing system. In one embodiment, nodes of the parallel computing system receive a load description file which indicates, for each program of a multiple program multiple data (MPMD) job, nodes which are to load the program. The nodes determine, using collective operations, a total number of programs to load and a number of programs to load in parallel. The nodes further generate a class route for each program to be loaded in parallel, where the class route generated for a particular program includes only those nodes on which the program needs to be loaded. For each class route, a node is selected using a collective operation to be a load leader which accesses a file system to load the program associated with a class route and broadcasts the program via the class route to other nodes which require the program.
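
    A rough MPI sketch of the idea in the abstract, under stated assumptions: the machine-specific class routes are approximated here by sub-communicators, file reading is simplified, and all names are illustrative. Ranks needing the same program form a group, the group's rank 0 acts as load leader, and the image is broadcast only within that group.

```cpp
#include <mpi.h>
#include <utility>
#include <vector>

// Broadcast a program image only among the ranks that need it.
// 'program_id' selects the program this rank must load; ranks with the same id
// end up in the same sub-communicator (our stand-in for a class route).
// Rank 0 of each group is assumed to have read the image into 'image_if_leader'.
std::vector<char> load_program(MPI_Comm world, int program_id,
                               std::vector<char> image_if_leader) {
    int rank;
    MPI_Comm_rank(world, &rank);

    // Group ranks by program; ordering inside the group follows world rank.
    MPI_Comm group;
    MPI_Comm_split(world, program_id, rank, &group);

    int grank;
    MPI_Comm_rank(group, &grank);

    // The load leader announces the image size, then broadcasts the image.
    long size = (grank == 0) ? static_cast<long>(image_if_leader.size()) : 0;
    MPI_Bcast(&size, 1, MPI_LONG, 0, group);

    std::vector<char> image = (grank == 0) ? std::move(image_if_leader)
                                           : std::vector<char>(static_cast<size_t>(size));
    MPI_Bcast(image.data(), static_cast<int>(size), MPI_CHAR, 0, group);

    MPI_Comm_free(&group);
    return image;
}
```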

  1. Exhaustively characterizing feasible logic models of a signaling network using Answer Set Programming.

    PubMed

    Guziolowski, Carito; Videla, Santiago; Eduati, Federica; Thiele, Sven; Cokelaer, Thomas; Siegel, Anne; Saez-Rodriguez, Julio

    2013-09-15

    Logic modeling is a useful tool to study signal transduction across multiple pathways. Logic models can be generated by training a network containing the prior knowledge to phospho-proteomics data. The training can be performed using stochastic optimization procedures, but these are unable to guarantee a global optimum or to report the complete family of feasible models. This, however, is essential to provide precise insight into the mechanisms underlying signal transduction and generate reliable predictions. We propose the use of Answer Set Programming to explore exhaustively the space of feasible logic models. Toward this end, we have developed caspo, an open-source Python package that provides a powerful platform to learn and characterize logic models by leveraging the rich modeling language and solving technologies of Answer Set Programming. We illustrate the usefulness of caspo by revisiting a model of pro-growth and inflammatory pathways in liver cells. We show that, if experimental error is taken into account, there are thousands (11 700) of models compatible with the data. Despite the large number, we can extract structural features from the models, such as links that are always (or never) present or modules that appear in a mutually exclusive fashion. To further characterize this family of models, we investigate the input-output behavior of the models. We find 91 behaviors across the 11 700 models and we suggest new experiments to discriminate among them. Our results underscore the importance of characterizing in a global and exhaustive manner the family of feasible models, with important implications for experimental design. caspo is freely available for download (license GPLv3) and as a web service at http://caspo.genouest.org/. Supplementary materials are available at Bioinformatics online. santiago.videla@irisa.fr.

  2. Exhaustively characterizing feasible logic models of a signaling network using Answer Set Programming

    PubMed Central

    Guziolowski, Carito; Videla, Santiago; Eduati, Federica; Thiele, Sven; Cokelaer, Thomas; Siegel, Anne; Saez-Rodriguez, Julio

    2013-01-01

    Motivation: Logic modeling is a useful tool to study signal transduction across multiple pathways. Logic models can be generated by training a network containing the prior knowledge to phospho-proteomics data. The training can be performed using stochastic optimization procedures, but these are unable to guarantee a global optimum or to report the complete family of feasible models. This, however, is essential to provide precise insight into the mechanisms underlying signal transduction and generate reliable predictions. Results: We propose the use of Answer Set Programming to explore exhaustively the space of feasible logic models. Toward this end, we have developed caspo, an open-source Python package that provides a powerful platform to learn and characterize logic models by leveraging the rich modeling language and solving technologies of Answer Set Programming. We illustrate the usefulness of caspo by revisiting a model of pro-growth and inflammatory pathways in liver cells. We show that, if experimental error is taken into account, there are thousands (11 700) of models compatible with the data. Despite the large number, we can extract structural features from the models, such as links that are always (or never) present or modules that appear in a mutually exclusive fashion. To further characterize this family of models, we investigate the input–output behavior of the models. We find 91 behaviors across the 11 700 models and we suggest new experiments to discriminate among them. Our results underscore the importance of characterizing in a global and exhaustive manner the family of feasible models, with important implications for experimental design. Availability: caspo is freely available for download (license GPLv3) and as a web service at http://caspo.genouest.org/. Supplementary information: Supplementary materials are available at Bioinformatics online. Contact: santiago.videla@irisa.fr PMID:23853063

  3. A discriminative method for family-based protein remote homology detection that combines inductive logic programming and propositional models

    PubMed Central

    2011-01-01

    Background Remote homology detection is a hard computational problem. Most approaches have trained computational models by using either full protein sequences or multiple sequence alignments (MSA), including all positions. However, when we deal with proteins in the "twilight zone" we can observe that only some segments of sequences (motifs) are conserved. We introduce a novel logical representation that allows us to represent physico-chemical properties of sequences, conserved amino acid positions and conserved physico-chemical positions in the MSA. From this, Inductive Logic Programming (ILP) finds the most frequent patterns (motifs) and uses them to train propositional models, such as decision trees and support vector machines (SVM). Results We use the SCOP database to perform our experiments by evaluating protein recognition within the same superfamily. Our results show that our methodology, when using SVM, performs significantly better than some of the state-of-the-art methods, and comparably to others. However, our method provides a comprehensible set of logical rules that can help to understand what determines a protein function. Conclusions The strategy of selecting only the most frequent patterns is effective for remote homology detection. This is possible through a suitable first-order logical representation of homologous properties, and through a set of frequent patterns, found by an ILP system, that summarizes essential features of protein functions. PMID:21429187

  4. A discriminative method for family-based protein remote homology detection that combines inductive logic programming and propositional models.

    PubMed

    Bernardes, Juliana S; Carbone, Alessandra; Zaverucha, Gerson

    2011-03-23

    Remote homology detection is a hard computational problem. Most approaches have trained computational models by using either full protein sequences or multiple sequence alignments (MSA), including all positions. However, when we deal with proteins in the "twilight zone" we can observe that only some segments of sequences (motifs) are conserved. We introduce a novel logical representation that allows us to represent physico-chemical properties of sequences, conserved amino acid positions and conserved physico-chemical positions in the MSA. From this, Inductive Logic Programming (ILP) finds the most frequent patterns (motifs) and uses them to train propositional models, such as decision trees and support vector machines (SVM). We use the SCOP database to perform our experiments by evaluating protein recognition within the same superfamily. Our results show that our methodology, when using SVM, performs significantly better than some of the state-of-the-art methods, and comparably to others. However, our method provides a comprehensible set of logical rules that can help to understand what determines a protein function. The strategy of selecting only the most frequent patterns is effective for remote homology detection. This is possible through a suitable first-order logical representation of homologous properties, and through a set of frequent patterns, found by an ILP system, that summarizes essential features of protein functions.

  5. The BLAZE language - A parallel language for scientific programming

    NASA Technical Reports Server (NTRS)

    Mehrotra, Piyush; Van Rosendale, John

    1987-01-01

    A Pascal-like scientific programming language, BLAZE, is described. BLAZE contains array arithmetic, forall loops, and APL-style accumulation operators, which allow natural expression of fine grained parallelism. It also employs an applicative or functional procedure invocation mechanism, which makes it easy for compilers to extract coarse grained parallelism using machine specific program restructuring. Thus BLAZE should allow one to achieve highly parallel execution on multiprocessor architectures, while still providing the user with conceptually sequential control flow. A central goal in the design of BLAZE is portability across a broad range of parallel architectures. The multiple levels of parallelism present in BLAZE code, in principle, allow a compiler to extract the types of parallelism appropriate for the given architecture while neglecting the remainder. The features of BLAZE are described and it is shown how this language would be used in typical scientific programming.

  6. DNA strand displacement system running logic programs.

    PubMed

    Rodríguez-Patón, Alfonso; Sainz de Murieta, Iñaki; Sosík, Petr

    2014-01-01

    The paper presents a DNA-based computing model which is enzyme-free and autonomous, not requiring human intervention during the computation. The model is able to perform iterated resolution steps with logical formulae in conjunctive normal form. The implementation is based on the technique of DNA strand displacement, with each clause encoded in a separate DNA molecule. Propositions are encoded by assigning a strand to each proposition p, and its complementary strand to the proposition ¬p; clauses are encoded as strands comprising different propositions. The model allows logic programs composed of Horn clauses to be run by cascading resolution steps. The potential of the model is also demonstrated by its theoretical capability of solving SAT. The resulting SAT algorithm has a linear time complexity in the number of resolution steps, whereas its spatial complexity is exponential in the number of variables of the formula. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
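
    For readers less familiar with the operation the strands implement, a single resolution step between two clauses (represented as sets of literals) can be sketched in software as follows. This shows only the abstract logical rule, not a model of the strand-displacement chemistry, and the integer literal encoding is a hypothetical choice for the sketch.

```cpp
#include <optional>
#include <set>

// A literal is an integer: +p encodes proposition p, -p encodes its negation ¬p.
using Clause = std::set<int>;

// One resolution step: if c1 contains a literal whose negation appears in c2,
// the resolvent is the union of both clauses minus that complementary pair.
// Returns nothing if no complementary pair exists.
std::optional<Clause> resolve(const Clause& c1, const Clause& c2) {
    for (int lit : c1) {
        if (c2.count(-lit)) {
            Clause r;
            for (int l : c1) if (l != lit)  r.insert(l);
            for (int l : c2) if (l != -lit) r.insert(l);
            return r;   // an empty resolvent would signal a derived contradiction
        }
    }
    return std::nullopt;
}
```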

  7. Satisfiability of logic programming based on radial basis function neural networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamadneh, Nawaf; Sathasivam, Saratha; Tilahun, Surafel Luleseged

    2014-07-10

    In this paper, we propose a new technique to test the satisfiability of propositional logic programming and the quantified Boolean formula problem in radial basis function neural networks. For this purpose, we built radial basis function neural networks to represent propositional logic which has exactly three variables in each clause. We used the Prey-predator algorithm to calculate the output weights of the neural networks, while the K-means clustering algorithm is used to determine the hidden parameters (the centers and the widths). The mean of the sum-of-squared-errors function is used to measure the activity of the two algorithms. We applied the developed technique with recurrent radial basis function neural networks to represent the quantified Boolean formulas. The new technique can be applied to solve many applications such as electronic circuits and NP-complete problems.

  8. Design of automata theory of cubical complexes with applications to diagnosis and algorithmic description

    NASA Technical Reports Server (NTRS)

    Roth, J. P.

    1972-01-01

    The following problems are considered: (1) methods for development of logic design together with algorithms, so that it is possible to compute a test for any failure in the logic design, if such a test exists, and developing algorithms and heuristics for the purpose of minimizing the computation for tests; and (2) a method of design of logic for ultra LSI (large scale integration). It was discovered that the so-called quantum calculus can be extended to render it possible: (1) to describe the functional behavior of a mechanism component by component, and (2) to compute tests for failures, in the mechanism, using the diagnosis algorithm. The development of an algorithm for the multioutput two-level minimization problem is presented and the program MIN 360 was written for this algorithm. The program has options of mode (exact minimum or various approximations), cost function, cost bound, etc., providing flexibility.

  9. MPI_XSTAR: MPI-based Parallelization of the XSTAR Photoionization Program

    NASA Astrophysics Data System (ADS)

    Danehkar, Ashkbiz; Nowak, Michael A.; Lee, Julia C.; Smith, Randall K.

    2018-02-01

    We describe a program for the parallel implementation of multiple runs of XSTAR, a photoionization code that is used to predict the physical properties of an ionized gas from its emission and/or absorption lines. The parallelization program, called MPI_XSTAR, has been developed and implemented in the C++ language by using the Message Passing Interface (MPI) protocol, a conventional standard of parallel computing. We have benchmarked parallel multiprocessing executions of XSTAR, using MPI_XSTAR, against a serial execution of XSTAR, in terms of the parallelization speedup and the computing resource efficiency. Our experience indicates that the parallel execution runs significantly faster than the serial execution; however, the efficiency in terms of computing resource usage decreases as the number of processors used in the parallel computation increases.
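
    The pattern described, many independent XSTAR runs spread over MPI ranks, can be sketched as a simple static round-robin assignment. This is an illustration of the pattern only, not MPI_XSTAR's actual source, and `run_one_xstar` is a hypothetical placeholder for invoking the serial code on one parameter set.

```cpp
#include <mpi.h>
#include <cstdio>

// Hypothetical placeholder for a single serial XSTAR run with parameter set 'job'.
void run_one_xstar(int job) { std::printf("running job %d\n", job); }

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int num_jobs = 64;  // e.g. a grid of ionization parameters

    // Round-robin: rank r executes jobs r, r+size, r+2*size, ...
    for (int job = rank; job < num_jobs; job += size)
        run_one_xstar(job);

    MPI_Barrier(MPI_COMM_WORLD);  // all runs finished before any post-processing
    MPI_Finalize();
    return 0;
}
```

    With a static assignment like this, speedup is limited by the slowest rank's share of work, which is one reason measured efficiency tends to drop as more processors are added.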

  10. IOPA: I/O-aware parallelism adaption for parallel programs

    PubMed Central

    Liu, Tao; Liu, Yi; Qian, Chen; Qian, Depei

    2017-01-01

    With the development of multi-/many-core processors, applications need to be written as parallel programs to improve execution efficiency. For data-intensive applications that use multiple threads to read/write files simultaneously, an I/O sub-system can easily become a bottleneck when too many of these types of threads exist; on the contrary, too few threads will cause insufficient resource utilization and hurt performance. Therefore, programmers must pay much attention to parallelism control to find the appropriate number of I/O threads for an application. This paper proposes a parallelism control mechanism named IOPA that can adjust the parallelism of applications to adapt to the I/O capability of a system and balance computing resources and I/O bandwidth. The programming interface of IOPA is also provided to programmers to simplify parallel programming. IOPA is evaluated using multiple applications with both solid state and hard disk drives. The results show that the parallel applications using IOPA can achieve higher efficiency than those with a fixed number of threads. PMID:28278236
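
    A highly simplified sketch of the adaptation idea (the names and the throughput heuristic are assumptions, not IOPA's interface): measure aggregate I/O throughput at the current thread count and add I/O threads only while throughput keeps improving.

```cpp
#include <chrono>
#include <cstddef>
#include <functional>

// Measure throughput (bytes/sec) achieved by running 'io_work' with 'threads' threads.
// 'io_work' is application-specific: it performs the I/O and returns bytes transferred.
double measure_throughput(std::size_t threads,
                          const std::function<std::size_t(std::size_t)>& io_work) {
    auto t0 = std::chrono::steady_clock::now();
    std::size_t bytes = io_work(threads);
    std::chrono::duration<double> dt = std::chrono::steady_clock::now() - t0;
    return bytes / dt.count();
}

// Increase the I/O thread count while it still pays off; stop once extra threads
// no longer improve throughput by at least 'gain' (the I/O subsystem is saturated).
std::size_t adapt_io_parallelism(std::size_t max_threads, double gain,
                                 const std::function<std::size_t(std::size_t)>& io_work) {
    std::size_t best = 1;
    double best_tp = measure_throughput(best, io_work);
    for (std::size_t t = 2; t <= max_threads; ++t) {
        double tp = measure_throughput(t, io_work);
        if (tp < best_tp * (1.0 + gain)) break;  // saturated: more threads do not help
        best = t;
        best_tp = tp;
    }
    return best;
}
```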

  11. IOPA: I/O-aware parallelism adaption for parallel programs.

    PubMed

    Liu, Tao; Liu, Yi; Qian, Chen; Qian, Depei

    2017-01-01

    With the development of multi-/many-core processors, applications need to be written as parallel programs to improve execution efficiency. For data-intensive applications that use multiple threads to read/write files simultaneously, an I/O sub-system can easily become a bottleneck when too many of these types of threads exist; on the contrary, too few threads will cause insufficient resource utilization and hurt performance. Therefore, programmers must pay much attention to parallelism control to find the appropriate number of I/O threads for an application. This paper proposes a parallelism control mechanism named IOPA that can adjust the parallelism of applications to adapt to the I/O capability of a system and balance computing resources and I/O bandwidth. The programming interface of IOPA is also provided to programmers to simplify parallel programming. IOPA is evaluated using multiple applications with both solid state and hard disk drives. The results show that the parallel applications using IOPA can achieve higher efficiency than those with a fixed number of threads.

  12. Wavelet-Smoothed Interpolation of Masked Scientific Data for JPEG 2000 Compression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brislawn, Christopher M.

    2012-08-13

    How should we manage scientific data with 'holes'? Some applications, like JPEG 2000, expect logically rectangular data, but some sources, like the Parallel Ocean Program (POP), generate data that isn't defined on certain subsets. We refer to grid points that lack well-defined, scientifically meaningful sample values as 'masked' samples. Wavelet-smoothing is a highly scalable interpolation scheme for regions with complex boundaries on logically rectangular grids. Computation is based on forward/inverse discrete wavelet transforms, so runtime complexity and memory scale linearly with respect to sample count. Efficient state-of-the-art minimal realizations yield small constants (O(10)) for arithmetic complexity scaling, and in-situ implementation techniques make optimal use of memory. Implementation in two dimensions using tensor product filter banks is straightforward and should generalize routinely to higher dimensions. No hand-tuning is required when the interpolation mask changes, making the method attractive for problems with time-varying masks. It is well-suited for interpolating undefined samples prior to JPEG 2000 encoding. The method outperforms global mean interpolation, as judged by both SNR rate-distortion performance and low-rate artifact mitigation, for data distributions whose histograms do not take the form of sharply peaked, symmetric, unimodal probability density functions. These performance advantages can hold even for data whose distribution differs only moderately from the peaked unimodal case, as demonstrated by POP salinity data. The interpolation method is very general and is not tied to any particular class of applications; it could be used for more generic smooth interpolation.

  13. Efficient Graph Based Assembly of Short-Read Sequences on Hybrid Core Architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sczyrba, Alex; Pratap, Abhishek; Canon, Shane

    2011-03-22

    Advanced architectures can deliver dramatically increased throughput for genomics and proteomics applications, reducing time-to-completion in some cases from days to minutes. One such architecture, hybrid-core computing, marries a traditional x86 environment with a reconfigurable coprocessor based on field programmable gate array (FPGA) technology. In addition to higher throughput, increased performance can fundamentally improve research quality by allowing more accurate, previously impractical approaches. We will discuss the approach used by Convey's de Bruijn graph constructor for short-read, de-novo assembly. Bioinformatics applications that have random access patterns to large memory spaces, such as graph-based algorithms, experience memory performance limitations on cache-based x86 servers. Convey's highly parallel memory subsystem allows application-specific logic to simultaneously access 8192 individual words in memory, significantly increasing effective memory bandwidth over cache-based memory systems. Many algorithms, such as Velvet and other de Bruijn graph based, short-read, de-novo assemblers, can greatly benefit from this type of memory architecture. Furthermore, small data type operations (four nucleotides can be represented in two bits) make more efficient use of logic gates than the data types dictated by conventional programming models. JGI is comparing the performance of Convey's graph constructor and Velvet on both synthetic and real data. We will present preliminary results on memory usage and run time metrics for various data sets with different sizes, from small microbial and fungal genomes to a very large cow rumen metagenome. For genomes with references, we will also present assembly quality comparisons between the two assemblers.

  14. TWOS - TIME WARP OPERATING SYSTEM, VERSION 2.5.1

    NASA Technical Reports Server (NTRS)

    Bellenot, S. F.

    1994-01-01

    The Time Warp Operating System (TWOS) is a special-purpose operating system designed to support parallel discrete-event simulation. TWOS is a complete implementation of the Time Warp mechanism, a distributed protocol for virtual time synchronization based on process rollback and message annihilation. Version 2.5.1 supports simulations and other computations using both virtual time and dynamic load balancing; it does not support general time-sharing or multi-process jobs using conventional message synchronization and communication. The program utilizes the underlying operating system's resources. TWOS runs a single simulation at a time, executing it concurrently on as many processors of a distributed system as are allocated. The simulation needs only to be decomposed into objects (logical processes) that interact through time-stamped messages. TWOS provides transparent synchronization. The user does not have to add any more special logic to aid in synchronization, nor give any synchronization advice, nor even understand much about how the Time Warp mechanism works. The Time Warp Simulator (TWSIM) subdirectory contains a sequential simulation engine that is interface compatible with TWOS. This means that an application designer and programmer who wish to use TWOS can prototype code on TWSIM on a single processor and/or workstation before having to deal with the complexity of working on a distributed system. TWSIM also provides statistics about the application which may be helpful for determining the correctness of an application and for achieving good performance on TWOS. Version 2.5.1 has an updated interface that is not compatible with 2.0. The program's user manual assists the simulation programmer in the design, coding, and implementation of discrete-event simulations running on TWOS. The manual also includes a practical user's guide to the TWOS application benchmark, Colliding Pucks. TWOS supports simulations written in the C programming language. It is designed to run on the Sun3/Sun4 series computers and the BBN "Butterfly" GP-1000 computer. The standard distribution medium for this package is a .25 inch tape cartridge in TAR format. TWOS was developed in 1989 and updated in 1991. This program is a copyrighted work with all copyright vested in NASA. Sun3 and Sun4 are trademarks of Sun Microsystems, Inc.
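
    The core of the Time Warp mechanism referred to above, a straggler message with a timestamp earlier than a logical process's current virtual time forcing a rollback to a saved state, can be sketched very schematically. This is not TWOS code: state saving is reduced to a map of checkpoints, the state update is a toy, timestamps are assumed nonnegative, and anti-messages are only noted in a comment.

```cpp
#include <iterator>
#include <map>

// A schematic logical process: state advances with virtual time, and a
// checkpoint is saved before each event so the process can roll back.
struct LogicalProcess {
    double now = 0.0;                    // local virtual time
    int    state = 0;                    // simplified process state
    std::map<double, int> checkpoints;   // virtual time -> saved state

    LogicalProcess() { checkpoints[0.0] = 0; }   // initial checkpoint

    void handle(double msg_time, int effect) {
        if (msg_time < now) {
            // Straggler: roll back to the latest checkpoint at or before msg_time.
            auto it = std::prev(checkpoints.upper_bound(msg_time));
            now = it->first;
            state = it->second;
            checkpoints.erase(std::next(it), checkpoints.end());
            // A full Time Warp implementation would also send anti-messages
            // to annihilate messages it emitted after the rollback point.
        }
        checkpoints[msg_time] = state;   // save state before processing the event
        state += effect;                 // process the event (toy state update)
        now = msg_time;
    }
};
```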

  15. Programmer's guide for the GNAT computer program (numerical analysis of stratification in supercritical oxygen)

    NASA Technical Reports Server (NTRS)

    Heinmiller, J. P.

    1971-01-01

    This document is the programmer's guide for the GNAT computer program developed under MSC/TRW Task 705-2, Apollo cryogenic storage system analysis, subtask 2. Detailed logic flow charts and compiled program listings are provided for all program elements.

  16. INTERDEPENDENT SUPERIORITY AND INFERIORITY FEELINGS

    PubMed Central

    Ingham, Harrington V.

    1949-01-01

    It is postulated that in neurotic persons who have unrealistic feelings of superiority and inferiority the two are interdependent. This is a departure from the concept of previous observers that either one or the other is primary and its opposite is overcompensation. The author postulates considerable parallelism, with equal importance for each. He submits that the neurotic person forms two logic-resistant compartments for the two opposed self-estimates and that treatment which makes inroads of logic upon one compartment, simultaneously does so upon the other. Two examples are briefly reported. The neurotic benefits sought in exaggeration of capability are the same as those sought in insistence upon inferiority: Presumption of superiority at once bids for approbation and delivers the subject from the need to prove himself worthy of it in dreaded competition; exaggeration of incapability baits sympathy and makes competition unnecessary because failure is conceded. Some of the characteristics of abnormal self-estimates that distinguish them from normal are: Preoccupation with self, resistance to logical explanation of personality problems, inconsistency in reasons for beliefs in adequacy on the one hand and inadequacy on the other, unreality, rationalization of faults, and difficulty and vacillation in the selection of adequate goals. PMID:15390573

  17. Theory! The Missing Link in Understanding the Performance of Neonate/Infant Home-Visiting Programs to Prevent Child Maltreatment: A Systematic Review

    PubMed Central

    Segal, Leonie; Sara Opie, Rachelle; Dalziel, Kim

    2012-01-01

    Context Home-visiting programs have been offered for more than sixty years to at-risk families of newborns and infants. But despite decades of experience with program delivery, more than sixty published controlled trials, and more than thirty published literature reviews, there is still uncertainty surrounding the performance of these programs. Our particular interest was the performance of home visiting in reducing child maltreatment. Methods We developed a program logic framework to assist in understanding the neonate/infant home-visiting literature, identified through a systematic literature review. We tested whether success could be explained by the logic model using descriptive synthesis and statistical analysis. Findings Having a stated objective of reducing child maltreatment—a theory or mechanism of change underpinning the home-visiting program consistent with the target population and their needs and program components that can deliver against the nominated theory of change—considerably increased the chance of success. We found that only seven of fifty-three programs demonstrated such consistency, all of which had a statistically significant positive outcome, whereas of the fifteen that had no match, none was successful. Programs with a partial match had an intermediate success rate. The relationship between program success and full, partial or no match was statistically significant. Conclusions Employing a theory-driven approach provides a new way of understanding the disparate performance of neonate/infant home-visiting programs. Employing a similar theory-driven approach could also prove useful in the review of other programs that embody a diverse set of characteristics and may apply to diverse populations and settings. A program logic framework provides a rigorous approach to deriving policy-relevant meaning from effectiveness evidence of complex programs. For neonate/infant home-visiting programs, it means that in developing these programs, attention to consistency of objectives, theory of change, target population, and program components is critical. PMID:22428693

  18. Theory! The missing link in understanding the performance of neonate/infant home-visiting programs to prevent child maltreatment: a systematic review.

    PubMed

    Segal, Leonie; Sara Opie, Rachelle; Dalziel, Kim

    2012-03-01

    Home-visiting programs have been offered for more than sixty years to at-risk families of newborns and infants. But despite decades of experience with program delivery, more than sixty published controlled trials, and more than thirty published literature reviews, there is still uncertainty surrounding the performance of these programs. Our particular interest was the performance of home visiting in reducing child maltreatment. We developed a program logic framework to assist in understanding the neonate/infant home-visiting literature, identified through a systematic literature review. We tested whether success could be explained by the logic model using descriptive synthesis and statistical analysis. Having a stated objective of reducing child maltreatment-a theory or mechanism of change underpinning the home-visiting program consistent with the target population and their needs and program components that can deliver against the nominated theory of change-considerably increased the chance of success. We found that only seven of fifty-three programs demonstrated such consistency, all of which had a statistically significant positive outcome, whereas of the fifteen that had no match, none was successful. Programs with a partial match had an intermediate success rate. The relationship between program success and full, partial or no match was statistically significant. Employing a theory-driven approach provides a new way of understanding the disparate performance of neonate/infant home-visiting programs. Employing a similar theory-driven approach could also prove useful in the review of other programs that embody a diverse set of characteristics and may apply to diverse populations and settings. A program logic framework provides a rigorous approach to deriving policy-relevant meaning from effectiveness evidence of complex programs. For neonate/infant home-visiting programs, it means that in developing these programs, attention to consistency of objectives, theory of change, target population, and program components is critical. © 2012 Milbank Memorial Fund.

  19. Engaging partners to initiate evaluation efforts: tactics used and lessons learned from the prevention research centers program.

    PubMed

    Wright, Demia Sundra; Anderson, Lynda A; Brownson, Ross C; Gwaltney, Margaret K; Scherer, Jennifer; Cross, Alan W; Goodman, Robert M; Schwartz, Randy; Sims, Tom; White, Carol R

    2008-01-01

    The Centers for Disease Control and Prevention's (CDC's) Prevention Research Centers (PRC) Program underwent a 2-year evaluation planning project using a participatory process that allowed perspectives from the national community of PRC partners to be expressed and reflected in a national logic model. The PRC Program recognized the challenge in developing a feasible, useable, and relevant evaluation process for a large, diverse program. To address the challenge, participatory and utilization-focused evaluation models were used. Four tactics guided the evaluation planning process: 1) assessing stakeholders' communication needs and existing communication mechanisms and infrastructure; 2) using existing mechanisms and establishing others as needed to inform, educate, and request feedback; 3) listening to and using feedback received; and 4) obtaining adequate resources and building flexibility into the project plan to support multifaceted mechanisms for data collection. Participatory methods resulted in buy-in from stakeholders and the development of a national logic model. Benefits included CDC's use of the logic model for program planning and development of a national evaluation protocol and increased expectations among PRC partners for involvement. Challenges included the time, effort, and investment of program resources required for the participatory approach and the identification of whom to engage and when to engage them for feedback on project decisions. By using a participatory and utilization-focused model, program partners positively influenced how CDC developed an evaluation plan. The tactics we used can guide the involvement of program stakeholders and help with decisions on appropriate methods and approaches for engaging partners.

  20. Parallel language constructs for tensor product computations on loosely coupled architectures

    NASA Technical Reports Server (NTRS)

    Mehrotra, Piyush; Vanrosendale, John

    1989-01-01

    Distributed memory architectures offer high levels of performance and flexibility, but have proven awkward to program. Current languages for nonshared memory architectures provide a relatively low-level programming environment and are poorly suited to modular programming and to the construction of libraries. A set of language primitives designed to allow the specification of parallel numerical algorithms at a higher level is described. The focus is on tensor product array computations, along with a simple but important class of numerical algorithms. The problem of programming 1-D kernel routines, such as parallel tridiagonal solvers, is considered first, and it is then examined how such parallel kernels can be combined to form parallel tensor product algorithms.

  1. Improvements to the adaptive maneuvering logic program

    NASA Technical Reports Server (NTRS)

    Burgin, George H.

    1986-01-01

    The Adaptive Maneuvering Logic (AML) computer program simulates close-in, one-on-one air-to-air combat between two fighter aircraft. Three important improvements are described. First, the previously available versions of AML were examined for their suitability as a baseline program. The selected program was then revised to eliminate some programming bugs which were uncovered over the years. A listing of this baseline program is included. Second, the equations governing the motion of the aircraft were completely revised. This resulted in a model with substantially higher fidelity than the original equations of motion provided. It also completely eliminated the over-the-top problem, which occurred in the older versions when the AML-driven aircraft attempted a vertical or near vertical loop. Third, the requirements for a versatile generic, yet realistic, aircraft model were studied and implemented in the program. The report contains detailed tables which make the generic aircraft to be either a modern, high performance aircraft, an older high performance aircraft, or a previous generation jet fighter.

  2. Model checking for linear temporal logic: An efficient implementation

    NASA Technical Reports Server (NTRS)

    Sherman, Rivi; Pnueli, Amir

    1990-01-01

    This report provides evidence to support the claim that model checking for linear temporal logic (LTL) is practically efficient. Two implementations of a linear temporal logic model checker are described. One is based on transforming the model checking problem into a satisfiability problem; the other checks an LTL formula for a finite model by computing the cross-product of the finite state transition graph of the program with a structure containing all possible models for the property. An experiment was conducted with a set of mutual exclusion algorithms, testing safety and liveness under fairness for these algorithms.
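
    As a toy illustration of the second strategy (the cross-product of the program's state graph with a property structure), the sketch below checks a simple safety property by reachability in the product of a small program graph and a two-state monitor. The names and example property are mine, and full LTL checking, including liveness under fairness as in the report, additionally requires an acceptance (cycle) search.

```python
# Toy cross-product reachability check for a safety property.
from collections import deque

# Program graph: states and transitions of one process.
program = {"idle": ["trying"], "trying": ["critical"], "critical": ["idle"]}
label = {"idle": "i", "trying": "t", "critical": "c"}

# Safety monitor for "critical is always preceded by trying":
# the monitor state records whether 'trying' has been seen since the last 'idle'.
def monitor_step(q, lab):
    if lab == "c" and not q:
        return "error"
    if lab == "t":
        return True
    if lab == "i":
        return False
    return q

def bad_state_reachable(init=("idle", False)):
    seen, work = {init}, deque([init])
    while work:
        s, q = work.popleft()
        for s2 in program[s]:
            q2 = monitor_step(q, label[s2])
            if q2 == "error":
                return True                     # violating product state reached
            if (s2, q2) not in seen:
                seen.add((s2, q2))
                work.append((s2, q2))
    return False

print("violation reachable:", bad_state_reachable())   # False for this graph
```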

  3. Quantitative structure-activity relationships by neural networks and inductive logic programming. II. The inhibition of dihydrofolate reductase by triazines

    NASA Astrophysics Data System (ADS)

    Hirst, Jonathan D.; King, Ross D.; Sternberg, Michael J. E.

    1994-08-01

    One of the largest available data sets for developing a quantitative structure-activity relationship (QSAR) — the inhibition of dihydrofolate reductase (DHFR) by 2,4-diamino-6,6-dimethyl-5-phenyl-dihydrotriazine derivatives — has been used for a sixfold cross-validation trial of neural networks, inductive logic programming (ILP) and linear regression. No statistically significant difference was found between the predictive capabilities of the methods. However, the representation of molecules by attributes, which is integral to the ILP approach, provides understandable rules about drug-receptor interactions.

  4. d-Neighborhood system and generalized F-contraction in dislocated metric space.

    PubMed

    Kumari, P Sumati; Zoto, Kastriot; Panthi, Dinesh

    2015-01-01

    This paper gives an answer to Question 1.1 posed by Hitzler (Generalized metrics and topology in logic programming semantics, 2001) by means of "Topological aspects of d-metric space with d-neighborhood system". We have investigated the topological aspects of a d-neighborhood system obtained from a dislocated metric space (simply, d-metric space), which has useful applications in the semantic analysis of logic programming. Furthermore, we have generalized the notion of F-contraction in the view of d-metric spaces and investigated the uniqueness of fixed points and coincidence points of such mappings.
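
    For orientation, the standard definition of a dislocated metric (the d-metric used in Hitzler and Seda's logic programming semantics) relaxes only the identity axiom of an ordinary metric; the axioms below are stated from that standard usage rather than quoted from the paper.

```latex
% Dislocated (d-)metric: self-distance need not vanish.
% d : X \times X \to [0,\infty) such that, for all x, y, z \in X:
\begin{align*}
  &\text{(d1)}\quad d(x,y) = 0 \;\Rightarrow\; x = y, \\
  &\text{(d2)}\quad d(x,y) = d(y,x), \\
  &\text{(d3)}\quad d(x,y) \le d(x,z) + d(z,y).
\end{align*}
```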

  5. Methods for design and evaluation of parallel computing systems (The PISCES project)

    NASA Technical Reports Server (NTRS)

    Pratt, Terrence W.; Wise, Robert; Haught, Mary JO

    1989-01-01

    The PISCES project started in 1984 under the sponsorship of the NASA Computational Structural Mechanics (CSM) program. A PISCES 1 programming environment and parallel FORTRAN were implemented in 1984 for the DEC VAX (using UNIX processes to simulate parallel processes). This system was used for experimentation with parallel programs for scientific applications and AI (dynamic scene analysis) applications. PISCES 1 was ported to a network of Apollo workstations by N. Fitzgerald.

  6. System for corrosion monitoring in pipeline applying fuzzy logic mathematics

    NASA Astrophysics Data System (ADS)

    Kuzyakov, O. N.; Kolosova, A. L.; Andreeva, M. A.

    2018-05-01

    A list of factors influencing corrosion rate on the external side of underground pipeline is determined. Principles of constructing a corrosion monitoring system are described; the system performance algorithm and program are elaborated. A comparative analysis of methods for calculating corrosion rate is undertaken. Fuzzy logic mathematics is applied to reduce calculations while considering a wider range of corrosion factors.

  7. Using the Computer to Teach Methods and Interpretative Skills in the Humanities: Implementing a Project.

    ERIC Educational Resources Information Center

    Jones, Bruce William

    The results of implementing computer-assisted instruction (CAI) in two religion courses and a logic course at California State College, Bakersfield, are examined along with student responses. The main purpose of the CAI project was to teach interpretive skills. The most positive results came in the logic course. The programs in the New Testament…

  8. Coping with Logical Fallacies: A Developmental Training Program for Learning to Reason

    ERIC Educational Resources Information Center

    Christoforides, Michael; Spanoudis, George; Demetriou, Andreas

    2016-01-01

    This study trained children to master logical fallacies and examined how learning is related to processing efficiency and fluid intelligence (gf). A total of one hundred and eighty 8- and 11-year-old children living in Cyprus were allocated to a control, a limited (LI), and a full instruction (FI) group. The LI group learned the notion of logical…

  9. Derivation of sorting programs

    NASA Technical Reports Server (NTRS)

    Varghese, Joseph; Loganantharaj, Rasiah

    1990-01-01

    Program synthesis for critical applications has become a viable alternative to program verification. Nested resolution and its extension are used to synthesize a set of sorting programs from their first-order logic specifications. A set of sorting programs, including naive sort, merge sort, and insertion sort, was successfully synthesized starting from the same set of specifications.
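
    As a rough picture of what such synthesis targets, insertion sort follows almost directly from the specification "the output is an ordered permutation of the input." The sketch below is hand-written for illustration and is not the output of the nested-resolution procedure.

```python
# Insertion sort written in the recursive, specification-like style that
# synthesis from first-order specifications typically produces.
def insert(x, sorted_xs):
    """Insert x into an already ordered list, preserving order."""
    if not sorted_xs or x <= sorted_xs[0]:
        return [x] + sorted_xs
    return [sorted_xs[0]] + insert(x, sorted_xs[1:])

def insertion_sort(xs):
    """sort([]) = []; sort(h:t) = insert(h, sort(t))."""
    if not xs:
        return []
    return insert(xs[0], insertion_sort(xs[1:]))

print(insertion_sort([3, 1, 2]))  # [1, 2, 3]
```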

  10. Haskell before Haskell: Curry's Contribution to Programming (1946-1950)

    NASA Astrophysics Data System (ADS)

    de Mol, Liesbeth; Bullynck, Maarten; Carlé, Martin

    This paper discusses Curry's work on how to implement the problem of inverse interpolation on the ENIAC (1946) and his subsequent work on developing a theory of program composition (1948-1950). It is shown that Curry anticipated automatic programming and that his logical work influenced his composition of programs.

  11. The Programmable Calculator in the Classroom.

    ERIC Educational Resources Information Center

    Stolarz, Theodore J.

    The uses of programmable calculators in the mathematics classroom are presented. A discussion of the "microelectronics revolution" that has brought programmable calculators into our society is also included. It is pointed out that the logical or mental processes used to program the programmable calculator are identical to those used to program…

  12. Special purpose parallel computer architecture for real-time control and simulation in robotic applications

    NASA Technical Reports Server (NTRS)

    Fijany, Amir (Inventor); Bejczy, Antal K. (Inventor)

    1993-01-01

    This is a real-time robotic controller and simulator which is a MIMD-SIMD parallel architecture for interfacing with an external host computer and providing a high degree of parallelism in computations for robotic control and simulation. It includes a host processor for receiving instructions from the external host computer and for transmitting answers to the external host computer. There are a plurality of SIMD microprocessors, each SIMD processor being a SIMD parallel processor capable of exploiting fine grain parallelism and further being able to operate asynchronously to form a MIMD architecture. Each SIMD processor comprises a SIMD architecture capable of performing two matrix-vector operations in parallel while fully exploiting parallelism in each operation. There is a system bus connecting the host processor to the plurality of SIMD microprocessors and a common clock providing a continuous sequence of clock pulses. There is also a ring structure interconnecting the plurality of SIMD microprocessors and connected to the clock for providing the clock pulses to the SIMD microprocessors and for providing a path for the flow of data and instructions between the SIMD microprocessors. The host processor includes logic for controlling the RRCS by interpreting instructions sent by the external host computer, decomposing the instructions into a series of computations to be performed by the SIMD microprocessors, using the system bus to distribute associated data among the SIMD microprocessors, and initiating activity of the SIMD microprocessors to perform the computations on the data by procedure call.

  13. Diagnosable structured logic array

    NASA Technical Reports Server (NTRS)

    Whitaker, Sterling (Inventor); Miles, Lowell (Inventor); Gambles, Jody (Inventor); Maki, Gary K. (Inventor)

    2009-01-01

    A diagnosable structured logic array and associated process is provided. A base cell structure is provided comprising a logic unit comprising a plurality of input nodes, a plurality of selection nodes, and an output node, a plurality of switches coupled to the selection nodes, where the switches comprises a plurality of input lines, a selection line and an output line, a memory cell coupled to the output node, and a test address bus and a program control bus coupled to the plurality of input lines and the selection line of the plurality of switches. A state on each of the plurality of input nodes is verifiably loaded and read from the memory cell. A trusted memory block is provided. The associated process is provided for testing and verifying a plurality of truth table inputs of the logic unit.

  14. Integrated payload and mission planning, phase 3. Volume 2: Logic/Methodology for preliminary grouping of spacelab and mixed cargo payloads

    NASA Technical Reports Server (NTRS)

    Rodgers, T. E.; Johnson, J. F.

    1977-01-01

    The logic and methodology for a preliminary grouping of Spacelab and mixed-cargo payloads is proposed in a form that can be readily coded into a computer program by NASA. The logic developed for this preliminary cargo grouping analysis is summarized. Principal input data include the NASA Payload Model, payload descriptive data, Orbiter and Spacelab capabilities, and NASA guidelines and constraints. The first step in the process is a launch interval selection in which the time interval for payload grouping is identified. Logic flow steps are then taken to group payloads and define flight configurations based on criteria that includes dedication, volume, area, orbital parameters, pointing, g-level, mass, center of gravity, energy, power, and crew time.
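
    A minimal greedy sketch (my illustration, not the documented methodology) conveys how a grouping step of this kind can be mechanized; the real logic screens many more criteria, including dedication, orbital parameters, pointing, g-level, center of gravity, energy, power, and crew time.

```python
# Greedy grouping of payloads into flights under mass and volume limits only.
def group_payloads(payloads, max_mass, max_volume):
    flights = []
    for name, mass, volume in sorted(payloads, key=lambda p: -p[1]):
        for flight in flights:
            if flight["mass"] + mass <= max_mass and flight["vol"] + volume <= max_volume:
                flight["names"].append(name)
                flight["mass"] += mass
                flight["vol"] += volume
                break
        else:
            flights.append({"names": [name], "mass": mass, "vol": volume})
    return flights

payloads = [("P1", 9000, 40), ("P2", 4000, 30), ("P3", 6000, 50), ("P4", 2000, 10)]
print(group_payloads(payloads, max_mass=14000, max_volume=80))
```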

  15. Beyond input-output computings: error-driven emergence with parallel non-distributed slime mold computer.

    PubMed

    Aono, Masashi; Gunji, Yukio-Pegio

    2003-10-01

    Emergence derived from errors is of key importance both for novel computing and for novel uses of the computer. In this paper, we propose an implementable experimental plan for biological computing designed to elicit the emergent properties of complex systems. An individual plasmodium of the true slime mold Physarum polycephalum acts as the slime mold computer. Modifying the Elementary Cellular Automaton so that it entails the global synchronization problem of parallel computing yields the NP-complete problem solved by the slime mold computer. The possibility of solving the problem without supplying either all possible results or an explicit prescription for solution-seeking is discussed. In slime mold computing, the distributivity of the local computing logic can change dynamically, and its parallel non-distributed computing cannot be reduced to the spatial addition of multiple serial computations. A computing system based on the exhaustive absence of a super-system may produce something more than filling the vacancy.

  16. Computer-aided programming for message-passing system; Problems and a solution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, M.Y.; Gajski, D.D.

    1989-12-01

    As the number of processors and the complexity of problems to be solved increase, programming multiprocessing systems becomes more difficult and error-prone. Program development tools are necessary since programmers are not able to develop complex parallel programs efficiently. Parallel models of computation, parallelization problems, and tools for computer-aided programming (CAP) are discussed. As an example, a CAP tool that performs scheduling and inserts communication primitives automatically is described. It also generates the performance estimates and other program quality measures to help programmers in improving their algorithms and programs.

  17. 119. VIEW OF NORTH SIDE OF LANDLINE INSTRUMENTATION ROOM (206), ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    119. VIEW OF NORTH SIDE OF LANDLINE INSTRUMENTATION ROOM (206), LSB (BLDG. 751). POWER DISTRIBUTION UNITS AND CABLE DISTRIBUTION UNITS ON RIGHT SIDE OF PHOTO; LOGIC CONTROL AND MONITOR UNITS FOR BOOSTER AND FUEL SYSTEMS LEFT OF AND PARALLEL TO EAST ROW OF CABINETS; SIGNAL CONDITIONERS AT NORTH END OF ROOM PERPENDICULAR TO OTHER CABINETS. - Vandenberg Air Force Base, Space Launch Complex 3, Launch Pad 3 East, Napa & Alden Roads, Lompoc, Santa Barbara County, CA

  18. Common sense about taste: from mammals to insects.

    PubMed

    Yarmolinsky, David A; Zuker, Charles S; Ryba, Nicholas J P

    2009-10-16

    The sense of taste is a specialized chemosensory system dedicated to the evaluation of food and drink. Despite the fact that vertebrates and insects have independently evolved distinct anatomic and molecular pathways for taste sensation, there are clear parallels in the organization and coding logic between the two systems. There is now persuasive evidence that tastant quality is mediated by labeled lines, whereby distinct and strictly segregated populations of taste receptor cells encode each of the taste qualities.

  19. The Design and Performance Characteristics of a Cellular Logic 3-D Image Classification Processor.

    DTIC Science & Technology

    1981-04-01

    [Only fragments of this record are legible: reference entries citing AGARD Proc. No. 94 on Artificial Intelligence, 217: 1-13 (1971) and Golay, Marcel J. E., "Hexagonal Parallel Pattern Transformations," IEEE Trans. on…, plus partial sentences noting that the nonrandom nature of the data and features must be understood in order to intelligently select a reasonable three-dimensional noise filter, and that classifying tactical targets located hundreds of meters away, controlled and disguised by equally intelligent human beings, is difficult.]

  20. Eigensolution of finite element problems in a completely connected parallel architecture

    NASA Technical Reports Server (NTRS)

    Akl, Fred A.; Morel, Michael R.

    1989-01-01

    A parallel algorithm for the solution of the generalized eigenproblem in linear elastic finite element analysis, (K)(phi)=(M)(phi)(omega), where (K) and (M) are of order N, and (omega) is of order q is presented. The parallel algorithm is based on a completely connected parallel architecture in which each processor is allowed to communicate with all other processors. The algorithm has been successfully implemented on a tightly coupled multiple-instruction-multiple-data (MIMD) parallel processing computer, Cray X-MP. A finite element model is divided into m domains each of which is assumed to process n elements. Each domain is then assigned to a processor, or to a logical processor (task) if the number of domains exceeds the number of physical processors. The macro-tasking library routines are used in mapping each domain to a user task. Computational speed-up and efficiency are used to determine the effectiveness of the algorithm. The effect of the number of domains, the number of degrees-of-freedom located along the global fronts and the dimension of the subspace on the performance of the algorithm are investigated. For a 64-element rectangular plate, speed-ups of 1.86, 3.13, 3.18 and 3.61 are achieved on two, four, six and eight processors, respectively.
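
    For readers who want a serial reference point for the generalized eigenproblem being solved, the sketch below uses a standard library call; it is only a baseline, since the paper's contribution is the domain-wise parallel organization, not the eigensolver call itself.

```python
# Serial reference for the generalized eigenproblem K*phi = M*phi*omega
# with symmetric K and symmetric positive-definite M.
import numpy as np
from scipy.linalg import eigh

n, q = 200, 4                       # model size and subspace dimension
A = np.random.rand(n, n)
K = A + A.T + n * np.eye(n)         # symmetric stiffness-like matrix
M = np.eye(n) + 0.01 * (A @ A.T)    # symmetric positive-definite mass-like matrix

eigvals, eigvecs = eigh(K, M)       # solves K v = lambda M v
print(eigvals[:q])                  # lowest q eigenvalues of the pencil (K, M)
```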

  1. Composite faces are not (necessarily) processed coactively: A test using systems factorial technology and logical-rule models.

    PubMed

    Cheng, Xue Jun; McCarthy, Callum J; Wang, Tony S L; Palmeri, Thomas J; Little, Daniel R

    2018-06-01

    Upright faces are thought to be processed more holistically than inverted faces. In the widely used composite face paradigm, holistic processing is inferred from interference in recognition performance from a to-be-ignored face half for upright and aligned faces compared with inverted or misaligned faces. We sought to characterize the nature of holistic processing in composite faces in computational terms. We use logical-rule models (Fifić, Little, & Nosofsky, 2010) and Systems Factorial Technology (Townsend & Nozawa, 1995) to examine whether composite faces are processed through pooling top and bottom face halves into a single processing channel (coactive processing), which is one common mechanistic definition of holistic processing. By specifically operationalizing holistic processing as the pooling of features into a single decision process in our task, we are able to distinguish it from other processing models that may underlie composite face processing. For instance, a failure of selective attention might result even when top and bottom components of composite faces are processed in serial or in parallel without processing the entire face coactively. Our results show that performance is best explained by a mixture of serial and parallel processing architectures across all 4 upright and inverted, aligned and misaligned face conditions. The results indicate multichannel, featural processing of composite faces in a manner inconsistent with the notion of coactivity. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  2. Parallel implementation of an adaptive and parameter-free N-body integrator

    NASA Astrophysics Data System (ADS)

    Pruett, C. David; Ingham, William H.; Herman, Ralph D.

    2011-05-01

    Previously, Pruett et al. (2003) [3] described an N-body integrator of arbitrarily high order M with an asymptotic operation count of O(MN). The algorithm's structure lends itself readily to data parallelization, which we document and demonstrate here in the integration of point-mass systems subject to Newtonian gravitation. High order is shown to benefit parallel efficiency. The resulting N-body integrator is robust, parameter-free, highly accurate, and adaptive in both time-step and order. Moreover, it exhibits linear speedup on distributed parallel processors, provided that each processor is assigned at least a handful of bodies. Program summary: Program title: PNB.f90. Catalogue identifier: AEIK_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIK_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 3052. No. of bytes in distributed program, including test data, etc.: 68 600. Distribution format: tar.gz. Programming language: Fortran 90 and OpenMPI. Computer: All shared or distributed memory parallel processors. Operating system: Unix/Linux. Has the code been vectorized or parallelized?: The code has been parallelized but has not been explicitly vectorized. RAM: Dependent upon N. Classification: 4.3, 4.12, 6.5. Nature of problem: High accuracy numerical evaluation of trajectories of N point masses each subject to Newtonian gravitation. Solution method: Parallel and adaptive extrapolation in time via power series of arbitrary degree. Running time: 5.1 s for the demo program supplied with the package.
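
    The data parallelization over bodies mentioned above can be pictured with the O(N²) pairwise-acceleration evaluation that dominates each step; the sketch below is illustrative Python, not an excerpt from PNB.f90, and the outer loop over bodies is the axis that would be distributed across processors.

```python
# Pairwise Newtonian accelerations; the loop over bodies is the parallel axis.
import numpy as np

def accelerations(pos, mass, G=1.0, eps=1e-9):
    """Newtonian acceleration on every body from all others."""
    acc = np.zeros_like(pos)
    for i in range(len(pos)):                 # distribute blocks of i over processors
        r = pos - pos[i]                      # vectors from body i to all bodies
        d3 = (np.sum(r * r, axis=1) + eps) ** 1.5
        d3[i] = np.inf                        # exclude self-interaction
        acc[i] = G * np.sum((mass[:, None] * r) / d3[:, None], axis=0)
    return acc

pos = np.random.rand(16, 3)
mass = np.ones(16)
print(accelerations(pos, mass).shape)         # (16, 3)
```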

  3. Magnetic-field-controlled reconfigurable semiconductor logic.

    PubMed

    Joo, Sungjung; Kim, Taeyueb; Shin, Sang Hoon; Lim, Ju Young; Hong, Jinki; Song, Jin Dong; Chang, Joonyeon; Lee, Hyun-Woo; Rhie, Kungwon; Han, Suk Hee; Shin, Kyung-Ho; Johnson, Mark

    2013-02-07

    Logic devices based on magnetism show promise for increasing computational efficiency while decreasing consumed power. They offer zero quiescent power and yet combine novel functions such as programmable logic operation and non-volatile built-in memory. However, practical efforts to adapt a magnetic device to logic suffer from a low signal-to-noise ratio and other performance attributes that are not adequate for logic gates. Rather than exploiting magnetoresistive effects that result from spin-dependent transport of carriers, we have approached the development of a magnetic logic device in a different way: we use the phenomenon of large magnetoresistance found in non-magnetic semiconductors in high electric fields. Here we report a device showing a strong diode characteristic that is highly sensitive to both the sign and the magnitude of an external magnetic field, offering a reversible change between two different characteristic states by the application of a magnetic field. This feature results from magnetic control of carrier generation and recombination in an InSb p-n bilayer channel. Simple circuits combining such elementary devices are fabricated and tested, and Boolean logic functions including AND, OR, NAND and NOR are performed. They are programmed dynamically by external electric or magnetic signals, demonstrating magnetic-field-controlled semiconductor reconfigurable logic at room temperature. This magnetic technology permits a new kind of spintronic device, characterized as a current switch rather than a voltage switch, and provides a simple and compact platform for non-volatile reconfigurable logic devices.

  4. Early Grades Ideas.

    ERIC Educational Resources Information Center

    Classroom Computer Learning, 1984

    1984-01-01

    Five computer-oriented classroom activities are suggested. They include: Logo programming to help students develop estimation, logic and spatial skills; creating flow charts; inputting data; making snowflakes using Logo; and developing and using a database management program. (JN)

  5. ITS logical architecture : traceability matrix.

    DOT National Transportation Integrated Search

    2003-11-01

    This document provides information to aid in understanding and using the Long-Term Pavement Performance (LTPP) program pavement performance database. This document provides an introduction to the structure of the LTPP program, the relational structur...

  6. Parallel solution of sparse one-dimensional dynamic programming problems

    NASA Technical Reports Server (NTRS)

    Nicol, David M.

    1989-01-01

    Parallel computation offers the potential for quickly solving large computational problems. However, it is often a non-trivial task to effectively use parallel computers. Solution methods must sometimes be reformulated to exploit parallelism; the reformulations are often more complex than their slower serial counterparts. We illustrate these points by studying the parallelization of sparse one-dimensional dynamic programming problems, those which do not obviously admit substantial parallelization. We propose a new method for parallelizing such problems, develop analytic models which help us to identify problems which parallelize well, and compare the performance of our algorithm with existing algorithms on a multiprocessor.
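
    One generic way to expose the parallelism discussed here (a sketch of the general idea, not the paper's specific algorithm) is to stage the indices of a sparse 1-D dynamic program into dependency levels and evaluate each level's entries independently.

```python
# Level-by-level evaluation of a sparse 1-D dynamic program
# f[i] = min over sparse predecessors j of f[j] + cost(j, i).
def levelize(preds, n):
    """Group indices 0..n-1 into dependency levels."""
    level = [0] * n
    for i in range(n):
        level[i] = 1 + max((level[j] for j in preds(i)), default=-1)
    buckets = {}
    for i, l in enumerate(level):
        buckets.setdefault(l, []).append(i)
    return [buckets[l] for l in sorted(buckets)]

def sparse_dp(n, preds, cost, base=0.0):
    f = [None] * n
    for indices in levelize(preds, n):
        # entries in one level depend only on earlier levels,
        # so this inner loop could run in parallel across processors
        for i in indices:
            ps = list(preds(i))
            f[i] = base if not ps else min(f[j] + cost(j, i) for j in ps)
    return f

# Example: each index depends only on index i-2 (a sparse dependence pattern).
print(sparse_dp(8, lambda i: [i - 2] if i >= 2 else [], lambda j, i: 1.0))
```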

  7. 76 FR 66309 - Pilot Program for Parallel Review of Medical Products; Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-26

    ... DEPARTMENT OF HEALTH AND HUMAN SERVICES Centers for Medicare and Medicaid Services [CMS-3180-N2] Food and Drug Administration [Docket No. FDA-2010-N-0308] Pilot Program for Parallel Review of Medical... technologies to participate in a program of parallel FDA-CMS review. The document was published with an...

  8. Program manual for the Shuttle Electric Power System analysis computer program (SEPS), volume 1 of program documentation

    NASA Technical Reports Server (NTRS)

    Bains, R. W.; Herwig, H. A.; Luedeman, J. K.; Torina, E. M.

    1974-01-01

    The Shuttle Electric Power System (SEPS) computer program is considered in terms of the program manual, programmer guide, and program utilization. The main objective is to provide the information necessary to interpret and use the routines comprising the SEPS program. Subroutine descriptions including the name, purpose, method, variable definitions, and logic flow are presented.

  9. Towards a molecular logic machine

    NASA Astrophysics Data System (ADS)

    Remacle, F.; Levine, R. D.

    2001-06-01

    Finite state logic machines can be realized by pump-probe spectroscopic experiments on an isolated molecule. The most elaborate setup, a Turing machine, can be programmed to carry out a specific computation. We argue that a molecule can be similarly programmed, and provide examples using two photon spectroscopies. The states of the molecule serve as the possible states of the head of the Turing machine and the physics of the problem determines the possible instructions of the program. The tape is written in an alphabet that allows the listing of the different pump and probe signals that are applied in a given experiment. Different experiments using the same set of molecular levels correspond to different tapes that can be read and processed by the same head and program. The analogy to a Turing machine is not a mechanical one and is not completely molecular because the tape is not part of the molecular machine. We therefore also discuss molecular finite state machines, such as sequential devices, for which the tape is not part of the machine. Nonmolecular tapes allow for quite long input sequences with a rich alphabet (at the level of 7 bits) and laser pulse shaping experiments provide concrete examples. Single molecule spectroscopies show that a single molecule can be repeatedly cycled through a logical operation.

  10. Massively parallel implementation of 3D-RISM calculation with volumetric 3D-FFT.

    PubMed

    Maruyama, Yutaka; Yoshida, Norio; Tadano, Hiroto; Takahashi, Daisuke; Sato, Mitsuhisa; Hirata, Fumio

    2014-07-05

    A new three-dimensional reference interaction site model (3D-RISM) program for massively parallel machines, combined with the volumetric 3D fast Fourier transform (3D-FFT), was developed and tested on the RIKEN K supercomputer. The ordinary parallel 3D-RISM program has a limit on the degree of parallelization because of the limitations of the slab-type 3D-FFT. The volumetric 3D-FFT relieves this limitation drastically. We tested the 3D-RISM calculation on a large, fine calculation cell (2048³ grid points) on 16,384 nodes, each having eight CPU cores. The new 3D-RISM program achieved excellent parallel scalability on the RIKEN K supercomputer. As a benchmark application, we employed the program, combined with molecular dynamics simulation, to analyze the oligomerization process of a chymotrypsin Inhibitor 2 mutant. The results demonstrate that the massively parallel 3D-RISM program is effective for analyzing the hydration properties of large biomolecular systems. Copyright © 2014 Wiley Periodicals, Inc.
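
    The headroom gained by moving from a slab to a volumetric decomposition can be seen with back-of-the-envelope arithmetic; the limits below are the usual rough bounds for slab versus pencil/volumetric 3D-FFTs and are my illustration, not figures from the paper beyond the stated grid size and node count.

```python
# Rough decomposition limits for a 3D FFT on an N^3 grid.
N = 2048
cores = 16384 * 8                      # nodes x cores per node used in the paper

slab_limit = N                         # slab: at most one x-y plane per task
pencil_limit = N * N                   # pencil/volumetric: up to one column per task

print(f"available cores : {cores}")           # 131072
print(f"slab limit      : {slab_limit}")      # 2048    -> cannot use all cores
print(f"pencil limit    : {pencil_limit}")    # 4194304 -> ample headroom
```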

  11. Energy-Efficient Wide Datapath Integer Arithmetic Logic Units Using Superconductor Logic

    NASA Astrophysics Data System (ADS)

    Ayala, Christopher Lawrence

    Complementary Metal-Oxide-Semiconductor (CMOS) technology is currently the most widely used integrated circuit technology. As CMOS approaches the physical limitations of scaling, it is unclear whether or not it can provide long-term support for niche areas such as high-performance computing and telecommunication infrastructure, particularly with the emergence of cloud computing. Alternatively, superconductor technologies based on Josephson junction (JJ) switching elements, such as Rapid Single Flux Quantum (RSFQ) logic and especially its new variant, Energy-Efficient Rapid Single Flux Quantum (ERSFQ) logic, have the capability to provide an ultra-high-speed, low power platform for digital systems. The objective of this research is to design and evaluate energy-efficient, high-speed 32-bit integer Arithmetic Logic Units (ALUs) implemented using RSFQ and ERSFQ logic as the first steps towards achieving practical Very-Large-Scale-Integration (VLSI) complexity in digital superconductor electronics. First, a tunable VHDL superconductor cell library is created to provide a mechanism to conduct design exploration and evaluation of superconductor digital circuits from the perspectives of functionality, complexity, performance, and energy-efficiency. Second, hybrid wave-pipelining techniques developed earlier for wide datapath RSFQ designs have been used for efficient arithmetic and logic circuit implementations. To develop the core foundation of the ALU, the ripple-carry adder and the Kogge-Stone parallel prefix carry look-ahead adder are studied as representative candidates on opposite ends of the design spectrum. By combining the high-performance features of the Kogge-Stone structure and the low complexity of the ripple-carry adder, a 32-bit asynchronous wave-pipelined hybrid sparse-tree ALU has been designed and evaluated using the VHDL cell library tuned to HYPRES' gate-level characteristics. The designs and techniques from this research have been implemented using RSFQ logic and prototype chips have been fabricated. As a joint work with HYPRES, a 20 GHz 8-bit Kogge-Stone ALU consisting of 7,950 JJs total has been fabricated using a 1.5 μm 4.5 kA/cm² process and fully demonstrated. An 8-bit sparse-tree ALU (8,832 JJs total) and a 16-bit sparse-tree adder (12,785 JJs total) have also been fabricated using a 1.0 μm 10 kA/cm² process and demonstrated under collaboration with Yokohama National University and Nagoya University (Japan).
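
    The Kogge-Stone structure referred to above is a parallel-prefix computation of carries in log2(width) stages. The sketch below is a software model of that idea for illustration only; the dissertation realizes it in RSFQ gate-level logic, not in code.

```python
# Software model of Kogge-Stone parallel-prefix carry computation.
def kogge_stone_add(a, b, width=32):
    g = [((a >> i) & (b >> i)) & 1 for i in range(width)]   # generate bits
    p = [((a >> i) ^ (b >> i)) & 1 for i in range(width)]   # propagate bits
    G, P = g[:], p[:]
    dist = 1
    while dist < width:                      # log2(width) combining stages
        newG, newP = G[:], P[:]
        for i in range(dist, width):         # positions within a stage are independent
            newG[i] = G[i] | (P[i] & G[i - dist])
            newP[i] = P[i] & P[i - dist]
        G, P = newG, newP
        dist *= 2
    carry = [0] + G[:-1]                     # carry into bit i = group generate of lower bits
    result = 0
    for i in range(width):
        result |= (p[i] ^ carry[i]) << i     # sum bit = propagate XOR carry-in
    return result

a, b = 0x1234ABCD, 0x0FEDCBA3
assert kogge_stone_add(a, b) == (a + b) & 0xFFFFFFFF
print(hex(kogge_stone_add(a, b)))
```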

  12. F-Nets and Software Cabling: Deriving a Formal Model and Language for Portable Parallel Programming

    NASA Technical Reports Server (NTRS)

    DiNucci, David C.; Saini, Subhash (Technical Monitor)

    1998-01-01

    Parallel programming is still being based upon antiquated sequence-based definitions of the terms "algorithm" and "computation", resulting in programs which are architecture dependent and difficult to design and analyze. By focusing on obstacles inherent in existing practice, a more portable model is derived here, which is then formalized into a model called Soviets which utilizes a combination of imperative and functional styles. This formalization suggests more general notions of algorithm and computation, as well as insights into the meaning of structured programming in a parallel setting. To illustrate how these principles can be applied, a very-high-level graphical architecture-independent parallel language, called Software Cabling, is described, with many of the features normally expected from today's computer languages (e.g. data abstraction, data parallelism, and object-based programming constructs).

  13. Directions in parallel programming: HPF, shared virtual memory and object parallelism in pC++

    NASA Technical Reports Server (NTRS)

    Bodin, Francois; Priol, Thierry; Mehrotra, Piyush; Gannon, Dennis

    1994-01-01

    Fortran and C++ are the dominant programming languages used in scientific computation. Consequently, extensions to these languages are the most popular for programming massively parallel computers. We discuss two such approaches to parallel Fortran and one approach to C++. The High Performance Fortran Forum has designed HPF with the intent of supporting data parallelism on Fortran 90 applications. HPF works by asking the user to help the compiler distribute and align the data structures with the distributed memory modules in the system. Fortran-S takes a different approach in which the data distribution is managed by the operating system and the user provides annotations to indicate parallel control regions. In the case of C++, we look at pC++ which is based on a concurrent aggregate parallel model.

  14. ASICs Approach for the Implementation of a Symmetric Triangular Fuzzy Coprocessor and Its Application to Adaptive Filtering

    NASA Technical Reports Server (NTRS)

    Starks, Scott; Abdel-Hafeez, Saleh; Usevitch, Bryan

    1997-01-01

    This paper discusses the implementation of a fuzzy logic system using an ASICs design approach. The approach is based upon combining the inherent advantages of symmetric triangular membership functions and fuzzy singleton sets to obtain a novel structure for fuzzy logic system application development. The resulting structure utilizes a fuzzy static RAM to store the rule-base and the end-points of the triangular membership functions. This provides advantages over other approaches in which all sampled values of membership functions for all universes must be stored. The fuzzy coprocessor structure implements the fuzzification and defuzzification processes through a two-stage parallel pipeline architecture which is capable of executing complex fuzzy computations in less than 0.55us with an accuracy of more than 95%, thus making it suitable for a wide range of applications. Using the approach presented in this paper, a fuzzy logic rule-base can be directly downloaded via a host processor to an onchip rule-base memory with a size of 64 words. The fuzzy coprocessor's design supports up to 49 rules for seven fuzzy membership functions associated with each of the chip's two input variables. This feature allows designers to create fuzzy logic systems without the need for additional on-board memory. Finally, the paper reports on simulation studies that were conducted for several adaptive filter applications using the least mean squared adaptive algorithm for adjusting the knowledge rule-base.
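
    Two of the ideas the coprocessor exploits, symmetric triangular membership functions stored only by their end-points and singleton consequents that reduce defuzzification to a weighted average, can be modeled in a few lines of software; the sketch below is my illustration, not the ASIC datapath.

```python
# Symmetric triangular membership functions plus singleton outputs.
def tri(x, left, right):
    """Symmetric triangular membership defined solely by its end-points."""
    center, half = (left + right) / 2.0, (right - left) / 2.0
    return max(0.0, 1.0 - abs(x - center) / half) if half > 0 else 0.0

def fuzzy_singleton_inference(x, rules):
    """rules: list of ((left, right), singleton_output) pairs."""
    weights = [tri(x, l, r) for (l, r), _ in rules]
    outputs = [out for _, out in rules]
    total = sum(weights)
    return sum(w * o for w, o in zip(weights, outputs)) / total if total else 0.0

rules = [((0.0, 4.0), 10.0),   # "low"  -> output singleton 10
         ((2.0, 8.0), 50.0),   # "mid"  -> output singleton 50
         ((6.0, 10.0), 90.0)]  # "high" -> output singleton 90
print(fuzzy_singleton_inference(3.0, rules))
```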

  15. I-Ching, dyadic groups of binary numbers and the geno-logic coding in living bodies.

    PubMed

    Hu, Zhengbing; Petoukhov, Sergey V; Petukhova, Elena S

    2017-12-01

    The ancient Chinese book I-Ching was written a few thousand years ago. It introduces the system of symbols Yin and Yang (equivalents of 0 and 1). It had a powerful impact on the culture, medicine, and science of ancient China and several other countries. From the modern standpoint, I-Ching declares the importance of dyadic groups of binary numbers for Nature. The system of I-Ching is represented by tables with dyadic groups of 4 bigrams, 8 trigrams, and 64 hexagrams, which were declared to be fundamental archetypes of Nature. The ancient Chinese did not know about the genetic code of protein sequences of amino acids, but this code is organized in accordance with the I-Ching: in particular, the genetic code is constructed on DNA molecules using 4 nitrogenous bases, 16 doublets, and 64 triplets. The article also describes the usage of dyadic groups as a foundation of the bio-mathematical doctrine of the geno-logic code, which exists in parallel with the known genetic code of amino acids but serves a different goal: to code the inherited algorithmic processes using the logical holography and the spectral logic of systems of genetic Boolean functions. Some relations of this doctrine with the I-Ching are discussed. In addition, the ratios of musical harmony that can be revealed in the parameters of DNA structure are also represented in the I-Ching book. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Organizational Change Around an Older Workforce.

    PubMed

    Moen, Phyllis; Kojola, Erik; Schaefers, Kate

    2017-10-01

    Demographic, economic, political, and technological transformations, including an unprecedented older workforce, are challenging outdated human resource logics and practices. Rising numbers of retirement-eligible Boomers portend a loss of talent, skills, and local knowledge. We investigate organizational responses to this challenge: institutional work disrupting age-graded mindsets and policies. We focus on innovative U.S. organizations in the Minneapolis-St. Paul region in the state of Minnesota, a hub for businesses and nonprofits, conducting in-depth interviews with informants from a purposive sample of 23 for-profit, nonprofit, and government organizations. Drawing on an organizational change theoretical approach, we find organizations are leading change by developing universal policies and practices, not ones intentionally geared to older workers. Both their narratives and strategies (opportunities for greater employee flexibility, training, and scaling back time commitments) suggest deliberate disrupting of established age-graded logics, replacing them with new logics valuing older workers and age-neutral approaches. Organizations in the different sectors studied are fashioning uniform policies regardless of age, exhibiting a parallel reluctance to delineate special policies for older workers. Developing new organizational logics and practices that value, invest in, and retain older workers is a key 21st-century business challenge. The flexibility, training, and alternative pathways offered by the innovative organizations we studied point to fruitful possibilities for large-scale replacement of outdated age-biased templates of work, careers, and retirement. © The Author 2016. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  17. Using CLIPS in the domain of knowledge-based massively parallel programming

    NASA Technical Reports Server (NTRS)

    Dvorak, Jiri J.

    1994-01-01

    The Program Development Environment (PDE) is a tool for massively parallel programming of distributed-memory architectures. Adopting a knowledge-based approach, the PDE eliminates the complexity introduced by parallel hardware with distributed memory and offers complete transparency in respect of parallelism exploitation. The knowledge-based part of the PDE is realized in CLIPS. Its principal task is to find an efficient parallel realization of the application specified by the user in a comfortable, abstract, domain-oriented formalism. A large collection of fine-grain parallel algorithmic skeletons, represented as COOL objects in a tree hierarchy, contains the algorithmic knowledge. A hybrid knowledge base with rule modules and procedural parts, encoding expertise about application domain, parallel programming, software engineering, and parallel hardware, enables a high degree of automation in the software development process. In this paper, important aspects of the implementation of the PDE using CLIPS and COOL are shown, including the embedding of CLIPS with C++-based parts of the PDE. The appropriateness of the chosen approach and of the CLIPS language for knowledge-based software engineering are discussed.

  18. Rethinking Social Barriers to Effective Adaptive Management.

    PubMed

    West, Simon; Schultz, Lisen; Bekessy, Sarah

    2016-09-01

    Adaptive management is an approach to environmental management based on learning-by-doing, where complexity, uncertainty, and incomplete knowledge are acknowledged and management actions are treated as experiments. However, while adaptive management has received significant uptake in theory, it remains elusively difficult to enact in practice. Proponents have blamed social barriers and have called for social science contributions. We address this gap by adopting a qualitative approach to explore the development of an ecological monitoring program within an adaptive management framework in a public land management organization in Australia. We ask what practices are used to enact the monitoring program and how do they shape learning? We elicit a rich narrative through extensive interviews with a key individual, and analyze the narrative using thematic analysis. We discuss our results in relation to the concept of 'knowledge work' and Westley's (2002) framework for interpreting the strategies of adaptive managers-'managing through, in, out and up.' We find that enacting the program is conditioned by distinct and sometimes competing logics-scientific logics prioritizing experimentation and learning, public logics emphasizing accountability and legitimacy, and corporate logics demanding efficiency and effectiveness. In this context, implementing adaptive management entails practices of translation to negotiate tensions between objective and situated knowledge, external experts and organizational staff, and collegiate and hierarchical norms. Our contribution embraces the 'doing' of learning-by-doing and marks a shift from conceptualizing the social as an external barrier to adaptive management to be removed to an approach that situates adaptive management as social knowledge practice.

  19. DMA shared byte counters in a parallel computer

    DOEpatents

    Chen, Dong; Gara, Alan G.; Heidelberger, Philip; Vranas, Pavlos

    2010-04-06

    A parallel computer system is constructed as a network of interconnected compute nodes. Each of the compute nodes includes at least one processor, a memory and a DMA engine. The DMA engine includes a processor interface for interfacing with the at least one processor, DMA logic, a memory interface for interfacing with the memory, a DMA network interface for interfacing with the network, injection and reception byte counters, injection and reception FIFO metadata, and status registers and control registers. The injection FIFO metadata maintains the memory locations of each injection FIFO, including its current head and tail, and the reception FIFO metadata likewise maintains the memory locations of each reception FIFO, including its current head and tail. The injection byte counters and reception byte counters may be shared between messages.
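
    A conceptual software model (my sketch, not the patented hardware) of the bookkeeping described in the claim: per-FIFO metadata holding the current head and tail, and a byte counter that several messages may share and that signals completion when the outstanding byte count reaches zero.

```python
# Conceptual model of FIFO metadata and a shared byte counter.
from dataclasses import dataclass

@dataclass
class FifoMetadata:
    base: int
    size: int
    head: int = 0            # next slot the DMA engine will consume
    tail: int = 0            # next slot software will fill

    def advance_tail(self, n=1):
        self.tail = (self.tail + n) % self.size

    def advance_head(self, n=1):
        self.head = (self.head + n) % self.size

@dataclass
class SharedByteCounter:
    remaining: int = 0       # bytes outstanding across all messages sharing it

    def add_message(self, nbytes):
        self.remaining += nbytes

    def bytes_moved(self, nbytes):
        self.remaining -= nbytes
        return self.remaining == 0   # True when every sharing message is done

counter = SharedByteCounter()
counter.add_message(4096)
counter.add_message(1024)
print(counter.bytes_moved(5120))     # True: both messages complete
```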

  20. Synthetic Foveal Imaging Technology

    NASA Technical Reports Server (NTRS)

    Nikzad, Shouleh (Inventor); Monacos, Steve P. (Inventor); Hoenk, Michael E. (Inventor)

    2013-01-01

    Apparatuses and methods are disclosed that create a synthetic fovea in order to identify and highlight interesting portions of an image for further processing and rapid response. Synthetic foveal imaging implements a parallel processing architecture that uses reprogrammable logic to implement embedded, distributed, real-time foveal image processing from different sensor types while simultaneously allowing for lossless storage and retrieval of raw image data. Real-time, distributed, adaptive processing of multi-tap image sensors with coordinated processing hardware used for each output tap is enabled. In mosaic focal planes, a parallel-processing network can be implemented that treats the mosaic focal plane as a single ensemble rather than a set of isolated sensors. Various applications are enabled for imaging and robotic vision where processing and responding to enormous amounts of data quickly and efficiently is important.
