Sample records for memory programming model

  1. An Investigation of Unified Memory Access Performance in CUDA

    PubMed Central

    Landaverde, Raphael; Zhang, Tiansheng; Coskun, Ayse K.; Herbordt, Martin

    2015-01-01

    Managing memory between the CPU and GPU is a major challenge in GPU computing. A programming model, Unified Memory Access (UMA), has been recently introduced by Nvidia to simplify the complexities of memory management while claiming good overall performance. In this paper, we investigate this programming model and evaluate its performance and programming model simplifications based on our experimental results. We find that beyond on-demand data transfers to the CPU, the GPU is also able to request subsets of data it requires on demand. This feature allows UMA to outperform full data transfer methods for certain parallel applications and small data sizes. We also find, however, that for the majority of applications and memory access patterns, the performance overheads associated with UMA are significant, while the simplifications to the programming model restrict flexibility for adding future optimizations. PMID:26594668
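    The full-transfer versus on-demand trade-off this abstract describes can be illustrated with a toy cost model (the page size and touch counts below are hypothetical, not measured CUDA UMA behavior):

```python
def bytes_moved_full(array_bytes):
    # Traditional explicit model: copy the whole array to the GPU up front.
    return array_bytes

def bytes_moved_on_demand(array_bytes, page_bytes, touched_pages):
    # UMA-style demand migration: only the pages the kernel touches move.
    return min(array_bytes, touched_pages * page_bytes)

total = 1 << 20                          # 1 MiB array (hypothetical size)
page = 4096                              # hypothetical migration granularity
sparse = bytes_moved_on_demand(total, page, touched_pages=10)
dense = bytes_moved_on_demand(total, page, touched_pages=total // page)
print(sparse, dense)                     # 40960 1048576
```

    A kernel touching only a small subset of pages moves far less data than a full up-front copy, matching the paper's finding that UMA can win for small or sparse accesses but not for full-array workloads.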

  2. A model of attention-guided visual perception and recognition.

    PubMed

    Rybak, I A; Gusakova, V I; Golovan, A V; Podladchikova, L N; Shevtsova, N A

    1998-08-01

    A model of visual perception and recognition is described. The model contains: (i) a low-level subsystem which performs both a fovea-like transformation and detection of primary features (edges), and (ii) a high-level subsystem which includes separated 'what' (sensory memory) and 'where' (motor memory) structures. Image recognition occurs during the execution of a 'behavioral recognition program' formed during the primary viewing of the image. The recognition program contains both programmed attention window movements (stored in the motor memory) and predicted image fragments (stored in the sensory memory) for each consecutive fixation. The model shows the ability to recognize complex images (e.g. faces) invariantly with respect to shift, rotation and scale.

  3. High Performance Programming Using Explicit Shared Memory Model on Cray T3D

    NASA Technical Reports Server (NTRS)

    Simon, Horst D.; Saini, Subhash; Grassi, Charles

    1994-01-01

    The Cray T3D system is the first-phase system in Cray Research, Inc.'s (CRI) three-phase massively parallel processing (MPP) program. This system features a heterogeneous architecture that closely couples DEC's Alpha microprocessors and CRI's parallel-vector technology, i.e., the Cray Y-MP and Cray C90. An overview of the Cray T3D hardware and available programming models is presented. Under the Cray Research adaptive Fortran (CRAFT) model, four programming methods (data parallel, work sharing, message passing using PVM, and explicit shared memory) are available to users. However, at this time the data parallel and work sharing programming models are not available to the user community. The differences between standard PVM and CRI's PVM are highlighted with performance measurements such as latencies and communication bandwidths. We have found that neither standard PVM nor CRI's PVM exploits the hardware capabilities of the T3D. The reasons for the poor performance of PVM as a native message-passing library are presented. This is illustrated by the performance of the NAS Parallel Benchmarks (NPB) programmed in the explicit shared memory model on the Cray T3D. In general, the performance of standard PVM is about 4 to 5 times less than that obtained using the explicit shared memory model. A similar degradation is seen on the CM-5, where the performance of applications using the native message-passing library CMMD is also about 4 to 5 times less than that of data parallel methods. The issues involved in programming in the explicit shared memory model (such as barriers, synchronization, invalidating the data cache, and aligning the data cache) are discussed. Comparative performance of the NPB using the explicit shared memory programming model on the Cray T3D and other highly parallel systems such as the TMC CM-5, Intel Paragon, Cray C90, and IBM SP-1 is presented.

  4. Combining Distributed and Shared Memory Models: Approach and Evolution of the Global Arrays Toolkit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nieplocha, Jarek; Harrison, Robert J.; Kumar, Mukul

    2002-07-29

    Both shared memory and distributed memory models have advantages and shortcomings. The shared memory model is much easier to use, but it ignores data locality/placement. Given the hierarchical nature of the memory subsystems in modern computers, this characteristic can have a negative impact on performance and scalability. Various techniques, such as code restructuring to increase data reuse and introducing blocking in data accesses, can address the problem and yield performance competitive with message passing [Singh], but at the cost of compromising ease of use. Distributed memory models such as message passing or one-sided communication offer performance and scalability but compromise ease of use. In this context, the message-passing model is sometimes referred to as 'assembly programming for scientific computing'. The Global Arrays toolkit [GA1, GA2] attempts to offer the best features of both models. It implements a shared-memory programming model in which data locality is managed explicitly by the programmer. This management is achieved by explicit calls to functions that transfer data between a global address space (a distributed array) and local storage. In this respect, the GA model has similarities to the distributed shared-memory models that provide an explicit acquire/release protocol. However, the GA model acknowledges that remote data is slower to access than local data and allows data locality to be explicitly specified and hence managed. The GA model exposes to the programmer the hierarchical memory of modern high-performance computer systems, and by recognizing the communication overhead of remote data transfer, it promotes data reuse and locality of reference. This paper describes the characteristics of the Global Arrays programming model and the capabilities of the toolkit, and discusses its evolution.
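    The explicit get/compute/put style described in this abstract can be sketched in Python (a toy stand-in, not the actual Global Arrays C/Fortran API; the class and method names are invented for illustration):

```python
import numpy as np

class ToyGlobalArray:
    """Toy sketch of the GA model: one logically shared array, physically
    split into per-rank blocks; all remote access goes through explicit
    get/put calls, so data movement is visible to the programmer."""

    def __init__(self, n, nranks):
        self.blocks = np.array_split(np.zeros(n), nranks)

    def get(self, lo, hi):
        # Explicit transfer: copy a section of the global array to local storage.
        return np.concatenate(self.blocks)[lo:hi].copy()

    def put(self, lo, data):
        # Explicit transfer: write local data back into the global address space.
        flat = np.concatenate(self.blocks)
        flat[lo:lo + len(data)] = data
        self.blocks = np.array_split(flat, len(self.blocks))

ga = ToyGlobalArray(8, nranks=4)
ga.put(2, np.array([1.0, 2.0, 3.0]))   # local -> global
local = ga.get(0, 8)                   # global -> local
print(local)                           # [0. 0. 1. 2. 3. 0. 0. 0.]
```

    The point of the design, as the abstract notes, is that unlike a flat shared-memory model, every remote transfer is an explicit call the programmer can see and minimize.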

  5. ORCA Project: Research on high-performance parallel computer programming environments. Final report, 1 Apr-31 Mar 90

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Snyder, L.; Notkin, D.; Adams, L.

    1990-03-31

    This task relates to research on programming massively parallel computers. Previous work on the Ensemble concept of programming was extended, and an investigation into nonshared memory models of parallel computation was undertaken. The Ensemble concept defines a set of programming abstractions and organizes the programming task into three distinct levels: composition of machine instructions, composition of processes, and composition of phases. It had previously been applied to shared memory models of computation; during the present research period, these concepts were extended to nonshared memory models. One Ph.D. thesis was completed, and one book chapter and six conference papers were published.

  6. Large-scale hydropower system optimization using dynamic programming and object-oriented programming: the case of the Northeast China Power Grid.

    PubMed

    Li, Ji-Qing; Zhang, Yu-Shan; Ji, Chang-Ming; Wang, Ai-Jing; Lund, Jay R

    2013-01-01

    This paper examines long-term optimal operation using dynamic programming for a large hydropower system of 10 reservoirs in Northeast China. Besides considering flow and hydraulic head, the optimization explicitly includes time-varying electricity market prices to maximize benefit. Two techniques are used to reduce the 'curse of dimensionality' of dynamic programming with many reservoirs. Discrete differential dynamic programming (DDDP) reduces the search space and computer memory needed. Object-oriented programming (OOP) and the ability to dynamically allocate and release memory with the C++ language greatly reduces the cumulative effect of computer memory for solving multi-dimensional dynamic programming models. The case study shows that the model can reduce the 'curse of dimensionality' and achieve satisfactory results.
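    The recursion that DDDP accelerates is ordinary discrete dynamic programming over a storage grid; a single-reservoir toy (invented numbers, no corridor refinement, benefit simplified to price times release) might look like:

```python
def dp_release_schedule(inflows, s_max, levels, price):
    """Toy discrete DP for one reservoir: choose end-of-period storage on a
    grid to maximize price-weighted release (a crude stand-in for the
    head-times-flow benefit in the paper). Returns the maximum benefit."""
    states = range(levels + 1)            # storage discretized 0..s_max
    step = s_max / levels
    best = {s: 0.0 for s in states}       # value after the last period
    for t in reversed(range(len(inflows))):
        new = {}
        for s in states:
            vals = []
            for s2 in states:             # DDDP would restrict this loop
                release = s * step + inflows[t] - s2 * step
                if 0 <= release <= s_max:       # feasible releases only
                    vals.append(price[t] * release + best[s2])
            new[s] = max(vals) if vals else float('-inf')
        best = new
    return best[levels // 2]              # start half full

print(dp_release_schedule([1, 1], s_max=2, levels=2, price=[1, 2]))  # 5.0
```

    DDDP's refinement is to limit the inner state loop to a narrow corridor around an incumbent trajectory, which is what shrinks both the search space and the memory that the paper's C++/OOP implementation must manage for ten reservoirs.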

  7. A New Extension Model: The Memorial Middle School Agricultural Extension and Education Center

    ERIC Educational Resources Information Center

    Skelton, Peter; Seevers, Brenda

    2010-01-01

    The Memorial Middle School Agricultural Extension and Education Center is a new model for Extension. The center applies the Cooperative Extension Service System philosophy and mission to developing public education-based programs. Programming primarily serves middle school students and teachers through agricultural and natural resource science…

  8. An Interactive Simulation Program for Exploring Computational Models of Auto-Associative Memory.

    PubMed

    Fink, Christian G

    2017-01-01

    While neuroscience students typically learn about activity-dependent plasticity early in their education, they often struggle to conceptually connect modification at the synaptic scale with network-level neuronal dynamics, not to mention with their own everyday experience of recalling a memory. We have developed an interactive simulation program (based on the Hopfield model of auto-associative memory) that enables the user to visualize the connections generated by any pattern of neural activity, as well as to simulate the network dynamics resulting from such connectivity. An accompanying set of student exercises introduces the concepts of pattern completion, pattern separation, and sparse versus distributed neural representations. Results from a conceptual assessment administered before and after students worked through these exercises indicate that the simulation program is a useful pedagogical tool for illustrating fundamental concepts of computational models of memory.
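    The pattern-completion behavior described in this abstract can be reproduced in a few lines (a minimal Hopfield sketch with Hebbian learning and +/-1 units; an illustrative toy, not the paper's actual simulation program):

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian outer-product learning; zero diagonal (no self-connections)."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)
    return W / patterns.shape[0]

def recall(W, cue, steps=10):
    """Synchronous updates until the state stops changing (pattern completion)."""
    s = cue.copy()
    for _ in range(steps):
        s_new = np.where(W @ s >= 0, 1, -1)
        if np.array_equal(s_new, s):
            break
        s = s_new
    return s

# Store one +/-1 pattern, then complete it from a corrupted cue.
pattern = np.array([1, 1, -1, -1, 1, -1, 1, -1])
W = train_hopfield(pattern[None, :])
cue = pattern.copy()
cue[:2] *= -1                            # flip two bits
print(np.array_equal(recall(W, cue), pattern))  # True
```

    Flipping a few bits of a stored pattern and letting the dynamics settle recovers the original, which is exactly the pattern-completion exercise the abstract mentions; storing several overlapping patterns in the same weights illustrates pattern separation and its failure modes.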

  9. Effect of virtual memory on efficient solution of two model problems

    NASA Technical Reports Server (NTRS)

    Lambiotte, J. J., Jr.

    1977-01-01

    Computers with virtual memory architecture allow programs to be written as if they were small enough to be contained in memory. Two types of problems are investigated to show that this luxury can lead to quite inefficient performance if the programmer does not interact strongly with the characteristics of the operating system when developing the program. The two problems considered are the simultaneous solution of a large linear system of equations by Gaussian elimination and a model three-dimensional finite-difference problem. Control Data STAR-100 computer runs are made to demonstrate the inefficiencies of programming the problems in the manner one naturally would if the problems were indeed small enough to be contained in memory. Program redesigns are presented which achieve large improvements in performance through changes in the computational procedure and the data base arrangement.
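    The kind of operating-system interaction this paper describes can be illustrated by simulating LRU paging for two traversal orders of a row-major matrix (all sizes below are hypothetical, chosen so one row fills exactly one page):

```python
from collections import OrderedDict

def count_page_faults(access_order, n, row_bytes, page_bytes, resident_pages):
    """LRU page-fault count for touching every element of an n x n
    row-major matrix of 8-byte elements, by 'rows' or by 'cols'."""
    lru, faults = OrderedDict(), 0
    idx = ((i, j) for i in range(n) for j in range(n)) if access_order == 'rows' \
          else ((i, j) for j in range(n) for i in range(n))
    for i, j in idx:
        page = (i * row_bytes + j * 8) // page_bytes
        if page in lru:
            lru.move_to_end(page)          # hit: refresh LRU position
        else:
            faults += 1                    # miss: fault the page in
            lru[page] = True
            if len(lru) > resident_pages:
                lru.popitem(last=False)    # evict least recently used
    return faults

# 512 x 512 doubles, 4 KiB pages, room for only 8 resident pages:
by_rows = count_page_faults('rows', 512, 512 * 8, 4096, 8)
by_cols = count_page_faults('cols', 512, 512 * 8, 4096, 8)
print(by_rows, by_cols)    # column order faults on every single access
```

    Traversing along storage order faults once per row, while the column-order sweep faults on every access, which is the sort of data-base rearrangement gain the paper's redesigns exploit.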

  10. Performance Modeling and Measurement of Parallelized Code for Distributed Shared Memory Multiprocessors

    NASA Technical Reports Server (NTRS)

    Waheed, Abdul; Yan, Jerry

    1998-01-01

    This paper presents a model to evaluate the performance and overhead of parallelizing sequential code using compiler directives for multiprocessing on distributed shared memory (DSM) systems. With the increasing popularity of shared address space architectures, it is essential to understand their performance impact on programs that benefit from shared memory multiprocessing. We present a simple model to characterize the performance of programs that are parallelized using compiler directives for shared memory multiprocessing. We parallelized the sequential implementation of the NAS benchmarks using native Fortran77 compiler directives for an Origin2000, which is a DSM system based on a cache-coherent Non-Uniform Memory Access (ccNUMA) architecture. We report measurement-based performance of these parallelized benchmarks from four perspectives: efficacy of the parallelization process; scalability; parallelization overhead; and comparison with hand-parallelized and -optimized versions of the same benchmarks. Our results indicate that sequential programs can conveniently be parallelized for DSM systems using compiler directives, but realizing the performance gains predicted by the performance model depends primarily on minimizing architecture-specific data locality overhead.

  11. Modeling of SONOS Memory Cell Erase Cycle

    NASA Technical Reports Server (NTRS)

    Phillips, Thomas A.; MacLeod, Todd C.; Ho, Fat H.

    2011-01-01

    Utilization of Silicon-Oxide-Nitride-Oxide-Silicon (SONOS) nonvolatile semiconductor memories as a flash memory has many advantages. These electrically erasable programmable read-only memories (EEPROMs) utilize low programming voltages, have a high erase/write cycle lifetime, are radiation hardened, and are compatible with high-density scaled CMOS for low power, portable electronics. In this paper, the SONOS memory cell erase cycle was investigated using a nonquasi-static (NQS) MOSFET model. Comparisons were made between the model predictions and experimental data.

  12. Avoiding and tolerating latency in large-scale next-generation shared-memory multiprocessors

    NASA Technical Reports Server (NTRS)

    Probst, David K.

    1993-01-01

    A scalable solution to the memory-latency problem is necessary to prevent the large latencies of synchronization and memory operations inherent in large-scale shared-memory multiprocessors from limiting performance. We distinguish latency avoidance and latency tolerance. Latency is avoided when data is brought to nearby locales for future reference. Latency is tolerated when references are overlapped with other computation. Latency-avoiding locales include processor registers, data caches used temporally, and nearby memory modules. Tolerating communication latency requires parallelism, allowing the overlap of communication and computation. Latency-tolerating techniques include vector pipelining, data caches used spatially, prefetching in various forms, and multithreading in various forms. Relaxing the consistency model permits increased use of avoidance and tolerance techniques. Each consistency model is a mapping from the program text to sets of partial orders on program operations; it is a convention about which temporal precedences among program operations are necessary. Information about temporal locality and parallelism constrains the use of avoidance and tolerance techniques. Suitable architectural primitives and compiler technology are required to exploit the increased freedom to reorder and overlap operations in relaxed models.

  13. Performance Evaluation of Remote Memory Access (RMA) Programming on Shared Memory Parallel Computers

    NASA Technical Reports Server (NTRS)

    Jin, Hao-Qiang; Jost, Gabriele; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    The purpose of this study is to evaluate the feasibility of remote memory access (RMA) programming on shared memory parallel computers. We discuss different RMA based implementations of selected CFD application benchmark kernels and compare them to corresponding message passing based codes. For the message-passing implementation we use MPI point-to-point and global communication routines. For the RMA based approach we consider two different libraries supporting this programming model. One is a shared memory parallelization library (SMPlib) developed at NASA Ames, the other is the MPI-2 extensions to the MPI Standard. We give timing comparisons for the different implementation strategies and discuss the performance.

  14. Global Arrays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krishnamoorthy, Sriram; Daily, Jeffrey A.; Vishnu, Abhinav

    2015-11-01

    Global Arrays (GA) is a distributed-memory programming model that allows for shared-memory-style programming combined with one-sided communication, to create a set of tools that combine high performance with ease of use. GA exposes a relatively straightforward programming abstraction, while supporting fully-distributed data structures, locality of reference, and high-performance communication. GA was originally formulated in the early 1990s to provide a communication layer for the Northwest Chemistry (NWChem) suite of chemistry modeling codes that was being developed concurrently.

  15. Execution models for mapping programs onto distributed memory parallel computers

    NASA Technical Reports Server (NTRS)

    Sussman, Alan

    1992-01-01

    The problem of exploiting the parallelism available in a program to efficiently employ the resources of the target machine is addressed. The problem is discussed in the context of building a mapping compiler for a distributed memory parallel machine. The paper describes using execution models to drive the process of mapping a program in the most efficient way onto a particular machine. Through analysis of the execution models for several mapping techniques for one class of programs, we show that the selection of the best technique for a particular program instance can make a significant difference in performance. On the other hand, the results of benchmarks from an implementation of a mapping compiler show that our execution models are accurate enough to select the best mapping technique for a given program.

  16. SONOS Nonvolatile Memory Cell Programming Characteristics

    NASA Technical Reports Server (NTRS)

    MacLeod, Todd C.; Phillips, Thomas A.; Ho, Fat D.

    2010-01-01

    Silicon-oxide-nitride-oxide-silicon (SONOS) nonvolatile memory is gaining favor over conventional EEPROM FLASH memory technology. This paper characterizes the SONOS write operation using a nonquasi-static MOSFET model. This includes floating gate charge and voltage characteristics as well as tunneling current, voltage threshold and drain current characterization. The characterization of the SONOS memory cell predicted by the model closely agrees with experimental data obtained from actual SONOS memory cells. The tunnel current, drain current, threshold voltage and read drain current all closely agreed with empirical data.

  17. Comparison of two paradigms for distributed shared memory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Levelt, W.G.; Kaashoek, M.F.; Bal, H.E.

    1990-08-01

    The paper compares two paradigms for Distributed Shared Memory on loosely coupled computing systems: the shared data-object model as used in Orca, a programming language specially designed for loosely coupled computing systems, and the Shared Virtual Memory model. For both paradigms the authors have implemented two systems, one using only point-to-point messages, the other using broadcasting as well. They briefly describe the two paradigms and their implementations, then compare their performance on four applications: the traveling salesman problem, alpha-beta search, matrix multiplication, and the all-pairs shortest paths problem. The measurements show that both paradigms can be used efficiently for programming large-grain parallel applications. Significant speedups were obtained on all applications. The unstructured Shared Virtual Memory paradigm achieves the best absolute performance, although this is largely due to the preliminary nature of the Orca compiler used. The structured shared data-object model achieves the highest speedups and is much easier to program and to debug.

  18. Automatic Generation of Directive-Based Parallel Programs for Shared Memory Parallel Systems

    NASA Technical Reports Server (NTRS)

    Jin, Hao-Qiang; Yan, Jerry; Frumkin, Michael

    2000-01-01

    The shared-memory programming model is a very effective way to achieve parallelism on shared memory parallel computers. As great progress has been made in hardware and software technologies, the performance of parallel programs using compiler directives has improved substantially. The introduction of OpenMP directives, the industry standard for shared-memory programming, has minimized the issue of portability. Due to its ease of programming and its good performance, the technique has become very popular. In this study, we have extended CAPTools, a computer-aided parallelization toolkit, to automatically generate directive-based OpenMP parallel programs. We outline techniques used in the implementation of the tool and present test results on the NAS parallel benchmarks and ARC3D, a CFD application. This work demonstrates the great potential of using computer-aided tools to quickly port parallel programs while achieving good performance.

  19. Modeling of SONOS Memory Cell Erase Cycle

    NASA Technical Reports Server (NTRS)

    Phillips, Thomas A.; MacLeod, Todd C.; Ho, Fat D.

    2010-01-01

    Silicon-oxide-nitride-oxide-silicon (SONOS) nonvolatile semiconductor memories (NVSMS) have many advantages. These memories are electrically erasable programmable read-only memories (EEPROMs). They utilize low programming voltages, endure extended erase/write cycles, are inherently resistant to radiation, and are compatible with high-density scaled CMOS for low power, portable electronics. The SONOS memory cell erase cycle was investigated using a nonquasi-static (NQS) MOSFET model. The SONOS floating gate charge and voltage, tunneling current, threshold voltage, and drain current were characterized during an erase cycle. Comparisons were made between the model predictions and experimental device data.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hull, L.C.

    The Prickett and Lonnquist two-dimensional groundwater model has been programmed for the Apple II microcomputer. Both leaky and nonleaky confined aquifers can be simulated. The model was adapted from the FORTRAN version of Prickett and Lonnquist. In the configuration presented here, the program requires 64 K bytes of memory. Because of the large number of arrays used in the program, and the memory limitations of the Apple II, the maximum grid size that can be used is 20 rows by 20 columns. Input to the program is interactive, with prompting by the computer. Output consists of predicted head values at the row-column intersections (nodes).

  1. Flexible language constructs for large parallel programs

    NASA Technical Reports Server (NTRS)

    Rosing, Matthew; Schnabel, Robert

    1993-01-01

    The goal of the research described is to develop flexible language constructs for writing large data parallel numerical programs for distributed memory (MIMD) multiprocessors. Previously, several models have been developed to support synchronization and communication. Models for global synchronization include SIMD (Single Instruction Multiple Data), SPMD (Single Program Multiple Data), and sequential programs annotated with data distribution statements. The two primary models for communication include implicit communication based on shared memory and explicit communication based on messages. None of these models by themselves seem sufficient to permit the natural and efficient expression of the variety of algorithms that occur in large scientific computations. An overview of a new language that combines many of these programming models in a clean manner is given. This is done in a modular fashion such that different models can be combined to support large programs. Within a module, the selection of a model depends on the algorithm and its efficiency requirements. An overview of the language and discussion of some of the critical implementation details is given.

  2. Frequent Statement and Dereference Elimination for Imperative and Object-Oriented Distributed Programs

    PubMed Central

    El-Zawawy, Mohamed A.

    2014-01-01

    This paper introduces new approaches to the analysis of frequent statement and dereference elimination for imperative and object-oriented distributed programs running on parallel machines equipped with hierarchical memories. The paper uses languages whose address spaces are globally partitioned. The distributed programs allow defining data layouts, and threads may write to and read from other threads' memories. Three type systems (for imperative distributed programs) are the tools of the proposed techniques. The first type system defines, for every program point, a set of calculated (ready) statements and memory accesses. The second type system uses an enriched version of the types of the first and determines which of the ready statements and memory accesses are used later in the program. The third type system uses the information gathered so far to eliminate unnecessary statement computations and memory accesses (the analysis of frequent statement and dereference elimination). Extensions to these type systems are also presented to cover object-oriented distributed programs. Two advantages of our work over related work are the following: the hierarchical style of concurrent parallel computers is similar to the memory model used in this paper, and in our approach each analysis result is assigned a type derivation (which serves as a correctness proof). PMID:24892098

  3. An Apple II Implementation of Man-Mod Manpower Planning Model.

    DTIC Science & Technology

    1982-03-01

    ...next page. It is highly recommended, to prevent the loss of data, that the user save the data at this point. If Choice (1), yes, is selected, the... approximately 30 seconds, but will clear and reload memory, preventing any inadvertent memory changes which might cause program interruptions or erroneous calculations...

  4. Flexible Language Constructs for Large Parallel Programs

    DOE PAGES

    Rosing, Matt; Schnabel, Robert

    1994-01-01

    The goal of the research described in this article is to develop flexible language constructs for writing large data parallel numerical programs for distributed memory (multiple instruction multiple data [MIMD]) multiprocessors. Previously, several models have been developed to support synchronization and communication. Models for global synchronization include single instruction multiple data (SIMD), single program multiple data (SPMD), and sequential programs annotated with data distribution statements. The two primary models for communication include implicit communication based on shared memory and explicit communication based on messages. None of these models by themselves seem sufficient to permit the natural and efficient expression of the variety of algorithms that occur in large scientific computations. In this article, we give an overview of a new language that combines many of these programming models in a clean manner. This is done in a modular fashion such that different models can be combined to support large programs. Within a module, the selection of a model depends on the algorithm and its efficiency requirements. In this article, we give an overview of the language and discuss some of the critical implementation details.

  5. Modeling the Coupled Chemo-Thermo-Mechanical Behavior of Amorphous Polymer Networks.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zimmerman, Jonathan A.; Nguyen, Thao D.; Xiao, Rui

    2015-02-01

    Amorphous polymers exhibit a rich landscape of time-dependent behavior including viscoelasticity, structural relaxation, and viscoplasticity. These time-dependent mechanisms can be exploited to achieve shape-memory behavior, which allows the material to store a programmed deformed shape indefinitely and to recover entirely the undeformed shape in response to specific environmental stimulus. The shape-memory performance of amorphous polymers depends on the coordination of multiple physical mechanisms, and considerable opportunities exist to tailor the polymer structure and shape-memory programming procedure to achieve the desired performance. The goal of this project was to use a combination of theoretical, numerical and experimental methods to investigate the effect of shape memory programming, thermo-mechanical properties, and physical and environmental aging on the shape memory performance. Physical and environmental aging occurs during storage and through exposure to solvents, such as water, and can significantly alter the viscoelastic behavior and shape memory behavior of amorphous polymers. This project – executed primarily by Professor Thao Nguyen and Graduate Student Rui Xiao at Johns Hopkins University in support of a DOE/NNSA Presidential Early Career Award in Science and Engineering (PECASE) – developed a theoretical framework for the chemo-thermo-mechanical behavior of amorphous polymers to model the effects of physical aging and solvent-induced environmental factors on their thermoviscoelastic behavior.

  6. UPC++ Programmer’s Guide (v1.0 2017.9)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bachan, J.; Baden, S.; Bonachea, D.

    UPC++ is a C++11 library that provides Asynchronous Partitioned Global Address Space (APGAS) programming. It is designed for writing parallel programs that run efficiently and scale well on distributed-memory parallel computers. The APGAS model is single program, multiple-data (SPMD), with each separate thread of execution (referred to as a rank, a term borrowed from MPI) having access to local memory as it would in C++. However, APGAS also provides access to a global address space, which is allocated in shared segments that are distributed over the ranks. UPC++ provides numerous methods for accessing and using global memory. In UPC++, all operations that access remote memory are explicit, which encourages programmers to be aware of the cost of communication and data movement. Moreover, all remote-memory access operations are by default asynchronous, to enable programmers to write code that scales well even on hundreds of thousands of cores.

  7. UPC++ Programmer’s Guide, v1.0-2018.3.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bachan, J.; Baden, S.; Bonachea, Dan

    UPC++ is a C++11 library that provides Partitioned Global Address Space (PGAS) programming. It is designed for writing parallel programs that run efficiently and scale well on distributed-memory parallel computers. The PGAS model is single program, multiple-data (SPMD), with each separate thread of execution (referred to as a rank, a term borrowed from MPI) having access to local memory as it would in C++. However, PGAS also provides access to a global address space, which is allocated in shared segments that are distributed over the ranks. UPC++ provides numerous methods for accessing and using global memory. In UPC++, all operations that access remote memory are explicit, which encourages programmers to be aware of the cost of communication and data movement. Moreover, all remote-memory access operations are by default asynchronous, to enable programmers to write code that scales well even on hundreds of thousands of cores.

  8. A three-dimensional ground-water-flow model modified to reduce computer-memory requirements and better simulate confining-bed and aquifer pinchouts

    USGS Publications Warehouse

    Leahy, P.P.

    1982-01-01

    The Trescott computer program for modeling groundwater flow in three dimensions has been modified to (1) treat aquifer and confining bed pinchouts more realistically and (2) reduce the computer memory requirements needed for the input data. Using the original program, simulation of aquifer systems with nonrectangular external boundaries may result in a large number of nodes that are not involved in the numerical solution of the problem, but require computer storage. (USGS)

  9. How To Create and Conduct a Memory Enhancement Program.

    ERIC Educational Resources Information Center

    Meyer, Genevieve R.; Ober-Reynolds, Sharman

    This report describes Memory Enhancement Group workshops which have been conducted at the Senior Health and Peer Counseling Center in Santa Monica, California and gives basic data regarding outcomes of the workshops. It provides a model of memory as a three-step process of registration or becoming aware, consolidation, and retrieval. It presents…

  10. What Multilevel Parallel Programs do when you are not Watching: A Performance Analysis Case Study Comparing MPI/OpenMP, MLP, and Nested OpenMP

    NASA Technical Reports Server (NTRS)

    Jost, Gabriele; Labarta, Jesus; Gimenez, Judit

    2004-01-01

    With the current trend in parallel computer architectures towards clusters of shared memory symmetric multi-processors, parallel programming techniques have evolved that support parallelism beyond a single level. When comparing the performance of applications based on different programming paradigms, it is important to differentiate between the influence of the programming model itself and other factors, such as implementation specific behavior of the operating system (OS) or architectural issues. Rewriting a large scientific application to employ a new programming paradigm is usually a time-consuming and error-prone task. Before embarking on such an endeavor it is important to determine that there is really a gain that would not be possible with the current implementation. A detailed performance analysis is crucial to clarify these issues. The multilevel programming paradigms considered in this study are hybrid MPI/OpenMP, MLP, and nested OpenMP. The hybrid MPI/OpenMP approach is based on using MPI [7] for the coarse grained parallelization and OpenMP [9] for fine grained loop level parallelism. The MPI programming paradigm assumes a private address space for each process. Data is transferred by explicitly exchanging messages via calls to the MPI library. This model was originally designed for distributed memory architectures but is also suitable for shared memory systems. The second paradigm under consideration is MLP, which was developed by Taft. The approach is similar to MPI/OpenMP, using a mix of coarse grain process level parallelization and loop level OpenMP parallelization. As is the case with MPI, a private address space is assumed for each process. The MLP approach was developed for ccNUMA architectures and explicitly takes advantage of the availability of shared memory. A shared memory arena which is accessible by all processes is required. Communication is done by reading from and writing to the shared memory.
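    The contrast the abstract draws, private address spaces with explicit messages versus a shared arena, can be illustrated with a toy Python sketch. Queues stand in for MPI sends and receives, and a shared dict stands in for the MLP/OpenMP shared-memory arena; this is an analogy, not either library's API:

```python
import threading, queue

# Message passing: the worker has private state; data moves via explicit messages.
def mp_worker(inbox, outbox):
    x = inbox.get()          # receive (analogous to MPI_Recv)
    outbox.put(x * 2)        # send the result back (analogous to MPI_Send)

inbox, outbox = queue.Queue(), queue.Queue()
t = threading.Thread(target=mp_worker, args=(inbox, outbox))
t.start()
inbox.put(21)
result_mp = outbox.get()
t.join()

# Shared memory: workers read and write a common arena directly (MLP-style).
arena = {"x": 21}
def shm_worker():
    arena["y"] = arena["x"] * 2   # direct access, no message traffic

t2 = threading.Thread(target=shm_worker)
t2.start(); t2.join()
assert result_mp == arena["y"] == 42
```

    Both styles compute the same result; what differs is whether data movement is an explicit operation or an ordinary memory access, which is exactly the distinction a performance analysis must separate from OS and architectural effects.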

  11. Self-defining memories, scripts, and the life story: narrative identity in personality and psychotherapy.

    PubMed

    Singer, Jefferson A; Blagov, Pavel; Berry, Meredith; Oost, Kathryn M

    2013-12-01

    An integrative model of narrative identity builds on a dual memory system that draws on episodic memory and a long-term self to generate autobiographical memories. Autobiographical memories related to critical goals in a lifetime period lead to life-story memories, which in turn become self-defining memories when linked to an individual's enduring concerns. Self-defining memories that share repetitive emotion-outcome sequences yield narrative scripts, abstracted templates that filter cognitive-affective processing. The life story is the individual's overarching narrative that provides unity and purpose over the life course. Healthy narrative identity combines memory specificity with adaptive meaning-making to achieve insight and well-being, as demonstrated through a literature review of personality and clinical research, as well as new findings from our own research program. A clinical case study drawing on this narrative identity model is also presented with implications for treatment and research. © 2012 Wiley Periodicals, Inc.

  12. Summary Report for ASC L2 Milestone #4782: Assess Newly Emerging Programming and Memory Models for Advanced Architectures on Integrated Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neely, J. R.; Hornung, R.; Black, A.

    This document serves as a detailed companion to the PowerPoint slides presented as part of the ASC L2 milestone review for Integrated Codes milestone #4782 titled “Assess Newly Emerging Programming and Memory Models for Advanced Architectures on Integrated Codes”, due on 9/30/2014, and presented for formal program review on 9/12/2014. The program review committee is represented by Mike Zika (A Program Project Lead for Kull), Brian Pudliner (B Program Project Lead for Ares), Scott Futral (DEG Group Lead in LC), and Mike Glass (Sierra Project Lead at Sandia). This document, along with the presentation materials, and a letter of completion signed by the review committee will act as proof of completion for this milestone.

  13. Automatic Generation of OpenMP Directives and Its Application to Computational Fluid Dynamics Codes

    NASA Technical Reports Server (NTRS)

    Yan, Jerry; Jin, Haoqiang; Frumkin, Michael; Yan, Jerry (Technical Monitor)

    2000-01-01

    The shared-memory programming model is a very effective way to achieve parallelism on shared memory parallel computers. As great progress has been made in hardware and software technologies, the performance of parallel programs with compiler directives has improved substantially. The introduction of OpenMP directives, the industrial standard for shared-memory programming, has minimized the issue of portability. In this study, we have extended CAPTools, a computer-aided parallelization toolkit, to automatically generate OpenMP-based parallel programs with nominal user assistance. We outline techniques used in the implementation of the tool and discuss the application of this tool on the NAS Parallel Benchmarks and several computational fluid dynamics codes. This work demonstrates the great potential of using the tool to quickly port parallel programs and also achieve good performance that exceeds some of the commercial tools.
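    The core transformation, parallelizing a loop whose iterations are independent, can be sketched in Python. A thread pool stands in for an OpenMP `parallel for`; the actual toolkit emits directives into Fortran or C source rather than rewriting the loop:

```python
from concurrent.futures import ThreadPoolExecutor

# Serial loop: each iteration is independent, which is the property a
# parallelization tool must establish before inserting a "parallel for"
# directive over the loop.
def serial(a):
    return [x * x for x in a]

# Parallel version: mapping iterations over a thread pool plays the role
# of the inserted directive (Python stands in for C/Fortran here).
def parallel(a, nthreads=4):
    with ThreadPoolExecutor(max_workers=nthreads) as pool:
        return list(pool.map(lambda x: x * x, a))

data = list(range(10))
assert serial(data) == parallel(data)   # same result, iterations run concurrently
```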

  14. The role of memory in the relationship between attention toward thin-ideal media and body dissatisfaction.

    PubMed

    Jiang, Michelle Y W; Vartanian, Lenny R

    2016-03-01

    This study examined the causal relationship between attention and memory bias toward thin-body images, and the indirect effect of attending to thin-body images on women's body dissatisfaction via memory. In a 2 (restrained vs. unrestrained eaters) × 2 (long vs. short exposure) quasi-experimental design, female participants (n = 90) were shown images of thin models for either 7 s or 150 ms, and then completed a measure of body dissatisfaction and a recognition test to assess their memory for the images. Both restrained and unrestrained eaters in the long exposure condition had better recognition memory for images of thin models than did those in the short exposure condition. Better recognition memory for images of thin models was associated with lower body dissatisfaction. Finally, exposure duration to images of thin models had an indirect effect on body dissatisfaction through recognition memory. These findings suggest that memory for body-related information may be more critical in influencing women's body image than merely the exposure itself, and that targeting memory bias might enhance the effectiveness of cognitive bias modification programs.

  15. An OpenACC-Based Unified Programming Model for Multi-accelerator Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Jungwon; Lee, Seyong; Vetter, Jeffrey S

    2015-01-01

    This paper proposes a novel SPMD programming model of OpenACC. Our model integrates the different granularities of parallelism from vector-level parallelism to node-level parallelism into a single, unified model based on OpenACC. It allows programmers to write programs for multiple accelerators using a uniform programming model whether they are in shared or distributed memory systems. We implement a prototype of our model and evaluate its performance with a GPU-based supercomputer using three benchmark applications.

  16. Hard Real-Time: C++ Versus RTSJ

    NASA Technical Reports Server (NTRS)

    Dvorak, Daniel L.; Reinholtz, William K.

    2004-01-01

    In the domain of hard real-time systems, which language is better: C++ or the Real-Time Specification for Java (RTSJ)? Although ordinary Java provides a more productive programming environment than C++ due to its automatic memory management, that benefit does not apply to RTSJ when using NoHeapRealtimeThread and non-heap memory areas. As a result, RTSJ programmers must manage non-heap memory explicitly. While that is not a deterrent for veteran real-time programmers, for whom explicit memory management is common, the lack of certain language features in RTSJ (and Java) makes that manual memory management harder to accomplish safely than in C++. This paper illustrates the problem for practitioners in the context of moving data and managing memory in a real-time producer/consumer pattern. The relative ease of implementation and safety of the C++ programming model suggests that RTSJ has a struggle ahead in the domain of hard real-time applications, despite its other attractive features.
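    The producer/consumer pattern with explicit buffer management that the paper discusses can be sketched with an object pool in Python. This is a conceptual model only: RTSJ and C++ would preallocate in non-heap or native memory, not in a garbage-collected runtime:

```python
import queue

# Hard real-time style: preallocate a fixed set of buffers up front, and
# recycle them explicitly instead of allocating on the fast path.
free_list = queue.Queue()
for _ in range(4):
    free_list.put(bytearray(16))   # fixed pool, sized at startup

work = queue.Queue()

def produce(value):
    buf = free_list.get()          # reuse an existing buffer, never allocate
    buf[0] = value
    work.put(buf)

def consume():
    buf = work.get()
    value = buf[0]
    free_list.put(buf)             # explicit "free": return buffer to the pool
    return value

produce(7)
assert consume() == 7
assert free_list.qsize() == 4      # every buffer is back in the pool
```

    The safety problem the paper describes is exactly the hand-off visible here: once the consumer recycles a buffer, any reference the producer still holds is dangling in meaning, and neither RTSJ nor this sketch can statically prevent its reuse.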

  17. Using Abstraction in Explicitly Parallel Programs.

    DTIC Science & Technology

    1991-07-01

    However, we only rely on sequential consistency of memory operations, including reads, writes, and any synchronization primitives provided by the ... explicit synchronization primitives. This demonstrates the practical power of sequentially consistent memory, as opposed to weaker models of memory that ... a small set of synchronization primitives, all procedures have non-waiting specifications. This is in contrast to richer process-oriented

  18. Scheduling for Locality in Shared-Memory Multiprocessors

    DTIC Science & Technology

    1993-05-01

    Submitted in partial fulfillment of the requirements for the degree Doctor of Philosophy. ... architecture on parallel program performance, explain the implications of this trend on popular parallel programming models, and propose system software to ... decomposition and scheduling algorithms. Subject terms: shared-memory multiprocessors; architecture trends; loop scheduling

  19. The Automatic Parallelisation of Scientific Application Codes Using a Computer Aided Parallelisation Toolkit

    NASA Technical Reports Server (NTRS)

    Ierotheou, C.; Johnson, S.; Leggett, P.; Cross, M.; Evans, E.; Jin, Hao-Qiang; Frumkin, M.; Yan, J.; Biegel, Bryan (Technical Monitor)

    2001-01-01

    The shared-memory programming model is a very effective way to achieve parallelism on shared memory parallel computers. Historically, the lack of a programming standard for using directives and the rather limited performance due to scalability have affected the take-up of this programming model approach. Significant progress has been made in hardware and software technologies; as a result, the performance of parallel programs with compiler directives has also improved. The introduction of an industrial standard for shared-memory programming with directives, OpenMP, has also addressed the issue of portability. In this study, we have extended the computer aided parallelization toolkit (developed at the University of Greenwich) to automatically generate OpenMP based parallel programs with nominal user assistance. We outline the way in which loop types are categorized and how efficient OpenMP directives can be defined and placed using the in-depth interprocedural analysis that is carried out by the toolkit. We also discuss the application of the toolkit on the NAS Parallel Benchmarks and a number of real-world application codes. This work not only demonstrates the great potential of using the toolkit to quickly parallelize serial programs but also the good performance achievable on up to 300 processors for hybrid message passing and directive-based parallelizations.

  20. Resonator memories and optical novelty filters

    NASA Astrophysics Data System (ADS)

    Anderson, Dana Z.; Erie, Marie C.

    Optical resonators having holographic elements are potential candidates for storing information that can be accessed through content addressable or associative recall. Closely related to the resonator memory is the optical novelty filter, which can detect the differences between a test object and a set of reference objects. We discuss implementations of these devices using continuous optical media such as photorefractive materials. The discussion is framed in the context of neural network models. There are both formal and qualitative similarities between the resonator memory and optical novelty filter and network models. Mode competition arises in the theory of the resonator memory, much as it does in some network models. We show that the role of the phenomena of "daydreaming" in the real-time programmable optical resonator is very much akin to the role of "unlearning" in neural network memories. The theory of programming the real-time memory for a single mode is given in detail. This leads to a discussion of the optical novelty filter. Experimental results for the resonator memory, the real-time programmable memory, and the optical tracking novelty filter are reviewed. We also point to several issues that need to be addressed in order to implement more formal models of neural networks.

  1. Resonator Memories And Optical Novelty Filters

    NASA Astrophysics Data System (ADS)

    Anderson, Dana Z.; Erie, Marie C.

    1987-05-01

    Optical resonators having holographic elements are potential candidates for storing information that can be accessed through content-addressable or associative recall. Closely related to the resonator memory is the optical novelty filter, which can detect the differences between a test object and a set of reference objects. We discuss implementations of these devices using continuous optical media such as photorefractive materials. The discussion is framed in the context of neural network models. There are both formal and qualitative similarities between the resonator memory and optical novelty filter and network models. Mode competition arises in the theory of the resonator memory, much as it does in some network models. We show that the role of the phenomena of "daydreaming" in the real-time programmable optical resonator is very much akin to the role of "unlearning" in neural network memories. The theory of programming the real-time memory for a single mode is given in detail. This leads to a discussion of the optical novelty filter. Experimental results for the resonator memory, the real-time programmable memory, and the optical tracking novelty filter are reviewed. We also point to several issues that need to be addressed in order to implement more formal models of neural networks.

  2. Projected phase-change memory devices.

    PubMed

    Koelmans, Wabe W; Sebastian, Abu; Jonnalagadda, Vara Prasad; Krebs, Daniel; Dellmann, Laurent; Eleftheriou, Evangelos

    2015-09-03

    Nanoscale memory devices, whose resistance depends on the history of the electric signals applied, could become critical building blocks in new computing paradigms, such as brain-inspired computing and memcomputing. However, there are key challenges to overcome, such as the high programming power required, noise and resistance drift. Here, to address these, we present the concept of a projected memory device, whose distinguishing feature is that the physical mechanism of resistance storage is decoupled from the information-retrieval process. We designed and fabricated projected memory devices based on the phase-change storage mechanism and convincingly demonstrate the concept through detailed experimentation, supported by extensive modelling and finite-element simulations. The projected memory devices exhibit remarkably low drift and excellent noise performance. We also demonstrate active control and customization of the programming characteristics of the device that reliably realize a multitude of resistance states.

  3. Strategies for Energy Efficient Resource Management of Hybrid Programming Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Dong; Supinski, Bronis de; Schulz, Martin

    2013-01-01

    Many scientific applications are programmed using hybrid programming models that use both message-passing and shared-memory, due to the increasing prevalence of large-scale systems with multicore, multisocket nodes. Previous work has shown that energy efficiency can be improved using software-controlled execution schemes that consider both the programming model and the power-aware execution capabilities of the system. However, such approaches have focused on identifying optimal resource utilization for one programming model, either shared-memory or message-passing, in isolation. The potential solution space, thus the challenge, increases substantially when optimizing hybrid models since the possible resource configurations increase exponentially. Nonetheless, with the accelerating adoption of hybrid programming models, we increasingly need improved energy efficiency in hybrid parallel applications on large-scale systems. In this work, we present new software-controlled execution schemes that consider the effects of dynamic concurrency throttling (DCT) and dynamic voltage and frequency scaling (DVFS) in the context of hybrid programming models. Specifically, we present predictive models and novel algorithms based on statistical analysis that anticipate application power and time requirements under different concurrency and frequency configurations. We apply our models and methods to the NPB MZ benchmarks and selected applications from the ASC Sequoia codes. Overall, we achieve substantial energy savings (8.74% on average and up to 13.8%) with some performance gain (up to 7.5%) or negligible performance loss.
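    The selection step, predicting time and power for each configuration and choosing the minimum-energy one, can be sketched with a toy analytic model. All coefficients below are made up for illustration; they are not the paper's fitted statistical models:

```python
# Hypothetical DCT+DVFS search: predict time and power for each
# (threads, frequency) configuration and pick the one minimizing
# energy = power * time.
def predict_time(threads, freq, work=100.0, serial_frac=0.1):
    # Amdahl-style scaling, slowed proportionally at lower frequency
    return work * (serial_frac + (1 - serial_frac) / threads) / freq

def predict_power(threads, freq, base=10.0):
    # static power plus dynamic power growing with f^2 per thread
    return base + 2.0 * threads * freq ** 2

configs = [(t, f) for t in (1, 2, 4, 8) for f in (1.0, 1.5, 2.0)]
best = min(configs, key=lambda c: predict_power(*c) * predict_time(*c))
print("best (threads, GHz):", best)
```

    Under these invented coefficients the search prefers many threads at low frequency; real schemes replace the two predictors with models fitted per application, which is where the statistical analysis in the abstract comes in.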

  4. Programming model for distributed intelligent systems

    NASA Technical Reports Server (NTRS)

    Sztipanovits, J.; Biegl, C.; Karsai, G.; Bogunovic, N.; Purves, B.; Williams, R.; Christiansen, T.

    1988-01-01

    A programming model and architecture which was developed for the design and implementation of complex, heterogeneous measurement and control systems is described. The Multigraph Architecture integrates artificial intelligence techniques with conventional software technologies, offers a unified framework for distributed and shared memory based parallel computational models and supports multiple programming paradigms. The system can be implemented on different hardware architectures and can be adapted to strongly different applications.

  5. Hybrid MPI+OpenMP Programming of an Overset CFD Solver and Performance Investigations

    NASA Technical Reports Server (NTRS)

    Djomehri, M. Jahed; Jin, Haoqiang H.; Biegel, Bryan (Technical Monitor)

    2002-01-01

    This report describes a two level parallelization of a Computational Fluid Dynamic (CFD) solver with multi-zone overset structured grids. The approach is based on a hybrid MPI+OpenMP programming model suitable for shared memory and clusters of shared memory machines. The performance investigations of the hybrid application on an SGI Origin2000 (O2K) machine is reported using medium and large scale test problems.

  6. Block-Parallel Data Analysis with DIY2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morozov, Dmitriy; Peterka, Tom

    DIY2 is a programming model and runtime for block-parallel analytics on distributed-memory machines. Its main abstraction is block-structured data parallelism: data are decomposed into blocks; blocks are assigned to processing elements (processes or threads); computation is described as iterations over these blocks, and communication between blocks is defined by reusable patterns. By expressing computation in this general form, the DIY2 runtime is free to optimize the movement of blocks between slow and fast memories (disk and flash vs. DRAM) and to concurrently execute blocks residing in memory with multiple threads. This enables the same program to execute in-core, out-of-core, serial, parallel, single-threaded, multithreaded, or combinations thereof. This paper describes the implementation of the main features of the DIY2 programming model and optimizations to improve performance. DIY2 is evaluated on benchmark test cases to establish baseline performance for several common patterns and on larger complete analysis codes running on large-scale HPC machines.
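    The block-parallel abstraction can be sketched in a few lines of Python: a hypothetical 1-D decomposition with round-robin assignment and a per-block computation followed by a reduction. DIY2 itself is a C++ library with far richer communication patterns:

```python
# Toy block-structured decomposition in the spirit of DIY2: split a 1-D
# domain into blocks, assign blocks round-robin to processing elements,
# and express the computation as an iteration over owned blocks.
def decompose(n, nblocks):
    step = n // nblocks
    return [(i * step, n if i == nblocks - 1 else (i + 1) * step)
            for i in range(nblocks)]

def assign(nblocks, npes):
    return {b: b % npes for b in range(nblocks)}   # block b -> PE b % npes

data = list(range(100))
blocks = decompose(len(data), nblocks=8)
owner = assign(8, npes=3)

# Each PE sums only the blocks it owns; a reduction combines partial results.
partial = [0] * 3
for b, (lo, hi) in enumerate(blocks):
    partial[owner[b]] += sum(data[lo:hi])
assert sum(partial) == sum(data)
```

    Because the program is phrased purely as "iterate over blocks", a runtime is free to run the blocks serially, across threads, or out-of-core without changing the analysis code, which is the portability claim the abstract makes.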

  7. Hybrid-view programming of nuclear fusion simulation code in the PGAS parallel programming language XcalableMP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsugane, Keisuke; Boku, Taisuke; Murai, Hitoshi

    Recently, the Partitioned Global Address Space (PGAS) parallel programming model has emerged as a usable distributed memory programming model. XcalableMP (XMP) is a PGAS parallel programming language that extends base languages such as C and Fortran with directives in OpenMP-like style. XMP supports a global-view model that allows programmers to define global data and to map them to a set of processors, which execute the distributed global data as a single thread. In XMP, the concept of a coarray is also employed for local-view programming. In this study, we port Gyrokinetic Toroidal Code - Princeton (GTC-P), which is a three-dimensional gyrokinetic PIC code developed at Princeton University to study the microturbulence phenomenon in magnetically confined fusion plasmas, to XMP as an example of hybrid memory model coding with the global-view and local-view programming models. In local-view programming, the coarray notation is simple and intuitive compared with Message Passing Interface (MPI) programming while the performance is comparable to that of the MPI version. Thus, because the global-view programming model is suitable for expressing the data parallelism for a field of grid space data, we implement a hybrid-view version using a global-view programming model to compute the field and a local-view programming model to compute the movement of particles. Finally, the performance is degraded by 20% compared with the original MPI version, but the hybrid-view version facilitates more natural data expression for static grid space data (in the global-view model) and dynamic particle data (in the local-view model), and it also increases the readability of the code for higher productivity.

  8. Hybrid-view programming of nuclear fusion simulation code in the PGAS parallel programming language XcalableMP

    DOE PAGES

    Tsugane, Keisuke; Boku, Taisuke; Murai, Hitoshi; ...

    2016-06-01

    Recently, the Partitioned Global Address Space (PGAS) parallel programming model has emerged as a usable distributed memory programming model. XcalableMP (XMP) is a PGAS parallel programming language that extends base languages such as C and Fortran with directives in OpenMP-like style. XMP supports a global-view model that allows programmers to define global data and to map them to a set of processors, which execute the distributed global data as a single thread. In XMP, the concept of a coarray is also employed for local-view programming. In this study, we port Gyrokinetic Toroidal Code - Princeton (GTC-P), which is a three-dimensional gyrokinetic PIC code developed at Princeton University to study the microturbulence phenomenon in magnetically confined fusion plasmas, to XMP as an example of hybrid memory model coding with the global-view and local-view programming models. In local-view programming, the coarray notation is simple and intuitive compared with Message Passing Interface (MPI) programming while the performance is comparable to that of the MPI version. Thus, because the global-view programming model is suitable for expressing the data parallelism for a field of grid space data, we implement a hybrid-view version using a global-view programming model to compute the field and a local-view programming model to compute the movement of particles. Finally, the performance is degraded by 20% compared with the original MPI version, but the hybrid-view version facilitates more natural data expression for static grid space data (in the global-view model) and dynamic particle data (in the local-view model), and it also increases the readability of the code for higher productivity.

  9. A portable approach for PIC on emerging architectures

    NASA Astrophysics Data System (ADS)

    Decyk, Viktor

    2016-03-01

    A portable approach for designing Particle-in-Cell (PIC) algorithms on emerging exascale computers is based on the recognition that 3 distinct programming paradigms are needed. They are: low level vector (SIMD) processing, middle level shared memory parallel programming, and high level distributed memory programming. In addition, there is a memory hierarchy associated with each level. Such algorithms can be initially developed using vectorizing compilers, OpenMP, and MPI. This is the approach recommended by Intel for the Phi processor. These algorithms can then be translated and possibly specialized to other programming models and languages, as needed. For example, the vector processing and shared memory programming might be done with CUDA instead of vectorizing compilers and OpenMP, but generally the algorithm itself is not greatly changed. The UCLA PICKSC web site at http://www.idre.ucla.edu/ contains example open source skeleton codes (mini-apps) illustrating each of these three programming models, individually and in combination. Fortran2003 now supports abstract data types, and design patterns can be used to support a variety of implementations within the same code base. Fortran2003 also supports interoperability with C so that implementations in C languages are also easy to use. Finally, main codes can be translated into dynamic environments such as Python, while still taking advantage of high performing compiled languages. Parallel languages are still evolving with interesting developments in co-Array Fortran, UPC, and OpenACC, among others, and these can also be supported within the same software architecture. Work supported by NSF and DOE Grants.

  10. Scaling Irregular Applications through Data Aggregation and Software Multithreading

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morari, Alessandro; Tumeo, Antonino; Chavarría-Miranda, Daniel

    Bioinformatics, data analytics, semantic databases, knowledge discovery are emerging high performance application areas that exploit dynamic, linked data structures such as graphs, unbalanced trees or unstructured grids. These data structures usually are very large, requiring significantly more memory than available on single shared memory systems. Additionally, these data structures are difficult to partition on distributed memory systems. They also present poor spatial and temporal locality, thus generating unpredictable memory and network accesses. The Partitioned Global Address Space (PGAS) programming model seems suitable for these applications, because it allows using a shared memory abstraction across distributed-memory clusters. However, current PGAS languages and libraries are built to target regular remote data accesses and block transfers. Furthermore, they usually rely on the Single Program Multiple Data (SPMD) parallel control model, which is not well suited to the fine grained, dynamic and unbalanced parallelism of irregular applications. In this paper we present GMT (Global Memory and Threading library), a custom runtime library that enables efficient execution of irregular applications on commodity clusters. GMT integrates a PGAS data substrate with simple fork/join parallelism and provides automatic load balancing on a per node basis. It implements multi-level aggregation and lightweight multithreading to maximize memory and network bandwidth with fine-grained data accesses and tolerate long data access latencies. A key innovation in the GMT runtime is its thread specialization (workers, helpers and communication threads) that realize the overall functionality. We compare our approach with other PGAS models, such as UPC running using GASNet, and hand-optimized MPI code on a set of typical large-scale irregular applications, demonstrating speedups of an order of magnitude.

  11. Programming Models for Concurrency and Real-Time

    NASA Astrophysics Data System (ADS)

    Vitek, Jan

    Modern real-time applications are increasingly large, complex and concurrent systems which must meet stringent performance and predictability requirements. Programming those systems requires fundamental advances in programming languages and runtime systems. This talk presents our work on Flexotasks, a programming model for concurrent, real-time systems inspired by stream-processing and concurrent active objects. Some of the key innovations in Flexotasks are that it supports both real-time garbage collection and region-based memory with an ownership type system for static safety. Communication between tasks is performed by channels with a linear type discipline to avoid copying messages, and by a non-blocking transactional memory facility. We have evaluated our model empirically within two distinct implementations, one based on Purdue’s Ovm research virtual machine framework and the other on Websphere, IBM’s production real-time virtual machine. We have written a number of small programs, as well as a 30 KLOC avionics collision detector application. We show that Flexotasks are capable of executing periodic threads at 10 KHz with a standard deviation of 1.2 µs and have performance competitive with hand coded C programs.

  12. Vector computer memory bank contention

    NASA Technical Reports Server (NTRS)

    Bailey, D. H.

    1985-01-01

    A number of vector supercomputers feature very large memories. Unfortunately the large capacity memory chips that are used in these computers are much slower than the fast central processing unit (CPU) circuitry. As a result, memory bank reservation times (in CPU ticks) are much longer than on previous generations of computers. A consequence of these long reservation times is that memory bank contention is sharply increased, resulting in significantly lowered performance rates. The phenomenon of memory bank contention in vector computers is analyzed using both a Markov chain model and a Monte Carlo simulation program. The results of this analysis indicate that future generations of supercomputers must either employ much faster memory chips or else feature very large numbers of independent memory banks.
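    The Monte Carlo side of this analysis can be sketched in Python. The bank counts and reservation times below are arbitrary illustrative values, not measurements from any particular machine:

```python
import random

# Monte Carlo sketch of memory bank contention: a stream of vector accesses
# hits random banks; each access reserves its bank for `reserve` ticks.
# Longer reservation times (slow chips) produce more stalls, and more
# independent banks reduce them, which is the abstract's conclusion.
def stall_rate(nbanks, reserve, naccesses=20000, seed=1):
    rng = random.Random(seed)
    free_at = [0] * nbanks   # tick at which each bank becomes free again
    tick = stalls = 0
    for _ in range(naccesses):
        bank = rng.randrange(nbanks)
        if free_at[bank] > tick:
            stalls += 1
            tick = free_at[bank]     # wait for the reserved bank
        free_at[bank] = tick + reserve
        tick += 1
    return stalls / naccesses

slow = stall_rate(nbanks=16, reserve=8)
fast = stall_rate(nbanks=16, reserve=2)
many_banks = stall_rate(nbanks=256, reserve=8)
assert slow > fast and slow > many_banks  # faster chips or more banks both help
```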

  13. Vector computer memory bank contention

    NASA Technical Reports Server (NTRS)

    Bailey, David H.

    1987-01-01

    A number of vector supercomputers feature very large memories. Unfortunately the large capacity memory chips that are used in these computers are much slower than the fast central processing unit (CPU) circuitry. As a result, memory bank reservation times (in CPU ticks) are much longer than on previous generations of computers. A consequence of these long reservation times is that memory bank contention is sharply increased, resulting in significantly lowered performance rates. The phenomenon of memory bank contention in vector computers is analyzed using both a Markov chain model and a Monte Carlo simulation program. The results of this analysis indicate that future generations of supercomputers must either employ much faster memory chips or else feature very large numbers of independent memory banks.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Seyong; Vetter, Jeffrey S

    Computer architecture experts expect that non-volatile memory (NVM) hierarchies will play a more significant role in future systems including mobile, enterprise, and HPC architectures. With this expectation in mind, we present NVL-C: a novel programming system that facilitates the efficient and correct programming of NVM main memory systems. The NVL-C programming abstraction extends C with a small set of intuitive language features that target NVM main memory, and can be combined directly with traditional C memory model features for DRAM. We have designed these new features to enable compiler analyses and run-time checks that can improve performance and guard against a number of subtle programming errors, which, when left uncorrected, can corrupt NVM-stored data. Moreover, to enable recovery of data across application or system failures, these NVL-C features include a flexible directive for specifying NVM transactions. So that our implementation might be extended to other compiler front ends and languages, the majority of our compiler analyses are implemented in an extended version of LLVM's intermediate representation (LLVM IR). We evaluate NVL-C on a number of applications to show its flexibility, performance, and correctness.
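
    The record does not show NVL-C's actual transaction syntax (NVL-C extends C), but the generic mechanism such a directive relies on can be sketched: an undo log records each old value before the first overwrite, so a failure mid-transaction can be rolled back and NVM-stored data is never left half-updated. A minimal Python sketch, with all names illustrative:

```python
class NVRegion:
    """Toy stand-in for an NVM heap: a dict plus an undo log that makes a
    group of stores atomic with respect to a simulated crash."""
    def __init__(self):
        self.data = {}

    def transaction(self):
        return _Txn(self)

class _Txn:
    def __init__(self, region):
        self.region = region
        self.undo = {}                  # key -> value to restore on abort

    def __enter__(self):
        return self

    def store(self, key, value):
        # Undo logging: record the old value once, before the first overwrite.
        if key not in self.undo:
            self.undo[key] = self.region.data.get(key)
        self.region.data[key] = value

    def __exit__(self, exc_type, exc, tb):
        if exc_type is not None:        # simulated crash: roll back all stores
            for k, v in self.undo.items():
                if v is None:
                    self.region.data.pop(k, None)
                else:
                    self.region.data[k] = v
        return False                    # re-raise the exception

nv = NVRegion()
with nv.transaction() as t:
    t.store("balance", 100)
try:
    with nv.transaction() as t:
        t.store("balance", 40)
        raise RuntimeError("crash mid-transaction")
except RuntimeError:
    pass
assert nv.data["balance"] == 100        # old value restored after the crash
```

Real NVM systems additionally need to order the log writes before the data writes (cache-line flushes and fences), which this sketch omits.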

  15. Enhancing memory self-efficacy during menopause through a group memory strategies program.

    PubMed

    Unkenstein, Anne E; Bei, Bei; Bryant, Christina A

    2017-05-01

    Anxiety about memory during menopause can affect quality of life. We aimed to improve memory self-efficacy during menopause using a group memory strategies program. The program was run five times for a total of 32 peri- and postmenopausal women, aged between 47 and 60 years, recruited from hospital menopause and gynecology clinics. The 4-week intervention consisted of weekly 2-hour sessions, and covered how memory works, memory changes related to ageing, health and lifestyle factors, and specific memory strategies. Memory contentment (CT), reported frequency of forgetting (FF), use of memory strategies, psychological distress, and attitude toward menopause were measured. A double-baseline design was applied, with outcomes measured on two baseline occasions (1-month prior [T1] and in the first session [T2]), immediately postintervention (T3), and 3-month postintervention (T4). To describe changes in each variable between time points, paired-sample t tests were conducted. Mixed-effects models comparing the means of random slopes from T2 to T3 with those from T1 to T2 were conducted for each variable to test for treatment effects. Examination of the naturalistic changes in outcome measures from T1 to T2 revealed no significant changes (all Ps > 0.05). CT, reported FF, and use of memory strategies improved significantly more from T2 to T3, than from T1 to T2 (all Ps < 0.05). Neither attitude toward menopause nor psychological distress improved significantly more postintervention than during the double-baseline (all Ps > 0.05). Improvements in reported CT and FF were maintained after 3 months. The use of group interventions to improve memory self-efficacy during menopause warrants continued evaluation.

  16. Adiabatic quantum optimization for associative memory recall

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seddiqi, Hadayat; Humble, Travis S.

    Hopfield networks are a variant of associative memory that recall patterns stored in the couplings of an Ising model. Stored memories are conventionally accessed as fixed points in the network dynamics that correspond to energetic minima of the spin state. We show that memories stored in a Hopfield network may also be recalled by energy minimization using adiabatic quantum optimization (AQO). Numerical simulations of the underlying quantum dynamics allow us to quantify AQO recall accuracy with respect to the number of stored memories and noise in the input key. We investigate AQO performance with respect to how memories are stored in the Ising model according to different learning rules. Our results demonstrate that AQO recall accuracy varies strongly with learning rule, a behavior that is attributed to differences in energy landscapes. Consequently, learning rules offer a family of methods for programming adiabatic quantum optimization that we expect to be useful for characterizing AQO performance.
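
    The classical counterpart of the recall task studied here can be stated concretely: store patterns in the Ising couplings with a learning rule (Hebbian, in this sketch) and recall by descending the energy landscape. A minimal sketch, with the adiabatic quantum anneal replaced by simple asynchronous spin updates:

```python
def train_hebbian(patterns):
    """Store patterns in Ising couplings with the Hebbian rule:
    J[i][j] = (1/P) * sum over patterns of x_i * x_j, no self-coupling."""
    n = len(patterns[0])
    J = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    J[i][j] += p[i] * p[j] / len(patterns)
    return J

def recall(J, key, sweeps=10):
    """Recall by energy minimization: repeatedly align each spin with its
    local field (a classical stand-in for the quantum anneal)."""
    s = list(key)
    for _ in range(sweeps):
        for i in range(len(s)):
            h = sum(J[i][j] * s[j] for j in range(len(s)))
            s[i] = 1 if h >= 0 else -1
    return s

memory = [1, -1, 1, -1, 1, -1, 1, -1]
J = train_hebbian([memory])
noisy = list(memory)
noisy[0] = -noisy[0]               # corrupt one bit of the input key
assert recall(J, noisy) == memory  # the stored pattern is recovered
```

The paper's point is that the choice of learning rule (Hebbian, Storkey, projection, ...) reshapes this energy landscape and hence the recall accuracy of the anneal.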

  17. Adiabatic Quantum Optimization for Associative Memory Recall

    NASA Astrophysics Data System (ADS)

    Seddiqi, Hadayat; Humble, Travis

    2014-12-01

    Hopfield networks are a variant of associative memory that recall patterns stored in the couplings of an Ising model. Stored memories are conventionally accessed as fixed points in the network dynamics that correspond to energetic minima of the spin state. We show that memories stored in a Hopfield network may also be recalled by energy minimization using adiabatic quantum optimization (AQO). Numerical simulations of the underlying quantum dynamics allow us to quantify AQO recall accuracy with respect to the number of stored memories and noise in the input key. We investigate AQO performance with respect to how memories are stored in the Ising model according to different learning rules. Our results demonstrate that AQO recall accuracy varies strongly with learning rule, a behavior that is attributed to differences in energy landscapes. Consequently, learning rules offer a family of methods for programming adiabatic quantum optimization that we expect to be useful for characterizing AQO performance.

  18. Adiabatic quantum optimization for associative memory recall

    DOE PAGES

    Seddiqi, Hadayat; Humble, Travis S.

    2014-12-22

    Hopfield networks are a variant of associative memory that recall patterns stored in the couplings of an Ising model. Stored memories are conventionally accessed as fixed points in the network dynamics that correspond to energetic minima of the spin state. We show that memories stored in a Hopfield network may also be recalled by energy minimization using adiabatic quantum optimization (AQO). Numerical simulations of the underlying quantum dynamics allow us to quantify AQO recall accuracy with respect to the number of stored memories and noise in the input key. We investigate AQO performance with respect to how memories are stored in the Ising model according to different learning rules. Our results demonstrate that AQO recall accuracy varies strongly with learning rule, a behavior that is attributed to differences in energy landscapes. Consequently, learning rules offer a family of methods for programming adiabatic quantum optimization that we expect to be useful for characterizing AQO performance.

  19. Facilitating change in health-related behaviors and intentions: a randomized controlled trial of a multidimensional memory program for older adults.

    PubMed

    Wiegand, Melanie A; Troyer, Angela K; Gojmerac, Christina; Murphy, Kelly J

    2013-01-01

    Many older adults are concerned about memory changes with age and consequently seek ways to optimize their memory function. Memory programs are known to be variably effective in improving memory knowledge, other aspects of metamemory, and/or objective memory, but little is known about their impact on implementing and sustaining lifestyle and healthcare-seeking intentions and behaviors. We evaluated a multidimensional, evidence-based intervention, the Memory and Aging Program, that provides education about memory and memory change, training in the use of practical memory strategies, and support for implementation of healthy lifestyle behavior changes. In a randomized controlled trial, 42 healthy older adults participated in a program (n = 21) or a waitlist control (n = 21) group. Relative to the control group, participants in the program implemented more healthy lifestyle behaviors by the end of the program and maintained these changes 1 month later. Similarly, program participants reported a decreased intention to seek unnecessary medical attention for their memory immediately after the program and 1 month later. Findings support the use of multidimensional memory programs to promote healthy lifestyles and influence healthcare-seeking behaviors. Discussion focuses on implications of these changes for maximizing cognitive health and minimizing impact on healthcare resources.

  20. Efficient iteration in data-parallel programs with irregular and dynamically distributed data structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Littlefield, R.J.

    1990-02-01

    To implement an efficient data-parallel program on a non-shared memory MIMD multicomputer, data and computations must be properly partitioned to achieve good load balance and locality of reference. Programs with irregular data reference patterns often require irregular partitions. Although good partitions may be easy to determine, they can be difficult or impossible to implement in programming languages that provide only regular data distributions, such as blocked or cyclic arrays. We are developing Onyx, a programming system that provides a shared memory model of distributed data structures and extends the concept of data distribution to include irregular and dynamic distributions. This provides a powerful means to specify irregular partitions. Perhaps surprisingly, programs using it can also execute efficiently. In this paper, we describe and evaluate the Onyx implementation of a model problem that repeatedly executes an irregular but fixed data reference pattern. On an NCUBE hypercube, the speed of the Onyx implementation is comparable to that of carefully handwritten message-passing code.

  1. A simple modern correctness condition for a space-based high-performance multiprocessor

    NASA Technical Reports Server (NTRS)

    Probst, David K.; Li, Hon F.

    1992-01-01

    A number of U.S. national programs, including space-based detection of ballistic missile launches, envisage putting significant computing power into space. Given sufficient progress in low-power VLSI, multichip-module packaging and liquid-cooling technologies, we will see design of high-performance multiprocessors for individual satellites. In very high speed implementations, performance depends critically on tolerating large latencies in interprocessor communication; without latency tolerance, performance is limited by the vastly differing time scales in processor and data-memory modules, including interconnect times. The modern approach to tolerating remote-communication cost in scalable, shared-memory multiprocessors is to use a multithreaded architecture, and alter the semantics of shared memory slightly, at the price of forcing the programmer either to reason about program correctness in a relaxed consistency model or to agree to program in a constrained style. The literature on multiprocessor correctness conditions has become increasingly complex, and sometimes confusing, which may hinder its practical application. We propose a simple modern correctness condition for a high-performance, shared-memory multiprocessor; the correctness condition is based on a simple interface between the multiprocessor architecture and the parallel programming system.

  2. Nonvolatile Memory Technology for Space Applications

    NASA Technical Reports Server (NTRS)

    Oldham, Timothy R.; Irom, Farokh; Friendlich, Mark; Nguyen, Duc; Kim, Hak; Berg, Melanie; LaBel, Kenneth A.

    2010-01-01

    This slide presentation reviews several forms of nonvolatile memory for use in space applications. The intent is to: (1) Determine inherent radiation tolerance and sensitivities, (2) Identify challenges for future radiation hardening efforts, (3) Investigate new failure modes and effects, and technology modeling programs. Testing includes total dose, single event (proton, laser, heavy ion), and proton damage (where appropriate). Test vehicles are expected to be a variety of non-volatile memory devices as available including Flash (NAND and NOR), Charge Trap, Nanocrystal Flash, Magnetic Memory (MRAM), Phase Change--Chalcogenide, (CRAM), Ferroelectric (FRAM), CNT, and Resistive RAM.

  3. Study of self-compliance behaviors and internal filament characteristics in intrinsic SiO{sub x}-based resistive switching memory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, Yao-Feng, E-mail: yfchang@utexas.edu; Zhou, Fei; Chen, Ying-Chen

    2016-01-18

    Self-compliance characteristics and reliability optimization are investigated in intrinsic unipolar silicon oxide (SiO{sub x})-based resistive switching (RS) memory using TiW/SiO{sub x}/TiW device structures. The program window (difference between SET voltage and RESET voltage) is dependent on external series resistance, demonstrating that the SET process is due to a voltage-triggered mechanism. The program window has been optimized for program/erase disturbance immunity and reliability for circuit-level applications. The SET and RESET transitions have also been characterized using a dynamic conductivity method, which distinguishes the self-compliance behavior due to an internal series resistance effect (filament) in SiO{sub x}-based RS memory. By using a conceptual “filament/resistive gap (GAP)” model of the conductive filament and a proton exchange model with appropriate assumptions, the internal filament resistance and GAP resistance can be estimated for high- and low-resistance states (HRS and LRS), and are found to be independent of external series resistance. Our experimental results not only provide insights into potential reliability issues but also help to clarify the switching mechanisms and device operating characteristics of SiO{sub x}-based RS memory.

  4. Modeling the glass transition of amorphous networks for shape-memory behavior

    NASA Astrophysics Data System (ADS)

    Xiao, Rui; Choi, Jinwoo; Lakhera, Nishant; Yakacki, Christopher M.; Frick, Carl P.; Nguyen, Thao D.

    2013-07-01

    In this paper, a thermomechanical constitutive model was developed for the time-dependent behaviors of the glass transition of amorphous networks. The model used multiple discrete relaxation processes to describe the distribution of relaxation times for stress relaxation, structural relaxation, and stress-activated viscous flow. A non-equilibrium thermodynamic framework based on the fictive temperature was introduced to demonstrate the thermodynamic consistency of the constitutive theory. Experimental and theoretical methods were developed to determine the parameters describing the distribution of stress and structural relaxation times and the dependence of the relaxation times on temperature, structure, and driving stress. The model was applied to study the effects of deformation temperatures and physical aging on the shape-memory behavior of amorphous networks. The model was able to reproduce important features of the partially constrained recovery response observed in experiments. Specifically, the model demonstrated a strain-recovery overshoot for cases programmed below Tg and subjected to a constant mechanical load. This phenomenon was not observed for materials programmed above Tg. Physical aging, in which the material was annealed for an extended period of time below Tg, shifted the activation of strain recovery to higher temperatures and increased significantly the initial recovery rate. For fixed-strain recovery, the model showed a larger overshoot in the stress response for cases programmed below Tg, which was consistent with previous experimental observations. Altogether, this work demonstrates how an understanding of the time-dependent behaviors of the glass transition can be used to tailor the temperature and deformation history of the shape-memory programming process to achieve more complex shape recovery pathways, faster recovery responses, and larger activation stresses.
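
    The "multiple discrete relaxation processes" used in the model above are conventionally written as a Prony series, G(t)/G0 = g_inf + sum_i w_i exp(-t/tau_i). A minimal sketch with illustrative parameters (not fitted to any material, and omitting the temperature and structure dependence of the relaxation times):

```python
import math

def prony_relaxation(t, weights, taus, g_inf=0.1):
    """Normalized relaxation modulus G(t)/G0 as a discrete relaxation
    spectrum: g_inf + sum_i w_i * exp(-t / tau_i), with g_inf + sum w_i = 1."""
    return g_inf + sum(w * math.exp(-t / tau) for w, tau in zip(weights, taus))

# Illustrative three-branch spectrum spanning two decades of relaxation times.
weights = [0.5, 0.3, 0.1]
taus = [0.1, 1.0, 10.0]    # seconds

assert abs(prony_relaxation(0.0, weights, taus) - 1.0) < 1e-12   # glassy at t = 0
assert prony_relaxation(1e6, weights, taus) - 0.1 < 1e-6         # rubbery plateau
# The modulus decays monotonically between the two limits:
samples = [prony_relaxation(t, weights, taus) for t in (0.0, 0.1, 1.0, 10.0, 100.0)]
assert all(a > b for a, b in zip(samples, samples[1:]))
```

In the paper's constitutive model the tau_i additionally shift with temperature, fictive temperature (structure), and driving stress, which is what couples the spectrum to shape-memory programming history.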

  5. Regulators of Long-Term Memory Revealed by Mushroom Body-Specific Gene Expression Profiling in Drosophila melanogaster.

    PubMed

    Widmer, Yves F; Bilican, Adem; Bruggmann, Rémy; Sprecher, Simon G

    2018-06-20

    Memory formation is achieved by genetically tightly controlled molecular pathways that result in a change of synaptic strength and synapse organization. While for short-term memory traces rapidly acting biochemical pathways are in place, the formation of long-lasting memories requires changes in the transcriptional program of a cell. Although many genes involved in learning and memory formation have been identified, little is known about the genetic mechanisms required for changing the transcriptional program during different phases of long-term memory formation. With Drosophila melanogaster as a model system we profiled transcriptomic changes in the mushroom body, a memory center in the fly brain, at distinct time intervals during appetitive olfactory long-term memory formation using the targeted DamID technique. We describe the gene expression profiles during these phases and tested 33 selected candidate genes for deficits in long-term memory formation using RNAi knockdown. We identified 10 genes that enhance or decrease memory when knocked down in the mushroom body. For vajk-1 and hacd1, the two strongest hits, we gained further support for their crucial role in appetitive learning and forgetting. These findings show that profiling gene expression changes in specific cell types harboring memory traces provides a powerful entry point to identify new genes involved in learning and memory. The presented transcriptomic data may further be used as a resource to study genes acting at different memory phases. Copyright © 2018, Genetics.

  6. Cycle accurate and cycle reproducible memory for an FPGA based hardware accelerator

    DOEpatents

    Asaad, Sameh W.; Kapur, Mohit

    2016-03-15

    A method, system and computer program product are disclosed for using a Field Programmable Gate Array (FPGA) to simulate operations of a device under test (DUT). The DUT includes a device memory having a first number of input ports, and the FPGA is associated with a target memory having a second number of input ports, the second number being less than the first number. In one embodiment, a given set of inputs is applied to the device memory at a frequency Fd and in a defined cycle of time, and the given set of inputs is applied to the target memory at a frequency Ft. Ft is greater than Fd and cycle accuracy is maintained between the device memory and the target memory. In an embodiment, a cycle accurate model of the DUT memory is created by separating the DUT memory interface protocol from the target memory storage array.
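
    The frequency relationship can be made concrete: a target memory with fewer ports must run fast enough to time-multiplex all of the DUT's port accesses inside one DUT cycle, which keeps the model cycle accurate. The record does not state the exact ratio the patent uses; the following is a sketch of one natural sufficient condition:

```python
import math

def required_target_freq(fd_hz, dut_ports, target_ports):
    """Minimum target-memory clock Ft that lets a memory with fewer ports
    service all DUT port accesses inside one DUT cycle, by time-multiplexing
    ceil(dut_ports / target_ports) target cycles per DUT cycle."""
    if target_ports >= dut_ports:
        return fd_hz                       # no multiplexing needed: Ft = Fd
    return fd_hz * math.ceil(dut_ports / target_ports)

# e.g. a 4-port DUT memory mapped onto a single-port FPGA RAM, 100 MHz DUT clock:
assert required_target_freq(100e6, 4, 1) == 400e6
assert required_target_freq(100e6, 4, 2) == 200e6
assert required_target_freq(100e6, 3, 2) == 200e6   # ceil(3/2) = 2 target cycles
```

Because every DUT-cycle access completes before the next DUT cycle begins, the target memory's state at each DUT clock edge matches the modeled device memory exactly, which is what cycle accuracy (and cycle reproducibility) requires.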

  7. Real-time implementations of image segmentation algorithms on shared memory multicore architecture: a survey (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Akil, Mohamed

    2017-05-01

    Real-time processing is getting more and more important in many image processing applications. Image segmentation is one of the most fundamental tasks in image analysis. As a consequence, many different approaches for image segmentation have been proposed. The watershed transform is a well-known image segmentation tool, and a very data-intensive task. To achieve acceleration and obtain real-time processing of watershed algorithms, parallel architectures and programming models for multicore computing have been developed. This paper focuses on a survey of approaches for parallel implementation of sequential watershed algorithms on multicore general-purpose CPUs: homogeneous multicore processors with shared memory. To achieve an efficient parallel implementation, it is necessary to explore different strategies (parallelization/distribution/distributed scheduling) combined with different acceleration and optimization techniques to enhance parallelism. In this paper, we give a comparison of various parallelizations of sequential watershed algorithms on shared memory multicore architecture. We analyze the performance measurements of each parallel implementation and the impact of the different sources of overhead on the performance of the parallel implementations. In this comparison study, we also discuss the advantages and disadvantages of the parallel programming models. Thus, we compare OpenMP (an application programming interface for multiprocessing) with Pthreads (POSIX Threads) to illustrate the impact of each parallel programming model on the performance of the parallel implementations.

  8. CELLFS: TAKING THE "DMA" OUT OF CELL PROGRAMMING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    IONKOV, LATCHESAR A.; MIRTCHOVSKI, ANDREY A.; NYRHINEN, AKI M.

    In this paper, we present a new programming model for the Cell BE architecture of scalar multiprocessors. We call this programming model CellFS. CellFS aims at simplifying the task of managing I/O between the local store of the processing units and main memory. The CellFS support library provides the means for transferring data via simple file I/O operations between the PPU and the SPU.

  9. Automatic selection of dynamic data partitioning schemes for distributed memory multicomputers

    NASA Technical Reports Server (NTRS)

    Palermo, Daniel J.; Banerjee, Prithviraj

    1995-01-01

    For distributed memory multicomputers such as the Intel Paragon, the IBM SP-2, the NCUBE/2, and the Thinking Machines CM-5, the quality of the data partitioning for a given application is crucial to obtaining high performance. This task has traditionally been the user's responsibility, but in recent years much effort has been directed to automating the selection of data partitioning schemes. Several researchers have proposed systems that are able to produce data distributions that remain in effect for the entire execution of an application. For complex programs, however, such static data distributions may be insufficient to obtain acceptable performance. The selection of distributions that dynamically change over the course of a program's execution adds another dimension to the data partitioning problem. In this paper, we present a technique that can be used to automatically determine which partitionings are most beneficial over specific sections of a program while taking into account the added overhead of performing redistribution. This system is being built as part of the PARADIGM (PARAllelizing compiler for DIstributed memory General-purpose Multicomputers) project at the University of Illinois. The complete system will provide a fully automated means to parallelize programs written in a serial programming model obtaining high performance on a wide range of distributed-memory multicomputers.
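
    The core trade-off in dynamic partitioning is whether a redistribution's one-time communication cost is repaid by faster execution of the following program section. A minimal cost-model sketch (illustrative only, not PARADIGM's actual cost model):

```python
def should_redistribute(t_current, t_better, iterations_left, redistribution_cost):
    """Switch to the better partitioning only if its cumulative per-iteration
    saving over the remaining iterations outweighs the one-time transfer cost.
    All times are in seconds; the numbers below are hypothetical."""
    saving = (t_current - t_better) * iterations_left
    return saving > redistribution_cost

# A phase that runs 2 ms/iteration faster under the new distribution is worth
# a 50 ms redistribution only if more than 25 iterations remain:
assert not should_redistribute(10e-3, 8e-3, 20, 50e-3)   # 40 ms saved < 50 ms cost
assert should_redistribute(10e-3, 8e-3, 30, 50e-3)       # 60 ms saved > 50 ms cost
```

A compiler like PARADIGM applies this kind of comparison per program section, which is why static distributions (the degenerate case of never redistributing) can lose to dynamic ones on complex programs.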

  10. SharP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Venkata, Manjunath Gorentla; Aderholdt, William F

    The pre-exascale systems are expected to have a significant amount of hierarchical and heterogeneous on-node memory, and this trend of system architecture in extreme-scale systems is expected to continue into the exascale era. Along with hierarchical-heterogeneous memory, the system typically has a high-performing network and a compute accelerator. This system architecture is not only effective for running traditional High Performance Computing (HPC) applications (Big-Compute), but also for running data-intensive HPC applications and Big-Data applications. As a consequence, there is a growing desire to have a single system serve the needs of both Big-Compute and Big-Data applications. Though the system architecture supports the convergence of Big-Compute and Big-Data, the programming models and software layer have yet to evolve to support either hierarchical-heterogeneous memory systems or the convergence. We present SharP, a programming abstraction to address this problem. The programming abstraction is implemented as a software library and runs on pre-exascale and exascale systems supporting current and emerging system architecture. Using distributed data-structures as a central concept, it provides (1) a simple, usable, and portable abstraction for hierarchical-heterogeneous memory and (2) a unified programming abstraction for Big-Compute and Big-Data applications.

  11. Test program for 4-K memory card, JOLT microprocessor

    NASA Technical Reports Server (NTRS)

    Lilley, R. W.

    1976-01-01

    A memory test program is described for use with the JOLT microcomputer 4,096-word memory board used in development of an Omega navigation receiver. The program allows a quick test of the memory board by cycling the memory through all possible bit combinations in all words.
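
    A test of this kind can be sketched in a few lines (Python here stands in for the original JOLT assembly, and the list stands in for the 4K memory board): write every possible word value to every address and verify the readback.

```python
def test_memory(mem_size=4096, word_bits=8):
    """Cycle a simulated memory through all 2^word_bits bit combinations in
    all words, verifying each readback. On real hardware the loop body would
    be store/load operations on the memory board itself."""
    mem = [0] * mem_size
    for pattern in range(1 << word_bits):      # all possible bit combinations
        for addr in range(mem_size):
            mem[addr] = pattern                # write pass
        for addr in range(mem_size):
            if mem[addr] != pattern:
                return addr                    # first failing address
    return None                                # every pattern verified everywhere

assert test_memory(4096, 8) is None            # a healthy (simulated) 4K board
```

Writing the whole board before reading it back also catches simple address-decoding faults, since a store to one address that corrupts another shows up in the read pass.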

  12. Medical Music Therapy: A Model Program for Clinical Practice, Education, Training and Research

    ERIC Educational Resources Information Center

    Standley, Jayne

    2005-01-01

    This monograph evolved from the unique, innovative partnership between the Florida State University Music Therapy Program and Tallahassee Memorial HealthCare. Its purpose is to serve as a model for music therapy educators, students, clinicians, and the hospital administrators who might employ them. This book should prove a valuable resource for…

  13. Total Ionizing Dose Influence on the Single Event Effect Sensitivity in Samsung 8Gb NAND Flash Memories

    NASA Astrophysics Data System (ADS)

    Edmonds, Larry D.; Irom, Farokh; Allen, Gregory R.

    2017-08-01

    A recent model provides risk estimates for the deprogramming of initially programmed floating gates via prompt charge loss produced by an ionizing radiation environment. The environment can be a mixture of electrons, protons, and heavy ions. The model requires several input parameters. This paper extends the model to include TID effects in the control circuitry by including one additional parameter. Parameters intended to produce conservative risk estimates for the Samsung 8 Gb SLC NAND flash memory are given, subject to some qualifications.

  14. Implications of the Turing machine model of computation for processor and programming language design

    NASA Astrophysics Data System (ADS)

    Hunter, Geoffrey

    2004-01-01

    A computational process is classified according to the theoretical model that is capable of executing it; computational processes that require a non-predeterminable amount of intermediate storage for their execution are Turing-machine (TM) processes, while those whose storage is predeterminable are finite-automaton (FA) processes. Simple processes (such as a traffic-light controller) are executable by a finite automaton, whereas the most general kind of computation requires a Turing machine for its execution. This implies that a TM process must have a non-predeterminable amount of memory allocated to it at intermediate instants of its execution; i.e. dynamic memory allocation. Many processes encountered in practice are TM processes. The implication for computational practice is that the hardware (CPU) architecture and its operating system must facilitate dynamic memory allocation, and that the programming language used to specify TM processes must have statements with the semantic attribute of dynamic memory allocation, for in Alan Turing's thesis on computation (1936) the "standard description" of a process is invariant over the most general data that the process is designed to process; i.e. the program describing the process should never have to be modified to allow for differences in the data that is to be processed in different instantiations; i.e. data-invariant programming. Any non-trivial program is partitioned into sub-programs (procedures, subroutines, functions, modules, etc). Examination of the calls/returns between the subprograms reveals that they are nodes in a tree-structure; this tree-structure is independent of the programming language used to encode (define) the process.
    Each sub-program typically needs some memory for its own use (to store values intermediate between its received data and its computed results); this locally required memory is not needed before the subprogram commences execution, and it is not needed after its execution terminates; it may be allocated as its execution commences, and deallocated as its execution terminates, and if the amount of this local memory is not known until just before execution commencement, then it is essential that it be allocated dynamically as the first action of its execution. This dynamically allocated/deallocated storage of each subprogram's intermediate values conforms with the stack discipline; i.e. last allocated = first to be deallocated, an incidental benefit of which is automatic overlaying of variables. This stack-based dynamic memory allocation was a semantic implication of the nested block structure that originated in the ALGOL-60 programming language. ALGOL-60 was a TM language, because the amount of memory allocated on subprogram (block/procedure) entry (for arrays, etc) was computable at execution time. A more general requirement of a Turing machine process is for code generation at run-time; this mandates access to the source language processor (compiler/interpreter) during execution of the process. This fundamental aspect of computer science is important to the future of system design, because it has been overlooked throughout the 55 years since modern computing began in 1948. The popular computer systems of this first half-century of computing were constrained by compile-time (or even operating system boot-time) memory allocation, and were thus limited to executing FA processes.
    The practical effect was that the distinction between the data-invariant program and its variable data was blurred; programmers had to make trial-and-error executions, modifying the program's compile-time constants (array dimensions) to iterate towards the values required at run-time by the data being processed. This era of trial-and-error computing still persists; it pervades the culture of current (2003) computing practice.
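
    The stack discipline described above (locals allocated on block entry, released LIFO on exit, with sizes computable only at run time) can be sketched with a context-manager-based allocator; all names are illustrative:

```python
class StackAllocator:
    """Toy model of ALGOL-60-style stack allocation: each block's locals are
    carved from the top of a single stack and released, LIFO, on block exit."""
    def __init__(self, size):
        self.size = size
        self.top = 0

    def block(self, n_words):
        # n_words may be computed at run time from the data being processed,
        # which is exactly the TM property the article argues for.
        return _Frame(self, n_words)

class _Frame:
    def __init__(self, alloc, n_words):
        self.alloc, self.n = alloc, n_words

    def __enter__(self):
        if self.alloc.top + self.n > self.alloc.size:
            raise MemoryError("stack overflow")
        self.base = self.alloc.top
        self.alloc.top += self.n           # allocate on block entry
        return self.base                   # base address of this block's locals

    def __exit__(self, *exc):
        self.alloc.top = self.base         # deallocate (and overlay) on exit
        return False

heap = StackAllocator(1024)
with heap.block(100):                      # outer procedure's locals
    with heap.block(200):                  # nested call: allocated above them
        assert heap.top == 300
    assert heap.top == 100                 # inner frame gone; memory overlaid
assert heap.top == 0
```

The "automatic overlaying of variables" the article mentions falls out for free: sibling blocks that are never live at the same time reuse the same words at the top of the stack.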

  15. Wnt signaling inhibits CTL memory programming

    PubMed Central

    Xiao, Zhengguo; Sun, Zhifeng; Smyth, Kendra; Li, Lei

    2013-01-01

    Induction of functional CTLs is one of the major goals for vaccine development and cancer therapy. Inflammatory cytokines are critical for memory CTL generation. Wnt signaling is important for CTL priming and memory formation, but its role in cytokine-driven memory CTL programming is unclear. We found that wnt signaling inhibited IL-12-driven CTL activation and memory programming. This impaired memory CTL programming was attributed to up-regulation of eomes and down-regulation of T-bet. Wnt signaling suppressed the mTOR pathway during CTL activation, which was different to its effects on other cell types. Interestingly, the impaired memory CTL programming by wnt was partially rescued by mTOR inhibitor rapamycin. In conclusion, we found that crosstalk between wnt and the IL-12 signaling inhibits T-bet and mTOR pathways and impairs memory programming which can be recovered in part by rapamycin. In addition, direct inhibition of wnt signaling during CTL activation does not affect CTL memory programming. Therefore, wnt signaling may serve as a new tool for CTL manipulation in autoimmune diseases and immune therapy for certain cancers. PMID:23911398

  16. The Event-Related Brain Potential as an Index of Information Processing and Cognitive Activity: A Program of Basic Research.

    DTIC Science & Technology

    1988-02-29

    reciprocity: An event-related brain potentials analysis. Acta Psychologica. Submitted for publication. 21. Stolar, N., Sparenborg, S., Donchin, E. ... (in press) argued that it is a manifestation of a process related to the updating of models of the environment or context in working memory. Such an ... suggesting ... may involve working memory, but they do not hold any privileged relation to working memory. However, he immediately proceeds to narrow

  17. Lattice QCD simulations using the OpenACC platform

    NASA Astrophysics Data System (ADS)

    Majumdar, Pushan

    2016-10-01

    In this article we will explore the OpenACC platform for programming Graphics Processing Units (GPUs). The OpenACC platform offers a directive based programming model for GPUs which avoids the detailed data flow control and memory management necessary in a CUDA programming environment. In the OpenACC model, programs can be written in high level languages with OpenMP like directives. We present some examples of QCD simulation codes using OpenACC and discuss their performance on the Fermi and Kepler GPUs.

  18. Modeling Active Aging and Explicit Memory: An Empirical Study.

    PubMed

    Ponce de León, Laura Ponce; Lévy, Jean Pierre; Fernández, Tomás; Ballesteros, Soledad

    2015-08-01

    The rapid growth of the population of older adults and their concomitant psychological status and health needs have captured the attention of researchers and health professionals. To help fill the void of literature available to social workers interested in mental health promotion and aging, the authors provide a model for active aging that uses psychosocial variables. Structural equation modeling was used to examine the relationships among the latent variables of the state of explicit memory, the perception of social resources, depression, and the perception of quality of life in a sample of 184 older adults. The results suggest that explicit memory is not a direct indicator of the perception of quality of life, but it could be considered an indirect indicator as it is positively correlated with perception of social resources and negatively correlated with depression. These last two variables influenced the perception of quality of life directly, the former positively and the latter negatively. The main outcome suggests that the perception of social support improves explicit memory and quality of life and reduces depression in active older adults. The findings also suggest that gerontological professionals should design memory training programs, improve available social resources, and offer environments with opportunities to exercise memory.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sayan Ghosh, Jeff Hammond

OpenSHMEM is a community effort to unify and standardize the SHMEM programming model. MPI (Message Passing Interface) is a well-known community standard for parallel programming using distributed memory. The most recent release of MPI, version 3.0, was designed in part to support programming models like SHMEM. OSHMPI is an implementation of the OpenSHMEM standard using MPI-3 for the Linux operating system. It is the first implementation of SHMEM over MPI one-sided communication and has the potential to be widely adopted due to the portability and wide availability of Linux and MPI-3. OSHMPI has been tested on a variety of systems and implementations of MPI-3, including InfiniBand clusters using MVAPICH2 and SGI shared-memory supercomputers using MPICH. Current support is limited to Linux but may be extended to Apple OS X if there is sufficient interest. The code is open source via https://github.com/jeffhammond/oshmpi

  20. Self-Regulation and Recall: Growth Curve Modeling of Intervention Outcomes for Older Adults

    PubMed Central

    West, Robin L.; Hastings, Erin C.

    2013-01-01

    Memory training has often been supported as a potential means to improve performance for older adults. Less often studied are the characteristics of trainees that benefit most from training. Using a self-regulatory perspective, the current project examined a latent growth curve model to predict training-related gains for middle-aged and older adult trainees from individual differences (e.g., education), information processing skills (strategy use) and self-regulatory factors such as self-efficacy, control, and active engagement in training. For name recall, a model including strategy usage and strategy change as predictors of memory gain, along with self-efficacy and self-efficacy change, showed comparable fit to a more parsimonious model including only self-efficacy variables as predictors. The best fit to the text recall data was a model focusing on self-efficacy change as the main predictor of memory change, and that model showed significantly better fit than a model also including strategy usage variables as predictors. In these models, overall performance was significantly predicted by age and memory self-efficacy, and subsequent training-related gains in performance were best predicted directly by change in self-efficacy (text recall), or indirectly through the impact of active engagement and self-efficacy on gains (name recall). These results underscore the benefits of targeting self-regulatory factors in intervention programs designed to improve memory skills. PMID:21604891

  1. Self-regulation and recall: growth curve modeling of intervention outcomes for older adults.

    PubMed

    West, Robin L; Hastings, Erin C

    2011-12-01

    Memory training has often been supported as a potential means to improve performance for older adults. Less often studied are the characteristics of trainees that benefit most from training. Using a self-regulatory perspective, the current project examined a latent growth curve model to predict training-related gains for middle-aged and older adult trainees from individual differences (e.g., education), information processing skills (strategy use) and self-regulatory factors such as self-efficacy, control, and active engagement in training. For name recall, a model including strategy usage and strategy change as predictors of memory gain, along with self-efficacy and self-efficacy change, showed comparable fit to a more parsimonious model including only self-efficacy variables as predictors. The best fit to the text recall data was a model focusing on self-efficacy change as the main predictor of memory change, and that model showed significantly better fit than a model also including strategy usage variables as predictors. In these models, overall performance was significantly predicted by age and memory self-efficacy, and subsequent training-related gains in performance were best predicted directly by change in self-efficacy (text recall), or indirectly through the impact of active engagement and self-efficacy on gains (name recall). These results underscore the benefits of targeting self-regulatory factors in intervention programs designed to improve memory skills.

  2. [Artificial intelligence meeting neuropsychology. Semantic memory in normal and pathological aging].

    PubMed

    Aimé, Xavier; Charlet, Jean; Maillet, Didier; Belin, Catherine

    2015-03-01

Artificial intelligence (AI) is the subject of much research, but also of many fantasies. It aims to reproduce human intelligence in its capacities for learning, knowledge storage, and computation. In 2014, the Defense Advanced Research Projects Agency (DARPA) started the Restoring Active Memory (RAM) program, which attempts to develop implantable technology to bridge gaps in the injured brain and restore normal memory function to people with memory loss caused by injury or disease. In another field of AI, computational ontologies (formal, shared conceptualizations) model knowledge in order to represent a structured and unambiguous meaning of the concepts of a target domain. The aim of these structures is to ensure a consensual understanding of their meaning and a univocal use (the same concept is used by all to categorize the same individuals). The first knowledge representations in AI were largely based on models of semantic memory. Semantic memory, as a component of long-term memory, is the memory of words, ideas, and concepts. It is the only declarative memory system that resists the effects of age so remarkably. By contrast, nonspecific cognitive changes may decrease the performance of elderly people on various tasks, reflecting difficulties of access to semantic representations rather than degradation of the semantic store itself. Some dementias, such as semantic dementia and Alzheimer's disease, are linked to alteration of semantic memory. Using the computational ontologies model, we propose in this paper a formal and relatively lightweight modeling in the service of neuropsychology: 1) for the practitioner, with decision-support systems; 2) for the patient, as an outsourced cognitive prosthesis; and 3) for the researcher, to study semantic memory.

  3. Multibit Polycrystalline Silicon-Oxide-Silicon Nitride-Oxide-Silicon Memory Cells with High Density Designed Utilizing a Separated Control Gate

    NASA Astrophysics Data System (ADS)

    Rok Kim, Kyeong; You, Joo Hyung; Dal Kwack, Kae; Kim, Tae Whan

    2010-10-01

Unique multibit NAND polycrystalline silicon-oxide-silicon nitride-oxide-silicon (SONOS) memory cells utilizing a separated control gate (SCG) were designed to increase memory density. The proposed NAND SONOS memory device based on an SCG structure operated as two bits, resulting in an increase in the storage density of the NVM devices in comparison with conventional single-bit memories. The electrical properties of the SONOS memory cells with an SCG were investigated to clarify the charging effects in the SONOS memory cells. When the program voltage was supplied to each gate of the NAND SONOS flash memory cells, electrons were trapped in the nitride region of the oxide-nitride-oxide layer under the gate to which the program voltage was supplied. The electrons accumulated without affecting the other gate during the programming operation, indicating the absence of cross-talk between the two trapped-charge regions. It is expected that the interference effect will be suppressed because the program voltage is lower than that of conventional NAND flash memory. The simulation results indicate that the proposed unique NAND SONOS memory cells with an SCG can be used to increase memory density.

  4. Program Model Checking: A Practitioner's Guide

    NASA Technical Reports Server (NTRS)

    Pressburger, Thomas T.; Mansouri-Samani, Masoud; Mehlitz, Peter C.; Pasareanu, Corina S.; Markosian, Lawrence Z.; Penix, John J.; Brat, Guillaume P.; Visser, Willem C.

    2008-01-01

Program model checking is a verification technology that uses state-space exploration to evaluate large numbers of potential program executions. Program model checking provides improved coverage over testing by systematically evaluating all possible test inputs and all possible interleavings of threads in a multithreaded system. Model-checking algorithms use several classes of optimizations to reduce the time and memory requirements for analysis, as well as heuristics for meaningful analysis of partial areas of the state space. Our goal in this guidebook is to assemble, distill, and demonstrate emerging best practices for applying program model checking. We offer it as a starting point and introduction for those who want to apply model checking to software verification and validation. The guidebook will not discuss any specific tool in great detail, but we provide references for specific tools.

  5. Human Memory Organization for Computer Programs.

    ERIC Educational Resources Information Center

    Norcio, A. F.; Kerst, Stephen M.

    1983-01-01

Results of a study investigating human memory organization in the processing of computer programming languages indicate that algorithmic logic segments form a cognitive organizational structure in memory for programs. Statement indentation and internal program documentation did not enhance the organizational process of recall of statements in five Fortran…

  6. Explicit time integration of finite element models on a vectorized, concurrent computer with shared memory

    NASA Technical Reports Server (NTRS)

    Gilbertsen, Noreen D.; Belytschko, Ted

    1990-01-01

    The implementation of a nonlinear explicit program on a vectorized, concurrent computer with shared memory is described and studied. The conflict between vectorization and concurrency is described and some guidelines are given for optimal block sizes. Several example problems are summarized to illustrate the types of speed-ups which can be achieved by reprogramming as compared to compiler optimization.

  7. Impact of high-κ dielectric and metal nanoparticles in simultaneous enhancement of programming speed and retention time of nano-flash memory

    NASA Astrophysics Data System (ADS)

    Pavel, Akeed A.; Khan, Mehjabeen A.; Kirawanich, Phumin; Islam, N. E.

    2008-10-01

A methodology to simulate memory structures with metal nanocrystal islands embedded as a floating gate in a high-κ dielectric material for simultaneous enhancement of programming speed and retention time is presented. The computational concept is based on a model for charge transport in nano-scaled structures presented earlier, where quantum mechanical tunneling is defined through the wave impedance, in analogy to transmission line theory. The effects of the substrate-tunnel dielectric conduction band offset and the metal work function on the tunneling current, which determines the programming speed and retention time, are demonstrated. Simulation results confirm that a high-κ dielectric material can increase programming current due to its lower conduction band offset with the substrate and also can be effectively integrated with suitable embedded metal nanocrystals having a high work function for efficient data retention. A nano-memory cell designed with silver (Ag) nanocrystals embedded in Al2O3 has been compared with a similar structure consisting of Si nanocrystals in SiO2 to validate the concept.

  8. Effects of a Memory Training Program in Older People with Severe Memory Loss

    ERIC Educational Resources Information Center

    Mateos, Pedro M.; Valentin, Alberto; González-Tablas, Maria del Mar; Espadas, Verónica; Vera, Juan L.; Jorge, Inmaculada García

    2016-01-01

Strategy-based memory training programs are widely used to enhance the cognitive abilities of the elderly. Participants in these training programs are usually people whose mental abilities remain intact. Occasionally, people with cognitive impairment also participate. The aim of this study was to test if memory training designed specifically for…

  9. Computerized working memory training has positive long-term effect in very low birthweight preschool children.

    PubMed

    Grunewaldt, Kristine Hermansen; Skranes, Jon; Brubakk, Ann-Mari; Lähaugen, Gro C C

    2016-02-01

Working memory deficits are frequently found in children born preterm and have been linked to learning disabilities and cognitive and behavioural problems. Our aim was to evaluate whether a computerized working memory training program has long-term positive effects on memory, learning, and behaviour in very-low-birthweight (VLBW) children at age 5 to 6 years. This prospective intervention study included 20 VLBW preschool children in the intervention group and 17 age-matched, non-training VLBW children in the comparison group. The intervention group trained with the Cogmed JM working memory training program daily for 5 weeks (25 training sessions). Extensive neuropsychological assessment and parental questionnaires were performed 4 weeks after intervention and at follow-up 7 months later. For most of the statistical analyses, general linear models were applied. At follow-up, higher scores and increased or equal performance gains were found in the intervention group compared with the comparison group on memory for faces (p=0.012), narrative memory (p=0.002), and spatial span (p=0.003). No group differences in performance gain were found for attention and behaviour. Computerized working memory training seems to have positive and persisting effects on working memory and on visual and verbal learning at 7-month follow-up in VLBW preschool children. We speculate that such training is beneficial by improving the ability to learn from teaching at school and for further cognitive development. © 2015 Mac Keith Press.

  10. Modular data acquisition system and its use in gas-filled detector readout at ESRF

    NASA Astrophysics Data System (ADS)

    Sever, F.; Epaud, F.; Poncet, F.; Grave, M.; Rey-Bakaikoa, V.

    1996-09-01

Since 1992, 18 ESRF beamlines have been open to users. Although data acquisition requirements vary considerably from one beamline to another, we are trying to implement a modular data acquisition system architecture that fits the maximum number of acquisition projects at ESRF. Common to all of these systems are large acquisition memories and the requirement to visualize the data during an acquisition run and to transfer them quickly after the run to safe storage. We developed a general memory API handling the acquisition memory and its organization, and another library that provides calls for transferring the data over TCP/IP sockets. Interesting utility programs using these libraries are the `online display' program and the `data transfer' program. The data transfer program, as well as an acquisition control program, relies on our well-established `device server model', which was originally designed for the machine control system and then successfully reused in beamline control systems. In the second half of this paper, the acquisition system for a 2D gas-filled detector is presented, which is one of the first concrete examples using the proposed modular data acquisition architecture.

  11. Adolescent development, hypothalamic-pituitary-adrenal function, and programming of adult learning and memory.

    PubMed

    McCormick, Cheryl M; Mathews, Iva Z

    2010-06-30

    Chronic exposure to stress is known to affect learning and memory in adults through the release of glucocorticoid hormones by the hypothalamic-pituitary-adrenal (HPA) axis. In adults, glucocorticoids alter synaptic structure and function in brain regions that express high levels of glucocorticoid receptors and that mediate goal-directed behaviour and learning and memory. In contrast to relatively transient effects of stress on cognitive function in adulthood, exposure to high levels of glucocorticoids in early life can produce enduring changes through substantial remodeling of the developing nervous system. Adolescence is another time of significant brain development and maturation of the HPA axis, thereby providing another opportunity for glucocorticoids to exert programming effects on neurocircuitry involved in learning and memory. These topics are reviewed, as is the emerging research evidence in rodent models highlighting that adolescence may be a period of increased vulnerability compared to adulthood in which exposure to high levels of glucocorticoids results in enduring changes in adult cognitive function. Copyright 2009 Elsevier Inc. All rights reserved.

  12. DEAN: A Program for Dynamic Engine Analysis.

    DTIC Science & Technology

    1985-01-01

hardware and memory limitations. DIGTEM (ref. 4), a recently written code, allows steady-state as well as transient calculations to be performed. DIGTEM has...Computer Program for Generating Dynamic Turbofan Engine Models (DIGTEM)," NASA TM-83446. 5. Carnahan, B., Luther, H.A., and Wilkes, J.O., Applied Numerical

  13. Support of Multidimensional Parallelism in the OpenMP Programming Model

    NASA Technical Reports Server (NTRS)

    Jin, Hao-Qiang; Jost, Gabriele

    2003-01-01

OpenMP is the current standard for shared-memory programming. While providing ease of parallel programming, the OpenMP programming model also has limitations that often affect the scalability of applications. Examples of these limitations are work distribution and point-to-point synchronization among threads. We propose extensions to the OpenMP programming model that allow the user to easily distribute the work in multiple dimensions and synchronize the workflow among the threads. The proposed extensions include four new constructs and the associated runtime library. They do not require changes to the source code and can be implemented based on the existing OpenMP standard. We illustrate the concept in a prototype translator and test it with benchmark codes and a cloud modeling code.

  14. Scientific Programming Using Java: A Remote Sensing Example

    NASA Technical Reports Server (NTRS)

    Prados, Don; Mohamed, Mohamed A.; Johnson, Michael; Cao, Changyong; Gasser, Jerry

    1999-01-01

This paper presents results of a project to port remote sensing code from the C programming language to Java. The advantages and disadvantages of using Java versus C as a scientific programming language in remote sensing applications are discussed. Remote sensing applications deal with voluminous data that require effective memory management, such as buffering operations, when processed. Some of these applications also implement complex computational algorithms, such as Fast Fourier Transform analysis, that are very performance intensive. Factors considered include performance, precision, complexity, rapidity of development, ease of code reuse, ease of maintenance, memory management, and platform independence. Performance results for radiometric calibration code, using Java for the graphical user interface and C for the domain model, are also presented.

  15. Implementing Dementia Care Models in Primary Care Settings: The Aging Brain Care Medical Home (Special Supplement)

    PubMed Central

    Callahan, Christopher M.; Boustani, Malaz A.; Weiner, Michael; Beck, Robin A.; Livin, Lee R.; Kellams, Jeffrey J.; Willis, Deanna R.; Hendrie, Hugh C.

    2010-01-01

    Objectives The purpose of this paper is to describe our experience in implementing a primary care-based dementia and depression care program focused on providing collaborative care for dementia and late-life depression. Methods Capitalizing on the substantial interest in the US on the patient-centered medical home concept, the Aging Brain Care Medical Home targets older adults with dementia and/or late life depression in the primary care setting. We describe a structured set of activities that laid the foundation for a new partnership with the primary care practice and the lessons learned in implementing this new care model. We also provide a description of the core components of this innovative memory care program. Results Findings from three recent randomized clinical trials provided the rationale and basic components for implementing the new memory care program. We used the reflective adaptive process as a relationship building framework that recognizes primary care practices as complex adaptive systems. This framework allows for local adaptation of the protocols and procedures developed in the clinical trials. Tailored care for individual patients is facilitated through a care manager working in collaboration with a primary care physician and supported by specialists in a memory care clinic as well as by information technology resources. Conclusions We have successfully overcome many system-level barriers in implementing a collaborative care program for dementia and depression in primary care. Spontaneous adoption of new models of care is unlikely without specific attention to the complexities and resource constraints of health care systems. PMID:20945236

  16. Programming distributed memory architectures using Kali

    NASA Technical Reports Server (NTRS)

    Mehrotra, Piyush; Vanrosendale, John

    1990-01-01

Programming nonshared memory systems is more difficult than programming shared memory systems, in part because of the relatively low level of current programming environments for such machines. A new programming environment, Kali, is presented, which provides a global name space and allows direct access to remote data values. In order to retain efficiency, Kali provides a system of annotations, allowing the user to control those aspects of the program critical to performance, such as data distribution and load balancing. The primitives and constructs provided by the language are described, and some of the issues raised in translating a Kali program for execution on distributed memory systems are also discussed.

  17. Initial Feasibility and Validity of a Prospective Memory Training Program in a Substance Use Treatment Population

    PubMed Central

    Sweeney, Mary M.; Rass, Olga; Johnson, Patrick S.; Strain, Eric C.; Berry, Meredith S.; Vo, Hoa T.; Fishman, Marc J.; Munro, Cynthia A.; Rebok, George W.; Mintzer, Miriam Z.; Johnson, Matthew W.

    2016-01-01

    Individuals with substance use disorders have shown deficits in the ability to implement future intentions, called prospective memory. Deficits in prospective memory and working memory, a critical underlying component of prospective memory, likely contribute to substance use treatment failures. Thus, improvement of prospective memory and working memory in substance use patients is an innovative target for intervention. We sought to develop a feasible and valid prospective memory training program that incorporates working memory training and may serve as a useful adjunct to substance use disorder treatment. We administered a single session of the novel prospective memory and working memory training program to participants (n = 22; 13 male; 9 female) enrolled in outpatient substance use disorder treatment and correlated performance to existing measures of prospective memory and working memory. Generally accurate prospective memory performance in a single session suggests feasibility in a substance use treatment population. However, training difficulty should be increased to avoid ceiling effects across repeated sessions. Consistent with existing literature, we observed superior performance on event-based relative to time-based prospective memory tasks. Performance on the prospective memory and working memory training components correlated with validated assessments of prospective memory and working memory, respectively. Correlations between novel memory training program performance and established measures suggest that our training engages appropriate cognitive processes. Further, differential event- and time-based prospective memory task performance suggests internal validity of our training. These data support development of this intervention as an adjunctive therapy for substance use disorders. PMID:27690506

  18. Initial feasibility and validity of a prospective memory training program in a substance use treatment population.

    PubMed

    Sweeney, Mary M; Rass, Olga; Johnson, Patrick S; Strain, Eric C; Berry, Meredith S; Vo, Hoa T; Fishman, Marc J; Munro, Cynthia A; Rebok, George W; Mintzer, Miriam Z; Johnson, Matthew W

    2016-10-01

    Individuals with substance use disorders have shown deficits in the ability to implement future intentions, called prospective memory. Deficits in prospective memory and working memory, a critical underlying component of prospective memory, likely contribute to substance use treatment failures. Thus, improvement of prospective memory and working memory in substance use patients is an innovative target for intervention. We sought to develop a feasible and valid prospective memory training program that incorporates working memory training and may serve as a useful adjunct to substance use disorder treatment. We administered a single session of the novel prospective memory and working memory training program to participants (n = 22; 13 men, 9 women) enrolled in outpatient substance use disorder treatment and correlated performance to existing measures of prospective memory and working memory. Generally accurate prospective memory performance in a single session suggests feasibility in a substance use treatment population. However, training difficulty should be increased to avoid ceiling effects across repeated sessions. Consistent with existing literature, we observed superior performance on event-based relative to time-based prospective memory tasks. Performance on the prospective memory and working memory training components correlated with validated assessments of prospective memory and working memory, respectively. Correlations between novel memory training program performance and established measures suggest that our training engages appropriate cognitive processes. Further, differential event- and time-based prospective memory task performance suggests internal validity of our training. These data support the development of this intervention as an adjunctive therapy for substance use disorders. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  19. Supporting shared data structures on distributed memory architectures

    NASA Technical Reports Server (NTRS)

    Koelbel, Charles; Mehrotra, Piyush; Vanrosendale, John

    1990-01-01

Programming nonshared memory systems is more difficult than programming shared memory systems, since there is no support for shared data structures. Current programming languages for distributed memory architectures force the user to decompose all data structures into separate pieces, with each piece owned by one of the processors in the machine, and with all communication explicitly specified by low-level message-passing primitives. A new programming environment is presented for distributed memory architectures, providing a global name space and allowing direct access to remote parts of data values. The analysis and program transformations required to implement this environment are described, and the efficiency of the resulting code on the NCUBE/7 and iPSC/2 hypercubes is presented.

  20. Limited memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method for the parameter estimation on geographically weighted ordinal logistic regression model (GWOLR)

    NASA Astrophysics Data System (ADS)

    Saputro, Dewi Retno Sari; Widyaningsih, Purnami

    2017-08-01

In general, parameter estimation for the GWOLR model uses the maximum likelihood method, but this yields a system of nonlinear equations that is difficult to solve exactly, so an approximate solution is needed. There are two popular numerical approaches: Newton's method and Quasi-Newton (QN) methods. Newton's method requires considerable computation time because it involves the Jacobian matrix (derivatives). QN methods overcome this drawback by replacing derivative computations with direct function evaluations. The QN approach maintains a Hessian matrix approximation, one form of which is the Davidon-Fletcher-Powell (DFP) formula. The Broyden-Fletcher-Goldfarb-Shanno (BFGS) method is a QN method that, like the DFP formula, maintains a positive definite Hessian approximation. The BFGS method requires large memory when executing the program, so another algorithm that decreases memory usage is needed, namely limited-memory BFGS (LBFGS). The purpose of this research is to compute the efficiency of the LBFGS method in the iterative and recursive computation of the Hessian matrix and its inverse for GWOLR parameter estimation. We found that the BFGS and LBFGS methods have arithmetic operation counts of O(n^2) and O(nm), respectively.

  1. YOCAS©® Yoga Reduces Self-reported Memory Difficulty in Cancer Survivors in a Nationwide Randomized Clinical Trial: Investigating Relationships Between Memory and Sleep.

    PubMed

    Janelsins, Michelle C; Peppone, Luke J; Heckler, Charles E; Kesler, Shelli R; Sprod, Lisa K; Atkins, James; Melnik, Marianne; Kamen, Charles; Giguere, Jeffrey; Messino, Michael J; Mohile, Supriya G; Mustian, Karen M

    2016-09-01

Background: Interventions are needed to alleviate memory difficulty in cancer survivors. We previously showed in a phase III randomized clinical trial that YOCAS©® yoga, a program that consists of breathing exercises, postures, and meditation, significantly improved sleep quality in cancer survivors. This study assessed the effects of YOCAS©® on memory and identified relationships between memory and sleep. Survivors were randomized to standard care (SC) or SC with YOCAS©®. 328 participants who provided data on the memory difficulty item of the MD Anderson Symptom Inventory are included. Sleep quality was measured using the Pittsburgh Sleep Quality Index. General linear modeling (GLM) determined the group effect of YOCAS©® on memory difficulty compared with SC. GLM also determined moderation of baseline memory difficulty on postintervention sleep and vice versa. Path modeling assessed the mediating effects of changes in memory difficulty on YOCAS©® changes in sleep and vice versa. YOCAS©® significantly reduced memory difficulty at postintervention compared with SC (mean change: yoga=-0.60; SC=-0.16; P<.05). Baseline memory difficulty did not moderate the effects of postintervention sleep quality in YOCAS©® compared with SC. Baseline sleep quality did moderate the effects of postintervention memory difficulty in YOCAS©® compared with SC (P<.05). Changes in sleep quality were a significant mediator of reduced memory difficulty in YOCAS©® compared with SC (P<.05); however, changes in memory difficulty did not significantly mediate improved sleep quality in YOCAS©® compared with SC. In this large nationwide trial, YOCAS©® yoga significantly reduced patient-reported memory difficulty in cancer survivors. © The Author(s) 2015.

  2. Research about Memory Detection Based on the Embedded Platform

    NASA Astrophysics Data System (ADS)

    Sun, Hao; Chu, Jian

    The memory-detection resources of embedded systems are very limited. Taking a Linux-based embedded ARM board as the platform, this article puts forward two efficient memory detection technologies suited to the characteristics of embedded software. In particular, for programs that depend on specific libraries, the article puts forward portable memory detection methods to help program designers reduce human error, improve programming quality, and therefore make better use of the valuable embedded memory resource.
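
    The paper's methods target C programs on embedded Linux; as a loose, hypothetical illustration of the underlying idea (snapshot-based detection of allocations that are never released), Python's tracemalloc module can play a similar role. None of the names below come from the paper.

```python
# Illustrative sketch only: snapshot-diff leak detection, in the spirit of the
# embedded memory-detection methods described above (names are ours).
import tracemalloc

leaked = []  # simulated leak: allocations that are never released

def workload():
    # Each call "forgets" to free its buffer, mimicking a C-style leak.
    leaked.append(bytearray(100_000))

tracemalloc.start()
before = tracemalloc.take_snapshot()
for _ in range(10):
    workload()
after = tracemalloc.take_snapshot()

# Diff the snapshots to find the call sites whose net allocation grew most.
stats = after.compare_to(before, "lineno")
top = stats[0]
print(f"top growth: {top.size_diff} bytes in {top.count_diff} blocks")
```

    In a real embedded setting the same diffing idea is applied to instrumented malloc/free wrappers rather than an interpreter's allocator.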

  3. Implementation and performance of parallel Prolog interpreter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wei, S.; Kale, L.V.; Balkrishna, R.

    1988-01-01

    In this paper, the authors discuss the implementation of a parallel Prolog interpreter on different parallel machines. The implementation is based on the REDUCE-OR process model, which exploits both AND and OR parallelism in logic programs. It is machine independent, as it runs on top of the chare kernel, a machine-independent parallel programming system. The authors also report the performance of the interpreter running a diverse set of benchmark programs on parallel machines, including shared-memory systems (an Alliant FX/8, a Sequent, and a MultiMax) and a non-shared-memory system (an Intel iPSC/32 hypercube), in addition to its performance on a multiprocessor simulation system.

  4. A Retrieval Model for Both Recognition and Recall.

    ERIC Educational Resources Information Center

    Gillund, Gary; Shiffrin, Richard M.

    1984-01-01

    The Search of Associative Memory (SAM) model for recall is extended by assuming that a familiarity process is used for recognition. The model, formalized in a computer simulation program, correctly predicts a number of findings in the literature as well as results from an experiment on the word-frequency effect. (Author/BW)

  5. Refining Uses and Gratifications with a Human Information Processing Model.

    ERIC Educational Resources Information Center

    Griffin, Robert J.

    A study was conducted as part of a program to develop and test an individual level communications model. The model proposes that audience members bring to communications situations a set of learned cognitive processing strategies that produce cognitive structural representations of information in memory to facilitate the meeting of the various…

  6. Whitmore, Henschke, and Hilaris: The reorientation of prostate brachytherapy (1970-1987).

    PubMed

    Aronowitz, Jesse N

    2012-01-01

    Urologists had performed prostate brachytherapy for decades before New York's Memorial Hospital retropubic program. This paper explores the contribution of Willet Whitmore, Ulrich Henschke, Basil Hilaris, and Memorial's physicists to the evolution of the procedure. Literature review and interviews with program participants. More than 1000 retropubic implants were performed at Memorial between 1970 and 1987. Unlike previous efforts, Memorial's program benefited from the participation of three disciplines in its conception and execution. Memorial's retropubic program was a collaboration of urologists, radiation therapists, and physicists. Their approach focused greater attention on dosimetry and radiation safety, and served as a template for subsequent prostate brachytherapy programs. Copyright © 2012 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.

  7. Flash Memory Reliability: Read, Program, and Erase Latency Versus Endurance Cycling

    NASA Technical Reports Server (NTRS)

    Heidecker, Jason

    2010-01-01

    This report documents the efforts and results of the fiscal year (FY) 2010 NASA Electronic Parts and Packaging Program (NEPP) task for nonvolatile memory (NVM) reliability. This year's focus was to measure latency (read, program, and erase) of NAND Flash memories and determine how these parameters drift with erase/program/read endurance cycling.

  8. Relation of Physical Activity to Memory Functioning in Older Adults: The Memory Workout Program.

    ERIC Educational Resources Information Center

    Rebok, George W.; Plude, Dana J.

    2001-01-01

    The Memory Workout, a CD-ROM program designed to help older adults increase changes in physical and cognitive activity influencing memory, was tested with 24 subjects. Results revealed a significant relationship between exercise time, exercise efficacy, and cognitive function, as well as interest in improving memory and physical activity.…

  9. A computational model for simulating text comprehension.

    PubMed

    Lemaire, Benoît; Denhière, Guy; Bellissens, Cédrick; Jhean-Larose, Sandra

    2006-11-01

    In the present article, we outline the architecture of a computer program for simulating the process by which humans comprehend texts. The program is based on psycholinguistic theories about human memory and text comprehension processes, such as the construction-integration model (Kintsch, 1998), the latent semantic analysis theory of knowledge representation (Landauer & Dumais, 1997), and the predication algorithms (Kintsch, 2001; Lemaire & Bianco, 2003), and it is intended to help psycholinguists investigate the way humans comprehend texts.
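
    One ingredient named above, latent semantic analysis, can be sketched in a few lines: build a term-document count matrix, take a truncated SVD, and compare documents in the reduced semantic space. This is an illustrative toy example (our own corpus and names), not the authors' program.

```python
# Minimal LSA sketch: truncated SVD of a term-document matrix, then cosine
# similarity between documents in the low-rank "semantic" space.
import numpy as np

docs = ["human memory recall", "memory recall test",
        "tree bark leaf", "leaf tree forest"]
vocab = sorted({w for d in docs for w in d.split()})

# Term-document count matrix (terms as rows, documents as columns).
A = np.array([[d.split().count(w) for d in docs] for w in vocab], dtype=float)

# Truncated SVD gives the low-rank semantic space.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T  # document coordinates in LSA space

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# The two memory-related documents should be closer to each other than to the
# tree-related ones, even though similarity is computed in only k dimensions.
print(cos(doc_vecs[0], doc_vecs[1]), cos(doc_vecs[0], doc_vecs[2]))
```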

  10. Single-pass memory system evaluation for multiprogramming workloads

    NASA Technical Reports Server (NTRS)

    Conte, Thomas M.; Hwu, Wen-Mei W.

    1990-01-01

    Modern memory systems are composed of levels of cache memories, a virtual memory system, and a backing store. Varying more than a few design parameters and measuring the performance of such systems has traditionally been constrained by the high cost of simulation. Recently introduced models of cache performance reduce the cost of simulation, but at the expense of accuracy of performance prediction. Stack-based methods predict performance accurately using one pass over the trace for all cache sizes, but these techniques have been limited to fully associative organizations. This paper presents a stack-based method of evaluating the performance of cache memories using a recurrence/conflict model for the miss ratio. Unlike previous work, the method predicts the performance of realistic cache designs, such as direct-mapped caches. The method also includes a new approach to the problem of the effects of multiprogramming, which separates the characteristics of the individual program from those of the workload. The recurrence/conflict method is shown to be practical, general, and powerful by comparing its performance to that of a popular traditional cache simulator. The authors expect that the availability of such a tool will have a large impact on future architectural studies of memory systems.
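
    The single-pass stack technique this work builds on can be sketched for the classic fully associative LRU case: one pass over the address trace yields the miss ratio for every cache size at once. This is an illustrative sketch of the baseline idea, not the paper's recurrence/conflict extension to direct-mapped caches, and the function names are ours.

```python
# Classic one-pass LRU stack simulation: compute each reference's stack
# distance, then read off miss ratios for all cache sizes from one pass.
def stack_distances(trace):
    stack, dists = [], []          # stack[0] is the most recently used line
    for addr in trace:
        if addr in stack:
            d = stack.index(addr) + 1   # LRU stack distance (1-based)
            stack.remove(addr)
        else:
            d = float("inf")            # cold (compulsory) miss
        dists.append(d)
        stack.insert(0, addr)           # promote to most recently used
    return dists

def miss_ratio(dists, cache_lines):
    # A reference misses in an LRU cache of C lines iff its stack distance > C.
    misses = sum(1 for d in dists if d > cache_lines)
    return misses / len(dists)

trace = [1, 2, 3, 1, 2, 3, 4, 1]
d = stack_distances(trace)
# One pass over the trace gives miss ratios for every cache size:
print([round(miss_ratio(d, c), 3) for c in (1, 2, 3, 4)])  # [1.0, 1.0, 0.625, 0.5]
```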

  11. Virtual memory support for distributed computing environments using a shared data object model

    NASA Astrophysics Data System (ADS)

    Huang, F.; Bacon, J.; Mapp, G.

    1995-12-01

    Conventional storage management systems provide one interface for accessing memory segments and another for accessing secondary storage objects. This hinders application programming and affects overall system performance due to mandatory data copying and user/kernel boundary crossings, which in the microkernel case may involve context switches. Memory-mapping techniques may be used to provide programmers with a unified view of the storage system. This paper extends such techniques to support a shared data object model for distributed computing environments in which good support for coherence and synchronization is essential. The approach is based on a microkernel, typed memory objects, and integrated coherence control. A microkernel architecture is used to support multiple coherence protocols and the addition of new protocols. Memory objects are typed, and applications can choose the most suitable protocols for different types of object to avoid protocol mismatch. Low-level coherence control is integrated with high-level concurrency control so that the number of messages required to maintain memory coherence is reduced and system-wide synchronization is realized without severely impacting system performance. Together, these features constitute a novel approach to supporting flexible coherence under application control.
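
    The memory-mapping technique the abstract starts from can be illustrated with a small generic sketch (not the paper's microkernel system): a file updated through ordinary memory operations is equally visible through the storage interface, with no explicit read()/write() copies in the application code.

```python
# Memory-mapping sketch: one object, two interfaces (memory and storage).
import mmap
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "store.bin")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)            # back the object with one page

with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 4096) as mem:
        mem[0:5] = b"hello"            # a plain memory write...

# ...is immediately visible through the file (secondary storage) interface.
with open(path, "rb") as f:
    print(f.read(5))                   # b'hello'
```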

  12. High Performance Programming Using Explicit Shared Memory Model on the Cray T3D

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Simon, Horst D.; Lasinski, T. A. (Technical Monitor)

    1994-01-01

    The Cray T3D is the first-phase system in Cray Research Inc.'s (CRI) three-phase massively parallel processing program. In this report we describe the architecture of the T3D, as well as the CRAFT (Cray Research Adaptive Fortran) programming model, and contrast it with PVM, which is also supported on the T3D. We present some performance data based on the NAS Parallel Benchmarks to illustrate both architectural and software features of the T3D.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, J.P.; Bangs, A.L.; Butler, P.L.

    Hetero Helix is a programming environment which simulates shared memory on a heterogeneous network of distributed-memory computers. The machines in the network may vary with respect to their native operating systems and internal representation of numbers. Hetero Helix presents a simple programming model to developers, and also considers the needs of designers, system integrators, and maintainers. The key software technology underlying Hetero Helix is the use of a "compiler" which analyzes the data structures in shared memory and automatically generates code which translates data representations from the format native to each machine into a common format, and vice versa. The design of Hetero Helix was motivated in particular by the requirements of robotics applications. Hetero Helix has been used successfully in an integration effort involving 27 CPUs in a heterogeneous network and a body of software totaling roughly 100,000 lines of code. 25 refs., 6 figs.

  14. Directions in parallel programming: HPF, shared virtual memory and object parallelism in pC++

    NASA Technical Reports Server (NTRS)

    Bodin, Francois; Priol, Thierry; Mehrotra, Piyush; Gannon, Dennis

    1994-01-01

    Fortran and C++ are the dominant programming languages used in scientific computation. Consequently, extensions to these languages are the most popular for programming massively parallel computers. We discuss two such approaches to parallel Fortran and one approach to C++. The High Performance Fortran Forum has designed HPF with the intent of supporting data parallelism on Fortran 90 applications. HPF works by asking the user to help the compiler distribute and align the data structures with the distributed memory modules in the system. Fortran-S takes a different approach in which the data distribution is managed by the operating system and the user provides annotations to indicate parallel control regions. In the case of C++, we look at pC++ which is based on a concurrent aggregate parallel model.
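
    HPF's data-distribution idea, in which the user tells the compiler how array elements map onto the distributed memory modules, comes down to simple index arithmetic for the two basic layouts. The sketch below is our own illustration of that mapping in Python, not HPF syntax.

```python
# Owner computation for HPF-style BLOCK and CYCLIC distributions of an
# n-element array over p processors (illustrative sketch).
def block_owner(i, n, p):
    # BLOCK: contiguous chunks of ceil(n/p) elements per processor.
    chunk = -(-n // p)  # ceiling division
    return i // chunk

def cyclic_owner(i, n, p):
    # CYCLIC: elements dealt out round-robin across processors.
    return i % p

n, p = 10, 4
print([block_owner(i, n, p) for i in range(n)])   # [0, 0, 0, 1, 1, 1, 2, 2, 2, 3]
print([cyclic_owner(i, n, p) for i in range(n)])  # [0, 1, 2, 3, 0, 1, 2, 3, 0, 1]
```

    The compiler uses exactly this kind of mapping to decide which processor's local memory holds each element and where communication is required.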

  15. Integrating Cache Performance Modeling and Tuning Support in Parallelization Tools

    NASA Technical Reports Server (NTRS)

    Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)

    1998-01-01

    With the resurgence of distributed shared memory (DSM) systems based on cache-coherent Non-Uniform Memory Access (ccNUMA) architectures and the increasing disparity between memory and processor speeds, data locality overheads are becoming the greatest bottleneck in the way of realizing the potential high performance of these systems. While parallelization tools and compilers help users port their sequential applications to a DSM system, considerable time and effort are needed to tune the memory performance of these applications to achieve reasonable speedup. In this paper, we show that integrating cache performance modeling and tuning support within a parallelization environment can alleviate this problem. The Cache Performance Modeling and Prediction Tool (CPMP) employs trace-driven simulation techniques without the overhead of generating and managing detailed address traces. CPMP predicts the cache performance impact of source-code-level "what-if" modifications in a program to assist a user in the tuning process. CPMP is built on top of a customized version of the Computer Aided Parallelization Tools (CAPTools) environment. Finally, we demonstrate how CPMP can be applied to tune a real Computational Fluid Dynamics (CFD) application.

  16. System for simultaneously loading program to master computer memory devices and corresponding slave computer memory devices

    NASA Technical Reports Server (NTRS)

    Hall, William A. (Inventor)

    1993-01-01

    A bus programmable slave module card for use in a computer control system is disclosed which comprises a master computer and one or more slave computer modules interfacing by means of a bus. Each slave module includes its own microprocessor, memory, and control program for acting as a single-loop controller. The slave card includes a plurality of memory means (S1, S2...) corresponding to a like plurality of memory devices (C1, C2...) in the master computer; for each slave memory means, its own communication lines connectable through the bus with the memory communication lines of an associated memory device in the master computer; and a one-way electronic door which is switchable to either a closed condition or a one-way open condition. With the door closed, communication lines between master computer memory (C1, C2...) and slave memory (S1, S2...) are blocked. In the one-way open condition, the memory communication lines of each slave memory means (S1, S2...) connect with the memory communication lines of its associated memory device (C1, C2...) in the master computer, and the memory devices (C1, C2...) of the master computer and slave card are electrically parallel, such that information seen by the master's memory is also seen by the slave's memory. The slave card is also connectable to a switch for electronically removing the slave microprocessor from the system. With the master computer and the slave card in programming-mode relationship, and the slave microprocessor electronically removed from the system, loading a program into the memory devices (C1, C2...) of the master accomplishes a parallel loading into the memory devices (S1, S2...) of the slave.

  17. Using Coarrays to Parallelize Legacy Fortran Applications: Strategy and Case Study

    DOE PAGES

    Radhakrishnan, Hari; Rouson, Damian W. I.; Morris, Karla; ...

    2015-01-01

    This paper summarizes a strategy for parallelizing a legacy Fortran 77 program using the object-oriented (OO) and coarray features that entered Fortran in the 2003 and 2008 standards, respectively. OO programming (OOP) facilitates the construction of an extensible suite of model-verification and performance tests that drive the development. Coarray parallel programming facilitates a rapid evolution from a serial application to a parallel application capable of running on multicore processors and many-core accelerators in shared and distributed memory. We delineate 17 code modernization steps used to refactor and parallelize the program and study the resulting performance. Our initial studies were done using the Intel Fortran compiler on a 32-core shared memory server. Scaling behavior was very poor, and profile analysis using TAU showed that the bottleneck in the performance was due to our implementation of a collective, sequential summation procedure. We were able to improve the scalability and achieve nearly linear speedup by replacing the sequential summation with a parallel, binary tree algorithm. We also tested the Cray compiler, which provides its own collective summation procedure. Intel provides no collective reductions. With Cray, the program shows linear speedup even in distributed-memory execution. We anticipate similar results with other compilers once they support the new collective procedures proposed for Fortran 2015.
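
    The performance fix described in the abstract, replacing a sequential summation with a parallel binary-tree reduction, can be sketched as follows (our own Python illustration; the actual code uses Fortran coarrays). The point is that pairwise combining has O(log n) parallel depth instead of the O(n) dependency chain of a running sum.

```python
# Binary-tree reduction: combine values pairwise, halving the list each round.
def tree_sum(values):
    vals = list(values)
    while len(vals) > 1:
        # One "round": all pairs can be summed in parallel.
        paired = [vals[i] + vals[i + 1] for i in range(0, len(vals) - 1, 2)]
        if len(vals) % 2:              # odd element carries over to next round
            paired.append(vals[-1])
        vals = paired
    return vals[0]

print(tree_sum(range(1, 33)))  # 528, same result as sum(range(1, 33))
```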

  18. OpenARC: Extensible OpenACC Compiler Framework for Directive-Based Accelerator Programming Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Seyong; Vetter, Jeffrey S

    2014-01-01

    Directive-based accelerator programming models such as OpenACC have arisen as an alternative solution for programming emerging Scalable Heterogeneous Computing (SHC) platforms. However, the increased complexity of SHC systems raises several challenges in terms of portability and productivity. This paper presents an open-source OpenACC compiler, called OpenARC, which serves as an extensible research framework to address those issues in directive-based accelerator programming. The paper explains important design strategies and key compiler transformation techniques needed to implement the reference OpenACC compiler. Moreover, it demonstrates the efficacy of OpenARC as a research framework for directive-based programming study by proposing and implementing OpenACC extensions in the OpenARC framework to 1) support hybrid programming of unified memory and separate memory and 2) exploit architecture-specific features in an abstract manner. Porting thirteen standard OpenACC programs and three extended OpenACC programs to CUDA GPUs shows that OpenARC performs similarly to a commercial OpenACC compiler, while serving as a high-level research framework.

  19. Schemas in Problem Solving: An Integrated Model of Learning, Memory, and Instruction

    DTIC Science & Technology

    1992-01-01

    The indexed excerpt for this record consists of reference fragments: "Hybrid Computation in Cognitive Science: Neural Networks and Symbols" (J. A. Anderson, 1990); Parallel distributed processing: A handbook of models, programs, and exercises (Cambridge, MA: The MIT Press); and Minsky, M. (1991), "Logical versus analogical or symbolic..."

  20. Memories as Useful Outcomes of Residential Outdoor Environmental Education

    ERIC Educational Resources Information Center

    Liddicoat, Kendra R.; Krasny, Marianne E.

    2014-01-01

    Residential outdoor environmental education (ROEE) programs for youth have been shown to yield lasting autobiographical episodic memories. This article explores how past program participants have used such memories, and draws on the memory psychology literature to offer a new perspective on the long-term impacts of environmental education.…

  1. Cross-scale efficient tensor contractions for coupled cluster computations through multiple programming model backends

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ibrahim, Khaled Z.; Epifanovsky, Evgeny; Williams, Samuel

    Coupled-cluster methods provide highly accurate models of molecular structure through explicit numerical calculation of tensors representing the correlation between electrons. These calculations are dominated by a sequence of tensor contractions, motivating the development of numerical libraries for such operations. While based on matrix–matrix multiplication, these libraries are specialized to exploit symmetries in the molecular structure and in electronic interactions, and thus reduce the size of the tensor representation and the complexity of contractions. The resulting algorithms are irregular and their parallelization has been previously achieved via the use of dynamic scheduling or specialized data decompositions. We introduce our efforts to extend the Libtensor framework to work in the distributed memory environment in a scalable and energy-efficient manner. We achieve up to 240× speedup compared with the optimized shared memory implementation of Libtensor. We attain scalability to hundreds of thousands of compute cores on three distributed-memory architectures (Cray XC30 and XC40, and IBM Blue Gene/Q), and on a heterogeneous GPU-CPU system (Cray XK7). As the bottlenecks shift from being compute-bound DGEMM's to communication-bound collectives as the size of the molecular system scales, we adopt two radically different parallelization approaches for handling load-imbalance, tasking and bulk synchronous models. Nevertheless, we preserve a unified interface to both programming models to maintain the productivity of computational quantum chemists.

  2. Cross-scale efficient tensor contractions for coupled cluster computations through multiple programming model backends

    DOE PAGES

    Ibrahim, Khaled Z.; Epifanovsky, Evgeny; Williams, Samuel; ...

    2017-03-08

    Coupled-cluster methods provide highly accurate models of molecular structure through explicit numerical calculation of tensors representing the correlation between electrons. These calculations are dominated by a sequence of tensor contractions, motivating the development of numerical libraries for such operations. While based on matrix–matrix multiplication, these libraries are specialized to exploit symmetries in the molecular structure and in electronic interactions, and thus reduce the size of the tensor representation and the complexity of contractions. The resulting algorithms are irregular and their parallelization has been previously achieved via the use of dynamic scheduling or specialized data decompositions. We introduce our efforts to extend the Libtensor framework to work in the distributed memory environment in a scalable and energy-efficient manner. We achieve up to 240× speedup compared with the optimized shared memory implementation of Libtensor. We attain scalability to hundreds of thousands of compute cores on three distributed-memory architectures (Cray XC30 and XC40, and IBM Blue Gene/Q), and on a heterogeneous GPU-CPU system (Cray XK7). As the bottlenecks shift from being compute-bound DGEMM's to communication-bound collectives as the size of the molecular system scales, we adopt two radically different parallelization approaches for handling load-imbalance, tasking and bulk synchronous models. Nevertheless, we preserve a unified interface to both programming models to maintain the productivity of computational quantum chemists.
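
    The abstract's observation that these contractions are "based on matrix–matrix multiplication" can be made concrete: once index groups are flattened, a four-index tensor contraction is a single GEMM. The sketch below is our own illustration (dimensions and names are invented, and Libtensor additionally exploits symmetry to shrink the stored tensors); it checks the equivalence against einsum.

```python
# A coupled-cluster-style contraction t[i,j,a,b] * w[a,b,c,d] -> r[i,j,c,d]
# expressed both as einsum and as one flattened matrix-matrix multiply.
import numpy as np

rng = np.random.default_rng(0)
o, v = 4, 6                             # occupied / virtual dimensions (toy)
t = rng.standard_normal((o, o, v, v))   # amplitude-like tensor
w = rng.standard_normal((v, v, v, v))   # integral-like tensor

r = np.einsum("ijab,abcd->ijcd", t, w)

# Same contraction as one GEMM over flattened index groups (ij) x (ab) x (cd):
r2 = (t.reshape(o * o, v * v) @ w.reshape(v * v, v * v)).reshape(o, o, v, v)
print(np.allclose(r, r2))  # True
```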

  3. Increasing dimension of structures by 4D printing shape memory polymers via fused deposition modeling

    NASA Astrophysics Data System (ADS)

    Hu, G. F.; Damanpack, A. R.; Bodaghi, M.; Liao, W. H.

    2017-12-01

    The main objective of this paper is to introduce a 4D printing method to program shape memory polymers (SMPs) during the fabrication process. Fused deposition modeling (FDM), a filament-based printing method, is employed to program SMPs while depositing the material. This method is implemented to fabricate complicated polymeric structures with self-bending features without the need for any post-programming. Experiments are conducted to demonstrate the feasibility of one-dimensional (1D)-to-2D and 2D-to-3D self-bending. It is shown that 3D printed plate structures can transform into masonry-inspired 3D curved shell structures simply by heating. Good reliability of SMP programming during the printing process is also demonstrated. A 3D macroscopic constitutive model is established to simulate thermo-mechanical features of the printed SMPs. Governing equations are also derived to simulate the programming mechanism during the printing process and the shape change of self-bending structures. In this respect, a finite element formulation is developed considering von Kármán geometric nonlinearity and solved by implementing an iterative Newton-Raphson scheme. The accuracy of the computational approach is checked against experimental results. It is demonstrated that the theoretical model is able to replicate the main characteristics observed in the experiments. This research is likely to advance the state of the art of FDM 4D printing, and to provide pertinent results and a computational tool that are instrumental in the design of smart materials and structures with self-bending features.
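
    The iterative Newton-Raphson scheme mentioned above can be shown in its scalar form (an illustrative sketch of the general method; the paper applies it to the nonlinear finite element system, where f becomes a residual vector and df a tangent stiffness matrix).

```python
# Scalar Newton-Raphson iteration: repeatedly solve the linearized problem
# until the correction step is below tolerance.
def newton_raphson(f, df, x0, tol=1e-10, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)     # solve the linearized problem at x
        x -= step               # update the solution estimate
        if abs(step) < tol:     # converged when the correction is tiny
            return x
    raise RuntimeError("no convergence")

# Example: root of x^3 - 2 (the cube root of 2)
root = newton_raphson(lambda x: x**3 - 2, lambda x: 3 * x**2, x0=1.0)
print(round(root, 6))  # 1.259921
```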

  4. CoMD Implementation Suite in Emerging Programming Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haque, Riyaz; Reeve, Sam; Juallmes, Luc

    CoMD-Em is a software implementation suite of the CoMD [4] proxy app using different emerging programming models. It is intended to analyze the features and capabilities of novel programming models that could help ensure code and performance portability and scalability across heterogeneous platforms while improving programmer productivity. Another goal is to provide the authors and vendors with meaningful feedback regarding the capabilities and limitations of their models. The actual application is a classical molecular dynamics (MD) simulation using either the Lennard-Jones (LJ) method or the embedded atom method (EAM) for primary particle interaction. The code can be extended to support alternate interaction models. The code is expected to run on a wide class of heterogeneous hardware configurations, such as shared/distributed/hybrid memory, GPUs, and any other platform supported by the underlying programming model.

  5. Testing New Programming Paradigms with NAS Parallel Benchmarks

    NASA Technical Reports Server (NTRS)

    Jin, H.; Frumkin, M.; Schultz, M.; Yan, J.

    2000-01-01

    Over the past decade, high performance computing has evolved rapidly, not only in hardware architectures but also in the increasing complexity of real applications. Technologies have been developed with the aim of scaling up to thousands of processors on both distributed and shared memory systems. Development of parallel programs on these computers is always a challenging task. Today, writing parallel programs with message passing (e.g. MPI) is the most popular way of achieving scalability and high performance. However, writing message passing programs is difficult and error prone. In recent years, new efforts have been made to define new parallel programming paradigms. The best examples are HPF (based on data parallelism) and OpenMP (based on shared memory parallelism). Both provide simple and clear extensions to sequential programs, greatly simplifying the tedious tasks encountered in writing message passing programs. HPF is independent of the memory hierarchy; however, due to the immaturity of compiler technology its performance is still questionable. Although the use of parallel compiler directives is not new, OpenMP offers a portable solution in the shared-memory domain. Another important development involves the tremendous progress in the internet and its associated technology. Although still in its infancy, Java promises portability in a heterogeneous environment and offers the possibility to "compile once and run anywhere." To test these new technologies, we implemented new parallel versions of the NAS Parallel Benchmarks (NPBs) with HPF and OpenMP directives, and extended the work with Java and Java threads. The purpose of this study is to examine the effectiveness of alternative programming paradigms. NPBs consist of five kernels and three simulated applications that mimic the computation and data movement of large scale computational fluid dynamics (CFD) applications. We started with the serial version included in NPB2.3. Optimization of memory and cache usage was applied to several benchmarks, noticeably BT and SP, resulting in better sequential performance. In order to overcome the lack of an HPF performance model and guide the development of the HPF codes, we employed an empirical performance model for several primitives found in the benchmarks. We encountered a few limitations of HPF, such as the lack of support for the "REDISTRIBUTION" directive and no easy way to handle irregular computation. The parallelization with OpenMP directives was done at the outermost loop level to achieve the largest granularity. The performance of six HPF and OpenMP benchmarks is compared with their MPI counterparts for the Class-A problem size in the figure on the next page. These results were obtained on an SGI Origin2000 (195MHz) with the MIPSpro-f77 compiler 7.2.1 for OpenMP and MPI codes and the PGI pghpf-2.4.3 compiler with MPI interface for HPF programs.
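
    The two paradigms under comparison can be caricatured in a few lines of Python (our own illustration, unrelated to the NPB code): a shared-memory style in which every worker indexes the same array, versus a message-passing style in which each worker is explicitly sent its chunk of the data.

```python
# Shared-memory vs message-passing styles, both summing the same array.
from concurrent.futures import ThreadPoolExecutor
from queue import Queue

data = list(range(1000))

# Shared-memory style (OpenMP-like): workers index directly into the common
# array; only loop bounds are handed out.
def partial_sum(lo, hi):
    return sum(data[lo:hi])

with ThreadPoolExecutor(max_workers=4) as pool:
    ranges = [(i, i + 250) for i in range(0, 1000, 250)]
    shared_result = sum(pool.map(lambda r: partial_sum(*r), ranges))

# Message-passing style (MPI-like): the data itself is packed into messages
# and each "rank" works only on what it receives.
inbox = Queue()
for i in range(0, 1000, 250):
    inbox.put(data[i:i + 250])          # explicit send of the chunk
mp_result = sum(sum(inbox.get()) for _ in range(4))

print(shared_result, mp_result)         # both 499500
```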

  6. Programming and memory dynamics of innate leukocytes during tissue homeostasis and inflammation.

    PubMed

    Lee, Christina; Geng, Shuo; Zhang, Yao; Rahtes, Allison; Li, Liwu

    2017-09-01

    The field of innate immunity is witnessing a paradigm shift regarding "memory" and "programming" dynamics. Past studies of innate leukocytes characterized them as first responders to danger signals with no memory. However, recent findings suggest that innate leukocytes, such as monocytes and neutrophils, are capable of "memorizing" not only the chemical nature but also the history and dosages of external stimulants. As a consequence, innate leukocytes can be dynamically programmed or reprogrammed into complex inflammatory memory states. Key examples of innate leukocyte memory dynamics include the development of primed and tolerant monocytes when "programmed" with a variety of inflammatory stimulants at varying signal strengths. The development of innate leukocyte memory may have far-reaching translational implications, as programmed innate leukocytes may affect the pathogenesis of both acute and chronic inflammatory diseases. This review intends to critically discuss some of the recent studies that address this emerging concept and its implication in the pathogenesis of inflammatory diseases. © Society for Leukocyte Biology.

  7. Honoring our donors: a survey of memorial ceremonies in United States anatomy programs.

    PubMed

    Jones, Trahern W; Lachman, Nirusha; Pawlina, Wojciech

    2014-01-01

    Many anatomy programs that incorporate dissection of donated human bodies hold memorial ceremonies of gratitude towards body donors. The content of these ceremonies may include learners' reflections on mortality, respect, altruism, and personal growth told through various humanities modalities. The task of planning is usually student- and faculty-led with participation from other health care students. Objective information on current memorial ceremonies for body donors in anatomy programs in the United States appears to be lacking. The number of programs in the United States that currently plan these memorial ceremonies and information on trends in programs undertaking such ceremonies remain unknown. Gross anatomy program directors throughout the United States were contacted and asked to respond to a voluntary questionnaire on memorial ceremonies held at their institution. The results (response rate 68.2%) indicated that a majority of human anatomy programs (95.5%) hold memorial ceremonies. These ceremonies are, for the most part, student-driven and nondenominational or secular in nature. Participants heavily rely upon speech, music, poetry, and written essays, with a small inclusion of other humanities modalities, such as dance or visual art, to explore a variety of themes during these ceremonies. © 2013 American Association of Anatomists.

  8. The iconic memory skills of brain injury survivors and non-brain injured controls after visual scanning training.

    PubMed

    McClure, J T; Browning, R T; Vantrease, C M; Bittle, S T

    1994-01-01

    Previous research suggests that traumatic brain injury (TBI) results in impairment of iconic memory abilities, which raises serious implications for brain injury rehabilitation. (The authors acknowledge the contribution of Jeffrey D. Vantrease, who wrote the software program for the iconic memory procedure and measurement.) Most cognitive rehabilitation programs do not include iconic memory training; instead, it is common for cognitive rehabilitation programs to focus on attention and concentration skills, memory skills, and visual scanning skills. This study compared the iconic memory skills of brain-injury survivors and control subjects who all reached criterion levels of visual scanning skill. The brain-injury survivors had previously been trained with popular visual scanning programs until their response time and accuracy were within normal limits; control subjects required only minimal training to reach the same criteria. This comparison allows for the dissociation of visual scanning skills from iconic memory skills. The results are discussed in terms of their implications for cognitive rehabilitation and the relationship between visual scanning training and iconic memory skills.

  9. [Cortical potentials evoked in response to a signal to make a memory-guided saccade].

    PubMed

    Slavutskaia, M V; Moiseeva, V V; Shul'govskiĭ, V V

    2010-01-01

    Differences in the parameters of visually guided and memory-guided saccades were shown. The longer latency of memory-guided saccades compared with visually guided saccades may reflect slower saccade programming when spatial information must be retrieved from memory. Comparison of the parameters and topography of the evoked-potential components N1 and P1 elicited by the signal to make a memory- or visually guided saccade suggests that the early stage of saccade programming associated with spatial information processing is driven predominantly by a top-down attention mechanism before a memory-guided saccade and by a bottom-up mechanism before a visually guided saccade. The findings show that the increased latency of memory-guided saccades is connected with decision making at the central stage of saccade programming; we propose that wave N2, which develops in the middle of the latent period of memory-guided saccades, correlates with this process. The topography and spatial dynamics of components N1, P1 and N2 suggest that memory-guided saccade programming is controlled by the frontal mediothalamic system of selective attention and by left-hemisphere motor attention mechanisms.

  10. CLOCS (Computer with Low Context-Switching Time) Architecture Reference Documents

    DTIC Science & Technology

    1988-05-06

    Peculiarities The only state inside the central processing unit (CPU) is a program status word. All data operations are memory to memory. One result of this... to the challenge "if I were to design RISC, this is how I would do it." The architecture was designed by Mark Davis and Bill Gallmeister. 1.2...are memory to memory. Any special devices added should be memory mapped. The program counter is even memory mapped. 1.3.1 Working storage There is no
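
    The memory-to-memory idea in the snippet above can be made concrete with a toy interpreter. Everything here (the three-operand instruction format, the opcode names, placing the program counter at memory cell 0) is an illustrative assumption, not the actual CLOCS ISA; the sketch only shows what "all data operations are memory to memory" and a memory-mapped program counter look like in practice, and why a machine whose only CPU state is a status word can switch contexts cheaply.

    ```python
    # Toy memory-to-memory machine: no general-purpose registers, every
    # operand is a memory address, and the program counter itself is a
    # memory-mapped cell (cell 0 here, by assumption).

    PC = 0  # memory cell 0 holds the program counter

    def step(mem, program):
        """Execute one instruction; each operand is a memory address."""
        op, dst, a, b = program[mem[PC]]
        if op == "add":      # mem[dst] = mem[a] + mem[b]
            mem[dst] = mem[a] + mem[b]
        elif op == "copy":   # mem[dst] = mem[a]
            mem[dst] = mem[a]
        elif op == "jump":   # control flow is just a store to the PC cell
            mem[PC] = dst
            return
        mem[PC] += 1

    def run(mem, program, steps):
        for _ in range(steps):
            step(mem, program)
        return mem
    ```

    Because all machine state lives in memory, a "context switch" in this toy model is nothing more than switching which memory region the interpreter points at.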

  11. Chemically programmed ink-jet printed resistive WORM memory array and readout circuit

    NASA Astrophysics Data System (ADS)

    Andersson, H.; Manuilskiy, A.; Sidén, J.; Gao, J.; Hummelgård, M.; Kunninmel, G. V.; Nilsson, H.-E.

    2014-09-01

    In this paper an ink-jet printed write once read many (WORM) resistive memory fabricated on a paper substrate is presented. The memory elements are programmed to different resistance states by printing triethylene glycol monoethyl ether on the substrate before the actual memory element is printed using silver nanoparticle ink. The resistance can thus be set to a broad range of values without changing the geometry of the elements. A memory card consisting of 16 elements is manufactured, in which each element is programmed to one of four defined logic levels, providing a total of 4,294,967,296 unique possible combinations. Using a readout circuit, originally developed for resistive sensors to avoid crosstalk between elements, a memory card reader is manufactured that is able to read the values of the memory card and transfer the data to a PC. Such printed memory cards can be used in various applications.
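
    The capacity arithmetic above (four levels per element, sixteen elements, hence 4^16 combinations) and a quantizing readout step can be sketched as follows. The resistance thresholds are invented for illustration; actual level boundaries would come from characterizing the printed elements.

    ```python
    # Combinatorics and a toy readout step for a multi-level WORM card.
    # THRESHOLDS are hypothetical resistance boundaries (ohms), not values
    # from the paper.

    LEVELS = 4      # logic levels per printed element
    ELEMENTS = 16   # elements per card

    def card_combinations(levels=LEVELS, elements=ELEMENTS):
        """Number of distinct cards: levels ** elements."""
        return levels ** elements

    THRESHOLDS = [100, 1_000, 10_000]  # illustrative level boundaries

    def read_level(resistance_ohms):
        """Quantize a measured resistance into a logic level 0..3."""
        level = 0
        for t in THRESHOLDS:
            if resistance_ohms >= t:
                level += 1
        return level
    ```

    With 4 levels and 16 elements, `card_combinations()` gives 4,294,967,296 (2^32), matching the figure in the abstract.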

  12. Reagent-Free Programming of Shape-Memory Behavior in Gelatin by Electron Beams: Experiments and Modeling

    NASA Astrophysics Data System (ADS)

    Riedel, Stefanie; Mayr, Stefan G.

    2018-02-01

    Recent years have seen a paradigm shift in biomaterials toward stimuli-responsive switchable systems that actively interact with their environment. This work demonstrates how to turn the ubiquitous off-the-shelf material gelatin into such a smart biomaterial. This is achieved by realizing the shape-memory effect, viz., a temperature-induced transition from a secondary into a primary shape that has been programmed in the first place merely by exposure to energetic electrons without addition of potentially hazardous cross-linkers. While this scenario is experimentally quantified for exemplary actuators, a theoretical framework capable of unraveling the molecular foundations and predicting experiments is also presented. It particularly employs molecular dynamics modeling based on force fields that are also derived within this work. Implementing this functionality into a highly accepted material, these findings open an avenue for large-scale application in a broad range of areas.

  13. Parallel algorithms for modeling flow in permeable media. Annual report, February 15, 1995 - February 14, 1996

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    G.A. Pope; K. Sephernoori; D.C. McKinney

    1996-03-15

    This report describes the application of distributed-memory parallel programming techniques to a compositional simulator called UTCHEM. The University of Texas Chemical Flooding reservoir simulator (UTCHEM) is a general-purpose vectorized chemical flooding simulator that models the transport of chemical species in three-dimensional, multiphase flow through permeable media. The parallel version of UTCHEM addresses solving large-scale problems by reducing the amount of time that is required to obtain the solution as well as providing a flexible and portable programming environment. In this work, the original parallel version of UTCHEM was modified and ported to CRAY T3D and CRAY T3E, distributed-memory, multiprocessor computers using CRAY-PVM as the interprocessor communication library. Also, the data communication routines were modified such that the portability of the original code across different computer architectures was made possible.

  14. Improvement and speed optimization of numerical tsunami modelling program using OpenMP technology

    NASA Astrophysics Data System (ADS)

    Chernov, A.; Zaytsev, A.; Yalciner, A.; Kurkin, A.

    2009-04-01

    Currently, the basic problem of tsunami modeling is the low speed of calculations, which is unacceptable for operational warning services. Existing algorithms for numerical modeling of the hydrodynamics of tsunami waves were developed without taking advantage of modern computer facilities, yet considerable acceleration of the calculations is possible with parallel algorithms. We discuss here a new approach to parallelizing tsunami modeling code using OpenMP technology (for multiprocessor systems with shared memory). Multiprocessor systems are now easily accessible, and the cost of using them is much lower than the cost of clusters; this makes it practical to apply multithreaded algorithms on researchers' desktop computers. Another important advantage of this approach is the shared-memory mechanism: there is no need to send data over slow networks (for example, Ethernet), because all memory is common to all computing processes, which yields almost linear scalability of the program. In the new version of NAMI DANCE, OpenMP multithreading provides an 80% gain in speed over the single-threaded version on a dual-processor unit, and a 320% gain on a quad-core PC. It was thus possible to reduce considerably the calculation time on scientific workstations (desktops) without a complete rewrite of the program or its user interfaces. Further modernization of the algorithms for preparing initial data and processing results using OpenMP looks reasonable. The final version of NAMI DANCE with increased computational speed can be used not only for research purposes but also in real-time tsunami warning systems.
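
    The essence of the OpenMP approach described above is a parallel-for over a shared grid: each thread updates a disjoint chunk of indices, and no data crosses a network. A minimal Python sketch of that decomposition follows; the toy 1-D smoothing step stands in for a shallow-water update, and real speedups of the kind reported require compiled OpenMP threads (Python's GIL prevents them here), so this only illustrates the work partitioning.

    ```python
    # Shared-memory parallel-for sketch: workers update disjoint chunks of
    # a shared array, mimicking an OpenMP "#pragma omp parallel for".
    from concurrent.futures import ThreadPoolExecutor

    def update_chunk(old, new, lo, hi):
        # Toy 1-D smoothing step standing in for a hydrodynamic update.
        for i in range(max(lo, 1), min(hi, len(old) - 1)):
            new[i] = 0.25 * old[i - 1] + 0.5 * old[i] + 0.25 * old[i + 1]

    def parallel_step(old, n_workers=4):
        new = list(old)                      # shared output array
        chunk = (len(old) + n_workers - 1) // n_workers
        with ThreadPoolExecutor(max_workers=n_workers) as pool:
            for w in range(n_workers):       # one disjoint chunk per worker
                pool.submit(update_chunk, old, new, w * chunk, (w + 1) * chunk)
        return new                           # pool exit waits for all workers
    ```

    Because chunks are disjoint and read only the previous timestep, the parallel result is bit-identical to the serial one.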

  15. A Programming Model Performance Study Using the NAS Parallel Benchmarks

    DOE PAGES

    Shan, Hongzhang; Blagojević, Filip; Min, Seung-Jai; ...

    2010-01-01

    Harnessing the power of multicore platforms is challenging due to the additional levels of parallelism present. In this paper we use the NAS Parallel Benchmarks to study three programming models, MPI, OpenMP and PGAS, to understand their performance and memory usage characteristics on current multicore architectures. To understand these characteristics we use the Integrated Performance Monitoring tool and other ways to measure communication versus computation time, as well as the fraction of the run time spent in OpenMP. The benchmarks are run on two different Cray XT5 systems and an Infiniband cluster. Our results show that in general the three programming models exhibit very similar performance characteristics. In a few cases, OpenMP is significantly faster because it explicitly avoids communication. For these particular cases, we were able to re-write the UPC versions and achieve equal performance to OpenMP. Using OpenMP was also the most advantageous in terms of memory usage. Also we compare performance differences between the two Cray systems, which have quad-core and hex-core processors. We show that at scale the performance is almost always slower on the hex-core system because of increased contention for network resources.
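
    The communication-versus-computation split described above can be approximated with simple section timers around the relevant code regions. The helper below is a generic stand-in, not the IPM API; the section names are illustrative.

    ```python
    # Generic section-timing instrumentation: accumulate wall time per named
    # phase, then report the fraction spent in "comm".
    import time
    from collections import defaultdict
    from contextlib import contextmanager

    timings = defaultdict(float)

    @contextmanager
    def section(name):
        t0 = time.perf_counter()
        try:
            yield
        finally:
            timings[name] += time.perf_counter() - t0

    def comm_fraction():
        """Fraction of instrumented time spent in the 'comm' phase."""
        total = sum(timings.values())
        return timings["comm"] / total if total else 0.0
    ```

    Usage is `with section("comm"): exchange_halos()` around communication calls and `with section("compute"): ...` around the numerical kernels.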

  16. Working Memory From the Psychological and Neurosciences Perspectives: A Review.

    PubMed

    Chai, Wen Jia; Abd Hamid, Aini Ismafairus; Abdullah, Jafri Malin

    2018-01-01

    Since the concept of working memory was introduced over 50 years ago, different schools of thought have offered different definitions for working memory based on the various cognitive domains that it encompasses. The general consensus regarding working memory supports the idea that working memory is extensively involved in goal-directed behaviors in which information must be retained and manipulated to ensure successful task execution. Before the emergence of other competing models, the concept of working memory was described by the multicomponent working memory model proposed by Baddeley and Hitch. In the present article, the authors provide an overview of several working memory-relevant studies in order to harmonize the findings of working memory from the neurosciences and psychological standpoints, especially after citing evidence from past studies of healthy, aging, diseased, and/or lesioned brains. In particular, the theoretical frameworks behind working memory are presented and discussed, together with the related domains (such as memory's capacity limit and temporary storage) considered to play a part in each. From the neuroscience perspective, it has been established that working memory activates the fronto-parietal brain regions, including the prefrontal, cingulate, and parietal cortices. Recent studies have subsequently implicated the roles of subcortical regions (such as the midbrain and cerebellum) in working memory. Aging also appears to have modulatory effects on working memory; age interactions with emotion, caffeine and hormones appear to affect working memory performances at the neurobiological level. Moreover, working memory deficits are apparent in older individuals, who are susceptible to cognitive deterioration. Another younger population with working memory impairment consists of those with mental, developmental, and/or neurological disorders such as major depressive disorder and others.
A less coherent and organized neural pattern has been consistently reported in these disadvantaged groups. Working memory of patients with traumatic brain injury was similarly affected and shown to have unusual neural activity (hyper- or hypoactivation) as a general observation. Decoding the underlying neural mechanisms of working memory helps support the current theoretical understandings concerning working memory, and at the same time provides insights into rehabilitation programs that target working memory impairments from neurophysiological or psychological aspects.

  17. Modeling and simulation of floating gate nanocrystal FET devices and circuits

    NASA Astrophysics Data System (ADS)

    Hasaneen, El-Sayed A. M.

    The nonvolatile memory market has been growing very fast during the last decade, especially for mobile communication systems. The Semiconductor Industry Association International Technology Roadmap for Semiconductors states that the difficult challenge for nonvolatile semiconductor memories is to achieve reliable, low power, low voltage performance and high-speed write/erase. This can be achieved by aggressive scaling of the nonvolatile memory cells. Unfortunately, scaling down of conventional nonvolatile memory will further degrade the retention time due to charge loss between the floating gate and the drain/source contacts and substrate, which makes conventional nonvolatile memory unattractive. Using nanocrystals as charge storage sites dramatically reduces the charge leakage through oxide defects and drain/source contacts. Floating gate nanocrystal nonvolatile memory, FG-NCNVM, is a candidate for future memory because it is advantageous in terms of high-speed write/erase, small size, good scalability, low-voltage, low-power applications, and the capability to store multiple bits per cell. Many studies regarding FG-NCNVMs have been published. Most of them have dealt with fabrication improvements of the devices and device characterizations. Due to the promising FG-NCNVM applications in integrated circuits, there is a need for a circuit simulation model to simulate the electrical characteristics of the floating gate devices. In this thesis, a FG-NCNVM circuit simulation model has been proposed. It is based on the SPICE BSIM simulation model. This model simulates the cell behavior during normal operation. Model validation results have been presented. The SPICE model shows good agreement with experimental results. Current-voltage characteristics, transconductance and unity-gain frequency (fT) have been studied, showing the effect of the threshold voltage shift (ΔVth) due to nanocrystal charge on the device characteristics. 
The threshold voltage shift due to nanocrystal charge has a strong effect on the memory characteristics. Also, the programming operation of the memory cell has been investigated. The tunneling rate from quantum well channel to quantum dot (nanocrystal) gate is calculated. The calculations include various memory parameters, wavefunctions, and energies of quantum well channel and quantum dot gate. The use of floating gate nanocrystal memory as a transistor with a programmable threshold voltage has been demonstrated. The incorporation of FG-NCFETs to design programmable integrated circuit building blocks has been discussed. This includes the design of programmable current and voltage reference circuits. Finally, we demonstrated the design of tunable gain op-amp incorporating FG-NCFETs. Programmable integrated circuit building blocks can be used in intelligent analog and digital systems.
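
    The abstract does not give the thesis's BSIM-based equations; as a hedged illustration of the first-order electrostatics behind a programmable threshold voltage, the magnitude of the shift from charge stored on the nanocrystal layer is roughly Q / C_cg, where C_cg is the control-gate coupling capacitance. The function name and numbers below are illustrative assumptions.

    ```python
    # First-order (textbook) estimate of threshold-voltage shift from
    # charge stored on a floating nanocrystal layer: |dVth| ~ Q / C_cg.
    # Illustrative only; not the BSIM-based model from the thesis.

    Q_E = 1.602e-19  # elementary charge, coulombs

    def delta_vth(n_electrons, c_cg_farads):
        """Magnitude of the threshold-voltage shift (V) for n stored
        electrons and control-gate coupling capacitance c_cg_farads."""
        return n_electrons * Q_E / c_cg_farads
    ```

    For example, about 1000 stored electrons against a 0.16 fF coupling capacitance give a shift on the order of a volt, which is the scale exploited when the cell is used as a transistor with a programmable threshold.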

  18. Effects of cacheing on multitasking efficiency and programming strategy on an ELXSI 6400

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Montry, G.R.; Benner, R.E.

    1985-12-01

    The impact of a cache/shared memory architecture, and, in particular, the cache coherency problem, upon concurrent algorithm and program development is discussed. In this context, a simple set of programming strategies is proposed which streamlines code development and improves code performance when multitasking in a cache/shared memory or distributed memory environment.

  19. Makalu: fast recoverable allocation of non-volatile memory

    DOE PAGES

    Bhandari, Kumud; Chakrabarti, Dhruva R.; Boehm, Hans-J.

    2016-10-19

    Byte addressable non-volatile memory (NVRAM) is likely to supplement, and perhaps eventually replace, DRAM. Applications can then persist data structures directly in memory instead of serializing them and storing them onto a durable block device. However, failures during execution can leave data structures in NVRAM unreachable or corrupt. In this paper, we present Makalu, a system that addresses non-volatile memory management. Makalu offers an integrated allocator and recovery-time garbage collector that maintains internal consistency, avoids NVRAM memory leaks, and is efficient, all in the face of failures. We show that a careful allocator design can support a less restrictive and a much more familiar programming model than existing persistent memory allocators. Our allocator significantly reduces the per allocation persistence overhead by lazily persisting non-essential metadata and by employing a post-failure recovery-time garbage collector. Experimental results show that the resulting online speed and scalability of our allocator are comparable to well-known transient allocators, and significantly better than state-of-the-art persistent allocators.
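
    A much-simplified sketch of the idea described above: object headers are persisted eagerly at allocation time, while non-essential metadata (here, the free state of objects) is rebuilt after a crash by a recovery-time pass that reclaims anything not reachable from a persistent root set. The layout, field names, and `Arena` class are invented for illustration and bear no relation to Makalu's actual on-NVRAM format.

    ```python
    # Toy persistent-arena sketch: eager header persistence, volatile free
    # metadata, recovery-time garbage collection of unreachable objects.
    import struct

    HDR = struct.Struct("<II")   # per-object header: (size, allocated flag)

    class Arena:
        def __init__(self, size=1 << 16):
            self.mem = bytearray(size)   # stands in for an NVRAM mapping
            self.top = 0                 # bump pointer
            self.roots = set()           # persistent root offsets

        def alloc(self, size):
            """Bump-allocate; the header is 'persisted' immediately."""
            off = self.top
            HDR.pack_into(self.mem, off, size, 1)
            self.top += HDR.size + size
            return off

        def recover(self):
            """Post-failure pass: walk all headers and reclaim (flag = 0)
            any allocated object not in the persistent root set."""
            live, off = [], 0
            while off < self.top:
                size, flag = HDR.unpack_from(self.mem, off)
                if flag and off in self.roots:
                    live.append(off)
                else:
                    HDR.pack_into(self.mem, off, size, 0)  # plug the leak
                off += HDR.size + size
            return live
    ```

    The point of the sketch is the trade: nothing about free lists is written at allocation time, so the common path stays cheap; the scan pays that cost back only after a failure.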

  20. Makalu: fast recoverable allocation of non-volatile memory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhandari, Kumud; Chakrabarti, Dhruva R.; Boehm, Hans-J.

    Byte addressable non-volatile memory (NVRAM) is likely to supplement, and perhaps eventually replace, DRAM. Applications can then persist data structures directly in memory instead of serializing them and storing them onto a durable block device. However, failures during execution can leave data structures in NVRAM unreachable or corrupt. In this paper, we present Makalu, a system that addresses non-volatile memory management. Makalu offers an integrated allocator and recovery-time garbage collector that maintains internal consistency, avoids NVRAM memory leaks, and is efficient, all in the face of failures. We show that a careful allocator design can support a less restrictive and a much more familiar programming model than existing persistent memory allocators. Our allocator significantly reduces the per allocation persistence overhead by lazily persisting non-essential metadata and by employing a post-failure recovery-time garbage collector. Experimental results show that the resulting online speed and scalability of our allocator are comparable to well-known transient allocators, and significantly better than state-of-the-art persistent allocators.

  1. Thermomechanical behavior of a two-way shape memory composite actuator

    NASA Astrophysics Data System (ADS)

    Ge, Qi; Westbrook, Kristofer K.; Mather, Patrick T.; Dunn, Martin L.; Qi, H. Jerry

    2013-05-01

    Shape memory polymers (SMPs) are a class of smart materials that can fix a temporary shape and recover to their permanent (original) shape in response to an environmental stimulus such as heat, electricity, or irradiation, among others. Most SMPs developed in the past can only demonstrate the so-called one-way shape memory effect; i.e., one programming step can only yield one shape memory cycle. Recently, one of the authors (Mather) developed an SMP that exhibits both one-way shape memory (1W-SM) and two-way shape memory (2W-SM) effects (with the assistance of an external load). This SMP was further used to develop a free-standing composite actuator with a nonlinear reversible actuation under thermal cycling. In this paper, a theoretical model for the PCO SMP based composite actuator was developed to investigate its thermomechanical behavior and the mechanisms for the observed phenomena during the actuation cycles, and to provide insight into how to improve the design.

  2. Cognitive stimulation in healthy older adults: a cognitive stimulation program using leisure activities compared to a conventional cognitive stimulation program.

    PubMed

    Grimaud, Élisabeth; Taconnat, Laurence; Clarys, David

    2017-06-01

    The aim of this study was to compare two methods of cognitive stimulation. The first method used a conventional approach; the second used leisure activities, in order to assess their benefits on cognitive functions (speed of processing, working memory capacity and executive functions) and psychoaffective measures (memory span and self-esteem). 67 participants over 60 years old took part in the experiment. They were divided into three groups: one group followed a program of conventional cognitive stimulation, one group a program of cognitive stimulation using leisure activities, and one control group. The different measures were evaluated before and after the training program. Results show that the cognitive stimulation program using leisure activities is as effective on memory span, updating and memory self-perception as the conventional cognitive stimulation program, and more effective on self-esteem than the conventional program. There is no difference between the two stimulated groups and the control group on speed of processing. Neither of the two cognitive stimulation programs provides a benefit on shifting and inhibition. These results indicate that it seems possible to enhance working memory and to observe far-transfer benefits on self-perception (self-esteem and memory self-perception) when using leisure activities as a tool for cognitive stimulation.

  3. Rosalie Wolf Memorial Lecture: A logic model to measure the impacts of World Elder Abuse Awareness Day.

    PubMed

    Stein, Karen

    2016-01-01

    This commentary discusses the need to evaluate the impact of World Elder Abuse Awareness Day activities, the elder abuse field's most sustained public awareness initiative. A logic model is proposed with measures for short-term, medium-term, and long-term outcomes for community-based programs.

  4. Schemas in Problem Solving: An Integrated Model of Learning, Memory, and Instruction

    DTIC Science & Technology

    1992-01-01

    reflected in the title of a recent article: "Hybrid Computation in Cognitive Science: Neural Networks and Symbols" (J. A. Anderson, 1990). And, Marvin Minsky...Rumelhart, D. E. (1989). Explorations in parallel distributed processing: A handbook of models, programs, and exercises. Cambridge, MA: The MIT Press. Minsky

  5. Calibration and Finite Element Implementation of an Energy-Based Material Model for Shape Memory Alloys

    NASA Astrophysics Data System (ADS)

    Junker, Philipp; Hackl, Klaus

    2016-09-01

    Numerical simulations are a powerful tool to analyze the complex thermo-mechanically coupled material behavior of shape memory alloys during product engineering. The benefit of the simulations strongly depends on the quality of the underlying material model. In this contribution, we discuss a variational approach which is based solely on energetic considerations and demonstrate that a unique calibration of such a model is sufficient to predict the material behavior at varying ambient temperature. In the beginning, we recall the necessary equations of the material model and explain the fundamental idea. Afterwards, we focus on the numerical implementation and provide all information that is needed for programming. Then, we show two different ways to calibrate the model and discuss the results. Furthermore, we show how this model is used during real-life industrial product engineering.

  6. Scalable PGAS Metadata Management on Extreme Scale Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chavarría-Miranda, Daniel; Agarwal, Khushbu; Straatsma, TP

    Programming models intended to run on exascale systems have a number of challenges to overcome, especially the sheer size of the system as measured by the number of concurrent software entities created and managed by the underlying runtime. It is clear from the size of these systems that any state maintained by the programming model has to be strictly sub-linear in size, in order not to overwhelm memory usage with pure overhead. A principal feature of Partitioned Global Address Space (PGAS) models is providing easy access to global-view distributed data structures. In order to provide efficient access to these distributed data structures, PGAS models must keep track of metadata such as where array sections are located with respect to processes/threads running on the HPC system. As PGAS models and applications become ubiquitous on very large trans-petascale systems, a key component of their performance and scalability will be efficient and judicious use of memory for model overhead (metadata) compared to application data. We present an evaluation of several strategies to manage PGAS metadata that exhibit different space/time tradeoffs. We use two real-world PGAS applications to capture metadata usage patterns and gain insight into their communication behavior.
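
    One way such metadata stays strictly sub-linear is to replace a per-element (or per-section) directory with a closed-form distribution: for a block-distributed global array, ownership is computable in O(1) from a few integers per array. A minimal sketch of that space/time trade-off, with illustrative names:

    ```python
    # O(1) ownership lookup for a block-distributed global array: the only
    # "metadata" is (n_global, n_procs), versus a directory with one entry
    # per global index.

    def block_owner(i, n_global, n_procs):
        """Return (owning process, local offset) for global index i under
        a block distribution; the last process may own a partial block."""
        block = (n_global + n_procs - 1) // n_procs   # ceiling division
        return i // block, i % block
    ```

    A directory strategy answers the same query by table lookup at the cost of O(n_global) memory; the closed form trades a little arithmetic for constant metadata, which is exactly the kind of trade-off the evaluation above quantifies.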

  7. Working Memory From the Psychological and Neurosciences Perspectives: A Review

    PubMed Central

    Chai, Wen Jia; Abd Hamid, Aini Ismafairus; Abdullah, Jafri Malin

    2018-01-01

    Since the concept of working memory was introduced over 50 years ago, different schools of thought have offered different definitions for working memory based on the various cognitive domains that it encompasses. The general consensus regarding working memory supports the idea that working memory is extensively involved in goal-directed behaviors in which information must be retained and manipulated to ensure successful task execution. Before the emergence of other competing models, the concept of working memory was described by the multicomponent working memory model proposed by Baddeley and Hitch. In the present article, the authors provide an overview of several working memory-relevant studies in order to harmonize the findings of working memory from the neurosciences and psychological standpoints, especially after citing evidence from past studies of healthy, aging, diseased, and/or lesioned brains. In particular, the theoretical frameworks behind working memory are presented and discussed, together with the related domains (such as memory's capacity limit and temporary storage) considered to play a part in each. From the neuroscience perspective, it has been established that working memory activates the fronto-parietal brain regions, including the prefrontal, cingulate, and parietal cortices. Recent studies have subsequently implicated the roles of subcortical regions (such as the midbrain and cerebellum) in working memory. Aging also appears to have modulatory effects on working memory; age interactions with emotion, caffeine and hormones appear to affect working memory performances at the neurobiological level. Moreover, working memory deficits are apparent in older individuals, who are susceptible to cognitive deterioration. Another younger population with working memory impairment consists of those with mental, developmental, and/or neurological disorders such as major depressive disorder and others.
A less coherent and organized neural pattern has been consistently reported in these disadvantaged groups. Working memory of patients with traumatic brain injury was similarly affected and shown to have unusual neural activity (hyper- or hypoactivation) as a general observation. Decoding the underlying neural mechanisms of working memory helps support the current theoretical understandings concerning working memory, and at the same time provides insights into rehabilitation programs that target working memory impairments from neurophysiological or psychological aspects. PMID:29636715

  8. Endurance degradation and lifetime model of p-channel floating gate flash memory device with 2T structure

    NASA Astrophysics Data System (ADS)

    Wei, Jiaxing; Liu, Siyang; Liu, Xiaoqiang; Sun, Weifeng; Liu, Yuwei; Liu, Xiaohong; Hou, Bo

    2017-08-01

    The endurance degradation mechanisms of a p-channel floating gate flash memory device with two-transistor (2T) structure are investigated in detail in this work. With the help of charge pumping (CP) measurements and Sentaurus TCAD simulations, the damage in the drain overlap region along the tunnel oxide interface caused by band-to-band (BTB) tunneling programming and the damage in the channel region resulting from Fowler-Nordheim (FN) tunneling erasure are verified respectively. Furthermore, a lifetime model of the endurance characteristic is extracted, which can extrapolate the endurance degradation tendency and predict the lifetime of the device.
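
    The abstract does not give the extracted model's functional form. As a hedged illustration of the general technique, endurance lifetime models are often fitted as a power law in the cycle count and then extrapolated to a failure criterion; the model form, helper names, and numbers below are generic stand-ins, not the paper's model.

    ```python
    # Generic endurance extrapolation: fit window(n) = a * n**b by least
    # squares in log-log space, then solve for the cycle count at which the
    # window crosses a failure criterion.
    import math

    def fit_power_law(cycles, window):
        """Least-squares line in log-log space; returns (a, b)."""
        xs = [math.log(n) for n in cycles]
        ys = [math.log(w) for w in window]
        m = len(xs)
        sx, sy = sum(xs), sum(ys)
        sxx = sum(x * x for x in xs)
        sxy = sum(x * y for x, y in zip(xs, ys))
        b = (m * sxy - sx * sy) / (m * sxx - sx * sx)
        a = math.exp((sy - b * sx) / m)
        return a, b

    def cycles_to_failure(a, b, min_window):
        """Solve a * n**b = min_window for n (expects b < 0)."""
        return (min_window / a) ** (1.0 / b)
    ```

    Given measured (cycles, window) pairs from a cycling experiment, the fit supplies (a, b) and the second function extrapolates the lifetime, which is the shape of prediction the extracted model above provides.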

  9. Dynamic modulation of innate immunity programming and memory.

    PubMed

    Yuan, Ruoxi; Li, Liwu

    2016-01-01

    Recent progress harkens back to the old theme of immune memory, except this time in the area of innate immunity, to which the traditional paradigm ascribes only a rudimentary first-line defense function with no memory. However, both in vitro and in vivo studies reveal that innate leukocytes may adopt distinct activation states such as priming, tolerance, and exhaustion, depending upon the history of prior challenges. The dynamic programming and potential memory of innate leukocytes may have far-reaching consequences in health and disease. This review aims to provide some salient features of innate programming and memory, pathophysiological consequences, underlying mechanisms, and current pressing issues.

  10. Machine Learning Feature Selection for Tuning Memory Page Swapping

    DTIC Science & Technology

    2013-09-01

    environments we set up. 13 Figure 4.1 Updated Feature Vector List. Features we added to the kernel are annotated with “(MLVM...Feb. 1966. [2] P. J. Denning, “The working set model for program behavior,” Communications of the ACM, vol. 11, no. 5, pp. 323–333, May 1968. [3] L. A...8] R. W. Carr and J. L. Hennessy, “WSClock — A simple and effective algorithm for virtual memory management,” M.S. thesis, Dept. Computer Science

  11. Implementation and performance of FDPS: a framework for developing parallel particle simulation codes

    NASA Astrophysics Data System (ADS)

    Iwasawa, Masaki; Tanikawa, Ataru; Hosono, Natsuki; Nitadori, Keigo; Muranushi, Takayuki; Makino, Junichiro

    2016-08-01

    We present the basic idea, implementation, measured performance, and performance model of FDPS (Framework for Developing Particle Simulators). FDPS is an application-development framework which helps researchers to develop simulation programs using particle methods for large-scale distributed-memory parallel supercomputers. A particle-based simulation program for distributed-memory parallel computers needs to perform domain decomposition, exchange of particles which are not in the domain of each computing node, and gathering of the particle information in other nodes which are necessary for interaction calculation. Also, even if distributed-memory parallel computers are not used, in order to reduce the amount of computation, algorithms such as the Barnes-Hut tree algorithm or the Fast Multipole Method should be used in the case of long-range interactions. For short-range interactions, some methods to limit the calculation to neighbor particles are required. FDPS provides all of these functions which are necessary for efficient parallel execution of particle-based simulations as "templates," which are independent of the actual data structure of particles and the functional form of the particle-particle interaction. By using FDPS, researchers can write their programs with the amount of work necessary to write a simple, sequential and unoptimized program of O(N²) calculation cost, and yet the program, once compiled with FDPS, will run efficiently on large-scale parallel supercomputers. A simple gravitational N-body program can be written in around 120 lines. We report the actual performance of these programs and the performance model. The weak scaling performance is very good, and almost linear speed-up was obtained for up to the full system of the K computer. The minimum calculation time per timestep is in the range of 30 ms (N = 10⁷) to 300 ms (N = 10⁹). 
These are currently limited by the time for the calculation of the domain decomposition and communication necessary for the interaction calculation. We discuss how we can overcome these bottlenecks.
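
    The decomposition and exchange steps that FDPS automates can be sketched in one dimension. The slab decomposition, function names, and cutoff handling below are illustrative assumptions, not the FDPS API; they only show the two operations the abstract names: moving particles to their owning node, and gathering out-of-node neighbours needed for a short-range (cutoff) interaction.

    ```python
    # 1-D sketch of domain decomposition and ghost-particle exchange for a
    # short-range interaction with a cutoff radius.

    def decompose(positions, n_nodes, xmax):
        """Assign each particle to a slab domain [w*k, w*(k+1))."""
        w = xmax / n_nodes
        domains = [[] for _ in range(n_nodes)]
        for x in positions:
            k = min(int(x / w), n_nodes - 1)
            domains[k].append(x)
        return domains

    def ghost_particles(domains, node, xmax, cutoff):
        """Particles on other nodes within `cutoff` of this node's slab;
        these must be communicated before the interaction calculation."""
        n = len(domains)
        w = xmax / n
        lo, hi = node * w, (node + 1) * w
        ghosts = []
        for k, parts in enumerate(domains):
            if k == node:
                continue
            ghosts.extend(x for x in parts if lo - cutoff <= x < hi + cutoff)
        return ghosts
    ```

    In a real 3-D code these steps (plus the tree build for long-range forces) dominate the framework's bookkeeping, which is why FDPS factors them out as templates independent of the particle data structure.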

  12. The Aging Well through Interaction and Scientific Education (AgeWISE) Program.

    PubMed

    O'Connor, Maureen K; Kraft, Malissa L; Daley, Ryan; Sugarman, Michael A; Clark, Erika L; Scoglio, Arielle A J; Shirk, Steven D

    2017-12-08

    We conducted a randomized controlled trial of the Aging Well through Interaction and Scientific Education (AgeWISE) program, a 12-week manualized cognitive rehabilitation program designed to provide psychoeducation to older adults about the aging brain, lifestyle factors associated with successful brain aging, and strategies to compensate for age related cognitive decline. Forty-nine cognitively intact participants ≥ 60 years old were randomly assigned to the AgeWISE program (n = 25) or a no-treatment control group (n = 24). Questionnaire data were collected prior to group assignment and post intervention. Two-factor repeated-measures analyses of covariance (ANCOVAs) were used to compare group outcomes. Upon completion, participants in the AgeWISE program reported increases in memory contentment and their sense of control in improving memory; no significant changes were observed in the control group. Surprisingly, participation in the group was not associated with significant changes in knowledge of memory aging, perception of memory ability, or greater use of strategies. The AgeWISE program was successfully implemented and increased participants' memory contentment and their sense of control in improving memory in advancing age. This study supports the use of AgeWISE to improve perspectives on healthy cognitive aging.

  13. Tuning collective communication for Partitioned Global Address Space programming models

    DOE PAGES

    Nishtala, Rajesh; Zheng, Yili; Hargrove, Paul H.; ...

    2011-06-12

    Partitioned Global Address Space (PGAS) languages offer programmers the convenience of a shared memory programming style combined with the locality control necessary to run on large-scale distributed memory systems. Even within a PGAS language programmers often need to perform global communication operations such as broadcasts or reductions, which are best performed as collective operations in which a group of threads work together to perform the operation. In this study we consider the problem of implementing collective communication within PGAS languages and explore some of the design trade-offs in both the interface and implementation. In particular, PGAS collectives have semantic issues that are different from those in send–receive style message passing programs, and different implementation approaches that take advantage of the one-sided communication style in these languages. We present an implementation framework for PGAS collectives as part of the GASNet communication layer, which supports shared memory, distributed memory and hybrids. The framework supports a broad set of algorithms for each collective, over which the implementation may be automatically tuned. In conclusion, we demonstrate the benefit of optimized GASNet collectives using application benchmarks written in UPC, and demonstrate that the GASNet collectives can deliver scalable performance on a variety of state-of-the-art parallel machines including a Cray XT4, an IBM BlueGene/P, and a Sun Constellation system with InfiniBand interconnect.
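    One classic algorithm such a tuned collectives framework can choose from is recursive doubling, which completes an allreduce in log₂(p) exchange steps. The sketch below simulates it over a plain Python list standing in for per-thread values (purely illustrative; it is not GASNet's interface):

```python
def recursive_doubling_allreduce(values):
    """Simulate a sum-allreduce over p ranks (p must be a power of two).

    At step s, rank r exchanges its partial sum with rank r XOR s,
    so after log2(p) steps every rank holds the global sum.
    """
    p = len(values)
    assert p & (p - 1) == 0, "power-of-two rank count assumed"
    vals = list(values)
    step = 1
    while step < p:
        new = list(vals)
        for r in range(p):
            partner = r ^ step  # pairwise exchange partner at this step
            new[r] = vals[r] + vals[partner]
        vals = new
        step <<= 1
    return vals
```

    A real implementation would pick between this, tree, and dissemination algorithms based on message size and machine topology, which is exactly the tuning space the paper explores.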

  14. User-Assisted Store Recycling for Dynamic Task Graph Schedulers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurt, Mehmet Can; Krishnamoorthy, Sriram; Agrawal, Gagan

    The emergence of the multi-core era has led to increased interest in designing effective yet practical parallel programming models. Models based on task graphs that operate on single-assignment data are attractive in several ways: they can support dynamic applications and precisely represent the available concurrency. However, they also require nuanced algorithms for scheduling and memory management for efficient execution. In this paper, we consider memory-efficient dynamic scheduling of task graphs. Specifically, we present a novel approach for dynamically recycling the memory locations assigned to data items as they are produced by tasks. We develop algorithms to identify memory-efficient store recycling functions by systematically evaluating the validity of a set of (user-provided or automatically generated) alternatives. Because a recycling function can be input data-dependent, we have also developed support for continued correct execution of a task graph in the presence of a potentially incorrect store recycling function. Experimental evaluation demonstrates that our approach to automatic store recycling incurs little to no overheads, achieves memory usage comparable to the best manually derived solutions, often produces recycling functions valid across problem sizes and input parameters, and efficiently recovers from an incorrect choice of store recycling functions.
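    The validity question at the heart of such an approach, whether a candidate recycling function ever maps two simultaneously live data items to the same memory slot, can be sketched for a linearized schedule. This is a simplified toy model (item i is produced at step i with a known last-use step), not the paper's actual algorithm:

```python
def recycling_valid(last_use, recycle):
    """Check a candidate store-recycling function on a linear schedule.

    Item i is produced at step i and last consumed at step last_use[i];
    recycle(i) gives the memory slot assigned to item i. The function is
    valid only if no slot is reused while its previous occupant is live.
    """
    occupant = {}  # slot -> index of the item currently stored there
    for i in range(len(last_use)):
        slot = recycle(i)
        prev = occupant.get(slot)
        if prev is not None and last_use[prev] >= i:
            return False  # previous occupant still live: collision
        occupant[slot] = i
    return True
```

    Systematically running such a check over a set of candidate functions, then falling back to a safe execution mode if the chosen function later proves wrong for a particular input, mirrors the workflow the abstract describes.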

  15. Use of the preconditioned conjugate gradient algorithm as a generic solver for mixed-model equations in animal breeding applications.

    PubMed

    Tsuruta, S; Misztal, I; Strandén, I

    2001-05-01

    Utility of the preconditioned conjugate gradient algorithm with a diagonal preconditioner for solving mixed-model equations in animal breeding applications was evaluated with 16 test problems. The problems included single- and multiple-trait analyses, with data on beef, dairy, and swine ranging from small examples to national data sets. Multiple-trait models considered low and high genetic correlations. Convergence was based on relative differences between left- and right-hand sides. The ordering of equations was fixed effects followed by random effects, with no special ordering within random effects. The preconditioned conjugate gradient program implemented with double precision converged for all models. However, when implemented in single precision, the preconditioned conjugate gradient algorithm did not converge for seven large models. The preconditioned conjugate gradient and successive overrelaxation algorithms were subsequently compared for 13 of the test problems. The preconditioned conjugate gradient algorithm was easy to implement with the iteration on data for general models. However, successive overrelaxation requires specific programming for each set of models. On average, the preconditioned conjugate gradient algorithm converged in three times fewer rounds of iteration than successive overrelaxation. With straightforward implementations, programs using the preconditioned conjugate gradient algorithm may be two or more times faster than those using successive overrelaxation. However, programs using the preconditioned conjugate gradient algorithm would use more memory than would comparable implementations using successive overrelaxation. Extensive optimization of either algorithm can influence rankings. 
The preconditioned conjugate gradient implemented with iteration on data, a diagonal preconditioner, and in double precision may be the algorithm of choice for solving mixed-model equations when sufficient memory is available and ease of implementation is essential.
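    The diagonally preconditioned conjugate gradient variant evaluated above can be sketched in a few lines. This is a minimal pure-Python illustration on a small dense system, not the iteration-on-data implementation used for national data sets:

```python
def pcg(A, b, tol=1e-10, max_iter=100):
    """Conjugate gradient with a diagonal (Jacobi) preconditioner.

    Solves A x = b for symmetric positive-definite A (dense, list-of-lists).
    """
    n = len(b)
    x = [0.0] * n
    r = list(b)                                   # residual, since x0 = 0
    M_inv = [1.0 / A[i][i] for i in range(n)]     # diagonal preconditioner
    z = [M_inv[i] * r[i] for i in range(n)]
    p = list(z)
    rz = sum(r[i] * z[i] for i in range(n))
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = [M_inv[i] * r[i] for i in range(n)]
        rz_new = sum(r[i] * z[i] for i in range(n))
        beta = rz_new / rz
        rz = rz_new
        p = [z[i] + beta * p[i] for i in range(n)]
    return x
```

    In the animal-breeding setting, the matrix-vector product `Ap` is never formed from an explicit matrix; it is accumulated record by record ("iteration on data"), which is why the method is easy to apply to general models.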

  16. Effects of a Memory and Visual-Motor Integration Program for Older Adults Based on Self-Efficacy Theory.

    PubMed

    Kim, Eun Hwi; Suh, Soon Rim

    2017-06-01

    This study was conducted to verify the effects of a memory and visual-motor integration program for older adults based on self-efficacy theory. A non-equivalent control group pretest-posttest design was implemented in this quasi-experimental study. The participants were 62 older adults from senior centers and older adult welfare facilities in D and G city (Experimental group=30, Control group=32). The experimental group took part in a 12-session memory and visual-motor integration program over 6 weeks. Data regarding memory self-efficacy, memory, visual-motor integration, and depression were collected from July to October of 2014 and analyzed with independent t-test and Mann-Whitney U test using PASW Statistics (SPSS) 18.0 to determine the effects of the interventions. Memory self-efficacy (t=2.20, p=.031), memory (Z=-2.92, p=.004), and visual-motor integration (Z=-2.49, p=.013) increased significantly in the experimental group as compared to the control group. However, depression (Z=-0.90, p=.367) did not decrease significantly. This program is effective for increasing memory, visual-motor integration, and memory self-efficacy in older adults. Therefore, it can be used to improve cognition and prevent dementia in older adults. © 2017 Korean Society of Nursing Science

  17. Virtual reality-based prospective memory training program for people with acquired brain injury.

    PubMed

    Yip, Ben C B; Man, David W K

    2013-01-01

    People with acquired brain injury (ABI) may display cognitive impairments and face long-term disabilities, including prospective memory (PM) failure. Prospective memory is remembering to execute an intended action in the future. PM problems pose a challenge to an ABI patient's successful community reintegration. While retrospective memory (RM) has been extensively studied, treatment programs for prospective memory are rarely reported. The development of a treatment program for PM is therefore considered timely, and such a program can be cost-effective and appropriate to the patient's environment. A 12-session virtual reality (VR)-based cognitive rehabilitation program was developed using everyday PM activities as training content. Thirty-seven subjects were recruited to participate in a pretest-posttest control experimental study to evaluate its treatment effectiveness. Results suggest significantly better changes in both VR-based and real-life PM outcome measures, as well as in related cognitive attributes such as frontal lobe functions and semantic fluency. VR-based training may be well accepted by ABI patients, as encouraging improvement has been shown. Large-scale studies of the virtual reality-based prospective memory (VRPM) training program are indicated.

  18. The Memory Fitness Program: Cognitive Effects of a Healthy Aging Intervention

    PubMed Central

    Miller, Karen J.; Siddarth, Prabha; Gaines, Jean M.; Parrish, John M.; Ercoli, Linda M.; Marx, Katherine; Ronch, Judah; Pilgram, Barbara; Burke, Kasey; Barczak, Nancy; Babcock, Bridget; Small, Gary W.

    2014-01-01

    Context Age-related memory decline affects a large proportion of older adults. Cognitive training, physical exercise, and other lifestyle habits may help to minimize self-perception of memory loss and a decline in objective memory performance. Objective The purpose of this study was to determine whether a 6-week educational program on memory training, physical activity, stress reduction, and healthy diet led to improved memory performance in older adults. Design A convenience sample of 115 participants (mean age: 80.9 [SD: 6.0 years]) was recruited from two continuing care retirement communities. The intervention consisted of 60-minute classes held twice weekly with 15–20 participants per class. Testing of both objective and subjective cognitive performance occurred at baseline, preintervention, and postintervention. Objective cognitive measures evaluated changes in five domains: immediate verbal memory, delayed verbal memory, retention of verbal information, memory recognition, and verbal fluency. A standardized metamemory instrument assessed four domains of memory self-awareness: frequency and severity of forgetting, retrospective functioning, and mnemonics use. Results The intervention program resulted in significant improvements on objective measures of memory, including recognition of word pairs (t[114] = 3.62, p < 0.001) and retention of verbal information from list learning (t[114] = 2.98, p < 0.01). No improvement was found for verbal fluency. Regarding subjective memory measures, the retrospective functioning score increased significantly following the intervention (t[114] = 4.54, p < 0.0001), indicating perception of a better memory. Conclusions These findings indicate that a 6-week healthy lifestyle program can improve both encoding and recalling of new verbal information, as well as self-perception of memory ability in older adults residing in continuing care retirement communities. PMID:21765343

  19. Working Memory Training for Children with Cochlear Implants: A Pilot Study

    ERIC Educational Resources Information Center

    Kronenberger, William G.; Pisoni, David B.; Henning, Shirley C.; Colson, Bethany G.; Hazzard, Lindsey M.

    2011-01-01

    Purpose: This study investigated the feasibility and efficacy of a working memory training program for improving memory and language skills in a sample of 9 children who are deaf (age 7-15 years) with cochlear implants (CIs). Method: All children completed the Cogmed Working Memory Training program on a home computer over a 5-week period.…

  20. The Respiratory Environment Diverts the Development of Antiviral Memory CD8 T Cells.

    PubMed

    Shane, Hillary L; Reagin, Katie L; Klonowski, Kimberly D

    2018-06-01

    Our understanding of memory CD8+ T cells has been largely derived from acute, systemic infection models. However, memory CD8+ T cells generated from mucosal infection exhibit unique properties and, following respiratory infection, are not maintained in the lung long term. To better understand how infection route modifies memory differentiation, we compared murine CD8+ T cell responses to a vesicular stomatitis virus (VSV) challenge generated intranasally (i.n.) or i.v. The i.n. infection resulted in greater peak expansion of VSV-specific CD8+ T cells. However, this numerical advantage was rapidly lost during the contraction phase of the immune response, resulting in memory CD8+ T cell numerical deficiencies when compared with i.v. infection. Interestingly, the antiviral CD8+ T cells generated in response to i.n. VSV exhibited a biased and sustained proportion of early effector cells (CD127lo KLRG1lo) akin to the developmental program favored after i.n. influenza infection, suggesting that respiratory infection broadly favors an incomplete memory differentiation program. Correspondingly, i.n. VSV infection resulted in lower CD122 expression and eomesodermin levels by VSV-specific CD8+ T cells, further indicative of an inferior transition to bona fide memory. These results may be due to distinct (CD103+ CD11b+) dendritic cell subsets in the i.n. versus i.v. T cell priming environments, which express molecules that regulate T cell signaling and the balance between tolerance and immunity. Therefore, we propose that distinct immunization routes modulate both the quality and quantity of antiviral effector and memory CD8+ T cells in response to an identical pathogen and should be considered in CD8+ T cell-based vaccine design. Copyright © 2018 by The American Association of Immunologists, Inc.

  1. Bermuda Triangle: a subsystem of the 168/E interfacing scheme used by Group B at SLAC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oxoby, G.J.; Levinson, L.J.; Trang, Q.H.

    1979-12-01

    The Bermuda Triangle system is a method of interfacing several 168/E microprocessors to a central system for control of the processors and overlaying their memories. The system is a three-way interface with I/O ports to a large buffer memory, a PDP11 Unibus and a bus to the 168/E processors. Data may be transferred bidirectionally between any two ports. Two Bermuda Triangles are used, one for the program memory and one for the data memory. The program buffer memory stores the overlay programs for the 168/E, and the data buffer memory, the incoming raw data, the data portion of the overlays, and the outgoing processed events. This buffering is necessary since the memories of 168/E microprocessors are small compared to the main program and the amount of data being processed. The link to the computer facility is via a Unibus to IBM channel interface. A PDP11/04 controls the data flow. 7 figures, 4 tables. (RWR)

  2. Guidance system operations plan for manned CSM earth orbital and lunar missions using program COLOSSUS 3. Section 7: Erasable memory programs

    NASA Technical Reports Server (NTRS)

    Hamilton, M. H.

    1972-01-01

    Erasable-memory programs designed for guidance computers used in command and lunar modules are presented. The purpose, functional description, assumptions, restrictions, and limitations are given for each program.

  3. Apollo guidance, navigation and control: Guidance system operations plans for manned LM earth orbital and lunar missions using Program COLOSSUS 3. Section 7: Erasable memory programs

    NASA Technical Reports Server (NTRS)

    Hamilton, M. H.

    1972-01-01

    Erasable-memory programs (EMPs) designed for the guidance computers used in the command (CMC) and lunar modules (LGC) are described. CMC programs are designated COLOSSUS 3, and the associated EMPs are identified by a three-digit number beginning with 5. LGC programs are designated LUMINARY 1E, and the associated EMPs are identified, with one exception, by a three-digit number beginning with 1. The exception is EMP 99. The EMPs vary in complexity from a simple flagbit setting to a long and intricate logical structure. They all, however, cause the computer to behave in a way not intended in the original design of the programs; they accomplish this off-nominal behavior by some alteration of erasable memory to interface with existing fixed-memory programs to effect a desired result.

  4. The efficacy of a multifactorial memory training in older adults living in residential care settings.

    PubMed

    Vranić, Andrea; Španić, Ana Marija; Carretti, Barbara; Borella, Erika

    2013-11-01

    Several studies have shown an increase in memory performance after teaching mnemonic techniques to older participants. However, transfer effects to non-trained tasks are generally either very small, or not found. The present study investigates the efficacy of a multifactorial memory training program for older adults living in a residential care center. The program combines teaching of memory strategies with activities based on metacognitive (metamemory) and motivational aspects. Specific training-related gains in the Immediate list recall task (criterion task), as well as transfer effects on measures of short-term memory, long-term memory, working memory, motivational (need for cognition), and metacognitive aspects (subjective measure of one's memory) were examined. Maintenance of training benefits was assessed after seven months. Fifty-one older adults living in a residential care center, with no cognitive impairments, participated in the study. Participants were randomly assigned to two programs: the experimental group attended the training program, while the active control group was involved in a program in which different psychological issues were discussed. A benefit in the criterion task and substantial general transfer effects were found for the trained group, but not for the active control, and they were maintained at the seven months follow-up. Our results suggest that training procedures, which combine teaching of strategies with metacognitive-motivational aspects, can improve cognitive functioning and attitude toward cognitive activities in older adults.

  5. Full Parallel Implementation of an All-Electron Four-Component Dirac-Kohn-Sham Program.

    PubMed

    Rampino, Sergio; Belpassi, Leonardo; Tarantelli, Francesco; Storchi, Loriano

    2014-09-09

    A full distributed-memory implementation of the Dirac-Kohn-Sham (DKS) module of the program BERTHA (Belpassi et al., Phys. Chem. Chem. Phys. 2011, 13, 12368-12394) is presented, where the self-consistent field (SCF) procedure is replicated on all the parallel processes, each process working on subsets of the global matrices. The key feature of the implementation is an efficient procedure for switching between two matrix distribution schemes, one (integral-driven) optimal for the parallel computation of the matrix elements and another (block-cyclic) optimal for the parallel linear algebra operations. This approach, making both CPU-time and memory scalable with the number of processors used, virtually overcomes at once both time and memory barriers associated with DKS calculations. Performance, portability, and numerical stability of the code are illustrated on the basis of test calculations on three gold clusters of increasing size, an organometallic compound, and a perovskite model. The calculations are performed on a Beowulf and a BlueGene/Q system.
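    The switch between the two distribution schemes can be pictured through their ownership maps. The sketch below contrasts a contiguous-block layout (convenient for computing matrix elements) with a 1-D block-cyclic layout (the style favored by parallel linear algebra libraries), in a simplified 1-D form with made-up sizes; BERTHA's actual scheme operates on 2-D distributed matrices:

```python
def block_owner(i, n, p):
    """Contiguous block distribution: n rows split into p nearly equal chunks."""
    return i * p // n

def cyclic_owner(i, nb, p):
    """1-D block-cyclic distribution with block size nb over p processes."""
    return (i // nb) % p

def redistribution_map(n, p, nb):
    """For each global row: (owner before the switch, owner after the switch)."""
    return [(block_owner(i, n, p), cyclic_owner(i, nb, p)) for i in range(n)]
```

    Rows whose two owners differ are exactly the data that must move when switching schemes; an efficient switch batches those transfers per process pair, which is the kind of procedure the implementation's key feature provides.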

  6. Improved Air Combat Awareness; with AESA and Next-Generation Signal Processing

    DTIC Science & Technology

    2002-09-01

    (Abstract not available: the record consists of extracted briefing fragments mentioning a competence network, building techniques, a software development environment, communication, computer architecture, modeling, real-time radar programming, direct memory access with skewed load and store at 3.2 GB/s bandwidth, a performance figure of 400 MFLOPS, and a custom runtime environment with driver and hardware routines.)

  7. RSM 1.0 - A RESUPPLY SCHEDULER USING INTEGER OPTIMIZATION

    NASA Technical Reports Server (NTRS)

    Viterna, L. A.

    1994-01-01

    RSM, Resupply Scheduling Modeler, is a fully menu-driven program that uses integer programming techniques to determine an optimum schedule for replacing components on or before the end of a fixed replacement period. Although written to analyze the electrical power system on the Space Station Freedom, RSM is quite general and can be used to model the resupply of almost any system subject to user-defined resource constraints. RSM is based on a specific form of the general linear programming problem in which all variables in the objective function and all variables in the constraints are integers. While more computationally intensive, integer programming was required for accuracy when modeling systems with small quantities of components. Input values for component life can be real numbers; RSM converts them to integers by dividing the lifetime by the period duration, then reducing the result to the next lowest integer. For each component, there is a set of constraints that ensure that it is replaced before its lifetime expires. RSM includes user-defined constraints such as transportation mass and volume limits, as well as component life, available repair crew time and assembly sequences. A weighting factor allows the program to minimize factors such as cost. The program then performs an iterative analysis, which is displayed during the processing. A message gives the first period in which resources are being exceeded on each iteration. If the scheduling problem is infeasible, the final message will also indicate the first period in which resources were exceeded. RSM is written in APL2 for IBM PC series computers and compatibles. A stand-alone executable version of RSM is provided; however, this is a "packed" version of RSM which can only utilize the memory within the 640K DOS limit. This executable requires at least 640K of memory and DOS 3.1 or higher. Source code for an APL2/PC workspace version is also provided.
This version of RSM can make full use of any installed extended memory but must be run with the APL2 interpreter; and it requires an 80486 based microcomputer or an 80386 based microcomputer with an 80387 math coprocessor, at least 2Mb of extended memory, and DOS 3.3 or higher. The standard distribution medium for this package is one 5.25 inch 360K MS-DOS format diskette. RSM was developed in 1991. APL2 and IBM PC are registered trademarks of International Business Machines Corporation. MS-DOS is a registered trademark of Microsoft Corporation.
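    The core constraint RSM encodes as integer programming, replace each component before its (integerized) lifetime expires while respecting per-period resource caps and minimizing a weighted objective, can be illustrated by a tiny brute-force search. The component names, lifetimes, and mass cap below are invented for illustration; RSM itself is an APL2 integer-programming code, not this enumeration:

```python
from itertools import chain, combinations, product

def powerset(seq):
    """All subsets of seq, smallest first."""
    s = list(seq)
    return chain.from_iterable(combinations(s, k) for k in range(len(s) + 1))

def feasible(times, life, horizon):
    """Each replacement gap (including start and end of horizon) must be <= life."""
    last = 0
    for t in sorted(times):
        if t - last > life:
            return False
        last = t
    return horizon - last <= life

def schedule(components, horizon, mass_cap):
    """components: {name: (life_in_periods, mass)} -> (min total mass, plan)."""
    names = list(components)
    best = None
    for plan in product(*(powerset(range(1, horizon + 1)) for _ in names)):
        per_period, ok = {}, True
        for name, times in zip(names, plan):
            life, mass = components[name]
            if not feasible(times, life, horizon):
                ok = False
                break
            for t in times:
                per_period[t] = per_period.get(t, 0) + mass
        if not ok or any(m > mass_cap for m in per_period.values()):
            continue
        total = sum(per_period.values())
        if best is None or total < best[0]:
            best = (total, dict(zip(names, plan)))
    return best
```

    An integer-programming solver reaches the same optimum without enumeration, which is what makes the approach usable on realistic component counts.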

  8. Residual stresses in injection molded shape memory polymer parts

    NASA Astrophysics Data System (ADS)

    Katmer, Sukran; Esen, Huseyin; Karatas, Cetin

    2016-03-01

    Shape memory polymers (SMPs) are materials which exhibit the shape memory effect (SME). SME is the ability to change shape when induced by a stimulus such as temperature, moisture, pH, electric current, magnetic field, or light. A process, known as programming, is applied to SMP parts in order to alter them from their permanent shape to their temporary shape. In this study we experimentally investigated the effects of injection molding and programming processes on residual stresses in molded thermoplastic polyurethane shape memory polymer. The residual stresses were measured by the layer removal method. The study shows that injection molding and programming process conditions significantly influence residual stresses in molded shape memory polyurethane parts.

  9. Large-scale parallel lattice Boltzmann-cellular automaton model of two-dimensional dendritic growth

    NASA Astrophysics Data System (ADS)

    Jelinek, Bohumir; Eshraghi, Mohsen; Felicelli, Sergio; Peters, John F.

    2014-03-01

    An extremely scalable lattice Boltzmann (LB)-cellular automaton (CA) model for simulations of two-dimensional (2D) dendritic solidification under forced convection is presented. The model incorporates effects of phase change, solute diffusion, melt convection, and heat transport. The LB model represents the diffusion, convection, and heat transfer phenomena. The dendrite growth is driven by a difference between actual and equilibrium liquid composition at the solid-liquid interface. The CA technique is deployed to track the new interface cells. The computer program was parallelized using the Message Passing Interface (MPI) technique. Parallel scaling of the algorithm was studied and major scalability bottlenecks were identified. Efficiency loss attributable to the high memory bandwidth requirement of the algorithm was observed when using multiple cores per processor. Parallel writing of the output variables of interest was implemented in the binary Hierarchical Data Format 5 (HDF5) to improve the output performance, and to simplify visualization. Calculations were carried out in single precision arithmetic without significant loss in accuracy, resulting in 50% reduction of memory and computational time requirements. The presented solidification model shows very good scalability up to centimeter size domains, including more than ten million dendrites.
    Catalogue identifier: AEQZ_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEQZ_v1_0.html
    Program obtainable from: CPC Program Library, Queen’s University, Belfast, UK
    Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 29,767
    No. of bytes in distributed program, including test data, etc.: 3,131,367
    Distribution format: tar.gz
    Programming language: Fortran 90
    Computer: Linux PC and clusters
    Operating system: Linux
    Has the code been vectorized or parallelized?: Yes, parallelized using MPI
    Number of processors used: 1-50,000
    RAM: memory requirements depend on the grid size
    Classification: 6.5, 7.7
    External routines: MPI (http://www.mcs.anl.gov/research/projects/mpi/), HDF5 (http://www.hdfgroup.org/HDF5/)
    Nature of problem: dendritic growth in undercooled Al-3 wt% Cu alloy melt under forced convection.
    Solution method: the lattice Boltzmann model solves the diffusion, convection, and heat transfer phenomena; the cellular automaton technique is deployed to track the solid/liquid interface.
    Restrictions: heat transfer is calculated uncoupled from the fluid flow; thermal diffusivity is constant.
    Unusual features: a novel technique, utilizing periodic duplication of a pre-grown “incubation” domain, is applied for the scaleup test.
    Running time: varies from minutes to days depending on the domain size and number of computational cores.
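    The CA interface-tracking idea can be reduced to a toy capture rule: a liquid cell solidifies once a von Neumann neighbor has solidified. This sketch is purely illustrative; the actual model instead couples capture to the local solute undercooling computed by the LB solver:

```python
def ca_grow(solid, steps):
    """Toy cellular-automaton growth on a set of solid (x, y) cells.

    Each step, every liquid von Neumann neighbor of a solid cell is captured.
    """
    for _ in range(steps):
        captured = set()
        for (x, y) in solid:
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                captured.add((x + dx, y + dy))
        solid = solid | captured
    return solid
```

    Starting from a single seed, this rule grows a diamond of Manhattan radius equal to the step count; the paper's anisotropic capture rules and undercooling coupling are what turn such growth into realistic dendrite arms.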

  10. Examining the Association between Patient-Reported Symptoms of Attention and Memory Dysfunction with Objective Cognitive Performance: A Latent Regression Rasch Model Approach.

    PubMed

    Li, Yuelin; Root, James C; Atkinson, Thomas M; Ahles, Tim A

    2016-06-01

    Patient-reported cognition generally exhibits poor concordance with objectively assessed cognitive performance. In this article, we introduce latent regression Rasch modeling and provide a step-by-step tutorial for applying Rasch methods as an alternative to traditional correlation to better clarify the relationship of self-report and objective cognitive performance. An example analysis using these methods is also included. An introduction to latent regression Rasch modeling is provided together with a tutorial on implementing it using the JAGS programming language for the Bayesian posterior parameter estimates. In an example analysis, data from a longitudinal neurocognitive outcomes study of 132 breast cancer patients and 45 non-cancer matched controls that included self-report and objective performance measures pre- and post-treatment were analyzed using both conventional and latent regression Rasch model approaches. Consistent with previous research, conventional correlations between neurocognitive decline and self-reported problems were generally near zero. In contrast, application of latent regression Rasch modeling found statistically reliable associations between objective attention and processing speed measures and self-reported Attention and Memory scores. Latent regression Rasch modeling, together with correlation of specific self-reported cognitive domains with neurocognitive measures, helps to clarify the relationship of self-report with objective performance. While the majority of patients attribute their cognitive difficulties to memory decline, the Rasch modeling suggests the importance of processing speed and initial learning. To encourage the use of this method, a step-by-step guide and programming code for implementation are provided. Implications of this method in cognitive outcomes research are discussed. © The Author 2016. Published by Oxford University Press. All rights reserved.
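    The measurement model underlying this approach, dichotomous responses governed by P(endorse) = logistic(theta − b), with the latent ability theta regressed on covariates, can be sketched as a small simulation. The parameter names and values below are illustrative only; the article's estimation is done in JAGS and is not reproduced here:

```python
import math
import random

def rasch_prob(theta, b):
    """Rasch model: probability of endorsing an item of difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def simulate_respondent(covariates, betas, difficulties, rng, sigma=1.0):
    """Latent regression: theta = X*beta + Normal(0, sigma) noise,
    then dichotomous item responses drawn under the Rasch model."""
    theta = sum(x * b for x, b in zip(covariates, betas)) + rng.gauss(0.0, sigma)
    return [1 if rng.random() < rasch_prob(theta, d) else 0 for d in difficulties]
```

    Fitting reverses this generative story: the posterior over the regression weights links covariates (here, the objective neurocognitive measures) to the latent trait behind the self-report items, which is what lets the method detect associations that raw score correlations miss.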

  11. Explicit and implicit learning: The case of computer programming

    NASA Astrophysics Data System (ADS)

    Mancy, Rebecca

    The central question of this thesis concerns the role of explicit and implicit learning in the acquisition of a complex skill, namely computer programming. This issue is explored with reference to information processing models of memory drawn from cognitive science. These models indicate that conscious information processing occurs in working memory where information is stored and manipulated online, but that this mode of processing shows serious limitations in terms of capacity or resources. Some information processing models also indicate information processing in the absence of conscious awareness through automation and implicit learning. It was hypothesised that students would demonstrate implicit and explicit knowledge and that both would contribute to their performance in programming. This hypothesis was investigated via two empirical studies. The first concentrated on temporary storage and online processing in working memory and the second on implicit and explicit knowledge. Storage and processing were tested using two tools: temporary storage capacity was measured using a digit span test; processing was investigated with a disembedding test. The results were used to calculate correlation coefficients with performance on programming examinations. Individual differences in temporary storage had only a small role in predicting programming performance and this factor was not a major determinant of success. Individual differences in disembedding were more strongly related to programming achievement. The second study used interviews to investigate the use of implicit and explicit knowledge. Data were analysed according to a grounded theory paradigm. The results indicated that students possessed implicit and explicit knowledge, but that the balance between the two varied between students and that the most successful students did not necessarily possess greater explicit knowledge. 
The ways in which students described their knowledge led to the development of a framework which extends beyond the implicit-explicit dichotomy to four descriptive categories of knowledge along this dimension. Overall, the results demonstrated that explicit and implicit knowledge both contribute to the acquisition of programming skills. Suggestions are made for further research, and the results are discussed in the context of their implications for education.

  12. CSOLNP: Numerical Optimization Engine for Solving Non-linearly Constrained Problems.

    PubMed

    Zahery, Mahsa; Maes, Hermine H; Neale, Michael C

    2017-08-01

    We introduce the optimizer CSOLNP, a C++ implementation of the R package RSOLNP (Ghalanos & Theussl, 2012, Rsolnp: General non-linear optimization using augmented Lagrange multiplier method. R package version 1) with some improvements. CSOLNP solves non-linearly constrained optimization problems using a Sequential Quadratic Programming (SQP) algorithm. CSOLNP, NPSOL (a popular FORTRAN implementation of the SQP method; Gill et al., 1986, User's guide for NPSOL (version 4.0): A Fortran package for nonlinear programming (No. SOL-86-2). Stanford, CA: Stanford University Systems Optimization Laboratory), and SLSQP (another SQP implementation, available as part of the NLOPT collection; Johnson, 2014, The NLopt nonlinear-optimization package. Retrieved from http://ab-initio.mit.edu/nlopt) are the three optimizers available in the OpenMx package. These optimizers are compared in terms of runtimes, final objective values, and memory consumption. A Monte Carlo analysis of the performance of the optimizers was performed on ordinal and continuous models with five variables and one or two factors. While the relative difference between the objective values is less than 0.5%, CSOLNP is in general faster than NPSOL and SLSQP for ordinal analysis. As for continuous data, none of the optimizers performs consistently faster than the others. In terms of memory usage, we used Valgrind's heap profiler tool, called Massif, on one-factor threshold models. CSOLNP and NPSOL consume the same amount of memory, while SLSQP uses 71 MB more memory than the other two optimizers.

  13. Tensor contraction engine: Abstraction and automated parallel implementation of configuration-interaction, coupled-cluster, and many-body perturbation theories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hirata, So

    2003-11-20

    We develop a symbolic manipulation program and program generator (Tensor Contraction Engine, or TCE) that automatically derives the working equations of a well-defined model of second-quantized many-electron theories and synthesizes efficient parallel computer programs on the basis of these equations. Provided an ansatz of a many-electron theory model, TCE performs valid contractions of creation and annihilation operators according to Wick's theorem, consolidates identical terms, and reduces the expressions into the form of multiple tensor contractions acted on by permutation operators. Subsequently, it determines the binary contraction order for each multiple tensor contraction with the minimal operation and memory cost, factorizes common binary contractions (defines intermediate tensors), and identifies reusable intermediates. The resulting ordered list of binary tensor contractions, additions, and index permutations is translated into an optimized program that is combined with the NWChem and UTChem computational chemistry software packages. The programs synthesized by TCE take advantage of spin symmetry, Abelian point-group symmetry, and index permutation symmetry at every stage of the calculations to minimize the number of arithmetic operations and the storage requirement, adjust the peak local memory usage by index-range tiling, and support parallel I/O interfaces and dynamic load balancing for parallel execution. We demonstrate the utility of TCE through automatic derivation and implementation of parallel programs for various models of configuration-interaction theory (CISD, CISDT, CISDTQ), many-body perturbation theory [MBPT(2), MBPT(3), MBPT(4)], and coupled-cluster theory (LCCD, CCD, LCCSD, CCSD, QCISD, CCSDT, and CCSDTQ).
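    The cost-driven choice of binary contraction order that TCE automates can be illustrated at toy scale. This sketch is not TCE's algorithm: the function names are hypothetical, and the cost model (product of the dimensions of all indices touched by a pair) stands in for TCE's real operation- and memory-cost analysis.

```python
def contract_cost(a, b, dims):
    # Toy flop estimate for contracting two tensors: the product of the
    # dimensions of every index appearing on either tensor.
    cost = 1
    for idx in set(a) | set(b):
        cost *= dims[idx]
    return cost

def cheapest_pair(tensors, dims):
    # Greedy step: among all tensor pairs, pick the cheapest contraction.
    pairs = [(i, j) for i in range(len(tensors))
             for j in range(i + 1, len(tensors))]
    return min(pairs, key=lambda p: contract_cost(tensors[p[0]],
                                                  tensors[p[1]], dims))

# Matrix chain (ij)(jk)(kl) with large j and l dimensions:
tensors = [("i", "j"), ("j", "k"), ("k", "l")]
dims = {"i": 2, "j": 100, "k": 2, "l": 100}
first = cheapest_pair(tensors, dims)
```

    Contracting the first two tensors touches only {i, j, k} (cost 2·100·2 = 400), whereas any order starting from the outer pair drags in all four indices, which is why an automated cost analysis pays off.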

  14. Feasibility study of current pulse induced 2-bit/4-state multilevel programming in phase-change memory

    NASA Astrophysics Data System (ADS)

    Liu, Yan; Fan, Xi; Chen, Houpeng; Wang, Yueqing; Liu, Bo; Song, Zhitang; Feng, Songlin

    2017-08-01

    Multilevel data storage for phase-change memory (PCM) has attracted attention in the memory market as a way to implement high-capacity memory systems and reduce cost-per-bit. In this work, we present a universal programming method using SET staircase current pulses in PCM cells, which can exploit the optimum programming scheme to achieve 2-bit/4-state resistance levels with equal logarithmic spacing. The SET staircase waveform can be optimized by real-time TCAD simulation to realize multilevel data storage efficiently in an arbitrary phase-change material. Experimental results from a 1 kb PCM test chip validate the proposed multilevel programming scheme, which improves storage density, the robustness of the resistance levels, and energy efficiency, while avoiding additional process complexity.

  15. Accelerating 3D Elastic Wave Equations on Knights Landing based Intel Xeon Phi processors

    NASA Astrophysics Data System (ADS)

    Sourouri, Mohammed; Birger Raknes, Espen

    2017-04-01

    In advanced imaging methods like reverse-time migration (RTM) and full waveform inversion (FWI), the elastic wave equation (EWE) is numerically solved many times to create the seismic image or the elastic parameter model update. It is therefore essential to optimize the solution time for the EWE, as this has a major impact on the total computational cost of running RTM or FWI. From a computational point of view, applications implementing EWEs face two major challenges: the amount of memory-bound computation involved, and the execution of such computations over very large datasets. So far, multi-core processors have not been able to tackle these two challenges, which eventually led to the adoption of accelerators such as Graphics Processing Units (GPUs). Compared to conventional CPUs, GPUs are densely populated with floating-point units and fast memory, a type of architecture that has proven to map well to many scientific computations. Despite these architectural advantages, full-scale adoption of accelerators has yet to materialize. First, accelerators require a significant programming effort imposed by programming models such as CUDA or OpenCL. Second, accelerators come with a limited amount of memory and require explicit data transfers between the CPU and the accelerator over the slow PCI bus. The second generation of the Xeon Phi processor, based on the Knights Landing (KNL) architecture, promises the computational capabilities of an accelerator while requiring only the programming effort of a traditional multi-core processor. The high computational performance is realized through many integrated cores (the number of cores, tiles, and memory varies with the model) organized in tiles that are connected via a 2D mesh-based interconnect. In contrast to accelerators, KNL is a self-hosted system, meaning explicit data transfers over the PCI bus are no longer required.
    However, like most accelerators, KNL has a memory subsystem consisting of low-level caches and 16 GB of high-bandwidth MCDRAM; for capacity computing, up to 400 GB of conventional DDR4 memory is provided. Such a strict hierarchical memory layout means that data locality is imperative if the true potential of this processor is to be harnessed. In this work, we study a series of optimizations specifically targeting KNL for our EWE-based application to reduce the time to solution for 3D models of 128³, 256³, and 512³ grid points. We compare the results with an optimized multi-core CPU version running on a dual-socket Xeon E5-2680v3 system using OpenMP. Our initial naive KNL implementation is roughly 20% faster than the multi-core version, but by using only one thread per core and careful memory placement via the memkind library, we achieved higher speedups. Additionally, using the MCDRAM as cache for problem sizes smaller than 16 GB unlocked further performance improvements. Depending on the problem size, our overall results indicate that the KNL-based system is approximately 2.2x faster than the 24-core Xeon E5-2680v3 system, with only modest changes to the code.

  16. Triple shape memory polymers by 4D printing

    NASA Astrophysics Data System (ADS)

    Bodaghi, M.; Damanpack, A. R.; Liao, W. H.

    2018-06-01

    This article aims at introducing triple shape memory polymers (SMPs) by four-dimensional (4D) printing technology and shaping adaptive structures for mechanical/bio-medical devices. The main approach is to combine hot–cold programming of SMPs with fused deposition modeling technology to engineer adaptive structures with a triple shape memory effect (SME). Experiments are conducted to characterize the elasto-plastic and hyper-elastic thermo-mechanical material properties of SMPs at low and high temperatures in the large-deformation regime. The feasibility of dual and triple SMPs with self-bending features is demonstrated experimentally. This is advantageous either where it is desired to perform mechanical manipulations on the 4D-printed objects for specific purposes or where they inevitably experience cold programming before activation. A phenomenological 3D constitutive model is developed for quantitative understanding of the dual/triple SME of SMPs fabricated by 4D printing in the large-deformation range. Governing equations of equilibrium are established for adaptive structures on the basis of the nonlinear Green–Lagrange strains. They are then solved with a finite element approach along with an elastic-predictor plastic-corrector return-map procedure accomplished by the Newton–Raphson method. The computational tool is applied to simulate dual/triple SMP structures enabled by 4D printing and to explore the hot–cold programming mechanisms behind material tailoring. It is shown that 4D-printed dual/triple SMPs have great potential in mechanical/bio-medical applications such as self-bending grippers/stents and self-shrinking/tightening staples.

  17. Method for programming a flash memory

    DOEpatents

    Brosky, Alexander R.; Locke, William N.; Maher, Conrado M.

    2016-08-23

    A method of programming a flash memory is described. The method includes partitioning a flash memory into a first group having a first level of write-protection, a second group having a second level of write-protection, and a third group having a third level of write-protection. The write-protection of the second and third groups is disabled using an installation adapter. The third group is programmed using a Software Installation Device.

  18. MPF: A portable message passing facility for shared memory multiprocessors

    NASA Technical Reports Server (NTRS)

    Malony, Allen D.; Reed, Daniel A.; Mcguire, Patrick J.

    1987-01-01

    The design, implementation, and performance evaluation of a message passing facility (MPF) for shared memory multiprocessors are presented. The MPF is based on a message passing model conceptually similar to conversations: participants (parallel processors) can enter or leave a conversation at any time. The message passing primitives for this model are implemented as a portable library of C function calls. The MPF is currently operational on a Sequent Balance 21000, and several parallel applications were developed and tested. Several simple benchmark programs are presented to establish interprocess communication performance for common patterns of interprocess communication. Finally, performance figures are presented for two parallel applications: linear system solution and iterative solution of partial differential equations.
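    The conversation model can be sketched in a few lines. The following Python analogue is illustrative only: MPF itself is a C library, and the class and method names below are assumptions, not its API.

```python
# A conversation is a named channel whose participants may enter or leave
# at any time; a send is broadcast to every current participant.
import queue
import threading

class Conversation:
    def __init__(self):
        self._lock = threading.Lock()
        self._members = {}            # participant id -> private mailbox

    def enter(self, pid):
        with self._lock:
            self._members[pid] = queue.Queue()

    def leave(self, pid):
        with self._lock:
            self._members.pop(pid, None)

    def send(self, sender, msg):
        # Snapshot the membership under the lock, then deliver.
        with self._lock:
            targets = [q for p, q in self._members.items() if p != sender]
        for q in targets:
            q.put((sender, msg))

    def receive(self, pid, timeout=None):
        return self._members[pid].get(timeout=timeout)

conv = Conversation()
conv.enter("worker-0")
conv.enter("worker-1")
conv.send("worker-0", {"rows": range(0, 512)})
sender, msg = conv.receive("worker-1", timeout=1)
```

    Because membership is re-read on every send, a late-joining participant simply starts receiving from that point on, mirroring the enter/leave-at-any-time semantics described in the abstract.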

  19. Statistical Deviations From the Theoretical Only-SBU Model to Estimate MCU Rates in SRAMs

    NASA Astrophysics Data System (ADS)

    Franco, Francisco J.; Clemente, Juan Antonio; Baylac, Maud; Rey, Solenne; Villa, Francesca; Mecha, Hortensia; Agapito, Juan A.; Puchner, Helmut; Hubert, Guillaume; Velazco, Raoul

    2017-08-01

    This paper addresses a well-known problem that occurs when memories are exposed to radiation: determining whether a bit flip is isolated or belongs to a multiple-cell event. As the physical layout of the memory is rarely known, this paper proposes evaluating the statistical properties of the sets of corrupted addresses and comparing the results with a mathematical prediction model in which all events are single bit upsets. A set of rules that is easy to implement in common programming languages can be applied iteratively when anomalies are observed, yielding a classification of errors much closer to reality (more than 80% accuracy in our experiments).
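    The paper's actual rule set is not reproduced here, but the flavor of an only-SBU statistical check can be sketched: count corrupted-address pairs that differ in a single bit and compare against the count expected if every event were an isolated single bit upset. This is a hypothetical toy indicator, not the authors' rules.

```python
from itertools import combinations
from math import comb, log2

def one_bit_pairs(addresses):
    # Observed pairs of corrupted addresses whose XOR has exactly one set bit.
    return sum(1 for a, b in combinations(addresses, 2)
               if bin(a ^ b).count("1") == 1)

def expected_under_sbu(n_flips, mem_words):
    # Under the only-SBU model, corrupted addresses are uniform and
    # independent; a random distinct pair differs in exactly one bit
    # with probability log2(M) / (M - 1) for an M-word memory.
    return comb(n_flips, 2) * log2(mem_words) / (mem_words - 1)

observed = one_bit_pairs([0b0000, 0b0001, 0b1000])   # two adjacent pairs
baseline = expected_under_sbu(n_flips=3, mem_words=16)
```

    An observed count far above the baseline would hint that some flips cluster, i.e., that multi-cell upsets are present.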

  20. Numerical Study of the Plasticity-Induced Stabilization Effect on Martensitic Transformations in Shape Memory Alloys

    NASA Astrophysics Data System (ADS)

    Junker, Philipp; Hempel, Philipp

    2017-12-01

    It is well known that plastic deformations in shape memory alloys stabilize the martensitic phase. Furthermore, knowledge of the plastic state is crucial for a reliable durability analysis of structural components. Numerical simulations serve as a tool for realistic investigation of the complex interactions between phase transformations and plastic deformations. To account for irreversible deformations, we extend an energy-based material model with a non-linear isotropic hardening plasticity model. Implementing this material model in commercial finite element programs, e.g., Abaqus, offers the opportunity to analyze entire structural components at low cost and with fast computation times. Along with the theoretical derivation and extension of the model, several simulation results for various boundary value problems are presented and interpreted with a view to improved structural design.

  1. Model for mapping settlements

    DOEpatents

    Vatsavai, Ranga Raju; Graesser, Jordan B.; Bhaduri, Budhendra L.

    2016-07-05

    A programmable media includes a graphical processing unit in communication with a memory element. The graphical processing unit is configured to detect one or more settlement regions from a high resolution remote sensed image based on the execution of programming code. The graphical processing unit identifies one or more settlements through the execution of the programming code that executes a multi-instance learning algorithm that models portions of the high resolution remote sensed image. The identification is based on spectral bands transmitted by a satellite and on selected designations of the image patches.

  2. A depth-first search algorithm to compute elementary flux modes by linear programming.

    PubMed

    Quek, Lake-Ee; Nielsen, Lars K

    2014-07-30

    The decomposition of complex metabolic networks into elementary flux modes (EFMs) provides a useful framework for exploring reaction interactions systematically. Generating a complete set of EFMs for large-scale models, however, is nearly impossible. Even for moderately sized models (<400 reactions), existing approaches based on the Double Description method must iterate through a large number of combinatorial candidates, imposing an immense processor and memory demand. Based on an alternative elementarity test, we developed a depth-first search algorithm using linear programming (LP) to enumerate EFMs exhaustively. Constraints can be introduced to directly generate the subset of EFMs satisfying them. The depth-first search algorithm has a constant memory overhead. Using flux constraints, a large LP problem can be massively divided and parallelized into independent sub-jobs for deployment on computing clusters. Since the sub-jobs do not overlap, the approach scales to utilize all available computing nodes with minimal coordination overhead or memory limitations. The speed of the algorithm was comparable to that of efmtool, a mainstream Double Description implementation, when enumerating all EFMs; the attrition power gained from performing flux feasibility tests offsets the increased computational demand of running an LP solver. Unlike the Double Description method, the algorithm enables accelerated enumeration of all EFMs satisfying a set of constraints.
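    At toy scale the underlying idea can be sketched as follows. Note the substitutions: the paper's LP elementarity test is replaced by an exact rational nullspace test, and the depth-first search by exhaustive subset enumeration, which is only viable for very small networks.

```python
# EFMs of a network with irreversible reactions: support-minimal
# non-negative flux vectors v with S v = 0 (S = stoichiometric matrix).
from fractions import Fraction
from itertools import combinations

def nullspace(S):
    """Basis of the null space of S, by Gauss-Jordan over exact rationals."""
    rows, cols = len(S), len(S[0])
    A = [[Fraction(x) for x in row] for row in S]
    pivots, r = [], 0
    for c in range(cols):
        pr = next((i for i in range(r, rows) if A[i][c] != 0), None)
        if pr is None:
            continue
        A[r], A[pr] = A[pr], A[r]
        pv = A[r][c]
        A[r] = [x / pv for x in A[r]]
        for i in range(rows):
            if i != r and A[i][c] != 0:
                f = A[i][c]
                A[i] = [a - f * b for a, b in zip(A[i], A[r])]
        pivots.append(c)
        r += 1
    free = [c for c in range(cols) if c not in pivots]
    basis = []
    for fc in free:
        v = [Fraction(0)] * cols
        v[fc] = Fraction(1)
        for i, pc in enumerate(pivots):
            v[pc] = -A[i][fc]
        basis.append(v)
    return basis

def efms(S):
    """Enumerate EFMs by trying reaction subsets in order of size."""
    n = len(S[0])
    modes = []
    for size in range(1, n + 1):
        for sub in combinations(range(n), size):
            if any(set(m) <= set(sub) for m in modes):
                continue  # support-minimality: skip supersets of known modes
            basis = nullspace([[row[j] for j in sub] for row in S])
            if len(basis) == 1 and (all(x > 0 for x in basis[0])
                                    or all(x < 0 for x in basis[0])):
                modes.append(sub)
    return modes

# Branched toy network: ->A, A->B, A->C, B->, C->
S = [[1, -1, -1, 0, 0],
     [0, 1, 0, -1, 0],
     [0, 0, 1, 0, -1]]
```

    For this network the two branches are recovered as the only EFMs; replacing the nullspace check with an LP feasibility test is what lets the real algorithm prune the search tree instead of enumerating every subset.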

  3. Randomized Controlled Trial of Exercise for ADHD and Disruptive Behavior Disorders.

    PubMed

    Bustamante, Eduardo Esteban; Davis, Catherine Lucy; Frazier, Stacy Lynn; Rusch, Dana; Fogg, Louis F; Atkins, Marc S; Marquez, David Xavier

    2016-07-01

    The objective of this study is to test the feasibility and impact of a 10-wk after-school exercise program for children with attention deficit hyperactivity disorder and/or disruptive behavior disorders living in an urban poor community. Children were randomized to an exercise program (n = 19) or a comparable but sedentary attention control program (n = 16). Cognitive and behavioral outcomes were collected pre-/posttest. Intent-to-treat mixed models tested group-time and group-time-attendance interactions. Effect sizes were calculated within and between groups. Feasibility was evidenced by 86% retention, 60% attendance, and average 75% maximum HR. Group-time results were null on the primary outcome, parent-reported executive function. Among secondary outcomes, between-group effect sizes favored exercise on hyperactive symptoms (d = 0.47) and verbal working memory (d = 0.26), and controls on visuospatial working memory (d = -0.21) and oppositional defiant symptoms (d = -0.37). In each group, within-group effect sizes were moderate to large on most outcomes (d = 0.67 to 1.60). A group-time-attendance interaction emerged on visuospatial working memory (F[1,33] = 7.42, P < 0.05), such that attendance to the control program was related to greater improvements (r = 0.72, P < 0.01), whereas attendance to the exercise program was not (r = 0.25, P = 0.34). Although between-group findings on the primary outcome, parent-reported executive function, were null, between-group effect sizes on hyperactivity and visuospatial working memory may reflect adaptations to the specific challenges presented by distinct formats. Both groups demonstrated substantial within-group improvements on clinically relevant outcomes. 
Findings underscore the importance of programmatic features, such as routines, engaging activities, behavior management strategies, and adult attention, and highlight the potential for after-school programs to benefit children with attention deficit hyperactivity disorder and disruptive behavior disorders living in urban poverty, where health needs are high and service resources are few.

  4. Performance and scalability evaluation of "Big Memory" on Blue Gene Linux.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoshii, K.; Iskra, K.; Naik, H.

    2011-05-01

    We address memory performance issues observed in Blue Gene Linux and discuss the design and implementation of 'Big Memory' - an alternative, transparent memory space introduced to eliminate the memory performance issues. We evaluate the performance of Big Memory using custom memory benchmarks, NAS Parallel Benchmarks, and the Parallel Ocean Program, at a scale of up to 4,096 nodes. We find that Big Memory successfully resolves the performance issues normally encountered in Blue Gene Linux. For the ocean simulation program, we even find that Linux with Big Memory provides better scalability than does the lightweight compute node kernel designed solely for high-performance applications. Originally intended exclusively for compute node tasks, our new memory subsystem dramatically improves the performance of certain I/O node applications as well. We demonstrate this performance using the central processor of the LOw Frequency ARray radio telescope as an example.

  5. Low latency and persistent data storage

    DOEpatents

    Fitch, Blake G; Franceschini, Michele M; Jagmohan, Ashish; Takken, Todd

    2014-11-04

    Persistent data storage is provided by a computer program product that includes computer program code configured for receiving a low latency store command that includes write data. The write data is written to a first memory device that is implemented by a nonvolatile solid-state memory technology characterized by a first access speed. It is acknowledged that the write data has been successfully written to the first memory device. The write data is written to a second memory device that is implemented by a volatile memory technology. At least a portion of the data in the first memory device is written to a third memory device when a predetermined amount of data has been accumulated in the first memory device. The third memory device is implemented by a nonvolatile solid-state memory technology characterized by a second access speed that is slower than the first access speed.
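    The three-tier write path described above can be sketched in a few lines: persist to the fast non-volatile tier, acknowledge, mirror to volatile memory, and spill to the slower non-volatile tier once enough data accumulates. The class and method names are illustrative, not taken from the patent.

```python
class TieredStore:
    def __init__(self, spill_threshold):
        self.fast_nvm = []            # tier 1: fast non-volatile memory
        self.dram = []                # tier 2: volatile mirror
        self.slow_nvm = []            # tier 3: slower non-volatile memory
        self.spill_threshold = spill_threshold

    def low_latency_store(self, data):
        self.fast_nvm.append(data)    # data is persistent from here on
        acknowledged = True           # ack before touching slower tiers
        self.dram.append(data)
        if len(self.fast_nvm) >= self.spill_threshold:
            # Predetermined amount accumulated: flush tier 1 to tier 3.
            self.slow_nvm.extend(self.fast_nvm)
            self.fast_nvm.clear()
        return acknowledged

store = TieredStore(spill_threshold=2)
store.low_latency_store(b"rec1")
store.low_latency_store(b"rec2")      # second write triggers the spill
```

    The point of the arrangement is that the acknowledgement only waits on the fastest persistent tier, while the slow tier sees batched, amortized writes.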

  6. SMT-Aware Instantaneous Footprint Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roy, Probir; Liu, Xu; Song, Shuaiwen

    Modern architectures employ simultaneous multithreading (SMT) to increase thread-level parallelism. SMT threads share many functional units and the whole memory hierarchy of a physical core. Without careful code design, SMT threads can easily contend with each other for these shared resources, causing severe performance degradation. Minimizing SMT thread contention for HPC applications running on dedicated platforms is very challenging, because they usually spawn threads within Single Program Multiple Data (SPMD) models. To address this important issue, we introduce a simple scheme for SMT-aware code optimization, which aims to reduce memory contention across SMT threads.

  7. Neuro-Cognitive Intervention for Working Memory: Preliminary Results and Future Directions.

    PubMed

    Bree, Kathleen D; Beljan, Paul

    2016-01-01

    Definitions of working memory identify it as a function of the executive function system in which an individual maintains two or more pieces of information in mind and uses that information simultaneously for some purpose. In academics, working memory is necessary for a variety of functions, including attending to the information one's teacher presents and then using that information simultaneously for problem solving. Research indicates difficulties with working memory are observed in children with mathematics learning disorder (MLD) and reading disorders (RD). To improve working memory and other executive function difficulties, and as an alternative to medication treatments for attention and executive function disorders, the Motor Cognition²® (MC²®) program was developed. Preliminary research on this program indicates statistically significant improvements in working memory, mathematics, and nonsense word decoding for reading. Further research on the MC²® program and its impact on working memory, as well as other areas of executive functioning, is warranted.

  8. Genome-wide Functional Analysis of CREB/Long-Term Memory-Dependent Transcription Reveals Distinct Basal and Memory Gene Expression Programs

    PubMed Central

    Lakhina, Vanisha; Arey, Rachel N.; Kaletsky, Rachel; Kauffman, Amanda; Stein, Geneva; Keyes, William; Xu, Daniel; Murphy, Coleen T.

    2014-01-01

    Induced CREB activity is a hallmark of long-term memory, but the full repertoire of CREB transcriptional targets required specifically for memory is not known in any system. To obtain a more complete picture of the mechanisms involved in memory, we combined memory training with genome-wide transcriptional analysis of C. elegans CREB mutants. This approach identified 757 significant CREB/memory-induced targets and confirmed the involvement of known memory genes from other organisms, but also suggested new mechanisms and novel components that may be conserved through mammals. CREB mediates distinct basal and memory transcriptional programs at least partially through spatial restriction of CREB activity: basal targets are regulated primarily in nonneuronal tissues, while memory targets are enriched for neuronal expression, emanating from CREB activity in AIM neurons. This suite of novel memory-associated genes will provide a platform for the discovery of orthologous mammalian long-term memory components. PMID:25611510

  9. Memory for radio advertisements: the effect of program and typicality.

    PubMed

    Martín-Luengo, Beatriz; Luna, Karlos; Migueles, Malen

    2013-01-01

    We examined the influence of the type of radio program on the memory for radio advertisements. We also investigated the role in memory of the typicality (high or low) of the elements of the products advertised. Participants listened to three types of programs (interesting, boring, enjoyable) with two advertisements embedded in each. After completing a filler task, the participants performed a true/false recognition test. Hits and false alarm rates were higher for the interesting and enjoyable programs than for the boring one. There were also more hits and false alarms for the high-typicality elements. The response criterion for the advertisements embedded in the boring program was stricter than for the advertisements in other types of programs. We conclude that the type of program in which an advertisement is inserted and the nature of the elements of the advertisement affect both the number of hits and false alarms and the response criterion, but not the accuracy of the memory.

  10. Array processor architecture

    NASA Technical Reports Server (NTRS)

    Barnes, George H. (Inventor); Lundstrom, Stephen F. (Inventor); Shafer, Philip E. (Inventor)

    1983-01-01

    A high speed parallel array data processing architecture fashioned under a computational envelope approach includes a data base memory for secondary storage of programs and data, and a plurality of memory modules interconnected to a plurality of processing modules by a connection network of the Omega gender. Programs and data are fed from the data base memory to the plurality of memory modules, from which the programs are fed through the connection network to the array of processors (one copy of each program for each processor). Execution of the programs occurs with the processors operating normally quite independently of each other in a multiprocessing fashion. For data dependent operations and other suitable operations, all processors are instructed to finish one given task or program branch before all are instructed to proceed in parallel processing fashion on the next instruction. Even when functioning in the parallel processing mode, however, the processors are not in lock-step but execute their own copy of the program individually unless or until another overall processor array synchronization instruction is issued.
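    The "all processors finish one task before all proceed" discipline described above maps naturally onto a barrier. The following Python sketch is an analogy for the array-wide synchronization instruction, not the patented hardware mechanism.

```python
import threading

N = 4
barrier = threading.Barrier(N)
results = [0] * N
totals = [0] * N

def worker(i):
    results[i] = i * i            # independent multiprocessing phase
    barrier.wait()                # array synchronization instruction
    # Past this point every worker has finished the phase above, so a
    # data-dependent operation may safely read all results.
    totals[i] = sum(results)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

    Between barriers the workers run their own copies of the program at their own pace, just as the abstract describes for the non-lock-step parallel mode.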

  11. Electrically programmable-erasable In-Ga-Zn-O thin-film transistor memory with atomic-layer-deposited Al₂O₃/Pt nanocrystals/Al₂O₃ gate stack

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qian, Shi-Bing; Zhang, Wen-Peng; Liu, Wen-Jun

    Amorphous indium-gallium-zinc oxide (a-IGZO) thin-film transistor (TFT) memory is very promising for transparent and flexible system-on-panel displays; however, electrical erasability has always been a severe challenge for this memory. In this article, we demonstrated successfully an electrically programmable-erasable memory with atomic-layer-deposited Al₂O₃/Pt nanocrystals/Al₂O₃ gate stack under a maximal processing temperature of 300 °C. As the programming voltage was enhanced from 14 to 19 V for a constant pulse of 0.2 ms, the threshold voltage shift increased significantly from 0.89 to 4.67 V. When the programmed device was subjected to an appropriate pulse under negative gate bias, it could return to the original state with a superior erasing efficiency. The above phenomena could be attributed to Fowler-Nordheim tunnelling of electrons from the IGZO channel to the Pt nanocrystals during programming, and inverse tunnelling of the trapped electrons during erasing. In terms of 0.2-ms programming at 16 V and 350-ms erasing at −17 V, a large memory window of 3.03 V was achieved successfully. Furthermore, the memory exhibited stable repeated programming/erasing (P/E) characteristics and good data retention, i.e., for 2-ms programming at 14 V and 250-ms erasing at −14 V, a memory window of 2.08 V was still maintained after 10³ P/E cycles, and a memory window of 1.1 V was retained after 10⁵ s retention time.

  12. PATSTAGS - PATRAN-STAGSC-1 TRANSLATOR

    NASA Technical Reports Server (NTRS)

    Otte, N. E.

    1994-01-01

    PATSTAGS translates PATRAN finite element model data into STAGS (Structural Analysis of General Shells) input records to be used for engineering analysis. The program reads data from a PATRAN neutral file and writes STAGS input records into a STAGS input file and a UPRESS data file. It supports translation of nodal constraint, nodal, element, force, and pressure data. PATSTAGS uses three files: the PATRAN neutral file to be translated, a STAGS input file, and a STAGS pressure data file. The user provides the name of the neutral file and the desired names of the STAGS files to be created. The pressure data file contains the element live pressure data used in the STAGS subroutine UPRESS. PATSTAGS is written in FORTRAN 77 for DEC VAX series computers running VMS. The main memory requirement for execution is approximately 790K of virtual memory. Output blocks can be modified to output the data in any format desired, allowing the program to be used to translate model data to analysis codes other than STAGSC-1 (HQN-10967). This program is available in DEC VAX BACKUP format on a 9-track magnetic tape or TK50 tape cartridge. Documentation is included in the price of the program. PATSTAGS was developed in 1990. DEC, VAX, TK50 and VMS are trademarks of Digital Equipment Corporation.

  13. Human memory CD8 T cell effector potential is epigenetically preserved during in vivo homeostasis.

    PubMed

    Abdelsamed, Hossam A; Moustaki, Ardiana; Fan, Yiping; Dogra, Pranay; Ghoneim, Hazem E; Zebley, Caitlin C; Triplett, Brandon M; Sekaly, Rafick-Pierre; Youngblood, Ben

    2017-06-05

    Antigen-independent homeostasis of memory CD8 T cells is vital for sustaining long-lived T cell-mediated immunity. In this study, we report that maintenance of human memory CD8 T cell effector potential during in vitro and in vivo homeostatic proliferation is coupled to preservation of acquired DNA methylation programs. Whole-genome bisulfite sequencing of primary human naive, short-lived effector memory (TEM), and longer-lived central memory (TCM) and stem cell memory (TSCM) CD8 T cells identified effector molecules with demethylated promoters and poised for expression. Effector-loci demethylation was heritably preserved during IL-7- and IL-15-mediated in vitro cell proliferation. Conversely, cytokine-driven proliferation of TCM and TSCM memory cells resulted in phenotypic conversion into TEM cells and was coupled to increased methylation of the CCR7 and Tcf7 loci. Furthermore, haploidentical donor memory CD8 T cells undergoing in vivo proliferation in lymphodepleted recipients also maintained their effector-associated demethylated status but acquired TEM-associated programs. These data demonstrate that effector-associated epigenetic programs are preserved during cytokine-driven subset interconversion of human memory CD8 T cells. © 2017 Abdelsamed et al.

  14. Bethany Frew | NREL

    Science.gov Websites

    Research/Teaching Assistant, Stanford University, Stanford, CA (2007-2014); Research Intern, Battelle Memorial Institute, Columbus, OH (2006-2007); Research Assistant, The Ohio State University, Columbus, OH. Areas of expertise: energy systems modeling and analysis; linear programming.

  15. Knowledge Representation: A Brief Review.

    ERIC Educational Resources Information Center

    Vickery, B. C.

    1986-01-01

    Reviews different structures and techniques of knowledge representation: structure of database records and files, data structures in computer programming, syntactic and semantic structure of natural language, knowledge representation in artificial intelligence, and models of human memory. A prototype expert system that makes use of some of these…

  16. GPU acceleration of particle-in-cell methods

    NASA Astrophysics Data System (ADS)

    Cowan, Benjamin; Cary, John; Meiser, Dominic

    2015-11-01

    Graphics processing units (GPUs) have become key components in many supercomputing systems, as they can provide more computations relative to their cost and power consumption than conventional processors. However, to take full advantage of this capability, they require a strict programming model which involves single-instruction multiple-data execution as well as significant constraints on memory accesses. To bring the full power of GPUs to bear on plasma physics problems, we must adapt the computational methods to this new programming model. We have developed a GPU implementation of the particle-in-cell (PIC) method, one of the mainstays of plasma physics simulation. This framework is highly general and enables advanced PIC features such as high order particles and absorbing boundary conditions. The main elements of the PIC loop, including field interpolation and particle deposition, are designed to optimize memory access. We describe the performance of these algorithms and discuss some of the methods used. Work supported by DARPA contract W31P4Q-15-C-0061 (SBIR).
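The memory-access optimization mentioned for particle deposition can be illustrated outside CUDA. The NumPy sketch below (illustrative only, not the authors' code) shows 1D cloud-in-cell charge deposition with particles pre-sorted by cell index, the layout that lets threads in a GPU warp touch adjacent grid locations (coalesced access):

```python
import numpy as np

def deposit_charge_cic(x, q, nx, dx):
    """1D cloud-in-cell deposition: each particle splits its charge
    between the two grid points bracketing its position."""
    rho = np.zeros(nx)
    cell = np.floor(x / dx).astype(int)        # index of left grid point
    frac = x / dx - cell                       # fractional offset within the cell
    np.add.at(rho, cell, q * (1.0 - frac))     # share for the left point
    np.add.at(rho, (cell + 1) % nx, q * frac)  # share for the right point (periodic)
    return rho

# Sorting particles by cell index groups their grid accesses together;
# on a GPU this is what makes deposition memory traffic coalesced.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 10_000)
x_sorted = x[np.argsort(np.floor(x / 0.1).astype(int))]

rho = deposit_charge_cic(x_sorted, np.full(x_sorted.size, 1e-3), nx=10, dx=0.1)
```

Sorting changes only the access order, not the result, so total deposited charge is conserved either way.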

  17. Phase-change memory: A continuous multilevel compact model of subthreshold conduction and threshold switching

    NASA Astrophysics Data System (ADS)

    Pigot, Corentin; Gilibert, Fabien; Reyboz, Marina; Bocquet, Marc; Zuliani, Paola; Portal, Jean-Michel

    2018-04-01

    Phase-change memory (PCM) compact modeling of the threshold switching based on a thermal runaway in Poole–Frenkel conduction is proposed. Although this approach is often used in physical models, this is the first time it is implemented in a compact model. The model accuracy is validated by a good correlation between simulations and experimental data collected on a PCM cell embedded in a 90 nm technology. A wide range of intermediate states is measured and accurately modeled with a single set of parameters, allowing multilevel programming. A good convergence is exhibited even in snapback simulation owing to this fully continuous approach. Moreover, threshold properties extraction indicates a thermally enhanced switching, which validates the basic hypothesis of the model. Finally, it is shown that this model is compliant with a new drift-resilient cell-state metric. Once enriched with a phase transition module, this compact model is ready to be implemented in circuit simulators.

  18. Addiction memory as a specific, individually learned memory imprint.

    PubMed

    Böning, J

    2009-05-01

    The construct of "addiction memory" (AM) and its importance for relapse occurrence has been the subject of discussion for the past 30 years. Neurobiological findings from "social neuroscience" and biopsychological learning theory, in conjunction with construct-valid behavioral pharmacological animal models, can now also provide general confirmation of addiction memory as a pathomorphological correlate of addiction disorders. Under multifactorial influences, experience-driven neuronal learning and memory processes of emotional and cognitive processing patterns in the specific individual "set" and "setting" play an especially pivotal role in this connection. From a neuropsychological perspective, the episodic (biographical) memory, located at the highest hierarchical level, is of central importance for the formation of the AM in certain structural and functional areas of the brain and neuronal networks. Within this context, neuronal learning and conditioning processes take place more or less unconsciously and automatically in the preceding long-term-memory systems (in particular priming and perceptual memory). They then regulate the individually programmed addiction behavior implicitly and thus subsequently stand for facilitated recollection of corresponding, previously stored cues or context situations. This explains why it is so difficult to treat an addiction memory, which is embedded above all in the episodic memory, from the molecular carrier level via the neuronal pattern level through to the psychological meaning level, and has thus meanwhile become a component of personality.

  19. SIERRA - A 3-D device simulator for reliability modeling

    NASA Astrophysics Data System (ADS)

    Chern, Jue-Hsien; Arledge, Lawrence A., Jr.; Yang, Ping; Maeda, John T.

    1989-05-01

    SIERRA is a three-dimensional general-purpose semiconductor-device simulation program which serves as a foundation for investigating integrated-circuit (IC) device and reliability issues. This program solves the Poisson and continuity equations in silicon under dc, transient, and small-signal conditions. Executing on a vector/parallel minisupercomputer, SIERRA utilizes a matrix solver which uses an incomplete LU (ILU) preconditioned conjugate gradient squared (CGS, BCG) method. The ILU-CGS method provides a good compromise between memory size and convergence rate. The authors have observed a 5x to 7x speedup over standard direct methods in simulations of transient problems containing highly coupled Poisson and continuity equations such as those found in reliability-oriented simulations. The application of SIERRA to parasitic CMOS latchup and dynamic random-access memory single-event-upset studies is described.

  20. Working Memory Training Does Not Improve Performance on Measures of Intelligence or Other Measures of “Far Transfer”

    PubMed Central

    Melby-Lervåg, Monica; Redick, Thomas S.; Hulme, Charles

    2016-01-01

    It has been claimed that working memory training programs produce diverse beneficial effects. This article presents a meta-analysis of working memory training studies (with a pretest-posttest design and a control group) that have examined transfer to other measures (nonverbal ability, verbal ability, word decoding, reading comprehension, or arithmetic; 87 publications with 145 experimental comparisons). Immediately following training there were reliable improvements on measures of intermediate transfer (verbal and visuospatial working memory). For measures of far transfer (nonverbal ability, verbal ability, word decoding, reading comprehension, arithmetic) there was no convincing evidence of any reliable improvements when working memory training was compared with a treated control condition. Furthermore, mediation analyses indicated that across studies, the degree of improvement on working memory measures was not related to the magnitude of far-transfer effects found. Finally, analysis of publication bias shows that there is no evidential value from the studies of working memory training using treated controls. The authors conclude that working memory training programs appear to produce short-term, specific training effects that do not generalize to measures of “real-world” cognitive skills. These results seriously question the practical and theoretical importance of current computerized working memory programs as methods of training working memory skills. PMID:27474138

  1. MaMR: High-performance MapReduce programming model for material cloud applications

    NASA Astrophysics Data System (ADS)

    Jing, Weipeng; Tong, Danyu; Wang, Yangang; Wang, Jingyuan; Liu, Yaqiu; Zhao, Peng

    2017-02-01

    With the increasing data size in materials science, existing programming models no longer satisfy the application requirements. MapReduce is a programming model that enables the easy development of scalable parallel applications to process big data on cloud computing systems. However, this model does not directly support the processing of multiple related data, and the processing performance does not reflect the advantages of cloud computing. To enhance the capability of workflow applications in material data processing, we defined a programming model for material cloud applications, called MaMR, that supports multiple different Map and Reduce functions running concurrently on a hybrid shared-memory BSP model. An optimized data sharing strategy to supply the shared data to the different Map and Reduce stages was also designed. We added a new merge phase to MapReduce that can efficiently merge data from the map and reduce modules. Experiments showed that the model and framework yield clear performance improvements over previous work.
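The shape of such a model can be sketched in a few lines of Python. Everything below is hypothetical (the names `run_job`, `merge`, and the material records are invented for illustration): several Map and several Reduce functions run over the same input with access to shared data, and an extra merge phase combines the reducers' partial results:

```python
from collections import defaultdict

def run_job(records, mappers, reducers, merge, shared):
    """Toy MapReduce with multiple map/reduce functions and a final
    merge phase, loosely in the spirit of MaMR (names hypothetical)."""
    groups = defaultdict(list)
    for rec in records:
        for mapper in mappers:                  # every Map function sees every record
            for key, val in mapper(rec, shared):
                groups[key].append(val)
    partials = {}
    for key, vals in groups.items():
        for reducer in reducers:                # every Reduce function sees each group
            partials.setdefault(key, []).append(reducer(key, vals, shared))
    return merge(partials)                      # the extra merge phase

# Example: per-element mass total and sample count, sharing a unit-conversion table.
shared = {"unit": 1.0}
records = [("Fe", 2.0), ("Cu", 3.0), ("Fe", 1.0)]
mappers = [lambda r, s: [(r[0], r[1] * s["unit"])]]
reducers = [lambda k, vs, s: sum(vs), lambda k, vs, s: len(vs)]
merged = run_job(records, mappers, reducers,
                 merge=lambda p: {k: tuple(v) for k, v in p.items()},
                 shared=shared)
# merged == {"Fe": (3.0, 2), "Cu": (3.0, 1)}
```

The real system runs the stages concurrently and shares data through memory; this sequential version only shows the dataflow.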

  2. 45 CFR 2490.149 - Program accessibility: Discrimination prohibited.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    .... 2490.149 Section 2490.149 Public Welfare Regulations Relating to Public Welfare (Continued) JAMES MADISON MEMORIAL FELLOWSHIP FOUNDATION ENFORCEMENT OF NONDISCRIMINATION ON THE BASIS OF HANDICAP IN PROGRAMS OR ACTIVITIES CONDUCTED BY THE JAMES MADISON MEMORIAL FELLOWSHIP FOUNDATION § 2490.149 Program...

  3. 45 CFR 2490.149 - Program accessibility: Discrimination prohibited.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    .... 2490.149 Section 2490.149 Public Welfare Regulations Relating to Public Welfare (Continued) JAMES MADISON MEMORIAL FELLOWSHIP FOUNDATION ENFORCEMENT OF NONDISCRIMINATION ON THE BASIS OF HANDICAP IN PROGRAMS OR ACTIVITIES CONDUCTED BY THE JAMES MADISON MEMORIAL FELLOWSHIP FOUNDATION § 2490.149 Program...

  4. 45 CFR 2490.149 - Program accessibility: Discrimination prohibited.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    .... 2490.149 Section 2490.149 Public Welfare Regulations Relating to Public Welfare (Continued) JAMES MADISON MEMORIAL FELLOWSHIP FOUNDATION ENFORCEMENT OF NONDISCRIMINATION ON THE BASIS OF HANDICAP IN PROGRAMS OR ACTIVITIES CONDUCTED BY THE JAMES MADISON MEMORIAL FELLOWSHIP FOUNDATION § 2490.149 Program...

  5. 45 CFR 2490.149 - Program accessibility: Discrimination prohibited.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    .... 2490.149 Section 2490.149 Public Welfare Regulations Relating to Public Welfare (Continued) JAMES MADISON MEMORIAL FELLOWSHIP FOUNDATION ENFORCEMENT OF NONDISCRIMINATION ON THE BASIS OF HANDICAP IN PROGRAMS OR ACTIVITIES CONDUCTED BY THE JAMES MADISON MEMORIAL FELLOWSHIP FOUNDATION § 2490.149 Program...

  6. 45 CFR 2490.149 - Program accessibility: Discrimination prohibited.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    .... 2490.149 Section 2490.149 Public Welfare Regulations Relating to Public Welfare (Continued) JAMES MADISON MEMORIAL FELLOWSHIP FOUNDATION ENFORCEMENT OF NONDISCRIMINATION ON THE BASIS OF HANDICAP IN PROGRAMS OR ACTIVITIES CONDUCTED BY THE JAMES MADISON MEMORIAL FELLOWSHIP FOUNDATION § 2490.149 Program...

  7. Bim controls IL-15 availability and limits engagement of multiple BH3-only proteins

    PubMed Central

    Kurtulus, S; Sholl, A; Toe, J; Tripathi, P; Raynor, J; Li, K-P; Pellegrini, M; Hildeman, D A

    2015-01-01

    During the effector CD8+ T-cell response, transcriptional differentiation programs are engaged that promote effector T cells with varying memory potential. Although these differentiation programs have been used to explain which cells die as effectors and which cells survive and become memory cells, it is unclear if the lack of cell death enhances memory. Here, we investigated effector CD8+ T-cell fate in mice whose death program has been largely disabled because of the loss of Bim. Interestingly, the absence of Bim resulted in a significant enhancement of effector CD8+ T cells with more memory potential. Bim-driven control of memory T-cell development required T-cell-specific, but not dendritic cell-specific, expression of Bim. Both total and T-cell-specific loss of Bim promoted skewing toward memory precursors, by enhancing the survival of memory precursors, and limiting the availability of IL-15. Decreased IL-15 availability in Bim-deficient mice facilitated the elimination of cells with less memory potential via the additional pro-apoptotic molecules Noxa and Puma. Combined, these data show that Bim controls memory development by limiting the survival of pre-memory effector cells. Further, by preventing the consumption of IL-15, Bim limits the role of Noxa and Puma in causing the death of effector cells with less memory potential. PMID:25124553

  8. Bim controls IL-15 availability and limits engagement of multiple BH3-only proteins.

    PubMed

    Kurtulus, S; Sholl, A; Toe, J; Tripathi, P; Raynor, J; Li, K-P; Pellegrini, M; Hildeman, D A

    2015-01-01

    During the effector CD8+ T-cell response, transcriptional differentiation programs are engaged that promote effector T cells with varying memory potential. Although these differentiation programs have been used to explain which cells die as effectors and which cells survive and become memory cells, it is unclear if the lack of cell death enhances memory. Here, we investigated effector CD8+ T-cell fate in mice whose death program has been largely disabled because of the loss of Bim. Interestingly, the absence of Bim resulted in a significant enhancement of effector CD8+ T cells with more memory potential. Bim-driven control of memory T-cell development required T-cell-specific, but not dendritic cell-specific, expression of Bim. Both total and T-cell-specific loss of Bim promoted skewing toward memory precursors, by enhancing the survival of memory precursors, and limiting the availability of IL-15. Decreased IL-15 availability in Bim-deficient mice facilitated the elimination of cells with less memory potential via the additional pro-apoptotic molecules Noxa and Puma. Combined, these data show that Bim controls memory development by limiting the survival of pre-memory effector cells. Further, by preventing the consumption of IL-15, Bim limits the role of Noxa and Puma in causing the death of effector cells with less memory potential.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Juhee; Lee, Sungpyo; Lee, Moo Hyung

    Quasi-unipolar non-volatile organic transistor memory (NOTM) can combine the best characteristics of conventional unipolar and ambipolar NOTMs and, as a result, exhibit improved device performance. Unipolar NOTMs typically exhibit a large signal ratio between the programmed and erased current signals but also require a large voltage to program and erase the memory cells. Meanwhile, an ambipolar NOTM can be programmed and erased at lower voltages, but the resulting signal ratio is small. By embedding a discontinuous n-type fullerene layer within a p-type pentacene film, quasi-unipolar NOTMs are fabricated, of which the signal storage utilizes both electrons and holes while the electrical signal relies on only hole conduction. These devices exhibit superior memory performance relative to both pristine unipolar pentacene devices and ambipolar fullerene/pentacene bilayer devices. The quasi-unipolar NOTM exhibited a larger signal ratio between the programmed and erased states while also reducing the voltage required to program and erase a memory cell. This simple approach should be readily applicable for various combinations of advanced organic semiconductors that have been recently developed and thereby should make a significant impact on organic memory research.

  10. Injecting Artificial Memory Errors Into a Running Computer Program

    NASA Technical Reports Server (NTRS)

    Bornstein, Benjamin J.; Granat, Robert A.; Wagstaff, Kiri L.

    2008-01-01

    Single-event upsets (SEUs) or bitflips are computer memory errors caused by radiation. BITFLIPS (Basic Instrumentation Tool for Fault Localized Injection of Probabilistic SEUs) is a computer program that deliberately injects SEUs into another computer program, while the latter is running, for the purpose of evaluating the fault tolerance of that program. BITFLIPS was written as a plug-in extension of the open-source Valgrind debugging and profiling software. BITFLIPS can inject SEUs into any program that can be run on the Linux operating system, without needing to modify the program's source code. Further, if access to the original program source code is available, BITFLIPS offers fine-grained control over exactly when and which areas of memory (as specified via program variables) will be subjected to SEUs. The rate of injection of SEUs is controlled by specifying either a fault probability or a fault rate based on memory size and radiation exposure time, in units of SEUs per byte per second. BITFLIPS can also log each SEU that it injects and, if program source code is available, report the magnitude of effect of the SEU on a floating-point value or other program variable.
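The core operation, flipping one bit in a value's in-memory representation with a given fault probability, can be sketched in user space. This is a crude analogue only (the real BITFLIPS operates inside Valgrind on a running process; the helper names here are invented):

```python
import random
import struct

def flip_bit(value: float, bit: int) -> float:
    """Flip one bit (0-63) in the IEEE-754 representation of a float,
    the basic operation behind a simulated single-event upset."""
    (bits,) = struct.unpack("<Q", struct.pack("<d", value))
    return struct.unpack("<d", struct.pack("<Q", bits ^ (1 << bit)))[0]

def inject_seus(data, p, rng=random):
    """With probability p per element, flip a randomly chosen bit --
    a user-space sketch of probabilistic SEU injection."""
    return [flip_bit(x, rng.randrange(64)) if rng.random() < p else x
            for x in data]

corrupted = inject_seus([1.0, 2.0, 3.0], p=0.5, rng=random.Random(42))
```

Flipping a mantissa bit perturbs a value slightly, while flipping an exponent or sign bit can change it drastically, which is why the magnitude-of-effect reporting described in the abstract is useful.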

  11. Benefits of a Classroom Based Instrumental Music Program on Verbal Memory of Primary School Children: A Longitudinal Study

    ERIC Educational Resources Information Center

    Rickard, Nikki S.; Vasquez, Jorge T.; Murphy, Fintan; Gill, Anneliese; Toukhsati, Samia R.

    2010-01-01

    Previous research has demonstrated a benefit of music training on a number of cognitive functions including verbal memory performance. The impact of school-based music programs on memory processes is however relatively unknown. The current study explored the effect of increasing frequency and intensity of classroom-based instrumental training…

  12. Scaling to Nanotechnology Limits with the PIMS Computer Architecture and a new Scaling Rule

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Debenedictis, Erik P.

    2015-02-01

    We describe a new approach to computing that moves towards the limits of nanotechnology using a newly formulated scaling rule. This is in contrast to the current computer industry scaling away from von Neumann's original computer at the rate of Moore's Law. We extend Moore's Law to 3D, which leads generally to architectures that integrate logic and memory. To keep power dissipation constant through a 2D surface of the 3D structure requires using adiabatic principles. We call our newly proposed architecture Processor In Memory and Storage (PIMS). We propose a new computational model that integrates processing and memory into "tiles" that comprise logic, memory/storage, and communications functions. Since the programming model will be relatively stable as a system scales, programs represented by tiles could be executed in a PIMS system built with today's technology or could become the "schematic diagram" for implementation in an ultimate 3D nanotechnology of the future. We build a systems software approach that offers advantages over and above the technological and architectural advantages. First, the algorithms may be more efficient in the conventional sense of having fewer steps. Second, the algorithms may run with higher power efficiency per operation by being a better match for the adiabatic scaling rule. The performance analysis based on demonstrated ideas in physical science suggests 80,000x improvement in cost per operation for the (arguably) general purpose function of emulating neurons in Deep Learning.

  13. NAS Parallel Benchmark. Results 11-96: Performance Comparison of HPF and MPI Based NAS Parallel Benchmarks. 1.0

    NASA Technical Reports Server (NTRS)

    Saini, Subash; Bailey, David; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    High Performance Fortran (HPF), the high-level language for parallel Fortran programming, is based on Fortran 90. HPF was defined by an informal standards committee known as the High Performance Fortran Forum (HPFF) in 1993, and modeled on TMC's CM Fortran language. Several HPF features have since been incorporated into the draft ANSI/ISO Fortran 95, the next formal revision of the Fortran standard. HPF allows users to write a single parallel program that can execute on a serial machine, a shared-memory parallel machine, or a distributed-memory parallel machine. HPF eliminates the complex, error-prone task of explicitly specifying how, where, and when to pass messages between processors on distributed-memory machines, or when to synchronize processors on shared-memory machines. HPF is designed in a way that allows the programmer to code an application at a high level, and then selectively optimize portions of the code by dropping into message passing or calling tuned library routines as 'extrinsics'. Compilers supporting High Performance Fortran features first appeared in late 1994 and early 1995 from Applied Parallel Research (APR), Digital Equipment Corporation, and The Portland Group (PGI). IBM introduced an HPF compiler for the IBM RS/6000 SP/2 in April of 1996. Over the past two years, these implementations have shown steady improvement in terms of both features and performance. The performance of various hardware/programming model (HPF and MPI (message passing interface)) combinations will be compared, based on the latest NAS (NASA Advanced Supercomputing) Parallel Benchmark (NPB) results, thus providing a cross-machine and cross-model comparison. Specifically, HPF-based NPB results will be compared with MPI-based NPB results to provide perspective on the performance currently obtainable using HPF versus MPI or versus hand-tuned implementations such as those supplied by the hardware vendors. In addition, we also present NPB (Version 1.0) performance results for the following systems: DEC Alpha Server 8400 5/440, Fujitsu VPP Series (VX, VPP300, and VPP700), HP/Convex Exemplar SPP2000, IBM RS/6000 SP P2SC node (120 MHz), NEC SX-4/32, SGI/CRAY T3E, and SGI Origin2000.

  14. 45 CFR 2490.151 - Program accessibility: New construction and alterations.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... alterations. 2490.151 Section 2490.151 Public Welfare Regulations Relating to Public Welfare (Continued) JAMES MADISON MEMORIAL FELLOWSHIP FOUNDATION ENFORCEMENT OF NONDISCRIMINATION ON THE BASIS OF HANDICAP IN PROGRAMS OR ACTIVITIES CONDUCTED BY THE JAMES MADISON MEMORIAL FELLOWSHIP FOUNDATION § 2490.151 Program...

  15. 45 CFR 2490.151 - Program accessibility: New construction and alterations.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... alterations. 2490.151 Section 2490.151 Public Welfare Regulations Relating to Public Welfare (Continued) JAMES MADISON MEMORIAL FELLOWSHIP FOUNDATION ENFORCEMENT OF NONDISCRIMINATION ON THE BASIS OF HANDICAP IN PROGRAMS OR ACTIVITIES CONDUCTED BY THE JAMES MADISON MEMORIAL FELLOWSHIP FOUNDATION § 2490.151 Program...

  16. 45 CFR 2490.151 - Program accessibility: New construction and alterations.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... alterations. 2490.151 Section 2490.151 Public Welfare Regulations Relating to Public Welfare (Continued) JAMES MADISON MEMORIAL FELLOWSHIP FOUNDATION ENFORCEMENT OF NONDISCRIMINATION ON THE BASIS OF HANDICAP IN PROGRAMS OR ACTIVITIES CONDUCTED BY THE JAMES MADISON MEMORIAL FELLOWSHIP FOUNDATION § 2490.151 Program...

  17. 45 CFR 2490.151 - Program accessibility: New construction and alterations.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... alterations. 2490.151 Section 2490.151 Public Welfare Regulations Relating to Public Welfare (Continued) JAMES MADISON MEMORIAL FELLOWSHIP FOUNDATION ENFORCEMENT OF NONDISCRIMINATION ON THE BASIS OF HANDICAP IN PROGRAMS OR ACTIVITIES CONDUCTED BY THE JAMES MADISON MEMORIAL FELLOWSHIP FOUNDATION § 2490.151 Program...

  18. 45 CFR 2490.151 - Program accessibility: New construction and alterations.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... alterations. 2490.151 Section 2490.151 Public Welfare Regulations Relating to Public Welfare (Continued) JAMES MADISON MEMORIAL FELLOWSHIP FOUNDATION ENFORCEMENT OF NONDISCRIMINATION ON THE BASIS OF HANDICAP IN PROGRAMS OR ACTIVITIES CONDUCTED BY THE JAMES MADISON MEMORIAL FELLOWSHIP FOUNDATION § 2490.151 Program...

  19. What we remember affects how we see: spatial working memory steers saccade programming.

    PubMed

    Wong, Jason H; Peterson, Matthew S

    2013-02-01

    Relationships between visual attention, saccade programming, and visual working memory have been hypothesized for over a decade. Awh, Jonides, and Reuter-Lorenz (Journal of Experimental Psychology: Human Perception and Performance 24(3):780-90, 1998) and Awh et al. (Psychological Science 10(5):433-437, 1999) proposed that rehearsing a location in memory also leads to enhanced attentional processing at that location. In regard to eye movements, Belopolsky and Theeuwes (Attention, Perception & Psychophysics 71(3):620-631, 2009) found that holding a location in working memory affects saccade programming, albeit negatively. In three experiments, we attempted to replicate the findings of Belopolsky and Theeuwes (Attention, Perception & Psychophysics 71(3):620-631, 2009) and determine whether the spatial memory effect can occur in other saccade-cuing paradigms, including endogenous central arrow cues and exogenous irrelevant singletons. In the first experiment, our results were the opposite of those in Belopolsky and Theeuwes (Attention, Perception & Psychophysics 71(3):620-631, 2009), in that we found facilitation (shorter saccade latencies) instead of inhibition when the saccade target matched the region in spatial working memory. In Experiment 2, we sought to determine whether the spatial working memory effect would generalize to other endogenous cuing tasks, such as a central arrow that pointed to one of six possible peripheral locations. As in Experiment 1, we found that saccade programming was facilitated when the cued location coincided with the saccade target. In Experiment 3, we explored how spatial memory interacts with other types of cues, such as a peripheral color singleton target or irrelevant onset. In both cases, the eyes were more likely to go to either singleton when it coincided with the location held in spatial working memory. 
On the basis of these results, we conclude that spatial working memory and saccade programming are likely to share common overlapping circuitry.

  20. What’s working in working memory training? An educational perspective

    PubMed Central

    Redick, Thomas S.; Shipstead, Zach; Wiemers, Elizabeth A.; Melby-Lervåg, Monica; Hulme, Charles

    2015-01-01

    Working memory training programs have generated great interest, with claims that the training interventions can have profound beneficial effects on children’s academic and intellectual attainment. We describe the criteria by which to evaluate evidence for or against the benefit of working memory training. Despite the promising results of initial research studies, the current review of all of the available evidence of working memory training efficacy is less optimistic. Our conclusion is that working memory training produces limited benefits in terms of specific gains on short-term and working memory tasks that are very similar to the training programs, but no advantage for academic and achievement-based reading and arithmetic outcomes. PMID:26640352

  1. Regular Latin Dancing and Health Education may Improve Cognition of Late Middle-Aged and Older Latinos

    PubMed Central

    Marquez, David X.; Wilson, Robert; Aguiñaga, Susan; Vásquez, Priscilla; Fogg, Louis; Yang, Zhi; Wilbur, JoEllen; Hughes, Susan; Spanbauer, Charles

    2017-01-01

    Disparities exist between Latinos and non-Latino whites in cognitive function. Dance is culturally appropriate and challenges individuals physically and cognitively, yet the impact of regular dancing on cognitive function in older Latinos has not been examined. A two-group pilot trial was employed among inactive, older Latinos. Participants (N = 57) participated in the BAILAMOS© dance program or a health education program. Cognitive test scores were converted to z-scores and measures of global cognition and specific domains (executive function, episodic memory, working memory) were derived. Results revealed a group × time interaction for episodic memory (p<0.05), such that the dance group showed greater improvement in episodic memory than the health education group. A main effect for time for global cognition (p<0.05) was also demonstrated, with participants in both groups improving. Structured Latin dance programs can positively influence episodic memory; and participation in structured programs may improve overall cognition among older Latinos. PMID:28095105

  2. Regular Latin Dancing and Health Education May Improve Cognition of Late Middle-Aged and Older Latinos.

    PubMed

    Marquez, David X; Wilson, Robert; Aguiñaga, Susan; Vásquez, Priscilla; Fogg, Louis; Yang, Zhi; Wilbur, JoEllen; Hughes, Susan; Spanbauer, Charles

    2017-07-01

    Disparities exist between Latinos and non-Latino Whites in cognitive function. Dance is culturally appropriate and challenges individuals physically and cognitively, yet the impact of regular dancing on cognitive function in older Latinos has not been examined. A two-group pilot trial was employed among inactive, older Latinos. Participants (N = 57) participated in the BAILAMOS © dance program or a health education program. Cognitive test scores were converted to z-scores and measures of global cognition and specific domains (executive function, episodic memory, working memory) were derived. Results revealed a group × time interaction for episodic memory (p < .05), such that the dance group showed greater improvement in episodic memory than the health education group. A main effect for time for global cognition (p < .05) was also demonstrated, with participants in both groups improving. Structured Latin dance programs can positively influence episodic memory, and participation in structured programs may improve overall cognition among older Latinos.

  3. Parallel Computation of the Regional Ocean Modeling System (ROMS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, P; Song, Y T; Chao, Y

    2005-04-05

    The Regional Ocean Modeling System (ROMS) is a regional ocean general circulation modeling system solving the free surface, hydrostatic, primitive equations over varying topography. It is free software distributed world-wide for studying both complex coastal ocean problems and the basin-to-global scale ocean circulation. The original ROMS code could only be run on shared-memory systems. With the increasing need to simulate larger model domains with finer resolutions and on a variety of computer platforms, there is a need in the ocean-modeling community to have a ROMS code that can be run on any parallel computer ranging from 10 to hundreds of processors. Recently, we have explored parallelization for ROMS using the MPI programming model. In this paper, an efficient parallelization strategy for such a large-scale scientific software package, based on an existing shared-memory computing model, is presented. In addition, scientific applications and data-performance issues on a couple of SGI systems, including Columbia, the world's third-fastest supercomputer, are discussed.
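Parallelizing such a grid-based model over MPI typically means decomposing the domain into subdomains padded with ghost ("halo") columns that mirror each neighbor's edge. The NumPy sketch below imitates that exchange with in-memory copies (no real MPI; all names are hypothetical, and in the actual code each copy would be an MPI send/receive):

```python
import numpy as np

def exchange_halos(subdomains):
    """Simulated halo exchange for a 1D (column-wise) domain decomposition:
    each subdomain's ghost columns are filled from its neighbors' edge
    interior columns (periodic boundaries here for simplicity)."""
    n = len(subdomains)
    for r, sub in enumerate(subdomains):
        sub[:, 0] = subdomains[(r - 1) % n][:, -2]   # left ghost <- left neighbor's last interior column
        sub[:, -1] = subdomains[(r + 1) % n][:, 1]   # right ghost <- right neighbor's first interior column

# A 4x8 global field split into two 4x4 interior blocks, each padded
# with one ghost column on each side (hence width 6).
field = np.arange(32, dtype=float).reshape(4, 8)
subs = [np.zeros((4, 6)), np.zeros((4, 6))]
subs[0][:, 1:-1] = field[:, 0:4]
subs[1][:, 1:-1] = field[:, 4:8]
exchange_halos(subs)
```

After the exchange, each rank can apply its stencil to interior points using only local memory, which is the property that lets a formerly shared-memory code run on distributed-memory machines.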

  4. Turbo Pascal Implementation of a Distributed Processing Network of MS-DOS Microcomputers Connected in a Master-Slave Configuration

    DTIC Science & Technology

    1989-12-01

    Snippet: ...describes the programmer's model of the hardware utilized in the microcomputers and interrupt-driven serial communication considerations... The programming model of Table 2.1 is common to the Intel 8088, 8086 and 80x86 series of microprocessors used in the IBM PC/AT.

  5. Work stealing for GPU-accelerated parallel programs in a global address space framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arafat, Humayun; Dinan, James; Krishnamoorthy, Sriram

    Task parallelism is an attractive approach to automatically load balance the computation in a parallel system and adapt to dynamism exhibited by parallel systems. Exploiting task parallelism through work stealing has been extensively studied in shared and distributed-memory contexts. In this paper, we study the design of a system that uses work stealing for dynamic load balancing of task-parallel programs executed on hybrid distributed-memory CPU-graphics processing unit (GPU) systems in a global-address space framework. We take into account the unique nature of the accelerator model employed by GPUs, the significant performance difference between GPU and CPU execution as a function of problem size, and the distinct CPU and GPU memory domains. We consider various alternatives in designing a distributed work stealing algorithm for CPU-GPU systems, while taking into account the impact of task distribution and data movement overheads. These strategies are evaluated using microbenchmarks that capture various execution configurations as well as the state-of-the-art CCSD(T) application module from the computational chemistry domain.
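    The core mechanism in the record above, work stealing, can be illustrated with a small sequential Python sketch: each worker owns a double-ended queue, pops tasks from its own tail, and steals from another worker's head when idle. This is a toy simulation under assumed conditions (no threads, GPUs, or global address space), not the authors' system.

```python
from collections import deque
import random

# Toy sequential simulation of work stealing. Each worker pops LIFO from
# its own tail (good locality) and steals FIFO from a victim's head
# (large, old tasks), a standard work-stealing convention.

def run(num_workers, tasks, steal_rng=random.Random(0)):
    queues = [deque() for _ in range(num_workers)]
    for i, t in enumerate(tasks):            # initial round-robin placement
        queues[i % num_workers].append(t)
    done = [[] for _ in range(num_workers)]
    remaining = len(tasks)
    while remaining:
        for w in range(num_workers):
            if queues[w]:
                done[w].append(queues[w].pop())        # own tail, LIFO
            else:                                       # idle: try to steal
                victims = [v for v in range(num_workers) if queues[v]]
                if victims:
                    v = steal_rng.choice(victims)
                    done[w].append(queues[v].popleft())  # victim's head, FIFO
            remaining = sum(len(q) for q in queues)
    return done
```

    A real CPU-GPU runtime must additionally weigh data movement cost before stealing across memory domains, which this sketch ignores.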

  7. Two demonstrators and a simulator for a sparse, distributed memory

    NASA Technical Reports Server (NTRS)

    Brown, Robert L.

    1987-01-01

    Described are two programs demonstrating different aspects of Kanerva's Sparse, Distributed Memory (SDM). These programs run on Sun 3 workstations, one using color, and have straightforward graphically oriented user interfaces and graphical output. Presented are descriptions of the programs, how to use them, and what they show. Additionally, this paper describes the software simulator behind each program.
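    For readers unfamiliar with Kanerva's model, a minimal Python sketch of SDM writes and reads follows. It is a toy with small, assumed parameters, not the Sun-workstation simulator described in the record: hard locations are random bit vectors, a write increments or decrements counters at all locations within a Hamming radius of the address, and a read sums those counters and thresholds each bit.

```python
import random

# Toy Sparse Distributed Memory (assumed parameters; illustrative only).

class SDM:
    def __init__(self, n_bits=32, n_locations=200, radius=14, seed=1):
        rng = random.Random(seed)
        self.n = n_bits
        self.radius = radius
        self.addresses = [rng.getrandbits(n_bits) for _ in range(n_locations)]
        self.counters = [[0] * n_bits for _ in range(n_locations)]

    def _near(self, addr):
        # hard locations within Hamming distance `radius` of addr
        return [i for i, a in enumerate(self.addresses)
                if bin(a ^ addr).count("1") <= self.radius]

    def write(self, addr, data):
        # distribute the datum over all activated locations
        for i in self._near(addr):
            for b in range(self.n):
                self.counters[i][b] += 1 if (data >> b) & 1 else -1

    def read(self, addr):
        # pool counters from activated locations and threshold each bit
        sums = [0] * self.n
        for i in self._near(addr):
            for b in range(self.n):
                sums[b] += self.counters[i][b]
        return sum(1 << b for b in range(self.n) if sums[b] > 0)
```

    After a single write, reading at the same address recovers the stored word exactly, because every activated location voted the same way on each bit.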

  8. Contemplating the GANE model using an extreme case paradigm.

    PubMed

    Geva, Ronny

    2016-01-01

    Early experiences play a crucial role in programming brain function, affecting selective attention, learning, and memory. Infancy literature suggests an extension of the GANE (glutamate amplifies noradrenergic effects) model to conditions with minimal priority-map inputs, yet suggests qualifications by noting that its efficacy is increased when tonic levels of arousal are maintained in an optimal range, in manners that are age and exposure dependent.

  9. Working Memory Training Does Not Improve Performance on Measures of Intelligence or Other Measures of "Far Transfer": Evidence From a Meta-Analytic Review.

    PubMed

    Melby-Lervåg, Monica; Redick, Thomas S; Hulme, Charles

    2016-07-01

    It has been claimed that working memory training programs produce diverse beneficial effects. This article presents a meta-analysis of working memory training studies (with a pretest-posttest design and a control group) that have examined transfer to other measures (nonverbal ability, verbal ability, word decoding, reading comprehension, or arithmetic; 87 publications with 145 experimental comparisons). Immediately following training there were reliable improvements on measures of intermediate transfer (verbal and visuospatial working memory). For measures of far transfer (nonverbal ability, verbal ability, word decoding, reading comprehension, arithmetic) there was no convincing evidence of any reliable improvements when working memory training was compared with a treated control condition. Furthermore, mediation analyses indicated that across studies, the degree of improvement on working memory measures was not related to the magnitude of far-transfer effects found. Finally, analysis of publication bias shows that there is no evidential value from the studies of working memory training using treated controls. The authors conclude that working memory training programs appear to produce short-term, specific training effects that do not generalize to measures of "real-world" cognitive skills. These results seriously question the practical and theoretical importance of current computerized working memory programs as methods of training working memory skills. © The Author(s) 2016.

  10. Research: Survey of Tribal Colleges Reveals Research's Benefits, Obstacles.

    ERIC Educational Resources Information Center

    Mortensen, Margaret; Nelson, Claudia E.; Stauss, Jay

    2001-01-01

    Stresses the need for tribal colleges to increase focus on research at all levels, from institutional to individual. Discusses types of research, obstacles and benefits to research, and model collaborative programs at Dull Knife Memorial College (Montana), Cheyenne River Community College (South Dakota), and Little Priest Tribal College…

  11. Holding on

    ERIC Educational Resources Information Center

    Thaxton, Terry Ann

    2011-01-01

    In this article, the author takes a multidimensional and personal look at creative writing work in an assisted living facility. The people she works with at the facility have memory loss. She shares her experience working with these people and describes a storytelling workshop that was modeled after Timeslips, a program started by Anne Basting at…

  12. Predictive model for early math skills based on structural equations.

    PubMed

    Aragón, Estíbaliz; Navarro, José I; Aguilar, Manuel; Cerda, Gamal; García-Sedeño, Manuel

    2016-12-01

    Early math skills are determined by higher cognitive processes that are particularly important for acquiring and developing skills during a child's early education. Such processes could be a critical target for identifying students at risk for math learning difficulties. Few studies have considered the use of a structural equation method to rationalize these relations. Participating in this study were 207 preschool students ages 59 to 72 months, 108 boys and 99 girls. Performance with respect to early math skills, early literacy, general intelligence, working memory, and short-term memory was assessed. A structural equation model explaining 64.3% of the variance in early math skills was applied. Early literacy exhibited the highest statistical significance (β = 0.443, p < 0.05), followed by intelligence (β = 0.286, p < 0.05), working memory (β = 0.220, p < 0.05), and short-term memory (β = 0.213, p < 0.05). Correlations between the independent variables were also significant (p < 0.05). According to the results, cognitive variables should be included in remedial intervention programs. © 2016 Scandinavian Psychological Associations and John Wiley & Sons Ltd.

  13. Effects of a School-Based Instrumental Music Program on Verbal and Visual Memory in Primary School Children: A Longitudinal Study

    PubMed Central

    Roden, Ingo; Kreutz, Gunter; Bongard, Stephan

    2012-01-01

    This study examined the effects of a school-based instrumental training program on the development of verbal and visual memory skills in primary school children. Participants either took part in a music program with weekly 45 min sessions of instrumental lessons in small groups at school, or they received extended natural science training. A third group of children did not receive additional training. Each child completed verbal and visual memory tests three times over a period of 18 months. Significant Group by Time interactions were found in the measures of verbal memory. Children in the music group showed greater improvements than children in the control groups after controlling for children’s socio-economic background, age, and IQ. No differences between groups were found in the visual memory tests. These findings are consistent with and extend previous research by suggesting that children receiving music training may benefit from improvements in their verbal memory skills. PMID:23267341

  14. Randomized Controlled Trial of Exercise for ADHD and Disruptive Behavior Disorders

    PubMed Central

    Bustamante, Eduardo E.; Davis, Catherine L.; Frazier, Stacy L.; Rusch, Dana; Fogg, Louis F.; Atkins, Marc S.; Marquez, David X.

    2016-01-01

    Purpose To test feasibility and impact of a 10-week after-school exercise program for children with ADHD and/or disruptive behavior disorders (DBD) living in an urban poor community. Methods Children were randomized to exercise (n=19) or a comparable but sedentary attention control program (n=16). Cognitive and behavioral outcomes were collected pre-post. Intent-to-treat mixed models tested group × time and group × time × attendance interactions. Effect sizes were calculated within and between groups. Results Feasibility was evidenced by 86% retention, 60% attendance, and average 75% maximum heart rate. Group × time results were null on the primary outcome, parent-reported executive function. Among secondary outcomes, between-group effect sizes favored exercise on hyperactive symptoms (d=0.47) and verbal working memory (d=0.26), and controls on visuospatial working memory (d=-0.21) and oppositional defiant symptoms (d=-0.37). In each group, within-group effect sizes were moderate-large on most outcomes (d=0.67 to 1.60). A group × time × attendance interaction emerged on visuospatial working memory (F[1,33]=7.42, p<.05), such that attendance to the control program was related to greater improvements (r=.72, p<.01) while attendance to the exercise program was not (r=.25, p=.34). Conclusions While between-group findings on the primary outcome, parent-reported executive function, were null, between-group effect sizes on hyperactivity and visuospatial working memory may reflect adaptations to the specific challenges presented by distinct formats. Both groups demonstrated substantial within-group improvements on clinically relevant outcomes. Findings underscore the importance of programmatic features such as routines, engaging activities, behavior management strategies, and adult attention; and highlight the potential for after-school programs to benefit children with ADHD and DBD living in urban poverty, where health needs are high and service resources are few.
PMID:26829000

  15. Multiprocessor architecture: Synthesis and evaluation

    NASA Technical Reports Server (NTRS)

    Standley, Hilda M.

    1990-01-01

    Multiprocessor computer architecture evaluation for structural computations is the focus of the research effort described. Results obtained are expected to lead to more efficient use of existing architectures and to suggest designs for new, application-specific, architectures. The brief descriptions given outline a number of related efforts directed toward this purpose. The difficulty in analyzing an existing architecture or in designing a new computer architecture lies in the fact that the performance of a particular architecture, within the context of a given application, is determined by a number of factors. These include, but are not limited to, the efficiency of the computation algorithm, the programming language and support environment, the quality of the program written in the programming language, the multiplicity of the processing elements, the characteristics of the individual processing elements, the interconnection network connecting processors and non-local memories, and the shared memory organization covering the spectrum from no shared memory (all local memory) to one global access memory. These performance determiners may be loosely classified as being software or hardware related. This distinction is not clear or even appropriate in many cases. The effect of the choice of algorithm is ignored by assuming that the algorithm is specified as given. Effort directed toward the removal of the effect of the programming language and program resulted in the design of a high-level parallel programming language. Two characteristics of the fundamental structure of the architecture (memory organization and interconnection network) are examined.

  16. Memory protection

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.

    1988-01-01

    Accidental overwriting of files or of memory regions belonging to other programs, browsing of personal files by superusers, Trojan horses, and viruses are examples of breakdowns in workstations and personal computers that would be significantly reduced by memory protection. Memory protection is the capability of an operating system and supporting hardware to delimit segments of memory, to control whether segments can be read from or written into, and to confine accesses of a program to its segments alone. The absence of memory protection in many operating systems today is the result of a bias toward a narrow definition of performance as maximum instruction-execution rate. A broader definition, including the time to get the job done, makes clear that cost of recovery from memory interference errors reduces expected performance. The mechanisms of memory protection are well understood, powerful, efficient, and elegant. They add to performance in the broad sense without reducing instruction execution rate.

  17. A depth-first search algorithm to compute elementary flux modes by linear programming

    PubMed Central

    2014-01-01

    Background The decomposition of complex metabolic networks into elementary flux modes (EFMs) provides a useful framework for exploring reaction interactions systematically. Generating a complete set of EFMs for large-scale models, however, is near impossible. Even for moderately-sized models (<400 reactions), existing approaches based on the Double Description method must iterate through a large number of combinatorial candidates, thus imposing an immense processor and memory demand. Results Based on an alternative elementarity test, we developed a depth-first search algorithm using linear programming (LP) to enumerate EFMs in an exhaustive fashion. Constraints can be introduced to directly generate a subset of EFMs satisfying the set of constraints. The depth-first search algorithm has a constant memory overhead. Using flux constraints, a large LP problem can be massively divided and parallelized into independent sub-jobs for deployment into computing clusters. Since the sub-jobs do not overlap, the approach scales to utilize all available computing nodes with minimal coordination overhead or memory limitations. Conclusions The speed of the algorithm was comparable to efmtool, a mainstream Double Description method, when enumerating all EFMs; the attrition power gained from performing flux feasibility tests offsets the increased computational demand of running an LP solver. Unlike the Double Description method, the algorithm enables accelerated enumeration of all EFMs satisfying a set of constraints. PMID:25074068
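    The enumeration idea, depth-first search that tests each candidate for feasibility and minimality, can be sketched generically. In the paper the test is an LP-based elementarity check; in the hypothetical sketch below, `feasible` is a caller-supplied stand-in, and no LP pruning at internal nodes is performed, so this is schematic rather than efficient.

```python
# Schematic DFS enumeration of minimal feasible reaction-support sets.
# `feasible` stands in for the paper's LP elementarity test (hypothetical).

def dfs_minimal_sets(reactions, feasible):
    """Enumerate support sets that are feasible and minimal (no proper
    feasible subset). Depth-first over include/exclude choices, so the
    search itself needs only constant memory per level."""
    results = []

    def is_minimal(s):
        return all(not feasible(s - {r}) for r in s)

    def visit(i, chosen):
        if i == len(reactions):
            if chosen and feasible(chosen) and is_minimal(chosen):
                results.append(frozenset(chosen))
            return
        visit(i + 1, chosen | {reactions[i]})   # include reaction i
        visit(i + 1, chosen)                    # exclude reaction i

    visit(0, frozenset())
    return results
```

    The real algorithm prunes infeasible branches with the LP at every node and partitions the search by fixing flux constraints, which is what makes the sub-jobs independent and parallelizable.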

  18. The Science of Computing: Virtual Memory

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.

    1986-01-01

    In the March-April issue, I described how a computer's storage system is organized as a hierarchy consisting of cache, main memory, and secondary memory (e.g., disk). The cache and main memory form a subsystem that functions like main memory but attains speeds approaching cache. What happens if a program and its data are too large for the main memory? This is not a frivolous question. Every generation of computer users has been frustrated by insufficient memory. A new line of computers may have sufficient storage for the computations of its predecessor, but new programs will soon exhaust its capacity. In 1960, a long-range planning committee at MIT dared to dream of a computer with 1 million words of main memory. In 1985, the Cray-2 was delivered with 256 million words. Computational physicists dream of computers with 1 billion words. Computer architects have done an outstanding job of enlarging main memories yet they have never kept up with demand. Only the shortsighted believe they can.

  19. Multi-layered epigenetic mechanisms contribute to transcriptional memory in T lymphocytes.

    PubMed

    Dunn, Jennifer; McCuaig, Robert; Tu, Wen Juan; Hardy, Kristine; Rao, Sudha

    2015-05-06

    Immunological memory is the ability of the immune system to respond more rapidly and effectively to previously encountered pathogens, a key feature of adaptive immunity. The capacity of memory T cells to "remember" previous cellular responses to specific antigens ultimately resides in their unique patterns of gene expression. Following re-exposure to an antigen, previously activated genes are transcribed more rapidly and robustly in memory T cells compared to their naïve counterparts. The ability for cells to remember past transcriptional responses is termed "adaptive transcriptional memory". Recent global epigenome studies suggest that epigenetic mechanisms are central to establishing and maintaining transcriptional memory, with elegant studies in model organisms providing tantalizing insights into the epigenetic programs that contribute to adaptive immunity. These epigenetic mechanisms are diverse, and include not only classical acetylation and methylation events, but also exciting and less well-known mechanisms involving histone structure, upstream signalling pathways, and nuclear localisation of genomic regions. Current global health challenges in areas such as tuberculosis and influenza demand not only more effective and safer vaccines, but also vaccines for a wider range of health priorities, including HIV, cancer, and emerging pathogens such as Ebola. Understanding the multi-layered epigenetic mechanisms that underpin the rapid recall responses of memory T cells following reactivation is a critical component of this development pathway.

  20. Nicotine Inhibits Memory CTL Programming

    PubMed Central

    Sun, Zhifeng; Smyth, Kendra; Garcia, Karla; Mattson, Elliot; Li, Lei; Xiao, Zhengguo

    2013-01-01

    Nicotine is the main tobacco component responsible for tobacco addiction and is used extensively in smoking and smoking cessation therapies. However, little is known about its effects on the immune system. We confirmed that multiple nicotinic receptors are expressed on mouse and human cytotoxic T lymphocytes (CTLs) and demonstrated that nicotinic receptors on mouse CTLs are regulated during activation. Acute nicotine presence during activation increases primary CTL expansion in vitro, but impairs in vivo expansion after transfer and subsequent memory CTL differentiation, which reduces protection against subsequent pathogen challenges. Furthermore, nicotine abolishes the regulatory effect of rapamycin on memory CTL programming, which can be attributed to the fact that rapamycin enhances expression of nicotinic receptors. Interestingly, naïve CTLs from chronic nicotine-treated mice have normal memory programming, which is impaired by nicotine during activation in vitro. In conclusion, simultaneous exposure to nicotine and antigen during CTL activation negatively affects memory development. PMID:23844169

  1. Kokkos: Enabling manycore performance portability through polymorphic memory access patterns

    DOE PAGES

    Carter Edwards, H.; Trott, Christian R.; Sunderland, Daniel

    2014-07-22

    The manycore revolution can be characterized by increasing thread counts, decreasing memory per thread, and diversity of continually evolving manycore architectures. High performance computing (HPC) applications and libraries must exploit increasingly finer levels of parallelism within their codes to sustain scalability on these devices. We found that a major obstacle to performance portability is the diverse and conflicting set of constraints on memory access patterns across devices. Contemporary portable programming models address manycore parallelism (e.g., OpenMP, OpenACC, OpenCL) but fail to address memory access patterns. The Kokkos C++ library enables applications and domain libraries to achieve performance portability on diverse manycore architectures by unifying abstractions for both fine-grain data parallelism and memory access patterns. In this paper we describe Kokkos’ abstractions, summarize its application programmer interface (API), present performance results for unit-test kernels and mini-applications, and outline an incremental strategy for migrating legacy C++ codes to Kokkos. Furthermore, the Kokkos library is under active research and development to incorporate capabilities from new generations of manycore architectures, and to address a growing list of applications and domain libraries.
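    Kokkos' central abstraction is a multidimensional view whose memory layout is a polymorphic, architecture-dependent choice: row-major strides suit CPU caching, column-major strides suit GPU memory coalescing. The Python sketch below mimics only this index-mapping idea; the names echo Kokkos' LayoutRight/LayoutLeft but this is not the Kokkos API, which is C++.

```python
# Sketch of layout-polymorphic views: the same logical 2-D view maps
# indices to one flat allocation in row-major ("right") or column-major
# ("left") order, chosen without touching the code that uses v[i, j].

class View2D:
    def __init__(self, rows, cols, layout="right"):
        self.rows, self.cols, self.layout = rows, cols, layout
        self.data = [0.0] * (rows * cols)   # one flat allocation

    def _offset(self, i, j):
        if self.layout == "right":   # row-major: j is the stride-1 index
            return i * self.cols + j
        return j * self.rows + i     # column-major: i is the stride-1 index

    def __getitem__(self, ij):
        return self.data[self._offset(*ij)]

    def __setitem__(self, ij, v):
        self.data[self._offset(*ij)] = v
```

    Because loop bodies address the view only through logical indices, swapping the layout changes the memory access pattern without rewriting the kernel, which is the portability mechanism the abstract describes.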

  2. Violence and sex impair memory for television ads.

    PubMed

    Bushman, Brad J; Bonacci, Angelica M

    2002-06-01

    Participants watched a violent, sexually explicit, or neutral TV program that contained 9 ads. Participants recalled the advertised brands. They also identified the advertised brands from slides of supermarket shelves. The next day, participants were telephoned and asked to recall again the advertised brands. Results showed better memory for people who saw the ads during a neutral program than for people who saw the ads during a violent or sexual program both immediately after exposure and 24 hr later. Violence and sex impaired memory for males and females of all ages, regardless of whether they liked programs containing violence and sex. These results suggest that sponsoring violent and sexually explicit TV programs might not be a profitable venture for advertisers.

  3. Simulation of n-qubit quantum systems. III. Quantum operations

    NASA Astrophysics Data System (ADS)

    Radtke, T.; Fritzsche, S.

    2007-05-01

    During the last decade, several quantum information protocols, such as quantum key distribution, teleportation or quantum computation, have attracted a lot of interest. Despite the recent success and research efforts in quantum information processing, however, we are just at the beginning of understanding the role of entanglement and the behavior of quantum systems in noisy environments, i.e. for nonideal implementations. Therefore, in order to facilitate the investigation of entanglement and decoherence in n-qubit quantum registers, here we present a revised version of the FEYNMAN program for working with quantum operations and their associated (Jamiołkowski) dual states. Based on the implementation of several popular decoherence models, we provide tools especially for the quantitative analysis of quantum operations. Apart from the implementation of different noise models, the current program extension may help investigate the fragility of many quantum states, one of the main obstacles in realizing quantum information protocols today.

    Program summary
    Title of program: Feynman
    Catalogue identifier: ADWE_v3_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADWE_v3_0
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
    Licensing provisions: None
    Operating systems: Any system that supports MAPLE; tested under Microsoft Windows XP, SuSe Linux 10
    Program language used: MAPLE 10
    Typical time and memory requirements: Most commands that act upon quantum registers with five or fewer qubits take ⩽10 seconds of processor time (on a Pentium 4 processor with ⩾2 GHz or equivalent) and 5-20 MB of memory. Especially when working with symbolic expressions, however, the memory and time requirements critically depend on the number of qubits in the quantum registers, owing to the exponential dimension growth of the associated Hilbert space. For example, complex (symbolic) noise models (with several Kraus operators) for multi-qubit systems often result in very large symbolic expressions that dramatically slow down the evaluation of measures or other quantities. In these cases, MAPLE's assume facility sometimes helps to reduce the complexity of symbolic expressions, but often only numerical evaluation is possible. Since the complexity of the FEYNMAN commands is very different, no general scaling law for the CPU time and memory usage can be given.
    No. of bytes in distributed program including test data, etc.: 799 265
    No. of lines in distributed program including test data, etc.: 18 589
    Distribution format: tar.gz
    Reasons for new version: While the previous program versions were designed mainly to create and manipulate the state of quantum registers, the present extension aims to support quantum operations as the essential ingredient for studying the effects of noisy environments.
    Does this version supersede the previous version: Yes
    Nature of the physical problem: Today, entanglement is identified as the essential resource in virtually all aspects of quantum information theory. In most practical implementations of quantum information protocols, however, decoherence typically limits the lifetime of entanglement. It is therefore necessary and highly desirable to understand the evolution of entanglement in noisy environments.
    Method of solution: Using the computer algebra system MAPLE, we have developed a set of procedures that support the definition and manipulation of n-qubit quantum registers as well as (unitary) logic gates and (nonunitary) quantum operations that act on the quantum registers. The provided hierarchy of commands can be used interactively in order to simulate and analyze the evolution of n-qubit quantum systems in ideal and nonideal quantum circuits.
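    The quantum operations this program extension manipulates take the Kraus form E(rho) = sum_k K_k rho K_k^dagger. As a self-contained Python toy (not the MAPLE-based FEYNMAN code), here is a single-qubit depolarizing channel, one of the standard decoherence models mentioned above:

```python
import math

# Toy single-qubit quantum operation in Kraus form: depolarizing channel
# with error probability p, applied as rho -> sum_k K_k rho K_k^dagger.

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dagger(a):
    return [[a[j][i].conjugate() for j in range(2)] for i in range(2)]

def apply_channel(kraus_ops, rho):
    out = [[0j, 0j], [0j, 0j]]
    for k in kraus_ops:
        term = mat_mul(mat_mul(k, rho), dagger(k))
        for i in range(2):
            for j in range(2):
                out[i][j] += term[i][j]
    return out

def depolarizing(p):
    # Kraus operators: scaled identity plus scaled Pauli X, Y, Z.
    s0 = math.sqrt(1 - 3 * p / 4)
    s = math.sqrt(p / 4)
    I = [[s0, 0], [0, s0]]
    X = [[0, s], [s, 0]]
    Y = [[0, -1j * s], [1j * s, 0]]
    Z = [[s, 0], [0, -s]]
    return [I, X, Y, Z]
```

    At p = 1 the channel maps any input state to the maximally mixed state I/2, and for any p the output trace equals the input trace, the defining property of a trace-preserving quantum operation.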

  4. A study of application of remote sensing to river forecasting. Volume 2: Detailed technical report, NASA-IBM streamflow forecast model user's guide

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The Model is described along with data preparation, determining model parameters, initializing and optimizing parameters (calibration) selecting control options and interpreting results. Some background information is included, and appendices contain a dictionary of variables, a source program listing, and flow charts. The model was operated on an IBM System/360 Model 44, using a model 2250 keyboard/graphics terminal for interactive operation. The model can be set up and operated in a batch processing mode on any System/360 or 370 that has the memory capacity. The model requires 210K bytes of core storage, and the optimization program, OPSET (which was used previous to but not in this study), requires 240K bytes. The data band for one small watershed requires approximately 32 tracks of disk storage.

  5. Episodic and Semantic Memories of a Residential Environmental Education Program

    ERIC Educational Resources Information Center

    Knapp, Doug; Benton, Gregory M.

    2006-01-01

    This study used a phenomenological approach to investigate the recollections of participants of an environmental education (EE) residential program. Ten students who participated in a residential EE program in the fall of 2001 were interviewed in the fall of 2002. Three major themes relating to the participants' long-term memory of the residential…

  6. 78 FR 37741 - Approval and Promulgation of Implementation Plans; California; South Coast; Contingency Measures...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-24

    .... Id. \\7\\ Consistent with EPA's definition of ``design value'' in 40 CFR 58.1, we use the term ``design... evaluations below. Carl Moyer Memorial Air Quality Standards Attainment Program The Contingency Measures SIP identifies a portion of the Carl Moyer Memorial Air Quality Standards Attainment Program (Carl Moyer Program...

  7. List Models of Procedure Learning

    NASA Technical Reports Server (NTRS)

    Matessa, Michael P.; Polson, Peter G.

    2005-01-01

    This paper presents a new theory of the initial stages of skill acquisition and then employs the theory to model current and future training programs for flight management systems (FMSs) in modern commercial airliners like the Boeing 777 and the Airbus A320. The theoretical foundations for the theory are a new synthesis of the literature on human memory and the latest version of the ACT-R theory of skill acquisition.

  8. Depressive Mood and Testosterone Related to Declarative Verbal Memory Decline in Middle-Aged Caregivers of Children with Eating Disorders.

    PubMed

    Romero-Martínez, Ángel; Ruiz-Robledillo, Nicolás; Moya-Albiol, Luis

    2016-03-04

    Caring for children diagnosed with a chronic psychological disorder such as an eating disorder (ED) can be used as a model of chronic stress. This kind of stress has been reported to have deleterious effects on caregivers' cognition, particularly in verbal declarative memory of women caregivers. Moreover, high depressive mood and variations in testosterone (T) levels moderate this cognitive decline. The purpose of this study was to characterize whether caregivers of individuals with EDs (n = 27) show declarative memory impairments compared to non-caregivers (n = 27), using for this purpose a standardized memory test (Rey's Auditory Verbal Learning Test). Its purpose was also to examine the role of depressive mood and T in memory decline. Results showed that ED caregivers presented high depressive mood, which was associated with worse verbal memory performance, especially in the case of women. In addition, all caregivers showed high T levels. Nonetheless, only in the case of women caregivers did T show a curvilinear relationship with verbal memory performance, meaning that increases in T were associated with improvement in verbal memory performance, but only up to a certain point; after that point T continued to increase while memory performance decreased. Thus, chronic stress due to caregiving was associated with disturbances in mood and T levels, which in turn were associated with verbal memory decline. These findings should be taken into account in the implementation of intervention programs for helping ED caregivers cope with caregiving situations and to prevent the risk of a pronounced verbal memory decline.

  9. Effects of Cogmed working memory training on cognitive performance.

    PubMed

    Etherton, Joseph L; Oberle, Crystal D; Rhoton, Jayson; Ney, Ashley

    2018-04-16

    Research on the cognitive benefits of working memory training programs has produced inconsistent results. Such research has frequently used laboratory-specific training tasks or dual-task n-back training. The current study used the commercial Cogmed Working Memory (WM) Training program, which comprises several different training tasks with visual and auditory input. Healthy college undergraduates were assigned to either the full Cogmed training program (25 sessions of 40 min each), an abbreviated Cogmed program (25 sessions of 20 min each), or a no-contact control group. Pretest and posttest measures included multiple measures of attention, working memory, fluid intelligence, and executive functions. Although improvement was observed for the full training group on a digit span task, no training-related improvement was observed for any of the other measures. Results of the study suggest that WM training does not improve performance on unrelated tasks or enhance other cognitive abilities.

  10. Command and Control Software Development Memory Management

    NASA Technical Reports Server (NTRS)

    Joseph, Austin Pope

    2017-01-01

    This internship was initially meant to cover the implementation of unit test automation for a NASA ground control project. As is often the case with large development projects, the scope and breadth of the internship changed. Instead, the internship focused on finding and correcting memory leaks and errors as reported by a COTS software product meant to track such issues. Memory leaks come in many different flavors and some of them are more benign than others. On the extreme end a program might be dynamically allocating memory and not correctly deallocating it when it is no longer in use. This is called a direct memory leak and in the worst case can use all the available memory and crash the program. If the leaks are small they may simply slow the program down which, in a safety critical system (a system for which a failure or design error can cause a risk to human life), is still unacceptable. The ground control system is managed in smaller sub-teams, referred to as CSCIs. The CSCI that this internship focused on is responsible for monitoring the health and status of the system. This team's software had several methods/modules that were leaking significant amounts of memory. Since most of the code in this system is safety-critical, correcting memory leaks is a necessity.

  11. Havens: Explicit Reliable Memory Regions for HPC Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hukerikar, Saurabh; Engelmann, Christian

    2016-01-01

    Supporting error resilience in future exascale-class supercomputing systems is a critical challenge. Due to transistor scaling trends and increasing memory density, scientific simulations are expected to experience more interruptions caused by transient errors in the system memory. Existing hardware-based detection and recovery techniques will be inadequate to manage the presence of high memory fault rates. In this paper we propose a partial memory protection scheme based on region-based memory management. We define the concept of regions called havens that provide fault protection for program objects. We provide reliability for the regions through a software-based parity protection mechanism. Our approach enables critical program objects to be placed in these havens. The fault coverage provided by our approach is application agnostic, unlike algorithm-based fault tolerance techniques.
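
    The software parity idea can be sketched in a few lines. The following toy model (class and method names are ours, not the paper's API) keeps one XOR parity byte per region, which is enough to detect any single corrupted byte on a later check:

```python
from functools import reduce

class Haven:
    """Toy reliable region: XOR parity maintained over the region's bytes."""
    def __init__(self, data: bytes):
        self.data = bytearray(data)
        self.parity = reduce(lambda a, b: a ^ b, self.data, 0)

    def check(self) -> bool:
        # Recompute parity; a mismatch signals a corrupted byte in the region.
        return reduce(lambda a, b: a ^ b, self.data, 0) == self.parity

h = Haven(b"critical simulation state")
assert h.check()           # intact region passes the check
h.data[3] ^= 0x40          # simulate a transient bit flip in memory
assert not h.check()       # parity detects the fault
```

    A real scheme would also need recovery (e.g., per-word parity plus a duplicate, or erasure coding), but the detection step above is the essence of software parity over a region.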

  12. Digital Equipment Corporation VAX/VMS Version 4.3

    DTIC Science & Technology

    1986-07-30

    operating system performs process-oriented paging that allows execution of programs that may be larger than the physical memory allocated to them... to higher privileged modes. (For an explanation of how the four access modes provide memory access protection see page 9, "Memory Management".) A... to optimize program performance for real-time applications or interactive environments. July 30, 1986 - 4 - Final Evaluation Report Digital VAX/VMS

  13. Reconfigurable photonic crystals enabled by pressure-responsive shape-memory polymers

    PubMed Central

    Fang, Yin; Ni, Yongliang; Leo, Sin-Yen; Taylor, Curtis; Basile, Vito; Jiang, Peng

    2015-01-01

    Smart shape-memory polymers can memorize and recover their permanent shape in response to an external stimulus (for example, heat). They have been extensively exploited for a wide spectrum of applications ranging from biomedical devices to aerospace morphing structures. However, most existing shape-memory polymers are thermoresponsive, and their performance is hindered by heat-demanding programming and recovery steps. Although pressure, like temperature, is an easily adjustable process variable, pressure-responsive shape-memory polymers are largely unexplored. Here we report a series of shape-memory polymers that enable unusual ‘cold' programming and instantaneous shape recovery triggered by applying a contact pressure at ambient conditions. Moreover, the interdisciplinary integration of scientific principles drawn from two disparate fields, the fast-growing photonic crystal and shape-memory polymer technologies, enables fabrication of reconfigurable photonic crystals and simultaneously provides a simple and sensitive optical technique for investigating the intriguing shape-memory effects at the nanoscale. PMID:26074349

  14. Poly(Capro-Lactone) Networks as Actively Moving Polymers

    NASA Astrophysics Data System (ADS)

    Meng, Yuan

    Shape-memory polymers (SMPs), as a subset of actively moving polymers, form an exciting class of materials that can store and recover elastic deformation energy upon application of an external stimulus. Although engineering of SMPs has led to robust materials that can memorize multiple temporary shapes and can be triggered by various stimuli such as heat, light, moisture, or applied magnetic fields, further commercialization of SMPs is still constrained by the materials' inability to store large elastic energy and by their inherent one-way shape-change nature. This thesis develops a series of model semi-crystalline shape-memory networks that exhibit ultra-high energy storage capacity with accurately tunable triggering temperature; by introducing a second competing network, or by reconfiguring the existing network in a strained state, configurational chain bias can be effectively locked in, giving rise to two-way shape actuators that, in the absence of an external load, elongate upon cooling and reversibly contract upon heating. We found that a well-defined network architecture plays an essential role in strain-induced crystallization and in the performance of cold-drawn shape-memory polymers. Model networks with uniform molecular weight between crosslinks and specified functionality of each net-point result in tougher, more elastic materials with a high degree of crystallinity and outstanding shape-memory properties. The thermal behavior of the model networks can be finely tuned by introducing non-crystalline small-molecule linkers that effectively frustrate the crystallization of the network strands. This resulted in shape-memory networks that are ultra-sensitive to heat, as deformed materials can be efficiently triggered to revert to their permanent state upon mere exposure to body temperature. 
We also coupled the same reaction adopted to create the model network with conventional free-radical polymerization to prepare a dual-cure "double network" that behaves as a true thermal "actuator". This approach places sub-chains under different degrees of configurational bias within the network to exploit the material's propensity to undergo stress-induced crystallization. Reconfiguration of model shape-memory networks containing photo-sensitive linkages can also be employed to program two-way actuators. Chain reshuffling of a partially reconfigurable network is initiated upon exposure to light under specific strains. Interesting photo-induced creep and stress-relaxation behaviors were demonstrated and understood based on a novel transient network model we derived. In summary, delicate manipulation of shape-memory network architectures addressed critical issues constraining the application of this type of functional polymer material. Strategies developed in this thesis may provide new opportunities for the field of shape-memory polymers.

  15. Memory versus perception of body size in patients with anorexia nervosa and healthy controls.

    PubMed

    Øverås, Maria; Kapstad, Hilde; Brunborg, Cathrine; Landrø, Nils Inge; Lask, Bryan

    2014-03-01

    The objective of this study was to compare body size estimation based on memory versus perception, in patients with anorexia nervosa (AN) and healthy controls, adjusting for possible confounders. Seventy-one women (AN: 37, controls: 35), aged 14-29 years, were assessed with a computerized body size estimation morphing program. Information was gathered on depression, anxiety, time since last meal, weight and height. Results showed that patients overestimated their body size significantly more than controls, both in the memory and perception condition. Further, patients overestimated their body size significantly more when estimation was based on perception than memory. When controlling for anxiety, the difference between patients and controls no longer reached significance. None of the other confounders contributed significantly to the model. The results suggest that anxiety plays a role in overestimation of body size in AN. This finding might inform treatment, suggesting that more focus should be aimed at the underlying anxiety. Copyright © 2014 John Wiley & Sons, Ltd and Eating Disorders Association.

  16. Attentional Imbalances Following Head Injury

    DTIC Science & Technology

    1988-05-30

    on recent memory, the patient's basic fund of knowledge (semantic memory) and remote memory for events (episodic memory) were both impaired. For e...are notable for problems caused by poor memory, inflexibility, and concreteness. These problems are most severe when they interact with linguistic...demands. It is worth noting that memory problems were accompanied by confabulation when the patient first entered the program. In addition to the effects

  17. A memory module for experimental data handling

    NASA Astrophysics Data System (ADS)

    De Blois, J.

    1985-02-01

    A compact CAMAC memory module for experimental data handling was developed to eliminate the need for direct memory access in computer-controlled measurements. When using autonomous controllers it also makes measurements more independent of the program and enlarges the available space for programs in the memory of the micro-computer. The memory module has three modes of operation: an increment, a list, and a fifo mode. This is achieved by connecting the main parts, being: the memory (MEM), the fifo buffer (FIFO), the address buffer (BUF), two counters (AUX and ADDR) and a readout register (ROR), by an internal 24-bit databus. The time needed for databus operations is 1 μs, for measuring cycles as well as for CAMAC cycles. The FIFO provides temporary data storage during CAMAC cycles and separates the memory part from the application part. The memory is variable from 1 to 64K (24 bits) by using different types of memory chips. The application part, which forms 1/3 of the module, will be specially designed for each application and is added to the memory part via an internal connector. The memory unit will be used in Mössbauer experiments and in thermal neutron scattering experiments.
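
    The increment mode described above is the classic multichannel-counting pattern: each detected event addresses a memory word, which is incremented in place. A hedged sketch (the class, sizes, and method names are illustrative, not the CAMAC module's actual interface):

```python
class IncrementModeMemory:
    """Toy model of the module's increment mode: every event at an address
    bumps a counter, as in multichannel (e.g. Mossbauer) data collection."""
    def __init__(self, size=1024):
        # 'size' stands in for the module's 1-64K memory of 24-bit words
        self.mem = [0] * size

    def event(self, address):
        # 24-bit words wrap around on overflow
        self.mem[address] = (self.mem[address] + 1) % (1 << 24)

m = IncrementModeMemory()
for channel in [5, 5, 5, 9]:   # four events: three in channel 5, one in channel 9
    m.event(channel)
assert m.mem[5] == 3 and m.mem[9] == 1
```

    Doing this increment inside the module itself is what removes the need for a DMA transfer on every event.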

  18. Developing Memory Clinics in Primary Care: An Evidence-Based Interprofessional Program of Continuing Professional Development

    ERIC Educational Resources Information Center

    Lee, Linda; Weston, W. Wayne; Hillier, Loretta M.

    2013-01-01

    Introduction: Primary care is challenged to meet the needs of patients with dementia. A training program was developed to increase capacity for dementia care through the development of Family Health Team (FHT)-based interprofessional memory clinics. The interprofessional training program consisted of a 2-day workshop, 1-day observership, and 2-day…

  19. Atmospheric Photochemical Modeling of Turbine Engine Fuels and Exhausts. Phase 2. Computer Model Development. Volume 2

    DTIC Science & Technology

    1988-05-01

    represented name Emitted Organics Included in All Models CO Carbon Monoxide C:C, Ethene HCHO Formaldehyde CCHO Acetaldehyde RCHO Propionaldehyde and other...of species in the mixture, and for proper use of this program, these files should be "normalized," i.e., the number of carbons in the mixture should...scenario in memory. Valid parmtypes are SCEN, PHYS, CHEM, VP, NSP, OUTP, SCHEDS. LIST ALLCOMP Lists all available composition filenames. LIST ALLSCE

  20. SKIRT: Hybrid parallelization of radiative transfer simulations

    NASA Astrophysics Data System (ADS)

    Verstocken, S.; Van De Putte, D.; Camps, P.; Baes, M.

    2017-07-01

    We describe the design, implementation and performance of the new hybrid parallelization scheme in our Monte Carlo radiative transfer code SKIRT, which has been used extensively for modelling the continuum radiation of dusty astrophysical systems including late-type galaxies and dusty tori. The hybrid scheme combines distributed memory parallelization, using the standard Message Passing Interface (MPI) to communicate between processes, and shared memory parallelization, providing multiple execution threads within each process to avoid duplication of data structures. The synchronization between multiple threads is accomplished through atomic operations without high-level locking (also called lock-free programming). This improves the scaling behaviour of the code and substantially simplifies the implementation of the hybrid scheme. The result is an extremely flexible solution that adjusts to the number of available nodes, processors and memory, and consequently performs well on a wide variety of computing architectures.
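
    The motivation for shared-memory threads, avoiding per-process duplication of large data structures, can be illustrated with a small sketch. This is a hedged Python analogue (similar in spirit, not in mechanism, to SKIRT's lock-free C++ scheme): all threads read one shared array without copying it, and each writes only its own result slot, so no lock is required:

```python
import threading

data = list(range(1_000_000))   # large structure shared by all threads, never copied
nthreads = 4
partial = [0] * nthreads        # disjoint output slots: no locking needed

def work(tid):
    # Each thread reads its own stride of the shared data
    partial[tid] = sum(data[tid::nthreads])

threads = [threading.Thread(target=work, args=(t,)) for t in range(nthreads)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert sum(partial) == sum(data)
```

    With distributed-memory MPI alone, each of the four workers would hold its own copy of `data`; the threaded layer is what keeps a single copy per process.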

  1. Application of a microcomputer-based system to control and monitor bacterial growth.

    PubMed

    Titus, J A; Luli, G W; Dekleva, M L; Strohl, W R

    1984-02-01

    A modular microcomputer-based system was developed to control and monitor various modes of bacterial growth. The control system was composed of an Apple II Plus microcomputer with 64-kilobyte random-access memory; a Cyborg ISAAC model 91A multichannel analog-to-digital and digital-to-analog converter; paired MRR-1 pH, pO(2), and foam control units; and in-house-designed relay, servo control, and turbidimetry systems. To demonstrate the flexibility of the system, we grew bacteria under various computer-controlled and monitored modes of growth, including batch, turbidostat, and chemostat systems. The Apple-ISAAC system was programmed in Labsoft BASIC (extended Applesoft) with an average control program using ca. 6 to 8 kilobytes of memory and up to 30 kilobytes for datum arrays. This modular microcomputer-based control system was easily coupled to laboratory scale fermentors for a variety of fermentations.
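
    The turbidostat mode mentioned above is a simple feedback loop: read culture turbidity through the ADC, and when it exceeds a setpoint, actuate the dilution pump through the DAC/relay. A toy simulation of that control cycle (parameter values are illustrative, not from the paper):

```python
def turbidostat_step(od, setpoint=0.5, growth=0.05, dilution=0.2):
    """One toy control cycle: the culture grows exponentially; when the
    measured optical density exceeds the setpoint, the controller 'opens'
    the dilution pump, washing a fraction of the cells out."""
    od *= 1 + growth          # growth during the cycle
    if od > setpoint:         # comparison the control program performs on the ADC reading
        od *= 1 - dilution    # pump actuation via the DAC/relay system
    return od

od, history = 0.1, []
for _ in range(200):
    od = turbidostat_step(od)
    history.append(od)

# After the transient, the density oscillates in a band around the setpoint.
assert all(0.35 < x < 0.55 for x in history[50:])
```

    A chemostat mode would instead dilute at a fixed rate every cycle regardless of the reading; the batch mode applies no dilution at all.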

  2. Application of a Microcomputer-Based System to Control and Monitor Bacterial Growth

    PubMed Central

    Titus, Jeffrey A.; Luli, Gregory W.; Dekleva, Michael L.; Strohl, William R.

    1984-01-01

    A modular microcomputer-based system was developed to control and monitor various modes of bacterial growth. The control system was composed of an Apple II Plus microcomputer with 64-kilobyte random-access memory; a Cyborg ISAAC model 91A multichannel analog-to-digital and digital-to-analog converter; paired MRR-1 pH, pO2, and foam control units; and in-house-designed relay, servo control, and turbidimetry systems. To demonstrate the flexibility of the system, we grew bacteria under various computer-controlled and monitored modes of growth, including batch, turbidostat, and chemostat systems. The Apple-ISAAC system was programmed in Labsoft BASIC (extended Applesoft) with an average control program using ca. 6 to 8 kilobytes of memory and up to 30 kilobytes for datum arrays. This modular microcomputer-based control system was easily coupled to laboratory scale fermentors for a variety of fermentations. PMID:16346462

  3. Final Report: Correctness Tools for Petascale Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mellor-Crummey, John

    2014-10-27

    In the course of developing parallel programs for leadership computing systems, subtle programming errors often arise that are extremely difficult to diagnose without tools. To meet this challenge, the University of Maryland, the University of Wisconsin—Madison, and Rice University worked to develop lightweight tools to help code developers pinpoint a variety of program correctness errors that plague parallel scientific codes. The aim of this project was to develop software tools that help diagnose program errors including memory leaks, memory access errors, round-off errors, and data races. Research at Rice University focused on developing algorithms and data structures to support efficient monitoring of multithreaded programs for memory access errors and data races. This is a final report about research and development work at Rice University as part of this project.

  4. Cognitive Training Program to Improve Working Memory in Older Adults with MCI.

    PubMed

    Hyer, Lee; Scott, Ciera; Atkinson, Mary Michael; Mullen, Christine M; Lee, Anna; Johnson, Aaron; Mckenzie, Laura C

    2016-01-01

    Deficits in working memory (WM) are associated with age-related decline. We report findings from a clinical trial that examined the effectiveness of Cogmed, a computerized program that trains WM. We compare this program to a Sham condition in older adults with Mild Cognitive Impairment (MCI). Older adults (N = 68) living in the community were assessed. Participants reported memory impairment and met criteria for MCI, either by poor delayed memory or poor performance in other cognitive areas. The Repeatable Battery for the Assessment of Neuropsychological Status (RBANS, Delayed Memory Index) and the Clinical Dementia Rating scale (CDR) were utilized. All presented with normal Mini Mental State Exams (MMSE) and activities of daily living (ADLs). Participants were randomized to Cogmed or a Sham computer program. Twenty-five sessions were completed over five to seven weeks. Pre, post, and follow-up measures included a battery of cognitive measures (three WM tests), a subjective memory scale, and a functional measure. Both intervention groups improved over time. Cogmed significantly outperformed Sham on the Span Board task and showed greater gains in subjective memory reports at follow-up, as assessed by the Cognitive Failures Questionnaire (CFQ). The Cogmed group demonstrated better performance on the Functional Activities Questionnaire (FAQ), a measure of adjustment and far transfer, at follow-up. Both groups, especially Cogmed, enjoyed the intervention. Results suggest that WM was enhanced in both groups of older adults with MCI. Cogmed was better on one core WM measure and had higher ratings of satisfaction. The Sham condition declined on adjustment.

  5. Helicopter In-Flight Monitoring System Second Generation (HIMS II).

    DTIC Science & Technology

    1983-08-01

    acquisition cycle. B. Computer Chassis CPU (DEC LSI-II/2) -- Executes instructions contained in the memory. 32K memory (DEC MSVII-DD) --Contains program...when the operator executes command #2, 3, or 5 (display data). New cartridges can be inserted as required for truly unlimited, continuous data...is called bootstrapping. The software, which is stored on a tape cartridge, is loaded into memory by execution of a small program stored in read-only

  6. From pipelines to pathways: the Memorial experience in educating doctors for rural generalist practice.

    PubMed

    Rourke, James; Asghari, Shabnam; Hurley, Oliver; Ravalia, Mohamed; Jong, Michael; Parsons, Wanda; Duggan, Norah; Stringer, Katherine; O'Keefe, Danielle; Moffatt, Scott; Graham, Wendy; Sturge Sparkes, Carolyn; Hippe, Janelle; Harris Walsh, Kristin; McKay, Donald; Samarasena, Asoka

    2018-03-01

    This report describes the community context, concept, and mission of the Faculty of Medicine at Memorial University of Newfoundland (Memorial), Canada, and its 'pathways to rural practice' approach, which includes influences at the pre-medical school, medical school, postgraduate residency training, and physician practice levels. Memorial's pathways to practice have helped it fulfill its social accountability mandate to populate the province with highly skilled rural generalist practitioners. Programs/interventions/initiatives: The 'pathways to rural practice' include initiatives in four stages: (1) before admission to medical school; (2) during undergraduate medical training (medical degree (MD) program); (3) during postgraduate vocational residency training; and (4) after postgraduate vocational residency training. Memorial's Learners & Locations (L&L) database tracks students through these stages. The Aboriginal initiative, the MedQuest program, and an admissions process that considers geographic and minority representation (among both those selecting candidates and the candidates themselves) all occur before the student is admitted. Once a student starts Memorial's MD program, the student has ample opportunities for rural-based experiences through pre-clerkship and clerkship, some of which take place entirely outside St. John's tertiary hospitals. Memorial's postgraduate (PG) Family Medicine (FM) residency (vocational) training program allows for deeper community integration and longer periods of training within the same community, which increases the likelihood of a physician choosing rural family medicine. After postgraduate training, rural physicians were given many opportunities for professional development as well as faculty development. 
Each of the programs and initiatives were assessed through geospatial rurality analysis of administrative data collected upon entry into and during the MD program and PG training (L&L). Among Memorial MD-graduating classes of 2011-2020, 56% spent the majority of their lives before their 18th birthday in a rural location and 44% in an urban location. As of September 2016, 23 Memorial MD students self-identified as Aboriginal, of which 2 (9%) were from an urban location and 20 (91%) were from rural locations. For Year 3 Family Medicine, graduating classes 2011 to 2019, 89% of placement weeks took place in rural communities and 8% took place in rural towns. For Memorial MD graduating classes 2011-2013 who completed Memorial Family Medicine vocational training residencies, (N=49), 100% completed some rural training. For these 49 residents (vocational trainees), the average amount of time spent in rural areas was 52 weeks out of a total average FM training time of 95 weeks. For Family Medicine residencies from July 2011 to October 2016, 29% of all placement weeks took place in rural communities and 21% of all placement weeks took place in rural towns. For 2016-2017 first-year residents, 53% of the first year training is completed in rural locations, reflecting an even greater rural experiential learning focus. Memorial's pathways approach has allowed for the comprehensive training of rural generalists for Newfoundland and Labrador and the rest of Canada and may be applicable to other settings. More challenges remain, requiring ongoing collaboration with governments, medical associations, health authorities, communities, and their physicians to help achieve reliable and feasible healthcare delivery for those living in rural and remote areas.

  7. Automatic specification of reliability models for fault-tolerant computers

    NASA Technical Reports Server (NTRS)

    Liceaga, Carlos A.; Siewiorek, Daniel P.

    1993-01-01

    The calculation of reliability measures using Markov models is required for life-critical processor-memory-switch structures that have standby redundancy or that are subject to transient or intermittent faults or repair. The task of specifying these models is tedious and prone to human error because of the large number of states and transitions required in any reasonable system. Therefore, model specification is a major analysis bottleneck, and model verification is a major validation problem. The general unfamiliarity of computer architects with Markov modeling techniques further increases the necessity of automating the model specification. Automation requires a general system description language (SDL). For practicality, this SDL should also provide a high level of abstraction and be easy to learn and use. The first attempt to define and implement an SDL with those characteristics is presented. A program named Automated Reliability Modeling (ARM) was constructed as a research vehicle. The ARM program uses a graphical interface as its SDL, and it outputs a Markov reliability model specification formulated for direct use by programs that generate and evaluate the model.
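
    The kind of Markov reliability model that ARM specifies can be sketched for the simplest standby-redundant pair: three states (both units good, one good, system failed), one failure rate per active unit, stepped numerically over the mission time. This is a toy sketch of the underlying mathematics, not ARM's SDL or output format, and the rate and time values are illustrative:

```python
def reliability(lam=1e-4, hours=10_000, dt=1.0):
    """Probability the system still works after `hours`, for an active unit
    with failure rate `lam` per hour and one cold standby (3-state Markov chain,
    forward-Euler time stepping)."""
    p2, p1, pf = 1.0, 0.0, 0.0      # P(both good), P(one good), P(failed)
    for _ in range(int(hours / dt)):
        d21 = lam * p2 * dt         # primary fails; standby takes over
        d1f = lam * p1 * dt         # last unit fails -> system failure
        p2, p1, pf = p2 - d21, p1 + d21 - d1f, pf + d1f
    return p2 + p1                  # reliability = 1 - P(failed)

# Compare against a single unit with no standby: R_single = e^(-lambda * t).
r_single = 2.718281828459045 ** (-1e-4 * 10_000)
assert reliability() > r_single     # standby redundancy improves reliability
```

    For this cold-standby case the closed form is R(t) = e^(-λt)(1 + λt), and the numeric result above approaches it as dt shrinks; ARM's value is that it generates such state spaces automatically when repair, transient faults, and many more states make hand specification error-prone.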

  8. Temperature and electrical memory of polymer fibers

    NASA Astrophysics Data System (ADS)

    Yuan, Jinkai; Zakri, Cécile; Grillard, Fabienne; Neri, Wilfrid; Poulin, Philippe

    2014-05-01

    We report in this work studies of the shape memory behavior of polymer fibers loaded with carbon nanotubes or graphene flakes. These materials exhibit enhanced shape memory properties with the generation of a giant stress upon shape recovery. In addition, they exhibit a surprising temperature memory, with a peak of generated stress at a temperature nearly equal to the temperature of programming. This temperature memory is ascribed to the presence of dynamical heterogeneities and to the intrinsic broadness of the glass transition. We present recent experiments related to observables other than mechanical properties. In particular, nanocomposite fibers exhibit variations of electrical conductivity with an accurate memory: the rate of conductivity variation during temperature changes reaches a well-defined maximum at a temperature equal to the temperature of programming. Such materials are promising for future actuators that couple dimensional changes with sensing electronic functionalities.

  9. Behavioral model of visual perception and recognition

    NASA Astrophysics Data System (ADS)

    Rybak, Ilya A.; Golovan, Alexander V.; Gusakova, Valentina I.

    1993-09-01

    In the processes of visual perception and recognition, human eyes actively select essential information by way of successive fixations at the most informative points of the image. A behavioral program defining a scanpath of the image is formed at the stage of learning (object memorizing) and consists of sequential motor actions, which are shifts of attention from one point of fixation to another, and sensory signals expected to arrive in response to each shift of attention. In the modern view of the problem, invariant object recognition is provided by the following: (1) separate processing of `what' (object features) and `where' (spatial features) information at high levels of the visual system; (2) mechanisms of visual attention using `where' information; (3) representation of `what' information in an object-based frame of reference (OFR). However, most recent models of vision based on an OFR have demonstrated invariant recognition of only simple objects like letters or binary objects without background, i.e. objects to which a frame of reference is easily attached. In contrast, we use not an OFR but a feature-based frame of reference (FFR), connected with the basic feature (edge) at the fixation point. This has given our model the ability to represent complex objects in gray-level images invariantly, but it demands realization of the behavioral aspects of vision described above. The developed model contains a neural network subsystem of low-level vision which extracts a set of primary features (edges) in each fixation, and a high-level subsystem consisting of `what' (Sensory Memory) and `where' (Motor Memory) modules. The resolution of primary feature extraction decreases with distance from the point of fixation. The FFR provides both the invariant representation of object features in Sensory Memory and the shifts of attention in Motor Memory. 
Object recognition consists of successive recall (from Motor Memory) and execution of shifts of attention, together with successive verification of the expected sets of features (stored in Sensory Memory). The model shows the ability to recognize complex objects (such as faces) in gray-level images invariantly with respect to shift, rotation, and scale.
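
    The learn-then-verify loop can be summarized in a toy sketch: the stored "behavioral recognition program" is a list of (attention shift, expected feature) pairs, and recognition replays the shifts while checking that each expected feature reappears. All names and the feature representation here are ours, purely illustrative:

```python
def learn(image, scanpath, start=(0, 0)):
    """Store, for each fixation, the attention shift (Motor Memory)
    and the feature found there (Sensory Memory)."""
    program, pos = [], start
    for shift in scanpath:
        pos = (pos[0] + shift[0], pos[1] + shift[1])
        program.append((shift, image[pos]))      # (where, what) pair
    return program

def recognize(image, program, start=(0, 0)):
    """Replay the stored shifts; verify each expected feature at each fixation."""
    pos = start
    for shift, expected in program:
        pos = (pos[0] + shift[0], pos[1] + shift[1])
        if image.get(pos) != expected:
            return False
    return True

# A tiny 'image' mapping positions to detected primary features:
face = {(1, 0): "edge", (1, 2): "edge", (2, 1): "corner"}
prog = learn(face, [(1, 0), (0, 2), (1, -1)])
assert recognize(face, prog)                      # same image: recognized
shifted = {(x + 3, y): f for (x, y), f in face.items()}
assert recognize(shifted, prog, start=(3, 0))     # shift-invariant given the first fixation
```

    Because only relative shifts and expected features are stored, the program is inherently shift-invariant; the paper's feature-based frame of reference extends the same idea to rotation and scale.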

  10. A dual-trace model for visual sensory memory.

    PubMed

    Cappiello, Marcus; Zhang, Weiwei

    2016-11-01

    Visual sensory memory refers to a transient memory that lingers briefly after stimulus offset. Although previous literature suggests that visual sensory memory is supported by a fine-grained trace for continuous representation and a coarse-grained trace of categorical information, simultaneously separating and assessing these traces is difficult without a quantitative model. The present study used a continuous estimation procedure to test a novel mathematical model of the dual-trace hypothesis of visual sensory memory, according to which visual sensory memory can be modeled as a mixture of two von Mises (2VM) distributions differing in standard deviation. When visual sensory memory and working memory (WM) for colors were distinguished using different experimental manipulations in the first three experiments, the 2VM model outperformed Zhang and Luck's (2008) standard mixture model (SM), which represents a mixture of a single memory trace and random guesses, even though SM outperformed 2VM for WM. Experiment 4 generalized 2VM's advantage in fitting visual sensory memory data over SM from color to orientation. Furthermore, a single-trace model and four other alternative models were ruled out, suggesting the necessity and sufficiency of dual traces for visual sensory memory. Together these results support the dual-trace model of visual sensory memory and provide a preliminary inquiry into the nature of information loss from visual sensory memory to WM. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
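
    The 2VM density itself is straightforward to write down: a weighted sum of two von Mises distributions sharing a mean but differing in concentration (high kappa for the fine-grained trace, low kappa for the coarse trace). A self-contained sketch, with illustrative parameter values that are not the paper's fitted estimates:

```python
import math

def i0(x, terms=30):
    # Modified Bessel function of the first kind, order 0 (series expansion)
    return sum((x / 2) ** (2 * k) / math.factorial(k) ** 2 for k in range(terms))

def vonmises_pdf(theta, mu, kappa):
    return math.exp(kappa * math.cos(theta - mu)) / (2 * math.pi * i0(kappa))

def dual_trace_pdf(theta, mu, w, k_fine, k_coarse):
    """2VM: mixture of a precise (high-kappa) trace and a coarse,
    categorical (low-kappa) trace centered on the same studied value."""
    return (w * vonmises_pdf(theta, mu, k_fine)
            + (1 - w) * vonmises_pdf(theta, mu, k_coarse))

# The mixture has heavier tails than the fine trace alone, which is what
# lets it absorb coarse, categorical responses far from the studied value:
far = math.pi / 2
assert dual_trace_pdf(far, 0.0, 0.7, 20.0, 2.0) > vonmises_pdf(far, 0.0, 20.0)
```

    Fitting the model to continuous-report errors then amounts to maximizing the likelihood of this density over w and the two kappas, and comparing the fit against the standard mixture model.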

  11. Attention, Working Memory, and Long-Term Memory in Multimedia Learning: An Integrated Perspective Based on Process Models of Working Memory

    ERIC Educational Resources Information Center

    Schweppe, Judith; Rummer, Ralf

    2014-01-01

    Cognitive models of multimedia learning such as the Cognitive Theory of Multimedia Learning (Mayer 2009) or the Cognitive Load Theory (Sweller 1999) are based on different cognitive models of working memory (e.g., Baddeley 1986) and long-term memory. The current paper describes a working memory model that has recently gained popularity in basic…

  12. Mass Memory Storage Devices for AN/SLQ-32(V).

    DTIC Science & Technology

    1985-06-01

    tactical programs and libraries into the AN/UYK-19 computer, the RP-16 microprocessor, and other peripheral processors (e.g., ADLS and Band 1) will be...software must be loaded into computer memory from the 4-track magnetic tape cartridges (MTCs) on which the programs are stored. Program load begins...software. Future computer programs, which will reside in peripheral processors, include the Automated Decoy Launching System (ADLS) and Band 1. As

  13. Efficacy of the Ubiquitous Spaced Retrieval-based Memory Advancement and Rehabilitation Training (USMART) program among patients with mild cognitive impairment: a randomized controlled crossover trial.

    PubMed

    Han, Ji Won; Son, Kyung Lak; Byun, Hye Jin; Ko, Ji Won; Kim, Kayoung; Hong, Jong Woo; Kim, Tae Hyun; Kim, Ki Woong

    2017-06-06

    Spaced retrieval training (SRT) is a nonpharmacological intervention for mild cognitive impairment (MCI) and dementia that trains the learning and retention of target information by recalling it over increasingly long intervals. We recently developed the Ubiquitous Spaced Retrieval-based Memory Advancement and Rehabilitation Training (USMART) program as a convenient, self-administered tablet-based SRT program. We also demonstrated the utility of USMART for improving memory in individuals with MCI through an open-label uncontrolled trial. This study had an open-label, single-blind, randomized, controlled, two-period crossover design. Fifty patients with MCI were randomized into USMART-usual care and usual care-USMART treatment sequences. USMART was completed or usual care was provided biweekly over a 4-week treatment period with a 2-week washout period between treatment periods. Primary outcome measures included the Word List Memory Test, Word List Recall Test (WLRT), and Word List Recognition Test. Outcomes were measured at baseline, week 5, and week 11 by raters who were blinded to intervention type. An intention-to-treat analysis and linear mixed modeling were used. Of 50 randomized participants, 41 completed the study (18% dropout rate). The USMART group had larger improvements in WLRT score (effect size = 0.49, p = 0.031) than the usual care group. There were no significant differences in other primary or secondary measures between the USMART and usual care groups. Moreover, no USMART-related adverse events were reported. The 4-week USMART modestly improved information retrieval in older people with MCI, and was well accepted with minimal technical support. ClinicalTrials.gov NCT01688128. Registered 12 September 2012.
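    The spaced-retrieval schedule described above, recalling the target over increasingly long intervals, can be sketched with a simple rule: lengthen the interval after a successful recall and fall back after a failure. The constants and function name below are illustrative assumptions, not USMART's actual parameters.

```python
def next_interval(current, recalled, start=15, factor=2, cap=960):
    """Spaced-retrieval scheduling rule (toy version).

    current  -- last interval in seconds (0 means first trial)
    recalled -- whether the target was recalled on this trial
    Doubles the interval on success; falls back to the starting
    interval on failure.  All constants are illustrative.
    """
    if not recalled or current == 0:
        return start
    return min(current * factor, cap)

# A hypothetical session: success, success, failure, success.
intervals, cur = [], 0
for ok in (True, True, False, True):
    cur = next_interval(cur, ok)
    intervals.append(cur)
print(intervals)  # [15, 30, 15, 30]
```

    Note that the first trial also uses the starting interval, so a failure simply restarts the expansion from the bottom.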

  14. Memory training interventions for older adults: a meta-analysis.

    PubMed

    Gross, Alden L; Parisi, Jeanine M; Spira, Adam P; Kueider, Alexandra M; Ko, Jean Y; Saczynski, Jane S; Samus, Quincy M; Rebok, George W

    2012-01-01

    A systematic review and meta-analysis of memory training research was conducted to characterize the effect of memory strategies on memory performance among cognitively intact, community-dwelling older adults, and to identify characteristics of individuals and of programs associated with improved memory. The review identified 402 publications, of which 35 studies met criteria for inclusion. The overall effect size estimate, representing the mean standardized difference in pre-post change between memory-trained and control groups, was 0.31 standard deviations (SD; 95% confidence interval (CI): 0.22, 0.39). The pre-post training effect for memory-trained interventions was 0.43 SD (95% CI: 0.29, 0.57) and the practice effect for control groups was 0.06 SD (95% CI: 0.05, 0.16). Among 10 distinct memory strategies identified in studies, meta-analytic methods revealed that training multiple strategies was associated with larger training gains (p=0.04), although this association did not reach statistical significance after adjusting for multiple comparisons. Treatment gains among memory-trained individuals were not better after training in any particular strategy, or by the average age of participants, session length, or type of control condition. These findings can inform the design of future memory training programs for older adults.
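    The pooling behind such an effect-size estimate can be illustrated with fixed-effect inverse-variance weighting of standardized mean differences. The per-study effects and variances below are made-up numbers for illustration, not the 35 studies in the meta-analysis.

```python
import numpy as np

# Hypothetical per-study standardized mean differences (SMDs) and
# their variances -- made-up values, not the reviewed studies.
effects = np.array([0.25, 0.40, 0.31, 0.18, 0.45])
variances = np.array([0.010, 0.020, 0.015, 0.012, 0.025])

# Fixed-effect inverse-variance pooling: each study is weighted by
# the reciprocal of its variance, so precise studies count more.
weights = 1.0 / variances
pooled = np.sum(weights * effects) / np.sum(weights)
se = np.sqrt(1.0 / np.sum(weights))
ci = (pooled - 1.96 * se, pooled + 1.96 * se)
print(f"pooled SMD = {pooled:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```

    A random-effects model, which meta-analyses of heterogeneous training programs often prefer, would additionally estimate between-study variance and widen the interval.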

  15. Development of memory CD8+ T cells and their recall responses during blood-stage infection with Plasmodium berghei ANKA.

    PubMed

    Miyakoda, Mana; Kimura, Daisuke; Honma, Kiri; Kimura, Kazumi; Yuda, Masao; Yui, Katsuyuki

    2012-11-01

    Conditions required for establishing protective immune memory vary depending on the infecting microbe. Although the memory immune response against malaria infection is generally thought to be relatively slow to develop and can be lost rapidly, experimental evidence is insufficient. In this report, we investigated the generation, maintenance, and recall responses of Ag-specific memory CD8(+) T cells using Plasmodium berghei ANKA expressing OVA (PbA-OVA) as a model system. Mice were transferred with OVA-specific CD8(+) T (OT-I) cells and infected with PbA-OVA or control Listeria monocytogenes expressing OVA (LM-OVA). Central memory type OT-I cells were maintained for >2 mo postinfection and recovery from PbA-OVA. Memory OT-I cells produced IFN-γ as well as TNF-α upon activation and were protective against challenge with a tumor expressing OVA, indicating that functional memory CD8(+) T cells can be generated and maintained postinfection with P. berghei ANKA. Cotransfer of memory OT-I cells with naive OT-I cells to mice followed by infection with PbA-OVA or LM-OVA revealed that clonal expansion of memory OT-I cells was limited during PbA-OVA infection compared with expansion of naive OT-I cells, whereas it was more rapid during LM-OVA infection. The expression of inhibitory receptors programmed cell death-1 and LAG-3 was higher in memory-derived OT-I cells than naive-derived OT-I cells during infection with PbA-OVA. These results suggest that memory CD8(+) T cells can be established postinfection with P. berghei ANKA, but their recall responses during reinfection are more profoundly inhibited than responses of naive CD8(+) T cells.

  16. Performance of the Heavy Flavor Tracker (HFT) detector in star experiment at RHIC

    NASA Astrophysics Data System (ADS)

    Alruwaili, Manal

    With the growing technology, the number of processors is becoming massive; current supercomputer processing will be available on desktops in the next decade. For mass-scale application software development on the massive parallel computing available on desktops, existing popular languages with large libraries have to be augmented with new constructs and paradigms that exploit massive parallel computing and distributed memory models while retaining user-friendliness. Currently available object-oriented languages for massive parallel computing, such as Chapel, X10, and UPC++, exploit distributed computing, data-parallel computing, and thread-level parallelism at the process level in the PGAS (Partitioned Global Address Space) memory model. However, they do not incorporate: 1) any extension for object distribution to exploit the PGAS model; 2) the flexibility of migrating or cloning an object between places to exploit load balancing; or 3) the programming paradigms that result from integrating data- and thread-level parallelism with object distribution. In the proposed thesis, I compare different languages in the PGAS model; propose new constructs that extend C++ with object distribution, object cloning, and object migration; and integrate PGAS-based process constructs with these extensions on distributed objects. A new paradigm, MIDD (Multiple Invocation Distributed Data), is also presented, in which different copies of the same class can be invoked and work on different elements of distributed data concurrently using remote method invocations. I present the new constructs, their grammar, and their behavior, and explain them using simple programs utilizing these constructs.

  17. Effects of a multimedia project on users' knowledge about normal forgetting and serious memory loss.

    PubMed

    Mahoney, Diane Feeney; Tarlow, Barbara J; Jones, Richard N; Sandaire, Johnny

    2002-01-01

    The aim of the project was to develop and evaluate the effectiveness of a CD-ROM-based multimedia program as a tool to increase users' knowledge about the differences between "normal" forgetfulness and the more serious memory loss associated with Alzheimer's disease. The research was a randomized controlled study conducted with 113 adults who were recruited from the community and who expressed a concern about memory loss in a family member. The intervention group (n=56) viewed a module entitled "Forgetfulness: What's Normal and What's Not" on a laptop computer in their homes; the control group (n=57) did not. Both groups completed a 25-item knowledge-about-memory-loss test (primary outcome) and a sociodemographic and technology usage questionnaire; the intervention group also completed a CD-ROM user's evaluation. The mean (SD) number of correct responses to the knowledge test was 14.2 (4.5) for controls and 19.7 (3.1) for intervention participants. This highly significant difference (p<0.001) corresponds to a very large effect size. The program was most effective for participants with a lower level of self-reported prior knowledge about memory loss and Alzheimer's disease (p=0.02). Viewers were very satisfied with the program and felt that it was easy to use and understand. They particularly valued having personal access to a confidential source that permitted them to become informed about memory loss without public disclosure. This multimedia CD-ROM technology program provides an efficient and effective means of teaching older adults about memory loss and ways to distinguish benign from serious memory loss. It uniquely balances public community outreach education and personal privacy.

  18. Fabrication of InGaZnO Nonvolatile Memory Devices at Low Temperature of 150 degrees C for Applications in Flexible Memory Displays and Transparency Coating on Plastic Substrates.

    PubMed

    Hanh, Nguyen Hong; Jang, Kyungsoo; Yi, Junsin

    2016-05-01

    We directly deposited amorphous InGaZnO (a-IGZO) nonvolatile memory (NVM) devices with oxynitride-oxide-dioxide (OOO) stack structures on plastic substrates by a DC pulsed magnetron sputtering and inductively coupled plasma chemical vapor deposition (ICPCVD) system, using a low temperature of 150 degrees C. The fabricated bottom-gate a-IGZO NVM devices have a wide memory window with a low operating voltage during programming and erasing, due to effective control of the gate dielectrics. In addition, after ten years, the memory device retains a memory window of over 73%, with a programming duration of only 1 ms. Moreover, the a-IGZO films show high optical transmittance of over 85%, and good uniformity with a root mean square (RMS) roughness of 0.26 nm. This film is a promising candidate for achieving flexible displays and transparency on plastic substrates because of the possibility of low-temperature deposition and the highly transparent properties of a-IGZO films. These results demonstrate that the a-IGZO NVM devices obtained at low temperature have suitable programming and erasing efficiency for data storage under low-voltage conditions, in combination with excellent charge retention characteristics, and thus show great potential for application in flexible memory displays.

  19. Performance and scalability of Fourier domain optical coherence tomography acceleration using graphics processing units.

    PubMed

    Li, Jian; Bloch, Pavel; Xu, Jing; Sarunic, Marinko V; Shannon, Lesley

    2011-05-01

    Fourier domain optical coherence tomography (FD-OCT) provides faster line rates, better resolution, and higher sensitivity for noninvasive, in vivo biomedical imaging compared to traditional time domain OCT (TD-OCT). However, because the signal processing for FD-OCT is computationally intensive, real-time FD-OCT applications demand powerful computing platforms to deliver acceptable performance. Graphics processing units (GPUs) have been used as coprocessors to accelerate FD-OCT by leveraging their relatively simple programming model to exploit thread-level parallelism. Unfortunately, GPUs do not "share" memory with their host processors, requiring additional data transfers between the GPU and CPU. In this paper, we implement a complete FD-OCT accelerator on a consumer grade GPU/CPU platform. Our data acquisition system uses spectrometer-based detection and a dual-arm interferometer topology with numerical dispersion compensation for retinal imaging. We demonstrate that the maximum line rate is dictated by the memory transfer time and not the processing time due to the GPU platform's memory model. Finally, we discuss how the performance trends of GPU-based accelerators compare to the expected future requirements of FD-OCT data rates.
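    The FD-OCT signal-processing chain the authors accelerate can be sketched in its simplest CPU form: background subtraction followed by a Fourier transform along the spectral axis, whose magnitude gives the depth profile. On a GPU, the host-to-device copy of each raw frame is the transfer cost the paper identifies as the bottleneck. The array sizes and function name below are illustrative, and real pipelines add k-space resampling and numerical dispersion compensation.

```python
import numpy as np

def process_frame(spectra, background):
    """Minimal FD-OCT processing sketch (CPU/NumPy): subtract the
    DC background from each spectrum, FFT along the spectral axis,
    and take the magnitude to obtain A-scan depth profiles."""
    corrected = spectra - background
    depth = np.fft.fft(corrected, axis=-1)
    return np.abs(depth)

rng = np.random.default_rng(1)
n_lines, n_pixels = 512, 1024                   # one frame of raw spectra
spectra = rng.normal(size=(n_lines, n_pixels))
background = spectra.mean(axis=0)               # estimate the DC term
frame = process_frame(spectra, background)
print(frame.shape)  # (512, 1024)
```

    In a GPU port, `spectra` would be copied to device memory before the FFT; the paper's point is that for high line rates this copy, not the transform, dictates the maximum throughput.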

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gebis, Joseph; Oliker, Leonid; Shalf, John

    The disparity between microprocessor clock frequencies and memory latency is a primary reason why many demanding applications run well below peak achievable performance. Software-controlled scratchpad memories, such as the Cell local store, attempt to ameliorate this discrepancy by enabling precise control over memory movement; however, scratchpad technology confronts the programmer and compiler with an unfamiliar and difficult programming model. In this work, we present the Virtual Vector Architecture (ViVA), which combines the memory semantics of vector computers with a software-controlled scratchpad memory in order to provide a more effective and practical approach to latency hiding. ViVA requires minimal changes to the core design and could thus be easily integrated with conventional processor cores. To validate our approach, we implemented ViVA on the Mambo cycle-accurate full system simulator, which was carefully calibrated to match the performance of our underlying PowerPC Apple G5 architecture. Results show that ViVA is able to deliver significant performance benefits over scalar techniques for a variety of memory access patterns as well as two important memory-bound compact kernels, corner turn and sparse matrix-vector multiplication -- achieving 2x-13x improvement compared to the scalar version. Overall, our preliminary ViVA exploration points to a promising approach for improving application performance on leading microprocessors with minimal design and complexity costs, in a power-efficient manner.
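    One of the memory-bound kernels cited, sparse matrix-vector multiplication, illustrates the irregular gather-style loads that latency-hiding mechanisms such as ViVA target. A minimal CSR (compressed sparse row) sketch, not the paper's implementation:

```python
import numpy as np

def spmv_csr(values, col_idx, row_ptr, x):
    """Sparse matrix-vector multiply in CSR format.  The indexed
    gather x[col_idx[...]] is the irregular, latency-bound access
    pattern that vector-style loads aim to hide."""
    y = np.zeros(len(row_ptr) - 1)
    for i in range(len(y)):
        start, end = row_ptr[i], row_ptr[i + 1]
        y[i] = np.dot(values[start:end], x[col_idx[start:end]])
    return y

# 3x3 example matrix: [[1, 0, 2], [0, 3, 0], [4, 0, 5]]
values  = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
col_idx = np.array([0, 2, 1, 0, 2])
row_ptr = np.array([0, 2, 3, 5])
x = np.array([1.0, 1.0, 1.0])
print(spmv_csr(values, col_idx, row_ptr, x))  # [3. 3. 9.]
```

    The nonzero values stream sequentially, but the accesses to `x` jump wherever the column indices point, which is why SpMV performance is governed by memory latency rather than arithmetic throughput.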

  1. Manycore Performance-Portability: Kokkos Multidimensional Array Library

    DOE PAGES

    Edwards, H. Carter; Sunderland, Daniel; Porter, Vicki; ...

    2012-01-01

    Large, complex scientific and engineering application codes have a significant investment in computational kernels that implement their mathematical models. Porting these computational kernels to the collection of modern manycore accelerator devices is a major challenge in that these devices have diverse programming models, application programming interfaces (APIs), and performance requirements. The Kokkos Array programming model provides a library-based approach to implementing computational kernels that are performance-portable to CPU-multicore and GPGPU accelerator devices. This programming model is based upon three fundamental concepts: (1) manycore compute devices, each with its own memory space; (2) data-parallel kernels; and (3) multidimensional arrays. Kernel execution performance, especially on NVIDIA® devices, is extremely dependent on data access patterns. The optimal data access pattern can differ between manycore devices -- potentially leading to different implementations of computational kernels specialized for different devices. The Kokkos Array programming model supports performance-portable kernels by (1) separating data access patterns from computational kernels through a multidimensional array API and (2) introducing device-specific data access mappings when a kernel is compiled. An implementation of Kokkos Array is available through Trilinos [Trilinos website, http://trilinos.sandia.gov/, August 2011].
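    The separation Kokkos makes between a kernel's logical indexing and the device-specific memory layout can be illustrated, by analogy, with NumPy's row-major versus column-major orders: the kernel below is written against logical indices only, while the chosen layout determines which traversal is contiguous in memory. This is an analogy in NumPy, not the Kokkos API.

```python
import numpy as np

n = 1024
a_row = np.ones((n, n), order="C")  # row-major: a[i, j+1] is adjacent
a_col = np.ones((n, n), order="F")  # column-major: a[i+1, j] is adjacent

def row_sums(a):
    """The 'kernel': written purely against logical indices.  The
    array's layout, not the kernel, decides which elements are
    adjacent in memory -- the separation Kokkos formalizes."""
    return a.sum(axis=1)

# Same logical result regardless of layout; only the memory
# traversal pattern (the strides) differs between the two arrays.
assert np.array_equal(row_sums(a_row), row_sums(a_col))
print(a_row.strides, a_col.strides)  # (8192, 8) (8, 8192)
```

    On a CPU one typically wants the innermost loop index to be the contiguous one, while on a GPU adjacent threads should touch adjacent elements; Kokkos picks the mapping per device at compile time so the kernel source stays unchanged.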

  2. Project Golden Gate: towards real-time Java in space missions

    NASA Technical Reports Server (NTRS)

    Dvorak, Daniel; Bollella, Greg; Canham, Tim; Carson, Vanessa; Champlin, Virgil; Giovannoni, Brian; Indictor, Mark; Meyer, Kenny; Murray, Alex; Reinholtz, Kirk

    2004-01-01

    This paper describes the problem domain and our experimentation with the first commercial implementation of the Real Time Specification for Java. The two main issues explored in this report are: (1) the effect of RTSJ's non-heap memory on the programming model, and (2) performance benchmarking of RTSJ/Linux relative to C++/VxWorks.

  3. How autobiographical memories can support episodic recall: transfer and maintenance effect of memory training with old-old low-autonomy adults.

    PubMed

    Carretti, Barbara; Facchini, Giulia; Nicolini, Chiara

    2011-02-01

    A large body of research has demonstrated that, although specific memory activities can enhance the memory performance of healthy older adults, the extent of the increment is negatively associated with age. Conversely, few studies have examined the case of healthy elderly people not living alone. This study has two main goals: to understand whether older adults with limited autonomy can benefit from activities devoted to increasing their episodic memory performance, and to test the efficacy of a memory training program based on autobiographical memories, in terms of transfer and maintenance effects. We postulated that being able to rely on stable autobiographical memories (intrinsically associated with emotions) would be a valuable memory aid. Memory training was given to healthy older adults (aged 75-85) living in a retirement home. Two programs were compared: in the first, participants were primed to recall autobiographical memories around certain themes, and then to complete a set of episodic memory tasks (experimental group); in the second, participants were only given the episodic tasks (control group). Both groups improved their performance from pre- to post-test. However, the experimental group reported a greater feeling of well-being after the training, and maintained the training gains relating to episodic performance after three months. Our findings suggest that specific memory activities are beneficial to elderly people living in a retirement home context. In addition, training based on reactivation of autobiographical memories is shown to produce a long-lasting effect on memory performance.

  4. Design and Implementation of an MC68020-Based Educational Computer Board

    DTIC Science & Technology

    1989-12-01

    device and the other for a Macintosh personal computer. A stored program can be installed in 8K bytes Programmable Read Only Memory (PROM) to initialize...MHz. It includes four Static Random Access Memory (SRAM) chips which provide a storage of 32K bytes. Two Programmable Array Logic (PAL) chips...

  5. Effect with high density nano dot type storage layer structure on 20 nm planar NAND flash memory characteristics

    NASA Astrophysics Data System (ADS)

    Sasaki, Takeshi; Muraguchi, Masakazu; Seo, Moon-Sik; Park, Sung-kye; Endoh, Tetsuo

    2014-01-01

    The merits, concerns, and design principles for the future nano dot (ND) type NAND flash memory cell are clarified by considering the effect of the storage layer structure on NAND flash memory characteristics. The characteristics of the ND cell for a NAND flash memory, in comparison with the floating gate (FG) type, are comprehensively studied through the read, erase, and program operations, and the cell-to-cell interference, with device simulation. Although the degradation of the read throughput (0.7% reduction of the cell current) and slower program time (26% smaller programmed threshold voltage shift) with high-density (10 × 1012 cm-2) ND NAND are still concerns, the suppression of cell-to-cell interference at high density (10 × 1012 cm-2) plays the most important part for scaling and multi-level cell (MLC) operation in comparison with the FG NAND. From these results, the design knowledge is shown to require control of the number of nano dots rather than a higher nano dot density, from the viewpoint of increasing memory capacity by MLC operation and suppressing the threshold voltage variability caused by the number of dots in the storage layer. Moreover, in order to increase memory capacity, it is shown that the tunnel oxide with ND should be designed thicker (>3 nm) than in conventionally designed ND cells for programming/erasing with the direct tunneling mechanism.

  6. Asymmetric programming: a highly reliable metadata allocation strategy for MLC NAND flash memory-based sensor systems.

    PubMed

    Huang, Min; Liu, Zhaoqing; Qiao, Liyan

    2014-10-10

    While the NAND flash memory is widely used as the storage medium in modern sensor systems, the aggressive shrinking of process geometry and an increase in the number of bits stored in each memory cell will inevitably degrade the reliability of NAND flash memory. In particular, it's critical to enhance metadata reliability, which occupies only a small portion of the storage space, but maintains the critical information of the file system and the address translations of the storage system. Metadata damage will cause the system to crash or a large amount of data to be lost. This paper presents Asymmetric Programming, a highly reliable metadata allocation strategy for MLC NAND flash memory storage systems. Our technique exploits for the first time the property of the multi-page architecture of MLC NAND flash memory to improve the reliability of metadata. The basic idea is to keep metadata in most significant bit (MSB) pages which are more reliable than least significant bit (LSB) pages. Thus, we can achieve relatively low bit error rates for metadata. Based on this idea, we propose two strategies to optimize address mapping and garbage collection. We have implemented Asymmetric Programming on a real hardware platform. The experimental results show that Asymmetric Programming can achieve a reduction in the number of page errors of up to 99.05% with the baseline error correction scheme.
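    The allocation idea can be sketched with a toy page model: keep metadata on the more reliable MSB pages and ordinary data on LSB pages. Real MLC page-pairing tables are vendor-specific; the alternating LSB/MSB layout and the function names below are simplifying assumptions, not the paper's implementation.

```python
def classify_pages(n_pages):
    """Toy MLC page map: assume pages alternate LSB/MSB within each
    paired-page couple (real NAND pairing tables are vendor-specific).
    Returns the lists of free LSB- and MSB-page numbers."""
    lsb = [p for p in range(n_pages) if p % 2 == 0]
    msb = [p for p in range(n_pages) if p % 2 == 1]
    return lsb, msb

def allocate(kind, free_lsb, free_msb):
    """Asymmetric allocation: metadata goes to MSB pages (lower bit
    error rate in this model); ordinary data goes to LSB pages,
    falling back to the other pool when the preferred one is empty."""
    if kind == "metadata":
        primary, fallback = free_msb, free_lsb
    else:
        primary, fallback = free_lsb, free_msb
    pool = primary if primary else fallback
    return pool.pop(0)

lsb, msb = classify_pages(8)
page_meta = allocate("metadata", lsb, msb)  # takes an MSB page
page_data = allocate("data", lsb, msb)      # takes an LSB page
print(page_meta, page_data)  # 1 0
```

    Since metadata occupies only a small fraction of the storage, reserving the MSB pool for it costs little capacity while giving the critical structures the lower error rate.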

  7. Asymmetric Programming: A Highly Reliable Metadata Allocation Strategy for MLC NAND Flash Memory-Based Sensor Systems

    PubMed Central

    Huang, Min; Liu, Zhaoqing; Qiao, Liyan

    2014-01-01

    While the NAND flash memory is widely used as the storage medium in modern sensor systems, the aggressive shrinking of process geometry and an increase in the number of bits stored in each memory cell will inevitably degrade the reliability of NAND flash memory. In particular, it's critical to enhance metadata reliability, which occupies only a small portion of the storage space, but maintains the critical information of the file system and the address translations of the storage system. Metadata damage will cause the system to crash or a large amount of data to be lost. This paper presents Asymmetric Programming, a highly reliable metadata allocation strategy for MLC NAND flash memory storage systems. Our technique exploits for the first time the property of the multi-page architecture of MLC NAND flash memory to improve the reliability of metadata. The basic idea is to keep metadata in most significant bit (MSB) pages which are more reliable than least significant bit (LSB) pages. Thus, we can achieve relatively low bit error rates for metadata. Based on this idea, we propose two strategies to optimize address mapping and garbage collection. We have implemented Asymmetric Programming on a real hardware platform. The experimental results show that Asymmetric Programming can achieve a reduction in the number of page errors of up to 99.05% with the baseline error correction scheme. PMID:25310473

  8. Energy-aware Thread and Data Management in Heterogeneous Multi-core, Multi-memory Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Su, Chun-Yi

    By 2004, microprocessor design focused on multicore scaling -- increasing the number of cores per die in each generation -- as the primary strategy for improving performance. These multicore processors typically equip multiple memory subsystems to improve data throughput. In addition, these systems employ heterogeneous processors such as GPUs and heterogeneous memories like non-volatile memory to improve performance, capacity, and energy efficiency. With the increasing volume of hardware resources and system complexity caused by heterogeneity, future systems will require intelligent ways to manage hardware resources. Early research to improve performance and energy efficiency on heterogeneous, multi-core, multi-memory systems focused on tuning a single primitive, or at best a few primitives, in the systems. The key limitation of past efforts is their lack of a holistic approach to resource management that balances the tradeoff between performance and energy consumption. In addition, the shift from simple, homogeneous systems to these heterogeneous, multi-core, multi-memory systems requires in-depth understanding of efficient resource management for scalable execution, including new models that capture the interchange between performance and energy, smarter resource management strategies, and novel low-level performance/energy tuning primitives and runtime systems. Tuning an application to control available resources efficiently has become a daunting challenge; managing resources in automation is still a dark art since the tradeoffs among programming, energy, and performance remain insufficiently understood. In this dissertation, I have developed theories, models, and resource management techniques to enable energy-efficient execution of parallel applications through thread and data management in these heterogeneous multi-core, multi-memory systems. I study the effect of dynamic concurrency throttling on the performance and energy of multi-core, non-uniform memory access (NUMA) systems. I use critical path analysis to quantify memory contention in the NUMA memory system and determine thread mappings. In addition, I implement a runtime system that combines concurrency throttling and a novel thread mapping algorithm to manage thread resources and improve energy-efficient execution in multi-core, NUMA systems.

  9. Order-memory and association-memory.

    PubMed

    Caplan, Jeremy B

    2015-09-01

    Two highly studied memory functions are memory for associations (items presented in pairs, such as SALT-PEPPER) and memory for order (a list of items whose order matters, such as a telephone number). Order- and association-memory are at the root of many forms of behaviour, from wayfinding, to language, to remembering people's names. Most researchers have investigated memory for order separately from memory for associations. Exceptions to this, associative-chaining models build an ordered list from associations between pairs of items, quite literally understanding association- and order-memory together. Alternatively, positional-coding models have been used to explain order-memory as a completely distinct function from association-memory. Both classes of model have found empirical support and both have faced serious challenges. I argue that models that combine both associative chaining and positional coding are needed. One such hybrid model, which relies on brain-activity rhythms, is promising, but remains to be tested rigorously. I consider two relatively understudied memory behaviours that demand a combination of order- and association-information: memory for the order of items within associations (is it William James or James William?) and judgments of relative order (who left the party earlier, Hermann or William?). Findings from these underexplored procedures are already difficult to reconcile with existing association-memory and order-memory models. Further work with such intermediate experimental paradigms has the potential to provide powerful findings to constrain and guide models into the future, with the aim of explaining a large range of memory functions, encompassing both association- and order-memory. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
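    The two model classes can be contrasted as toy retrieval schemes: associative chaining stores item-to-item links and recalls each item by cueing with its predecessor, while positional coding stores position-to-item links and recalls by position. A deliberately minimal sketch, not any published model's equations:

```python
def study_chaining(items):
    """Associative chaining: store item -> next-item associations."""
    return {a: b for a, b in zip(items, items[1:])}

def recall_chaining(assoc, first, n):
    """Recall by cueing each item with the previous one."""
    out = [first]
    for _ in range(n - 1):
        out.append(assoc[out[-1]])
    return out

def study_positional(items):
    """Positional coding: store position -> item associations."""
    return dict(enumerate(items))

def recall_positional(codes, n):
    """Recall by cueing with each position directly."""
    return [codes[i] for i in range(n)]

items = ["salt", "pepper", "oil", "vinegar"]
assert recall_chaining(study_chaining(items), "salt", 4) == items
assert recall_positional(study_positional(items), 4) == items
```

    The sketch makes the classic contrast concrete: if one inter-item link is lost, chained recall halts at that point, whereas positional recall of the later items is unaffected.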

  10. Predictors of change in life skills in schizophrenia after cognitive remediation.

    PubMed

    Kurtz, Matthew M; Seltzer, James C; Fujimoto, Marco; Shagan, Dana S; Wexler, Bruce E

    2009-02-01

    Few studies have investigated predictors of response to cognitive remediation interventions in patients with schizophrenia. Predictor studies to date have selected treatment outcome measures that were either part of the remediation intervention itself or closely linked to the intervention, with few studies investigating factors that predict generalization to measures of everyday life skills as an index of treatment-related improvement. In the current study we investigated the relationship between four measures of neurocognitive function (crystallized verbal ability; auditory sustained attention and working memory; verbal learning and memory; and problem-solving), two measures of symptoms (total positive and negative symptoms), and the process variables of treatment intensity and duration, and change on a performance-based measure of everyday life skills after a year of computer-assisted cognitive remediation offered as part of intensive outpatient rehabilitation treatment. Thirty-six patients with schizophrenia or schizoaffective disorder were studied. Results of a linear regression model revealed that auditory attention and working memory predicted a significant amount of the variance in change in performance-based measures of everyday life skills after cognitive remediation, even when variance for all other neurocognitive variables in the model was controlled. Stepwise regression revealed that auditory attention and working memory predicted change in everyday life skills across the trial even when baseline life-skill scores, symptoms, and treatment process variables were controlled. These findings emphasize the importance of sustained auditory attention and working memory for benefiting from extended programs of cognitive remediation.

  11. A New Conceptualization of Human Visual Sensory-Memory.

    PubMed

    Öğmen, Haluk; Herzog, Michael H

    2016-01-01

    Memory is an essential component of cognition, and disorders of memory have significant individual and societal costs. The Atkinson-Shiffrin "modal model" forms the foundation of our understanding of human memory. It consists of three stores: Sensory Memory (SM), whose visual component is called iconic memory; Short-Term Memory (STM; also called working memory, WM); and Long-Term Memory (LTM). Since its inception, shortcomings of all three components of the modal model have been identified. While the theories of STM and LTM underwent significant modifications to address these shortcomings, models of iconic memory remained largely unchanged: a high-capacity but rapidly decaying store whose contents are encoded in retinotopic coordinates, i.e., according to how the stimulus is projected on the retina. The fundamental shortcoming of iconic memory models is that, because contents are encoded in retinotopic coordinates, iconic memory cannot hold any useful information under normal viewing conditions when objects or the subject are in motion. Hence, a half-century after its formulation, it remains an unresolved problem whether and how the first stage of the modal model serves any useful function and how subsequent stages of the modal model receive inputs from the environment. Here, we propose a new conceptualization of human visual sensory memory by introducing an additional component whose reference frame consists of motion-grouping-based coordinates rather than retinotopic coordinates. We review data supporting this new model and discuss how it offers solutions to the paradoxes of the traditional model of sensory memory.

  12. Is random access memory random?

    NASA Technical Reports Server (NTRS)

    Denning, P. J.

    1986-01-01

    Most software is constructed on the assumption that programs and data are stored in random access memory (RAM). Physical limitations on the relative speeds of processor and memory elements lead to a variety of memory organizations that match the processor addressing rate with the memory service rate. These include interleaved and cached memory. A very high fraction of a processor's address requests can be satisfied from the cache without reference to the main memory. The cache requests information from main memory in blocks that can be transferred at the full memory speed. Programmers who organize algorithms for locality can realize the highest performance from these computers.
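    The locality point can be demonstrated directly: summing a row-major array along its rows walks memory sequentially and reuses fetched cache lines, while summing along columns strides across it. A small timing sketch (absolute times are machine-dependent, so only the relative pattern matters):

```python
import time
import numpy as np

n = 2048
a = np.arange(n * n, dtype=np.float64).reshape(n, n)  # row-major layout

def sum_rows(m):
    """Walks memory sequentially: consecutive elements of each row
    sit in the same cache lines."""
    total = 0.0
    for i in range(m.shape[0]):
        total += m[i, :].sum()
    return total

def sum_cols(m):
    """Strides n*8 bytes between touched elements: each access is
    likely to land in a different cache line."""
    total = 0.0
    for j in range(m.shape[1]):
        total += m[:, j].sum()
    return total

t0 = time.perf_counter(); r = sum_rows(a); t_row = time.perf_counter() - t0
t0 = time.perf_counter(); c = sum_cols(a); t_col = time.perf_counter() - t0
print(f"row-order {t_row:.4f}s vs column-order {t_col:.4f}s")
```

    Both functions compute the same total (up to floating-point summation order); the difference in runtime comes entirely from how their access patterns interact with the cache.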

  13. Exascale Hardware Architectures Working Group

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hemmert, S; Ang, J; Chiang, P

    2011-03-15

    The ASC Exascale Hardware Architecture working group is challenged to provide input on the following areas impacting the future use and usability of potential exascale computer systems: processor, memory, and interconnect architectures, as well as the power and resilience of these systems. Going forward, there are many challenging issues that will need to be addressed. First, power constraints in processor technologies will lead to steady increases in parallelism within a socket. Additionally, all cores may not be fully independent nor fully general purpose. Second, there is a clear trend toward less balanced machines, in terms of compute capability compared to memory and interconnect performance. In order to mitigate the memory issues, memory technologies will introduce 3D stacking, eventually moving on-socket and likely on-die, providing greatly increased bandwidth but unfortunately also likely providing smaller memory capacity per core. Off-socket memory, possibly in the form of non-volatile memory, will create a complex memory hierarchy. Third, communication energy will dominate the energy required to compute, such that interconnect power and bandwidth will have a significant impact. All of the above changes are driven by the need for greatly increased energy efficiency, as current technology will prove unsuitable for exascale, due to unsustainable power requirements of such a system. These changes will have the most significant impact on programming models and algorithms, but they will be felt across all layers of the machine. There is a clear need to engage all ASC working groups in planning for how to deal with technological changes of this magnitude. The primary function of the Hardware Architecture Working Group is to facilitate codesign with hardware vendors to ensure future exascale platforms are capable of efficiently supporting the ASC applications, which in turn need to meet the mission needs of the NNSA Stockpile Stewardship Program. This issue is relatively immediate, as there is only a small window of opportunity to influence hardware design for 2018 machines. Given the short timeline, a firm co-design methodology with vendors is of prime importance.

  14. BIRD: A general interface for sparse distributed memory simulators

    NASA Technical Reports Server (NTRS)

    Rogers, David

    1990-01-01

    Kanerva's sparse distributed memory (SDM) has now been implemented for at least six different computers, including SUN3 workstations, the Apple Macintosh, and the Connection Machine. A common interface for input of commands would both aid testing of programs on a broad range of computer architectures and assist users in transferring results from research environments to applications. A common interface also allows secondary programs to generate command sequences for a sparse distributed memory, which may then be executed on the appropriate hardware. The BIRD program is an attempt to create such an interface. Simplifying access to different simulators should assist developers in finding appropriate uses for SDM.
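    The write/read cycle that such simulators expose through their command interfaces is simple enough to sketch. The following toy SDM stores a pattern autoassociatively across randomly addressed hard locations; the dimensions and activation radius below are illustrative choices, not values from the BIRD program:

```python
import random

random.seed(0)
N = 256          # address/word length in bits
M = 500          # number of hard locations
RADIUS = 120     # Hamming-distance activation radius

# Each hard location has a fixed random address and a counter per bit.
hard_addresses = [[random.randint(0, 1) for _ in range(N)] for _ in range(M)]
counters = [[0] * N for _ in range(M)]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def write(address, word):
    """Distribute the word into every hard location near the address."""
    for loc, ctr in zip(hard_addresses, counters):
        if hamming(address, loc) <= RADIUS:
            for i, bit in enumerate(word):
                ctr[i] += 1 if bit else -1

def read(address):
    """Pool the counters of nearby hard locations and threshold."""
    sums = [0] * N
    for loc, ctr in zip(hard_addresses, counters):
        if hamming(address, loc) <= RADIUS:
            for i in range(N):
                sums[i] += ctr[i]
    return [1 if s > 0 else 0 for s in sums]

pattern = [random.randint(0, 1) for _ in range(N)]
write(pattern, pattern)   # autoassociative store
recalled = read(pattern)
```

    A command interface of the kind BIRD proposes would map operations like `write` and `read` onto whichever backend (workstation, Macintosh, Connection Machine) executes them.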

  15. QRAP: A numerical code for projected (Q)uasiparticle (RA)ndom (P)hase approximation

    NASA Astrophysics Data System (ADS)

    Samana, A. R.; Krmpotić, F.; Bertulani, C. A.

    2010-06-01

    A computer code for the quasiparticle random phase approximation (QRPA) and projected quasiparticle random phase approximation (PQRPA) models of nuclear structure is explained in detail. The residual interaction is approximated by a simple δ-force. An important application of the code consists in evaluating nuclear matrix elements involved in neutrino-nucleus reactions. As an example, cross sections for 56Fe and 12C are calculated and the code output is explained. The application to other nuclei and the description of other nuclear and weak decay processes are also discussed.
    Program summary
    Title of program: QRAP (Quasiparticle RAndom Phase approximation)
    Computers: The code has been created on a PC, but also runs on UNIX or LINUX machines
    Operating systems: WINDOWS or UNIX
    Program language used: Fortran-77
    Memory required to execute with typical data: 16 MB of RAM and 2 MB of hard disk space
    No. of lines in distributed program, including test data, etc.: ~8000
    No. of bytes in distributed program, including test data, etc.: ~256 kB
    Distribution format: tar.gz
    Nature of physical problem: The program calculates neutrino- and antineutrino-nucleus cross sections as a function of the incident neutrino energy, and muon capture rates, using the QRPA or PQRPA as nuclear structure models.
    Method of solution: The QRPA, or PQRPA, equations are solved in a self-consistent way for even-even nuclei. The nuclear matrix elements for the neutrino-nucleus interaction are treated as the inverse beta reaction of odd-odd nuclei as a function of the transferred momentum.
    Typical running time: ≈ 5 min on a 3 GHz processor for Data set 1.

  16. An essential memory trace found.

    PubMed

    Thompson, Richard F

    2013-10-01

    I argue here that we have succeeded in localizing an essential memory trace for a basic form of associative learning and memory--classical conditioning of discrete responses learned with an aversive stimulus--to the anterior interpositus nucleus of the cerebellum. We first identified the entire essential circuit, using eyelid conditioning as the model system, and used reversible inactivation, during training, of critical structures and pathways to localize definitively the essential memory trace. In recognition of the 30th anniversary of Behavioral Neuroscience, I highlight 1 paper (Tracy, Thompson, Krupa, & Thompson, 1998) that was particularly significant for the progress of this research program. In this review, I present definitive evidence that the essential memory trace for eyelid conditioning is localized to the cerebellum and to no other part of the essential circuit, using electrical stimulation of the pontine nuclei-mossy fibers projecting to the cerebellum as the conditional stimulus (CS; it proved to be a supernormal stimulus resulting in much faster learning than with any peripheral CS) and using an electrical stimulus to the output of the cerebellum as a test, which did not change. Pontine patterns of projection to the cerebellum were confirmed with retrograde labeling techniques.

  17. Revisiting the continuum hypothesis: toward an in-depth exploration of executive functions in korsakoff syndrome.

    PubMed

    Brion, Mélanie; Pitel, Anne-Lise; Beaunieux, Hélène; Maurage, Pierre

    2014-01-01

    Korsakoff syndrome (KS) is a neurological state mostly caused by alcohol-dependence and leading to disproportionate episodic memory deficits. KS patients present more severe anterograde amnesia than Alcohol-Dependent Subjects (ADS), which led to the continuum hypothesis postulating a progressive increase in brain and cognitive damage during the evolution from ADS to KS. This hypothesis has been extensively examined for memory but is still debated for other abilities, notably executive functions (EF). EF have up to now been explored with nonspecific tasks in KS, and few studies have explored their interactions with memory. Exploring EF in KS with specific tasks based on current EF models could thus renew the exploration of the continuum hypothesis. This paper will propose a research program aiming at: (1) clarifying the extent of executive dysfunction in KS with tasks focusing on specific EF subcomponents; (2) determining the differential EF deficits in ADS and KS; (3) exploring EF-memory interactions in KS with innovative tasks. At the fundamental level, this exploration will test the continuum hypothesis beyond memory. At the clinical level, it will propose new rehabilitation tools focusing on the EF specifically impaired in KS.

  18. Revisiting the Continuum Hypothesis: Toward an In-Depth Exploration of Executive Functions in Korsakoff Syndrome

    PubMed Central

    Brion, Mélanie; Pitel, Anne-Lise; Beaunieux, Hélène; Maurage, Pierre

    2014-01-01

    Korsakoff syndrome (KS) is a neurological state mostly caused by alcohol-dependence and leading to disproportionate episodic memory deficits. KS patients present more severe anterograde amnesia than Alcohol-Dependent Subjects (ADS), which led to the continuum hypothesis postulating a progressive increase in brain and cognitive damage during the evolution from ADS to KS. This hypothesis has been extensively examined for memory but is still debated for other abilities, notably executive functions (EF). EF have up to now been explored with nonspecific tasks in KS, and few studies have explored their interactions with memory. Exploring EF in KS with specific tasks based on current EF models could thus renew the exploration of the continuum hypothesis. This paper will propose a research program aiming at: (1) clarifying the extent of executive dysfunction in KS with tasks focusing on specific EF subcomponents; (2) determining the differential EF deficits in ADS and KS; (3) exploring EF-memory interactions in KS with innovative tasks. At the fundamental level, this exploration will test the continuum hypothesis beyond memory. At the clinical level, it will propose new rehabilitation tools focusing on the EF specifically impaired in KS. PMID:25071526

  19. A fast and low-power microelectromechanical system-based non-volatile memory device

    PubMed Central

    Lee, Sang Wook; Park, Seung Joo; Campbell, Eleanor E. B.; Park, Yung Woo

    2011-01-01

    Several new generation memory devices have been developed to overcome the low performance of conventional silicon-based flash memory. In this study, we demonstrate a novel non-volatile memory design based on the electromechanical motion of a cantilever to provide fast charging and discharging of a floating-gate electrode. The operation is demonstrated by using an electromechanical metal cantilever to charge a floating gate that controls the charge transport through a carbon nanotube field-effect transistor. The set and reset currents are unchanged after more than 11 h constant operation. Over 500 repeated programming and erasing cycles were demonstrated under atmospheric conditions at room temperature without degradation. Multinary bit programming can be achieved by varying the voltage on the cantilever. The operation speed of the device is faster than a conventional flash memory and the power consumption is lower than other memory devices. PMID:21364559

  20. Linearized Programming of Memristors for Artificial Neuro-Sensor Signal Processing

    PubMed Central

    Yang, Changju; Kim, Hyongsuk

    2016-01-01

    A linearized programming method for memristor-based neural weights is proposed. The memristor is known as an ideal element for implementing a neural synapse due to its embedded functions of analog memory and analog multiplication. Its resistance variation with a voltage input is generally a nonlinear function of time. Linearizing the variation of memristance with respect to time is very important for the ease of memristor programming. In this paper, a method utilizing an anti-serial architecture for linear programming is proposed. The anti-serial architecture is composed of two memristors with opposite polarities. It linearizes the variation of memristance through the complementary actions of the two memristors. For programming a memristor, an additional memristor with opposite polarity is employed. The linearization effect of weight programming with an anti-serial architecture is investigated, and the memristor bridge synapse, which is built with two sets of the anti-serial memristor architecture, is taken as an application example of the proposed method. Simulations are performed with memristors of both the linear drift model and a nonlinear model. PMID:27548186
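    The linearization argument can be sketched with the linear drift model the authors simulate: in an anti-serial pair, the two memristances always sum to R_ON + R_OFF regardless of state, so the programming current, and hence the rate of memristance change, stays constant. The Euler integration below uses generic textbook-scale parameter values, not those of the paper:

```python
# Illustrative HP linear drift memristor model, integrated with Euler steps.
R_ON, R_OFF = 100.0, 16e3   # fully-ON / fully-OFF resistance (ohms)
D = 10e-9                   # device thickness (m)
MU = 1e-14                  # dopant mobility (m^2 V^-1 s^-1)
K = MU * R_ON / D**2        # state drift rate: dx/dt = K * i(t), x = w/D

def memristance(x):
    """Memristance as a linear mix of the ON and OFF resistances."""
    return R_ON * x + R_OFF * (1.0 - x)

def simulate_single(v, x0=0.1, dt=1e-4, steps=2000):
    """One memristor at constant voltage: i = v/M(x) changes as x drifts,
    so the state trajectory x(t) is nonlinear in time."""
    x, xs = x0, []
    for _ in range(steps):
        i = v / memristance(x)
        x = min(max(x + K * i * dt, 0.0), 1.0)
        xs.append(x)
    return xs

def simulate_antiserial(v, x0=0.1, dt=1e-4, steps=2000):
    """Two opposite-polarity memristors in series: memristances sum to
    R_ON + R_OFF in every state, so the current is constant and each
    state variable drifts linearly in time."""
    x1, x2, xs = x0, 1.0 - x0, []
    for _ in range(steps):
        i = v / (memristance(x1) + memristance(x2))
        x1 = min(max(x1 + K * i * dt, 0.0), 1.0)   # forward polarity
        x2 = min(max(x2 - K * i * dt, 0.0), 1.0)   # reversed polarity
        xs.append(x1)
    return xs
```

    Comparing successive increments of the two trajectories shows the single device accelerating as its resistance drops, while the anti-serial pair advances by a constant step.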

  1. Linearized Programming of Memristors for Artificial Neuro-Sensor Signal Processing.

    PubMed

    Yang, Changju; Kim, Hyongsuk

    2016-08-19

    A linearized programming method for memristor-based neural weights is proposed. The memristor is known as an ideal element for implementing a neural synapse due to its embedded functions of analog memory and analog multiplication. Its resistance variation with a voltage input is generally a nonlinear function of time. Linearizing the variation of memristance with respect to time is very important for the ease of memristor programming. In this paper, a method utilizing an anti-serial architecture for linear programming is proposed. The anti-serial architecture is composed of two memristors with opposite polarities. It linearizes the variation of memristance through the complementary actions of the two memristors. For programming a memristor, an additional memristor with opposite polarity is employed. The linearization effect of weight programming with an anti-serial architecture is investigated, and the memristor bridge synapse, which is built with two sets of the anti-serial memristor architecture, is taken as an application example of the proposed method. Simulations are performed with memristors of both the linear drift model and a nonlinear model.

  2. A hot hole-programmed and low-temperature-formed SONOS flash memory

    PubMed Central

    2013-01-01

    In this study, a high-performance TixZrySizO flash memory is demonstrated using a sol–gel spin-coating method and formed under a low annealing temperature. The high-efficiency charge storage layer is formed by depositing a well-mixed solution of titanium tetrachloride, silicon tetrachloride, and zirconium tetrachloride, followed by 60 s of annealing at 600°C. The flash memory exhibits a noteworthy hot hole trapping characteristic and excellent electrical properties regarding memory window, program/erase speeds, and charge retention. At only 6-V operation, the program/erase speeds can be as fast as 120:5.2 μs with a 2-V shift, and the memory window can be up to 8 V. The retention times are extrapolated to 10^6 s with only 5% (at 85°C) and 10% (at 125°C) charge loss. The barrier height of the TixZrySizO film is demonstrated to be 1.15 eV for hole trapping, through the extraction of the Poole-Frenkel current. The excellent performance of the memory is attributed to high trapping sites of the low-temperature-annealed, high-κ sol–gel film. PMID:23899050

  3. Aerospace Ground Equipment for model 4080 sequence programmer. A standard computer terminal is adapted to provide convenient operator to device interface

    NASA Technical Reports Server (NTRS)

    Nissley, L. E.

    1979-01-01

    The Aerospace Ground Equipment (AGE) provides an interface between a human operator and a complete spaceborne sequence timing device with a memory storage program. The AGE provides a means for composing, editing, syntax checking, and storing timing device programs. The AGE is implemented with a standard Hewlett-Packard 2649A terminal system and a minimum of special hardware. The terminal's dual tape interface is used to store timing device programs and to read in special AGE operating system software. To compose a new program for the timing device the keyboard is used to fill in a form displayed on the screen.

  4. Optical read/write memory system components

    NASA Technical Reports Server (NTRS)

    Kozma, A.

    1972-01-01

    The optical components of a breadboard holographic read/write memory system have been fabricated, and the parameters of the major system components have been specified: (1) a laser system; (2) an x-y beam deflector; (3) a block data composer; (4) the read/write memory material; (5) an output detector array; and (6) the electronics to drive, synchronize, and control all system components. The objectives of the investigation were divided into three concurrent phases: (1) to supply and fabricate the major components according to the previously established specifications; (2) to prepare computer programs to simulate the entire holographic memory system so that a designer can balance the requirements on the various components; and (3) to conduct a development program to optimize the combined recording and reconstruction process of the high density holographic memory system.

  5. Memory rehabilitation for the working memory of patients with multiple sclerosis (MS).

    PubMed

    Mousavi, Shokoufeh; Zare, Hossein; Etemadifar, Masoud; Taher Neshatdoost, Hamid

    2018-05-01

    The main cognitive impairments in multiple sclerosis (MS) affect the working memory, processing speed, and performances that are in close interaction with one another. Cognitive problems in MS are influenced to a lesser degree by disease recovery medications or treatments, but cognitive rehabilitation is considered one of the promising methods of treatment. There is evidence regarding the effectiveness of cognitive rehabilitation for MS patients in various stages of the disease. Since impairment in working memory is one of the main MS deficits, a particular training that affects this cognitive domain can be of great value. This study aims to determine the effectiveness of memory rehabilitation on the working memory performance of MS patients. Sixty MS patients with cognitive impairment and similar in terms of demographic characteristics, duration of disease, neurological problems, and mental health were randomly assigned to three groups: namely, experimental, placebo, and control. Patients' cognitive evaluations incorporated assessments at baseline, immediately post-intervention, and 5 weeks post-intervention. The experimental group received a cognitive rehabilitation program in one-hour sessions on a weekly basis for 8 weeks. The placebo group received relaxation techniques on a weekly basis; the control group received no intervention. The results of this study showed that the cognitive rehabilitation program had a positive effect on the working memory performance of patients with MS in the experimental group. These results were achieved in immediate evaluation (post-test) and follow-up 5 weeks after intervention. There was no significant difference in working memory performance between the placebo group and the control group. According to the study, there is evidence for the effectiveness of a memory rehabilitation program for the working memory of patients with MS.
Cognitive rehabilitation can improve working memory disorders and have a positive effect on the working memory performance of these patients.

  6. Cognitive remediation therapy (CRT) benefits more to patients with schizophrenia with low initial memory performances.

    PubMed

    Pillet, Benoit; Morvan, Yannick; Todd, Aurelia; Franck, Nicolas; Duboc, Chloé; Grosz, Aimé; Launay, Corinne; Demily, Caroline; Gaillard, Raphaël; Krebs, Marie-Odile; Amado, Isabelle

    2015-01-01

    Cognitive deficits in schizophrenia mainly affect memory, attention and executive functions. Cognitive remediation is a technique derived from neuropsychology, which aims to improve or compensate for these deficits. Working memory, verbal learning, and executive functions are crucial factors for functional outcome. Our purpose was to assess the impact of the cognitive remediation therapy (CRT) program on cognitive difficulties in patients with schizophrenia, especially on working memory, verbal memory, and cognitive flexibility. We collected data from clinical and neuropsychological assessments in 24 patients suffering from schizophrenia (Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition; DSM-IV) who followed a 3-month CRT program. Verbal and visuo-spatial working memory, verbal memory, and cognitive flexibility were assessed before and after CRT. The Wilcoxon test showed significant improvements on the backward digit span, on the visual working memory span, on verbal memory and on flexibility. Cognitive improvement was substantial when baseline performance was low, independently from clinical benefit. CRT is effective on crucial cognitive domains and provides a substantial benefit for patients with low baseline performance. Such cognitive amelioration appears highly promising for improving the outcome in cognitively impaired patients.

  7. An Optimization Code for Nonlinear Transient Problems of a Large Scale Multidisciplinary Mathematical Model

    NASA Astrophysics Data System (ADS)

    Takasaki, Koichi

    This paper presents a program for the multidisciplinary optimization and identification problem of the nonlinear model of large aerospace vehicle structures. The program constructs the global matrix of the dynamic system in the time direction by the p-version finite element method (pFEM), and the basic matrix for each pFEM node in the time direction is described by a sparse matrix similarly to the static finite element problem. The algorithm used by the program does not require the Hessian matrix of the objective function and so has low memory requirements. It also has a relatively low computational cost, and is suited to parallel computation. The program was integrated as a solver module of the multidisciplinary analysis system CUMuLOUS (Computational Utility for Multidisciplinary Large scale Optimization of Undense System) which is under development by the Aerospace Research and Development Directorate (ARD) of the Japan Aerospace Exploration Agency (JAXA).
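    The memory argument in this abstract (no Hessian, hence low memory) is generic to first-order methods and can be illustrated independently of the paper's actual solver, which is not specified here: a minimizer that uses only gradients keeps O(n) state, whereas storing a Hessian costs O(n^2). A minimal first-order sketch:

```python
def minimize_gradient_descent(grad, x0, lr=0.1, steps=200):
    """First-order minimizer: stores only the current point and gradient
    (O(n) memory), never the n-by-n Hessian (O(n^2))."""
    x = list(x0)
    for _ in range(steps):
        g = grad(x)
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x

# Toy objective f(x) = sum_i (x_i - i)^2, whose gradient is 2*(x_i - i);
# the minimum is x_i = i.
grad = lambda x: [2.0 * (xi - i) for i, xi in enumerate(x)]
sol = minimize_gradient_descent(grad, [0.0] * 5)
```

    This is only a schematic of the memory trade-off; the program described above additionally exploits sparse pFEM matrices and parallel evaluation.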

  8. A simplified computational memory model from information processing.

    PubMed

    Zhang, Lanhua; Zhang, Dongsheng; Deng, Yuqin; Ding, Xiaoqian; Wang, Yan; Tang, Yiyuan; Sun, Baoliang

    2016-11-23

    This paper proposes a computational model of memory from the viewpoint of information processing. The model, called the simplified memory information retrieval network (SMIRN), is a bi-modular hierarchical functional memory network obtained by abstracting memory function and simulating memory information processing. First, a meta-memory is defined to represent a neuron or brain cortex on the basis of biology and graph theory, and an intra-modular network is developed with the modeling algorithm by mapping nodes and edges; the bi-modular network is then delineated with intra-modular and inter-modular connections. Finally, a polynomial retrieval algorithm is introduced. We simulate the memory phenomena and the functions of memorization and strengthening with information-processing algorithms. The theoretical analysis and the simulation results show that the model is in accordance with memory phenomena from an information-processing view.

  9. Computer viruses

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.

    1988-01-01

    The worm, Trojan horse, bacterium, and virus are destructive programs that attack information stored in a computer's memory. Virus programs, which propagate by incorporating copies of themselves into other programs, are a growing menace in the late-1980s world of unprotected, networked workstations and personal computers. Limited immunity is offered by memory protection hardware, digitally authenticated object programs, and antibody programs that kill specific viruses. Additional immunity can be gained from the practice of digital hygiene, primarily the refusal to use software from untrusted sources. Full immunity requires attention in a social dimension, the accountability of programmers.

  10. Human interferon and its inducers: clinical program overview at Roswell Park Memorial Institute.

    PubMed

    Carter, W A; Horoszewicz, J S

    1978-11-01

    An overview of the clinical interferon program at Roswell Park Memorial Institute is presented. Purified fibroblast interferon and a novel inducer of human interferon [rIn-r(C12,U)n] are being evaluated for possible antiviral, antiproliferative, and immunomodulatory activities in patients with cancer.

  11. Data systems and computer science space data systems: Onboard memory and storage

    NASA Technical Reports Server (NTRS)

    Shull, Tom

    1991-01-01

    The topics are presented in viewgraph form and include the following: technical objectives; technology challenges; state-of-the-art assessment; mass storage comparison; SODR drive and system concepts; program description; vertical Bloch line (VBL) device concept; relationship to external programs; and backup charts for memory and storage.

  12. GRAM-86 - FOUR DIMENSIONAL GLOBAL REFERENCE ATMOSPHERE MODEL

    NASA Technical Reports Server (NTRS)

    Johnson, D.

    1994-01-01

    The Four-D Global Reference Atmosphere program was developed from an empirical atmospheric model which generates values for pressure, density, temperature, and winds from surface level to orbital altitudes. This program can be used to generate altitude profiles of atmospheric parameters along any simulated trajectory through the atmosphere. The program was developed for design applications in the Space Shuttle program, such as the simulation of external tank re-entry trajectories. Other potential applications would be global circulation and diffusion studies, and generating profiles for comparison with other atmospheric measurement techniques, such as satellite measured temperature profiles and infrasonic measurement of wind profiles. The program is an amalgamation of two empirical atmospheric models for the low (below 25km) and the high (above 90km) atmosphere, with a newly developed latitude-longitude dependent model for the middle atmosphere. The high atmospheric region above 115km is simulated entirely by the Jacchia (1970) model. The Jacchia program sections are in separate subroutines so that other thermospheric-exospheric models could easily be adapted if required for special applications. The atmospheric region between 30km and 90km is simulated by a latitude-longitude dependent empirical model modification of the latitude dependent empirical model of Groves (1971). Between 90km and 115km a smooth transition between the modified Groves values and the Jacchia values is accomplished by a fairing technique. Below 25km the atmospheric parameters are computed by the 4-D worldwide atmospheric model of Spiegler and Fowler (1972). This data set is not included. Between 25km and 30km an interpolation scheme is used between the 4-D results and the modified Groves values.
The output parameters consist of components for: (1) latitude, longitude, and altitude dependent monthly and annual means, (2) quasi-biennial oscillations (QBO), and (3) random perturbations to partially simulate the variability due to synoptic, diurnal, planetary wave, and gravity wave variations. Quasi-biennial and random variation perturbations are computed from parameters determined by various empirical studies and are added to the monthly mean values. The UNIVAC version of GRAM is written in UNIVAC FORTRAN and has been implemented on a UNIVAC 1110 under control of EXEC 8 with a central memory requirement of approximately 30K of 36 bit words. The GRAM program was developed in 1976 and GRAM-86 was released in 1986. The monthly data files were last updated in 1986. The DEC VAX version of GRAM is written in FORTRAN 77 and has been implemented on a DEC VAX 11/780 under control of VMS 4.X with a central memory requirement of approximately 100K of 8 bit bytes. The GRAM program was originally developed in 1976 and later converted to the VAX in 1986 (GRAM-86). The monthly data files were last updated in 1986.
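    The fairing technique used between 90 km and 115 km amounts to a smooth blend of two model profiles. The sketch below uses a cosine fairing weight; the two profile functions are illustrative placeholders, not the actual Groves or Jacchia formulas:

```python
import math

def groves_temperature(z_km):
    """Placeholder lower-atmosphere profile (illustrative only)."""
    return 180.0 + 1.5 * (z_km - 90.0)

def jacchia_temperature(z_km):
    """Placeholder thermosphere profile (illustrative only)."""
    return 300.0 + 8.0 * (z_km - 115.0)

def faired_temperature(z_km, z_lo=90.0, z_hi=115.0):
    """Blend the two models smoothly across the transition zone."""
    if z_km <= z_lo:
        return groves_temperature(z_km)
    if z_km >= z_hi:
        return jacchia_temperature(z_km)
    # Cosine fairing weight: 0 at z_lo, 1 at z_hi, zero slope at both ends,
    # so the blended profile joins each model without a kink.
    w = 0.5 * (1.0 - math.cos(math.pi * (z_km - z_lo) / (z_hi - z_lo)))
    return (1.0 - w) * groves_temperature(z_km) + w * jacchia_temperature(z_km)
```

    The same blending pattern applies to the 25-30 km interpolation between the 4-D model and the modified Groves values.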

  13. A New Conceptualization of Human Visual Sensory-Memory

    PubMed Central

    Öğmen, Haluk; Herzog, Michael H.

    2016-01-01

    Memory is an essential component of cognition and disorders of memory have significant individual and societal costs. The Atkinson–Shiffrin “modal model” forms the foundation of our understanding of human memory. It consists of three stores: Sensory Memory (SM), whose visual component is called iconic memory, Short-Term Memory (STM; also called working memory, WM), and Long-Term Memory (LTM). Since its inception, shortcomings of all three components of the modal model have been identified. While the theories of STM and LTM underwent significant modifications to address these shortcomings, models of the iconic memory remained largely unchanged: A high capacity but rapidly decaying store whose contents are encoded in retinotopic coordinates, i.e., according to how the stimulus is projected on the retina. The fundamental shortcoming of iconic memory models is that, because contents are encoded in retinotopic coordinates, the iconic memory cannot hold any useful information under normal viewing conditions when objects or the subject are in motion. Hence, half a century after its formulation, it remains an unresolved problem whether and how the first stage of the modal model serves any useful function and how subsequent stages of the modal model receive inputs from the environment. Here, we propose a new conceptualization of human visual sensory memory by introducing an additional component whose reference-frame consists of motion-grouping based coordinates rather than retinotopic coordinates. We review data supporting this new model and discuss how it offers solutions to the paradoxes of the traditional model of sensory memory. PMID:27375519

  14. Programmable, reversible and repeatable wrinkling of shape memory polymer thin films on elastomeric substrates for smart adhesion.

    PubMed

    Wang, Yu; Xiao, Jianliang

    2017-08-09

    Programmable, reversible and repeatable wrinkling of shape memory polymer (SMP) thin films on elastomeric polydimethylsiloxane (PDMS) substrates is realized, by utilizing the heat responsive shape memory effect of SMPs. The dependencies of wrinkle wavelength and amplitude on program strain and SMP film thickness are shown to agree with the established nonlinear buckling theory. The wrinkling is reversible, as the wrinkled SMP thin film can be recovered to the flat state by heating up the bilayer system. The programming cycle between wrinkle and flat is repeatable, and different program strains can be used in different programming cycles to induce different surface morphologies. Enabled by the programmable, reversible and repeatable SMP film wrinkling on PDMS, smart, programmable surface adhesion with large tuning range is demonstrated.

  15. A Temporal Ratio Model of Memory

    ERIC Educational Resources Information Center

    Brown, Gordon D. A.; Neath, Ian; Chater, Nick

    2007-01-01

    A model of memory retrieval is described. The model embodies four main claims: (a) temporal memory--traces of items are represented in memory partly in terms of their temporal distance from the present; (b) scale-similarity--similar mechanisms govern retrieval from memory over many different timescales; (c) local distinctiveness--performance on a…

  16. The reduction of adult neurogenesis in depression impairs the retrieval of new as well as remote episodic memory

    PubMed Central

    Fang, Jing; Demic, Selver; Cheng, Sen

    2018-01-01

    Major depressive disorder (MDD) is associated with an impairment of episodic memory, but the mechanisms underlying this deficit remain unclear. Animal models of MDD find impaired adult neurogenesis (AN) in the dentate gyrus (DG), and AN in DG has been suggested to play a critical role in reducing the interference between overlapping memories through pattern separation. Here, we study the effect of reduced AN in MDD on the accuracy of episodic memory using computational modeling. We focus on how memory is affected when periods with a normal rate of AN (asymptomatic states) alternate with periods with a low rate (depressive episodes), which has never been studied before. Also, unlike previous models of adult neurogenesis, which consider memories as static patterns, we model episodic memory as sequences of neural activity patterns. In our model, AN adds additional random components to the memory patterns, which results in the decorrelation of similar patterns. Consistent with previous studies, higher rates of AN lead to higher memory accuracy in our model, which implies that memories stored in the depressive state are impaired. Intriguingly, our model makes the novel prediction that memories stored in an earlier asymptomatic state are also impaired by a later depressive episode. This retrograde effect worsens with increased duration of the depressive episode. Finally, pattern separation at the sensory processing stage does not improve, but rather worsens, the accuracy of episodic memory retrieval, suggesting an explanation for why AN is found in brain areas serving memory rather than sensory function. In conclusion, while cognitive retrieval biases might contribute to episodic memory deficits in MDD, our model suggests a mechanistic explanation that affects all episodic memories, regardless of emotional relevance. PMID:29879169
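    The core mechanism of the model, in which new random units appended to each stored trace decorrelate overlapping patterns, can be sketched in a few lines. The sizes and overlap levels below are illustrative, not taken from the paper:

```python
import random

random.seed(1)

def overlap(p, q):
    """Fraction of positions where two binary patterns agree."""
    return sum(a == b for a, b in zip(p, q)) / len(p)

# Two highly similar memory patterns (about 90% shared bits).
base = [random.randint(0, 1) for _ in range(200)]
p = list(base)
q = [b if random.random() < 0.9 else 1 - b for b in base]

# "Adult neurogenesis": each stored trace recruits its own set of new,
# randomly activated units, so the stored versions share fewer bits
# than the raw inputs did (pattern separation).
new_p = [random.randint(0, 1) for _ in range(200)]
new_q = [random.randint(0, 1) for _ in range(200)]
p_stored = p + new_p
q_stored = q + new_q

assert overlap(p_stored, q_stored) < overlap(p, q)
```

    Reducing the number of new units appended (the low-AN, depressive regime) moves the stored overlap back toward the input overlap, which is the interference mechanism the model studies.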

  18. Final Project Report. Scalable fault tolerance runtime technology for petascale computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krishnamoorthy, Sriram; Sadayappan, P

    With the massive number of components comprising the forthcoming petascale computer systems, hardware failures will be routinely encountered during execution of large-scale applications. Due to the multidisciplinary, multiresolution, and multiscale nature of the scientific problems that drive the demand for high-end systems, applications place increasingly differing demands on the system resources: disk, network, memory, and CPU. In addition to MPI, future applications are expected to use advanced programming models such as those developed under the DARPA HPCS program, as well as existing global address space programming models such as Global Arrays, UPC, and Co-Array Fortran. While there has been a considerable amount of work in fault-tolerant MPI, with a number of strategies and extensions for fault tolerance proposed, virtually none of the advanced models proposed for emerging petascale systems is currently fault aware. To achieve fault tolerance, development of underlying runtime and OS technologies able to scale to the petascale level is needed. This project evaluated a range of runtime techniques for fault tolerance for advanced programming models.

  19. Memory operation mechanism of fullerene-containing polymer memory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakajima, Anri, E-mail: anakajima@hiroshima-u.ac.jp; Fujii, Daiki

    2015-03-09

    The memory operation mechanism of fullerene-containing nanocomposite gate insulators was investigated while varying the kind of fullerene in the polymer gate insulator. We clarified what kinds of traps, and at which positions in the nanocomposite, the injected electrons or holes are stored. The reason for the difference in the ease of programming was clarified by taking into account the charging energy of an injected electron. The dependence of the carrier dynamics on the kind of fullerene molecule was investigated. A nonuniform distribution of injected carriers occurred after application of a large-magnitude programming voltage, due to the width distribution of the polystyrene barrier between adjacent fullerene molecules. Through these investigations, we demonstrated a nanocomposite gate with fullerene molecules having excellent retention characteristics and programming capability. This will lead to the realization of practical organic memories with fullerene-containing polymer nanocomposites.

  20. Single Event Upset Analysis: On-orbit performance of the Alpha Magnetic Spectrometer Digital Signal Processor Memory aboard the International Space Station

    NASA Astrophysics Data System (ADS)

    Li, Jiaqiang; Choutko, Vitaly; Xiao, Liyi

    2018-03-01

    Based on the collection of error data from the Alpha Magnetic Spectrometer (AMS) Digital Signal Processors (DSP), on-orbit Single Event Upsets (SEUs) of the DSP program memory are analyzed. The daily error distribution and the time intervals between errors are calculated to evaluate the reliability of the system. The particle density distribution of the International Space Station (ISS) orbit is presented, and the effects from the South Atlantic Anomaly (SAA) and the geomagnetic poles are analyzed. The impact of solar events on the DSP program memory is assessed by combining data analysis with Monte Carlo (MC) simulation. From the analysis and simulation results, it is concluded that the area corresponding to the SAA is the main source of errors on the ISS orbit. Solar events can also cause errors in DSP program memory, but the effect depends on the on-orbit particle density.

  1. Effectiveness of Working Memory Training among Subjects Currently on Sick Leave Due to Complex Symptoms.

    PubMed

    Aasvik, Julie K; Woodhouse, Astrid; Stiles, Tore C; Jacobsen, Henrik B; Landmark, Tormod; Glette, Mari; Borchgrevink, Petter C; Landrø, Nils I

    2016-01-01

    Introduction: The current study examined whether adaptive working memory training (Cogmed QM) has the potential to improve inhibitory control, working memory capacity, and perceptions of memory functioning in a group of patients currently on sick leave due to symptoms of pain, insomnia, fatigue, depression and anxiety. Participants who were referred to a vocational rehabilitation center volunteered to take part in the study. Methods: Participants were randomly assigned to either a training condition (N = 25) or a control condition (N = 29). Participants in the training condition received working memory training in addition to the clinical intervention offered as part of the rehabilitation program, while participants in the control condition received treatment as usual, i.e., the rehabilitation program only. Inhibitory control was measured by the Stop Signal Task, working memory was assessed by the Spatial Working Memory Test, and perceptions of memory functioning were assessed by the Everyday Memory Questionnaire-Revised. Results: Participants in the training group showed a significant improvement on the post-tests of inhibitory control when compared with the comparison group (p = 0.025). The groups did not differ on the post-tests of working memory. Both groups reported fewer memory problems at post-testing, but there was no sizeable difference between the two groups. Conclusions: Results indicate that working memory training does not improve general working memory capacity per se. Nor does it seem to give any added effects in terms of targeting and improving self-perceived memory functioning. Results do, however, provide evidence to suggest that inhibitory control is accessible and susceptible to modification by adaptive working memory training.

  2. Memory training interventions for older adults: A meta-analysis

    PubMed Central

    Gross, Alden L.; Parisi, Jeanine M.; Spira, Adam P.; Kueider, Alexandra M.; Ko, Jean Y.; Saczynski, Jane S.; Samus, Quincy M.; Rebok, George W.

    2012-01-01

    A systematic review and meta-analysis of memory training research was conducted to characterize the effect of memory strategies on memory performance among cognitively intact, community-dwelling older adults, and to identify characteristics of individuals and of programs associated with improved memory. The review identified 402 publications, of which 35 studies met criteria for inclusion. The overall effect size estimate, representing the mean standardized difference in pre-post change between memory-trained and control groups, was 0.31 standard deviations (SD; 95% confidence interval (CI): 0.22, 0.39). The pre-post training effect for memory-trained interventions was 0.43 SD (95% CI: 0.29, 0.57) and the practice effect for control groups was 0.06 SD (95% CI: -0.05, 0.16). Among 10 distinct memory strategies identified in studies, meta-analytic methods revealed that training multiple strategies was associated with larger training gains (p=0.04), although this association did not reach statistical significance after adjusting for multiple comparisons. Treatment gains among memory-trained individuals were not better after training in any particular strategy, or by the average age of participants, session length, or type of control condition. These findings can inform the design of future memory training programs for older adults. PMID:22423647
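
    The effect sizes above are standardized differences in pre-post change between memory-trained and control groups. As a minimal sketch of how such an estimate is computed for a single study (the data below are invented, and the meta-analysis's exact estimator may differ, e.g. in its choice of variance):

```python
import math

def pre_post_effect_size(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Difference in mean pre-post change between groups, divided by the
    pooled pre-test standard deviation (one common variant)."""
    mean = lambda xs: sum(xs) / len(xs)
    var = lambda xs: sum((x - mean(xs)) ** 2 for x in xs) / (len(xs) - 1)
    change_t = mean(treat_post) - mean(treat_pre)
    change_c = mean(ctrl_post) - mean(ctrl_pre)
    nt, nc = len(treat_pre), len(ctrl_pre)
    pooled_sd = math.sqrt(((nt - 1) * var(treat_pre) + (nc - 1) * var(ctrl_pre))
                          / (nt + nc - 2))
    return (change_t - change_c) / pooled_sd

# Invented scores: the trained group gains 3 points, controls gain 1;
# both groups have a pre-test SD of 2, so the effect size is (3 - 1) / 2 = 1.0.
es = pre_post_effect_size([10, 12, 14], [13, 15, 17], [10, 12, 14], [11, 13, 15])
```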

  3. Effector CD8 T cells dedifferentiate into long-lived memory cells.

    PubMed

    Youngblood, Ben; Hale, J Scott; Kissick, Haydn T; Ahn, Eunseon; Xu, Xiaojin; Wieland, Andreas; Araki, Koichi; West, Erin E; Ghoneim, Hazem E; Fan, Yiping; Dogra, Pranay; Davis, Carl W; Konieczny, Bogumila T; Antia, Rustom; Cheng, Xiaodong; Ahmed, Rafi

    2017-12-21

    Memory CD8 T cells that circulate in the blood and are present in lymphoid organs are an essential component of long-lived T cell immunity. These memory CD8 T cells remain poised to rapidly elaborate effector functions upon re-exposure to pathogens, but also have many properties in common with naive cells, including pluripotency and the ability to migrate to the lymph nodes and spleen. Thus, memory cells embody features of both naive and effector cells, fuelling a long-standing debate centred on whether memory T cells develop from effector cells or directly from naive cells. Here we show that long-lived memory CD8 T cells are derived from a subset of effector T cells through a process of dedifferentiation. To assess the developmental origin of memory CD8 T cells, we investigated changes in DNA methylation programming at naive and effector cell-associated genes in virus-specific CD8 T cells during acute lymphocytic choriomeningitis virus infection in mice. Methylation profiling of terminal effector versus memory-precursor CD8 T cell subsets showed that, rather than retaining a naive epigenetic state, the subset of cells that gives rise to memory cells acquired de novo DNA methylation programs at naive-associated genes and became demethylated at the loci of classically defined effector molecules. Conditional deletion of the de novo methyltransferase Dnmt3a at an early stage of effector differentiation resulted in reduced methylation and faster re-expression of naive-associated genes, thereby accelerating the development of memory cells. Longitudinal phenotypic and epigenetic characterization of the memory-precursor effector subset of virus-specific CD8 T cells transferred into antigen-free mice revealed that differentiation to memory cells was coupled to erasure of de novo methylation programs and re-expression of naive-associated genes. Thus, epigenetic repression of naive-associated genes in effector CD8 T cells can be reversed in cells that develop into long-lived memory CD8 T cells while key effector genes remain demethylated, demonstrating that memory T cells arise from a subset of fate-permissive effector T cells.

  4. Contrasting single and multi-component working-memory systems in dual tasking.

    PubMed

    Nijboer, Menno; Borst, Jelmer; van Rijn, Hedderik; Taatgen, Niels

    2016-05-01

    Working memory can be a major source of interference in dual tasking. However, there is no consensus on whether this interference is the result of a single working memory bottleneck, or of interactions between different working memory components that together form a complete working-memory system. We report a behavioral and an fMRI dataset in which working memory requirements are manipulated during multitasking. We show that a computational cognitive model that assumes a distributed version of working memory accounts for both behavioral and neuroimaging data better than a model that takes a more centralized approach. The model's working memory consists of an attentional focus, declarative memory, and a subvocalized rehearsal mechanism. Thus, the data and model favor an account where working memory interference in dual tasking is the result of interactions between different resources that together form a working-memory system. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. Resistance Exercise Reduces Seizure Occurrence, Attenuates Memory Deficits and Restores BDNF Signaling in Rats with Chronic Epilepsy.

    PubMed

    de Almeida, Alexandre Aparecido; Gomes da Silva, Sérgio; Lopim, Glauber Menezes; Vannucci Campos, Diego; Fernandes, Jansen; Cabral, Francisco Romero; Arida, Ricardo Mario

    2017-04-01

    Epilepsy is a disease characterized by recurrent, unprovoked seizures. Cognitive impairment is an important comorbidity of chronic epilepsy. Human and animal model studies of epilepsy have shown that aerobic exercise induces beneficial structural and functional changes and reduces the number of seizures. However, little is yet understood about the effects of resistance exercise on epilepsy. We evaluated the effects of a resistance exercise program on the number of seizures, long-term memory and the expression/activation of signaling proteins in rats with epilepsy. The number of seizures was quantified by video-monitoring and long-term memory was assessed by an inhibitory avoidance test. Using western blotting, multiplex and enzyme-linked immunosorbent assays, we determined the effects of a 4-week resistance exercise program on IGF-1 and BDNF levels and on ERK, CREB, and mTOR activation in the hippocampus of rats with epilepsy. Rats with epilepsy submitted to resistance exercise showed a decrease in the number of seizures compared to non-exercised epileptic rats. Memory deficits were attenuated by resistance exercise. Rats with epilepsy showed an increase in IGF-1 levels, which was restored to control levels by resistance exercise. BDNF levels and ERK and mTOR activation were decreased in rats with epilepsy, and resistance exercise restored these to control levels. In conclusion, resistance exercise reduced seizure occurrence and mitigated memory deficits in rats with epilepsy. These resistance exercise-induced beneficial effects can be related to changes in IGF-1 and BDNF levels and the activation of their signaling proteins. Our findings indicate that resistance exercise might be included as a complementary therapeutic strategy for epilepsy treatment.

  6. Schedulers with load-store queue awareness

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Tong; Eichenberger, Alexandre E.; Jacob, Arpith C.

    2017-02-07

    In one embodiment, a computer-implemented method includes tracking a size of a load-store queue (LSQ) during compile time of a program. The size of the LSQ is time-varying and indicates how many memory access instructions of the program are on the LSQ. The method further includes scheduling, by a computer processor, a plurality of memory access instructions of the program based on the size of the LSQ.
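
    A minimal sketch of the idea (the capacity, latency, and instruction encoding are invented for illustration; the patent itself covers compile-time tracking in general): a greedy scheduler that models LSQ occupancy cycle by cycle and hoists ALU work past memory ops whenever the modeled queue is full.

```python
LSQ_CAPACITY = 2   # assumed queue size, purely illustrative
MEM_LATENCY = 3    # assumed cycles a memory op occupies an LSQ slot

def schedule(instrs):
    """Greedy list scheduling against a modeled, time-varying LSQ occupancy.

    instrs: list of (kind, name) with kind in {"mem", "alu"}. Each cycle the
    first issuable instruction is emitted; memory ops are skipped while the
    modeled LSQ is full, so ALU work is hoisted past stalled loads.
    """
    pending = list(instrs)
    in_flight = []          # retirement cycles of LSQ-resident memory ops
    order, cycle = [], 0
    while pending:
        in_flight = [t for t in in_flight if t > cycle]   # retire finished ops
        for i, (kind, name) in enumerate(pending):
            if kind == "mem" and len(in_flight) >= LSQ_CAPACITY:
                continue                                   # LSQ full: defer
            if kind == "mem":
                in_flight.append(cycle + MEM_LATENCY)
            order.append(name)
            del pending[i]
            break
        cycle += 1          # advance even if nothing issued (stall cycle)
    return order

program = [("mem", "ld0"), ("mem", "ld1"), ("mem", "ld2"),
           ("alu", "add0"), ("alu", "add1"), ("mem", "ld3")]
order = schedule(program)
```

    With these parameters, ld2 would overflow the two-entry queue in cycle 2, so add0 is issued in its place and ld2 is emitted one cycle later, once a slot retires.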

  7. Schedulers with load-store queue awareness

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Tong; Eichenberger, Alexandre E.; Jacob, Arpith C.

    2017-01-24

    In one embodiment, a computer-implemented method includes tracking a size of a load-store queue (LSQ) during compile time of a program. The size of the LSQ is time-varying and indicates how many memory access instructions of the program are on the LSQ. The method further includes scheduling, by a computer processor, a plurality of memory access instructions of the program based on the size of the LSQ.

  8. Parallelization and automatic data distribution for nuclear reactor simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liebrock, L.M.

    1997-07-01

    Detailed attempts at realistic nuclear reactor simulations currently take many times real time to execute on high-performance workstations. Even the fastest sequential machine cannot run these simulations fast enough to ensure that the best corrective measure is used during a nuclear accident to prevent a minor malfunction from becoming a major catastrophe. Since sequential computers have nearly reached the speed-of-light barrier, these simulations will have to be run in parallel to make significant improvements in speed. In physical reactor plants, parallelism abounds. Fluids flow, controls change, and reactions occur in parallel, with only adjacent components directly affecting each other. These do not occur in the sequentialized manner, with global instantaneous effects, that is often used in simulators. Development of parallel algorithms that more closely approximate the real-world operation of a reactor may, in addition to speeding up the simulations, actually improve the accuracy and reliability of the predictions generated. Three types of parallel architecture (shared memory machines, distributed memory multicomputers, and distributed networks) are briefly reviewed as targets for parallelization of nuclear reactor simulation. Various parallelization models (loop-based model, shared memory model, functional model, data parallel model, and a combined functional and data parallel model) are discussed along with their advantages and disadvantages for nuclear reactor simulation. A variety of tools are introduced for each of the models. Emphasis is placed on the data parallel model as the primary focus for two-phase flow simulation. Tools to support data parallel programming for multiple component applications and special parallelization considerations are also discussed.

  9. Targeting latent function: Encouraging effective encoding for successful memory training and transfer

    PubMed Central

    Lustig, Cindy; Flegal, Kristin E.

    2009-01-01

    Cognitive training programs for older adults often result in improvements at the group level. However, there are typically large age and individual differences in the size of training benefits. These differences may be related to the degree to which participants implement the processes targeted by the training program. To test this possibility, we tested older adults in a memory-training procedure either under specific strategy instructions designed to encourage semantic, integrative encoding, or in a condition that encouraged time and attention to encoding but allowed participants to choose their own strategy. Both conditions improved the performance of old-old adults relative to an earlier study (Bissig & Lustig, 2007) and reduced self-reports of everyday memory errors. Performance in the strategy-instruction group was related to pre-existing ability, performance in the strategy-choice group was not. The strategy-choice group performed better on a laboratory transfer test of recognition memory, and training performance was correlated with reduced everyday memory errors. Training programs that target latent but inefficiently-used abilities while allowing flexibility in bringing those abilities to bear may best promote effective training and transfer. PMID:19140647

  10. A Memory-Process Model of Symbolic Assimilation

    DTIC Science & Technology

    1974-04-01

    Systems: Final Report of a Study Group, published for Artificial Intelligence by North-Holland/American...contribution of the methods is answered by evaluating the same program in the context of the field of artificial intelligence. The remainder of the...been widely demonstrated on a diversity of tasks in the history of artificial intelligence. See [r.71], chapter 2. Given a particular task to be

  11. A simplified computational memory model from information processing

    PubMed Central

    Zhang, Lanhua; Zhang, Dongsheng; Deng, Yuqin; Ding, Xiaoqian; Wang, Yan; Tang, Yiyuan; Sun, Baoliang

    2016-01-01

    This paper proposes a computational model of memory from the view of information processing. The model, called the simplified memory information retrieval network (SMIRN), is a bi-modular hierarchical functional memory network built by abstracting memory function and simulating memory information processing. First, meta-memory is defined to represent the neuron or brain cortices based on biology and graph theory, and we develop an intra-modular network with the modeling algorithm by mapping nodes and edges; the bi-modular network is then delineated with intra-modular and inter-modular connections. Finally, a polynomial retrieval algorithm is introduced. In this paper we simulate the memory phenomena and the functions of memorization and strengthening with information-processing algorithms. The theoretical analysis and the simulation results show that the model is in accordance with memory phenomena from the information-processing view. PMID:27876847

  12. CREB and the discovery of cognitive enhancers.

    PubMed

    Scott, Roderick; Bourtchuladze, Rusiko; Gossweiler, Scott; Dubnau, Josh; Tully, Tim

    2002-01-01

    In the past few years, a series of molecular-genetic, biochemical, cellular and behavioral studies in fruit flies, sea slugs and mice have confirmed a long-standing notion that long-term memory formation depends on the synthesis of new proteins. Experiments focused on the cAMP-responsive transcription factor CREB have established that neural activity-induced regulation of gene transcription promotes a synaptic growth process that strengthens the connections among active neurons. This process constitutes a physical basis for the engram, and CREB is a "molecular switch" that produces the engram. Helicon Therapeutics has been formed to identify drug compounds that enhance memory formation via augmentation of CREB biochemistry. Candidate compounds have been identified from a high-throughput cell-based screen and are being evaluated in animal models of memory formation. A gene discovery program also seeks to identify new genes that function downstream of CREB during memory formation, as a source for new drug discoveries in the future. Together, these drug and gene discovery efforts promise a new class of pharmaceutical therapies for the treatment of various forms of cognitive dysfunction.

  13. Solving large mixed linear models using preconditioned conjugate gradient iteration.

    PubMed

    Strandén, I; Lidauer, M

    1999-12-01

    Continuous evaluation of dairy cattle with a random regression test-day model requires a fast solving method and algorithm. A new computing technique, feasible in Jacobi- and conjugate gradient-based iterative methods using iteration on data, is presented. In the new computing technique, the calculations in the multiplication of a vector by a matrix were reorganized into three steps instead of the commonly used two steps. The three-step method was implemented in a general mixed linear model program that used preconditioned conjugate gradient iteration. Performance of this program in comparison to other general solving programs was assessed via estimation of breeding values using univariate, multivariate, and random regression test-day models. Central processing unit time per iteration with the new three-step technique was, at best, one-third that needed with the old technique. Performance was best with the test-day model, which was the largest and most complex model used. The new program did well in comparison to other general software. Programs keeping the mixed model equations in random access memory required at least 20 and 435% more time to solve the univariate and multivariate animal models, respectively. Computations with the second-best iteration-on-data program took approximately three and five times longer for the animal and test-day models, respectively, than did the new program. Good performance was due to fast computing time per iteration and quick convergence to the final solutions. Use of preconditioned conjugate gradient-based methods in solving large breeding value problems is supported by our findings.
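
    For reference, a minimal Jacobi (diagonal) preconditioned conjugate gradient solver looks like the following textbook sketch; it illustrates the kernel the paper restructures (the matrix-vector product inside each iteration), not the authors' three-step iteration-on-data technique itself.

```python
def pcg(A, b, tol=1e-12, max_iter=100):
    """Jacobi-preconditioned conjugate gradients for a dense SPD matrix A."""
    n = len(b)
    matvec = lambda M, v: [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    x = [0.0] * n
    r = b[:]                                   # residual b - A@x with x = 0
    z = [r[i] / A[i][i] for i in range(n)]     # apply the diagonal preconditioner
    p = z[:]
    rz = sum(r[i] * z[i] for i in range(n))
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rz / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if sum(ri * ri for ri in r) < tol:     # converged: residual is tiny
            break
        z = [r[i] / A[i][i] for i in range(n)]
        rz, rz_old = sum(r[i] * z[i] for i in range(n)), rz
        p = [z[i] + (rz / rz_old) * p[i] for i in range(n)]
    return x

A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]   # small symmetric positive definite example
b = [1.0, 2.0, 3.0]
x = pcg(A, b)
```

    In exact arithmetic CG converges in at most n iterations; the preconditioner matters for the large, ill-conditioned mixed-model systems the paper targets, where iteration counts, not exact termination, dominate run time.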

  14. GraphReduce: Processing Large-Scale Graphs on Accelerator-Based Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sengupta, Dipanjan; Song, Shuaiwen; Agarwal, Kapil

    2015-11-15

    Recent work on real-world graph analytics has sought to leverage the massive parallelism offered by GPU devices, but challenges remain due to the inherent irregularity of graph algorithms and the limited GPU-resident memory for storing large graphs. We present GraphReduce, a highly efficient and scalable GPU-based framework that operates on graphs exceeding the device’s internal memory capacity. GraphReduce adopts a combination of edge- and vertex-centric implementations of the Gather-Apply-Scatter programming model and operates on multiple asynchronous GPU streams to fully exploit the high degrees of parallelism in GPUs, with efficient graph data movement between the host and device.
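
    The Gather-Apply-Scatter (GAS) model mentioned above expresses a graph algorithm as per-vertex phases run in supersteps. A minimal single-threaded sketch of GAS-style single-source shortest paths (illustrative only; GraphReduce's contribution is running such programs out-of-core, streaming partitions between host and GPU memory):

```python
INF = float("inf")

def gas_sssp(n, edges, source):
    """Single-source shortest paths written as Gather-Apply-Scatter supersteps.

    edges: list of directed (u, v, weight) triples; n: number of vertices.
    """
    dist = [INF] * n
    dist[source] = 0.0
    active = {source}
    while active:
        # Gather: collect candidate distances along edges leaving active vertices.
        gathered = {}
        for u, v, w in edges:
            if u in active and dist[u] + w < gathered.get(v, INF):
                gathered[v] = dist[u] + w
        # Apply + Scatter: commit improvements; improved vertices become active.
        active = set()
        for v, d in gathered.items():
            if d < dist[v]:
                dist[v] = d
                active.add(v)
    return dist

edges = [(0, 1, 1.0), (0, 2, 4.0), (1, 2, 2.0), (2, 3, 1.0)]
dist = gas_sssp(4, edges, 0)
```

    The frontier-driven structure is what makes the model amenable to both edge-centric (loop over edges, as here) and vertex-centric formulations.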

  15. Architecture, Design and Implementation of RC64, a Many-Core High-Performance DSP for Space Applications

    NASA Astrophysics Data System (ADS)

    Ginosar, Ran; Aviely, Peleg; Liran, Tuvia; Alon, Dov; Dobkin, Reuven; Goldberg, Michael

    2013-08-01

    RC64, a novel 64-core many-core signal processing chip, targets DSP performance of 12.8 GIPS, 100 GOPS and 12.8 single-precision GFLOPS while dissipating only 3 Watts. RC64 employs advanced DSP cores, a multi-bank shared memory and a hardware scheduler, supports DDR2 memory, and communicates over five proprietary 6.4 Gbps channels. The programming model employs sequential fine-grain tasks and a separate task map to define task dependencies. RC64 is implemented as a 200 MHz ASIC in Tower 130 nm CMOS technology, assembled in a hermetically sealed ceramic QFP package and qualified to the highest space standards.
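
    The "sequential fine-grain tasks plus a separate task map" model can be sketched as a dependency-driven dispatcher (the task names and the software ready-queue are invented for illustration; on RC64 a hardware scheduler dispatches tasks across the 64 DSP cores):

```python
from collections import deque

def run_task_map(tasks, deps):
    """Run each task once all its prerequisites have finished (Kahn's algorithm).

    tasks: {name: zero-argument callable}; deps: {name: set of prerequisite names}.
    Returns the completion order.
    """
    indeg = {t: len(deps.get(t, ())) for t in tasks}
    followers = {t: [] for t in tasks}
    for t, prereqs in deps.items():
        for p in prereqs:
            followers[p].append(t)
    ready = deque(t for t, d in indeg.items() if d == 0)
    finished = []
    while ready:
        t = ready.popleft()
        tasks[t]()                      # the task body itself is sequential code
        finished.append(t)
        for f in followers[t]:          # "scatter" completion to dependents
            indeg[f] -= 1
            if indeg[f] == 0:
                ready.append(f)
    return finished

log = []
tasks = {name: (lambda n=name: log.append(n))
         for name in ("load", "fft", "filter", "store")}
deps = {"fft": {"load"}, "filter": {"load"}, "store": {"fft", "filter"}}
order = run_task_map(tasks, deps)
```

    Separating the task bodies from the dependency map is what lets independent tasks (here, fft and filter) run concurrently on different cores without the tasks themselves containing any synchronization code.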

  16. SCELib2: the new revision of SCELib, the parallel computational library of molecular properties in the single center approach

    NASA Astrophysics Data System (ADS)

    Sanna, N.; Morelli, G.

    2004-09-01

    In this paper we present the new version of the SCELib program (CPC catalogue identifier ADMG), a full numerical implementation of the Single Center Expansion (SCE) method. The physics involved is that of producing the SCE description of molecular electronic densities, of molecular electrostatic potentials and of molecular perturbed potentials due to a point negative or positive charge. This new revision of the program has been optimized to run in serial as well as in parallel execution mode, to support a larger set of molecular symmetries and to permit the restart of long-lasting calculations. To measure the performance of this new release, a comparative study has been carried out on the most powerful computing architectures in serial and parallel runs. The calculations reported in this paper refer to real-case, medium to large molecular systems, and are reported in full detail to benchmark the parallel architectures the new SCELib code will run on.
    Program summary
    Title of program: SCELib2
    Catalogue identifier: ADGU
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADGU
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
    Reference to previous versions: Comput. Phys. Commun. 128 (2) (2000) 139 (CPC catalogue identifier: ADMG)
    Does the new version supersede the original program?: Yes
    Computers for which the program is designed and others on which it has been tested: HP ES45 and rx2600, SUN ES4500, IBM SP, and any single-CPU workstation based on Alpha, SPARC, POWER, Itanium2 and X86 processors
    Installations: CASPUR, local
    Operating systems under which the program has been tested: HP Tru64 V5.X, SunOS V5.8, IBM AIX V5.X, Linux RedHat V8.0
    Programming language used: C
    Memory required to execute with typical data: 10 Mwords; up to 2000 Mwords depending on the molecular system and runtime parameters
    No. of bits in a word: 64
    No. of processors used: 1 to 32
    Has the code been vectorized or parallelized?: Yes
    No. of bytes in distributed program, including test data, etc.: 3 798 507
    No. of lines in distributed program, including test data, etc.: 187 226
    Distribution format: tar.gz
    Nature of physical problem: In this set of codes an efficient procedure is implemented to describe the wavefunction and related molecular properties of a polyatomic molecular system within the Single Center of Expansion (SCE) approximation. The resulting SCE wavefunction, electron density, electrostatic and exchange/correlation potentials can then be used via a proper Application Programming Interface (API) to describe the target molecular system in electron-molecule scattering calculations. The molecular properties expanded over a single center turn out to be of more general application, and some possible uses in quantum chemistry, biomodelling and drug design are also outlined.
    Method of solution: The polycentre Hartree-Fock solution for a molecule of arbitrary geometry, based on a linear combination of Gaussian-Type Orbitals (GTO), is expanded over a single center, typically the Center Of Mass (C.O.M.), by means of a Gauss-Legendre/Chebyshev quadrature over the θ, φ angular coordinates. The resulting SCE numerical wavefunction is then used to calculate the one-particle electron density, the electrostatic potential and two different models for the correlation/polarization potentials induced by the impinging electron, which have the correct asymptotic behaviour for the leading dipole molecular polarizabilities.
    Restrictions on the complexity of the problem: Depending on the molecular system under study and on the operating conditions, the program may or may not fit into available RAM memory. In the latter case a feature of the program is to memory-map a disk file in order to efficiently access the memory data through a disk device.
    Typical running time: The execution time strongly depends on the molecular target description and on the hardware/OS chosen; it is directly proportional to the (r, θ, φ) grid size and to the number of angular basis functions used. Thus, from the program printout of the main arrays' memory occupancy, the user can approximately derive the expected computer time needed for a given calculation executed in serial mode. For parallel executions the overall efficiency must also be taken into account, and this depends on the number of processors used as well as on the parallel architecture chosen, so a simple general law is at present not determinable.
    Unusual features of the program: The code has been engineered to use dynamic, runtime-determined global parameters with the aim of fitting all the data in RAM memory. Some unusual circumstances, e.g., large values of those parameters, may cause the program to run with unexpected performance reductions due to runtime bottlenecks such as memory swap operations, which strongly depend on the hardware used. In such cases, a parallel execution of the code is generally sufficient to fix the problem, since the data size is partitioned over the available processors. When a suitable parallel system is not available for execution, a mechanism of memory-mapped files can be used; with this option on, all the available memory is used as a buffer for a disk file containing the whole data set, giving better throughput than the traditional swapping/paging of the Unix OS.

  17. Effects of Working Memory Capacity and Domain Knowledge on Recall for Grocery Prices.

    PubMed

    Bermingham, Douglas; Gardner, Michael K; Woltz, Dan J

    2016-01-01

Hambrick and Engle (2002) proposed 3 models of how domain knowledge and working memory capacity may work together to influence episodic memory: a "rich-get-richer" model, a "building blocks" model, and a "compensatory" model. Their results supported the rich-get-richer model, although later work by Hambrick and Oswald (2005) found support for a building blocks model. We investigated the effects of domain knowledge and working memory on recall of studied grocery prices. Working memory was measured with 3 simple span tasks. A contrast of realistic versus fictitious foods in the episodic memory task served as our manipulation of domain knowledge, because participants could not have domain knowledge of fictitious food prices. There was a strong effect for domain knowledge (realistic food-price pairs were easier to remember) and a moderate effect for working memory capacity (higher working memory capacity produced better recall). Furthermore, domain knowledge and working memory capacity produced a small but significant interaction in 1 measure of price recall. This supports the compensatory model and stands in contrast to previous research.

  18. Elements of episodic-like memory in animal models.

    PubMed

    Crystal, Jonathon D

    2009-03-01

    Representations of unique events from one's past constitute the content of episodic memories. A number of studies with non-human animals have revealed that animals remember specific episodes from their past (referred to as episodic-like memory). The development of animal models of memory holds enormous potential for gaining insight into the biological bases of human memory. Specifically, given the extensive knowledge of the rodent brain, the development of rodent models of episodic memory would open new opportunities to explore the neuroanatomical, neurochemical, neurophysiological, and molecular mechanisms of memory. Development of such animal models holds enormous potential for studying functional changes in episodic memory in animal models of Alzheimer's disease, amnesia, and other human memory pathologies. This article reviews several approaches that have been used to assess episodic-like memory in animals. The approaches reviewed include the discrimination of what, where, and when in a radial arm maze, dissociation of recollection and familiarity, object recognition, binding, unexpected questions, and anticipation of a reproductive state. The diversity of approaches may promote the development of converging lines of evidence on the difficult problem of assessing episodic-like memory in animals.

  19. Modeling individual differences in working memory performance: a source activation account

    PubMed Central

    Daily, Larry Z.; Lovett, Marsha C.; Reder, Lynne M.

    2008-01-01

    Working memory resources are needed for processing and maintenance of information during cognitive tasks. Many models have been developed to capture the effects of limited working memory resources on performance. However, most of these models do not account for the finding that different individuals show different sensitivities to working memory demands, and none of the models predicts individual subjects' patterns of performance. We propose a computational model that accounts for differences in working memory capacity in terms of a quantity called source activation, which is used to maintain goal-relevant information in an available state. We apply this model to capture the working memory effects of individual subjects at a fine level of detail across two experiments. This, we argue, strengthens the interpretation of source activation as working memory capacity. PMID:19079561

  20. Mental Fitness for Life: Assessing the Impact of an 8-Week Mental Fitness Program on Healthy Aging.

    ERIC Educational Resources Information Center

    Cusack, Sandra A.; Thompson, Wendy J. A.; Rogers, Mary E.

    2003-01-01

    A mental fitness program taught goal setting, critical thinking, creativity, positive attitudes, learning, memory, and self-expression to adults over 50 (n=22). Pre/posttests of depression and cognition revealed significant impacts on mental fitness, cognitive confidence, goal setting, optimism, creativity, flexibility, and memory. Not significant…

  1. A Balancing Act: Interpreting Tragedy at the 9/11 Memorial Museum

    ERIC Educational Resources Information Center

    Rauch, Noah

    2018-01-01

    The 9/11 Memorial Museum's docent program offers visitors artifact-based entry-points into a difficult, emotional history. The program's launch raised a host of questions, many centered on how to balance and convey strongly held, often traumatic, and sometimes conflicting experiences with a newly constructed institutional narrative. This article…

  2. Fast Initialization of Bubble-Memory Systems

    NASA Technical Reports Server (NTRS)

    Looney, K. T.; Nichols, C. D.; Hayes, P. J.

    1986-01-01

Improved scheme several orders of magnitude faster than normal initialization scheme. State-of-the-art commercial bubble-memory device used. Hardware interface designed to connect controlling microprocessor to bubble-memory circuitry. System software written to exercise various functions of bubble-memory system; comparison made between normal and fast techniques. Future implementations of approach may utilize E2PROM (electrically-erasable programmable read-only memory) to provide greater system flexibility. Fast-initialization technique applicable to all bubble-memory devices.

  3. Design and Rationale of the Cognitive Intervention to Improve Memory in Heart Failure Patients Study.

    PubMed

    Pressler, Susan J; Giordani, Bruno; Titler, Marita; Gradus-Pizlo, Irmina; Smith, Dean; Dorsey, Susan G; Gao, Sujuan; Jung, Miyeon

    Memory loss is an independent predictor of mortality among heart failure patients. Twenty-three percent to 50% of heart failure patients have comorbid memory loss, but few interventions are available to treat the memory loss. The aims of this 3-arm randomized controlled trial were to (1) evaluate efficacy of computerized cognitive training intervention using BrainHQ to improve primary outcomes of memory and serum brain-derived neurotrophic factor levels and secondary outcomes of working memory, instrumental activities of daily living, and health-related quality of life among heart failure patients; (2) evaluate incremental cost-effectiveness of BrainHQ; and (3) examine depressive symptoms and genomic moderators of BrainHQ effect. A sample of 264 heart failure patients within 4 equal-sized blocks (normal/low baseline cognitive function and gender) will be randomly assigned to (1) BrainHQ, (2) active control computer-based crossword puzzles, and (3) usual care control groups. BrainHQ is an 8-week, 40-hour program individualized to each patient's performance. Data collection will be completed at baseline and at 10 weeks and 4 and 8 months. Descriptive statistics, mixed model analyses, and cost-utility analysis using intent-to-treat approach will be computed. This research will provide new knowledge about the efficacy of BrainHQ to improve memory and increase serum brain-derived neurotrophic factor levels in heart failure. If efficacious, the intervention will provide a new therapeutic approach that is easy to disseminate to treat a serious comorbid condition of heart failure.

  4. Decoding memory features from hippocampal spiking activities using sparse classification models.

    PubMed

    Dong Song; Hampson, Robert E; Robinson, Brian S; Marmarelis, Vasilis Z; Deadwyler, Sam A; Berger, Theodore W

    2016-08-01

To understand how memory information is encoded in the hippocampus, we build classification models to decode memory features from hippocampal CA3 and CA1 spatio-temporal patterns of spikes recorded from epilepsy patients performing a memory-dependent delayed match-to-sample task. The classification model consists of a set of B-spline basis functions for extracting memory features from the spike patterns, and a sparse logistic regression classifier for generating binary categorical output of memory features. Results show that classification models can extract a significant amount of memory information with respect to types of memory tasks and categories of sample images used in the task, despite the high level of variability in prediction accuracy due to the small sample size. These results support the hypothesis that memories are encoded in hippocampal activities and have important implications for the development of hippocampal memory prostheses.
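A minimal sketch of the decoding stage this abstract describes, under loud assumptions: synthetic random features stand in for the B-spline spike-pattern coefficients, and a plain proximal-gradient (ISTA) solver stands in for the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for spike-pattern features: 40 trials x 12
# basis-function coefficients (the study extracts these with B-spline
# bases over CA3/CA1 spike patterns; here they are just random).
X = rng.normal(size=(40, 12))
true_w = np.zeros(12)
true_w[:3] = [2.0, -1.5, 1.0]          # only 3 informative features
y = (X @ true_w > 0).astype(float)     # binary memory-feature label

def fit_sparse_logistic(X, y, lam=0.05, lr=0.1, steps=2000):
    """L1-penalized logistic regression via proximal gradient (ISTA)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))         # predicted probability
        w -= lr * (X.T @ (p - y)) / len(y)         # gradient step
        # Soft-thresholding step enforces sparsity in the weights.
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)
    return w

w = fit_sparse_logistic(X, y)
accuracy = float(((X @ w > 0) == (y == 1.0)).mean())
```

The L1 penalty is what makes the decoder "sparse": coefficients on uninformative spike-pattern features are driven toward zero, which matters when trials are few, as the abstract's caveat about small sample size notes.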

  5. Efficient Graph Based Assembly of Short-Read Sequences on Hybrid Core Architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sczyrba, Alex; Pratap, Abhishek; Canon, Shane

    2011-03-22

Advanced architectures can deliver dramatically increased throughput for genomics and proteomics applications, reducing time-to-completion in some cases from days to minutes. One such architecture, hybrid-core computing, marries a traditional x86 environment with a reconfigurable coprocessor, based on field programmable gate array (FPGA) technology. In addition to higher throughput, increased performance can fundamentally improve research quality by allowing more accurate, previously impractical approaches. We will discuss the approach used by Convey's de Bruijn graph constructor for short-read, de-novo assembly. Bioinformatics applications that have random access patterns to large memory spaces, such as graph-based algorithms, experience memory performance limitations on cache-based x86 servers. Convey's highly parallel memory subsystem allows application-specific logic to simultaneously access 8192 individual words in memory, significantly increasing effective memory bandwidth over cache-based memory systems. Many algorithms, such as Velvet and other de Bruijn graph based, short-read, de-novo assemblers, can greatly benefit from this type of memory architecture. Furthermore, small data type operations (four nucleotides can be represented in two bits) make more efficient use of logic gates than the data types dictated by conventional programming models. JGI is comparing the performance of Convey's graph constructor and Velvet on both synthetic and real data. We will present preliminary results on memory usage and run time metrics for various data sets of different sizes, from small microbial and fungal genomes to a very large cow rumen metagenome. For genomes with references we will also present assembly quality comparisons between the two assemblers.
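The two-bit nucleotide packing mentioned in the abstract can be illustrated with a short sketch (Python here; the hybrid-core hardware of course operates on FPGA logic and memory words, not Python integers):

```python
# Two-bit encoding of nucleotides: four bases fit in two bits, so a
# k-mer packs into 2k bits of an ordinary integer (or an FPGA word).
CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
BASES = "ACGT"

def pack_kmer(kmer: str) -> int:
    """Encode a k-mer as an integer, two bits per base."""
    value = 0
    for base in kmer:
        value = (value << 2) | CODE[base]
    return value

def unpack_kmer(value: int, k: int) -> str:
    """Decode a two-bit-packed integer back into its k-mer string."""
    bases = []
    for _ in range(k):
        bases.append(BASES[value & 0b11])
        value >>= 2
    return "".join(reversed(bases))

packed = pack_kmer("GATTACA")   # 7 bases -> 14 bits
```

Packed k-mers are what make de Bruijn graph nodes cheap to store and compare: a 31-mer fits in a single 64-bit word, versus 31 bytes as text.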

  6. Time Constraints and Resource Sharing in Adults' Working Memory Spans

    ERIC Educational Resources Information Center

    Barrouillet, Pierre; Bernardin, Sophie; Camos, Valerie

    2004-01-01

    This article presents a new model that accounts for working memory spans in adults, the time-based resource-sharing model. The model assumes that both components (i.e., processing and maintenance) of the main working memory tasks require attention and that memory traces decay as soon as attention is switched away. Because memory retrievals are…

  7. Stretching-induced nanostructures on shape memory polyurethane films and their regulation to osteoblasts morphology.

    PubMed

    Xing, Juan; Ma, Yufei; Lin, Manping; Wang, Yuanliang; Pan, Haobo; Ruan, Changshun; Luo, Yanfeng

    2016-10-01

Programming such as stretching, compression and bending is indispensable for endowing polyurethanes with shape memory effects. Despite extensive investigations on the contributions of programming processes to the shape memory effects of polyurethane, less attention has been paid to the nanostructures of shape memory polyurethane surfaces during the programming process. Here we found that stretching could induce the reassembly of hard domains and thereby change the nanostructures on the film surfaces with dependence on the stretching ratios (0%, 50%, 100%, and 200%). In as-cast polyurethane films, hard segments sequentially assembled into nano-scale hard domains, round or fibrillar islands, and fibrillar apophyses. Upon stretching, the islands packed along the stretching axis to form reoriented fibrillar apophyses along the stretching direction. Stretching only changed the chemical patterns on polyurethane films without significantly altering surface roughness, with the primary composition of fibrillar apophyses being hydrophilic hard domains. Further analysis of osteoblasts morphology revealed that the focal adhesion formation and osteoblasts orientation were in accordance with the chemical patterns of the underlying stretched films, which corroborates the vital roles of stretching-induced nanostructures in regulating osteoblasts morphology. These novel findings suggest that programming might hold great potential for patterning polyurethane surfaces so as to direct cellular behavior. In addition, this work lays groundwork for guiding the programming of shape memory polyurethanes to produce appropriate nanostructures for predetermined medical applications. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. Cognitive stimulation intervention for elders with mild cognitive impairment compared with normal aged subjects: preliminary results.

    PubMed

    Wenisch, Emilie; Cantegreil-Kallen, Inge; De Rotrou, Jocelyne; Garrigue, Pia; Moulin, Florence; Batouche, Fériel; Richard, Aurore; De Sant'Anna, Martha; Rigaud, Anne Sophie

    2007-08-01

Cognitive training programs have been developed for Alzheimer's disease patients and the healthy elderly population. Collective cognitive stimulation programs have been shown to be efficient for subjects with memory complaint. The aim of this study was to evaluate the benefit of such cognitive programs in populations with Mild Cognitive Impairment (MCI). Twelve patients with MCI and twelve cognitively normal elders were administered a cognitive stimulation program. Cognitive performance measures (Logical Memory, Word paired associative learning task, Trail Making Test, verbal fluency test) were collected before and after the intervention. A gain score [(post-score - pre-score)/pre-score] was calculated for each variable and compared between groups. The analysis revealed a larger intervention effect size in MCI than in normal elders' performances on the associative learning task (immediate recall: p<0.05, delayed recall: p<0.01). The intervention was more beneficial in improving associative memory abilities in MCI than in normal subjects. At the end of the intervention, the MCI group had lower results than the normal group only for the delayed recall of Logical Memory. Although further studies are needed for more details on the impact of cognitive stimulation programs on MCI patients, this intervention is effective in compensating associative memory difficulties of these patients. Among non-pharmacological interventions, cognitive stimulation therapy is a repeatable and inexpensive collective method that can easily be provided to various populations with the aim of slowing down the rate of decline in elderly persons with cognitive impairment.

  9. Categorical Working Memory Representations are used in Delayed Estimation of Continuous Colors

    PubMed Central

    Hardman, Kyle O; Vergauwe, Evie; Ricker, Timothy J

    2016-01-01

    In the last decade, major strides have been made in understanding visual working memory through mathematical modeling of color production responses. In the delayed color estimation task (Wilken & Ma, 2004), participants are given a set of colored squares to remember and a few seconds later asked to reproduce those colors by clicking on a color wheel. The degree of error in these responses is characterized with mathematical models that estimate working memory precision and the proportion of items remembered by participants. A standard mathematical model of color memory assumes that items maintained in memory are remembered through memory for precise details about the particular studied shade of color. We contend that this model is incomplete in its present form because no mechanism is provided for remembering the coarse category of a studied color. In the present work we remedy this omission and present a model of visual working memory that includes both continuous and categorical memory representations. In two experiments we show that our new model outperforms this standard modeling approach, which demonstrates that categorical representations should be accounted for by mathematical models of visual working memory. PMID:27797548

  10. Categorical working memory representations are used in delayed estimation of continuous colors.

    PubMed

    Hardman, Kyle O; Vergauwe, Evie; Ricker, Timothy J

    2017-01-01

    In the last decade, major strides have been made in understanding visual working memory through mathematical modeling of color production responses. In the delayed color estimation task (Wilken & Ma, 2004), participants are given a set of colored squares to remember, and a few seconds later asked to reproduce those colors by clicking on a color wheel. The degree of error in these responses is characterized with mathematical models that estimate working memory precision and the proportion of items remembered by participants. A standard mathematical model of color memory assumes that items maintained in memory are remembered through memory for precise details about the particular studied shade of color. We contend that this model is incomplete in its present form because no mechanism is provided for remembering the coarse category of a studied color. In the present work, we remedy this omission and present a model of visual working memory that includes both continuous and categorical memory representations. In 2 experiments, we show that our new model outperforms this standard modeling approach, which demonstrates that categorical representations should be accounted for by mathematical models of visual working memory. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
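A hedged sketch of the kind of mixture model this abstract argues for: a continuous (von Mises) component centered on the studied color, a categorical component centered on the color's category, and uniform guessing. All parameter values below are illustrative, not the authors' fitted values:

```python
import numpy as np

def vonmises_pdf(x, mu, kappa):
    """Von Mises density on the color wheel (angles in radians);
    np.i0 is the modified Bessel function that normalizes it."""
    return np.exp(kappa * np.cos(x - mu)) / (2.0 * np.pi * np.i0(kappa))

def response_density(err_continuous, err_categorical,
                     p_cont=0.5, p_cat=0.3,
                     kappa_cont=8.0, kappa_cat=4.0):
    """Mixture density for one color report: a precise continuous
    trace, a coarse categorical trace, and uniform guessing.
    err_continuous  = response minus the studied color;
    err_categorical = response minus the color's category center."""
    p_guess = 1.0 - p_cont - p_cat
    return (p_cont * vonmises_pdf(err_continuous, 0.0, kappa_cont)
            + p_cat * vonmises_pdf(err_categorical, 0.0, kappa_cat)
            + p_guess / (2.0 * np.pi))

# Sanity check: with the category center placed at the studied color,
# the density integrates to 1 over the wheel.
xs = np.linspace(-np.pi, np.pi, 20001)
total = float(np.sum(response_density(xs, xs)) * (xs[1] - xs[0]))
```

Fitting such a model to delayed-estimation errors (e.g., by maximum likelihood) lets the categorical component absorb the clustering of responses around category centers that a purely continuous model misattributes to memory precision.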

  11. A Cache Design to Exploit Structural Locality

    DTIC Science & Technology

    1991-12-01

memory and secondary storage. Main memory was used to store the instructions and data of an executing program, while secondary storage held programs ... efficiency of the CPU and faster turnaround of executing programs. In addition to the well known spatial and temporal aspects of locality, Hobart has ... identified a third aspect, which he has called structural locality (9). This type of locality is defined as the tendency of an executing program to

  12. Support for Debugging Automatically Parallelized Programs

    NASA Technical Reports Server (NTRS)

    Hood, Robert; Jost, Gabriele; Biegel, Bryan (Technical Monitor)

    2001-01-01

    This viewgraph presentation provides information on the technical aspects of debugging computer code that has been automatically converted for use in a parallel computing system. Shared memory parallelization and distributed memory parallelization entail separate and distinct challenges for a debugging program. A prototype system has been developed which integrates various tools for the debugging of automatically parallelized programs including the CAPTools Database which provides variable definition information across subroutines as well as array distribution information.

  13. The FORCE - A highly portable parallel programming language

    NASA Technical Reports Server (NTRS)

    Jordan, Harry F.; Benten, Muhammad S.; Alaghband, Gita; Jakob, Ruediger

    1989-01-01

    This paper explains why the FORCE parallel programming language is easily portable among six different shared-memory multiprocessors, and how a two-level macro preprocessor makes it possible to hide low-level machine dependencies and to build machine-independent high-level constructs on top of them. These FORCE constructs make it possible to write portable parallel programs largely independent of the number of processes and the specific shared-memory multiprocessor executing them.

  14. The FORCE: A highly portable parallel programming language

    NASA Technical Reports Server (NTRS)

    Jordan, Harry F.; Benten, Muhammad S.; Alaghband, Gita; Jakob, Ruediger

    1989-01-01

Here, it is explained why the FORCE parallel programming language is easily portable among six different shared-memory multiprocessors, and how a two-level macro preprocessor makes it possible to hide low-level machine dependencies and to build machine-independent high-level constructs on top of them. These FORCE constructs make it possible to write portable parallel programs largely independent of the number of processes and the specific shared-memory multiprocessor executing them.

  15. Virtual memory

    NASA Technical Reports Server (NTRS)

    Denning, P. J.

    1986-01-01

    Virtual memory was conceived as a way to automate overlaying of program segments. Modern computers have very large main memories, but need automatic solutions to the relocation and protection problems. Virtual memory serves this need as well and is thus useful in computers of all sizes. The history of the idea is traced, showing how it has become a widespread, little noticed feature of computers today.

  16. The effects of drumming on working memory in older adults.

    PubMed

    Degé, Franziska; Kerkovius, Katharina

    2018-05-04

Our study investigated the effect of a music training program on working memory (verbal memory, visual memory, and central executive processing) in older adults. The experimental group was musically trained (drumming and singing), whereas one control group received a literature training program and a second control group was untrained. We randomly assigned 24 participants (all females; M = 77 years and 3 months) to the music group, the literature group, and the untrained group. The training groups were trained for 15 weeks. The three groups did not differ significantly in age, socioeconomic status, music education, musical aptitude, cognitive abilities, or depressive symptoms. We did not find differences in the music group in central executive function. However, we found a potential effect of music training on verbal memory and an impact of music training on visual memory. Musically trained participants remembered more words from a word list than both control groups, and they were able to remember more symbol sequences correctly than the control groups. Our findings show a possible effect of music training on verbal and visual memory in older people. © 2018 New York Academy of Sciences.

  17. FOXO1 opposition of CD8+ T cell effector programming confers early memory properties and phenotypic diversity.

    PubMed

    Delpoux, Arnaud; Lai, Chen-Yen; Hedrick, Stephen M; Doedens, Andrew L

    2017-10-17

The factors and steps controlling postinfection CD8+ T cell terminal effector versus memory differentiation are incompletely understood. Whereas we found that naive TCF7 (alias "Tcf-1") expression is FOXO1 independent, early postinfection we report bimodal, FOXO1-dependent expression of the memory-essential transcription factor TCF7 in pathogen-specific CD8+ T cells. We determined that the early postinfection TCF7-high population is marked by low TIM3 expression and bears memory signature hallmarks before the appearance of the established memory precursor marker CD127 (IL-7R). These cells exhibit diminished TBET, GZMB, mTOR signaling, and cell cycle progression. At day 5 postinfection, TCF7-high cells express higher memory-associated BCL2 and EOMES, as well as increased accumulation potential and capacity to differentiate into memory phenotype cells. TCF7 retroviral transduction opposes GZMB expression and the formation of KLRG1-positive phenotype cells, demonstrating an active role for TCF7 in extinguishing the effector program and forestalling terminal differentiation. Past the peak of the cellular immune response, we report a gradient of FOXO1 and TCF7 expression, which functions to oppose TBET and orchestrate a continuum of effector-to-memory phenotypes.

  18. A communication-avoiding, hybrid-parallel, rank-revealing orthogonalization method.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoemmen, Mark

    2010-11-01

Orthogonalization consumes much of the run time of many iterative methods for solving sparse linear systems and eigenvalue problems. Commonly used algorithms, such as variants of Gram-Schmidt or Householder QR, have performance dominated by communication. Here, 'communication' includes both data movement between the CPU and memory, and messages between processors in parallel. Our Tall Skinny QR (TSQR) family of algorithms requires asymptotically fewer messages between processors and data movement between CPU and memory than typical orthogonalization methods, yet achieves the same accuracy as Householder QR factorization. Furthermore, in block orthogonalizations, TSQR is faster and more accurate than existing approaches for orthogonalizing the vectors within each block ('normalization'). TSQR's rank-revealing capability also makes it useful for detecting deflation in block iterative methods, for which existing approaches sacrifice performance, accuracy, or both. We have implemented a version of TSQR that exploits both distributed-memory and shared-memory parallelism, and supports real and complex arithmetic. Our implementation is optimized for the case of orthogonalizing a small number (5-20) of very long vectors. The shared-memory parallel component uses Intel's Threading Building Blocks, though its modular design supports other shared-memory programming models as well, including computation on the GPU. Our implementation achieves speedups of 2 times or more over competing orthogonalizations. It is available now in the development branch of the Trilinos software package, and will be included in the 10.8 release.
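The TSQR reduction idea can be sketched compactly (a serial NumPy illustration of a one-level reduction tree; the actual implementation distributes the block factorizations across processors and threads, and also recovers the Q factor):

```python
import numpy as np

def tsqr_r(A, nblocks=4):
    """Tall Skinny QR: compute the R factor with a one-level
    reduction tree. Each row block is factored independently (in
    parallel, in a real implementation); only the small n-by-n R
    factors are stacked and factored again, so the tall matrix is
    read once and inter-block communication moves only tiny
    triangles instead of full columns."""
    blocks = np.array_split(A, nblocks, axis=0)
    Rs = [np.linalg.qr(block, mode="r") for block in blocks]
    return np.linalg.qr(np.vstack(Rs), mode="r")

rng = np.random.default_rng(1)
A = rng.normal(size=(1000, 8))   # very tall, very skinny
R = tsqr_r(A)                    # satisfies R.T @ R == A.T @ A
```

Because each local factorization is itself a Householder QR, the combined R is as accurate as a monolithic Householder factorization, which is the accuracy claim the abstract makes.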

  19. Vienna FORTRAN: A FORTRAN language extension for distributed memory multiprocessors

    NASA Technical Reports Server (NTRS)

    Chapman, Barbara; Mehrotra, Piyush; Zima, Hans

    1991-01-01

    Exploiting the performance potential of distributed memory machines requires a careful distribution of data across the processors. Vienna FORTRAN is a language extension of FORTRAN which provides the user with a wide range of facilities for such mapping of data structures. However, programs in Vienna FORTRAN are written using global data references. Thus, the user has the advantage of a shared memory programming paradigm while explicitly controlling the placement of data. The basic features of Vienna FORTRAN are presented along with a set of examples illustrating the use of these features.

  20. Multi-shape active composites by 3D printing of digital shape memory polymers

    NASA Astrophysics Data System (ADS)

    Wu, Jiangtao; Yuan, Chao; Ding, Zhen; Isakov, Michael; Mao, Yiqi; Wang, Tiejun; Dunn, Martin L.; Qi, H. Jerry

    2016-04-01

    Recent research using 3D printing to create active structures has added an exciting new dimension to 3D printing technology. After being printed, these active, often composite, materials can change their shape over time; this has been termed as 4D printing. In this paper, we demonstrate the design and manufacture of active composites that can take multiple shapes, depending on the environmental temperature. This is achieved by 3D printing layered composite structures with multiple families of shape memory polymer (SMP) fibers - digital SMPs - with different glass transition temperatures (Tg) to control the transformation of the structure. After a simple single-step thermomechanical programming process, the fiber families can be sequentially activated to bend when the temperature is increased. By tuning the volume fraction of the fibers, bending deformation can be controlled. We develop a theoretical model to predict the deformation behavior for better understanding the phenomena and aiding the design. We also design and print several flat 2D structures that can be programmed to fold and open themselves when subjected to heat. With the advantages of an easy fabrication process and the controllable multi-shape memory effect, the printed SMP composites have a great potential in 4D printing applications.

  1. Multi-shape active composites by 3D printing of digital shape memory polymers.

    PubMed

    Wu, Jiangtao; Yuan, Chao; Ding, Zhen; Isakov, Michael; Mao, Yiqi; Wang, Tiejun; Dunn, Martin L; Qi, H Jerry

    2016-04-13

    Recent research using 3D printing to create active structures has added an exciting new dimension to 3D printing technology. After being printed, these active, often composite, materials can change their shape over time; this has been termed as 4D printing. In this paper, we demonstrate the design and manufacture of active composites that can take multiple shapes, depending on the environmental temperature. This is achieved by 3D printing layered composite structures with multiple families of shape memory polymer (SMP) fibers - digital SMPs - with different glass transition temperatures (Tg) to control the transformation of the structure. After a simple single-step thermomechanical programming process, the fiber families can be sequentially activated to bend when the temperature is increased. By tuning the volume fraction of the fibers, bending deformation can be controlled. We develop a theoretical model to predict the deformation behavior for better understanding the phenomena and aiding the design. We also design and print several flat 2D structures that can be programmed to fold and open themselves when subjected to heat. With the advantages of an easy fabrication process and the controllable multi-shape memory effect, the printed SMP composites have a great potential in 4D printing applications.

  2. Multi-shape active composites by 3D printing of digital shape memory polymers

    PubMed Central

    Wu, Jiangtao; Yuan, Chao; Ding, Zhen; Isakov, Michael; Mao, Yiqi; Wang, Tiejun; Dunn, Martin L.; Qi, H. Jerry

    2016-01-01

    Recent research using 3D printing to create active structures has added an exciting new dimension to 3D printing technology. After being printed, these active, often composite, materials can change their shape over time; this has been termed as 4D printing. In this paper, we demonstrate the design and manufacture of active composites that can take multiple shapes, depending on the environmental temperature. This is achieved by 3D printing layered composite structures with multiple families of shape memory polymer (SMP) fibers – digital SMPs - with different glass transition temperatures (Tg) to control the transformation of the structure. After a simple single-step thermomechanical programming process, the fiber families can be sequentially activated to bend when the temperature is increased. By tuning the volume fraction of the fibers, bending deformation can be controlled. We develop a theoretical model to predict the deformation behavior for better understanding the phenomena and aiding the design. We also design and print several flat 2D structures that can be programmed to fold and open themselves when subjected to heat. With the advantages of an easy fabrication process and the controllable multi-shape memory effect, the printed SMP composites have a great potential in 4D printing applications. PMID:27071543

  3. Strategies for concurrent processing of complex algorithms in data driven architectures

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Mielke, Roland R.

    1988-01-01

The purpose is to document research to develop strategies for concurrent processing of complex algorithms in data driven architectures. The problem domain consists of decision-free algorithms having large-grained, computationally complex primitive operations. Such algorithms are often found in signal processing and control applications. The anticipated multiprocessor environment is a data flow architecture containing between two and twenty computing elements. Each computing element is a processor having local program memory, and which communicates with a common global data memory. A new graph-theoretic model called ATAMM, which establishes rules for relating a decomposed algorithm to its execution in a data flow architecture, is presented. The ATAMM model is used to determine strategies to achieve optimum time performance and to develop a system diagnostic software tool. In addition, preliminary work on a new multiprocessor operating system based on the ATAMM specifications is described.

  4. A visual LISP program for voxelizing AutoCAD solid models

    NASA Astrophysics Data System (ADS)

    Marschallinger, Robert; Jandrisevits, Carmen; Zobl, Fritz

    2015-01-01

    AutoCAD solid models are increasingly recognized in geological and geotechnical 3D modeling. In order to bridge the currently existing gap between AutoCAD solid models and the grid modeling realm, a Visual LISP program is presented that converts AutoCAD solid models into voxel arrays. Acad2Vox voxelizer works on a 3D-model that is made up of arbitrary non-overlapping 3D-solids. After definition of the target voxel array geometry, 3D-solids are scanned at grid positions and properties are streamed to an ASCII output file. Acad2Vox has a novel voxelization strategy that combines a hierarchical reduction of sampling dimensionality with an innovative use of AutoCAD-specific methods for a fast and memory-saving operation. Acad2Vox provides georeferenced, voxelized analogs of 3D design data that can act as regions-of-interest in later geostatistical modeling and simulation. The Supplement includes sample geological solid models with instructions for practical work with Acad2Vox.
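The grid-scanning step described above can be illustrated with a generic sketch. The `contains` predicates and property codes below are hypothetical stand-ins for the AutoCAD-specific point-classification methods Acad2Vox actually uses; only the idea of sampling non-overlapping solids at voxel positions is taken from the abstract.

```python
import numpy as np

def voxelize(solids, origin, spacing, shape):
    """Sample a set of non-overlapping solids on a regular grid.

    solids  -- list of (property_code, contains) pairs, where contains(p)
               is a point-in-solid test (hypothetical stand-in for the
               AutoCAD-specific classification methods)
    origin  -- coordinates of the voxel array's corner
    spacing -- edge length of a cubic voxel
    shape   -- (nx, ny, nz) voxel counts
    """
    voxels = np.zeros(shape, dtype=np.uint8)
    nx, ny, nz = shape
    for i in range(nx):
        for j in range(ny):
            for k in range(nz):
                # centre of voxel (i, j, k)
                p = origin + spacing * (np.array([i, j, k]) + 0.5)
                for prop, contains in solids:
                    if contains(p):
                        voxels[i, j, k] = prop
                        break  # solids are non-overlapping
    return voxels

# toy example: a unit sphere labelled with property code 1
sphere = (1, lambda p: np.dot(p, p) <= 1.0)
grid = voxelize([sphere], origin=np.array([-1.0, -1.0, -1.0]),
                spacing=0.5, shape=(4, 4, 4))
```

In practice the inner loop would stream each property to an ASCII file rather than hold the full array, which is what makes the memory-saving operation possible for large grids.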

  5. Sleep on your memory traces: How sleep effects can be explained by Act-In, a functional memory model.

    PubMed

    Cherdieu, Mélaine; Versace, Rémy; Rey, Amandine E; Vallet, Guillaume T; Mazza, Stéphanie

    2018-06-01

    Numerous studies have explored the effect of sleep on memory. It is well known that a period of sleep, compared to a similar period of wakefulness, protects memories from interference, improves performance, and might also reorganize memory traces in a way that encourages creativity and rule extraction. It is assumed that these benefits come from the reactivation of brain networks, mainly involving the hippocampal structure, as well as from their synchronization with neocortical networks during sleep, thereby underpinning sleep-dependent memory consolidation and reorganization. However, this memory reorganization is difficult to explain within classical memory models. The present paper aims to describe whether the influence of sleep on memory could be explained using a multiple trace memory model that is consistent with the concept of embodied cognition: the Act-In (activation-integration) memory model. We propose an original approach to the results observed in sleep research on the basis of two simple mechanisms, namely activation and integration. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Self-help memory training for healthy older adults in a residential care center: specific and transfer effects on performance and beliefs.

    PubMed

    Cavallini, Elena; Bottiroli, Sara; Capotosto, Emanuela; De Beni, Rossana; Pavan, Giorgio; Vecchi, Tomaso; Borella, Erika

    2015-08-01

    Cognitive flexibility has repeatedly been shown to improve after training programs in community-dwelling older adults, but few studies have focused on healthy older adults living in other settings. This study investigated the efficacy of self-help training for healthy older adults in a residential care center on memory tasks they practiced (associative and object list learning tasks) and any transfer to other tasks (grocery lists, face-name learning, figure-word pairing, word lists, and text learning). Transfer effects on everyday life (using a problem-solving task) and on participants' beliefs regarding their memory (efficacy and control) were also examined. With the aid of a manual, the training adopted a learner-oriented approach that directly encouraged learners to generalize strategic behavior to new tasks. The maintenance of any training benefits was assessed after 6 months. The study involved 34 residential care center residents (aged 70-99 years old) with no cognitive impairments who were randomly assigned to two programs: the experimental group followed the self-help training program, whereas the active control group was involved in general cognitive stimulation activities. Training benefits emerged in the trained group for the tasks that were practiced. Transfer effects were found in memory and everyday problem-solving tasks and on memory beliefs. The effects of training were generally maintained in both practiced and unpracticed memory tasks. These results demonstrate that learner-oriented self-help training enhances memory performance and memory beliefs, in the short term at least, even in residential care center residents. Copyright © 2014 John Wiley & Sons, Ltd.

  7. Attempting to model dissociations of memory.

    PubMed

    Reber, Paul J.

    2002-05-01

    Kinder and Shanks report simulations aimed at describing a single-system model of the dissociation between declarative and non-declarative memory. This model attempts to capture both Artificial Grammar Learning (AGL) and recognition memory with a single underlying representation. However, the model fails to reflect an essential feature of recognition memory - that it occurs after a single exposure - and the simulations may instead describe a potentially interesting property of over-training non-declarative memory.

  8. Models for Total-Dose Radiation Effects in Non-Volatile Memory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campbell, Philip Montgomery; Wix, Steven D.

    The objective of this work is to develop models to predict radiation effects in non-volatile memory: flash memory and ferroelectric RAM. In flash memory, experiments have found that the internal high-voltage generators (charge pumps) are the most sensitive to radiation damage. Models are presented for radiation effects in charge pumps that demonstrate the experimental results. Floating gate models are developed for the memory cell in two types of flash memory devices by Intel and Samsung. These models utilize Fowler-Nordheim tunneling and hot electron injection to charge and erase the floating gate. Erase times are calculated from the models and compared with experimental results for different radiation doses. FRAM is less sensitive to radiation than flash memory, but measurements show that above 100 Krad FRAM suffers from a large increase in leakage current. A model for this effect is developed which compares closely with the measurements.
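The Fowler-Nordheim erase calculation mentioned above can be sketched numerically. The constants, geometry, and sign convention below are illustrative assumptions, not parameters of the Intel or Samsung devices modeled in the report: tunneling current density follows J = A·E²·exp(−B/E), and an erase time falls out of integrating the floating-gate charge until it is nearly depleted.

```python
import math

# Illustrative Fowler-Nordheim constants (not fitted to any real device)
A = 1e-6   # A/V^2, pre-exponential factor
B = 2.5e8  # V/cm, slope factor

def fn_current(e_field):
    """Fowler-Nordheim tunneling current density J = A*E^2*exp(-B/E)."""
    return A * e_field * e_field * math.exp(-B / e_field)

def erase_time(q0, c_fg, t_ox, v_erase, dt=1e-6, q_target_frac=0.01):
    """Crude explicit-Euler discharge of a floating gate (per unit area).

    In this sign convention the stored electron charge q adds to the field
    across the tunnel oxide, so the current (and hence the erase rate)
    drops as the gate discharges -- which is why erase curves flatten out.
    """
    q, t = q0, 0.0
    while q > q_target_frac * q0:
        e_field = (v_erase + q / c_fg) / t_ox  # V/cm across the oxide
        q -= fn_current(e_field) * dt
        t += dt
    return t
```

For example, `erase_time(1e-7, 3.45e-7, 1e-6, 10.0)` (charge in C/cm², oxide capacitance in F/cm², a 10 nm oxide in cm, and a 10 V erase pulse) converges in a few tens of microseconds of simulated time with these toy constants.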

  9. Efficient partitioning and assignment on programs for multiprocessor execution

    NASA Technical Reports Server (NTRS)

    Standley, Hilda M.

    1993-01-01

    The general problem studied is that of segmenting or partitioning programs for distribution across a multiprocessor system. Efficient partitioning and the assignment of program elements are of great importance since the time consumed in this overhead activity may easily dominate the computation, effectively eliminating any gains made by the use of the parallelism. In this study, the partitioning of sequentially structured programs (written in FORTRAN) is evaluated. Heuristics, developed for similar applications are examined. Finally, a model for queueing networks with finite queues is developed which may be used to analyze multiprocessor system architectures with a shared memory approach to the problem of partitioning. The properties of sequentially written programs form obstacles to large scale (at the procedure or subroutine level) parallelization. Data dependencies of even the minutest nature, reflecting the sequential development of the program, severely limit parallelism. The design of heuristic algorithms is tied to the experience gained in the parallel splitting. Parallelism obtained through the physical separation of data has seen some success, especially at the data element level. Data parallelism on a grander scale requires models that accurately reflect the effects of blocking caused by finite queues. A model for the approximation of the performance of finite queueing networks is developed. This model makes use of the decomposition approach combined with the efficiency of product form solutions.

  10. Advanced compilation techniques in the PARADIGM compiler for distributed-memory multicomputers

    NASA Technical Reports Server (NTRS)

    Su, Ernesto; Lain, Antonio; Ramaswamy, Shankar; Palermo, Daniel J.; Hodges, Eugene W., IV; Banerjee, Prithviraj

    1995-01-01

    The PARADIGM compiler project provides an automated means to parallelize programs, written in a serial programming model, for efficient execution on distributed-memory multicomputers. A previous implementation of the compiler based on the PTD representation allowed symbolic array sizes, affine loop bounds and array subscripts, and variable numbers of processors, provided that arrays were single- or multi-dimensionally block distributed. The techniques presented here extend the compiler to also accept multidimensional cyclic and block-cyclic distributions within a uniform symbolic framework. These extensions demand more sophisticated symbolic manipulation capabilities. A novel aspect of our approach is to meet this demand by interfacing PARADIGM with a powerful off-the-shelf symbolic package, Mathematica. This paper describes some of the Mathematica routines that perform various transformations, shows how they are invoked and used by the compiler to overcome the new challenges, and presents experimental results for code involving cyclic and block-cyclic arrays as evidence of the feasibility of the approach.

  11. Phyx: phylogenetic tools for unix.

    PubMed

    Brown, Joseph W; Walker, Joseph F; Smith, Stephen A

    2017-06-15

    The ease with which phylogenomic data can be generated has drastically escalated the computational burden for even routine phylogenetic investigations. To address this, we present phyx: a collection of programs written in C++ to explore, manipulate, analyze and simulate phylogenetic objects (alignments, trees and MCMC logs). Modelled after Unix/GNU/Linux command line tools, individual programs perform a single task and operate on standard I/O streams that can be piped to quickly and easily form complex analytical pipelines. Because of the stream-centric paradigm, memory requirements are minimized (often only a single tree or sequence in memory at any instance), and hence phyx is capable of efficiently processing very large datasets. phyx runs on POSIX-compliant operating systems. Source code, installation instructions, documentation and example files are freely available under the GNU General Public License at https://github.com/FePhyFoFum/phyx. Contact: eebsmith@umich.edu. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press.

  12. Phyx: phylogenetic tools for unix

    PubMed Central

    Brown, Joseph W.; Walker, Joseph F.; Smith, Stephen A.

    2017-01-01

    Abstract Summary: The ease with which phylogenomic data can be generated has drastically escalated the computational burden for even routine phylogenetic investigations. To address this, we present phyx: a collection of programs written in C++ to explore, manipulate, analyze and simulate phylogenetic objects (alignments, trees and MCMC logs). Modelled after Unix/GNU/Linux command line tools, individual programs perform a single task and operate on standard I/O streams that can be piped to quickly and easily form complex analytical pipelines. Because of the stream-centric paradigm, memory requirements are minimized (often only a single tree or sequence in memory at any instance), and hence phyx is capable of efficiently processing very large datasets. Availability and Implementation: phyx runs on POSIX-compliant operating systems. Source code, installation instructions, documentation and example files are freely available under the GNU General Public License at https://github.com/FePhyFoFum/phyx. Contact: eebsmith@umich.edu. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28174903

  13. Preconditioned implicit solvers for the Navier-Stokes equations on distributed-memory machines

    NASA Technical Reports Server (NTRS)

    Ajmani, Kumud; Liou, Meng-Sing; Dyson, Rodger W.

    1994-01-01

    The GMRES method is parallelized, and combined with local preconditioning to construct an implicit parallel solver to obtain steady-state solutions for the Navier-Stokes equations of fluid flow on distributed-memory machines. The new implicit parallel solver is designed to preserve the convergence rate of the equivalent 'serial' solver. A static domain-decomposition is used to partition the computational domain amongst the available processing nodes of the parallel machine. The SPMD (Single-Program Multiple-Data) programming model is combined with message-passing tools to develop the parallel code on a 32-node Intel Hypercube and a 512-node Intel Delta machine. The implicit parallel solver is validated for internal and external flow problems, and is found to compare identically with flow solutions obtained on a Cray Y-MP/8. A peak computational speed of 2300 MFlops/sec has been achieved on 512 nodes of the Intel Delta machine, for a problem size of 1024 K equations (256 K grid points).
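The static domain decomposition used above can be illustrated with a minimal 1-D row partitioner (a generic sketch; the paper does not specify its decomposition beyond it being static, so the contiguous-block scheme here is an assumption):

```python
def partition_rows(n_rows, n_ranks):
    """Static 1-D domain decomposition: assign contiguous row blocks to
    ranks, spreading any remainder over the first ranks so block sizes
    differ by at most one. Returns a list of (start, end) half-open ranges.
    """
    base, extra = divmod(n_rows, n_ranks)
    blocks, start = [], 0
    for r in range(n_ranks):
        size = base + (1 if r < extra else 0)
        blocks.append((start, start + size))
        start += size
    return blocks
```

In an SPMD code each rank would call this once, keep only its own `(start, end)` range, and exchange halo rows with neighbours via message passing.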

  14. Implications of the Declarative/Procedural Model for Improving Second Language Learning: The Role of Memory Enhancement Techniques

    ERIC Educational Resources Information Center

    Ullman, Michael T.; Lovelett, Jarrett T.

    2018-01-01

    The declarative/procedural (DP) model posits that the learning, storage, and use of language critically depend on two learning and memory systems in the brain: declarative memory and procedural memory. Thus, on the basis of independent research on the memory systems, the model can generate specific and often novel predictions for language. Till…

  15. Reliability of Memories Protected by Multibit Error Correction Codes Against MBUs

    NASA Astrophysics Data System (ADS)

    Ming, Zhu; Yi, Xiao Li; Chang, Liu; Wei, Zhang Jian

    2011-02-01

    As technology scales, more and more memory cells can be placed in a die. Therefore, the probability that a single event induces multiple bit upsets (MBUs) in adjacent memory cells increases. Generally, multibit error correction codes (MECCs) are effective approaches to mitigate MBUs in memories. In order to evaluate the robustness of protected memories, reliability models have been widely studied. Instead of irradiation experiments, the models can be used to quickly evaluate the reliability of memories early in the design. To build an accurate model, several situations should be considered. Firstly, when MBUs are present in memories, the errors induced by several events may overlap each other, which is more frequent than in the single event upset (SEU) case. Furthermore, radiation experiments show that the probability of MBUs strongly depends on the angle of the radiation event. However, reliability models which consider the overlap of multiple bit errors and the angle of the radiation event have not been proposed in the present literature. In this paper, a more accurate model of memories with MECCs is presented. Both the overlap of multiple bit errors and the angle of the event are considered in the model, which produces a more precise analysis in the calculation of mean time to failure (MTTF) for memory systems under MBUs. In addition, memories with and without scrubbing are analyzed in the proposed model. Finally, we evaluate the reliability of memories under MBUs in Matlab. The simulation results verify the validity of the proposed model.
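The quantity the model computes, MTTF of an ECC-protected memory under MBUs with possibly overlapping errors, can be approximated with a small Monte Carlo sketch. All parameters below (event rate, upset width, word geometry, scrubbing policy) are illustrative assumptions; the paper's analytic model, including the angle dependence, is not reproduced here.

```python
import random

def simulate_mttf(word_bits=64, correctable=2, event_rate=1e-3,
                  mbu_max=3, words=1024, scrub_interval=None,
                  trials=100, seed=1):
    """Monte Carlo MTTF estimate for an ECC-protected memory under MBUs.

    Each upset event flips 1..mbu_max adjacent bits of a random word;
    failure occurs when one word accumulates more erroneous bits than the
    code corrects. Errors from separate events may overlap in a word,
    which is exactly what distinguishes MBU analysis from the SEU case.
    Optional scrubbing periodically rewrites all still-correctable words.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        errors = [set() for _ in range(words)]  # bad bit positions per word
        t, last_scrub = 0.0, 0.0
        while True:
            t += rng.expovariate(event_rate)    # time of next upset event
            if scrub_interval is not None:
                while last_scrub + scrub_interval <= t:
                    last_scrub += scrub_interval
                    for e in errors:
                        e.clear()               # scrub rewrites every word
            w = rng.randrange(words)
            start = rng.randrange(word_bits)
            size = rng.randint(1, mbu_max)      # adjacent bits upset
            errors[w].update((start + i) % word_bits for i in range(size))
            if len(errors[w]) > correctable:
                break                           # uncorrectable word: failure
        total += t
    return total / trials
```

As a sanity check, a code that corrects nothing (`correctable=0`) should fail at the first event and thus show a much shorter MTTF than a 2-bit-correcting code.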

  16. Short-Term Memory and Aphasia: From Theory to Treatment.

    PubMed

    Minkina, Irene; Rosenberg, Samantha; Kalinyak-Fliszar, Michelene; Martin, Nadine

    2017-02-01

    This article reviews existing research on the interactions between verbal short-term memory and language processing impairments in aphasia. Theoretical models of short-term memory are reviewed, starting with a model assuming a separation between short-term memory and language, and progressing to models that view verbal short-term memory as a cognitive requirement of language processing. The review highlights a verbal short-term memory model derived from an interactive activation model of word retrieval. This model holds that verbal short-term memory encompasses the temporary activation of linguistic knowledge (e.g., semantic, lexical, and phonological features) during language production and comprehension tasks. Empirical evidence supporting this model, which views short-term memory in the context of the processes it subserves, is outlined. Studies that use a classic measure of verbal short-term memory (i.e., number of words/digits correctly recalled in immediate serial recall) as well as those that use more intricate measures (e.g., serial position effects in immediate serial recall) are discussed. Treatment research that uses verbal short-term memory tasks in an attempt to improve language processing is then summarized, with a particular focus on word retrieval. A discussion of the limitations of current research and possible future directions concludes the review.

  17. Short-Term Memory and Aphasia: From Theory to Treatment

    PubMed Central

    Minkina, Irene; Rosenberg, Samantha; Kalinyak-Fliszar, Michelene; Martin, Nadine

    2018-01-01

    This article reviews existing research on the interactions between verbal short-term memory and language processing impairments in aphasia. Theoretical models of short-term memory are reviewed, starting with a model assuming a separation between short-term memory and language, and progressing to models that view verbal short-term memory as a cognitive requirement of language processing. The review highlights a verbal short-term memory model derived from an interactive activation model of word retrieval. This model holds that verbal short-term memory encompasses the temporary activation of linguistic knowledge (e.g., semantic, lexical, and phonological features) during language production and comprehension tasks. Empirical evidence supporting this model, which views short-term memory in the context of the processes it subserves, is outlined. Studies that use a classic measure of verbal short-term memory (i.e., number of words/digits correctly recalled in immediate serial recall) as well as those that use more intricate measures (e.g., serial position effects in immediate serial recall) are discussed. Treatment research that uses verbal short-term memory tasks in an attempt to improve language processing is then summarized, with a particular focus on word retrieval. A discussion of the limitations of current research and possible future directions concludes the review. PMID:28201834

  18. Benchmarking and Evaluating Unified Memory for OpenMP GPU Offloading

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mishra, Alok; Li, Lingda; Kong, Martin

    Here, the latest OpenMP standard offers automatic device offloading capabilities which facilitate GPU programming. Despite this, there remain many challenges. One of these is the unified memory feature introduced in recent GPUs. GPUs in current and future HPC systems have enhanced support for unified memory space. In such systems, CPU and GPU can access each other's memory transparently, that is, the data movement is managed automatically by the underlying system software and hardware. Memory oversubscription is also possible in these systems. However, there is a significant lack of knowledge about how this mechanism will perform, and how programmers should use it. We have modified several benchmark codes in the Rodinia benchmark suite to study the behavior of OpenMP accelerator extensions and have used them to explore the impact of unified memory in an OpenMP context. We moreover modified the open source LLVM compiler to allow OpenMP programs to exploit unified memory. The results of our evaluation reveal that, while the performance of unified memory is comparable with that of normal GPU offloading for benchmarks with little data reuse, it suffers from significant overhead when GPU memory is oversubscribed for benchmarks with a large amount of data reuse. Based on these results, we provide several guidelines for programmers to achieve better performance with unified memory.

  19. Programming in Vienna Fortran

    NASA Technical Reports Server (NTRS)

    Chapman, Barbara; Mehrotra, Piyush; Zima, Hans

    1992-01-01

    Exploiting the full performance potential of distributed memory machines requires a careful distribution of data across the processors. Vienna Fortran is a language extension of Fortran which provides the user with a wide range of facilities for such mapping of data structures. In contrast to current programming practice, programs in Vienna Fortran are written using global data references. Thus, the user has the advantages of a shared memory programming paradigm while explicitly controlling the data distribution. In this paper, we present the language features of Vienna Fortran for FORTRAN 77, together with examples illustrating the use of these features.

  20. A model of memory impairment in schizophrenia: cognitive and clinical factors associated with memory efficiency and memory errors.

    PubMed

    Brébion, Gildas; Bressan, Rodrigo A; Ohlsen, Ruth I; David, Anthony S

    2013-12-01

    Memory impairments in patients with schizophrenia have been associated with various cognitive and clinical factors. Hallucinations have been more specifically associated with errors stemming from source monitoring failure. We conducted a broad investigation of verbal memory and visual memory as well as source memory functioning in a sample of patients with schizophrenia. Various memory measures were tallied, and we studied their associations with processing speed, working memory span, and positive, negative, and depressive symptoms. Superficial and deep memory processes were differentially associated with processing speed, working memory span, avolition, depression, and attention disorders. Auditory/verbal and visual hallucinations were differentially associated with specific types of source memory error. We integrated all the results into a revised version of a previously published model of memory functioning in schizophrenia. The model describes the factors that affect memory efficiency, as well as the cognitive underpinnings of hallucinations within the source monitoring framework. © 2013.

  1. The Communication Link and Error ANalysis (CLEAN) simulator

    NASA Technical Reports Server (NTRS)

    Ebel, William J.; Ingels, Frank M.; Crowe, Shane

    1993-01-01

    During the period July 1, 1993 through December 30, 1993, significant developments to the Communication Link and Error ANalysis (CLEAN) simulator were completed and include: (1) Soft decision Viterbi decoding; (2) node synchronization for the Soft decision Viterbi decoder; (3) insertion/deletion error programs; (4) convolutional encoder; (5) programs to investigate new convolutional codes; (6) pseudo-noise sequence generator; (7) soft decision data generator; (8) RICE compression/decompression (integration of RICE code generated by Pen-Shu Yeh at Goddard Space Flight Center); (9) Markov Chain channel modeling; (10) percent complete indicator when a program is executed; (11) header documentation; and (12) help utility. The CLEAN simulation tool is now capable of simulating a very wide variety of satellite communication links including the TDRSS downlink with RFI. The RICE compression/decompression schemes allow studies to be performed on error effects on RICE decompressed data. The Markov Chain modeling programs allow channels with memory to be simulated. Memory results from filtering, forward error correction encoding/decoding, differential encoding/decoding, channel RFI, nonlinear transponders and from many other satellite system processes. Besides the development of the simulation, a study was performed to determine whether the PCI provides a performance improvement for the TDRSS downlink. There exist RFI with several duty cycles for the TDRSS downlink. We conclude that the PCI does not improve performance for any of these interferers except possibly one which occurs for the TDRS East. Therefore, the usefulness of the PCI is a function of the time spent transmitting data to the WSGT through the TDRS East transponder.
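A Markov-chain channel with memory of the kind CLEAN simulates can be sketched as a two-state Gilbert-Elliott model, in which errors cluster in bursts because the "bad" state persists across bits. The transition and error probabilities below are arbitrary illustrative values, not parameters from the CLEAN tool or the TDRSS link:

```python
import random

def gilbert_elliott(n_bits, p_gb=0.01, p_bg=0.1, e_good=1e-4, e_bad=0.2,
                    seed=7):
    """Two-state Markov channel with memory (Gilbert-Elliott sketch).

    p_gb / p_bg -- per-bit probabilities of moving good->bad and bad->good
    e_good / e_bad -- bit-error probabilities in each state
    Returns a list of booleans marking which bits were received in error.
    """
    rng = random.Random(seed)
    state_bad = False
    errors = []
    for _ in range(n_bits):
        # Markov state transition first ...
        if state_bad:
            if rng.random() < p_bg:
                state_bad = False
        elif rng.random() < p_gb:
            state_bad = True
        # ... then the error probability depends on the current state
        p_err = e_bad if state_bad else e_good
        errors.append(rng.random() < p_err)
    return errors

errs = gilbert_elliott(100_000)
ber = sum(errs) / len(errs)
```

The memory shows up as burstiness: the probability of an error immediately following another error is far higher than the average bit-error rate, which is the property a memoryless channel model cannot capture.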

  2. A general model for memory interference in a multiprocessor system with memory hierarchy

    NASA Technical Reports Server (NTRS)

    Taha, Badie A.; Standley, Hilda M.

    1989-01-01

    The problem of memory interference in a multiprocessor system with a hierarchy of shared buses and memories is addressed. The behavior of the processors is represented by a sequence of memory requests with each followed by a determined amount of processing time. A statistical queuing network model for determining the extent of memory interference in multiprocessor systems with clusters of memory hierarchies is presented. The performance of the system is measured by the expected number of busy memory clusters. The results of the analytic model are compared with simulation results, and the correlation between them is found to be very high.
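The performance measure used above, the expected number of busy memory clusters, can be approximated with a small discrete-time simulation. The request probability and processor/cluster counts below are arbitrary illustrative values; the paper's statistical queuing network model is analytic, not simulated:

```python
import random

def busy_memory_clusters(n_proc=8, n_mem=4, p_request=0.3,
                         steps=20_000, seed=3):
    """Discrete-time sketch of memory interference in a multiprocessor.

    Each cycle, every 'thinking' processor issues a request to a random
    memory cluster with probability p_request, then blocks until served;
    each cluster serves one queued request per cycle. Returns the
    time-averaged number of busy clusters.
    """
    rng = random.Random(seed)
    queues = [0] * n_mem      # outstanding requests per memory cluster
    thinking = n_proc         # processors not waiting on memory
    busy_sum = 0
    for _ in range(steps):
        # thinking processors issue requests
        issued = sum(1 for _ in range(thinking) if rng.random() < p_request)
        thinking -= issued
        for _ in range(issued):
            queues[rng.randrange(n_mem)] += 1
        # count busy clusters, then let each serve one request
        busy_sum += sum(1 for q in queues if q > 0)
        for m in range(n_mem):
            if queues[m] > 0:
                queues[m] -= 1
                thinking += 1   # served processor resumes thinking
    return busy_sum / steps
```

Raising `p_request` or `n_proc` drives the measure toward `n_mem` as interference saturates the clusters, which is the qualitative behaviour such models are built to predict.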

  3. Thread scheduling for GPU-based OPC simulation on multi-thread

    NASA Astrophysics Data System (ADS)

    Lee, Heejun; Kim, Sangwook; Hong, Jisuk; Lee, Sooryong; Han, Hwansoo

    2018-03-01

    As semiconductor product development based on shrinkage continues, the accuracy and difficulty required for model based optical proximity correction (MBOPC) are increasing. OPC simulation time, which is the most time-consuming part of MBOPC, is rapidly increasing due to high pattern density in a layout and complex OPC models. To reduce OPC simulation time, we attempt to apply a graphics processing unit (GPU) to MBOPC because the OPC process lends itself well to parallel execution. We address some issues that typically arise during GPU-based OPC simulation in a multi-threaded system, such as running out of memory and GPU idle time. To overcome these problems, we propose a thread scheduling method, which manages OPC jobs in multiple threads in such a way that simulation jobs from multiple threads are executed on the GPU in turn while correction jobs are executed at the same time on the CPU cores. It was observed that peak GPU memory usage decreases by up to 35%, and MBOPC runtime also decreases by 4%. In cases where out-of-memory issues occur in a multi-threaded environment, the thread scheduler improved MBOPC runtime by up to 23%.
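The scheduling idea above, serializing GPU simulation jobs across threads while letting CPU correction jobs run concurrently, can be sketched with a plain lock. `simulate` and `correct` are hypothetical stand-ins for the GPU simulation and CPU correction stages; the real scheduler's queueing policy is not described in enough detail to reproduce:

```python
import threading

# Hypothetical stand-ins for the two OPC stages.
def simulate(tile):
    return tile * 2        # pretend GPU lithography simulation

def correct(sim):
    return sim + 1         # pretend CPU OPC correction

gpu_lock = threading.Lock()  # at most one simulation on the GPU at a time,
                             # bounding peak GPU memory usage
res_lock = threading.Lock()
results = []

def opc_worker(tiles):
    """Alternate serialized GPU simulation with concurrent CPU correction."""
    for tile in tiles:
        with gpu_lock:           # GPU stage: threads take turns
            sim = simulate(tile)
        corr = correct(sim)      # CPU stage: runs concurrently across threads
        with res_lock:
            results.append(corr)

threads = [threading.Thread(target=opc_worker, args=([i, i + 10],))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

While one thread holds `gpu_lock`, the others are either correcting on CPU cores or waiting their turn, so the GPU sees one simulation job at a time instead of several competing for its memory.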

  4. Using visual lateralization to model learning and memory in zebrafish larvae

    PubMed Central

    Andersson, Madelene Åberg; Ek, Fredrik; Olsson, Roger

    2015-01-01

    Impaired learning and memory are common symptoms of neurodegenerative and neuropsychiatric diseases. At present, there are several behavioural tests employed to assess cognitive functions in animal models, including the frequently used novel object recognition (NOR) test. However, although atypical functional brain lateralization has been associated with neuropsychiatric conditions, spanning from schizophrenia to autism, few animal models are available to study this phenomenon in learning and memory deficits. Here we present a visual lateralization NOR model (VLNOR) in zebrafish larvae as an assay that combines brain lateralization and NOR. In zebrafish larvae, learning and memory are generally assessed by habituation, sensitization, or conditioning paradigms, which are all representative of nondeclarative memory. The VLNOR is the first model for zebrafish larvae that studies a memory similar to the declarative memory described for mammals. We demonstrate that VLNOR can be used to study memory formation, storage, and recall of novel objects, both short and long term, in 10-day-old zebrafish. Furthermore we show that the VLNOR model can be used to study chemical modulation of memory formation and maintenance using dizocilpine (MK-801), a frequently used non-competitive antagonist of the NMDA receptor, used to test putative antipsychotics in animal models. PMID:25727677

  5. Cerebellar models of associative memory: Three papers from IEEE COMPCON spring 1989

    NASA Technical Reports Server (NTRS)

    Raugh, Michael R. (Editor)

    1989-01-01

    Three papers are presented on the following topics: (1) a cerebellar-model associative memory as a generalized random-access memory; (2) theories of the cerebellum - two early models of associative memory; and (3) intelligent network management and functional cerebellum synthesis.

  6. Cognitive intervention through a training program for picture book reading in community-dwelling older adults: a randomized controlled trial.

    PubMed

    Suzuki, Hiroyuki; Kuraoka, Masataka; Yasunaga, Masashi; Nonaka, Kumiko; Sakurai, Ryota; Takeuchi, Rumi; Murayama, Yoh; Ohba, Hiromi; Fujiwara, Yoshinori

    2014-11-21

    Non-pharmacological interventions are expected to be important strategies for reducing the age-adjusted prevalence of senile dementia, considering that complete medical treatment for cognitive decline has not yet been developed. From the viewpoint of long-term continuity of activity, it is necessary to develop various cognitive stimulating programs. The aim of this study is to examine the effectiveness of a cognitive intervention through a training program for picture book reading for community-dwelling older adults. Fifty-eight Japanese older participants were divided into the intervention and control groups using simple randomization (n =29 vs 29). In the intervention group, participants took part in a program aimed at learning and mastering methods of picture book reading as a form of cognitive training intervention. The control group listened to lectures about elderly health maintenance. Cognitive tests were conducted individually before and after the programs. The rate of memory retention, computed by dividing Logical Memory delayed recall by immediate recall, showed a significant interaction (p < .05) in analysis of covariance. Simple main effects showed that the rate of memory retention of the intervention group improved after the program completion (p < .05). In the participants with mild cognitive impairment (MCI) examined by Japanese version of the Montreal Cognitive Assessment (MoCA-J) (n =14 vs 15), significant interactions were seen in Trail Making Test-A (p < .01), Trail Making Test-B (p < .05), Kana pick-out test (p < .05) and the Mini-Mental State Examination (p < .05). The intervention effect was found in delayed verbal memory. This program is also effective for improving attention and executive function in those with MCI. The short-term interventional findings suggest that this program might contribute to preventing a decline in memory and executive function. 
UMIN000014712 (Date of ICMJE and WHO compliant trial information disclosure: 30 July 2014).

  7. An Evaluation of a Teacher Training Program at the United States Holocaust Memorial Museum

    ERIC Educational Resources Information Center

    DeBerry, LaMonnia Edge

    2015-01-01

    The purpose of this mixed methods study was to explore the effects of the United States Holocaust Memorial Museum's work in partnering with professors from universities across the United States during a 1-year collaborative partnership through an educational program referred to as Belfer First Step Holocaust Institute for Teacher Educators (BFS…

  8. Past, Present and Future. Dull Knife Memorial College (Indian Action Program Inc.).

    ERIC Educational Resources Information Center

    1978

    Five vocational training programs as well as academic coursework are offered on the Northern Cheyenne Reservation by Dull Knife Memorial College. Established and operated by the Northern Cheyenne, and located in Lame Deer, Montana, the college was chartered by a tribal ordinance in 1975. Approximately 75 trainees are currently involved in the…

  9. Effects of Translation Methods in Imported Instructional Video Programs on Taiwan Fourth Graders' Memory.

    ERIC Educational Resources Information Center

    Tyan, Nay-ching Nancy; Hu, Yi-chain

    The purpose of this study was to investigate the effects of various translation methods used in imported instructional video programs on Taiwan elementary school students' visual and verbal memory. Following pretesting, 128 fourth grade students from an urban public elementary school in northern Taiwan participated. The students in 4 experimental…

  10. PIPS-SBB: A Parallel Distributed-Memory Branch-and-Bound Algorithm for Stochastic Mixed-Integer Programs

    DOE PAGES

    Munguia, Lluis-Miquel; Oxberry, Geoffrey; Rajan, Deepak

    2016-05-01

    Stochastic mixed-integer programs (SMIPs) deal with optimization under uncertainty at many levels of the decision-making process. When solved as extensive-formulation mixed-integer programs, problem instances can exceed available memory on a single workstation. To overcome this limitation, we present PIPS-SBB: a distributed-memory parallel stochastic MIP solver that takes advantage of parallelism at multiple levels of the optimization process. We also show promising results on the SIPLIB benchmark by combining methods known for accelerating branch-and-bound (B&B) with new ideas that leverage the structure of SMIPs. Finally, we expect the performance of PIPS-SBB to improve further as more functionality is added in the future.
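
The core search that PIPS-SBB distributes across MPI ranks is best-first branch and bound. As a hedged illustration only (a serial toy on 0/1 knapsack, not the paper's stochastic MIP solver; the function name and problem are stand-ins), the pruning logic can be sketched as:

```python
import heapq

def knapsack_bb(values, weights, capacity):
    """Best-first branch and bound for 0/1 knapsack (maximization).
    A small stand-in for the far larger SMIPs PIPS-SBB targets."""
    items = sorted(zip(values, weights), key=lambda vw: vw[0] / vw[1], reverse=True)

    def bound(i, value, room):
        # LP-relaxation bound: greedily add items, last one fractionally.
        for v, w in items[i:]:
            if w <= room:
                value, room = value + v, room - w
            else:
                return value + v * room / w
        return value

    best = 0
    # Max-heap on the relaxation bound (negated); node = (bound, index, value, room).
    heap = [(-bound(0, 0, capacity), 0, 0, capacity)]
    while heap:
        neg_b, i, value, room = heapq.heappop(heap)
        if -neg_b <= best or i == len(items):
            continue                      # node cannot beat the incumbent: prune
        v, w = items[i]
        if w <= room:                     # branch 1: take item i
            taken = value + v
            best = max(best, taken)
            heapq.heappush(heap, (-bound(i + 1, taken, room - w), i + 1, taken, room - w))
        # branch 2: skip item i
        heapq.heappush(heap, (-bound(i + 1, value, room), i + 1, value, room))
    return best
```

In a distributed-memory setting, the open-node heap is what gets partitioned across workers, with the incumbent `best` shared between them.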

  11. Latent change models of adult cognition: are changes in processing speed and working memory associated with changes in episodic memory?

    PubMed

    Hertzog, Christopher; Dixon, Roger A; Hultsch, David F; MacDonald, Stuart W S

    2003-12-01

    The authors used 6-year longitudinal data from the Victoria Longitudinal Study (VLS) to investigate individual differences in amount of episodic memory change. Latent change models revealed reliable individual differences in cognitive change. Changes in episodic memory were significantly correlated with changes in other cognitive variables, including speed and working memory. A structural equation model for the latent change scores showed that changes in speed and working memory predicted changes in episodic memory, as expected by processing resource theory. However, these effects were best modeled as being mediated by changes in induction and fact retrieval. Dissociations were detected between cross-sectional ability correlations and longitudinal changes. Shuffling the tasks used to define the Working Memory latent variable altered patterns of change correlations.
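
The analysis correlates change in one variable with change in another. As a heavily simplified sketch (observed difference scores stand in for the latent change scores the VLS work estimates with structural equation models; all data and function names here are hypothetical), the basic quantity looks like:

```python
import statistics

def change_scores(wave1, wave2):
    """Observed change scores between two measurement waves: a crude
    stand-in for SEM-estimated latent change scores."""
    return [t2 - t1 for t1, t2 in zip(wave1, wave2)]

def pearson_r(x, y):
    # Plain Pearson correlation between two lists of change scores.
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)
```

Latent change models go further by separating true change from measurement error, which simple difference scores cannot do.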

  12. A FPGA-based Measurement System for Nonvolatile Semiconductor Memory Characterization

    NASA Astrophysics Data System (ADS)

    Bu, Jiankang; White, Marvin

    2002-03-01

    Low voltage, long retention, high density SONOS nonvolatile semiconductor memory (NVSM) devices are ideally suited for PCMCIA, FLASH and 'smart' cards. The SONOS memory transistor requires characterization with an accurate, rapid measurement system that causes minimum disturbance to the device. The FPGA-based measurement system comprises three parts: 1) a pattern generator implemented with XILINX FPGAs and corresponding software, 2) a high-speed, constant-current threshold voltage detection circuit, and 3) a data evaluation program implemented in LabVIEW. Fig. 1 shows the general block diagram of the FPGA-based measurement system. The function generator is designed and simulated with XILINX Foundation software. Under the control of the specific erase/write/read pulses, the analog detection circuit applies operational modes to the SONOS device under test (DUT) and determines the change of the memory state of the SONOS nonvolatile memory transistor. The TEK460 digitizes the analog threshold voltage output and sends it to the PC. The data are filtered and averaged with a LabVIEW program running on the PC and displayed on the monitor in real time. We have implemented the pattern generator with XILINX FPGAs. Fig. 2 shows the block diagram of the pattern generator. We realized the logic control by a state machine design; Fig. 3 shows a small part of the state machine. The flexibility of the FPGAs enhances the capabilities of the system and allows measurement variations without hardware changes. The characterization of the nonvolatile memory transistor under test, as a function of programming voltage and time, is achieved by a high-speed, constant-current threshold voltage detection circuit. The analog detection circuit incorporates fast analog switches controlled digitally by the FPGAs. The schematic circuit diagram is shown in Fig. 4.
The various operational modes for the DUT are realized with control signals applied to the analog switches (SW), as shown in Fig. 5. A LabVIEW program on a PC platform collects and processes the data, which are displayed on the monitor in real time. This time-domain filtering reduces the digitizing error. Fig. 6 shows the data processing. SONOS nonvolatile semiconductor memories are characterized by erase/write, retention and endurance measurements. Fig. 7 shows the erase/write characteristics of an n-channel, 5 V programmable SONOS memory transistor. Fig. 8 shows the retention characteristic of the same SONOS transistor. We have used this system to characterize SONOS nonvolatile semiconductor memory transistors. The attractive features of the test system lie in the cost-effectiveness and flexibility of the test pattern implementation, fast read-out of the memory state, low power, high-precision determination of the device threshold voltage and, perhaps most importantly, minimum disturbance, which is indispensable for nonvolatile memory characterization.

  13. An ultrafast programmable electrical tester for enabling time-resolved, sub-nanosecond switching dynamics and programming of nanoscale memory devices.

    PubMed

    Shukla, Krishna Dayal; Saxena, Nishant; Manivannan, Anbarasu

    2017-12-01

    Recent advancements in commercialization of high-speed non-volatile electronic memories, including phase change memory (PCM), have shown potential not only for advanced data storage but also for novel computing concepts. However, an in-depth understanding of ultrafast electrical switching dynamics is a key challenge for defining the ultimate speed of nanoscale memory devices, and it demands an unconventional electrical setup specifically capable of handling extremely fast electrical pulses. In the present work, an ultrafast programmable electrical tester (PET) setup has been developed specifically for unravelling time-resolved electrical switching dynamics and programming characteristics of nanoscale memory devices at the picosecond (ps) time scale. The setup consists of novel high-frequency contact boards carefully designed to capture extremely fast switching transients within 200 ± 25 ps using time-resolved current-voltage measurements. All the instruments in the system are synchronized using LabVIEW, which helps to achieve various programming characteristics such as voltage-dependent transient parameters, read/write operations, and endurance tests of memory devices systematically, using short voltage pulses with pulse parameters varied from 1 ns rise/fall time and 1.5 ns pulse width (full width at half maximum). Furthermore, the setup has successfully demonstrated order-of-magnitude faster switching characteristics of Ag5In5Sb60Te30 (AIST) PCM devices, within 250 ps. Hence, this novel electrical setup would be immensely helpful for realizing the ultimate speed limits of various high-speed memory technologies for future computing.

  14. An ultrafast programmable electrical tester for enabling time-resolved, sub-nanosecond switching dynamics and programming of nanoscale memory devices

    NASA Astrophysics Data System (ADS)

    Shukla, Krishna Dayal; Saxena, Nishant; Manivannan, Anbarasu

    2017-12-01

    Recent advancements in commercialization of high-speed non-volatile electronic memories, including phase change memory (PCM), have shown potential not only for advanced data storage but also for novel computing concepts. However, an in-depth understanding of ultrafast electrical switching dynamics is a key challenge for defining the ultimate speed of nanoscale memory devices, and it demands an unconventional electrical setup specifically capable of handling extremely fast electrical pulses. In the present work, an ultrafast programmable electrical tester (PET) setup has been developed specifically for unravelling time-resolved electrical switching dynamics and programming characteristics of nanoscale memory devices at the picosecond (ps) time scale. The setup consists of novel high-frequency contact boards carefully designed to capture extremely fast switching transients within 200 ± 25 ps using time-resolved current-voltage measurements. All the instruments in the system are synchronized using LabVIEW, which helps to achieve various programming characteristics such as voltage-dependent transient parameters, read/write operations, and endurance tests of memory devices systematically, using short voltage pulses with pulse parameters varied from 1 ns rise/fall time and 1.5 ns pulse width (full width at half maximum). Furthermore, the setup has successfully demonstrated order-of-magnitude faster switching characteristics of Ag5In5Sb60Te30 (AIST) PCM devices, within 250 ps. Hence, this novel electrical setup would be immensely helpful for realizing the ultimate speed limits of various high-speed memory technologies for future computing.

  15. Neurons in the barrel cortex turn into processing whisker and odor signals: a cellular mechanism for the storage and retrieval of associative signals

    PubMed Central

    Wang, Dangui; Zhao, Jun; Gao, Zilong; Chen, Na; Wen, Bo; Lu, Wei; Lei, Zhuofan; Chen, Changfeng; Liu, Yahui; Feng, Jing; Wang, Jin-Hui

    2015-01-01

    Associative learning and memory are essential to logical thinking and cognition. How neurons are recruited as associative memory cells to encode multiple input signals for associated storage and distinguishable retrieval remains unclear. We studied this issue in the barrel cortex by in vivo two-photon calcium imaging, electrophysiology, and neural tracing in a mouse model in which simultaneous whisker and olfactory stimulation leads to odorant-induced whisker motion. After this cross-modal reflex arose, the barrel and piriform cortices became connected. More than 40% of barrel cortical neurons came to encode the odor signal alongside the whisker signal. Some of these neurons expressed distinct activity patterns in response to the acquired odor signal and the innate whisker signal, while others encoded similar patterns in response to these signals. Meanwhile, certain barrel cortical astrocytes encoded both odorant and whisker signals. After associative learning, neurons and astrocytes in the sensory cortices are thus able to store a newly learnt signal (cross-modal memory) besides the innate signal (native-modal memory). Such associative memory cells distinguish the differences between these signals by programming different codes and signify the historical associations of these signals by similar codes during information retrieval. PMID:26347609

  16. Light programmable organic transistor memory device based on hybrid dielectric

    NASA Astrophysics Data System (ADS)

    Ren, Xiaochen; Chan, Paddy K. L.

    2013-09-01

    We have fabricated transistor memory devices based on a SiO2 and polystyrene (PS) hybrid dielectric. The trap-state densities with different semiconductors have been investigated, and a maximum 160 V memory window between programming and erasing is realized. For the DNTT-based transistor, the trapped electron density is limited by the number of mobile electrons in the semiconductor. The charge transport mechanism is verified by a light-induced Vth shift effect. Furthermore, to meet the low operating power requirement of portable electronic devices, we fabricated an organic memory transistor based on an AlOx/self-assembled monolayer (SAM)/PS hybrid dielectric; the effective capacitance of the hybrid dielectric is 210 nF cm-2 and the transistor reaches saturation at a -3 V gate bias. The memory window in the transfer I-V curve is around 1 V under ±5 V programming and erasing biases.

  17. Scanpath memory binding: multiple read-out experiments

    NASA Astrophysics Data System (ADS)

    Stark, Lawrence W.; Privitera, Claudio M.; Yang, Huiyang; Azzariti, Michela; Ho, Yeuk F.; Chan, Angie; Krischer, Christof; Weinberger, Adam

    1999-05-01

    The scanpath theory proposed that an internal spatial-cognitive model controls perception and the active looking eye movements (EMs) of the scanpath sequence. Evidence for this came from new quantitative methods, from experiments with ambiguous figures and visual imagery, and from MRI studies, all on cooperating human subjects. Besides recording EMs, we introduce other experimental techniques wherein the subject must depend upon memory bindings, as in visual imagery, but may call upon motor behaviors other than EMs to read out the remembered patterns. How is the internal model distributed and operationally assembled? The concept of binding speaks to the assigning of values for the model and its execution in various parts of the brain. Current neurological information helps to localize different aspects of the spatial-cognitive model in the brain. We suppose that there are several levels of 'binding' -- semantic or symbolic binding, structural binding for the spatial locations of the regions-of-interest, and sequential binding for the dynamic execution program that yields the sequence of EMs. Our aim is to dissect out the respective contributions of these different forms of binding.

  18. Silver Memories: implementation and evaluation of a unique radio program for older people.

    PubMed

    Travers, Catherine; Bartlett, Helen P

    2011-03-01

    A unique radio program, Silver Memories, specifically designed to address social isolation and loneliness in older people by broadcasting music (primarily), serials and other programs relevant to the period when older people grew up -- the 1920s to 1950s -- first aired in Brisbane, Australia, in April 2008. The impact of the program upon older listeners' mood, quality of life (QOL) and self-reported loneliness was independently evaluated. One hundred and thirteen community-dwelling persons and residents of residential care facilities, aged 60 years and older, participated in a three-month evaluation of Silver Memories. They were asked to listen to the program daily, and baseline and follow-up measures of depression, QOL and loneliness were obtained. Participants were also asked for their opinions regarding the program's quality and appeal. The results showed a statistically significant improvement in measures of depression and QOL from baseline to follow-up, but there was no change on the measure of loneliness. The results did not vary by living situation (community vs. residential care), by whether the participant was lonely or socially isolated, or by whether there had been any important changes in the participant's health or social circumstances throughout the evaluation. It was concluded that listening to Silver Memories appears to improve the QOL and mood of older people and is an inexpensive intervention that is flexible and readily implemented.

  19. Working Memory in the Classroom: An Inside Look at the Central Executive.

    PubMed

    Barker, Lauren A

    2016-01-01

    This article provides a review of working memory and its application to educational settings. A discussion of the varying definitions of working memory is presented. Special attention is given to the various multidisciplinary professionals who work with students with working memory deficits, and to their unique understandings of the construct. Definitions and theories of working memory are briefly summarized and provide the foundation for understanding practical applications of working memory to assessment and intervention. Although definitions and models of working memory abound, there is limited consensus regarding universally accepted ones. Current research indicates that developing new models of working memory may be an appropriate paradigm shift at this time. The integration of individual practitioners' knowledge regarding academic achievement, working memory and processing speed could provide a foundation for the future development of new working memory models. Future research should aim to explain how tasks and behaviors are supported by the substrates of the cortico-striatal and cerebro-cerebellar systems. Translation of neurobiological information into educational contexts will help inform all practitioners' knowledge of working memory constructs. It will also allow universally accepted definitions and models of working memory to arise and facilitate more effective collaboration between disciplines working in educational settings.

  20. User Preference-Based Dual-Memory Neural Model With Memory Consolidation Approach.

    PubMed

    Nasir, Jauwairia; Yoo, Yong-Ho; Kim, Deok-Hwa; Kim, Jong-Hwan

    2018-06-01

    Memory modeling has been a popular topic of research for improving the performance of autonomous agents in cognition related problems. Apart from learning distinct experiences correctly, significant or recurring experiences are expected to be learned better and be retrieved easier. In order to achieve this objective, this paper proposes a user preference-based dual-memory adaptive resonance theory network model, which makes use of a user preference to encode memories with various strengths and to learn and forget at various rates. Over a period of time, memories undergo a consolidation-like process at a rate proportional to the user preference at the time of encoding and the frequency of recall of a particular memory. Consolidated memories are easier to recall and are more stable. This dual-memory neural model generates distinct episodic memories and a flexible semantic-like memory component. This leads to an enhanced retrieval mechanism of experiences through two routes. The simulation results are presented to evaluate the proposed memory model based on various kinds of cues over a number of trials. The experimental results on Mybot are also presented. The results verify that not only are distinct experiences learned correctly but also that experiences associated with higher user preference and recall frequency are consolidated earlier. Thus, these experiences are recalled more easily relative to the unconsolidated experiences.
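
The encode-decay-consolidate cycle described above can be illustrated with a toy sketch. This is not the paper's adaptive resonance theory network; the class, thresholds, and rate constants below are invented for illustration, keeping only the qualitative behavior: higher-preference memories are encoded more strongly and forget more slowly, and each successful recall consolidates a trace so it decays more slowly thereafter.

```python
import math

class Memory:
    """Toy dual-rate memory store (illustrative only)."""

    def __init__(self):
        self.items = {}   # key -> [strength, decay_rate]

    def encode(self, key, preference):
        # Higher user preference -> stronger trace and slower forgetting.
        self.items[key] = [preference, 0.1 / preference]

    def step(self, dt=1.0):
        # Exponential forgetting of every stored trace.
        for rec in self.items.values():
            rec[0] *= math.exp(-rec[1] * dt)

    def recall(self, key):
        rec = self.items.get(key)
        if rec and rec[0] > 0.05:          # retrieval threshold (assumed)
            rec[1] *= 0.5                  # consolidation: slower future decay
            rec[0] = min(rec[0] * 1.2, 1.0)
            return True
        return False
```

After identical elapsed time, a high-preference trace remains above the retrieval threshold while a low-preference one has already faded, mirroring the consolidation effect the paper reports.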

  1. Determining the Science, Agriculture and Natural Resource, and Youth Leadership Outcomes for Students Participating in an Innovative Middle School Agriscience Program

    ERIC Educational Resources Information Center

    Skelton, Peter; Stair, Kristin S.; Dormody, Tom; Vanleeuwen, Dawn

    2014-01-01

    The Memorial Middle School Agricultural Extension and Education Center (MMSAEEC) located in Las Vegas, New Mexico is a youth science center focusing on agriculture and natural resources. The purpose of this quasi-experimental study of the MMSAEEC teaching and learning model was to determine if differences exist in science achievement, agriculture…

  2. Highly Asynchronous VisitOr Queue Graph Toolkit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pearce, R.

    2012-10-01

    HAVOQGT is a C++ framework that can be used to create highly parallel graph traversal algorithms. The framework stores the graph and algorithmic data structures in external memory that is typically mapped to high-performance, locally attached NAND flash arrays. The framework supports a vertex-centered visitor programming model and has been used to implement breadth-first search, connected components, and single-source shortest path.
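
In a vertex-centered visitor model, each traversal step is expressed as a small visitor that arrives at a vertex and may spawn further visitors along its edges. As a hedged sketch (in Python for brevity; the real framework is C++, asynchronous, and external-memory, and these names are illustrative, not HAVOQGT's API), breadth-first search looks like:

```python
from collections import deque

def visitor_bfs(adj, source):
    """BFS phrased in the visitor style: the deque stands in for the
    visitor queue, and each edge relaxation is a spawned visitor."""
    level = {source: 0}
    queue = deque([source])
    while queue:
        v = queue.popleft()
        for nbr in adj[v]:               # each edge spawns a visitor
            if nbr not in level:         # visitor "succeeds" on first arrival
                level[nbr] = level[v] + 1
                queue.append(nbr)
    return level
```

The appeal of the model is that the queue discipline, parallelism, and storage layout can all change underneath without touching the per-vertex visitor logic.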

  3. Modelling Effects on Grid Cells of Sensory Input During Self-motion

    DTIC Science & Technology

    2016-04-20

    Raudies, Florian; Hinman, James R.; Hasselmo, Michael E. (Center for Systems Neuroscience, Centre for Memory and Brain, Department of Psychological and Brain Sciences and Graduate Program for Neuroscience, Boston University, 2 Cummington Mall, Boston, MA 02215, USA)

  4. Computerized Memory Training Leads to Sustained Improvement in Visuospatial Short-Term Memory Skills in Children with Down Syndrome

    ERIC Educational Resources Information Center

    Bennett, Stephanie J.; Holmes, Joni; Buckley, Sue

    2013-01-01

    This study evaluated the impact of a computerized visuospatial memory training intervention on the memory and behavioral skills of children with Down syndrome. Teaching assistants were trained to support the delivery of a computerized intervention program to individual children over a 10-16 week period in school. Twenty-one children aged 7-12…

  5. Ferroelectric memory evaluation and development system

    NASA Astrophysics Data System (ADS)

    Bondurant, David W.

    Attention is given to the Ramtron FEDS-1, an IBM PC/AT compatible single-board 16-bit microcomputer with 8-kbyte program/data memory implemented with nonvolatile ferroelectric dynamic RAM. This is the first demonstration of a new type of solid-state nonvolatile read/write memory, the ferroelectric RAM (FRAM). It is suggested that this memory technology will have a significant impact on avionics system performance and reliability.

  6. Randomized controlled trial of a healthy brain ageing cognitive training program: effects on memory, mood, and sleep.

    PubMed

    Diamond, Keri; Mowszowski, Loren; Cockayne, Nicole; Norrie, Louisa; Paradise, Matthew; Hermens, Daniel F; Lewis, Simon J G; Hickie, Ian B; Naismith, Sharon L

    2015-01-01

    With the rise in the ageing population and absence of a cure for dementia, cost-effective prevention strategies for those 'at risk' of dementia including those with depression and/or mild cognitive impairment are urgently required. This study evaluated the efficacy of a multifaceted Healthy Brain Ageing Cognitive Training (HBA-CT) program for older adults 'at risk' of dementia. Using a single-blinded design, 64 participants (mean age = 66.5 years, SD = 8.6) were randomized to an immediate treatment (HBA-CT) or treatment-as-usual control arm. The HBA-CT intervention was conducted twice-weekly for seven weeks and comprised group-based psychoeducation about cognitive strategies and modifiable lifestyle factors pertaining to healthy brain ageing, and computerized cognitive training. In comparison to the treatment-as-usual control arm, the HBA-CT program was associated with improvements in verbal memory (p = 0.03), self-reported memory (p = 0.03), mood (p = 0.01), and sleep (p = 0.01). While the improvements in memory (p = 0.03) and sleep (p = 0.02) remained after controlling for improvements in mood, only a trend in verbal memory improvement was apparent after controlling for sleep. The HBA-CT program improves cognitive, mood, and sleep functions in older adults 'at risk' of dementia, and therefore offers promise as a secondary prevention strategy.

  7. Model-Driven Study of Visual Memory

    DTIC Science & Technology

    2004-12-01

    ...dimensional stimuli (synthetic human faces) afford important insights into episodic recognition memory. The results were well accommodated by a summed... the unusual properties of the z-transformed ROCs. Subject terms: memory, visual memory, computational model, human memory, faces, identity.

  8. INDDGO: Integrated Network Decomposition & Dynamic programming for Graph Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Groer, Christopher S; Sullivan, Blair D; Weerapurage, Dinesh P

    2012-10-01

    It is well known that dynamic programming algorithms can utilize tree decompositions to solve some NP-hard problems on graphs with complexity polynomial in the number of nodes and edges in the graph, but exponential in the width of the underlying tree decomposition. However, there has been relatively little computational work to determine the practical utility of such dynamic programming algorithms. We have developed software to construct tree decompositions using various heuristics and have created a fast, memory-efficient dynamic programming implementation for solving maximum weighted independent set. We describe our software and the algorithms we have implemented, focusing on memory-saving techniques for the dynamic programming. We compare the running time and memory usage of our implementation with other techniques for solving maximum weighted independent set, including a commercial integer programming solver and a semi-definite programming solver. Our results indicate that it is possible to solve some instances where the underlying decomposition has width much larger than suggested by the literature. For certain types of problems, our dynamic programming code runs several times faster than these other methods.
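
The flavor of this dynamic program is easiest to see in its width-1 special case: maximum weighted independent set on a tree, where each vertex keeps two table entries (best weight with the vertex in or out of the set) combined bottom-up. This sketch is only that special case, not INDDGO's general tree-decomposition DP, and the function name is illustrative:

```python
def mwis_tree(adj, weight, root=0):
    """Max weighted independent set on a tree (width-1 tree decomposition).
    adj: undirected adjacency dict; weight: per-vertex weights."""
    incl, excl = {}, {}              # DP tables: vertex in / out of the set
    order, parent = [], {root: None}
    stack = [root]
    while stack:                     # iterative DFS to get a traversal order
        v = stack.pop()
        order.append(v)
        for c in adj[v]:
            if c != parent[v]:
                parent[c] = v
                stack.append(c)
    for v in reversed(order):        # combine child tables bottom-up
        incl[v] = weight[v]          # taking v forbids its children
        excl[v] = 0
        for c in adj[v]:
            if c != parent[v]:
                incl[v] += excl[c]
                excl[v] += max(incl[c], excl[c])
    return max(incl[root], excl[root])
```

On a general graph, each bag of the tree decomposition carries a table over all independent subsets of the bag, which is where the exponential-in-width cost (and the memory pressure the paper tackles) comes from.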

  9. The scheme machine: A case study in progress in design derivation at system levels

    NASA Technical Reports Server (NTRS)

    Johnson, Steven D.

    1995-01-01

    The Scheme Machine is one of several design projects of the Digital Design Derivation group at Indiana University. It differs from the other projects in its focus on issues of system design and its connection to surrounding research in programming language semantics, compiler construction, and programming methodology underway at Indiana and elsewhere. The genesis of the project dates to the early 1980s, when digital design derivation research branched from the surrounding research effort in programming languages. Both branches have continued to develop in parallel, with this particular project serving as a bridge. However, by 1990 there remained little real interaction between the branches, and recently we have undertaken to reintegrate them. On the software side, researchers have refined a mathematically rigorous (but not mechanized) treatment starting with the fully abstract semantic definition of Scheme and resulting in an efficient implementation consisting of a compiler and virtual machine model, the latter typically realized with a general purpose microprocessor. The derivation includes a number of sophisticated factorizations and representations and is also a deep example of the underlying engineering methodology. The hardware research has created a mechanized algebra supporting the tedious and massive transformations often seen at lower levels of design. This work has progressed to the point that large scale devices, such as processors, can be derived from first-order finite state machine specifications. This is roughly where the language-oriented research stops; thus, together, the two efforts establish a thread from the highest levels of abstract specification to detailed digital implementation. The Scheme Machine project challenges hardware derivation research in several ways, although the individual components of the system are of a similar scale to those we have worked with before.
The machine has a custom dual-ported memory to support garbage collection. It consists of four tightly coupled processes--processor, collector, allocator, memory--with a very non-trivial synchronization relationship. Finally, there are deep issues of representation for the run-time objects of a symbolic processing language. The research centers on verification through integrated formal reasoning systems, but is also involved with modeling and prototyping environments. Since the derivation algebra is based on an executable modeling language, there is an opportunity to incorporate design animation in the design process. We are looking for ways to move smoothly and incrementally from executable specifications into hardware realization. For example, we can run the garbage collector specification, a Scheme program, directly against the physical memory prototype, and similarly, the instruction processor model against the heap implementation.

  10. From Petascale to Exascale: Eight Focus Areas of R&D Challenges for HPC Simulation Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Springmeyer, R; Still, C; Schulz, M

    2011-03-17

    Programming models bridge the gap between the underlying hardware architecture and the supporting layers of software available to applications. Programming models are different from both programming languages and application programming interfaces (APIs). Specifically, a programming model is an abstraction of the underlying computer system that allows for the expression of both algorithms and data structures. In comparison, languages and APIs provide implementations of these abstractions and allow the algorithms and data structures to be put into practice - a programming model exists independently of the choice of both the programming language and the supporting APIs. Programming models are typically focused on achieving increased developer productivity, performance, and portability to other system designs. The rapidly changing nature of processor architectures and the complexity of designing an exascale platform provide significant challenges for these goals. Several other factors are likely to impact the design of future programming models. In particular, the representation and management of increasing levels of parallelism, concurrency and memory hierarchies, combined with the ability to maintain a progressive level of interoperability with today's applications, are of significant concern. Overall, the design of a programming model is inherently tied not only to the underlying hardware architecture, but also to the requirements of applications and libraries including data analysis, visualization, and uncertainty quantification. Furthermore, the successful implementation of a programming model is dependent on exposed features of the runtime software layers and features of the operating system. Successful use of a programming model also requires effective presentation to the software developer within the context of traditional and new software development tools.
Consideration must also be given to the impact of programming models on both languages and the associated compiler infrastructure. Exascale programming models must reflect several, often competing, design goals. These design goals include desirable features such as abstraction and separation of concerns. However, some aspects are unique to large-scale computing. For example, interoperability and composability with existing implementations will prove critical. In particular, performance is the essential underlying goal for large-scale systems. A key evaluation metric for exascale models will be the extent to which they support these goals rather than merely enable them.

  11. Forgetfulness can help you win games.

    PubMed

    Burridge, James; Gao, Yu; Mao, Yong

    2015-09-01

    We present a simple game model where agents with different memory lengths compete for finite resources. We show by simulation and analytically that an instability exists at a critical memory length, and as a result, different memory lengths can compete and coexist in a dynamical equilibrium. Our analytical formulation makes a connection to statistical urn models, and we show that temperature is mirrored by the agent's memory. Our simple model of memory may be incorporated into other game models with implications that we briefly discuss.
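
The tradeoff the paper analyzes, longer memory averages out noise but adapts slowly when the resource shifts, can be shown with a toy simulation. This is not the paper's game or urn model; the shifting resource, window-mean agents, and all parameters below are invented purely to illustrate why different memory lengths can each be advantageous in different regimes:

```python
import random

def simulate(memory_lengths, steps=2000, seed=1):
    """Agents estimate a shifting resource level from a sliding window of
    noisy observations; window length plays the role of memory length."""
    rng = random.Random(seed)
    histories = {m: [] for m in memory_lengths}
    errors = {m: 0.0 for m in memory_lengths}
    for t in range(steps):
        # The resource level shifts every 500 steps.
        level = 10.0 if (t // 500) % 2 == 0 else 20.0
        obs = level + rng.gauss(0, 1)
        for m, h in histories.items():
            if h:                        # predict from the current window
                errors[m] += abs(sum(h) / len(h) - level)
            h.append(obs)                # then observe and forget the oldest
            if len(h) > m:
                h.pop(0)
    # Mean absolute prediction error per memory length.
    return {m: e / steps for m, e in errors.items()}
```

With shifts this frequent, a short memory wins overall because a long window keeps averaging in stale pre-shift observations; with a static resource the ordering would reverse, which is the coexistence mechanism in miniature.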

  12. A program for undergraduate research into the mechanisms of sensory coding and memory decay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Calin-Jageman, R J

This is the final technical report for this DOE project, entitled "A program for undergraduate research into the mechanisms of sensory coding and memory decay". The report summarizes progress on the three research aims: 1) to identify physiological and genetic correlates of long-term habituation, 2) to understand mechanisms of olfactory coding, and 3) to foster a world-class undergraduate neuroscience program. Progress on the first aim has enabled comparison of learning-regulated transcripts across closely related learning paradigms and species, and results suggest that only a small core of transcripts serve truly general roles in long-term memory. Progress on the second aim has enabled testing of several mutant phenotypes for olfactory behaviors, and results show that responses are not fully consistent with the combinatorial coding hypothesis. Finally, 14 undergraduate students participated in this research, the neuroscience program attracted extramural funding, and we completed a successful summer program to enhance transitions for community-college students into 4-year colleges to pursue STEM fields.

  13. Targeting latent function: encouraging effective encoding for successful memory training and transfer.

    PubMed

    Lustig, Cindy; Flegal, Kristin E

    2008-12-01

Cognitive training programs for older adults often result in improvements at the group level. However, there are typically large age and individual differences in the size of training benefits. These differences may be related to the degree to which participants implement the processes targeted by the training program. To test this possibility, we tested older adults in a memory-training procedure either under specific strategy instructions designed to encourage semantic, integrative encoding, or in a condition that encouraged time and attention to encoding but allowed participants to choose their own strategy. Both conditions improved the performance of old-old adults relative to an earlier study (D. Bissig & C. Lustig, 2007) and reduced self-reports of everyday memory errors. Performance in the strategy-instruction group was related to preexisting ability; performance in the strategy-choice group was not. The strategy-choice group performed better on a laboratory transfer test of recognition memory, and training performance was correlated with reduced everyday memory errors. Training programs that target participants' latent but inefficiently used abilities while allowing flexibility in bringing those abilities to bear may best promote effective training and transfer. Copyright (c) 2009 APA, all rights reserved.

  14. What's Working in Working Memory Training? An Educational Perspective

    ERIC Educational Resources Information Center

    Redick, Thomas S.; Shipstead, Zach; Wiemers, Elizabeth A.; Melby-Lervåg, Monica; Hulme, Charles

    2015-01-01

    Working memory training programs have generated great interest, with claims that the training interventions can have profound beneficial effects on children's academic and intellectual attainment. We describe the criteria by which to evaluate evidence for or against the benefit of working memory training. Despite the promising results of initial…

  15. Memory access in shared virtual memory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berrendorf, R.

    1992-01-01

    Shared virtual memory (SVM) is a virtual memory layer with a single address space on top of a distributed real memory on parallel computers. We examine the behavior and performance of SVM running a parallel program with medium-grained, loop-level parallelism on top of it. A simulator for the underlying parallel architecture can be used to examine the behavior of SVM more deeply. The influence of several parameters, such as the number of processors, page size, cold or warm start, and restricted page replication, is studied.
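
    The parameter studies this record describes (page size, replication policy) can be mimicked with a toy fault counter. The trace format and both policies below are assumptions for illustration, not Berrendorf's simulator: with replication a faulting processor keeps a copy of the page; without it, the page migrates to the faulting processor.

```python
def count_remote_faults(trace, page_size, allow_replication=True):
    """trace: list of (processor, address) accesses. Every page starts on
    processor 0; a remote fault occurs when a processor touches a page it
    does not hold. Replication lets the faulting processor keep a copy;
    otherwise the page migrates and the previous holder loses it."""
    holders = {}                    # page -> set of processors holding it
    faults = 0
    for proc, addr in trace:
        page = addr // page_size
        held = holders.setdefault(page, {0})
        if proc not in held:
            faults += 1
            if allow_replication:
                held.add(proc)
            else:
                holders[page] = {proc}
    return faults

trace = [(0, 0), (1, 4), (0, 8), (1, 12)]
```

    Sweeping `page_size` over a realistic trace reproduces, in miniature, the kind of parameter study the abstract mentions: large pages shared read-mostly favor replication, while alternating writers make small pages cheaper.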

  16. Memory access in shared virtual memory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berrendorf, R.

    1992-09-01

    Shared virtual memory (SVM) is a virtual memory layer with a single address space on top of a distributed real memory on parallel computers. We examine the behavior and performance of SVM running a parallel program with medium-grained, loop-level parallelism on top of it. A simulator for the underlying parallel architecture can be used to examine the behavior of SVM more deeply. The influence of several parameters, such as the number of processors, page size, cold or warm start, and restricted page replication, is studied.

  17. Error Characterization and Mitigation for 16Nm MLC NAND Flash Memory Under Total Ionizing Dose Effect

    NASA Technical Reports Server (NTRS)

    Li, Yue (Inventor); Bruck, Jehoshua (Inventor)

    2018-01-01

    A data device includes a memory having a plurality of memory cells configured to store data values in accordance with a predetermined rank modulation scheme that is optional and a memory controller that receives a current error count from an error decoder of the data device for one or more data operations of the flash memory device and selects an operating mode for data scrubbing in accordance with the received error count and a program cycles count.
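
    The controller behavior claimed here, selecting an operating mode for data scrubbing from the error count and program-cycle count, can be sketched as a threshold rule. The mode names and thresholds below are hypothetical, since the patent abstract discloses no concrete values.

```python
def select_scrub_mode(error_count, program_cycles,
                      error_threshold=8, wear_threshold=3000):
    """Choose a data-scrubbing mode from the decoder's current error count
    and the block's program-cycle count (illustrative thresholds)."""
    if error_count >= error_threshold and program_cycles >= wear_threshold:
        return "rewrite"   # heavily worn and error-prone: rewrite the block
    if error_count >= error_threshold:
        return "correct"   # many errors but low wear: correct in place
    return "monitor"       # errors within ECC margin: keep monitoring

mode = select_scrub_mode(error_count=12, program_cycles=5000)
```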

  18. Rambrain - a library for virtually extending physical memory

    NASA Astrophysics Data System (ADS)

    Imgrund, Maximilian; Arth, Alexander

    2017-08-01

    We introduce Rambrain, a user space library that manages memory consumption of your code. Using Rambrain you can overcommit memory over the size of physical memory present in the system. Rambrain takes care of temporarily swapping out data to disk and can handle multiples of the physical memory size present. Rambrain is thread-safe, OpenMP and MPI compatible and supports Asynchronous IO. The library was designed to require minimal changes to existing programs and to be easy to use.
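
    The core idea, transparently spilling objects to disk once in-memory capacity is exceeded and reloading them on access, can be sketched in a few lines. The `SwappingStore` class below is a toy analogue written for illustration; it is not Rambrain's actual C++ API.

```python
import os
import pickle
import tempfile

class SwappingStore:
    """Keep at most `capacity` objects in RAM; older ones are transparently
    pickled to disk and reloaded on access (a toy analogue of user-space
    memory overcommitment)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.ram = {}                       # key -> in-memory object
        self.dir = tempfile.mkdtemp()
    def _evict_if_needed(self):
        while len(self.ram) > self.capacity:
            victim, obj = next(iter(self.ram.items()))   # oldest entry
            del self.ram[victim]
            with open(os.path.join(self.dir, str(victim)), "wb") as f:
                pickle.dump(obj, f)
    def put(self, key, obj):
        self.ram[key] = obj
        self._evict_if_needed()
    def get(self, key):
        if key not in self.ram:             # swapped out: reload from disk
            with open(os.path.join(self.dir, str(key)), "rb") as f:
                self.put(key, pickle.load(f))
        return self.ram[key]

store = SwappingStore(capacity=2)
for k in range(5):
    store.put(k, list(range(k * 100, k * 100 + 10)))
```

    After the loop only two of the five lists remain in RAM, yet all five remain accessible, which is the "minimal changes to existing programs" property the abstract emphasizes.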

  19. Conceptual design and feasibility evaluation model of a 10 to the 8th power bit oligatomic mass memory. Volume 2: Feasibility evaluation model

    NASA Technical Reports Server (NTRS)

    Horst, R. L.; Nordstrom, M. J.

    1972-01-01

    The partially populated oligatomic mass memory feasibility model is described and evaluated. A system was desired to verify the feasibility of the oligatomic (mirror) memory approach as applicable to large scale solid state mass memories.

  20. Oscillations in Spurious States of the Associative Memory Model with Synaptic Depression

    NASA Astrophysics Data System (ADS)

    Murata, Shin; Otsubo, Yosuke; Nagata, Kenji; Okada, Masato

    2014-12-01

    The associative memory model is a typical neural network model that can store discretely distributed fixed-point attractors as memory patterns. When the network stores the memory patterns extensively, however, the model has other attractors besides the memory patterns. These attractors are called spurious memories. Both spurious states and memory states are in equilibrium, so there is little difference between their dynamics. Recent physiological experiments have shown that the short-term dynamic synapse called synaptic depression decreases its efficacy of transmission to postsynaptic neurons according to the activities of presynaptic neurons. Previous studies revealed that synaptic depression destabilizes the memory states when the number of memory patterns is finite. However, it is very difficult to study the dynamical properties of the spurious states if the number of memory patterns is proportional to the number of neurons. We investigate the effect of synaptic depression on spurious states by Monte Carlo simulation. The results demonstrate that synaptic depression does not affect the memory states but mainly destabilizes the spurious states and induces periodic oscillations.
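
    A minimal Monte Carlo sketch of an associative memory with a use-dependent synaptic efficacy conveys the setup. The depression rule below (scaling a neuron's outgoing efficacy each time it fires) is a crude stand-in for the dynamic-synapse model studied in the paper, and all parameters are illustrative.

```python
import random

def hebbian_weights(patterns):
    # Standard Hebbian storage of +/-1 patterns in a Hopfield-type network.
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / n
    return w

def recall(w, state, steps=200, depression=0.0, seed=0):
    """Asynchronous zero-temperature updates; each time a unit fires (+1),
    its outgoing efficacy is scaled down by `depression`."""
    rng = random.Random(seed)
    state = list(state)
    eff = [1.0] * len(state)            # per-neuron outgoing efficacy
    for _ in range(steps):
        i = rng.randrange(len(state))
        h = sum(w[i][j] * eff[j] * state[j] for j in range(len(state)))
        state[i] = 1 if h >= 0 else -1
        if state[i] == 1:
            eff[i] *= (1 - depression)
    return state

pattern = [1, -1, 1, -1, 1, -1, 1, -1]
w = hebbian_weights([pattern])
noisy = list(pattern)
noisy[0] = -noisy[0]
```

    Without depression, a noisy cue relaxes to the stored pattern; raising `depression` weakens the attractor, the destabilizing effect the abstract describes for spurious states.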

  1. On Productive Knowledge and Levels of Questions.

    ERIC Educational Resources Information Center

    Andre, Thomas

A model is proposed for memory that stresses a distinction between episodic memory for encoded personal experience and semantic memory for abstractions and generalizations. Basically, the model holds that questions influence the nature of memory representations formed during instruction, and that memory representation controls the way in which…

  2. Computational Model-Based Prediction of Human Episodic Memory Performance Based on Eye Movements

    NASA Astrophysics Data System (ADS)

    Sato, Naoyuki; Yamaguchi, Yoko

    Subjects' episodic memory performance is not simply reflected by eye movements. We use a ‘theta phase coding’ model of the hippocampus to predict subjects' memory performance from their eye movements. Results demonstrate the ability of the model to predict subjects' memory performance. These studies provide a novel approach to computational modeling in the human-machine interface.

  3. A Bayesian Model of the Memory Colour Effect.

    PubMed

    Witzel, Christoph; Olkkonen, Maria; Gegenfurtner, Karl R

    2018-01-01

    According to the memory colour effect, the colour of a colour-diagnostic object is not perceived independently of the object itself. Instead, it has been shown through an achromatic adjustment method that colour-diagnostic objects still appear slightly in their typical colour, even when they are colourimetrically grey. Bayesian models provide a promising approach to capture the effect of prior knowledge on colour perception and to link these effects to more general effects of cue integration. Here, we model memory colour effects using prior knowledge about typical colours as priors for the grey adjustments in a Bayesian model. This simple model does not involve any fitting of free parameters. The Bayesian model roughly captured the magnitude of the measured memory colour effect for photographs of objects. To some extent, the model predicted observed differences in memory colour effects across objects. The model could not account for the differences in memory colour effects across different levels of realism in the object images. The Bayesian model provides a particularly simple account of memory colour effects, capturing some of the multiple sources of variation of these effects.
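
    For Gaussian prior and likelihood, a Bayesian cue-combination model of this kind reduces to a precision-weighted average of the typical object colour and the measured test colour. The sketch below shows that computation; the variances and the banana example are illustrative assumptions, not the paper's fitted values (the paper's model has no free parameters).

```python
def grey_adjustment(prior_mean, prior_var, measured, measure_var):
    """Posterior mean of a Gaussian prior (typical object colour) combined
    with a Gaussian measurement (the achromatic test colour). The memory
    colour effect appears as a pull of the adjustment toward the prior."""
    w = measure_var / (prior_var + measure_var)   # weight on the prior
    return w * prior_mean + (1 - w) * measured

# A banana's typical yellowness pulls a colourimetrically grey (0.0)
# adjustment slightly toward yellow (positive values):
shift = grey_adjustment(prior_mean=1.0, prior_var=1.0,
                        measured=0.0, measure_var=0.25)
```

    A very broad prior (large `prior_var`) yields essentially no shift, which is why the effect is specific to colour-diagnostic objects.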

  4. A Bayesian Model of the Memory Colour Effect

    PubMed Central

    Olkkonen, Maria; Gegenfurtner, Karl R.

    2018-01-01

    According to the memory colour effect, the colour of a colour-diagnostic object is not perceived independently of the object itself. Instead, it has been shown through an achromatic adjustment method that colour-diagnostic objects still appear slightly in their typical colour, even when they are colourimetrically grey. Bayesian models provide a promising approach to capture the effect of prior knowledge on colour perception and to link these effects to more general effects of cue integration. Here, we model memory colour effects using prior knowledge about typical colours as priors for the grey adjustments in a Bayesian model. This simple model does not involve any fitting of free parameters. The Bayesian model roughly captured the magnitude of the measured memory colour effect for photographs of objects. To some extent, the model predicted observed differences in memory colour effects across objects. The model could not account for the differences in memory colour effects across different levels of realism in the object images. The Bayesian model provides a particularly simple account of memory colour effects, capturing some of the multiple sources of variation of these effects. PMID:29760874

  5. The AIP Model of EMDR Therapy and Pathogenic Memories

    PubMed Central

    Hase, Michael; Balmaceda, Ute M.; Ostacoli, Luca; Liebermann, Peter; Hofmann, Arne

    2017-01-01

Eye Movement Desensitization and Reprocessing (EMDR) therapy has been widely recognized as an efficacious treatment for post-traumatic stress disorder (PTSD). In recent years, more insight has been gained into the efficacy of EMDR therapy in a broad range of mental disorders beyond PTSD. The cornerstone of EMDR therapy is its unique model of pathogenesis and change: the adaptive information processing (AIP) model. The AIP model developed by F. Shapiro has found support and differentiation in recent studies on the importance of memories in the pathogenesis of a range of mental disorders besides PTSD. However, theoretical publications or research on the application of the AIP model are still rare. The increasing acceptance of ideas that relate the origin of many mental disorders to the formation and consolidation of implicit dysfunctional memory has led to the formation of the theory of pathogenic memories. Within this theory, implicit dysfunctional memories are considered to form the basis of a variety of mental disorders. The theory of pathogenic memories seems compatible with the AIP model of EMDR therapy, which offers strategies to effectively access and transmute these memories, leading to amelioration or resolution of symptoms. Merging the AIP model with the theory of pathogenic memories may stimulate new research. As a consequence, patients suffering from such memory-based disorders may be diagnosed earlier and treated more effectively. PMID:28983265

  6. A Mathematical Model for the Hippocampus: Towards the Understanding of Episodic Memory and Imagination

    NASA Astrophysics Data System (ADS)

    Tsuda, I.; Yamaguti, Y.; Kuroda, S.; Fukushima, Y.; Tsukada, M.

How does the brain encode episodes? Based on the fact that the hippocampus is responsible for the formation of episodic memory, we have proposed a mathematical model for the hippocampus. Because episodic memory includes a time series of events, an underlying dynamics for the formation of episodic memory is considered to employ an association of memories. David Marr correctly pointed out in his theory of archicortex for a simple memory that the hippocampal CA3 is responsible for the formation of associative memories. However, a conventional mathematical model of associative memory guarantees only a single association of memories unless a rule for the order of successive associations is given. Recent clinical studies by Maguire's group on patients with hippocampal lesions show that such patients cannot construct new stories because they lack the ability to imagine new things. Episodic memory and imagination share several characteristics: imagery, the sense of now, retrieval of semantic information, and narrative structure. Taking these findings into account, we propose a mathematical model of the hippocampus in order to understand the common mechanism of episodic memory and imagination.

  7. GraphReduce: Large-Scale Graph Analytics on Accelerator-Based HPC Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sengupta, Dipanjan; Agarwal, Kapil; Song, Shuaiwen

    2015-09-30

Recent work on real-world graph analytics has sought to leverage the massive amount of parallelism offered by GPU devices, but challenges remain due to the inherent irregularity of graph algorithms and limitations in GPU-resident memory for storing large graphs. We present GraphReduce, a highly efficient and scalable GPU-based framework that operates on graphs that exceed the device’s internal memory capacity. GraphReduce adopts a combination of both edge- and vertex-centric implementations of the Gather-Apply-Scatter programming model and operates on multiple asynchronous GPU streams to fully exploit the high degrees of parallelism in GPUs with efficient graph data movement between the host and the device.
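
    The Gather-Apply-Scatter model that GraphReduce builds on can be illustrated with a serial PageRank sketch. GraphReduce itself runs these phases out-of-core on GPU streams; this toy keeps only the phase structure, and the graph is invented.

```python
def pagerank_gas(edges, n, iters=20, d=0.85):
    """One Gather-Apply-Scatter cycle per iteration:
    gather:  each vertex sums rank/out-degree over its in-edges,
    apply:   each vertex performs the damped rank update,
    scatter: the new ranks become visible to neighbours (implicit here)."""
    out_deg = [0] * n
    for u, v in edges:
        out_deg[u] += 1
    rank = [1.0 / n] * n
    for _ in range(iters):
        gathered = [0.0] * n
        for u, v in edges:                 # gather phase, edge-centric
            gathered[v] += rank[u] / out_deg[u]
        rank = [(1 - d) / n + d * g for g in gathered]   # apply phase
    return rank

# A 3-cycle: by symmetry all vertices end with equal rank.
r = pagerank_gas([(0, 1), (1, 2), (2, 0)], 3)
```

    The edge-centric gather loop is the form that maps well onto GPU streams, since it is a flat scan over the edge list rather than an irregular per-vertex traversal.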

  8. RC64, a Rad-Hard Many-Core High- Performance DSP for Space Applications

    NASA Astrophysics Data System (ADS)

    Ginosar, Ran; Aviely, Peleg; Gellis, Hagay; Liran, Tuvia; Israeli, Tsvika; Nesher, Roy; Lange, Fredy; Dobkin, Reuven; Meirov, Henri; Reznik, Dror

    2015-09-01

    RC64, a novel rad-hard 64-core signal processing chip targets DSP performance of 75 GMACs (16bit), 150 GOPS and 38 single precision GFLOPS while dissipating less than 10 Watts. RC64 integrates advanced DSP cores with a multi-bank shared memory and a hardware scheduler, also supporting DDR2/3 memory and twelve 3.125 Gbps full duplex high speed serial links using SpaceFibre and other protocols. The programming model employs sequential fine-grain tasks and a separate task map to define task dependencies. RC64 is implemented as a 300 MHz integrated circuit on a 65nm CMOS technology, assembled in hermetically sealed ceramic CCGA624 package and qualified to the highest space standards.
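
    The "sequential fine-grain tasks and a separate task map" programming model suggests a scheduler that dispatches each task once its dependencies have completed. The sketch below uses Kahn's topological-sort algorithm as a software stand-in for RC64's hardware scheduler; the task names are invented.

```python
from collections import deque

def schedule(task_map):
    """task_map: {task: [tasks it depends on]}. Returns an execution order
    in which every task runs only after all of its dependencies."""
    indegree = {t: len(deps) for t, deps in task_map.items()}
    dependents = {t: [] for t in task_map}
    for t, deps in task_map.items():
        for d in deps:
            dependents[d].append(t)
    ready = deque(t for t, k in indegree.items() if k == 0)
    order = []
    while ready:
        t = ready.popleft()                # dispatch a ready task
        order.append(t)
        for u in dependents[t]:            # retire it: release dependents
            indegree[u] -= 1
            if indegree[u] == 0:
                ready.append(u)
    return order

order = schedule({"load": [], "fft": ["load"], "filter": ["load"],
                  "store": ["fft", "filter"]})
```

    Because "fft" and "filter" become ready simultaneously, a hardware scheduler is free to run them on different cores; only the dependency order is fixed.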

  9. RC64, a Rad-Hard Many-Core High-Performance DSP for Space Applications

    NASA Astrophysics Data System (ADS)

    Ginosar, Ran; Aviely, Peleg; Liran, Tuvia; Alon, Dov; Mandler, Alberto; Lange, Fredy; Dobkin, Reuven; Goldberg, Miki

    2014-08-01

    RC64, a novel rad-hard 64-core signal processing chip targets DSP performance of 75 GMACs (16bit), 150 GOPS and 20 single precision GFLOPS while dissipating less than 10 Watts. RC64 integrates advanced DSP cores with a multi-bank shared memory and a hardware scheduler, also supporting DDR2/3 memory and twelve 2.5 Gbps full duplex high speed serial links using SpaceFibre and other protocols. The programming model employs sequential fine-grain tasks and a separate task map to define task dependencies. RC64 is implemented as a 300 MHz integrated circuit on a 65nm CMOS technology, assembled in hermetically sealed ceramic CCGA624 package and qualified to the highest space standards.

  10. Efficient detection of dangling pointer error for C/C++ programs

    NASA Astrophysics Data System (ADS)

    Zhang, Wenzhe

    2017-08-01

Dangling pointer error is pervasive in C/C++ programs and it is very hard to detect. This paper introduces an efficient detector for dangling pointer errors in C/C++ programs. By selectively leaving some memory accesses unmonitored, our method reduces the memory-monitoring overhead and thus achieves better performance than previous methods. Experiments show that our method achieves an average speedup of 9% over a previous compiler-instrumentation-based method and more than 50% over a previous page-protection-based method.
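
    The underlying idea, checking memory accesses against a record of live allocations while leaving selected accesses unmonitored, can be sketched with a toy shadow heap. This is illustrative only; the paper's detector works via compiler instrumentation of real C/C++ pointers, not a Python object table.

```python
class ShadowHeap:
    """Track live allocations; a monitored dereference of a freed handle
    raises instead of silently returning stale data. Monitoring can be
    skipped per access, mirroring selective instrumentation."""
    def __init__(self):
        self.live = {}
        self.next_id = 0
    def malloc(self, value):
        self.next_id += 1
        self.live[self.next_id] = value
        return self.next_id
    def free(self, ptr):
        del self.live[ptr]
    def deref(self, ptr, monitored=True):
        if monitored and ptr not in self.live:
            raise RuntimeError("dangling pointer dereference")
        return self.live.get(ptr)

heap = ShadowHeap()
p = heap.malloc("data")
heap.free(p)
# heap.deref(p) would now raise; heap.deref(p, monitored=False) would not.
```

    The performance trade-off in the paper corresponds to choosing which accesses run with `monitored=False`: accesses a static analysis proves safe need no check.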

  11. Parallelization of Program to Optimize Simulated Trajectories (POST3D)

    NASA Technical Reports Server (NTRS)

    Hammond, Dana P.; Korte, John J. (Technical Monitor)

    2001-01-01

This paper describes the parallelization of the Program to Optimize Simulated Trajectories (POST3D). POST3D uses a gradient-based optimization algorithm that reaches an optimum design point by moving from one design point to the next. The gradient calculations required to complete the optimization process dominate the computational time and have been parallelized using a Single Program Multiple Data (SPMD) approach on a distributed-memory NUMA (non-uniform memory access) architecture. The Origin2000 was used for the tests presented.
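
    The SPMD pattern here, identical code applied to disjoint slices of the gradient vector, can be sketched with a thread pool standing in for distributed processes. The `gradient` helper and its central-difference scheme are illustrative, not POST3D's actual routines.

```python
from concurrent.futures import ThreadPoolExecutor

def gradient(f, x, h=1e-6, workers=4):
    """Central-difference gradient of f at x. Each worker runs the same
    code (Single Program) on its own components (Multiple Data), which is
    the pattern used to parallelize the dominant gradient cost."""
    def partial(i):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        return (f(xp) - f(xm)) / (2 * h)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(partial, range(len(x))))

# Gradient of f(v) = sum(v_i^2) at (1, 2, 3) is (2, 4, 6).
g = gradient(lambda v: sum(c * c for c in v), [1.0, 2.0, 3.0])
```

    Each component's finite-difference pair is independent of every other, so the decomposition needs no communication until the final gather, which is why the gradient step parallelizes so cleanly.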

  12. Effects of Visual Working Memory Training and Direct Instruction on Geometry Problem Solving in Students with Geometry Difficulties

    ERIC Educational Resources Information Center

    Zhang, Dake

    2017-01-01

    We examined the effectiveness of (a) a working memory (WM) training program and (b) a combination program involving both WM training and direct instruction for students with geometry difficulties (GD). Four students with GD participated. A multiple-baseline design across participants was employed. During the Phase 1, students received six sessions…

  13. Re-Living Dangerous Memories: Online Journaling to Interrogate Spaces of "Otherness" in an Educational Administration Program at a Midwestern University

    ERIC Educational Resources Information Center

    Friend, Jennifer; Caruthers, Loyce; McCarther, Shirley Marie

    2009-01-01

    This theoretical paper explores the use of online journaling in an educational administration program to interrogate spaces of "otherness"--the geographical spaces of cities where poor children and children of color live--and the dangerous memories prospective administrators may have about diversity. The cultures of most educational administration…

  14. A bio-inspired memory model for structural health monitoring

    NASA Astrophysics Data System (ADS)

    Zheng, Wei; Zhu, Yong

    2009-04-01

    Long-term structural health monitoring (SHM) systems need intelligent management of the monitoring data. By analogy with the way the human brain processes memories, we present a bio-inspired memory model (BIMM) that does not require prior knowledge of the structure parameters. The model contains three time-domain areas: a sensory memory area, a short-term memory area and a long-term memory area. First, the initial parameters of the structural state are specified to establish safety criteria. Then the large amount of monitoring data that falls within the safety limits is filtered while the data outside the safety limits are captured instantly in the sensory memory area. Second, disturbance signals are distinguished from danger signals in the short-term memory area. Finally, the stable data of the structural balance state are preserved in the long-term memory area. A strategy for priority scheduling via fuzzy c-means for the proposed model is then introduced. An experiment on bridge tower deformation demonstrates that the proposed model can be applied for real-time acquisition, limited-space storage and intelligent mining of the monitoring data in a long-term SHM system.

  15. Chemical Memory Reactions Induced Bursting Dynamics in Gene Expression

    PubMed Central

    Tian, Tianhai

    2013-01-01

Memory is a ubiquitous phenomenon in biological systems in which the present system state is not entirely determined by the current conditions but also depends on the time evolutionary path of the system. Specifically, many memory phenomena are characterized by chemical memory reactions that may fire under particular system conditions. These conditional chemical reactions contradict the extant stochastic approaches for modeling chemical kinetics and have increasingly posed significant challenges to mathematical modeling and computer simulation. To tackle the challenge, I proposed a novel theory consisting of the memory chemical master equations and the memory stochastic simulation algorithm. A stochastic model for single-gene expression was proposed to illustrate the key function of memory reactions in inducing the bursting dynamics of gene expression that has recently been observed in experiments. The importance of memory reactions has been further validated by the stochastic model of the p53-MDM2 core module. Simulations showed that memory reactions are a major mechanism for realizing both sustained oscillations of p53 protein numbers in single cells and damped oscillations over a population of cells. These successful applications of the memory modeling framework suggest that this innovative theory is an effective and powerful tool to study memory processes and conditional chemical reactions in a wide range of complex biological systems. PMID:23349679
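
    A memory reaction, one whose propensity depends on the system's path rather than only its current state, can be grafted onto a standard Gillespie-style simulation. The sketch below is an illustrative approximation (propensities are held fixed between events, and all rates are invented); it is not the paper's memory stochastic simulation algorithm.

```python
import random

def memory_ssa(t_end=50.0, seed=3):
    """Bursting gene toy model. The switch-off channel is a 'memory
    reaction': its propensity stays zero until the gene has been ON for a
    waiting period, so whether it can fire depends on the trajectory."""
    rng = random.Random(seed)
    t, on, protein, on_since = 0.0, False, 0, 0.0
    k_on, k_off, k_make, k_deg, wait = 0.5, 1.0, 10.0, 0.2, 0.5
    while t < t_end:
        channels = [
            ("switch_on", 0.0 if on else k_on),
            ("switch_off", k_off if on and (t - on_since) >= wait else 0.0),
            ("make", k_make if on else 0.0),
            ("degrade", k_deg * protein),
        ]
        total = sum(rate for _, rate in channels)
        t += rng.expovariate(total)          # time to next event
        pick, name = rng.uniform(0, total), None
        for n, rate in channels:             # choose the firing channel
            if rate > 0 and pick <= rate:
                name = n
                break
            pick -= rate
        if name is None:                     # float round-off: last active one
            name = [n for n, rate in channels if rate > 0][-1]
        if name == "switch_on":
            on, on_since = True, t
        elif name == "switch_off":
            on = False
        elif name == "make":
            protein += 1
        else:
            protein -= 1
    return protein

final_protein = memory_ssa()
```

    Because the OFF transition is blocked until the waiting period elapses, ON periods have a guaranteed minimum length, producing the burst-like production the abstract attributes to memory reactions.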

  16. Chemical memory reactions induced bursting dynamics in gene expression.

    PubMed

    Tian, Tianhai

    2013-01-01

Memory is a ubiquitous phenomenon in biological systems in which the present system state is not entirely determined by the current conditions but also depends on the time evolutionary path of the system. Specifically, many memory phenomena are characterized by chemical memory reactions that may fire under particular system conditions. These conditional chemical reactions contradict the extant stochastic approaches for modeling chemical kinetics and have increasingly posed significant challenges to mathematical modeling and computer simulation. To tackle the challenge, I proposed a novel theory consisting of the memory chemical master equations and the memory stochastic simulation algorithm. A stochastic model for single-gene expression was proposed to illustrate the key function of memory reactions in inducing the bursting dynamics of gene expression that has recently been observed in experiments. The importance of memory reactions has been further validated by the stochastic model of the p53-MDM2 core module. Simulations showed that memory reactions are a major mechanism for realizing both sustained oscillations of p53 protein numbers in single cells and damped oscillations over a population of cells. These successful applications of the memory modeling framework suggest that this innovative theory is an effective and powerful tool to study memory processes and conditional chemical reactions in a wide range of complex biological systems.

  17. [Anterograde declarative memory and its models].

    PubMed

    Barbeau, E-J; Puel, M; Pariente, J

    2010-01-01

Patient H.M.'s recent death provides the opportunity to highlight the importance of his contribution to a better understanding of the anterograde amnesic syndrome. The thorough study of this patient over five decades largely contributed to shaping the unitary model of declarative memory. This model holds that declarative memory is a single system that cannot be fractionated into subcomponents. As a system, it depends mainly on medial temporal lobe structures. The objective of this review is to present the main characteristics of the different modular models that have been proposed as alternatives to the unitary model. It is also an opportunity to present patients who, although less famous than H.M., made significant contributions to the field of memory. The characteristics of the five main modular models are presented, including the most recent one (the perceptual-mnemonic model). The differences among these models, as well as the points on which they converge, are highlighted. Different possibilities that could help reconcile the unitary and modular approaches are considered. Although modular models differ significantly in many aspects, all converge on the notion that memory for single items and semantic memory can be dissociated from memory for complex material and context-rich episodes. In addition, these models converge concerning the brain structures critical for these forms of memory: item and semantic memory, as well as familiarity, are thought to depend largely on anterior subhippocampal areas, while relational, context-rich memory and recollective experiences are thought to depend largely on the hippocampal formation. Copyright © 2010 Elsevier Masson SAS. All rights reserved.

  18. Low-voltage-operated organic one-time programmable memory using printed organic thin-film transistors and antifuse capacitors.

    PubMed

    Jung, Soon-Won; Na, Bock Soon; Park, Chan Woo; Koo, Jae Bon

    2014-11-01

We demonstrate an organic one-time programmable memory cell formed entirely at plastic-compatible temperatures. All the processes are performed below 130 degrees C. Our memory cell consists of a printed organic transistor and an organic capacitor. Inkjet-printed organic transistors are fabricated by using high-k polymer dielectric blends comprising poly(vinylidenefluoride-trifluoroethylene) [P(VDF-TrFE)] and poly(methyl methacrylate) (PMMA) for low-voltage operation. P(NDI2OD-T2) transistors have a high field-effect mobility of 0.2 cm2/Vs and a low operation gate voltage of less than 10 V. The operation voltage effectively decreases owing to the high permittivity of the P(VDF-TrFE):PMMA blended film. The data in the memory cell are programmed by electrically breaking the organic capacitor. The organic capacitor acts as an antifuse capacitor: it is initially open, and it becomes permanently short-circuited by applying a high voltage. The organic memory cells are programmed with 4 V, and they are read out with 2 V. The memory data are read out by sensing the current in the memory cell. The printed organic one-time programmable memory is suitable for applications storing small amounts of data, such as low-cost radio-frequency identification (RFID) tags.

  19. Physiological, Molecular and Genetic Mechanisms of Long-Term Habituation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Calin-Jageman, Robert J

Work funded on this grant has explored the mechanisms of long-term habituation, a ubiquitous form of learning that plays a key role in basic cognitive functioning. Specifically, behavioral, physiological, and molecular mechanisms of habituation have been explored using a simple model system, the tail-elicited siphon-withdrawal reflex (T-SWR) in the marine mollusk Aplysia californica. Substantial progress has been made on the first and third aims, providing some fundamental insights into the mechanisms by which memories are stored. We have characterized the physiological correlates of short- and long-term habituation. We found that short-term habituation is accompanied by a robust sensory adaptation, whereas long-term habituation is accompanied by alterations in sensory and interneuron synaptic efficacy. Thus, our data indicate that memories can be shifted between different sites in a neural network as they are consolidated from short to long term. At the molecular level, we have accomplished microarray analysis comparing gene expression in both habituated and control ganglia. We have identified a network of putatively regulated transcripts that seems particularly targeted towards synaptic changes (e.g., SNAP25, calmodulin). We are now beginning additional work to confirm regulation of these transcripts and build a more detailed understanding of the cascade of molecular events leading to the permanent storage of long-term memories. On the third aim, we have fostered a nascent neuroscience program via a variety of successful initiatives. We have funded over 11 undergraduate neuroscience scholars, several of whom have been recognized at national and regional levels for their research. We have also conducted a pioneering summer research program for community college students which is helping enhance access of underrepresented groups to life science careers.
Despite minimal progress on the second aim, this project has provided a) novel insight into the network mechanisms by which short-term memories are permanently stored, and b) a strong foundation for continued growth of an excellent undergraduate neuroscience program.

  20. Crystallographic and general use programs for the XDS Sigma 5 computer

    NASA Technical Reports Server (NTRS)

    Snyder, R. L.

    1973-01-01

Programs in basic FORTRAN 4 are described, which fall into three categories: (1) interactive programs to be executed under time sharing (BTM); (2) non-interactive programs which are executed in batch processing mode (BPM); and (3) large non-interactive programs which require more memory than is available in the normal BPM/BTM operating system and must be run overnight on a special system called XRAY which releases about 45,000 words of memory to the user. Programs in categories (1) and (2) are stored as FORTRAN source files in the account FSNYDER. Programs in category (3) are stored in the XRAY system as load modules. The type of file in account FSNYDER is identified by the first two letters in the name.

  1. The Neuroanatomical, Neurophysiological and Psychological Basis of Memory: Current Models and Their Origins

    PubMed Central

    Camina, Eduardo; Güell, Francisco

    2017-01-01

This review aims to classify and clarify, from a neuroanatomical, neurophysiological, and psychological perspective, different memory models that are currently widespread in the literature as well as to describe their origins. We believe it is important to consider previous developments without which one cannot adequately understand the kinds of models that are now current in the scientific literature. This article intends to provide a comprehensive and rigorous overview for understanding and ordering the latest scientific advances related to this subject. The main forms of memory presented include sensory memory, short-term memory, and long-term memory. Information from the world around us is first stored by sensory memory, thus enabling the storage and future use of such information. Short-term memory (or working memory) refers to information processed in a short period of time. Long-term memory allows us to store information for long periods of time, including information that can be retrieved consciously (explicit memory) or unconsciously (implicit memory). PMID:28713278

  3. Source Memory Rehabilitation: A Review Toward Recommendations for Setting Up a Strategy Training Aimed at the "What, Where, and When" of Episodic Retrieval.

    PubMed

    El Haj, Mohamad; Kessels, Roy P C; Allain, Philippe

    2016-01-01

    Source memory is a core component of episodic recall as it allows for the reconstruction of contextual details characterizing the acquisition of episodic events. Unlike episodic memory, little is known about source memory rehabilitation. Our review addresses this issue by emphasizing several strategies as useful tools in source memory rehabilitation programs. Four main strategies are likely to improve source recall in amnesic patients, namely: (a) contextual cueing, (b) unitization, (c) errorless learning, and (d) executive function programs. The rationale behind our suggestion is that: (a) contextual cues reinstated at retrieval can serve as retrieval cues and enhance source memory; (b) unitization as an encoding process allows for the integration of several pieces of contextual information into a new single entity; (c) errorless learning may prevent patients from making errors during source learning; and (d) as source memory deterioration has been classically attributed to executive dysfunction, the rehabilitation of the latter ability is likely to maintain the former ability. Besides these four strategies, our review suggests several additional rehabilitation techniques, such as the vanishing cues and spaced retrieval methods. Another additional strategy is the use of electronic devices. By gathering these strategies, our review provides a helpful guideline for clinicians dealing with source memory impairments. Our review further highlights the lack of randomized and controlled studies in the field of source memory rehabilitation.

  4. Novel conformal organic antireflective coatings for advanced I-line lithography

    NASA Astrophysics Data System (ADS)

    Deshpande, Shreeram V.; Nowak, Kelly A.; Fowler, Shelly; Williams, Paul; Arjona, Mikko

    2001-08-01

    Flash memory chips are playing a critical role in semiconductor devices due to the increased popularity of handheld electronic communication devices such as cell phones and PDAs (Personal Digital Assistants). Flash memory offers two primary advantages in semiconductor devices. First, it offers the flexibility of in-circuit programming, which reduces losses from programming errors and significantly shortens time to market for new devices. Second, flash memory provides double-density storage through stacked gate structures, which increases memory capacity and thus saves significantly on chip real estate. However, due to stacked gate structures, the requirements for manufacturing flash memory devices differ significantly from those of traditional memory devices. Stacked gate structures also pose unique challenges to lithographic patterning materials such as Bottom Anti-Reflective Coating (BARC) compositions used to achieve CD control and to minimize the standing-wave effect in photolithography. To be applicable in flash memory manufacturing, a BARC should form a conformal coating over the high topography of stacked gate features as well as provide the normal anti-reflection properties for CD control. In this paper we report on a new highly conformal advanced i-line BARC for use in the design and manufacture of flash memory devices. Conformal BARCs, being significantly thinner in trenches than planarizing BARCs, offer the advantage of reducing BARC overetch and thus minimizing resist thickness loss.

  5. Neural-Network Simulator

    NASA Technical Reports Server (NTRS)

    Mitchell, Paul H.

    1991-01-01

    F77NNS (FORTRAN 77 Neural Network Simulator) computer program simulates popular back-error-propagation neural network. Designed to take advantage of vectorization when used on computers having this capability, also used on any computer equipped with ANSI-77 FORTRAN Compiler. Problems involving matching of patterns or mathematical modeling of systems fit class of problems F77NNS designed to solve. Program has restart capability so neural network solved in stages suitable to user's resources and desires. Enables user to customize patterns of connections between layers of network. Size of neural network F77NNS applied to limited only by amount of random-access memory available to user.
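
    F77NNS itself is a general FORTRAN 77 package, but the back-error-propagation network it simulates can be sketched compactly. The following Python toy (all sizes, names, and parameter values are illustrative, not part of F77NNS) trains a two-layer sigmoid network by gradient descent:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class TinyNet:
    """Minimal 2-2-1 back-propagation network (illustrative only;
    F77NNS is a far more general, restartable FORTRAN 77 program)."""

    def __init__(self, rng):
        self.w1 = [[rng.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
        self.b1 = [0.0, 0.0]
        self.w2 = [rng.uniform(-1, 1) for _ in range(2)]
        self.b2 = 0.0

    def forward(self, x):
        # hidden activations, then a single sigmoid output unit
        self.h = [sigmoid(sum(w * xi for w, xi in zip(ws, x)) + b)
                  for ws, b in zip(self.w1, self.b1)]
        self.o = sigmoid(sum(w * h for w, h in zip(self.w2, self.h)) + self.b2)
        return self.o

    def train_step(self, x, t, lr=0.5):
        """One gradient-descent update on squared error; returns the error."""
        o = self.forward(x)
        delta_o = (o - t) * o * (1 - o)        # output-layer error term
        for j in range(2):
            # hidden delta uses the pre-update output weight
            delta_h = delta_o * self.w2[j] * self.h[j] * (1 - self.h[j])
            self.w2[j] -= lr * delta_o * self.h[j]
            for i in range(2):
                self.w1[j][i] -= lr * delta_h * x[i]
            self.b1[j] -= lr * delta_h
        self.b2 -= lr * delta_o
        return (o - t) ** 2
```

    A network this small only learns simple Boolean mappings; F77NNS additionally offers restart capability and user-customized patterns of connections between layers.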

  6. Quiescence of Memory CD8(+) T Cells Is Mediated by Regulatory T Cells through Inhibitory Receptor CTLA-4.

    PubMed

    Kalia, Vandana; Penny, Laura Anne; Yuzefpolskiy, Yevgeniy; Baumann, Florian Martin; Sarkar, Surojit

    2015-06-16

    Immune memory cells are poised to rapidly expand and elaborate effector functions upon reinfection yet exist in a functionally quiescent state. The paradigm is that memory T cells remain inactive due to lack of T cell receptor (TCR) stimuli. Here, we report that regulatory T (Treg) cells orchestrate memory T cell quiescence by suppressing effector and proliferation programs through inhibitory receptor, cytotoxic-T-lymphocyte-associated protein-4 (CTLA-4). Loss of Treg cells resulted in activation of genome-wide transcriptional programs characteristic of effector T cells and drove transitioning as well as established memory CD8(+) T cells toward terminally differentiated KLRG-1(hi)IL-7Rα(lo)GzmB(hi) phenotype, with compromised metabolic fitness, longevity, polyfunctionality, and protective efficacy. CTLA-4 functionally replaced Treg cells in trans to rescue memory T cell defects and restore homeostasis. These studies present the CTLA-4-CD28-CD80/CD86 axis as a potential target to accelerate vaccine-induced immunity and improve T cell memory quality in current cancer immunotherapies proposing transient Treg cell ablation. Copyright © 2015 Elsevier Inc. All rights reserved.

  7. Failure of self-consistency in the discrete resource model of visual working memory.

    PubMed

    Bays, Paul M

    2018-06-03

    The discrete resource model of working memory proposes that each individual has a fixed upper limit on the number of items they can store at one time, due to division of memory into a few independent "slots". According to this model, responses on short-term memory tasks consist of a mixture of noisy recall (when the tested item is in memory) and random guessing (when the item is not in memory). This provides two opportunities to estimate capacity for each observer: first, based on their frequency of random guesses, and second, based on the set size at which the variability of stored items reaches a plateau. The discrete resource model makes the simple prediction that these two estimates will coincide. Data from eight published visual working memory experiments provide strong evidence against such a correspondence. These results present a challenge for discrete models of working memory that impose a fixed capacity limit. Copyright © 2018 The Author. Published by Elsevier Inc. All rights reserved.
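
    The mixture the abstract describes can be made concrete numerically. In the toy below (a minimal sketch of a K-slot model; the noise level and all other parameter values are illustrative), a probed item is recalled with Gaussian noise if it occupies one of K slots and is otherwise a uniform guess on a circular feature space:

```python
import math
import random

def guess_rate(capacity, set_size):
    """Predicted probability of a random guess under a K-slot model."""
    return max(0.0, 1.0 - capacity / set_size)

def simulate_trial(capacity, set_size, sd=0.3, rng=random):
    """One recall trial on a circular feature space (radians).

    With probability min(1, K/N) the probed item is in a slot and the
    response is the true value plus Gaussian noise; otherwise the
    response is a uniform guess. Returns the wrapped recall error.
    """
    target = rng.uniform(-math.pi, math.pi)
    if rng.random() < min(1.0, capacity / set_size):
        response = target + rng.gauss(0.0, sd)
    else:
        response = rng.uniform(-math.pi, math.pi)
    # wrap the error back into the circular space
    return (response - target + math.pi) % (2 * math.pi) - math.pi
```

    Under this model the predicted guess rate is max(0, 1 - K/N), so capacity estimated from guessing frequency and capacity estimated from the plateau in response variability should coincide; the paper's point is that, empirically, they do not.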

  8. Remembering the Past and Imagining the Future: A Neural Model of Spatial Memory and Imagery

    ERIC Educational Resources Information Center

    Byrne, Patrick; Becker, Suzanna; Burgess, Neil

    2007-01-01

    The authors model the neural mechanisms underlying spatial cognition, integrating neuronal systems and behavioral data, and address the relationships between long-term memory, short-term memory, and imagery, and between egocentric and allocentric and visual and ideothetic representations. Long-term spatial memory is modeled as attractor dynamics…

  9. A Formal Model of Capacity Limits in Working Memory

    ERIC Educational Resources Information Center

    Oberauer, Klaus; Kliegl, Reinhold

    2006-01-01

    A mathematical model of working-memory capacity limits is proposed on the key assumption of mutual interference between items in working memory. Interference is assumed to arise from overwriting of features shared by these items. The model was fit to time-accuracy data of memory-updating tasks from four experiments using nonlinear mixed effect…

  10. Foreign Language Methods and an Information Processing Model of Memory.

    ERIC Educational Resources Information Center

    Willebrand, Julia

    The major approaches to language teaching (audiolingual method, generative grammar, Community Language Learning and Silent Way) are investigated to discover whether or not they are compatible in structure with an information-processing model of memory (IPM). The model of memory used was described by Roberta Klatzky in "Human Memory:…

  11. A Multinomial Model of Event-Based Prospective Memory

    ERIC Educational Resources Information Center

    Smith, Rebekah E.; Bayen, Ute J.

    2004-01-01

    Prospective memory is remembering to perform an action in the future. The authors introduce the 1st formal model of event-based prospective memory, namely, a multinomial model that includes 2 separate parameters related to prospective memory processes. The 1st measures preparatory attentional processes, and the 2nd measures retrospective memory…

  12. The impact of modality and working memory capacity on achievement in a multimedia environment

    NASA Astrophysics Data System (ADS)

    Stromfors, Charlotte M.

    This study explored the impact of working memory capacity on student learning in a dual-modality multimedia environment titled Visualizing Topography. This computer-based instructional program focused on the basic skills in reading and interpreting topographic maps. Two versions of the program presented the same instructional content but varied the modality of verbal information: the audio-visual condition coordinated topographic maps and narration; the visual-visual condition provided the same topographic maps with readable text. An analysis of covariance procedure was conducted to evaluate the effects of the two conditions in relation to working memory capacity, controlling for individual differences in spatial visualization and prior knowledge. Scores on the Figural Intersection Test were used to separate subjects into three levels of measured working memory capacity: low, medium, and high. Subjects accessed Visualizing Topography by way of the Internet and proceeded independently through the program. The program architecture was linear in format. Subjects had a minimum amount of flexibility within each of five segments, but not between segments. One hundred and fifty-one subjects were randomly assigned to either the audio-visual or the visual-visual condition. The average time spent in the program was thirty-one minutes. The results of the ANCOVA revealed a small to moderate modality effect favoring the audio-visual condition. The results also showed that subjects with low and medium working memory capacity benefited more from the audio-visual condition than the visual-visual condition, while subjects with a high working memory capacity did not benefit from either condition. Although splitting the data reduced group sizes, ANCOVA results by gender suggested that the audio-visual condition favored females with low working memory capacities.
The results have implications for designers of educational software, the teachers who select software, and the students themselves. Splitting information into two, non-redundant sources, one audio and one visual, may effectively extend working memory capacity. This is especially significant for the student population encountering difficult science concepts that require the formation and manipulation of mental representations. It is recommended that multimedia environments be designed or selected with attention to modality conditions that facilitate student learning.

  13. Fucosyltransferase Induction during Influenza Virus Infection Is Required for the Generation of Functional Memory CD4+ T Cells

    PubMed Central

    Carrette, Florent; Henriquez, Monique L.; Fujita, Yu

    2018-01-01

    T cells mediating influenza viral control are instructed in lymphoid and nonlymphoid tissues to differentiate into memory T cells that confer protective immunity. The mechanisms by which influenza virus–specific memory CD4+ T cells arise have been attributed to changes in transcription factors, cytokines and cytokine receptors, and metabolic programming. The molecules involved in these biosynthetic pathways, including proteins and lipids, are modified to varying degrees of glycosylation, fucosylation, sialation, and sulfation, which can alter their function. It is currently unknown how the glycome enzymatic machinery regulates CD4+ T cell effector and memory differentiation. In a murine model of influenza virus infection, we found that fucosyltransferase enzymatic activity was induced in effector and memory CD4+ T cells. Using CD4+ T cells deficient in the Fut4/7 enzymes that are expressed only in hematopoietic cells, we found decreased frequencies of effector cells with reduced expression of T-bet and NKG2A/C/E in the lungs during primary infection. Furthermore, Fut4/7−/− effector CD4+ T cells had reduced survival with no difference in proliferation or capacity for effector function. Although Fut4/7−/− CD4+ T cells seeded the memory pool after primary infection, they failed to form tissue-resident cells, were dysfunctional, and were unable to re-expand after secondary infection. Our findings highlight an important regulatory axis mediated by cell-intrinsic fucosyltransferase activity in CD4+ T cell effectors that ensure the development of functional memory CD4+ T cells. PMID:29491007

  14. Computer-based cognitive retraining for adults with chronic acquired brain injury: a pilot study.

    PubMed

    Li, Kitsum; Robertson, Julie; Ramos, Joshua; Gella, Stephanie

    2013-10-01

    This study evaluated the effectiveness of a computer-based cognitive retraining (CBCR) program on improving memory and attention deficits in individuals with a chronic acquired brain injury (ABI). Twelve adults with a chronic ABI demonstrating deficits in memory and attention were recruited from a convenience sample from the community. Using a quasi-experimental one-group pretest-posttest design, a significant improvement was found in both memory and attention scores postintervention using the cognitive screening tool. This study supported the effectiveness of CBCR programs in improving cognitive deficits in memory and attention in individuals with chronic ABI. Further research is recommended to validate these findings with a larger ABI population and to investigate transfer to improvement in occupational performance that supports daily living skills.

  15. Implementations of BLAST for parallel computers.

    PubMed

    Jülich, A

    1995-02-01

    The BLAST sequence comparison programs have been ported to a variety of parallel computers: the shared-memory machine Cray Y-MP 8/864 and the distributed-memory architectures Intel iPSC/860 and nCUBE. Additionally, the programs were ported to run on workstation clusters. We explain the parallelization techniques and consider the pros and cons of these methods. The BLAST programs are very well suited to parallelization for a moderate number of processors. We illustrate our results using the program blastp as an example. As input data for blastp, a 799-residue protein query sequence and the protein database PIR were used.
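
    The parallelization pattern such ports rely on, partitioning the sequence database across workers and reducing the per-partition best hits, can be sketched as follows. This is a toy with an invented shared-k-mer score, not BLAST's actual alignment statistics or code:

```python
from concurrent.futures import ThreadPoolExecutor

def kmer_score(query, subject, k=3):
    """Toy similarity: number of k-mer positions in `subject` whose k-mer
    also occurs in `query` (a stand-in for real alignment scoring)."""
    qk = {query[i:i + k] for i in range(len(query) - k + 1)}
    return sum(1 for i in range(len(subject) - k + 1)
               if subject[i:i + k] in qk)

def search(query, database, workers=4):
    """Partition the database across workers and keep the best hit,
    mirroring the database-splitting used on distributed-memory ports."""
    def best(chunk):
        # (-1, "") sentinel keeps the reduction well-typed on empty chunks
        return max(((kmer_score(query, s), s) for s in chunk),
                   default=(-1, ""))
    chunks = [database[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return max(pool.map(best, chunks))
```

    Because each worker touches only its own partition and the reduction is a single max, the pattern maps equally well onto shared memory, message passing, or workstation clusters.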

  16. Program scheme using common source lines in channel stacked NAND flash memory with layer selection by multilevel operation

    NASA Astrophysics Data System (ADS)

    Kim, Do-Bin; Kwon, Dae Woong; Kim, Seunghyun; Lee, Sang-Ho; Park, Byung-Gook

    2018-02-01

    To obtain high channel boosting potential and reduce program disturbance in channel stacked NAND flash memory with layer selection by multilevel (LSM) operation, a new program scheme using a boosted common source line (CSL) is proposed. The proposed scheme can be achieved by applying a proper bias to each layer through its own CSL. Technology computer-aided design (TCAD) simulations are performed to verify the validity of the new method in LSM. The simulations reveal that the program disturbance characteristics are effectively improved by the proposed scheme.

  17. Resource Isolation Method for Program's Performance on CMP

    NASA Astrophysics Data System (ADS)

    Guan, Ti; Liu, Chunxiu; Xu, Zheng; Li, Huicong; Ma, Qiang

    2017-10-01

    Data centers and cloud computing are increasingly popular, benefiting both customers and providers. In a data center or cluster, however, more than one program commonly runs on a single server, and co-located programs may interfere with one another. The interference is sometimes negligible, but it can also cause a serious drop in performance. To avoid this problem, isolating resources for different programs is a better choice. In this paper we propose a lightweight resource isolation method to improve a program's performance. The method uses Cgroups to set dedicated CPU and memory resources for a program, aiming to guarantee the program's performance. Three engines realize this method: the Program Monitor Engine tracks the program's CPU and memory usage and passes the information to the Resource Assignment Engine; the Resource Assignment Engine calculates the amount of CPU and memory resource that should be assigned to the program; and the Cgroups Control Engine divides resources with the Linux tool Cgroups and places the program in a control group for execution. The experimental results show that the proposed resource isolation method improves program performance.
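
    The control-group mechanism the method relies on can be sketched directly against the cgroup v2 filesystem interface. In this hypothetical Python helper, `cpuset.cpus` and `memory.max` are standard cgroup v2 control files, but the group name, values, and guard logic are illustrative (the paper does not publish code, and actually applying limits requires root and an enabled cpuset controller):

```python
import os

CGROUP_ROOT = "/sys/fs/cgroup"          # typical cgroup v2 mount point

def cgroup_limits(cpus, mem_bytes):
    """Map cgroup v2 control files to the values that pin a program
    to dedicated CPUs and cap its memory (hypothetical helper)."""
    return {
        "cpuset.cpus": cpus,            # e.g. "0-3"
        "memory.max": str(mem_bytes),   # hard memory limit in bytes
    }

def apply_limits(group, limits, pid):
    """Create the group, write its limits, and move `pid` into it.
    Needs root and a cgroup v2 hierarchy; shown for illustration only."""
    path = os.path.join(CGROUP_ROOT, group)
    os.makedirs(path, exist_ok=True)
    for name, value in limits.items():
        with open(os.path.join(path, name), "w") as f:
            f.write(value)
    with open(os.path.join(path, "cgroup.procs"), "w") as f:
        f.write(str(pid))
```

    Writing a PID into `cgroup.procs` is what "places the program in a control group for execution": from then on the kernel enforces the CPU and memory bounds regardless of what else runs on the server.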

  18. Plated wire random access memories

    NASA Technical Reports Server (NTRS)

    Gouldin, L. D.

    1975-01-01

    A program was conducted to construct 4096-word by 18-bit random access, NDRO plated-wire memory units. The memory units were subjected to comprehensive functional and environmental tests at the end-item level to verify conformance with the specified requirements. A technical description of the unit is given, along with acceptance test data sheets.

  19. Embodied Memory Judgments: A Case of Motor Fluency

    ERIC Educational Resources Information Center

    Yang, Shu-Ju; Gallo, David A.; Beilock, Sian L.

    2009-01-01

    It is well known that perceptual and conceptual fluency can influence episodic memory judgments. Here, the authors asked whether fluency arising from the motor system also impacts recognition memory. Past research has shown that the perception of letters automatically activates motor programs of typing actions in skilled typists. In this study,…

  20. Pre-Service Teachers' Juxtaposed Memories: Implications for Teacher Education

    ERIC Educational Resources Information Center

    Balli, Sandra J.

    2014-01-01

    Teacher education research has long understood that pre-service teachers' beliefs about teaching are well established by the time they enroll in a teacher education program. Based on the understanding that teacher memories help shape pre-service teachers' beliefs, teacher educators have sought ways to both honor such memories and facilitate a…

  1. The Effects of an Afterschool Physical Activity Program on Working Memory in Preadolescent Children

    ERIC Educational Resources Information Center

    Kamijo, Keita; Pontifex, Matthew B.; O'Leary, Kevin C.; Scudder, Mark R.; Wu, Chien-Ting; Castelli, Darla M.; Hillman, Charles H.

    2011-01-01

    The present study examined the effects of a 9-month randomized control physical activity intervention aimed at improving cardiorespiratory fitness on changes in working memory performance in preadolescent children relative to a waitlist control group. Participants performed a modified Sternberg task, which manipulated working memory demands based…

  2. General purpose programmable accelerator board

    DOEpatents

    Robertson, Perry J.; Witzke, Edward L.

    2001-01-01

    A general purpose accelerator board and acceleration method comprising use of: one or more programmable logic devices; a plurality of memory blocks; bus interface for communicating data between the memory blocks and devices external to the board; and dynamic programming capabilities for providing logic to the programmable logic device to be executed on data in the memory blocks.

  3. Reading Comprehension and Working Memory's Executive Processes: An Intervention Study in Primary School Students

    ERIC Educational Resources Information Center

    Garcia-Madruga, Juan A.; Elosua, Maria Rosa; Gil, Laura; Gomez-Veiga, Isabel; Vila, Jose Oscar; Orjales, Isabel; Contreras, Antonio; Rodriguez, Raquel; Melero, Maria Angeles; Duque, Gonzalo

    2013-01-01

    Reading comprehension is a highly demanding task that involves the simultaneous process of extracting and constructing meaning in which working memory's executive processes play a crucial role. In this article, a training program on working memory's executive processes to improve reading comprehension is presented and empirically tested in two…

  4. The Relative Success of a Self-Help and a Group-Based Memory Training Program for Older Adults

    PubMed Central

    Hastings, Erin C.; West, Robin L.

    2011-01-01

    This study evaluates self-help and group-based memory training programs to test for their differential impact on memory beliefs and performance. Self-help participants used a manual that presented strategies for name, story, and list recall and practice exercises. Matched content from that same manual was presented by the trainer in 2-hr weekly group sessions for the group-based trainees. Relative to a wait-list control group, most memory measures showed significant gains for both self-help and group-based training, with no significant training condition differences, and these gains were maintained at follow-up. Belief measures showed that locus of control was significantly higher for the self-help and group-based training than the control group; memory self-efficacy significantly declined for controls, increased for group-trained participants, and remained constant in the self-help group. Self-efficacy change in a self-help group may require more opportunities for interacting with peers and/or an instructor emphasizing one's potential for memory change. PMID:19739914

  5. The Effects of Emotion on Episodic Memory for TV Commercials.

    ERIC Educational Resources Information Center

    Thorson, Esther; Friestad, Marian

    Based on the associational nature of memory, the distinction between episodic and semantic memory, and the notion of memory strength, a model was developed of the role of emotion in the memory of television commercials. The model generated the following hypotheses: (1) emotional commercials will more likely be recalled than nonemotional…

  6. Fault Tolerant Frequent Pattern Mining

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shohdy, Sameh; Vishnu, Abhinav; Agrawal, Gagan

    FP-Growth is a Frequent Pattern Mining (FPM) algorithm that has been extensively used to study correlations and patterns in large scale datasets. While several researchers have designed distributed memory FP-Growth algorithms, it is pivotal to consider fault tolerant FP-Growth, which can address the increasing fault rates in large scale systems. In this work, we propose a novel parallel, algorithm-level fault-tolerant FP-Growth algorithm. We leverage algorithmic properties and advanced MPI features to guarantee an O(1) space complexity, achieved by using the dataset memory space itself for checkpointing. We also propose a recovery algorithm that can use in-memory and disk-based checkpointing, though in many cases the recovery can be completed without any disk access, while incurring no memory overhead for checkpointing. We evaluate our FT algorithm on a large scale InfiniBand cluster with several large datasets using up to 2K cores. Our evaluation demonstrates excellent efficiency for checkpointing and recovery in comparison to the disk-based approach. We have also observed 20x average speed-up in comparison to Spark, establishing that a well designed algorithm can easily outperform a solution based on a general fault-tolerant programming model.
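
    The core idea, reusing the already-consumed portion of the dataset buffer as checkpoint storage so that checkpointing needs no extra space, can be illustrated on a much simpler task than FP-Growth. The sketch below does plain frequent-item counting with an inline checkpoint and recovery; it is a toy analogue of the space-reuse idea, not the authors' MPI implementation:

```python
def count_with_inline_checkpoint(transactions, min_support,
                                 start=0, counts=None, fail_at=None):
    """Frequent-item counting that checkpoints partial counts into
    transaction slots that have already been consumed, so checkpointing
    costs no additional memory (toy illustration of the O(1) idea)."""
    counts = dict(counts or {})
    for i in range(start, len(transactions)):
        if fail_at is not None and i == fail_at:
            raise RuntimeError("simulated process failure")
        for item in transactions[i]:
            counts[item] = counts.get(item, 0) + 1
        # the consumed slot doubles as checkpoint storage
        transactions[i] = ("CKPT", i + 1, dict(counts))
    return {item for item, c in counts.items() if c >= min_support}

def recover(transactions):
    """Scan for the latest inline checkpoint and return the resume point."""
    next_index, counts = 0, {}
    for slot in transactions:
        if isinstance(slot, tuple) and slot and slot[0] == "CKPT":
            next_index, counts = slot[1], dict(slot[2])
    return next_index, counts
```

    After a simulated crash, `recover` finds the last checkpoint in the buffer and mining resumes from that transaction, reproducing the failure-free result without ever touching disk, which mirrors the paper's in-memory recovery path.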

  7. Working memory, situation models, and synesthesia

    DOE PAGES

    Radvansky, Gabriel A.; Gibson, Bradley S.; McNerney, M. Windy

    2013-03-04

    Research on language comprehension suggests a strong relationship between working memory span measures and language comprehension. However, there is also evidence that this relationship weakens at higher levels of comprehension, such as the situation model level. The current study explored this relationship by comparing 10 grapheme–color synesthetes who have additional color experiences when they read words that begin with different letters and 48 normal controls on a number of tests of complex working memory capacity and processing at the situation model level. On all tests of working memory capacity, the synesthetes outperformed the controls. Importantly, there was no carryover benefit for the synesthetes for processing at the situation model level. This reinforces the idea that although some aspects of language comprehension are related to working memory span scores, this applies less directly to situation model levels. As a result, this suggests that theories of working memory must take into account this limitation, and the working memory processes that are involved in situation model construction and processing must be derived.

  9. Memory complaints in epilepsy: An examination of the role of mood and illness perceptions.

    PubMed

    Tinson, Deborah; Crockford, Christopher; Gharooni, Sara; Russell, Helen; Zoeller, Sophie; Leavy, Yvonne; Lloyd, Rachel; Duncan, Susan

    2018-03-01

    The study examined the role of mood and illness perceptions in explaining the variance in the memory complaints of patients with epilepsy. Forty-four patients from an outpatient tertiary care center and 43 volunteer controls completed a formal assessment of memory and a verbal fluency test, as well as validated self-report questionnaires on memory complaints, mood, and illness perceptions. In hierarchical multiple regression analyses, objective memory test performance and verbal fluency did not contribute significantly to the variance in memory complaints for either patients or controls. In patients, illness perceptions and mood were highly correlated. Illness perceptions correlated more highly with memory complaints than mood and were therefore added to the multiple regression analysis. This accounted for an additional 25% of the variance, after controlling for objective memory test performance and verbal fluency, and the model was significant (model B). In order to compare with other studies, mood was added to a second model instead of illness perceptions. This accounted for an additional 24% of the variance, which was again significant (model C). In controls, low mood accounted for 11% of the variance in memory complaints (model C2). A measure of illness perceptions was more highly correlated with the memory complaints of patients with epilepsy than a measure of mood. In a hierarchical multiple regression model, illness perceptions accounted for 25% of the variance in memory complaints. Illness perceptions could provide useful information in a clinical investigation into the self-reported memory complaints of patients with epilepsy, alongside the assessment of mood and formal memory testing. Copyright © 2017 Elsevier Inc. All rights reserved.
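
    The "additional variance" figures a hierarchical analysis reports are increments in R-squared when a predictor block is added. For a single predictor added to one other, the increment can be computed from zero-order correlations. A minimal pure-Python sketch (variable names illustrative; the study's actual data are not reproduced here):

```python
import math

def pearson(x, y):
    """Zero-order Pearson correlation between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(vx * vy)

def delta_r2(y, x1, x2):
    """R-squared gained by adding predictor x2 to a model containing x1.

    Uses the standard two-predictor identity
    R2 = (r_y1^2 + r_y2^2 - 2*r_y1*r_y2*r_12) / (1 - r_12^2).
    """
    r_y1, r_y2, r_12 = pearson(y, x1), pearson(y, x2), pearson(x1, x2)
    r2_full = ((r_y1 ** 2 + r_y2 ** 2 - 2 * r_y1 * r_y2 * r_12)
               / (1 - r_12 ** 2))
    return r2_full - r_y1 ** 2
```

    Because x1 and x2 can be correlated (as mood and illness perceptions were here), the increment depends on the order of entry, which is why the paper reports separate models with illness perceptions and with mood entered second.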

  10. Development of the Ubiquitous Spaced Retrieval-Based Memory Advancement and Rehabilitation Training Program

    PubMed Central

    Han, Ji Won; Oh, Kyusoo; Yoo, Sooyoung; Kim, Eunhye; Ahn, Ki-Hwan; Son, Yeon-Joo; Kim, Tae Hui; Chi, Yeon Kyung

    2014-01-01

    Objective The Ubiquitous Spaced Retrieval-based Memory Advancement and Rehabilitation Training (USMART) program was developed by transforming the spaced retrieval-based memory training, which consisted of 24 face-to-face sessions, into a self-administered program with an iPad app. The objective of this study was to evaluate the feasibility and efficacy of USMART in elderly subjects with mild cognitive impairment (MCI). Methods Feasibility was evaluated by checking the satisfaction of the participants with a 5-point Likert scale. The efficacy of the program on cognitive functions was evaluated by the Korean version of the Consortium to Establish a Registry for Alzheimer's Disease Neuropsychological Assessment Battery before and after USMART. Results Among the 10 participants, 7 completed both pre- and post-USMART assessments. The overall satisfaction score was 8.0±1.0 out of 10. The mean Word List Memory Test (WLMT) scores significantly increased after USMART training after adjusting for age, educational levels, baseline Mini-Mental Status Examination scores, and the number of training sessions (pre-USMART, 16.0±4.1; post-USMART, 17.9±4.5; p=0.014, RM-ANOVA). The magnitude of the improvements in the WLMT scores significantly correlated with the number of training sessions during 4 weeks (r=0.793; p=0.033). Conclusion USMART was effective in improving memory and was well tolerated by most participants with MCI, suggesting that it may be a convenient and cost-effective alternative for the cognitive rehabilitation of elderly subjects with cognitive impairments. Further studies with large numbers of participants are necessary to examine the relationship between the number of training sessions and the improvements in memory function. PMID:24605124
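
    Spaced retrieval rests on expanding rehearsal intervals: each successful recall lengthens the delay before the next test, while a failure resets it. A minimal sketch (the interval units and doubling factor are illustrative; the abstract does not specify USMART's actual schedule):

```python
def spaced_retrieval_schedule(first_interval, sessions, factor=2):
    """Expanding rehearsal intervals: each successful recall multiplies
    the delay before the next test (illustrative parameters only)."""
    delays, delay = [], first_interval
    for _ in range(sessions):
        delays.append(delay)
        delay *= factor
    return delays

def next_interval(current, recalled, first_interval=1, factor=2):
    """Expand the interval on successful recall; reset it on failure."""
    return current * factor if recalled else first_interval
```

    Encoding the schedule in software is what makes the training self-administrable: the app, rather than a therapist across 24 face-to-face sessions, decides when each retrieval attempt is due.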

  11. Synaptic tagging, evaluation of memories, and the distal reward problem.

    PubMed

    Päpper, Marc; Kempter, Richard; Leibold, Christian

    2011-01-01

    Long-term synaptic plasticity exhibits distinct phases. The synaptic tagging hypothesis suggests an early phase in which synapses are prepared, or "tagged," for protein capture, and a late phase in which those proteins are integrated into the synapses to achieve memory consolidation. The synapse specificity of the tags is consistent with conventional neural network models of associative memory. Memory consolidation through protein synthesis, however, is neuron specific, and its functional role in those models has not been assessed. Here, using a theoretical network model, we test the tagging hypothesis on its potential to prolong memory lifetimes in an online-learning paradigm. We find that protein synthesis, though not synapse specific, prolongs memory lifetimes if it is used to evaluate memory items on a cellular level. In our model we assume that only "important" memory items evoke protein synthesis such that these become more stable than "unimportant" items, which do not evoke protein synthesis. The network model comprises an equilibrium distribution of synaptic states that is very susceptible to the storage of new items: Most synapses are in a state in which they are plastic and can be changed easily, whereas only those synapses that are essential for the retrieval of the important memory items are in the stable late phase. The model can solve the distal reward problem, where the initial exposure of a memory item and its evaluation are temporally separated. Synaptic tagging hence provides a viable mechanism to consolidate and evaluate memories on a synaptic basis.
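
The evaluation-dependent consolidation described above can be caricatured in a few lines (a toy palimpsest sketch under simplified assumptions, with binary patterns and a fixed consolidated fraction, not the paper's actual network model):

```python
import numpy as np

def palimpsest(n_syn=400, n_items=30, important_idx=0, frozen_frac=0.5, seed=1):
    """Toy palimpsest memory with cellular consolidation (illustrative only).

    Each item writes a random +/-1 pattern onto the synapses that are still
    plastic. The 'important' item evokes protein synthesis, moving a fraction
    of synapses into the stable late phase, so later items cannot overwrite
    that part of its trace. Returns the overlap of each item's pattern with
    the final weights (a crude retrieval-strength measure)."""
    rng = np.random.RandomState(seed)
    w = np.zeros(n_syn)
    stable = np.zeros(n_syn, dtype=bool)
    patterns = rng.choice([-1.0, 1.0], size=(n_items, n_syn))
    for k in range(n_items):
        w[~stable] = patterns[k][~stable]     # early phase: plastic synapses
        if k == important_idx:                # evaluation: tag -> late phase
            stable |= rng.rand(n_syn) < frozen_frac
    return [float(np.mean(w * p)) for p in patterns]
```

Under these assumptions the important item's overlap stays near the consolidated fraction long after storage, while equally old unimportant items decay toward chance, which is the qualitative effect the abstract describes.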

  12. Global Neural Pattern Similarity as a Common Basis for Categorization and Recognition Memory

    PubMed Central

    Xue, Gui; Love, Bradley C.; Preston, Alison R.; Poldrack, Russell A.

    2014-01-01

    Familiarity, or memory strength, is a central construct in models of cognition. In previous categorization and long-term memory research, correlations have been found between psychological measures of memory strength and activation in the medial temporal lobes (MTLs), which suggests a common neural locus for memory strength. However, activation alone is insufficient for determining whether the same mechanisms underlie neural function across domains. Guided by mathematical models of categorization and long-term memory, we develop a theory and a method to test whether memory strength arises from the global similarity among neural representations. In human subjects, we find significant correlations between global similarity among activation patterns in the MTLs and both subsequent memory confidence in a recognition memory task and model-based measures of memory strength in a category learning task. Our work bridges formal cognitive theories and neuroscientific models by illustrating that the same global similarity computations underlie processing in multiple cognitive domains. Moreover, by establishing a link between neural similarity and psychological memory strength, our findings suggest that there may be an isomorphism between psychological and neural representational spaces that can be exploited to test cognitive theories at both the neural and behavioral levels. PMID:24872552
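
The notion of memory strength as global similarity can be sketched as a summed-similarity computation in the spirit of exemplar models such as the GCM (the exponential form, Euclidean distance, and the parameter c here are illustrative assumptions, not the authors' exact neural measure):

```python
import numpy as np

def global_similarity(probe, stored, c=1.0):
    """Global-matching memory strength: the summed exponential similarity of
    a probe pattern to all stored patterns. Higher values mean the probe is
    close to many stored representations, i.e. feels more familiar."""
    stored = np.atleast_2d(stored)
    dists = np.linalg.norm(stored - probe, axis=1)   # distance to each trace
    return float(np.sum(np.exp(-c * dists)))         # summed similarity
```

A probe identical to a stored pattern yields a higher strength than a novel, distant probe, mirroring the correlation between neural global similarity and memory confidence reported in the abstract.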

  13. Tolerant (parallel) Programming

    NASA Technical Reports Server (NTRS)

    DiNucci, David C.; Bailey, David H. (Technical Monitor)

    1997-01-01

    In order to be truly portable, a program must be tolerant of a wide range of development and execution environments, and a parallel program is just one which must be tolerant of a very wide range. This paper first defines the term "tolerant programming", then describes many layers of tools to accomplish it. The primary focus is on F-Nets, a formal model for expressing computation as a folded partial-ordering of operations, thereby providing an architecture-independent expression of tolerant parallel algorithms. For implementing F-Nets, Cooperative Data Sharing (CDS) is a subroutine package for implementing communication efficiently in a large number of environments (e.g. shared memory and message passing). Software Cabling (SC), a very-high-level graphical programming language for building large F-Nets, possesses many of the features normally expected from today's computer languages (e.g. data abstraction, array operations). Finally, L2^3 is a CASE tool which facilitates the construction, compilation, execution, and debugging of SC programs.

  14. Activation and Binding in Verbal Working Memory: A Dual-Process Model for the Recognition of Nonwords

    ERIC Educational Resources Information Center

    Oberauer, Klaus; Lange, Elke B.

    2009-01-01

    The article presents a mathematical model of short-term recognition based on dual-process models and the three-component theory of working memory [Oberauer, K. (2002). Access to information in working memory: Exploring the focus of attention. "Journal of Experimental Psychology: Learning, Memory, and Cognition", 28, 411-421]. Familiarity arises…

  15. A new model for CD8+ T cell memory inflation based upon a recombinant adenoviral vector

    PubMed Central

    Bolinger, Beatrice; Sims, Stuart; O’Hara, Geraldine; de Lara, Catherine; Tchilian, Elma; Firner, Sonja; Engeler, Daniel; Ludewig, Burkhard; Klenerman, Paul

    2013-01-01

    CD8+ T cell memory inflation, first described in murine cytomegalovirus (MCMV) infection, is characterized by the accumulation of high-frequency, functional antigen-specific CD8+ T cell pools with an effector-memory phenotype and enrichment in peripheral organs. Although persistence of antigen is considered essential, the rules underpinning memory inflation are still unclear. The MCMV model is, however, complicated by the virus's low-level persistence and stochastic reactivation. We developed a new model of memory inflation based upon a βgal-recombinant adenovirus vector (Ad-LacZ). After i.v. administration in C57BL/6 mice we observe marked memory inflation against the βgal96 epitope, while a second epitope, βgal497, undergoes classical memory formation. The inflationary T cell responses show kinetics, distribution, phenotype and functions similar to those seen in MCMV and are reproduced using alternative routes of administration. Memory inflation in this model is dependent on MHC Class II. As in MCMV, only the inflating epitope showed immunoproteasome-independence. These data define a new model for memory inflation, which is fully replication-independent, internally controlled and reproduces the key immunologic features of the CD8+ T cell response. This model provides insight into the mechanisms responsible for memory inflation, and since it is based on a vaccine vector, is also relevant to novel T cell-inducing vaccines in humans. PMID:23509359

  16. Parallelization of elliptic solver for solving 1D Boussinesq model

    NASA Astrophysics Data System (ADS)

    Tarwidi, D.; Adytia, D.

    2018-03-01

    In this paper, a parallel implementation of an elliptic solver for the 1D Boussinesq model is presented. The numerical solution of the Boussinesq model is obtained by applying a staggered-grid scheme to the continuity, momentum, and elliptic equations of the model. The tridiagonal system emerging from the numerical scheme of the elliptic equation is solved by the cyclic reduction algorithm. The parallel implementation of cyclic reduction is executed on multicore processors with shared-memory architectures using OpenMP. To measure the performance of the parallel program, the number of grid points is varied from 2^8 to 2^14. Two numerical test cases, the propagation of a solitary wave and of a standing wave, are used to evaluate the parallel program, and the numerical results are verified against the analytical solutions for solitary and standing waves. The best speedups for the solitary and standing wave test cases are about 2.07 with 2^14 grid points and 1.86 with 2^13 grid points, respectively, both obtained using 8 threads. Moreover, the best efficiency of the parallel program is 76.2% and 73.5% for the solitary and standing wave test cases, respectively.
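
The cyclic reduction algorithm used for the tridiagonal system can be sketched in serial form as follows (a generic textbook version for systems of n = 2^s - 1 unknowns, not the authors' code; the paper parallelizes the per-level loops with OpenMP, which is possible because the updates within each level are independent):

```python
import math

def cyclic_reduction(a, b, c, d):
    """Solve a tridiagonal system a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i]
    by cyclic reduction. Requires n = 2**s - 1 unknowns with a[0] = c[n-1] = 0.
    All updates within one reduction level are independent of each other,
    which is what makes the method attractive on shared-memory machines."""
    n = len(b)
    s = int(round(math.log2(n + 1)))
    assert 2 ** s - 1 == n, "cyclic reduction needs n = 2**s - 1"
    a, b, c, d = (list(map(float, v)) for v in (a, b, c, d))
    # Forward reduction: each level eliminates every other remaining unknown.
    for level in range(1, s):
        half, step = 2 ** (level - 1), 2 ** level
        for i in range(step - 1, n, step):
            alpha = a[i] / b[i - half]
            beta = c[i] / b[i + half]
            d[i] -= alpha * d[i - half] + beta * d[i + half]
            b[i] -= alpha * c[i - half] + beta * a[i + half]
            a[i] = -alpha * a[i - half]
            c[i] = -beta * c[i + half]
    # Back substitution: solve the middle unknown, then fill in the rest.
    x = [0.0] * n
    val = lambda j: x[j] if 0 <= j < n else 0.0
    for level in range(s - 1, -1, -1):
        half, step = 2 ** level, 2 ** (level + 1)
        for i in range(half - 1, n, step):
            x[i] = (d[i] - a[i] * val(i - half) - c[i] * val(i + half)) / b[i]
    return x
```

In an OpenMP implementation, the inner `for i in range(...)` loops are the natural worksharing targets, since each iteration touches a disjoint set of equations.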

  17. Generalized memory associativity in a network model for the neuroses

    NASA Astrophysics Data System (ADS)

    Wedemann, Roseli S.; Donangelo, Raul; de Carvalho, Luís A. V.

    2009-03-01

    We review concepts introduced in earlier work, where a neural network mechanism describes some mental processes in neurotic pathology and psychoanalytic working-through, as associative memory functioning, according to the findings of Freud. We developed a complex network model, where modules corresponding to sensorial and symbolic memories interact, representing unconscious and conscious mental processes. The model illustrates Freud's idea that consciousness is related to symbolic and linguistic memory activity in the brain. We have introduced a generalization of the Boltzmann machine to model memory associativity. Model behavior is illustrated with simulations and some of its properties are analyzed with methods from statistical mechanics.
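
The associative-memory functioning that the model builds on can be illustrated with a minimal deterministic Hopfield-style network (the paper's model generalizes the stochastic Boltzmann machine; this sketch shows only the underlying pattern-completion principle):

```python
import numpy as np

def hebbian_weights(patterns):
    """Hebbian outer-product learning over +/-1 patterns; the diagonal is
    zeroed so that units have no self-connections."""
    p = np.atleast_2d(patterns).astype(float)
    w = p.T @ p / p.shape[1]
    np.fill_diagonal(w, 0.0)
    return w

def recall(w, state, steps=10):
    """Deterministic synchronous updates until the state stops changing:
    a corrupted cue is driven toward the nearest stored pattern."""
    s = np.array(state, dtype=float)
    for _ in range(steps):
        s_new = np.where(w @ s >= 0, 1.0, -1.0)
        if np.array_equal(s_new, s):
            break
        s = s_new
    return s
```

A Boltzmann machine replaces the hard threshold with a stochastic sigmoid update, but the associative principle, partial cues retrieving complete stored memories, is the same.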

  18. Goal-Driven Autonomy and Robust Architecture for Long-Duration Missions (Year 1: 1 July 2013 - 31 July 2014)

    DTIC Science & Technology

    2014-09-30

    (Architecture diagram labels:) Mental Domain (Ω); Goal Management (goal change, goal input); World (Ψ); World Model (-Ψ); Mission & Goals; Episodic Memory; Semantic Memory; Activation Traces; Meta-Level Control; Introspective Monitoring; Memory; Reasoning Trace; Strategies; Metaknowledge; Self Model. ...it is from incorrect or missing memory associations (i.e., indices). Similarly, correct information may exist in the input stream, but may not be

  19. Long-term memory and volatility clustering in high-frequency price changes

    NASA Astrophysics Data System (ADS)

    Oh, Gabjin; Kim, Seunghwan; Eom, Cheoljun

    2008-02-01

    We studied the long-term memory in diverse stock market indices and foreign exchange rates using Detrended Fluctuation Analysis (DFA). For all high-frequency market data studied, no significant long-term memory property was detected in the return series, while a strong long-term memory property was found in the volatility time series. The possible causes of the long-term memory property were investigated using the return data filtered by the AR(1) model, reflecting the short-term memory property, the GARCH(1,1) model, reflecting the volatility clustering property, and the FIGARCH model, reflecting the long-term memory property of the volatility time series. The memory effect in the AR(1) filtered return and volatility time series remained unchanged, while the long-term memory property diminished significantly in the volatility series of the GARCH(1,1) filtered data. Notably, no long-term memory property remained once the long-term memory of volatility was eliminated by the FIGARCH model. For all data used, although the Hurst exponents of the volatility time series changed considerably over time, those of the time series with the volatility clustering effect removed diminished significantly. Our results imply that the long-term memory property of the volatility time series can be attributed to the volatility clustering observed in the financial time series.
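
The DFA method named above can be sketched as follows (a generic textbook implementation; the window sizes and first-order detrending are common choices, not necessarily the authors' exact settings): integrate the demeaned series, detrend it piecewise in windows of size n, and read the scaling exponent off the log-log slope of the fluctuation function.

```python
import numpy as np

def dfa(series, window_sizes):
    """Detrended Fluctuation Analysis: returns the scaling exponent alpha.
    alpha ~ 0.5 for uncorrelated noise; alpha > 0.5 indicates long-term
    memory (and ~1.5 for an integrated random walk)."""
    x = np.asarray(series, dtype=float)
    profile = np.cumsum(x - x.mean())            # integrated (profile) series
    fluctuations = []
    for n in window_sizes:
        t = np.arange(n)
        f2 = []
        for w in range(len(profile) // n):       # non-overlapping windows
            seg = profile[w * n:(w + 1) * n]
            coeffs = np.polyfit(t, seg, 1)       # linear detrend per window
            f2.append(np.mean((seg - np.polyval(coeffs, t)) ** 2))
        fluctuations.append(np.sqrt(np.mean(f2)))
    # The DFA exponent is the slope of log F(n) versus log n.
    alpha, _ = np.polyfit(np.log(window_sizes), np.log(fluctuations), 1)
    return alpha
```

Applied to returns this recovers the near-0.5 exponents the abstract reports, while a persistent (volatility-like) series yields an exponent well above 0.5.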

  20. Social Influences in Rehabilitation Planning: Blueprint for the 21st Century. A Report of the Mary E. Switzer Memorial Seminar (9th, New York, NY, November, 1984).

    ERIC Educational Resources Information Center

    Perlman, Leonard G., Ed.; Austin, Gary F., Ed.

    The monograph resulted from the Switzer Memorial Seminar which brought together 20 persons recognized for their achievements in rehabilitation. Five action papers formed the foundation for the program: "Social Influences on Rehabilitation: Introductory Remarks" (E. Berkowitz); "Trends and Demographic Studies on Programs for Disabled Persons" (L.…

  1. Experiences modeling ocean circulation problems on a 30 node commodity cluster with 3840 GPU processor cores.

    NASA Astrophysics Data System (ADS)

    Hill, C.

    2008-12-01

    Low cost graphic cards today use many, relatively simple, compute cores to deliver support for memory bandwidth of more than 100GB/s and theoretical floating point performance of more than 500 GFlop/s. Right now this performance is, however, only accessible to highly parallel algorithm implementations that (i) can use a hundred or more concurrently executing 32-bit floating point cores, (ii) can work with graphics memory that resides on the graphics card side of the graphics bus and (iii) can be partially expressed in a language that can be compiled by a graphics programming tool. In this talk we describe our experiences implementing a complete, but relatively simple, time dependent shallow-water equations simulation targeting a cluster of 30 computers each hosting one graphics card. The implementation takes into account the considerations (i), (ii) and (iii) listed previously. We code our algorithm as a series of numerical kernels. Each kernel is designed to be executed by multiple threads of a single process. Kernels are passed memory blocks to compute over, which can be persistent blocks of memory on a graphics card. Each kernel is individually implemented using the NVidia CUDA language but driven from a higher level supervisory code that is almost identical to a standard model driver. The supervisory code controls the overall simulation timestepping, but is written to minimize data transfer between main memory and graphics memory (a massive performance bottleneck on current systems). Using the recipe outlined we can boost the performance of our cluster by nearly an order of magnitude, relative to the same algorithm executing only on the cluster CPUs. Achieving this performance boost requires that many threads are available to each graphics processor for execution within each numerical kernel and that the simulation's working set of data can fit into the graphics card memory.
As we describe, this puts interesting upper and lower bounds on the problem sizes for which this technology is currently most useful. However, many interesting problems fit within this envelope. Looking forward, we extrapolate our experience to estimate full-scale ocean model performance and applicability. Finally we describe preliminary hybrid mixed 32-bit and 64-bit experiments with graphics cards that support 64-bit arithmetic, albeit at a lower performance.

  2. Deformation rate-, hold time-, and cycle-dependent shape-memory performance of Veriflex-E resin

    NASA Astrophysics Data System (ADS)

    McClung, Amber J. W.; Tandon, Gyaneshwar P.; Baur, Jeffery W.

    2013-02-01

    Shape-memory polymers have attracted great interest in recent years for application in reconfigurable structures (for instance morphing aircraft, micro air vehicles, and deployable space structures). However, before such applications can be attempted, the mechanical behavior of the shape-memory polymers must be thoroughly understood. The present study represents an assessment of viscous effects during multiple shape-memory cycles of Veriflex-E, an epoxy-based, thermally triggered shape-memory polymer resin. The experimental program is designed to explore the influence of multiple thermomechanical cycles on the shape-memory performance of Veriflex-E. The effects of the deformation rate and hold times at elevated temperature on the shape-memory behavior are also investigated.

  3. Formation of model-free motor memories during motor adaptation depends on perturbation schedule.

    PubMed

    Orban de Xivry, Jean-Jacques; Lefèvre, Philippe

    2015-04-01

    Motor adaptation to an external perturbation relies on several mechanisms such as model-based, model-free, strategic, or repetition-dependent learning. Depending on the experimental conditions, each of these mechanisms has more or less weight in the final adaptation state. Here we focused on the conditions that lead to the formation of a model-free motor memory (Huang VS, Haith AM, Mazzoni P, Krakauer JW. Neuron 70: 787-801, 2011), i.e., a memory that does not depend on an internal model or on the size or direction of the errors experienced during the learning. The formation of such model-free motor memory was hypothesized to depend on the schedule of the perturbation (Orban de Xivry JJ, Ahmadi-Pajouh MA, Harran MD, Salimpour Y, Shadmehr R. J Neurophysiol 109: 124-136, 2013). Here we built on this observation by directly testing the nature of the motor memory after abrupt or gradual introduction of a visuomotor rotation, in an experimental paradigm where the presence of model-free motor memory can be identified (Huang VS, Haith AM, Mazzoni P, Krakauer JW. Neuron 70: 787-801, 2011). We found that relearning was faster after abrupt than gradual perturbation, which suggests that model-free learning is reduced during gradual adaptation to a visuomotor rotation. In addition, the presence of savings after abrupt introduction of the perturbation but gradual extinction of the motor memory suggests that unexpected errors are necessary to induce a model-free motor memory. Overall, these data support the hypothesis that different perturbation schedules do not lead to a more or less stabilized motor memory but to distinct motor memories with different attributes and neural representations.

  4. Formation of model-free motor memories during motor adaptation depends on perturbation schedule

    PubMed Central

    Lefèvre, Philippe

    2015-01-01

    Motor adaptation to an external perturbation relies on several mechanisms such as model-based, model-free, strategic, or repetition-dependent learning. Depending on the experimental conditions, each of these mechanisms has more or less weight in the final adaptation state. Here we focused on the conditions that lead to the formation of a model-free motor memory (Huang VS, Haith AM, Mazzoni P, Krakauer JW. Neuron 70: 787–801, 2011), i.e., a memory that does not depend on an internal model or on the size or direction of the errors experienced during the learning. The formation of such model-free motor memory was hypothesized to depend on the schedule of the perturbation (Orban de Xivry JJ, Ahmadi-Pajouh MA, Harran MD, Salimpour Y, Shadmehr R. J Neurophysiol 109: 124–136, 2013). Here we built on this observation by directly testing the nature of the motor memory after abrupt or gradual introduction of a visuomotor rotation, in an experimental paradigm where the presence of model-free motor memory can be identified (Huang VS, Haith AM, Mazzoni P, Krakauer JW. Neuron 70: 787–801, 2011). We found that relearning was faster after abrupt than gradual perturbation, which suggests that model-free learning is reduced during gradual adaptation to a visuomotor rotation. In addition, the presence of savings after abrupt introduction of the perturbation but gradual extinction of the motor memory suggests that unexpected errors are necessary to induce a model-free motor memory. Overall, these data support the hypothesis that different perturbation schedules do not lead to a more or less stabilized motor memory but to distinct motor memories with different attributes and neural representations. PMID:25673736

  5. A Framework for Cognitive Interventions Targeting Everyday Memory Performance and Memory Self-efficacy

    PubMed Central

    McDougall, Graham J.

    2009-01-01

    The human brain has the potential for self-renewal through adult neurogenesis, which is the birth of new neurons. Neural plasticity implies that the nervous system can change and grow. This understanding has created new possibilities for cognitive enhancement and rehabilitation. However, as individuals age, they have decreased confidence, or memory self-efficacy, which is directly related to their everyday memory performance. In this article, a developmental account of studies about memory self-efficacy and nonpharmacologic cognitive intervention models is presented and a cognitive intervention model, called the cognitive behavioral model of everyday memory, is proposed. PMID:19065089

  6. Retrieval-induced NMDA receptor-dependent Arc expression in two models of cocaine-cue memory.

    PubMed

    Alaghband, Yasaman; O'Dell, Steven J; Azarnia, Siavash; Khalaj, Anna J; Guzowski, John F; Marshall, John F

    2014-12-01

    The association of environmental cues with drugs of abuse results in persistent drug-cue memories. These memories contribute significantly to relapse among addicts. While conditioned place preference (CPP) is a well-established paradigm frequently used to examine the modulation of drug-cue memories, very few studies have used the non-preference-based model conditioned activity (CA) for this purpose. Here, we used both experimental approaches to investigate the neural substrates of cocaine-cue memories. First, we directly compared, in a consistent setting, the involvement of cortical and subcortical brain regions in cocaine-cue memory retrieval by quantifying activity-regulated cytoskeletal-associated (Arc) protein expression in both the CPP and CA models. Second, because NMDA receptor activation is required for Arc expression, we investigated the NMDA receptor dependency of memory persistence using the CA model. In both the CPP and CA models, drug-paired animals showed significant increases in Arc immunoreactivity in regions of the frontal cortex and amygdala compared to unpaired controls. Additionally, administration of an NMDA receptor antagonist (MK-801 or memantine) immediately after cocaine-CA memory reactivation impaired the subsequent conditioned locomotion associated with the cocaine-paired environment. The enhanced Arc expression evident in a subset of corticolimbic regions after retrieval of a cocaine-context memory, observed in both the CPP and CA paradigms, likely signifies that these regions: (i) are activated during retrieval of these memories irrespective of preference-based decisions, and (ii) undergo neuroplasticity in order to update information about cues previously associated with cocaine. This study also establishes the involvement of NMDA receptors in maintaining memories established using the CA model, a characteristic previously demonstrated using CPP.
Overall, these results demonstrate the utility of the CA model for studies of cocaine-context memory and suggest the involvement of an NMDA receptor-dependent Arc induction pathway in drug-cue memory interference.

  7. Modeling soil moisture memory in savanna ecosystems

    NASA Astrophysics Data System (ADS)

    Gou, S.; Miller, G. R.

    2011-12-01

    Antecedent soil conditions create an ecosystem's "memory" of past rainfall events. Such soil moisture memory effects may be observed over a range of timescales, from daily to yearly, and lead to feedbacks between hydrological and ecosystem processes. In this study, we modeled the soil moisture memory effect on savanna ecosystems in California, Arizona, and Africa, using a system dynamics model created to simulate the ecohydrological processes at the plot scale. The model was carefully calibrated using soil moisture and evapotranspiration data collected at three study sites. The model was then used to simulate scenarios with various initial soil moisture conditions and antecedent precipitation regimes, in order to study the soil moisture memory effects on the evapotranspiration of understory and overstory species. Based on the model results, soil texture and antecedent precipitation regime impact the redistribution of water within soil layers, potentially causing deeper soil layers to influence the ecosystem for a longer time. Of all the study areas modeled, the soil moisture memory of the California savanna ecosystem site is replenished and dries out most rapidly; thus, soil moisture memory could not maintain high rates of evapotranspiration for more than a few days without an incoming rainfall event. In contrast, the soil moisture memory of the Arizona savanna ecosystem site lasts the longest. Plants with different root depths respond to different memory effects; shallow-rooted species mainly respond to the soil moisture memory in the shallow soil. The growing season of grass largely depends on the soil moisture memory of the top 25 cm soil layer. Grass transpiration is sensitive to antecedent precipitation events on daily to weekly timescales. 
Deep-rooted plants respond differently, since these species can access the deeper soil moisture memory, which persists for longer. Soil moisture memory does not have obvious impacts on the phenology of woody plants, as these can maintain transpiration for a longer time even when the top soil layer dries out.
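
The plot-scale memory dynamics can be caricatured with a single-layer "bucket" store (a toy sketch, not the authors' calibrated system dynamics model), in which the daily loss fraction k sets the memory timescale of roughly 1/k days:

```python
def bucket_model(s0, rain, k=0.1):
    """Toy single-layer 'bucket' soil moisture model. Each day the store
    loses a fraction k to evapotranspiration and drainage and gains that
    day's rainfall; past rain is 'remembered' with e-folding time ~1/k days."""
    s = s0
    trace = []
    for r in rain:
        s = s * (1.0 - k) + r   # geometric decay of antecedent moisture
        trace.append(s)
    return trace
```

A deep layer would simply use a smaller k, giving the longer-lived memory that deep-rooted species tap in the abstract's account.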

  8. Retrieval-induced NMDA receptor-dependent Arc expression in two models of cocaine-cue memory

    PubMed Central

    Alaghband, Yasaman; O'Dell, Steven J.; Azarnia, Siavash; Khalaj, Anna J.; Guzowski, John F.; Marshall, John F.

    2014-01-01

    The association of environmental cues with drugs of abuse results in persistent drug-cue memories. These memories contribute significantly to relapse among addicts. While conditioned place preference (CPP) is a well-established paradigm frequently used to examine the modulation of drug-cue memories, very few studies have used the non-preference-based model conditioned activity (CA) for this purpose. Here, we used both experimental approaches to investigate the neural substrates of cocaine-cue memories. First, we directly compared, in a consistent setting, the involvement of cortical and subcortical brain regions in cocaine-cue memory retrieval by quantifying activity-regulated cytoskeletal-associated (Arc) protein expression in both the CPP and CA models. Second, because NMDA receptor activation is required for Arc expression, we investigated the NMDA receptor dependency of memory persistence using the CA model. In both the CPP and CA models, drug-paired animals showed significant increases in Arc immunoreactivity in regions of the frontal cortex and amygdala compared to unpaired controls. Additionally, administration of an NMDA receptor antagonist (MK-801 or memantine) immediately after cocaine-CA memory reactivation impaired the subsequent conditioned locomotion associated with the cocaine-paired environment. The enhanced Arc expression evident in a subset of corticolimbic regions after retrieval of a cocaine-context memory, observed in both the CPP and CA paradigms, likely signifies that these regions: (i) are activated during retrieval of these memories irrespective of preference-based decisions, and (ii) undergo neuroplasticity in order to update information about cues previously associated with cocaine. This study also establishes the involvement of NMDA receptors in maintaining memories established using the CA model, a characteristic previously demonstrated using CPP. 
Overall, these results demonstrate the utility of the CA model for studies of cocaine-context memory and suggest the involvement of an NMDA receptor-dependent Arc induction pathway in drug-cue memory interference. PMID:25225165

  9. Is the Link from Working Memory to Analogy Causal? No Analogy Improvements following Working Memory Training Gains

    PubMed Central

    Richey, J. Elizabeth; Phillips, Jeffrey S.; Schunn, Christian D.; Schneider, Walter

    2014-01-01

    Analogical reasoning has been hypothesized to critically depend upon working memory through correlational data [1], but less work has tested this relationship through experimental manipulation [2]. An opportunity for examining the connection between working memory and analogical reasoning has emerged from the growing, although somewhat controversial, body of literature suggesting that complex working memory training can sometimes lead to working memory improvements that transfer to novel working memory tasks. This study investigated whether working memory improvements, if replicated, would increase analogical reasoning ability. We assessed participants’ performance on verbal and visual analogy tasks after a complex working memory training program incorporating verbal and spatial tasks [3], [4]. Participants’ improvements on the working memory training tasks transferred to other short-term and working memory tasks, supporting the possibility of broad effects of working memory training. However, we found no effects on analogical reasoning. We propose several possible explanations for the lack of an impact of working memory improvements on analogical reasoning. PMID:25188356

  10. MicroRNA-21 preserves the fibrotic mechanical memory of mesenchymal stem cells

    NASA Astrophysics Data System (ADS)

    Li, Chen Xi; Talele, Nilesh P.; Boo, Stellar; Koehler, Anne; Knee-Walden, Ericka; Balestrini, Jenna L.; Speight, Pam; Kapus, Andras; Hinz, Boris

    2017-03-01

    Expansion on stiff culture substrates activates pro-fibrotic cell programs that are retained by mechanical memory. Here, we show that priming on physiologically soft silicone substrates suppresses fibrogenesis and desensitizes mesenchymal stem cells (MSCs) against subsequent mechanical activation in vitro and in vivo, and identify the microRNA miR-21 as a long-term memory keeper of the fibrogenic program in MSCs. During stiff priming, miR-21 levels were gradually increased by continued regulation through the acutely mechanosensitive myocardin-related transcription factor-A (MRTF-A/MKL1) and remained high over 2 weeks after removal of the mechanical stimulus. Knocking down miR-21 once by the end of the stiff-priming period was sufficient to erase the mechanical memory and sensitize MSCs to subsequent exposure to soft substrates. Soft priming and erasing mechanical memory following cell culture expansion protects MSCs from fibrogenesis in the host wound environment and increases the chances for success of MSC therapy in tissue-repair applications.

  11. Hippocampal atrophy and memory dysfunction associated with physical inactivity in community-dwelling elderly subjects: The Sefuri study.

    PubMed

    Hashimoto, Manabu; Araki, Yuko; Takashima, Yuki; Nogami, Kohjiro; Uchino, Akira; Yuzuriha, Takefumi; Yao, Hiroshi

    2017-02-01

    Physical inactivity is one of the modifiable risk factors for hippocampal atrophy and Alzheimer's disease. We investigated the relationship between physical activity, hippocampal atrophy, and memory using structural equation modeling (SEM). We examined 213 community-dwelling elderly subjects (99 men and 114 women with a mean age of 68.9 years) without dementia or clinically apparent depression. All participants underwent the Mini-Mental State Examination (MMSE) and Rivermead Behavioral Memory Test (RBMT). Physical activities were assessed with a structured questionnaire. We evaluated the degree of hippocampal atrophy as a z-score (referred to as ZAdvance hereafter), using a free software program, the voxel-based specific regional analysis system for Alzheimer's disease (VSRAD), based on statistical parametric mapping 8 plus Diffeomorphic Anatomical Registration Through an Exponentiated Lie algebra. Routine magnetic resonance imaging findings were as follows: silent brain infarction, n = 24 (11.3%); deep white matter lesions, n = 72 (33.8%); periventricular hyperintensities, n = 35 (16.4%); and cerebral microbleeds, n = 14 (6.6%). Path analysis based on SEM indicated that the direct paths from leisure-time activity to hippocampal atrophy (β = -.18, p < .01) and from hippocampal atrophy to memory dysfunction (RBMT) (β = -.20, p < .01) were significant. Direct paths from "hippocampus" gray matter volume to RBMT and MMSE were highly significant, while direct paths from "whole brain" gray matter volume to RBMT and MMSE were not significant. The presented SEM model fit the data reasonably well. Based on the present SEM analysis, we found that hippocampal atrophy was associated with age and leisure-time physical inactivity, and hippocampal atrophy appeared to cause memory dysfunction, although we are unable to infer a causal or temporal association between hippocampal atrophy and memory dysfunction from the present observational study.

  12. The basis of distinctive IL-2- and IL-15-dependent signaling: weak CD122-dependent signaling favors CD8+ T central-memory cell survival but not T effector-memory cell development.

    PubMed

    Castro, Iris; Yu, Aixin; Dee, Michael J; Malek, Thomas R

    2011-11-15

    Recent work suggests that IL-2 and IL-15 induce distinctive levels of signaling through common receptor subunits and that such varied signaling directs the fate of Ag-activated CD8(+) T cells. In this study, we directly examined proximal signaling by IL-2 and IL-15 and CD8(+) T cell primary and memory responses as a consequence of varied CD122-dependent signaling. Initially, IL-2 and IL-15 induced similar p-STAT5 and p-S6 activation, but these activities were sustained only by IL-2. This transient IL-15-dependent signaling was due to limited expression of IL-15Rα. To investigate the outcome of varied CD122 signaling for CD8(+) T cell responses in vivo, OT-I T cells were used from mouse models in which CD122 signals were attenuated by mutations within the cytoplasmic tail of CD122, or in which intrinsic survival function was provided in the absence of CD122 expression by transgenic Bcl-2. In the absence of CD122 signaling, a generally normal primary response occurred, but the primed CD8(+) T cells were not maintained. In marked contrast, weak CD122 signaling supported development and survival of T central-memory (T(CM)) but not T effector-memory (T(EM)) cells. Transgenic expression of Bcl-2 in CD122(-/-) CD8(+) T cells also supported the survival and persistence of T(CM) cells but did not rescue T(EM) development. These data indicate that weak CD122 signals readily support T(CM) development, largely by providing survival signals, whereas stronger signals, independent of Bcl-2, are required for T(EM) development. Our findings are consistent with a model whereby low, intermediate, and high CD122 signaling support T(CM) survival, T(EM) programming, and terminal T effector cell differentiation, respectively.

  13. YAPPA: a Compiler-Based Parallelization Framework for Irregular Applications on MPSoCs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lovergine, Silvia; Tumeo, Antonino; Villa, Oreste

    Modern embedded systems include hundreds of cores. Because of the difficulty of providing a fast, coherent memory architecture, these systems usually rely on non-coherent, non-uniform memory architectures with private memories for each core. However, programming these systems poses significant challenges. The developer must extract large amounts of parallelism while orchestrating communication among cores to optimize application performance. These issues become even more significant with irregular applications, which present data sets that are difficult to partition, unpredictable memory accesses, unbalanced control flow, and fine-grained communication. Hand-optimizing every single aspect is hard and time-consuming, and it often does not lead to the expected performance. There is a growing gap between such complex, highly parallel architectures and the high-level languages used to describe the specification, which were designed for simpler systems and do not consider these new issues. In this paper we introduce YAPPA (Yet Another Parallel Programming Approach), an LLVM-based compilation framework for the automatic parallelization of irregular applications on modern MPSoCs. We start by considering an efficient parallel programming approach for irregular applications on distributed memory systems. We then propose a set of transformations that can reduce the development and optimization effort. The results of our initial prototype confirm the correctness of the proposed approach.

  14. Concept of dynamic memory in economics

    NASA Astrophysics Data System (ADS)

    Tarasova, Valentina V.; Tarasov, Vasily E.

    2018-02-01

    In this paper we discuss a concept of dynamic memory and an application of fractional calculus to describe it. The concept of memory is considered from the standpoint of economic models, in a continuous-time framework based on fractional calculus. We also describe some general restrictions that can be imposed on the structure and properties of dynamic memory. These restrictions include the following three principles: (a) the principle of fading memory; (b) the principle of memory homogeneity in time (the principle of non-aging memory); (c) the principle of memory reversibility (the principle of memory recovery). Examples of different memory functions are suggested using fractional calculus. To illustrate an application of the concept of dynamic memory in economics, we consider a generalization of the Harrod-Domar model in which power-law memory is taken into account.
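    In this line of work, power-law memory is typically introduced by replacing the first-order time derivative in a model equation with a fractional derivative. A schematic form for the Harrod-Domar accelerator equation (notation here is generic and may differ from the paper's):

    ```latex
    % Classical (memoryless) accelerator equation of the Harrod-Domar model:
    %   I(t) = v \, \frac{dY(t)}{dt}
    % Generalization with power-law memory of order \alpha > 0, using the
    % Caputo fractional derivative (n - 1 < \alpha \le n):
    I(t) = v \, \bigl(D^{\alpha}_{C,0+} Y\bigr)(t), \qquad
    \bigl(D^{\alpha}_{C,0+} Y\bigr)(t)
      = \frac{1}{\Gamma(n-\alpha)} \int_0^t
        \frac{Y^{(n)}(\tau)}{(t-\tau)^{\alpha - n + 1}} \, d\tau .
    ```

    The power-law kernel (t-τ)^{-(α-n+1)} weights the past with fading influence, which is what makes the memory "fading" in the sense of principle (a); for α = 1 the Caputo derivative reduces to the ordinary first derivative and the standard memoryless model is recovered.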

  15. Multilevel Resistance Programming in Conductive Bridge Resistive Memory

    NASA Astrophysics Data System (ADS)

    Mahalanabis, Debayan

    This work focuses on the existence of multiple resistance states in a type of emerging non-volatile resistive memory device known commonly as the Programmable Metallization Cell (PMC) or Conductive Bridge Random Access Memory (CBRAM), which can be important for applications such as multi-bit memory as well as non-volatile logic and neuromorphic computing. First, experimental data from small-signal, quasi-static, and pulsed-mode electrical characterization of such devices are presented, which clearly demonstrate the inherent multi-level resistance programmability of CBRAM devices. A physics-based analytical CBRAM compact model is then presented which simulates the ion-transport dynamics and the filamentary growth mechanism that cause resistance change in such devices. Simulation results from the model are fitted to experimental dynamic resistance switching characteristics. The model, written in Verilog-A, is computationally efficient and can be integrated with industry-standard circuit simulation tools for the design and analysis of hybrid circuits involving both CMOS and CBRAM devices. Three main circuit applications for CBRAM devices are explored in this work. First, the susceptibility of CBRAM memory arrays to single-event-induced upsets is analyzed via compact model simulation and experimental heavy-ion testing data, which show the possibility of both high-to-low and low-to-high resistance transitions due to ion strikes. Next, a non-volatile sense-amplifier-based flip-flop architecture is proposed which can make leakage power consumption negligible by allowing complete shutdown of the power supply while retaining output data in CBRAM devices. Reliability and energy consumption of the flip-flop circuit for different CBRAM low-resistance levels and supply voltages are analyzed and compared to CMOS designs. A possible extension of this architecture to threshold logic function computation, using CBRAM devices as re-configurable resistive weights, is also discussed. Lastly, spike-timing-dependent plasticity (STDP)-based gradual resistance change in a CBRAM device fabricated back-end-of-line on a CMOS die containing integrate-and-fire CMOS neuron circuits is demonstrated for the first time, indicating the feasibility of using CBRAM devices as electronic synapses in spiking neural network hardware implementations for non-Boolean neuromorphic computing.
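    The multi-level programmability described above rests on a widely used empirical observation: the low-resistance state of a CBRAM cell scales roughly inversely with the compliance current applied during the SET operation. A minimal sketch of that relation (not the dissertation's Verilog-A compact model; V_SET is an assumed constant):

    ```python
    # Illustration of multi-level CBRAM programming: four compliance currents
    # yield four distinguishable low-resistance states via R_LRS ~ V_SET / I_comp.
    V_SET = 0.25  # assumed effective voltage across the filament at SET (V)

    def lrs_resistance(i_comp):
        """Approximate low-resistance-state value (ohms) for a given compliance current (A)."""
        return V_SET / i_comp

    for i_comp in (1e-6, 10e-6, 100e-6, 1e-3):
        print(f"I_comp = {i_comp:.0e} A  ->  R_LRS ~ {lrs_resistance(i_comp):,.0f} ohm")
    ```

    Larger compliance currents grow a thicker conductive filament and hence a lower resistance, which is why each programming current maps to a distinct, roughly log-spaced resistance level usable for multi-bit storage.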

  16. Effects of Working Memory Training on Reading in Children with Special Needs

    ERIC Educational Resources Information Center

    Dahlin, Karin I. E.

    2011-01-01

    This study examines the relationship between working memory and reading achievement in 57 Swedish primary-school children with special needs. First, it was examined whether children's working memory could be enhanced by a cognitive training program, and how the training outcomes would relate to their reading development. Next, it was explored how…

  17. Using Instructional and Motivational Techniques in the Art Classroom To Increase Memory Retention.

    ERIC Educational Resources Information Center

    Calverley, Ann; Grafer, Bonnie; Hauser, Michelle

    This report describes a program for improving memory retention through instructional and motivational techniques in elementary art. Targeted population consisted of third grade students at three sites in a middle class suburb of a large midwestern city. The problems of memory retention were documented through teacher pre-surveys and art memory…

  18. Prospective memory: A comparative perspective

    PubMed Central

    Crystal, Jonathon D.; Wilson, A. George

    2014-01-01

    Prospective memory consists of forming a representation of a future action, temporarily storing that representation in memory, and retrieving it at a future time point. Here we review the recent development of animal models of prospective memory. We review experiments using rats that focus on the development of time-based and event-based prospective memory. Next, we review a number of prospective-memory approaches that have been used with a variety of non-human primates. Finally, we review selected approaches from the human literature on prospective memory to identify targets for development of animal models of prospective memory. PMID:25101562

  19. A model for memory systems based on processing modes rather than consciousness.

    PubMed

    Henke, Katharina

    2010-07-01

    Prominent models of human long-term memory distinguish between memory systems on the basis of whether learning and retrieval occur consciously or unconsciously. Episodic memory formation requires the rapid encoding of associations between different aspects of an event, which, according to these models, depends on the hippocampus and on consciousness. However, recent evidence indicates that the hippocampus mediates rapid associative learning with and without consciousness in humans and animals, for long-term and short-term retention. Consciousness seems to be a poor criterion for differentiating between declarative (or explicit) and nondeclarative (or implicit) types of memory. A new model is therefore required in which memory systems are distinguished by the processing operations involved rather than by consciousness.

  20. Thermodynamic Model of Spatial Memory

    NASA Astrophysics Data System (ADS)

    Kaufman, Miron; Allen, P.

    1998-03-01

    We develop and test a thermodynamic model of spatial memory. Our model is an application of statistical thermodynamics to cognitive science. It is related to applications of the statistical mechanics framework in parallel distributed processes research. Our macroscopic model allows us to evaluate an entropy associated with spatial memory tasks. We find that older adults exhibit higher levels of entropy than younger adults. Thurstone's Law of Categorical Judgment, according to which the discriminal processes along the psychological continuum produced by presentations of a single stimulus are normally distributed, is explained by using a Hooke spring model of spatial memory. We have also analyzed a nonlinear modification of the ideal spring model of spatial memory. This work is supported by NIH/NIA grant AG09282-06.
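    The entropy associated with spatial-memory performance can be illustrated with the Shannon entropy of a histogram of recalled locations. The two histograms below are hypothetical, chosen only to show the qualitative finding that a broader response distribution yields higher entropy; they are not the study's data.

    ```python
    import math

    def shannon_entropy(counts):
        """Shannon entropy (bits) of a histogram of recalled locations."""
        total = sum(counts)
        probs = (c / total for c in counts if c > 0)
        return -sum(p * math.log2(p) for p in probs)

    # Hypothetical recall histograms over five location bins:
    younger = [1, 2, 24, 2, 1]   # responses tightly clustered on the correct bin
    older   = [4, 7, 10, 6, 3]   # responses more spread out

    print(shannon_entropy(younger) < shannon_entropy(older))  # prints True
    ```

    The broader (older-adult-like) distribution has higher entropy, consistent with the macroscopic picture in which less precise spatial memory corresponds to a higher-entropy state.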
