Sample records for memory management system

  1. Flash memory management system and method utilizing multiple block list windows

    NASA Technical Reports Server (NTRS)

    Chow, James (Inventor); Gender, Thomas K. (Inventor)

    2005-01-01

    The present invention provides a flash memory management system and method with increased performance. The flash memory management system provides the ability to efficiently manage and allocate flash memory use in a way that improves reliability and longevity, while maintaining good performance levels. The flash memory management system includes a free block mechanism, a disk maintenance mechanism, a bad block detection mechanism, a flash status mechanism, and a new bank detection mechanism. The free block mechanism provides efficient sorting of free blocks to facilitate selecting low use blocks for writing. The disk maintenance mechanism provides for the ability to efficiently clean flash memory blocks during processor idle times. The bad block detection mechanism provides the ability to better detect when a block of flash memory is likely to go bad. The flash status mechanism stores information in fast access memory that describes the content and status of the data in the flash disk. The new bank detection mechanism provides the ability to automatically detect when new banks of flash memory are added to the system. Together, these mechanisms provide a flash memory management system that can improve the operational efficiency of systems that utilize flash memory.
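
    The patent abstract describes the free block mechanism only at a high level. The following is a hypothetical C++ sketch, not the patented design, of the underlying idea: keep free blocks ordered by erase count so writes go to the least-worn block, and flag heavily erased blocks as likely to go bad. The threshold value and field names are illustrative assumptions.

      #include <cstdint>
      #include <queue>
      #include <vector>
      #include <iostream>

      // Hypothetical sketch (not the patented design): keep free flash blocks in a
      // min-heap ordered by erase count, so writes are steered to the least-worn
      // block, and flag blocks whose erase count exceeds a threshold as suspect.
      struct FreeBlock {
          std::uint32_t block_id;
          std::uint32_t erase_count;
      };

      struct MoreWorn {
          bool operator()(const FreeBlock& a, const FreeBlock& b) const {
              return a.erase_count > b.erase_count;   // yields a min-heap on erase_count
          }
      };

      int main() {
          std::priority_queue<FreeBlock, std::vector<FreeBlock>, MoreWorn> free_blocks;
          free_blocks.push({0, 120});
          free_blocks.push({1, 7});
          free_blocks.push({2, 54});

          constexpr std::uint32_t kBadBlockThreshold = 100;   // illustrative limit

          FreeBlock next = free_blocks.top();                 // least-used block
          free_blocks.pop();
          std::cout << "write to block " << next.block_id << "\n";

          // After erasing a block, requeue it with an updated count and check wear.
          next.erase_count++;
          if (next.erase_count >= kBadBlockThreshold)
              std::cout << "block " << next.block_id << " flagged as likely bad\n";
          else
              free_blocks.push(next);
          return 0;
      }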

  2. Extended memory management under RTOS

    NASA Technical Reports Server (NTRS)

    Plummer, M.

    1981-01-01

    A technique for extended memory management in ROLM 1666 computers using FORTRAN is presented. A general software system is described for which the technique can be ideally applied. The memory manager interface with the system is described. The protocols by which the manager is invoked are presented, as well as the methods used by the manager.

  3. Energy-aware Thread and Data Management in Heterogeneous Multi-core, Multi-memory Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Su, Chun-Yi

    By 2004, microprocessor design focused on multicore scaling—increasing the number of cores per die in each generation—as the primary strategy for improving performance. These multicore processors are typically equipped with multiple memory subsystems to improve data throughput. In addition, these systems employ heterogeneous processors such as GPUs and heterogeneous memories like non-volatile memory to improve performance, capacity, and energy efficiency. With the increasing volume of hardware resources and system complexity caused by heterogeneity, future systems will require intelligent ways to manage hardware resources. Early research to improve performance and energy efficiency on heterogeneous, multi-core, multi-memory systems focused on tuning a single primitive, or at best a few primitives, in the systems. The key limitation of past efforts is their lack of a holistic approach to resource management that balances the tradeoff between performance and energy consumption. In addition, the shift from simple, homogeneous systems to these heterogeneous, multicore, multi-memory systems requires in-depth understanding of efficient resource management for scalable execution, including new models that capture the interchange between performance and energy, smarter resource management strategies, and novel low-level performance/energy tuning primitives and runtime systems. Tuning an application to control available resources efficiently has become a daunting challenge; managing resources automatically is still a dark art since the tradeoffs among programming, energy, and performance remain insufficiently understood. In this dissertation, I have developed theories, models, and resource management techniques to enable energy-efficient execution of parallel applications through thread and data management in these heterogeneous multi-core, multi-memory systems. I study the effect of dynamic concurrency throttling on the performance and energy of multi-core, non-uniform memory access (NUMA) systems. I use critical path analysis to quantify memory contention in the NUMA memory system and determine thread mappings. In addition, I implement a runtime system that combines concurrency throttling and a novel thread mapping algorithm to manage thread resources and improve energy-efficient execution in multi-core, NUMA systems.

  4. A class-hierarchical, object-oriented approach to virtual memory management

    NASA Technical Reports Server (NTRS)

    Russo, Vincent F.; Campbell, Roy H.; Johnston, Gary M.

    1989-01-01

    The Choices family of operating systems exploits class hierarchies and object-oriented programming to facilitate the construction of customized operating systems for shared memory and networked multiprocessors. The software is being used in the Tapestry laboratory to study the performance of algorithms, mechanisms, and policies for parallel systems. Described here are the architectural design and class hierarchy of the Choices virtual memory management system. The software and hardware mechanisms and policies of a virtual memory system implement a memory hierarchy that exploits the trade-off between response times and storage capacities. In Choices, the notion of a memory hierarchy is captured by abstract classes. Concrete subclasses of those abstractions implement a virtual address space, segmentation, paging, physical memory management, secondary storage, and remote (that is, networked) storage. Captured in the notion of a memory hierarchy are classes that represent memory objects. These classes provide a storage mechanism that contains encapsulated data and have methods to read or write the memory object. Each of these classes provides specializations to represent the memory hierarchy.

  5. Computer memory management system

    DOEpatents

    Kirk, III, Whitson John

    2002-01-01

    A computer memory management system utilizing a memory structure system of "intelligent" pointers in which information related to the use status of the memory structure is designed into the pointer. Through this pointer system, the present invention provides essentially automatic memory management (often referred to as garbage collection) by allowing relationships between objects to have definite memory management behavior through use of a coding protocol which describes when relationships should be maintained and when the relationships should be broken. In one aspect, the present invention allows automatic breaking of strong links to facilitate object garbage collection, coupled with relationship adjectives which define deletion of associated objects. In another aspect, the present invention includes simple-to-use infinite undo/redo functionality in that it has the capability, through a simple function call, to undo all of the changes made to a data model since the previous 'valid state' was noted.
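
    The patent does not publish source code. As a loose analogy in modern C++, strong and weak links between objects can be modeled with std::shared_ptr and std::weak_ptr, so that breaking the last strong link is enough for both objects to be reclaimed automatically. The Parent/Child names below are illustrative only.

      #include <memory>
      #include <iostream>

      // Hypothetical illustration: a "child" keeps only a weak link back to its
      // "parent" so that breaking the parent's strong link lets the runtime
      // reclaim both objects (no reference cycle keeps them alive).
      struct Parent;

      struct Child {
          std::weak_ptr<Parent> parent;   // weak link: does not keep Parent alive
          ~Child() { std::cout << "Child collected\n"; }
      };

      struct Parent {
          std::shared_ptr<Child> child;   // strong link: keeps Child alive
          ~Parent() { std::cout << "Parent collected\n"; }
      };

      int main() {
          auto p = std::make_shared<Parent>();
          p->child = std::make_shared<Child>();
          p->child->parent = p;

          p.reset();  // break the last strong link to Parent;
                      // Parent and then Child are reclaimed automatically
          return 0;
      }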

  6. VOP memory management in MPEG-4

    NASA Astrophysics Data System (ADS)

    Vaithianathan, Karthikeyan; Panchanathan, Sethuraman

    2001-03-01

    MPEG-4 is a multimedia standard that requires Video Object Planes (VOPs). Generation of VOPs for any kind of video sequence is still a challenging problem that largely remains unsolved. Nevertheless, if this problem is treated by imposing certain constraints, solutions for specific application domains can be found. MPEG-4 applications in mobile devices are one such domain, where the opposing goals of low power and high throughput must both be met. Efficient memory management plays a major role in reducing power consumption. Specifically, efficient memory management for VOPs is difficult because the lifetimes of these objects vary and may overlap. Varying object lifetimes require dynamic memory management, where memory fragmentation is a key problem that needs to be addressed. In general, memory management systems address this problem by following a combination of strategy, policy and mechanism. For MPEG-4-based mobile devices that lack instruction processors, a hardware-based memory management solution is necessary. In MPEG-4-based mobile devices that have a RISC processor, using a real-time operating system (RTOS) for this memory management task is not expected to be efficient, because the strategies and policies used by the RTOS are often tuned for handling memory segments of smaller sizes than the object sizes. Hence, a memory management scheme specifically tuned for VOPs is important. In this paper, different strategies, policies and mechanisms for memory management are considered, and an efficient combination is proposed for VOP memory management along with a hardware architecture that can handle the proposed combination.

  7. Forensic Analysis of Windows (Registered) Virtual Memory Incorporating the System's Page-File

    DTIC Science & Technology

    2008-12-01

    ... data in a meaningful way. One reason for this is how memory is managed by the operating system. Data belonging to one process can be distributed arbitrarily across ...

  8. Non-volatile main memory management methods based on a file system.

    PubMed

    Oikawa, Shuichi

    2014-01-01

    There are upcoming non-volatile (NV) memory technologies that provide byte addressability and high performance; PCM, MRAM, and STT-RAM are such examples. Such NV memory can be used as storage because of its data persistency without power supply, while it can be used as main memory because of its high performance, which matches that of DRAM. A number of studies have investigated its uses for main memory and storage; they were, however, conducted independently. This paper presents methods that enable the integration of main memory and file system management for NV memory. Such integration makes NV memory simultaneously utilized as both main memory and storage. The presented methods use a file system as their basis for NV memory management. We implemented the proposed methods in the Linux kernel, and performed the evaluation on the QEMU system emulator. The evaluation results show that 1) the proposed methods can perform comparably to the existing DRAM memory allocator and significantly better than page swapping, 2) their performance is affected by the internal data structures of a file system, and 3) data structures appropriate for traditional hard disk drives do not always work effectively for byte-addressable NV memory. We also evaluated the effects caused by the longer access latency of NV memory by cycle-accurate full-system simulation. The results show that the effect on page allocation cost is limited if the increase in latency is moderate.
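
    The paper's methods live inside the Linux kernel. As a hedged user-space analogy only, a file on an NV-memory-backed file system can be mapped into the address space and then used as ordinary byte-addressable memory; the file path and sizes below are illustrative, and a real NV deployment would normally use a DAX-capable file system.

      #include <fcntl.h>
      #include <sys/mman.h>
      #include <unistd.h>
      #include <cstring>
      #include <cstdio>

      // Hedged user-space analogy (not the paper's kernel implementation):
      // a file managed by the file system is mapped into the address space and
      // then used as byte-addressable "main memory" that persists across runs.
      int main() {
          const size_t kSize = 1 << 20;                                  // 1 MiB region
          int fd = open("/tmp/nvm_heap.bin", O_CREAT | O_RDWR, 0600);    // illustrative path
          if (fd < 0) { perror("open"); return 1; }
          if (ftruncate(fd, kSize) != 0) { perror("ftruncate"); return 1; }

          void* base = mmap(nullptr, kSize, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
          if (base == MAP_FAILED) { perror("mmap"); return 1; }

          // The mapped file now serves as persistent, byte-addressable memory.
          std::strcpy(static_cast<char*>(base), "persistent data");
          msync(base, kSize, MS_SYNC);                                   // flush to backing store

          munmap(base, kSize);
          close(fd);
          return 0;
      }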

  9. Method and apparatus for managing access to a memory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeBenedictis, Erik

    A method and apparatus for managing access to a memory of a computing system. A controller transforms a plurality of operations that represent a computing job into an operational memory layout that reduces a size of a selected portion of the memory that needs to be accessed to perform the computing job. The controller stores the operational memory layout in a plurality of memory cells within the selected portion of the memory. The controller controls a sequence by which a processor in the computing system accesses the memory to perform the computing job using the operational memory layout. The operational memory layout reduces an amount of energy consumed by the processor to perform the computing job.

  10. Configurable memory system and method for providing atomic counting operations in a memory device

    DOEpatents

    Bellofatto, Ralph E.; Gara, Alan G.; Giampapa, Mark E.; Ohmacht, Martin

    2010-09-14

    A memory system and method for providing atomic memory-based counter operations to operating systems and applications that make most efficient use of counter-backing memory and virtual and physical address space, while simplifying operating system memory management, and enabling the counter-backing memory to be used for purposes other than counter-backing storage when desired. The encoding and address decoding enabled by the invention provides all this functionality through a combination of software and hardware.

  11. Centrally managed unified shared virtual address space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilkes, John

    Systems, apparatuses, and methods for managing a unified shared virtual address space. A host may execute system software and manage a plurality of nodes coupled to the host. The host may send work tasks to the nodes, and for each node, the host may externally manage the node's view of the system's virtual address space. Each node may have a central processing unit (CPU) style memory management unit (MMU) with an internal translation lookaside buffer (TLB). In one embodiment, the host may be coupled to a given node via an input/output memory management unit (IOMMU) interface, where the IOMMU frontend interface shares the TLB with the given node's MMU. In another embodiment, the host may control the given node's view of virtual address space via memory-mapped control registers.

  12. Optical mass memory system (AMM-13). AMM/DBMS interface control document

    NASA Technical Reports Server (NTRS)

    Bailey, G. A.

    1980-01-01

    The baseline for external interfaces of a 10 to the 13th power bit, optical archival mass memory system (AMM-13) is established. The types of interfaces addressed include data transfer; AMM-13, Data Base Management System, NASA End-to-End Data System computer interconnect; data/control input and output interfaces; test input data source; file management; and facilities interface.

  13. Memory management and compiler support for rapid recovery from failures in computer systems

    NASA Technical Reports Server (NTRS)

    Fuchs, W. K.

    1991-01-01

    This paper describes recent developments in the use of memory management and compiler technology to support rapid recovery from failures in computer systems. The techniques described include cache coherence protocols for user transparent checkpointing in multiprocessor systems, compiler-based checkpoint placement, compiler-based code modification for multiple instruction retry, and forward recovery in distributed systems utilizing optimistic execution.

  14. 78 FR 23866 - Airworthiness Directives; the Boeing Company

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-23

    ... -200 and -300 series airplanes. The proposed AD would have required installing new operational software in the cabin management system, and loading new software into the mass memory card. Since the ...

  15. Hard Real-Time: C++ Versus RTSJ

    NASA Technical Reports Server (NTRS)

    Dvorak, Daniel L.; Reinholtz, William K.

    2004-01-01

    In the domain of hard real-time systems, which language is better: C++ or the Real-Time Specification for Java (RTSJ)? Although ordinary Java provides a more productive programming environment than C++ due to its automatic memory management, that benefit does not apply to RTSJ when using NoHeapRealtimeThread and non-heap memory areas. As a result, RTSJ programmers must manage non-heap memory explicitly. While that is not a deterrent for veteran real-time programmers, for whom explicit memory management is common, the lack of certain language features in RTSJ (and Java) makes that manual memory management harder to accomplish safely than in C++. This paper illustrates the problem for practitioners in the context of moving data and managing memory in a real-time producer/consumer pattern. The relative ease of implementation and safety of the C++ programming model suggests that RTSJ has a struggle ahead in the domain of hard real-time applications, despite its other attractive features.
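
    As a hypothetical illustration of the explicit memory management style discussed above, and not the paper's code, the C++ sketch below moves data through a producer/consumer pattern using a ring of preallocated message slots, so the steady-state loop performs no heap allocation.

      #include <array>
      #include <atomic>
      #include <cstdint>
      #include <thread>
      #include <iostream>

      // Hypothetical sketch: a single-producer/single-consumer ring buffer whose
      // message slots are allocated once, up front, so the real-time loop itself
      // never touches the heap.
      struct Message { std::uint64_t timestamp; double value; };

      template <std::size_t N>
      class SpscRing {
          std::array<Message, N> slots_{};          // preallocated storage
          std::atomic<std::size_t> head_{0};        // next slot to write
          std::atomic<std::size_t> tail_{0};        // next slot to read
      public:
          bool push(const Message& m) {
              std::size_t h = head_.load(std::memory_order_relaxed);
              std::size_t next = (h + 1) % N;
              if (next == tail_.load(std::memory_order_acquire)) return false;  // full
              slots_[h] = m;
              head_.store(next, std::memory_order_release);
              return true;
          }
          bool pop(Message& out) {
              std::size_t t = tail_.load(std::memory_order_relaxed);
              if (t == head_.load(std::memory_order_acquire)) return false;     // empty
              out = slots_[t];
              tail_.store((t + 1) % N, std::memory_order_release);
              return true;
          }
      };

      int main() {
          SpscRing<64> ring;
          std::thread producer([&] {
              for (std::uint64_t i = 0; i < 1000; ++i)
                  while (!ring.push({i, i * 0.5})) { /* spin: buffer full */ }
          });
          std::uint64_t received = 0;
          Message m;
          while (received < 1000)
              if (ring.pop(m)) ++received;
          producer.join();
          std::cout << "received " << received << " messages\n";
          return 0;
      }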

  16. Implementation of relational data base management systems on micro-computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, C.L.

    1982-01-01

    This dissertation describes an implementation of a relational database management system on a microcomputer. A specific floppy-disk-based hardware platform called TERAK is used, and a high-level query interface similar to a subset of the SEQUEL language is provided. The system contains subsystems such as I/O, file management, virtual memory management, the query system, B-tree management, a scanner, a command interpreter, an expression compiler, garbage collection, linked list manipulation, disk space management, etc. The software has been implemented to fulfill the following goals: (1) it is highly modularized; (2) the system is physically segmented into 16 logically independent, overlayable segments, in such a way that a minimal amount of memory is needed at execution time; (3) a virtual memory system is simulated that provides the system with seemingly unlimited memory space; (4) a language translator is applied to recognize user requests in the query language, and the code generation of this translator produces compact code for the execution of UPDATE, DELETE, and QUERY commands; (5) a complete set of basic functions needed for on-line database manipulation is provided through a friendly query interface; (6) dependency on the environment (both software and hardware) is eliminated as much as possible, so that it would be easy to transplant the system to other computers; and (7) each relation is simulated as a sequential file. It is intended to be a highly efficient, single-user system suited to small or medium sized organizations for, say, administrative purposes. Experiments show that quite satisfying results have indeed been achieved.

  17. Havens: Explicit Reliable Memory Regions for HPC Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hukerikar, Saurabh; Engelmann, Christian

    2016-01-01

    Supporting error resilience in future exascale-class supercomputing systems is a critical challenge. Due to transistor scaling trends and increasing memory density, scientific simulations are expected to experience more interruptions caused by transient errors in the system memory. Existing hardware-based detection and recovery techniques will be inadequate to manage the presence of high memory fault rates. In this paper we propose a partial memory protection scheme based on region-based memory management. We define the concept of regions called havens that provide fault protection for program objects. We provide reliability for the regions through a software-based parity protection mechanism. Our approach enables critical program objects to be placed in these havens. The fault coverage provided by our approach is application agnostic, unlike algorithm-based fault tolerance techniques.
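
    The haven mechanism itself is not spelled out in the abstract. The sketch below shows, under simplifying assumptions, the general software parity idea it refers to: one XOR parity block over a region of data blocks allows a single corrupted block to be reconstructed; block sizes and counts are illustrative.

      #include <array>
      #include <cstdint>
      #include <cstring>
      #include <iostream>

      // Hedged sketch of software parity protection for a memory region (the
      // paper's haven mechanism is more elaborate): one XOR parity block over N
      // data blocks lets a single corrupted block be rebuilt from the others.
      constexpr std::size_t kBlockBytes = 64;
      constexpr std::size_t kBlocks = 4;
      using Block = std::array<std::uint8_t, kBlockBytes>;

      void recompute_parity(const std::array<Block, kBlocks>& data, Block& parity) {
          parity.fill(0);
          for (const Block& b : data)
              for (std::size_t i = 0; i < kBlockBytes; ++i) parity[i] ^= b[i];
      }

      void reconstruct(std::array<Block, kBlocks>& data, const Block& parity, std::size_t lost) {
          data[lost] = parity;                       // start from parity...
          for (std::size_t j = 0; j < kBlocks; ++j)  // ...and XOR in the surviving blocks
              if (j != lost)
                  for (std::size_t i = 0; i < kBlockBytes; ++i) data[lost][i] ^= data[j][i];
      }

      int main() {
          std::array<Block, kBlocks> region{};
          std::memcpy(region[2].data(), "critical object", 16);
          Block parity;
          recompute_parity(region, parity);

          region[2].fill(0xFF);                      // simulate a transient memory error
          reconstruct(region, parity, 2);            // recover block 2 from parity
          std::cout << reinterpret_cast<const char*>(region[2].data()) << "\n";
          return 0;
      }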

  18. Command and Control Software Development Memory Management

    NASA Technical Reports Server (NTRS)

    Joseph, Austin Pope

    2017-01-01

    This internship was initially meant to cover the implementation of unit test automation for a NASA ground control project. As is often the case with large development projects, the scope and breadth of the internship changed. Instead, the internship focused on finding and correcting memory leaks and errors as reported by a COTS software product meant to track such issues. Memory leaks come in many different flavors and some of them are more benign than others. On the extreme end a program might be dynamically allocating memory and not correctly deallocating it when it is no longer in use. This is called a direct memory leak and in the worst case can use all the available memory and crash the program. If the leaks are small they may simply slow the program down, which, in a safety-critical system (a system for which a failure or design error can cause a risk to human life), is still unacceptable. The ground control system is managed in smaller sub-teams, referred to as CSCIs. The CSCI that this internship focused on is responsible for monitoring the health and status of the system. This team's software had several methods/modules that were leaking significant amounts of memory. Since most of the code in this system is safety-critical, correcting memory leaks is a necessity.
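
    As a minimal illustration of the direct leak described above, and not the project's actual code, the C++ sketch below shows an update routine that allocates a record on every call without freeing it, together with a conventional fix using owning smart pointers. The StatusRecord type is a hypothetical stand-in.

      #include <memory>
      #include <vector>

      // Hypothetical illustration of a direct leak: memory is allocated on every
      // status update but never released, so long-running software slowly
      // exhausts the heap.
      struct StatusRecord { int channel; double value; };

      void leaky_update(std::vector<StatusRecord*>& history, int ch, double v) {
          history.push_back(new StatusRecord{ch, v});   // never deleted: direct leak
      }

      // One conventional fix: let a smart pointer own each record so it is
      // released automatically when the history entry goes away.
      void fixed_update(std::vector<std::unique_ptr<StatusRecord>>& history, int ch, double v) {
          history.push_back(std::make_unique<StatusRecord>(StatusRecord{ch, v}));
      }

      int main() {
          std::vector<std::unique_ptr<StatusRecord>> history;
          for (int i = 0; i < 1000; ++i) fixed_update(history, i % 8, i * 0.1);
          return 0;  // all records freed here; the leaky variant would not free them
      }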

  19. An enhanced Ada run-time system for real-time embedded processors

    NASA Technical Reports Server (NTRS)

    Sims, J. T.

    1991-01-01

    An enhanced Ada run-time system has been developed to support real-time embedded processor applications. The primary focus of this development effort has been on the tasking system and the memory management facilities of the run-time system. The tasking system has been extended to support efficient and precise periodic task execution as required for control applications. Event-driven task execution providing a means of task-asynchronous control and communication among Ada tasks is supported in this system. Inter-task control is even provided among tasks distributed on separate physical processors. The memory management system has been enhanced to provide object allocation and protected access support for memory shared between disjoint processors, each of which is executing a distinct Ada program.

  20. Research on memory management in embedded systems

    NASA Astrophysics Data System (ADS)

    Huang, Xian-ying; Yang, Wu

    2005-12-01

    Memory is a scarce resource in embedded systems due to cost and size. Thus, applications in embedded systems cannot use memory as freely as desktop applications do, yet data and code must still be stored in memory to run. The purpose of this paper is to save memory when developing embedded applications and to guarantee running under limited memory conditions. Embedded systems often have small memory and are required to run for a long time. Thus, one purpose of this study is to construct an allocator that can allocate memory effectively, withstand long-running operation, and reduce memory fragmentation and memory exhaustion. Memory fragmentation and exhaustion are related to the memory allocation algorithm; static memory allocation cannot produce fragmentation. This paper attempts to find an effective dynamic allocation algorithm that can reduce memory fragmentation. Data is the critical part that ensures an application can run regularly, and it takes up a large amount of memory. The amount of data that can be stored in the same size of memory depends on the selected data structure. Techniques for designing application data in mobile phones are also explained and discussed.
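
    One common embedded remedy implied by the abstract is a fixed-size block pool carved from static storage, which avoids external fragmentation entirely. The C++ sketch below is a minimal, illustrative version of such an allocator under assumed block sizes, not the authors' design.

      #include <array>
      #include <cstddef>
      #include <cstdint>
      #include <iostream>

      // Hedged sketch: a fixed-size block pool carved from a static array.
      // Allocation and release are O(1), nothing is requested from the system
      // heap, and external fragmentation cannot occur because every block has
      // the same size.
      template <std::size_t BlockSize, std::size_t BlockCount>
      class BlockPool {
          alignas(std::max_align_t) std::array<std::uint8_t, BlockSize * BlockCount> storage_{};
          std::array<void*, BlockCount> free_list_{};
          std::size_t free_top_ = 0;
      public:
          BlockPool() {
              for (std::size_t i = 0; i < BlockCount; ++i)
                  free_list_[free_top_++] = storage_.data() + i * BlockSize;
          }
          void* allocate() {
              if (free_top_ == 0) return nullptr;       // pool exhausted
              return free_list_[--free_top_];
          }
          void release(void* p) { free_list_[free_top_++] = p; }
      };

      int main() {
          BlockPool<32, 4> pool;                        // 4 blocks of 32 bytes each
          void* a = pool.allocate();
          void* b = pool.allocate();
          pool.release(a);                              // freed block is reused next
          void* c = pool.allocate();
          std::cout << std::boolalpha << (a == c) << "\n";   // prints: true
          (void)b;
          return 0;
      }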

  1. Common Problems of Documentary Information Transfer, Storage and Retrieval in Industrial Organizations.

    ERIC Educational Resources Information Center

    Vickers, P. H.

    1983-01-01

    Examination of management information systems of three manufacturing firms highlights principal characteristics, document types and functions, main information flows, storage and retrieval systems, and common problems (corporate memory failure, records management, management information systems, general management). A literature review and…

  2. Reprogrammable field programmable gate array with integrated system for mitigating effects of single event upsets

    NASA Technical Reports Server (NTRS)

    Ng, Tak-kwong (Inventor); Herath, Jeffrey A. (Inventor)

    2010-01-01

    An integrated system mitigates the effects of a single event upset (SEU) on a reprogrammable field programmable gate array (RFPGA). The system includes (i) a RFPGA having an internal configuration memory, and (ii) a memory for storing a configuration associated with the RFPGA. Logic circuitry programmed into the RFPGA and coupled to the memory reloads a portion of the configuration from the memory into the RFPGA's internal configuration memory at predetermined times. Additional SEU mitigation can be provided by logic circuitry on the RFPGA that monitors and maintains synchronized operation of the RFPGA's digital clock managers.

  3. Detailed Design and Implementation of a Multiprogramming Operating System for Sixteen-Bit Microprocessors.

    DTIC Science & Technology

    1983-12-01

    Multiuser Support ... User Interface ... Inter-user Communications ... Memory ... user will greatly help facilitate the learning process. Inter-User Communication: the inter-user communications of the operating system can be done using ... inter-user communications would be met by using one or both of them. Memory and File Management: memory and file management is concerned with four basic

  4. Architecture of security management unit for safe hosting of multiple agents

    NASA Astrophysics Data System (ADS)

    Gilmont, Tanguy; Legat, Jean-Didier; Quisquater, Jean-Jacques

    1999-04-01

    In such growing areas as remote applications in large public networks, electronic commerce, digital signatures, intellectual property and copyright protection, and even operating system extensibility, the hardware security level offered by existing processors is insufficient. They lack protection mechanisms that prevent the user from tampering with critical data owned by those applications. Some devices, such as smart cards, are exceptions, but they have neither enough processing power nor enough memory to stand up to such applications. This paper proposes an architecture for a secure processor, in which the classical memory management unit is extended into a new security management unit. It allows ciphered code execution and ciphered data processing. An internal permanent memory can store cipher keys and critical data for several client agents simultaneously. The ordinary supervisor privilege scheme is replaced by a privilege inheritance mechanism that is better suited to operating system extensibility. The result is a secure processor that has hardware support for extensible multitask operating systems, and can be used for both general applications and critical applications needing strong protection. The security management unit and the internal permanent memory can be added to an existing CPU core without loss of performance, and do not require it to be modified.

  5. Modified stretched exponential model of computer system resources management limitations-The case of cache memory

    NASA Astrophysics Data System (ADS)

    Strzałka, Dominik; Dymora, Paweł; Mazurek, Mirosław

    2018-02-01

    In this paper we present some preliminary results in the field of computer systems management in relation to Tsallis thermostatistics and the ubiquitous problem of limited hardware resources. In the case of systems with non-deterministic behaviour, management of their resources is a key point that guarantees their acceptable performance and proper working. This is a very broad problem that poses many challenges in areas such as finance, transport, water and food, and health. We focus on computer systems, with attention paid to cache memory, and propose an analytical model that is able to connect the non-extensive entropy formalism, long-range dependencies, management of system resources, and queuing theory. The analytical results obtained are related to a practical experiment showing interesting and valuable results.
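
    The abstract does not give the model's closed form. For orientation only, the classical (unmodified) stretched-exponential relaxation function and the Tsallis q-exponential underlying non-extensive statistics are the standard textbook forms:

      % Classical stretched-exponential (Kohlrausch) relaxation:
      \phi(t) = \exp\!\left[-\left(\frac{t}{\tau}\right)^{\beta}\right], \qquad 0 < \beta \le 1
      % Tsallis q-exponential used in non-extensive thermostatistics:
      e_q(x) = \left[\,1 + (1-q)\,x\,\right]_{+}^{\frac{1}{1-q}}, \qquad e_q(x) \to e^{x} \ \text{as } q \to 1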

  6. Things to come: postmodern digital knowledge management and medical informatics.

    PubMed

    Matheson, N W

    1995-01-01

    The overarching informatics grand challenge facing society is the creation of knowledge management systems that can acquire, conserve, organize, retrieve, display, and distribute what is known today in a manner that informs and educates, facilitates the discovery and creation of new knowledge, and contributes to the health and welfare of the planet. At one time the private, national, and university libraries of the world collectively constituted the memory of society's intellectual history. In the future, these new digital knowledge management systems will constitute human memory in its entirety. The current model of multiple local collections of duplicated resources will give way to specialized sole-source servers. In this new environment all scholarly scientific knowledge should be public domain knowledge: managed by scientists, organized for the advancement of knowledge, and readily available to all. Over the next decade, the challenge for the field of medical informatics and for the libraries that serve as the continuous memory for the biomedical sciences will be to come together to form a new organization that will lead to the development of postmodern digital knowledge management systems for medicine. These systems will form a portion of the evolving world brain of the 21st century.

  7. Investigation and design of a Project Management Decision Support System for the 4950th Test Wing.

    DTIC Science & Technology

    1986-03-01

    all decision makers is the need for memory aids (reports, hand-written notes, mental memory joggers, etc.). 4. Even in similar decision making ... memories to synthesize a decision-making process based on their individual styles, skills, and knowledge (Sprague, 1982: 106). Control mechanisms ... representations shown in Figures 4.9 and 4.10 provide a means to this objective. By enabling a manager to make and record reasonable changes to

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pin, F.G.; Bender, S.R.

    Most fuzzy logic-based reasoning schemes developed for robot control are fully reactive, i.e., the reasoning modules consist of fuzzy rule bases that represent direct mappings from the stimuli provided by the perception systems to the responses implemented by the motion controllers. Due to their totally reactive nature, such reasoning systems can encounter problems such as infinite loops and limit cycles. In this paper, we propose an approach to remedy these problems by adding a memory and memory-related behaviors to basic reactive systems. Three major types of memory behaviors are addressed: memory creation, memory management, and memory utilization. These are first presented, and examples of their implementation for the recognition of limit cycles during the navigation of an autonomous robot in a priori unknown environments are then discussed.

  9. Technology for organization of the onboard system for processing and storage of ERS data for ultrasmall spacecraft

    NASA Astrophysics Data System (ADS)

    Strotov, Valery V.; Taganov, Alexander I.; Konkin, Yuriy V.; Kolesenkov, Aleksandr N.

    2017-10-01

    Processing and analyzing Earth remote sensing data on board an ultra-small spacecraft is a relevant task, given the significant energy expenditure of data transfer and the low computing power of onboard computers. This raises the issue of effective and reliable storage of the general information flow obtained from onboard data collection systems, including Earth remote sensing data, in a specialized database. The paper considers the peculiarities of database management system operation with a multilevel memory structure. A format has been developed for storing data in the database that describes the database's physical structure and contains the parameters required for loading information. This structure reduces the memory occupied by the database because key values do not need to be stored separately. The paper presents the architecture of a relational database management system intended to be embedded in the onboard ultra-small spacecraft software. A database for storing different kinds of information, including Earth remote sensing data, can be developed with such a database management system for subsequent processing. The suggested database management system architecture places low demands on the computing power and memory resources available on board an ultra-small spacecraft. Data integrity is ensured during input and modification of the structured information.

  10. C-MOS array design techniques: SUMC multiprocessor system study

    NASA Technical Reports Server (NTRS)

    Clapp, W. A.; Helbig, W. A.; Merriam, A. S.

    1972-01-01

    The current capabilities of LSI techniques for speed and reliability, plus the possibilities of assembling large configurations of LSI logic and storage elements, have demanded the study of multiprocessors and multiprocessing techniques, problems, and potentialities. Evaluated are three previous systems studies for a space ultrareliable modular computer multiprocessing system, and a new multiprocessing system is proposed that is flexibly configured with up to four central processors, four I/O processors, and 16 main memory units, plus auxiliary memory and peripheral devices. This multiprocessor system features a multilevel interrupt, qualified S/360 compatibility for ground-based generation of programs, virtual memory management of a storage hierarchy through I/O processors, and multiport access to multiple and shared memory units.

  11. AFTOMS Technology Issues and Alternatives Report

    DTIC Science & Technology

    1989-12-01

    color, resolution; memory, processor speed; LAN interfaces, etc.; power requirements, physical and weather ruggedness; display ... Telephone and Telegraph; CD-I Compact Disk-Interactive; CD-ROM Compact Disk-Read Only Memory; CGM Computer Graphics Metafile; CNWDI Critical Nuclear ... Database Management System; RFP Request For Proposal; RFS Remote File System; ROM Read Only Memory; SA-ALC San Antonio Air Logistics Center; SAC

  12. Memory handling in the ATLAS submission system from job definition to sites limits

    NASA Astrophysics Data System (ADS)

    Forti, A. C.; Walker, R.; Maeno, T.; Love, P.; Rauschmayr, N.; Filipcic, A.; Di Girolamo, A.

    2017-10-01

    In the past few years the increased luminosity of the LHC, changes in the Linux kernel, and a move to a 64-bit architecture have affected the memory usage of ATLAS jobs, and the ATLAS workload management system had to be adapted to be more flexible and to pass memory parameters to the batch systems, which in the past was not a necessity. This paper describes the steps required to add the capability to better handle memory requirements, including a review of how each component's definition and parametrization of memory is mapped to the other components, and of what changes had to be applied to make the submission chain work. These changes range from the definition of tasks and the way task memory requirements are set using scout jobs, through the new memory tool developed to do that, to how these values are used by the submission component of the system and how the jobs are treated by the sites through the CEs, batch systems, and ultimately the kernel.

  13. Memory Forensics: Review of Acquisition and Analysis Techniques

    DTIC Science & Technology

    2013-11-01

    Management Overview Processes running on modern multitasking operating systems operate on an abstraction of RAM, called virtual memory [7]. In these systems...information such as user names, email addresses and passwords [7]. Analysts also use tools such as WinHex to identify headers or other suspicious data within

  14. The NEEDS Data Base Management and Archival Mass Memory System

    NASA Technical Reports Server (NTRS)

    Bailey, G. A.; Bryant, S. B.; Thomas, D. T.; Wagnon, F. W.

    1980-01-01

    A Data Base Management System and an Archival Mass Memory System are being developed that will have a 10 to the 12th bit on-line and a 10 to the 13th off-line storage capacity. The integrated system will accept packetized data from the data staging area at 50 Mbps, create a comprehensive directory, provide for file management, record the data, perform error detection and correction, accept user requests, retrieve the requested data files and provide the data to multiple users at a combined rate of 50 Mbps. Stored and replicated data files will have a bit error rate of less than 10 to the -9th even after ten years of storage. The integrated system will be demonstrated to prove the technology late in 1981.

  15. Rambrain - a library for virtually extending physical memory

    NASA Astrophysics Data System (ADS)

    Imgrund, Maximilian; Arth, Alexander

    2017-08-01

    We introduce Rambrain, a user space library that manages memory consumption of your code. Using Rambrain you can overcommit memory over the size of physical memory present in the system. Rambrain takes care of temporarily swapping out data to disk and can handle multiples of the physical memory size present. Rambrain is thread-safe, OpenMP and MPI compatible and supports Asynchronous IO. The library was designed to require minimal changes to existing programs and to be easy to use.

  16. Efficient accesses of data structures using processing near memory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jayasena, Nuwan S.; Zhang, Dong Ping; Diez, Paula Aguilera

    Systems, apparatuses, and methods for implementing efficient queues and other data structures. A queue may be shared among multiple processors and/or threads without using explicit software atomic instructions to coordinate access to the queue. System software may allocate an atomic queue and corresponding queue metadata in system memory and return, to the requesting thread, a handle referencing the queue metadata. Any number of threads may utilize the handle for accessing the atomic queue. The logic for ensuring the atomicity of accesses to the atomic queue may reside in a management unit in the memory controller coupled to the memory where the atomic queue is allocated.

  17. Using the Change Manager Model for the Hippocampal System to Predict Connectivity and Neurophysiological Parameters in the Perirhinal Cortex

    PubMed Central

    Coward, L. Andrew; Gedeon, Tamas D.

    2016-01-01

    Theoretical arguments demonstrate that practical considerations, including the needs to limit physiological resources and to learn without interference with prior learning, severely constrain the anatomical architecture of the brain. These arguments identify the hippocampal system as the change manager for the cortex, with the role of selecting the most appropriate locations for cortical receptive field changes at each point in time and driving those changes. This role results in the hippocampal system recording the identities of groups of cortical receptive fields that changed at the same time. These types of records can also be used to reactivate the receptive fields active during individual unique past events, providing mechanisms for episodic memory retrieval. Our theoretical arguments identify the perirhinal cortex as one important focal point both for driving changes and for recording and retrieving episodic memories. The retrieval of episodic memories must not drive unnecessary receptive field changes, and this consideration places strong constraints on neuron properties and connectivity within and between the perirhinal cortex and regular cortex. Hence the model predicts a number of such properties and connectivity. Experimental test of these falsifiable predictions would clarify how change is managed in the cortex and how episodic memories are retrieved. PMID:26819594

  18. Advanced Development of Certified OS Kernels

    DTIC Science & Technology

    2015-06-01

    It provides an infrastructure to map a physical page into multiple processes’ page maps in different address spaces. Their ownership mechanism ensures ... of their shared memory infrastructure. Trap module: the trap module specifies the behaviors of exception handlers and mCertiKOS system calls. In ... layers), 1 pm for the shared memory infrastructure (3 layers), 3.5 pm for the thread management (10 layers), 1 pm for the process management (4 layers

  19. Living Design Memory: Framework, Implementation, Lessons Learned.

    ERIC Educational Resources Information Center

    Terveen, Loren G.; And Others

    1995-01-01

    Discusses large-scale software development and describes the development of the Designer Assistant to improve software development effectiveness. Highlights include the knowledge management problem; related work, including artificial intelligence and expert systems, software process modeling research, and other approaches to organizational memory;…

  20. Sptrace

    NASA Technical Reports Server (NTRS)

    Burleigh, Scott C.

    2011-01-01

    Sptrace is a general-purpose space utilization tracing system that is conceptually similar to the commercial Purify product used to detect leaks and other memory usage errors. It is designed to monitor space utilization in any sort of heap, i.e., a region of data storage on some device (nominally memory; possibly shared and possibly persistent) with a flat address space. This software can trace usage of shared and/or non-volatile storage in addition to private RAM (random access memory). Sptrace is implemented as a set of C function calls that are invoked from within the software that is being examined. The function calls fall into two broad classes: (1) functions that are embedded within the heap management software [e.g., JPL's SDR (Simple Data Recorder) and PSM (Personal Space Management) systems] to enable heap usage analysis by populating a virtual time-sequenced log of usage activity, and (2) reporting functions that are embedded within the application program whose behavior is suspect. For ease of use, these functions may be wrapped privately inside public functions offered by the heap management software. Sptrace can be used for VxWorks or RTEMS realtime systems as easily as for Linux or OS/X systems.

  1. Scalable Motion Estimation Processor Core for Multimedia System-on-Chip Applications

    NASA Astrophysics Data System (ADS)

    Lai, Yeong-Kang; Hsieh, Tian-En; Chen, Lien-Fei

    2007-04-01

    In this paper, we describe a high-throughput and scalable motion estimation processor architecture for multimedia system-on-chip applications. The number of processing elements (PEs) is scalable according to the variable algorithm parameters and the performance required for different applications. By using the PE rings efficiently and an intelligent memory-interleaving organization, the efficiency of the architecture can be increased. Moreover, using efficient on-chip memories and a data management technique can effectively decrease the power consumption and memory bandwidth. Techniques for reducing the number of interconnections and external memory accesses are also presented. Our results demonstrate that the proposed scalable PE-ringed architecture is a flexible and high-performance processor core in multimedia system-on-chip applications.

  2. Avionics Architecture Standards as an Approach to Obsolescence Management

    DTIC Science & Technology

    2000-10-01

    and goals is one method of achieving the necessary critical mass of skilled ... system. The term System Architecture refers to a consistent set of such ... Processing Module (GPM), Mass Memory Module (MMM) and Power Conversion Module (PCM). ... executed on the modules within an ASAAC system will be stored in a central location, the Mass Memory Module (MMM). Therefore, if modules are to be ... MOS - Module Support Layer to Operating System. The purpose of the MOS

  3. Creative Classroom Assignment Through Database Management.

    ERIC Educational Resources Information Center

    Shah, Vivek; Bryant, Milton

    1987-01-01

    The Faculty Scheduling System (FSS), a database management system designed to give administrators the ability to schedule faculty in a fast and efficient manner, is described. The FSS, developed using dBASE III, requires an IBM-compatible microcomputer with a minimum of 256K memory. (MLW)

  4. Minimizing the Disruptive Effects of Prospective Memory in Simulated Air Traffic Control

    PubMed Central

    Loft, Shayne; Smith, Rebekah E.; Remington, Roger

    2015-01-01

    Prospective memory refers to remembering to perform an intended action in the future. Failures of prospective memory can occur in air traffic control. In two experiments, we examined the utility of external aids for facilitating air traffic management in a simulated air traffic control task with prospective memory requirements. Participants accepted and handed off aircraft and detected aircraft conflicts. The prospective memory task involved remembering to deviate from a routine operating procedure when accepting target aircraft. External aids that contained details of the prospective memory task appeared and flashed when target aircraft needed acceptance. In Experiment 1, external aids presented either adjacent or non-adjacent to each of the 20 target aircraft presented over the 40-min test phase reduced prospective memory error by 11% compared to a condition without external aids. In Experiment 2, only a single target aircraft was presented a significant time (39-42 min) after presentation of the prospective memory instruction, and the external aids reduced prospective memory error by 34%. In both experiments, costs to the efficiency of non-prospective memory air traffic management (non-target aircraft acceptance response time, conflict detection response time) were reduced by non-adjacent aids compared to no aids or adjacent aids. In contrast, in both experiments, the efficiency of prospective memory air traffic management (target aircraft acceptance response time) was facilitated by adjacent aids compared to non-adjacent aids. Together, these findings have potential implications for the design of automated alerting systems to maximize multi-task performance in work settings where operators monitor and control demanding perceptual displays. PMID:24059825

  5. Memory Management of Multimedia Services in Smart Homes

    NASA Astrophysics Data System (ADS)

    Kamel, Ibrahim; Muhaureq, Sanaa A.

    Nowadays there is a wide spectrum of applications that run in smart home environments. Consequently, the home gateway, which is a central component of the smart home, must manage many applications despite limited memory resources. OSGi is a middleware standard for home gateways; OSGi models services as dependent components. Moreover, these applications might differ in their importance. Services collaborate and complement each other to achieve the required results. This paper addresses the following problem: given a home gateway that hosts several applications with different priorities and arbitrary dependencies among them, when the gateway runs out of memory, which application or service should be stopped or evicted from memory to start a new service? Note that stopping a given service means that all the services that depend on it will be stopped too. Because of the service dependencies, traditional memory management techniques from the operating system literature might not be efficient. Our goal is to stop the least important and the least number of services. The paper presents a novel algorithm for home gateway memory management. The proposed algorithm takes into consideration the priority of the application and the dependencies between different services, in addition to the amount of memory occupied by each service. We implemented the proposed algorithm and performed many experiments to evaluate its performance and execution time. The proposed algorithm is implemented as a part of the OSGi framework (Open Service Gateway initiative). We used best fit and worst fit as yardsticks to show the effectiveness of the proposed algorithm.
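
    The published algorithm is not reproduced in the abstract. The C++ sketch below illustrates, with assumed data structures and example services, the stated goal of stopping the least important and fewest services: stopping a service also stops its transitive dependents, and the manager evicts the candidate whose affected set has the lowest total priority, breaking ties by memory freed.

      #include <iostream>
      #include <map>
      #include <set>
      #include <string>
      #include <vector>

      // Hedged sketch (not the paper's published algorithm): evict the candidate
      // whose transitively affected set of services carries the lowest total
      // priority, preferring candidates that free more memory on ties.
      struct Service { int priority; int memory_kb; std::vector<std::string> dependents; };

      std::set<std::string> affected(const std::map<std::string, Service>& reg, const std::string& s) {
          std::set<std::string> out{s};
          for (const auto& dep : reg.at(s).dependents)
              for (const auto& d : affected(reg, dep)) out.insert(d);
          return out;
      }

      int main() {
          std::map<std::string, Service> reg = {
              {"logger",  {1, 200, {"media"}}},        // media depends on logger
              {"media",   {5, 800, {}}},
              {"climate", {3, 300, {}}},
          };
          std::string victim; int best_cost = 1 << 30; int best_freed = 0;
          for (const auto& entry : reg) {
              const std::string& name = entry.first;
              int cost = 0, freed = 0;
              for (const auto& a : affected(reg, name)) {
                  cost += reg.at(a).priority;
                  freed += reg.at(a).memory_kb;
              }
              if (cost < best_cost || (cost == best_cost && freed > best_freed)) {
                  best_cost = cost; best_freed = freed; victim = name;
              }
          }
          std::cout << "stop " << victim << " (frees " << best_freed << " kB)\n";  // stop climate
          return 0;
      }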

  6. Expert system shell to reason on large amounts of data

    NASA Technical Reports Server (NTRS)

    Giuffrida, Gionanni

    1994-01-01

    Current database management systems (DBMSs) do not provide a sophisticated environment for developing rule-based expert system applications. Some of the newer DBMSs come with some sort of rule mechanism; these are active and deductive database systems. However, neither is full-featured enough to support a complete implementation based on rules. On the other hand, current expert system shells do not provide any link with external databases. That is, all the data are kept in the system working memory, and such working memory is maintained in main memory. For some applications the reduced size of the available working memory can be a constraint on development. Typically these are applications which require reasoning on huge amounts of data, and all these data do not fit into the computer's main memory. Moreover, in some cases these data may already be available in database systems and continuously updated while the expert system is running. This paper proposes an architecture which employs knowledge discovery techniques to reduce the amount of data to be stored in main memory; in this architecture a standard DBMS is coupled with a rule-based language. The data are stored in the DBMS. An interface between the two systems is responsible for inducing knowledge from the set of relations. Such induced knowledge is then transferred to the rule-based language's working memory.

  7. Computer Sciences and Data Systems, volume 1

    NASA Technical Reports Server (NTRS)

    1987-01-01

    Topics addressed include: software engineering; university grants; institutes; concurrent processing; sparse distributed memory; distributed operating systems; intelligent data management processes; expert system for image analysis; fault tolerant software; and architecture research.

  8. Digital Equipment Corporation VAX/VMS Version 4.3

    DTIC Science & Technology

    1986-07-30

    operating system performs process-oriented paging that allows execution of programs that may be larger than the physical memory allocated to them ... to higher privileged modes. (For an explanation of how the four access modes provide memory access protection see page 9, "Memory Management".) A ... to optimize program performance for real-time applications or interactive environments.

  9. Benchmarking and Evaluating Unified Memory for OpenMP GPU Offloading

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mishra, Alok; Li, Lingda; Kong, Martin

    Here, the latest OpenMP standard offers automatic device offloading capabilities which facilitate GPU programming. Despite this, there remain many challenges. One of these is the unified memory feature introduced in recent GPUs. GPUs in current and future HPC systems have enhanced support for a unified memory space. In such systems, CPU and GPU can access each other's memory transparently; that is, the data movement is managed automatically by the underlying system software and hardware. Memory oversubscription is also possible in these systems. However, there is a significant lack of knowledge about how this mechanism will perform and how programmers should use it. We have modified several benchmark codes in the Rodinia benchmark suite to study the behavior of the OpenMP accelerator extensions and have used them to explore the impact of unified memory in an OpenMP context. We moreover modified the open source LLVM compiler to allow OpenMP programs to exploit unified memory. The results of our evaluation reveal that, while the performance of unified memory is comparable with that of normal GPU offloading for benchmarks with little data reuse, it suffers from significant overhead when GPU memory is oversubscribed for benchmarks with a large amount of data reuse. Based on these results, we provide several guidelines for programmers to achieve better performance with unified memory.
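
    A minimal sketch of the OpenMP 5.x unified memory usage the paper evaluates is shown below; it assumes a compiler and GPU runtime that honor the unified_shared_memory requirement, and the array size is illustrative.

      #include <cstdio>
      #include <vector>

      // Hedged sketch: with the unified_shared_memory requirement, host-allocated
      // data can be touched directly inside a target region without explicit map
      // clauses, leaving data movement to the runtime and hardware.
      #pragma omp requires unified_shared_memory

      int main() {
          const int n = 1 << 20;
          std::vector<double> a(n, 1.0), b(n, 2.0);

          // Offload the loop; pa and pb are plain host pointers.
          double* pa = a.data();
          double* pb = b.data();
          #pragma omp target teams distribute parallel for
          for (int i = 0; i < n; ++i)
              pa[i] += 0.5 * pb[i];

          std::printf("a[0] = %f\n", pa[0]);   // expect 2.0
          return 0;
      }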

  10. Resource Management Scheme Based on Ubiquitous Data Analysis

    PubMed Central

    Lee, Heung Ki; Jung, Jaehee

    2014-01-01

    Resource management of the main memory and process handler is critical to enhancing the system performance of a web server. Owing to the transaction delay time that affects incoming requests from web clients, web server systems utilize several web processes to anticipate future requests. This procedure is able to decrease the web generation time because there are enough processes to handle the incoming requests from web browsers. However, inefficient process management results in low service quality for the web server system. Proper pregenerated process mechanisms are required for dealing with the clients' requests. Unfortunately, it is difficult to predict how many requests a web server system is going to receive. If a web server system builds too many web processes, it wastes a considerable amount of memory space, and thus performance is reduced. We propose an adaptive web process manager scheme based on the analysis of web log mining. In the proposed scheme, the number of web processes is controlled through prediction of incoming requests, and accordingly, the web process management scheme consumes the least possible web transaction resources. In experiments, real web trace data were used to prove the improved performance of the proposed scheme. PMID:25197692
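
    The paper's predictor is built from web log mining. The C++ sketch below only illustrates the general idea of adapting the pregenerated process pool to recent load, using a simple moving average with an assumed 20% headroom factor rather than the authors' model; the class and parameter names are hypothetical.

      #include <algorithm>
      #include <cstdio>
      #include <deque>

      // Hedged sketch: size the pool of pregenerated worker processes from a
      // moving average of recent request counts, with a safety margin, instead
      // of keeping a large fixed pool that wastes memory.
      class AdaptivePool {
          std::deque<int> window_;            // requests observed per interval
          const std::size_t window_len_;
          const int min_workers_, max_workers_;
      public:
          AdaptivePool(std::size_t w, int lo, int hi)
              : window_len_(w), min_workers_(lo), max_workers_(hi) {}

          int update(int requests_this_interval) {
              window_.push_back(requests_this_interval);
              if (window_.size() > window_len_) window_.pop_front();
              double avg = 0;
              for (int r : window_) avg += r;
              avg /= window_.size();
              int target = static_cast<int>(avg * 1.2) + 1;    // 20% headroom (illustrative)
              return std::clamp(target, min_workers_, max_workers_);
          }
      };

      int main() {
          AdaptivePool pool(5, 2, 64);
          for (int load : {10, 12, 40, 38, 9, 4})
              std::printf("load %2d -> %2d workers\n", load, pool.update(load));
          return 0;
      }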

  11. A Scalable Multicore Architecture With Heterogeneous Memory Structures for Dynamic Neuromorphic Asynchronous Processors (DYNAPs).

    PubMed

    Moradi, Saber; Qiao, Ning; Stefanini, Fabio; Indiveri, Giacomo

    2018-02-01

    Neuromorphic computing systems comprise networks of neurons that use asynchronous events for both computation and communication. This type of representation offers several advantages in terms of bandwidth and power consumption in neuromorphic electronic systems. However, managing the traffic of asynchronous events in large scale systems is a daunting task, both in terms of circuit complexity and memory requirements. Here, we present a novel routing methodology that employs both hierarchical and mesh routing strategies and combines heterogeneous memory structures for minimizing both memory requirements and latency, while maximizing programming flexibility to support a wide range of event-based neural network architectures, through parameter configuration. We validated the proposed scheme in a prototype multicore neuromorphic processor chip that employs hybrid analog/digital circuits for emulating synapse and neuron dynamics together with asynchronous digital circuits for managing the address-event traffic. We present a theoretical analysis of the proposed connectivity scheme, describe the methods and circuits used to implement such scheme, and characterize the prototype chip. Finally, we demonstrate the use of the neuromorphic processor with a convolutional neural network for the real-time classification of visual symbols being flashed to a dynamic vision sensor (DVS) at high speed.

  12. Virtual memory support for distributed computing environments using a shared data object model

    NASA Astrophysics Data System (ADS)

    Huang, F.; Bacon, J.; Mapp, G.

    1995-12-01

    Conventional storage management systems provide one interface for accessing memory segments and another for accessing secondary storage objects. This hinders application programming and affects overall system performance due to mandatory data copying and user/kernel boundary crossings, which in the microkernel case may involve context switches. Memory-mapping techniques may be used to provide programmers with a unified view of the storage system. This paper extends such techniques to support a shared data object model for distributed computing environments in which good support for coherence and synchronization is essential. The approach is based on a microkernel, typed memory objects, and integrated coherence control. A microkernel architecture is used to support multiple coherence protocols and the addition of new protocols. Memory objects are typed and applications can choose the most suitable protocols for different types of object to avoid protocol mismatch. Low-level coherence control is integrated with high-level concurrency control so that the number of messages required to maintain memory coherence is reduced and system-wide synchronization is realized without severely impacting the system performance. These features together contribute a novel approach to the support for flexible coherence under application control.

  13. Distributed Sensor Networks

    DTIC Science & Technology

    1979-09-30

    University, Pittsburgh, Pennsylvania (1976). 14. R. L. Kirby, "ULISP for PDP-11s with Memory Management," Report MCS-76-23763, University of Maryland... teletype or graphics output. The recorder... must also monitor its own command queue and acknowledge commands sent to it by the user interface... kernel. By a network kernel we mean a multicomputer distributed operating system kernel that includes processor schedulers, "core" memory managers, and

  14. A bio-inspired memory model for structural health monitoring

    NASA Astrophysics Data System (ADS)

    Zheng, Wei; Zhu, Yong

    2009-04-01

    Long-term structural health monitoring (SHM) systems need intelligent management of the monitoring data. By analogy with the way the human brain processes memories, we present a bio-inspired memory model (BIMM) that does not require prior knowledge of the structure parameters. The model contains three time-domain areas: a sensory memory area, a short-term memory area and a long-term memory area. First, the initial parameters of the structural state are specified to establish safety criteria. Then the large amount of monitoring data that falls within the safety limits is filtered while the data outside the safety limits are captured instantly in the sensory memory area. Second, disturbance signals are distinguished from danger signals in the short-term memory area. Finally, the stable data of the structural balance state are preserved in the long-term memory area. A strategy for priority scheduling via fuzzy c-means for the proposed model is then introduced. An experiment on bridge tower deformation demonstrates that the proposed model can be applied for real-time acquisition, limited-space storage and intelligent mining of the monitoring data in a long-term SHM system.
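
    A hedged sketch of the three time-domain areas described above; the thresholds and the run-length rule for separating disturbances from danger signals are invented placeholders, not the authors' criteria.

```python
# Hedged sketch of the three time-domain areas of the BIMM described above;
# the safety limits and the disturbance/danger test are illustrative only.
def bimm_filter(samples, safe_low, safe_high, danger_run=3):
    sensory, short_term, long_term = [], [], []
    run = 0
    for t, x in enumerate(samples):
        if safe_low <= x <= safe_high:
            run = 0
            long_term.append((t, x))          # stable, in-limit data preserved
            continue
        sensory.append((t, x))                # out-of-limit data captured instantly
        run += 1
        label = "danger" if run >= danger_run else "disturbance"
        short_term.append((t, x, label))      # persistent excursions flagged as danger
    return sensory, short_term, long_term

print(bimm_filter([1.0, 1.1, 5.0, 5.2, 5.3, 1.0], safe_low=0.0, safe_high=2.0))
```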

  15. Generic, Type-Safe and Object Oriented Computer Algebra Software

    NASA Astrophysics Data System (ADS)

    Kredel, Heinz; Jolly, Raphael

    Advances in computer science, in particular object oriented programming, and software engineering have had little practical impact on computer algebra systems in the last 30 years. The software design of existing systems is still dominated by ad-hoc memory management, weakly typed algorithm libraries and proprietary domain specific interactive expression interpreters. We discuss a modular approach to computer algebra software: usage of state-of-the-art memory management and run-time systems (e.g. JVM), usage of strongly typed, generic, object oriented programming languages (e.g. Java), and usage of general purpose, dynamic interactive expression interpreters (e.g. Python). To illustrate the workability of this approach, we have implemented and studied computer algebra systems in Java and Scala. In this paper we report on the current state of this work by presenting new examples.

  16. Cultural resource applications for a GIS: Stone conservation at Jefferson and Lincoln Memorials

    USGS Publications Warehouse

    Joly, Kyle; Donald, Tony; Comer, Douglas

    1998-01-01

    Geographical information systems are rapidly becoming essential tools for land management. They provide a way to link landscape features to the wide variety of information that managers must consider when formulating plans for a site, designing site improvement and restoration projects, determining maintenance projects and protocols, and even interpreting the site. At the same time, they can be valuable research tools. Standing structures offer a different sort of geography, even though a humanly contrived one. Therefore, the capability of a geographical information system (GIS) to link geographical units to the information pertinent to the site and resource management can be employed in the management of standing structures. This was the idea that inspired the use of GIS software, ArcView, to link computer-aided design (CAD) drawings of the Jefferson and Lincoln Memorials with inventories of the stones in the memorials. Both the CAD drawings and the inventory were in existence; what remained to be done was to modify the CAD files and place the inventory in an appropriately designed computerized database, and then to link the two in a GIS project. This work was carried out at the NPS Denver Service Center, Resource Planning Group, Applied Archaeology Center (DSC-RPG-AAC), in Silver Spring, Maryland, with the assistance of US/ICOMOS summer interns Katja Marasovic (Croatia) and Rastislav Gromnica (Slovakia), under the supervision of AAC office manager Douglas Comer. Project guidance was provided by Tony Donald, the Denver Service Center (DSC) project architect for the restoration of the Jefferson and Lincoln Memorials, and GIS consultation services by Kyle Joly.

  17. Structural dissociation and its resolution among Holocaust survivors: a qualitative research study.

    PubMed

    Auerbach, Carl F; Mirvis, Shoshana; Stern, Susan; Schwartz, Jonathan

    2009-01-01

    This qualitative study investigated how Holocaust survivors managed to lead "normal" lives after experiencing incomprehensible horror. It was based on structural dissociation theory (O. Van der Hart, E. R. S. Nijenhuis, & K. Steele, 2006), which postulates that when people encounter traumatic events that they cannot integrate into their ongoing mental lives, their personalities may divide into 2 distinct action systems: the apparently normal part of the personality (ANP; involving systems that manage functions of daily life) and the emotional part of the personality (EP; involving systems related to the traumatic memory). Failure to integrate also leads to nonrealization of the traumatic experience. Research participants were 20 people randomly selected from the U.S. Holocaust Memorial Museum's oral history archives. Their interviews were analyzed in terms of structural dissociation and nonrealization in order to develop a narrative about the stages of their post-war lives. In the 1st stage (Surviving the Camps: Formation of Traumatic Memories), the experience of surviving the camps created traumatic emotional memories. In the 2nd stage (Post-War Adjustment: Creating the ANP by Splitting Off the Traumatic Memories Into an EP), survivors' desire to create a normal post-war life led them to split off their traumatic memories. In the 3rd stage (Developing the Motivation to Remember), survivors' changed life context motivated them to confront the previously split-off material. In the 4th stage (Creating a Historical Self: Integration of the ANP and EP), survivors integrated past experience into their lives, although the impact of the trauma never fully disappeared.

  18. Knowledge information management toolkit and method

    DOEpatents

    Hempstead, Antoinette R.; Brown, Kenneth L.

    2006-08-15

    A system is provided for managing user entry and/or modification of knowledge information into a knowledge base file having an integrator support component and a data source access support component. The system includes processing circuitry, memory, a user interface, and a knowledge base toolkit. The memory communicates with the processing circuitry and is configured to store at least one knowledge base. The user interface communicates with the processing circuitry and is configured for user entry and/or modification of knowledge pieces within a knowledge base. The knowledge base toolkit is configured for converting knowledge in at least one knowledge base from a first knowledge base form into a second knowledge base form. A method is also provided.

  19. Chip architecture - A revolution brewing

    NASA Astrophysics Data System (ADS)

    Guterl, F.

    1983-07-01

    Techniques being explored by microchip designers and manufacturers to both speed up memory access and instruction execution while protecting memory are discussed. Attention is given to hardwiring control logic, pipelining for parallel processing, devising orthogonal instruction sets for interchangeable instruction fields, and the development of hardware for implementation of virtual memory and multiuser systems to provide memory management and protection. The inclusion of microcode in mainframes eliminated logic circuits that control timing and gating of the CPU. However, improvements in memory architecture have reduced access time to below that needed for instruction execution. Hardwiring the functions as a virtual memory enhances memory protection. Parallelism involves a redundant architecture, which allows identical operations to be performed simultaneously, and can be directed with microcode to avoid abortion of intermediate instructions once one set of instructions has been completed.

  20. Tonic Inhibitory Control of Dentate Gyrus Granule Cells by α5-Containing GABAA Receptors Reduces Memory Interference.

    PubMed

    Engin, Elif; Zarnowska, Ewa D; Benke, Dietmar; Tsvetkov, Evgeny; Sigal, Maksim; Keist, Ruth; Bolshakov, Vadim Y; Pearce, Robert A; Rudolph, Uwe

    2015-10-07

    Interference between similar or overlapping memories formed at different times poses an important challenge to the hippocampal declarative memory system. Difficulties in managing interference are at the core of disabling cognitive deficits in neuropsychiatric disorders. Computational models have suggested that, in the normal brain, the sparse activation of the dentate gyrus granule cells maintained by tonic inhibitory control enables pattern separation, an orthogonalization process that allows distinct representations of memories despite interference. To test this mechanistic hypothesis, we generated mice with significantly reduced expression of the α5-containing GABAA receptors (α5-GABAARs) selectively in the granule cells of the dentate gyrus (α5DGKO mice). α5DGKO mice had reduced tonic inhibition of the granule cells without any change in fast phasic inhibition and showed increased activation in the dentate gyrus when presented with novel stimuli. α5DGKO mice showed impairments in cognitive tasks characterized by high interference, without any deficiencies in low-interference tasks, suggesting specific impairment of pattern separation. Reduction of fast phasic inhibition in the dentate gyrus through granule cell-selective knock-out of α2-GABAARs or the knock-out of the α5-GABAARs in the downstream CA3 area did not detract from pattern separation abilities, which confirms the anatomical and molecular specificity of the findings. In addition to lending empirical support to computational hypotheses, our findings have implications for the treatment of interference-related cognitive symptoms in neuropsychiatric disorders, particularly considering the availability of pharmacological agents selectively targeting α5-GABAARs. Interference between similar memories poses a significant limitation on the hippocampal declarative memory system, and impaired interference management is a cognitive symptom in many disorders. Thus, understanding mechanisms of successful interference management or processes that can lead to interference-related memory problems has high theoretical and translational importance. This study provides empirical evidence that tonic inhibition in the dentate gyrus (DG), which maintains sparseness of neuronal activation in the DG, is essential for management of interference. The specificity of findings to tonic, but not faster, more transient types of neuronal inhibition and to the DG, but not the neighboring brain areas, is presented through control experiments. Thus, the findings link interference management to a specific mechanism, proposed previously by computational models.

  1. The hippocampal system as the cortical resource manager: a model connecting psychology, anatomy and physiology.

    PubMed

    Coward, L Andrew

    2010-01-01

    A model is described in which the hippocampal system functions as resource manager for the neocortex. This model is developed from an architectural concept for the brain as a whole within which the receptive fields of neocortical columns can gradually expand but with some limited exceptions tend not to contract. The definition process for receptive fields is constrained so that they overlap as little as possible, and change as little as possible, but at least a minimum number of columns detect their fields within every sensory input state. Below this minimum, the receptive fields of some columns are expanded slightly until the minimum level is reached. The columns in which this expansion occurs are selected by a competitive process in the hippocampal system that identifies those in which only a relatively small expansion is required, and sends signals to those columns that trigger the expansion. These expansions in receptive fields are the information record that forms the declarative memory of the input state. Episodic memory activates a set of columns in which receptive fields expanded simultaneously at some point in the past, and the hippocampal system is therefore the appropriate source for information guiding access to such memories. Semantic memory associates columns that are often active (with or without expansions in receptive fields) simultaneously. Initially, the hippocampus can guide access to such memories on the basis of initial information recording, but to avoid corruption of the information needed for ongoing resource management, access control shifts to other parts of the neocortex. The roles of the mammillary bodies, amygdala and anterior thalamic nucleus can be understood as modulating information recording in accordance with various behavioral priorities. During sleep, provisional physical connectivity is created that supports receptive field expansions in the subsequent wake period, but previously created memories are not affected. This model matches a wide range of neuropsychological observation better than alternative hippocampal models. The information mechanisms required by the model are consistent with known brain anatomy and neuron physiology.

  2. SUMC fault tolerant computer system

    NASA Technical Reports Server (NTRS)

    1980-01-01

    The results of the trade studies are presented. These trades cover: establishing the basic configuration, establishing the CPU/memory configuration, establishing an approach to crosstrapping interfaces, defining the requirements of the redundancy management unit (RMU), establishing a spare plane switching strategy for the fault-tolerant memory (FTM), and identifying the most cost effective way of extending the memory addressing capability beyond the 64 K-bytes (K=1024) of SUMC-II B. The results of the design are compiled in the Contract End Item (CEI) Specification for the NASA Standard Spacecraft Computer II (NSSC-II), IBM 7934507. The implementation of the FTM and the memory address expansion is also described.

  3. The Effects of Split-Attention and Redundancy on Cognitive Load When Learning Cognitive and Psychomotor Tasks

    ERIC Educational Resources Information Center

    Pociask, Fredrick D.; Morrison, Gary

    2004-01-01

    Human working memory can be defined as a component system responsible for the temporary storage and manipulation of information related to higher level cognitive behaviors, such as understanding and reasoning (Baddeley, 1992; Becker & Morris, 1999). Working memory, while able to manage a complex array of cognitive activities, presents with an…

  4. Combining Distributed and Shared Memory Models: Approach and Evolution of the Global Arrays Toolkit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nieplocha, Jarek; Harrison, Robert J.; Kumar, Mukul

    2002-07-29

    Both shared memory and distributed memory models have advantages and shortcomings. The shared memory model is much easier to use but it ignores data locality/placement. Given the hierarchical nature of the memory subsystems in modern computers, this characteristic might have a negative impact on performance and scalability. Various techniques, such as code restructuring to increase data reuse and introducing blocking in data accesses, can address the problem and yield performance competitive with message passing [Singh], though at the cost of compromising ease of use. Distributed memory models such as message passing or one-sided communication offer performance and scalability but they compromise ease of use. In this context, the message-passing model is sometimes referred to as "assembly programming for scientific computing". The Global Arrays toolkit [GA1, GA2] attempts to offer the best features of both models. It implements a shared-memory programming model in which data locality is managed explicitly by the programmer. This management is achieved by explicit calls to functions that transfer data between a global address space (a distributed array) and local storage. In this respect, the GA model has similarities to the distributed shared-memory models that provide an explicit acquire/release protocol. However, the GA model acknowledges that remote data is slower to access than local data and allows data locality to be explicitly specified and hence managed. The GA model exposes to the programmer the hierarchical memory of modern high-performance computer systems, and by recognizing the communication overhead for remote data transfer, it promotes data reuse and locality of reference. This paper describes the characteristics of the Global Arrays programming model and the capabilities of the toolkit, and discusses its evolution.
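
    The toy class below illustrates the explicit global/local transfer style described above; it is a conceptual sketch with an invented API, not the actual Global Arrays interface.

```python
# Conceptual sketch only (hypothetical API, not the real Global Arrays calls):
# a global array is partitioned across owners, and get/put copy explicitly
# between the global address space and local storage.
import numpy as np

class ToyGlobalArray:
    def __init__(self, n, nprocs):
        self.block = (n + nprocs - 1) // nprocs
        # one chunk per "process"; in GA these would live in remote memory
        self.chunks = [np.zeros(max(0, min(self.block, n - p * self.block)))
                       for p in range(nprocs)]

    def owner(self, i):                      # locality is explicit and queryable
        return i // self.block

    def put(self, i, value):                 # local buffer -> global space
        self.chunks[self.owner(i)][i % self.block] = value

    def get(self, i):                        # global space -> local buffer
        return self.chunks[self.owner(i)][i % self.block]

ga = ToyGlobalArray(n=10, nprocs=4)
ga.put(7, 3.14)
print(ga.owner(7), ga.get(7))
```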

  5. Radiative bistability and thermal memory.

    PubMed

    Kubytskyi, Viacheslav; Biehs, Svend-Age; Ben-Abdallah, Philippe

    2014-08-15

    We predict the existence of a thermal bistability in many-body systems out of thermal equilibrium which exchange heat by thermal radiation using insulator-metal transition materials. We propose a writing-reading procedure and demonstrate the possibility to exploit the thermal bistability to make a volatile thermal memory. We show that this thermal memory can be used to store heat and thermal information (via an encoding temperature) for arbitrary long times. The radiative thermal bistability could find broad applications in the domains of thermal management, information processing, and energy storage.

  6. Set-Membership Identification for Robust Control Design

    DTIC Science & Technology

    1993-04-28

    system G can be regarded as having no memory of events prior to t = 1, the initial time. Roughly, this means... in (18), in terms of G and 0, we get... algorithm in [1]. Also, in our application the size of the matrices involved is quite large and special attention should be paid to memory management and algorithmic implementation; otherwise huge amounts of memory will be required to perform the optimization even for modest values of M and N

  7. Design and Implementation of a Basic Cross-Compiler and Virtual Memory Management System for the TI-59 Programmable Calculator.

    DTIC Science & Technology

    1983-06-01

    previously stated requirements to construct the framework for a software solution. It is during this phase of design that many of the most critical... the linker would have to be deferred until the compiler was formalized and in the implementation phase of design. The second problem involved... memory limit was encountered. At this point a segmentation occurred. The memory limits were reset and the combining process continued until another

  8. Effects of medicinal plants on Alzheimer's disease and memory deficits

    PubMed Central

    Akram, Muhammad; Nawaz, Allah

    2017-01-01

    Alzheimer's disease is an age-related neurodegenerative disorder characterized by memory deficits. Various studies have been carried out to find therapeutic approaches for Alzheimer's disease. However, a proper treatment option is still not available. There is no cure for Alzheimer's disease, but symptomatic treatment may improve memory and other dementia-related problems. Traditional medicine has been practiced worldwide as a memory enhancer since ancient times. Natural therapies, including herbs and medicinal plants, have long been used in the treatment of memory deficits such as dementia and amnesia, as well as Alzheimer's disease. Medicinal plants have been used in different systems of medicine, particularly the Unani system of medicine, and have exhibited powerful roles in the management and cure of memory disorders. Most of these herbs and plants have been chemically evaluated and their efficacy has also been proven in clinical trials. However, the underlying mechanisms of action are still being elucidated. In this paper, we have reviewed the role of different medicinal plants that play an important role in the treatment of Alzheimer's disease and memory deficits using conventional herbal therapy. PMID:28553349

  9. Managing Chemotherapy Side Effects: Memory Changes

    MedlinePlus

    ... Cancer Institute, Managing Chemotherapy Side Effects: Memory Changes. What is causing these changes? Your doctor ... thinking or remembering things ... Get help to remember things. Write down ...

  10. Acute transient cognitive dysfunction and acute brain injury induced by systemic inflammation occur by dissociable IL-1-dependent mechanisms.

    PubMed

    Skelly, Donal T; Griffin, Éadaoin W; Murray, Carol L; Harney, Sarah; O'Boyle, Conor; Hennessy, Edel; Dansereau, Marc-Andre; Nazmi, Arshed; Tortorelli, Lucas; Rawlins, J Nicholas; Bannerman, David M; Cunningham, Colm

    2018-06-06

    Systemic inflammation can impair cognition with relevance to dementia, delirium and post-operative cognitive dysfunction. Episodes of delirium also contribute to rates of long-term cognitive decline, implying that these acute events induce injury. Whether systemic inflammation-induced acute dysfunction and acute brain injury occur by overlapping or discrete mechanisms remains unexplored. Here we show that systemic inflammation, induced by bacterial LPS, produces both working-memory deficits and acute brain injury in the degenerating brain and that these occur by dissociable IL-1-dependent processes. In normal C57BL/6 mice, LPS (100 µg/kg) did not affect working memory but impaired long-term memory consolidation. However, prior hippocampal synaptic loss left mice selectively vulnerable to LPS-induced working memory deficits. Systemically administered IL-1 receptor antagonist (IL-1RA) was protective against, and systemic IL-1β replicated, these working memory deficits. Dexamethasone abolished systemic cytokine synthesis and was protective against working memory deficits, without blocking brain IL-1β synthesis. Direct application of IL-1β to ex vivo hippocampal slices induced non-synaptic depolarisation and irreversible loss of membrane potential in CA1 neurons from diseased animals, and systemic LPS increased apoptosis in the degenerating brain in an IL-1RI-dependent fashion. The data suggest that LPS induces working memory dysfunction via circulating IL-1β but direct hippocampal action of IL-1β causes neuronal dysfunction and may drive neuronal death. The data suggest that acute systemic inflammation produces both reversible cognitive deficits, resembling delirium, and acute brain injury contributing to long-term cognitive impairment, but that these events are mechanistically dissociable. These data have significant implications for management of cognitive dysfunction during acute illness.

  11. Robot Evolutionary Localization Based on Attentive Visual Short-Term Memory

    PubMed Central

    Vega, Julio; Perdices, Eduardo; Cañas, José M.

    2013-01-01

    Cameras are one of the most relevant sensors in autonomous robots. However, two of their challenges are to extract useful information from captured images, and to manage the small field of view of regular cameras. This paper proposes implementing a dynamic visual memory to store the information gathered from a moving camera on board a robot, followed by an attention system to choose where to look with this mobile camera, and a visual localization algorithm that incorporates this visual memory. The visual memory is a collection of relevant task-oriented objects and 3D segments, and its scope is wider than the current camera field of view. The attention module takes into account the need to reobserve objects in the visual memory and the need to explore new areas. The visual memory is useful also in localization tasks, as it provides more information about robot surroundings than the current instantaneous image. This visual system is intended as underlying technology for service robot applications in real people's homes. Several experiments have been carried out, both with simulated and real Pioneer and Nao robots, to validate the system and each of its components in office scenarios. PMID:23337333
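
    A possible scoring rule in the spirit of the attention module described above, balancing re-observation of aging memory objects against exploration of new areas; the weights and data layout are assumptions for illustration.

```python
# Illustrative scoring rule (assumed, not the authors' exact policy): balance
# re-observing aging objects in visual memory against exploring new areas.
def choose_gaze(memory_objects, unexplored_areas, now, w_age=1.0, w_explore=0.5):
    candidates = []
    for obj in memory_objects:               # each: {"id", "last_seen", "pan"}
        age = now - obj["last_seen"]
        candidates.append((w_age * age, ("reobserve", obj["id"], obj["pan"])))
    for area in unexplored_areas:            # each: {"id", "pan", "novelty"}
        candidates.append((w_explore * area["novelty"],
                           ("explore", area["id"], area["pan"])))
    if not candidates:
        return None
    return max(candidates, key=lambda c: c[0])[1]

objs = [{"id": "door", "last_seen": 2.0, "pan": 30},
        {"id": "table", "last_seen": 9.5, "pan": -40}]
areas = [{"id": "left-corner", "pan": -90, "novelty": 4.0}]
print(choose_gaze(objs, areas, now=10.0))
```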

  12. [Neurodynamic Bases of Imitation Learning and Episodic Memory].

    PubMed

    Tsukerman, V D

    2016-01-01

    In this review, three essentially important processes in the development of cognitive behavior are considered: learning about a spatial environment through physical activity, the encoding and recall of the spatio-temporal context of episodic memory, and imitation learning based on the mirror-neuron mechanism. The data show that the parieto-frontal system of learning by imitation allows the developing organism to acquire control skills and motor synergies in perisomatic space and to understand the intentions and goals of the observed actions of other individuals. At the same time, a widely distributed parieto-frontal and entorhinal-hippocampal system mediates spatial knowledge and the solution of navigation tasks important for constructing the spatio-temporal context of episodic memory.

  13. Performance analysis and kernel size study of the Lynx real-time operating system

    NASA Technical Reports Server (NTRS)

    Liu, Yuan-Kwei; Gibson, James S.; Fernquist, Alan R.

    1993-01-01

    This paper analyzes the Lynx real-time operating system (LynxOS), which has been selected as the operating system for the Space Station Freedom Data Management System (DMS). The features of LynxOS are compared to those of other Unix-based operating systems (OSs). The tools for measuring the performance of LynxOS, which include a high-speed digital timer/counter board, a device driver program, and an application program, are analyzed. The timings for interrupt response, process creation and deletion, threads, semaphores, shared memory, and signals are measured. The memory size of the DMS Embedded Data Processor (EDP) is limited. In addition, virtual memory is not suitable for real-time applications because page-swap timing may not be deterministic. Therefore, the DMS software, including LynxOS, has to fit in the main memory of an EDP. To reduce the LynxOS kernel size, the following steps are taken: analyzing the factors that influence the kernel size; identifying the modules of LynxOS that may not be needed in an EDP; adjusting the system parameters of LynxOS; reconfiguring the device drivers used in the LynxOS; and analyzing the symbol table. The reductions in kernel disk size, kernel memory size, and total kernel size from each step mentioned above are listed and analyzed.

  14. Design of a QoS-controlled ATM-based communications system in chorus

    NASA Astrophysics Data System (ADS)

    Coulson, Geoff; Campbell, Andrew; Robin, Philippe; Blair, Gordon; Papathomas, Michael; Shepherd, Doug

    1995-05-01

    We describe the design of an application platform able to run distributed real-time and multimedia applications alongside conventional UNIX programs. The platform is embedded in a microkernel/PC environment and supported by an ATM-based, QoS-driven communications stack. In particular, we focus on resource-management aspects of the design and deal with CPU scheduling, network resource-management and memory-management issues. An architecture is presented that guarantees QoS levels of both communications and processing with varying degrees of commitment as specified by user-level QoS parameters. The architecture uses admission tests to determine whether or not new activities can be accepted and includes modules to translate user-level QoS parameters into representations usable by the scheduling, network, and memory-management subsystems.
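
    A minimal admission test in the spirit of the architecture described above; the utilisation bound and task parameters are illustrative assumptions, not the platform's actual QoS mapping.

```python
# A minimal admission test in the spirit described above (numbers assumed):
# accept a new activity only if total CPU utilisation stays under a bound.
def admit(existing, candidate, bound=0.69):   # ~ln(2), the classic rate-monotonic bound
    """existing/candidate are (cost, period) pairs in the same time unit."""
    tasks = existing + [candidate]
    utilisation = sum(cost / period for cost, period in tasks)
    return utilisation <= bound

current = [(2, 10), (5, 40)]                  # 0.2 + 0.125 = 0.325
print(admit(current, (10, 50)))               # +0.2 -> 0.525 -> accepted
print(admit(current, (30, 60)))               # +0.5 -> 0.825 -> rejected
```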

  15. Multiprocessor architectural study

    NASA Technical Reports Server (NTRS)

    Kosmala, A. L.; Stanten, S. F.; Vandever, W. H.

    1972-01-01

    An architectural design study was made of a multiprocessor computing system intended to meet functional and performance specifications appropriate to a manned space station application. Intermetrics' previous experience and accumulated knowledge of the multiprocessor field are used to generate a baseline philosophy for the design of a future SUMC* multiprocessor. Interrupts are defined and the crucial questions of interrupt structure, such as processor selection and response time, are discussed. Memory hierarchy and performance is discussed extensively with particular attention to the design approach which utilizes a cache memory associated with each processor. The ability of an individual processor to approach its theoretical maximum performance is then analyzed in terms of a hit ratio. Memory management is envisioned as a virtual memory system implemented either through segmentation or paging. Addressing is discussed in terms of various register designs adopted by current computers and those of advanced design.
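
    The hit-ratio analysis mentioned above is usually summarized by the standard effective-access-time relation; the formula below is a textbook identity, not quoted from the study itself.

```latex
% Effective memory access time with a per-processor cache:
%   h   = cache hit ratio
%   t_c = cache access time,  t_m = main-memory access time
t_{\mathrm{eff}} = h \, t_c + (1 - h) \, t_m
```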

  16. Network resiliency through memory health monitoring and proactive management

    DOEpatents

    Andrade Costa, Carlos H.; Cher, Chen-Yong; Park, Yoonho; Rosenburg, Bryan S.; Ryu, Kyung D.

    2017-11-21

    A method for managing a network queue memory includes receiving sensor information about the network queue memory, predicting a memory failure in the network queue memory based on the sensor information, and outputting a notification through a plurality of nodes forming a network and using the network queue memory, the notification configuring communications between the nodes.

  17. Scalable PGAS Metadata Management on Extreme Scale Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chavarría-Miranda, Daniel; Agarwal, Khushbu; Straatsma, TP

    Programming models intended to run on exascale systems have a number of challenges to overcome, especially the sheer size of the system as measured by the number of concurrent software entities created and managed by the underlying runtime. It is clear from the size of these systems that any state maintained by the programming model has to be strictly sub-linear in size, in order not to overwhelm memory usage with pure overhead. A principal feature of Partitioned Global Address Space (PGAS) models is providing easy access to global-view distributed data structures. In order to provide efficient access to these distributed data structures, PGAS models must keep track of metadata such as where array sections are located with respect to processes/threads running on the HPC system. As PGAS models and applications become ubiquitous on very large trans-petascale systems, a key component of their performance and scalability will be efficient and judicious use of memory for model overhead (metadata) compared to application data. We present an evaluation of several strategies to manage PGAS metadata that exhibit different space/time tradeoffs. We use two real-world PGAS applications to capture metadata usage patterns and gain insight into their communication behavior.
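
    One of the space/time trade-offs alluded to above can be illustrated with a block-cyclic layout, where the owner of any array element is computable in O(1) from a few scalars instead of being stored in a per-block directory; the sketch below is illustrative and not taken from the paper.

```python
# Sketch of one metadata strategy (illustrative only): for a block-cyclic
# distribution the owner of any array section is computable in O(1) from a
# few scalars, so no per-block directory needs to be stored.
def owner_of(index, block_size, nprocs):
    """Return the rank owning a global index under a block-cyclic layout."""
    return (index // block_size) % nprocs

def local_offset(index, block_size, nprocs):
    """Offset of the element inside the owner's local storage."""
    block = index // block_size
    return (block // nprocs) * block_size + index % block_size

# Constant metadata (block_size, nprocs) replaces a table with one entry per block.
print(owner_of(1234, block_size=64, nprocs=16), local_offset(1234, 64, 16))
```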

  18. 75 FR 57375 - Establishment of Class E Airspace; Toledo, WA

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-21

    ... System (GPS) Standard Instrument Approach Procedure (SIAP) at Ed Carlson Memorial Field-South Lewis County Airport. This will improve the safety and management of Instrument Flight Rules (IFR) operations... is necessary for the safety and management of IFR operations. The FAA has determined this regulation...

  19. Successful life outcome and management of real-world memory demands despite profound anterograde amnesia

    PubMed Central

    Duff, Melissa C.; Wszalek, Tracey; Tranel, Daniel; Cohen, Neal J.

    2010-01-01

    We describe the case of Angie, a 50 year-old woman with profound amnesia (General Memory Index = 49, Full Scale IQ = 126) following a closed head injury in 1985. This case is unique in comparison to other cases reported in the literature in that, despite the severity of her amnesia, she has developed remarkable real-world life abilities, shows impressive self awareness and insight into the impairment and sparing of various functional memory abilities, and exhibits ongoing maturation of her identity and sense of self following amnesia. The case provides insights into the interaction of different memory and cognitive systems in handling real-world memory demands, and has implications for rehabilitation and for successful life outcome after amnesia. PMID:18608659

  20. CoNNeCT Baseband Processor Module

    NASA Technical Reports Server (NTRS)

    Yamamoto, Clifford K; Jedrey, Thomas C.; Gutrich, Daniel G.; Goodpasture, Richard L.

    2011-01-01

    A document describes the CoNNeCT Baseband Processor Module (BPM) based on an updated processor, memory technology, and field-programmable gate arrays (FPGAs). The BPM was developed from a requirement to provide sufficient computing power and memory storage to conduct experiments for a Software Defined Radio (SDR) to be implemented. The flight SDR uses the AT697 SPARC processor with on-chip data and instruction cache. The non-volatile memory has been increased from a 20-Mbit EEPROM (electrically erasable programmable read only memory) to a 4-Gbit Flash, managed by the RTAX2000 Housekeeper, allowing more programs and FPGA bit-files to be stored. The volatile memory has been increased from a 20-Mbit SRAM (static random access memory) to a 1.25-Gbit SDRAM (synchronous dynamic random access memory), providing additional memory space for more complex operating systems and programs to be executed on the SPARC. All memory is EDAC (error detection and correction) protected, while the SPARC processor implements fault protection via TMR (triple modular redundancy) architecture. Further capability over prior BPM designs includes the addition of a second FPGA to implement features beyond the resources of a single FPGA. Both FPGAs are implemented with Xilinx Virtex-II and are interconnected by a 96-bit bus to facilitate data exchange. Dedicated 1.25- Gbit SDRAMs are wired to each Xilinx FPGA to accommodate high rate data buffering for SDR applications as well as independent SpaceWire interfaces. The RTAX2000 manages scrub and configuration of each Xilinx.

  1. System Safety Management Lessons Learned

    DTIC Science & Technology

    1989-05-01

    DISCLAIMER: This report was prepared as an account of work sponsored by an agency of the United States Government. Neither the United States Government nor... Government or any agency thereof, or Battelle Memorial Institute. The views and opinions of authors expressed herein do not necessarily state or reflect... those of the United States Government or any agency thereof. PACIFIC NORTHWEST LABORATORY, operated by BATTELLE MEMORIAL INSTITUTE for the UNITED

  2. Today's CIO: catalyst for managed care change.

    PubMed

    Sanchez, P

    1997-05-01

    As the impact of managed care increases and capitation becomes all pervasive, healthcare providers' attention to cost control will intensify. For integrated delivery networks (IDNs) to be competitive, today's CIO must leverage managed care as a catalyst for change, and use a sophisticated information system toolset as the means to an integrated end. An area many CIOs target for fast results and maximum cost savings is resource management. This article reviews how Dick Escue, chief information officer at Baptist Memorial Health Care Corporation (Memphis, TN), uses electronic information management systems to integrate and conserve the resources of Baptist's widespread healthcare organization.

  3. A Survey Of Architectural Approaches for Managing Embedded DRAM and Non-volatile On-chip Caches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mittal, Sparsh; Vetter, Jeffrey S; Li, Dong

    Recent trends in CMOS scaling and the increasing number of on-chip cores have led to a large increase in the size of on-chip caches. Since SRAM has low density and consumes a large amount of leakage power, its use in designing on-chip caches has become more challenging. To address this issue, researchers are exploring the use of several emerging memory technologies, such as embedded DRAM, spin transfer torque RAM, resistive RAM, phase change RAM and domain wall memory. In this paper, we survey the architectural approaches proposed for designing memory systems and, specifically, caches with these emerging memory technologies. To highlight their similarities and differences, we present a classification of these technologies and architectural approaches based on their key characteristics. We also briefly summarize the challenges in using these technologies for architecting caches. We believe that this survey will help readers gain insights into the emerging memory device technologies and their potential use in designing future computing systems.

  4. Co-design of application software and NAND flash memory in solid-state drive for relational database storage system

    NASA Astrophysics Data System (ADS)

    Miyaji, Kousuke; Sun, Chao; Soga, Ayumi; Takeuchi, Ken

    2014-01-01

    A relational database management system (RDBMS) is designed based on a NAND flash solid-state drive (SSD) for storage. By vertically integrating the storage engine (SE) and the flash translation layer (FTL), system performance is maximized and the internal SSD overhead is minimized. The proposed RDBMS SE utilizes physical information about the NAND flash memory which is supplied from the FTL. The query operation is also optimized for SSD. With these techniques, page-copy-less garbage collection is achieved and data fragmentation in the NAND flash memory is suppressed. As a result, RDBMS performance increases by 3.8 times, power consumption of the SSD decreases by 46% and SSD lifetime is increased by 61%. The effectiveness of the proposed scheme increases with larger erase block sizes, which matches the future scaling trend of three-dimensional (3D-) NAND flash memories. The preferable row data size for the proposed scheme is below 500 bytes for a 16 kbyte page size.

  5. Memory management in genome-wide association studies

    PubMed Central

    2009-01-01

    Genome-wide association is a powerful tool for the identification of genes that underlie common diseases. Genome-wide association studies generate billions of genotypes and pose significant computational challenges for most users including limited computer memory. We applied a recently developed memory management tool to two analyses of North American Rheumatoid Arthritis Consortium studies and measured the performance in terms of central processing unit and memory usage. We conclude that our memory management approach is simple, efficient, and effective for genome-wide association studies. PMID:20018047
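
    A hedged illustration of the chunked, memory-mapped processing that such a tool enables; the file name, matrix sizes, and the allele-frequency computation are invented for the example.

```python
# Hedged illustration of the memory-management idea (file name and sizes are
# made up): stream a large genotype matrix through a fixed-size window with a
# NumPy memmap instead of loading all genotypes into RAM.
import numpy as np

n_samples, n_snps, chunk = 200, 50000, 5000
genotypes = np.memmap("genotypes.int8", dtype=np.int8, mode="w+",
                      shape=(n_samples, n_snps))          # demo backing file

allele_freq = np.empty(n_snps)
for start in range(0, n_snps, chunk):
    block = genotypes[:, start:start + chunk]             # only this window is resident
    allele_freq[start:start + chunk] = block.mean(axis=0) / 2.0
print(allele_freq[:5])
```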

  6. Prospective memory in schizophrenia: relationship to medication management skills, neurocognition, and symptoms in individuals with schizophrenia.

    PubMed

    Raskin, Sarah A; Maye, Jacqueline; Rogers, Alexandra; Correll, David; Zamroziewicz, Marta; Kurtz, Matthew

    2014-05-01

    Impaired adherence to medication regimens is a serious concern for individuals with schizophrenia linked to relapse and poorer outcomes. One possible reason for poor adherence to medication is poor ability to remember future intentions, labeled prospective memory skills. It has been demonstrated in several studies that individuals with schizophrenia have impairments in prospective memory that are linked to everyday life skills. However, there have been no studies, to our knowledge, examining the relationship of a clinical measure of prospective memory to medication management skills, a key element of successful adherence. In this study, 41 individuals with schizophrenia and 25 healthy adults were administered a standardized test battery that included measures of prospective memory, medication management skills, neurocognition, and symptoms. Individuals with schizophrenia demonstrated impairments in prospective memory (both time and event-based) relative to healthy controls. Performance on the test of prospective memory was correlated with the standardized measure of medication management in individuals with schizophrenia. Moreover, the test of prospective memory predicted skills in medication adherence even after measures of neurocognition were accounted for. This suggests that prospective memory may play a key role in medication management skills and thus should be a target of cognitive remediation programs.

  7. Managing coherence via put/get windows

    DOEpatents

    Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton on Hudson, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Hoenicke, Dirk [Ossining, NY; Ohmacht, Martin [Yorktown Heights, NY

    2011-01-11

    A method and apparatus for managing coherence between two processors of a two processor node of a multi-processor computer system. Generally the present invention relates to a software algorithm that simplifies and significantly speeds the management of cache coherence in a message passing parallel computer, and to hardware apparatus that assists this cache coherence algorithm. The software algorithm uses the opening and closing of put/get windows to coordinate the activities required to achieve cache coherence. The hardware apparatus may be an extension to the hardware address decode, that creates, in the physical memory address space of the node, an area of virtual memory that (a) does not actually exist, and (b) is therefore able to respond instantly to read and write requests from the processing elements.
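
    The sketch below is only a conceptual analogue of the put/get-window idea in the abstract: coherence actions are batched at window open and close rather than performed per access. It is not the patented hardware or software.

```python
# Conceptual sketch (not the patented mechanism): coherence actions are
# batched at the open and close of a put/get window instead of per access.
class PutGetWindow:
    def __init__(self, cache, shared_memory):
        self.cache = cache                    # dict acting as a write-back cache
        self.shared = shared_memory           # dict acting as node memory

    def __enter__(self):                      # open: make local writes visible
        self.shared.update(self.cache)        # flush dirty cache lines
        return self.shared

    def __exit__(self, *exc):                 # close: drop possibly stale copies
        self.cache.clear()                    # invalidate so later reads refetch
        return False

cache, shared = {"x": 1}, {}
with PutGetWindow(cache, shared) as mem:
    mem["y"] = 2                              # remote put lands while the window is open
print(shared, cache)                          # {'x': 1, 'y': 2} {}
```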

  8. Managing coherence via put/get windows

    DOEpatents

    Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton on Hudson, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Hoenicke, Dirk [Ossining, NY; Ohmacht, Martin [Yorktown Heights, NY

    2012-02-21

    A method and apparatus for managing coherence between two processors of a two processor node of a multi-processor computer system. Generally the present invention relates to a software algorithm that simplifies and significantly speeds the management of cache coherence in a message passing parallel computer, and to hardware apparatus that assists this cache coherence algorithm. The software algorithm uses the opening and closing of put/get windows to coordinate the activities required to achieve cache coherence. The hardware apparatus may be an extension to the hardware address decode, that creates, in the physical memory address space of the node, an area of virtual memory that (a) does not actually exist, and (b) is therefore able to respond instantly to read and write requests from the processing elements.

  9. NRL Fact Book 2010

    DTIC Science & Technology

    2010-01-01

    service) High assurance software; Distributed network-based battle management; High performance computing supporting uniform and nonuniform memory... VNIR, MWIR, and LWIR high-resolution systems; Wideband SAR systems; RF and laser data links; High-speed, high-power photodetector characterization... Antimonide (InSb) imaging system; Long-wave infrared (LWIR) quantum well IR photodetector (QWIP) imaging system; Research and Development Services

  10. Multi-core processing and scheduling performance in CMS

    NASA Astrophysics Data System (ADS)

    Hernández, J. M.; Evans, D.; Foulkes, S.

    2012-12-01

    Commodity hardware is going many-core. We might soon not be able to satisfy the job memory needs per core in the current single-core processing model in High Energy Physics. In addition, an ever increasing number of independent and incoherent jobs running on the same physical hardware not sharing resources might significantly affect processing performance. It will be essential to effectively utilize the multi-core architecture. CMS has incorporated support for multi-core processing in the event processing framework and the workload management system. Multi-core processing jobs share common data in memory, such as the code libraries, detector geometry and conditions data, resulting in much lower memory usage than standard single-core independent jobs. Exploiting this new processing model requires a new model in computing resource allocation, departing from the standard single-core allocation for a job. The experiment job management system needs to have control over a larger quantum of resource since multi-core aware jobs require the scheduling of multiple cores simultaneously. CMS is exploring the approach of using whole nodes as the unit in the workload management system where all cores of a node are allocated to a multi-core job. Whole-node scheduling allows for optimization of the data/workflow management (e.g. I/O caching, local merging) but efficient utilization of all scheduled cores is challenging. Dedicated whole-node queues have been set up at all Tier-1 centers for exploring multi-core processing workflows in CMS. We present the evaluation of the performance of scheduling and executing multi-core workflows in whole-node queues compared to the standard single-core processing workflows.

  11. What are the differences between long-term, short-term, and working memory?

    PubMed

    Cowan, Nelson

    2008-01-01

    In the recent literature there has been considerable confusion about the three types of memory: long-term, short-term, and working memory. This chapter strives to reduce that confusion and makes up-to-date assessments of these types of memory. Long- and short-term memory could differ in two fundamental ways, with only short-term memory demonstrating (1) temporal decay and (2) chunk capacity limits. Both properties of short-term memory are still controversial but the current literature is rather encouraging regarding the existence of both decay and capacity limits. Working memory has been conceived and defined in three different, slightly discrepant ways: as short-term memory applied to cognitive tasks, as a multi-component system that holds and manipulates information in short-term memory, and as the use of attention to manage short-term memory. Regardless of the definition, there are some measures of memory in the short term that seem routine and do not correlate well with cognitive aptitudes and other measures (those usually identified with the term "working memory") that seem more attention demanding and do correlate well with these aptitudes. The evidence is evaluated and placed within a theoretical framework depicted in Fig. 1.

  12. Limited capacity of working memory in unihemispheric random walks implies conceivable slow dispersal.

    PubMed

    Wei, Kun; Zhong, Suchuan

    2017-08-01

    Phenomenologically inspired by dolphins' unihemispheric sleep, we introduce a minimal model for random walks with physiological memory. The physiological memory consists of long-term memory, which includes unconscious implicit memory and conscious explicit memory, and working memory, which serves as a multi-component system for integrating, manipulating and managing short-term storage. The model assumes that the sleeping state allows retrievals of episodic objects merely from the episodic buffer, where these memory objects are invoked corresponding to the ambient objects and are thus object-oriented, together with intermittent but increasing use of implicit memory in which decisions are unconsciously picked up from historical time series. The process of memory decay and forgetting is constructed in the episodic buffer. The walker's risk attitude, as a product of physiological heuristics according to the performance of object-oriented decisions, is imposed on implicit memory. Analytical results for unihemispheric random walks with a mixture of object-oriented and time-oriented memory, as well as the long-time behavior, which tends toward the use of implicit memory, are provided, consistent with the intuition that a conservative risk attitude leads to slow movement.

  13. Drosophila SLC22A transporter is a memory suppressor gene that influences cholinergic neurotransmission to the mushroom bodies

    PubMed Central

    Gai, Yunchao; Liu, Ze; Cervantes-Sandoval, Isaac; Davis, Ronald L.

    2016-01-01

    The mechanisms that constrain memory formation are of special interest because they provide insights into the brain’s memory management systems and potential avenues for correcting cognitive disorders. RNAi knockdown in the Drosophila mushroom body neurons (MBn) of a newly discovered memory suppressor gene, Solute Carrier DmSLC22A, a member of the organic cation transporter family, enhances olfactory memory expression, while overexpression inhibits it. The protein localizes to the dendrites of the MBn, surrounding the presynaptic terminals of cholinergic afferent fibers from projection neurons (Pn). Cell-based expression assays show that this plasma membrane protein transports cholinergic compounds with the highest affinity among several in vitro substrates. Feeding flies choline or inhibiting acetylcholinesterase in Pn enhances memory; an effect blocked by overexpression of the transporter in the MBn. The data argue that DmSLC22A is a memory suppressor protein that limits memory formation by helping to terminate cholinergic neurotransmission at the Pn:MBn synapse. PMID:27146270

  14. Power/Performance Trade-offs of Small Batched LU Based Solvers on GPUs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Villa, Oreste; Fatica, Massimiliano; Gawande, Nitin A.

    In this paper we propose and analyze a set of batched linear solvers for small matrices on Graphic Processing Units (GPUs), evaluating the various alternatives depending on the size of the systems to solve. We discuss three different solutions that operate with different levels of parallelization and GPU features. The first, exploiting the CUBLAS library, manages matrices of size up to 32x32 and employs Warp level (one matrix, one Warp) parallelism and shared memory. The second works at Thread-block level parallelism (one matrix, one Thread-block), still exploiting shared memory but managing matrices up to 76x76. The third is Thread level parallel (one matrix, one thread) and can reach sizes up to 128x128, but it does not exploit shared memory and only relies on the high memory bandwidth of the GPU. The first and second solutions support only partial pivoting; the third one easily supports partial and full pivoting, making it attractive for problems that require greater numerical stability. We analyze the trade-offs in terms of performance and power consumption as a function of the size of the linear systems that are simultaneously solved. We execute the three implementations on a Tesla M2090 (Fermi) and on a Tesla K20 (Kepler).
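
    A rough CPU-side analogue of the batched formulation, solving many independent small systems in a loop; the GPU implementations instead map each matrix onto a warp, a thread block, or a single thread.

```python
# Rough Python analogue of the batched idea above (one small system per work
# item); the GPU versions map each matrix to a warp, a thread block, or a thread.
import numpy as np

def solve_batch(As, bs):
    """Solve many independent small dense systems A_i x_i = b_i."""
    return [np.linalg.solve(A, b) for A, b in zip(As, bs)]   # LU with partial pivoting inside

rng = np.random.default_rng(0)
As = [rng.standard_normal((32, 32)) for _ in range(1000)]
bs = [rng.standard_normal(32) for _ in range(1000)]
xs = solve_batch(As, bs)
print(np.allclose(As[0] @ xs[0], bs[0]))
```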

  15. Naval Research Laboratory Fact Book 2012

    DTIC Science & Technology

    2012-11-01

    Distributed network-based battle management; High performance computing supporting uniform and nonuniform memory access with single and multithreaded... hyperspectral systems; VNIR, MWIR, and LWIR high-resolution systems; Wideband SAR systems; RF and laser data links; High-speed, high-power... hyperspectral imaging system; Long-wave infrared (LWIR) quantum well IR photodetector (QWIP) imaging system; Research and Development Services Division

  16. Spaceborne Processor Array

    NASA Technical Reports Server (NTRS)

    Chow, Edward T.; Schatzel, Donald V.; Whitaker, William D.; Sterling, Thomas

    2008-01-01

    A Spaceborne Processor Array in Multifunctional Structure (SPAMS) can lower the total mass of the electronic and structural overhead of spacecraft, resulting in reduced launch costs, while increasing the science return through dynamic onboard computing. SPAMS integrates the multifunctional structure (MFS) and the Gilgamesh Memory, Intelligence, and Network Device (MIND) multi-core in-memory computer architecture into a single-system super-architecture. This transforms every inch of a spacecraft into a sharable, interconnected, smart computing element to increase computing performance while simultaneously reducing mass. The MIND in-memory architecture provides a foundation for high-performance, low-power, and fault-tolerant computing. The MIND chip has an internal structure that includes memory, processing, and communication functionality. The Gilgamesh is a scalable system comprising multiple MIND chips interconnected to operate as a single, tightly coupled, parallel computer. The array of MIND components shares a global, virtual name space for program variables and tasks that are allocated at run time to the distributed physical memory and processing resources. Individual processor- memory nodes can be activated or powered down at run time to provide active power management and to configure around faults. A SPAMS system is comprised of a distributed Gilgamesh array built into MFS, interfaces into instrument and communication subsystems, a mass storage interface, and a radiation-hardened flight computer.

  17. Prospective memory in schizophrenia: Relationship to medication management skills, neurocognition and symptoms in individuals with schizophrenia

    PubMed Central

    Raskin, S.; Maye, J.; Rogers, A.; Correll, D.; Zamroziewicz, M.; Kurtz, M.

    2014-01-01

    Objective Impaired adherence to medication regimens is a serious concern for individuals with schizophrenia linked to relapse and poorer outcomes. One possible reason for poor adherence to medication is poor ability to remember future intentions, labeled prospective memory skills. It has been demonstrated in several studies that individuals with schizophrenia have impairments in prospective memory that are linked to everyday life skills. However, there have been no studies, to our knowledge, examining the relationship of a clinical measure of prospective memory to medication management skills, a key element of successful adherence. Methods In this study 41 individuals with schizophrenia and 25 healthy adults were administered a standardized test battery that included measures of prospective memory, medication management skills, neurocognition and symptoms. Results Individuals with schizophrenia demonstrated impairments in prospective memory (both time and event-based) relative to healthy controls. Performance on the test of prospective memory was correlated with the standardized measure of medication management in individuals with schizophrenia. Moreover, the test of prospective memory predicted skills in medication adherence even after measures of neurocognition were accounted for. Conclusions This suggests that prospective memory may play a key role in medication management skills and thus should be a target of cognitive remediation programs. PMID:24188118

  18. What are the differences between long-term, short-term, and working memory?

    PubMed Central

    Cowan, Nelson

    2008-01-01

    In the recent literature there has been considerable confusion about the three types of memory: long-term, short-term, and working memory. This chapter strives to reduce that confusion and makes up-to-date assessments of these types of memory. Long- and short-term memory could differ in two fundamental ways, with only short-term memory demonstrating (1) temporal decay and (2) chunk capacity limits. Both properties of short-term memory are still controversial but the current literature is rather encouraging regarding the existence of both decay and capacity limits. Working memory has been conceived and defined in three different, slightly discrepant ways: as short-term memory applied to cognitive tasks, as a multi-component system that holds and manipulates information in short-term memory, and as the use of attention to manage short-term memory. Regardless of the definition, there are some measures of memory in the short term that seem routine and do not correlate well with cognitive aptitudes and other measures (those usually identified with the term “working memory”) that seem more attention demanding and do correlate well with these aptitudes. The evidence is evaluated and placed within a theoretical framework depicted in Fig. 1. PMID:18394484

  19. Radiation-Hardened Solid-State Drive

    NASA Technical Reports Server (NTRS)

    Sheldon, Douglas J.

    2010-01-01

    A method is provided for a radiation-hardened (rad-hard) solid-state drive for space mission memory applications by combining rad-hard and commercial off-the-shelf (COTS) non-volatile memories (NVMs) into a hybrid architecture. The architecture is controlled by a rad-hard ASIC (application specific integrated circuit) or an FPGA (field programmable gate array). Specific error handling and data management protocols are developed for use in a rad-hard environment. The rad-hard memories are smaller in overall memory density, but are used to control and manage radiation-induced errors in the main, and much larger density, non-rad-hard COTS memory devices. Small amounts of rad-hard memory are used as error buffers and temporary caches for radiation-induced errors in the large COTS memories. The rad-hard ASIC/FPGA implements a variety of error-handling protocols to manage these radiation-induced errors. The large COTS memory is triplicated for protection, and CRC-based counters are calculated for sub-areas in each COTS NVM array. These counters are stored in the rad-hard non-volatile memory. Through monitoring, rewriting, regeneration, triplication, and long-term storage, radiation-induced errors in the large NV memory are managed. The rad-hard ASIC/FPGA also interfaces with the external computer buses.
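
    To make the triplication and CRC-counter scheme more concrete, the following minimal sketch models a triplicated COTS store with per-sub-area CRC values held in a small "rad-hard" metadata table, plus a scrub pass that regenerates corrupted copies. The class name, sub-area size, and scrub logic are illustrative assumptions for this example, not the flight design described above.

    ```python
    # Illustrative sketch (not flight code): triplicated COTS storage with
    # per-sub-area CRC values held in a small "rad-hard" metadata store.
    import zlib

    SUBAREA = 256  # bytes per protected sub-area (hypothetical size)

    class HybridNVM:
        def __init__(self, size):
            self.cots = [bytearray(size) for _ in range(3)]   # triplicated COTS NVM
            self.radhard_crc = {}                             # rad-hard CRC table

        def write(self, addr, data):
            for copy in self.cots:                            # write all three copies
                copy[addr:addr + len(data)] = data
            base = (addr // SUBAREA) * SUBAREA
            for start in range(base, addr + len(data), SUBAREA):
                self.radhard_crc[start] = zlib.crc32(bytes(self.cots[0][start:start + SUBAREA]))

        def read(self, addr, length):
            # Majority vote byte-by-byte across the three copies.
            out = bytearray()
            for i in range(addr, addr + length):
                votes = [c[i] for c in self.cots]
                out.append(max(set(votes), key=votes.count))
            return bytes(out)

        def scrub(self):
            """Periodic scrub: recompute CRCs and rewrite sub-areas whose copies disagree."""
            repaired = []
            for start, crc in self.radhard_crc.items():
                good = self.read(start, SUBAREA)              # voted data
                if zlib.crc32(good) != crc or any(
                        bytes(c[start:start + SUBAREA]) != good for c in self.cots):
                    for c in self.cots:                       # regenerate all copies
                        c[start:start + SUBAREA] = good
                    self.radhard_crc[start] = zlib.crc32(good)
                    repaired.append(start)
            return repaired

    # Usage: a single-copy bit flip is masked by voting and repaired by the scrub.
    nvm = HybridNVM(size=4 * SUBAREA)
    nvm.write(16, b"science data")
    nvm.cots[1][20] ^= 0xFF          # simulate a radiation-induced bit flip in one copy
    print(nvm.read(16, 12), nvm.scrub())
    ```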

  20. Ensuring a C2 Level of Trust and Interoperability in a Networked Windows NT Environment

    DTIC Science & Technology

    1996-09-01

    ... it should be noted that the device drivers, microkernel, memory manager, and Hardware Abstraction Layer are all hardware dependent. ... The Executive: the executive is further divided into three conceptual layers, which are referred to as the Hardware Abstraction Layer (HAL), the Microkernel, and ... [Figure 3 shows the layering: executive subsystems, I/O Manager, Cache Manager, File Systems, Microkernel, Device Drivers, Hardware Abstraction Layer, hardware.]

  1. [CMACPAR: a modified parallel neuro-controller for control processes].

    PubMed

    Ramos, E; Surós, R

    1999-01-01

    CMACPAR is a parallel neurocontroller oriented to real-time systems such as process control. Its main characteristics are a fast learning algorithm, a reduced number of calculations, strong generalization capacity, local learning and intrinsic parallelism. This type of neurocontroller is used in real-time applications required by refineries, hydroelectric plants, factories, etc. In this work we present the analysis and the parallel implementation of a modified scheme of the cerebellar model CMAC for n-dimensional space projection using a medium-granularity parallel neurocontroller. The proposed memory management allows a significant reduction in training time and required memory size.
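
    For readers unfamiliar with CMAC, the sketch below shows the standard table-lookup and local-update cycle with hashed tilings, one common way to keep the weight table small. It is a generic illustration under assumed parameters, not the parallel CMACPAR scheme or its memory-management modification.

    ```python
    import numpy as np

    class CMAC:
        """Minimal CMAC (Cerebellar Model Articulation Controller) sketch.

        Hashed tilings keep the weight table small; this is a generic
        illustration, not the CMACPAR parallel implementation.
        """
        def __init__(self, n_tilings=8, table_size=2048, lr=0.1, resolution=10):
            self.n_tilings = n_tilings
            self.table_size = table_size
            self.weights = np.zeros((n_tilings, table_size))
            self.lr = lr
            self.res = resolution

        def _active_cells(self, x):
            # x: input vector with components in [0, 1]; each tiling is offset slightly.
            cells = []
            for t in range(self.n_tilings):
                offset = t / self.n_tilings
                coords = tuple(int(xi * self.res + offset) for xi in x)
                cells.append(hash((t, coords)) % self.table_size)
            return cells

        def predict(self, x):
            return sum(self.weights[t, c] for t, c in enumerate(self._active_cells(x)))

        def train(self, x, target):
            cells = self._active_cells(x)
            error = target - self.predict(x)
            for t, c in enumerate(cells):        # local learning: only active cells change
                self.weights[t, c] += self.lr * error / self.n_tilings

    # Tiny usage example: learn y = sin(2*pi*x) on [0, 1].
    cmac = CMAC()
    for x in np.random.rand(2000, 1):
        cmac.train(x, np.sin(2 * np.pi * x[0]))
    print(round(cmac.predict([0.25]), 2))        # should approach 1.0
    ```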

  2. A self-defining hierarchical data system

    NASA Technical Reports Server (NTRS)

    Bailey, J.

    1992-01-01

    The Self-Defining Data System (SDS) is a system which allows the creation of self-defining hierarchical data structures in a form which allows the data to be moved between different machine architectures. Because the structures are self-defining they can be used for communication between independent modules in a distributed system. Unlike disk-based hierarchical data systems such as Starlink's HDS, SDS works entirely in memory and is very fast. Data structures are created and manipulated as internal dynamic structures in memory managed by SDS itself. A structure may then be exported into a caller-supplied memory buffer in a defined external format. This structure can be written as a file or sent as a message to another machine. It remains static in structure until it is reimported into SDS. SDS is written in portable C and has been run on a number of different machine architectures. Structures are portable between machines with SDS looking after conversion of byte order, floating point format, and alignment. A Fortran callable version is also available for some machines.
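
    As a rough illustration of the self-defining idea (not the actual SDS C API), the sketch below packs a small hierarchical structure into a byte buffer in which every node carries its own type tag and length, and a fixed header identifies the format so the buffer can be re-imported elsewhere. The magic string, type tags, and canonical little-endian encoding are assumptions made for this example.

    ```python
    # Rough sketch of the self-defining export/import idea (not the SDS C API):
    # each node carries its own type tag and length, and the header pins a
    # canonical little-endian encoding so any architecture can re-import it.
    import struct

    MAGIC = b"SDSx<"        # hypothetical format marker plus byte-order flag

    def export(obj):
        buf = bytearray(MAGIC)
        _pack(obj, buf)
        return bytes(buf)

    def _pack(obj, buf):
        if isinstance(obj, dict):                       # structure node: count + named children
            buf += b"S" + struct.pack("<I", len(obj))
            for name, child in obj.items():
                enc = name.encode()
                buf += struct.pack("<H", len(enc)) + enc
                _pack(child, buf)
        elif isinstance(obj, float):                    # primitives carry a one-byte type tag
            buf += b"D" + struct.pack("<d", obj)
        elif isinstance(obj, int):
            buf += b"I" + struct.pack("<q", obj)
        else:
            enc = str(obj).encode()
            buf += b"T" + struct.pack("<I", len(enc)) + enc

    def reimport(buf):
        assert buf[:len(MAGIC)] == MAGIC, "not a self-defining buffer"
        obj, _ = _unpack(buf, len(MAGIC))
        return obj

    def _unpack(buf, i):
        tag = buf[i:i + 1]; i += 1
        if tag == b"S":
            n, = struct.unpack_from("<I", buf, i); i += 4
            out = {}
            for _ in range(n):
                ln, = struct.unpack_from("<H", buf, i); i += 2
                name = buf[i:i + ln].decode(); i += ln
                out[name], i = _unpack(buf, i)
            return out, i
        if tag == b"D":
            return struct.unpack_from("<d", buf, i)[0], i + 8
        if tag == b"I":
            return struct.unpack_from("<q", buf, i)[0], i + 8
        ln, = struct.unpack_from("<I", buf, i); i += 4
        return buf[i:i + ln].decode(), i + ln

    # The exported buffer is plain bytes, so it can be written to a file or
    # sent as a message and re-imported on another machine.
    msg = export({"header": {"id": 42, "scale": 1.5}, "note": "hello"})
    print(reimport(msg))
    ```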

  3. NRL Fact Book

    DTIC Science & Technology

    2008-01-01

    Distributed network-based battle management; high performance computing supporting uniform and nonuniform memory access with single and multithreaded ... pallet; airborne EO/IR and radar sensors; VNIR through SWIR hyperspectral systems; VNIR, MWIR, and LWIR high-resolution systems; wideband SAR systems ... meteorological sensors; hyperspectral sensor systems (PHILLS); mid-wave infrared (MWIR) Indium Antimonide (InSb) imaging system; long-wave infrared (LWIR) ...

  4. PANDA: A distributed multiprocessor operating system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chubb, P.

    1989-01-01

    PANDA is a design for a distributed multiprocessor and an operating system. PANDA is designed to allow easy expansion of both hardware and software. As such, the PANDA kernel provides only message passing and memory and process management. The other features needed for the system (device drivers, secondary storage management, etc.) are provided as replaceable user tasks. The thesis presents PANDA's design and implementation, both hardware and software. PANDA uses multiple 68010 processors sharing memory on a VME bus, each such node potentially connected to others via a high speed network. The machine is completely homogeneous: there are no differences between processors that are detectable by programs running on the machine. A single two-processor node has been constructed. Each processor contains memory management circuits designed to allow processors to share page tables safely. PANDA presents a programmers' model similar to the hardware model: a job is divided into multiple tasks, each having its own address space. Within each task, multiple processes share code and data. Tasks can send messages to each other, and set up virtual circuits between themselves. Peripheral devices such as disc drives are represented within PANDA by tasks. PANDA divides secondary storage into volumes, each volume being accessed by a volume access task, or VAT. All knowledge about the way that data is stored on a disc is kept in its volume's VAT. The design is such that PANDA should provide a useful testbed for file systems and device drivers, as these can be installed without recompiling PANDA itself, and without rebooting the machine.

  5. Simplifying and speeding the management of intra-node cache coherence

    DOEpatents

    Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton on Hudson, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Phillip [Cortlandt Manor, NY; Hoenicke, Dirk [Ossining, NY; Ohmacht, Martin [Yorktown Heights, NY

    2012-04-17

    A method and apparatus for managing coherence between two processors of a two processor node of a multi-processor computer system. Generally the present invention relates to a software algorithm that simplifies and significantly speeds the management of cache coherence in a message passing parallel computer, and to hardware apparatus that assists this cache coherence algorithm. The software algorithm uses the opening and closing of put/get windows to coordinate the activity required to achieve cache coherence. The hardware apparatus may be an extension to the hardware address decode that creates, in the physical memory address space of the node, an area of virtual memory that (a) does not actually exist, and (b) is therefore able to respond instantly to read and write requests from the processing elements.
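
    The sketch below illustrates the coordination idea in miniature: coherence work (flushing dirty lines before communication, invalidating possibly stale lines afterwards) is tied to the opening and closing of a put/get window. The class and callback names are illustrative assumptions, not the patented hardware-assisted mechanism described above.

    ```python
    # Conceptual sketch of tying cache-coherence actions to the opening and
    # closing of a put/get communication window (names are illustrative).
    import threading

    class PutGetWindow:
        def __init__(self, cache_flush, cache_invalidate):
            self._flush = cache_flush            # callable: write back dirty lines
            self._invalidate = cache_invalidate  # callable: drop possibly stale lines
            self._lock = threading.Lock()
            self._open = False

        def open(self):
            """Before remote puts/gets may touch local memory, make it coherent."""
            with self._lock:
                self._flush()        # expose locally written data to the network
                self._open = True

        def close(self):
            """After communication completes, discard lines the network may have changed."""
            with self._lock:
                self._invalidate()
                self._open = False

        def local_access_allowed(self):
            # The compute processor only touches the buffers while the window is closed.
            return not self._open

    # Usage sketch with stub cache operations.
    w = PutGetWindow(cache_flush=lambda: print("flush dirty lines"),
                     cache_invalidate=lambda: print("invalidate communication region"))
    w.open();  print("remote puts/gets happen here");  w.close()
    ```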

  6. Managing coherence via put/get windows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blumrich, Matthias A; Chen, Dong; Coteus, Paul W

    A method and apparatus for managing coherence between two processors of a two processor node of a multi-processor computer system. Generally the present invention relates to a software algorithm that simplifies and significantly speeds the management of cache coherence in a message passing parallel computer, and to hardware apparatus that assists this cache coherence algorithm. The software algorithm uses the opening and closing of put/get windows to coordinate the activity required to achieve cache coherence. The hardware apparatus may be an extension to the hardware address decode that creates, in the physical memory address space of the node, an area of virtual memory that (a) does not actually exist, and (b) is therefore able to respond instantly to read and write requests from the processing elements.

  7. A History-based Estimation for LHCb job requirements

    NASA Astrophysics Data System (ADS)

    Rauschmayr, Nathalie

    2015-12-01

    The main goal of a Workload Management System (WMS) is to find and allocate resources for the given tasks. The more and better job information the WMS receives, the easier it will be to accomplish its task, which directly translates into higher utilization of resources. Traditionally, the information associated with each job, such as expected runtime, is defined beforehand by the Production Manager in the best case, and set to fixed arbitrary values by default. In the case of LHCb's Workload Management System, no mechanisms are provided which automate the estimation of job requirements. As a result, much more CPU time is normally requested than actually needed. Particularly in the context of multicore jobs this presents a major problem, since single- and multicore jobs shall share the same resources. Consequently, grid sites need to rely on estimations given by the VOs in order not to decrease the utilization of their worker nodes when making multicore job slots available. The main reason for going to multicore jobs is the reduction of the overall memory footprint. Therefore, it also needs to be studied how the memory consumption of jobs can be estimated. A detailed workload analysis of past LHCb jobs is presented. It includes a study of job features and their correlation with runtime and memory consumption. Following the features, a supervised learning algorithm is developed based on history-based prediction. The aim is to learn over time how jobs' runtime and memory consumption evolve under changes in experiment conditions and software versions. It will be shown that the estimation can be notably improved if experiment conditions are taken into account.
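
    As a hedged illustration of history-based prediction, the sketch below fits a gradient-boosted regressor to synthetic job records with a few hypothetical features (event count, application version, job type) and scores it on held-out jobs. The feature names, data, and model choice are assumptions for demonstration, not the actual LHCb feature set or algorithm.

    ```python
    # Toy history-based estimator: predict job runtime from past job features.
    import numpy as np
    import pandas as pd
    from sklearn.compose import ColumnTransformer
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import OneHotEncoder

    rng = np.random.default_rng(0)
    n = 1000
    jobs = pd.DataFrame({
        "n_events": rng.integers(1_000, 100_000, n),
        "app_version": rng.choice(["v41r2", "v42r0", "v42r1"], n),
        "job_type": rng.choice(["simulation", "reconstruction"], n),
    })
    # Synthetic target: runtime grows with events, modulated by software version.
    version_cost = jobs["app_version"].map({"v41r2": 1.0, "v42r0": 0.9, "v42r1": 0.85})
    runtime = jobs["n_events"] * 0.015 * version_cost + rng.normal(0, 50, n)

    model = Pipeline([
        ("pre", ColumnTransformer(
            [("cat", OneHotEncoder(), ["app_version", "job_type"])],
            remainder="passthrough")),
        ("gbr", GradientBoostingRegressor()),
    ])
    X_tr, X_te, y_tr, y_te = train_test_split(jobs, runtime, random_state=0)
    model.fit(X_tr, y_tr)
    print("R^2 on held-out jobs:", round(model.score(X_te, y_te), 3))
    ```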

  8. Compilation of Abstracts of Theses Submitted by Candidates for Degrees.

    DTIC Science & Technology

    1984-06-01

    ... Management System for the TI-59 Programmable Calculator; Kersh, T. B., Signal Processor Interface; CPT, USA, Simulation of the AN/SPY-1A Radar ... Design and Implementation of a Basic Cross-Compiler and Virtual Memory Management System for the TI-59 Programmable Calculator, Mark R. Kindl, Captain ... Academy, 1974. The instruction set of the TI-59 Programmable Calculator bears a close similarity to that of an assembler. Though most of the calculator ...

  9. DOD Weapon Systems Software Management Study, Appendix B. Shipborne Systems

    DTIC Science & Technology

    1975-06-01

    ... program management, from inception to development maintenance; 2. Detailed documentation requirements; 3. Standard high-level language development (CS-1 ... the Guided Missile School (GMS) at Dam Neck. The APL Land-Based Test Site (LETS) consisted of a Mk 152 digital fire control computer, SPG-55B radar ... instruction and data segments are respectively placed in low and high core addresses to take advantage of UYK-7 memory accessing time savings. UYK-7 ...

  10. Don’t make cache too complex: A simple probability-based cache management scheme for SSDs

    PubMed Central

    Baek, Seungjae; Cho, Sangyeun; Choi, Jongmoo

    2017-01-01

    Solid-state drives (SSDs) have recently become a common storage component in computer systems, and they are fueled by continued bit cost reductions achieved with smaller feature sizes and multiple-level cell technologies. However, as the flash memory stores more bits per cell, the performance and reliability of the flash memory degrade substantially. To solve this problem, a fast non-volatile memory (NVM)-based cache has been employed within SSDs to reduce the long latency required to write data. Absorbing small writes in a fast NVM cache can also reduce the number of flash memory erase operations. To maximize the benefits of an NVM cache, it is important to increase the NVM cache utilization. In this paper, we propose and study ProCache, a simple NVM cache management scheme that makes cache-entrance decisions based on random probability testing. Our scheme is motivated by the observation that frequently written hot data will eventually enter the cache with a high probability, and that infrequently accessed cold data will not enter the cache easily. Owing to its simplicity, ProCache is easy to implement at a substantially smaller cost than similar previously studied techniques. We evaluate ProCache and conclude that it achieves performance comparable to a more complex reference counter-based cache-management scheme. PMID:28358897
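
    A minimal sketch of the admission idea follows: a write enters the cache only if it passes a random test, so hot pages are admitted after a few attempts while cold pages mostly bypass the cache. The class name, admission probability, and LRU eviction are illustrative assumptions, not the ProCache implementation itself.

    ```python
    # Sketch of probability-based cache admission in the spirit of ProCache.
    import random
    from collections import OrderedDict

    class ProbabilisticWriteCache:
        def __init__(self, capacity, admit_prob=0.25, flush_to_flash=None):
            self.capacity = capacity
            self.admit_prob = admit_prob
            self.cache = OrderedDict()                 # LRU order of cached pages
            self.flush = flush_to_flash or (lambda page, data: None)

        def write(self, page, data):
            if page in self.cache:                     # hit: update and refresh recency
                self.cache.move_to_end(page)
                self.cache[page] = data
                return "hit"
            if random.random() >= self.admit_prob:     # failed the admission coin flip
                self.flush(page, data)                 # write straight to flash
                return "bypass"
            if len(self.cache) >= self.capacity:       # evict coldest entry to flash
                victim, victim_data = self.cache.popitem(last=False)
                self.flush(victim, victim_data)
            self.cache[page] = data
            return "admitted"

    # Hot pages are written often, so they pass the coin flip sooner or later,
    # while a page written once will usually bypass the cache.
    c = ProbabilisticWriteCache(capacity=4)
    for _ in range(20):
        c.write(page=7, data=b"hot")
    print(7 in c.cache)                                # very likely True
    ```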

  11. Don't make cache too complex: A simple probability-based cache management scheme for SSDs.

    PubMed

    Baek, Seungjae; Cho, Sangyeun; Choi, Jongmoo

    2017-01-01

    Solid-state drives (SSDs) have recently become a common storage component in computer systems, and they are fueled by continued bit cost reductions achieved with smaller feature sizes and multiple-level cell technologies. However, as the flash memory stores more bits per cell, the performance and reliability of the flash memory degrade substantially. To solve this problem, a fast non-volatile memory (NVM)-based cache has been employed within SSDs to reduce the long latency required to write data. Absorbing small writes in a fast NVM cache can also reduce the number of flash memory erase operations. To maximize the benefits of an NVM cache, it is important to increase the NVM cache utilization. In this paper, we propose and study ProCache, a simple NVM cache management scheme that makes cache-entrance decisions based on random probability testing. Our scheme is motivated by the observation that frequently written hot data will eventually enter the cache with a high probability, and that infrequently accessed cold data will not enter the cache easily. Owing to its simplicity, ProCache is easy to implement at a substantially smaller cost than similar previously studied techniques. We evaluate ProCache and conclude that it achieves performance comparable to a more complex reference counter-based cache-management scheme.

  12. A microprocessor card software server to support the Quebec health microprocessor card project.

    PubMed

    Durant, P; Bérubé, J; Lavoie, G; Gamache, A; Ardouin, P; Papillon, M J; Fortin, J P

    1995-01-01

    The Quebec Health Smart Card Project is advocating the use of a memory card software server[1] (SCAM) to implement a portable medical record (PMR) on a smart card. The PMR is viewed as an object that can be manipulated by SCAM's services. In fact, we can talk about a pseudo-object-oriented approach. This software architecture provides a flexible and evolutive way to manage and optimize the PMR. SCAM is a generic software server; it can manage smart cards as well as optical (laser) cards or other types of memory cards. But, in the specific case of the Quebec Health Card Project, SCAM is used to provide services between physicians' or pharmacists' software and IBM smart card technology. We propose to expose the concepts and techniques used to provide a generic environment for dealing with smart cards (and more generally with memory cards), to obtain a dynamic and evolutive PMR, to raise the system's global security level and data integrity, to significantly optimize the management of the PMR, and to provide statistical information about the use of the PMR.

  13. Temperature and leakage aware techniques to improve cache reliability

    NASA Astrophysics Data System (ADS)

    Akaaboune, Adil

    Decreasing power consumption in small devices such as handhelds, cell phones and high-performance processors is now one of the most critical design concerns. On-chip cache memories dominate the chip area in microprocessors, and thus arises the need for power-efficient cache memories. Cache is the simplest cost-effective method to attain a high-speed memory hierarchy, and its performance is extremely critical for high-speed computers. Cache is used by the microprocessor to bridge the performance gap between the processor and main memory (RAM); hence memory bandwidth is frequently a bottleneck which can affect peak throughput significantly. In the design of any cache system, the tradeoffs of area/cost, performance, power consumption, and thermal management must be taken into consideration. Previous work has mainly concentrated on performance and area/cost constraints. More recent works have focused on low-power design, especially for portable devices and media-processing systems; however, less research has been done on the relationship between heat management, leakage power and cost per die. Lately, the focus of power dissipation in the new generations of microprocessors has shifted from dynamic power to idle power, a previously underestimated form of power loss that causes battery charge to drain and systems to shut down too early due to the waste of energy. The problem has been aggravated by aggressive process scaling, a device-level method originally used by designers to enhance performance, reduce dissipation and shrink the size of increasingly dense digital circuits. This dissertation studies the impact of hotspots in the cache memory on leakage consumption and microprocessor reliability and durability. The work first shows that by eliminating hotspots in the cache memory, leakage power is reduced and therefore reliability is improved. The second technique studied is data quality management, which improves the quality of the data stored in the cache to reduce power consumption. The initial work on this subject focuses on the types of data that increase leakage consumption and ways to manage them without impacting the performance of the microprocessor. The second phase of the project focuses on managing data storage in different blocks of the cache to smooth the leakage power as well as the dynamic power consumption. The last technique is a voltage-controlled cache that reduces the leakage consumption of the cache during execution and even in the idle state. Two blocks of the 4-way set-associative cache go through a voltage regulator before reaching the voltage well, and the other two are directly connected to the voltage well. The idea behind this technique is to use the replacement algorithm information to increase or decrease the voltage of the two blocks depending on the need for the information stored in them.

  14. Multi-core processing and scheduling performance in CMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hernandez, J. M.; Evans, D.; Foulkes, S.

    2012-01-01

    Commodity hardware is going many-core. We might soon not be able to satisfy the job memory needs per core in the current single-core processing model in High Energy Physics. In addition, an ever increasing number of independent and incoherent jobs running on the same physical hardware not sharing resources might significantly affect processing performance. It will be essential to effectively utilize the multi-core architecture. CMS has incorporated support for multi-core processing in the event processing framework and the workload management system. Multi-core processing jobs share common data in memory, such as the code libraries, detector geometry and conditions data, resulting in a much lower memory usage than standard single-core independent jobs. Exploiting this new processing model requires a new model in computing resource allocation, departing from the standard single-core allocation for a job. The experiment job management system needs to have control over a larger quantum of resource since multi-core aware jobs require the scheduling of multiple cores simultaneously. CMS is exploring the approach of using whole nodes as the unit in the workload management system, where all cores of a node are allocated to a multi-core job. Whole-node scheduling allows for optimization of the data/workflow management (e.g. I/O caching, local merging) but efficient utilization of all scheduled cores is challenging. Dedicated whole-node queues have been set up at all Tier-1 centers for exploring multi-core processing workflows in CMS. We present an evaluation of the performance of scheduling and executing multi-core workflows in whole-node queues compared to the standard single-core processing workflows.

  15. Integrating Cache Performance Modeling and Tuning Support in Parallelization Tools

    NASA Technical Reports Server (NTRS)

    Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)

    1998-01-01

    With the resurgence of distributed shared memory (DSM) systems based on cache-coherent Non Uniform Memory Access (ccNUMA) architectures and the increasing disparity between memory and processor speeds, data locality overheads are becoming the greatest bottleneck in the way of realizing the potential high performance of these systems. While parallelization tools and compilers assist users in porting their sequential applications to a DSM system, a lot of time and effort is needed to tune the memory performance of these applications to achieve reasonable speedup. In this paper, we show that integrating cache performance modeling and tuning support within a parallelization environment can alleviate this problem. The Cache Performance Modeling and Prediction Tool (CPMP) employs trace-driven simulation techniques without the overhead of generating and managing detailed address traces. CPMP predicts the cache performance impact of source code level "what-if" modifications in a program to assist a user in the tuning process. CPMP is built on top of a customized version of the Computer Aided Parallelization Tools (CAPTools) environment. Finally, we demonstrate how CPMP can be applied to tune a real Computational Fluid Dynamics (CFD) application.
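
    The flavor of such "what-if" analysis can be conveyed with a toy trace-driven model: replay the address stream of a candidate source change through a small cache model and compare miss rates. The direct-mapped cache, array size, and traversal orders below are assumptions for illustration and are far simpler than what CPMP models.

    ```python
    # Toy trace-driven cache model for comparing "what-if" source changes.
    def miss_rate(trace, cache_bytes=32 * 1024, line_bytes=64):
        n_lines = cache_bytes // line_bytes
        tags = [None] * n_lines              # direct-mapped: one tag per cache line
        misses = 0
        for addr in trace:
            line = addr // line_bytes
            idx = line % n_lines
            if tags[idx] != line:
                misses += 1
                tags[idx] = line
        return misses / len(trace)

    # "What-if" comparison: row-major vs column-major traversal of a 512x512
    # array of 8-byte elements (a typical source-level modification to evaluate).
    N, elem = 512, 8
    row_major = [(i * N + j) * elem for i in range(N) for j in range(N)]
    col_major = [(i * N + j) * elem for j in range(N) for i in range(N)]
    print("row-major miss rate:", round(miss_rate(row_major), 3))
    print("col-major miss rate:", round(miss_rate(col_major), 3))
    ```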

  16. A Computerized Interactive Vocabulary Development System for Advanced Learners.

    ERIC Educational Resources Information Center

    Kukulska-Hulme, Agnes

    1988-01-01

    Argues that the process of recording newly encountered vocabulary items in a typical language learning situation can be improved through a computerized system of vocabulary storage based on database management software that improves the discovery and recording of meaning, subsequent retrieval of items for productive use, and memory retention.…

  17. Surveillance and Outbreak Response Management System (SORMAS) to support the control of the Ebola virus disease outbreak in West Africa.

    PubMed

    Fähnrich, C; Denecke, K; Adeoye, O O; Benzler, J; Claus, H; Kirchner, G; Mall, S; Richter, R; Schapranow, M P; Schwarz, N; Tom-Aba, D; Uflacker, M; Poggensee, G; Krause, G

    2015-03-26

    In the context of controlling the current outbreak of Ebola virus disease (EVD), the World Health Organization claimed that the 'critical determinant of epidemic size appears to be the speed of implementation of rigorous control measures', i.e. immediate follow-up of contact persons during 21 days after exposure, isolation and treatment of cases, decontamination, and safe burials. We developed the Surveillance and Outbreak Response Management System (SORMAS) to improve the efficiency and timeliness of these measures. We used the Design Thinking methodology to systematically analyse experiences from field workers and the Ebola Emergency Operations Centre (EOC) after successful control of the EVD outbreak in Nigeria. We developed a process model with seven personas representing the procedures of EVD outbreak control. The SORMAS system architecture combines latest In-Memory Database (IMDB) technology via SAP HANA (an in-memory, relational database management system), enabling interactive data analyses, and established SAP cloud tools, such as SAP Afaria (mobile device management software). The user interface consists of specific front-ends for smartphones and tablet devices, which are independent of physical configurations. SORMAS allows real-time, bidirectional information exchange between field workers and the EOC, ensures supervision of contact follow-up, automated status reports, and GPS tracking. SORMAS may become a platform for outbreak management and improved routine surveillance of any infectious disease. Furthermore, the SORMAS process model may serve as a framework for EVD outbreak modeling.

  18. An Adaptive Insertion and Promotion Policy for Partitioned Shared Caches

    NASA Astrophysics Data System (ADS)

    Mahrom, Norfadila; Liebelt, Michael; Raof, Rafikha Aliana A.; Daud, Shuhaizar; Hafizah Ghazali, Nur

    2018-03-01

    Cache replacement policies in chip multiprocessors (CMP) have been investigated extensively and proven able to enhance shared cache management. However, competition among multiple processors executing different threads that require simultaneous access to a shared memory may cause cache contention and memory coherence problems on the chip. These issues also arise from drawbacks of the commonly used Least Recently Used (LRU) policy employed in multiprocessor systems, which can leave cache lines residing in the cache longer than required. In image processing analysis, for example of extrapulmonary tuberculosis (TB), an accurate diagnosis of tissue specimens is required. Therefore, a fast and reliable shared memory management system to execute algorithms for processing vast amounts of specimen images is needed. In this paper, the effects of the cache replacement policy in a partitioned shared cache are investigated. The goal is to quantify whether better performance can be achieved by using less complex replacement strategies. This paper proposes a Middle Insertion 2 Positions Promotion (MI2PP) policy to eliminate cache misses that could adversely affect the access patterns and the throughput of the processors in the system. The policy employs a static predefined insertion point, near distance promotion, and the concept of ownership in the eviction policy to effectively mitigate cache thrashing and to avoid resource stealing among the processors.
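
    The sketch below contrasts this idea with plain LRU for a single cache set: new lines are inserted at a fixed middle position and hits promote a line only a short distance toward the MRU end, which protects part of the working set from a scanning pattern that would thrash LRU completely. The class name, insertion point, and promotion step are illustrative assumptions, not the exact MI2PP parameters.

    ```python
    # Illustrative middle-insertion / near-distance-promotion policy for one
    # cache set (not the exact MI2PP algorithm or parameters).
    class MiddleInsertPromoteSet:
        def __init__(self, ways=8, insert_pos=4, promote_step=1):
            self.ways = ways
            self.insert_pos = insert_pos      # new lines enter mid-stack, not at MRU
            self.promote_step = promote_step  # hits move a line only slightly toward MRU
            self.stack = []                   # index 0 = MRU end, last = eviction end

        def access(self, tag):
            if tag in self.stack:
                i = self.stack.index(tag)
                self.stack.remove(tag)
                self.stack.insert(max(0, i - self.promote_step), tag)  # near promotion
                return "hit"
            if len(self.stack) >= self.ways:
                self.stack.pop()                                       # evict LRU end
            self.stack.insert(min(self.insert_pos, len(self.stack)), tag)
            return "miss"

    # A cyclic scan over 16 distinct lines thrashes an 8-way LRU set (0 hits),
    # while middle insertion keeps part of the working set resident.
    def run(policy, pattern, rounds=50):
        return sum(policy.access(t) == "hit" for _ in range(rounds) for t in pattern)

    print("hits with middle insertion:", run(MiddleInsertPromoteSet(), list(range(16))))
    ```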

  19. Augmented burst-error correction for UNICON laser memory. [digital memory

    NASA Technical Reports Server (NTRS)

    Lim, R. S.

    1974-01-01

    A single-burst-error correction system is described for data stored in the UNICON laser memory. In the proposed system, a long fire code with code length n greater than 16,768 bits was used as an outer code to augment an existing inner shorter fire code for burst error corrections. The inner fire code is a (80,64) code shortened from the (630,614) code, and it is used to correct a single-burst-error on a per-word basis with burst length b less than or equal to 6. The outer code, with b less than or equal to 12, would be used to correct a single-burst-error on a per-page basis, where a page consists of 512 32-bit words. In the proposed system, the encoding and error detection processes are implemented by hardware. A minicomputer, currently used as a UNICON memory management processor, is used on a time-demanding basis for error correction. Based upon existing error statistics, this combination of an inner code and an outer code would enable the UNICON system to obtain a very low error rate in spite of flaws affecting the recorded data.

  20. Cognitive Rehabilitation of Episodic Memory Disorders: From Theory to Practice

    PubMed Central

    Ptak, Radek; der Linden, Martial Van; Schnider, Armin

    2010-01-01

    Memory disorders are among the most frequent and most debilitating cognitive impairments following acquired brain damage. Cognitive remediation strategies attempt to restore lost memory capacity, provide compensatory techniques or teach the use of external memory aids. Memory rehabilitation has strongly been influenced by memory theory, and the interaction between both has stimulated the development of techniques such as spaced retrieval, vanishing cues or errorless learning. These techniques partly rely on implicit memory and therefore enable even patients with dense amnesia to acquire new information. However, knowledge acquired in this way is often strongly domain-specific and inflexible. In addition, individual patients with amnesia respond differently to distinct interventions. The factors underlying these differences have not yet been identified. Behavioral management of memory failures therefore often relies on a careful description of environmental factors and measurement of associated behavioral disorders such as unawareness of memory failures. The current evidence suggests that patients with less severe disorders benefit from self-management techniques and mnemonics whereas rehabilitation of severely amnesic patients should focus on behavior management, the transmission of domain-specific knowledge through implicit memory processes and the compensation for memory deficits with memory aids. PMID:20700383

  1. X-LUNA: Extending Free/Open Source Real Time Executive for On-Board Space Applications

    NASA Astrophysics Data System (ADS)

    Braga, P.; Henriques, L.; Zulianello, M.

    2008-08-01

    In this paper we present xLuna, a system based on the RTEMS [1] Real-Time Operating System that is able to run, on demand, a GNU/Linux Operating System [2] as RTEMS' lowest priority task. Linux runs in user mode and in a different memory partition. This allows running hard real-time tasks and Linux applications on the same system, sharing the hardware resources while keeping a safe isolation and the real-time characteristics of RTEMS. Communication between both systems is possible through a loosely coupled mechanism based on message queues. Currently, only the SPARC LEON2 processor with a Memory Management Unit (MMU) is supported. The advantage of having two isolated systems is that non-critical components can be quickly developed or simply ported, reducing time-to-market and budget.

  2. Job Management Requirements for NAS Parallel Systems and Clusters

    NASA Technical Reports Server (NTRS)

    Saphir, William; Tanner, Leigh Ann; Traversat, Bernard

    1995-01-01

    A job management system is a critical component of a production supercomputing environment, permitting oversubscribed resources to be shared fairly and efficiently. Job management systems that were originally designed for traditional vector supercomputers are not appropriate for the distributed-memory parallel supercomputers that are becoming increasingly important in the high performance computing industry. Newer job management systems offer new functionality but do not solve fundamental problems. We address some of the main issues in resource allocation and job scheduling we have encountered on two parallel computers - a 160-node IBM SP2 and a cluster of 20 high performance workstations located at the Numerical Aerodynamic Simulation facility. We describe the requirements for resource allocation and job management that are necessary to provide a production supercomputing environment on these machines, prioritizing according to difficulty and importance, and advocating a return to fundamental issues.

  3. A lightweight sensor network management system design

    USGS Publications Warehouse

    Yuan, F.; Song, W.-Z.; Peterson, N.; Peng, Y.; Wang, L.; Shirazi, B.; LaHusen, R.

    2008-01-01

    In this paper, we propose a lightweight and transparent management framework for TinyOS sensor networks, called L-SNMS, which minimizes the overhead of management functions, including memory usage overhead, network traffic overhead, and integration overhead. We accomplish this by making L-SNMS virtually transparent to other applications, hence requiring minimal integration. The proposed L-SNMS framework has been successfully tested on various sensor node platforms, including TelosB, MICAz and IMote2. © 2008 IEEE.

  4. From network heterogeneities to familiarity detection and hippocampal memory management

    PubMed Central

    Wang, Jane X.; Poe, Gina; Zochowski, Michal

    2009-01-01

    Hippocampal-neocortical interactions are key to the rapid formation of novel associative memories in the hippocampus and consolidation to long term storage sites in the neocortex. We investigated the role of network correlates during information processing in hippocampal-cortical networks. We found that changes in the intrinsic network dynamics due to the formation of structural network heterogeneities alone act as a dynamical and regulatory mechanism for stimulus novelty and familiarity detection, thereby controlling memory management in the context of memory consolidation. This network dynamic, coupled with an anatomically established feedback between the hippocampus and the neocortex, recovered heretofore unexplained properties of neural activity patterns during memory management tasks which we observed during sleep in multiunit recordings from behaving animals. Our simple dynamical mechanism shows an experimentally matched progressive shift of memory activation from the hippocampus to the neocortex and thus provides the means to achieve an autonomous off-line progression of memory consolidation. PMID:18999453

  5. Formal mechanization of device interactions with a process algebra

    NASA Technical Reports Server (NTRS)

    Schubert, E. Thomas; Levitt, Karl; Cohen, Gerald C.

    1992-01-01

    The principal emphasis is to develop a methodology to formally verify correct synchronization communication of devices in a composed hardware system. Previous system integration efforts have focused on vertical integration of one layer on top of another. This task examines 'horizontal' integration of peer devices. To formally reason about communication, we mechanize a process algebra in the Higher Order Logic (HOL) theorem proving system. Using this formalization we show how four types of device interactions can be represented and verified to behave as specified. The report also describes the specification of a system consisting of an AVM-1 microprocessor and a memory management unit which were verified in previous work. A proof of correct communication is presented, and the extensions to the system specification to add a direct memory device are discussed.

  6. Makalu: fast recoverable allocation of non-volatile memory

    DOE PAGES

    Bhandari, Kumud; Chakrabarti, Dhruva R.; Boehm, Hans-J.

    2016-10-19

    Byte addressable non-volatile memory (NVRAM) is likely to supplement, and perhaps eventually replace, DRAM. Applications can then persist data structures directly in memory instead of serializing them and storing them onto a durable block device. However, failures during execution can leave data structures in NVRAM unreachable or corrupt. In this paper, we present Makalu, a system that addresses non-volatile memory management. Makalu offers an integrated allocator and recovery-time garbage collector that maintains internal consistency, avoids NVRAM memory leaks, and is efficient, all in the face of failures. We show that a careful allocator design can support a less restrictive and a much more familiar programming model than existing persistent memory allocators. Our allocator significantly reduces the per allocation persistence overhead by lazily persisting non-essential metadata and by employing a post-failure recovery-time garbage collector. Experimental results show that the resulting online speed and scalability of our allocator are comparable to well-known transient allocators, and significantly better than state-of-the-art persistent allocators.
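
    To illustrate the lazy-persistence idea at a very high level, the toy model below persists only the essential metadata (allocated blocks and declared roots) and rebuilds the free list with a recovery-time garbage collector that traces reachability from the roots, so an allocation orphaned by a crash is reclaimed on the next recovery. Everything here (the JSON-backed heap, block granularity, API names) is an assumption for illustration and bears no resemblance to Makalu's actual C/C++ implementation.

    ```python
    # Toy model of recoverable allocation: persist only allocated blocks and
    # roots; rebuild the free list at recovery time by tracing from the roots.
    import json

    class ToyPersistentHeap:
        def __init__(self, path="heap.json", size=64):
            self.path = path
            self.size = size
            try:
                with open(path) as f:
                    state = json.load(f)           # durable state survives "crashes"
            except FileNotFoundError:
                state = {"blocks": {}, "roots": {}}
            self.blocks = {int(k): v for k, v in state["blocks"].items()}
            self.roots = state["roots"]
            self.free = None                       # non-essential metadata: rebuilt lazily

        def _persist(self):
            with open(self.path, "w") as f:
                json.dump({"blocks": {str(k): v for k, v in self.blocks.items()},
                           "roots": self.roots}, f)

        def alloc(self, refs=()):
            if self.free is None:
                self.recover()                     # rebuild the free list on first use
            block = self.free.pop()
            self.blocks[block] = list(refs)        # a block stores references to others
            self._persist()
            return block

        def set_root(self, name, block):
            self.roots[name] = block
            self._persist()

        def recover(self):
            """Recovery-time GC: anything unreachable from the roots becomes free."""
            live, stack = set(), list(self.roots.values())
            while stack:
                b = stack.pop()
                if b not in live:
                    live.add(b)
                    stack.extend(self.blocks.get(b, []))
            self.blocks = {b: r for b, r in self.blocks.items() if b in live}
            self.free = [b for b in range(self.size) if b not in live]

    # If a crash happens between alloc() and set_root(), the orphaned block is
    # reclaimed the next time recover() runs, so no persistent leak remains.
    heap = ToyPersistentHeap()
    a = heap.alloc()
    heap.set_root("head", a)
    ```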

  7. Makalu: fast recoverable allocation of non-volatile memory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhandari, Kumud; Chakrabarti, Dhruva R.; Boehm, Hans-J.

    Byte addressable non-volatile memory (NVRAM) is likely to supplement, and perhaps eventually replace, DRAM. Applications can then persist data structures directly in memory instead of serializing them and storing them onto a durable block device. However, failures during execution can leave data structures in NVRAM unreachable or corrupt. In this paper, we present Makalu, a system that addresses non-volatile memory management. Makalu offers an integrated allocator and recovery-time garbage collector that maintains internal consistency, avoids NVRAM memory leaks, and is efficient, all in the face of failures. We show that a careful allocator design can support a less restrictive and a much more familiar programming model than existing persistent memory allocators. Our allocator significantly reduces the per allocation persistence overhead by lazily persisting non-essential metadata and by employing a post-failure recovery-time garbage collector. Experimental results show that the resulting online speed and scalability of our allocator are comparable to well-known transient allocators, and significantly better than state-of-the-art persistent allocators.

  8. Primary Care Collaborative Memory Clinics: Building Capacity for Optimized Dementia Care.

    PubMed

    Lee, Linda; Hillier, Loretta M; Molnar, Frank; Borrie, Michael J

    2017-01-01

    Increasingly, primary care collaborative memory clinics (PCCMCs) are being established to build capacity for person-centred dementia care. This paper reflects on the significance of PCCMCs within the system of care for older adults, supported with data from ongoing evaluation studies. Results highlight timelier access to assessment with a high proportion of patients being managed in primary care within a person-centred approach to care. Enhancing primary care capacity for dementia care with interprofessional and collaborative care will strengthen the system's ability to respond to increasing demands for service and mitigate the growth of wait times to access geriatric specialist assessment.

  9. PIYAS-proceeding to intelligent service oriented memory allocation for flash based data centric sensor devices in wireless sensor networks.

    PubMed

    Rizvi, Sanam Shahla; Chung, Tae-Sun

    2010-01-01

    Flash memory has become a more widespread storage medium for modern wireless devices because of its effective characteristics like non-volatility, small size, light weight, fast access speed, shock resistance, high reliability and low power consumption. Sensor nodes are highly resource constrained in terms of limited processing speed, runtime memory, persistent storage, communication bandwidth and finite energy. Therefore, for wireless sensor networks supporting sense, store, merge and send schemes, an efficient and reliable file system is highly required with consideration of sensor node constraints. In this paper, we propose a novel log structured external NAND flash memory based file system, called Proceeding to Intelligent service oriented memorY Allocation for flash based data centric Sensor devices in wireless sensor networks (PIYAS). This is the extended version of our previously proposed PIYA [1]. The main goals of the PIYAS scheme are to achieve instant mounting and reduced SRAM space by keeping memory mapping information to a very low size, and to provide high query response throughput by allocating memory to the sensor data according to network business rules. The scheme intelligently samples and stores the raw data and provides high in-network data availability by keeping the aggregate data for a longer period of time than any other scheme has done before. We propose effective garbage collection and wear-leveling schemes as well. The experimental results show that PIYAS is an optimized memory management scheme allowing high performance for wireless sensor networks.
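
    The toy model below captures the general log-structured pattern the abstract refers to: appends go to an active block, a small in-RAM map tracks the latest copy of each key, garbage collection copies live pages out of a victim block before erasing it, and wear leveling favors the least-erased free block. The geometry, policies, and class names are invented for illustration and are not the PIYAS algorithms.

    ```python
    # Toy log-structured NAND manager: append-only writes, small in-RAM map,
    # garbage collection of a victim block, and simple wear leveling.
    PAGES_PER_BLOCK = 4

    class ToyFlash:
        def __init__(self, n_blocks=8):
            self.blocks = [[None] * PAGES_PER_BLOCK for _ in range(n_blocks)]
            self.erase_count = [0] * n_blocks
            self.map = {}                     # key -> (block, page): tiny SRAM footprint
            self.active, self.cursor = 0, 0

        def write(self, key, value):
            if self.cursor == PAGES_PER_BLOCK:
                self._advance_block()
            self.blocks[self.active][self.cursor] = (key, value)   # append-only program
            self.map[key] = (self.active, self.cursor)             # old copy becomes garbage
            self.cursor += 1

        def read(self, key):
            b, p = self.map[key]
            return self.blocks[b][p][1]

        def _advance_block(self):
            free = [b for b in range(len(self.blocks))
                    if b != self.active and all(pg is None for pg in self.blocks[b])]
            if free:
                # wear leveling: prefer the free block erased least often
                self.active = min(free, key=lambda b: self.erase_count[b])
                self.cursor = 0
                return
            self._garbage_collect()

        def _garbage_collect(self):
            # Victim: the block with the fewest live pages (never the active block).
            # Assumes the victim has at least one dead page to reclaim.
            def live_pages(b):
                return [pg for i, pg in enumerate(self.blocks[b])
                        if pg and self.map.get(pg[0]) == (b, i)]
            victim = min((b for b in range(len(self.blocks)) if b != self.active),
                         key=lambda b: len(live_pages(b)))
            survivors = live_pages(victim)
            self.blocks[victim] = [None] * PAGES_PER_BLOCK          # erase victim
            self.erase_count[victim] += 1
            self.active, self.cursor = victim, 0
            for key, value in survivors:                            # copy live data back
                self.blocks[self.active][self.cursor] = (key, value)
                self.map[key] = (self.active, self.cursor)
                self.cursor += 1

    # Repeatedly overwritten keys create garbage that the collector reclaims.
    flash = ToyFlash()
    for i in range(40):
        flash.write(f"sample{i % 6}", i)
    print(flash.read("sample3"), flash.erase_count)
    ```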

  10. Semantic and episodic memory in children with temporal lobe epilepsy: do they relate to literacy skills?

    PubMed

    Lah, Suncica; Smith, Mary Lou

    2014-01-01

    Children with temporal lobe epilepsy are at risk for deficits in new learning (episodic memory) and literacy skills. Semantic memory deficits and double dissociations between episodic and semantic memory have recently been found in this patient population. In the current study we investigate whether impairments of these 2 distinct memory systems relate to literacy skills. 57 children with unilateral temporal lobe epilepsy completed tests of verbal memory (episodic and semantic) and literacy skills (reading and spelling accuracy, and reading comprehension). For the entire group, semantic memory explained over 30% of variance in each of the literacy domains. Episodic memory explained a significant, but rather small proportion (< 10%) of variance in reading and spelling accuracy, but not in reading comprehension. Moreover, when children with opposite patterns of specific memory impairments (intact semantic/impaired episodic, intact episodic/impaired semantic) were compared, significant reductions in literacy skills were evident only in children with semantic memory impairments, but not in children with episodic memory impairments relative to the norms and to children with temporal lobe epilepsy who had intact memory. Our study provides the first evidence for differential relations between episodic and semantic memory impairments and literacy skills in children with temporal lobe epilepsy. As such, it highlights the urgent need to consider semantic memory deficits in management of children with temporal lobe epilepsy and undertake further research into the nature of reading difficulties of children with semantic memory impairments.

  11. Data Telemetry and Acquisition System for Acoustic Signal Processing Investigations.

    DTIC Science & Technology

    1996-02-20

    ... were VME-based computer systems operating under the VxWorks real-time operating system. Each system shared a common hardware and software ... real-time operating system. It interfaces to the Berg PCM Decommutator board, which searches for the embedded synchronization word in the data and re ... software were built on top of this architecture. The multi-tasking, message queue and memory management facilities of the VxWorks real-time operating system are ...

  12. Regional information guidance system based on hypermedia concept

    NASA Astrophysics Data System (ADS)

    Matoba, Hiroshi; Hara, Yoshinori; Kasahara, Yutako

    1990-08-01

    A regional information guidance system has been developed on an image workstation. Two main features of this system are a hypermedia data structure and a friendly visual interface realized by a full-color frame memory system. As the hypermedia data structure manages regional information such as maps, pictures and explanations of points of interest, users can retrieve this information item by item, moving from one item to the next as their interests change. For example, users can retrieve the explanation of a picture through the link between pictures and text explanations. Users can also traverse from one document to another by using keywords as cross-reference indices. The second feature is the use of a full-color, high-resolution, wide-space frame memory for visual interface design. This frame memory system enables real-time operation on image data and natural scene representation. The system also provides a halftone rendering function which enables fade-in/out presentations. These fade-in/out functions, used in displaying and erasing menu and image data, make the visual interface soft for human eyes. The system we have developed is a typical example of multimedia applications. We expect the image workstation will play an important role as a platform for multimedia applications.

  13. A Case Study on Neural Inspired Dynamic Memory Management Strategies for High Performance Computing.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vineyard, Craig Michael; Verzi, Stephen Joseph

    As high performance computing architectures pursue more computational power there is a need for increased memory capacity and bandwidth as well. A multi-level memory (MLM) architecture addresses this need by combining multiple memory types with different characteristics as varying levels of the same architecture. How to efficiently utilize this memory infrastructure is an open challenge, and in this research we sought to investigate whether neural inspired approaches can meaningfully help with memory management. In particular we explored neurogenesis-inspired resource allocation, and were able to show that a neural inspired mixed controller policy can beneficially impact how MLM architectures utilize memory.

  14. HTMT-class Latency Tolerant Parallel Architecture for Petaflops Scale Computation

    NASA Technical Reports Server (NTRS)

    Sterling, Thomas; Bergman, Larry

    2000-01-01

    Computational Aero Sciences and other numerically intensive computation disciplines demand computing throughputs substantially greater than the Teraflops scale systems only now becoming available. The related fields of fluids, structures, thermal, combustion, and dynamic controls are among the interdisciplinary areas that in combination with sufficient resolution and advanced adaptive techniques may force performance requirements towards Petaflops. This will be especially true for compute-intensive models such as Navier-Stokes, or when such system models are only part of a larger design optimization computation involving many design points. Yet recent experience with conventional MPP configurations comprising commodity processing and memory components has shown that larger scale frequently results in higher programming difficulty and lower system efficiency. While important advances in system software and algorithmic techniques have had some impact on efficiency and programmability for certain classes of problems, in general it is unlikely that software alone will resolve the challenges to higher scalability. As in the past, future generations of high-end computers may require a combination of hardware architecture and system software advances to enable efficient operation at a Petaflops level. The NASA-led HTMT project has engaged the talents of a broad interdisciplinary team to develop a new strategy in high-end system architecture to deliver petaflops scale computing in the 2004/5 timeframe. The Hybrid-Technology, MultiThreaded parallel computer architecture incorporates several advanced technologies in combination with an innovative dynamic adaptive scheduling mechanism to provide unprecedented performance and efficiency within practical constraints of cost, complexity, and power consumption. The emerging superconductor Rapid Single Flux Quantum electronics can operate at 100 GHz (the record is 770 GHz) at one percent of the power required by conventional semiconductor logic. Wave Division Multiplexing optical communications can approach a peak per-fiber bandwidth of 1 Tbps, and the new Data Vortex network topology employing this technology can connect tens of thousands of ports providing a bi-section bandwidth on the order of a Petabyte per second with latencies well below 100 nanoseconds, even under heavy loads. Processor-in-Memory (PIM) technology combines logic and memory on the same chip, exposing the internal bandwidth of the memory row buffers at low latency. Holographic photorefractive storage technologies provide high-density memory with access a thousand times faster than conventional disk technologies. Together these technologies enable a new class of shared memory system architecture with a peak performance in the range of a Petaflops but size and power requirements comparable to today's largest Teraflops scale systems. To achieve high sustained performance, HTMT combines an advanced multithreading processor architecture with a memory-driven coarse-grained latency management strategy called "percolation", yielding high efficiency while reducing much of the parallel programming burden. This paper will present the basic system architecture characteristics made possible through this series of advanced technologies and then give a detailed description of the new percolation approach to runtime latency management.

  15. Robust relationship between reading span and speech recognition in noise

    PubMed Central

    Souza, Pamela; Arehart, Kathryn

    2015-01-01

    Objective: Working memory refers to a cognitive system that manages information processing and temporary storage. Recent work has demonstrated that individual differences in working memory capacity measured using a reading span task are related to ability to recognize speech in noise. In this project, we investigated whether the specific implementation of the reading span task influenced the strength of the relationship between working memory capacity and speech recognition. Design: The relationship between speech recognition and working memory capacity was examined for two different working memory tests that varied in approach, using a within-subject design. Data consisted of audiometric results along with the two different working memory tests; one speech-in-noise test; and a reading comprehension test. Study sample: The test group included 94 older adults with varying hearing loss and 30 younger adults with normal hearing. Results: Listeners with poorer working memory capacity had more difficulty understanding speech in noise after accounting for age and degree of hearing loss. That relationship did not differ significantly between the two different implementations of reading span. Conclusions: Our findings suggest that different implementations of a verbal reading span task do not affect the strength of the relationship between working memory capacity and speech recognition. PMID:25975360

  16. Robust relationship between reading span and speech recognition in noise.

    PubMed

    Souza, Pamela; Arehart, Kathryn

    2015-01-01

    Working memory refers to a cognitive system that manages information processing and temporary storage. Recent work has demonstrated that individual differences in working memory capacity measured using a reading span task are related to ability to recognize speech in noise. In this project, we investigated whether the specific implementation of the reading span task influenced the strength of the relationship between working memory capacity and speech recognition. The relationship between speech recognition and working memory capacity was examined for two different working memory tests that varied in approach, using a within-subject design. Data consisted of audiometric results along with the two different working memory tests; one speech-in-noise test; and a reading comprehension test. The test group included 94 older adults with varying hearing loss and 30 younger adults with normal hearing. Listeners with poorer working memory capacity had more difficulty understanding speech in noise after accounting for age and degree of hearing loss. That relationship did not differ significantly between the two different implementations of reading span. Our findings suggest that different implementations of a verbal reading span task do not affect the strength of the relationship between working memory capacity and speech recognition.

  17. The Fluke Security Project

    DTIC Science & Technology

    2000-04-01

    ... be an extension of Utah's nascent Quarks system, oriented to closely coupled cluster environments. However, the grant did not actually begin until ... Intel x86, implemented ten virtual machine monitors and servers, including a virtual memory manager, a checkpointer, a process manager, a file server ... Fluke, we developed a novel hierarchical processor scheduling framework called CPU inheritance scheduling [5]. This is a framework for scheduling ...

  18. Apparatus and Method for Low-Temperature Training of Shape Memory Alloys

    NASA Technical Reports Server (NTRS)

    Swanger, A. M.; Fesmire, J. E.; Trigwell, S.; Gibson, T. L.; Williams, M. K.; Benafan, O.

    2015-01-01

    An apparatus and method for the low-temperature thermo-mechanical training of shape memory alloys (SMA) has been developed. The experimental SMA materials are being evaluated as prototypes for applicability in novel thermal management systems for future cryogenic applications. Alloys providing two-way actuation at cryogenic temperatures are the chief target. The mechanical training regimen was focused on the controlled movement of rectangular strips, with S-bend configurations, at temperatures as low as 30 K. The custom holding fixture included temperature sensors and a low heat-leak linear actuator with a magnetic coupling. The fixture was mounted to a Gifford-McMahon cryocooler providing up to 25 W of cooling power at 20 K and housed within a custom vacuum chamber. Operations included both training cycles and verification of shape memory movement. The system design and operation are discussed. Results of the training for select prototype alloys are presented.

  19. Apparatus and method for low-temperature training of shape memory alloys

    NASA Astrophysics Data System (ADS)

    Swanger, A. M.; Fesmire, J. E.; Trigwell, S.; Gibson, T. L.; Williams, M. K.; Benafan, O.

    2015-12-01

    An apparatus and method for the low-temperature thermo-mechanical training of shape memory alloys (SMA) has been developed. The experimental SMA materials are being evaluated as prototypes for applicability in novel thermal management systems for future cryogenic applications. Alloys providing two-way actuation at cryogenic temperatures are the chief target. The mechanical training regimen was focused on the controlled movement of rectangular strips, with S-bend configurations, at temperatures as low as 30 K. The custom holding fixture included temperature sensors and a low heat-leak linear actuator with a magnetic coupling. The fixture was mounted to a Gifford-McMahon cryocooler providing up to 25 W of cooling power at 20 K and housed within a custom vacuum chamber. Operations included both training cycles and verification of shape memory movement. The system design and operation are discussed. Results of the training for select prototype alloys are presented.

  20. A Hybrid Task Graph Scheduler for High Performance Image Processing Workflows.

    PubMed

    Blattner, Timothy; Keyrouz, Walid; Bhattacharyya, Shuvra S; Halem, Milton; Brady, Mary

    2017-12-01

    Designing applications for scalability is key to improving their performance in hybrid and cluster computing. Scheduling code to utilize parallelism is difficult, particularly when dealing with data dependencies, memory management, data motion, and processor occupancy. The Hybrid Task Graph Scheduler (HTGS) is an abstract execution model, framework, and API that improves programmer productivity when implementing hybrid workflows for multi-core and multi-GPU systems. HTGS manages dependencies between tasks, represents CPU and GPU memories independently, overlaps computations with disk I/O and memory transfers, keeps multiple GPUs occupied, and uses all available compute resources. Through these abstractions, data motion and memory are explicit; this makes data locality decisions more accessible. To demonstrate the HTGS application program interface (API), we present implementations of two example algorithms: (1) a matrix multiplication that shows how easily task graphs can be used; and (2) a hybrid implementation of microscopy image stitching that reduces code size by ≈ 43% compared to a manually coded hybrid workflow implementation and showcases the minimal overhead of task graphs in HTGS. Both of the HTGS-based implementations show good performance. In image stitching the HTGS implementation achieves similar performance to the hybrid workflow implementation. Matrix multiplication with HTGS achieves 1.3× and 1.8× speedup over the multi-threaded OpenBLAS library for 16k × 16k and 32k × 32k size matrices, respectively.
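
    To convey what a task-graph execution model looks like in miniature, the sketch below wires three tasks into a pipeline, each running in its own thread and handing results to the next stage through a queue so the stages overlap. The task names, queue-based plumbing, and poison-pill shutdown are illustrative assumptions; HTGS itself is a C++ framework with a much richer API.

    ```python
    # Minimal pipelined task graph: each task runs in its own thread and
    # forwards results downstream, so read, transform, and write overlap.
    import queue, threading

    class Task(threading.Thread):
        def __init__(self, name, fn, out_q=None):
            super().__init__(name=name)
            self.fn, self.in_q, self.out_q = fn, queue.Queue(), out_q

        def run(self):
            while True:
                item = self.in_q.get()
                if item is None:                   # poison pill: shut down and propagate
                    if self.out_q is not None:
                        self.out_q.put(None)
                    break
                result = self.fn(item)
                if self.out_q is not None:
                    self.out_q.put(result)

    # Build a three-stage graph: read -> transform -> write.
    results = []
    write = Task("write", results.append)
    transform = Task("transform", lambda x: x * x, out_q=write.in_q)
    read = Task("read", lambda x: x + 1, out_q=transform.in_q)
    for t in (write, transform, read):
        t.start()
    for x in range(5):
        read.in_q.put(x)
    read.in_q.put(None)                            # signal end of stream
    for t in (read, transform, write):
        t.join()
    print(results)                                 # [1, 4, 9, 16, 25]
    ```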

  1. A review of shape memory material’s applications in the offshore oil and gas industry

    NASA Astrophysics Data System (ADS)

    Patil, Devendra; Song, Gangbing

    2017-09-01

    The continuously increasing demand for oil and gas and the depleting number of new large reservoir discoveries have made it necessary for the oil and gas industry to investigate and design new, improved technologies that unlock new sources of energy and squeeze more from existing resources. Shape memory materials (SMM), with their remarkable properties such as the shape memory effect (SME), corrosion resistance, and superelasticity, have shown great potential to meet these demands by significantly improving the functionality and durability of offshore systems. Shape memory alloys (SMA) and shape memory polymers (SMP) are the two most commonly used types of SMM and are ideally suited for use over a range of robust engineering applications found within the oil and gas industry, such as deepwater actuators, valves, underwater connectors, seals, self-torquing fasteners and sand management. The potential high strain and high force output of the SME of SMA can be harnessed to create a lightweight, solid-state alternative to conventional hydraulic, pneumatic or motor based actuator systems. The phase transformation property enables the SMA to withstand erosive stresses, which is useful for minimizing the effect of erosion often experienced by downhole devices. The superelasticity of the SMA provides good energy dissipation, and can overcome the various defects and limitations suffered by conventional passive damping methods. The higher strain recovery during SME makes SMP ideal for the development of packers and sand management downhole. The increasing number of SMM related research papers and patents from the oil and gas industry indicates the growing research interest of the industry to implement SMM in offshore applications. This paper reviews the recent developments and applications of SMM in the offshore oil and gas industry.

  2. Advanced information processing system: Local system services

    NASA Technical Reports Server (NTRS)

    Burkhardt, Laura; Alger, Linda; Whittredge, Roy; Stasiowski, Peter

    1989-01-01

    The Advanced Information Processing System (AIPS) is a multi-computer architecture composed of hardware and software building blocks that can be configured to meet a broad range of application requirements. The hardware building blocks are fault-tolerant, general-purpose computers; fault- and damage-tolerant networks (both computer and input/output); and interfaces between the networks and the computers. The software building blocks are the major software functions: local system services, input/output system services, inter-computer system services, and the system manager. The foundation of the local system services is an operating system with the functions required for a traditional real-time multi-tasking computer, such as task scheduling, inter-task communication, memory management, interrupt handling, and time maintenance. Resting on this foundation are the redundancy management functions necessary in a redundant computer and the status reporting functions required for an operator interface. The functional requirements, functional design, and detailed specifications for all the local system services are documented.

  3. Data Movement Dominates: Advanced Memory Technology to Address the Real Exascale Power Problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bergman, Keren

    Energy is the fundamental barrier to Exascale supercomputing and is dominated by the cost of moving data from one point to another, not computation. Similarly, performance is dominated by data movement, not computation. The solution to this problem requires three critical technologies: 3D integration, optical chip-to-chip communication, and a new communication model. The Sandia-led "Data Movement Dominates" project aimed to develop memory systems and new architectures based on these technologies that have the potential to lower the cost of local memory accesses by orders of magnitude and provide substantially more bandwidth. Only through these transformational advances can future systems reach the goals of Exascale computing with a manageable power budget. The Sandia-led team included co-PIs from Columbia University, Lawrence Berkeley Lab, and the University of Maryland. The Columbia effort of Data Movement Dominates focused on developing a physically accurate simulation environment and experimental verification for optically-connected memory (OCM) systems that can enable continued performance scaling through high-bandwidth capacity, energy-efficient bit-rate transparency, and time-of-flight latency. With OCM, memory device parallelism and total capacity can scale to match future high-performance computing requirements without sacrificing data-movement efficiency. When we consider systems with integrated photonics, links to memory can be seamlessly integrated with the interconnection network; in a sense, memory becomes a primary aspect of the interconnection network. At the core of the Columbia effort, toward expanding our understanding of OCM-enabled computing, we created an integrated modeling and simulation environment that uniquely integrates the physical behavior of the optical layer. The PhoenixSim suite of design and software tools developed under this effort has enabled the co-design and performance evaluation of photonics-enabled OCM architectures on Exascale computing systems.

  4. A Programmer-Oriented Approach to Safe Concurrency

    DTIC Science & Technology

    2003-05-01

    and leaving a synchronized block additionally has effects on the management of memory values in the JMM. The practical outcome of these effects is ... an object-oriented effects system; (3) analysis to track the association of locks with regions; (4) policy descriptions for allowable method ...

  5. Embedded real-time operating system micro kernel design

    NASA Astrophysics Data System (ADS)

    Cheng, Xiao-hui; Li, Ming-qiang; Wang, Xin-zheng

    2005-12-01

    Embedded systems usually require real-time behavior. Based on an 8051 microcontroller, an embedded real-time operating system micro kernel is proposed consisting of six parts: critical section handling, task scheduling, interrupt handling, semaphore and message mailbox communication, clock management, and memory management. The CPU and other resources are distributed among tasks rationally according to their importance and urgency. The design proposed here specifies the position, definition, function, and principle of the micro kernel. The kernel runs on the platform of an ATMEL AT89C51 microcontroller. Simulation results show that the designed micro kernel is stable and reliable and responds quickly while operating in an application system.
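    The kernel described above targets an 8051-class microcontroller and its code is not reproduced in this record; the sketch below is only a toy illustration of two of the six listed parts, priority-based task scheduling and a message mailbox. Tasks are modeled as Python generators that yield to give the CPU back; all names are invented for illustration.

```python
import heapq
from collections import deque

class Mailbox:
    """Message mailbox for inter-task communication."""
    def __init__(self):
        self.msgs = deque()
    def post(self, msg):
        self.msgs.append(msg)
    def pend(self):
        return self.msgs.popleft() if self.msgs else None

class Kernel:
    """Cooperative, priority-based scheduler: always runs the highest-priority
    ready task; tasks are generators that yield to return the CPU."""
    def __init__(self):
        self.ready = []                     # (priority, seq, task) min-heap
        self.seq = 0
    def spawn(self, priority, task):
        heapq.heappush(self.ready, (priority, self.seq, task))
        self.seq += 1
    def run(self):
        while self.ready:
            prio, _, task = heapq.heappop(self.ready)
            try:
                next(task)                  # run the task until it yields
                self.spawn(prio, task)      # still alive: requeue at same priority
            except StopIteration:
                pass                        # task finished

box = Mailbox()

def producer():
    for i in range(3):
        box.post(f"msg-{i}")
        yield

def consumer():
    while True:
        msg = box.pend()
        if msg is None:
            return
        print("consumer got", msg)
        yield

k = Kernel()
k.spawn(priority=1, task=producer())        # lower number = higher priority
k.spawn(priority=2, task=consumer())
k.run()
```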

  6. Relaxation-Induced Memory Effect of LiFePO4 Electrodes in Li-Ion Batteries.

    PubMed

    Jia, Jianfeng; Tan, Chuhao; Liu, Mengchuang; Li, De; Chen, Yong

    2017-07-26

    In Li-ion batteries, a memory effect has been found in several commercial two-phase materials, appearing as a voltage bump and a step in the (dis)charging plateau, which delays the two-phase transition and influences the estimation of the state of charge. Although the memory effect was first discovered in olivine LiFePO4, its origin and dependences are still not clear, and understanding them is critical for regulating the memory effect of LiFePO4. Herein, LiFePO4 has been synthesized with a home-built spray drying instrument, and its memory effect has been investigated in Li-ion batteries. For the as-synthesized LiFePO4, the memory effect depends significantly on the relaxation time after the phase transition. Moreover, the voltage bump of the memory effect is actually a delayed voltage overshoot overlaid at the edge of the stepped (dis)charging plateau. Furthermore, we studied the kinetics of the LiFePO4 electrode with electrochemical impedance spectroscopy (EIS), which shows that the memory effect is related to the electrochemical kinetics. Thereby, the underlying mechanism of the memory effect has been revealed, which can guide the optimization of two-phase electrode materials and the improvement of Li-ion battery management systems.

  7. Dementia

    MedlinePlus

    ... living. Functions affected include memory, language skills, visual perception, problem solving, self-management, and the ability to ...

  8. The Cheetah data management system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kunz, P.F.; Word, G.B.

    1992-09-01

    Cheetah is a data management system based on the C programming language, with support for other languages. Its main goal is to transfer data between memory and I/O streams in a general way. The streams are either associated with disk files or are network data streams. Cheetah provides optional convenience functions to assist in the management of C structures. Cheetah streams are self-describing, so that general-purpose applications can fully understand an incoming stream. This information can be used to display the data in an incoming stream to the user of an interactive general application, complete with variable names and optional comments.
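    Cheetah's actual stream format is not documented in this record; the sketch below merely illustrates the general idea of a self-describing stream: the writer emits a schema (field names, type codes, and optional comments) ahead of the packed data, so that a generic reader can recover names and values without prior knowledge. The header layout and field names here are assumptions for illustration only.

```python
import io
import json
import struct

def write_stream(fp, schema, records):
    """Write a self-describing stream: a length-prefixed JSON schema header
    (field names, struct type codes, optional comments), then packed records."""
    header = json.dumps(schema).encode()
    fp.write(struct.pack("I", len(header)))
    fp.write(header)
    fmt = "".join(f["type"] for f in schema["fields"])
    for rec in records:
        fp.write(struct.pack(fmt, *rec))

def read_stream(fp):
    """A generic reader needs no prior knowledge of the writer: names, types,
    and comments are recovered from the schema carried by the stream itself."""
    (hlen,) = struct.unpack("I", fp.read(4))
    schema = json.loads(fp.read(hlen))
    fmt = "".join(f["type"] for f in schema["fields"])
    names = [f["name"] for f in schema["fields"]]
    size = struct.calcsize(fmt)
    records = []
    while chunk := fp.read(size):
        records.append(dict(zip(names, struct.unpack(fmt, chunk))))
    return schema, records

schema = {"fields": [{"name": "run",    "type": "i", "comment": "run number"},
                     {"name": "energy", "type": "d", "comment": "beam energy"}]}
buf = io.BytesIO()
write_stream(buf, schema, [(1, 13.6), (2, 14.0)])
buf.seek(0)
print(read_stream(buf)[1])   # [{'run': 1, 'energy': 13.6}, {'run': 2, 'energy': 14.0}]
```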

  9. CERES AuTomAted job Loading SYSTem (CATALYST): An automated workflow manager for satellite data production

    NASA Astrophysics Data System (ADS)

    Gleason, J. L.; Hillyer, T. N.; Wilkins, J.

    2012-12-01

    The CERES Science Team integrates data from 5 CERES instruments onboard the Terra, Aqua, and NPP missions. The processing chain fuses CERES observations with data from 19 other unique sources. The addition of CERES Flight Model 5 (FM5) onboard NPP, coupled with ground processing system upgrades, further emphasizes the need for an automated job-submission utility to manage multiple processing streams concurrently. The operator-driven, legacy-processing approach relied on manually staging data from magnetic tape to the limited spinning disk attached to a shared-memory architecture system. The migration of CERES production code to a distributed, cluster computing environment with approximately one petabyte of spinning disk containing all precursor input data products facilitates the development of a CERES-specific, automated workflow manager. In the cluster environment, I/O is the primary system resource in contention across jobs. Therefore, system load can be maximized with a throttling workload manager. This poster discusses a Java and Perl implementation of an automated job management tool tailored for CERES processing.

  10. Concurrent Image Processing Executive (CIPE). Volume 1: Design overview

    NASA Technical Reports Server (NTRS)

    Lee, Meemong; Groom, Steven L.; Mazer, Alan S.; Williams, Winifred I.

    1990-01-01

    The design and implementation of a Concurrent Image Processing Executive (CIPE), which is intended to become the support system software for a prototype high-performance science analysis workstation, are described. The target machine for this software is a JPL/Caltech Mark IIIfp Hypercube hosted by either a MASSCOMP 5600 or a Sun-3 or Sun-4 workstation; however, the design will accommodate other concurrent machines of similar architecture, i.e., local-memory, multiple-instruction-multiple-data (MIMD) machines. The CIPE system provides both a multimode user interface and an applications programmer interface, and has been designed around four loosely coupled modules: user interface, host-resident executive, hypercube-resident executive, and application functions. The loose coupling between modules allows modification of a particular module without significantly affecting the other modules in the system. In order to enhance hypercube memory utilization and to allow expansion of image processing capabilities, a specialized program management method, incremental loading, was devised. To minimize data transfer between host and hypercube, a data management method which distributes, redistributes, and tracks data set information was implemented. The data management also allows data sharing among application programs. The CIPE software architecture provides a flexible environment for scientific analysis of complex remote sensing image data, such as planetary data and imaging spectrometry, utilizing state-of-the-art concurrent computation capabilities.

  11. Concurrent Image Processing Executive (CIPE)

    NASA Technical Reports Server (NTRS)

    Lee, Meemong; Cooper, Gregory T.; Groom, Steven L.; Mazer, Alan S.; Williams, Winifred I.

    1988-01-01

    The design and implementation of a Concurrent Image Processing Executive (CIPE), which is intended to become the support system software for a prototype high-performance science analysis workstation, are discussed. The target machine for this software is a JPL/Caltech Mark IIIfp Hypercube hosted by either a MASSCOMP 5600 or a Sun-3 or Sun-4 workstation; however, the design will accommodate other concurrent machines of similar architecture, i.e., local-memory, multiple-instruction-multiple-data (MIMD) machines. The CIPE system provides both a multimode user interface and an applications programmer interface, and has been designed around four loosely coupled modules: (1) user interface, (2) host-resident executive, (3) hypercube-resident executive, and (4) application functions. The loose coupling between modules allows modification of a particular module without significantly affecting the other modules in the system. In order to enhance hypercube memory utilization and to allow expansion of image processing capabilities, a specialized program management method, incremental loading, was devised. To minimize data transfer between host and hypercube, a data management method which distributes, redistributes, and tracks data set information was implemented.

  12. Quality Management Systems in the Clinical Laboratories in Latin America

    PubMed Central

    2015-01-01

    The implementation of management systems in accordance with standards like ISO 9001:2008 (1,2) in clinical laboratories has conferred an added value of reliability and therefore a very significant contribution to patient safety. ISO 9001:2008 (1), a certification standard, and ISO 15189:2012 (2), an accreditation standard, have both generated institutional memory where they have been implemented: a transformation of culture focused on correct execution, control and follow-up, the evidence needed, and the importance of record keeping. PMID:27683495

  13. Systems and methods to control multiple peripherals with a single-peripheral application code

    DOEpatents

    Ransom, Ray M.

    2013-06-11

    Methods and apparatus are provided for enhancing the BIOS of a hardware peripheral device to manage multiple peripheral devices simultaneously without modifying the application software of the peripheral device. The apparatus comprises a logic control unit and a memory in communication with the logic control unit. The memory is partitioned into a plurality of ranges, each range comprising one or more blocks of memory, one range being associated with each instance of the peripheral application and one range being reserved for storage of a data pointer related to each peripheral application of the plurality. The logic control unit is configured to operate multiple instances of the control application by duplicating one instance of the peripheral application for each peripheral device of the plurality and partitioning a memory device into partitions comprising one or more blocks of memory, one partition being associated with each instance of the peripheral application. The method then reserves a range of memory addresses for storage of a data pointer related to each peripheral device of the plurality, and initializes each of the plurality of peripheral devices.
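    The following is a rough sketch of the partitioning idea described in the patent abstract, not the patented implementation: memory is divided into one range per peripheral instance plus a reserved slot per instance for a data pointer, so that unmodified single-peripheral application code can be instantiated once per device against its own range. All names and the addressing scheme are illustrative assumptions.

```python
class MemoryPool:
    """Memory divided into equal ranges, one range per peripheral instance,
    plus one reserved slot per instance for a data pointer (as described in
    the abstract); addresses here are simply list indices."""
    def __init__(self, total_blocks, num_devices):
        self.blocks_per_dev = total_blocks // num_devices
        self.mem = [0] * total_blocks
        self.pointers = [None] * num_devices          # reserved pointer range

    def base(self, dev):
        return dev * self.blocks_per_dev

class PeripheralApp:
    """Stand-in for unmodified single-peripheral application code: it only
    ever sees 'its' address range through the base offset it is given."""
    def __init__(self, pool, dev):
        self.pool, self.dev = pool, dev

    def write(self, offset, value):
        self.pool.mem[self.pool.base(self.dev) + offset] = value

    def read(self, offset):
        return self.pool.mem[self.pool.base(self.dev) + offset]

pool = MemoryPool(total_blocks=16, num_devices=4)
apps = [PeripheralApp(pool, d) for d in range(4)]     # one app instance per device
for d, app in enumerate(apps):
    app.write(0, 100 + d)                             # same code, disjoint ranges
print([app.read(0) for app in apps])                  # [100, 101, 102, 103]
```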

  14. Influence of memory effect on the state-of-charge estimation of large-format Li-ion batteries based on LiFePO4 cathode

    NASA Astrophysics Data System (ADS)

    Shi, Wei; Wang, Jiulin; Zheng, Jianming; Jiang, Jiuchun; Viswanathan, Vilayanur; Zhang, Ji-Guang

    2016-04-01

    In this work, we systematically investigated the influence of the memory effect of LiFePO4 cathodes in large-format full batteries. The electrochemical performance of the electrodes used in these batteries was also investigated separately in half-cells to reveal their intrinsic properties. We noticed that the memory effect of LiFePO4/graphite cells depends not only on the maximum state of charge reached during the memory writing process, but is also affected by the depth of discharge reached during the memory writing process. In addition, the voltage deviation in a LiFePO4/graphite full battery is more complex than in a LiFePO4/Li half-cell, especially for a large-format battery, which exhibits a significant current variation in the region near its terminals. Therefore, the memory effect should be taken into account in advanced battery management systems to further extend the long-term cycling stabilities of Li-ion batteries using LiFePO4 cathodes.

  15. Role of indigenous herbs in the management of Alzheimer's disease

    PubMed Central

    Nishteswar, K.; Joshi, Hemang; Karra, Rahul Dutt

    2014-01-01

    Ageing is a natural phenomenon, and the decline in physiological and structural function is unavoidable in the advancing years of human life. When such degenerative changes occur in the brain, they may lead to dementia and other memory-related conditions. The Ayurvedic classics identified the importance of the higher faculties dealing with memory and introduced a separate group of drugs, namely Medhya Rasayanas. Regular intake of such drugs will help to prevent the premature onset of degenerative changes in the brain. Ayurveda can play a useful role in the management of such geriatric conditions. The current review has been done with a view to updating the documented Ayurvedic therapeutic modalities for certain geriatric conditions suggested by the Ayurvedic classics in the management of diseases called Vātavyādhi (nervous system disorders), which also include conditions related to memory functions. Recent studies have started validating the claims recorded in Ayurvedic texts. The pathogenesis and remedies for Vātavyādhi documented in Ayurvedic classics have been reviewed with special emphasis on disorders related to dementia. A review of recent research on the herbs mentioned in the management of vāta disorders, including dementia, has been done to understand their role in the management of Alzheimer's disease (AD). There are many herbs of ethno-medicinal origin studied experimentally for their potential in the treatment of AD. A judicious combination of modern research methodology and Ayurvedic principles could go a long way in the management and care of AD, which is going to be a heavy burden on society in the future. PMID:25737604

  16. RECALL: A Management Information Retrieval System for the Wang 2200

    DTIC Science & Technology

    1976-11-01

    Appendix B, Table B-1, Variable Definitions (excerpt): R$(1-60), page of blocks; R1, present block index within page; R0, present page index in memory; R2, number of blocks allowed in memory (60); R3, last-used character index in present block; ... number of sectors per page; ... number of range pairs; starting record index; ending ...

  17. Healthcare knowledge management through building and operationalising healthcare enterprise memory.

    PubMed

    Cheah, Y N; Abidi, S S

    1999-01-01

    In this paper we suggest that the healthcare enterprise needs to be more conscious of its vast knowledge resources vis-à-vis the exploitation of knowledge management techniques to efficiently manage its knowledge. The development of healthcare enterprise memory is suggested as a solution, together with a novel approach advocating the operationalisation of healthcare enterprise memories leading to the modelling of healthcare processes for strategic planning. As an example, we present a simulation of Service Delivery Time in a hospital's OPD.

  18. A mobile system for the improvement of heart failure management: Evaluation of a prototype.

    PubMed

    Haynes, Sarah C; Kim, Katherine K

    2017-01-01

    Management of heart failure is complex, often involving interaction with multiple providers, monitoring of symptoms, and numerous medications. Employing principles of user-centered design, we developed a high-fidelity prototype of a mobile system for heart failure self-management and care coordination. Participants, including both heart failure patients and health care providers, tested the mobile system during a one-hour one-on-one session with a facilitator. The facilitator interviewed participants about the strengths and weaknesses of the prototype, necessary features, and willingness to use the technology. We performed a qualitative content analysis using the transcripts of these interviews. Fourteen distinct themes were identified in the analysis. Of these themes, integration, technology literacy, memory, and organization were the most common. Privacy was the least common theme. Our study suggests that this integration is essential for adoption of a mobile system for chronic disease management and care coordination.

  19. New data acquisition system for the focal plane polarimeter of the Grand Raiden spectrometer

    NASA Astrophysics Data System (ADS)

    Tamii, A.; Sakaguchi, H.; Takeda, H.; Yosoi, M.; Akimune, H.; Fujiwara, M.; Ogata, H.; Tanaka, M.; Togawa, H.

    1996-10-01

    This paper describes a new data acquisition system for the focal plane polarimeter of the Grand Raiden spectrometer at the Research Center for Nuclear Physics (RCNP) in Osaka, Japan. Data are acquired by a Creative Electronic Systems (CES) Starburst, which is a CAMAC auxiliary crate controller equipped with a Digital Equipment Corporation (DEC) J11 microprocessor. The data on the Starburst are transferred to a VME single-board computer. A VME reflective memory module broadcasts the data to other systems through a fiber-optic link. A data transfer rate of 2.0 Mbytes/s between VME modules has been achieved by the reflective memories. This rate includes the overhead of buffer management. The overall transfer rate, however, is limited by the performance of the Starburst to about 160 Kbytes/s at maximum. In order to further improve the system performance, we developed a new readout module called the Rapid Data Transfer Module (RDTM). RDTMs transfer data from LeCroy PCOS III's or 4298's, and FERA/FERET's, directly to CES 8170 High Speed Memories (HSM) in VME crates. The data transfer rate of the RDTM from PCOS III's to the HSM is about 4 Mbytes/s.

  20. The Fritz Roethlisberger Memorial Award Goes to "Using Leadered Groups in Organizational Behavior and Management Survey Courses"

    ERIC Educational Resources Information Center

    Amoroso, Lisa M.; Loyd, Denise Lewin; Hoobler, Jenny M.

    2012-01-01

    The Fritz J. Roethlisberger Memorial Award for the best article in the 2011 "Journal of Management Education" goes to Rae Andre for her article, Using Leadered Groups in Organizational Behavior and Management Survey Courses ("Journal of Management Education," Volume 35, Number 5, pp. 596-619). In keeping with Roethlisberger's legacy, this year's…

  1. Three-Dimensional Visualization with Large Data Sets: A Simulation of Spreading Cortical Depression in Human Brain

    PubMed Central

    Ertürk, Korhan Levent; Şengül, Gökhan

    2012-01-01

    We developed 3D simulation software for human organs/tissues, together with a database to store the related data, a data management system to manage the created data, and a metadata system for the management of the data. This approach provides two benefits: first, the developed system does not require keeping the patient's/subject's medical images on the system, which reduces memory usage; second, the system provides 3D simulation and modification options, which help clinicians use the necessary tools for visualization and modification operations. The developed system is tested in a case study in which a 3D human brain model is created and simulated from 2D MRI images of a human brain, and we extended the 3D model to include the spreading cortical depression (SCD) wave front, an electrical phenomenon that is believed to cause migraine. PMID:23258956

  2. How to Program the Principal's Office for the Computer Age.

    ERIC Educational Resources Information Center

    Frankel, Steven

    1983-01-01

    Explains why principals' offices need computers and discusses the characteristics of inexpensive personal business computers, including their operating systems, disk drives, memory, and compactness. Reviews software available for word processing, accounting, database management, and communications, and compares the Kaypro II, Morrow, and Osborne I…

  3. Global-view coefficients: a data management solution for parallel quantum Monte Carlo applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Niu, Qingpeng; Dinan, James; Tirukkovalur, Sravya

    2016-01-28

    Quantum Monte Carlo (QMC) applications perform simulation with respect to an initial state of the quantum mechanical system, which is often captured by using a cubic B-spline basis. This representation is stored as a read-only table of coefficients and accesses to the table are generated at random as part of the Monte Carlo simulation. Current QMC applications, such as QWalk and QMCPACK, replicate this table at every process or node, which limits scalability because increasing the number of processors does not enable larger systems to be run. We present a partitioned global address space approach to transparently managing this data using Global Arrays in a manner that allows the memory of multiple nodes to be aggregated. We develop an automated data management system that significantly reduces communication overheads, enabling new capabilities for QMC codes. Experimental results with QWalk and QMCPACK demonstrate the effectiveness of the data management system.
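    The paper uses Global Arrays; the sketch below does not use that library and only mimics the addressing logic of a global-view, block-partitioned, read-only coefficient table: a global index is translated to an owning node and local offset instead of replicating the whole table on every node. Class and method names are assumptions for illustration.

```python
class GlobalCoefficientTable:
    """Read-only table block-partitioned across the memories of several nodes.
    A lookup maps a global index to its owning node and local offset, so total
    capacity grows with the number of nodes instead of being limited by what
    one node can hold when the table is fully replicated."""
    def __init__(self, coefficients, num_nodes):
        n = len(coefficients)
        self.block = -(-n // num_nodes)                 # ceiling division
        # node_memory[i] models the slice physically resident on node i
        self.node_memory = [coefficients[i * self.block:(i + 1) * self.block]
                            for i in range(num_nodes)]

    def locate(self, global_index):
        return global_index // self.block, global_index % self.block

    def get(self, global_index):
        node, offset = self.locate(global_index)
        # in the real system this would be a one-sided remote read (e.g. a GA get)
        return self.node_memory[node][offset]

coeffs = [0.1 * i for i in range(10)]
table = GlobalCoefficientTable(coeffs, num_nodes=4)
print(table.locate(7))    # (2, 1): global index 7 lives on node 2 at local offset 1
print(table.get(7))       # ~0.7
```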

  4. Design and realization of flash translation layer in tiny embedded system

    NASA Astrophysics Data System (ADS)

    Ren, Xiaoping; Sui, Chaoya; Luo, Zhenghua; Cao, Wenji

    2018-05-01

    We design a NAND Flash storage solution for tiny embedded devices based on an in-depth study of the characteristics of the NAND Flash widely used in embedded devices, in order to adapt to the trend of intelligent interconnection and to solve the problem of storing large data volumes in tiny embedded systems. The hierarchical structure and functional purposes of the system are introduced. The design and realization of address mapping, error correction, bad block management, wear leveling, garbage collection, and other algorithms in the flash translation layer are described in detail. The NAND Flash driver and management layer are realized on an STM32 microcontroller, verifying the effectiveness and feasibility of the design.
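    As a rough illustration of the flash translation layer functions listed above, the sketch below implements only page-level address mapping with out-of-place updates and a simple wear-aware block choice; error correction, bad block management, and real garbage collection are omitted. It is a toy model under stated assumptions, not the paper's STM32 implementation.

```python
class PageMappedFTL:
    """Minimal flash translation layer sketch: logical pages are mapped to
    physical pages; a write always goes to a fresh page chosen from the
    least-worn block with free space (wear leveling), and the old page is
    marked invalid so a garbage collector could later reclaim it."""
    def __init__(self, num_blocks, pages_per_block):
        self.pages_per_block = pages_per_block
        self.flash = [[None] * pages_per_block for _ in range(num_blocks)]
        self.erase_counts = [0] * num_blocks
        self.next_free = [0] * num_blocks          # next unwritten page per block
        self.map = {}                              # logical page -> (block, page)
        self.invalid = set()                       # stale physical pages

    def _allocate(self):
        # wear leveling: prefer the least-erased block that still has room
        candidates = [b for b in range(len(self.flash))
                      if self.next_free[b] < self.pages_per_block]
        b = min(candidates, key=lambda blk: self.erase_counts[blk])
        p = self.next_free[b]
        self.next_free[b] += 1
        return b, p

    def write(self, lpn, data):
        if lpn in self.map:
            self.invalid.add(self.map[lpn])        # out-of-place update
        b, p = self._allocate()
        self.flash[b][p] = data
        self.map[lpn] = (b, p)

    def read(self, lpn):
        b, p = self.map[lpn]
        return self.flash[b][p]

ftl = PageMappedFTL(num_blocks=4, pages_per_block=2)
ftl.write(0, "v1")
ftl.write(0, "v2")                                 # rewrite: old page becomes stale
print(ftl.read(0), ftl.map[0], sorted(ftl.invalid))  # v2 (0, 1) [(0, 0)]
```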

  5. Automatic HDL firmware generation for FPGA-based reconfigurable measurement and control systems with mezzanines in FMC standard

    NASA Astrophysics Data System (ADS)

    Wojenski, Andrzej; Kasprowicz, Grzegorz; Pozniak, Krzysztof T.; Romaniuk, Ryszard

    2013-10-01

    The paper describes a concept of automatic firmware generation for reconfigurable measurement systems that use FPGA devices and measurement cards in the FMC standard. The following topics are described in detail: automatic HDL code generation for FPGA devices, automatic implementation of communication interfaces, HDL drivers for measurement cards, automatic serial connection between multiple measurement backplane boards, automatic building of the memory map (address space), and management of the automatically generated firmware. The presented solutions are required in many advanced measurement systems, such as Beam Position Monitors or GEM detectors. This work is part of a wider project on automatic firmware generation and management of reconfigurable systems. The solutions presented in this paper build on a previous SPIE publication.

  6. Developing a Physician Management & Leadership Program (PMLP) in Newfoundland and Labrador.

    PubMed

    Maddalena, Victor; Fleet, Lisa

    2015-01-01

    This article aims to document the process the province of Newfoundland and Labrador used to develop an innovative Physician Management and Leadership Program (PMLP). The PMLP is a collaborative initiative among Memorial University (Faculty of Medicine and Faculty of Business), the Government of Newfoundland and Labrador, and the Regional Health Authorities. As challenges facing health-care systems become more complex there is a growing need for management and leadership training for physicians. Memorial University Faculty of Medicine and the Gardiner Centre in the Faculty of Business in partnership with Regional Health Authorities and the Government of Newfoundland and Labrador identified the need for a leadership and management education program for physician leaders. A provincial needs assessment of physician leaders was conducted to identify educational needs to fill this identified gap. A Steering Committee was formed to guide the design and implementation and monitor delivery of the 10 module Physician Management and Leadership Program (PMLP). Designing management and leadership education programs to serve physicians who practice in a large, predominately rural geographic area can be challenging and requires efficient use of available resources and technology. While there are many physician management and leadership programs available in Canada and abroad, the PMLP was designed to meet the specific educational needs of physician leaders in Newfoundland and Labrador.

  7. Advanced Health Management System for the Space Shuttle Main Engine

    NASA Technical Reports Server (NTRS)

    Davidson, Matt; Stephens, John; Rodela, Chris

    2006-01-01

    Pratt & Whitney Rocketdyne, Inc., in cooperation with NASA-Marshall Space Flight Center (MSFC), has developed a new Advanced Health Management System (AHMS) controller for the Space Shuttle Main Engine (SSME) that will increase the probability of successfully placing the shuttle into the intended orbit and increase the safety of Space Transportation System (STS) launches. The AHMS is an upgrade of the current Block II engine controller whose primary component is an improved vibration monitoring system, called the Real-Time Vibration Monitoring System (RTVMS), that can effectively and reliably monitor the state of the high-pressure turbomachinery and provide engine protection through a new synchronous vibration redline which enables engine shutdown if the vibration exceeds predetermined thresholds. The introduction of this system required improvements and modifications to the Block II controller, such as redesigning the Digital Computer Unit (DCU) memory and the Flight Accelerometer Safety Cut-Off System (FASCOS) circuitry, eliminating the existing memory retention batteries, installing the Digital Signal Processor (DSP) technology, and installing a High Speed Serial Interface (HSSI) with accompanying outside world connectors. Test stand hot-fire testing along with lab testing have verified successful implementation, which is expected to reduce the probability of catastrophic engine failures during the shuttle ascent phase and improve safety by about 23% according to the Quantitative Risk Assessment System (QRAS), leading to a safer and more reliable SSME.

  8. Memory Compression Techniques for Network Address Management in MPI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Yanfei; Archer, Charles J.; Blocksome, Michael

    MPI allows applications to treat processes as a logical collection of integer ranks for each MPI communicator, while internally translating these logical ranks into actual network addresses. In current MPI implementations the management and lookup of such network addresses use memory sizes that are proportional to the number of processes in each communicator. In this paper, we propose a new mechanism, called AV-Rankmap, for managing such translation. AV-Rankmap takes advantage of logical patterns in rank-address mapping that most applications naturally tend to have, and it exploits the fact that some parts of network address structures are naturally more performance critical than others. It uses this information to compress the memory used for network address management. We demonstrate that AV-Rankmap can achieve performance similar to or better than that of other MPI implementations while using significantly less memory.
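    The actual AV-Rankmap data structure is not described in enough detail here to reproduce; the sketch below only illustrates the general idea of exploiting logical patterns in the rank-to-address mapping: if the mapping happens to be an affine function of the rank, store just a base and stride instead of a table proportional to the number of processes, and fall back to the explicit table otherwise. Function names and the affine pattern are assumptions for illustration.

```python
def compress_rankmap(addresses):
    """If the rank -> network-address mapping is affine (addr = base + rank*stride),
    store only (base, stride, count); otherwise keep the full table. This mirrors
    the general idea of exploiting logical patterns to shrink O(P) address storage."""
    if len(addresses) >= 2:
        base, stride = addresses[0], addresses[1] - addresses[0]
        if all(addresses[r] == base + r * stride for r in range(len(addresses))):
            return ("affine", base, stride, len(addresses))
    return ("table", list(addresses))

def lookup(compressed, rank):
    if compressed[0] == "affine":
        _, base, stride, n = compressed
        assert 0 <= rank < n
        return base + rank * stride
    return compressed[1][rank]

# A regular communicator: every other process, starting at address 1000.
regular = [1000 + 2 * r for r in range(1024)]
c = compress_rankmap(regular)
print(c[0], lookup(c, 513))        # affine 2026

# An irregular communicator falls back to the explicit table.
irregular = [7, 3, 42, 11]
c = compress_rankmap(irregular)
print(c[0], lookup(c, 2))          # table 42
```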

  9. A Compute Capable SSD Architecture for Next-Generation Non-volatile Memories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De, Arup

    2014-01-01

    Existing storage technologies (e.g., disks and flash) are failing to cope with processor and main memory speeds and are limiting the overall performance of many large-scale I/O- or data-intensive applications. Emerging fast byte-addressable non-volatile memory (NVM) technologies, such as phase-change memory (PCM), spin-transfer torque memory (STTM), and memristor, are very promising and are approaching DRAM-like performance with lower power consumption and higher density as process technology scales. These new memories are narrowing down the performance gap between the storage and the main memory and are putting forward challenging problems for the existing SSD architecture, I/O interface (e.g., SATA, PCIe), and software. This dissertation addresses those challenges and presents a novel SSD architecture called XSSD. XSSD offloads computation into storage to exploit fast NVMs and reduce the redundant data traffic across the I/O bus. XSSD offers a flexible RPC-based programming framework that developers can use for application development on the SSD without dealing with the complication of the underlying architecture and communication management. We have built a prototype of XSSD on the BEE3 FPGA prototyping system. We implement various data-intensive applications and achieve speedups and energy efficiencies of 1.5-8.9 and 1.7-10.27, respectively. This dissertation also compares XSSD with previous work on intelligent storage and intelligent memory. The existing ecosystem and these new enabling technologies make this system more viable than earlier ones.

  10. Laboratory testing for the diagnosis, evaluation, and management of systemic lupus erythematosus: Still more questions for the next generations: A Tribute and Thanks and in Memory of my mentor: Henry G. Kunkel.

    PubMed

    Schur, Peter H

    2016-11-01

    This paper is a review, a personal memoir, a tribute to Henry Kunkel, and a critique regarding laboratory tests used for the evaluation, diagnosis, and understanding of Autoimmune Rheumatic Diseases, in particular systemic lupus erythematosus (SLE). Copyright © 2016 Elsevier Inc. All rights reserved.

  11. An object oriented extension to CLIPS

    NASA Technical Reports Server (NTRS)

    Sobkowicz, Clifford

    1990-01-01

    A presentation of a software sub-system developed to augment the C Language Integrated Production System (CLIPS) with facilities for object-oriented knowledge representation. Functions are provided to define classes, instantiate objects, access attributes, and assert object-related facts. This extension is implemented via the CLIPS user function interface and does not require modification of any CLIPS code. It does rely on internal CLIPS functions for memory management and symbol representation.

  12. Evolution of hemispheric specialisation of antagonistic systems of management of the body's energy resources.

    PubMed

    Braun, Claude M J

    2007-09-01

    Excellent and rich reviews of lateralised behaviour in animals have recently been published, indexing renewed interest in biological theorising about hemispheric specialisation and yielding rich theory. The present review proposes a new account of the evolution of hemispheric specialisation, a primitive system of "management of the body's energy resources". This model is distinct from traditionally evoked cognitive science categories such as verbal/spatial, analytic/holistic, etc., or the current dominant neuroethological model proposing that the key is approach/avoidance behaviour. Specifically, I show that autonomic, immune, psychomotor, motivational, perceptual, and memory systems are similarly and coherently specialised in the brain hemispheres in rodents and man. This energy resource management model, extended to human neuropsychology, is termed here the "psychic tonus" model of hemispheric specialisation.

  13. Collective memory in primate conflict implied by temporal scaling collapse.

    PubMed

    Lee, Edward D; Daniels, Bryan C; Krakauer, David C; Flack, Jessica C

    2017-09-01

    In biological systems, prolonged conflict is costly, whereas contained conflict permits strategic innovation and refinement. Causes of variation in conflict size and duration are not well understood. We use a well-studied primate society model system to study how conflicts grow. We find conflict duration is a 'first to fight' growth process that scales superlinearly with the number of possible pairwise interactions. This is in contrast with a 'first to fail' process that characterizes peaceful durations. Rescaling conflict distributions reveals a universal curve, showing that the typical time scale of correlated interactions exceeds nearly all individual fights. This temporal correlation implies collective memory across pairwise interactions beyond those assumed in standard models of contagion growth or iterated evolutionary games. By accounting for memory, we make quantitative predictions for interventions that mitigate or enhance the spread of conflict. Managing conflict involves balancing the efficient use of limited resources with an intervention strategy that allows for conflict while keeping it contained and controlled. © 2017 The Author(s).

  14. Optical mass memory system (AMM-13). AMM-13 system segment specification

    NASA Technical Reports Server (NTRS)

    Bailey, G. A.

    1980-01-01

    The performance, design, development, and test requirements for an optical mass data storage and retrieval system prototype (AMM-13) are established. This system interfaces to other system segments of the NASA End-to-End Data System via the Data Base Management System segment and is designed to have a storage capacity of 10 to the 13th power bits (10 to the 12th power bits on line). The major functions of the system include control, input and output, recording of ingested data, fiche processing/replication and storage and retrieval.

  15. Common chromosomal fragile sites (CFS) may be involved in normal and traumatic cognitive stress memory consolidation and altered nervous system immunity.

    PubMed

    Gericke, G S

    2010-05-01

    Previous reports of specific patterns of increased fragility at common chromosomal fragile sites (CFS) found in association with certain neurobehavioural disorders did not attract attention at the time due to a shift towards molecular approaches to delineate neuropsychiatric disorder candidate genes. Links with miRNA, altered methylation and the origin of copy number variation indicate that CFS region characteristics may be part of chromatinomic mechanisms that are increasingly linked with neuroplasticity and memory. Current reports of large-scale double-stranded DNA breaks in differentiating neurons and evidence of ongoing DNA demethylation of specific gene promoters in adult hippocampus may shed new light on the dynamic epigenetic changes that are increasingly appreciated as contributing to long-term memory consolidation. The expression of immune recombination activating genes in key stress-induced memory regions suggests the adoption by the brain of this ancient pattern recognition and memory system to establish a structural basis for long-term memory through controlled chromosomal breakage at highly specific genomic regions. It is furthermore considered that these mechanisms for management of epigenetic information related to stress memory could be linked, in some instances, with the transfer of the somatically acquired information to the germline. Here, rearranged sequences can be subjected to further selection and possible eventual retrotranscription to become part of the more stable coding machinery if proven to be crucial for survival and reproduction. While linkage of cognitive memory with stress and fear circuitry and memory establishment through structural DNA modification is proposed as a normal process, inappropriate activation of immune-like genomic rearrangement processes through traumatic stress memory may have the potential to lead to undesirable activation of neuro-inflammatory processes. These theories could have a significant impact on the interpretation of risks posed by heredity and the environment and the search for neuropsychiatric candidate genes.

  16. Magellan spacecraft and memory state tracking: Lessons learned, future thoughts

    NASA Technical Reports Server (NTRS)

    Bucher, Allen W.

    1993-01-01

    Numerous studies have been dedicated to improving the two main elements of Spacecraft Mission Operations: Command and Telemetry. As a result, not much attention has been given to other tasks that can become tedious, repetitive, and error prone. One such task is Spacecraft and Memory State Tracking, the process by which the status of critical spacecraft components, parameters, and the contents of on-board memory are managed on the ground to maintain knowledge of spacecraft and memory states for future testing, anomaly investigation, and on-board memory reconstruction. The task of Spacecraft and Memory State Tracking has traditionally been a manual task allocated to Mission Operations Procedures. During nominal Mission Operations this job is tedious and error prone. Because the task is not complex and can be accomplished manually, the worth of a sophisticated software tool is often questioned. However, in the event of an anomaly which alters spacecraft components autonomously or a memory anomaly such as a corrupt memory or flight software error, an accurate ground image that can be reconstructed quickly is a priceless commodity. This study explores the process of Spacecraft and Memory State Tracking used by the Magellan Spacecraft Team highlighting its strengths as well as identifying lessons learned during the primary and extended missions, two memory anomalies, and other hardships encountered due to incomplete knowledge of spacecraft states. Ideas for future state tracking tools that require minimal user interaction and are integrated into the Ground Data System will also be discussed.

  17. Magellan spacecraft and memory state tracking: Lessons learned, future thoughts

    NASA Astrophysics Data System (ADS)

    Bucher, Allen W.

    1993-03-01

    Numerous studies have been dedicated to improving the two main elements of Spacecraft Mission Operations: Command and Telemetry. As a result, not much attention has been given to other tasks that can become tedious, repetitive, and error prone. One such task is Spacecraft and Memory State Tracking, the process by which the status of critical spacecraft components, parameters, and the contents of on-board memory are managed on the ground to maintain knowledge of spacecraft and memory states for future testing, anomaly investigation, and on-board memory reconstruction. The task of Spacecraft and Memory State Tracking has traditionally been a manual task allocated to Mission Operations Procedures. During nominal Mission Operations this job is tedious and error prone. Because the task is not complex and can be accomplished manually, the worth of a sophisticated software tool is often questioned. However, in the event of an anomaly which alters spacecraft components autonomously or a memory anomaly such as a corrupt memory or flight software error, an accurate ground image that can be reconstructed quickly is a priceless commodity. This study explores the process of Spacecraft and Memory State Tracking used by the Magellan Spacecraft Team highlighting its strengths as well as identifying lessons learned during the primary and extended missions, two memory anomalies, and other hardships encountered due to incomplete knowledge of spacecraft states. Ideas for future state tracking tools that require minimal user interaction and are integrated into the Ground Data System will also be discussed.

  18. Operating systems. [of computers

    NASA Technical Reports Server (NTRS)

    Denning, P. J.; Brown, R. L.

    1984-01-01

    A computer operating system creates a hierarchy of levels of abstraction, so that at a given level all details concerning lower levels can be ignored. This hierarchical structure separates functions according to their complexity, characteristic time scale, and level of abstraction. The lowest levels include the system's hardware; concepts associated explicitly with the coordination of multiple tasks appear at intermediate levels, which conduct 'primitive processes'. The software semaphore is the mechanism controlling primitive processes that must be synchronized. At higher levels lie, in rising order, access to the secondary storage devices of a particular machine, a 'virtual memory' scheme for managing the main and secondary memories, communication between processes by way of a mechanism called a 'pipe', access to external input and output devices, and a hierarchy of directories cataloguing the hardware and software objects to which access must be controlled.
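    As a small, self-contained illustration of the semaphore mechanism mentioned above (not code from the reviewed article), the sketch below implements a classic counting semaphore with a lock and condition variable and uses it to synchronize a producer and a consumer.

```python
import threading

class CountingSemaphore:
    """Classic counting semaphore: acquire() blocks while the count is zero,
    release() increments the count and wakes a waiter. This is the low-level
    mechanism an operating system uses to synchronize primitive processes."""
    def __init__(self, count=0):
        self._count = count
        self._cond = threading.Condition()

    def acquire(self):
        with self._cond:
            while self._count == 0:
                self._cond.wait()
            self._count -= 1

    def release(self):
        with self._cond:
            self._count += 1
            self._cond.notify()

items = []
filled = CountingSemaphore(0)

def producer():
    for i in range(3):
        items.append(i)
        filled.release()          # signal: one more item is available

def consumer():
    for _ in range(3):
        filled.acquire()          # block until an item exists
        print("consumed", items.pop(0))

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t2.start(); t1.start()
t1.join(); t2.join()
```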

  19. Memories for life: a review of the science and technology

    PubMed Central

    O'Hara, Kieron; Morris, Richard; Shadbolt, Nigel; Hitch, Graham J; Hall, Wendy; Beagrie, Neil

    2006-01-01

    This paper discusses scientific, social and technological aspects of memory. Recent developments in our understanding of memory processes and mechanisms, and their digital implementation, have placed the encoding, storage, management and retrieval of information at the forefront of several fields of research. At the same time, the divisions between the biological, physical and the digital worlds seem to be dissolving. Hence, opportunities for interdisciplinary research into memory are being created, between the life sciences, social sciences and physical sciences. Such research may benefit from immediate application into information management technology as a testbed. The paper describes one initiative, memories for life, as a potential common problem space for the various interested disciplines. PMID:16849265

  20. Real-Time Rocket/Vehicle System Integrated Health Management Laboratory For Development and Testing of Health Monitoring/Management Systems

    NASA Technical Reports Server (NTRS)

    Aguilar, R.

    2006-01-01

    Pratt & Whitney Rocketdyne has developed a real-time engine/vehicle system integrated health management laboratory, or testbed, for developing and testing health management system concepts. This laboratory simulates components of an integrated system such as the rocket engine, rocket engine controller, vehicle or test controller, as well as a health management computer on separate general purpose computers. These general purpose computers can be replaced with more realistic components such as actual electronic controllers and valve actuators for hardware-in-the-loop simulation. Various engine configurations and propellant combinations are available. Fault or failure insertion capability on-the-fly using direct memory insertion from a user console is used to test system detection and response. The laboratory is currently capable of simulating the flow-path of a single rocket engine but work is underway to include structural and multiengine simulation capability as well as a dedicated data acquisition system. The ultimate goal is to simulate as accurately and realistically as possible the environment in which the health management system will operate including noise, dynamic response of the engine/engine controller, sensor time delays, and asynchronous operation of the various components. The rationale for the laboratory is also discussed including limited alternatives for demonstrating the effectiveness and safety of a flight system.

  1. Case Study in Corporate Memory Recovery: Hanford Tank Farms Miscellaneous Underground Waste Storage Tanks - 15344

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Washenfelder, D. J.; Johnson, J. M.; Turknett, J. C.

    In addition to managing the 177 underground waste storage tanks containing 212,000 m3 (56 million gal) of radioactive waste at the U. S. Department of Energy’s Hanford Site 200 Area Tank Farms, Washington River Protection Solutions LLC is responsible for managing numerous small catch tanks and special surveillance facilities. These are collectively known as “MUSTs”: Miscellaneous Underground Storage Tanks. The MUSTs typically collected drainage and flushes during waste transfer system piping changes; special surveillance facilities supported Tank Farm processes including post-World War II uranium recovery and later fission product recovery from tank wastes. Most were removed from service following deactivation of the single-shell tank system in 1980 and stabilized by pumping the remaining liquids from them. The MUSTs were isolated by blanking connecting transfer lines and adding weatherproofing to prevent rainwater entry. Over the next 30 years MUST operating records were dispersed into large electronic databases or transferred to the National Archives Regional Center in Seattle, Washington. During 2014 an effort to reacquire the historical bases for the MUSTs’ published waste volumes was undertaken. Corporate Memory Recovery from a variety of record sources allowed waste volumes to be initially determined for 21 MUSTs, and waste volumes to be adjusted for 37 others. Precursors and symptoms of Corporate Memory Loss were identified in the context of MUST records recovery.

  2. Cerebellar models of associative memory: Three papers from IEEE COMPCON spring 1989

    NASA Technical Reports Server (NTRS)

    Raugh, Michael R. (Editor)

    1989-01-01

    Three papers are presented on the following topics: (1) a cerebellar-model associative memory as a generalized random-access memory; (2) theories of the cerebellum - two early models of associative memory; and (3) intelligent network management and functional cerebellum synthesis.

  3. System and method for memory allocation in a multiclass memory system

    DOEpatents

    Loh, Gabriel; Meswani, Mitesh; Ignatowski, Michael; Nutter, Mark

    2016-06-28

    A system for memory allocation in a multiclass memory system includes a processor coupleable to a plurality of memories sharing a unified memory address space, and a library store to store a library of software functions. The processor identifies a type of a data structure in response to a memory allocation function call to the library for allocating memory to the data structure. Using the library, the processor allocates portions of the data structure among multiple memories of the multiclass memory system based on the type of the data structure.
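    The sketch below is only a toy rendering of the idea summarized in the patent abstract, not the patented method: an allocation call inspects the declared type of the data structure and places it in one of two memory classes sharing a unified address space. The class names, policy table, and address ranges are invented for illustration.

```python
class MulticlassAllocator:
    """Toy allocator over a unified address space split into two memory
    classes. The allocation call looks at the declared data-structure type
    and chooses a class: small, latency-critical structures go to 'fast'
    memory, large streaming buffers go to 'bulk' memory. The policy and all
    names are illustrative only."""
    POLICY = {"hash_table": "fast", "tree_index": "fast",
              "frame_buffer": "bulk", "log_buffer": "bulk"}

    def __init__(self, fast_bytes, bulk_bytes):
        self.capacity = {"fast": fast_bytes, "bulk": bulk_bytes}
        self.next_addr = {"fast": 0x0000_0000, "bulk": 0x1000_0000}  # unified space

    def alloc(self, struct_type, size):
        cls = self.POLICY.get(struct_type, "bulk")
        if size > self.capacity[cls]:
            cls = "bulk" if cls == "fast" else "fast"   # crude spill-over
        addr = self.next_addr[cls]
        self.next_addr[cls] += size
        self.capacity[cls] -= size
        return cls, hex(addr)

a = MulticlassAllocator(fast_bytes=1 << 20, bulk_bytes=1 << 30)
print(a.alloc("hash_table", 4096))        # ('fast', '0x0')
print(a.alloc("frame_buffer", 1 << 22))   # ('bulk', '0x10000000')
```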

  4. Weighing the value of memory loss in the surgical evaluation of left temporal lobe epilepsy: A decision analysis

    PubMed Central

    Akama-Garren, Elliot H.; Bianchi, Matt T.; Leveroni, Catherine; Cole, Andrew J.; Cash, Sydney S.; Westover, M. Brandon

    2016-01-01

    SUMMARY Objectives Anterior temporal lobectomy is curative for many patients with disabling medically refractory temporal lobe epilepsy, but carries an inherent risk of disabling verbal memory loss. Although accurate prediction of iatrogenic memory loss is becoming increasingly possible, it remains unclear how much weight such predictions should have in surgical decision making. Here we aim to create a framework that facilitates a systematic and integrated assessment of the relative risks and benefits of surgery versus medical management for patients with left temporal lobe epilepsy. Methods We constructed a Markov decision model to evaluate the probabilistic outcomes and associated health utilities associated with choosing to undergo a left anterior temporal lobectomy versus continuing with medical management for patients with medically refractory left temporal lobe epilepsy. Three base-cases were considered, representing a spectrum of surgical candidates encountered in practice, with varying degrees of epilepsy-related disability and potential for decreased quality of life in response to post-surgical verbal memory deficits. Results For patients with moderately severe seizures and moderate risk of verbal memory loss, medical management was the preferred decision, with increased quality-adjusted life expectancy. However, the preferred choice was sensitive to clinically meaningful changes in several parameters, including quality of life impact of verbal memory decline, quality of life with seizures, mortality rate with medical management, probability of remission following surgery, and probability of remission with medical management. Significance Our decision model suggests that for patients with left temporal lobe epilepsy, quantitative assessment of risk and benefit should guide recommendation of therapy. In particular, risk for and potential impact of verbal memory decline should be carefully weighed against the degree of disability conferred by continued seizures on a patient-by-patient basis. PMID:25244498

  5. Weighing the value of memory loss in the surgical evaluation of left temporal lobe epilepsy: a decision analysis.

    PubMed

    Akama-Garren, Elliot H; Bianchi, Matt T; Leveroni, Catherine; Cole, Andrew J; Cash, Sydney S; Westover, M Brandon

    2014-11-01

    Anterior temporal lobectomy is curative for many patients with disabling medically refractory temporal lobe epilepsy, but carries an inherent risk of disabling verbal memory loss. Although accurate prediction of iatrogenic memory loss is becoming increasingly possible, it remains unclear how much weight such predictions should have in surgical decision making. Here we aim to create a framework that facilitates a systematic and integrated assessment of the relative risks and benefits of surgery versus medical management for patients with left temporal lobe epilepsy. We constructed a Markov decision model to evaluate the probabilistic outcomes and associated health utilities associated with choosing to undergo a left anterior temporal lobectomy versus continuing with medical management for patients with medically refractory left temporal lobe epilepsy. Three base-cases were considered, representing a spectrum of surgical candidates encountered in practice, with varying degrees of epilepsy-related disability and potential for decreased quality of life in response to post-surgical verbal memory deficits. For patients with moderately severe seizures and moderate risk of verbal memory loss, medical management was the preferred decision, with increased quality-adjusted life expectancy. However, the preferred choice was sensitive to clinically meaningful changes in several parameters, including quality of life impact of verbal memory decline, quality of life with seizures, mortality rate with medical management, probability of remission following surgery, and probability of remission with medical management. Our decision model suggests that for patients with left temporal lobe epilepsy, quantitative assessment of risk and benefit should guide recommendation of therapy. In particular, risk for and potential impact of verbal memory decline should be carefully weighed against the degree of disability conferred by continued seizures on a patient-by-patient basis. Wiley Periodicals, Inc. © 2014 International League Against Epilepsy.
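    As a bare-bones illustration of the kind of Markov cohort model the two records above describe (not the authors' model), the sketch below computes a discounted quality-adjusted life expectancy for a three-state cohort under two strategies. All transition probabilities, utilities, and horizons are placeholder assumptions, not values from the paper.

```python
def qale(p_stay, utilities, p_death, years=40, discount=0.03):
    """Quality-adjusted life expectancy for a simple 3-state Markov cohort:
    'seizure_free', 'seizing', 'dead'. p_stay is the yearly probability of
    being (or becoming) seizure-free among survivors; utilities map states
    to quality weights. All numbers below are placeholders."""
    dist = {"seizure_free": 0.0, "seizing": 1.0, "dead": 0.0}
    total = 0.0
    for t in range(years):
        total += sum(dist[s] * utilities[s] for s in dist) / (1 + discount) ** t
        alive = dist["seizure_free"] + dist["seizing"]
        dying = alive * p_death
        remaining = alive - dying
        dist = {"seizure_free": remaining * p_stay,
                "seizing": remaining * (1 - p_stay),
                "dead": dist["dead"] + dying}
    return total

# Placeholder inputs: surgery trades a drop in verbal-memory-related utility
# for a higher chance of seizure freedom; medical management is the reverse.
u_surgery = {"seizure_free": 0.80, "seizing": 0.60, "dead": 0.0}
u_medical = {"seizure_free": 0.85, "seizing": 0.65, "dead": 0.0}
print("surgery QALE:", round(qale(0.70, u_surgery, p_death=0.01), 2))
print("medical QALE:", round(qale(0.10, u_medical, p_death=0.02), 2))
```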

  6. New data acquisition system for the focal plane polarimeter of the Grand Raiden spectrometer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tamii, A.; Sakaguchi, H.; Takeda, H.

    1996-10-01

    This paper describes a new data acquisition system for the focal plane polarimeter of the Grand Raiden spectrometer at the Research Center for Nuclear Physics (RCNP) in Osaka, Japan. Data are acquired by a Creative Electronic Systems (CES) Starburst, which is a CAMAC auxiliary crate controller equipped with a Digital Equipment Corporation (DEC) J11 microprocessor. The data on the Starburst are transferred to a VME single-board computer. A VME reflective memory module broadcasts the data to other systems through a fiber-optic link. A data transfer rate of 2.0 Mbytes/s between VME modules has been achieved by reflective memories. This rate includes the overhead of buffer management. The overall transfer rate, however, is limited by the performance of the Starburst to about 160 Kbytes/s at maximum. In order to further improve the system performance, the authors developed a new readout module called the Rapid Data Transfer Module (RDTM). RDTMs transfer data from LeCroy PCOS III's or 4298's, and FERA/FERET's, directly to CES 8170 High Speed Memories (HSM) in VME crates. The data transfer rate of the RDTM from PCOS III's to the HSM is about 4 Mbytes/s.

  7. Modeling human-flood interactions: Collective action and community resilience.

    NASA Astrophysics Data System (ADS)

    Yu, D. J.; Sangwan, N.; Sung, K.

    2016-12-01

    Stylized models of socio-hydrology have mainly used social memory aspects, such as community awareness or sensitivity, to connect hydrologic change and social response. However, social memory alone does not satisfactorily capture the details of how human behavior is translated into collective action for water resources governance. Nor is it the only mechanism by which the two-way feedbacks of socio-hydrology can be operationalized. This study contributes towards bridging this gap by developing a stylized model of a human-flood system that includes two additional drivers of change: (1) institutions for collective action, and (2) connections to an external economic system. Motivated by the case of community-managed flood protection systems (polders) in coastal Bangladesh, we use the model to understand critical general features that affect the long-term resilience of human-flood systems. Our findings suggest that occasional adversity can enhance long-term resilience. Allowing some hydrological variability to enter the polder can increase its adaptive capacity and resilience through the preservation of social memory and institutions for collective action. Further, there are potential tradeoffs associated with optimizing flood resilience through structural measures. By reducing sensitivity to flooding, the system may become more fragile under the double impact of flooding and economic change.

  8. Warning systems in risk management.

    PubMed

    Paté-Cornell, M E

    1986-06-01

    A method is presented here that allows probabilistic evaluation and optimization of warning systems, and comparison of their performance and cost-effectiveness with those of other means of risk management. The model includes an assessment of the signals, and of human response, given the memory that people have kept of the quality of previous alerts. The trade-off between the rate of false alerts and the length of the lead time is studied to account for the long-term effects of "crying wolf" and the effectiveness of emergency actions. An explicit formulation of the system's benefits, including inputs from a signal model, a response model, and a consequence model, is given to allow optimization of the warning threshold and of the system's sensitivity.
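
    A hedged sketch of the kind of trade-off the abstract describes: choose a warning threshold that balances detection against false alerts, with public response degraded by the remembered rate of false alarms ("crying wolf"). The signal, response, and consequence models and every number in `expected_net_benefit` are illustrative assumptions rather than the paper's formulation.

```python
"""Sketch of warning-threshold optimization: trade off detection rate
against false alerts, with compliance eroded by past false alarms."""
from statistics import NormalDist

def expected_net_benefit(threshold, p_event=0.02, loss=100.0,
                         alarm_cost=1.0, signal=NormalDist(2.0, 1.0),
                         noise=NormalDist(0.0, 1.0)):
    p_detect = 1.0 - signal.cdf(threshold)       # hit rate given an event
    p_false = 1.0 - noise.cdf(threshold)         # false-alert rate otherwise
    # Response (compliance) erodes as the memory of false alerts grows.
    compliance = 1.0 / (1.0 + 5.0 * p_false)
    avoided = p_event * p_detect * compliance * loss
    cost = (p_event * p_detect + (1 - p_event) * p_false) * alarm_cost
    return avoided - cost

# Scan candidate thresholds and keep the one with the best net benefit.
best = max((expected_net_benefit(t / 10), t / 10) for t in range(-20, 50))
print("best net benefit %.3f at threshold %.1f" % best)
```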

  9. Goal-Driven Autonomy and Robust Architecture for Long-Duration Missions (Year 1: 1 July 2013 - 31 July 2014)

    DTIC Science & Technology

    2014-09-30

    [Extracted architecture-diagram fragment: the agent's mental domain includes Goal Management, a World Model, Episodic Memory, Semantic Memory, Reasoning and Activation Traces, Metaknowledge, a Self Model, and Meta-Level Control with Introspective Monitoring.] Retrieval failures may arise from incorrect or missing memory associations (i.e., indices); similarly, correct information may exist in the input stream, but may not be

  10. Event-Based Prospective Memory Is Independently Associated with Self-Report of Medication Management in Older Adults

    PubMed Central

    Woods, Steven Paul; Weinborn, Michael; Maxwell, Brenton R.; Gummery, Alice; Mo, Kevin; Ng, Amanda R. J.; Bucks, Romola S.

    2014-01-01

    Background Identifying potentially modifiable risk factors for medication non-adherence in older adults is important in order to enhance screening and intervention efforts designed to improve medication-taking behavior and health outcomes. The current study sought to determine the unique contribution of prospective memory (i.e., “remembering to remember”) to successful self-reported medication management in older adults. Methods Sixty-five older adults with current medication prescriptions completed a comprehensive research evaluation of sociodemographic, psychiatric, and neurocognitive functioning, which included the Memory for Adherence to Medication Scale (MAMS), Prospective and Retrospective Memory Questionnaire (PRMQ), and a performance-based measure of prospective memory that measured both semantically-related and semantically-unrelated cue-intention (i.e., when-what) pairings. Results A series of hierarchical regressions controlling for biopsychosocial, other neurocognitive, and medication-related factors showed that elevated complaints on the PM scale of the PRMQ and worse performance on an objective semantically-unrelated event-based prospective memory task were independent predictors of poorer medication adherence as measured by the MAMS. Conclusions Prospective memory plays an important role in self-report of successful medication management among older adults. Findings may have implications for screening for older individuals “at risk” of non-adherence, as well as the development of prospective memory-based interventions to improve medication adherence and, ultimately, long-term health outcomes in older adults. PMID:24410357

  11. Everyday memory strategies for medication adherence.

    PubMed

    Boron, Julie Blaskewicz; Rogers, Wendy A; Fisk, Arthur D

    2013-01-01

    The need to manage chronic diseases and multiple medications increases for many older adults. Older adults are aware of memory declines and incorporate compensatory techniques. Everyday memory strategies used to support medication adherence were investigated. A survey distributed to 2000 households in the Atlanta metropolitan area yielded a 19.9% response rate including 354 older adults, aged 60-80 years. Older adults reported forgetting to take their medications, more so as their activity deviated from normal routines, such as unexpected activities. The majority of older adults endorsed at least two compensatory strategies, which they perceived to be more helpful in normal routines. Compensatory strategies were associated with higher education, more medications, having concern, and self-efficacy to take medications. As memory changes, older adults rely on multiple cues, and perceive reliance on multiple cues to be helpful. These data have implications for the design and successful implementation of medication reminder systems and interventions. Copyright © 2013 Mosby, Inc. All rights reserved.

  12. Lessons learned or lessons noted: A retrospective case study of the stored organizational memory of the causes of mishaps in NASA

    NASA Astrophysics Data System (ADS)

    Miller, Susan Burgess

    This study of the National Aeronautics and Space Administration's (NASA) organizational memory explores how the root causes of NASA mishaps have changed from the creation of NASA in 1958 through 2002. Official Mishap Board Reports document in stored organizational memory the organization's analyses of the causes of the mishaps. Using Parsons' Social Action Theory for its theoretical frame, and the Schwandt Organizational Learning Systems Model as the theoretical lens, this study provides a meta-analysis of 112 Type A mishap reports to discern what patterns in this stored organizational memory have emerged over time. Results indicate marked stability in the causes of mishaps until the latter portion of the study period. The theory of revolutionary change is considered to explain this apparent shift. Discussion includes the roles organizational culture, sensemaking and identity played in data collection and knowledge management challenges as well as in the lack of change in mishap causes.

  13. Improving Working Memory Efficiency by Reframing Metacognitive Interpretation of Task Difficulty

    ERIC Educational Resources Information Center

    Autin, Frederique; Croizet, Jean-Claude

    2012-01-01

    Working memory capacity, our ability to manage incoming information for processing purposes, predicts achievement on a wide range of intellectual abilities. Three randomized experiments (N = 310) tested the effectiveness of a brief psychological intervention designed to boost working memory efficiency (i.e., state working memory capacity) by…

  14. Gilgamesh: A Multithreaded Processor-In-Memory Architecture for Petaflops Computing

    NASA Technical Reports Server (NTRS)

    Sterling, T. L.; Zima, H. P.

    2002-01-01

    Processor-in-Memory (PIM) architectures avoid the von Neumann bottleneck in conventional machines by integrating high-density DRAM and CMOS logic on the same chip. Parallel systems based on this new technology are expected to provide higher scalability, adaptability, robustness, fault tolerance and lower power consumption than current MPPs or commodity clusters. In this paper we describe the design of Gilgamesh, a PIM-based massively parallel architecture, and elements of its execution model. Gilgamesh extends existing PIM capabilities by incorporating advanced mechanisms for virtualizing tasks and data and providing adaptive resource management for load balancing and latency tolerance. The Gilgamesh execution model is based on macroservers, a middleware layer which supports object-based runtime management of data and threads allowing explicit and dynamic control of locality and load balancing. The paper concludes with a discussion of related research activities and an outlook to future work.

  15. Performance Analysis of Garbage Collection and Dynamic Reordering in a Lisp System. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Llames, Rene Lim

    1991-01-01

    Generation-based garbage collection and dynamic reordering of objects are two techniques for improving the efficiency of memory management in Lisp and similar dynamic language systems. An analysis of the effect of generation configuration is presented, focusing on the effect of the number of generations and generation capacities. Analytic timing and survival models are used to represent garbage collection runtime and to derive structural results on its behavior. The survival model provides bounds on the age of objects surviving a garbage collection at a particular level. Empirical results show that execution time is most sensitive to the capacity of the youngest generation. A technique called scanning for transport statistics, for evaluating the effectiveness of reordering independent of main memory size, is presented.
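
    The sensitivity of garbage-collection work to the youngest generation's capacity can be illustrated with a tiny two-generation simulation. Object lifetimes, the promotion rule, and the `simulate` parameters are illustrative assumptions, not measurements from the Lisp system studied.

```python
"""Tiny simulation of a two-generation collector illustrating how the
youngest generation's capacity changes the amount of GC work."""
import random

def simulate(young_capacity, n_objects=200_000, seed=0):
    random.seed(seed)
    clock, young, old = 0, [], 0
    minor_gcs = promoted = 0
    for _ in range(n_objects):
        clock += 1
        # Death times are drawn so that most objects die young.
        young.append(clock + int(random.expovariate(1.0 / 50)))
        if len(young) >= young_capacity:
            minor_gcs += 1
            survivors = [d for d in young if d > clock]   # still live
            promoted += len(survivors)
            old += len(survivors)
            young = []
    return minor_gcs, promoted

for cap in (1_000, 5_000, 20_000):
    gcs, promoted = simulate(cap)
    print(f"young capacity {cap:>6}: {gcs:>4} minor GCs, {promoted} promotions")
```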

  16. Directions in parallel programming: HPF, shared virtual memory and object parallelism in pC++

    NASA Technical Reports Server (NTRS)

    Bodin, Francois; Priol, Thierry; Mehrotra, Piyush; Gannon, Dennis

    1994-01-01

    Fortran and C++ are the dominant programming languages used in scientific computation. Consequently, extensions to these languages are the most popular for programming massively parallel computers. We discuss two such approaches to parallel Fortran and one approach to C++. The High Performance Fortran Forum has designed HPF with the intent of supporting data parallelism on Fortran 90 applications. HPF works by asking the user to help the compiler distribute and align the data structures with the distributed memory modules in the system. Fortran-S takes a different approach in which the data distribution is managed by the operating system and the user provides annotations to indicate parallel control regions. In the case of C++, we look at pC++ which is based on a concurrent aggregate parallel model.

  17. NASA Goddard Space Flight Center Robotic Processing System Program Automation Systems, volume 2

    NASA Technical Reports Server (NTRS)

    Dobbs, M. E.

    1991-01-01

    Topics related to robot operated materials processing in space (RoMPS) are presented in view graph form. Some of the areas covered include: (1) mission requirements; (2) automation management system; (3) Space Transportation System (STS) Hitchhiker Payload; (4) Spacecraft Command Language (SCL) scripts; (5) SCL software components; (6) RoMPS EasyLab Command & Variable summary for rack stations and annealer module; (7) support electronics assembly; (8) SCL uplink packet definition; (9) SC-4 EasyLab System Memory Map; (10) Servo Axis Control Logic Suppliers; and (11) annealing oven control subsystem.

  18. Knowledge management for chronic patient control and monitoring

    NASA Astrophysics Data System (ADS)

    Pedreira, Nieves; Aguiar-Pulido, Vanessa; Dorado, Julián; Pazos, Alejandro; Pereira, Javier

    2014-10-01

    Knowledge Management (KM) can be seen as the process of capturing, developing, sharing, and effectively using organizational knowledge. In this context, the work presented here proposes a KM System to be used in the scope of chronic patient control and monitoring for distributed research projects. It was designed in order to enable communication between patient and doctors, as well as to be used by the researchers involved in the project for its management. The proposed model integrates all the information concerning every patient and project management tasks in the Institutional Memory of a KM System and uses an ontology to maintain the information and its categorization independently. Furthermore, taking the philosophy of intelligent agents, the system will interact with the user to show him the information according to his preferences and access rights. Finally, three different scenarios of application are described.

  19. Rocket Engine Health Management: Early Definition of Critical Flight Measurements

    NASA Technical Reports Server (NTRS)

    Christenson, Rick L.; Nelson, Michael A.; Butas, John P.

    2003-01-01

    The NASA led Space Launch Initiative (SLI) program has established key requirements related to safety, reliability, launch availability and operations cost to be met by the next generation of reusable launch vehicles. Key to meeting these requirements will be an integrated vehicle health management (IVHM) system that includes sensors, harnesses, software, memory, and processors. Such a system must be integrated across all the vehicle subsystems and meet component, subsystem, and system requirements relative to fault detection, fault isolation, and false alarm rate. The purpose of this activity is to evolve techniques for defining critical flight engine system measurements early within the definition of an engine health management system (EHMS). Two approaches, performance-based and failure mode-based, are integrated to provide a proposed set of measurements to be collected. This integrated approach is applied to MSFC's MC-1 engine. Early identification of measurements supports early identification of candidate sensor systems whose design and impacts to the engine components must be considered in engine design.

  20. Implementing a bubble memory hierarchy system

    NASA Technical Reports Server (NTRS)

    Segura, R.; Nichols, C. D.

    1979-01-01

    This paper reports on implementation of a magnetic bubble memory in a two-level hierarchical system. The hierarchy used a major-minor loop device and RAM under microprocessor control. Dynamic memory addressing, dual bus primary memory, and hardware data modification detection are incorporated in the system to minimize access time. It is the objective of the system to incorporate the advantages of bipolar memory with that of bubble domain memory to provide a smart, optimal memory system which is easy to interface and independent of the user's system.

  1. Operating System For Numerically Controlled Milling Machine

    NASA Technical Reports Server (NTRS)

    Ray, R. B.

    1992-01-01

    OPMILL program is operating system for Kearney and Trecker milling machine providing fast easy way to program manufacture of machine parts with IBM-compatible personal computer. Gives machinist "equation plotter" feature, which plots equations that define movements and converts equations to milling-machine-controlling program moving cutter along defined path. System includes tool-manager software handling up to 25 tools and automatically adjusts to account for each tool. Developed on IBM PS/2 computer running DOS 3.3 with 1 MB of random-access memory.

  2. A review of visual memory capacity: Beyond individual items and towards structured representations

    PubMed Central

    Brady, Timothy F.; Konkle, Talia; Alvarez, George A.

    2012-01-01

    Traditional memory research has focused on identifying separate memory systems and exploring different stages of memory processing. This approach has been valuable for establishing a taxonomy of memory systems and characterizing their function, but has been less informative about the nature of stored memory representations. Recent research on visual memory has shifted towards a representation-based emphasis, focusing on the contents of memory, and attempting to determine the format and structure of remembered information. The main thesis of this review will be that one cannot fully understand memory systems or memory processes without also determining the nature of memory representations. Nowhere is this connection more obvious than in research that attempts to measure the capacity of visual memory. We will review research on the capacity of visual working memory and visual long-term memory, highlighting recent work that emphasizes the contents of memory. This focus impacts not only how we estimate the capacity of the system - going beyond quantifying how many items can be remembered, and moving towards structured representations - but how we model memory systems and memory processes. PMID:21617025

  3. Department of Defense In-House RDT and E Activities: Management Analysis Report for Fiscal Year 1993

    DTIC Science & Technology

    1994-11-01

    A worldwide unique lab because it houses a high-speed modeling and simulation system, a prototype...E Division, San Diego, CA: High Performance Computing Laboratory providing a wide range of advanced computer systems for the scientific investigation...Machines CM-200 and a 256-node Thinking Machines CM-5. The CM-5 is in a very large memory, high-performance (32 Gbytes, >40 GFlop) configuration,

  4. Hemiboreal forest: natural disturbances and the importance of ecosystem legacies to management

    Treesearch

    Kalev Jogiste; Henn Korjus; John Stanturf; Lee E. Frelich; Endijs Baders; Janis Donis; Aris Jansons; Ahto Kangur; Kajar Koster; Diana Laarmann; Tiit Maaten; Vitas Marozas; Marek Metslaid; Kristi Nigul; Olga Polyachenko; Tiit Randveer; Floortje Vodde

    2017-01-01

    The condition of forest ecosystems depends on the temporal and spatial pattern of management interventions and natural disturbances. Remnants of previous conditions persisting after disturbances, or ecosystem legacies, collectively comprise ecosystem memory. Ecosystem memory in turn contributes to resilience and possibilities of ecosystem reorganization...

  5. Out-of-Core Streamline Visualization on Large Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Ueng, Shyh-Kuang; Sikorski, K.; Ma, Kwan-Liu

    1997-01-01

    It's advantageous for computational scientists to have the capability to perform interactive visualization on their desktop workstations. For data on large unstructured meshes, this capability is not generally available. In particular, particle tracing on unstructured grids can result in a high percentage of non-contiguous memory accesses and therefore may perform very poorly with virtual memory paging schemes. The alternative of visualizing a lower resolution of the data degrades the original high-resolution calculations. This paper presents an out-of-core approach for interactive streamline construction on large unstructured tetrahedral meshes containing millions of elements. The out-of-core algorithm uses an octree to partition and restructure the raw data into subsets stored into disk files for fast data retrieval. A memory management policy tailored to the streamline calculations is used such that during the streamline construction only a very small amount of data are brought into the main memory on demand. By carefully scheduling computation and data fetching, the overhead of reading data from the disk is significantly reduced and good memory performance results. This out-of-core algorithm makes possible interactive streamline visualization of large unstructured-grid data sets on a single mid-range workstation with relatively low main-memory capacity: 5-20 megabytes. Our test results also show that this approach is much more efficient than relying on virtual memory and operating system's paging algorithms.
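
    The memory-management policy described above can be sketched as a small LRU-managed block cache that loads precomputed spatial blocks from disk only on demand. The on-disk layout (a hypothetical `blocks/<id>.pkl` file per block), the cache size, and the `octree_leaf_id` helper mentioned in the closing comment are assumptions for illustration only.

```python
"""Sketch of on-demand, LRU-managed block loading for out-of-core
streamline tracing. File layout, block lookup, and sizes are hypothetical."""
from collections import OrderedDict
import pickle

class BlockCache:
    def __init__(self, max_blocks=16, path_fmt="blocks/{:d}.pkl"):
        self.max_blocks = max_blocks
        self.path_fmt = path_fmt            # hypothetical on-disk layout
        self.cache = OrderedDict()          # block_id -> block data (LRU order)

    def get(self, block_id):
        if block_id in self.cache:
            self.cache.move_to_end(block_id)        # mark most recently used
            return self.cache[block_id]
        with open(self.path_fmt.format(block_id), "rb") as f:
            block = pickle.load(f)                  # fetch from disk on demand
        self.cache[block_id] = block
        if len(self.cache) > self.max_blocks:
            self.cache.popitem(last=False)          # evict least recently used
        return block

# During integration, each step would ask the cache for the block that
# contains the current particle position, e.g.:
#   block = cache.get(octree_leaf_id(position))    # octree_leaf_id: hypothetical
```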

  6. A processing architecture for associative short-term memory in electronic noses

    NASA Astrophysics Data System (ADS)

    Pioggia, G.; Ferro, M.; Di Francesco, F.; DeRossi, D.

    2006-11-01

    Electronic nose (e-nose) architectures usually consist of several modules that process various tasks such as control, data acquisition, data filtering, feature selection and pattern analysis. Heterogeneous techniques derived from chemometrics, neural networks, and fuzzy rules used to implement such tasks may lead to issues concerning module interconnection and cooperation. Moreover, a new learning phase is mandatory once new measurements have been added to the dataset, thus causing changes in the previously derived model. Consequently, if a loss in the previous learning occurs (catastrophic interference), real-time applications of e-noses are limited. To overcome these problems this paper presents an architecture for dynamic and efficient management of multi-transducer data processing techniques and for saving an associative short-term memory of the previously learned model. The architecture implements an artificial model of a hippocampus-based working memory, enabling the system to be ready for real-time applications. Starting from the base models available in the architecture core, dedicated models for neurons, maps and connections were tailored to an artificial olfactory system devoted to analysing olive oil. In order to verify the ability of the processing architecture in associative and short-term memory, a paired-associate learning test was applied. The avoidance of catastrophic interference was observed.

  7. Alignment of high-throughput sequencing data inside in-memory databases.

    PubMed

    Firnkorn, Daniel; Knaup-Gregori, Petra; Lorenzo Bermejo, Justo; Ganzinger, Matthias

    2014-01-01

    In the era of high-throughput DNA sequencing techniques, high-performance analysis of DNA sequences is of great importance. Computer-supported DNA analysis is still a computationally intensive, time-consuming task. In this paper we explore the potential of a new In-Memory database technology by using SAP's High Performance Analytic Appliance (HANA). We focus on read alignment as one of the first steps in DNA sequence analysis. In particular, we examined the widely used Burrows-Wheeler Aligner (BWA) and implemented stored procedures in both HANA and the free database system MySQL, to compare execution time and memory management. To ensure that the results are comparable, MySQL has been running in memory as well, utilizing its integrated memory engine for database table creation. We implemented stored procedures, containing exact and inexact searching of DNA reads within the reference genome GRCh37. Due to technical restrictions in SAP HANA concerning recursion, the inexact matching problem could not be implemented on this platform. Hence, performance analysis between HANA and MySQL was made by comparing the execution time of the exact search procedures. Here, HANA was approximately 27 times faster than MySQL, which means that there is a high potential within the new In-Memory concepts, leading to further developments of DNA analysis procedures in the future.
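
    As a plain stand-in for the exact-search step being benchmarked, the sketch below indexes a reference string once and then looks up reads exactly; it is not the HANA or MySQL stored-procedure code, and the reference, read, and k-mer length are made up for illustration.

```python
"""Illustration of exact read matching against a reference string using
a simple in-memory k-mer index (not the stored-procedure implementation)."""
from collections import defaultdict

def build_index(reference, k):
    index = defaultdict(list)
    for i in range(len(reference) - k + 1):
        index[reference[i:i + k]].append(i)        # positions of each k-mer
    return index

def exact_match(read, reference, index):
    k = len(next(iter(index)))                     # k used to build the index
    candidates = index.get(read[:k], [])
    return [p for p in candidates if reference[p:p + len(read)] == read]

reference = "ACGTACGTTAGCACGT" * 10                # toy reference sequence
index = build_index(reference, k=8)
print(exact_match("ACGTTAGC", reference, index))   # all exact hit positions
```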

  8. Image-based information, communication, and retrieval

    NASA Technical Reports Server (NTRS)

    Bryant, N. A.; Zobrist, A. L.

    1980-01-01

    IBIS/VICAR system combines video image processing and information management. Flexible programs require user to supply only parameters specific to particular application. Special-purpose input/output routines transfer image data with reduced memory requirements. New application programs are easily incorporated. Program is written in FORTRAN IV, Assembler, and OS JCL for batch execution and has been implemented on IBM 360.

  9. Multipurpose panel, phase 1, study report. [display utilizing multiplexing and digital techniques

    NASA Technical Reports Server (NTRS)

    Parkin, W.

    1975-01-01

    The feasibility of a multipurpose panel which provides a programmable electronic display for changeable panel nomenclature, multiplexes similar indicator display signals to the signal display, and demultiplexes command signals is examined. Topics discussed include: electronic display technology, miniaturized electronic and memory devices, and data management systems which employ digital address and multiplexing.

  10. Managing Complexity: Impact of Organization and Processing Style on Nonverbal Memory in Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Tsatsanis, Katherine D.; Noens, Ilse L. J.; Illmann, Cornelia L.; Pauls, David L.; Volkmar, Fred R.; Schultz, Robert T.; Klin, Ami

    2011-01-01

    The contributions of cognitive style and organization to processing and recalling a complex novel stimulus were examined by comparing the Rey Osterrieth Complex Figure (ROCF) test performance of children, adolescents, and adults with ASD to clinical controls (CC) and non-impaired controls (NC) using the "Developmental Scoring System."…

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mittal, Sparsh; Zhang, Zhao

    With each CMOS technology generation, leakage energy consumption has been dramatically increasing and hence, managing leakage power consumption of large last-level caches (LLCs) has become a critical issue in modern processor design. In this paper, we present EnCache, a novel software-based technique which uses dynamic profiling-based cache reconfiguration for saving cache leakage energy. EnCache uses a simple hardware component called profiling cache, which dynamically predicts energy efficiency of an application for 32 possible cache configurations. Using these estimates, system software reconfigures the cache to the most energy efficient configuration. EnCache uses dynamic cache reconfiguration and hence, it does not require offline profiling or tuning of parameters for each application. Furthermore, EnCache optimizes directly for the overall memory subsystem (LLC and main memory) energy efficiency instead of the LLC energy efficiency alone. The experiments performed with an x86-64 simulator and workloads from the SPEC2006 suite confirm that EnCache provides larger energy saving than a conventional energy saving scheme. For single-core and dual-core system configurations, the average savings in memory subsystem energy over a shared baseline configuration are 30.0% and 27.3%, respectively.
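
    The selection step described here reduces to picking the most energy-efficient configuration from the profiling estimates; a minimal sketch follows. The configuration labels, the energy numbers, and the `pick_configuration` function are hypothetical, and the actual reconfiguration hook is not shown.

```python
"""Minimal sketch of choosing an LLC configuration from per-configuration
energy estimates, as system software might do each profiling interval."""

def pick_configuration(energy_estimates):
    """energy_estimates maps a cache configuration (e.g. active ways and
    set fraction) to predicted memory-subsystem energy for the next interval."""
    return min(energy_estimates, key=energy_estimates.get)

# Hypothetical estimates for a few of the 32 candidate configurations:
estimates = {(16, "full"): 1.00, (8, "half"): 0.82, (4, "quarter"): 0.91}
best = pick_configuration(estimates)
print("reconfigure LLC to", best)   # system software would apply this choice
```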

  12. Method and apparatus for faulty memory utilization

    DOEpatents

    Cher, Chen-Yong; Andrade Costa, Carlos H.; Park, Yoonho; Rosenburg, Bryan S.; Ryu, Kyung D.

    2016-04-19

    A method for faulty memory utilization in a memory system includes: obtaining information regarding memory health status of at least one memory page in the memory system; determining an error tolerance of the memory page when the information regarding memory health status indicates that a failure is predicted to occur in an area of the memory system affecting the memory page; initiating a migration of data stored in the memory page when it is determined that the data stored in the memory page is non-error-tolerant; notifying at least one application regarding a predicted operating system failure and/or a predicted application failure when it is determined that data stored in the memory page is non-error-tolerant and cannot be migrated; and notifying at least one application regarding the memory failure predicted to occur when it is determined that data stored in the memory page is error-tolerant.
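
    The claimed method is essentially a decision flow over a page's predicted health and error tolerance; the sketch below mirrors that flow in plain code. The function names (`handle_health_report`, `migrate`, `notify_apps`) and the example wiring are hypothetical stand-ins for the OS-level mechanisms.

```python
"""Sketch of the decision flow in the abstract: on a predicted failure,
migrate non-error-tolerant data if possible, otherwise notify applications;
error-tolerant data is kept in place with a notification."""

def handle_health_report(page, predicted_failure, error_tolerant,
                         migrate, notify_apps):
    if not predicted_failure:
        return "no action"
    if error_tolerant(page):
        notify_apps(f"page {page:#x}: failure predicted, data is error-tolerant")
        return "keep in place"
    if migrate(page):
        return "migrated"
    notify_apps(f"page {page:#x}: failure predicted, migration not possible")
    return "notified"

# Example wiring with trivial stand-ins for the real mechanisms:
print(handle_health_report(
    page=0x42,
    predicted_failure=True,
    error_tolerant=lambda p: False,
    migrate=lambda p: True,
    notify_apps=print))
```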

  13. Wavelength assignment algorithm considering the state of neighborhood links for OBS networks

    NASA Astrophysics Data System (ADS)

    Tanaka, Yu; Hirota, Yusuke; Tode, Hideki; Murakami, Koso

    2005-10-01

    Recently, optical WDM technology has been introduced into backbone networks, and Optical Burst Switching (OBS) systems have become a realistic candidate for the future optical switching scheme. OBS systems do not buffer bursts in intermediate nodes, so avoiding overlapping wavelength reservations between partially interfering paths is an important issue. To address this problem, a wavelength assignment scheme based on priority management tables has previously been proposed. This method achieves a reduction of the burst blocking probability. However, the priority management tables require a huge amount of memory. In this paper, we propose a wavelength assignment algorithm that reduces both the number of priority management tables and the burst blocking probability. To reduce the number of priority management tables, we allocate and manage them for each link. To reduce the burst blocking probability, our method announces priority changes to intermediate nodes. We evaluate its performance in terms of the burst blocking probability and the reduction rate of priority management tables.
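
    A toy sketch of per-link priority tables: each link ranks wavelengths, and a burst path reserves the free wavelength with the best combined rank across its links. The table contents, the scoring rule, and the `assign_wavelength` function are illustrative assumptions, not the proposed algorithm's exact update and announcement scheme.

```python
"""Toy wavelength assignment driven by per-link priority tables."""

def assign_wavelength(path_links, priority, in_use, n_wavelengths=8):
    """priority[link][w]: smaller value means more preferred on that link."""
    best_w, best_score = None, None
    for w in range(n_wavelengths):
        if any(w in in_use[link] for link in path_links):
            continue                       # already reserved somewhere on path
        score = sum(priority[link][w] for link in path_links)
        if best_score is None or score < best_score:
            best_w, best_score = w, score
    if best_w is not None:
        for link in path_links:
            in_use[link].add(best_w)       # reserve on every link of the path
    return best_w

links = ["A-B", "B-C"]
priority = {l: list(range(8)) for l in links}     # per-link priority tables
in_use = {l: set() for l in links}
print("reserved wavelength:", assign_wavelength(links, priority, in_use))
```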

  14. Working Memory Systems in the Rat.

    PubMed

    Bratch, Alexander; Kann, Spencer; Cain, Joshua A; Wu, Jie-En; Rivera-Reyes, Nilda; Dalecki, Stefan; Arman, Diana; Dunn, Austin; Cooper, Shiloh; Corbin, Hannah E; Doyle, Amanda R; Pizzo, Matthew J; Smith, Alexandra E; Crystal, Jonathon D

    2016-02-08

    A fundamental feature of memory in humans is the ability to simultaneously work with multiple types of information using independent memory systems. Working memory is conceptualized as two independent memory systems under executive control [1, 2]. Although there is a long history of using the term "working memory" to describe short-term memory in animals, it is not known whether multiple, independent memory systems exist in nonhumans. Here, we used two established short-term memory approaches to test the hypothesis that spatial and olfactory memory operate as independent working memory resources in the rat. In the olfactory memory task, rats chose a novel odor from a gradually incrementing set of old odors [3]. In the spatial memory task, rats searched for a depleting food source at multiple locations [4]. We presented rats with information to hold in memory in one domain (e.g., olfactory) while adding a memory load in the other domain (e.g., spatial). Control conditions equated the retention interval delay without adding a second memory load. In a further experiment, we used proactive interference [5-7] in the spatial domain to compromise spatial memory and evaluated the impact of adding an olfactory memory load. Olfactory and spatial memory are resistant to interference from the addition of a memory load in the other domain. Our data suggest that olfactory and spatial memory draw on independent working memory systems in the rat. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. War and remembrance: Combat exposure in young adulthood and memory function sixty years later.

    PubMed

    Nevarez, Michael D; Malone, Johanna C; Rentz, Dorene M; Waldinger, Robert J

    2017-01-01

    Identifying adaptive ways to cope with extreme stress is essential to promoting long-term health. Memory systems are highly sensitive to stress, and combat exposure during war has been shown to have deleterious effects on cognitive processes, such as memory, decades later. No studies have examined coping styles used by combat veterans and associations with later-life cognitive functioning. Defenses are coping mechanisms that manage difficult memories and feelings, with some more closely related to memory processes (e.g., suppression, repression). Utilizing a longitudinal database, we assessed how reliance on certain defense mechanisms after World War II combat exposure could affect cognitive health 60 years later. Data spanning 75 years were available on 71 men who had post-war assessment of combat exposure, defense mechanism ratings (ages 19-50), and late-life neuropsychological testing. Interaction models of combat exposure with defenses predicting late-life memory were examined. In bivariate analyses, greater reliance on suppression correlated with worse memory performance (r = -0.30, p = .01), but greater reliance on repression did not. Greater reliance on suppression strengthened the link between combat exposure and worse memory in late life (R² = 0.24, p < .001). In contrast, greater reliance on repression attenuated the link between combat exposure and poorer late-life memory (R² = 0.19, p < .001). Results suggest that coping styles may affect the relationship between early-adult stress and late-life cognition. Findings highlight the importance of understanding how coping styles may impact cognitive functioning as people move through adult life. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. Persistent impairment in working memory following severe hyperglycemia in newly diagnosed type 2 diabetes.

    PubMed

    Cerasuolo, Joseph; Izzo, Anthony

    2017-01-01

    Acute hyperglycemia has been shown to cause cognitive impairments in animal models. There is growing appreciation of the numerous effects of hyperglycemia on neuronal function as well as blood-brain barrier function. In humans, hypoglycemia is well known to cause cognitive deficits acutely, but hyperglycemia has been less well studied. We present a case of selective neurocognitive deficits in the setting of acute hyperglycemia. A 60-year-old man was admitted to the hospital for an episode of acute hyperglycemia in the setting of newly diagnosed diabetes mellitus precipitated by steroid use. He was managed with insulin therapy and discharged home, and later, presented with complaints of memory impairment. Deficits included impairment in his declarative and working memory, to the point of significant impairment in his overall functioning. The patient had no structural lesions on MRI imaging of the brain or other systemic illnesses to explain his specific deficits. We suggest that his acute hyperglycemia may have caused neurological injury, and may be responsible for our patient's memory complaints. Acute hyperglycemia has been associated with poor outcomes in several different central nervous system injuries including cerebrovascular accident and hypoxic injury. Hyperglycemia is responsible for accumulation of reactive oxygen species in the brain, resulting in advanced glycosylated end products and a proinflammatory response that may lead to cellular injury. Further research is needed to define the impact of both acute and chronic hyperglycemia on cognitive impairment and memory.

  17. Associative programming language and virtual associative access manager

    NASA Technical Reports Server (NTRS)

    Price, C.

    1978-01-01

    APL provides convenient associative data manipulation functions in a high level language. Six statements were added to PL/1 via a preprocessor: CREATE, INSERT, FIND, FOR EACH, REMOVE, and DELETE. They allow complete control of all data base operations. During execution, data base management programs perform the functions required to support the APL language. VAAM is the data base management system designed to support the APL language. APL/VAAM is used by CADANCE, an interactive graphic computer system. VAAM is designed to support heavily referenced files. Virtual memory files, which utilize the paging mechanism of the operating system, are used. VAAM supports a full network data structure. The two basic blocks in a VAAM file are entities and sets. Entities are the basic information element and correspond to PL/1 based structures defined by the user. Sets contain the relationship information and are implemented as arrays.

  18. Conscious and Unconscious Memory Systems

    PubMed Central

    Squire, Larry R.; Dede, Adam J.O.

    2015-01-01

    The idea that memory is not a single mental faculty has a long and interesting history but became a topic of experimental and biologic inquiry only in the mid-20th century. It is now clear that there are different kinds of memory, which are supported by different brain systems. One major distinction can be drawn between working memory and long-term memory. Long-term memory can be separated into declarative (explicit) memory and a collection of nondeclarative (implicit) forms of memory that include habits, skills, priming, and simple forms of conditioning. These memory systems depend variously on the hippocampus and related structures in the parahippocampal gyrus, as well as on the amygdala, the striatum, cerebellum, and the neocortex. This work recounts the discovery of declarative and nondeclarative memory and then describes the nature of declarative memory, working memory, nondeclarative memory, and the relationship between memory systems. PMID:25731765

  19. Improving family medicine resident training in dementia care: an experiential learning opportunity in Primary Care Collaborative Memory Clinics.

    PubMed

    Lee, Linda; Weston, W Wayne; Hillier, Loretta; Archibald, Douglas; Lee, Joseph

    2018-06-21

    Family physicians often find themselves inadequately prepared to manage dementia. This article describes the curriculum for a resident training intervention in Primary Care Collaborative Memory Clinics (PCCMC), outlines its underlying educational principles, and examines its impact on residents' ability to provide dementia care. PCCMCs are family physician-led interprofessional clinic teams that provide evidence-informed comprehensive assessment and management of memory concerns. Within PCCMCs residents learn to apply a structured approach to assessment, diagnosis, and management; training consists of a tutorial covering various topics related to dementia followed by work-based learning within the clinic. Significantly more residents who trained in PCCMCs (sample = 98), as compared to those in usual training programs (sample = 35), reported positive changes in knowledge, ability, and confidence in ability to assess and manage memory problems. The PCCMC training intervention for family medicine residents provides a significant opportunity for residents to learn about best clinical practices and interprofessional care needed for optimal dementia care integrated within primary care practice.

  20. SAR processing on the MPP

    NASA Technical Reports Server (NTRS)

    Batcher, K. E.; Eddey, E. E.; Faiss, R. O.; Gilmore, P. A.

    1981-01-01

    The processing of synthetic aperture radar (SAR) signals using the massively parallel processor (MPP) is discussed. The fast Fourier transform convolution procedures employed in the algorithms are described. The MPP architecture comprises an array unit (ARU) which processes arrays of data; an array control unit which controls the operation of the ARU and performs scalar arithmetic; a program and data management unit which controls the flow of data; and a unique staging memory (SM) which buffers and permutes data. The ARU contains a 128 by 128 array of bit-serial processing elements (PE). Two-by-four subarrays of PE's are packaged in a custom VLSI HCMOS chip. The staging memory is a large multidimensional-access memory which buffers and permutes data flowing within the system. Efficient SAR processing is achieved via ARU communication paths and SM data manipulation. Real time processing capability can be realized via a multiple ARU, multiple SM configuration.

  1. The Optimization of In-Memory Space Partitioning Trees for Cache Utilization

    NASA Astrophysics Data System (ADS)

    Yeo, Myung Ho; Min, Young Soo; Bok, Kyoung Soo; Yoo, Jae Soo

    In this paper, a novel cache-conscious indexing technique based on space partitioning trees is proposed. Recently, many researchers have investigated efficient cache-conscious indexing techniques that improve the retrieval performance of in-memory database management systems. However, most studies considered data partitioning and targeted fast information retrieval. Existing data partitioning-based index structures significantly degrade performance due to the redundant accesses of overlapped spaces. In particular, R-tree-based index structures suffer from the propagation of MBR (Minimum Bounding Rectangle) information under frequent data updates. In this paper, we propose an in-memory space partitioning index structure for optimal cache utilization. The proposed index structure is compared with the existing index structures in terms of update performance, insertion performance and cache-utilization rate in a variety of environments. The results demonstrate that the proposed index structure offers better performance than existing index structures.

  2. An Investigation of Unified Memory Access Performance in CUDA

    PubMed Central

    Landaverde, Raphael; Zhang, Tiansheng; Coskun, Ayse K.; Herbordt, Martin

    2015-01-01

    Managing memory between the CPU and GPU is a major challenge in GPU computing. A programming model, Unified Memory Access (UMA), has been recently introduced by Nvidia to simplify the complexities of memory management while claiming good overall performance. In this paper, we investigate this programming model and evaluate its performance and programming model simplifications based on our experimental results. We find that beyond on-demand data transfers to the CPU, the GPU is also able to request subsets of data it requires on demand. This feature allows UMA to outperform full data transfer methods for certain parallel applications and small data sizes. We also find, however, that for the majority of applications and memory access patterns, the performance overheads associated with UMA are significant, while the simplifications to the programming model restrict flexibility for adding future optimizations. PMID:26594668

  3. Ariane 5-ALF: Evolution of the Ariane 5 Data Handling System

    NASA Astrophysics Data System (ADS)

    Notebaert, O.; Stransky, Arnaud; Corin, Hans; Hult, Torbjorn; Bonnerot, Georges-Albert

    2004-06-01

    In the coming years, the Ariane 5 On-Board-Computer (OBC) will have to handle mission and performance enhancements together with the need to significantly reduce costs and to replace obsolescent components. The OBC evolution is naturally driven by these factors, but also needs to preserve compliance with the SW system. Indeed, it would be a major concern if the necessary change of the underlying HW implied new development of the flight software, mission database and ground control system. The Ariane 5 SW uses the Ada language, which enables verifiable definition of the interfaces and provides a standardized level of real-time behavior. To enforce portability, it has a layered architecture that clearly separates application SW and data from the lower level software. In addition, the on-board mission data is managed thanks to the extraction of an image of the systems database located in a structured memory area (the exchange memory). Used for all interchanges between the system application software and the launcher's subsystems and peripherals, the exchange memory is the virtual view of the Ariane 5 system from the flight SW standpoint. Thanks to these early architectural and structural choices, portability on future hardware is theoretically guaranteed, whenever the exchange memory data structures and the service layer interfaces remain stable. The ALF working group has defined and manufactured a mock-up that fulfils these architectural constraints with a completely new on-board computer featuring improvements such as the microprocessor replacement as well as an advanced integrated I/O controller for access to the system data bus. Lower level SW has been prototyped on this new hardware in order to fulfill the same level of services as the current one while completely hiding the underlying HW/SW implementation from the rest of the system. Functional and performance evaluation of this platform, consolidated at system level, will show the potential benefits and the limits of such an approach.

  4. An FPGA-Based High-Speed Error Resilient Data Aggregation and Control for High Energy Physics Experiment

    NASA Astrophysics Data System (ADS)

    Mandal, Swagata; Saini, Jogender; Zabołotny, Wojciech M.; Sau, Suman; Chakrabarti, Amlan; Chattopadhyay, Subhasis

    2017-03-01

    Due to the dramatic increase of data volume in modern high energy physics (HEP) experiments, a robust high-speed data acquisition (DAQ) system is very much needed to gather the data generated during different nuclear interactions. As the DAQ works in a harsh radiation environment, there is a fair chance of data corruption caused by various energetic particles such as alpha and beta particles or neutrons. Hence, a major challenge in the development of DAQ in the HEP experiment is to establish an error resilient communication system between front-end sensors or detectors and back-end data processing computing nodes. Here, we have implemented the DAQ using a field-programmable gate array (FPGA) due to some of its inherent advantages over application-specific integrated circuits. A novel orthogonal concatenated code and cyclic redundancy check (CRC) have been used to mitigate the effects of data corruption in the user data. Scrubbing with a 32-b CRC has been used against errors in the configuration memory of the FPGA. Data from front-end sensors will reach the back-end processing nodes through multiple stages that may add an uncertain amount of delay to the different data packets. We have also proposed a novel memory management algorithm that helps to process the data at the back-end computing nodes, removing the added path delays. To the best of our knowledge, the proposed FPGA-based DAQ utilizing an optical link with channel coding and efficient memory management modules can be considered the first of its kind. Performance estimation of the implemented DAQ system is done based on resource utilization, bit error rate, efficiency, and robustness to radiation.
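
    The user-data integrity idea can be illustrated in software: append a 32-bit CRC to each packet at the sender and verify it at the back-end node. The sketch uses Python's zlib CRC-32 as a stand-in and is only a software analogue of the FPGA-side channel coding, not the DAQ firmware; the packet contents are made up.

```python
"""Software analogue of per-packet CRC-32 framing and verification."""
import zlib

def frame(payload: bytes) -> bytes:
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def check(packet: bytes):
    payload, received = packet[:-4], int.from_bytes(packet[-4:], "big")
    return payload if zlib.crc32(payload) == received else None  # None = corrupted

pkt = frame(b"detector hit: channel 17, adc 0x3f2")   # hypothetical payload
assert check(pkt) is not None
corrupted = bytes([pkt[0] ^ 0xFF]) + pkt[1:]           # flip bits in first byte
assert check(corrupted) is None
print("CRC check behaves as expected")
```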

  5. Community-based memorials to September 11, 2001: environmental stewardship as memory work

    Treesearch

    Erika S. Svendsen; Lindsay K. Campbell

    2014-01-01

    This chapter investigates how people use trees, parks, gardens, and other natural resources as raw materials in and settings for memorials to September 11, 2001. In particular, we focus on 'found space living memorials', which we define as sites that are community-managed, re-appropriated from their prior use, often carved out of the public right-of-way, and...

  6. The MNESIS model: Memory systems and processes, identity and future thinking.

    PubMed

    Eustache, Francis; Viard, Armelle; Desgranges, Béatrice

    2016-07-01

    The Memory NEo-Structural Inter-Systemic model (MNESIS; Eustache and Desgranges, Neuropsychology Review, 2008) is a macromodel based on neuropsychological data which presents an interactive construction of memory systems and processes. Largely inspired by Tulving's SPI model, MNESIS puts the emphasis on the existence of different memory systems in humans and their reciprocal relations, adding new aspects, such as the episodic buffer proposed by Baddeley. The more integrative comprehension of brain dynamics offered by neuroimaging has contributed to rethinking the existence of memory systems. In the present article, we will argue that understanding the concept of memory by dividing it into systems at the functional level is still valid, but needs to be considered in the light of brain imaging. Here, we reinstate the importance of this division in different memory systems and illustrate, with neuroimaging findings, the links that operate between memory systems in response to task demands that constrain the brain dynamics. During a cognitive task, these memory systems interact transiently to rapidly assemble representations and mobilize functions to propose a flexible and adaptative response. We will concentrate on two memory systems, episodic and semantic memory, and their links with autobiographical memory. More precisely, we will focus on interactions between episodic and semantic memory systems in support of 1) self-identity in healthy aging and in brain pathologies and 2) the concept of the prospective brain during future projection. In conclusion, this MNESIS global framework may help to get a general representation of human memory and its brain implementation with its specific components which are in constant interaction during cognitive processes. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Improving the performance of heterogeneous multi-core processors by modifying the cache coherence protocol

    NASA Astrophysics Data System (ADS)

    Fang, Juan; Hao, Xiaoting; Fan, Qingwen; Chang, Zeqing; Song, Shuying

    2017-05-01

    In a heterogeneous multi-core architecture, CPU and GPU processors are integrated on the same chip, which poses a new challenge to last-level cache management. In this architecture, CPU applications and GPU applications execute concurrently and access the last-level cache. CPU and GPU have different memory access characteristics, and thus differ in their sensitivity to last-level cache (LLC) capacity. For many CPU applications, a reduced share of the LLC can lead to significant performance degradation. On the contrary, GPU applications can tolerate an increase in memory access latency when there is sufficient thread-level parallelism. Taking into account the memory latency tolerance of GPU programs, this paper presents a method that lets GPU applications access memory directly, bypassing the LLC and leaving most of the LLC space for CPU applications; this improves the performance of CPU applications without affecting the performance of GPU applications. When the CPU application is cache sensitive and the GPU application is insensitive to the cache, the overall performance of the system is improved significantly.
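
    A toy trace-driven sketch of the bypass idea follows: when GPU requests skip the shared LLC, CPU requests keep the capacity and their hit rate improves. The LRU cache model, the synthetic traces, and all sizes are illustrative assumptions, not the paper's simulation setup.

```python
"""Toy trace-driven illustration of letting GPU requests bypass a shared LLC."""
from collections import OrderedDict
import random

def run(trace, llc_size, gpu_bypass):
    llc, cpu_hits, cpu_refs = OrderedDict(), 0, 0
    for src, addr in trace:                    # src is "cpu" or "gpu"
        if src == "gpu" and gpu_bypass:
            continue                           # GPU request goes straight to memory
        if src == "cpu":
            cpu_refs += 1
            cpu_hits += addr in llc
        if addr in llc:
            llc.move_to_end(addr)              # refresh LRU position
        else:
            llc[addr] = True
            if len(llc) > llc_size:
                llc.popitem(last=False)        # LRU eviction
    return cpu_hits / cpu_refs

random.seed(0)
trace = [("cpu", random.randrange(64)) for _ in range(2000)] + \
        [("gpu", random.randrange(100000)) for _ in range(20000)]
random.shuffle(trace)
print("CPU hit rate, shared LLC:", round(run(trace, 64, False), 2))
print("CPU hit rate, GPU bypass:", round(run(trace, 64, True), 2))
```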

  8. Operational Exercise Integration Recommendations for DoD Cyber Ranges

    DTIC Science & Technology

    2015-08-05

    be the precision and recall of a security information and event management (SIEM) system's notifications of unauthorized access to that directory... [Extracted table fragment of attack categories and example techniques: ...network traffic, port scanning; Deplete Resources: TCP flooding, memory leak exploitation; Injection: cross-site scripting attacks, SQL injection; Deceptive...] requirements for personnel development; tactics, techniques, and procedures (TTPs) development; and mission rehearsals. While unique in their own

  9. Multiple Memory Systems Are Unnecessary to Account for Infant Memory Development: An Ecological Model

    ERIC Educational Resources Information Center

    Rovee-Collier, Carolyn; Cuevas, Kimberly

    2009-01-01

    How the memory of adults evolves from the memory abilities of infants is a central problem in cognitive development. The popular solution holds that the multiple memory systems of adults mature at different rates during infancy. The "early-maturing system" (implicit or nondeclarative memory) functions automatically from birth, whereas the…

  10. Generic Entity Resolution in Relational Databases

    NASA Astrophysics Data System (ADS)

    Sidló, Csaba István

    Entity Resolution (ER) covers the problem of identifying distinct representations of real-world entities in heterogeneous databases. We consider the generic formulation of ER problems (GER) with exact outcome. In practice, input data usually resides in relational databases and can grow to huge volumes. Yet, typical solutions described in the literature employ standalone memory resident algorithms. In this paper we utilize facilities of standard, unmodified relational database management systems (RDBMS) to enhance the efficiency of GER algorithms. We study and revise the problem formulation, and propose practical and efficient algorithms optimized for RDBMS external memory processing. We outline a real-world scenario and demonstrate the advantage of algorithms by performing experiments on insurance customer data.
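
    One way to see how an unmodified RDBMS can carry part of the work is to push candidate-pair generation (blocking) into SQL and keep only the final matching in application code. The schema, the zip-code blocking key, and the toy `same_entity` matcher below are illustrative, not the paper's GER algorithm.

```python
"""Minimal sketch of RDBMS-assisted entity resolution: SQL does the
blocking join, Python applies a toy matching rule."""
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER, name TEXT, zip TEXT)")
conn.executemany("INSERT INTO customer VALUES (?,?,?)", [
    (1, "J. Smith", "10001"), (2, "John Smith", "10001"),
    (3, "A. Jones", "94040"), (4, "Alice Jones", "94041"),
])

# Blocking inside the database: only compare records sharing a zip code.
pairs = conn.execute("""
    SELECT a.id, b.id, a.name, b.name
    FROM customer a JOIN customer b
      ON a.zip = b.zip AND a.id < b.id
""").fetchall()

def same_entity(n1, n2):                 # toy matcher: shared surname token
    return n1.split()[-1].lower() == n2.split()[-1].lower()

matches = [(i, j) for i, j, n1, n2 in pairs if same_entity(n1, n2)]
print("matched pairs:", matches)         # [(1, 2)]
```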

  11. KITTEN Lightweight Kernel 0.1 Beta

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pedretti, Kevin; Levenhagen, Michael; Kelly, Suzanne

    2007-12-12

    The Kitten Lightweight Kernel is a simplified OS (operating system) kernel that is intended to manage a compute node's hardware resources. It provides a set of mechanisms to user-level applications for utilizing hardware resources (e.g., allocating memory, creating processes, accessing the network). Kitten is much simpler than general-purpose OS kernels, such as Linux or Windows, but includes all of the essential functionality needed to support HPC (high-performance computing) MPI, PGAS and OpenMP applications. Kitten provides unique capabilities such as physically contiguous application memory, transparent large page support, and noise-free tick-less operation, which enable HPC applications to obtain greater efficiency and scalability than with general-purpose OS kernels.

  12. DIGIMEN, optical mass memory investigations, volume 2

    NASA Technical Reports Server (NTRS)

    1977-01-01

    The DIGIMEM phase of the Optical Mass Memory Investigation Program addressed problems related to the analysis, design, and implementation of a direct digital optical recorder/reproducer. Effort was placed on developing an operational archival mass storage system to support one or more key NASA missions. The primary activity of the DIGIMEM program phase was the design, fabrication, and test and evaluation of a breadboard digital optical recorder/reproducer. Starting with technology and subsystem perfected during the HOLOMEM program phase, a fully operational optical spot recording breadboard that met or exceeded all program goals was evaluated. A thorough evaluation of several high resolution electrophotographic recording films was performed and a preliminary data base management/end user requirements survey was completed.

  13. A VLSI VAX chip set

    NASA Astrophysics Data System (ADS)

    Johnson, W. N.; Herrick, W. V.; Grundmann, W. J.

    1984-10-01

    For the first time, VLSI technology is used to compress the full functionality and comparable performance of the VAX 11/780 super-minicomputer into a 1.2 M transistor microprocessor chip set. There was no subsetting of the 304-instruction set and the 17 data types, nor reduction in hardware support for the 4 Gbyte virtual memory management architecture. The chip set supports an integral 8 kbyte memory cache, a 13.3 Mbyte/s system bus, and sophisticated multiprocessing. High performance is achieved through microcode optimizations afforded by the large control store, tightly coupled address and data caches, the use of internal and external 32 bit datapaths, the extensive application of both microlevel and macrolevel pipelining, and the use of specialized hardware assists.

  14. Method for refreshing a non-volatile memory

    DOEpatents

    Riekels, James E.; Schlesinger, Samuel

    2008-11-04

    A non-volatile memory and a method of refreshing a memory are described. The method includes allowing an external system to control refreshing operations within the memory. The memory may generate a refresh request signal and transmit the refresh request signal to the external system. When the external system finds an available time to process the refresh request, the external system acknowledges the refresh request and transmits a refresh acknowledge signal to the memory. The memory may also comprise a page register for reading and rewriting a data state back to the memory. The page register may comprise latches in lieu of supplemental non-volatile storage elements, thereby conserving real estate within the memory.

  15. Interference due to shared features between action plans is influenced by working memory span.

    PubMed

    Fournier, Lisa R; Behmer, Lawrence P; Stubblefield, Alexandra M

    2014-12-01

    In this study, we examined the interactions between the action plans that we hold in memory and the actions that we carry out, asking whether the interference due to shared features between action plans is due to selection demands imposed on working memory. Individuals with low and high working memory spans learned arbitrary motor actions in response to two different visual events (A and B), presented in a serial order. They planned a response to the first event (A) and while maintaining this action plan in memory they then executed a speeded response to the second event (B). Afterward, they executed the action plan for the first event (A) maintained in memory. Speeded responses to the second event (B) were delayed when it shared an action feature (feature overlap) with the first event (A), relative to when it did not (no feature overlap). The size of the feature-overlap delay was greater for low-span than for high-span participants. This indicates that interference due to overlapping action plans is greater when fewer working memory resources are available, suggesting that this interference is due to selection demands imposed on working memory. Thus, working memory plays an important role in managing current and upcoming action plans, at least for newly learned tasks. Also, managing multiple action plans is compromised in individuals who have low versus high working memory spans.

  16. Cricket: A Mapped, Persistent Object Store

    NASA Technical Reports Server (NTRS)

    Shekita, Eugene; Zwilling, Michael

    1996-01-01

    This paper describes Cricket, a new database storage system that is intended to be used as a platform for design environments and persistent programming languages. Cricket uses the memory management primitives of the Mach operating system to provide the abstraction of a shared, transactional single-level store that can be directly accessed by user applications. In this paper, we present the design and motivation for Cricket. We also present some initial performance results which show that, for its intended applications, Cricket can provide better performance than a general-purpose database storage system.

  17. Operating systems and network protocols for wireless sensor networks.

    PubMed

    Dutta, Prabal; Dunkels, Adam

    2012-01-13

    Sensor network protocols exist to satisfy the communication needs of diverse applications, including data collection, event detection, target tracking and control. Network protocols to enable these services are constrained by the extreme resource scarcity of sensor nodes-including energy, computing, communications and storage-which must be carefully managed and multiplexed by the operating system. These challenges have led to new protocols and operating systems that are efficient in their energy consumption, careful in their computational needs and miserly in their memory footprints, all while discovering neighbours, forming networks, delivering data and correcting failures.

  18. CAMS as a tool for human factors research in spaceflight

    NASA Astrophysics Data System (ADS)

    Sauer, Juergen

    2004-01-01

    The paper reviews a number of research studies that were carried out with a PC-based task environment called Cabin Air Management System (CAMS) simulating the operation of a spacecraft's life support system. As CAMS was a multiple task environment, it allowed the measurement of performance at different levels. Four task components of different priority were embedded in the task environment: diagnosis and repair of system faults, maintaining atmospheric parameters in a safe state, acknowledgement of system alarms (reaction time), and keeping a record of critical system resources (prospective memory). Furthermore, the task environment permitted the examination of different task management strategies and changes in crew member state (fatigue, anxiety, mental effort). A major goal of the research programme was to examine how crew members adapted to various forms of sub-optimal working conditions, such as isolation and confinement, sleep deprivation and noise. None of the studies provided evidence for decrements in primary task performance. However, the results showed a number of adaptive responses of crew members to adjust to the different sub-optimal working conditions. There was evidence for adjustments in information sampling strategies (usually reductions in sampling frequency) as a result of unfavourable working conditions. The results also showed selected decrements in secondary task performance. Prospective memory seemed to be somewhat more vulnerable to sub-optimal working conditions than performance on the reaction time task. Finally, suggestions are made for future research with the CAMS environment.

  19. Generalized enhanced suffix array construction in external memory.

    PubMed

    Louza, Felipe A; Telles, Guilherme P; Hoffmann, Steve; Ciferri, Cristina D A

    2017-01-01

    Suffix arrays, augmented by additional data structures, allow many string processing problems to be solved efficiently. The external memory construction of the generalized suffix array for a string collection is a fundamental task when the size of the input collection or the data structure exceeds the available internal memory. In this article we present and analyze [Formula: see text] (introduced in "External memory generalized suffix and [Formula: see text] arrays construction," Proceedings of CPM, pp. 201-10, 2013), the first external memory algorithm to construct generalized suffix arrays augmented with the longest common prefix array for a string collection. Our algorithm relies on a combination of buffers, induced sorting and a heap to avoid direct string comparisons. We performed experiments that covered different aspects of our algorithm, including running time, efficiency, external memory access, internal phases and the influence of different optimization strategies. On real datasets of size up to 24 GB and using 2 GB of internal memory, [Formula: see text] showed a competitive performance when compared to [Formula: see text] and [Formula: see text], which are efficient algorithms for a single string according to the related literature. We also show the effect of disk caching managed by the operating system on our algorithm. The proposed algorithm was validated through performance tests using real datasets from different domains, in various combinations, and showed a competitive performance. Our algorithm can also construct the generalized Burrows-Wheeler transform of a string collection with no additional cost except for the output time.
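
    As a point of reference for the data structure being built, the following small C++ program constructs a generalized suffix array with LCP values for a toy string collection entirely in memory, using naive sorting. It only illustrates the output of the construction, not the paper's external-memory, induced-sorting algorithm.

```cpp
// In-memory illustration of the data structure the paper builds externally:
// a generalized suffix array (GSA) with LCP values for a string collection.
// This naive construction is only for tiny inputs; the paper's contribution
// is doing the same job in external memory with induced sorting.
#include <algorithm>
#include <iostream>
#include <string>
#include <utility>
#include <vector>

int main() {
    std::vector<std::string> collection = {"banana", "anaba"};

    // Collect all suffixes as (string index, offset) pairs.
    std::vector<std::pair<int, int>> gsa;
    for (int s = 0; s < (int)collection.size(); ++s)
        for (int i = 0; i < (int)collection[s].size(); ++i)
            gsa.push_back({s, i});

    auto suffix = [&](const std::pair<int, int>& p) {
        return collection[p.first].substr(p.second);
    };
    std::sort(gsa.begin(), gsa.end(),
              [&](auto a, auto b) { return suffix(a) < suffix(b); });

    // LCP[i] = longest common prefix length of GSA[i] and GSA[i-1].
    auto lcp = [](const std::string& a, const std::string& b) {
        size_t k = 0;
        while (k < a.size() && k < b.size() && a[k] == b[k]) ++k;
        return k;
    };
    for (size_t i = 0; i < gsa.size(); ++i) {
        size_t l = i ? lcp(suffix(gsa[i]), suffix(gsa[i - 1])) : 0;
        std::cout << "(" << gsa[i].first << "," << gsa[i].second
                  << ") lcp=" << l << "  " << suffix(gsa[i]) << "\n";
    }
}
```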

  20. Teuchos C++ memory management classes, idioms, and related topics, the complete reference : a comprehensive strategy for safe and efficient memory management in C++ for high performance computing.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bartlett, Roscoe Ainsworth

    2010-05-01

    The ubiquitous use of raw pointers in higher-level code is the primary cause of all memory usage problems and memory leaks in C++ programs. This paper describes what might be considered a radical approach to the problem, which is to encapsulate the use of all raw pointers and all raw calls to new and delete in higher-level C++ code. Instead, a set of cooperating template classes developed in the Trilinos package Teuchos is used to encapsulate every use of raw C++ pointers in every use case where it appears in high-level code. Included in the set of memory management classes is the typical reference-counted smart pointer class similar to boost::shared_ptr (and therefore C++0x std::shared_ptr). However, what is missing in boost and the new standard library are non-reference-counted classes for the remaining use cases where raw C++ pointers would need to be used. These classes have a debug build mode where nearly all programmer errors are caught and gracefully reported at runtime. The default optimized build mode strips all runtime checks and allows the code to perform as efficiently as raw C++ pointers with reasonable usage. Also included is a novel approach for dealing with the circular references problem that imparts little extra overhead and is almost completely invisible to most of the code (unlike the boost and therefore C++0x approach). Rather than being a radical approach, encapsulating all raw C++ pointers is simply the logical progression of a trend in the C++ development and standards community that started with std::auto_ptr and is continued (but not finished) with std::shared_ptr in C++0x. Using the Teuchos reference-counted memory management classes allows one to remove unnecessary constraints in the use of objects by removing arbitrary lifetime ordering constraints, which are a type of unnecessary coupling [23]. The code one writes with these classes will be more likely to be correct on first writing, will be less likely to contain silent (but deadly) memory usage errors, and will be much more robust to later refactoring and maintenance. The level of debug-mode runtime checking provided by the Teuchos memory management classes is stronger in many respects than what is provided by memory checking tools like Valgrind and Purify, while being much less expensive. However, tools like Valgrind and Purify perform a number of types of checks (like usage of uninitialized memory) that make these tools very valuable and therefore complement the Teuchos memory management debug-mode runtime checking. The Teuchos memory management classes and idioms largely address the technical issues in resolving the fragile built-in C++ memory management model (with the exception of circular references, which has no easy solution but can be managed as discussed). All that remains is to teach these classes and idioms and expand their usage in C++ codes. The long-term viability of C++ as a usable and productive language depends on it. Otherwise, if C++ is no safer than C, is the greater complexity of C++ worth what one gets as extra features? Given that C is smaller and easier to learn than C++, and since most programmers don't know object-orientation (or templates or X, Y, and Z features of C++) all that well anyway, what are most programmers really getting out of C++ that would outweigh its extra complexity over C? C++ zealots will argue this point, but the reality is that C++ popularity has peaked and is declining while the popularity of C has remained fairly stable over the last decade. Idioms like those advocated in this paper can help to avert this trend, but it will require wide community buy-in and a change in the way C++ is taught in order to have the greatest impact. To make these programs more secure, compiler vendors or static analysis tools (e.g. Klocwork) could implement a preprocessor-like language similar to OpenMP that would allow the programmer to declare (in comments) that certain blocks of code should be "pointer-free" or allow smaller blocks to be "pointers allowed". This would significantly improve the robustness of code that uses the memory management classes described here.
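
    The core idiom is easy to illustrate with the standard reference-counted pointer the abstract itself mentions (std::shared_ptr, the C++0x descendant of boost::shared_ptr); in Trilinos the analogous reference-counted class is Teuchos::RCP, whose exact interface is not reproduced here. A minimal sketch of high-level code with no raw new, delete, or raw pointers:

```cpp
// Sketch of the idiom the report advocates: no raw new/delete and no raw
// pointers in high-level code. std::shared_ptr is used for illustration;
// Teuchos::RCP plays the corresponding role in Trilinos.
#include <iostream>
#include <memory>

struct Mesh {
    explicit Mesh(int cells) : cells(cells) {}
    int cells;
};

// High-level code takes and returns smart pointers, never Mesh*.
std::shared_ptr<Mesh> refine(const std::shared_ptr<const Mesh>& coarse) {
    return std::make_shared<Mesh>(coarse->cells * 2);
}

int main() {
    auto mesh = std::make_shared<Mesh>(100);   // encapsulated allocation
    auto fine = refine(mesh);                  // ownership stays explicit
    std::cout << "refined mesh has " << fine->cells << " cells\n";
    // No delete anywhere: reference counting reclaims both objects here.
}
```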

  1. A general model for memory interference in a multiprocessor system with memory hierarchy

    NASA Technical Reports Server (NTRS)

    Taha, Badie A.; Standley, Hilda M.

    1989-01-01

    The problem of memory interference in a multiprocessor system with a hierarchy of shared buses and memories is addressed. The behavior of the processors is represented by a sequence of memory requests with each followed by a determined amount of processing time. A statistical queuing network model for determining the extent of memory interference in multiprocessor systems with clusters of memory hierarchies is presented. The performance of the system is measured by the expected number of busy memory clusters. The results of the analytic model are compared with simulation results, and the correlation between them is found to be very high.

  2. Solutions and debugging for data consistency in multiprocessors with noncoherent caches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernstein, D.; Mendelson, B.; Breternitz, M. Jr.

    1995-02-01

    We analyze two important problems that arise in shared-memory multiprocessor systems. The stale data problem involves ensuring that data items in local memory of individual processors are current, independent of writes done by other processors. False sharing occurs when two processors have copies of the same shared data block but update different portions of the block. The false sharing problem involves guaranteeing that subsequent writes are properly combined. In modern architectures these problems are usually solved in hardware, by exploiting mechanisms for hardware controlled cache consistency. This leads to more expensive and nonscalable designs. Therefore, we are concentrating on software methods for ensuring cache consistency that would allow for affordable and scalable multiprocessing systems. Unfortunately, providing software control is nontrivial, both for the compiler writer and for the application programmer. For this reason we are developing a debugging environment that will facilitate the development of compiler-based techniques and will help the programmer to tune his or her application using explicit cache management mechanisms. We extend the notion of a race condition for IBM Shared Memory System POWER/4, taking into consideration its noncoherent caches, and propose techniques for detection of false sharing problems. Identification of the stale data problem is discussed as well, and solutions are suggested.

  3. Evolving technologies drive the new roles of Biomedical Engineering.

    PubMed

    Frisch, P H; St Germain, J; Lui, W

    2008-01-01

    Rapidly changing technology, coupled with the financial impact of organized health care, has required hospital Biomedical Engineering organizations to augment their traditional operational and business models to increase their role in developing enhanced clinical applications utilizing new and evolving technologies. The deployment of these technology based applications has required Biomedical Engineering organizations to re-organize to optimize the manner in which they provide and manage services. Memorial Sloan-Kettering Cancer Center has implemented a strategy to explore evolving technologies, integrating them into enhanced clinical applications while optimally utilizing the expertise of the traditional Biomedical Engineering component (Clinical Engineering) to provide expanded support in technology/equipment management, device repair, preventive maintenance and integration with legacy clinical systems. Specifically, Biomedical Engineering is an integral component of the Medical Physics Department, which provides comprehensive and integrated support to the Center in advanced physical, technical and engineering technology. This organizational structure emphasizes the integration and collaboration between a spectrum of technical expertise for clinical support and equipment management roles. The high cost of clinical equipment purchases coupled with the increasing cost of service has driven equipment management responsibilities to include significant business and financial aspects to provide a cost effective service model. This case study details the dynamics of these expanded roles, future initiatives and benefits for Biomedical Engineering and Memorial Sloan Kettering Cancer Center.

  4. Influence of personality and neuropsychological ability on social functioning and self-management in bipolar disorder.

    PubMed

    Vierck, Esther; Joyce, Peter R

    2015-10-30

    A majority of bipolar patients (BD) show functional difficulties even in remission. In recent years cognitive functions and personality characteristics have been associated with occupational and psychosocial outcomes, but findings are not consistent. We assessed personality and cognitive functioning through a range of tests in BD and control participants. Three cognitive domains-verbal memory, facial-executive, and spatial memory-were extracted by principal component analysis. These factors and selected personality dimensions were included in hierarchical regression analysis to predict psychosocial functioning and the use of self-management strategies while controlling for mood status. The best determinants of good psychosocial functioning were good verbal memory and high self-directedness. The use of self-management techniques was associated with a low level of harm-avoidance. Our findings indicate that strategies to improve memory and self-directedness may be useful for increasing functioning in individuals with bipolar disorder. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  5. The POIS (Parkland On-Line Information System) Implementation of the IBM Health Care Support/Patient Care System

    PubMed Central

    Mishelevich, David J.; Hudson, Betty G.; Van Slyke, Donald; Mize, Elaine I.; Robinson, Anna L.; Brieden, Helen C.; Atkinson, Jack; Robertson, James

    1980-01-01

    The installation of major components of a comprehensive Hospital Information System (HIS) called POIS, the Parkland On-line Information System, including identified success factors, is described for the Dallas County Hospital District (DCHD), known also as the Parkland Memorial Hospital. Installation of the on-line IBM Health Care Support (HCS) Registration and Admissions Packages occurred in 1976, and implementation of the HCS Patient Care System (PCS) began in 1977, which includes on-line support of health care areas such as nursing stations and ancillary areas. The Duke Hospital Information System (DHIS) is marketed as the IBM HCS/Patient Care System (PCS). DCHD was the validation site. POIS has order entry, result reporting and work management components. While most of the patient care components are currently installed for the inpatient service, the Laboratories are being installed for the outpatient and Emergency areas as well. The Clinic Appointment System developed at the University of Michigan is also installed. The HCS family of programs uses DL/1 and CICS and was installed in the OS versions, currently running under MVS on an IBM 370/168 Model 3 with 8 megabytes of main memory.

  6. Hold-up power supply for flash memory

    NASA Technical Reports Server (NTRS)

    Ott, William E. (Inventor)

    2004-01-01

    A hold-up power supply for flash memory systems is provided. The hold-up power supply provides the flash memory with the power needed to temporarily operate when a power loss exists. This allows the flash memory system to complete any erasures and writes, and thus allows it to shut down gracefully. The hold-up power supply detects when a power loss on a power supply bus is occurring and supplies the power needed for the flash memory system to temporarily operate. The hold-up power supply stores power in at least one capacitor. During normal operation, power from a high voltage supply bus is used to charge the storage capacitors. When a power supply loss is detected, the power supply bus is disconnected from the flash memory system. A hold-up controller controls the power flow from the storage capacitors to the flash memory system. The hold-up controller uses feedback to assure that the proper voltage is provided from the storage capacitors to the flash memory system. This power supplied by the storage capacitors allows the flash memory system to complete any erasures and writes, and thus allows the flash memory system to shut down gracefully.

  7. 76 FR 24409 - Proposed Amendment of Class E Airspace; Ava, MO

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-02

    ...) at Bill Martin Memorial Airport, Ava, MO, has made this action necessary for the safety and management of Instrument Flight Rules (IFR) operations at Bill Martin Memorial Airport. DATES: Comments must... from 700 feet above the surface for standard instrument approach procedures at Bill Martin Memorial...

  8. Dairy Herd On-line Information System

    NASA Astrophysics Data System (ADS)

    Takahashi, Satoshi

    As business circumstances have worsened, computer-based breeding management grounded in scientific data has become necessary for dairy farming in our country. It was therefore urgent to construct a system that provides data that dairy farmers can use effectively in the field. The Federation provides dairy farming technical data promptly through its own on-line network, composed of a medium-sized general-purpose computer (main memory: 5 MB, fixed disk: 1100 MB) and 22 terminals.

  9. Compiler-directed cache management in multiprocessors

    NASA Technical Reports Server (NTRS)

    Cheong, Hoichi; Veidenbaum, Alexander V.

    1990-01-01

    The necessity of finding alternatives to hardware-based cache coherence strategies for large-scale multiprocessor systems is discussed. Three different software-based strategies sharing the same goals and general approach are presented. They consist of a simple invalidation approach, a fast selective invalidation scheme, and a version control scheme. The strategies are suitable for shared-memory multiprocessor systems with interconnection networks and a large number of processors. Results of trace-driven simulations conducted on numerical benchmark routines to compare the performance of the three schemes are presented.

  10. A 128K-bit CCD buffer memory system

    NASA Technical Reports Server (NTRS)

    Siemens, K. H.; Wallace, R. W.; Robinson, C. R.

    1976-01-01

    A prototype system was implemented to demonstrate that CCDs can be applied advantageously to the problem of low-power digital storage and particularly to the problem of interfacing widely varying data rates. 8K-bit CCD shift register memories were used to construct a feasibility-model 128K-bit buffer memory system. Peak power dissipation during a data transfer is less than 7 W, while idle power is approximately 5.4 W. The system features automatic data input synchronization with the recirculating CCD memory block start address. Descriptions are provided of both the buffer memory system and a custom tester that was used to exercise the memory. The testing procedures and results are discussed. Suggestions are provided for further development regarding the use of advanced CCD memory devices in both simplified and expanded memory system applications.

  11. System and method for programmable bank selection for banked memory subsystems

    DOEpatents

    Blumrich, Matthias A.; Chen, Dong; Gara, Alan G.; Giampapa, Mark E.; Hoenicke, Dirk; Ohmacht, Martin; Salapura, Valentina; Sugavanam, Krishnan

    2010-09-07

    A programmable memory system and method for enabling one or more processor devices access to shared memory in a computing environment, the shared memory including one or more memory storage structures having addressable locations for storing data. The system comprises: one or more first logic devices associated with a respective one or more processor devices, each first logic device for receiving physical memory address signals and programmable for generating a respective memory storage structure select signal upon receipt of pre-determined address bit values at selected physical memory address bit locations; and, a second logic device responsive to each respective select signal for generating an address signal used for selecting a memory storage structure for processor access. The system thus enables each processor device of a computing environment to access memory storage distributed across the one or more memory storage structures.
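
    A simple way to picture the first logic device is as a programmable mask-and-shift over the physical address. The sketch below uses a hypothetical field layout (address bits [14:13] selecting among four banks) purely for illustration; it is not the patented circuit.

```cpp
// Minimal sketch of programmable bank selection: a configurable mask/shift
// extracts pre-determined bits from the physical address, and the extracted
// value selects which memory bank (storage structure) services the access.
#include <cstdint>
#include <iostream>

struct BankSelector {
    uint64_t mask;   // which physical-address bits participate in selection
    unsigned shift;  // position of the least-significant selected bit

    unsigned bankFor(uint64_t physicalAddress) const {
        return static_cast<unsigned>((physicalAddress & mask) >> shift);
    }
};

int main() {
    // Programmed so that address bits [14:13] choose among four banks.
    BankSelector sel{0x6000, 13};
    for (uint64_t addr : {0x0000ULL, 0x2000ULL, 0x4000ULL, 0x6000ULL})
        std::cout << "address 0x" << std::hex << addr << std::dec
                  << " -> bank " << sel.bankFor(addr) << "\n";
}
```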

  12. NASA Electronic Library System (NELS) optimization

    NASA Technical Reports Server (NTRS)

    Pribyl, William L.

    1993-01-01

    This is a compilation of NELS (NASA Electronic Library System) Optimization progress/problem, interim, and final reports for all phases. The NELS database was examined, particularly its memory use, disk contention, and CPU load, to identify bottlenecks. Methods to increase the speed of the NELS code were investigated. The tasks included restructuring the existing code so that its components interact more effectively. Error-reporting code was added to help detect and remove bugs in NELS. Report-writing tools were recommended for integration with the ASV3 system. The Oracle database management system and tools were to be installed on a Sun workstation, intended for demonstration purposes.

  13. Scalable Triadic Analysis of Large-Scale Graphs: Multi-Core vs. Multi-Processor vs. Multi-Threaded Shared Memory Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chin, George; Marquez, Andres; Choudhury, Sutanay

    2012-09-01

    Triadic analysis encompasses a useful set of graph mining methods that is centered on the concept of a triad, which is a subgraph of three nodes and the configuration of directed edges across the nodes. Such methods are often applied in the social sciences as well as many other diverse fields. Triadic methods commonly operate on a triad census that counts the number of triads of every possible edge configuration in a graph. Like other graph algorithms, triadic census algorithms do not scale well when graphs reach tens of millions to billions of nodes. To enable the triadic analysis of large-scale graphs, we developed and optimized a triad census algorithm to efficiently execute on shared memory architectures. We will retrace the development and evolution of a parallel triad census algorithm. Over the course of several versions, we continually adapted the code’s data structures and program logic to expose more opportunities to exploit parallelism on shared memory that would translate into improved computational performance. We will recall the critical steps and modifications that occurred during code development and optimization. Furthermore, we will compare the performances of triad census algorithm versions on three specific systems: Cray XMT, HP Superdome, and AMD multi-core NUMA machine. These three systems have shared memory architectures but with markedly different hardware capabilities to manage parallelism.
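
    For readers unfamiliar with the underlying computation, the following naive, sequential C++ sketch classifies every unordered node triple by its 6-bit directed-edge pattern; a full census then folds these 64 raw patterns into the 16 triad isomorphism classes. It illustrates only what is being counted, not the paper's parallel shared-memory algorithm, and the tiny example graph is made up for demonstration.

```cpp
// Naive O(n^3) illustration of a triad census: every unordered triple of
// nodes is binned by which of its six possible directed edges exist. A full
// census maps these 64 raw edge patterns onto the 16 triad classes; the
// paper's contribution is doing this in parallel for very large graphs.
#include <array>
#include <cstdint>
#include <iostream>
#include <set>
#include <utility>

int main() {
    int n = 4;
    std::set<std::pair<int, int>> edges = {{0, 1}, {1, 2}, {2, 0}, {0, 3}};
    auto has = [&](int a, int b) { return edges.count({a, b}) != 0; };

    std::array<uint64_t, 64> census{};  // counts indexed by 6-bit edge pattern
    for (int u = 0; u < n; ++u)
        for (int v = u + 1; v < n; ++v)
            for (int w = v + 1; w < n; ++w) {
                unsigned code = (has(u, v) << 0) | (has(v, u) << 1) |
                                (has(u, w) << 2) | (has(w, u) << 3) |
                                (has(v, w) << 4) | (has(w, v) << 5);
                ++census[code];
            }

    for (unsigned code = 0; code < 64; ++code)
        if (census[code])
            std::cout << "edge pattern " << code << ": " << census[code]
                      << " triad(s)\n";
}
```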

  14. A microprocessor controlled pressure scanning system

    NASA Technical Reports Server (NTRS)

    Anderson, R. C.

    1976-01-01

    A microprocessor-based controller and data logger for pressure scanning systems is described. The microcomputer positions and manages data from as many as four 48-port electro-mechanical pressure scanners. The maximum scanning rate is 80 pressure measurements per second (20 ports per second on each of four scanners). The system features on-line calibration, position-directed data storage, and once-per-scan display in engineering units of data from a selected port. The system is designed to be interfaced to a facility computer through a shared memory. System hardware and software are described. Factors affecting measurement error in this type of system are also discussed.

  15. Memory Systems Do Not Divide on Consciousness: Reinterpreting Memory in Terms of Activation and Binding

    PubMed Central

    Reder, Lynne M.; Park, Heekyeong; Kieffaber, Paul D.

    2009-01-01

    There is a popular hypothesis that performance on implicit and explicit memory tasks reflects 2 distinct memory systems. Explicit memory is said to store those experiences that can be consciously recollected, and implicit memory is said to store experiences and affect subsequent behavior but to be unavailable to conscious awareness. Although this division based on awareness is a useful taxonomy for memory tasks, the authors review the evidence that the unconscious character of implicit memory does not necessitate that it be treated as a separate system of human memory. They also argue that some implicit and explicit memory tasks share the same memory representations and that the important distinction is whether the task (implicit or explicit) requires the formation of a new association. The authors review and critique dissociations from the behavioral, amnesia, and neuroimaging literatures that have been advanced in support of separate explicit and implicit memory systems by highlighting contradictory evidence and by illustrating how the data can be accounted for using a simple computational memory model that assumes the same memory representation for those disparate tasks. PMID:19210052

  16. Formal verification of a set of memory management units

    NASA Technical Reports Server (NTRS)

    Schubert, E. Thomas; Levitt, K.; Cohen, Gerald C.

    1992-01-01

    This document describes the verification of a set of memory management units (MMU). The verification effort demonstrates the use of hierarchical decomposition and abstract theories. The MMUs can be organized into a complexity hierarchy. Each new level in the hierarchy adds a few significant features or modifications to the lower level MMU. The units described include: (1) a page check translation look-aside module (TLM); (2) a page check TLM with supervisor line; (3) a base bounds MMU; (4) a virtual address translation MMU; and (5) a virtual address translation MMU with memory resident segment table.

  17. Evict on write, a management strategy for a prefetch unit and/or first level cache in a multiprocessor system with speculative execution

    DOEpatents

    Gara, Alan; Ohmacht, Martin

    2014-09-16

    In a multiprocessor system with at least two levels of cache, a speculative thread may run on a core processor in parallel with other threads. When the thread seeks to do a write to main memory, this access is to be written through the first level cache to the second level cache. After the write-through, the corresponding line is deleted from the first level cache and/or prefetch unit, so that any further accesses to the same location in main memory have to be retrieved from the second level cache. The second level cache keeps track of multiple versions of data, where more than one speculative thread is running in parallel, while the first level cache does not have any of the versions during speculation. A switch allows choosing between modes of operation of a speculation-blind first level cache.
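
    A software analogy of the evict-on-write policy (not the patented hardware) is sketched below: a speculative store writes through to the second-level cache and drops the first-level copy, so subsequent loads of that address must consult the version-tracking second level.

```cpp
// Simplified model of "evict on write": a store writes through to L2 and then
// invalidates the L1 copy, so later loads of that address go to L2, which is
// where speculative versions would be tracked.
#include <cstdint>
#include <iostream>
#include <unordered_map>

struct TwoLevelCache {
    std::unordered_map<uint64_t, int> l1;  // first-level cache / prefetch data
    std::unordered_map<uint64_t, int> l2;  // second-level (version-tracking) cache

    void speculativeStore(uint64_t addr, int value) {
        l2[addr] = value;   // write through to the second level
        l1.erase(addr);     // evict on write: drop the first-level copy
    }

    int load(uint64_t addr) {
        auto hit = l1.find(addr);
        if (hit != l1.end()) return hit->second;  // L1 hit
        int value = l2.at(addr);                  // otherwise fetch from L2
        l1[addr] = value;                         // refill L1
        return value;
    }
};

int main() {
    TwoLevelCache cache;
    cache.l1[0x40] = 7;                      // line initially present in L1
    cache.speculativeStore(0x40, 8);         // write-through plus L1 eviction
    std::cout << cache.load(0x40) << "\n";   // served from L2 after eviction
}
```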

  18. [An atypical form of neurosarcoidosis].

    PubMed

    Quenardelle, V; Benmekhbi, M; Aupy, J; Dalvit, C; Hirsch, E; Benoilid, A

    2013-12-01

    Nervous system involvement occurs in 5 to 15% of the patients with sarcoidosis. Neurosarcoidosis remains very difficult to diagnose because clinical presentation and imaging characteristics lack specificity. We report a 26-year-old man who gradually developed headaches, memory disturbance and epilepsy. CT-scan and MRI showed a temporal-parietal cystic mass, secondary to a rare and focal form of hydrocephalus, called "trapped temporal horn", revealing neurosarcoidosis. The "entrapped temporal horn" is due to an obstacle on the cerebrospinal fluid pathway at the trigone of the lateral ventricle that seals off the temporal horn and the choroid plexus from the rest of the ventricular system. The obstacle is related to the granulomatous tissue of sarcoidosis. Therefore, the "trapped temporal horn" acts as a space occupying process, causing headaches, memory impairment, hemiparesis, homonymous hemianopsia, and requires medico-surgical management. Copyright © 2013 Société nationale française de médecine interne (SNFMI). Published by Elsevier SAS. All rights reserved.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, K.; Tsai, H.; Liu, Y. Y.

    Radio frequency identification (RFID) is one of today's most rapidly growing technologies in the automatic data collection industry. Although commercial applications are already widespread, the use of this technology for managing nuclear materials is only in its infancy. Employing an RFID system has the potential to offer an immense payback: enhanced safety and security, reduced need for manned surveillance, real-time access to status and event history data, and overall cost-effectiveness. The Packaging Certification Program (PCP) in the U.S. Department of Energy's (DOE's) Office of Environmental Management (EM), Office of Packaging and Transportation (EM-63), is developing an RFID system for nuclear materials management. The system consists of battery-powered RFID tags with onboard sensors and memories, a reader network, application software, a database server and web pages. The tags monitor and record critical parameters, including the status of seals, movement of objects, and environmental conditions of the nuclear material packages in real time. They also provide instant warnings or alarms when preset thresholds for the sensors are exceeded. The information collected by the readers is transmitted to a dedicated central database server that can be accessed by authorized users across the DOE complex via a secured network. The onboard memory of the tags allows the materials manifest and event history data to reside with the packages throughout their life cycles in storage, transportation, and disposal. Data security is currently based on Advanced Encryption Standard-256. The software provides easy-to-use graphical interfaces that allow access to all vital information once the security and privilege requirements are met. An innovative scheme has been developed for managing batteries in service for more than 10 years without needing to be changed. A miniature onboard dosimeter is being developed for applications that require radiation surveillance. A field demonstration of the RFID system was recently conducted to assess its performance. The preliminary results of the demonstration are reported in this paper.

  20. Stress Effects on Multiple Memory System Interactions

    PubMed Central

    Ness, Deborah; Calabrese, Pasquale

    2016-01-01

    Extensive behavioural, pharmacological, and neurological research reports stress effects on mammalian memory processes. While stress effects on memory quantity have been known for decades, the influence of stress on multiple memory systems and their distinct contributions to the learning process have only recently been described. In this paper, after summarizing the fundamental biological aspects of stress/emotional arousal and recapitulating functionally and anatomically distinct memory systems, we review recent animal and human studies exploring the effects of stress on multiple memory systems. Apart from discussing the interaction between distinct memory systems in stressful situations, we will also outline the fundamental role of the amygdala in mediating such stress effects. Additionally, based on the methods applied in the herein discussed studies, we will discuss how memory translates into behaviour. PMID:27034845

  1. Transactive memory in organizational groups: the effects of content, consensus, specialization, and accuracy on group performance.

    PubMed

    Austin, John R

    2003-10-01

    Previous research on transactive memory has found a positive relationship between transactive memory system development and group performance in single project laboratory and ad hoc groups. Closely related research on shared mental models and expertise recognition supports these findings. In this study, the author examined the relationship between transactive memory systems and performance in mature, continuing groups. A group's transactive memory system, measured as a combination of knowledge stock, knowledge specialization, transactive memory consensus, and transactive memory accuracy, is positively related to group goal performance, external group evaluations, and internal group evaluations. The positive relationship with group performance was found to hold for both task and external relationship transactive memory systems.

  2. Health-care process improvement decisions: a systems perspective.

    PubMed

    Walley, Paul; Silvester, Kate; Mountford, Shaun

    2006-01-01

    The paper seeks to investigate decision-making processes within hospital improvement activity, to understand how performance measurement systems influence decisions and potentially lead to unsuccessful or unsustainable process changes. A longitudinal study over a 33-month period investigates key events, decisions and outcomes at one medium-sized hospital in the UK. Process improvement events are monitored using process control methods and by direct observation. The authors took a systems perspective of the health-care processes, ensuring that the impacts of decisions across the health-care supply chain were appropriately interpreted. The research uncovers the ways in which measurement systems disguise failed decisions and encourage managers to take a low-risk approach of "symptomatic relief" when trying to improve performance metrics. This prevents many managers from trying higher risk, sustainable process improvement changes. The behaviour of the health-care system is not understood by many managers and this leads to poor analysis of problem situations. Measurement using time-series methodologies, such as statistical process control are vital for a better understanding of the systems impact of changes. Senior managers must also be aware of the behavioural influence of similar performance measurement systems that discourage sustainable improvement. There is a risk that such experiences will tarnish the reputation of performance management as a discipline. Recommends process control measures as a way of creating an organization memory of how decisions affect performance--something that is currently lacking.

  3. Serotonin is critical for rewarded olfactory short-term memory in Drosophila.

    PubMed

    Sitaraman, Divya; LaFerriere, Holly; Birman, Serge; Zars, Troy

    2012-06-01

    The biogenic amines dopamine, octopamine, and serotonin are critical in establishing normal memories. A common view for the amines in insect memory performance has emerged in which dopamine and octopamine are largely responsible for aversive and appetitive memories. Examination of the function of serotonin begins to challenge the notion of one amine type per memory because altering serotonin function also reduces aversive olfactory memory and place memory levels. Could the function of serotonin be restricted to the aversive domain, suggesting a more specific dopamine/serotonin system interaction? The function of the serotonergic system in appetitive olfactory memory was examined. By targeting the tetanus toxin light chain (TNT) and the human inwardly rectifying potassium channel (Kir2.1) to the serotonin neurons with two different GAL4 driver combinations, the serotonergic system was inhibited. Additional use of the GAL80(ts1) system to control expression of transgenes to the adult stage of the life cycle addressed a potential developmental role of serotonin in appetitive memory. Reduction in appetitive olfactory memory performance in flies with these transgenic manipulations, without altering control behaviors, showed that the serotonergic system is also required for normal appetitive memory. Thus, serotonin appears to have a more general role in Drosophila memory, and implies an interaction with both the dopaminergic and octopaminergic systems.

  4. Promoting and maintaining diversity in contemporary hardwood forests: Confronting contemporary drivers of change and the loss of ecological memory

    Treesearch

    Christopher R. Webster; Yvette L. Dickinson; Julia I. Burton; Lee E. Frelich; Michael A. Jenkins; Christel C. Kern; Patricia Raymond; Michael R. Saunders; Michael B. Walters; John L. Willis

    2018-01-01

    Declines in the diversity of herbaceous and woody plant species in the understory of eastern North American hardwood forests are increasingly common. Forest managers are tasked with maintaining and/or promoting species diversity and resilience; however, the success of these efforts depends on a robust understanding of past and future system dynamics and identification...

  5. Effects of long memory in the order submission process on the properties of recurrence intervals of large price fluctuations

    NASA Astrophysics Data System (ADS)

    Meng, Hao; Ren, Fei; Gu, Gao-Feng; Xiong, Xiong; Zhang, Yong-Jie; Zhou, Wei-Xing; Zhang, Wei

    2012-05-01

    Understanding the statistical properties of recurrence intervals (also termed return intervals in econophysics literature) of extreme events is crucial to risk assessment and management of complex systems. The probability distributions and correlations of recurrence intervals for many systems have been extensively investigated. However, the impacts of microscopic rules of a complex system on the macroscopic properties of its recurrence intervals are less studied. In this letter, we adopt an order-driven stock model to address this issue for stock returns. We find that the distributions of the scaled recurrence intervals of simulated returns have a power-law scaling with stretched exponential cutoff and the intervals possess multifractal nature, which are consistent with empirical results. We further investigate the effects of long memory in the directions (or signs) and relative prices of the order flow on the characteristic quantities of these properties. It is found that the long memory in the order directions (Hurst index Hs) has a negligible effect on the interval distributions and the multifractal nature. In contrast, the power-law exponent of the interval distribution increases linearly with respect to the Hurst index Hx of the relative prices, and the singularity width of the multifractal nature fluctuates around a constant value when Hx<0.7 and then increases with Hx. No evident effects of Hs and Hx are found on the long memory of the recurrence intervals. Our results indicate that the nontrivial properties of the recurrence intervals of returns are mainly caused by traders' behaviors of persistently placing new orders around the best bid and ask prices.
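
    The distributional form referred to in the abstract can be written generically as a power law with a stretched-exponential cutoff. The symbols below are illustrative placeholders rather than the paper's fitted values; the abstract reports that the power-law exponent grows roughly linearly with the Hurst index Hx of the relative prices.

```latex
% Generic power law with stretched-exponential cutoff for the scaled
% recurrence intervals \tau (exponents gamma, beta and scale tau_c are
% placeholders, not the paper's fitted parameters).
\[
  P(\tau) \;\propto\; \tau^{-\gamma}\,
  \exp\!\left[-\left(\frac{\tau}{\tau_c}\right)^{\beta}\right]
\]
```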

  6. Memory of irrigation effects on hydroclimate and its modeling challenge

    NASA Astrophysics Data System (ADS)

    Chen, Fei; Xu, Xiaoyu; Barlage, Michael; Rasmussen, Roy; Shen, Shuanghe; Miao, Shiguang; Zhou, Guangsheng

    2018-06-01

    Irrigation modifies land-surface water and energy budgets, and also influences weather and climate. However, current earth-system models, used for weather prediction and climate projection, are still at an early stage in representing irrigation effects. This study used long-term data collected from two contrasting (irrigated and rainfed) nearby maize-soybean rotation fields to study the effects of irrigation memory on local hydroclimate. For a 12 year average, irrigation decreases summer surface-air temperature by less than 1 °C and increases surface humidity by 0.52 g kg^-1. The irrigation cooling effect is more pronounced and longer lasting for maize than for soybean. Irrigation reduces maximum, minimum, and averaged temperature over maize by more than 0.5 °C for the first six days after irrigation, but its temperature effect over soybean is mixed and negligible two or three days after irrigation. Irrigation increases near-surface humidity over maize by about 1 g kg^-1 up to ten days and increases surface humidity over soybean (~0.8 g kg^-1) with a similar memory. These differing effects of irrigation memory on temperature and humidity are associated with respective changes in the surface sensible and latent heat fluxes for maize and soybean. These findings highlight the great need and challenges for earth-system models to realistically simulate how irrigation effects vary with crop species and with crop growth stages, and to capture complex interactions between agricultural management and water-system components (crop transpiration, precipitation, rivers, reservoirs, lakes, groundwater, etc.) at various spatial and temporal scales.

  7. [Errors in medicine. Causes, impact and improvement measures to improve patient safety].

    PubMed

    Waeschle, R M; Bauer, M; Schmidt, C E

    2015-09-01

    The guarantee of quality of care and patient safety is of major importance in hospitals even though increased economic pressure and work intensification are ubiquitously present. Nevertheless, adverse events still occur in 3-4 % of hospital stays and of these 25-50 % are estimated to be avoidable. The identification of possible causes of error and the development of measures for the prevention of medical errors are essential for patient safety. The implementation and continuous development of a constructive culture of error tolerance are fundamental. The origins of errors can be differentiated into systemic latent and individual active causes and components of both categories are typically involved when an error occurs. Systemic causes are, for example, out of date structural environments, lack of clinical standards and low personnel density. These causes arise far away from the patient, e.g. management decisions, and can remain unrecognized for a long time. Individual causes involve, e.g. confirmation bias, error of fixation and prospective memory failure. These causes have a direct impact on patient care and can result in immediate injury to patients. Stress, unclear information, complex systems and a lack of professional experience can promote individual causes. Awareness of possible causes of error is a fundamental precondition to establishing appropriate countermeasures. Error prevention should include actions directly affecting the causes of error and includes checklists and standard operating procedures (SOP) to avoid fixation and prospective memory failure and team resource management to improve communication and the generation of collective mental models. Critical incident reporting systems (CIRS) provide the opportunity to learn from previous incidents without resulting in injury to patients. Information technology (IT) support systems, such as the computerized physician order entry system, assist in the prevention of medication errors by providing information on dosage, pharmacological interactions, side effects and contraindications of medications. The major challenges for quality and risk management, for the heads of departments and the executive board, is the implementation and support of the described actions and a sustained guidance of the staff involved in the modification management process. The global trigger tool is suitable for improving transparency and objectifying the frequency of medical errors.

  8. MEMORY MODULATION

    PubMed Central

    Roozendaal, Benno; McGaugh, James L.

    2011-01-01

    Our memories are not all created equally strong: Some experiences are well remembered while others are remembered poorly, if at all. Research on memory modulation investigates the neurobiological processes and systems that contribute to such differences in the strength of our memories. Extensive evidence from both animal and human research indicates that emotionally significant experiences activate hormonal and brain systems that regulate the consolidation of newly acquired memories. These effects are integrated through noradrenergic activation of the basolateral amygdala which regulates memory consolidation via interactions with many other brain regions involved in consolidating memories of recent experiences. Modulatory systems not only influence neurobiological processes underlying the consolidation of new information, but also affect other mnemonic processes, including memory extinction, memory recall and working memory. In contrast to their enhancing effects on consolidation, adrenal stress hormones impair memory retrieval and working memory. Such effects, as with memory consolidation, require noradrenergic activation of the basolateral amygdala and interactions with other brain regions. PMID:22122145

  9. A Real-Time Image Acquisition And Processing System For A RISC-Based Microcomputer

    NASA Astrophysics Data System (ADS)

    Luckman, Adrian J.; Allinson, Nigel M.

    1989-03-01

    A low cost image acquisition and processing system has been developed for the Acorn Archimedes microcomputer. Using a Reduced Instruction Set Computer (RISC) architecture, the ARM (Acorn Risc Machine) processor provides instruction speeds suitable for image processing applications. The associated improvement in data transfer rate has allowed real-time video image acquisition without the need for frame-store memory external to the microcomputer. The system is comprised of real-time video digitising hardware which interfaces directly to the Archimedes memory, and software to provide an integrated image acquisition and processing environment. The hardware can digitise a video signal at up to 640 samples per video line with programmable parameters such as sampling rate and gain. Software support includes a work environment for image capture and processing with pixel, neighbourhood and global operators. A friendly user interface is provided with the help of the Archimedes Operating System WIMP (Windows, Icons, Mouse and Pointer) Manager. Windows provide a convenient way of handling images on the screen and program control is directed mostly by pop-up menus.

  10. How Managers' everyday decisions create or destroy your company's strategy.

    PubMed

    Bower, Joseph L; Gilbert, Clark G

    2007-02-01

    Senior executives have long been frustrated by the disconnection between the plans and strategies they devise and the actual behavior of the managers throughout the company. This article approaches the problem from the ground up, recognizing that every time a manager allocates resources, that decision moves the company either into or out of alignment with its announced strategy. A well-known story--Intel's exit from the memory business--illustrates this point. When discussing what businesses Intel should be in, Andy Grove asked Gordon Moore what they would do if Intel were a company that they had just acquired. When Moore answered, "Get out of memory," they decided to do just that. It turned out, though, that Intel's revenues from memory were by this time only 4% of total sales. Intel's lower-level managers had already exited the business. What Intel hadn't done was to shut down the flow of research funding into memory (which was still eating up one-third of all research expenditures); nor had the company announced its exit to the outside world. Because divisional and operating managers-as well as customers and capital markets-have such a powerful impact on the realized strategy of the firm, senior management might consider focusing less on the company's formal strategy and more on the processes by which the company allocates resources. Top managers must know the track record of the people who are making resource allocation proposals; recognize the strategic issues at stake; reach down to operational managers to work across division lines; frame resource questions to reflect the corporate perspective, especially when large sums of money are involved and conditions are highly uncertain; and create a new context that allows top executives to circumvent the regular resource allocation process when necessary.

  11. Fast Initialization of Bubble-Memory Systems

    NASA Technical Reports Server (NTRS)

    Looney, K. T.; Nichols, C. D.; Hayes, P. J.

    1986-01-01

    The improved scheme is several orders of magnitude faster than the normal initialization scheme. A state-of-the-art commercial bubble-memory device was used. A hardware interface was designed to connect the controlling microprocessor to the bubble-memory circuitry. System software was written to exercise various functions of the bubble-memory system, and a comparison was made between the normal and fast techniques. Future implementations of the approach could utilize E2PROM (electrically erasable programmable read-only memory) to provide greater system flexibility. The fast-initialization technique is applicable to all bubble-memory devices.

  12. The Sleep Elaboration-Awake Pruning (SEAP) theory of memory: long term memories grow in complexity during sleep and undergo selection while awake. Clinical, psychopharmacological and creative implications.

    PubMed

    Charlton, Bruce G; Andras, Peter

    2009-07-01

    Long term memory (LTM) systems need to be adaptive such that they enhance an organism's reproductive fitness and self-reproducing in order to maintain their complexity of communications over time in the face of entropic loss of information. Traditional 'representation-consolidation' accounts conceptualize memory adaptiveness as due to memories being 'representations' of the environment, and the longevity of memories as due to 'consolidation' processes. The assumption is that memory representations are formed while an animal is awake and interacting with the environment, and these memories are consolidated mainly while the animal is asleep. So the traditional view of memory is 'instructionist' and assumes that information is transferred from the environment into the brain. By contrast, we see memories as arising endogenously within the brain's LTM system mainly during sleep, to create complex but probably maladaptive memories which are then simplified ('pruned') and selected during the awake period. When awake the LTM system is brought into a more intense interaction with past and present experience. Ours is therefore a 'selectionist' account of memory, and could be termed the Sleep Elaboration-Awake Pruning (or SEAP) theory. The SEAP theory explains the longevity of memories in the face of entropy by the tendency for memories to grow in complexity during sleep; and explains the adaptiveness of memory by selection for consistency with perceptions and previous memories during the awake state. Sleep is therefore that behavioural state during which most of the internal processing of the system of LTM occurs; and the reason sleep remains poorly understood is that its primary activity is the expansion of long term memories. By re-conceptualizing the relationship between memory, sleep and the environment; SEAP provides a radically new framework for memory research, with implications for the measurement of memory and the design of empirical investigations in clinical, psychopharmacological and creative domains. For example, it would be predicted that states of insufficient alertness such as delirium would produce errors of commission (memory distortion and false memories, as with psychotic delusions), while sleep deprivation would produce errors of memory omission (memory loss). Ultimately, the main argument in favour of SEAP is that long term memory must be a complex adaptive system, and complex systems arise, are selected and sustained according to the principles of systems theory; and therefore LTM cannot be functioning in the way assumed by 'representation-consolidation' theories.

  13. A memory efficient user interface for CLIPS micro-computer applications

    NASA Technical Reports Server (NTRS)

    Sterle, Mark E.; Mayer, Richard J.; Jordan, Janice A.; Brodale, Howard N.; Lin, Min-Jin

    1990-01-01

    The goal of the Integrated Southern Pine Beetle Expert System (ISPBEX) is to provide expert level knowledge concerning treatment advice that is convenient and easy to use for Forest Service personnel. ISPBEX was developed in CLIPS and delivered on an IBM PC AT class micro-computer, operating with an MS/DOS operating system. This restricted the size of the run time system to 640K. In order to provide a robust expert system, with on-line explanation, help, and alternative actions menus, as well as features that allow the user to back up or execute 'what if' scenarios, a memory efficient menuing system was developed to interface with the CLIPS programs. By robust, we mean an expert system that (1) is user friendly, (2) provides reasonable solutions for a wide variety of domain specific problems, (3) explains why some solutions were suggested but others were not, and (4) provides technical information relating to the problem solution. Several advantages were gained by using this type of user interface (UI). First, by storing the menus on the hard disk (instead of main memory) during program execution, a more robust system could be implemented. Second, since the menus were built rapidly, development time was reduced. Third, the user may try a new scenario by backing up to any of the input screens and revising segments of the original input without having to retype all the information. And fourth, asserting facts from the menus provided for a dynamic and flexible fact base. This UI technology has been applied successfully in expert systems applications in forest management, agriculture, and manufacturing. This paper discusses the architecture of the UI system, human factors considerations, and the menu syntax design.

  14. Ruggedized minicomputer hardware and software topics, 1981: Proceedings of the 4th ROLM MIL-SPEC Computer User's Group Conference

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Presentations of a conference on the use of ruggedized minicomputers are summarized. The following topics are discussed: (1) the role of minicomputers in the development and/or certification of commercial or military airplanes in both the United States and Europe; (2) generalized software error detection techniques; (3) real time software development tools; (4) a redundancy management research tool for aircraft navigation/flight control sensors; (5) extended memory management techniques using a high order language; and (6) some comments on establishing a system maintenance scheme. Copies of presentation slides are also included.

  15. The influence of cannabinoids on learning and memory processes of the dorsal striatum.

    PubMed

    Goodman, Jarid; Packard, Mark G

    2015-11-01

    Extensive evidence indicates that the mammalian endocannabinoid system plays an integral role in learning and memory. Our understanding of how cannabinoids influence memory comes predominantly from studies examining cognitive and emotional memory systems mediated by the hippocampus and amygdala, respectively. However, recent evidence suggests that cannabinoids also affect habit or stimulus-response (S-R) memory mediated by the dorsal striatum. Studies implementing a variety of maze tasks in rats indicate that systemic or intra-dorsolateral striatum infusions of cannabinoid receptor agonists or antagonists impair habit memory. In mice, cannabinoid 1 (CB1) receptor knockdown can enhance or impair habit formation, whereas Δ(9)THC tolerance enhances habit formation. Studies in human cannabis users also suggest an enhancement of S-R/habit memory. A tentative conclusion based on the available data is that acute disruption of the endocannabinoid system with either agonists or antagonists impairs, whereas chronic cannabinoid exposure enhances, dorsal striatum-dependent S-R/habit memory. CB1 receptors are required for multiple forms of striatal synaptic plasticity implicated in memory, including short-term and long-term depression. Interactions with the hippocampus-dependent memory system may also have a role in some of the observed effects of cannabinoids on habit memory. The impairing effect often observed with acute cannabinoid administration argues for cannabinoid-based treatments for human psychopathologies associated with a dysfunctional habit memory system (e.g. post-traumatic stress disorder and drug addiction/relapse). In addition, the enhancing effect of repeated cannabinoid exposure on habit memory suggests a novel neurobehavioral mechanism for marijuana addiction involving the dorsal striatum-dependent memory system. Copyright © 2015 Elsevier Inc. All rights reserved.

  16. Memory Benchmarks for SMP-Based High Performance Parallel Computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoo, A B; de Supinski, B; Mueller, F

    2001-11-20

    As the speed gap between CPU and main memory continues to grow, memory accesses increasingly dominate the performance of many applications. The problem is particularly acute for symmetric multiprocessor (SMP) systems, where the shared memory may be accessed concurrently by a group of threads running on separate CPUs. Unfortunately, several key issues governing memory system performance in current systems are not well understood. Complex interactions between the levels of the memory hierarchy, buses or switches, DRAM back-ends, system software, and application access patterns can make it difficult to pinpoint bottlenecks and determine appropriate optimizations, and the situation is even more complex for SMP systems. To partially address this problem, we formulated a set of multi-threaded microbenchmarks for characterizing and measuring the performance of the underlying memory system in SMP-based high-performance computers. We report our use of these microbenchmarks on two important SMP-based machines. This paper has four primary contributions. First, we introduce a microbenchmark suite to systematically assess and compare the performance of different levels in SMP memory hierarchies. Second, we present a new tool based on hardware performance monitors to determine a wide array of memory system characteristics, such as cache sizes, quickly and easily; by using this tool, memory performance studies can be targeted to the full spectrum of performance regimes with many fewer data points than are otherwise required. Third, we present experimental results indicating that the performance of applications with large memory footprints remains largely constrained by memory. Fourth, we demonstrate that thread-level parallelism further degrades memory performance, even for the latest SMPs with hardware prefetching and switch-based memory interconnects.
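
    As a rough illustration of the microbenchmark approach described above, the sketch below sweeps the working-set size of a pointer-chasing loop and reports the average time per dependent access; latency steps upward when a working set overflows a cache level. It is a toy Python stand-in, not the suite from the paper (which is multi-threaded C instrumented with hardware performance monitors); the sizes and access counts are arbitrary choices.

```python
# Toy illustration of the cache-size probing idea.  The actual suite is
# multi-threaded C with hardware performance counters; Python timing is far
# noisier, so treat this purely as a sketch of the method.
import random
import time

def chase_latency(n_elems: int, accesses: int = 200_000) -> float:
    """Average time per dependent access over a random cyclic permutation."""
    perm = list(range(n_elems))
    random.shuffle(perm)
    nxt = [0] * n_elems
    for i in range(n_elems):
        nxt[perm[i]] = perm[(i + 1) % n_elems]   # one random cycle
    idx, t0 = 0, time.perf_counter()
    for _ in range(accesses):
        idx = nxt[idx]                           # dependent loads defeat prefetching
    return (time.perf_counter() - t0) / accesses

# Sweep the working-set size; latency steps up when a cache level overflows.
for n in (1 << k for k in range(10, 22, 2)):
    print(f"{n:>8} elements: {chase_latency(n) * 1e9:7.1f} ns/access")
```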

  17. Sparse distributed memory overview

    NASA Technical Reports Server (NTRS)

    Raugh, Mike

    1990-01-01

    The Sparse Distributed Memory (SDM) project is investigating the theory and applications of a massively parallel computing architecture, called sparse distributed memory, that will support the storage and retrieval of sensory and motor patterns characteristic of autonomous systems. The immediate objectives of the project are centered on studies of the memory itself and on the use of the memory to solve problems in speech, vision, and robotics. Investigation of methods for encoding sensory data is an important part of the research. Examples of NASA missions that may benefit from this work are Space Station, planetary rovers, and solar exploration. Sparse distributed memory offers promising technology for systems that must learn through experience and be capable of adapting to new circumstances, and for operating any large complex system requiring automatic monitoring and control. Sparse distributed memory is a massively parallel architecture motivated by efforts to understand how the human brain works. Sparse distributed memory is an associative memory, able to retrieve information from cues that only partially match patterns stored in the memory. It is able to store long temporal sequences derived from the behavior of a complex system, such as progressive records of the system's sensory data and correlated records of the system's motor controls.
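
    A minimal sketch of the sparse-distributed-memory idea described above (in the style of Kanerva's model) follows: a word is written to every randomly chosen "hard location" within a Hamming radius of its address, and read back by summing and thresholding counters, so a partially corrupted cue still retrieves the stored pattern. The word length, number of hard locations, and radius below are illustrative assumptions, not parameters of the NASA project.

```python
# Minimal Kanerva-style sparse distributed memory sketch (dimensions and
# activation radius are illustrative choices, not taken from the project).
import numpy as np

rng = np.random.default_rng(0)
N, M, RADIUS = 256, 2000, 112          # word length, hard locations, Hamming radius

hard_locations = rng.integers(0, 2, size=(M, N), dtype=np.int8)
counters = np.zeros((M, N), dtype=np.int32)

def _active(address):
    """Hard locations within the Hamming radius of the address."""
    return np.count_nonzero(hard_locations != address, axis=1) <= RADIUS

def write(address, word):
    """Add +1/-1 to the counters of every activated location."""
    counters[_active(address)] += np.where(word == 1, 1, -1).astype(np.int32)

def read(address):
    """Sum counters of activated locations and threshold at zero."""
    return (counters[_active(address)].sum(axis=0) > 0).astype(np.int8)

pattern = rng.integers(0, 2, size=N, dtype=np.int8)
write(pattern, pattern)                 # autoassociative store
noisy = pattern.copy()
flip = rng.choice(N, size=20, replace=False)
noisy[flip] ^= 1                        # corrupt 20 of 256 bits in the cue
recovered = read(noisy)
print("bits recovered:", int((recovered == pattern).sum()), "of", N)
```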

  18. Annual Research Review: The neurobehavioral development of multiple memory systems: implications for childhood and adolescent psychiatric disorders

    PubMed Central

    Goodman, Jarid; Marsh, Rachel; Peterson, Bradley S.; Packard, Mark G.

    2014-01-01

    Extensive evidence indicates that mammalian memory is organized into multiple brain systems, including a “cognitive” memory system that depends upon the hippocampus and a stimulus-response “habit” memory system that depends upon the dorsolateral striatum. Dorsal striatal-dependent habit memory may in part influence the development and expression of some human psychopathologies, particularly those characterized by strong habit-like behavioral features. The present review considers this hypothesis as it pertains to psychopathologies that typically emerge during childhood and adolescence. These disorders include Tourette syndrome, attention-deficit/hyperactivity disorder, obsessive-compulsive disorder, eating disorders, and autism spectrum disorders. Human and nonhuman animal research shows that the typical development of memory systems comprises the early maturation of striatal-dependent habit memory and the relatively late maturation of hippocampal-dependent cognitive memory. We speculate that the differing rates of development of these memory systems may in part contribute to the early emergence of habit-like symptoms in childhood and adolescence. In addition, abnormalities in hippocampal and striatal brain regions have been observed consistently in youth with these disorders, suggesting that the aberrant development of memory systems may also contribute to the emergence of habit-like symptoms as core pathological features of these illnesses. Considering these disorders within the context of multiple memory systems may help elucidate the pathogenesis of habit-like symptoms in childhood and adolescence, and lead to novel treatments that lessen the habit-like behavioral features of these disorders. PMID:24286520

  19. Quantification of the memory effect of steady-state currents from interaction-induced transport in quantum systems

    NASA Astrophysics Data System (ADS)

    Lai, Chen-Yen; Chien, Chih-Chun

    2017-09-01

    The dynamics of a system generally depend on its initial state and on how the system is driven, but in many-body systems the memory is usually averaged out during evolution. Here, interacting quantum systems without external relaxations are shown to retain long-time memory effects in steady states. To identify memory effects, we first show that quasi-steady-state currents form in finite, isolated Bose- and Fermi-Hubbard models driven by interaction imbalance and that they become steady-state currents in the thermodynamic limit. By comparing the steady-state currents from different initial states or ramping rates of the imbalance, long-time memory effects can be quantified. While the memory effects of initial states are more ubiquitous, the memory effects of switching protocols are mostly visible in interaction-induced transport in lattices. Our simulations suggest that the systems enter a regime governed by a generalized Fick's law and that memory effects lead to initial-state-dependent diffusion coefficients. We also identify conditions for enhancing memory effects and discuss possible experimental implications.

  20. Recollection of episodic memory within the medial temporal lobe: behavioural dissociations from other types of memory.

    PubMed

    Easton, Alexander; Eacott, Madeline J

    2010-12-31

    In recent years there has been significant debate about whether there is a single medial temporal lobe memory system or dissociable systems for episodic and other types of declarative memory. In addition there has been a similar debate over the dissociability of recollection and familiarity based processes in recognition memory. Here we present evidence from recent work using episodic memory tasks in animals that allows us to explore these issues in more depth. We review studies that demonstrate triple dissociations within the medial temporal lobe, with only the hippocampal system being necessary for episodic memory. Similarly we review behavioural evidence for a dissociation in a task of episodic memory in rats where animals with lesions of the fornix are only impaired at recollection of the episodic memory, not recognition within the same trial. This work, then, supports recent models of dissociable neural systems within the medial temporal lobe but also raises questions for future investigation about the interactions of these medial temporal lobe memory systems with other structures. Copyright © 2009 Elsevier B.V. All rights reserved.

  1. Gender differences in navigational memory: pilots vs. nonpilots.

    PubMed

    Verde, Paola; Piccardi, Laura; Bianchini, Filippo; Guariglia, Cecilia; Carrozzo, Paolo; Morgagni, Fabio; Boccia, Maddalena; Di Fiore, Giacomo; Tomao, Enrico

    2015-02-01

    The coding of space as near and far is not only determined by arm-reaching distance, but is also dependent on how the brain represents the extension of the body space. Recent reports suggest that the dissociation between reaching and navigational space is not limited to perception and action but also involves memory systems. It has been reported that gender differences emerged only in adverse learning conditions that required strong spatial ability. In this study we investigated navigational versus reaching memory in air force pilots and a control group without flight experience. We took into account temporal duration (working memory and long-term memory) and focused on working memory, which is considered critical in the gender-differences literature. We found no gender or flight-hour effects in pilots, but observed gender effects in working memory (but not in learning and delayed recall) in the nonpilot population (women: mean = 5.33, SD = 0.90; men: mean = 5.54, SD = 0.90). We also observed a difference between pilots and nonpilots in the maintenance of on-line reaching information: pilots (mean = 5.85, SD = 0.76) were more efficient than nonpilots (mean = 5.21, SD = 0.83) and managed this type of information similarly to that concerning navigational space. In the navigational learning phase they also showed better navigational memory (mean = 137.83, SD = 5.81) than nonpilots (mean = 126.96, SD = 15.81) and were significantly more proficient than the latter group. There was no gender difference among pilots in navigational abilities, while one emerged in the control group without flight experience; we also found that pilots performed better than nonpilots. This study suggests that once selected, male and female pilots do not differ from each other in visuo-spatial abilities and spatial navigation.

  2. Guidelines for locoregional therapy in primary breast cancer in developing countries: The results of an expert panel at the 8th Annual Women's Cancer Initiative – Tata Memorial Hospital (WCI-TMH) Conference

    PubMed Central

    Munshi, Anusheel; Gupta, Sudeep; Anderson, Benjamin; Yarnold, John; Parmar, Vani; Jalali, Rakesh; Sharma, Suresh Chander; Desai, Sangeeta; Thakur, Meenakshi; Baijal, Gunjan; Sarin, Rajiv; Mittra, Indraneel; Ghosh, Jaya; Badwe, Rajendra

    2012-01-01

    Background: Limited guidelines exist for breast cancer management in developing countries. In this context, the Women's Cancer Initiative - Tata Memorial Hospital (WCI-TMH) organised its 8th Annual Conference to update guidelines in breast cancer. Materials and Methods: Appropriately formulated guideline questions on each topic and subtopic in the surgical, radiation and systemic management of primary breast cancer were developed by the scientific committee and shared with the guest faculty of the Conference. The majority of the questions had multiple-choice answers. The opinion of the audience, comprising academic and community oncologists, was aggregated electronically, followed by focussed presentations by eminent national and international experts on each topic. The guidelines were finally developed through an expert panel that voted on each guideline question after all talks had been delivered and audience opinion elicited. Separate panels were constituted for locoregional and systemic therapy in primary breast cancer. Results: Based on the voting results of the expert panel, guidelines for locoregional therapy of breast cancer have been formulated. Voting patterns for each question are reported. Conclusions: The updated guidelines on locoregional management of primary breast cancer in the context of developing countries are presented in this article. These recommendations have been designed to allow centers in the developing world to improve the quality of care for breast cancer patients. PMID:22988354

  3. Multifaceted Prospective Memory Intervention to Improve Medication Adherence.

    PubMed

    Insel, Kathie C; Einstein, Gilles O; Morrow, Daniel G; Koerner, Kari M; Hepworth, Joseph T

    2016-03-01

    To test whether a multifaceted prospective memory intervention improved adherence to antihypertensive medications and to assess whether executive function and working memory processes moderated the intervention effects. Two-group longitudinal randomized controlled trial. Community. Individuals aged 65 and older without signs of dementia or symptoms of severe depression who were self-managing prescribed medication. After 4 weeks of initial adherence monitoring using a medication event monitoring system, individuals with 90% or less adherence were randomly assigned to groups. The prospective memory intervention was designed to provide strategies that switch older adults from relying on executive function and working memory processes (which show effects of cognitive aging) to mostly automatic associative processes (which are relatively spared with normal aging) for remembering to take medications. Strategies included establishing a routine, establishing cues strongly associated with medication-taking actions, performing the action immediately upon thinking about it, using a medication organizer, and imagining medication taking to enhance encoding and improve cuing. There was significant improvement in adherence in the intervention group (57% at baseline to 78% after the intervention), but most of these gains were lost after 5 months. The control condition started at 68% and was stable during the intervention, but dropped to 62%. Executive function and working memory moderated the intervention effect, with the intervention producing greater benefit for those with lower executive function and working memory. The intervention improved adherence, but the benefits were not sustained. Further research is needed to determine how to sustain the substantial initial benefits. © 2016, Copyright the Authors Journal compilation © 2016, The American Geriatrics Society.

  4. Stroboscopic visual training improves information encoding in short-term memory.

    PubMed

    Appelbaum, L Gregory; Cain, Matthew S; Schroeder, Julia E; Darling, Elise F; Mitroff, Stephen R

    2012-11-01

    The visual system has developed to transform an undifferentiated and continuous flow of information into discrete and manageable representations, and this ability rests primarily on the uninterrupted nature of the input. Here we explore the impact of altering how visual information is accumulated over time by assessing how intermittent vision influences memory retention. Previous work has shown that intermittent, or stroboscopic, visual training (i.e., practicing while only experiencing snapshots of vision) can enhance visual-motor control and visual cognition, yet many questions remain unanswered about the mechanisms that are altered. In the present study, we used a partial-report memory paradigm to assess the possible changes in visual memory following training under stroboscopic conditions. In Experiment 1, the memory task was completed before and immediately after a training phase, wherein participants engaged in physical activities (e.g., playing catch) while wearing either specialized stroboscopic eyewear or transparent control eyewear. In Experiment 2, an additional group of participants underwent the same stroboscopic protocol but were delayed 24 h between training and assessment, so as to measure retention. In comparison to the control group, both stroboscopic groups (immediate and delayed retest) revealed enhanced retention of information in short-term memory, leading to better recall at longer stimulus-to-cue delays (640-2,560 ms). These results demonstrate that training under stroboscopic conditions has the capacity to enhance some aspects of visual memory, that these faculties generalize beyond the specific tasks that were trained, and that trained improvements can be maintained for at least a day.

  5. Role of adult neurogenesis in hippocampal-cortical memory consolidation

    PubMed Central

    2014-01-01

    Acquired memory is initially dependent on the hippocampus (HPC) for permanent memory formation. This hippocampal dependency of memory recall progressively decays with time, a process that is associated with a gradual increase in dependency upon cortical structures. This process is commonly referred to as systems consolidation theory. In this paper, we first review how memory shifts from hippocampal to cortical dependence, with an emphasis on the interactions that occur between the HPC and cortex during systems consolidation. We also review the mechanisms underlying the gradual decay of HPC dependency during systems consolidation from the perspective of memory erasure by adult hippocampal neurogenesis. Finally, we discuss the relationship between systems consolidation and memory precision. PMID:24552281

  6. Interference Conditions of the Reconsolidation Process in Humans: The Role of Valence and Different Memory Systems

    PubMed Central

    Fernández, Rodrigo S.; Bavassi, Luz; Kaczer, Laura; Forcato, Cecilia; Pedreira, María E.

    2016-01-01

    Following the presentation of a reminder, consolidated memories become reactivated, followed by a process of re-stabilization, which is referred to as reconsolidation. The most common behavioral tool used to reveal this process is interference produced by new learning shortly after memory reactivation. Memory interference is defined as a decrease in memory retrieval; the effect is generated when new information impairs an acquired memory. In general, the target memory and the interference task used are the same. Here we investigated how different memory systems and/or their valence could produce memory reconsolidation interference. We showed that a reactivated neutral declarative memory could be interfered with by new learning of a different neutral declarative memory. Then, we revealed that an aversive implicit memory could be interfered with by the presentation of a reminder followed by a threatening social event. Finally, we showed that the reconsolidation of a neutral declarative memory is unaffected by the acquisition of an aversive implicit memory and, conversely, that this memory remains intact when the neutral declarative memory is used as interference. These results suggest that interference with memory reconsolidation is effective when the two tasks rely on the same memory system or both evoke negative valence. PMID:28066212

  7. Interference Conditions of the Reconsolidation Process in Humans: The Role of Valence and Different Memory Systems.

    PubMed

    Fernández, Rodrigo S; Bavassi, Luz; Kaczer, Laura; Forcato, Cecilia; Pedreira, María E

    2016-01-01

    Following the presentation of a reminder, consolidated memories become reactivated, followed by a process of re-stabilization, which is referred to as reconsolidation. The most common behavioral tool used to reveal this process is interference produced by new learning shortly after memory reactivation. Memory interference is defined as a decrease in memory retrieval; the effect is generated when new information impairs an acquired memory. In general, the target memory and the interference task used are the same. Here we investigated how different memory systems and/or their valence could produce memory reconsolidation interference. We showed that a reactivated neutral declarative memory could be interfered with by new learning of a different neutral declarative memory. Then, we revealed that an aversive implicit memory could be interfered with by the presentation of a reminder followed by a threatening social event. Finally, we showed that the reconsolidation of a neutral declarative memory is unaffected by the acquisition of an aversive implicit memory and, conversely, that this memory remains intact when the neutral declarative memory is used as interference. These results suggest that interference with memory reconsolidation is effective when the two tasks rely on the same memory system or both evoke negative valence.

  8. Recollecting, recognizing, and other acts of remembering: an overview of human memory.

    PubMed

    LaVoie, Donna J; Cobia, Derin J

    2007-09-01

    The question of whether memory is important to human existence is simple to answer: life without memory would be devoid of any meaning. The question of what memory is, however, is much more difficult to answer. The main purpose of this article is to provide an overview of memory function, by drawing distinctions between different memory systems, specifically declarative (ie, conscious) versus nondeclarative (ie, nonconscious) memory systems. To distinguish between these larger systems and their various components, we include discussion of deficits in memory that occur as a consequence of brain injury and normative aging processes. Included in these descriptions is discussion of the neuroanatomical correlates of each memory component described to illustrate the importance of particular brain regions to different aspects of memory function.

  9. Dual redundant core memory systems

    NASA Technical Reports Server (NTRS)

    Hull, F. E.

    1972-01-01

    An electronic memory system consisting of series-redundant drive switch circuits, triple-redundant majority-voted memory timing functions, and two data registers to provide functional dual redundancy is described. Signal flow through the circuits is illustrated, and the sequence of events that occurs within the memory system is explained.
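
    The triple-redundant, majority-voted timing functions mentioned above rest on a very small primitive: a 2-of-3 vote. The snippet below shows the bitwise form of that vote; it is purely illustrative, since the original system implemented voting in hardware rather than software.

```python
# Bitwise 2-of-3 majority voter, the basic operation behind triple-redundant,
# majority-voted logic (illustrative only; the original system performed this
# vote in hardware, not software).
def majority3(a: int, b: int, c: int) -> int:
    """Return the bitwise majority of three equal-width words."""
    return (a & b) | (a & c) | (b & c)

# One copy disagrees in two bit positions; the vote masks the fault.
good = 0b1011_0110
assert majority3(good, good, 0b1001_0111) == good
```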

  10. Memory and cognitive control circuits in mathematical cognition and learning.

    PubMed

    Menon, V

    2016-01-01

    Numerical cognition relies on interactions within and between multiple functional brain systems, including those subserving quantity processing, working memory, declarative memory, and cognitive control. This chapter describes recent advances in our understanding of memory and control circuits in mathematical cognition and learning. The working memory system involves multiple parietal-frontal circuits which create short-term representations that allow manipulation of discrete quantities over several seconds. In contrast, hippocampal-frontal circuits underlying the declarative memory system play an important role in formation of associative memories and binding of new and old information, leading to the formation of long-term memories that allow generalization beyond individual problem attributes. The flow of information across these systems is regulated by flexible cognitive control systems which facilitate the integration and manipulation of quantity and mnemonic information. The implications of recent research for formulating a more comprehensive systems neuroscience view of the neural basis of mathematical learning and knowledge acquisition in both children and adults are discussed. © 2016 Elsevier B.V. All rights reserved.

  11. Memory and cognitive control circuits in mathematical cognition and learning

    PubMed Central

    Menon, V.

    2018-01-01

    Numerical cognition relies on interactions within and between multiple functional brain systems, including those subserving quantity processing, working memory, declarative memory, and cognitive control. This chapter describes recent advances in our understanding of memory and control circuits in mathematical cognition and learning. The working memory system involves multiple parietal–frontal circuits which create short-term representations that allow manipulation of discrete quantities over several seconds. In contrast, hippocampal–frontal circuits underlying the declarative memory system play an important role in formation of associative memories and binding of new and old information, leading to the formation of long-term memories that allow generalization beyond individual problem attributes. The flow of information across these systems is regulated by flexible cognitive control systems which facilitate the integration and manipulation of quantity and mnemonic information. The implications of recent research for formulating a more comprehensive systems neuroscience view of the neural basis of mathematical learning and knowledge acquisition in both children and adults are discussed. PMID:27339012

  12. From Augustine of Hippo's Memory Systems to Our Modern Taxonomy in Cognitive Psychology and Neuroscience of Memory: A 16-Century Nap of Intuition before Light of Evidence.

    PubMed

    Cassel, Jean-Christophe; Cassel, Daniel; Manning, Lilianne

    2013-03-01

    Over the last half century, neuropsychologists, cognitive psychologists and cognitive neuroscientists interested in human memory have accumulated evidence showing that there is not one general memory function but a variety of memory systems deserving distinct (but for an organism, complementary) functional entities. The first attempts to organize memory systems within a taxonomic construct are often traced back to the French philosopher Maine de Biran (1766-1824), who, in his book first published in 1803, distinguished mechanical memory, sensitive memory and representative memory, without, however, providing any experimental evidence in support of his view. It turns out, however, that what might be regarded as the first elaborated taxonomic proposal is 14 centuries older and is due to Augustine of Hippo (354-430), also named St Augustine, who, in Book 10 of his Confessions, by means of an introspective process that did not aim at organizing memory systems, nevertheless distinguished and commented on sensible memory, intellectual memory, memory of memories, memory of feelings and passion, and memory of forgetting. These memories were envisaged as different and complementary instances. In the current study, after a short biographical synopsis of St Augustine, we provide an outline of the philosopher's contribution, both in terms of questions and answers, and focus on how this contribution almost perfectly fits with several viewpoints of modern psychology and neuroscience of memory about human memory functions, including the notion that episodic autobiographical memory stores events of our personal history in their what, where and when dimensions, and from there enables our mental time travel. It is not at all meant that St Augustine's elaboration was the basis for the modern taxonomy, but just that the similarity is striking, and that the architecture of our current viewpoints about memory systems might have preexisted as an outstanding intuition in the philosopher's mind.

  13. From Augustine of Hippo’s Memory Systems to Our Modern Taxonomy in Cognitive Psychology and Neuroscience of Memory: A 16-Century Nap of Intuition before Light of Evidence

    PubMed Central

    Cassel, Jean-Christophe; Cassel, Daniel; Manning, Lilianne

    2012-01-01

    Over the last half century, neuropsychologists, cognitive psychologists and cognitive neuroscientists interested in human memory have accumulated evidence showing that there is not one general memory function but a variety of memory systems deserving distinct (but for an organism, complementary) functional entities. The first attempts to organize memory systems within a taxonomic construct are often traced back to the French philosopher Maine de Biran (1766–1824), who, in his book first published in 1803, distinguished mechanical memory, sensitive memory and representative memory, without, however, providing any experimental evidence in support of his view. It turns out, however, that what might be regarded as the first elaborated taxonomic proposal is 14 centuries older and is due to Augustine of Hippo (354–430), also named St Augustine, who, in Book 10 of his Confessions, by means of an introspective process that did not aim at organizing memory systems, nevertheless distinguished and commented on sensible memory, intellectual memory, memory of memories, memory of feelings and passion, and memory of forgetting. These memories were envisaged as different and complementary instances. In the current study, after a short biographical synopsis of St Augustine, we provide an outline of the philosopher’s contribution, both in terms of questions and answers, and focus on how this contribution almost perfectly fits with several viewpoints of modern psychology and neuroscience of memory about human memory functions, including the notion that episodic autobiographical memory stores events of our personal history in their what, where and when dimensions, and from there enables our mental time travel. It is not at all meant that St Augustine’s elaboration was the basis for the modern taxonomy, but just that the similarity is striking, and that the architecture of our current viewpoints about memory systems might have preexisted as an outstanding intuition in the philosopher’s mind. PMID:25379224

  14. Experience of Data Handling with IPPM Payload

    NASA Astrophysics Data System (ADS)

    Errico, Walter; Tosi, Pietro; Ilstad, Jorgen; Jameux, David; Viviani, Riccardo; Collantoni, Daniele

    2010-08-01

    A simplified On-Board Data Handling system has been developed by CAEN AURELIA SPACE and ABSTRAQT as a PUS-over-SpaceWire demonstration platform for the Onboard Payload Data Processing laboratory at ESTEC. The system is composed of three Leon2-based IPPM (Integrated Payload Processing Module) computers that play the roles of Instrument, Payload Data Handling Unit, and Satellite Management Unit. Two PCs complete the test set-up, simulating an external Memory Management Unit and the Ground Control Unit. Communication among units takes place primarily through SpaceWire links; the RMAP[2] protocol is used for configuration and housekeeping. A limited implementation of the ECSS-E-70-41B Packet Utilisation Standard (PUS)[1] over CAN bus and MIL-STD-1553B has also been realized. The open-source RTEMS runs on the IPPM AT697E CPU as the real-time operating system.
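
    To give a concrete flavour of the packet framing such a PUS-over-SpaceWire demonstrator handles, the sketch below packs the 6-byte CCSDS Space Packet primary header that PUS telecommands and telemetry ride in. The field layout follows the public CCSDS space packet format, but the helper name and the example values are assumptions and are not taken from the IPPM software.

```python
# Rough sketch of packing the 6-byte CCSDS Space Packet primary header that
# PUS services are carried in (field layout per the public CCSDS space packet
# format; the helper name and values are hypothetical, not from IPPM).
import struct

def ccsds_primary_header(apid: int, seq_count: int, data_len: int,
                         is_telecommand: bool = True,
                         has_sec_header: bool = True) -> bytes:
    """Pack version(3) | type(1) | sec-hdr flag(1) | APID(11),
    sequence flags(2) | sequence count(14), packet data length(16)."""
    word1 = (0 << 13) | (int(is_telecommand) << 12) \
            | (int(has_sec_header) << 11) | (apid & 0x7FF)
    word2 = (0b11 << 14) | (seq_count & 0x3FFF)      # 0b11 = unsegmented packet
    word3 = (data_len - 1) & 0xFFFF                   # length field is (bytes - 1)
    return struct.pack(">HHH", word1, word2, word3)

hdr = ccsds_primary_header(apid=0x42, seq_count=7, data_len=12)
print(hdr.hex())   # -> '1842c007000b'
```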

  15. The Design of a High Performance Earth Imagery and Raster Data Management and Processing Platform

    NASA Astrophysics Data System (ADS)

    Xie, Qingyun

    2016-06-01

    This paper summarizes the general requirements and specific characteristics of both a geospatial raster database management system and a raster data processing platform, from a domain-specific perspective as well as from a computing point of view. It also discusses the need for tight integration between the database system and the processing system. These requirements resulted in Oracle Spatial GeoRaster, a global-scale, high-performance earth imagery and raster data management and processing platform. The rationale, design, implementation, and benefits of Oracle Spatial GeoRaster are described. Basically, as a database management system, GeoRaster defines an integrated raster data model and supports image compression, data manipulation, general and spatial indices, content- and context-based queries and updates, versioning, concurrency, security, replication, standby, backup and recovery, multitenancy, and ETL. It provides high scalability using computer and storage clustering. As a raster data processing platform, GeoRaster provides basic operations, image processing, raster analytics, and data distribution featuring high-performance computing (HPC). Specifically, HPC features include locality computing, concurrent processing, parallel processing, and in-memory computing. In addition, the APIs and the plug-in architecture are discussed.

  16. Memory Dysfunction

    PubMed Central

    Matthews, Brandy R.

    2015-01-01

    Purpose of Review: This article highlights the dissociable human memory systems of episodic, semantic, and procedural memory in the context of neurologic illnesses known to adversely affect specific neuroanatomic structures relevant to each memory system. Recent Findings: Advances in functional neuroimaging and refinement of neuropsychological and bedside assessment tools continue to support a model of multiple memory systems that are distinct yet complementary and to support the potential for one system to be engaged as a compensatory strategy when a counterpart system fails. Summary: Episodic memory, the ability to recall personal episodes, is the subtype of memory most often perceived as dysfunctional by patients and informants. Medial temporal lobe structures, especially the hippocampal formation and associated cortical and subcortical structures, are most often associated with episodic memory loss. Episodic memory dysfunction may present acutely, as in concussion; transiently, as in transient global amnesia (TGA); subacutely, as in thiamine deficiency; or chronically, as in Alzheimer disease. Semantic memory refers to acquired knowledge about the world. Anterior and inferior temporal lobe structures are most often associated with semantic memory loss. The semantic variant of primary progressive aphasia (svPPA) is the paradigmatic disorder resulting in predominant semantic memory dysfunction. Working memory, associated with frontal lobe function, is the active maintenance of information in the mind that can be potentially manipulated to complete goal-directed tasks. Procedural memory, the ability to learn skills that become automatic, involves the basal ganglia, cerebellum, and supplementary motor cortex. Parkinson disease and related disorders result in procedural memory deficits. Most memory concerns warrant bedside cognitive or neuropsychological evaluation and neuroimaging to assess for specific neuropathologies and guide treatment. PMID:26039844

  17. Practical Verification & Safeguard Tools for C/C++

    DTIC Science & Technology

    2007-11-01

    735; RDDC Valcartier; November 2007. This document is the final report of a research project conducted in 2005-2006. The goal of this project ... Table-of-contents fragments: 2.8 On Defects; 2.9 Memory Management Problems; 2.9.1 Use of Freed Memory; 2.9.2 Underallocated Memory for a

  18. Preventing messaging queue deadlocks in a DMA environment

    DOEpatents

    Blocksome, Michael A; Chen, Dong; Gooding, Thomas; Heidelberger, Philip; Parker, Jeff

    2014-01-14

    Embodiments of the invention may be used to manage message queues in a parallel computing environment to prevent message queue deadlock. A direct memory access (DMA) controller of a compute node may determine when a messaging queue is full. In response, the DMA may generate an interrupt. An interrupt handler may stop the DMA and swap all descriptors from the full messaging queue into a larger queue (or enlarge the original queue). The interrupt handler then restarts the DMA. Alternatively, the interrupt handler stops the DMA, allocates a memory block to hold queue data, and then moves descriptors from the full messaging queue into the allocated memory block. The interrupt handler then restarts the DMA. During a normal messaging advance cycle, a messaging manager attempts to inject the descriptors in the memory block into other messaging queues until the descriptors have all been processed.
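
    The overflow-handling idea in the abstract can be sketched in a few lines: when a fixed-capacity messaging queue fills, the "interrupt handler" enlarges the queue (or would copy the descriptors into a separately allocated block) and injection resumes without losing descriptors. The classes below are simplified stand-ins, not the patented DMA data structures.

```python
# Simplified sketch of the queue-overflow handling described above: when a
# fixed-size messaging queue fills, "stop" the engine, swap the descriptors
# into a larger queue, and restart.  Types here are stand-ins, not the real
# DMA data structures.
from collections import deque

class MessagingQueue:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.descriptors = deque()

    def full(self) -> bool:
        return len(self.descriptors) >= self.capacity

    def inject(self, descriptor) -> None:
        if self.full():
            self.descriptors = handle_full_queue(self)   # "interrupt handler"
        self.descriptors.append(descriptor)

def handle_full_queue(queue: MessagingQueue, growth: int = 2) -> deque:
    """Stand-in interrupt handler: enlarge the queue, carrying descriptors over."""
    queue.capacity *= growth          # or: copy into a separately allocated block
    return deque(queue.descriptors)   # descriptors survive the swap

q = MessagingQueue(capacity=4)
for i in range(10):
    q.inject(f"desc-{i}")
print(len(q.descriptors), "descriptors queued; capacity is now", q.capacity)
```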

  19. UNIX-based operating systems robustness evaluation

    NASA Technical Reports Server (NTRS)

    Chang, Yu-Ming

    1996-01-01

    Robust operating systems are required for reliable computing. Techniques for robustness evaluation of operating systems not only enhance the understanding of the reliability of computer systems, but also provide valuable feedback to system designers. This thesis presents results from robustness evaluation experiments on five UNIX-based operating systems, which include Digital Equipment's OSF/1, Hewlett Packard's HP-UX, Sun Microsystems' Solaris and SunOS, and Silicon Graphics' IRIX. Three sets of experiments were performed. The methodology for evaluation tested (1) the exception handling mechanism, (2) system resource management, and (3) system capacity under high workload stress. An exception generator was used to evaluate the exception handling mechanism of the operating systems. Results included the exit status of the exception generator and the system state. Resource management techniques used by individual operating systems were tested using programs designed to usurp system resources such as physical memory and process slots. Finally, the workload stress testing evaluated the effect of the workload on system performance by running a synthetic workload and recording the response time of local and remote user requests. Moderate to severe performance degradations were observed on the systems under stress.
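
    As a bounded illustration of the resource-management probes described above, the sketch below allocates memory in fixed-size chunks up to a hard cap and reports whether the request is refused gracefully. It is deliberately capped and far gentler than the original experiments, which usurped physical memory and process slots and drove synthetic workloads; the chunk and cap sizes are arbitrary choices.

```python
# Bounded illustration of a resource-management probe: allocate memory in
# fixed-size chunks up to a hard cap and report whether the system refuses
# the request gracefully (MemoryError) or keeps granting it.  The original
# experiments stressed physical memory, process slots, and workload far
# harder than this deliberately capped sketch.
def memory_probe(chunk_mb: int = 16, cap_mb: int = 256) -> int:
    chunks, granted = [], 0
    try:
        while granted < cap_mb:
            chunks.append(bytearray(chunk_mb * 1024 * 1024))  # zero-filled buffer
            granted += chunk_mb
    except MemoryError:
        print(f"allocation refused after {granted} MB")
    finally:
        chunks.clear()                                        # release promptly
    return granted

print("granted before cap or refusal:", memory_probe(), "MB")
```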

  20. Memory Systems Do Not Divide on Consciousness: Reinterpreting Memory in Terms of Activation and Binding

    ERIC Educational Resources Information Center

    Reder, Lynne M.; Park, Heekyeong; Kieffaber, Paul D.

    2009-01-01

    There is a popular hypothesis that performance on implicit and explicit memory tasks reflects 2 distinct memory systems. Explicit memory is said to store those experiences that can be consciously recollected, and implicit memory is said to store experiences and affect subsequent behavior but to be unavailable to conscious awareness. Although this…

  1. Ferroelectric Memory Devices and a Proposed Standardized Test System Design

    DTIC Science & Technology

    1992-06-01

    positive clock transition. This provides automatic data protection in case of power loss. The device is being evaluated for applications such as automobile ... systems requiring nonvolatile memory, and as these systems become more complex, the demand for reprogrammable nonvolatile memory increases. The ... complexity and cost in making conventional nonvolatile memory reprogrammable also increases, so the potential for using ferroelectric memory as a replacement

  2. A study of the viability of exploiting memory content similarity to improve resilience to memory errors

    DOE PAGES

    Levy, Scott; Ferreira, Kurt B.; Bridges, Patrick G.; ...

    2014-12-09

    Building the next generation of extreme-scale distributed systems will require overcoming several challenges related to system resilience. As the number of processors in these systems grows, the failure rate increases proportionally. One of the most common sources of failure in large-scale systems is memory. In this paper, we propose a novel runtime for transparently exploiting memory content similarity to improve system resilience by reducing the rate at which memory errors lead to node failure. We evaluate the viability of this approach by examining memory snapshots collected from eight high-performance computing (HPC) applications and two important HPC operating systems. Based on the characteristics of the similarity uncovered, we conclude that our proposed approach shows promise for addressing system resilience in large-scale systems.
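
    The snapshot analysis can be illustrated with a toy scan: split a memory image into fixed-size pages, hash each page, and measure how much of the image is covered by duplicate pages, which is the redundancy a resilience runtime could exploit to repair a corrupted page from an identical twin. The page size and the synthetic snapshot below are assumptions, not the paper's data.

```python
# Toy version of the snapshot analysis: split a memory image into fixed-size
# pages, hash each one, and report how much of the image is covered by
# duplicate pages, i.e. the redundancy a resilience runtime could exploit to
# repair a corrupted page from an identical twin.  Page size is an assumption.
import hashlib
from collections import Counter

def duplicate_page_fraction(image: bytes, page_size: int = 4096) -> float:
    digests = [
        hashlib.sha1(image[off:off + page_size]).digest()
        for off in range(0, len(image), page_size)
    ]
    counts = Counter(digests)
    duplicated = sum(c for c in counts.values() if c > 1)
    return duplicated / len(digests) if digests else 0.0

# Synthetic "snapshot": zero pages and a repeated code-like pattern dominate.
snapshot = bytes(4096) * 6 + b"\x90" * 4096 * 3 + bytes(range(256)) * 16
print(f"{duplicate_page_fraction(snapshot):.0%} of pages have an identical twin")
```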

  3. A wide bandwidth CCD buffer memory system

    NASA Technical Reports Server (NTRS)

    Siemens, K.; Wallace, R. W.; Robinson, C. R.

    1978-01-01

    A prototype system was implemented to demonstrate that CCD's can be applied advantageously to the problem of low power digital storage and particularly to the problem of interfacing widely varying data rates. CCD shift register memories (8K bit) were used to construct a feasibility model 128 K-bit buffer memory system. Serial data that can have rates between 150 kHz and 4.0 MHz can be stored in 4K-bit, randomly-accessible memory blocks. Peak power dissipation during a data transfer is less than 7 W, while idle power is approximately 5.4 W. The system features automatic data input synchronization with the recirculating CCD memory block start address. System expansion to accommodate parallel inputs or a greater number of memory blocks can be performed in a modular fashion. Since the control logic does not increase proportionally to increase in memory capacity, the power requirements per bit of storage can be reduced significantly in a larger system.
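
    The rate-matching role of the buffer can be sketched with a pool of fixed-size blocks that a bursty producer fills and a slower consumer drains, with blocks recirculating once they are read out, loosely mirroring the recirculating CCD blocks. The block counts and sizes below are illustrative choices, not the 8K-bit CCD geometry of the prototype.

```python
# Sketch of the rate-matching role of the buffer: a fast, bursty producer
# fills fixed-size blocks that a slower consumer drains.  Block count and
# sizes are illustrative, not the 8K-bit CCD geometry of the prototype.
from collections import deque

class BlockBuffer:
    def __init__(self, n_blocks: int, block_bits: int):
        self.free = deque(bytearray(block_bits // 8) for _ in range(n_blocks))
        self.ready = deque()              # filled blocks awaiting readout

    def write_block(self, data: bytes) -> bool:
        """Store one block of serial input; fail if every block is in use."""
        if not self.free:
            return False                  # producer must wait (or data is lost)
        block = self.free.popleft()
        block[: len(data)] = data
        self.ready.append(block)
        return True

    def read_block(self) -> bytes | None:
        """Drain one block at the consumer's (slower) rate."""
        if not self.ready:
            return None
        block = self.ready.popleft()
        self.free.append(block)           # block recirculates, like the CCD loop
        return bytes(block)

buf = BlockBuffer(n_blocks=32, block_bits=4096)
accepted = sum(buf.write_block(b"\xaa" * 512) for _ in range(40))
print(accepted, "of 40 bursts buffered before the consumer catches up")
```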

  4. Contrasting single and multi-component working-memory systems in dual tasking.

    PubMed

    Nijboer, Menno; Borst, Jelmer; van Rijn, Hedderik; Taatgen, Niels

    2016-05-01

    Working memory can be a major source of interference in dual tasking. However, there is no consensus on whether this interference is the result of a single working memory bottleneck, or of interactions between different working memory components that together form a complete working-memory system. We report a behavioral and an fMRI dataset in which working memory requirements are manipulated during multitasking. We show that a computational cognitive model that assumes a distributed version of working memory accounts for both behavioral and neuroimaging data better than a model that takes a more centralized approach. The model's working memory consists of an attentional focus, declarative memory, and a subvocalized rehearsal mechanism. Thus, the data and model favor an account where working memory interference in dual tasking is the result of interactions between different resources that together form a working-memory system. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. List Models of Procedure Learning

    NASA Technical Reports Server (NTRS)

    Matessa, Michael P.; Polson, Peter G.

    2005-01-01

    This paper presents a new theory of the initial stages of skill acquisition and then employs the theory to model current and future training programs for flight management systems (FMSs) in modern commercial airliners like the Boeing 777 and the Airbus A320. Its theoretical foundations are a new synthesis of the literature on human memory and the latest version of the ACT-R theory of skill acquisition.

  6. Evaluation of the ACEC Benchmark Suite for Real-Time Applications

    DTIC Science & Technology

    1990-07-23

    The ACEC 1.0 benchmark suite was analyzed with respect to its measuring of Ada real-time features such as tasking, memory management, input/output, scheduling ... and delay statement, Chapter 13 features, pragmas, interrupt handling, subprogram overhead, numeric computations, etc. For most of the features that ... meant for programming real-time systems. The ACEC benchmarks have been analyzed extensively with respect to their measuring of Ada real-time features

  7. From Brown-Peterson to continual distractor via operation span: A SIMPLE account of complex span.

    PubMed

    Neath, Ian; VanWormer, Lisa A; Bireta, Tamra J; Surprenant, Aimée M

    2014-09-01

    Three memory tasks (Brown-Peterson, complex span, and continual distractor) all alternate presentation of a to-be-remembered item and a distractor activity, but each task is associated with a different memory system: short-term memory, working memory, and long-term memory, respectively. SIMPLE, a relative local distinctiveness model, has previously been fit to data from both the Brown-Peterson and continual distractor tasks; here we use the same version of the model to fit data from a complex span task. Despite the many differences between the tasks, including unpredictable list length, SIMPLE fit the data well. Because SIMPLE posits a single memory system, these results constitute yet another demonstration that performance on tasks originally thought to tap different memory systems can be explained without invoking multiple memory systems.
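
    A minimal sketch of SIMPLE's core computation follows: each item is located by the logarithm of its temporal distance at retrieval, similarity between items decays exponentially with distance on that log scale, and an item's retrievability is its relative distinctiveness. The parameter value and the omission of the full model's threshold and noise stages are simplifications made for illustration; this is not the fitted model from the paper.

```python
# Minimal sketch of SIMPLE's core computation: items are placed on a
# log-transformed temporal dimension, pairwise similarity falls off
# exponentially with distance on that scale, and an item's retrievability is
# its relative distinctiveness.  The value of c and the omission of the full
# model's threshold and noise stages are simplifications.
import math

def simple_distinctiveness(retention_intervals, c=3.0):
    """retention_intervals: seconds elapsed since each item's presentation."""
    positions = [math.log(t) for t in retention_intervals]
    scores = []
    for mi in positions:
        total_similarity = sum(math.exp(-c * abs(mi - mj)) for mj in positions)
        scores.append(1.0 / total_similarity)     # self-similarity is 1
    return scores

# Six items, each separated by 6 s of distractor activity, tested just after
# the last one: late items are more distinct (recency) on the log scale.
intervals = [31, 25, 19, 13, 7, 1]
for pos, d in enumerate(simple_distinctiveness(intervals), start=1):
    print(f"serial position {pos}: distinctiveness {d:.2f}")
```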

  8. Transactive memory systems scale for couples: development and validation

    PubMed Central

    Hewitt, Lauren Y.; Roberts, Lynne D.

    2015-01-01

    People in romantic relationships can develop shared memory systems by pooling their cognitive resources, allowing each person access to more information but with less cognitive effort. Research examining such memory systems in romantic couples largely focuses on remembering word lists or performing lab-based tasks, but these types of activities do not capture the processes underlying couples’ transactive memory systems, and may not be representative of the ways in which romantic couples use their shared memory systems in everyday life. We adapted an existing measure of transactive memory systems for use with romantic couples (TMSS-C), and conducted an initial validation study. In total, 397 participants who each identified as being a member of a romantic relationship of at least 3 months duration completed the study. The data provided a good fit to the anticipated three-factor structure of the components of couples’ transactive memory systems (specialization, credibility and coordination), and there was reasonable evidence of both convergent and divergent validity, as well as strong evidence of test–retest reliability across a 2-week period. The TMSS-C provides a valuable tool that can quickly and easily capture the underlying components of romantic couples’ transactive memory systems. It has potential to help us better understand this intriguing feature of romantic relationships, and how shared memory systems might be associated with other important features of romantic relationships. PMID:25999873

  9. The relationships between memory systems and sleep stages.

    PubMed

    Rauchs, Géraldine; Desgranges, Béatrice; Foret, Jean; Eustache, Francis

    2005-06-01

    Sleep function remains elusive despite our rapidly increasing comprehension of the processes generating and maintaining the different sleep stages. Several lines of evidence support the hypothesis that sleep is involved in the off-line reprocessing of recently-acquired memories. In this review, we summarize the main results obtained in the field of sleep and memory consolidation in both animals and humans, and try to connect sleep stages with the different memory systems. To this end, we have collated data obtained using several methodological approaches, including electrophysiological recordings of neuronal ensembles, post-training modifications of sleep architecture, sleep deprivation and functional neuroimaging studies. Broadly speaking, all the various studies emphasize the fact that the four long-term memory systems (procedural memory, perceptual representation system, semantic and episodic memory, according to Tulving's SPI model; Tulving, 1995) benefit either from non-rapid eye movement (NREM) (not just SWS) or rapid eye movement (REM) sleep, or from both sleep stages. Tulving's classification of memory systems appears more pertinent than the declarative/non-declarative dichotomy when it comes to understanding the role of sleep in memory. Indeed, this model allows us to resolve several contradictions, notably the fact that episodic and semantic memory (the two memory systems encompassed in declarative memory) appear to rely on different sleep stages. Likewise, this model provides an explanation for why the acquisition of various types of skills (perceptual-motor, sensory-perceptual and cognitive skills) and priming effects, subserved by different brain structures but all designated by the generic term of implicit or non-declarative memory, may not benefit from the same sleep stages.

  10. BLACKCOMB2: Hardware-software co-design for non-volatile memory in exascale systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mudge, Trevor

    This work was part of a larger project, Blackcomb2, centered at Oak Ridge National Labs (Jeff Vetter, PI), to investigate the opportunities for replacing or supplementing DRAM main memory with nonvolatile memory (NVmemory) in Exascale memory systems. The goal was to reduce the energy consumed by future supercomputer memory systems and to improve their resiliency. Building on the accomplishments of the original Blackcomb Project, funded in 2010, the goal for Blackcomb2 was to identify, evaluate, and optimize the most promising emerging memory technologies and architecture hardware and software technologies, which are essential to provide the necessary memory capacity, performance, resilience, and energy efficiency in Exascale systems. Capacity and energy are the key drivers.

  11. Log-less metadata management on metadata server for parallel file systems.

    PubMed

    Liao, Jianwei; Xiao, Guoqiang; Peng, Xiaoning

    2014-01-01

    This paper presents a novel metadata management mechanism on the metadata server (MDS) for parallel and distributed file systems. In this technique, the client file system backs up the sent metadata requests that have been handled by the metadata server, so that the MDS does not need to log metadata changes to nonvolatile storage to achieve a highly available metadata service and improved performance in metadata processing. Because the client file system backs up certain sent metadata requests in its memory, the overhead of handling these backup requests is much smaller than the overhead the metadata server would incur by adopting logging or journaling to provide a highly available metadata service. The experimental results show that this newly proposed mechanism can significantly improve the speed of metadata processing and deliver better I/O data throughput than conventional metadata management schemes, that is, logging or journaling on the MDS. Besides, a complete metadata recovery can be achieved by replaying the backup logs cached by all involved clients when the metadata server has crashed or unexpectedly entered a nonoperational state.
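
    A toy version of the scheme makes the division of labour explicit: the MDS applies metadata updates only in memory and never journals them, each client keeps a backup of the requests the MDS has acknowledged, and after a crash a fresh MDS is rebuilt by replaying those client-side backups. The class names and request format below are stand-ins, not the paper's implementation.

```python
# Toy version of the log-less scheme: the MDS applies metadata updates only
# in memory (no journal); each client keeps a backup of the requests the MDS
# has acknowledged; after a crash the MDS is rebuilt by replaying those
# backups.  Class names and request format are stand-ins, not the paper's.
class MetadataServer:
    def __init__(self):
        self.table = {}                       # in-memory metadata only

    def handle(self, request):
        op, path, value = request
        if op == "set":
            self.table[path] = value
        elif op == "unlink":
            self.table.pop(path, None)
        return "ack"                          # no write to stable storage

class Client:
    def __init__(self, mds):
        self.mds, self.backlog = mds, []

    def send(self, request):
        if self.mds.handle(request) == "ack":
            self.backlog.append(request)      # client-side backup of the request

def recover(clients):
    """Rebuild a fresh MDS by replaying every client's acknowledged requests.
    (A real recovery would also restore the global order across clients.)"""
    new_mds = MetadataServer()
    for c in clients:
        for request in c.backlog:
            new_mds.handle(request)
        c.mds = new_mds
    return new_mds

mds = MetadataServer()
a, b = Client(mds), Client(mds)
a.send(("set", "/data/x", {"size": 4096}))
b.send(("set", "/data/y", {"size": 512}))
a.send(("unlink", "/data/x", None))
print(recover([a, b]).table)                  # {'/data/y': {'size': 512}}
```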

  12. Log-Less Metadata Management on Metadata Server for Parallel File Systems

    PubMed Central

    Xiao, Guoqiang; Peng, Xiaoning

    2014-01-01

    This paper presents a novel metadata management mechanism on the metadata server (MDS) for parallel and distributed file systems. In this technique, the client file system backs up the sent metadata requests that have been handled by the metadata server, so that the MDS does not need to log metadata changes to nonvolatile storage to achieve a highly available metadata service and improved performance in metadata processing. Because the client file system backs up certain sent metadata requests in its memory, the overhead of handling these backup requests is much smaller than the overhead the metadata server would incur by adopting logging or journaling to provide a highly available metadata service. The experimental results show that this newly proposed mechanism can significantly improve the speed of metadata processing and deliver better I/O data throughput than conventional metadata management schemes, that is, logging or journaling on the MDS. Besides, a complete metadata recovery can be achieved by replaying the backup logs cached by all involved clients when the metadata server has crashed or unexpectedly entered a nonoperational state. PMID:24892093

  13. Memory modulation across neural systems: intra-amygdala glucose reverses deficits caused by intraseptal morphine on a spatial task but not on an aversive task.

    PubMed

    McNay, E C; Gold, P E

    1998-05-15

    Based largely on dissociations of the effects of different lesions on learning and memory, memories for different attributes appear to be organized in independent neural systems. Results obtained with direct injections of drugs into one brain region at a time support a similar conclusion. The present experiments investigated the effects of simultaneous pharmacological manipulation of two neural systems, the amygdala and the septohippocampal system, to examine possible interactions of memory modulation across systems. Morphine injected into the medial septum impaired memory both for avoidance training and during spontaneous alternation. When glucose was concomitantly administered to the amygdala, glucose reversed the morphine-induced deficits in memory during alternation but not for avoidance training. These results suggest that the amygdala is involved in modulation of spatial memory processes and that direct injections of memory-modulating drugs into the amygdala do not always modulate memory for aversive events. These findings are contrary to predictions from the findings of lesion studies and of studies using direct injections of drugs into single brain areas. Thus, the independence of neural systems responsible for processing different classes of memory is less clear than implied by studies using lesions or injections of drugs into single brain areas.

  14. Memory dynamics under stress.

    PubMed

    Quaedflieg, Conny W E M; Schwabe, Lars

    2018-03-01

    Stressful events have a major impact on memory. They modulate memory formation in a time-dependent manner, closely linked to the temporal profile of action of major stress mediators, in particular catecholamines and glucocorticoids. Shortly after stressor onset, rapidly acting catecholamines and fast, non-genomic glucocorticoid actions direct cognitive resources to the processing and consolidation of the ongoing threat. In parallel, control of memory is biased towards rather rigid systems, promoting habitual forms of memory allowing efficient processing under stress, at the expense of "cognitive" systems supporting memory flexibility and specificity. In this review, we discuss the implications of this shift in the balance of multiple memory systems for the dynamics of the memory trace. Specifically, stress appears to hinder the incorporation of contextual details into the memory trace, to impede the integration of new information into existing knowledge structures, to impair the flexible generalisation across past experiences, and to hamper the modification of memories in light of new information. Delayed, genomic glucocorticoid actions might reverse the control of memory, thus restoring homeostasis and "cognitive" control of memory again.

  15. Load matters: neural correlates of verbal working memory in children with autism spectrum disorder.

    PubMed

    Vogan, Vanessa M; Francis, Kaitlyn E; Morgan, Benjamin R; Smith, Mary Lou; Taylor, Margot J

    2018-06-01

    Autism spectrum disorder (ASD) is a pervasive neurodevelopmental disorder characterised by diminished social reciprocity and communication skills and the presence of stereotyped and restricted behaviours. Executive functioning deficits, such as working memory, are associated with core ASD symptoms. Working memory allows for temporary storage and manipulation of information and relies heavily on frontal-parietal networks of the brain. There are few reports on the neural correlates of working memory in youth with ASD. The current study identified the neural systems underlying verbal working memory capacity in youth with and without ASD using functional magnetic resonance imaging (fMRI). Fifty-seven youth, 27 with ASD and 30 sex- and age-matched typically developing (TD) controls (9-16 years), completed a one-back letter matching task (LMT) with four levels of difficulty (i.e. cognitive load) while fMRI data were recorded. Linear trend analyses were conducted to examine brain regions that were recruited as a function of increasing cognitive load. We found similar behavioural performance on the LMT in terms of reaction times, but in the two higher load conditions the ASD youth had lower accuracy than the TD group. Neural patterns of activation differed significantly between the TD and ASD groups. In TD youth, areas classically used for working memory, including the lateral and medial frontal as well as superior parietal brain regions, increased in activation with increasing task difficulty, while areas related to the default mode network (DMN) showed decreasing activation (i.e., deactivation). The youth with ASD did not appear to use this opposing cognitive processing system; they showed little recruitment of frontal and parietal regions across load levels but did show similar modulation of the DMN. In a working memory task where the load was manipulated without changing executive demands, TD youth showed increasing recruitment of the classic fronto-parietal brain areas with increasing load and decreasing involvement of default mode regions. In contrast, although they modulated the default mode network, youth with ASD did not show increasing brain activation with increasing load, suggesting that they may be unable to manage increasing verbal information. Impaired verbal working memory in ASD would interfere with the youths' success academically and socially. Thus, determining the nature of atypical neural processing could help establish or monitor working memory interventions for ASD.

  16. Cancer immunotherapy and immunological memory.

    PubMed

    Murata, Kenji; Tsukahara, Tomohide; Torigoe, Toshihiko

    2016-01-01

    Human immunological memory is the key distinguishing hallmark of the adaptive immune system and plays an important role in the prevention of morbidity and the severity of infection. The differentiation system of T cell memory has been clarified using mouse models. However, the human T cell memory system has great diversity induced by natural antigens derived from many pathogens and tumor cells throughout life, and profoundly differs from the mouse memory system constructed using artificial antigens and transgenic T cells. We believe that only human studies can elucidate the human immune system. The importance of immunological memory in cancer immunotherapy has been pointed out, and the trafficking properties and long-lasting anti-tumor capacity of memory T cells play a crucial role in the control of malignant tumors. Adoptive cell transfer of less differentiated T cells has consistently demonstrated superior anti-tumor capacity relative to more differentiated T cells. Therefore, a human T cell population with the characteristics of stem cell memory is thought to be attractive for peptide vaccination and adoptive cell transfer. A novel human memory T cell population that we have identified is closer to the naive state than previous memory T cells in the T cell differentiation lineage, and has the characteristics of stem-like chemoresistance. Here we introduce this novel population and describe the fundamentals of immunological memory in cancer immunotherapy.

  17. Chemical Memory Reactions Induced Bursting Dynamics in Gene Expression

    PubMed Central

    Tian, Tianhai

    2013-01-01

    Memory is a ubiquitous phenomenon in biological systems, in which the present system state is not entirely determined by the current conditions but also depends on the time evolutionary path of the system. Specifically, many memorial phenomena are characterized by chemical memory reactions that may fire under particular system conditions. These conditional chemical reactions are at odds with the extant stochastic approaches for modeling chemical kinetics and pose significant challenges to mathematical modeling and computer simulation. To tackle the challenge, I proposed a novel theory consisting of the memory chemical master equations and a memory stochastic simulation algorithm. A stochastic model for single-gene expression was proposed to illustrate the key function of memory reactions in inducing the bursting dynamics of gene expression that has recently been observed in experiments. The importance of memory reactions was further validated by a stochastic model of the p53-MDM2 core module. Simulations showed that memory reactions are a major mechanism for realizing both sustained oscillations of p53 protein numbers in single cells and damped oscillations over a population of cells. These successful applications of the memory modeling framework suggest that this innovative theory is an effective and powerful tool for studying memory processes and conditional chemical reactions in a wide range of complex biological systems. PMID:23349679

  18. Chemical memory reactions induced bursting dynamics in gene expression.

    PubMed

    Tian, Tianhai

    2013-01-01

    Memory is a ubiquitous phenomenon in biological systems, in which the present system state is not entirely determined by the current conditions but also depends on the time evolutionary path of the system. Specifically, many memorial phenomena are characterized by chemical memory reactions that may fire under particular system conditions. These conditional chemical reactions are at odds with the extant stochastic approaches for modeling chemical kinetics and pose significant challenges to mathematical modeling and computer simulation. To tackle the challenge, I proposed a novel theory consisting of the memory chemical master equations and a memory stochastic simulation algorithm. A stochastic model for single-gene expression was proposed to illustrate the key function of memory reactions in inducing the bursting dynamics of gene expression that has recently been observed in experiments. The importance of memory reactions was further validated by a stochastic model of the p53-MDM2 core module. Simulations showed that memory reactions are a major mechanism for realizing both sustained oscillations of p53 protein numbers in single cells and damped oscillations over a population of cells. These successful applications of the memory modeling framework suggest that this innovative theory is an effective and powerful tool for studying memory processes and conditional chemical reactions in a wide range of complex biological systems.

  19. The remains of the day in dissociative amnesia.

    PubMed

    Staniloiu, Angelica; Markowitsch, Hans J

    2012-04-10

    Memory is not a unity, but is divided along a content axis and a time axis, respectively. Along the content dimension, five long-term memory systems are described, according to their hierarchical ontogenetic and phylogenetic organization. These memory systems are assumed to be accompanied by different levels of consciousness. While encoding is based on a hierarchical arrangement of memory systems from procedural to episodic-autobiographical memory, retrieval allows independence in the sense that no matter how information is encoded, it can be retrieved in any memory system. Thus, we illustrate the relations between various long-term memory systems by reviewing the spectrum of abnormalities in mnemonic processing that may arise in dissociative amnesia, a condition that is usually characterized by a retrieval blockade of episodic-autobiographical memories and occurs in the context of psychological trauma, without evidence of brain damage on conventional structural imaging. Furthermore, we comment on the functions of implicit memories in guiding and even adaptively molding the behavior of patients with dissociative amnesia and preserving, in the absence of autonoetic consciousness, the so-called "internal coherence of life".

  20. Multiple Memory Systems Are Unnecessary to Account for Infant Memory Development: An Ecological Model

    PubMed Central

    Rovee-Collier, Carolyn; Cuevas, Kimberly

    2009-01-01

    How the memory of adults evolves from the memory abilities of infants is a central problem in cognitive development. The popular solution holds that the multiple memory systems of adults mature at different rates during infancy. The early-maturing system (implicit or nondeclarative memory) functions automatically from birth, whereas the late-maturing system (explicit or declarative memory) functions intentionally, with awareness, from late in the first year. Data are presented from research on deferred imitation, sensory preconditioning, potentiation, and context for which this solution cannot account, and an alternative model is presented that eschews the need for multiple memory systems. The ecological model of infant memory development (N. E. Spear, 1984) holds that members of all species are perfectly adapted to their niche at each point in ontogeny and exhibit effective, evolutionarily selected solutions to whatever challenges each new niche poses. Because adults and infants occupy different niches, what they perceive, learn, and remember about the same event differs, but their raw capacity to learn and remember does not. PMID:19209999

  1. The Remains of the Day in Dissociative Amnesia

    PubMed Central

    Staniloiu, Angelica; Markowitsch, Hans J.

    2012-01-01

    Memory is not a unity, but is divided along a content axis and a time axis, respectively. Along the content dimension, five long-term memory systems are described, according to their hierarchical ontogenetic and phylogenetic organization. These memory systems are assumed to be accompanied by different levels of consciousness. While encoding is based on a hierarchical arrangement of memory systems from procedural to episodic-autobiographical memory, retrieval allows independence in the sense that no matter how information is encoded, it can be retrieved in any memory system. Thus, we illustrate the relations between various long-term memory systems by reviewing the spectrum of abnormalities in mnemonic processing that may arise in dissociative amnesia, a condition that is usually characterized by a retrieval blockade of episodic-autobiographical memories and occurs in the context of psychological trauma, without evidence of brain damage on conventional structural imaging. Furthermore, we comment on the functions of implicit memories in guiding and even adaptively molding the behavior of patients with dissociative amnesia and preserving, in the absence of autonoetic consciousness, the so-called "internal coherence of life". PMID:24962768

  2. Developmental amnesia: Fractionation of developing memory systems.

    PubMed

    Temple, Christine M; Richardson, Paul

    2006-07-01

    Study of the developmental amnesias utilizing a cognitive neuropsychological methodology has highlighted the dissociations that may occur between the development of components of memory. M.M., a new case of developmental amnesia, was identified after screening from the normal population on cognitive and memory measures. Retrospective investigation found that he was of low birthweight. M.M. had impaired semantic memory for knowledge of facts and words. There was impaired episodic memory for words and stories but intact episodic memory for visual designs and features. This forms a double dissociation with Dr S. (Temple, 1992), who had intact verbal but impaired visual episodic memory. M.M. also had impaired autobiographical episodic memory. Nevertheless, learning over repeated trials occurred, consistent with previous theorizing that learning is not simply the effect of recurrent episodic memory. Nor is it the same as establishing semantic memory, since for M.M. semantic memory is also impaired. Within reading, there was an impaired lexico-semantic system, elevated levels of homophone confusion, but intact phonological reading, consistent with surface dyslexia and raising issues about the interrelationship of the semantic system and literacy development. The results are compatible with discrete semi-independent components within memory development, whereby deficits are associated with residual normality, but there may also be an explicit relationship between the semantic memory system and both vocabulary and reading acquisition.

  3. The Research on Linux Memory Forensics

    NASA Astrophysics Data System (ADS)

    Zhang, Jun; Che, ShengBing

    2018-03-01

    Memory forensics is a branch of computer forensics. It does not depend on the operating system API, and instead analyzes operating system information from binary memory data. Based on the 64-bit Linux operating system, it analyzes system process and thread information from physical memory data. Using ELF file debugging information, we propose a method for locating kernel structure member variables that can be applied to different versions of the Linux operating system. The experimental results show that the method can successfully obtain the system process information from physical memory data and is compatible with multiple versions of the Linux kernel.
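
    A minimal sketch of the offset-lookup idea, assuming a vmlinux image built with DWARF debug information and the pyelftools package (the paper does not name a specific library, so this is purely illustrative); task_struct and comm are standard kernel identifiers:

      # Find the byte offset of a member inside a kernel structure from DWARF
      # debug info, so analysis code can adapt to different kernel builds.
      from elftools.elf.elffile import ELFFile

      def member_offset(vmlinux_path, struct_name, member_name):
          with open(vmlinux_path, 'rb') as f:
              dwarf = ELFFile(f).get_dwarf_info()
              for cu in dwarf.iter_CUs():
                  for die in cu.iter_DIEs():
                      if (die.tag == 'DW_TAG_structure_type'
                              and die.attributes.get('DW_AT_name')
                              and die.attributes['DW_AT_name'].value == struct_name.encode()):
                          for child in die.iter_children():
                              if (child.tag == 'DW_TAG_member'
                                      and child.attributes.get('DW_AT_name')
                                      and child.attributes['DW_AT_name'].value == member_name.encode()):
                                  # In DWARF 3/4 this attribute is normally a plain constant.
                                  return child.attributes['DW_AT_data_member_location'].value
          return None

      # Example: offset of the process-name field inside task_struct.
      # print(member_offset('vmlinux', 'task_struct', 'comm'))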

  4. Reimagining Reading: Creating a Classroom Culture That Embraces Independent Choice Reading

    ERIC Educational Resources Information Center

    Dickerson, Katie

    2015-01-01

    Many of us are plagued by negative memories of sustained silent reading. In some of these memories, we are the students, attempting to read a book that didn't hold our interest or trying to read over the din of our disengaged classmates. In other memories, we are the teachers, suffering through a ten-minute classroom management nightmare, deciding…

  5. Formal verification of an MMU and MMU cache

    NASA Technical Reports Server (NTRS)

    Schubert, E. T.

    1991-01-01

    We describe the formal verification of a hardware subsystem consisting of a memory management unit and a cache. These devices are verified independently and then shown to interact correctly when composed. The MMU authorizes memory requests and translates virtual addresses to real addresses. The cache improves performance by maintaining a LRU (least recently used) list from the memory resident segment table.
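
    The LRU bookkeeping such a cache performs can be pictured with a short sketch; this is a generic illustration of least-recently-used replacement over segment-table entries, not the verified hardware design itself:

      from collections import OrderedDict

      class SegmentTableCache:
          """Caches segment-table entries and evicts the least recently used one."""
          def __init__(self, capacity, segment_table):
              self.capacity = capacity
              self.segment_table = segment_table   # backing store in main memory
              self.entries = OrderedDict()         # segment id -> entry, kept in LRU order

          def lookup(self, segment_id):
              if segment_id in self.entries:
                  self.entries.move_to_end(segment_id)     # hit: mark most recently used
                  return self.entries[segment_id]
              entry = self.segment_table[segment_id]       # miss: fetch from the segment table
              if len(self.entries) >= self.capacity:
                  self.entries.popitem(last=False)         # evict the least recently used entry
              self.entries[segment_id] = entry
              return entry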

  6. Investigation of fast initialization of spacecraft bubble memory systems

    NASA Technical Reports Server (NTRS)

    Looney, K. T.; Nichols, C. D.; Hayes, P. J.

    1984-01-01

    Bubble domain technology offers significant improvement in reliability and functionality for spacecraft onboard memory applications. In considering potential memory systems organizations, minimization of power in high capacity bubble memory systems necessitates the activation of only the desired portions of the memory. In power strobing arbitrary memory segments, a capability of fast turn on is required. Bubble device architectures, which provide redundant loop coding in the bubble devices, limit the initialization speed. Alternate initialization techniques are investigated to overcome this design limitation. An initialization technique using a small amount of external storage is demonstrated.

  7. Age effects on explicit and implicit memory

    PubMed Central

    Ward, Emma V.; Berry, Christopher J.; Shanks, David R.

    2013-01-01

    It is well-documented that explicit memory (e.g., recognition) declines with age. In contrast, many argue that implicit memory (e.g., priming) is preserved in healthy aging. For example, priming on tasks such as perceptual identification is often not statistically different in groups of young and older adults. Such observations are commonly taken as evidence for distinct explicit and implicit learning/memory systems. In this article we discuss several lines of evidence that challenge this view. We describe how patterns of differential age-related decline may arise from differences in the ways in which the two forms of memory are commonly measured, and review recent research suggesting that under improved measurement methods, implicit memory is not age-invariant. Formal computational models are of considerable utility in revealing the nature of underlying systems. We report the results of applying single and multiple-systems models to data on age effects in implicit and explicit memory. Model comparison clearly favors the single-system view. Implications for the memory systems debate are discussed. PMID:24065942
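
    A toy illustration of the single-system idea, assuming (as in some published single-system models) that one latent memory-strength variable drives both a recognition judgement and a priming measure, each with its own measurement noise; the parameter values are arbitrary:

      import numpy as np

      rng = np.random.default_rng(0)
      n_items = 1000
      strength = rng.normal(1.0, 1.0, n_items)              # one memory strength per studied item

      recognition = strength + rng.normal(0, 1.0, n_items)  # explicit measure adds its own noise
      priming     = strength + rng.normal(0, 2.0, n_items)  # implicit measure is noisier here

      # A single source drives both measures, yet the measured effects differ in
      # reliability, so unequal age effects alone need not imply separate systems.
      print(np.corrcoef(recognition, priming)[0, 1])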

  8. Traces of Drosophila Memory

    PubMed Central

    Davis, Ronald L.

    2012-01-01

    Studies using functional cellular imaging of living flies have identified six memory traces that form in the olfactory nervous system after conditioning with odors. These traces occur in distinct nodes of the olfactory nervous system, form and disappear across different windows of time, and are detected in the imaged neurons as increased calcium influx or synaptic release in response to the conditioned odor. Three traces form at, or near, acquisition and co-exist with short-term behavioral memory. One trace forms with a delay after learning and co-exists with intermediate-term behavioral memory. Two traces form many hours after acquisition and co-exist with long-term behavioral memory. The transient memory traces may support behavior across the time-windows of their existence. The experimental approaches for dissecting memory formation in the fly, ranging from the molecular to the systems level, make it an ideal system for dissecting the logic by which the nervous system organizes and stores different temporal forms of memory. PMID:21482352

  9. Kingfisher: a system for remote sensing image database management

    NASA Astrophysics Data System (ADS)

    Bruzzo, Michele; Giordano, Ferdinando; Dellepiane, Silvana G.

    2003-04-01

    At present, retrieval methods in remote sensing image databases are mainly based on spatial-temporal information. The increasing number of images collected by the ground stations of Earth observation systems emphasizes the need for database management with intelligent data retrieval capabilities. The purpose of the proposed method is to realize a new content-based retrieval system for remote sensing image databases, with an innovative search tool based on image similarity. This methodology is quite innovative for this application: many systems exist for photographic images, for example QBIC and IKONA, but they are not able to properly extract and describe the content of remote sensing images. The target database is an archive of images originated from an X-SAR sensor (spaceborne mission, 1994). The best content descriptors, mainly texture parameters, guarantee high retrieval performance and can be extracted without loss independently of image resolution. The latter property allows the DBMS (Database Management System) to process a small amount of information, as in the case of quick-look images, improving time performance and memory access without reducing retrieval accuracy. The matching technique has been designed to enable image management (database population and retrieval) independently of image dimensions (width and height). Local and global content descriptors are compared with the query image during the retrieval phase, and the results appear very encouraging.
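
    A minimal sketch of descriptor-based retrieval of the kind described, assuming each archived image has already been reduced to a fixed-length vector of texture parameters; the feature extraction itself is omitted and the file name is hypothetical:

      import numpy as np

      def rank_by_similarity(query_descriptor, archive_descriptors):
          """Return archive indices ordered from most to least similar to the query.

          archive_descriptors: (n_images, n_features) array of texture parameters,
          computed once at database-population time (e.g. from quick-look images).
          """
          q = np.asarray(query_descriptor, dtype=float)
          distances = np.linalg.norm(archive_descriptors - q, axis=1)   # Euclidean distance
          return np.argsort(distances)

      # archive = np.load('xsar_texture_descriptors.npy')               # hypothetical file
      # best_matches = rank_by_similarity(query_vector, archive)[:10]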

  10. Declarative and nondeclarative memory: multiple brain systems supporting learning and memory.

    PubMed

    Squire, L R

    1992-01-01

    The topic of multiple forms of memory is considered from a biological point of view. Fact-and-event (declarative, explicit) memory is contrasted with a collection of nonconscious (nondeclarative, implicit) memory abilities, including skills and habits, priming, and simple conditioning. Recent evidence is reviewed indicating that declarative and nondeclarative forms of memory have different operating characteristics and depend on separate brain systems. A brain-systems framework for understanding memory phenomena is developed in light of lesion studies involving rats, monkeys, and humans, as well as recent studies with normal humans using the divided visual field technique, event-related potentials, and positron emission tomography (PET).

  11. Memories and NASA Spacecraft: A Description of Memories, Radiation Failure Modes, and System Design Considerations

    NASA Technical Reports Server (NTRS)

    LaBel, Kenneth A.; Ladbury, Ray; Oldhamm, Timothy

    2010-01-01

    As NASA has evolved its usage of spaceflight computing, memory applications have followed as well. In this slide presentation, the history of NASA's memories, from magnetic core and tape recorders to current semiconductor approaches, is discussed. There is a brief description of current functional memory usage in NASA space systems, followed by a description of potential radiation-induced failure modes along with considerations for reliable system design.

  12. Memory effects on stochastic resonance

    NASA Astrophysics Data System (ADS)

    Neiman, Alexander; Sung, Wokyung

    1996-02-01

    We study the phenomenon of stochastic resonance (SR) in a bistable system with internal colored noise. In this situation the system possesses time-dependent memory friction connected with noise via the fluctuation-dissipation theorem, so that in the absence of periodic driving the system approaches the thermodynamic equilibrium state. For this non-Markovian case we find that memory usually suppresses stochastic resonance. However, for a large memory time SR can be enhanced by the memory.
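
    In such non-Markovian models the dynamics are commonly written as a generalized Langevin equation whose friction kernel is tied to the noise through the fluctuation-dissipation theorem; a generic dimensionless form (not necessarily the paper's exact equations) is

      \ddot{x}(t) = -V'(x) - \int_0^t \gamma(t - t')\,\dot{x}(t')\,dt' + \xi(t) + A\cos(\Omega t),
      \qquad \langle \xi(t)\,\xi(t') \rangle = k_B T\,\gamma(|t - t'|),

    where V(x) is the bistable potential and A\cos(\Omega t) the weak periodic driving; the correlation time of \gamma sets the memory time that determines whether stochastic resonance is suppressed or enhanced.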

  13. The design and implementation of GML data management information system based on PostgreSQL

    NASA Astrophysics Data System (ADS)

    Zhang, Aiguo; Wu, Qunyong; Xu, Qifeng

    2008-10-01

    GML expresses geographic information as text and provides an extensible, standard way of encoding spatial information. At present, GML data is managed at the level of whole documents. Managed this way, querying and updating GML data is inefficient and demands a large amount of memory when the document is comparatively large. To address this, the paper puts forward GML data management based on PostgreSQL. It designs four kinds of queries: queries of metadata, queries of geometry based on properties, queries of properties based on spatial information, and queries of spatial data based on location. It also designs and implements visualization of the retrieved WKT data.
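
    A sketch of how the four query types might look once GML features have been decomposed into relational tables, assuming a hypothetical schema features(id, name, feat_class, geom_wkt, metadata), the psycopg2 driver, and the PostGIS extension for the spatial predicates; the paper's actual schema and query set may differ:

      import psycopg2

      conn = psycopg2.connect("dbname=gml_store user=gis")   # hypothetical connection string
      cur = conn.cursor()

      # 1) Query of metadata.
      cur.execute("SELECT id, metadata FROM features WHERE name = %s", ("river",))

      # 2) Query of geometry based on a property value.
      cur.execute("SELECT geom_wkt FROM features WHERE feat_class = %s", ("road",))

      # 3) Query of properties based on spatial information (PostGIS predicate).
      cur.execute(
          "SELECT name FROM features "
          "WHERE ST_Intersects(ST_GeomFromText(geom_wkt), ST_GeomFromText(%s))",
          ("POLYGON((0 0,0 1,1 1,1 0,0 0))",))

      # 4) Query of spatial data based on location (all geometries near a point).
      cur.execute(
          "SELECT geom_wkt FROM features "
          "WHERE ST_DWithin(ST_GeomFromText(geom_wkt), ST_GeomFromText(%s), %s)",
          ("POINT(119.3 26.1)", 0.05))
      print(cur.fetchall())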

  14. Application-Controlled Demand Paging for Out-of-Core Visualization

    NASA Technical Reports Server (NTRS)

    Cox, Michael; Ellsworth, David; Kutler, Paul (Technical Monitor)

    1997-01-01

    In the area of scientific visualization, input data sets are often very large. In visualization of Computational Fluid Dynamics (CFD) in particular, input data sets today can surpass 100 Gbytes, and are expected to scale with the ability of supercomputers to generate them. Some visualization tools already partition large data sets into segments, and load appropriate segments as they are needed. However, this does not remove the problem for two reasons: 1) there are data sets for which even the individual segments are too large for the largest graphics workstations, 2) many practitioners do not have access to workstations with the memory capacity required to load even a segment, especially since the state-of-the-art visualization tools tend to be developed by researchers with much more powerful machines. When the size of the data that must be accessed is larger than the size of memory, some form of virtual memory is simply required. This may be by segmentation, paging, or by paged segments. In this paper we demonstrate that complete reliance on operating system virtual memory for out-of-core visualization leads to poor performance. We then describe a paged segment system that we have implemented, and explore the principles of memory management that can be employed by the application for out-of-core visualization. We show that application control over some of these can significantly improve performance. We show that sparse traversal can be exploited by loading only those data actually required. We show also that application control over data loading can be exploited by 1) loading data from alternative storage format (in particular 3-dimensional data stored in sub-cubes), 2) controlling the page size. Both of these techniques effectively reduce the total memory required by visualization at run-time. We also describe experiments we have done on remote out-of-core visualization (when pages are read by demand from remote disk) whose results are promising.
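
    A minimal sketch of application-controlled paging over sub-cubes, assuming the volume has been pre-split into equally sized bricks stored contiguously in one raw file; the file layout, brick size and LRU policy are illustrative choices, not the paper's exact system. Loading only the bricks a sparse traversal touches, and choosing the brick (page) size, are exactly the application-level controls the paragraph argues for:

      import numpy as np
      from collections import OrderedDict

      class SubCubeCache:
          """Loads 3-D bricks on demand and keeps only the most recently used ones."""
          def __init__(self, path, brick_shape, dtype=np.float32, max_bricks=64):
              self.path, self.brick_shape = path, brick_shape
              self.dtype = np.dtype(dtype)
              self.brick_bytes = int(np.prod(brick_shape)) * self.dtype.itemsize
              self.max_bricks = max_bricks
              self.cache = OrderedDict()            # brick index -> ndarray

          def brick(self, index):
              if index in self.cache:
                  self.cache.move_to_end(index)     # sparse traversal: reuse hot bricks
                  return self.cache[index]
              with open(self.path, 'rb') as f:      # page in only the brick we need
                  f.seek(index * self.brick_bytes)
                  data = np.frombuffer(f.read(self.brick_bytes), dtype=self.dtype)
              data = data.reshape(self.brick_shape)
              if len(self.cache) >= self.max_bricks:
                  self.cache.popitem(last=False)    # evict the least recently used brick
              self.cache[index] = data
              return data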

  15. System for simultaneously loading program to master computer memory devices and corresponding slave computer memory devices

    NASA Technical Reports Server (NTRS)

    Hall, William A. (Inventor)

    1993-01-01

    A bus programmable slave module card for use in a computer control system is disclosed which comprises a master computer and one or more slave computer modules interfacing by means of a bus. Each slave module includes its own microprocessor, memory, and control program for acting as a single loop controller. The slave card includes a plurality of memory means (S1, S2...) corresponding to a like plurality of memory devices (C1, C2...) in the master computer, for each slave memory means its own communication lines connectable through the bus with memory communication lines of an associated memory device in the master computer, and a one-way electronic door which is switchable to either a closed condition or a one-way open condition. With the door closed, communication lines between master computer memory (C1, C2...) and slave memory (S1, S2...) are blocked. In the one-way open condition, the memory communication lines of each slave memory means (S1, S2...) connect with the memory communication lines of its associated memory device (C1, C2...) in the master computer, and the memory devices (C1, C2...) of the master computer and slave card are electrically parallel such that information seen by the master's memory is also seen by the slave's memory. The slave card is also connectable to a switch for electronically removing the slave microprocessor from the system. With the master computer and the slave card in programming mode relationship, and the slave microprocessor electronically removed from the system, loading a program into the memory devices (C1, C2...) of the master accomplishes a parallel loading into the memory devices (S1, S2...) of the slave.

  16. Survey State of the Art: Electrical Load Management Techniques and Equipment.

    DTIC Science & Technology

    1986-10-31

    Excerpt fragments: …automobiles and even appliances. Applications in the area of demand and energy management have been multifaceted, given the needs involved and rapid paybacks… a copy of the programming can be reloaded into the controller at any time, and by designing this module with erasable and reprogrammable memory, the… programming that monitors points and performs DDC (direct digital control) of output points is stored in reprogrammable, permanent memory. A RIM may accommodate up…

  17. Primary Care-Based Memory Clinics: Expanding Capacity for Dementia Care.

    PubMed

    Lee, Linda; Hillier, Loretta M; Heckman, George; Gagnon, Micheline; Borrie, Michael J; Stolee, Paul; Harvey, David

    2014-09-01

    The implementation in Ontario of 15 primary-care-based interprofessional memory clinics represented a unique model of team-based case management aimed at increasing capacity for dementia care at the primary-care level. Each clinic tracked referrals; in a subset of clinics, charts were audited by geriatricians, clinic members were interviewed, and patients, caregivers, and referring physicians completed satisfaction surveys. Across all clinics, 582 patients were assessed, and 8.9 per cent were referred to a specialist. Patients and caregivers were very satisfied with the care received, as were referring family physicians, who reported increased capacity to manage dementia. Geriatricians' chart audits revealed a high level of agreement with diagnosis and management. This study demonstrated acceptability, feasibility, and preliminary effectiveness of the primary-care memory clinic model. Led by specially trained family physicians, it provided timely access to high-quality collaborative dementia care, impacting health service utilization by more-efficient use of scarce geriatric specialist resources.

  18. Stochastic memory: getting memory out of noise

    NASA Astrophysics Data System (ADS)

    Stotland, Alexander; di Ventra, Massimiliano

    2011-03-01

    Memory circuit elements, namely memristors, memcapacitors and meminductors, can store information without the need of a power source. These systems are generally defined in terms of deterministic equations of motion for the state variables that are responsible for memory. However, in real systems noise sources can never be eliminated completely. One would then expect noise to be detrimental for memory. Here, we show that under specific conditions on the noise intensity, memory can actually be enhanced. We illustrate this phenomenon using a physical model of a memristor in which the addition of white noise into the state variable equation improves the memory and helps the operation of the system. We discuss the conditions under which this effect can be realized experimentally, consider its implications for existing memory systems described in the literature, and also analyze the effects of colored noise. Work supported in part by NSF.
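
    The mechanism can be stated compactly. A generic voltage-controlled memristor with additive white noise in its internal-state equation (a simplified stand-in for the physical model discussed, with symbols chosen here for illustration) reads

      V(t) = R(x)\,I(t), \qquad \frac{dx}{dt} = f(x, V) + \eta(t), \qquad \langle \eta(t)\,\eta(t') \rangle = \Gamma\,\delta(t - t'),

    where x is the internal memory state and \Gamma the noise intensity; the claim is that within an intermediate window of \Gamma the hysteresis in the current-voltage response, i.e. the memory, is strengthened rather than washed out.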

  19. Collective input/output under memory constraints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Yin; Chen, Yong; Zhuang, Yu

    2014-12-18

    Compared with current high-performance computing (HPC) systems, exascale systems are expected to have much less memory per node, which can significantly reduce necessary collective input/output (I/O) performance. In this study, we introduce a memory-conscious collective I/O strategy that takes into account memory capacity and bandwidth constraints. The new strategy restricts aggregation data traffic within disjointed subgroups, coordinates I/O accesses in intranode and internode layers, and determines I/O aggregators at run time considering memory consumption among processes. We have prototyped the design and evaluated it with commonly used benchmarks to verify its potential. The evaluation results demonstrate that this strategy holds promise in mitigating the memory pressure, alleviating the contention for memory bandwidth, and improving the I/O performance for projected extreme-scale systems. Given the importance of supporting increasingly data-intensive workloads and projected memory constraints on increasingly larger scale HPC systems, this new memory-conscious collective I/O can have a significant positive impact on scientific discovery productivity.
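
    The aggregator-selection step can be sketched in a few lines; this is a schematic of the idea (keep aggregation traffic inside disjoint intranode subgroups and pick the process with the most free memory), not the prototype's actual MPI implementation:

      from collections import defaultdict

      def choose_aggregators(procs):
          """procs: list of dicts like {'rank': 7, 'node': 'n03', 'free_mem': 1.2e9}.

          Aggregation traffic stays inside each node (disjoint subgroups); within a
          node, the rank with the most free memory becomes the I/O aggregator.
          """
          groups = defaultdict(list)
          for p in procs:
              groups[p['node']].append(p)
          return {node: max(members, key=lambda p: p['free_mem'])['rank']
                  for node, members in groups.items()}

      # Example: two nodes, two ranks each.
      procs = [{'rank': 0, 'node': 'n0', 'free_mem': 2e9},
               {'rank': 1, 'node': 'n0', 'free_mem': 5e8},
               {'rank': 2, 'node': 'n1', 'free_mem': 1e9},
               {'rank': 3, 'node': 'n1', 'free_mem': 3e9}]
      print(choose_aggregators(procs))   # {'n0': 0, 'n1': 3}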

  20. Siemens, Philips megaproject to yield superchip in 5 years

    NASA Astrophysics Data System (ADS)

    1985-02-01

    The development of computer chips using complementary metal oxide semiconductor (CMOS) memory technology is described. The management planning and marketing strategy of the Philips and Siemens corporations with regard to the memory chip are discussed.

  1. Managing internode data communications for an uninitialized process in a parallel computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Archer, Charles J; Blocksome, Michael A; Miller, Douglas R

    2014-05-20

    A parallel computer includes nodes, each having main memory and a messaging unit (MU). Each MU includes computer memory, which in turn includes MU message buffers. Each MU message buffer is associated with an uninitialized process on the compute node. In the parallel computer, managing internode data communications for an uninitialized process includes: receiving, by an MU of a compute node, one or more data communications messages in an MU message buffer associated with an uninitialized process on the compute node; determining, by an application agent, that the MU message buffer associated with the uninitialized process is full prior to initialization of the uninitialized process; establishing, by the application agent, a temporary message buffer for the uninitialized process in main computer memory; and moving, by the application agent, data communications messages from the MU message buffer associated with the uninitialized process to the temporary message buffer in main computer memory.

  2. Increasing available FIFO space to prevent messaging queue deadlocks in a DMA environment

    DOEpatents

    Blocksome, Michael A [Rochester, MN; Chen, Dong [Croton On Hudson, NY; Gooding, Thomas [Rochester, MN; Heidelberger, Philip [Cortlandt Manor, NY; Parker, Jeff [Rochester, MN

    2012-02-07

    Embodiments of the invention may be used to manage message queues in a parallel computing environment to prevent message queue deadlock. A direct memory access controller of a compute node may determine when a messaging queue is full. In response, the DMA may generate an interrupt. An interrupt handler may stop the DMA and swap all descriptors from the full messaging queue into a larger queue (or enlarge the original queue). The interrupt handler then restarts the DMA. Alternatively, the interrupt handler stops the DMA, allocates a memory block to hold queue data, and then moves descriptors from the full messaging queue into the allocated memory block. The interrupt handler then restarts the DMA. During a normal messaging advance cycle, a messaging manager attempts to inject the descriptors in the memory block into other messaging queues until the descriptors have all been processed.
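
    The descriptor-spill idea can be sketched as follows; this is a software analogue (queues as Python lists, the DMA stop/restart reduced to plain method calls on a hypothetical dma object), not the patented hardware mechanism itself:

      class MessagingQueue:
          def __init__(self, capacity):
              self.capacity = capacity
              self.descriptors = []

          def full(self):
              return len(self.descriptors) >= self.capacity

      def on_queue_full_interrupt(dma, full_queue):
          """Interrupt handler: stop the DMA, drain the full queue, restart the DMA."""
          dma.stop()
          spill = list(full_queue.descriptors)      # move descriptors to a larger memory block
          full_queue.descriptors.clear()
          dma.restart()
          return spill                              # re-injected later by the messaging manager

      def advance_cycle(spill, other_queues):
          """Normal advance: try to inject spilled descriptors into queues with room."""
          remaining = []
          for d in spill:
              target = next((q for q in other_queues if not q.full()), None)
              if target is not None:
                  target.descriptors.append(d)
              else:
                  remaining.append(d)
          return remaining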

  3. Managing internode data communications for an uninitialized process in a parallel computer

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Miller, Douglas R; Parker, Jeffrey J; Ratterman, Joseph D; Smith, Brian E

    2014-05-20

    A parallel computer includes nodes, each having main memory and a messaging unit (MU). Each MU includes computer memory, which in turn includes MU message buffers. Each MU message buffer is associated with an uninitialized process on the compute node. In the parallel computer, managing internode data communications for an uninitialized process includes: receiving, by an MU of a compute node, one or more data communications messages in an MU message buffer associated with an uninitialized process on the compute node; determining, by an application agent, that the MU message buffer associated with the uninitialized process is full prior to initialization of the uninitialized process; establishing, by the application agent, a temporary message buffer for the uninitialized process in main computer memory; and moving, by the application agent, data communications messages from the MU message buffer associated with the uninitialized process to the temporary message buffer in main computer memory.
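
    Reduced to a schematic: when the fixed MU buffer for a not-yet-initialized process fills up, an application agent spills its messages into a temporary buffer in main memory until the process starts. The class, names and capacity below are illustrative, not the patent's implementation:

      class ApplicationAgent:
          def __init__(self):
              self.temp_buffers = {}                 # process id -> list of spilled messages

          def on_message(self, mu_buffer, process_id, message, capacity=8):
              if len(mu_buffer) < capacity:
                  mu_buffer.append(message)          # normal path: MU buffer has room
              else:
                  # MU buffer full before the process initialized: spill to main memory.
                  self.temp_buffers.setdefault(process_id, []).append(message)

          def on_process_initialized(self, process_id):
              # Hand over spilled messages once the process can drain its buffers.
              return self.temp_buffers.pop(process_id, [])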

  4. Next Generation Mass Memory Architecture

    NASA Astrophysics Data System (ADS)

    Herpel, H.-J.; Stahle, M.; Lonsdorfer, U.; Binzer, N.

    2010-08-01

    Future Mass Memory units will have to cope with demanding requirements driven by onboard instruments (optical and SAR) that generate a huge amount of data (> 10 Tbit) at data rates above 6 Gbps. For the downlink, data rates around 3 Gbps will be feasible using the latest Ka-band technology together with Variable Coding and Modulation (VCM) techniques. These high data rates and storage capacities need to be managed effectively. Therefore, data structures and data management functions have to be improved and adapted to existing standards such as the Packet Utilisation Standard (PUS). In this paper we present a highly modular and scalable architectural approach for mass memories that supports a wide range of mission requirements.

  5. Design of on-board parallel computer on nano-satellite

    NASA Astrophysics Data System (ADS)

    You, Zheng; Tian, Hexiang; Yu, Shijie; Meng, Li

    2007-11-01

    This paper presents a scheme for an on-board parallel computer system designed for a nano-satellite. Driven by the requirements that a nano-satellite have small volume, low weight, low power consumption, and on-board intelligence, the scheme moves away from the traditional single-computer and dual-computer systems in an effort to improve dependability, capability and intelligence simultaneously. Following an integrated design approach, it uses a shared-memory parallel computer system as the main structure; connects the telemetry, attitude control, and payload systems over an intelligent bus; designs, in light of the parallel algorithms, management functions that handle static tasks and dynamic task scheduling and that protect and recover the on-site status; and establishes mechanisms for fault diagnosis, recovery and system reconfiguration. The result is an on-board parallel computer system with high dependability, capability and intelligence, flexible management of hardware resources, a sound software system, and good extensibility, which fits the concept and trend of integrated electronic design.

  6. A shared resource between declarative memory and motor memory.

    PubMed

    Keisler, Aysha; Shadmehr, Reza

    2010-11-03

    The neural systems that support motor adaptation in humans are thought to be distinct from those that support the declarative system. Yet, during motor adaptation changes in motor commands are supported by a fast adaptive process that has important properties (rapid learning, fast decay) that are usually associated with the declarative system. The fast process can be contrasted to a slow adaptive process that also supports motor memory, but learns gradually and shows resistance to forgetting. Here we show that after people stop performing a motor task, the fast motor memory can be disrupted by a task that engages declarative memory, but the slow motor memory is immune from this interference. Furthermore, we find that the fast/declarative component plays a major role in the consolidation of the slow motor memory. Because of the competitive nature of declarative and nondeclarative memory during consolidation, impairment of the fast/declarative component leads to improvements in the slow/nondeclarative component. Therefore, the fast process that supports formation of motor memory is not only neurally distinct from the slow process, but it shares critical resources with the declarative memory system.

  7. A shared resource between declarative memory and motor memory

    PubMed Central

    Keisler, Aysha; Shadmehr, Reza

    2010-01-01

    The neural systems that support motor adaptation in humans are thought to be distinct from those that support the declarative system. Yet, during motor adaptation changes in motor commands are supported by a fast adaptive process that has important properties (rapid learning, fast decay) that are usually associated with the declarative system. The fast process can be contrasted to a slow adaptive process that also supports motor memory, but learns gradually and shows resistance to forgetting. Here we show that after people stop performing a motor task, the fast motor memory can be disrupted by a task that engages declarative memory, but the slow motor memory is immune from this interference. Furthermore, we find that the fast/declarative component plays a major role in the consolidation of the slow motor memory. Because of the competitive nature of declarative and non-declarative memory during consolidation, impairment of the fast/declarative component leads to improvements in the slow/non-declarative component. Therefore, the fast process that supports formation of motor memory is not only neurally distinct from the slow process, but it shares critical resources with the declarative memory system. PMID:21048140

  8. Cognitive memory.

    PubMed

    Widrow, Bernard; Aragon, Juan Carlos

    2013-05-01

    Regarding the workings of the human mind, memory and pattern recognition seem to be intertwined. You generally do not have one without the other. Taking inspiration from life experience, a new form of computer memory has been devised. Certain conjectures about human memory are keys to the central idea. The design of a practical and useful "cognitive" memory system is contemplated, a memory system that may also serve as a model for many aspects of human memory. The new memory does not function like a computer memory where specific data is stored in specific numbered registers and retrieval is done by reading the contents of the specified memory register, or done by matching key words as with a document search. Incoming sensory data would be stored at the next available empty memory location, and indeed could be stored redundantly at several empty locations. The stored sensory data would neither have key words nor would it be located in known or specified memory locations. Sensory inputs concerning a single object or subject are stored together as patterns in a single "file folder" or "memory folder". When the contents of the folder are retrieved, sights, sounds, tactile feel, smell, etc., are obtained all at the same time. Retrieval would be initiated by a query or a prompt signal from a current set of sensory inputs or patterns. A search through the memory would be made to locate stored data that correlates with or relates to the prompt input. The search would be done by a retrieval system whose first stage makes use of autoassociative artificial neural networks and whose second stage relies on exhaustive search. Applications of cognitive memory systems have been made to visual aircraft identification, aircraft navigation, and human facial recognition. Concerning human memory, reasons are given why it is unlikely that long-term memory is stored in the synapses of the brain's neural networks. Reasons are given suggesting that long-term memory is stored in DNA or RNA. Neural networks are an important component of the human memory system, and their purpose is for information retrieval, not for information storage. The brain's neural networks are analog devices, subject to drift and unplanned change. Only with constant training is reliable action possible. Good training time is during sleep and while awake and making use of one's memory. A cognitive memory is a learning system. Learning involves storage of patterns or data in a cognitive memory. The learning process for cognitive memory is unsupervised, i.e. autonomous. Copyright © 2013 Elsevier Ltd. All rights reserved.
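
    A toy sketch of the retrieval idea, assuming each "memory folder" has already been reduced to a numeric pattern vector: the first stage scores folders by correlation with the prompt (standing in for the autoassociative networks), and the best candidates would then go to the exhaustive second stage. This is an illustration of the two-stage search, not Widrow's system:

      import numpy as np

      def retrieve_candidates(prompt, folders, top_k=3):
          """folders: (n_folders, n_features) array, one stored pattern per folder."""
          prompt = (prompt - prompt.mean()) / (prompt.std() + 1e-12)
          f = folders - folders.mean(axis=1, keepdims=True)
          f = f / (f.std(axis=1, keepdims=True) + 1e-12)
          scores = f @ prompt / prompt.size          # correlation of the prompt with each folder
          return np.argsort(scores)[::-1][:top_k]    # candidates for the exhaustive second stage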

  9. Imaging systems level consolidation of novel associate memories: A longitudinal neuroimaging study

    PubMed Central

    Smith, Jason F; Alexander, Gene E; Chen, Kewei; Husain, Fatima T; Kim, Jieun; Pajor, Nathan; Horwitz, Barry

    2010-01-01

    Previously, a standard theory of systems level memory consolidation was developed to describe how memory recall becomes independent of the medial temporal memory system. More recently, an extended consolidation theory was proposed that predicts seven changes in regional neural activity and inter-regional functional connectivity. Using longitudinal event related functional magnetic resonance imaging of an associate memory task, we simultaneously tested all predictions and additionally tested for consolidation related changes in recall of associate memories at a sub-trial temporal resolution, analyzing cue, delay and target periods of each trial separately. Results consistent with the theoretical predictions were observed though two inconsistent results were also obtained. In particular, while recall-related delay period activity decreased with consolidation as predicted, visual cue activity increased for consolidated memories. Though the extended theory of memory consolidation is largely supported by our study, these results suggest the extended theory needs further refinement and the medial temporal memory system has multiple, temporally distinct roles in associate memory recall. Neuroimaging analysis at a sub-trial temporal resolution, as used here, may further clarify the role of the hippocampal complex in memory consolidation. PMID:19948227

  10. Preliminary basic performance analysis of the Cedar multiprocessor memory system

    NASA Technical Reports Server (NTRS)

    Gallivan, K.; Jalby, W.; Turner, S.; Veidenbaum, A.; Wijshoff, H.

    1991-01-01

    Some preliminary basic results on the performance of the Cedar multiprocessor memory system are presented. Empirical results are presented and used to calibrate a memory system simulator which is then used to discuss the scalability of the system.

  11. Sustainable Remediation of Legacy Mine Drainage: A Case Study of the Flight 93 National Memorial.

    PubMed

    Emili, Lisa A; Pizarchik, Joseph; Mahan, Carolyn G

    2016-03-01

    Pollution from mining activities is a global environmental concern, not limited to areas of current resource extraction, but including a broader geographic area of historic (legacy) and abandoned mines. The pollution of surface waters from acid mine drainage is a persistent problem and requires a holistic and sustainable approach to addressing the spatial and temporal complexity of mining-specific problems. In this paper, we focus on the environmental, socio-economic, and legal challenges associated with the concurrent activities to remediate a coal mine site and to develop a national memorial following a catastrophic event. We provide a conceptual construct of a socio-ecological system defined at several spatial, temporal, and organizational scales and a critical synthesis of the technical and social learning processes necessary to achieving sustainable environmental remediation. Our case study is an example of a multi-disciplinary management approach, whereby collaborative interaction of stakeholders, the emergence of functional linkages for information exchange, and mediation led to scientifically informed decision making, creative management solutions, and ultimately environmental policy change.

  12. Storing and managing information artifacts collected by information analysts using a computing device

    DOEpatents

    Pike, William A; Riensche, Roderick M; Best, Daniel M; Roberts, Ian E; Whyatt, Marie V; Hart, Michelle L; Carr, Norman J; Thomas, James J

    2012-09-18

    Systems and computer-implemented processes for storage and management of information artifacts collected by information analysts using a computing device. The processes and systems can capture a sequence of interactive operation elements that are performed by the information analyst, who is collecting an information artifact from at least one of the plurality of software applications. The information artifact can then be stored together with the interactive operation elements as a snippet on a memory device, which is operably connected to the processor. The snippet comprises a view from an analysis application, data contained in the view, and the sequence of interactive operation elements stored as a provenance representation comprising operation element class, timestamp, and data object attributes for each interactive operation element in the sequence.
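
    The stored artifact described in the claim maps naturally onto a small record structure; a sketch of one possible in-memory layout (the field and class names here are ours, not the patent's):

      from dataclasses import dataclass, field
      from typing import Any, List

      @dataclass
      class OperationElement:
          element_class: str            # e.g. "copy", "highlight", "search"
          timestamp: float              # when the analyst performed the operation
          data_object: Any              # the data the operation touched

      @dataclass
      class Snippet:
          view: str                     # view from the analysis application
          data: Any                     # data contained in that view
          provenance: List[OperationElement] = field(default_factory=list)

      # snippet = Snippet(view="timeline", data=rows,
      #                   provenance=[OperationElement("search", 1690000000.0, "query X")])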

  13. Evolutionary Metal Oxide Clusters for Novel Applications: Toward High-Density Data Storage in Nonvolatile Memories.

    PubMed

    Chen, Xiaoli; Zhou, Ye; Roy, Vellaisamy A L; Han, Su-Ting

    2018-01-01

    Because of current fabrication limitations, miniaturizing nonvolatile memory devices for managing the explosive increase in big data is challenging. Molecular memories constitute a promising candidate for next-generation memories because their properties can be readily modulated through chemical synthesis. Moreover, these memories can be fabricated through mild solution processing, which can be easily scaled up. Among the various materials, polyoxometalate (POM) molecules have attracted considerable attention for use as novel data-storage nodes for nonvolatile memories. Here, an overview of recent advances in the development of POMs for nonvolatile memories is presented. The general background knowledge of the structure and property diversity of POMs is also summarized. Finally, the challenges and perspectives in the application of POMs in memories are discussed. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Computational and empirical simulations of selective memory impairments: Converging evidence for a single-system account of memory dissociations.

    PubMed

    Curtis, Evan T; Jamieson, Randall K

    2018-04-01

    Current theory has divided memory into multiple systems, resulting in a fractionated account of human behaviour. By an alternative perspective, memory is a single system. However, debate over the details of different single-system theories has overshadowed the converging agreement among them, slowing the reunification of memory. Evidence in favour of dividing memory often takes the form of dissociations observed in amnesia, where amnesic patients are impaired on some memory tasks but not others. The dissociations are taken as evidence for separate explicit and implicit memory systems. We argue against this perspective. We simulate two key dissociations between classification and recognition in a computational model of memory, A Theory of Nonanalytic Association. We assume that amnesia reflects a quantitative difference in the quality of encoding. We also present empirical evidence that replicates the dissociations in healthy participants, simulating amnesic behaviour by reducing study time. In both analyses, we successfully reproduce the dissociations. We integrate our computational and empirical successes with the success of alternative models and manipulations and argue that our demonstrations, taken in concert with similar demonstrations with similar models, provide converging evidence for a more general set of single-system analyses that support the conclusion that a wide variety of memory phenomena can be explained by a unified and coherent set of principles.
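
    A minimal sketch of the kind of simulation described, using a MINERVA-2-style multiple-trace memory in which an encoding-quality parameter L is lowered to mimic amnesia (or reduced study time); this is a generic instance-model illustration under those assumptions, not the authors' exact model:

      import numpy as np

      rng = np.random.default_rng(1)

      def encode(items, L):
          """Copy each item into memory, keeping each feature with probability L."""
          mask = rng.random(items.shape) < L
          return items * mask

      def echo_intensity(probe, memory):
          """Summed cubed similarity of the probe to every stored trace (familiarity)."""
          sims = (memory @ probe) / probe.size
          return np.sum(sims ** 3)

      items = rng.choice([-1.0, 1.0], size=(50, 40))        # studied items
      foil = rng.choice([-1.0, 1.0], size=40)               # unstudied probe

      for L, label in [(0.8, "healthy"), (0.3, "amnesic")]:
          memory = encode(items, L)
          old = np.mean([echo_intensity(item, memory) for item in items])
          new = echo_intensity(foil, memory)
          # Poorer encoding shrinks the old-new difference, mimicking impaired recognition
          # while the same single store still supports residual classification-like signal.
          print(label, "old-new difference:", round(old - new, 3))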

  15. Supervisory control and diagnostics system for the mirror fusion test facility: overview and status 1980

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGoldrick, P.R.

    1981-01-01

    The Mirror Fusion Test Facility (MFTF) is a complex facility requiring a highly-computerized Supervisory Control and Diagnostics System (SCDS) to monitor and provide control over ten subsystems, three of which require true process control. SCDS will provide physicists with a method of studying machine and plasma behavior by acquiring and processing up to four megabytes of plasma diagnostic information every five minutes. A high degree of availability and throughput is provided by a distributed computer system (nine 32-bit minicomputers on shared memory). Data, distributed across SCDS, is managed by a high-bandwidth Distributed Database Management System. The MFTF operators' control room consoles use color television monitors with touch sensitive screens; this is a totally new approach. The method of handling deviations to normal machine operation, and how the operator should be notified and assisted in the resolution of problems, has been studied and a system designed.

  16. Physicians' perceptions of capacity building for managing chronic disease in seniors using integrated interprofessional care models.

    PubMed

    Lee, Linda; Heckman, George; McKelvie, Robert; Jong, Philip; D'Elia, Teresa; Hillier, Loretta M

    2015-03-01

    To explore the barriers to and facilitators of adapting and expanding a primary care memory clinic model to integrate care of additional complex chronic geriatric conditions (heart failure, falls, chronic obstructive pulmonary disease, and frailty) into care processes with the goal of improving outcomes for seniors. Mixed-methods study using quantitative (questionnaires) and qualitative (interviews) methods. Ontario. Family physicians currently working in primary care memory clinic teams and supporting geriatric specialists. Family physicians currently working in memory clinic teams (n = 29) and supporting geriatric specialists (n = 9) were recruited as survey participants. Interviews were conducted with memory clinic lead physicians (n = 16). Statistical analysis was done to assess differences between family physician ratings and geriatric specialist ratings related to the capacity for managing complex chronic geriatric conditions, the role of interprofessional collaboration within primary care, and funding and staffing to support geriatric care. Results from both study methods were compared to identify common findings. Results indicate overall support for expanding the memory clinic model to integrate care for other complex conditions. However, the current primary care structure is challenged to support optimal management of patients with multiple comorbidities, particularly as related to limited funding and staffing resources. Structured training, interprofessional teams, and an active role of geriatric specialists within primary care were identified as important facilitators. The memory clinic model, as applied to other complex chronic geriatric conditions, has the potential to build capacity for high-quality primary care, improve health outcomes, promote efficient use of health care resources, and reduce healthcare costs.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Hsien-Hsin S

    The overall objective of this research project is to develop novel architectural techniques as well as system software to achieve a highly secure and intrusion-tolerant computing system. Such a system will be autonomous, self-adapting, and introspective, with self-healing capability under the circumstances of improper operations, abnormal workloads, and malicious attacks. The scope of this research includes: (1) System-wide, unified introspection techniques for autonomic systems, (2) Secure information-flow microarchitecture, (3) Memory-centric security architecture, (4) Authentication control and its implications for security, (5) Digital rights management, (6) Microarchitectural denial-of-service attacks on shared resources. During the period of the project, we developed several architectural techniques and system software for achieving a robust, secure, and reliable computing system toward our goal.

  18. Through the Immune Looking Glass: A Model for Brain Memory Strategies

    PubMed Central

    Sánchez-Ramón, Silvia; Faure, Florence

    2016-01-01

    The immune system (IS) and the central nervous system (CNS) are complex cognitive networks involved in defining the identity (self) of the individual through recognition and memory processes that enable one to anticipate responses to stimuli. Brain memory has traditionally been classified as either implicit or explicit on psychological and anatomical grounds, with reminiscences of the evolutionarily-based innate-adaptive IS responses. Beyond the multineuronal networks of the CNS, we propose a theoretical model of brain memory integrating the CNS as a whole. This is achieved by analogical reasoning between the operational rules of recognition and memory processes in both systems, coupled to an evolutionary analysis. In this new model, the hippocampus is no longer specifically ascribed to explicit memory but rather it both becomes part of the innate (implicit) memory system and tightly controls the explicit memory system. Like the antigen-presenting cells of the IS, the hippocampus would integrate transient and pseudo-specific (i.e., danger-fear) memories and would drive the formation of long-term and highly specific or explicit memories (i.e., the taste of the Proust's madeleine cake) by the more complex and, evolutionarily speaking, more recent neocortex. Experimental and clinical evidence is provided to support the model. We believe that the singularity of this model's approximation could help to gain a better understanding of the mechanisms operating in brain memory strategies from a large-scale network perspective. PMID:26869886

  19. Interactions between metabolic, reward and cognitive processes in appetite control: Implications for novel weight management therapies

    PubMed Central

    Higgs, Suzanne; Spetter, Maartje S; Thomas, Jason M; Rotshtein, Pia; Lee, Michelle; Hallschmid, Manfred; Dourish, Colin T

    2017-01-01

    Traditional models of appetite control have emphasised the role of parallel homeostatic and hedonic systems, but more recently the distinction between independent homeostatic and hedonic systems has been abandoned in favour of a framework that emphasises the cross-talk between the neurochemical substrates of the two systems. In addition, evidence has emerged more recently that higher-level cognitive functions such as learning, memory and attention play an important role in everyday appetite control and that homeostatic signals also play a role in cognition. Here, we review this evidence and present a comprehensive model of the control of appetite that integrates cognitive, homeostatic and reward mechanisms. We discuss the implications of this model for understanding the factors that may contribute to disordered patterns of eating and suggest opportunities for developing more effective treatment approaches for eating disorders and weight management. PMID:29072515

  20. Interactions between metabolic, reward and cognitive processes in appetite control: Implications for novel weight management therapies.

    PubMed

    Higgs, Suzanne; Spetter, Maartje S; Thomas, Jason M; Rotshtein, Pia; Lee, Michelle; Hallschmid, Manfred; Dourish, Colin T

    2017-11-01

    Traditional models of appetite control have emphasised the role of parallel homeostatic and hedonic systems, but more recently the distinction between independent homeostatic and hedonic systems has been abandoned in favour of a framework that emphasises the cross-talk between the neurochemical substrates of the two systems. In addition, evidence has emerged more recently that higher-level cognitive functions such as learning, memory and attention play an important role in everyday appetite control and that homeostatic signals also play a role in cognition. Here, we review this evidence and present a comprehensive model of the control of appetite that integrates cognitive, homeostatic and reward mechanisms. We discuss the implications of this model for understanding the factors that may contribute to disordered patterns of eating and suggest opportunities for developing more effective treatment approaches for eating disorders and weight management.

  1. Tactical Operations Analysis Support Facility.

    DTIC Science & Technology

    1981-05-01

    Punch/Reader 2 DMC-11AR DDCMP Micro Processor 2 DMC-11DA Network Link Line Unit 2 DL-11E Async Serial Line Interface 4 Intel IN-1670 448K Words MOS Memory ... 5.3 VIRTUAL PROCESSORS - VAX-11/750 ... 5.4 A RELATIONAL DATA MANAGEMENT SYSTEM - ORACLE ... Central Processing Unit (CPU) is a 16 bit processor for high-speed, real time applications, and for large multi-user, multi-task, time shared

  2. Indications and Warning Analysis Management System IWAMS. A Design Study

    DTIC Science & Technology

    1980-03-01

    First, we must understand the process of warning analysis; we must develop an adequate functional model. In the present research we have divided ... and changeable). (In subsequent discussions, considerable attention will be focused on these issues.) ... WARNING ANALYSIS MODEL ... have charted limitations in man's memory, attention span, reasoning capability and other cognitive functions. These limitations considerably affect man's

  3. It Is Time to Take Memory Training Seriously

    ERIC Educational Resources Information Center

    Buckley, Sue

    2008-01-01

    For more than 25 years people have known that children and adults with Down syndrome have a specific impairment in working memory. Within the working memory system, they have particular difficulty with the verbal short-term memory part of the system. However, memory training may become more popular as recent work with both children with Down…

  4. Studies and applications of NiTi shape memory alloys in the medical field in China.

    PubMed

    Dai, K; Chu, Y

    1996-01-01

    The biomedical study of NiTi shape memory alloys has been undertaken in China since 1978. A series of simulated corrosion tests, histological observations, toxicity tests, carcinogenicity tests, trace nickel element analyses and a number of clinical trials have been conducted. The results showed that NiTi shape memory alloy is a good biomaterial with good biocompatibility; no obvious local tissue reaction, carcinogenesis or erosion of implants was found experimentally or clinically. In 1981, on the basis of fundamental studies, a shape memory staple was used for the first time inside the human body. Subsequently, various shape memory devices were designed and applied clinically for internal fixation of fractures, spine surgery, endoprostheses, and gynaecological and craniofacial surgery. Since 1990, a series of internal stents have been developed for the management of biliary, tracheal and esophageal strictures and urethrostenosis, as well as a vascular obturator for tumour management. Several thousand cases have been treated, with 1-10 years of follow-up, and good clinical results with a rather low complication rate were obtained.

  5. [Selective alteration of the declarative memory systems in patients treated with a high number of electroconvulsive therapy sessions].

    PubMed

    Rami-González, L; Boget-Llucià, T; Bernardo, M; Marcos, T; Cañizares-Alejos, S; Penadés, R; Portella, M J; Castelví, M; Raspall, T; Salamero, M

    The reversible electrochemical effects of electroconvulsive therapy (ECT) on specific areas of the brain enable the neuroanatomical bases of some cognitive functions to be studied. In research carried out on memory systems, a selective alteration of the declarative ones has been observed after treatment with ECT. Little work has been done to explore the differential alteration of the memory subsystems in patients with a high number of ECT sessions. AIM. To study the declarative and non-declarative memory systems in psychiatric patients submitted to maintenance ECT treatment, with a high number of previous ECT sessions. Twenty patients submitted to treatment with ECT (10 diagnosed as having depression and 10 with schizophrenia) were compared with 20 controls, who were paired by age, sex and psychopathological diagnosis. For the evaluation of the declarative memory system, the Wechsler Memory Scale (WMS) logical memory test was used. The Tower of Hanoi procedural test was employed to evaluate the non-declarative system. Patients treated with ECT performed worse on the WMS logical memory test, but this difference was only significant in patients diagnosed as suffering from depression. No significant differences were observed on the Tower of Hanoi test. A selective alteration of the declarative systems was observed in patients who had been treated with a high number of ECT sessions, while the non-declarative memory systems remained unaffected.

  6. Heat switch technology for cryogenic thermal management

    NASA Astrophysics Data System (ADS)

    Shu, Q. S.; Demko, J. A.; Fesmire, J. E.

    2017-12-01

    A systematic review is given of the development of novel heat switches at cryogenic temperatures that alternately provide a high thermal connection or near-ideal thermal isolation to the cold mass. These cryogenic heat switches are widely applied in a variety of unique superconducting systems and critical space applications. The following types of heat switch devices are discussed: 1) magnetic levitation suspension, 2) shape memory alloys, 3) differential thermal expansion, 4) helium or hydrogen gas-gap, 5) superconducting, 6) piezoelectric, 7) cryogenic diode, 8) magneto-resistive, and 9) mechanical demountable connections. Advantages and limitations of the different cryogenic heat switches are examined, along with the outlook for future thermal management solutions in materials and cryogenic designs.

  7. The relation between receptive grammar and procedural, declarative, and working memory in specific language impairment.

    PubMed

    Conti-Ramsden, Gina; Ullman, Michael T; Lum, Jarrad A G

    2015-01-01

    What memory systems underlie grammar in children, and do these differ between typically developing (TD) children and children with specific language impairment (SLI)? Whilst there is substantial evidence linking certain memory deficits to the language problems in children with SLI, few studies have investigated multiple memory systems simultaneously, examining not only possible memory deficits but also memory abilities that may play a compensatory role. This study examined the extent to which procedural, declarative, and working memory abilities predict receptive grammar in 45 primary school aged children with SLI (30 males, 15 females) and 46 TD children (30 males, 16 females), both on average 9;10 years of age. Regression analyses probed measures of all three memory systems simultaneously as potential predictors of receptive grammar. The model was significant, explaining 51.6% of the variance. There was a significant main effect of learning in procedural memory and a significant group × procedural learning interaction. Further investigation of the interaction revealed that procedural learning predicted grammar in TD but not in children with SLI. Indeed, procedural learning was the only predictor of grammar in TD. In contrast, only learning in declarative memory significantly predicted grammar in SLI. Thus, different memory systems are associated with receptive grammar abilities in children with SLI and their TD peers. This study is, to our knowledge, the first to demonstrate a significant group by memory system interaction in predicting grammar in children with SLI and their TD peers. In line with Ullman's Declarative/Procedural model of language and procedural deficit hypothesis of SLI, variability in understanding sentences of varying grammatical complexity appears to be associated with variability in procedural memory abilities in TD children, but with declarative memory, as an apparent compensatory mechanism, in children with SLI.

  8. POLARIS: Agent-based modeling framework development and implementation for integrated travel demand and network and operations simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Auld, Joshua; Hope, Michael; Ley, Hubert

    This paper discusses the development of an agent-based modelling software development kit, and the implementation and validation of a model using it that integrates dynamic simulation of travel demand, network supply and network operations. A description is given of the core utilities in the kit (a parallel discrete event engine, interprocess exchange engine, and memory allocator) as well as a number of ancillary utilities (a visualization library, database IO library, and scenario manager). The overall framework emphasizes the design goals of generality, code agility, and high performance. This framework allows several aspects of the transportation system that are typically handled by separate stand-alone software applications to be modeled in a high-performance and extensible manner. Integrating models such as dynamic traffic assignment and disaggregate demand models has been a long-standing issue for transportation modelers; the integrated approach shows a possible way to resolve this difficulty. The simulation model built from the POLARIS framework is a single, shared-memory process for handling all aspects of the integrated urban simulation. The resulting gains in computational efficiency and performance allow planning models to be extended to include previously separate aspects of the urban system, enhancing the utility of such models from the planning perspective. Initial tests with case studies involving traffic management center impacts on network events such as accidents, congestion and weather events show the potential of the system.
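
    The abstract names a parallel discrete-event engine as one of the kit's core utilities. As an illustration only, a minimal single-process sketch of such an engine might look like the following; the class and method names are hypothetical and are not taken from POLARIS.

        import heapq

        class EventEngine:
            """Minimal discrete-event scheduler: agents post timestamped events
            and the engine dispatches them in time order."""
            def __init__(self):
                self._queue = []   # heap of (time, seq, callback, payload)
                self._seq = 0      # tie-breaker for simultaneous events
                self.now = 0.0

            def schedule(self, delay, callback, payload=None):
                heapq.heappush(self._queue, (self.now + delay, self._seq, callback, payload))
                self._seq += 1

            def run(self, until):
                while self._queue and self._queue[0][0] <= until:
                    self.now, _, callback, payload = heapq.heappop(self._queue)
                    callback(self.now, payload)

        # Hypothetical usage: a traveler agent that departs and later arrives.
        engine = EventEngine()
        def arrive(t, who):
            print(f"{who} arrives at t={t:.1f}")
        def depart(t, who):
            print(f"{who} departs at t={t:.1f}")
            engine.schedule(15.0, arrive, who)
        engine.schedule(5.0, depart, "traveler-1")
        engine.run(until=60.0)

    A production framework would distribute such an engine across processes and add the interprocess exchange and memory-allocation layers the abstract mentions; the sketch shows only the event-ordering core.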

  9. The neural basis of implicit learning and memory: a review of neuropsychological and neuroimaging research.

    PubMed

    Reber, Paul J

    2013-08-01

    Memory systems research has typically described the different types of long-term memory in the brain as either declarative versus non-declarative or implicit versus explicit. These descriptions reflect the difference between declarative, conscious, and explicit memory that is dependent on the medial temporal lobe (MTL) memory system, and all other expressions of learning and memory. The other type of memory is generally defined by an absence: either the lack of dependence on the MTL memory system (nondeclarative) or the lack of conscious awareness of the information acquired (implicit). However, definition by absence is inherently underspecified and leaves open questions of how this type of memory operates, its neural basis, and how it differs from explicit, declarative memory. Drawing on a variety of studies of implicit learning that have attempted to identify the neural correlates of implicit learning using functional neuroimaging and neuropsychology, a theory of implicit memory is presented that describes it as a form of general plasticity within processing networks that adaptively improve function via experience. Under this model, implicit memory will not appear as a single, coherent, alternative memory system but will instead be manifested as a principle of improvement from experience based on widespread mechanisms of cortical plasticity. The implications of this characterization for understanding the role of implicit learning in complex cognitive processes and the effects of interactions between types of memory will be discussed for examples within and outside the psychology laboratory. Copyright © 2013 Elsevier Ltd. All rights reserved.

  10. Altered Intrinsic Hippocampus Declarative Memory Network and Its Association with Impulsivity in Abstinent Heroin Dependent Subjects

    PubMed Central

    Zhai, Tian-Ye; Shao, Yong-Cong; Xie, Chun-Ming; Ye, En-Mao; Zou, Feng; Fu, Li-Ping; Li, Wen-Jun; Chen, Gang; Chen, Guang-Yu; Zhang, Zheng-Guo; Li, Shi-Jiang; Yang, Zheng

    2014-01-01

    Converging evidence suggests that addiction can be considered a disease of aberrant learning and memory with impulsive decision-making. In the past decades, numerous studies have demonstrated that drug addiction involves multiple memory systems such as classical conditioned drug memory, instrumental learning memory and habitual learning memory. However, most of these studies have focused on the contributions of non-declarative memory, and declarative memory has largely been neglected in addiction research. Based on a recent finding that the hippocampus, a core region of declarative memory, biases the decision-making process based on past experiences by spreading associated reward values throughout memory, our present study focused on the hippocampus. By utilizing seed-based network analysis of resting-state functional MRI datasets with the hippocampus as seed, we tested how the intrinsic hippocampal memory network is altered in drug addiction, and examined how the functional connectivity strength within the altered hippocampal network correlated with the behavioral index ‘impulsivity’. Our results demonstrated that, compared to the CN group, the HD group showed enhanced coherence between the hippocampus, which represents the declarative memory system, and the non-declarative reward-guided learning memory system, and also showed an attenuated intrinsic functional link between the hippocampus and the top-down control system. This alteration was further found to have behavioral significance for the behavioral index ‘impulsivity’ measured with the Barratt Impulsiveness Scale (BIS). These results provide insights into the mechanism of declarative memory underlying impulsive behavior in drug addiction. PMID:25008351

  11. Systems consolidation revisited, but not revised: The promise and limits of optogenetics in the study of memory.

    PubMed

    Hardt, Oliver; Nadel, Lynn

    2017-12-05

    Episodic memories (in humans) and event-like memories (in non-human animals) require the hippocampus for some time after acquisition, but at remote points seem to depend more on cortical areas instead. Systems consolidation refers to the process that promotes this reorganization of memory. Various theoretical frameworks accounting for this process have been proposed, but clear evidence favoring one or another of these positions has been lacking. Addressing this issue, a recent study deployed some of the most advanced neurobiological technologies - optogenetics and calcium imaging - and provided high resolution, precise observations regarding brain systems involved in recent and remote contextual fear memories. We critically review these findings within their historical context and conclude that they do not resolve the debate concerning systems consolidation. This is because the relevant question concerning the quality of memory at recent and remote time points has not been answered: Does the memory reorganization taking place during systems consolidation result in changes to the content of memory? Copyright © 2017 Elsevier B.V. All rights reserved.

  12. The dynamic nature of systems consolidation: Stress during learning as a switch guiding the rate of the hippocampal dependency and memory quality.

    PubMed

    Pedraza, Lizeth K; Sierra, Rodrigo O; Boos, Flávia Z; Haubrich, Josué; Quillfeldt, Jorge A; Alvares, Lucas de Oliveira

    2016-03-01

    Memory fades over time, becoming more schematic or abstract. The loss of contextual detail in memory may reflect a time-dependent change in the brain structures supporting memory. It has been well established that contextual fear memory relies on the hippocampus for expression shortly after learning, but it becomes hippocampus-independent at a later time point, a process called systems consolidation. This time-dependent process correlates with the loss of memory precision. Here, we investigated whether training intensity predicts the gradual decay of hippocampal dependency for memory retrieval, and the quality of the contextual memory representation over time. We found that training intensity modulates the progressive decay of hippocampal dependency and memory precision. Strong training accelerates systems consolidation and memory generalization with a remarkably close match in time course. The mechanisms underpinning such a process are triggered by glucocorticoids and noradrenaline released during training. These results suggest that the stress level during emotional learning acts as a switch determining the fate of memory quality: moderate stress will create a detailed memory, whereas highly stressful training will develop a generic, gist-like memory. © 2015 Wiley Periodicals, Inc.

  13. Importance of balanced architectures in the design of high-performance imaging systems

    NASA Astrophysics Data System (ADS)

    Sgro, Joseph A.; Stanton, Paul C.

    1999-03-01

    Imaging systems employed in demanding military and industrial applications, such as automatic target recognition and computer vision, typically require real-time high-performance computing resources. While high-performance computing systems have traditionally relied on proprietary architectures and custom components, recent advances in general-purpose microprocessor technology have produced an abundance of low-cost components suitable for use in high-performance computing systems. A common pitfall in the design of high-performance imaging systems, particularly systems employing scalable multiprocessor architectures, is the failure to balance computational and memory bandwidth. The performance of standard cluster designs, for example, in which several processors share a common memory bus, is typically constrained by memory bandwidth. The characteristic symptom of this problem is the failure of system performance to scale as more processors are added. The problem is exacerbated if I/O and memory functions share the same bus. The recent introduction of microprocessors with large internal caches and high-performance external memory interfaces makes it practical to design high-performance imaging systems with balanced computational and memory bandwidth. Real-world examples of such designs will be presented, along with a discussion of adapting algorithm design to best utilize the available memory bandwidth.
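
    To make the balance argument concrete, a back-of-the-envelope check compares aggregate processor demand against shared-bus bandwidth; the figures below are illustrative assumptions, not numbers from the paper.

        # Rough compute/memory balance check with assumed example values.
        bus_bandwidth_gbs = 1.6        # shared memory bus, GB/s
        per_proc_demand_gbs = 0.4      # data each processor must stream, GB/s
        cache_hit_fraction = 0.5       # traffic absorbed by on-chip cache

        effective_demand = per_proc_demand_gbs * (1 - cache_hit_fraction)
        max_scalable_procs = bus_bandwidth_gbs / effective_demand
        print(f"Shared bus saturates at roughly {max_scalable_procs:.0f} processors")
        # Beyond this point adding processors no longer improves throughput,
        # which is the scaling failure the abstract describes.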

  14. Multilevel radiative thermal memory realized by the hysteretic metal-insulator transition of vanadium dioxide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ito, Kota, E-mail: kotaito@mosk.tytlabs.co.jp; Nishikawa, Kazutaka; Iizuka, Hideo

    Thermal information processing is attracting much interest as an analog of electronic computing. We experimentally demonstrated a radiative thermal memory utilizing a phase change material. The hysteretic metal-insulator transition of vanadium dioxide (VO2) allows us to obtain a multilevel memory. We developed a Preisach model to explain the hysteretic radiative heat transfer between a VO2 film and a fused quartz substrate. The transient response of our memory predicted by the Preisach model agrees well with the measured response. Our multilevel thermal memory paves the way for thermal information processing as well as contactless thermal management.
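
    The Preisach model mentioned in the abstract represents hysteresis as a weighted sum of two-state relay operators. A minimal scalar sketch is shown below; the thresholds and weights are arbitrary illustrations, not the parameters fitted for VO2 in the paper.

        import numpy as np

        class PreisachModel:
            """Scalar Preisach operator: a weighted sum of relay hysterons."""
            def __init__(self, alpha, beta, weights):
                # Hysteron i switches up when the input exceeds alpha[i]
                # and down when the input falls below beta[i].
                self.alpha, self.beta, self.w = map(np.asarray, (alpha, beta, weights))
                self.state = -np.ones_like(self.w)   # all hysterons start "down"

            def apply(self, u):
                self.state = np.where(u > self.alpha, 1.0, self.state)
                self.state = np.where(u < self.beta, -1.0, self.state)
                return float(np.dot(self.w, self.state))

        # Sweep an input (e.g., temperature) up and then down; the two branches
        # differ, tracing a hysteresis loop with intermediate (multilevel) states.
        model = PreisachModel(alpha=[66, 68, 70], beta=[60, 62, 64], weights=[0.3, 0.4, 0.3])
        for T in list(range(55, 76, 5)) + list(range(75, 54, -5)):
            print(T, model.apply(T))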

  15. Incorporating institutions and collective action into a sociohydrological model of flood resilience

    NASA Astrophysics Data System (ADS)

    Yu, David J.; Sangwan, Nikhil; Sung, Kyungmin; Chen, Xi; Merwade, Venkatesh

    2017-02-01

    Stylized sociohydrological models have mainly used social memory aspects such as community awareness or sensitivity to connect hydrologic change and social response. However, social memory alone does not satisfactorily capture the details of how human behavior is translated into collective action for water resources governance. Nor is it the only social mechanism by which the two-way feedbacks of sociohydrology can be operationalized. This study contributes toward bridging this gap by developing a sociohydrological model of flood resilience that includes two additional components: (1) institutions for collective action, and (2) connections to an external economic system. Motivated by the case of community-managed flood protection systems (polders) in coastal Bangladesh, we use the model to understand critical general features that affect long-term resilience of human-flood systems. Our findings suggest that occasional adversity can enhance long-term resilience. Allowing some hydrological variability to enter the polder can increase its adaptive capacity for resilience through the preservation of a social norm for collective action. Further, there are potential trade-offs associated with optimization of flood resistance through structural measures. By reducing sensitivity to floods, the system may become more fragile under the double impact of floods and economic change.

  16. Explicit pre-training instruction does not improve implicit perceptual-motor sequence learning

    PubMed Central

    Sanchez, Daniel J.; Reber, Paul J.

    2012-01-01

    Memory systems theory argues for separate neural systems supporting implicit and explicit memory in the human brain. Neuropsychological studies support this dissociation, but empirical studies of cognitively healthy participants generally observe that both kinds of memory are acquired to at least some extent, even in implicit learning tasks. A key question is whether this observation reflects parallel intact memory systems or an integrated representation of memory in healthy participants. Learning of complex tasks in which both explicit instruction and practice is used depends on both kinds of memory, and how these systems interact will be an important component of the learning process. Theories that posit an integrated, or single, memory system for both types of memory predict that explicit instruction should contribute directly to strengthening task knowledge. In contrast, if the two types of memory are independent and acquired in parallel, explicit knowledge should have no direct impact and may serve in a “scaffolding” role in complex learning. Using an implicit perceptual-motor sequence learning task, the effect of explicit pre-training instruction on skill learning and performance was assessed. Explicit pre-training instruction led to robust explicit knowledge, but sequence learning did not benefit from the contribution of pre-training sequence memorization. The lack of an instruction benefit suggests that during skill learning, implicit and explicit memory operate independently. While healthy participants will generally accrue parallel implicit and explicit knowledge in complex tasks, these types of information appear to be separately represented in the human brain consistent with multiple memory systems theory. PMID:23280147

  17. Implications of the Declarative/Procedural Model for Improving Second Language Learning: The Role of Memory Enhancement Techniques

    ERIC Educational Resources Information Center

    Ullman, Michael T.; Lovelett, Jarrett T.

    2018-01-01

    The declarative/procedural (DP) model posits that the learning, storage, and use of language critically depend on two learning and memory systems in the brain: declarative memory and procedural memory. Thus, on the basis of independent research on the memory systems, the model can generate specific and often novel predictions for language. Till…

  18. Postoperative pain impairs subsequent performance on a spatial memory task via effects on N-methyl-D-aspartate receptor in aged rats.

    PubMed

    Chi, Haidong; Kawano, Takashi; Tamura, Takahiko; Iwata, Hideki; Takahashi, Yasuhiro; Eguchi, Satoru; Yamazaki, Fumimoto; Kumagai, Naoko; Yokoyama, Masataka

    2013-12-18

    Pain may be associated with postoperative cognitive dysfunction (POCD); however, this relationship remains insufficiently investigated. Therefore, we examined the impact of postoperative pain on cognitive function in aged animals. Rats were allocated to the following groups: control (C), 1.2% isoflurane for 2 hours alone (I), I with laparotomy (IL), IL with analgesia using local ropivacaine (IL+R), and IL with analgesia using systemic morphine (IL+M). Pain was assessed by the rat grimace scale (RGS). Spatial memory was evaluated using a radial maze from postoperative days (POD) 3 to 14. NMDA receptor (NR) 2 subunits in the hippocampus were measured by ELISA. Finally, the effects of memantine, a low-affinity uncompetitive N-methyl-D-aspartate (NMDA) receptor antagonist, on postoperative cognitive performance were tested. Postoperative RGS was increased in Group IL, but not in the other groups. The number of memory errors in Group I was comparable to that in Group C, whereas errors in Group IL were increased. Importantly, in Groups IL+R and IL+M, cognitive impairment was not found. Memory errors were positively correlated with the levels of NMDA receptor 2 subunits in the hippocampus. Prophylactic treatment with memantine prevented the development of the memory deficits observed in Group IL without an analgesic effect. Postoperative pain contributes to the development of memory deficits after anesthesia and surgery via up-regulation of hippocampal NMDA receptors. Our findings suggest that postoperative pain management may be important for the prevention of POCD in elderly patients. Copyright © 2013 Elsevier Inc. All rights reserved.

  19. Optical read/write memory system components

    NASA Technical Reports Server (NTRS)

    Kozma, A.

    1972-01-01

    The optical components of a breadboard holographic read/write memory system have been fabricated, and the parameters of the major system components have been specified: (1) a laser system; (2) an x-y beam deflector; (3) a block data composer; (4) the read/write memory material; (5) an output detector array; and (6) the electronics to drive, synchronize, and control all system components. The objectives of the investigation were divided into three concurrent phases: (1) to supply and fabricate the major components according to the previously established specifications; (2) to prepare computer programs to simulate the entire holographic memory system so that a designer can balance the requirements on the various components; and (3) to conduct a development program to optimize the combined recording and reconstruction process of the high-density holographic memory system.

  20. Generation-based memory synchronization in a multiprocessor system with weakly consistent memory accesses

    DOEpatents

    Ohmacht, Martin

    2017-08-15

    In a multiprocessor system, a central memory synchronization module coordinates memory synchronization requests responsive to memory access requests in flight, a generation counter, and a reclaim pointer. The central module communicates via point-to-point communication. The module includes a global OR reduce tree for each memory access requesting device, for detecting memory access requests in flight. An interface unit is implemented associated with each processor requesting synchronization. The interface unit includes multiple generation completion detectors. The generation count and reclaim pointer do not pass one another.
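
    As a rough software analogy only (the patent describes a hardware mechanism built from OR-reduce trees, generation completion detectors, and a reclaim pointer), generation-based tracking of in-flight accesses can be sketched as follows; all names here are hypothetical.

        import threading
        from collections import defaultdict

        class GenerationTracker:
            """Software analogy of generation-based memory synchronization:
            each access is tagged with the generation current when it started,
            and a sync request completes once every access belonging to that
            generation or an earlier one has drained."""
            def __init__(self):
                self.lock = threading.Lock()
                self.done = threading.Condition(self.lock)
                self.current = 0
                self.in_flight = defaultdict(int)   # generation -> outstanding accesses

            def begin_access(self):
                with self.lock:
                    self.in_flight[self.current] += 1
                    return self.current

            def end_access(self, gen):
                with self.lock:
                    self.in_flight[gen] -= 1
                    if self.in_flight[gen] == 0:
                        del self.in_flight[gen]
                        self.done.notify_all()

            def msync(self):
                with self.lock:
                    target = self.current
                    self.current += 1        # new accesses fall into the next generation
                    while any(g <= target and n > 0 for g, n in self.in_flight.items()):
                        self.done.wait()

    In the hardware scheme described by the abstract, per-device OR-reduce trees detect accesses in flight, and the constraint that the generation count and reclaim pointer never pass one another bounds how many generations must be tracked at once.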

  1. Generation-based memory synchronization in a multiprocessor system with weakly consistent memory accesses

    DOEpatents

    Ohmacht, Martin

    2014-09-09

    In a multiprocessor system, a central memory synchronization module coordinates memory synchronization requests responsive to memory access requests in flight, a generation counter, and a reclaim pointer. The central module communicates via point-to-point communication. The module includes a global OR reduce tree for each memory access requesting device, for detecting memory access requests in flight. An interface unit is implemented associated with each processor requesting synchronization. The interface unit includes multiple generation completion detectors. The generation count and reclaim pointer do not pass one another.

  2. Multiplexer/Demultiplexer Loading Tool (MDMLT)

    NASA Technical Reports Server (NTRS)

    Brewer, Lenox Allen; Hale, Elizabeth; Martella, Robert; Gyorfi, Ryan

    2012-01-01

    The purpose of the MDMLT is to improve the reliability and speed of loading multiplexers/demultiplexers (MDMs) in the Software Development and Integration Laboratory (SDIL) by automating the configuration management (CM) of the loads in the MDMs, automating the loading procedure, and providing the capability to load multiple or all MDMs concurrently. Loading may be accomplished in parallel or for single MDMs (remotely). The MDMLT is a Web-based tool that is capable of loading the entire International Space Station (ISS) MDM configuration in parallel. It is able to load Flight Equivalent Units (FEUs), enhanced, standard, and prototype MDMs, as well as both EEPROM (Electrically Erasable Programmable Read-Only Memory) and SSMMU (Solid State Mass Memory Unit) mass memory. The software has extensive configuration management to track loading history, and the performance improvement means the entire ISS MDM configuration of 49 MDMs can be loaded in approximately 30 minutes, as opposed to the 36 hours previously required using the flight method of S-band uplink. The laptop version recently added to the MDMLT suite allows remote lab loading, with the CM information entered into a common database when the laptop is reconnected to the network. This allows the program to reconfigure the test rigs quickly between shifts, allowing the lab to support a variety of onboard configurations during a single day, based on upcoming or current missions. The MDMLT Computer Software Configuration Item (CSCI) supports a Web-based command and control interface to the user. An interface to the SDIL File Transfer Protocol (FTP) server is supported to import Integrated Flight Loads (IFLs) and Internal Product Release Notes (IPRNs) into the database. An interface to the Monitor and Control System (MCS) is supported to control the power state and to enable or disable the debug port of the MDMs to be loaded. Two direct interfaces to the MDM are supported: a serial interface (debug port) to receive MDM memory dump data and the calculated checksum, and the Small Computer System Interface (SCSI) to transfer load files to MDMs with hard disks. File transfer from the MDM Loading Tool to EEPROM within the MDM is performed via the MIL-STD-1553 bus, making use of the Real-Time Input/Output Processors (RTIOPs) when using the rig-based MDMLT, and via a bus box when using the laptop MDMLT. The bus box is a cost-effective alternative to PC-1553 cards for the laptop. It is noted that this system can be modified and adapted to any avionics laboratory for spacecraft computer loading, ship avionics, or aircraft avionics where multiple configurations and strong configuration management of software/firmware loads are required.
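
    As an illustration of the concurrent load-and-verify workflow described above, the sketch below loads several units in parallel and records a configuration-management history; the transfer and checksum-readback functions are simulated stand-ins, and all names are hypothetical rather than the MDMLT's actual interfaces.

        import concurrent.futures, hashlib, json, time

        def transfer_image(unit, data):
            """Stand-in for the real MIL-STD-1553/SCSI transfer (simulated)."""
            time.sleep(0.01)

        def read_checksum(unit, data):
            """Stand-in for reading the unit's computed checksum over its debug port."""
            return hashlib.md5(data).hexdigest()

        def load_one(unit, image_path):
            data = open(image_path, "rb").read()
            expected = hashlib.md5(data).hexdigest()     # assumed checksum scheme
            transfer_image(unit, data)
            ok = (read_checksum(unit, data) == expected)
            return {"unit": unit, "ok": ok, "checksum": expected, "time": time.time()}

        def load_all(plan, history_path="load_history.json"):
            # plan maps unit names to load-image paths; all units load concurrently.
            with concurrent.futures.ThreadPoolExecutor() as pool:
                results = list(pool.map(lambda kv: load_one(*kv), plan.items()))
            with open(history_path, "a") as log:          # CM record of every load
                log.write(json.dumps(results) + "\n")
            return all(r["ok"] for r in results)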

  3. A cognitive task analysis of information management strategies in a computerized provider order entry environment.

    PubMed

    Weir, Charlene R; Nebeker, Jonathan J R; Hicken, Bret L; Campo, Rebecca; Drews, Frank; Lebar, Beth

    2007-01-01

    Computerized Provider Order Entry (CPOE) with electronic documentation, and computerized decision support dramatically changes the information environment of the practicing clinician. Prior work patterns based on paper, verbal exchange, and manual methods are replaced with automated, computerized, and potentially less flexible systems. The objective of this study is to explore the information management strategies that clinicians use in the process of adapting to a CPOE system using cognitive task analysis techniques. Observation and semi-structured interviews were conducted with 88 primary-care clinicians at 10 Veterans Administration Medical Centers. Interviews were taped, transcribed, and extensively analyzed to identify key information management goals, strategies, and tasks. Tasks were aggregated into groups, common components across tasks were clarified, and underlying goals and strategies identified. Nearly half of the identified tasks were not fully supported by the available technology. Six core components of tasks were identified. Four meta-cognitive information management goals emerged: 1) Relevance Screening; 2) Ensuring Accuracy; 3) Minimizing memory load; and 4) Negotiating Responsibility. Strategies used to support these goals are presented. Users develop a wide array of information management strategies that allow them to successfully adapt to new technology. Supporting the ability of users to develop adaptive strategies to support meta-cognitive goals is a key component of a successful system.

  4. Groundwater-fed irrigation impacts spatially distributed temporal scaling behavior of the natural system: a spatio-temporal framework for understanding water management impacts

    NASA Astrophysics Data System (ADS)

    Condon, Laura E.; Maxwell, Reed M.

    2014-03-01

    Regional scale water management analysis increasingly relies on integrated modeling tools. Much recent work has focused on groundwater-surface water interactions and feedbacks. However, to our knowledge, no study has explicitly considered the impacts of management operations on the temporal dynamics of the natural system. Here, we simulate twenty years of hourly, moisture-dependent, groundwater-fed irrigation using a three-dimensional, fully integrated hydrologic model (ParFlow-CLM). Results highlight interconnections between irrigation demand, groundwater oscillation frequency and latent heat flux variability not previously demonstrated. Additionally, the three-dimensional model used allows for novel consideration of spatial patterns in temporal dynamics. Latent heat flux and water table depth both display spatial organization in temporal scaling, an important finding given the spatial homogeneity and weak scaling observed in atmospheric forcings. Pumping and irrigation amplify high-frequency (sub-annual) variability while attenuating low-frequency (inter-annual) variability. Irrigation also intensifies scaling within irrigated areas, essentially increasing temporal memory in both the surface and the subsurface. These findings demonstrate management impacts that extend beyond traditional water balance considerations to the fundamental behavior of the system itself. This is an important step toward better understanding groundwater's role as a buffer for natural variability and the impact that water management has on this capacity.

  5. Opportunities for nonvolatile memory systems in extreme-scale high-performance computing

    DOE PAGES

    Vetter, Jeffrey S.; Mittal, Sparsh

    2015-01-12

    For extreme-scale high-performance computing systems, system-wide power consumption has been identified as one of the key constraints moving forward, with DRAM main memory systems accounting for about 30 to 50 percent of a node's overall power consumption. As the benefits of device scaling for DRAM memory slow, it will become increasingly difficult to keep memory capacities balanced with the increasing computational rates offered by next-generation processors. However, several emerging memory technologies related to nonvolatile memory (NVM) devices are being investigated as an alternative to DRAM. Moving forward, NVM devices could offer solutions for HPC architectures. Researchers are investigating how to integrate these emerging technologies into future extreme-scale HPC systems and how to expose these capabilities in the software stack and applications. In addition, current results show several of these strategies could offer high-bandwidth I/O, larger main memory capacities, persistent data structures, and new approaches for application resilience and output postprocessing, such as transaction-based incremental checkpointing and in situ visualization, respectively.
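
    One of the uses named above is transaction-based incremental checkpointing on persistent memory. A minimal sketch of the idea follows, with an ordinary directory standing in for an NVM-backed store; the chunking and hashing choices are assumptions made for illustration.

        import hashlib, os, pickle

        CHUNK = 1 << 20   # 1 MiB chunks

        def checkpoint(state_bytes, nvm_dir, manifest_path):
            os.makedirs(nvm_dir, exist_ok=True)
            try:
                old = pickle.load(open(manifest_path, "rb"))
            except FileNotFoundError:
                old = {}
            new = {}
            for i in range(0, len(state_bytes), CHUNK):
                chunk = state_bytes[i:i + CHUNK]
                digest = hashlib.sha256(chunk).hexdigest()
                new[i] = digest
                if old.get(i) != digest:                  # write only modified chunks
                    with open(os.path.join(nvm_dir, digest), "wb") as f:
                        f.write(chunk)
            # Publishing the manifest last acts as the "commit" of the transaction:
            # a crash before this point leaves the previous checkpoint intact.
            with open(manifest_path, "wb") as f:
                pickle.dump(new, f)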

  6. Systems Reconsolidation Reveals a Selective Role for the Anterior Cingulate Cortex in Generalized Contextual Fear Memory Expression

    PubMed Central

    Einarsson, Einar Ö; Pors, Jennifer; Nader, Karim

    2015-01-01

    After acquisition, hippocampus-dependent memories undergo a systems consolidation process, during which they become independent of the hippocampus and dependent on the anterior cingulate cortex (ACC) for memory expression. However, consolidated remote memories can become transiently hippocampus-dependent again following memory reactivation. How this systems reconsolidation affects the role of the ACC in remote memory expression is not known. Using contextual fear conditioning, we show that the expression of 30-day-old remote memory can transiently be supported by either the ACC or the dorsal hippocampus following memory reactivation, and that the ACC specifically mediates expression of remote generalized contextual fear memory. We found that suppression of neural activity in the ACC with the AMPA/kainate receptor antagonist 6-cyano-7-nitroquinoxaline-2,3-dione (CNQX) impaired the expression of remote, but not recent, contextual fear memory. Fear expression was not affected by this treatment if preceded by memory reactivation 6 h earlier, nor was it affected by suppression of neural activity in the dorsal hippocampus with the GABA-receptor agonist muscimol. However, simultaneous targeting of both the ACC and the dorsal hippocampus 6 h after memory reactivation disrupted contextual fear memory expression. Second, we observed that expression of a 30-day-old generalized contextual fear memory in a novel context was not affected by memory reactivation 6 h earlier. However, intra-ACC CNQX infusion before testing impaired contextual fear expression in the novel context, but not the original training context. Together, these data suggest that although the dorsal hippocampus may be recruited during systems reconsolidation, the ACC remains necessary for the expression of generalized contextual fear memory. PMID:25091528

  7. Systems reconsolidation reveals a selective role for the anterior cingulate cortex in generalized contextual fear memory expression.

    PubMed

    Einarsson, Einar Ö; Pors, Jennifer; Nader, Karim

    2015-01-01

    After acquisition, hippocampus-dependent memories undergo a systems consolidation process, during which they become independent of the hippocampus and dependent on the anterior cingulate cortex (ACC) for memory expression. However, consolidated remote memories can become transiently hippocampus-dependent again following memory reactivation. How this systems reconsolidation affects the role of the ACC in remote memory expression is not known. Using contextual fear conditioning, we show that the expression of 30-day-old remote memory can transiently be supported by either the ACC or the dorsal hippocampus following memory reactivation, and that the ACC specifically mediates expression of remote generalized contextual fear memory. We found that suppression of neural activity in the ACC with the AMPA/kainate receptor antagonist 6-cyano-7-nitroquinoxaline-2,3-dione (CNQX) impaired the expression of remote, but not recent, contextual fear memory. Fear expression was not affected by this treatment if preceded by memory reactivation 6 h earlier, nor was it affected by suppression of neural activity in the dorsal hippocampus with the GABA-receptor agonist muscimol. However, simultaneous targeting of both the ACC and the dorsal hippocampus 6 h after memory reactivation disrupted contextual fear memory expression. Second, we observed that expression of a 30-day-old generalized contextual fear memory in a novel context was not affected by memory reactivation 6 h earlier. However, intra-ACC CNQX infusion before testing impaired contextual fear expression in the novel context, but not the original training context. Together, these data suggest that although the dorsal hippocampus may be recruited during systems reconsolidation, the ACC remains necessary for the expression of generalized contextual fear memory.

  8. Documenting the Intangible and the Use of "collective Memory" as a Tool for Risk Mitigation

    NASA Astrophysics Data System (ADS)

    Ekim, Z.; Güney, E. E.; Vatan, M.

    2017-08-01

    The increasing immigration activity of recent years, driven by globalized economies, political conflicts, wars and disasters, has had a serious impact not only on the tangible heritage fabric, but also on the intangible values of heritage sites. With the challenge of managing such drastic change in mind, this paper proposes a documentation strategy that utilizes "collective memory" as a tool for risk mitigation at culturally diverse sites. The intangible and tangible values of two case studies, from Turkey and Canada, are studied comparatively to create a methodology for the use of data collected on "collective memory and identity" in risk mitigation and in managing change as a living value of the site.

  9. The ILLIAC IV memory system: Current status and future possibilities

    NASA Technical Reports Server (NTRS)

    Stevenson, D. K.

    1978-01-01

    The future needs of researchers who will use the Illiac were examined and the requirements they will place on the memory system were evaluated. Various alternatives to replacing critical memory components were considered with regard to cost, risk, system impact, software requirements, and implementation schedules. The current system, its performance and status, and the limitations it places on possible enhancements are discussed as well as the planned enhancements to the Illiac processor. After a brief technology survey, different implementations are presented for each system memory component. Three different memory systems are proposed to meet the identified needs of the Illiac user community. These three alternatives differ considerably with respect to storage capacity and accessing capabilities, but they all offer significant improvements over the current system. The proposed systems and their relative merits are analyzed.

  10. Designing a VMEbus FDDI adapter card

    NASA Astrophysics Data System (ADS)

    Venkataraman, Raman

    1992-03-01

    This paper presents a system architecture for a VMEbus FDDI adapter card containing a node core, FDDI block, frame buffer memory and system interface unit. Most of the functions of the PHY and MAC layers of FDDI are implemented with National's FDDI chip set, and the SMT implementation is simplified with a low-cost microcontroller. The factors that influence system bus bandwidth utilization and FDDI bandwidth utilization are the data path and the frame buffer memory architecture. The VRAM-based frame buffer memory has two sections: LLC frame memory and SMT frame memory. Each section, with an independent serial access memory (SAM) port, provides independent access after the initial data transfer cycle on the main port, and hence throughput is maximized on each port of the memory. The SAM port simplifies the system bus master DMA design, and the VMEbus interface can be designed with low-cost off-the-shelf interface chips.

  11. Bubble memory module for spacecraft application

    NASA Technical Reports Server (NTRS)

    Hayes, P. J.; Looney, K. T.; Nichols, C. D.

    1985-01-01

    Bubble domain technology offers an all-solid-state alternative for data storage in onboard data systems. A versatile modular bubble memory concept was developed. The key module is the bubble memory module which contains all of the storage devices and circuitry for accessing these devices. This report documents the bubble memory module design and preliminary hardware designs aimed at memory module functional demonstration with available commercial bubble devices. The system architecture provides simultaneous operation of bubble devices to attain high data rates. Banks of bubble devices are accessed by a given bubble controller to minimize controller parts. A power strobing technique is discussed which could minimize the average system power dissipation. A fast initialization method using EEPROM (electrically erasable, programmable read-only memory) devices promotes fast access. Noise and crosstalk problems and implementations to minimize these are discussed. Flight memory systems which incorporate the concepts and techniques of this work could now be developed for applications.

  12. The focus of attention is similar to other memory systems rather than uniquely different

    PubMed Central

    Beaudry, Olivia; Neath, Ian; Surprenant, Aimée M.; Tehan, Gerald

    2014-01-01

    According to some current theories, the focus of attention (FOA), part of working memory, represents items in a privileged state that is more accessible than items stored in other memory systems. One line of evidence supporting the distinction between the FOA and other memory systems is the finding that items in the FOA are immune to proactive interference (when something learned earlier impairs the ability to remember something learned more recently). The FOA, then, is held to be unique: it is the only memory system that is not susceptible to proactive interference. We review the literature used to support this claim, and although there are many studies in which proactive interference was not observed, we found more studies in which it was observed. We conclude that the FOA is not immune to proactive interference: items in the FOA are susceptible to proactive interference just like items in every other memory system. And, just as in all other memory systems, it is how the items are represented and processed that plays a critical role in determining whether proactive interference will be observed. PMID:24574996

  13. Technical support for digital systems technology development. Task order 1: ISP contention analysis and control

    NASA Technical Reports Server (NTRS)

    Stehle, Roy H.; Ogier, Richard G.

    1993-01-01

    Alternatives for realizing a packet-based network switch for use on a frequency division multiple access/time division multiplexed (FDMA/TDM) geostationary communication satellite were investigated. Each of the eight downlink beams supports eight directed dwells. The design needed to accommodate multicast packets with very low probability of loss due to contention. Three switch architectures were designed and analyzed. An output-queued, shared bus system yielded a functionally simple system, utilizing a first-in, first-out (FIFO) memory per downlink dwell, but at the expense of a large total memory requirement. A shared memory architecture offered the most efficiency in memory requirements, requiring about half the memory of the shared bus design. The processing requirement for the shared-memory system adds system complexity that may offset the benefits of the smaller memory. An alternative design using a shared memory buffer per downlink beam decreases circuit complexity through a distributed design, and requires at most 1000 packets of memory more than the completely shared memory design. Modifications to the basic packet switch designs were proposed to accommodate circuit-switched traffic, which must be served on a periodic basis with minimal delay. Methods for dynamically controlling the downlink dwell lengths were developed and analyzed. These methods adapt quickly to changing traffic demands, and do not add significant complexity or cost to the satellite and ground station designs. Methods for reducing the memory requirement by not requiring the satellite to store full packets were also proposed and analyzed. In addition, optimal packet and dwell lengths were computed as functions of memory size for the three switch architectures.
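
    A toy simulation can illustrate the memory trade-off between the dedicated-FIFO and shared-memory designs compared above. The sizes and traffic parameters below are arbitrary assumptions, not figures from the study.

        import random
        random.seed(1)

        DWELLS, SLOTS = 8, 50000
        PER_FIFO = 20                  # packets per dedicated FIFO
        SHARED = DWELLS * PER_FIFO     # same total memory, but pooled

        fifo = [0] * DWELLS            # dedicated-FIFO occupancies
        pool = [0] * DWELLS            # logical per-dwell queues backed by the shared pool
        loss_fifo = loss_pool = 0

        for _ in range(SLOTS):
            for d in range(DWELLS):
                n = 4 if random.random() < 0.22 else 0   # bursty arrivals, mean < 1/slot
                take = min(n, PER_FIFO - fifo[d])
                fifo[d] += take
                loss_fifo += n - take
                take = min(n, SHARED - sum(pool))
                pool[d] += take
                loss_pool += n - take
            for d in range(DWELLS):                      # each downlink sends one packet per slot
                fifo[d] = max(0, fifo[d] - 1)
                pool[d] = max(0, pool[d] - 1)

        print("dropped (dedicated FIFOs):", loss_fifo)
        print("dropped (shared pool)    :", loss_pool)
        # With equal total memory, the pooled design typically drops fewer packets
        # because idle dwells lend buffer space to busy ones (statistical multiplexing),
        # at the cost of the extra buffer-management logic noted in the abstract.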

  14. Context controls access to working and reference memory in the pigeon (Columba livia).

    PubMed

    Roberts, William A; Macpherson, Krista; Strang, Caroline

    2016-01-01

    The interaction between working and reference memory systems was examined under conditions in which salient contextual cues were presented during memory retrieval. Ambient colored lights (red or green) bathed the operant chamber during the presentation of comparison stimuli in delayed matching-to-sample training (working memory) and during the presentation of the comparison stimuli as S+ and S- cues in discrimination training (reference memory). Strong competition between memory systems appeared when the same contextual cue appeared during working and reference memory training. When different contextual cues were used, however, working memory was completely protected from reference memory interference. © 2016 Society for the Experimental Analysis of Behavior.

  15. A Layered Solution for Supercomputing Storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grider, Gary

    To solve the supercomputing challenge of memory keeping up with processing speed, a team at Los Alamos National Laboratory developed two innovative memory management and storage technologies. Burst buffers peel off data onto flash memory to support the checkpoint/restart paradigm of large simulations. MarFS adds a thin software layer enabling a new tier for campaign storage—based on inexpensive, failure-prone disk drives—between disk drives and tape archives.

  16. Optical memories in digital computing

    NASA Technical Reports Server (NTRS)

    Alford, C. O.; Gaylord, T. K.

    1979-01-01

    High-capacity optical memories with relatively high data-transfer rates and multiport simultaneous access capability may serve as the basis for new computer architectures. Several computer structures that might profitably use such memories are: a) a simultaneous record-access system, b) a simultaneously-shared memory computer system, and c) a parallel digital processing structure.

  17. Divergent short- and long-term effects of acute stress in object recognition memory are mediated by endogenous opioid system activation.

    PubMed

    Nava-Mesa, Mauricio O; Lamprea, Marisol R; Múnera, Alejandro

    2013-11-01

    Acute stress induces short-term object recognition memory impairment and elicits endogenous opioid system activation. The aim of this study was thus to evaluate whether opiate system activation mediates the acute stress-induced object recognition memory changes. Adult male Wistar rats were trained in an object recognition task designed to test both short- and long-term memory. Subjects were randomly assigned to receive an intraperitoneal injection of saline, 1 mg/kg naltrexone or 3 mg/kg naltrexone, four and a half hours before the sample trial. Five minutes after the injection, half the subjects were submitted to movement restraint during four hours while the other half remained in their home cages. Non-stressed subjects receiving saline (control) performed adequately during the short-term memory test, while stressed subjects receiving saline displayed impaired performance. Naltrexone prevented such deleterious effect, in spite of the fact that it had no intrinsic effect on short-term object recognition memory. Stressed subjects receiving saline and non-stressed subjects receiving naltrexone performed adequately during the long-term memory test; however, control subjects as well as stressed subjects receiving a high dose of naltrexone performed poorly. Control subjects' dissociated performance during both memory tests suggests that the short-term memory test induced a retroactive interference effect mediated through light opioid system activation; such effect was prevented either by low dose naltrexone administration or by strongly activating the opioid system through acute stress. Both short-term memory retrieval impairment and long-term memory improvement observed in stressed subjects may have been mediated through strong opioid system activation, since they were prevented by high dose naltrexone administration. Therefore, the activation of the opioid system plays a dual modulating role in object recognition memory. Copyright © 2013 Elsevier Inc. All rights reserved.

  18. Impact of Recent Hardware and Software Trends on High Performance Transaction Processing and Analytics

    NASA Astrophysics Data System (ADS)

    Mohan, C.

    In this paper, I survey briefly some of the recent and emerging trends in hardware and software features which impact high performance transaction processing and data analytics applications. These features include multicore processor chips, ultra large main memories, flash storage, storage class memories, database appliances, field programmable gate arrays, transactional memory, key-value stores, and cloud computing. While some applications, e.g., Web 2.0 ones, were initially built without traditional transaction processing functionality in mind, slowly system architects and designers are beginning to address such previously ignored issues. The availability, analytics and response time requirements of these applications were initially given more importance than ACID transaction semantics and resource consumption characteristics. A project at IBM Almaden is studying the implications of phase change memory on transaction processing, in the context of a key-value store. Bitemporal data management has also become an important requirement, especially for financial applications. Power consumption and heat dissipation properties are also major considerations in the emergence of modern software and hardware architectural features. Considerations relating to ease of configuration, installation, maintenance and monitoring, and improvement of total cost of ownership have resulted in database appliances becoming very popular. The MapReduce paradigm is now quite popular for large scale data analysis, in spite of the major inefficiencies associated with it.

  19. Stability of discrete memory states to stochastic fluctuations in neuronal systems

    PubMed Central

    Miller, Paul; Wang, Xiao-Jing

    2014-01-01

    Noise can degrade memories by causing transitions from one memory state to another. For any biological memory system to be useful, the time scale of such noise-induced transitions must be much longer than the required duration of memory retention. Using biophysically realistic modeling, we consider two types of memory in the brain: short-term memories maintained by reverberating neuronal activity for a few seconds, and long-term memories maintained by a molecular switch for years. Both systems require persistence of (neuronal or molecular) activity self-sustained by an autocatalytic process, and we argue that both have limited memory lifetimes because of significant fluctuations. We first discuss a strongly recurrent cortical network model endowed with feedback loops, for short-term memory. Fluctuations are due to highly irregular spike firing, a salient characteristic of cortical neurons. We then analyze a model for long-term memory based on an autophosphorylation mechanism of calcium/calmodulin-dependent protein kinase II (CaMKII) molecules. There, fluctuations arise from the fact that there are only a small number of CaMKII molecules at each postsynaptic density (the putative synaptic memory unit). Our results are twofold. First, we demonstrate analytically and computationally the exponential dependence of stability on the number of neurons in a self-excitatory network, and on the number of CaMKII proteins in a molecular switch. Second, for each of the two systems, we implement graded memory consisting of a group of bistable switches. For the neuronal network we report interesting ramping temporal dynamics as a result of sequentially switching an increasing number of discrete, bistable units. The general observation of an exponential increase in memory stability with system size leads to a trade-off between the robustness of memories (which increases with the size of each bistable unit) and the total amount of information storage (which decreases with increasing unit size), which may be optimized in the brain through biological evolution. PMID:16822041
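
    The exponential size dependence reported above can be summarized schematically (the prefactor and constant depend on model details and are not specified in the abstract):

        \tau_{\text{memory}} \;\sim\; \tau_{0}\, e^{c N}

    where N is the number of neurons in the self-excitatory network (or of CaMKII holoenzymes in the molecular switch), \tau_{0} is a fluctuation-dependent attempt time, and c > 0 is a model-dependent constant. The robustness-capacity trade-off follows because, for a fixed total resource budget, the number of independent bistable units scales like 1/N while the lifetime of each unit grows exponentially in N.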

  20. A microkernel design for component-based parallel numerical software systems.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balay, S.

    1999-01-13

    What is the minimal software infrastructure, and what type of conventions are needed, to simplify development of sophisticated parallel numerical application codes using a variety of software components that are not necessarily available as source code? We propose an opaque object-based model where the objects are dynamically loadable from the file system or network. The microkernel required to manage such a system needs to include, at most: (1) a few basic services, namely a mechanism for loading objects at run time via dynamic link libraries, and consistent schemes for error handling and memory management; and (2) selected methods that all objects share, to deal with object life (destruction, reference counting, relationships) and object observation (viewing, profiling, tracing). We are experimenting with these ideas in the context of extensible numerical software within the ALICE (Advanced Large-scale Integrated Computational Environment) project, where we are building the microkernel to manage interoperability among various tools for large-scale scientific simulations. This paper presents some preliminary observations and conclusions from our work with microkernel design.
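
    As a rough illustration of such an opaque, reference-counted object model with run-time loadable implementations, consider the sketch below; the class names and registration convention are hypothetical and are not the ALICE interfaces.

        import importlib

        class KernelError(RuntimeError):
            """Uniform error-handling convention shared by all objects."""

        class OpaqueObject:
            _registry = {}                      # type name -> implementation class

            def __init__(self):
                self._refcount = 1

            def reference(self):                # object-life method: share ownership
                self._refcount += 1
                return self

            def destroy(self):                  # object-life method: release ownership
                self._refcount -= 1
                if self._refcount == 0:
                    self._free()

            def _free(self):                    # subclasses release their resources here
                pass

            def view(self, stream=print):       # object-observation method
                stream(f"{type(self).__name__} (refcount={self._refcount})")

            @classmethod
            def register(cls, name, impl):
                cls._registry[name] = impl

            @classmethod
            def create(cls, name):
                if name not in cls._registry:
                    # Load the implementation at run time, e.g. from a plug-in module
                    # exposing a class named Implementation (an assumed convention).
                    module = importlib.import_module(name)
                    cls.register(name, module.Implementation)
                return cls._registry[name]()

        # Hypothetical usage with an in-process registration instead of a plug-in:
        class DenseVector(OpaqueObject):
            pass
        OpaqueObject.register("vector.dense", DenseVector)
        v = OpaqueObject.create("vector.dense").reference()
        v.view()
        v.destroy(); v.destroy()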

  1. Shape memory polymer medical device

    DOEpatents

    Maitland, Duncan [Pleasant Hill, CA; Benett, William J [Livermore, CA; Bearinger, Jane P [Livermore, CA; Wilson, Thomas S [San Leandro, CA; Small, IV, Ward; Schumann, Daniel L [Concord, CA; Jensen, Wayne A [Livermore, CA; Ortega, Jason M [Pacifica, CA; Marion, III, John E.; Loge, Jeffrey M [Stockton, CA

    2010-06-29

    A system for removing matter from a conduit. The system includes the steps of passing a transport vehicle and a shape memory polymer material through the conduit, transmitting energy to the shape memory polymer material for moving the shape memory polymer material from a first shape to a second and different shape, and withdrawing the transport vehicle and the shape memory polymer material through the conduit carrying the matter.

  2. Altered intrinsic hippocampus declarative memory network and its association with impulsivity in abstinent heroin dependent subjects.

    PubMed

    Zhai, Tian-Ye; Shao, Yong-Cong; Xie, Chun-Ming; Ye, En-Mao; Zou, Feng; Fu, Li-Ping; Li, Wen-Jun; Chen, Gang; Chen, Guang-Yu; Zhang, Zheng-Guo; Li, Shi-Jiang; Yang, Zheng

    2014-10-01

    Converging evidence suggests that addiction can be considered a disease of aberrant learning and memory with impulsive decision-making. In the past decades, numerous studies have demonstrated that drug addiction involves multiple memory systems, such as classical conditioned drug memory, instrumental learning memory, and habitual learning memory. However, most of these studies have focused on the contributions of non-declarative memory, and declarative memory has largely been neglected in addiction research. Based on a recent finding that the hippocampus, a core region of declarative memory, biases the decision-making process toward past experiences by spreading associated reward values throughout memory, our present study focused on the hippocampus. Using seed-based network analysis of resting-state functional MRI data with the hippocampus as seed, we tested how the intrinsic hippocampal memory network is altered in drug addiction and examined how the functional connectivity strength within the altered hippocampal network correlated with the behavioral index 'impulsivity'. Our results demonstrated that, compared to the control (CN) group, the heroin-dependent (HD) group showed enhanced coherence between the hippocampus, representing the declarative memory system, and the non-declarative reward-guided learning memory system, as well as an attenuated intrinsic functional link between the hippocampus and the top-down control system. This alteration was further found to have behavioral significance for impulsivity as measured with the Barratt Impulsiveness Scale (BIS). These results provide insights into the mechanism of declarative memory underlying impulsive behavior in drug addiction. Copyright © 2014 Elsevier B.V. All rights reserved.

  3. Subthalamic stimulation differentially modulates declarative and nondeclarative memory.

    PubMed

    Hälbig, Thomas D; Gruber, Doreen; Kopp, Ute A; Scherer, Peter; Schneider, Gerd-Helge; Trottenberg, Thomas; Arnold, Guy; Kupsch, Andreas

    2004-03-01

    Declarative memory has been reported to rely on the medial temporal lobe system, whereas non-declarative memory depends on basal ganglia structures. We investigated the functional role of the subthalamic nucleus (STN), a structure closely connected with the basal ganglia, in both types of memory. Via deep brain high-frequency stimulation (DBS), we manipulated neural activity of the STN in humans. We found that STN-DBS differentially modulated memory performance: declarative memory was impaired, whereas non-declarative memory was improved in the presence of STN-DBS, indicating a specific role of the STN in the activation of memory systems. Copyright 2004 Lippincott Williams & Wilkins

  4. Static power reduction for midpoint-terminated busses

    DOEpatents

    Coteus, Paul W [Yorktown Heights, NY; Takken, Todd [Brewster, NY

    2011-01-18

    A memory system is disclosed which is comprised of a memory controller and addressable memory devices such as DRAMs. The invention provides a programmable register to control the high vs. low drive state of each bit of a memory system address and control bus during periods of bus inactivity. In this way, termination voltage supply current can be minimized, while permitting selected bus bits to be driven to a required state. This minimizes termination power dissipation while not affecting memory system performance. The technique can be extended to work for other high-speed busses as well.
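
    A minimal sketch of how controller firmware might use the kind of programmable idle-state register the patent describes; the register name, bus width, and polarity convention below are hypothetical and not taken from the patent.

        #include <stdint.h>

        #define ADDR_CTRL_BITS 20u                      /* assumed address/control bus width */

        /* One bit per bus line: 1 = drive high during bus inactivity, 0 = drive low. */
        static volatile uint32_t idle_drive_state_reg;

        /* Pick idle levels so that each line rests at the level that draws the least
         * current from the midpoint termination supply, while still allowing selected
         * bits to be held at a state the memory devices require. */
        void program_idle_states(uint32_t preferred_levels)
        {
            idle_drive_state_reg = preferred_levels & ((1u << ADDR_CTRL_BITS) - 1u);
        }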

  5. Changing disturbance regimes, ecological memory, and forest resilience

    USGS Publications Warehouse

    Johnstone, Jill F.; Allen, Craig D.; Franklin, Jerry F.; Frelich, Lee E.; Harvey, Brian J.; Higuera, Philip E.; Mack, Michelle C.; Meentemeyer, Ross K.; Metz, Margaret R.; Perry, George LW; Schoennagel, Tania; Turner, Monica G.

    2016-01-01

    Ecological memory is central to how ecosystems respond to disturbance and is maintained by two types of legacies – information and material. Species life-history traits represent an adaptive response to disturbance and are an information legacy; in contrast, the abiotic and biotic structures (such as seeds or nutrients) produced by single disturbance events are material legacies. Disturbance characteristics that support or maintain these legacies enhance ecological resilience and maintain a “safe operating space” for ecosystem recovery. However, legacies can be lost or diminished as disturbance regimes and environmental conditions change, generating a “resilience debt” that manifests only after the system is disturbed. Strong effects of ecological memory on post-disturbance dynamics imply that contingencies (effects that cannot be predicted with certainty) of individual disturbances, interactions among disturbances, and climate variability combine to affect ecosystem resilience. We illustrate these concepts and introduce a novel ecosystem resilience framework with examples of forest disturbances, primarily from North America. Identifying legacies that support resilience in a particular ecosystem can help scientists and resource managers anticipate when disturbances may trigger abrupt shifts in forest ecosystems, and when forests are likely to be resilient.

  6. Galileo spacecraft power management and distribution system

    NASA Technical Reports Server (NTRS)

    Detwiler, R. C.; Smith, R. L.

    1990-01-01

    The Galileo PMAD (power management and distribution system) is described, and the design drivers that established the final as-built hardware are discussed. The spacecraft is powered by two general-purpose heat-source-radioisotope thermoelectric generators. Power bus regulation is provided by a shunt regulator. Galileo PMAD distributes a 570-W beginning of mission (BOM) power source to a user complement of some 137 load elements. Extensive use of pyrotechnics requires two pyro switching subassemblies. They initiate 148 squibs which operate the 47 pyro devices on the spacecraft. Detection and correction of faults in the Galileo PMAD is an autonomous feature dictated by requirements for long life and reliability in the absence of ground-based support. Volatile computer memories in the spacecraft command and data system and attitude control system require a continuous source of backup power during all anticipated power bus fault scenarios. Power for the Jupiter Probe is conditioned, isolated, and controlled by a Probe interface subassembly. Flight performance of the spacecraft and the PMAD has been successful to date, with no major anomalies.

  7. System reliability, performance and trust in adaptable automation.

    PubMed

    Chavaillaz, Alain; Wastell, David; Sauer, Jürgen

    2016-01-01

    The present study examined the effects of reduced system reliability on operator performance and automation management in an adaptable automation environment. Thirty-nine operators were randomly assigned to one of three experimental groups: low (60%), medium (80%), and high (100%) reliability of automation support. The support system provided five incremental levels of automation, which operators could freely select according to their needs. After 3 h of training on a simulated process control task (AutoCAMS) in which the automation worked infallibly, operator performance and automation management were measured during a 2.5-h testing session. Trust and workload were also assessed through questionnaires. Results showed that although reduced system reliability resulted in lower levels of trust towards automation, there were no corresponding differences in the operators' reliance on automation. While operators showed overall a noteworthy ability to cope with automation failure, there were decrements in diagnostic speed and prospective memory at lower reliability levels. Copyright © 2015. Published by Elsevier Ltd.

  8. Physicians’ perceptions of capacity building for managing chronic disease in seniors using integrated interprofessional care models

    PubMed Central

    Lee, Linda; Heckman, George; McKelvie, Robert; Jong, Philip; D’Elia, Teresa; Hillier, Loretta M.

    2015-01-01

    Abstract Objective To explore the barriers to and facilitators of adapting and expanding a primary care memory clinic model to integrate care of additional complex chronic geriatric conditions (heart failure, falls, chronic obstructive pulmonary disease, and frailty) into care processes with the goal of improving outcomes for seniors. Design Mixed-methods study using quantitative (questionnaires) and qualitative (interviews) methods. Setting Ontario. Participants Family physicians currently working in primary care memory clinic teams and supporting geriatric specialists. Methods Family physicians currently working in memory clinic teams (n = 29) and supporting geriatric specialists (n = 9) were recruited as survey participants. Interviews were conducted with memory clinic lead physicians (n = 16). Statistical analysis was done to assess differences between family physician ratings and geriatric specialist ratings related to the capacity for managing complex chronic geriatric conditions, the role of interprofessional collaboration within primary care, and funding and staffing to support geriatric care. Results from both study methods were compared to identify common findings. Main findings Results indicate overall support for expanding the memory clinic model to integrate care for other complex conditions. However, the current primary care structure is challenged to support optimal management of patients with multiple comorbidities, particularly as related to limited funding and staffing resources. Structured training, interprofessional teams, and an active role of geriatric specialists within primary care were identified as important facilitators. Conclusion The memory clinic model, as applied to other complex chronic geriatric conditions, has the potential to build capacity for high-quality primary care, improve health outcomes, promote efficient use of health care resources, and reduce health care costs. PMID:25932482

  9. Informatics in dental education: a horizon of opportunity.

    PubMed

    Abbey, L M

    1989-11-01

    Computers have presented society with the largest array of opportunities since the printing press. More specifically, in dental education they represent the path to freedom from the memory-based curriculum. Computers allow us to be constantly in touch with the entire scope of knowledge necessary for decision making in every aspect of the process of preparing young men and women to practice dentistry. No longer is it necessary to spend the energy or time previously used to memorize facts, test for retention of facts, or be concerned with remembering facts when dealing with our patients. Modern information management systems can assume that task, allowing dentists to concentrate on understanding, skill, judgement and wisdom while helping patients deal with their problems within a health care system that is simultaneously baffling in its complexity and overflowing with options. This paper presents a summary of the choices facing dental educators as computers continue to afford us the freedom to look differently at teaching, research and practice. The discussion elaborates on some of the ways dental educators must think differently about the educational process in order to utilize fully the power of computers in curriculum development and tracking, integration of basic and clinical teaching, problem solving, patient management, record keeping and research. Some alternative strategies are discussed that may facilitate the transition from the memory-based to the computer-based curriculum and practice.

  10. Memorial Hermann: high reliability from board to bedside.

    PubMed

    Shabot, M Michael; Monroe, Douglas; Inurria, Juan; Garbade, Debbi; France, Anne-Claire

    2013-06-01

    In 2006 the Memorial Hermann Health System (MHHS), which includes 12 hospitals, began applying principles embraced by high reliability organizations (HROs). Three factors support its HRO journey: (1) aligned organizational structure with transparent management systems and compressed reporting processes; (2) Robust Process Improvement (RPI) with high-reliability interventions; and (3) cultural establishment, sustainment, and evolution. The Quality and Safety strategic plan contains three domains, each with a specific set of measures that provide goals for performance: (1) "Clinical Excellence;" (2) "Do No Harm;" and (3) "Saving Lives," as measured by the Serious Safety Event rate. MHHS uses a uniform approach to performance improvement--RPI, which includes Six Sigma, Lean, and change management, to solve difficult safety and quality problems. The 9 acute care hospitals provide multiple opportunities to integrate high-reliability interventions and best practices across MHHS. For example, MHHS partnered with the Joint Commission Center for Transforming Healthcare in its inaugural project to establish reliable hand hygiene behaviors, which improved MHHS's average hand hygiene compliance rate from 44% to its current 92%. Soon after compliance exceeded 85% at all 12 hospitals, the average rates of central line-associated bloodstream infections and ventilator-associated pneumonias decreased to essentially zero. MHHS's size and diversity require a disciplined approach to performance improvement and systemwide achievement of measurable success. The most significant cultural change at MHHS has been the expectation for 100% compliance with evidence-based quality measures and 0% incidence of patient harm.

  11. Information management in DNA replication modeled by directional, stochastic chains with memory

    NASA Astrophysics Data System (ADS)

    Arias-Gonzalez, J. Ricardo

    2016-11-01

    Stochastic chains represent a key variety of phenomena in many branches of science within the context of information theory and thermodynamics. They are typically approached by a sequence of independent events or by a memoryless Markov process. Stochastic chains are of special significance to molecular biology, where genes are conveyed by linear polymers made up of molecular subunits and transferred from DNA to proteins by specialized molecular motors in the presence of errors. Here, we demonstrate that when memory is introduced, the statistics of the chain depend on the mechanism by which objects or symbols are assembled, even in the slow dynamics limit wherein friction can be neglected. To analyze these systems, we introduce a sequence-dependent partition function, investigate its properties, and compare it to the standard normalization defined by the statistical physics of ensembles. We then apply this theory to characterize the enzyme-mediated information transfer involved in DNA replication under real, non-equilibrium conditions, reproducing measured error rates and explaining the typical 100-fold increase in fidelity that is experimentally found when proofreading and editing take place. Our model further predicts that approximately 1 kT has to be consumed to elevate fidelity by one order of magnitude. We anticipate that our results are necessary to interpret configurational order and information management in many molecular systems within biophysics, materials science, communication, and engineering.
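
    As a generic illustration of the contrast drawn above (a sketch of the idea, not the authors' exact definition), a chain assembled subunit by subunit with memory can be normalized by a sequence-dependent sum of conditional weights,

        Z_{\mathrm{seq}} \;=\; \sum_{x_1,\dots,x_N} \; \prod_{i=1}^{N} w\!\left(x_i \mid x_{i-1},\dots,x_1\right),

    whereas the standard ensemble normalization Z = \sum_{\{x\}} e^{-\beta E(\{x\})} weighs only final configurations, independent of the order in which they were assembled.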

  12. Short-term memory to long-term memory transition in a nanoscale memristor.

    PubMed

    Chang, Ting; Jo, Sung-Hyun; Lu, Wei

    2011-09-27

    "Memory" is an essential building block in learning and decision-making in biological systems. Unlike modern semiconductor memory devices, needless to say, human memory is by no means eternal. Yet, forgetfulness is not always a disadvantage since it releases memory storage for more important or more frequently accessed pieces of information and is thought to be necessary for individuals to adapt to new environments. Eventually, only memories that are of significance are transformed from short-term memory into long-term memory through repeated stimulation. In this study, we show experimentally that the retention loss in a nanoscale memristor device bears striking resemblance to memory loss in biological systems. By stimulating the memristor with repeated voltage pulses, we observe an effect analogous to memory transition in biological systems with much improved retention time accompanied by additional structural changes in the memristor. We verify that not only the shape or the total number of stimuli is influential, but also the time interval between stimulation pulses (i.e., the stimulation rate) plays a crucial role in determining the effectiveness of the transition. The memory enhancement and transition of the memristor device was explained from the microscopic picture of impurity redistribution and can be qualitatively described by the same equations governing biological memories. © 2011 American Chemical Society

  13. Cholinergic manipulations bidirectionally regulate object memory destabilization

    PubMed Central

    Stiver, Mikaela L.; Jacklin, Derek L.; Mitchnick, Krista A.; Vicic, Nevena; Carlin, Justine; O'Hara, Matthew

    2015-01-01

    Consolidated memories can become destabilized and open to modification upon retrieval. Destabilization is most reliably prompted when novel information is present during memory reactivation. We hypothesized that the neurotransmitter acetylcholine (ACh) plays an important role in novelty-induced memory destabilization because of its established involvement in new learning. Accordingly, we investigated the effects of cholinergic manipulations in rats using an object recognition paradigm that requires reactivation novelty to destabilize object memories. The muscarinic receptor antagonist scopolamine, systemically or infused directly into the perirhinal cortex, blocked this novelty-induced memory destabilization. Conversely, systemic oxotremorine or carbachol, muscarinic receptor agonists, administered systemically or intraperirhinally, respectively, mimicked the destabilizing effect of novel information during reactivation. These bidirectional effects suggest a crucial influence of ACh on memory destabilization and the updating functions of reconsolidation. This is a hitherto unappreciated mnemonic role for ACh with implications for its potential involvement in cognitive flexibility and the dynamic process of long-term memory storage. PMID:25776038

  14. Conditional load and store in a shared memory

    DOEpatents

    Blumrich, Matthias A; Ohmacht, Martin

    2015-02-03

    A method, system and computer program product for implementing load-reserve and store-conditional instructions in a multi-processor computing system. The computing system includes a multitude of processor units and a shared memory cache, and each of the processor units has access to the memory cache. In one embodiment, the method comprises providing the memory cache with a series of reservation registers, and storing in these registers addresses reserved in the memory cache for the processor units as a result of issuing load-reserve requests. In this embodiment, when one of the processor units makes a request to store data in the memory cache using a store-conditional request, the reservation registers are checked to determine if an address in the memory cache is reserved for that processor unit. If an address in the memory cache is reserved for that processor, the data are stored at this address.
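
    The toy model below restates the reservation-register idea in software terms: the shared cache records one reserved address per processor after a load-reserve, and a store-conditional succeeds only if that reservation still covers the target address. The sizes, names, and invalidation rule shown are illustrative assumptions, not the patented implementation.

        #include <stdbool.h>
        #include <stdint.h>

        #define NUM_PROCESSORS 4

        typedef struct {
            uintptr_t reserved_addr[NUM_PROCESSORS];
            bool      valid[NUM_PROCESSORS];
            uint64_t  data[1024];             /* toy backing store; addr indexes this array */
        } SharedCache;

        /* load-reserve: read the value and record a reservation for this processor */
        uint64_t load_reserve(SharedCache *c, int cpu, uintptr_t addr)
        {
            c->reserved_addr[cpu] = addr;
            c->valid[cpu] = true;
            return c->data[addr];
        }

        /* store-conditional: succeed only if this processor's reservation still holds */
        bool store_conditional(SharedCache *c, int cpu, uintptr_t addr, uint64_t value)
        {
            if (!c->valid[cpu] || c->reserved_addr[cpu] != addr)
                return false;
            c->data[addr] = value;
            /* a completed store clears any other processor's reservation on that address */
            for (int i = 0; i < NUM_PROCESSORS; i++)
                if (i != cpu && c->valid[i] && c->reserved_addr[i] == addr)
                    c->valid[i] = false;
            c->valid[cpu] = false;
            return true;
        }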

  15. System for loading executable code into volatile memory in a downhole tool

    DOEpatents

    Hall, David R.; Bartholomew, David B.; Johnson, Monte L.

    2007-09-25

    A system for loading an executable code into volatile memory in a downhole tool string component comprises a surface control unit comprising executable code. An integrated downhole network comprises data transmission elements in communication with the surface control unit and the volatile memory. The executable code, stored in the surface control unit, is not permanently stored in the downhole tool string component. In a preferred embodiment of the present invention, the downhole tool string component comprises boot memory. In another embodiment, the executable code is an operating system executable code. Preferably, the volatile memory comprises random access memory (RAM). A method for loading executable code to volatile memory in a downhole tool string component comprises sending the code from the surface control unit to a processor in the downhole tool string component over the network. A central processing unit writes the executable code in the volatile memory.
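
    A sketch of the boot-time flow the abstract describes: the component's boot code receives the executable image from the surface control unit over the downhole network into RAM, then jumps to it. The network primitive (net_recv), the load address, and the size limit are hypothetical placeholders.

        #include <stddef.h>
        #include <stdint.h>

        extern size_t net_recv(void *buf, size_t max_len);   /* assumed network primitive */

        #define RAM_LOAD_ADDR ((uint8_t *)0x20000000)         /* illustrative RAM address */

        typedef void (*entry_point_fn)(void);

        void boot_load_and_run(void)
        {
            /* Receive the executable image from the surface control unit into volatile memory. */
            size_t image_len = net_recv(RAM_LOAD_ADDR, 256 * 1024);
            if (image_len == 0)
                return;                        /* stay in boot code if nothing arrived */

            /* Jump to the entry point of the freshly loaded code (loader-style cast). */
            entry_point_fn entry = (entry_point_fn)RAM_LOAD_ADDR;
            entry();
        }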

  16. Memory function and supportive technology

    PubMed Central

    Charness, Neil; Best, Ryan; Souders, Dustin

    2013-01-01

    Episodic and working memory processes show pronounced age-related decline, with other memory processes such as semantic, procedural, and metamemory less affected. Older adults tend to complain the most about prospective and retrospective memory failures. We introduce a framework for deciding how to mitigate memory decline using augmentation and substitution and discuss techniques that change the user, through mnemonics training, and change the tool or environment, by providing environmental support. We provide examples of low-tech and high-tech memory supports and discuss constraints on the utility of high-tech systems including effectiveness of devices, attitudes toward memory aids, and reliability of systems. PMID:24379752

  17. A Single-System Model Predicts Recognition Memory and Repetition Priming in Amnesia

    PubMed Central

    Kessels, Roy P.C.; Wester, Arie J.; Shanks, David R.

    2014-01-01

    We challenge the claim that there are distinct neural systems for explicit and implicit memory by demonstrating that a formal single-system model predicts the pattern of recognition memory (explicit) and repetition priming (implicit) in amnesia. In the current investigation, human participants with amnesia categorized pictures of objects at study and then, at test, identified fragmented versions of studied (old) and nonstudied (new) objects (providing a measure of priming), and made a recognition memory judgment (old vs new) for each object. Numerous results in the amnesic patients were predicted in advance by the single-system model, as follows: (1) deficits in recognition memory and priming were evident relative to a control group; (2) items judged as old were identified at greater levels of fragmentation than items judged new, regardless of whether the items were actually old or new; and (3) the magnitude of the priming effect (the identification advantage for old vs new items) overall was greater than that of items judged new. Model evidence measures also favored the single-system model over two formal multiple-systems models. The findings support the single-system model, which explains the pattern of recognition and priming in amnesia primarily as a reduction in the strength of a single dimension of memory strength, rather than a selective explicit memory system deficit. PMID:25122896

  18. Design and feasibility of a memory intervention with focus on self-management for cognitive impairment in epilepsy.

    PubMed

    Caller, Tracie A; Secore, Karen L; Ferguson, Robert J; Roth, Robert M; Alexandre, Faith P; Henegan, Patricia L; Harrington, Jessica J; Jobst, Barbara C

    2015-03-01

    The aim of this study was to assess the feasibility of a self-management intervention targeting cognitive dysfunction to improve quality of life and reduce memory-related disability in adults with epilepsy. The intervention incorporates (1) education on cognitive function in epilepsy, (2) self-awareness training, (3) compensatory strategies, and (4) application of these strategies in day-to-day life using problem-solving therapy. In addition to the behavioral modification, formal working memory training was conducted by utilizing a commercially available program in a subgroup of patients. Our findings suggest that a self-management intervention targeting cognitive dysfunction was feasible for delivery to a rural population with epilepsy, with 13 of 16 enrolled participants completing the 8-session program. Qualitative data indicate high satisfaction and subjective improvement in cognitive functioning in day-to-day life. These findings provide support for further evaluation of the efficacy of this intervention through a randomized controlled trial. Copyright © 2015 Elsevier Inc. All rights reserved.

  19. Method and structure for an improved data reformatting procedure

    DOEpatents

    Chatterjee, Siddhartha [Yorktown Heights, NY; Gunnels, John A [Brewster, NY; Gustavson, Fred Gehrung [Briarcliff Manor, NY

    2009-06-30

    A method (and structure) of managing memory in which a low-level mechanism is executed to signal, in a sequence of instructions generated at a higher level, that at least a portion of a contiguous area of memory is permitted to be overwritten.
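
    An illustrative reading of the abstract in code form: before a reformatting routine rewrites a destination region, higher-level code issues a low-level hint that the region may be overwritten, so the memory system need not preserve or fetch its old contents. The hint function below is hypothetical, not the patented mechanism.

        #include <stddef.h>

        void mem_may_overwrite(void *start, size_t len);  /* assumed low-level hint */

        void repack_block(double *dst, const double *src, size_t n)
        {
            mem_may_overwrite(dst, n * sizeof *dst);  /* dst contents no longer needed */
            for (size_t i = 0; i < n; i++)
                dst[i] = src[i];                      /* e.g., copy into a new data layout */
        }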

  20. Storage and executive processes in the frontal lobes.

    PubMed

    Smith, E E; Jonides, J

    1999-03-12

    The human frontal cortex helps mediate working memory, a system that is used for temporary storage and manipulation of information and that is involved in many higher cognitive functions. Working memory includes two components: short-term storage (on the order of seconds) and executive processes that operate on the contents of storage. Recently, these two components have been investigated in functional neuroimaging studies. Studies of storage indicate that different frontal regions are activated for different kinds of information: storage for verbal materials activates Broca's area and left-hemisphere supplementary and premotor areas; storage of spatial information activates the right-hemisphere premotor cortex; and storage of object information activates other areas of the prefrontal cortex. Two of the fundamental executive processes are selective attention and task management. Both processes activate the anterior cingulate and dorsolateral prefrontal cortex.

  1. Applying a cloud computing approach to storage architectures for spacecraft

    NASA Astrophysics Data System (ADS)

    Baldor, Sue A.; Quiroz, Carlos; Wood, Paul

    As sensor technologies, processor speeds, and memory densities increase, spacecraft command, control, processing, and data storage systems have grown in complexity to take advantage of these improvements and expand the possible missions of spacecraft. Spacecraft systems engineers are increasingly looking for novel ways to address this growth in complexity and mitigate associated risks. Looking to conventional computing, many solutions have been executed to solve both the problem of complexity and heterogeneity in systems. In particular, the cloud-based paradigm provides a solution for distributing applications and storage capabilities across multiple platforms. In this paper, we propose utilizing a cloud-like architecture to provide a scalable mechanism for providing mass storage in spacecraft networks that can be reused on multiple spacecraft systems. By presenting a consistent interface to applications and devices that request data to be stored, complex systems designed by multiple organizations may be more readily integrated. Behind the abstraction, the cloud storage capability would manage wear-leveling, power consumption, and other attributes related to the physical memory devices, critical components in any mass storage solution for spacecraft. Our approach employs SpaceWire networks and SpaceWire-capable devices, although the concept could easily be extended to non-heterogeneous networks consisting of multiple spacecraft and potentially the ground segment.
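
    The sketch below illustrates the consistent-interface idea described above: applications call the same store/retrieve operations regardless of which physical memory device on the network ends up holding the data, with wear-leveling and device management hidden behind the interface. The interface and all names are assumptions for illustration, not the paper's design.

        #include <stddef.h>
        #include <stdint.h>

        typedef struct MassStore {
            int  (*store)(struct MassStore *, uint64_t key, const void *buf, size_t len);
            int  (*retrieve)(struct MassStore *, uint64_t key, void *buf, size_t len);
            void *backend_state;   /* wear-leveling tables, device maps, power policy, etc. */
        } MassStore;

        /* An instrument simply asks the storage cloud to keep a record; it does not
         * know or care which device or node the data lands on. */
        int log_observation(MassStore *cloud, uint64_t record_id,
                            const void *sample, size_t sample_len)
        {
            return cloud->store(cloud, record_id, sample, sample_len);
        }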

  2. Improving Memory for Optimization and Learning in Dynamic Environments

    DTIC Science & Technology

    2011-07-01

    algorithm uses simple, incremental clustering to separate solutions into memory entries. The cluster centers are used as the models in the memory. This is...entire days of traffic with realistic traffic demands and turning ratios on a 32-intersection network modeled on downtown Pittsburgh, Pennsylvania...early/tardy problem. Management Science, 35(2):177–191, 1989. [78] Daniel Parrott and Xiaodong Li. A particle swarm model for tracking multiple peaks in

  3. A Layered Solution for Supercomputing Storage

    ScienceCinema

    Grider, Gary

    2018-06-13

    To solve the supercomputing challenge of memory keeping up with processing speed, a team at Los Alamos National Laboratory developed two innovative memory management and storage technologies. Burst buffers peel off data onto flash memory to support the checkpoint/restart paradigm of large simulations. MarFS adds a thin software layer enabling a new tier for campaign storage—based on inexpensive, failure-prone disk drives—between disk drives and tape archives.

  4. Computational Models of Human Performance: Validation of Memory and Procedural Representation in Advanced Air/Ground Simulation

    NASA Technical Reports Server (NTRS)

    Corker, Kevin M.; Labacqz, J. Victor (Technical Monitor)

    1997-01-01

    The Man-Machine Interaction Design and Analysis System (MIDAS), developed under a joint U.S. Army and NASA cooperative effort, is intended to assist designers of complex human/automation systems in successfully incorporating human performance capabilities and limitations into decision and action support systems. MIDAS is a computational representation of multiple human operators, selected perceptual, cognitive, and physical functions of those operators, and the physical/functional representation of the equipment with which they operate. MIDAS has been used as an integrated predictive framework for the investigation of human/machine systems, particularly in situations with high demands on the operators. We have extended the human performance models to include representation of both human operators and intelligent aiding systems in flight management and air traffic service. The focus of this development is to predict human performance in response to aiding systems developed to identify aircraft conflicts and to assist in the shared authority for resolution. The demands of this application require representation of many intelligent agents sharing world-models, coordinating action/intention, and cooperatively scheduling goals and action in a somewhat unpredictable world of operations. In recent applications to airborne systems development, MIDAS has demonstrated an ability to predict flight crew decision-making and procedural behavior when interacting with automated flight management systems and Air Traffic Control. In this paper, we describe two enhancements to MIDAS. The first involves the addition of working memory in the form of an articulatory buffer for verbal communication protocols and a visuo-spatial buffer for communications via digital datalink. The second enhancement is a representation of multiple operators working as a team. This enhanced model was used to predict the performance of human flight crews and their level of compliance with commercial aviation communication procedures. We show how the data produced by MIDAS compare with flight crew performance data from full mission simulations. Finally, we discuss the use of these features to study communication issues connected with aircraft-based separation assurance.

  5. Engagement of the PFC in Consolidation and Recall of Recent Spatial Memory

    ERIC Educational Resources Information Center

    Leon, Wanda C.; Bruno, Martin A.; Allard, Simon; Nader, Karim; Cuello, A. Claudio

    2010-01-01

    The standard model of system consolidation proposes that memories are initially hippocampus dependent and become hippocampus independent over time. Previous studies have demonstrated the involvement of the medial prefrontal cortex (mPFC) in the retrieval of remote memories. The transformations required to make a memory undergo system's…

  6. Genetic Disruption of the Core Circadian Clock Impairs Hippocampus-Dependent Memory

    ERIC Educational Resources Information Center

    Wardlaw, Sarah M.; Phan, Trongha X.; Saraf, Amit; Chen, Xuanmao; Storm, Daniel R.

    2014-01-01

    Perturbing the circadian system by electrolytically lesioning the suprachiasmatic nucleus (SCN) or varying the environmental light:dark schedule impairs memory, suggesting that memory depends on the circadian system. We used a genetic approach to evaluate the role of the molecular clock in memory. Bmal1[superscript -/-] mice, which are arrhythmic…

  7. The importance of ecological memory for trophic rewilding as an ecosystem restoration approach.

    PubMed

    Schweiger, Andreas H; Boulangeat, Isabelle; Conradi, Timo; Davis, Matt; Svenning, Jens-Christian

    2018-06-06

    Increasing human pressure on strongly defaunated ecosystems is characteristic of the Anthropocene and calls for proactive restoration approaches that promote self-sustaining, functioning ecosystems. However, the suitability of novel restoration concepts such as trophic rewilding is still under discussion given fragmentary empirical data and limited theory development. Here, we develop a theoretical framework that integrates the concept of 'ecological memory' into trophic rewilding. The ecological memory of an ecosystem is defined as an ecosystem's accumulated abiotic and biotic material and information legacies from past dynamics. By summarising existing knowledge about the ecological effects of megafauna extinction and rewilding across a large range of spatial and temporal scales, we identify two key drivers of ecosystem responses to trophic rewilding: (i) impact potential of (re)introduced megafauna, and (ii) ecological memory characterising the focal ecosystem. The impact potential of (re)introduced megafauna species can be estimated from species properties such as lifetime per capita engineering capacity, population density, home range size and niche overlap with resident species. The importance of ecological memory characterising the focal ecosystem depends on (i) the absolute time since megafauna loss, (ii) the speed of abiotic and biotic turnover, (iii) the strength of species interactions characterising the focal ecosystem, and (iv) the compensatory capacity of surrounding source ecosystems. These properties related to the focal and surrounding ecosystems mediate material and information legacies (its ecological memory) and modulate the net ecosystem impact of (re)introduced megafauna species. We provide practical advice about how to quantify all these properties while highlighting the strong link between ecological memory and historically contingent ecosystem trajectories. With this newly established ecological memory-rewilding framework, we hope to guide future empirical studies that investigate the ecological effects of trophic rewilding and other ecosystem-restoration approaches. The proposed integrated conceptual framework should also assist managers and decision makers to anticipate the possible trajectories of ecosystem dynamics after restoration actions and to weigh plausible alternatives. This will help practitioners to develop adaptive management strategies for trophic rewilding that could facilitate sustainable management of functioning ecosystems in an increasingly human-dominated world. © 2018 Cambridge Philosophical Society.

  8. Working memory is not fixed-capacity: More active storage capacity for real-world objects than for simple stimuli

    PubMed Central

    Brady, Timothy F.; Störmer, Viola S.; Alvarez, George A.

    2016-01-01

    Visual working memory is the cognitive system that holds visual information active to make it resistant to interference from new perceptual input. Information about simple stimuli—colors and orientations—is encoded into working memory rapidly: In under 100 ms, working memory "fills up," revealing a stark capacity limit. However, for real-world objects, the same behavioral limits do not hold: With increasing encoding time, people store more real-world objects and do so with more detail. This boost in performance for real-world objects is generally assumed to reflect the use of a separate episodic long-term memory system, rather than working memory. Here we show that this behavioral increase in capacity with real-world objects is not solely due to the use of separate episodic long-term memory systems. In particular, we show that this increase is a result of active storage in working memory, as shown by directly measuring neural activity during the delay period of a working memory task using EEG. These data challenge fixed-capacity working memory models and demonstrate that working memory and its capacity limitations are dependent upon our existing knowledge. PMID:27325767

  9. Working memory is not fixed-capacity: More active storage capacity for real-world objects than for simple stimuli.

    PubMed

    Brady, Timothy F; Störmer, Viola S; Alvarez, George A

    2016-07-05

    Visual working memory is the cognitive system that holds visual information active to make it resistant to interference from new perceptual input. Information about simple stimuli (colors and orientations) is encoded into working memory rapidly: In under 100 ms, working memory "fills up," revealing a stark capacity limit. However, for real-world objects, the same behavioral limits do not hold: With increasing encoding time, people store more real-world objects and do so with more detail. This boost in performance for real-world objects is generally assumed to reflect the use of a separate episodic long-term memory system, rather than working memory. Here we show that this behavioral increase in capacity with real-world objects is not solely due to the use of separate episodic long-term memory systems. In particular, we show that this increase is a result of active storage in working memory, as shown by directly measuring neural activity during the delay period of a working memory task using EEG. These data challenge fixed-capacity working memory models and demonstrate that working memory and its capacity limitations are dependent upon our existing knowledge.

  10. Making the case that episodic recollection is attributable to operations occurring at retrieval rather than to content stored in a dedicated subsystem of long-term memory.

    PubMed

    Klein, Stanley B

    2013-01-01

    Episodic memory often is conceptualized as a uniquely human system of long-term memory that makes available knowledge accompanied by the temporal and spatial context in which that knowledge was acquired. Retrieval from episodic memory entails a form of first-person subjectivity called autonoetic consciousness that provides a sense that a recollection was something that took place in the experiencer's personal past. In this paper I expand on this definition of episodic memory. Specifically, I suggest that (1) the core features assumed unique to episodic memory are shared by semantic memory, (2) episodic memory cannot be fully understood unless one appreciates that episodic recollection requires the coordinated function of a number of distinct, yet interacting, "enabling" systems. Although these systems (ownership, self, subjective temporality, and agency) are not traditionally viewed as memorial in nature, each is necessary for episodic recollection and jointly they may be sufficient, and (3) the type of subjective awareness provided by episodic recollection (autonoetic) is relational rather than intrinsic; i.e., it can be lost in certain patient populations, thus rendering episodic memory content indistinguishable from the content of semantic long-term memory.

  11. The nature of the semantic/episodic memory distinction: A missing piece of the "working through" process.

    PubMed

    Klein, Stanley B; Markowitsch, Hans J

    2015-01-01

    The relations between the semantic and episodic-autobiographical memory systems are more complex than described in the target article. We argue that understanding the noetic/autonoetic distinction provides critical insights into the foundation of the delineation between the two memory systems. Clarity with respect to the criteria for classification of these two systems, and the evolving conceptualization of episodic memory, can further neuroscientifically informed therapeutic approaches.

  12. A Proposal of 3-dimensional Self-organizing Memory and Its Application to Knowledge Extraction from Natural Language

    NASA Astrophysics Data System (ADS)

    Sakakibara, Kai; Hagiwara, Masafumi

    In this paper, we propose a 3-dimensional self-organizing memory and describe its application to knowledge extraction from natural language. First, the proposed system extracts relations between words using JUMAN (a morpheme analysis system) and KNP (a syntax analysis system) and stores them in short-term memory. In short-term memory, the relations are attenuated as processing proceeds. However, relations that appear frequently are stored in long-term memory without attenuation. The relations in long-term memory are placed into the proposed 3-dimensional self-organizing memory. In the learning phase, we used a new learning algorithm called "Potential Firing". In the recall phase, the proposed system recalls relational knowledge from the learned knowledge based on the input sentence, using a new recall algorithm called "Waterfall Recall". We added a function to respond to natural-language questions with "yes/no" in order to confirm the validity of the proposed system by counting correct answers.
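
    A minimal sketch of the short-term/long-term split described above: extracted relations lose strength as processing continues, while relations that recur often enough are promoted to long-term storage and no longer attenuate. The decay factor and promotion threshold are assumed values, not taken from the paper.

        #include <stdbool.h>

        typedef struct {
            int    word_a, word_b;    /* indices of the related words            */
            double strength;          /* attenuated while in short-term memory   */
            int    appearances;       /* how often the relation has been seen    */
            bool   in_long_term;      /* promoted relations stop decaying        */
        } Relation;

        #define DECAY_FACTOR        0.9  /* per processing step (assumed) */
        #define PROMOTION_THRESHOLD 3    /* appearances needed for long-term storage (assumed) */

        void update_relation(Relation *r, bool seen_this_step)
        {
            if (seen_this_step) {
                r->appearances++;
                r->strength = 1.0;
            } else if (!r->in_long_term) {
                r->strength *= DECAY_FACTOR;      /* attenuation in short-term memory */
            }
            if (r->appearances >= PROMOTION_THRESHOLD)
                r->in_long_term = true;           /* stored without further attenuation */
        }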

  13. System-Level Integration of Mass Memory

    NASA Technical Reports Server (NTRS)

    Cox, Brian; Mellstrom, Jeffrey; Wysocky, Terry

    2008-01-01

    A report discusses integrating multiple memory modules on the high-speed serial interconnect (IEEE 1393) that is used by a spacecraft's inter-module communications in order to ease data congestion and provide for a scalable, strong, flexible system that can meet new system-level mass memory requirements.

  14. High efficiency coherent optical memory with warm rubidium vapour

    PubMed Central

    Hosseini, M.; Sparkes, B.M.; Campbell, G.; Lam, P.K.; Buchler, B.C.

    2011-01-01

    By harnessing aspects of quantum mechanics, communication and information processing could be radically transformed. Promising forms of quantum information technology include optical quantum cryptographic systems and computing using photons for quantum logic operations. As with current information processing systems, some form of memory will be required. Quantum repeaters, which are required for long distance quantum key distribution, require quantum optical memory as do deterministic logic gates for optical quantum computing. Here, we present results from a coherent optical memory based on warm rubidium vapour and show 87% efficient recall of light pulses, the highest efficiency measured to date for any coherent optical memory suitable for quantum information applications. We also show storage and recall of up to 20 pulses from our system. These results show that simple warm atomic vapour systems have clear potential as a platform for quantum memory. PMID:21285952

  15. High efficiency coherent optical memory with warm rubidium vapour.

    PubMed

    Hosseini, M; Sparkes, B M; Campbell, G; Lam, P K; Buchler, B C

    2011-02-01

    By harnessing aspects of quantum mechanics, communication and information processing could be radically transformed. Promising forms of quantum information technology include optical quantum cryptographic systems and computing using photons for quantum logic operations. As with current information processing systems, some form of memory will be required. Quantum repeaters, which are required for long distance quantum key distribution, require quantum optical memory as do deterministic logic gates for optical quantum computing. Here, we present results from a coherent optical memory based on warm rubidium vapour and show 87% efficient recall of light pulses, the highest efficiency measured to date for any coherent optical memory suitable for quantum information applications. We also show storage and recall of up to 20 pulses from our system. These results show that simple warm atomic vapour systems have clear potential as a platform for quantum memory.

  16. 40 CFR 86.099-17 - Emission control diagnostic system for 1999 and later light-duty vehicles and light-duty trucks.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... of computer codes. The emission control diagnostic system shall record and store in computer memory..., shall be stored in computer memory to identify correctly functioning emission control systems and those... in computer memory. Should a subsequent fuel system or misfire malfunction occur, any previously...

  17. 40 CFR 86.099-17 - Emission control diagnostic system for 1999 and later light-duty vehicles and light-duty trucks.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... of computer codes. The emission control diagnostic system shall record and store in computer memory..., shall be stored in computer memory to identify correctly functioning emission control systems and those... in computer memory. Should a subsequent fuel system or misfire malfunction occur, any previously...

  18. System Assessment of a High Power 3-U CubeSat

    NASA Technical Reports Server (NTRS)

    Shaw, Katie

    2016-01-01

    The Advanced eLectrical Bus (ALBus) CubeSat project is a technology demonstration mission of a 3-U CubeSat with an advanced, digitally controlled electrical power system capability and novel use of Shape Memory Alloy (SMA) technology for reliable deployable solar array mechanisms. The objective of the project is to, through an on-orbit demonstration, advance the state of power management and distribution (PMAD) capabilities to enable future missions requiring higher power, flexible and reliable power systems. The goals of the mission include demonstration of: 100 Watt distribution to a target electrical load, efficient battery charging in the orbital environment, flexible power system distribution interfaces, adaptation of power system control on orbit, and reliable deployment of solar arrays and antennas utilizing re-settable SMA mechanisms. The power distribution function of the ALBus PMAD system is unique in the total power to target load capability of 100 W, the flexibility to support centralized or point-to-load regulation, and the ability to respond to fast transient power requirements. Power will be distributed from batteries at 14.8 V, 6.5 A to provide 100 W of power directly to a load. The deployable solar arrays utilize NASA Glenn Research Center superelastic and activated Nitinol (nickel-titanium alloy) Shape Memory Alloy (SMA) technology for hinges and a retention and release mechanism. The deployable solar array hinge design features utilization of the SMA material properties for a dual purpose. The hinge uses the shape memory properties of the SMA to provide the spring force to deploy the arrays. The electrical conductivity properties of the SMA also enable the design to provide clean conduits for power transfer from the deployable arrays to the power management system. This eliminates the need for electrical harnesses between the arrays and the PMAD system in the ALBus system design. The uniqueness of the SMA retention and release mechanism design is the ability to reset the mechanism, allowing functional tests of the mechanisms prior to flight with no degradation of performance. The project is currently in preparation at the NASA Glenn Research Center for a launch in late calendar year 2017. The 100 Watt power distribution and dual purpose, re-settable SMA mechanisms introduced several system level challenges due to the physical constraints in volume, mass and surface area of 3-U CubeSats. Several trade studies and design cycles have been completed to develop a system which supports the project objectives. This paper is a report on the results of the system level trade studies and assessments. The results include assessment of options for thermal control of 100 Watts of power dissipation, data from system analyses and engineering development tests, limitations of the 3-U system and extensibility to larger scale CubeSat missions.

  19. Stress and multiple memory systems: from 'thinking' to 'doing'.

    PubMed

    Schwabe, Lars; Wolf, Oliver T

    2013-02-01

    Although it has been known for decades that stress influences memory performance, it was only recently shown that stress may alter the contribution of multiple, anatomically and functionally distinct memory systems to behavior. Here, we review recent animal and human studies demonstrating that stress promotes a shift from flexible 'cognitive' to rather rigid 'habit' memory systems and discuss, based on recent neuroimaging data in humans, the underlying brain mechanisms. We argue that, despite being generally adaptive, this stress-induced shift towards 'habit' memory may, in vulnerable individuals, be a risk factor for psychopathology. Copyright © 2012 Elsevier Ltd. All rights reserved.

  20. Multi-range force sensors utilizing shape memory alloys

    DOEpatents

    Varma, Venugopal K.

    2003-04-15

    The present invention provides a multi-range force sensor comprising a load cell made of a shape memory alloy, a strain sensing system, a temperature modulating system, and a temperature monitoring system. The ability of the force sensor to measure contact forces in multiple ranges is effected by the change in temperature of the shape memory alloy. The heating and cooling system functions to place the shape memory alloy of the load cell in either a low temperature, low strength phase for measuring small contact forces, or a high temperature, high strength phase for measuring large contact forces. Once the load cell is in the desired phase, the strain sensing system is utilized to obtain the applied contact force. The temperature monitoring system is utilized to ensure that the shape memory alloy is in one phase or the other.
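
    A sketch of the measurement sequence the patent implies: the temperature modulating system places the shape memory alloy load cell in the low-strength or high-strength phase, the temperature monitoring system confirms the phase, and the strain reading is converted to force with a phase-specific calibration. All function names, set points, and gains below are hypothetical.

        #include <stdbool.h>

        extern void   set_cell_temperature(double celsius);  /* temperature modulating system */
        extern double read_cell_temperature(void);            /* temperature monitoring system */
        extern double read_strain(void);                      /* strain sensing system         */

        #define LOW_PHASE_TEMP   25.0   /* illustrative set point: low-strength phase  */
        #define HIGH_PHASE_TEMP  90.0   /* illustrative set point: high-strength phase */

        /* The softer low-temperature phase gives more strain per unit force, so it
         * resolves small loads; the stiff high-temperature phase covers large loads. */
        double measure_force(bool small_range)
        {
            double target = small_range ? LOW_PHASE_TEMP : HIGH_PHASE_TEMP;
            set_cell_temperature(target);
            while (read_cell_temperature() < target - 1.0 ||
                   read_cell_temperature() > target + 1.0) {
                /* wait for the alloy to settle into the desired phase */
            }
            double gain = small_range ? 0.5 : 50.0;   /* hypothetical force per unit strain */
            return gain * read_strain();
        }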

  1. Multiple Transient Memories in Experiments on Sheared Non-Brownian Suspensions

    NASA Astrophysics Data System (ADS)

    Paulsen, Joseph D.; Keim, Nathan C.; Nagel, Sidney R.

    2014-08-01

    A system with multiple transient memories can remember a set of inputs but subsequently forgets almost all of them, even as they are continually applied. If noise is added, the system can store all memories indefinitely. The phenomenon has recently been predicted for cyclically sheared non-Brownian suspensions. Here we present experiments on such suspensions, finding behavior consistent with multiple transient memories and showing how memories can be stabilized by noise.

  2. The potential of multi-port optical memories in digital computing

    NASA Technical Reports Server (NTRS)

    Alford, C. O.; Gaylord, T. K.

    1975-01-01

    A high-capacity memory with a relatively high data transfer rate and multi-port simultaneous access capability may serve as the basis for new computer architectures. The implementation of a multi-port optical memory is discussed. Several computer structures are presented that might profitably use such a memory. These structures include (1) a simultaneous record access system, (2) a simultaneously shared memory computer system, and (3) a parallel digital processing structure.

  3. Seeing the Wood for the Trees: Applying the dual-memory system model to investigate expert teachers' observational skills in natural ecological learning environments

    NASA Astrophysics Data System (ADS)

    Stolpe, Karin; Björklund, Lars

    2012-01-01

    This study aims to investigate two expert ecology teachers' ability to attend to essential details in a complex environment during a field excursion, as well as how they teach this ability to their students. In applying a cognitive dual-memory system model for learning, we also suggest a rationale for their behaviour. The model implies two separate memory systems: the implicit, non-conscious, non-declarative system and the explicit, conscious, declarative system. This model provided the starting point for the research design. However, it was revised from the empirical findings supported by new theoretical insights. The teachers were video and audio recorded during their excursion and interviewed in a stimulated recall setting afterwards. The data were qualitatively analysed using the dual-memory system model. The results show that the teachers used holistic pattern recognition in their own identification of natural objects. However, teachers' main strategy to teach this ability is to give the students explicit rules or specific characteristics. According to the dual-memory system model the holistic pattern recognition is processed in the implicit memory system as a non-conscious match with earlier experienced situations. We suggest that this implicit pattern matching serves as an explanation for teachers' ecological and teaching observational skills. Another function of the implicit memory system is its ability to control automatic behaviour and non-conscious decision-making. The teachers offer the students firsthand sensory experiences which provide a prerequisite for the formation of implicit memories that provides a foundation for expertise.

  4. Glucocorticoids interact with the hippocampal endocannabinoid system in impairing retrieval of contextual fear memory

    PubMed Central

    Atsak, Piray; Hauer, Daniela; Campolongo, Patrizia; Schelling, Gustav; McGaugh, James L.; Roozendaal, Benno

    2012-01-01

    There is extensive evidence that glucocorticoid hormones impair the retrieval of memory of emotionally arousing experiences. Although it is known that glucocorticoid effects on memory retrieval impairment depend on rapid interactions with arousal-induced noradrenergic activity, the exact mechanism underlying this presumably nongenomically mediated glucocorticoid action remains to be elucidated. Here, we show that the hippocampal endocannabinoid system, a rapidly activated retrograde messenger system, is involved in mediating glucocorticoid effects on retrieval of contextual fear memory. Systemic administration of corticosterone (0.3–3 mg/kg) to male Sprague–Dawley rats 1 h before retention testing impaired the retrieval of contextual fear memory without impairing the retrieval of auditory fear memory or directly affecting the expression of freezing behavior. Importantly, a blockade of hippocampal CB1 receptors with AM251 prevented the impairing effect of corticosterone on retrieval of contextual fear memory, whereas the same impairing dose of corticosterone increased hippocampal levels of the endocannabinoid 2-arachidonoylglycerol. We also found that antagonism of hippocampal β-adrenoceptor activity with local infusions of propranolol blocked the memory retrieval impairment induced by the CB receptor agonist WIN55,212–2. Thus, these findings strongly suggest that the endocannabinoid system plays an intermediary role in regulating rapid glucocorticoid effects on noradrenergic activity in impairing memory retrieval of emotionally arousing experiences. PMID:22331883

  5. 77 FR 29875 - Establishment of Class E Airspace; Houston, MO

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-21

    ... Instrument Approach Procedures at Houston Memorial Airport. The FAA is taking this action to enhance the safety and management of Instrument Flight Rule (IFR) operations at the airport. DATES: Effective date... accommodate new standard instrument approach procedures at Houston Memorial Airport, Houston, MO. This action...

  6. Spacecraft computer resource margin management. [of Project Galileo Orbiter in-flight reprogramming task

    NASA Technical Reports Server (NTRS)

    Larman, B. T.

    1981-01-01

    The Project Galileo Orbiter, with 18 microcomputers and the equivalent of 360K 8-bit bytes of memory contained within two major engineering subsystems and eight science instruments, requires that the key onboard computer system resources be managed in a very rigorous manner. Attention is given to the rationale behind the project policy, the development stage, the preliminary design stage, the design/implementation stage, and the optimization or 'scrubbing' stage. The implementation of the policy is discussed, taking into account the development of the Attitude and Articulation Control Subsystem (AACS) and the Command and Data Subsystem (CDS), the reporting of margin status, and the response to allocation oversubscription.
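
    The margin-management process described above can be reduced to a simple bookkeeping model. The sketch below is a hypothetical illustration of tracking allocations against a fixed memory budget and flagging oversubscription; the class name, subsystem labels, capacity figure, and 20% reserve are assumptions made for the example, not values from the Galileo project.

      # Hypothetical sketch of onboard memory margin bookkeeping; all names and
      # numbers are illustrative, not taken from Project Galileo documents.
      class ResourceMargin:
          def __init__(self, name, capacity_bytes, reserve_fraction=0.20):
              self.name = name
              self.capacity = capacity_bytes
              self.reserve = int(capacity_bytes * reserve_fraction)  # project-held margin
              self.allocations = {}                                  # subsystem -> bytes

          def allocate(self, subsystem, nbytes):
              self.allocations[subsystem] = self.allocations.get(subsystem, 0) + nbytes

          def margin(self):
              return self.capacity - self.reserve - sum(self.allocations.values())

          def report(self):
              status = "OK" if self.margin() >= 0 else "OVERSUBSCRIBED"
              return f"{self.name}: margin={self.margin()} bytes [{status}]"

      cds = ResourceMargin("CDS memory", capacity_bytes=176 * 1024)
      cds.allocate("flight software", 130 * 1024)
      cds.allocate("sequence buffers", 30 * 1024)
      print(cds.report())   # margin goes negative once allocations exceed the budget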

  7. Optical memory development. Volume 1: prototype memory system

    NASA Technical Reports Server (NTRS)

    Cosentino, L. S.; Mezrich, R. S.; Nagle, E. M.; Stewart, W. C.; Wendt, F. S.

    1972-01-01

    The design, development, and implementation of a prototype, partially populated, million bit read-write holographic memory system using state-of-the-art components are described. The system employs an argon ion laser, acoustooptic beam deflectors, a holographic beam splitter (hololens), a nematic liquid crystal page composer, a photoconductor-thermoplastic erasable storage medium, and a silicon P-I-N photodiode array, together with lenses and electronics of both conventional and custom design. Operation of the prototype memory system was successfully demonstrated. Careful attention is given to the analysis from which the design criteria were developed. Specifications for the major components are listed, along with the details of their construction and performance. The primary conclusion resulting from this program is that the basic principles of read-write holographic memory systems are well understood and are reducible to practice.

  8. Recognition memory span in autopsy-confirmed Dementia with Lewy Bodies and Alzheimer's Disease.

    PubMed

    Salmon, David P; Heindel, William C; Hamilton, Joanne M; Vincent Filoteo, J; Cidambi, Varun; Hansen, Lawrence A; Masliah, Eliezer; Galasko, Douglas

    2015-08-01

    Evidence from patients with amnesia suggests that recognition memory span tasks engage both long-term memory (i.e., secondary memory) processes mediated by the diencephalic-medial temporal lobe memory system and working memory processes mediated by fronto-striatal systems. Thus, the recognition memory span task may be particularly effective for detecting memory deficits in disorders that disrupt both memory systems. The presence of unique pathology in fronto-striatal circuits in Dementia with Lewy Bodies (DLB) compared to AD suggests that performance on the recognition memory span task might be differentially affected in the two disorders even though they have quantitatively similar deficits in secondary memory. In the present study, patients with autopsy-confirmed DLB or AD, and Normal Control (NC) participants, were tested on separate recognition memory span tasks that required them to retain increasing amounts of verbal, spatial, or visual object (i.e., faces) information across trials. Results showed that recognition memory spans for verbal and spatial stimuli, but not face stimuli, were lower in patients with DLB than in those with AD, and more impaired relative to NC performance. This was despite similar deficits in the two patient groups on independent measures of secondary memory such as the total number of words recalled from long-term storage on the Buschke Selective Reminding Test. The disproportionate vulnerability of recognition memory span task performance in DLB compared to AD may be due to greater fronto-striatal involvement in DLB and a corresponding decrement in cooperative interaction between working memory and secondary memory processes. Assessment of recognition memory span may contribute to the ability to distinguish between DLB and AD relatively early in the course of disease. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Recognition Memory Span in Autopsy-Confirmed Dementia with Lewy Bodies and Alzheimer’s Disease

    PubMed Central

    Salmon, David P.; Heindel, William C.; Hamilton, Joanne M.; Filoteo, J. Vincent; Cidambi, Varun; Hansen, Lawrence A.; Masliah, Eliezer; Galasko, Douglas

    2016-01-01

    Evidence from patients with amnesia suggests that recognition memory span tasks engage both long-term memory (i.e., secondary memory) processes mediated by the diencephalic-medial temporal lobe memory system and working memory processes mediated by fronto-striatal systems. Thus, the recognition memory span task may be particularly effective for detecting memory deficits in disorders that disrupt both memory systems. The presence of unique pathology in fronto-striatal circuits in Dementia with Lewy Bodies (DLB) compared to AD suggests that performance on the recognition memory span task might be differentially affected in the two disorders even though they have quantitatively similar deficits in secondary memory. In the present study, patients with autopsy-confirmed DLB or AD, and normal control (NC) participants, were tested on separate recognition memory span tasks that required them to retain increasing amounts of verbal, spatial, or visual object (i.e., faces) information across trials. Results showed that recognition memory spans for verbal and spatial stimuli, but not face stimuli, were lower in patients with DLB than in those with AD, and more impaired relative to NC performance. This was despite similar deficits in the two patient groups on independent measures of secondary memory such as the total number of words recalled from Long-Term Storage on the Buschke Selective Reminding Test. The disproportionate vulnerability of recognition memory span task performance in DLB compared to AD may be due to greater fronto-striatal involvement in DLB and a corresponding decrement in cooperative interaction between working memory and secondary memory processes. Assessment of recognition memory span may contribute to the ability to distinguish between DLB and AD relatively early in the course of disease. PMID:26184443

  10. Episodic memories.

    PubMed

    Conway, Martin A

    2009-09-01

    An account of episodic memories is developed that focuses on the types of knowledge they represent, their properties, and the functions they might serve. It is proposed that episodic memories consist of episodic elements, summary records of experience often in the form of visual images, associated to a conceptual frame that provides a conceptual context. Episodic memories are embedded in a more complex conceptual system in which they can become the basis of autobiographical memories. However, the function of episodic memories is to keep a record of progress with short-term goals and access to most episodic memories is lost soon after their formation. Finally, it is suggested that developmentally episodic memories form the basis of the conceptual system and it is from sets of episodic memories that early non-verbal conceptual knowledge is abstracted.

  11. Greater Huachuca Mountains Fire Management Group

    Treesearch

    Brooke S. Gebow; Carol Lambert

    2005-01-01

    The Greater Huachuca Mountains Fire Management Group is developing a fire management plan for 500,000 acres in southeast Arizona. Partner land managers include Arizona State Parks, Arizona State Lands, Audubon Research Ranch, Coronado National Forest, Coronado National Memorial, Fort Huachuca, The Nature Conservancy, San Pedro Riparian National Conservation Area, and...

  12. Noise reduction in optically controlled quantum memory

    NASA Astrophysics Data System (ADS)

    Ma, Lijun; Slattery, Oliver; Tang, Xiao

    2018-05-01

    Quantum memory is an essential tool for quantum communications systems and quantum computers. An important category of quantum memory, called optically controlled quantum memory, uses a strong classical beam to control the storage and re-emission of a single-photon signal through an atomic ensemble. In this type of memory, the residual light from the strong classical control beam can cause severe noise and degrade the system performance significantly. Efficiently suppressing this noise is a requirement for the successful implementation of optically controlled quantum memories. In this paper, we briefly introduce the latest and most common approaches to quantum memory and review the various noise-reduction techniques used in implementing them.

  13. The Development of Attention Systems and Working Memory in Infancy

    PubMed Central

    Reynolds, Greg D.; Romano, Alexandra C.

    2016-01-01

    In this article, we review research and theory on the development of attention and working memory in infancy using a developmental cognitive neuroscience framework. We begin with a review of studies examining the influence of attention on neural and behavioral correlates of an earlier developing and closely related form of memory (i.e., recognition memory). Findings from studies measuring attention utilizing looking measures, heart rate, and event-related potentials (ERPs) indicate significant developmental change in sustained and selective attention across the infancy period. For example, infants show gains in the magnitude of the attention related response and spend a greater proportion of time engaged in attention with increasing age (Richards and Turner, 2001). Throughout infancy, attention has a significant impact on infant performance on a variety of tasks tapping into recognition memory; however, this approach to examining the influence of infant attention on memory performance has yet to be utilized in research on working memory. In the second half of the article, we review research on working memory in infancy focusing on studies that provide insight into the developmental timing of significant gains in working memory as well as research and theory related to neural systems potentially involved in working memory in early development. We also examine issues related to measuring and distinguishing between working memory and recognition memory in infancy. To conclude, we discuss relations between the development of attention systems and working memory. PMID:26973473

  14. Role of medial prefrontal cortex serotonin 2A receptors in the control of retrieval of recognition memory in rats.

    PubMed

    Bekinschtein, Pedro; Renner, Maria Constanza; Gonzalez, Maria Carolina; Weisstaub, Noelia

    2013-10-02

    Often, retrieval cues are not uniquely related to one specific memory, which could lead to memory interference. Controlling interference is particularly important during episodic memory retrieval or when remembering specific events in a spatiotemporal context. Despite a clear involvement of prefrontal cortex (PFC) in episodic memory in human studies, information regarding the mechanisms and neurotransmitter systems in PFC involved in memory is scarce. Although the serotoninergic system has been linked to PFC functionality and modulation, its role in memory processing is poorly understood. We hypothesized that the serotoninergic system in PFC, in particular the 5-HT2A receptor (5-HT2AR) could have a role in the control of memory retrieval. In this work we used different versions of the object recognition task in rats to study the role of the serotoninergic modulation in the medial PFC (mPFC) in memory retrieval. We found that blockade of 5-HT2AR in mPFC affects retrieval of an object in context memory in a spontaneous novelty preference task, while sparing single-item recognition memory. We also determined that 5-HT2ARs in mPFC are required for hippocampal-mPFC interaction during retrieval of this type of memory, suggesting that the mPFC controls the expression of memory traces stored in the hippocampus biasing retrieval to the most relevant one.

  15. The Development of Attention Systems and Working Memory in Infancy.

    PubMed

    Reynolds, Greg D; Romano, Alexandra C

    2016-01-01

    In this article, we review research and theory on the development of attention and working memory in infancy using a developmental cognitive neuroscience framework. We begin with a review of studies examining the influence of attention on neural and behavioral correlates of an earlier developing and closely related form of memory (i.e., recognition memory). Findings from studies measuring attention utilizing looking measures, heart rate, and event-related potentials (ERPs) indicate significant developmental change in sustained and selective attention across the infancy period. For example, infants show gains in the magnitude of the attention related response and spend a greater proportion of time engaged in attention with increasing age (Richards and Turner, 2001). Throughout infancy, attention has a significant impact on infant performance on a variety of tasks tapping into recognition memory; however, this approach to examining the influence of infant attention on memory performance has yet to be utilized in research on working memory. In the second half of the article, we review research on working memory in infancy focusing on studies that provide insight into the developmental timing of significant gains in working memory as well as research and theory related to neural systems potentially involved in working memory in early development. We also examine issues related to measuring and distinguishing between working memory and recognition memory in infancy. To conclude, we discuss relations between the development of attention systems and working memory.

  16. Different Ways to Cue a Coherent Memory System: A Theory for Episodic, Semantic, and Procedural Tasks.

    ERIC Educational Resources Information Center

    Humphreys, Michael S.; And Others

    1989-01-01

    An associative theory of memory is proposed to serve as a counterexample to claims that dissociations among episodic, semantic, and procedural memory tasks necessitate separate memory systems. The theory is based on task analyses of matching (recognition and familiarity judgments), retrieval (cued recall), and production (free association). (TJH)

  17. Out of Place, Out of Mind: Schema-Driven False Memory Effects for Object-Location Bindings

    ERIC Educational Resources Information Center

    Lew, Adina R.; Howe, Mark L.

    2017-01-01

    Events consist of diverse elements, each processed in specialized neocortical networks, with temporal lobe memory systems binding these elements to form coherent event memories. We provide a novel theoretical analysis of an unexplored consequence of the independence of memory systems for elements and their bindings, one that raises the paradoxical…

  18. Technology To Enhance Special Education: Remediation of Problems in Logical Thinking and Memory. Final Report.

    ERIC Educational Resources Information Center

    Cavalier, Al; And Others

    A federally sponsored project was designed to incorporate a memory-assessment task and a memory strategy into a computer-based instructional system for assessing and assisting in remediating basic memory-processing and metacognitive deficiencies. The project resulted in an instructional system for school-aged children and youth with mild to…

  19. Projection multiplex recording of computer-synthesised one-dimensional Fourier holograms for holographic memory systems: mathematical and experimental modelling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Betin, A Yu; Bobrinev, V I; Verenikina, N M

    A multiplex method of recording computer-synthesised one-dimensional Fourier holograms intended for holographic memory devices is proposed. The method potentially allows increasing the recording density in the previously proposed holographic memory system based on the computer synthesis and projection recording of data page holograms. (holographic memory)

  20. A model for memory systems based on processing modes rather than consciousness.

    PubMed

    Henke, Katharina

    2010-07-01

    Prominent models of human long-term memory distinguish between memory systems on the basis of whether learning and retrieval occur consciously or unconsciously. Episodic memory formation requires the rapid encoding of associations between different aspects of an event which, according to these models, depends on the hippocampus and on consciousness. However, recent evidence indicates that the hippocampus mediates rapid associative learning with and without consciousness in humans and animals, for long-term and short-term retention. Consciousness seems to be a poor criterion for differentiating between declarative (or explicit) and non-declarative (or implicit) types of memory. A new model is therefore required in which memory systems are distinguished based on the processing operations involved rather than by consciousness.

  1. Biomaterial-based Memory Device Development by Conducting Metallic DNA

    DTIC Science & Technology

    2013-05-28

    ... ions in DNA can change the resistance of Ni-DNA by applying different polar bias and time. Therefore, we have created a multiple-states memory system. This is the first multi-states resistance memory device using bio-nanowires ... in the world. Based on this achievement, logic devices and applications will be developed in the near future. Moreover, by using the Ni-DNA detection system ...

  2. Including Memory Friction in Single- and Two-State Quantum Dynamics Simulations.

    PubMed

    Brown, Paul A; Messina, Michael

    2016-03-03

    We present a simple computational algorithm that allows for the inclusion of memory friction in a quantum dynamics simulation of a small, quantum, primary system coupled to many atoms in the surroundings. We show how including a memory friction operator, F̂, in the primary quantum system's Hamiltonian operator builds memory friction into the dynamics of the primary quantum system. We show that, in the harmonic, semi-classical limit, this friction operator causes the classical phase-space centers of a wavepacket to evolve exactly as if it were a classical particle experiencing memory friction. We also show that this friction operator can be used to include memory friction in the quantum dynamics of an anharmonic primary system. We then generalize the algorithm so that it can be used to treat a primary quantum system that is evolving, non-adiabatically on two coupled potential energy surfaces, i.e., a model that can be used to model H atom transfer, for example. We demonstrate this approach's computational ease and flexibility by showing numerical results for both harmonic and anharmonic primary quantum systems in the single surface case. Finally, we present numerical results for a model of non-adiabatic H atom transfer between a reactant and product state that includes memory friction on one or both of the non-adiabatic potential energy surfaces and uncover some interesting dynamical effects of non-memory friction on the H atom transfer process.
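
    For reference, the harmonic, semi-classical limit mentioned above is the regime in which memory friction is classically described by the generalized Langevin equation. The block below is the standard textbook form of that equation (it is not reproduced from the paper itself); the memory kernel gamma(t - t') plays the role that the friction operator F̂ is designed to capture on the quantum side.

      % Generalized Langevin equation with a memory-friction kernel (standard
      % textbook form, not taken from the cited paper).
      m\ddot{x}(t) = -\frac{\partial V(x)}{\partial x}
                     - m\int_{0}^{t} \gamma(t-t')\,\dot{x}(t')\,dt'
                     + \xi(t),
      \qquad
      \langle \xi(t)\,\xi(t') \rangle = m\,k_{B}T\,\gamma(t-t')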

  3. Digital item for digital human memory--television commerce application: family tree albuming system

    NASA Astrophysics Data System (ADS)

    Song, Jaeil; Lee, Hyejoo; Hong, JinWoo

    2004-01-01

    Technical advances in creating and storing digital media in daily life enable computers to capture human life and remember it as people do. A critical point in digitizing human life is how to recall bits of experience that are associated through semantic information. This paper proposes a technique for structuring dynamic digital objects based on the MPEG-21 Digital Item (DI) in order to recall human memory, and presents an interactive TV service, a family tree albuming system, as one of its applications. A DI is a dynamically reconfigurable, uniquely identified logical unit, described by a descriptor language, for structuring relationships among multiple media resources. Digital Item Processing (DIP) provides the means to interact with DIs to remind users of context, with active properties through which objects carry executable behavior. Each user can adapt DIs' active properties to tailor the behavior of DIs to his/her own specific needs. DI technologies for Intellectual Property Management and Protection (IPMP) can be used for privacy protection. In the interaction between the social and technological spaces, the internal dynamics of family life fits well with sharing a family albuming service via the family television. A family albuming service can act as a virtual community builder for family members. As memory is shared between family members, multiple annotations (including active properties on contextual information) will be made with snowballing value.

  4. PCM-Based Durable Write Cache for Fast Disk I/O

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Zhuo; Wang, Bin; Carpenter, Patrick

    2012-01-01

    Flash based solid-state devices (FSSDs) have been adopted within the memory hierarchy to improve the performance of hard disk drive (HDD) based storage system. However, with the fast development of storage-class memories, new storage technologies with better performance and higher write endurance than FSSDs are emerging, e.g., phase-change memory (PCM). Understanding how to leverage these state-of-the-art storage technologies for modern computing systems is important to solve challenging data intensive computing problems. In this paper, we propose to leverage PCM for a hybrid PCM-HDD storage architecture. We identify the limitations of traditional LRU caching algorithms for PCM-based caches, and develop a novel hash-based write caching scheme called HALO to improve random write performance of hard disks. To address the limited durability of PCM devices and solve the degraded spatial locality in traditional wear-leveling techniques, we further propose novel PCM management algorithms that provide effective wear-leveling while maximizing access parallelism. We have evaluated this PCM-based hybrid storage architecture using applications with a diverse set of I/O access patterns. Our experimental results demonstrate that the HALO caching scheme leads to an average reduction of 36.8% in execution time compared to the LRU caching scheme, and that the SFC wear leveling extends the lifetime of PCM by a factor of 21.6.
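
    The abstract gives only the outline of HALO, so the following snippet illustrates the generic idea of a hash-based write cache rather than the published algorithm: random block writes are hashed into PCM-resident sets, and a full set is flushed to the HDD in ascending block order so the disk sees a more sequential pattern. The set count, set size, and function names are assumptions made for the sketch.

      # Generic hash-based write cache in front of an HDD (illustrative only,
      # not the published HALO scheme; geometry and eviction policy invented).
      NUM_SETS = 1024
      SET_CAPACITY = 64
      cache = {s: {} for s in range(NUM_SETS)}   # set index -> {block_addr: data}

      def write(block_addr, data):
          s = hash(block_addr) % NUM_SETS        # hash the block address to a PCM set
          cache[s][block_addr] = data            # absorb the random write in PCM
          if len(cache[s]) >= SET_CAPACITY:      # set is full: destage it
              flush_set(s)

      def flush_set(s):
          # Flushing in ascending block order turns many scattered writes into
          # a mostly sequential pattern that the hard disk handles efficiently.
          for addr in sorted(cache[s]):
              hdd_write(addr, cache[s][addr])
          cache[s].clear()

      def hdd_write(addr, data):
          pass                                   # stand-in for the real disk I/O path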

  5. The Identification and Assessment of Late-life ADHD in Memory Clinics

    PubMed Central

    Fischer, Barbara L.; Gunter-Hunt, Gail; Steinhafel, Courtney Holm; Howell, Timothy

    2013-01-01

    INTRODUCTION Little data exists about attention deficit hyperactivity disorder (ADHD) in late life. While evaluating patients’ memory problems, our Memory Clinic staff has periodically identified ADHD in previously undiagnosed adults. We conducted a survey to assess the extent to which other memory clinics view ADHD as a relevant clinical issue. METHOD We developed and sent a questionnaire to Memory Clinics in the United States to ascertain how ADHD was identified and addressed. The percentage of responding memory clinics’ means of assessing and managing late-life ADHD comprised the measurements for this study. RESULTS Approximately one-half of responding memory clinics reported seeing ADHD patients. Of these, one-half reported identifying previously diagnosed cases, and almost one-half reported diagnosing ADHD themselves. One fifth of clinics reported screening regularly for ADHD, and few clinics described treatment methods. CONCLUSION Our results suggest that U.S. memory clinics may not adequately identify and address ADHD in late life. PMID:22173147

  6. Behavioural and magnetoencephalographic evidence for the interaction between semantic and episodic memory in healthy elderly subjects.

    PubMed

    La Corte, Valentina; Dalla Barba, Gianfranco; Lemaréchal, Jean-Didier; Garnero, Line; George, Nathalie

    2012-10-01

    The relationship between episodic and semantic memory systems has long been debated. Some authors argue that episodic memory is contingent on semantic memory (Tulving 1984), while others postulate that both systems are independent since they can be selectively damaged (Squire 1987). The interaction between these memory systems is particularly important in the elderly, since the dissociation of episodic and semantic memory defects characterize different aging-related pathologies. Here, we investigated the interaction between semantic knowledge and episodic memory processes associated with faces in elderly subjects using an experimental paradigm where the semantic encoding of famous and unknown faces was compared to their episodic recognition. Results showed that the level of semantic awareness of items affected the recognition of those items in the episodic memory task. Event-related magnetic fields confirmed this interaction between episodic and semantic memory: ERFs related to the old/new effect during the episodic task were markedly different for famous and unknown faces. The old/new effect for famous faces involved sustained activities maximal over right temporal sensors, showing a spatio-temporal pattern partly similar to that found for famous versus unknown faces during the semantic task. By contrast, an old/new effect for unknown faces was observed on left parieto-occipital sensors. These findings suggest that the episodic memory for famous faces activated the retrieval of stored semantic information, whereas it was based on items' perceptual features for unknown faces. Overall, our results show that semantic information interfered markedly with episodic memory processes and suggested that the neural substrates of these two memory systems overlap.

  7. Contribution of the Cholinergic System to Verbal Memory Performance in Mild Cognitive Impairment.

    PubMed

    Peter, Jessica; Lahr, Jacob; Minkova, Lora; Lauer, Eliza; Grothe, Michel J; Teipel, Stefan; Köstering, Lena; Kaller, Christoph P; Heimbach, Bernhard; Hüll, Michael; Normann, Claus; Nissen, Christoph; Reis, Janine; Klöppel, Stefan

    2016-06-18

    Acetylcholine is critically involved in modulating learning and memory function, which both decline in neurodegeneration. It remains unclear to what extent structural and functional changes in the cholinergic system contribute to episodic memory dysfunction in mild cognitive impairment (MCI), in addition to hippocampal degeneration. A better understanding is critical, given that the cholinergic system is the main target of current symptomatic treatment in mild to moderate Alzheimer's disease. We simultaneously assessed the structural and functional integrity of the cholinergic system in 20 patients with MCI and 20 matched healthy controls and examined their effect on verbal episodic memory via multivariate regression analyses. Mediating effects of either cholinergic function or hippocampal volume on the relationship between cholinergic structure and episodic memory were computed. In MCI, a less intact structure and function of the cholinergic system was found. A smaller cholinergic structure was significantly correlated with a functionally more active cholinergic system in patients, but not in controls. This association was not modulated by age or disease severity, arguing against compensational processes. Further analyses indicated that neither functional nor structural changes in the cholinergic system influence verbal episodic memory at the MCI stage. In fact, those associations were fully mediated by hippocampal volume. Although the cholinergic system is structurally and functionally altered in MCI, episodic memory dysfunction results primarily from hippocampal neurodegeneration, which may explain the inefficiency of cholinergic treatment at this disease stage.

  8. An Efficient Means of Adaptive Refinement Within Systems of Overset Grids

    NASA Technical Reports Server (NTRS)

    Meakin, Robert L.

    1996-01-01

    An efficient means of adaptive refinement within systems of overset grids is presented. Problem domains are segregated into near-body and off-body fields. Near-body fields are discretized via overlapping body-fitted grids that extend only a short distance from body surfaces. Off-body fields are discretized via systems of overlapping uniform Cartesian grids of varying levels of refinement. A novel off-body grid generation and management scheme provides the mechanism for carrying out adaptive refinement of off-body flow dynamics and solid body motion. The scheme allows for very efficient use of memory resources, and flow solvers and domain connectivity routines that can exploit the structure inherent to uniform Cartesian grids.
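
    The off-body portion of this scheme can be illustrated with a toy rule: each uniform Cartesian block picks its refinement level from its distance to the nearest body point, so blocks adjacent to the near-body grids are finest and far-field blocks are coarsest. The thresholds and helper names below are invented for the illustration and are not the paper's algorithm.

      import math

      # Toy off-body refinement rule: closer Cartesian blocks get finer levels.
      # Distance thresholds and the doubling rule are illustrative choices.
      def refinement_level(block_center, body_points, max_level=4):
          d = min(math.dist(block_center, p) for p in body_points)
          for level in range(max_level, 0, -1):
              if d < 2.0 ** (max_level - level):
                  return level
          return 0                                # far field: coarsest level

      def block_spacing(level, base_dx=1.0):
          return base_dx / (2 ** level)           # each level halves the spacing

      body = [(0.0, 0.0, 0.0)]
      for center in [(0.5, 0.0, 0.0), (3.0, 0.0, 0.0), (10.0, 0.0, 0.0)]:
          lvl = refinement_level(center, body)
          print(center, "level", lvl, "dx", block_spacing(lvl))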

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duro, Francisco Rodrigo; Blas, Javier Garcia; Isaila, Florin

    The increasing volume of scientific data and the limited scalability and performance of storage systems are currently presenting a significant limitation for the productivity of the scientific workflows running on both high-performance computing (HPC) and cloud platforms. Clearly needed is better integration of storage systems and workflow engines to address this problem. This paper presents and evaluates a novel solution that leverages codesign principles for integrating Hercules—an in-memory data store—with a workflow management system. We consider four main aspects: workflow representation, task scheduling, task placement, and task termination. As a result, the experimental evaluation on both cloud and HPC systems demonstrates significant performance and scalability improvements over existing state-of-the-art approaches.
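
    Of the four aspects listed, task placement is the easiest to sketch generically: schedule each task on the node whose in-memory store already holds the largest share of its inputs, and leave its outputs in that node's store for downstream tasks. The data structures below are hypothetical and do not represent the Hercules API.

      # Locality-aware task placement sketch (hypothetical, not the Hercules API).
      node_data = {                      # node -> data objects held in its store
          "node0": {"a", "b"},
          "node1": {"c"},
          "node2": set(),
      }

      def place(task_inputs):
          # Prefer the node already holding the most input objects in memory.
          return max(node_data, key=lambda n: len(node_data[n] & set(task_inputs)))

      def run_task(name, inputs, outputs):
          node = place(inputs)
          node_data[node].update(outputs)    # outputs stay resident on that node
          print(name, "->", node)

      run_task("t1", inputs=["a", "b"], outputs=["d"])   # placed on node0
      run_task("t2", inputs=["c", "d"], outputs=["e"])   # node0 and node1 tie; max() keeps node0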

  10. A system-level approach for embedded memory robustness

    NASA Astrophysics Data System (ADS)

    Mariani, Riccardo; Boschi, Gabriele

    2005-11-01

    New ultra-deep submicron technologies bring not only new advantages, such as extraordinary transistor densities and unforeseen performance, but also new uncertainties, such as soft-error susceptibility, modelling complexity, coupling effects, leakage contribution, and increased sensitivity to internal and external disturbances. Nowadays, embedded memories take advantage of these new technologies and are used more and more in systems; therefore, as robustness and reliability requirements increase, memory systems must be protected against different kinds of faults (permanent and transient), and that should be done in an efficient way. This means that reliability and costs, such as overhead and performance degradation, must be efficiently tuned based on the system and on the application. Moreover, emerging norms for safety-critical applications, such as IEC 61508, require precise answers in terms of robustness for memory systems as well. In this paper, classical protection techniques for error detection and correction are enriched with a system-aware approach, in which the memory system is analyzed based on its role in the application. A configurable memory protection system is presented, together with the results of its application to a proof-of-concept architecture. This work has been developed in the framework of the MEDEA+ T126 project called BLUEBERRIES.
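
    A classical building block of the error detection and correction techniques mentioned above is a single-error-correcting Hamming code. The snippet below encodes a 4-bit data nibble into a Hamming(7,4) codeword and corrects one flipped bit; it is a generic textbook example, not the configurable protection system developed in the paper.

      # Textbook Hamming(7,4) single-error correction (generic example, not the
      # paper's configurable scheme). Positions are 1..7, parity bits at 1, 2, 4.
      def encode(d):                       # d = [d1, d2, d3, d4], bits
          d1, d2, d3, d4 = d
          p1 = d1 ^ d2 ^ d4                # checks positions 1, 3, 5, 7
          p2 = d1 ^ d3 ^ d4                # checks positions 2, 3, 6, 7
          p4 = d2 ^ d3 ^ d4                # checks positions 4, 5, 6, 7
          return [p1, p2, d1, p4, d2, d3, d4]

      def correct(c):                      # c = 7-bit codeword, at most 1 bit flipped
          s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
          s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
          s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
          syndrome = s1 + 2 * s2 + 4 * s4  # 0 = clean, else 1-based error position
          if syndrome:
              c[syndrome - 1] ^= 1         # flip the faulty bit back
          return [c[2], c[4], c[5], c[6]]  # recovered data bits

      word = encode([1, 0, 1, 1])
      word[5] ^= 1                         # inject a single-bit soft error
      assert correct(word) == [1, 0, 1, 1]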

  11. A behavioral rehabilitation intervention for amnestic Mild Cognitive Impairment

    PubMed Central

    Greenaway, Melanie C.; Hanna, Sherrie M.; Lepore, Susan W.; Smith, Glenn E.

    2010-01-01

    Individuals with amnestic Mild Cognitive Impairment (MCI) currently have few treatment options for combating their memory loss. The Memory Support System (MSS) is a calendar and organization system with accompanying 6-week curriculum designed for individuals with progressive memory impairment. Ability to learn the MSS and its utility were assessed in 20 participants. Participants were significantly more likely to successfully use the calendar system after training. Ninety-five percent were compliant with the MSS at training completion, and 89% continued to be compliant at follow-up. Outcome measures revealed a medium effect size for improvement in functional ability. Subjects further reported improved independence, self-confidence, and mood. This initial examination of the MSS suggests that with appropriate training, individuals with amnestic MCI can and will use a memory notebook system to help compensate for memory loss. These results are encouraging that the MSS may help with the symptoms of memory decline in MCI. PMID:18955724

  12. The cost of misremembering: Inferring the loss function in visual working memory.

    PubMed

    Sims, Chris R

    2015-03-04

    Visual working memory (VWM) is a highly limited storage system. A basic consequence of this fact is that visual memories cannot perfectly encode or represent the veridical structure of the world. However, in natural tasks, some memory errors might be more costly than others. This raises the intriguing possibility that the nature of memory error reflects the costs of committing different kinds of errors. Many existing theories assume that visual memories are noise-corrupted versions of afferent perceptual signals. However, this additive noise assumption oversimplifies the problem. Implicit in the behavioral phenomena of visual working memory is the concept of a loss function: a mathematical entity that describes the relative cost to the organism of making different types of memory errors. An optimally efficient memory system is one that minimizes the expected loss according to a particular loss function, while subject to a constraint on memory capacity. This paper describes a novel theoretical framework for characterizing visual working memory in terms of its implicit loss function. Using inverse decision theory, the empirical loss function is estimated from the results of a standard delayed recall visual memory experiment. These results are compared to the predicted behavior of a visual working memory system that is optimally efficient for a previously identified natural task, gaze correction following saccadic error. Finally, the approach is compared to alternative models of visual working memory, and shown to offer a superior account of the empirical data across a range of experimental datasets. © 2015 ARVO.
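
    The central notion of a loss function over memory errors can be made concrete with a short numerical sketch: the same distribution of recall errors is cheap under one loss function and expensive under another, and inverse decision theory asks which loss function makes the observed behaviour look optimal. The two loss functions and the noise level below are illustrative choices, not the ones estimated in the paper.

      import random

      random.seed(0)
      errors = [random.gauss(0.0, 15.0) for _ in range(10_000)]   # simulated recall errors

      # Two illustrative loss functions over a recall error e (not the fitted ones).
      quadratic = lambda e: e ** 2
      threshold = lambda e: 0.0 if abs(e) < 20.0 else 1.0   # "close enough" loss

      def expected_loss(loss, errs):
          return sum(loss(e) for e in errs) / len(errs)

      print("quadratic loss:", expected_loss(quadratic, errors))
      print("threshold loss:", expected_loss(threshold, errors))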

  13. Physical Memory Management in a Network Operating System

    DTIC Science & Technology

    1988-11-22


  14. Integration Toolkit and Methods (ITKM) Corporate Data Integration Tools (CDIT). Review of the State-of-the-Art with Respect to Integration Toolkits and Methods (ITKM)

    DTIC Science & Technology

    1992-06-01

    system capabilities such as memory management and network communications are provided by a virtual machine-type operating environment. Various human ... thinking. The elements of this substrate include representational formality, genericity, a method of formal analysis, and augmentation of human analytical ... the form of identifying: the data entity itself; its aliases (including how the data is presented to programs or human users in the form of copy ...

  15. 75 FR 55846 - Draft Re-Evaluation for Environmental Impact Statement: Sikorsky Memorial Airport, Stratford, CT

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-14

    ..., Environmental Program Manager, Federal Aviation Administration New England, 12 New England Executive Park... Memorial Airport in Stratford, Connecticut. The document will assist the FAA in determining the suitability... following locations: FAA New England Region, 16 New England Executive Park, Burlington, MA, 781-238-7613...

  16. The CB1 receptor antagonist AM251 impairs reconsolidation of pavlovian fear memory in the rat basolateral amygdala.

    PubMed

    Ratano, Patrizia; Everitt, Barry J; Milton, Amy L

    2014-10-01

    We have investigated the requirement for signaling at CB1 receptors in the reconsolidation of a previously consolidated auditory fear memory, by infusing the CB1 receptor antagonist AM251, or the FAAH inhibitor URB597, directly into the basolateral amygdala (BLA) in conjunction with memory reactivation. AM251 disrupted memory restabilization, but only when administered after reactivation. URB597 produced a small, transient enhancement of memory restabilization when administered after reactivation. The amnestic effect of AM251 was rescued by coadministration of the GABAA receptor antagonist bicuculline at reactivation, indicating that the disruption of reconsolidation was mediated by altered GABAergic transmission in the BLA. These data show that the endocannabinoid system in the BLA is an important modulator of fear memory reconsolidation and that its effects on memory are mediated by an interaction with the GABAergic system. Thus, targeting the endocannabinoid system may have therapeutic potential to reduce the impact of maladaptive memories in neuropsychiatric disorders such as posttraumatic stress disorder.

  17. Automatic multi-banking of memory for microprocessors

    NASA Technical Reports Server (NTRS)

    Wiker, G. A. (Inventor)

    1984-01-01

    A microprocessor system is provided with added memories to expand its address spaces beyond its address word length capacity by using indirect addressing instructions of a type having a detectable operations code and dedicating designated address spaces of memory to each of the added memories, one space to a memory. By decoding each operations code of instructions read from main memory into a decoder to identify indirect addressing instructions of the specified type, and then decoding the address that follows in a decoder to determine which added memory is associated therewith, the associated added memory is selectively enabled through a unit while the main memory is disabled to permit the instruction to be executed on the location to which the effective address of the indirect address instruction points, either before the indirect address is read from main memory or afterwards, depending on how the system is arranged by a switch.
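
    The decode-and-enable behaviour in this abstract can be modelled in a few lines: detect an indirect-addressing operations code, compare the effective address against the dedicated address windows, and enable exactly one added memory while main memory is disabled. The opcode values and window boundaries are invented for the sketch.

      # Toy model of bank selection by decoding indirect addresses.
      # Opcode values and address windows are invented for illustration.
      INDIRECT_OPCODES = {0x6C, 0x7C}          # hypothetical indirect-addressing opcodes
      BANK_WINDOWS = {                         # dedicated address space -> added memory
          range(0x8000, 0x9000): "bank0",
          range(0x9000, 0xA000): "bank1",
      }

      def select_memory(opcode, effective_addr):
          """Return which memory is enabled for this access; all others are disabled."""
          if opcode in INDIRECT_OPCODES:
              for window, bank in BANK_WINDOWS.items():
                  if effective_addr in window:
                      return bank              # enable the added memory, disable main
          return "main"                        # everything else stays in main memory

      assert select_memory(0x6C, 0x8123) == "bank0"
      assert select_memory(0x7C, 0x9F00) == "bank1"
      assert select_memory(0xA9, 0x8123) == "main"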

  18. Extended write combining using a write continuation hint flag

    DOEpatents

    Chen, Dong; Gara, Alan; Heidelberger, Philip; Ohmacht, Martin; Vranas, Pavlos

    2013-06-04

    A computing apparatus for reducing the amount of processing in a network computing system which includes a network system device of a receiving node for receiving electronic messages comprising data. The electronic messages are transmitted from a sending node. The network system device determines when more data of a specific electronic message is being transmitted. A memory device stores the electronic message data and communicating with the network system device. A memory subsystem communicates with the memory device. The memory subsystem stores a portion of the electronic message when more data of the specific message will be received, and the buffer combines the portion with later received data and moves the data to the memory device for accessible storage.
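
    The claimed behaviour, holding a partial message aside while a continuation hint indicates that more data will arrive and committing the combined payload to memory only when the message is complete, can be sketched as follows. The flag name and data structures are assumptions made for the example, not the patent's implementation.

      # Sketch of write combining driven by a "more data follows" hint flag
      # (names and structures are illustrative, not from the patent).
      pending = {}     # message id -> chunks held back by the memory subsystem
      memory = {}      # message id -> combined payload committed to main memory

      def receive(msg_id, chunk, more_follows):
          pending.setdefault(msg_id, []).append(chunk)
          if more_follows:
              return                       # keep combining in the buffer
          memory[msg_id] = b"".join(pending.pop(msg_id))   # one combined write

      receive(7, b"hello ", more_follows=True)
      receive(7, b"world", more_follows=False)
      assert memory[7] == b"hello world"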

  19. Reducing power consumption during execution of an application on a plurality of compute nodes

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.; Peters, Amanda E.; Ratterman, Joseph D.; Smith, Brian E.

    2013-09-10

    Methods, apparatus, and products are disclosed for reducing power consumption during execution of an application on a plurality of compute nodes that include: powering up, during compute node initialization, only a portion of computer memory of the compute node, including configuring an operating system for the compute node in the powered up portion of computer memory; receiving, by the operating system, an instruction to load an application for execution; allocating, by the operating system, additional portions of computer memory to the application for use during execution; powering up the additional portions of computer memory allocated for use by the application during execution; and loading, by the operating system, the application into the powered up additional portions of computer memory.
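
    The sequence of steps in this abstract maps onto a small state machine: memory banks start powered down, the operating system occupies one powered-up bank, and additional banks are powered up only when an application's allocation needs them. The bank size and counts below are illustrative, not from the patent.

      # Illustrative model of powering up memory banks on demand (sizes invented).
      BANK_SIZE = 256 * 1024 * 1024
      banks = [{"powered": False, "used": 0} for _ in range(16)]

      def power_up(bank):
          bank["powered"] = True               # stand-in for the hardware control

      def allocate(nbytes):
          """Allocate nbytes to the application, powering extra banks only as needed."""
          remaining = nbytes
          for bank in banks:
              if remaining <= 0:
                  break
              if not bank["powered"]:
                  power_up(bank)
              take = min(BANK_SIZE - bank["used"], remaining)
              bank["used"] += take
              remaining -= take

      power_up(banks[0])                       # initialization: only the OS bank is powered
      banks[0]["used"] = 64 * 1024 * 1024      # operating system image
      allocate(600 * 1024 * 1024)              # loading the application powers more banks
      print(sum(b["powered"] for b in banks), "of", len(banks), "banks powered")   # 3 of 16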

  20. A large-scale cryoelectronic system for biological sample banking

    NASA Astrophysics Data System (ADS)

    Shirley, Stephen G.; Durst, Christopher H. P.; Fuchs, Christian C.; Zimmermann, Heiko; Ihmig, Frank R.

    2009-11-01

    We describe a polymorphic electronic infrastructure for managing biological samples stored over liquid nitrogen. As part of this system we have developed new cryocontainers and carrier plates attached to Flash memory chips to have a redundant and portable set of data at each sample. Our experimental investigations show that basic Flash operation and endurance is adequate for the application down to liquid nitrogen temperatures. This identification technology can provide the best sample identification, documentation and tracking that brings added value to each sample. The first application of the system is in a worldwide collaborative research towards the production of an AIDS vaccine. The functionality and versatility of the system can lead to an essential optimization of sample and data exchange for global clinical studies.

  1. Test results management and distributed cognition in electronic health record-enabled primary care.

    PubMed

    Smith, Michael W; Hughes, Ashley M; Brown, Charnetta; Russo And, Elise; Giardina, Traber D; Mehta, Praveen; Singh, Hardeep

    2018-06-01

    Managing abnormal test results in primary care involves coordination across various settings. This study identifies how primary care teams manage test results in a large, computerized healthcare system in order to inform health information technology requirements for test results management and other distributed healthcare services. At five US Veterans Health Administration facilities, we interviewed 37 primary care team members, including 16 primary care providers, 12 registered nurses, and 9 licensed practical nurses. We performed content analysis using a distributed cognition approach, identifying patterns of information transmission across people and artifacts (e.g. electronic health records). Results illustrate challenges (e.g. information overload) as well as strategies used to overcome challenges. Various communication paths were used. Some team members served as intermediaries, processing information before relaying it. Artifacts were used as memory aids. Health information technology should address the risks of distributed work by supporting awareness of team and task status for reliable management of results.

  2. Noradrenergic activation of the basolateral amygdala maintains hippocampus-dependent accuracy of remote memory.

    PubMed

    Atucha, Erika; Vukojevic, Vanja; Fornari, Raquel V; Ronzoni, Giacomo; Demougin, Philippe; Peter, Fabian; Atsak, Piray; Coolen, Marcel W; Papassotiropoulos, Andreas; McGaugh, James L; de Quervain, Dominique J-F; Roozendaal, Benno

    2017-08-22

    Emotional enhancement of memory by noradrenergic mechanisms is well-described, but the long-term consequences of such enhancement are poorly understood. Over time, memory traces are thought to undergo a neural reorganization, that is, a systems consolidation, during which they are, at least partly, transferred from the hippocampus to neocortical networks. This transfer is accompanied by a decrease in episodic detailedness. Here we investigated whether norepinephrine (NE) administration into the basolateral amygdala after training on an inhibitory avoidance discrimination task, comprising two distinct training contexts, alters systems consolidation dynamics to maintain episodic-like accuracy and hippocampus dependency of remote memory. At a 2-d retention test, both saline- and NE-treated rats accurately discriminated the training context in which they had received footshock. Hippocampal inactivation with muscimol before retention testing disrupted discrimination of the shock context in both treatment groups. At 28 d, saline-treated rats showed hippocampus-independent retrieval and lack of discrimination. In contrast, NE-treated rats continued to display accurate memory of the shock-context association. Hippocampal inactivation at this remote retention test blocked episodic-like accuracy and induced a general memory impairment. These findings suggest that the NE treatment altered systems consolidation dynamics by maintaining hippocampal involvement in the memory. This shift in systems consolidation was paralleled by time-regulated DNA methylation and transcriptional changes of memory-related genes, namely Reln and Pkm ζ, in the hippocampus and neocortex. The findings provide evidence suggesting that consolidation of emotional memories by noradrenergic mechanisms alters systems consolidation dynamics and, as a consequence, influences the maintenance of long-term episodic-like accuracy of memory.

  3. Ground Data System Analysis Tools to Track Flight System State Parameters for the Mars Science Laboratory (MSL) and Beyond

    NASA Technical Reports Server (NTRS)

    Allard, Dan; Deforrest, Lloyd

    2014-01-01

    Flight software parameters give space mission operators fine-tuned control over flight system configurations, enabling rapid and dynamic changes to ongoing science activities in a much more flexible manner than can be accomplished with (otherwise broadly used) configuration file based approaches. The Mars Science Laboratory (MSL), Curiosity, makes extensive use of parameters to support complex, daily activities via commanded changes to said parameters in memory. However, as the loss of Mars Global Surveyor (MGS) in 2006 demonstrated, flight system management by parameters brings with it risks, including the possibility of losing track of the flight system configuration and the threat of invalid command executions. To mitigate this risk, a growing number of missions, including MSL and the Soil Moisture Active Passive (SMAP) mission, have funded efforts to implement parameter state tracking software tools and services. This paper will discuss the engineering challenges and resulting software architecture of MSL's onboard parameter state tracking software and discuss the road forward to make parameter management tools suitable for use on multiple missions.
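
    A ground-side parameter state tracker of the kind described reduces to a reconciliation loop: record the expected value of every onboard parameter as commands are sent, then compare those expectations against values reported in telemetry. The sketch below is a generic illustration; the parameter names and functions are hypothetical, not MSL's actual tool or flight parameters.

      # Generic ground-side parameter state tracking sketch (not the MSL tool).
      expected = {}     # parameter name -> value the ground believes is onboard

      def command_set(name, value):
          """Record a commanded parameter change as the new expected state."""
          expected[name] = value

      def check_telemetry(reported):
          """Return parameters whose reported onboard value differs from expectations."""
          return {n: (expected.get(n), v)
                  for n, v in reported.items()
                  if expected.get(n) != v}

      command_set("DRIVE_SPEED_LIMIT", 40)          # hypothetical parameter names
      command_set("IMG_COMPRESSION", "lossless")
      drift = check_telemetry({"DRIVE_SPEED_LIMIT": 40, "IMG_COMPRESSION": "lossy"})
      print(drift)   # {'IMG_COMPRESSION': ('lossless', 'lossy')} -> configuration drift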

  4. An Agent-Based Model for the Role of Short-Term Memory Enhancement in the Emergence of Grammatical Agreement.

    PubMed

    Vera, Javier

    2018-01-01

    What is the influence of short-term memory enhancement on the emergence of grammatical agreement systems in multi-agent language games? Agreement systems suppose that at least two words share some features with each other, such as gender, number, or case. Previous work, within the multi-agent language-game framework, has recently proposed models stressing the hypothesis that the emergence of a grammatical agreement system arises from the minimization of semantic ambiguity. On the other hand, neurobiological evidence argues for the hypothesis that language evolution has mainly been related to an increase in short-term memory capacity, which has allowed the online manipulation of words and meanings, particularly those participating in grammatical agreement systems. Here, the main aim is to propose a multi-agent language game for the emergence of a grammatical agreement system, under measurable long-range relations depending on the short-term memory capacity. Computer simulations, based on a parameter that measures the amount of short-term memory capacity, suggest that agreement marker systems arise in a population of agents equipped with at least a critical short-term memory capacity.

  5. A cognitive neuroscience account of posttraumatic stress disorder and its treatment.

    PubMed

    Brewin, C R

    2001-04-01

    Recent research in the areas of animal conditioning, the neural systems underlying emotion and memory, and the effect of fear on these systems is reviewed. This evidence points to an important distinction between hippocampally-dependent and non-hippocampally-dependent forms of memory that are differentially affected by extreme stress. The cognitive science perspective is related to a recent model of posttraumatic stress disorder, dual representation theory, that also posits separate memory systems underlying vivid reexperiencing versus ordinary autobiographical memories of trauma. This view is compared with other accounts in the literature of traumatic memory processes in PTSD, and the contrasting implications for therapy are discussed.

  6. Non-volatile memory for checkpoint storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blumrich, Matthias A.; Chen, Dong; Cipolla, Thomas M.

    A system, method and computer program product for supporting system initiated checkpoints in high performance parallel computing systems and storing of checkpoint data to a non-volatile memory storage device. The system and method generates selective control signals to perform checkpointing of system related data in presence of messaging activity associated with a user application running at the node. The checkpointing is initiated by the system such that checkpoint data of a plurality of network nodes may be obtained even in the presence of user applications running on highly parallel computers that include ongoing user messaging activity. In one embodiment, the non-volatile memory is a pluggable flash memory card.
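
    At its simplest, the system-initiated checkpoint path described here amounts to capturing the node's state together with any in-flight messaging data, then writing the image durably to the non-volatile device so it survives a restart. The sketch below shows only that flow; the function names and the file-based stand-in for the flash card are placeholders, not the patented design.

      import os, pickle, tempfile

      # Minimal system-initiated checkpoint sketch (placeholder names, not the patent).
      def checkpoint(node_state, pending_messages, nvm_path):
          image = {"state": node_state, "messages": list(pending_messages)}
          with open(nvm_path, "wb") as nvm:     # file stands in for the flash card
              pickle.dump(image, nvm)
              nvm.flush()
              os.fsync(nvm.fileno())            # make the checkpoint durable

      def restore(nvm_path):
          with open(nvm_path, "rb") as nvm:
              return pickle.load(nvm)

      path = os.path.join(tempfile.gettempdir(), "node0.ckpt")
      checkpoint({"step": 1042}, [b"msg-a", b"msg-b"], path)
      print(restore(path)["state"])             # {'step': 1042}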

  7. Memory Overview - Technologies and Needs

    NASA Technical Reports Server (NTRS)

    LaBel, Kenneth A.

    2010-01-01

    As NASA has evolved its usage of spaceflight computing, memory applications have followed as well. In this talk, we will discuss the history of NASA's memories, from magnetic core and tape recorders to current semiconductor approaches. We will briefly describe current functional memory usage in NASA space systems, followed by a description of potential radiation-induced failure modes along with considerations for reliable system design.

  8. Method and device for maximizing memory system bandwidth by accessing data in a dynamically determined order

    NASA Technical Reports Server (NTRS)

    Schwab, Andrew J. (Inventor); Aylor, James (Inventor); Hitchcock, Charles Young (Inventor); Wulf, William A. (Inventor); McKee, Sally A. (Inventor); Moyer, Stephen A. (Inventor); Klenke, Robert (Inventor)

    2000-01-01

    A data processing system is disclosed which comprises a data processor and memory control device for controlling the access of information from the memory. The memory control device includes temporary storage and decision ability for determining what order to execute the memory accesses. The compiler detects the requirements of the data processor and selects the data to stream to the memory control device which determines a memory access order. The order in which to access said information is selected based on the location of information stored in the memory. The information is repeatedly accessed from memory and stored in the temporary storage until all streamed information is accessed. The information is stored until required by the data processor. The selection of the order in which to access information maximizes bandwidth and decreases the retrieval time.
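
    The reordering idea, buffering streamed requests and issuing them in an order chosen from where the data lives rather than the order in which the processor asked for it, can be illustrated with a toy DRAM model in which accesses to an already open row are cheap and switching rows is expensive. The row size and cycle costs are invented for the example.

      # Toy reordering of streamed accesses by memory row to raise effective
      # bandwidth (row size and cycle costs are invented for illustration).
      ROW_BYTES = 2048

      def access_cost(addresses):
          """Total cycles, charging a row activate each time the open row changes."""
          cycles, open_row = 0, None
          for a in addresses:
              row = a // ROW_BYTES
              cycles += 4 if row == open_row else 40   # row hit vs. row activate
              open_row = row
          return cycles

      def reorder(addresses):
          """Issue buffered requests grouped by row, then by address within the row."""
          return sorted(addresses, key=lambda a: (a // ROW_BYTES, a))

      stream = [0, 4096, 8, 4104, 16, 4112]             # row-alternating request order
      print("as issued:", access_cost(stream))          # 240 cycles
      print("reordered:", access_cost(reorder(stream))) # 96 cycles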

  9. Multiple transient memories in sheared suspensions: Robustness, structure, and routes to plasticity

    NASA Astrophysics Data System (ADS)

    Keim, Nathan C.; Paulsen, Joseph D.; Nagel, Sidney R.

    2013-09-01

    Multiple transient memories, originally discovered in charge-density-wave conductors, are a remarkable and initially counterintuitive example of how a system can store information about its driving. In this class of memories, a system can learn multiple driving inputs, nearly all of which are eventually forgotten despite their continual input. If sufficient noise is present, the system regains plasticity so that it can continue to learn new memories indefinitely. Recently, Keim and Nagel [Phys. Rev. Lett. 107, 010603 (2011)] showed how multiple transient memories could be generalized to a generic driven disordered system with noise, giving as an example simulations of a simple model of a sheared non-Brownian suspension. Here, we further explore simulation models of suspensions under cyclic shear, focusing on three main themes: robustness, structure, and overdriving. We show that multiple transient memories are a robust feature independent of many details of the model. The steady-state spatial distribution of the particles is sensitive to the driving algorithm; nonetheless, the memory formation is independent of such a change in particle correlations. Finally, we demonstrate that overdriving provides another means for controlling memory formation and retention.

  10. Inhibition of protein synthesis but not β-adrenergic receptors blocks reconsolidation of a cocaine-associated cue memory

    PubMed Central

    Dunbar, Amber B.

    2016-01-01

    Previously consolidated memories have the potential to enter a state of lability upon memory recall, during which time the memory can be altered before undergoing an additional consolidation-like process and being stored again as a long-term memory. Blocking reconsolidation of aberrant memories has been proposed as a potential treatment for psychiatric disorders including addiction. Here we investigated the effect of systemically administering the protein synthesis inhibitor cycloheximide or the β-adrenergic antagonist propranolol on reconsolidation. Rats were trained to self-administer cocaine, during which each lever press resulted in the presentation of a cue paired with an intravenous infusion of cocaine. After undergoing lever press extinction to reduce operant responding, the cue memory was reactivated and rats were administered systemic injections of propranolol, cycloheximide, or vehicle. Post-reactivation cycloheximide, but not propranolol, resulted in a reactivation-dependent decrease in cue-induced reinstatement, indicative of reconsolidation blockade by protein synthesis inhibition. The present data indicate that systemically targeting protein synthesis as opposed to the β-adrenergic system may more effectively attenuate the reconsolidation of a drug-related memory and decrease drug-seeking behavior. PMID:27421890

  11. Inhibition of protein synthesis but not β-adrenergic receptors blocks reconsolidation of a cocaine-associated cue memory.

    PubMed

    Dunbar, Amber B; Taylor, Jane R

    2016-08-01

    Previously consolidated memories have the potential to enter a state of lability upon memory recall, during which time the memory can be altered before undergoing an additional consolidation-like process and being stored again as a long-term memory. Blocking reconsolidation of aberrant memories has been proposed as a potential treatment for psychiatric disorders including addiction. Here we investigated the effect of systemically administering the protein synthesis inhibitor cycloheximide or the β-adrenergic antagonist propranolol on reconsolidation. Rats were trained to self-administer cocaine, during which each lever press resulted in the presentation of a cue paired with an intravenous infusion of cocaine. After undergoing lever press extinction to reduce operant responding, the cue memory was reactivated and rats were administered systemic injections of propranolol, cycloheximide, or vehicle. Post-reactivation cycloheximide, but not propranolol, resulted in a reactivation-dependent decrease in cue-induced reinstatement, indicative of reconsolidation blockade by protein synthesis inhibition. The present data indicate that systemically targeting protein synthesis as opposed to the β-adrenergic system may more effectively attenuate the reconsolidation of a drug-related memory and decrease drug-seeking behavior. © 2016 Dunbar and Taylor; Published by Cold Spring Harbor Laboratory Press.

  12. Interaction between basal ganglia and limbic circuits in learning and memory processes.

    PubMed

    Calabresi, Paolo; Picconi, Barbara; Tozzi, Alessandro; Ghiglieri, Veronica

    2016-01-01

    Hippocampus and striatum play distinctive roles in memory processes since declarative and non-declarative memory systems may act independently. However, hippocampus and striatum can also be engaged to function in parallel as part of a dynamic system to integrate previous experience and adjust behavioral responses. In these structures the formation, storage, and retrieval of memory require a synaptic mechanism that is able to integrate multiple signals and to translate them into persistent molecular traces at both the corticostriatal and hippocampal/limbic synapses. The best cellular candidate for this complex synthesis is represented by long-term potentiation (LTP). A common feature of LTP expressed in these two memory systems is the critical requirement of convergence and coincidence of glutamatergic and dopaminergic inputs to the dendritic spines of the neurons expressing this form of synaptic plasticity. In experimental models of Parkinson's disease abnormal accumulation of α-synuclein affects these two memory systems by altering two major synaptic mechanisms underlying cognitive functions in cholinergic striatal neurons, likely implicated in basal ganglia dependent operative memory, and in the CA1 hippocampal region, playing a central function in episodic/declarative memory processes. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. Nanoscale Mechanical Stimulation Method for Quantifying C. elegans Mechanosensory Behavior and Memory.

    PubMed

    Sugi, Takuma; Okumura, Etsuko; Kiso, Kaori; Igarashi, Ryuji

    2016-01-01

    The withdrawal escape response of C. elegans to nonlocalized vibration is a useful behavioral paradigm for examining the mechanisms underlying mechanosensory behavior and its memory-dependent change. However, few methods exist for investigating the vibration frequency, amplitude, and duration needed to induce the behavior and its memory. Here, we establish a new system for quantifying C. elegans mechanosensory behavior and memory using a piezoelectric sheet speaker. With this system, vibration properties can be flexibly changed at a nanoscale displacement level and behavioral responses quantified under each vibration property. The setup is economical and easily replicated in other laboratories. Using the system, we clearly detected withdrawal escape responses and confirmed habituation memory. This system will facilitate the understanding of physiological aspects of C. elegans mechanosensory behavior in the future.
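
    The record above describes a stimulation system whose key feature is flexible control of vibration frequency, amplitude, and duration, plus quantification of withdrawal responses. The following is a minimal illustrative sketch, not code from Sugi et al.: it assumes a hypothetical rig that accepts a sine drive signal for the piezoelectric sheet speaker, and it summarizes responses with a simple habituation index. The amplitude units and reversal counts are placeholders.

    ```python
    import numpy as np

    def vibration_waveform(freq_hz, amplitude_um, duration_s, sample_rate=44100):
        """Sine drive signal for the piezo sheet; displacement units are whatever the rig assumes."""
        t = np.linspace(0.0, duration_s, int(sample_rate * duration_s), endpoint=False)
        return amplitude_um * np.sin(2.0 * np.pi * freq_hz * t)

    def habituation_index(reversal_counts):
        """Responses on the last trial relative to the first (lower = stronger habituation)."""
        counts = np.asarray(reversal_counts, dtype=float)
        return counts[-1] / counts[0] if counts[0] else float("nan")

    # Sweep a few assumed frequencies at fixed amplitude and duration.
    for freq in (200, 500, 1000):  # Hz, hypothetical test values
        wave = vibration_waveform(freq, amplitude_um=1.0, duration_s=1.0)
        print(f"{freq} Hz stimulus: {wave.size} samples, peak {wave.max():.2f}")

    # Hypothetical reversal counts across five repeated stimuli.
    print("habituation index:", habituation_index([20, 17, 12, 9, 6]))
    ```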

  14. Memory: Organization and Control

    PubMed Central

    Eichenbaum, Howard

    2017-01-01

    A major goal of memory research is to understand how cognitive processes in memory are supported at the level of brain systems and network representations. Especially promising in this direction are new findings in humans and animals that converge in indicating a key role for the hippocampus in the systematic organization of memories. New findings also indicate that the prefrontal cortex may play an equally important role in the active control of memory organization during both encoding and retrieval. Observations about the dialog between the hippocampus and prefrontal cortex provide new insights into the operation of the larger brain system that serves memory. PMID:27687117

  15. Interference from mere thinking: mental rehearsal temporarily disrupts recall of motor memory.

    PubMed

    Yin, Cong; Wei, Kunlin

    2014-08-01

    Interference between successively learned tasks is widely investigated to study motor memory. However, how simultaneously learned motor memories interact with each other has rarely been studied, despite its prevalence in daily life. Assuming that motor memory shares common neural mechanisms with the declarative memory system, we made the unintuitive prediction that mental rehearsal, as opposed to further practice, of one motor memory would temporarily impair the recall of another simultaneously learned memory. Subjects simultaneously learned two sensorimotor tasks, i.e., visuomotor rotation and gain. They retrieved one memory by either practice or mental rehearsal and then had their memory evaluated. We found that mental rehearsal, instead of execution, impaired the recall of the unretrieved memory. This impairment was content-independent, i.e., retrieving either gain or rotation impaired the other memory. Hence, conscious recollection of one motor memory interferes with the recall of another memory. This is analogous to retrieval-induced forgetting in declarative memory, suggesting a common neural process across memory systems. Our findings indicate that motor imagery is sufficient to induce interference between motor memories. Mental rehearsal, currently widely regarded as beneficial for motor performance, negatively affects memory recall when it is exercised for a subset of memorized items. Copyright © 2014 the American Physiological Society.

  16. A Challenge for Radioactive Waste Management: Memory Preservation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Charton, P.; Ouzounian, G.

    2008-07-01

    ANDRA, the French National Radioactive Waste Management Agency, is responsible for managing all radioactive waste in France over the long term. In the case of short-lived waste, for which disposal facilities have a life expectancy of a few centuries, the Agency has set up a system for preserving the memory of those sites. Based on historical analysis over a comparable timescale and on an appraisal of information-conservation means, a series of regulatory as well as technical provisions was made in order to ensure that sound information is transmitted to future generations. Requirements associated with these provisions deal mostly with legibility and a clear understanding of the information, which must be decipherable and understood at least during the lifetime of the facilities (i.e., a few centuries); it must therefore be preserved throughout the same period. Responses to the requirements will be presented, notably concerning the various information-recording media, together with the strategy for disseminating information to the different authorities and structures within French society. A concrete illustration of the achievements made so far is the Centre de la Manche Disposal Facility, which was closed down in 1994 and has been in its post-closure monitoring phase since 2003. In the case of deep geological repositories for long-lived radioactive waste, preserving memory takes on a different aspect. First of all, timescales are much longer and are counted in hundreds of thousands of years. It is therefore much more difficult to consider how to maintain the richness of the information over such time periods than it is for short-lived waste. Both the nature and the form of the information to be transmitted must be revised; it would indeed be risky to base long-term memory preservation on similar mechanisms beyond 1,000 years. Drawing on the heritage of a much more ancient history, we must seek appropriate means to develop surface markers and, even more, to ensure their conservation over timescales compatible with those of deep geological repositories. It will also be necessary, in light of the experiments and efforts made to decipher messages written in rock paintings or in pyramids, to find suitable means of expression that will help not only the next few generations but much more distant future generations to grasp the meaning of what we aim to transmit to them. This paper presents the state of French reflection on memory preservation and transmission over the very long term, for timescales consistent with long-lived radioactive waste geological disposal projects. (author)

  17. Computer memory power control for the Galileo spacecraft

    NASA Technical Reports Server (NTRS)

    Detwiler, R. C.

    1983-01-01

    The developmental history, major design drivers, and final topology of the computer memory power system on the Galileo spacecraft are described. A unique method of generating memory backup power directly from the fault current drawn during a spacecraft power overload or fault condition allows this system to provide continuous memory power. This approach solves the problem of volatile memory loss without the battery or other large energy storage elements usually associated with uninterruptible power supply designs.

  18. Intact haptic priming in normal aging and Alzheimer's disease: evidence for dissociable memory systems.

    PubMed

    Ballesteros, Soledad; Reales, José Manuel

    2004-01-01

    This study is the first to report complete priming in Alzheimer's disease (AD) patients and older control subjects for objects presented haptically. To investigate possible dissociations between implicit and explicit object representations, young adults, AD patients, and older controls performed a speeded object naming task followed by a recognition task. Similar haptic priming was exhibited by all three groups, although young adults responded faster than the two older groups, and there was no difference in performance between the two healthy groups. On explicit recognition, younger and older healthy adults likewise did not differ while, as expected, AD patients were highly impaired. This double dissociation suggests that different memory systems mediate the two types of memory task. The preservation of intact haptic priming in AD provides strong support for the idea that implicit object memory is mediated by a memory system different from the medial-temporal/diencephalic system underlying explicit memory, which is impaired early in AD. Recent imaging and behavioral studies suggest that the implicit memory system may depend on extrastriate areas of the occipital cortex, although somatosensory cortical mechanisms may also be involved.

  19. A Cognitive Task Analysis of Information Management Strategies in a Computerized Provider Order Entry Environment

    PubMed Central

    Weir, Charlene R.; Nebeker, Jonathan J.R.; Hicken, Bret L.; Campo, Rebecca; Drews, Frank; LeBar, Beth

    2007-01-01

    Objective: Computerized Provider Order Entry (CPOE) with electronic documentation and computerized decision support dramatically changes the information environment of the practicing clinician. Prior work patterns based on paper, verbal exchange, and manual methods are replaced with automated, computerized, and potentially less flexible systems. The objective of this study is to explore, using cognitive task analysis techniques, the information management strategies that clinicians use in the process of adapting to a CPOE system. Design: Observation and semi-structured interviews were conducted with 88 primary-care clinicians at 10 Veterans Administration Medical Centers. Measurements: Interviews were taped, transcribed, and extensively analyzed to identify key information management goals, strategies, and tasks. Tasks were aggregated into groups, common components across tasks were clarified, and underlying goals and strategies identified. Results: Nearly half of the identified tasks were not fully supported by the available technology. Six core components of tasks were identified. Four meta-cognitive information management goals emerged: 1) Relevance Screening; 2) Ensuring Accuracy; 3) Minimizing Memory Load; and 4) Negotiating Responsibility. Strategies used to support these goals are presented. Conclusion: Users develop a wide array of information management strategies that allow them to successfully adapt to new technology. Supporting users' ability to develop adaptive strategies for these meta-cognitive goals is a key component of a successful system. PMID:17068345

  20. Using value-based analysis to influence outcomes in complex surgical systems.

    PubMed

    Kirkpatrick, John R; Marks, Stanley; Slane, Michele; Kim, Donald; Cohen, Lance; Cortelli, Michael; Plate, Juan; Perryman, Richard; Zapas, John

    2015-04-01

    Value-based analysis (VBA) is a management strategy used to determine the change in value (quality/cost) when a usual practice (UP) is replaced by a best practice (BP). Previously validated in clinical initiatives, its usefulness in complex systems is unknown. To answer this question, we used VBA to correct deficiencies in cardiac surgery at Memorial Healthcare System. Cardiac surgery is a complex surgical system that lends itself to VBA because outcomes metrics provided by the Society of Thoracic Surgeons give an estimate of quality; cost is available from the Centers for Medicare and Medicaid Services and other contemporary sources; the UP can be determined; and the BP can be established. Analysis of the UP at Memorial Healthcare System revealed considerable deficiencies in selection of patients for surgery; the surgery itself, including choice of procedure and outcomes; aftercare; follow-up; and control of expenditures. To correct these deficiencies, each UP was replaced with a BP. Changes included replacement of most of the cardiac surgeons; conversion to an employed-physician model; restructuring of a heart surgery unit; recruitment of cardiac anesthesiologists; introduction of an interactive educational program; elimination of unsafe practices; and cost reduction. There was a significant (p < 0.01) reduction in readmissions, complications, and mortality between 2009 and 2013. Memorial Healthcare System was 1 of only 17 (1.7%) of 1,009 database participants to achieve a Society of Thoracic Surgeons 3-star rating in all 3 measured categories. Despite substantial improvements in quality, the cost per case and the length of stay declined. These changes created a savings opportunity of $14 million, with actual savings of $10.4 million. These findings suggest that VBA can be a powerful tool for enhancing value (quality/cost) in a complex surgical system. Copyright © 2015 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
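
    The record above defines value as a quality/cost ratio and compares a usual practice (UP) with a best practice (BP). The following is a minimal illustrative sketch of that arithmetic only, not a reconstruction of the study's methodology; the quality scores and per-case costs are hypothetical placeholders.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Practice:
        name: str
        quality: float  # composite quality score (e.g., a risk-adjusted outcomes index)
        cost: float     # cost per case in dollars

    def value(p: Practice) -> float:
        """Value as used in value-based analysis: quality divided by cost."""
        return p.quality / p.cost

    def relative_value_change(up: Practice, bp: Practice) -> float:
        """Relative change in value when the usual practice is replaced by the best practice."""
        return (value(bp) - value(up)) / value(up)

    # Hypothetical numbers, for illustration only.
    up = Practice("usual practice", quality=0.82, cost=52_000)
    bp = Practice("best practice", quality=0.91, cost=47_000)
    print(f"UP value: {value(up):.2e}  BP value: {value(bp):.2e}")
    print(f"Relative value gain: {relative_value_change(up, bp):+.1%}")
    ```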
