Sample records for memory systems high-performance

  1. Importance of balanced architectures in the design of high-performance imaging systems

    NASA Astrophysics Data System (ADS)

    Sgro, Joseph A.; Stanton, Paul C.

    1999-03-01

    Imaging systems employed in demanding military and industrial applications, such as automatic target recognition and computer vision, typically require real-time high-performance computing resources. While high-performance computing systems have traditionally relied on proprietary architectures and custom components, recent advances in high-performance general-purpose microprocessor technology have produced an abundance of low-cost components suitable for use in high-performance computing systems. A common pitfall in the design of high-performance imaging systems, particularly systems employing scalable multiprocessor architectures, is the failure to balance computational and memory bandwidth. The performance of standard cluster designs, for example, in which several processors share a common memory bus, is typically constrained by memory bandwidth. The characteristic symptom of this problem is the failure of system performance to scale as more processors are added. The problem is exacerbated if I/O and memory functions share the same bus. The recent introduction of microprocessors with large internal caches and high-performance external memory interfaces makes it practical to design high-performance imaging systems with balanced computational and memory bandwidth. Real-world examples of such designs will be presented, along with a discussion of adapting algorithm design to best utilize available memory bandwidth.
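
    As an illustration of the balance argument (a worked example of my own, not taken from the paper), the match between a machine and an algorithm can be written as

        B_{\mathrm{machine}} = \frac{\text{sustained memory bandwidth (bytes/s)}}{\text{peak compute rate (FLOP/s)}}, \qquad
        I_{\mathrm{alg}} = \frac{\text{floating-point operations performed}}{\text{bytes moved to and from memory}} .

    Adding processors to a shared bus raises the peak compute rate but not the bandwidth, so B_machine shrinks; once I_alg < 1/B_machine the design is memory-bound and additional processors stop improving throughput, which is exactly the scaling failure described above.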

  2. Memory Benchmarks for SMP-Based High Performance Parallel Computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoo, A B; de Supinski, B; Mueller, F

    2001-11-20

    As the speed gap between CPU and main memory continues to grow, memory accesses increasingly dominate the performance of many applications. The problem is particularly acute for symmetric multiprocessor (SMP) systems, where the shared memory may be accessed concurrently by a group of threads running on separate CPUs. Unfortunately, several key issues governing memory system performance in current systems are not well understood. Complex interactions between the levels of the memory hierarchy, buses or switches, DRAM back-ends, system software, and application access patterns can make it difficult to pinpoint bottlenecks and determine appropriate optimizations, and the situation is even more complex for SMP systems. To partially address this problem, we formulated a set of multi-threaded microbenchmarks for characterizing and measuring the performance of the underlying memory system in SMP-based high-performance computers. We report our use of these microbenchmarks on two important SMP-based machines. This paper has four primary contributions. First, we introduce a microbenchmark suite to systematically assess and compare the performance of different levels in SMP memory hierarchies. Second, we present a new tool based on hardware performance monitors to determine a wide array of memory system characteristics, such as cache sizes, quickly and easily; by using this tool, memory performance studies can be targeted to the full spectrum of performance regimes with many fewer data points than would otherwise be required. Third, we present experimental results indicating that the performance of applications with large memory footprints remains largely constrained by memory. Fourth, we demonstrate that thread-level parallelism further degrades memory performance, even for the latest SMPs with hardware prefetching and switch-based memory interconnects.
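
    A minimal sketch of the kind of multi-threaded streaming microbenchmark such a suite contains (my own illustration, not code from the suite described above); the thread count, array size, and plain summing loads are assumptions:

      /* Minimal multi-threaded memory-bandwidth probe (illustrative sketch only). */
      #include <pthread.h>
      #include <stdio.h>
      #include <stdlib.h>
      #include <time.h>

      #define NTHREADS 4
      #define N (16 * 1024 * 1024)          /* doubles per thread: 128 MB each */

      static double *buf[NTHREADS];

      static void *sweep(void *arg)
      {
          int id = *(int *)arg;
          volatile double sum = 0.0;        /* keep the loads from being optimized away */
          for (size_t i = 0; i < N; i++)
              sum += buf[id][i];
          return NULL;
      }

      int main(void)
      {
          pthread_t tid[NTHREADS];
          int ids[NTHREADS];
          for (int t = 0; t < NTHREADS; t++) {
              buf[t] = malloc(N * sizeof(double));
              for (size_t i = 0; i < N; i++) buf[t][i] = 1.0;
          }
          struct timespec t0, t1;
          clock_gettime(CLOCK_MONOTONIC, &t0);
          for (int t = 0; t < NTHREADS; t++) {
              ids[t] = t;
              pthread_create(&tid[t], NULL, sweep, &ids[t]);
          }
          for (int t = 0; t < NTHREADS; t++) pthread_join(tid[t], NULL);
          clock_gettime(CLOCK_MONOTONIC, &t1);
          double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
          double gbytes = (double)NTHREADS * N * sizeof(double) / 1e9;
          printf("aggregate read bandwidth: %.2f GB/s\n", gbytes / secs);
          return 0;
      }

    If aggregate bandwidth stops growing as NTHREADS increases, the threads are sharing a saturated memory path, which is the effect the paper's microbenchmarks are designed to expose.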

  3. Non-volatile main memory management methods based on a file system.

    PubMed

    Oikawa, Shuichi

    2014-01-01

    There are upcoming non-volatile (NV) memory technologies that provide byte addressability and high performance; PCM, MRAM, and STT-RAM are such examples. Such NV memory can be used as storage because of its data persistency without power supply, while it can be used as main memory because of its high performance, which matches that of DRAM. A number of studies have investigated its use for main memory and for storage; they were, however, conducted independently. This paper presents methods that enable the integration of main memory and file system management for NV memory. Such integration allows NV memory to be utilized simultaneously as both main memory and storage. The presented methods use a file system as their basis for NV memory management. We implemented the proposed methods in the Linux kernel and performed the evaluation on the QEMU system emulator. The evaluation results show that 1) the proposed methods can perform comparably to the existing DRAM memory allocator and significantly better than page swapping, 2) their performance is affected by the internal data structures of a file system, and 3) data structures appropriate for traditional hard disk drives do not always work effectively for byte-addressable NV memory. We also evaluated the effects of the longer access latency of NV memory by cycle-accurate full-system simulation. The results show that the effect on page allocation cost is limited if the increase in latency is moderate.
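
    The general approach of exposing byte-addressable NV memory through a file system can be sketched as follows (my own illustration, not the authors' kernel implementation; the DAX-capable mount point /mnt/pmem, the file name, and the region size are hypothetical): a file is created on the NV-memory file system, mapped into the address space, and then accessed with ordinary loads and stores.

      /* Sketch: use a file on a (hypothetically) DAX-mounted NV-memory file system
       * as byte-addressable memory.  Path and size are illustrative assumptions. */
      #include <fcntl.h>
      #include <stdio.h>
      #include <string.h>
      #include <sys/mman.h>
      #include <unistd.h>

      int main(void)
      {
          const size_t len = 1 << 20;                       /* 1 MiB region */
          int fd = open("/mnt/pmem/region.dat", O_CREAT | O_RDWR, 0600);
          if (fd < 0) { perror("open"); return 1; }
          if (ftruncate(fd, len) != 0) { perror("ftruncate"); return 1; }

          /* The mapping is byte addressable; with DAX, loads and stores reach the
           * NV memory without passing through the page cache. */
          char *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
          if (p == MAP_FAILED) { perror("mmap"); return 1; }

          strcpy(p, "persistent greeting");                 /* ordinary store instructions */
          msync(p, len, MS_SYNC);                           /* request durability */

          munmap(p, len);
          close(fd);
          return 0;
      }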

  4. Opportunities for nonvolatile memory systems in extreme-scale high-performance computing

    DOE PAGES

    Vetter, Jeffrey S.; Mittal, Sparsh

    2015-01-12

    For extreme-scale high-performance computing systems, system-wide power consumption has been identified as one of the key constraints moving forward, with DRAM main memory systems accounting for about 30 to 50 percent of a node's overall power consumption. As the benefits of device scaling for DRAM memory slow, it will become increasingly difficult to keep memory capacities balanced with the increasing computational rates offered by next-generation processors. However, several emerging memory technologies related to nonvolatile memory (NVM) devices are being investigated as an alternative to DRAM. Moving forward, NVM devices could offer solutions for HPC architectures. Researchers are investigating how to integrate these emerging technologies into future extreme-scale HPC systems and how to expose these capabilities in the software stack and applications. In addition, current results show several of these strategies could offer high-bandwidth I/O, larger main memory capacities, persistent data structures, and new approaches for application resilience and output postprocessing, such as transaction-based incremental checkpointing and in situ visualization, respectively.

  5. Non-volatile memory for checkpoint storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blumrich, Matthias A.; Chen, Dong; Cipolla, Thomas M.

    A system, method and computer program product for supporting system-initiated checkpoints in high-performance parallel computing systems and the storing of checkpoint data to a non-volatile memory storage device. The system and method generate selective control signals to perform checkpointing of system-related data in the presence of messaging activity associated with a user application running at the node. The checkpointing is initiated by the system such that checkpoint data of a plurality of network nodes may be obtained even in the presence of user applications running on highly parallel computers that include ongoing user messaging activity. In one embodiment, the non-volatile memory is a pluggable flash memory card.

  6. Dual Tasking and Working Memory in Alcoholism: Relation to Frontocerebellar Circuitry

    PubMed Central

    Chanraud, Sandra; Pitel, Anne-Lise; Rohlfing, Torsten; Pfefferbaum, Adolf; Sullivan, Edith V

    2010-01-01

    Controversy exists regarding the role of cerebellar systems in cognition and whether working memory compromise commonly marking alcoholism can be explained by compromise of nodes of corticocerebellar circuitry. We tested 17 alcoholics and 31 age-matched controls with dual-task, working memory paradigms. Interference tasks competed with verbal and spatial working memory tasks using low (three item) or high (six item) memory loads. Participants also underwent structural MRI to obtain volumes of nodes of the frontocerebellar system. On the verbal working memory task, both groups performed equally. On the spatial working memory task with the high load, the alcoholic group was disproportionately more affected by the arithmetic distractor than were controls. In alcoholics, volumes of the left thalamus and left cerebellar Crus I were more robust predictors of performance in the spatial working memory task with the arithmetic distractor than was the left superior frontal cortex. In controls, volumes of the right middle frontal gyrus and right cerebellar Crus I were independent predictors of spatial working memory performance with tracking interference, over the left cerebellar Crus I, left thalamus, right superior parietal cortex, or left middle frontal gyrus. The brain–behavior correlations suggest that alcoholics and controls relied on the integrity of certain nodes of corticocerebellar systems to perform these verbal and spatial working memory tasks, but that the specific pattern of relationships differed by group. The resulting brain structure–function patterns provide correlational support that components of this corticocerebellar system not typically related to normal performance in dual-task conditions may be available to augment otherwise dampened performance by alcoholics. PMID:20410871

  7. Divergent short- and long-term effects of acute stress in object recognition memory are mediated by endogenous opioid system activation.

    PubMed

    Nava-Mesa, Mauricio O; Lamprea, Marisol R; Múnera, Alejandro

    2013-11-01

    Acute stress induces short-term object recognition memory impairment and elicits endogenous opioid system activation. The aim of this study was thus to evaluate whether opiate system activation mediates the acute stress-induced object recognition memory changes. Adult male Wistar rats were trained in an object recognition task designed to test both short- and long-term memory. Subjects were randomly assigned to receive an intraperitoneal injection of saline, 1 mg/kg naltrexone or 3 mg/kg naltrexone, four and a half hours before the sample trial. Five minutes after the injection, half the subjects were submitted to movement restraint during four hours while the other half remained in their home cages. Non-stressed subjects receiving saline (control) performed adequately during the short-term memory test, while stressed subjects receiving saline displayed impaired performance. Naltrexone prevented this deleterious effect, in spite of the fact that it had no intrinsic effect on short-term object recognition memory. Stressed subjects receiving saline and non-stressed subjects receiving naltrexone performed adequately during the long-term memory test; however, control subjects as well as stressed subjects receiving a high dose of naltrexone performed poorly. Control subjects' dissociated performance during both memory tests suggests that the short-term memory test induced a retroactive interference effect mediated through light opioid system activation; this effect was prevented either by low-dose naltrexone administration or by strongly activating the opioid system through acute stress. Both the short-term memory retrieval impairment and the long-term memory improvement observed in stressed subjects may have been mediated through strong opioid system activation, since they were prevented by high-dose naltrexone administration. Therefore, the activation of the opioid system plays a dual modulating role in object recognition memory. Copyright © 2013 Elsevier Inc. All rights reserved.

  8. A review of emerging non-volatile memory (NVM) technologies and applications

    NASA Astrophysics Data System (ADS)

    Chen, An

    2016-11-01

    This paper will review emerging non-volatile memory (NVM) technologies, with the focus on phase change memory (PCM), spin-transfer-torque random-access-memory (STTRAM), resistive random-access-memory (RRAM), and ferroelectric field-effect-transistor (FeFET) memory. These promising NVM devices are evaluated in terms of their advantages, challenges, and applications. Their performance is compared based on reported parameters of major industrial test chips. Memory selector devices and cell structures are discussed. Changing market trends toward low power (e.g., mobile, IoT) and data-centric applications create opportunities for emerging NVMs. High-performance and low-cost emerging NVMs may simplify memory hierarchy, introduce non-volatility in logic gates and circuits, reduce system power, and enable novel architectures. Storage-class memory (SCM) based on high-density NVMs could fill the performance and density gap between memory and storage. Some unique characteristics of emerging NVMs can be utilized for novel applications beyond the memory space, e.g., neuromorphic computing, hardware security, etc. In the beyond-CMOS era, emerging NVMs have the potential to fulfill more important functions and enable more efficient, intelligent, and secure computing systems.

  9. Static power reduction for midpoint-terminated busses

    DOEpatents

    Coteus, Paul W [Yorktown Heights, NY; Takken, Todd [Brewster, NY

    2011-01-18

    A memory system is disclosed which is comprised of a memory controller and addressable memory devices such as DRAMs. The invention provides a programmable register to control the high vs. low drive state of each bit of a memory system address and control bus during periods of bus inactivity. In this way, termination voltage supply current can be minimized, while permitting selected bus bits to be driven to a required state. This minimizes termination power dissipation while not affecting memory system performance. The technique can be extended to work for other high-speed busses as well.

  10. Collective input/output under memory constraints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Yin; Chen, Yong; Zhuang, Yu

    2014-12-18

    Compared with current high-performance computing (HPC) systems, exascale systems are expected to have much less memory per node, which can significantly degrade collective input/output (I/O) performance. In this study, we introduce a memory-conscious collective I/O strategy that takes into account memory capacity and bandwidth constraints. The new strategy restricts aggregation data traffic within disjoint subgroups, coordinates I/O accesses in intranode and internode layers, and determines I/O aggregators at run time considering memory consumption among processes. We have prototyped the design and evaluated it with commonly used benchmarks to verify its potential. The evaluation results demonstrate that this strategy holds promise in mitigating the memory pressure, alleviating the contention for memory bandwidth, and improving the I/O performance for projected extreme-scale systems. Given the importance of supporting increasingly data-intensive workloads and projected memory constraints on increasingly larger scale HPC systems, this new memory-conscious collective I/O can have a significant positive impact on scientific discovery productivity.
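
    The strategy above builds on MPI's collective I/O interface. As a baseline reference (my own sketch, not the memory-conscious algorithm of the paper), the following shows a collective write in which the standard ROMIO hint cb_nodes caps the number of I/O aggregator processes; the file name, buffer size, and hint value are illustrative assumptions.

      /* Baseline collective-write sketch (MPI-IO); "cb_nodes" is the standard ROMIO
       * hint controlling the number of I/O aggregators.  Not the paper's algorithm,
       * just the interface it builds on. */
      #include <mpi.h>
      #include <stdlib.h>

      int main(int argc, char **argv)
      {
          MPI_Init(&argc, &argv);
          int rank, nprocs;
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

          const int count = 1 << 20;                 /* ints per process (assumed) */
          int *buf = malloc(count * sizeof(int));
          for (int i = 0; i < count; i++) buf[i] = rank;

          MPI_Info info;
          MPI_Info_create(&info);
          MPI_Info_set(info, "cb_nodes", "4");       /* limit aggregators (illustrative) */

          MPI_File fh;
          MPI_File_open(MPI_COMM_WORLD, "out.dat",
                        MPI_MODE_CREATE | MPI_MODE_WRONLY, info, &fh);
          MPI_Offset off = (MPI_Offset)rank * count * sizeof(int);
          MPI_File_write_at_all(fh, off, buf, count, MPI_INT, MPI_STATUS_IGNORE);
          MPI_File_close(&fh);

          MPI_Info_free(&info);
          free(buf);
          MPI_Finalize();
          return 0;
      }

    The paper's contribution is choosing how many aggregators to use, and where, at run time based on per-process memory consumption rather than through a static hint like the one shown here.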

  11. FPGA cluster for high-performance AO real-time control system

    NASA Astrophysics Data System (ADS)

    Geng, Deli; Goodsell, Stephen J.; Basden, Alastair G.; Dipper, Nigel A.; Myers, Richard M.; Saunter, Chris D.

    2006-06-01

    Whilst the high throughput and low latency requirements for the next generation of AO real-time control systems have posed a significant challenge to von Neumann architecture processor systems, the Field Programmable Gate Array (FPGA) has emerged as a long-term solution with high performance in throughput and excellent predictability in latency. Moreover, FPGA devices have highly capable programmable interfacing, which leads to more highly integrated systems. Nevertheless, a single FPGA is still not enough: multiple FPGA devices need to be clustered to perform the required subaperture processing and the reconstruction computation. In an AO real-time control system, memory bandwidth is often the bottleneck of the system, simply because a vast amount of supporting data, e.g. pixel calibration maps and the reconstruction matrix, needs to be accessed within a short period. The cluster, as a general computing architecture, has excellent scalability in processing throughput, memory bandwidth, memory capacity, and communication bandwidth. Problems such as task distribution, node communication, and system verification are discussed.

  12. High-strain slide-ring shape-memory polycaprolactone-based polyurethane.

    PubMed

    Wu, Ruiqing; Lai, Jingjuan; Pan, Yi; Zheng, Zhaohui; Ding, Xiaobin

    2018-06-06

    To enable shape-memory polymer networks to achieve recoverable high deformability with a simultaneously high shape-fixity ratio and shape-recovery ratio, a novel semi-crystalline slide-ring shape-memory polycaprolactone-based polyurethane (SR-SMPCLU) with movable net-points constructed from a topologically interlocked slide-ring structure was designed and fabricated. The SR-SMPCLU not only exhibited good shape fixity, almost complete shape recovery, and a fast shape-recovery speed, but also showed an outstanding recoverable high-strain capacity, with a 95.83% shape-recovery ratio (Rr) under a deformation strain of 1410%, due to the pulley effect of the topological slide-ring structure. Furthermore, the SR-SMPCLU system maintained excellent shape-memory performance over increasing training-cycle numbers at 45% and even 280% deformation strain. The effects of the slide-ring cross-linker content, deformation strain, and successive shape-memory cycles on the shape-memory performance were investigated. A possible mechanism for the shape-memory effect of the SR-SMPCLU system is proposed.

  13. Single-pass memory system evaluation for multiprogramming workloads

    NASA Technical Reports Server (NTRS)

    Conte, Thomas M.; Hwu, Wen-Mei W.

    1990-01-01

    Modern memory systems are composed of levels of cache memories, a virtual memory system, and a backing store. Varying more than a few design parameters and measuring the performance of such systems has traditionally been constrained by the high cost of simulation. Recently introduced models of cache performance reduce the cost of simulation, but at the expense of accuracy of performance prediction. Stack-based methods predict performance accurately using one pass over the trace for all cache sizes, but these techniques have been limited to fully-associative organizations. This paper presents a stack-based method of evaluating the performance of cache memories using a recurrence/conflict model for the miss ratio. Unlike previous work, the performance of realistic cache designs, such as direct-mapped caches, is predicted by the method. The method also includes a new approach to the problem of the effects of multiprogramming. This new technique separates the characteristics of the individual program from those of the workload. The recurrence/conflict method is shown to be practical, general, and powerful by comparing its performance to that of a popular traditional cache simulator. The authors expect that the availability of such a tool will have a large impact on future architectural studies of memory systems.
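
    For context (my own sketch, not the recurrence/conflict model itself), the classic single-pass technique that this line of work builds on is Mattson-style stack analysis: one pass over the address trace yields LRU stack distances, from which the miss ratio of every fully-associative cache size can be read off. The trace and cache size below are illustrative.

      /* Classic single-pass LRU stack-distance computation (the fully-associative
       * baseline that the recurrence/conflict method extends).  Illustrative trace. */
      #include <stdio.h>

      #define MAXDIST 1024

      int main(void)
      {
          unsigned trace[] = { 0x10, 0x20, 0x10, 0x30, 0x20, 0x10, 0x40, 0x30 };
          int n = sizeof trace / sizeof trace[0];

          unsigned stack[MAXDIST];
          int depth = 0;
          long hist[MAXDIST + 1] = { 0 };            /* hist[d] = refs with stack distance d */

          for (int i = 0; i < n; i++) {
              int d = -1;
              for (int j = 0; j < depth; j++)
                  if (stack[j] == trace[i]) { d = j; break; }
              if (d < 0) {                           /* first reference: infinite distance */
                  hist[MAXDIST]++;
                  d = depth < MAXDIST ? depth++ : MAXDIST - 1;
              } else {
                  hist[d]++;
              }
              for (int j = d; j > 0; j--)            /* move referenced block to the top */
                  stack[j] = stack[j - 1];
              stack[0] = trace[i];
          }

          /* A fully-associative LRU cache of C blocks misses on every reference whose
           * stack distance is >= C, so miss ratios for all sizes fall out of one pass. */
          long misses = 0;
          int C = 2;
          for (int d = C; d <= MAXDIST; d++) misses += hist[d];
          printf("miss ratio for %d-block cache: %.2f\n", C, (double)misses / n);
          return 0;
      }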

  14. Team performance in networked supervisory control of unmanned air vehicles: effects of automation, working memory, and communication content.

    PubMed

    McKendrick, Ryan; Shaw, Tyler; de Visser, Ewart; Saqer, Haneen; Kidwell, Brian; Parasuraman, Raja

    2014-05-01

    Assess team performance within a networked supervisory control setting while manipulating automated decision aids and monitoring team communication and working memory ability. Networked systems such as multi-unmanned air vehicle (UAV) supervision have complex properties that make prediction of human-system performance difficult. Automated decision aids can provide valuable information to operators, individual abilities can limit or facilitate team performance, and team communication patterns can alter how effectively individuals work together. We hypothesized that reliable automation, higher working memory capacity, and increased communication rates of task-relevant information would offset performance decrements attributed to high task load. Two-person teams performed a simulated air defense task with two levels of task load and three levels of automated aid reliability. Teams communicated and received decision aid messages via chat window text messages. Task Load x Automation effects were significant across all performance measures. Reliable automation limited the decline in team performance with increasing task load. Average team spatial working memory was a stronger predictor than other measures of team working memory. Frequency of team rapport and enemy-location communications was positively related to team performance, and word count was negatively related to team performance. Reliable decision aiding mitigated the decline in team performance under increased task load during multi-UAV supervisory control. Team spatial working memory, communication of spatial information, and team rapport predicted team success. An automated decision aid can improve team performance under high task load. Assessment of spatial working memory and the communication of task-relevant information can help in operator and team selection in supervisory control systems.

  15. Hypothalamic-pituitary-adrenal axis reactivity to psychological stress and memory in middle-aged women: high responders exhibit enhanced declarative memory performance.

    PubMed

    Domes, G; Heinrichs, M; Reichwald, U; Hautzinger, M

    2002-10-01

    According to recent studies, elevated cortisol levels are associated with impaired declarative memory performance. This specific effect of cortisol has been shown in several studies using pharmacological doses of cortisol. The present study was designed to determine the effects of endogenously stimulated cortisol secretion on memory performance in healthy middle-aged women. As a psychological stress challenge, we employed the Trier Social Stress Test (TSST). Subjects were assigned to either the TSST or a non-stressful control condition. Declarative and non-declarative memory performance was measured by a combined priming/free-recall task. No significant group differences were found for memory performance. Post hoc analyses of variance indicated that, regardless of experimental condition, subjects with a remarkably high cortisol increase in response to the experimental procedure (high responders) showed increased memory performance in the declarative task compared to subjects with a low cortisol response (low responders). The results suggest that stress-induced cortisol failed to impair memory performance. The results are discussed with respect to gender-specific effects and modulatory effects of the sympathetic nervous system and psychological variables. Copyright 2002 Elsevier Science Ltd.

  16. Scalable Motion Estimation Processor Core for Multimedia System-on-Chip Applications

    NASA Astrophysics Data System (ADS)

    Lai, Yeong-Kang; Hsieh, Tian-En; Chen, Lien-Fei

    2007-04-01

    In this paper, we describe a high-throughput and scalable motion estimation processor architecture for multimedia system-on-chip applications. The number of processing elements (PEs) is scalable according to the variable algorithm parameters and the performance required for different applications. Using the PE rings efficiently and an intelligent memory-interleaving organization, the efficiency of the architecture can be increased. Moreover, using efficient on-chip memories and a data management technique can effectively decrease the power consumption and memory bandwidth. Techniques for reducing the number of interconnections and external memory accesses are also presented. Our results demonstrate that the proposed scalable PE-ringed architecture is a flexible and high-performance processor core in multimedia system-on-chip applications.

  17. A Fault-Tolerant Radiation-Robust Mass Storage Concept for Highly Scaled Flash Memory

    NASA Astrophysics Data System (ADS)

    Fuchs, Cristian M.; Trinitis, Carsten; Appel, Nicolas; Langer, Martin

    2015-09-01

    Future space missions will require vast amounts of data to be stored and processed aboard spacecraft. While satisfying operational mission requirements, storage systems must guarantee data integrity and recover damaged data throughout the mission. NAND-flash memories have become popular for space-borne high-performance mass memory scenarios, though future storage concepts will rely upon highly scaled flash or other memory technologies. With modern flash memory, single-bit erasure coding and RAID-based concepts are insufficient. Thus, a fully run-time configurable, high-performance, dependable storage concept requiring only a minimal set of logic or software is presented. The solution is based on composite erasure coding and can be adjusted for altered mission duration or changing environmental conditions.

  18. A general model for memory interference in a multiprocessor system with memory hierarchy

    NASA Technical Reports Server (NTRS)

    Taha, Badie A.; Standley, Hilda M.

    1989-01-01

    The problem of memory interference in a multiprocessor system with a hierarchy of shared buses and memories is addressed. The behavior of the processors is represented by a sequence of memory requests with each followed by a determined amount of processing time. A statistical queuing network model for determining the extent of memory interference in multiprocessor systems with clusters of memory hierarchies is presented. The performance of the system is measured by the expected number of busy memory clusters. The results of the analytic model are compared with simulation results, and the correlation between them is found to be very high.

  19. Efficient High Performance Collective Communication for Distributed Memory Environments

    ERIC Educational Resources Information Center

    Ali, Qasim

    2009-01-01

    Collective communication allows efficient communication and synchronization among a collection of processes, unlike point-to-point communication that only involves a pair of communicating processes. Achieving high performance for both kernels and full-scale applications running on a distributed memory system requires an efficient implementation of…

  20. Evaluating Non-In-Place Update Techniques for Flash-Based Transaction Processing Systems

    NASA Astrophysics Data System (ADS)

    Wang, Yongkun; Goda, Kazuo; Kitsuregawa, Masaru

    Recently, flash memory has been emerging as an important storage device. With its price sliding fast, the cost per capacity is approaching that of SATA disk drives. So far flash memory has been widely deployed in consumer electronics and, in part, in mobile computing environments. For enterprise systems, the deployment has been studied by many researchers and developers. In terms of access performance characteristics, flash memory is quite different from disk drives. Without mechanical components, flash memory has very high random read performance, whereas it has limited random write performance because of its erase-before-write design. The random write performance of flash memory is comparable with, or even worse than, that of disk drives. Due to this performance asymmetry, naive deployment in enterprise systems may not exploit the full potential performance of flash memory. This paper studies the effectiveness of using non-in-place-update (NIPU) techniques in the I/O path of flash-based transaction processing systems. Our deliberate experiments using both an open-source DBMS and a commercial DBMS validated the potential benefits; a 3.0x to 6.6x performance improvement was confirmed by incorporating non-in-place-update techniques into the file system without any modification of applications or storage devices.
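
    The core non-in-place-update idea can be sketched as follows (an illustrative toy of my own, not the NIPU implementation evaluated in the paper): logical pages are never overwritten in place; each write appends a new version to a log and a mapping table redirects subsequent reads, so the erase-before-write penalty is kept off the write path.

      /* Sketch of the non-in-place-update (NIPU) idea.  Sizes and structures are
       * illustrative only; garbage collection of stale log slots is omitted. */
      #include <stdio.h>
      #include <string.h>

      #define PAGE_SIZE   8
      #define N_LOGICAL   4
      #define LOG_PAGES   16

      static char log_area[LOG_PAGES][PAGE_SIZE];   /* append-only "flash" log   */
      static int  log_tail = 0;                     /* next free slot in the log */
      static int  map[N_LOGICAL];                   /* logical page -> log slot  */

      static void nipu_write(int lpage, const char *data)
      {
          /* Append instead of overwriting: no in-place update of flash pages. */
          memcpy(log_area[log_tail], data, PAGE_SIZE);
          map[lpage] = log_tail++;
      }

      static const char *nipu_read(int lpage)
      {
          return map[lpage] >= 0 ? log_area[map[lpage]] : NULL;
      }

      int main(void)
      {
          memset(map, -1, sizeof map);
          nipu_write(2, "v1......");
          nipu_write(2, "v2......");                /* second write goes to a new slot */
          printf("logical page 2 -> log slot %d: %.8s\n", map[2], nipu_read(2));
          return 0;
      }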

  1. Peregrine System | High-Performance Computing | NREL

    Science.gov Websites

    [Web-page extract; only fragments are recoverable.] Peregrine's /home, /nopt, and /projects file systems, including longer-term (/projects) storage, are mounted on all nodes. Compute nodes use Intel Xeon E5-2670 "Sandy Bridge" processors with 64 GB of memory per node; the page tabulates cores/node, memory/node, and peak double-precision (DP) performance per node.

  2. Processing-in-Memory Enabled Graphics Processors for 3D Rendering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Chenhao; Song, Shuaiwen; Wang, Jing

    2017-02-06

    The performance of 3D rendering on graphics processing units (GPUs), which convert 3D vector streams into 2D frames with 3D image effects, significantly impacts users' gaming experience on modern computer systems. Due to the high texture throughput in 3D rendering, main memory bandwidth becomes a critical obstacle for improving the overall rendering performance. 3D stacked memory systems such as the Hybrid Memory Cube (HMC) provide opportunities to significantly overcome the memory wall by directly connecting logic controllers to DRAM dies. Based on the observation that texel fetches significantly impact off-chip memory traffic, we propose two architectural designs to enable Processing-In-Memory based GPUs for efficient 3D rendering.

  3. [Selective alteration of the declarative memory systems in patients treated with a high number of electroconvulsive therapy sessions].

    PubMed

    Rami-González, L; Boget-Llucià, T; Bernardo, M; Marcos, T; Cañizares-Alejos, S; Penadés, R; Portella, M J; Castelví, M; Raspall, T; Salamero, M

    The reversible electrochemical effects of electroconvulsive therapy (ECT) on specific areas of the brain enable the neuroanatomical bases of some cognitive functions to be studied. In research carried out on memory systems, a selective alteration of the declarative ones has been observed after treatment with ECT. Little work has been done to explore the differential alteration of the memory subsystems in patients with a high number of ECT sessions. AIM. To study the declarative and non-declarative memory systems in psychiatric patients submitted to maintenance ECT treatment with a high number of previous ECT sessions. We compared 20 patients submitted to treatment with ECT (10 diagnosed with depression and 10 with schizophrenia) and 20 controls, paired by age, sex and psychopathological diagnosis. For the evaluation of the declarative memory system, the Wechsler Memory Scale (WMS) logical memory test was used; the Tower of Hanoi procedural test was employed to evaluate the non-declarative system. Patients treated with ECT performed worse on the WMS logical memory test, but the difference was only significant in patients diagnosed as suffering from depression. No significant differences were observed in the Tower of Hanoi test. A selective alteration of the declarative systems was observed in patients who had been treated with a high number of ECT sessions, while the non-declarative memory systems remained unaffected.

  4. Transistor and memory devices based on novel organic and biomaterials

    NASA Astrophysics Data System (ADS)

    Tseng, Jia-Hung

    Organic semiconductor devices have aroused considerable interest because of the enormous potential in many technological applications. Organic electroluminescent devices have been extensively applied in display technology. Rapid progress has also been made in transistor and memory devices. This thesis considers aspects of the transistor based on novel organic single crystals and memory devices using hybrid nanocomposites comprising polymeric/inorganic nanoparticles, and biomolecule/quantum dots. Organic single crystals represent highly ordered structures with much less imperfections compared to amorphous thin films for probing the intrinsic charge transport in transistor devices. We demonstrate that free-standing, thin organic single crystals with natural flexing ability can be fabricated as flexible transistors. We study the surface properties of the organic crystals to determine a nearly perfect surface leading to high performance transistors. The flexible transistors can maintain high performance under reversible bending conditions. Because of the high quality crystal technique, we further develop applications on organic complementary circuits and organic single crystal photovoltaics. In the second part, two aspects of memory devices are studied. We examine the charge transfer process between conjugated polymers and metal nanoparticles. This charge transfer process is essential for the conductance switching in nanoseconds to induce the memory effect. Under the reduction condition, the charge transfer process is eliminated as well as the memory effect, raising the importance of coupling between conjugated systems and nanoparticle accepters. The other aspect of memory devices focuses on the interaction of virus biomolecules with quantum dots or metal nanoparticles in the devices. We investigate the impact of memory function on the hybrid bio-inorganic system. We perform an experimental analysis of the charge storage activation energy in tobacco mosaic virus with platinum nanoparticles. It is established that the effective barrier height in the materials systems needs to be further engineered in order to have sufficiently long retention times. Finally other novel architectures such as negative differential resistance devices and high density memory arrays are investigated for their influence on memory technology.

  5. Research on Optical Transmitter and Receiver Module Used for High-Speed Interconnection between CPU and Memory

    NASA Astrophysics Data System (ADS)

    He, Huimin; Liu, Fengman; Li, Baoxia; Xue, Haiyun; Wang, Haidong; Qiu, Delong; Zhou, Yunyan; Cao, Liqiang

    2016-11-01

    With the development of multicore processors, the bandwidth and capacity of the memory, rather than the memory area, are the key factors in server performance. At present, however, new architectures such as fully buffered DIMM (FBDIMM), hybrid memory cube (HMC), and high bandwidth memory (HBM) cannot be commercially applied in servers. Therefore, a new architecture for the server is proposed: CPU and memory are separated onto different boards, and optical interconnection is used for the communication between them. Each optical module corresponds to one dual inline memory module (DIMM) with 64 channels. Compared to previous technology, not only can this architecture realize high-capacity and wide-bandwidth memory, it can also reduce power consumption and cost, and it is compatible with existing dynamic random access memory (DRAM). In this article, the proposed module with system-in-package (SiP) integration is demonstrated. The optical module includes a silicon photonic chip, a promising technology for next-generation data exchange centers. Due to the bandwidth-distance performance of the optical interconnection, SerDes chips are introduced to convert the 64-bit data at 800 Mbps from/to 4-channel data at 12.8 Gbps after/before transmission through optical fiber. All the devices are packaged on cheap organic substrates. To ensure the performance of the whole system, several optimization efforts have been performed on the two modules. High-speed interconnection traces have been designed and simulated with electromagnetic simulation software. Steady-state thermal characteristics of the transceiver module have been evaluated with ANSYS APDL based on the finite-element method (FEM). Heat sinks are placed at the hotspot area to ensure the reliability of all working chips. Finally, this transceiver system based on silicon photonics is measured, and the eye diagrams of data and clock signals are verified.
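
    The SerDes conversion quoted above is consistent with a one-line bandwidth check (my own arithmetic, using the figures in the abstract):

        64 \times 800\ \mathrm{Mb/s} = 51.2\ \mathrm{Gb/s} = 4 \times 12.8\ \mathrm{Gb/s},

    so the four 12.8 Gbps optical channels carry exactly the aggregate 64-bit, 800 Mbps DIMM data rate in each direction.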

  6. A comparison of the Cray-2 performance before and after the installation of memory pseudo-banking

    NASA Technical Reports Server (NTRS)

    Schmickley, Ronald D.; Bailey, David H.

    1987-01-01

    A suite of 13 large Fortran benchmark codes was run on a Cray-2 configured with memory pseudo-banking circuits, and floating-point operation rates were measured for each under a variety of system load configurations. These were compared with similar flop measurements taken on the same system before installation of the pseudo-banking. A useful memory-access efficiency parameter was defined and calculated for both sets of performance rates, allowing a crude quantitative measure of the improvement in efficiency due to pseudo-banking. Programs were categorized as either highly scalar (S) or highly vectorized (V) and either memory-intensive or register-intensive, giving 4 categories: S-memory, S-register, V-memory, and V-register. Using flop rates as a simple quantifier of these 4 categories, a scatter plot of efficiency gain vs. Mflops roughly illustrates the improvement in floating-point processing speed due to pseudo-banking. On the Cray-2 system tested, this improvement ranged from 1 percent for S-memory codes to about 12 percent for V-memory codes. No significant gains were made for V-register codes, which was to be expected.

  7. Contention Modeling for Multithreaded Distributed Shared Memory Machines: The Cray XMT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Secchi, Simone; Tumeo, Antonino; Villa, Oreste

    Distributed Shared Memory (DSM) machines are a wide class of multi-processor computing systems where a large virtually-shared address space is mapped onto a network of physically distributed memories. High memory latency and network contention are two of the main factors that limit performance scaling of such architectures. Modern high-performance computing DSM systems have evolved toward exploitation of massive hardware multi-threading and fine-grained memory hashing to tolerate irregular latencies, avoid network hot-spots and enable high scaling. In order to model the performance of such large-scale machines, parallel simulation has proved to be a promising approach to achieve good accuracy in reasonable times. One of the most critical factors in solving the simulation speed-accuracy trade-off is network modeling. The Cray XMT is a massively multi-threaded supercomputing architecture that belongs to the DSM class, since it implements a globally-shared address space abstraction on top of a physically distributed memory substrate. In this paper, we discuss the development of a contention-aware network model intended to be integrated in a full-system XMT simulator. We start by measuring the effects of network contention in a 128-processor XMT machine and then investigate the trade-off that exists between simulation accuracy and speed, by comparing three network models which operate at different levels of accuracy. The comparison and model validation is performed by executing a string-matching algorithm on the full-system simulator and on the XMT, using three datasets that generate noticeably different contention patterns.

  8. SODR Memory Control Buffer Control ASIC

    NASA Technical Reports Server (NTRS)

    Hodson, Robert F.

    1994-01-01

    The Spacecraft Optical Disk Recorder (SODR) is a state-of-the-art mass storage system for future NASA missions requiring high transmission rates and a large-capacity storage system. This report covers the design and development of an SODR memory buffer control application-specific integrated circuit (ASIC). The memory buffer control ASIC has two primary functions: (1) buffering data to prevent loss of data during disk access times, and (2) converting data formats from a high-performance parallel interface format to a small computer systems interface format. Ten 144-pin, 50 MHz CMOS ASICs were designed, fabricated and tested to implement the memory buffer control function.

  9. Estradiol concentrations and working memory performance in women of reproductive age.

    PubMed

    Hampson, Elizabeth; Morley, Erin E

    2013-12-01

    Estrogen has been proposed to exert a regulatory influence on the working memory system via actions in the female prefrontal cortex. Tests of this hypothesis have been limited almost exclusively to postmenopausal women and pharmacological interventions. We explored whether estradiol discernibly influences working memory within the natural range of variation in concentrations characteristic of the menstrual cycle. The performance of healthy women (n=39) not using hormonal contraceptives, and a control group of age- and education-matched men (n=31), was compared on a spatial working memory task. Cognitive testing was done blind to ovarian status. Women were retrospectively classified into low- or high-estradiol groups based on the results of radioimmunoassays of saliva collected immediately before and after the cognitive testing. Women with higher levels of circulating estradiol made significantly fewer errors on the working memory task than women tested under low estradiol. Pearson's correlations showed that the level of salivary estradiol but not progesterone was correlated inversely with the number of working memory errors produced. Women tested at high levels of circulating estradiol tended to be more accurate than men. Superior performance by the high estradiol group was seen on the working memory task but not on two control tasks, indicating selectivity of the effects. Consistent with previous studies of postmenopausal women, higher levels of circulating estradiol were associated with better working memory performance. These results add further support to the hypothesis that the working memory system is modulated by estradiol in women, and show that the effects can be observed under non-pharmacological conditions. Copyright © 2013 Elsevier Ltd. All rights reserved.

  10. Applications considerations in the system design of highly concurrent multiprocessors

    NASA Technical Reports Server (NTRS)

    Lundstrom, Stephen F.

    1987-01-01

    A flow model processor approach to parallel processing is described, using very-high-performance individual processors, high-speed circuit switched interconnection networks, and a high-speed synchronization capability to minimize the effect of the inherently serial portions of applications on performance. Design studies related to the determination of the number of processors, the memory organization, and the structure of the networks used to interconnect the processor and memory resources are discussed. Simulations indicate that applications centered on the large shared data memory should be able to sustain over 500 million floating point operations per second.

  11. Multilevel resistive information storage and retrieval

    DOEpatents

    Lohn, Andrew; Mickel, Patrick R.

    2016-08-09

    The present invention relates to resistive random-access memory (RRAM or ReRAM) systems, as well as methods of employing multiple state variables to form degenerate states in such memory systems. The methods herein allow for precise write and read steps to form multiple state variables, and these steps can be performed electrically. Such an approach allows for multilevel, high density memory systems with enhanced information storage capacity and simplified information retrieval.
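
    As a toy illustration of the multilevel idea (my own sketch; the resistance targets and read thresholds are invented for illustration and are not taken from the patent), two bits per cell can be encoded as one of four resistance ranges and decoded by thresholding:

      /* Toy illustration of multilevel resistive storage: 2 bits per cell encoded
       * as one of four resistance ranges and decoded by thresholding.  All values
       * are invented for illustration; this is not the patented method. */
      #include <stdio.h>

      static const double target_kohm[4] = { 1.0, 5.0, 20.0, 80.0 };   /* write targets   */
      static const double thresh_kohm[3] = { 2.5, 10.0, 40.0 };        /* read thresholds */

      static double cell_write(unsigned two_bits)          /* returns programmed resistance */
      {
          return target_kohm[two_bits & 0x3];
      }

      static unsigned cell_read(double r_kohm)             /* threshold decode back to 2 bits */
      {
          unsigned level = 0;
          for (int i = 0; i < 3; i++)
              if (r_kohm > thresh_kohm[i]) level = i + 1;
          return level;
      }

      int main(void)
      {
          for (unsigned v = 0; v < 4; v++) {
              double r = cell_write(v);
              printf("wrote %u -> %.1f kOhm -> read back %u\n", v, r, cell_read(r));
          }
          return 0;
      }

    Packing two (or more) bits per cell in this way is what yields the higher information density the abstract refers to; the patent's contribution concerns how the multiple state variables are written and read electrically, which is not modeled here.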

  12. Subthalamic stimulation differentially modulates declarative and nondeclarative memory.

    PubMed

    Hälbig, Thomas D; Gruber, Doreen; Kopp, Ute A; Scherer, Peter; Schneider, Gerd-Helge; Trottenberg, Thomas; Arnold, Guy; Kupsch, Andreas

    2004-03-01

    Declarative memory has been reported to rely on the medial temporal lobe system, whereas non-declarative memory depends on basal ganglia structures. We investigated the functional role of the subthalamic nucleus (STN), a structure closely connected with the basal ganglia for both types of memory. Via deep brain high frequency stimulation (DBS) we manipulated neural activity of the STN in humans. We found that DBS-STN differentially modulated memory performance: declarative memory was impaired, whereas non-declarative memory was improved in the presence of STN-DBS indicating a specific role of the STN in the activation of memory systems. Copyright 2004 Lippincott Williams & Wilkins

  13. Interference effects of vocalization on dual task performance

    NASA Astrophysics Data System (ADS)

    Owens, J. M.; Goodman, L. S.; Pianka, M. J.

    1984-09-01

    Voice command and control systems have been proposed as a potential means of off-loading the typically overburdened visual information processing system. However, prior to introducing novel human-machine interfacing technologies in high-workload environments, consideration must be given to the integration of the new technologies within existing task structures to ensure that no new sources of workload or interference are systematically introduced. This study examined the use of voice-interactive systems technology in the joint performance of two cognitive information processing tasks, requiring continuous memory and choice reaction, wherein a basis for intertask interference might be expected. Stimuli for the continuous memory task were presented aurally, and either voice or keyboard responding was required in the choice reaction task. Performance was significantly degraded in each task when voice responding was required in the choice reaction time task. Performance degradation was evident in higher error scores for both the choice reaction and continuous memory tasks. Performance decrements observed under conditions of high intertask stimulus similarity were not statistically significant. The results signal the need to consider further the task requirements for verbal short-term memory when applying speech technology in multitask environments.

  14. Short-term memory and working memory in children with blindness: support for a domain general or domain specific system?

    PubMed

    Swanson, H Lee; Luxenberg, Diana

    2009-05-01

    The study explored the contribution of two component processes (phonological and executive) to blind children's memory performance. Children with blindness and sight were matched on gender, chronological age, and verbal intelligence and compared on measures of short-term memory (STM) and working memory (WM). Although the measures were highly correlated, the results from two experiments indicated that the blind children were superior to sighted children on measures of STM, but not on measures of WM. The results supported the notion that children with blindness have advantages on memory tasks that draw upon resources from the phonological loop. However, comparable performance between the ability groups on WM measures suggests there are domain specific aspects in the executive system.

  15. A study of the viability of exploiting memory content similarity to improve resilience to memory errors

    DOE PAGES

    Levy, Scott; Ferreira, Kurt B.; Bridges, Patrick G.; ...

    2014-12-09

    Building the next generation of extreme-scale distributed systems will require overcoming several challenges related to system resilience. As the number of processors in these systems grows, the failure rate increases proportionally. One of the most common sources of failure in large-scale systems is memory. In this paper, we propose a novel runtime for transparently exploiting memory content similarity to improve system resilience by reducing the rate at which memory errors lead to node failure. We evaluate the viability of this approach by examining memory snapshots collected from eight high-performance computing (HPC) applications and two important HPC operating systems. Based on the characteristics of the similarity uncovered, we conclude that our proposed approach shows promise for addressing system resilience in large-scale systems.
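
    A minimal sketch of how content similarity might be detected (my own illustration; the paper's runtime, page size, and hashing scheme are not reproduced here): hash fixed-size pages and compare pages whose hashes collide, yielding candidate replicas from which a page hit by an uncorrectable error could be reconstructed. FNV-1a and a 4 KiB page are assumed choices.

      /* Sketch: find identical memory pages by hashing fixed-size blocks and
       * confirming collisions with memcmp.  Illustrative only. */
      #include <stdint.h>
      #include <stdio.h>
      #include <string.h>

      #define PAGE 4096
      #define NPAGES 4

      static uint64_t fnv1a(const unsigned char *p, size_t n)
      {
          uint64_t h = 14695981039346656037ULL;     /* FNV-1a 64-bit offset basis */
          for (size_t i = 0; i < n; i++) { h ^= p[i]; h *= 1099511628211ULL; }
          return h;
      }

      int main(void)
      {
          static unsigned char mem[NPAGES][PAGE];
          memset(mem[0], 0xAA, PAGE);               /* pages 0 and 2 have identical content */
          memset(mem[1], 0x55, PAGE);
          memset(mem[2], 0xAA, PAGE);
          memset(mem[3], 0x00, PAGE);

          uint64_t h[NPAGES];
          for (int i = 0; i < NPAGES; i++) h[i] = fnv1a(mem[i], PAGE);

          for (int i = 0; i < NPAGES; i++)
              for (int j = i + 1; j < NPAGES; j++)
                  if (h[i] == h[j] && memcmp(mem[i], mem[j], PAGE) == 0)
                      printf("pages %d and %d are identical replication candidates\n", i, j);
          return 0;
      }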

  16. Memory for environmental sounds in sighted, congenitally blind and late blind adults: evidence for cross-modal compensation.

    PubMed

    Röder, Brigitte; Rösler, Frank

    2003-10-01

    Several recent reports suggest compensatory performance changes in blind individuals. It has, however, been argued that the lack of visual input leads to impoverished semantic networks resulting in the use of data-driven rather than conceptual encoding strategies on memory tasks. To test this hypothesis, congenitally blind and sighted participants encoded environmental sounds either physically or semantically. In the recognition phase, both conceptually as well as physically distinct and physically distinct but conceptually highly related lures were intermixed with the environmental sounds encountered during study. Participants indicated whether or not they had heard a sound in the study phase. Congenitally blind adults showed elevated memory both after physical and semantic encoding. After physical encoding blind participants had lower false memory rates than sighted participants, whereas the false memory rates of sighted and blind participants did not differ after semantic encoding. In order to address the question if compensatory changes in memory skills are restricted to critical periods during early childhood, late blind adults were tested with the same paradigm. When matched for age, they showed similarly high memory scores as the congenitally blind. These results demonstrate compensatory performance changes in long-term memory functions due to the loss of a sensory system and provide evidence for high adaptive capabilities of the human cognitive system.

  17. LU Factorization with Partial Pivoting for a Multi-CPU, Multi-GPU Shared Memory System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurzak, Jakub; Luszczek, Pitior; Faverge, Mathieu

    2012-03-01

    LU factorization with partial pivoting is a canonical numerical procedure and the main component of the High Performance LINPACK benchmark. This article presents an implementation of the algorithm for a hybrid, shared memory, system with standard CPU cores and GPU accelerators. Performance in excess of one TeraFLOPS is achieved using four AMD Magny Cours CPUs and four NVIDIA Fermi GPUs.
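
    For reference, the numerical kernel being accelerated is the textbook algorithm below (a serial, unblocked sketch of my own; the article's contribution is the hybrid multi-CPU/multi-GPU scheduling, which is not shown here). The matrix is overwritten with the L and U factors and the row swaps are recorded.

      /* Textbook (serial, unblocked) LU factorization with partial pivoting. */
      #include <math.h>
      #include <stdio.h>

      #define N 3

      int main(void)
      {
          double a[N][N] = { { 2, 1, 1 }, { 4, 3, 3 }, { 8, 7, 9 } };
          int piv[N];

          for (int k = 0; k < N; k++) {
              int p = k;                                   /* find the pivot row */
              for (int i = k + 1; i < N; i++)
                  if (fabs(a[i][k]) > fabs(a[p][k])) p = i;
              piv[k] = p;
              if (p != k)                                  /* swap rows k and p */
                  for (int j = 0; j < N; j++) {
                      double t = a[k][j]; a[k][j] = a[p][j]; a[p][j] = t;
                  }
              for (int i = k + 1; i < N; i++) {            /* eliminate below the pivot */
                  a[i][k] /= a[k][k];                      /* multiplier, stored as L */
                  for (int j = k + 1; j < N; j++)
                      a[i][j] -= a[i][k] * a[k][j];
              }
          }

          for (int i = 0; i < N; i++)
              printf("row %d (pivot %d): %6.3f %6.3f %6.3f\n",
                     i, piv[i], a[i][0], a[i][1], a[i][2]);
          return 0;
      }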

  18. A simple modern correctness condition for a space-based high-performance multiprocessor

    NASA Technical Reports Server (NTRS)

    Probst, David K.; Li, Hon F.

    1992-01-01

    A number of U.S. national programs, including space-based detection of ballistic missile launches, envisage putting significant computing power into space. Given sufficient progress in low-power VLSI, multichip-module packaging and liquid-cooling technologies, we will see design of high-performance multiprocessors for individual satellites. In very high speed implementations, performance depends critically on tolerating large latencies in interprocessor communication; without latency tolerance, performance is limited by the vastly differing time scales in processor and data-memory modules, including interconnect times. The modern approach to tolerating remote-communication cost in scalable, shared-memory multiprocessors is to use a multithreaded architecture, and alter the semantics of shared memory slightly, at the price of forcing the programmer either to reason about program correctness in a relaxed consistency model or to agree to program in a constrained style. The literature on multiprocessor correctness conditions has become increasingly complex, and sometimes confusing, which may hinder its practical application. We propose a simple modern correctness condition for a high-performance, shared-memory multiprocessor; the correctness condition is based on a simple interface between the multiprocessor architecture and the parallel programming system.

  19. Optical interconnection networks for high-performance computing systems

    NASA Astrophysics Data System (ADS)

    Biberman, Aleksandr; Bergman, Keren

    2012-04-01

    Enabled by silicon photonic technology, optical interconnection networks have the potential to be a key disruptive technology in computing and communication industries. The enduring pursuit of performance gains in computing, combined with stringent power constraints, has fostered the ever-growing computational parallelism associated with chip multiprocessors, memory systems, high-performance computing systems and data centers. Sustaining these parallelism growths introduces unique challenges for on- and off-chip communications, shifting the focus toward novel and fundamentally different communication approaches. Chip-scale photonic interconnection networks, enabled by high-performance silicon photonic devices, offer unprecedented bandwidth scalability with reduced power consumption. We demonstrate that the silicon photonic platforms have already produced all the high-performance photonic devices required to realize these types of networks. Through extensive empirical characterization in much of our work, we demonstrate such feasibility of waveguides, modulators, switches and photodetectors. We also demonstrate systems that simultaneously combine many functionalities to achieve more complex building blocks. We propose novel silicon photonic devices, subsystems, network topologies and architectures to enable unprecedented performance of these photonic interconnection networks. Furthermore, the advantages of photonic interconnection networks extend far beyond the chip, offering advanced communication environments for memory systems, high-performance computing systems, and data centers.

  20. SharP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Venkata, Manjunath Gorentla; Aderholdt, William F

    The pre-exascale systems are expected to have a significant amount of hierarchical and heterogeneous on-node memory, and this trend in the architecture of extreme-scale systems is expected to continue into the exascale era. Along with hierarchical-heterogeneous memory, such systems typically have a high-performing network and a compute accelerator. This system architecture is not only effective for running traditional High Performance Computing (HPC) applications (Big-Compute), but also for running data-intensive HPC applications and Big-Data applications. As a consequence, there is a growing desire to have a single system serve the needs of both Big-Compute and Big-Data applications. Though the system architecture supports the convergence of Big-Compute and Big-Data, the programming models and software layer have yet to evolve to support either hierarchical-heterogeneous memory systems or the convergence. SharP is a programming abstraction that addresses this problem. The programming abstraction is implemented as a software library and runs on pre-exascale and exascale systems supporting current and emerging system architectures. Using distributed data structures as a central concept, it provides (1) a simple, usable, and portable abstraction for hierarchical-heterogeneous memory and (2) a unified programming abstraction for Big-Compute and Big-Data applications.

  1. High Performance Programming Using Explicit Shared Memory Model on Cray T3D

    NASA Technical Reports Server (NTRS)

    Simon, Horst D.; Saini, Subhash; Grassi, Charles

    1994-01-01

    The Cray T3D system is the first-phase system in Cray Research, Inc.'s (CRI) three-phase massively parallel processing (MPP) program. This system features a heterogeneous architecture that closely couples DEC's Alpha microprocessors and CRI's parallel-vector technology, i.e., the Cray Y-MP and Cray C90. An overview of the Cray T3D hardware and available programming models is presented. Under the Cray Research adaptive Fortran (CRAFT) model, four programming methods (data parallel, work sharing, message passing using PVM, and the explicit shared memory model) are available to users. However, at this time the data parallel and work sharing programming models are not available to the user community. The differences between standard PVM and CRI's PVM are highlighted with performance measurements such as latencies and communication bandwidths. We have found that neither standard PVM nor CRI's PVM exploits the hardware capabilities of the T3D. The reasons for the poor performance of PVM as a native message-passing library are presented. This is illustrated by the performance of the NAS Parallel Benchmarks (NPB) programmed in the explicit shared memory model on the Cray T3D. In general, the performance of standard PVM is about 4 to 5 times less than that obtained by using the explicit shared memory model. This degradation in performance is also seen on the CM-5, where the performance of applications using the native message-passing library CMMD is about 4 to 5 times less than with data parallel methods. The issues involved in programming in the explicit shared memory model (such as barriers, synchronization, and invalidating and aligning the data cache) are discussed. Comparative performance of the NPB using the explicit shared memory programming model on the Cray T3D and other highly parallel systems, such as the TMC CM-5, Intel Paragon, Cray C90, IBM-SP1, etc., is presented.

  2. Aho-Corasick String Matching on Shared and Distributed Memory Parallel Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tumeo, Antonino; Villa, Oreste; Chavarría-Miranda, Daniel

    String matching is at the core of many critical applications, including network intrusion detection systems, search engines, virus scanners, spam filters, DNA and protein sequencing, and data mining. For all of these applications string matching requires a combination of (sometimes all) the following characteristics: high and/or predictable performance, support for large data sets and flexibility of integration and customization. Many software-based implementations targeting conventional cache-based microprocessors fail to achieve high and predictable performance requirements, while Field-Programmable Gate Array (FPGA) implementations and dedicated hardware solutions fail to support large data sets (dictionary sizes) and are difficult to integrate and customize. The advent of multicore, multithreaded, and GPU-based systems is opening the possibility for software-based solutions to reach very high performance at a sustained rate. This paper compares several software-based implementations of the Aho-Corasick string searching algorithm for high performance systems. We discuss the implementation of the algorithm on several types of shared-memory high-performance architectures (Niagara 2, large x86 SMPs and Cray XMT), distributed memory with homogeneous processing elements (InfiniBand cluster of x86 multicores) and heterogeneous processing elements (InfiniBand cluster of x86 multicores with NVIDIA Tesla C10 GPUs). We describe in detail how each solution achieves the objectives of supporting large dictionaries, sustaining high performance, and enabling customization and flexibility using various data sets.
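
    The automaton at the heart of these implementations can be shown compactly. Below is a minimal, single-threaded C sketch, not the parallel code evaluated in the paper: it builds the goto/failure structure for a small hypothetical pattern set and streams a text through it; the pattern set, the fixed array sizes, and the bitmask encoding of matches are illustrative assumptions.

    #include <stdio.h>
    #include <string.h>

    #define ALPHA    256
    #define MAXNODES 1024

    static int next_state[MAXNODES][ALPHA]; /* goto function (0 = root)        */
    static int fail[MAXNODES];              /* failure links                    */
    static int output[MAXNODES];            /* bitmask of patterns ending here  */
    static int nodes = 1;                   /* node 0 is the root               */

    static void add_pattern(const char *p, int idx) {
        int s = 0;
        for (; *p; ++p) {
            unsigned char c = (unsigned char)*p;
            if (!next_state[s][c]) next_state[s][c] = nodes++;
            s = next_state[s][c];
        }
        output[s] |= 1 << idx;
    }

    static void build_failure_links(void) {
        int queue[MAXNODES], head = 0, tail = 0;
        for (int c = 0; c < ALPHA; ++c)
            if (next_state[0][c]) { fail[next_state[0][c]] = 0; queue[tail++] = next_state[0][c]; }
        while (head < tail) {              /* breadth-first over the trie */
            int s = queue[head++];
            for (int c = 0; c < ALPHA; ++c) {
                int t = next_state[s][c];
                if (!t) { next_state[s][c] = next_state[fail[s]][c]; continue; }
                fail[t] = next_state[fail[s]][c];
                output[t] |= output[fail[t]];   /* inherit matches via failure */
                queue[tail++] = t;
            }
        }
    }

    int main(void) {
        const char *patterns[] = { "he", "she", "his", "hers" };
        for (int i = 0; i < 4; ++i) add_pattern(patterns[i], i);
        build_failure_links();

        const char *text = "ushers";
        int s = 0;
        for (int i = 0; text[i]; ++i) {
            s = next_state[s][(unsigned char)text[i]];
            for (int m = 0; m < 4; ++m)
                if (output[s] & (1 << m))
                    printf("match '%s' ending at position %d\n", patterns[m], i);
        }
        return 0;
    }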

  3. Nonvolatile memory chips: critical technology for high-performance recce systems

    NASA Astrophysics Data System (ADS)

    Kaufman, Bruce

    2000-11-01

    Airborne recce systems universally require nonvolatile storage of recorded data. Both present and next generation designs make use of flash memory chips. Flash memory devices are in high volume use for a variety of commercial products ranging from cellular phones to digital cameras. Fortunately, commercial applications call for increasing capacities and fast write times. These parameters are important to the designer of recce recorders. Of economic necessity, COTS devices are used in recorders that must perform in military avionics environments. Concurrently, recording rates are moving to >10 Gb/s. Thus, to capture imagery for even a few minutes of record time, tactically meaningful solid state recorders will require storage capacities in the 100s of Gbytes. Even with memory chip densities at the present-day 512 Mb, such capacities require thousands of chips. The demands on packaging technology are daunting. This paper will consider the differing flash chip architectures, both available and projected, and discuss the impact on recorder architecture and performance. Emerging nonvolatile memory technologies, FeRAM and MRAM, will be reviewed with regard to their potential use in recce recorders.
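
    The capacity figures follow from simple arithmetic; as a worked example (the five-minute record time is an assumed illustration, and 1 Gb = 1024 Mb):

    \[
      10\,\mathrm{Gb/s} \times 300\,\mathrm{s} = 3000\,\mathrm{Gb} \approx 375\,\mathrm{GB},
    \]
    \[
      \frac{375\,\mathrm{GB}}{512\,\mathrm{Mb\ per\ chip}} = \frac{3{,}072{,}000\,\mathrm{Mb}}{512\,\mathrm{Mb}} = 6000\ \text{chips}.
    \]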

  4. Photonic content-addressable memory system that uses a parallel-readout optical disk

    NASA Astrophysics Data System (ADS)

    Krishnamoorthy, Ashok V.; Marchand, Philippe J.; Yayla, Gökçe; Esener, Sadik C.

    1995-11-01

    We describe a high-performance associative-memory system that can be implemented by means of an optical disk modified for parallel readout and a custom-designed silicon integrated circuit with parallel optical input. The system can achieve associative recall on 128 × 128 bit images and also on variable-size subimages. The system's behavior and performance are evaluated on the basis of experimental results on a motionless-head parallel-readout optical-disk system, logic simulations of the very-large-scale integrated chip, and a software emulation of the overall system.

  5. Large-scale network integration in the human brain tracks temporal fluctuations in memory encoding performance.

    PubMed

    Keerativittayayut, Ruedeerat; Aoki, Ryuta; Sarabi, Mitra Taghizadeh; Jimura, Koji; Nakahara, Kiyoshi

    2018-06-18

    Although activation/deactivation of specific brain regions have been shown to be predictive of successful memory encoding, the relationship between time-varying large-scale brain networks and fluctuations of memory encoding performance remains unclear. Here we investigated time-varying functional connectivity patterns across the human brain in periods of 30-40 s, which have recently been implicated in various cognitive functions. During functional magnetic resonance imaging, participants performed a memory encoding task, and their performance was assessed with a subsequent surprise memory test. A graph analysis of functional connectivity patterns revealed that increased integration of the subcortical, default-mode, salience, and visual subnetworks with other subnetworks is a hallmark of successful memory encoding. Moreover, multivariate analysis using the graph metrics of integration reliably classified the brain network states into the period of high (vs. low) memory encoding performance. Our findings suggest that a diverse set of brain systems dynamically interact to support successful memory encoding. © 2018, Keerativittayayut et al.

  6. Low-density parity-check codes for volume holographic memory systems.

    PubMed

    Pishro-Nik, Hossein; Rahnavard, Nazanin; Ha, Jeongseok; Fekri, Faramarz; Adibi, Ali

    2003-02-10

    We investigate the application of low-density parity-check (LDPC) codes in volume holographic memory (VHM) systems. We show that a carefully designed irregular LDPC code has a very good performance in VHM systems. We optimize high-rate LDPC codes for the nonuniform error pattern in holographic memories to reduce the bit error rate extensively. The prior knowledge of noise distribution is used for designing as well as decoding the LDPC codes. We show that these codes have a superior performance to that of Reed-Solomon (RS) codes and regular LDPC counterparts. Our simulation shows that we can increase the maximum storage capacity of holographic memories by more than 50 percent if we use irregular LDPC codes with soft-decision decoding instead of conventionally employed RS codes with hard-decision decoding. The performance of these LDPC codes is close to the information theoretic capacity.
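
    For readers unfamiliar with the code family, the parity-check constraint can be illustrated with a toy matrix (a generic example, not one of the paper's optimized irregular codes): a binary word c is a valid codeword exactly when H c^T = 0 over GF(2).

    \[
      H = \begin{pmatrix} 1 & 1 & 0 & 1 & 0 & 0 \\ 0 & 1 & 1 & 0 & 1 & 0 \\ 1 & 0 & 1 & 0 & 0 & 1 \end{pmatrix},
      \qquad c = (1,0,1,1,1,0), \qquad H c^{T} \equiv 0 \pmod 2 .
    \]

    Each row is one parity check; irregular LDPC codes vary the number of ones per row and column, and that degree profile is what the authors tune to the nonuniform error pattern of the holographic channel.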

  7. Transactive memory in organizational groups: the effects of content, consensus, specialization, and accuracy on group performance.

    PubMed

    Austin, John R

    2003-10-01

    Previous research on transactive memory has found a positive relationship between transactive memory system development and group performance in single project laboratory and ad hoc groups. Closely related research on shared mental models and expertise recognition supports these findings. In this study, the author examined the relationship between transactive memory systems and performance in mature, continuing groups. A group's transactive memory system, measured as a combination of knowledge stock, knowledge specialization, transactive memory consensus, and transactive memory accuracy, is positively related to group goal performance, external group evaluations, and internal group evaluations. The positive relationship with group performance was found to hold for both task and external relationship transactive memory systems.

  8. Intact haptic priming in normal aging and Alzheimer's disease: evidence for dissociable memory systems.

    PubMed

    Ballesteros, Soledad; Reales, José Manuel

    2004-01-01

    This study is the first to report complete priming in Alzheimer's disease (AD) patients and older control subjects for objects presented haptically. To investigate possible dissociations between implicit and explicit object representations, young adults, Alzheimer's patients, and older controls performed a speeded object naming task followed by a recognition task. Similar haptic priming was exhibited by the three groups, although young adults responded faster than the two older groups. Furthermore, there was no difference in performance between the two healthy groups. On the other hand, younger and older healthy adults did not differ on explicit recognition while, as expected, AD patients were highly impaired. The double dissociation suggests that different memory systems mediate both types of memory tasks. The preservation of intact haptic priming in AD provides strong support to the idea that object implicit memory is mediated by a memory system that is different from the medial-temporal diencephalic system underlying explicit memory, which is impaired early in AD. Recent imaging and behavioral studies suggest that the implicit memory system may depend on extrastriate areas of the occipital cortex although somatosensory cortical mechanisms may also be involved.

  9. Repeated application of Modafinil and Levodopa reveals a drug-independent precise timing of spatial working memory modulation.

    PubMed

    Bezu, M; Shanmugasundaram, B; Lubec, G; Korz, V

    2016-10-01

    Cognition enhancing drugs often target the dopaminergic system, which is involved in learning and memory, including working memory that in turn involves mainly the prefrontal cortex and the hippocampus. In most animal models for modulations of working memory, animals are pre-trained to a certain criterion and then treated acutely to test drug effects on working memory. Thus, little is known regarding subchronic or chronic application of cognition enhancing drugs and working memory performance. Therefore, we trained male rats over six days in a rewarded alternation test in a T-maze. Rats received daily injections of either modafinil or Levodopa (L-Dopa) at a lower and a higher dose 30 min before training. Levodopa but not modafinil increased working memory performance during early training significantly at day 3 when compared to vehicle controls. Both drugs induced dose dependent differences in working memory, with significantly better performance at low doses compared to high doses for modafinil, in contrast to L-Dopa, where high dose treated rats performed better than low dose rats. Strikingly, these effects appeared only at day 3 for both drugs, followed by a decline in behavioral performance. Thus, a critical drug-independent time window for dopaminergic effects upon working memory could be revealed. Evaluating the underlying mechanisms contributes to the understanding of temporal effects of dopamine on working memory performance. Copyright © 2016 Elsevier B.V. All rights reserved.

  10. Integrating Cache Performance Modeling and Tuning Support in Parallelization Tools

    NASA Technical Reports Server (NTRS)

    Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)

    1998-01-01

    With the resurgence of distributed shared memory (DSM) systems based on cache-coherent Non Uniform Memory Access (ccNUMA) architectures and the increasing disparity between memory and processor speeds, data locality overheads are becoming the greatest bottleneck in the way of realizing the potential high performance of these systems. While parallelization tools and compilers facilitate users in porting their sequential applications to a DSM system, a lot of time and effort is needed to tune the memory performance of these applications to achieve reasonable speedup. In this paper, we show that integrating cache performance modeling and tuning support within a parallelization environment can alleviate this problem. The Cache Performance Modeling and Prediction Tool (CPMP) employs trace-driven simulation techniques without the overhead of generating and managing detailed address traces. CPMP predicts the cache performance impact of source code level "what-if" modifications in a program to assist a user in the tuning process. CPMP is built on top of a customized version of the Computer Aided Parallelization Tools (CAPTools) environment. Finally, we demonstrate how CPMP can be applied to tune a real Computational Fluid Dynamics (CFD) application.
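
    Trace-driven cache simulation of the kind CPMP builds on can be sketched in a few lines. The C program below models a single direct-mapped cache over a synthetic address stream; the cache geometry and access pattern are illustrative assumptions, not CPMP's model of the target memory hierarchy.

    #include <stdio.h>
    #include <stdint.h>

    #define LINE_BYTES 64u
    #define NUM_LINES  1024u   /* direct-mapped: one line per set */

    int main(void) {
        uint64_t tags[NUM_LINES] = {0};
        int      valid[NUM_LINES] = {0};
        unsigned long hits = 0, misses = 0;

        /* Synthetic trace: a strided sweep over a 1 MiB array, repeated twice. */
        for (int pass = 0; pass < 2; ++pass) {
            for (uint64_t addr = 0; addr < (1u << 20); addr += 16) {
                uint64_t line = addr / LINE_BYTES;
                uint64_t set  = line % NUM_LINES;
                uint64_t tag  = line / NUM_LINES;
                if (valid[set] && tags[set] == tag) {
                    ++hits;
                } else {                 /* miss: fill the line */
                    ++misses;
                    valid[set] = 1;
                    tags[set]  = tag;
                }
            }
        }
        printf("hits=%lu misses=%lu hit rate=%.2f%%\n",
               hits, misses, 100.0 * hits / (hits + misses));
        return 0;
    }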

  11. Multi-Core Processor Memory Contention Benchmark Analysis Case Study

    NASA Technical Reports Server (NTRS)

    Simon, Tyler; McGalliard, James

    2009-01-01

    Multi-core processors dominate current mainframe, server, and high performance computing (HPC) systems. This paper provides synthetic kernel and natural benchmark results from an HPC system at the NASA Goddard Space Flight Center that illustrate the performance impacts of multi-core (dual- and quad-core) vs. single core processor systems. Analysis of processor design, application source code, and synthetic and natural test results all indicate that multi-core processors can suffer from significant memory subsystem contention compared to similar single-core processors.
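
    The contention effect described here is easy to reproduce with a small OpenMP microbenchmark: each thread streams through its own array, and the aggregate bandwidth typically stops scaling once the shared memory path saturates. This is a rough sketch, not the benchmarks used in the paper; the array size and thread sweep are arbitrary.

    #include <stdio.h>
    #include <stdlib.h>
    #include <omp.h>

    #define N (1L << 23)   /* 8 M doubles (~64 MB) per thread -- illustrative */

    int main(void) {
        int max_t = omp_get_max_threads();
        for (int t = 1; t <= max_t; t *= 2) {
            double checksum = 0.0, secs = 0.0;
            #pragma omp parallel num_threads(t) reduction(+:checksum)
            {
                double *a = malloc(N * sizeof *a);
                for (long i = 0; i < N; ++i) a[i] = 1.0;   /* touch pages first */
                #pragma omp barrier
                double t0 = omp_get_wtime();
                for (long i = 0; i < N; ++i) checksum += a[i];  /* read stream */
                #pragma omp barrier
                #pragma omp single
                secs = omp_get_wtime() - t0;
                free(a);
            }
            double gb = (double)N * sizeof(double) * t / 1e9;
            printf("%2d threads: %6.2f GB/s aggregate read bandwidth (checksum %.0f)\n",
                   t, gb / secs, checksum);
        }
        return 0;
    }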

  12. Challenges of Future High-End Computing

    NASA Technical Reports Server (NTRS)

    Bailey, David; Kutler, Paul (Technical Monitor)

    1998-01-01

    The next major milestone in high performance computing is a sustained rate of one Pflop/s (also written one petaflops, or 10^15 floating-point operations per second). In addition to prodigiously high computational performance, such systems must of necessity feature very large main memories, as well as comparably high I/O bandwidth and huge mass storage facilities. The current consensus of scientists who have studied these issues is that "affordable" petaflops systems may be feasible by the year 2010, assuming that certain key technologies continue to progress at current rates. One important question is whether applications can be structured to perform efficiently on such systems, which are expected to incorporate many thousands of processors and deeply hierarchical memory systems. To answer these questions, advanced performance modeling techniques, including simulation of future architectures and applications, may be required. It may also be necessary to formulate "latency tolerant algorithms" and other completely new algorithmic approaches for certain applications. This talk will give an overview of these challenges.

  13. Toshiba TDF-500 High Resolution Viewing And Analysis System

    NASA Astrophysics Data System (ADS)

    Roberts, Barry; Kakegawa, M.; Nishikawa, M.; Oikawa, D.

    1988-06-01

    A high resolution, operator interactive, medical viewing and analysis system has been developed by Toshiba and Bio-Imaging Research. This system provides many advanced features including high resolution displays, a very large image memory and advanced image processing capability. In particular, the system provides CRT frame buffers capable of update in one frame period, an array processor capable of image processing at operator interactive speeds, and a memory system capable of updating multiple frame buffers at frame rates whilst supporting multiple array processors. The display system provides 1024 x 1536 display resolution at 40 Hz frame and 80 Hz field rates. In particular, the ability to provide whole or partial update of the screen at the scanning rate is a key feature. This allows multiple viewports or windows in the display buffer with both fixed and cine capability. To support image processing features such as windowing, pan, zoom, minification, filtering, ROI analysis, multiplanar and 3D reconstruction, a high performance CPU is integrated into the system. This CPU is an array processor capable of up to 400 million instructions per second. To support the multiple viewers and array processors' instantaneous high memory bandwidth requirement, an ultra fast memory system is used. This memory system has a bandwidth capability of 400 MB/sec and a total capacity of 256 MB. This bandwidth is more than adequate to support several high-resolution CRTs and also the fast processing unit. This fully integrated approach allows effective real time image processing. The integrated design of the viewing system, memory system and array processor is key to the imaging system, and this paper describes the architecture of that imaging system.

  14. Distributed Saturation

    NASA Technical Reports Server (NTRS)

    Chung, Ming-Ying; Ciardo, Gianfranco; Siminiceanu, Radu I.

    2007-01-01

    The Saturation algorithm for symbolic state-space generation has been a recent breakthrough in the exhaustive verification of complex systems, in particular globally-asynchronous/locally-synchronous systems. The algorithm uses a very compact Multiway Decision Diagram (MDD) encoding for states and the fastest symbolic exploration algorithm to date. The distributed version of Saturation uses the overall memory available on a network of workstations (NOW) to efficiently spread the memory load during the highly irregular exploration. A crucial factor in limiting the memory consumption during the symbolic state-space generation is the ability to perform garbage collection to free up the memory occupied by dead nodes. However, garbage collection over a NOW requires a nontrivial communication overhead. In addition, operation cache policies become critical while analyzing large-scale systems using the symbolic approach. In this technical report, we develop a garbage collection scheme and several operation cache policies to help solve extremely complex systems. Experiments show that our schemes improve the performance of the original distributed implementation, SmArTNow, in terms of time and memory efficiency.

  15. Cost aware cache replacement policy in shared last-level cache for hybrid memory based fog computing

    NASA Astrophysics Data System (ADS)

    Jia, Gangyong; Han, Guangjie; Wang, Hao; Wang, Feng

    2018-04-01

    Fog computing requires a large main memory capacity to decrease latency and increase the Quality of Service (QoS). However, dynamic random access memory (DRAM), the commonly used random access memory, cannot be included in a fog computing system due to its high power consumption. In recent years, non-volatile memories (NVM) such as Phase-Change Memory (PCM) and Spin-transfer torque RAM (STT-RAM), with their low power consumption, have emerged to replace DRAM. Moreover, the currently proposed hybrid main memory, consisting of both DRAM and NVM, has shown promising advantages in terms of scalability and power consumption. However, the drawbacks of NVM, such as long read/write latency, give rise to asymmetric cache miss costs in the hybrid main memory. Current last level cache (LLC) policies assume a unified miss cost, and therefore perform poorly in the LLC and add to the cost of using NVM. In order to minimize the cache miss cost in the hybrid main memory, we propose a cost aware cache replacement policy (CACRP) that reduces the number of cache misses from NVM and improves the cache performance for a hybrid memory system. Experimental results show that our CACRP improves LLC performance by up to 43.6% (15.5% on average) compared to LRU.
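
    The core idea, letting the expected re-fetch penalty influence the eviction decision, can be sketched independently of the paper's exact CACRP heuristic. The C fragment below contrasts a plain LRU victim choice with a simple cost-aware score; the penalty numbers and the scoring formula are illustrative assumptions.

    #include <stdio.h>

    /* One cache way in a set: recency rank (0 = most recent) and the miss
     * penalty, in cycles, of re-fetching the line from its home memory
     * (illustrative numbers: DRAM-backed lines cheap, NVM-backed expensive). */
    struct line { int recency; int miss_cost; };

    /* Plain LRU: evict the least recently used line. */
    static int victim_lru(const struct line *set, int ways) {
        int v = 0;
        for (int i = 1; i < ways; ++i)
            if (set[i].recency > set[v].recency) v = i;
        return v;
    }

    /* Cost-aware: evict the line with the lowest expected re-fetch penalty,
     * here scored as miss_cost weighted down by how stale the line is. */
    static int victim_cost_aware(const struct line *set, int ways) {
        int v = 0;
        double best = (double)set[0].miss_cost / (1 + set[0].recency);
        for (int i = 1; i < ways; ++i) {
            double score = (double)set[i].miss_cost / (1 + set[i].recency);
            if (score < best) { best = score; v = i; }
        }
        return v;
    }

    int main(void) {
        struct line set[4] = {
            { 2, 200 },   /* fairly stale, DRAM-backed (cheap miss)    */
            { 0, 200 },   /* most recent,  DRAM-backed                 */
            { 3, 900 },   /* least recent, NVM-backed (expensive miss) */
            { 1, 900 },   /* recent,       NVM-backed                  */
        };
        printf("LRU evicts way %d; cost-aware evicts way %d\n",
               victim_lru(set, 4), victim_cost_aware(set, 4));
        return 0;
    }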

  16. YAPPA: a Compiler-Based Parallelization Framework for Irregular Applications on MPSoCs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lovergine, Silvia; Tumeo, Antonino; Villa, Oreste

    Modern embedded systems include hundreds of cores. Because of the difficulty in providing a fast, coherent memory architecture, these systems usually rely on non-coherent, non-uniform memory architectures with private memories for each core. However, programming these systems poses significant challenges. The developer must extract large amounts of parallelism, while orchestrating communication among cores to optimize application performance. These issues become even more significant with irregular applications, which present data sets difficult to partition, unpredictable memory accesses, unbalanced control flow and fine grained communication. Hand-optimizing every single aspect is hard and time-consuming, and it often does not lead to the expected performance. There is a growing gap between such complex and highly-parallel architectures and the high level languages used to describe the specification, which were designed for simpler systems and do not consider these new issues. In this paper we introduce YAPPA (Yet Another Parallel Programming Approach), a compilation framework for the automatic parallelization of irregular applications on modern MPSoCs based on LLVM. We start by considering an efficient parallel programming approach for irregular applications on distributed memory systems. We then propose a set of transformations that can reduce the development and optimization effort. The results of our initial prototype confirm the correctness of the proposed approach.

  17. Data Movement Dominates: Advanced Memory Technology to Address the Real Exascale Power Problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bergman, Keren

    Energy is the fundamental barrier to Exascale supercomputing and is dominated by the cost of moving data from one point to another, not computation. Similarly, performance is dominated by data movement, not computation. The solution to this problem requires three critical technologies: 3D integration, optical chip-to-chip communication, and a new communication model. The Sandia-led "Data Movement Dominates" project aimed to develop memory systems and new architectures based on these technologies that have the potential to lower the cost of local memory accesses by orders of magnitude and provide substantially more bandwidth. Only through these transformational advances can future systems reach the goals of Exascale computing with manageable power budgets. The Sandia-led team included co-PIs from Columbia University, Lawrence Berkeley Lab, and the University of Maryland. The Columbia effort of Data Movement Dominates focused on developing a physically accurate simulation environment and experimental verification for optically-connected memory (OCM) systems that can enable continued performance scaling through high-bandwidth capacity, energy-efficient bit-rate transparency, and time-of-flight latency. With OCM, memory device parallelism and total capacity can scale to match future high-performance computing requirements without sacrificing data-movement efficiency. When we consider systems with integrated photonics, links to memory can be seamlessly integrated with the interconnection network; in a sense, memory becomes a primary aspect of the interconnection network. At the core of the Columbia effort, toward expanding our understanding of OCM-enabled computing, we created an integrated modeling and simulation environment that uniquely integrates the physical behavior of the optical layer. The PhoenixSim suite of design and software tools developed under this effort has enabled the co-design and performance evaluation of photonics-enabled OCM architectures for Exascale computing systems.

  18. Alignment of high-throughput sequencing data inside in-memory databases.

    PubMed

    Firnkorn, Daniel; Knaup-Gregori, Petra; Lorenzo Bermejo, Justo; Ganzinger, Matthias

    2014-01-01

    In times of high-throughput DNA sequencing techniques, performance-capable analysis of DNA sequences is of high importance. Computer supported DNA analysis is still a time-intensive task. In this paper we explore the potential of a new in-memory database technology, using SAP's High Performance Analytic Appliance (HANA). We focus on read alignment as one of the first steps in DNA sequence analysis. In particular, we examined the widely used Burrows-Wheeler Aligner (BWA) and implemented stored procedures in both HANA and the free database system MySQL to compare execution time and memory management. To ensure that the results are comparable, MySQL was run in memory as well, utilizing its integrated memory engine for database table creation. We implemented stored procedures containing exact and inexact searching of DNA reads within the reference genome GRCh37. Due to technical restrictions in SAP HANA concerning recursion, the inexact matching problem could not be implemented on this platform. Hence, the performance analysis between HANA and MySQL was made by comparing the execution time of the exact search procedures. Here, HANA was approximately 27 times faster than MySQL, which indicates high potential in the new in-memory concepts and should lead to further development of DNA analysis procedures in the future.
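
    Conceptually, the exact-search procedures count occurrences of a read in the reference. The C sketch below shows only that exact-match semantics with a naive scan over a toy sequence; BWA itself uses a Burrows-Wheeler/FM-index, and the stored procedures in the paper are database code, so this is purely illustrative.

    #include <stdio.h>
    #include <string.h>

    /* Count exact (possibly overlapping) occurrences of a read in a reference. */
    static int count_exact(const char *ref, const char *read) {
        int n = 0;
        for (const char *p = ref; (p = strstr(p, read)) != NULL; ++p)
            ++n;
        return n;
    }

    int main(void) {
        const char *ref  = "ACGTACGTTAGCACGTACGA";  /* toy reference, not GRCh37 */
        const char *read = "ACGTA";
        printf("'%s' occurs %d time(s) in the reference\n",
               read, count_exact(ref, read));
        return 0;
    }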

  19. NRL Fact Book 2010

    DTIC Science & Technology

    2010-01-01

    High assurance software; distributed network-based battle management; high performance computing supporting uniform and nonuniform memory access; VNIR, MWIR, and LWIR high-resolution systems; wideband SAR systems; RF and laser data links; high-speed, high-power photodetector characterization; Indium Antimonide (InSb) imaging system; long-wave infrared (LWIR) quantum well IR photodetector (QWIP) imaging system; Research and Development Services.

  20. Stimulation of Hippocampal Adenylyl Cyclase Activity Dissociates Memory Consolidation Processes for Response and Place Learning

    ERIC Educational Resources Information Center

    Martel, Guillaume; Millard, Annabelle; Jaffard, Robert; Guillou, Jean-Louis

    2006-01-01

    Procedural and declarative memory systems are postulated to interact in either a synergistic or a competitive manner, and memory consolidation appears to be a highly critical stage for this process. However, the precise cellular mechanisms subserving these interactions remain unknown. To investigate this issue, 24-h retention performances were…

  1. Fusion PIC code performance analysis on the Cori KNL system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koskela, Tuomas S.; Deslippe, Jack; Friesen, Brian

    We study the attainable performance of Particle-In-Cell codes on the Cori KNL system by analyzing a miniature particle push application based on the fusion PIC code XGC1. We start from the most basic building blocks of a PIC code and build up the complexity to identify the kernels that cost the most in performance and focus optimization efforts there. Particle push kernels operate at high arithmetic intensity and are not likely to be memory bandwidth or even cache bandwidth bound on KNL. Therefore, we see only minor benefits from the high bandwidth memory available on KNL, and achieving good vectorization is shown to be the most beneficial optimization path, with a theoretical yield of up to an 8x speedup on KNL. In practice we are able to obtain up to a 4x gain from vectorization due to limitations set by the data layout and memory latency.
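
    The vectorization opportunity in a particle push is easiest to see with a structure-of-arrays layout and independent per-particle updates. The C sketch below is generic, not XGC1: the field value, timestep, and particle count are made-up placeholders.

    #include <stdio.h>

    #define NP 1000000

    /* Structure-of-arrays particle layout: contiguous streams per field
     * let the compiler generate unit-stride vector loads and stores. */
    static double x[NP], v[NP];

    int main(void) {
        const double dt = 1.0e-3, E = 0.5;   /* made-up field and timestep */

        for (long i = 0; i < NP; ++i) { x[i] = 0.0; v[i] = 1.0; }

        for (int step = 0; step < 100; ++step) {
            /* The push kernel: independent per-particle updates, no gather/
             * scatter, so the loop vectorizes cleanly. */
            #pragma omp simd
            for (long i = 0; i < NP; ++i) {
                v[i] += E * dt;
                x[i] += v[i] * dt;
            }
        }
        printf("x[0]=%f v[0]=%f\n", x[0], v[0]);
        return 0;
    }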

  2. Automatic selection of dynamic data partitioning schemes for distributed memory multicomputers

    NASA Technical Reports Server (NTRS)

    Palermo, Daniel J.; Banerjee, Prithviraj

    1995-01-01

    For distributed memory multicomputers such as the Intel Paragon, the IBM SP-2, the NCUBE/2, and the Thinking Machines CM-5, the quality of the data partitioning for a given application is crucial to obtaining high performance. This task has traditionally been the user's responsibility, but in recent years much effort has been directed to automating the selection of data partitioning schemes. Several researchers have proposed systems that are able to produce data distributions that remain in effect for the entire execution of an application. For complex programs, however, such static data distributions may be insufficient to obtain acceptable performance. The selection of distributions that dynamically change over the course of a program's execution adds another dimension to the data partitioning problem. In this paper, we present a technique that can be used to automatically determine which partitionings are most beneficial over specific sections of a program while taking into account the added overhead of performing redistribution. This system is being built as part of the PARADIGM (PARAllelizing compiler for DIstributed memory General-purpose Multicomputers) project at the University of Illinois. The complete system will provide a fully automated means to parallelize programs written in a serial programming model obtaining high performance on a wide range of distributed-memory multicomputers.
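
    The candidate distributions such a system chooses among can be written down directly. For an array of N elements spread over P processors, the usual block and cyclic owner functions are

    \[
      \mathrm{owner}_{\mathrm{block}}(i) = \left\lfloor \frac{i}{\lceil N/P \rceil} \right\rfloor,
      \qquad
      \mathrm{owner}_{\mathrm{cyclic}}(i) = i \bmod P,
      \qquad 0 \le i < N,
    \]

    and switching between them mid-program incurs the redistribution overhead that the selection algorithm weighs against the expected gain.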

  3. Supercomputing '91; Proceedings of the 4th Annual Conference on High Performance Computing, Albuquerque, NM, Nov. 18-22, 1991

    NASA Technical Reports Server (NTRS)

    1991-01-01

    Various papers on supercomputing are presented. The general topics addressed include: program analysis/data dependence, memory access, distributed memory code generation, numerical algorithms, supercomputer benchmarks, latency tolerance, parallel programming, applications, processor design, networks, performance tools, mapping and scheduling, characterization affecting performance, parallelism packaging, computing climate change, combinatorial algorithms, hardware and software performance issues, system issues. (No individual items are abstracted in this volume)

  4. Virtual memory support for distributed computing environments using a shared data object model

    NASA Astrophysics Data System (ADS)

    Huang, F.; Bacon, J.; Mapp, G.

    1995-12-01

    Conventional storage management systems provide one interface for accessing memory segments and another for accessing secondary storage objects. This hinders application programming and affects overall system performance due to mandatory data copying and user/kernel boundary crossings, which in the microkernel case may involve context switches. Memory-mapping techniques may be used to provide programmers with a unified view of the storage system. This paper extends such techniques to support a shared data object model for distributed computing environments in which good support for coherence and synchronization is essential. The approach is based on a microkernel, typed memory objects, and integrated coherence control. A microkernel architecture is used to support multiple coherence protocols and the addition of new protocols. Memory objects are typed and applications can choose the most suitable protocols for different types of object to avoid protocol mismatch. Low-level coherence control is integrated with high-level concurrency control so that the number of messages required to maintain memory coherence is reduced and system-wide synchronization is realized without severely impacting the system performance. These features together contribute a novel approach to the support for flexible coherence under application control.
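
    On a single node, the unified view described here is what POSIX memory mapping already provides: the same pointer interface serves both a memory segment and a storage object. The C sketch below shows that local building block only; the paper's distributed coherence and synchronization machinery is not modelled, and the file name is arbitrary.

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        const size_t len = 4096;
        int fd = open("object.dat", O_RDWR | O_CREAT, 0644);
        if (fd < 0 || ftruncate(fd, len) != 0) { perror("open/ftruncate"); return 1; }

        /* Map the file; subsequent updates are ordinary memory writes. */
        char *obj = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (obj == MAP_FAILED) { perror("mmap"); return 1; }

        strcpy(obj, "updated through the mapping");  /* plain store, no write() */
        msync(obj, len, MS_SYNC);                    /* force it back to storage */

        printf("object now holds: %s\n", obj);
        munmap(obj, len);
        close(fd);
        return 0;
    }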

  5. Optoelectronic-cache memory system architecture.

    PubMed

    Chiarulli, D M; Levitan, S P

    1996-05-10

    We present an investigation of the architecture of an optoelectronic cache that can integrate terabit optical memories with the electronic caches associated with high-performance uniprocessors and multiprocessors. The use of optoelectronic-cache memories enables these terabit technologies to provide transparently low-latency secondary memory with frame sizes comparable with disk pages but with latencies that approach those of electronic secondary-cache memories. This enables the implementation of terabit memories with effective access times comparable with the cycle times of current microprocessors. The cache design is based on the use of a smart-pixel array and combines parallel free-space optical input-output to-and-from optical memory with conventional electronic communication to the processor caches. This cache and the optical memory system to which it will interface provide a large random-access memory space that has a lower overall latency than that of magnetic disks and disk arrays. In addition, as a consequence of the high-bandwidth parallel input-output capabilities of optical memories, fault service times for the optoelectronic cache are substantially less than those currently achievable with any rotational media.

  6. Naval Research Laboratory Fact Book 2012

    DTIC Science & Technology

    2012-11-01

    Distributed network-based battle management; high performance computing supporting uniform and nonuniform memory access with single and multithreaded ...; hyperspectral systems; VNIR, MWIR, and LWIR high-resolution systems; wideband SAR systems; RF and laser data links; high-speed, high-power ...; hyperspectral imaging system; long-wave infrared (LWIR) quantum well IR photodetector (QWIP) imaging system; Research and Development Services Division.

  7. High-throughput state-machine replication using software transactional memory.

    PubMed

    Zhao, Wenbing; Yang, William; Zhang, Honglei; Yang, Jack; Luo, Xiong; Zhu, Yueqin; Yang, Mary; Luo, Chaomin

    2016-11-01

    State-machine replication is a common way of constructing general purpose fault tolerance systems. To ensure replica consistency, requests must be executed sequentially according to some total order at all non-faulty replicas. Unfortunately, this could severely limit the system throughput. This issue has been partially addressed by identifying non-conflicting requests based on application semantics and executing these requests concurrently. However, identifying and tracking non-conflicting requests require intimate knowledge of application design and implementation, and a custom fault tolerance solution developed for one application cannot be easily adopted by other applications. Software transactional memory offers a new way of constructing concurrent programs. In this article, we present the mechanisms needed to retrofit existing concurrency control algorithms designed for software transactional memory for state-machine replication. The main benefit for using software transactional memory in state-machine replication is that general purpose concurrency control mechanisms can be designed without deep knowledge of application semantics. As such, new fault tolerance systems based on state-machine replications with excellent throughput can be easily designed and maintained. In this article, we introduce three different concurrency control mechanisms for state-machine replication using software transactional memory, namely, ordered strong strict two-phase locking, conventional timestamp-based multiversion concurrency control, and speculative timestamp-based multiversion concurrency control. Our experiments show that the speculative timestamp-based multiversion concurrency control mechanism has the best performance in all types of workload, while the conventional timestamp-based multiversion concurrency control offers the worst performance due to a high abort rate in the presence of even moderate contention between transactions. The ordered strong strict two-phase locking mechanism offers the simplest solution with excellent performance in low contention workload, and fairly good performance in high contention workload.

  8. High-throughput state-machine replication using software transactional memory

    PubMed Central

    Yang, William; Zhang, Honglei; Yang, Jack; Luo, Xiong; Zhu, Yueqin; Yang, Mary; Luo, Chaomin

    2017-01-01

    State-machine replication is a common way of constructing general purpose fault tolerance systems. To ensure replica consistency, requests must be executed sequentially according to some total order at all non-faulty replicas. Unfortunately, this could severely limit the system throughput. This issue has been partially addressed by identifying non-conflicting requests based on application semantics and executing these requests concurrently. However, identifying and tracking non-conflicting requests require intimate knowledge of application design and implementation, and a custom fault tolerance solution developed for one application cannot be easily adopted by other applications. Software transactional memory offers a new way of constructing concurrent programs. In this article, we present the mechanisms needed to retrofit existing concurrency control algorithms designed for software transactional memory for state-machine replication. The main benefit for using software transactional memory in state-machine replication is that general purpose concurrency control mechanisms can be designed without deep knowledge of application semantics. As such, new fault tolerance systems based on state-machine replications with excellent throughput can be easily designed and maintained. In this article, we introduce three different concurrency control mechanisms for state-machine replication using software transactional memory, namely, ordered strong strict two-phase locking, conventional timestamp-based multiversion concurrency control, and speculative timestamp-based multiversion concurrency control. Our experiments show that the speculative timestamp-based multiversion concurrency control mechanism has the best performance in all types of workload, while the conventional timestamp-based multiversion concurrency control offers the worst performance due to a high abort rate in the presence of even moderate contention between transactions. The ordered strong strict two-phase locking mechanism offers the simplest solution with excellent performance in low contention workload, and fairly good performance in high contention workload. PMID:29075049

  9. Performance measurements of the first RAID prototype

    NASA Technical Reports Server (NTRS)

    Chervenak, Ann L.

    1990-01-01

    The performance of Redundant Arrays of Inexpensive Disks (RAID) the First, a prototype disk array, is examined. A hierarchy of bottlenecks was discovered in the system that limits overall performance. The most serious is memory system contention on the Sun 4/280 host CPU, which limits array bandwidth to 2.3 MBytes/sec. The array performs more successfully on small random operations, achieving nearly 300 I/Os per second before the Sun 4/280 becomes CPU limited. Other bottlenecks in the system are the VME backplane, bandwidth on the disk controller, and overheads associated with the SCSI protocol. All are examined in detail. The main conclusion is that to achieve the potential bandwidth of arrays, more powerful CPUs alone will not suffice. Just as important are adequate host memory bandwidth and support for high bandwidth on disk controllers. Current disk controllers are more often designed to achieve large numbers of small random operations, rather than high bandwidth. Operating systems also need to change to support high bandwidth from disk arrays. In particular, they should transfer data in larger blocks, and should support asynchronous I/O to improve sequential write performance.

  10. WinHPC System Configuration | High-Performance Computing | NREL

    Science.gov Websites

    CPUs with 48GB of memory. Node 04 has dual Intel Xeon E5530 CPUs with 24GB of memory. Nodes 05-20 have dual AMD Opteron 2374 HE CPUs with 16GB of memory. Nodes 21-30 have been decommissioned. Nodes 31-35 have dual Intel Xeon X5675 CPUs with 48GB of memory. Nodes 36-37 have dual Intel Xeon E5-2680 CPUs with

  11. HTMT-class Latency Tolerant Parallel Architecture for Petaflops Scale Computation

    NASA Technical Reports Server (NTRS)

    Sterling, Thomas; Bergman, Larry

    2000-01-01

    Computational Aero Sciences and other numeric intensive computation disciplines demand computing throughputs substantially greater than the Teraflops scale systems only now becoming available. The related fields of fluids, structures, thermal, combustion, and dynamic controls are among the interdisciplinary areas that in combination with sufficient resolution and advanced adaptive techniques may force performance requirements towards Petaflops. This will be especially true for compute intensive models such as Navier-Stokes, or when such system models are only part of a larger design optimization computation involving many design points. Yet recent experience with conventional MPP configurations comprising commodity processing and memory components has shown that larger scale frequently results in higher programming difficulty and lower system efficiency. While important advances in system software and algorithm techniques have had some impact on efficiency and programmability for certain classes of problems, in general it is unlikely that software alone will resolve the challenges to higher scalability. As in the past, future generations of high-end computers may require a combination of hardware architecture and system software advances to enable efficient operation at a Petaflops level. The NASA-led HTMT project has engaged the talents of a broad interdisciplinary team to develop a new strategy in high-end system architecture to deliver petaflops scale computing in the 2004/5 timeframe. The Hybrid-Technology, MultiThreaded parallel computer architecture incorporates several advanced technologies in combination with an innovative dynamic adaptive scheduling mechanism to provide unprecedented performance and efficiency within practical constraints of cost, complexity, and power consumption. The emerging superconductor Rapid Single Flux Quantum electronics can operate at 100 GHz (the record is 770 GHz) at one percent of the power required by conventional semiconductor logic. Wave Division Multiplexing optical communications can approach a peak per fiber bandwidth of 1 Tbps, and the new Data Vortex network topology employing this technology can connect tens of thousands of ports, providing a bi-section bandwidth on the order of a Petabyte per second with latencies well below 100 nanoseconds, even under heavy loads. Processor-in-Memory (PIM) technology combines logic and memory on the same chip, exposing the internal bandwidth of the memory row buffers at low latency. And holographic photorefractive storage technologies provide high-density memory with access a thousand times faster than conventional disk technologies. Together these technologies enable a new class of shared memory system architecture with a peak performance in the range of a Petaflops but size and power requirements comparable to today's largest Teraflops scale systems. To achieve high sustained performance, HTMT combines an advanced multithreading processor architecture with a memory-driven coarse-grained latency management strategy called "percolation", yielding high efficiency while reducing much of the parallel programming burden. This paper will present the basic system architecture characteristics made possible through this series of advanced technologies and then give a detailed description of the new percolation approach to runtime latency management.

  12. A high performance parallel algorithm for 1-D FFT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agarwal, R.C.; Gustavson, F.G.; Zubair, M.

    1994-12-31

    In this paper the authors propose a parallel high performance FFT algorithm based on a multi-dimensional formulation. They use this to solve a commonly encountered FFT based kernel on a distributed memory parallel machine, the IBM scalable parallel system, SP1. The kernel requires a forward FFT computation of an input sequence, multiplication of the transformed data by a coefficient array, and finally an inverse FFT computation of the resultant data. They show that the multi-dimensional formulation helps in reducing the communication costs and also improves the single node performance by effectively utilizing the memory system of the node. They implemented this kernel on the IBM SP1 and observed a performance of 1.25 GFLOPS on a 64-node machine.
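
    The kernel structure (forward transform, pointwise multiplication by a coefficient array, inverse transform) can be shown with a tiny serial C example. It uses a naive O(N^2) DFT purely for clarity; the paper's implementation is a parallel multi-dimensional FFT, and the identity coefficient array here is an arbitrary placeholder.

    #include <complex.h>
    #include <stdio.h>

    #define N  8
    #define PI 3.14159265358979323846

    /* Naive O(N^2) DFT: sign = -1 forward, +1 inverse (unscaled). */
    static void dft(const double complex *in, double complex *out, int sign) {
        for (int k = 0; k < N; ++k) {
            out[k] = 0;
            for (int n = 0; n < N; ++n)
                out[k] += in[n] * cexp(sign * 2.0 * PI * I * k * n / N);
        }
    }

    int main(void) {
        double complex x[N], X[N], y[N], coeff[N];
        for (int i = 0; i < N; ++i) { x[i] = i; coeff[i] = 1.0; } /* identity filter */

        dft(x, X, -1);                                 /* forward FFT of the input   */
        for (int k = 0; k < N; ++k) X[k] *= coeff[k];  /* multiply by coefficients   */
        dft(X, y, +1);                                 /* inverse FFT of the result  */

        for (int i = 0; i < N; ++i)                    /* 1/N scaling recovers input */
            printf("%5.2f ", creal(y[i]) / N);
        printf("\n");
        return 0;
    }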

  13. Enhancing effects of nicotine and impairing effects of scopolamine on distinct aspects of performance in computerized attention and working memory tasks in marmoset monkeys.

    PubMed

    Spinelli, Simona; Ballard, Theresa; Feldon, Joram; Higgins, Guy A; Pryce, Christopher R

    2006-08-01

    With the CAmbridge Neuropsychological Test Automated Battery (CANTAB), computerized neuropsychological tasks can be presented on a touch-sensitive computer screen, and this system has been used to assess cognitive processes in neuropsychiatric patients, healthy volunteers, and species of non-human primate, primarily the rhesus macaque and common marmoset. Recently, we reported that the common marmoset, a small-bodied primate, can be trained to a high and stable level of performance on the CANTAB five-choice serial reaction time (5-CSRT) task of attention, and a novel task of working memory, the concurrent delayed match-to-position (CDMP) task. Here, in order to increase understanding of the specific cognitive demands of these tasks and the importance of acetylcholine to their performance, the effects of systemic delivery of the muscarinic receptor antagonist scopolamine and the nicotinic receptor agonist nicotine were studied. In the 5-CSRT task, nicotine enhanced performance in terms of increased sustained attention, whilst scopolamine led to increased omissions despite a high level of orientation to the correct stimulus location. In the CDMP task, scopolamine impaired performance at two stages of the task that differ moderately in terms of memory retention load but both of which are likely to require working memory, including interference-coping, abilities. Nicotine tended to enhance performance at the long-delay stage specifically but only against a background of relatively low baseline performance. These data are consistent with a dissociation of the roles of muscarinic and nicotinic cholinergic receptors in the regulation of both sustained attention and working memory in primates.

  14. Initial Performance Results on IBM POWER6

    NASA Technical Reports Server (NTRS)

    Saini, Subbash; Talcott, Dale; Jespersen, Dennis; Djomehri, Jahed; Jin, Haoqiang; Mehrotra, Piysuh

    2008-01-01

    The POWER5+ processor has a faster memory bus than that of the previous generation POWER5 processor (533 MHz vs. 400 MHz), but the measured per-core memory bandwidth of the latter is better than that of the former (5.7 GB/s vs. 4.3 GB/s). The reason for this is that in the POWER5+, the two cores on the chip share the L2 cache, L3 cache and memory bus. The memory controller is also on the chip and is shared by the two cores. This serializes the path to memory. For consistently good performance on a wide range of applications, the performance of the processor, the memory subsystem, and the interconnects (both latency and bandwidth) should be balanced. Recognizing this, IBM has designed the POWER6 processor so as to avoid the bottlenecks due to the L2 cache, memory controller and buffer chips of the POWER5+. Unlike the POWER5+, each core in the POWER6 has its own L2 cache (4 MB, double that of the POWER5+), memory controller and buffer chips. Each core in the POWER6 runs at 4.7 GHz instead of 1.9 GHz in the POWER5+. In this paper, we evaluate the performance of a dual-core POWER6 based IBM p6-570 system, and we compare its performance with that of a dual-core POWER5+ based IBM p575+ system. In this evaluation, we have used the High-Performance Computing Challenge (HPCC) benchmarks, NAS Parallel Benchmarks (NPB), and four real-world applications--three from computational fluid dynamics and one from climate modeling.

  15. Parallelization of NAS Benchmarks for Shared Memory Multiprocessors

    NASA Technical Reports Server (NTRS)

    Waheed, Abdul; Yan, Jerry C.; Saini, Subhash (Technical Monitor)

    1998-01-01

    This paper presents our experiences of parallelizing the sequential implementation of NAS benchmarks using compiler directives on SGI Origin2000 distributed shared memory (DSM) system. Porting existing applications to new high performance parallel and distributed computing platforms is a challenging task. Ideally, a user develops a sequential version of the application, leaving the task of porting to new generations of high performance computing systems to parallelization tools and compilers. Due to the simplicity of programming shared-memory multiprocessors, compiler developers have provided various facilities to allow the users to exploit parallelism. Native compilers on SGI Origin2000 support multiprocessing directives to allow users to exploit loop-level parallelism in their programs. Additionally, supporting tools can accomplish this process automatically and present the results of parallelization to the users. We experimented with these compiler directives and supporting tools by parallelizing sequential implementation of NAS benchmarks. Results reported in this paper indicate that with minimal effort, the performance gain is comparable with the hand-parallelized, carefully optimized, message-passing implementations of the same benchmarks.
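
    The loop-level directive style referred to here looks roughly as follows in C; the example uses a generic OpenMP pragma rather than the Origin2000-specific multiprocessing directives, and the arrays and loop body are made up.

    #include <stdio.h>

    #define N 1000000

    int main(void) {
        static double a[N], b[N], c[N];
        for (int i = 0; i < N; ++i) { b[i] = i; c[i] = 2.0 * i; }

        /* Loop-level parallelism expressed with a compiler directive: the
         * iterations are independent, so the runtime may split them freely. */
        #pragma omp parallel for
        for (int i = 0; i < N; ++i)
            a[i] = b[i] + c[i];

        printf("a[N-1] = %f\n", a[N - 1]);
        return 0;
    }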

  16. Evolution of cellular automata with memory: The Density Classification Task.

    PubMed

    Stone, Christopher; Bull, Larry

    2009-08-01

    The Density Classification Task is a well known test problem for two-state discrete dynamical systems. For many years researchers have used a variety of evolutionary computation approaches to evolve solutions to this problem. In this paper, we investigate the evolvability of solutions when the underlying cellular automaton is augmented with a type of memory based on the Least Mean Square algorithm. To obtain high performance solutions using a simple non-hybrid genetic algorithm, we design a novel representation based on the ternary representation used for Learning Classifier Systems. The new representation is found to produce superior performance to the bit string traditionally used to represent cellular automata. Moreover, memory is shown to improve evolvability of solutions, and appropriate memory settings are able to be evolved as a component part of these solutions.
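
    The underlying machinery, a rule table applied synchronously around a ring of binary cells, is compact enough to sketch. The C example below uses a radius-1 majority table only to show the mechanics; the evolved rules in the paper use a larger neighbourhood plus the LMS-based memory, neither of which is reproduced here.

    #include <stdio.h>

    #define CELLS 59
    #define STEPS 100

    /* One synchronous update of a ring of binary cells under a radius-1 rule
     * table (8 entries, one per (left, self, right) neighbourhood pattern). */
    static void step(const int *cur, int *nxt, const int rule[8]) {
        for (int i = 0; i < CELLS; ++i) {
            int l = cur[(i + CELLS - 1) % CELLS];
            int r = cur[(i + 1) % CELLS];
            nxt[i] = rule[(l << 2) | (cur[i] << 1) | r];
        }
    }

    int main(void) {
        /* Majority-of-three rule table indexed by (left, self, right). */
        int rule[8] = { 0, 0, 0, 1, 0, 1, 1, 1 };
        int a[CELLS], b[CELLS], ones = 0;
        unsigned seed = 42;

        for (int i = 0; i < CELLS; ++i) {          /* simple LCG initial state */
            seed = seed * 1103515245u + 12345u;
            a[i] = (seed >> 16) & 1;
            ones += a[i];
        }
        printf("initial density: %d/%d\n", ones, CELLS);

        for (int t = 0; t < STEPS; ++t) {
            step(a, b, rule);
            for (int i = 0; i < CELLS; ++i) a[i] = b[i];
        }

        ones = 0;
        for (int i = 0; i < CELLS; ++i) ones += a[i];
        printf("density after %d steps: %d/%d\n", STEPS, ones, CELLS);
        return 0;
    }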

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Janjusic, Tommy; Kartsaklis, Christos

    Memory scalability is an enduring problem and bottleneck that plagues many parallel codes. Parallel codes designed for High Performance Systems are typically designed over the span of several, and in some instances 10+, years. As a result, optimization practices which were appropriate for earlier systems may no longer be valid and thus require careful optimization consideration. Specifically, parallel codes whose memory footprint is a function of their scalability must be carefully considered for future exa-scale systems. In this paper we present a methodology and tool to study the memory scalability of parallel codes. Using our methodology we evaluate an application's memory footprint as a function of scalability, which we coined memory efficiency, and describe our results. In particular, using our in-house tools we can pinpoint the specific application components which contribute to the application's overall memory footprint (application data structures, libraries, etc.).

  18. Preliminary basic performance analysis of the Cedar multiprocessor memory system

    NASA Technical Reports Server (NTRS)

    Gallivan, K.; Jalby, W.; Turner, S.; Veidenbaum, A.; Wijshoff, H.

    1991-01-01

    Some preliminary basic results on the performance of the Cedar multiprocessor memory system are presented. Empirical results are presented and used to calibrate a memory system simulator which is then used to discuss the scalability of the system.

  19. Data-driven forecasting of high-dimensional chaotic systems with long short-term memory networks.

    PubMed

    Vlachas, Pantelis R; Byeon, Wonmin; Wan, Zhong Y; Sapsis, Themistoklis P; Koumoutsakos, Petros

    2018-05-01

    We introduce a data-driven forecasting method for high-dimensional chaotic systems using long short-term memory (LSTM) recurrent neural networks. The proposed LSTM neural networks perform inference of high-dimensional dynamical systems in their reduced order space and are shown to be an effective set of nonlinear approximators of their attractor. We demonstrate the forecasting performance of the LSTM and compare it with Gaussian processes (GPs) in time series obtained from the Lorenz 96 system, the Kuramoto-Sivashinsky equation and a prototype climate model. The LSTM networks outperform the GPs in short-term forecasting accuracy in all applications considered. A hybrid architecture, extending the LSTM with a mean stochastic model (MSM-LSTM), is proposed to ensure convergence to the invariant measure. This novel hybrid method is fully data-driven and extends the forecasting capabilities of LSTM networks.

  20. OS friendly microprocessor architecture: Hardware level computer security

    NASA Astrophysics Data System (ADS)

    Jungwirth, Patrick; La Fratta, Patrick

    2016-05-01

    We present an introduction to the patented OS Friendly Microprocessor Architecture (OSFA) and hardware-level computer security. Conventional microprocessors have not tried to balance hardware performance and OS performance at the same time. Conventional microprocessors have depended on the Operating System for computer security and information assurance. The goal of the OS Friendly Architecture is to provide a high performance and secure microprocessor and OS system. We are interested in having cyber security, information technology (IT), and SCADA control professionals review the hardware-level security features. The OS Friendly Architecture is a switched set of cache memory banks in a pipeline configuration. For light-weight threads, the memory pipeline configuration provides near-instantaneous context switching times. The pipelining and parallelism provided by the cache memory pipeline allow for background cache read and write operations while the microprocessor's execution pipeline is running instructions. The cache bank selection controllers provide arbitration to prevent the memory pipeline and microprocessor's execution pipeline from accessing the same cache bank at the same time. This separation allows the cache memory pages to transfer to and from level 1 (L1) caching while the microprocessor pipeline is executing instructions. Computer security operations are implemented in hardware. By extending Unix file permission bits to each cache memory bank and memory address, the OSFA provides hardware-level computer security.

  1. rTMS on left prefrontal cortex contributes to memories for positive emotional cues: a comparison between pictures and words.

    PubMed

    Balconi, M; Cobelli, C

    2015-02-26

    The present research explored the cortical correlates of emotional memories in response to words and pictures. Subjects' performance (Accuracy Index, AI; response times, RTs; RTs/AI) was considered when repetitive Transcranial Magnetic Stimulation (rTMS) was applied to the left dorsolateral prefrontal cortex (LDLPFC). Specifically, the role of the LDLPFC was tested by performing a memory task, in which old (previously encoded targets) and new (previously not encoded distractors) emotional pictures/words had to be recognized. Valence (positive vs. negative) and arousing power (high vs. low) of stimuli were also modulated. Moreover, subjective evaluation of emotional stimuli in terms of valence/arousal was explored. We found significant performance improvements (higher AI, reduced RTs, improved general performance) in response to rTMS. This "better recognition effect" was only related to specific emotional features, that is, positive high arousal pictures or words. Moreover, no significant differences were found between stimulus categories. A direct relationship was also observed between subjective evaluation of emotional cues and memory performance when rTMS was applied to the LDLPFC. Supported by valence and approach models of emotions, we supposed that a left lateralized prefrontal system may induce better recognition of positive high arousal words, and that evaluation of emotional cues is related to prefrontal activation, affecting recognition memory for emotions. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.

  2. PIYAS-proceeding to intelligent service oriented memory allocation for flash based data centric sensor devices in wireless sensor networks.

    PubMed

    Rizvi, Sanam Shahla; Chung, Tae-Sun

    2010-01-01

    Flash memory has become a more widespread storage medium for modern wireless devices because of its effective characteristics like non-volatility, small size, light weight, fast access speed, shock resistance, high reliability and low power consumption. Sensor nodes are highly resource constrained in terms of limited processing speed, runtime memory, persistent storage, communication bandwidth and finite energy. Therefore, for wireless sensor networks supporting sense, store, merge and send schemes, an efficient and reliable file system is highly required with consideration of sensor node constraints. In this paper, we propose a novel log structured external NAND flash memory based file system, called Proceeding to Intelligent service oriented memorY Allocation for flash based data centric Sensor devices in wireless sensor networks (PIYAS). This is the extended version of our previously proposed PIYA [1]. The main goals of the PIYAS scheme are to achieve instant mounting and reduced SRAM space by keeping memory mapping information to a very low size, and to provide high query response throughput by allocating memory to sensor data according to network business rules. The scheme intelligently samples and stores the raw data and provides high in-network data availability by keeping the aggregate data for a longer period of time than any other scheme has done before. We propose effective garbage collection and wear-leveling schemes as well. The experimental results show that PIYAS is an optimized memory management scheme allowing high performance for wireless sensor networks.

  3. Sawmill: A Logging File System for a High-Performance RAID Disk Array

    DTIC Science & Technology

    1995-01-01

    ...from limiting disk performance, new controller architectures connect the disks directly to the network so that data movement bypasses the file server... These developments raise two questions for file systems: how to get the best performance from a RAID, and how to use such a controller architecture... the RAID-II storage system; this architecture provides a fast data path that moves data rapidly among the disks, high-speed controller memory, and the...

  4. Rapid solution of large-scale systems of equations

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.

    1994-01-01

    The analysis and design of complex aerospace structures requires the rapid solution of large systems of linear and nonlinear equations, eigenvalue extraction for buckling, vibration and flutter modes, structural optimization and design sensitivity calculation. Computers with multiple processors and vector capabilities can offer substantial computational advantages over traditional scalar computers for these analyses. These computers fall into two categories: shared memory computers and distributed memory computers. This presentation covers general-purpose, highly efficient algorithms for generation/assembly of element matrices, solution of systems of linear and nonlinear equations, eigenvalue and design sensitivity analysis, and optimization. All algorithms are coded in FORTRAN for shared memory computers and many are adapted to distributed memory computers. The capability and numerical performance of these algorithms will be addressed.

  5. Energy-aware Thread and Data Management in Heterogeneous Multi-core, Multi-memory Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Su, Chun-Yi

    By 2004, microprocessor design focused on multicore scaling—increasing the number of cores per die in each generation—as the primary strategy for improving performance. These multicore processors typically equip multiple memory subsystems to improve data throughput. In addition, these systems employ heterogeneous processors such as GPUs and heterogeneous memories like non-volatile memory to improve performance, capacity, and energy efficiency. With the increasing volume of hardware resources and system complexity caused by heterogeneity, future systems will require intelligent ways to manage hardware resources. Early research to improve performance and energy efficiency on heterogeneous, multi-core, multi-memory systems focused on tuning a single primitive or at best a few primitives in the systems. The key limitation of past efforts is their lack of a holistic approach to resource management that balances the tradeoff between performance and energy consumption. In addition, the shift from simple, homogeneous systems to these heterogeneous, multicore, multi-memory systems requires in-depth understanding of efficient resource management for scalable execution, including new models that capture the interchange between performance and energy, smarter resource management strategies, and novel low-level performance/energy tuning primitives and runtime systems. Tuning an application to control available resources efficiently has become a daunting challenge; managing resources in automation is still a dark art since the tradeoffs among programming, energy, and performance remain insufficiently understood. In this dissertation, I have developed theories, models, and resource management techniques to enable energy-efficient execution of parallel applications through thread and data management in these heterogeneous multi-core, multi-memory systems. I study the effect of dynamic concurrent throttling on the performance and energy of multi-core, non-uniform memory access (NUMA) systems. I use critical path analysis to quantify memory contention in the NUMA memory system and determine thread mappings. In addition, I implement a runtime system that combines concurrent throttling and a novel thread mapping algorithm to manage thread resources and improve energy efficient execution in multi-core, NUMA systems.
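
    The dissertation's runtime system is not available here; below is a minimal sketch of the thread-placement and concurrency-throttling pattern it describes, using Linux CPU affinity. The two "nodes" are just the first and second halves of the CPUs this process may use (a real runtime would query the actual NUMA topology, e.g. via libnuma), and CPython's GIL serializes the threads, so the sketch only illustrates the placement and throttling logic, not parallel speedup.

```python
# Sketch of NUMA-aware thread placement plus concurrency throttling. The node
# map below is a stand-in: it simply splits the CPUs this process may use into
# two halves; a real runtime would read the machine's actual NUMA topology.
import os
import threading

ALL_CPUS = sorted(os.sched_getaffinity(0))     # Linux-only call
HALF = max(1, len(ALL_CPUS) // 2)
NODE_CPUS = {0: set(ALL_CPUS[:HALF]),
             1: set(ALL_CPUS[HALF:]) or set(ALL_CPUS[:HALF])}

def worker(node, chunk, out, idx):
    # Pin this thread to the cores of its target "node" so the data it touches
    # (with first-touch allocation) stays local to that node.
    os.sched_setaffinity(0, NODE_CPUS[node])
    out[idx] = sum(x * x for x in chunk)        # stand-in for the real kernel

def run(data, threads_per_node):
    # Concurrency throttling: use fewer threads than cores when the memory
    # system, rather than the ALUs, is the bottleneck.
    nthreads = threads_per_node * len(NODE_CPUS)
    chunks = [data[i::nthreads] for i in range(nthreads)]
    results = [None] * nthreads
    threads = [threading.Thread(target=worker, args=(i % len(NODE_CPUS), c, results, i))
               for i, c in enumerate(chunks)]
    for t in threads: t.start()
    for t in threads: t.join()
    return sum(results)

if __name__ == "__main__":
    print(run(range(1_000_000), threads_per_node=2))
```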

  6. Working memory subsystems and task complexity in young boys with Fragile X syndrome.

    PubMed

    Baker, S; Hooper, S; Skinner, M; Hatton, D; Schaaf, J; Ornstein, P; Bailey, D

    2011-01-01

    Working memory problems have been targeted as core deficits in individuals with Fragile X syndrome (FXS); however, there have been few studies that have examined working memory in young boys with FXS, and even fewer studies that have studied the working memory performance of young boys with FXS across different degrees of complexity. The purpose of this study was to investigate the phonological loop and visual-spatial working memory in young boys with FXS, in comparison to mental age-matched typical boys, and to examine the impact of complexity of the working memory tasks on performance. The performance of young boys (7 to 13-years-old) with FXS (n = 40) was compared with that of mental age and race matched typically developing boys (n = 40) on measures designed to test the phonological loop and the visuospatial sketchpad across low, moderate and high degrees of complexity. Multivariate analyses were used to examine group differences across the specific working memory systems and degrees of complexity. Results suggested that boys with FXS showed deficits in phonological loop and visual-spatial working memory tasks when compared with typically developing mental age-matched boys. For the boys with FXS, the phonological loop was significantly lower than the visual-spatial sketchpad; however, there was no significant difference in performance across the low, moderate and high degrees of complexity in the working memory tasks. Reverse tasks from both the phonological loop and visual-spatial sketchpad appeared to be the most challenging for both groups, but particularly for the boys with FXS. These findings implicate a generalised deficit in working memory in young boys with FXS, with a specific disproportionate impairment in the phonological loop. Given the lack of differentiation on the low versus high complexity tasks, simple span tasks may provide an adequate estimate of working memory until greater involvement of the central executive is achieved. © 2010 The Authors. Journal of Intellectual Disability Research © 2010 Blackwell Publishing Ltd.

  7. Working memory subsystems and task complexity in young boys with Fragile X syndrome

    PubMed Central

    Baker, S.; Hooper, S.; Skinner, M.; Hatton, D.; Schaaf, J.; Ornstein, P.; Bailey, D.

    2011-01-01

    Background Working memory problems have been targeted as core deficits in individuals with Fragile X syndrome (FXS); however, there have been few studies that have examined working memory in young boys with FXS, and even fewer studies that have studied the working memory performance of young boys with FXS across different degrees of complexity. The purpose of this study was to investigate the phonological loop and visual–spatial working memory in young boys with FXS, in comparison to mental age-matched typical boys, and to examine the impact of complexity of the working memory tasks on performance. Methods The performance of young boys (7 to 13-years-old) with FXS (n = 40) was compared with that of mental age and race matched typically developing boys (n = 40) on measures designed to test the phonological loop and the visuospatial sketchpad across low, moderate and high degrees of complexity. Multivariate analyses were used to examine group differences across the specific working memory systems and degrees of complexity. Results Results suggested that boys with FXS showed deficits in phonological loop and visual–spatial working memory tasks when compared with typically developing mental age-matched boys. For the boys with FXS, the phonological loop was significantly lower than the visual–spatial sketchpad; however, there was no significant difference in performance across the low, moderate and high degrees of complexity in the working memory tasks. Reverse tasks from both the phonological loop and visual–spatial sketchpad appeared to be the most challenging for both groups, but particularly for the boys with FXS. Conclusions These findings implicate a generalised deficit in working memory in young boys with FXS, with a specific disproportionate impairment in the phonological loop. Given the lack of differentiation on the low versus high complexity tasks, simple span tasks may provide an adequate estimate of working memory until greater involvement of the central executive is achieved. PMID:21121991

  8. Livermore Big Artificial Neural Network Toolkit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Essen, Brian Van; Jacobs, Sam; Kim, Hyojin

    2016-07-01

    LBANN is a toolkit designed to train artificial neural networks efficiently on high performance computing architectures. It is optimized to take advantage of key High Performance Computing features to accelerate neural network training. Specifically, it is optimized for low-latency, high-bandwidth interconnects, node-local NVRAM, node-local GPU accelerators, and high-bandwidth parallel file systems. It is built on top of the open source Elemental distributed-memory dense and sparse-direct linear algebra and optimization library, which is released under the BSD license. The algorithms contained within LBANN are drawn from the academic literature and implemented to work within a distributed-memory framework.

  9. A wearable multiplexed silicon nonvolatile memory array using nanocrystal charge confinement

    PubMed Central

    Kim, Jaemin; Son, Donghee; Lee, Mincheol; Song, Changyeong; Song, Jun-Kyul; Koo, Ja Hoon; Lee, Dong Jun; Shim, Hyung Joon; Kim, Ji Hoon; Lee, Minbaek; Hyeon, Taeghwan; Kim, Dae-Hyeong

    2016-01-01

    Strategies for efficient charge confinement in nanocrystal floating gates to realize high-performance memory devices have been investigated intensively. However, few studies have reported nanoscale experimental validations of charge confinement in closely packed uniform nanocrystals and related device performance characterization. Furthermore, the system-level integration of the resulting devices with wearable silicon electronics has not yet been realized. We introduce a wearable, fully multiplexed silicon nonvolatile memory array with nanocrystal floating gates. The nanocrystal monolayer is assembled over a large area using the Langmuir-Blodgett method. Efficient particle-level charge confinement is verified with the modified atomic force microscopy technique. Uniform nanocrystal charge traps evidently improve the memory window margin and retention performance. Furthermore, the multiplexing of memory devices in conjunction with the amplification of sensor signals based on ultrathin silicon nanomembrane circuits in stretchable layouts enables wearable healthcare applications such as long-term data storage of monitored heart rates. PMID:26763827

  10. A wearable multiplexed silicon nonvolatile memory array using nanocrystal charge confinement.

    PubMed

    Kim, Jaemin; Son, Donghee; Lee, Mincheol; Song, Changyeong; Song, Jun-Kyul; Koo, Ja Hoon; Lee, Dong Jun; Shim, Hyung Joon; Kim, Ji Hoon; Lee, Minbaek; Hyeon, Taeghwan; Kim, Dae-Hyeong

    2016-01-01

    Strategies for efficient charge confinement in nanocrystal floating gates to realize high-performance memory devices have been investigated intensively. However, few studies have reported nanoscale experimental validations of charge confinement in closely packed uniform nanocrystals and related device performance characterization. Furthermore, the system-level integration of the resulting devices with wearable silicon electronics has not yet been realized. We introduce a wearable, fully multiplexed silicon nonvolatile memory array with nanocrystal floating gates. The nanocrystal monolayer is assembled over a large area using the Langmuir-Blodgett method. Efficient particle-level charge confinement is verified with the modified atomic force microscopy technique. Uniform nanocrystal charge traps evidently improve the memory window margin and retention performance. Furthermore, the multiplexing of memory devices in conjunction with the amplification of sensor signals based on ultrathin silicon nanomembrane circuits in stretchable layouts enables wearable healthcare applications such as long-term data storage of monitored heart rates.

  11. Performance analysis and kernel size study of the Lynx real-time operating system

    NASA Technical Reports Server (NTRS)

    Liu, Yuan-Kwei; Gibson, James S.; Fernquist, Alan R.

    1993-01-01

    This paper analyzes the Lynx real-time operating system (LynxOS), which has been selected as the operating system for the Space Station Freedom Data Management System (DMS). The features of LynxOS are compared to those of other Unix-based operating systems (OS). The tools for measuring the performance of LynxOS, which include a high-speed digital timer/counter board, a device driver program, and an application program, are analyzed. The timings for interrupt response, process creation and deletion, threads, semaphores, shared memory, and signals are measured. The memory size of the DMS Embedded Data Processor (EDP) is limited. Moreover, virtual memory is not suitable for real-time applications because page swap timing may not be deterministic. Therefore, the DMS software, including LynxOS, has to fit in the main memory of an EDP. To reduce the LynxOS kernel size, the following steps are taken: analyzing the factors that influence the kernel size; identifying the modules of LynxOS that may not be needed in an EDP; adjusting the system parameters of LynxOS; reconfiguring the device drivers used in LynxOS; and analyzing the symbol table. The reductions in kernel disk size, kernel memory size and total kernel size from each of these steps are listed and analyzed.

  12. A High Performance VLSI Computer Architecture For Computer Graphics

    NASA Astrophysics Data System (ADS)

    Chin, Chi-Yuan; Lin, Wen-Tai

    1988-10-01

    A VLSI computer architecture consisting of multiple processors is presented in this paper to satisfy the demands of modern computer graphics, e.g., high resolution, realistic animation, and real-time display. All processors share a global memory, which is partitioned into multiple banks. Through a crossbar network, data from one memory bank can be broadcast to many processors. Processors are physically interconnected through a hyper-crossbar network (a crossbar-like network). By programming the network, the topology of communication links among processors can be reconfigured to satisfy the specific dataflows of different applications. Each processor consists of a controller, arithmetic operators, local memory, a local crossbar network, and I/O ports to communicate with other processors, memory banks, and a system controller. Operations in each processor are characterized into two modes, i.e., object domain and space domain, to fully utilize the data-independence characteristics of graphics processing. Special graphics features such as 3D-to-2D conversion, shadow generation, texturing, and reflection can be easily handled. With current high density interconnection (MI) technology, it is feasible to implement a 64-processor system achieving 2.5 billion operations per second, a performance needed in most advanced graphics applications.

  13. Effects of prenatal methamphetamine exposure on verbal memory revealed with fMRI

    PubMed Central

    Lu, Lisa H.; Johnson, Arianne; O’Hare, Elizabeth D.; Bookheimer, Susan Y.; Smith, Lynne M.; O’Connor, Mary J.; Sowell, Elizabeth R.

    2009-01-01

    Objective Efforts to understand specific effects of prenatal methamphetamine exposure on cognitive processing are hampered by high rates of concomitant alcohol use during pregnancy. We examined whether neurocognitive systems differed among children with differing prenatal teratogenic exposures when they engaged in a verbal memory task. Patients and Methods Participants (7-15 years old) engaged in a verbal paired associate learning task while undergoing functional magnetic resonance imaging. The MA group included 14 children with prenatal methamphetamine exposure, 12 of whom had concomitant alcohol exposure. They were compared to 9 children with prenatal alcohol but not methamphetamine exposure (ALC) and 20 unexposed controls (CON). Groups did not differ in age, gender, or socioeconomic status. Participants’ IQ and verbal learning performance were measured using standardized instruments. Results The MA group activated more diffuse brain regions, including bilateral medial temporal structures known to be important for memory, than both the ALC and the CON groups. These group differences remained after IQ was covaried. More activation in medial temporal structures by the MA group compared to the ALC group cannot be explained by performance differences because both groups performed at similar levels on the verbal memory task. Conclusions More diffuse activation in the MA group during verbal memory may reflect recruitment of compensatory systems to support a weak verbal memory network. Differences in activation patterns between the MA and ALC groups suggest that prenatal MA exposure influences the development of the verbal memory system above and beyond effects of prenatal alcohol exposure. PMID:19525715

  14. On the Efficacy of Source Code Optimizations for Cache-Based Systems

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob F.; Saphir, William C.

    1998-01-01

    Obtaining high performance without machine-specific tuning is an important goal of scientific application programmers. Since most scientific processing is done on commodity microprocessors with hierarchical memory systems, this goal of "portable performance" can be achieved if a common set of optimization principles is effective for all such systems. It is widely believed, or at least hoped, that portable performance can be realized. The rule of thumb for optimization on hierarchical memory systems is to maximize temporal and spatial locality of memory references by reusing data and minimizing memory access stride. We investigate the effects of a number of optimizations on the performance of three related kernels taken from a computational fluid dynamics application. Timing the kernels on a range of processors, we observe an inconsistent and often counterintuitive impact of the optimizations on performance. In particular, code variations that have a positive impact on one architecture can have a negative impact on another, and variations expected to be unimportant can produce large effects. Moreover, we find that cache miss rates - as reported by a cache simulation tool, and confirmed by hardware counters - only partially explain the results. By contrast, the compiler-generated assembly code provides more insight by revealing the importance of processor-specific instructions and of compiler maturity, both of which strongly, and sometimes unexpectedly, influence performance. We conclude that it is difficult to obtain performance portability on modern cache-based computers, and comment on the implications of this result.
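
    As an illustration of the stride rule of thumb quoted above (this example is not from the paper), the sketch below times a unit-stride sweep of a C-ordered NumPy array against a large-stride sweep of the same data; the exact ratio is machine and compiler dependent, which is part of the paper's point.

```python
# Compare unit-stride (row-wise) and strided (column-wise) sweeps of the same
# C-ordered array. The arithmetic is identical; only the memory access pattern
# changes, so any timing gap comes from spatial locality in the cache hierarchy.
import time
import numpy as np

a = np.ones((4096, 4096), dtype=np.float64)       # C order: rows are contiguous

def sweep_rows(m):
    s = 0.0
    for i in range(m.shape[0]):
        s += m[i, :].sum()                        # stride-1 accesses
    return s

def sweep_cols(m):
    s = 0.0
    for j in range(m.shape[1]):
        s += m[:, j].sum()                        # stride of 4096 elements
    return s

for name, fn in (("row-wise (unit stride)", sweep_rows),
                 ("column-wise (large stride)", sweep_cols)):
    t0 = time.perf_counter()
    fn(a)
    print(f"{name}: {time.perf_counter() - t0:.3f} s")
```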

  15. On the Efficacy of Source Code Optimizations for Cache-Based Systems

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob F.; Saphir, William C.; Saini, Subhash (Technical Monitor)

    1998-01-01

    Obtaining high performance without machine-specific tuning is an important goal of scientific application programmers. Since most scientific processing is done on commodity microprocessors with hierarchical memory systems, this goal of "portable performance" can be achieved if a common set of optimization principles is effective for all such systems. It is widely believed, or at least hoped, that portable performance can be realized. The rule of thumb for optimization on hierarchical memory systems is to maximize temporal and spatial locality of memory references by reusing data and minimizing memory access stride. We investigate the effects of a number of optimizations on the performance of three related kernels taken from a computational fluid dynamics application. Timing the kernels on a range of processors, we observe an inconsistent and often counterintuitive impact of the optimizations on performance. In particular, code variations that have a positive impact on one architecture can have a negative impact on another, and variations expected to be unimportant can produce large effects. Moreover, we find that cache miss rates-as reported by a cache simulation tool, and confirmed by hardware counters-only partially explain the results. By contrast, the compiler-generated assembly code provides more insight by revealing the importance of processor-specific instructions and of compiler maturity, both of which strongly, and sometimes unexpectedly, influence performance. We conclude that it is difficult to obtain performance portability on modern cache-based computers, and comment on the implications of this result.

  16. Enhancing quantum sensing sensitivity by a quantum memory

    PubMed Central

    Zaiser, Sebastian; Rendler, Torsten; Jakobi, Ingmar; Wolf, Thomas; Lee, Sang-Yun; Wagner, Samuel; Bergholm, Ville; Schulte-Herbrüggen, Thomas; Neumann, Philipp; Wrachtrup, Jörg

    2016-01-01

    In quantum sensing, precision is typically limited by the maximum time interval over which phase can be accumulated. Memories have been used to enhance this time interval beyond the coherence lifetime and thus gain precision. Here, we demonstrate that by using a quantum memory an increased sensitivity can also be achieved. To this end, we use entanglement in a hybrid spin system comprising a sensing and a memory qubit associated with a single nitrogen-vacancy centre in diamond. With the memory we retain the full quantum state even after coherence decay of the sensor, which enables coherent interaction with distinct weakly coupled nuclear spin qubits. We benchmark the performance of our hybrid quantum system against use of the sensing qubit alone by gradually increasing the entanglement of sensor and memory. We further apply this quantum sensor-memory pair for high-resolution NMR spectroscopy of single 13C nuclear spins. PMID:27506596

  17. Implementation of Parallel Dynamic Simulation on Shared-Memory vs. Distributed-Memory Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Shuangshuang; Chen, Yousu; Wu, Di

    2015-12-09

    Power system dynamic simulation computes the system response to a sequence of large disturbances, such as sudden changes in generation or load, or a network short circuit followed by protective branch switching operations. It consists of a large set of differential and algebraic equations, which is computationally intensive and challenging to solve using a single-processor dynamic simulation solution. High-performance computing (HPC) based parallel computing is a very promising technology to speed up the computation and facilitate the simulation process. This paper presents two different parallel implementations of power grid dynamic simulation, using Open Multi-processing (OpenMP) on a shared-memory platform and Message Passing Interface (MPI) on distributed-memory clusters, respectively. The differences between the parallel simulation algorithms and architectures of the two HPC technologies are illustrated, and their performance for running parallel dynamic simulation is compared and demonstrated.
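
    The paper's simulator is not reproduced here; the sketch below only illustrates the distributed-memory (MPI) pattern it compares, using mpi4py and an invented explicit-Euler "generator" model: each rank advances its own partition of the state and the full state vector is exchanged every step (run with something like `mpirun -n 4 python script.py`). All sizes and constants are illustrative.

```python
# Toy distributed-memory time-domain simulation in the MPI style described above:
# partition the state across ranks, do local integration, exchange state each step.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

N_STATES, DT, STEPS = 64, 1e-3, 1000                      # invented problem size
my_idx = np.array_split(np.arange(N_STATES), size)[rank]  # this rank's partition
state = np.ones(N_STATES)                                 # replicated global state
coupling = 0.05

for _ in range(STEPS):
    # Local work: explicit Euler step for the states owned by this rank only.
    local = state[my_idx] + DT * (-state[my_idx] + coupling * state.mean())
    # Communication: gather every rank's partition back into the replicated state.
    state = np.concatenate(comm.allgather(local))

if rank == 0:
    print("final mean state:", state.mean())
```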

  18. An exploratory study of the effects of spatial working-memory load on prefrontal activation in low- and high-performing elderly.

    PubMed

    Vermeij, Anouk; van Beek, Arenda H E A; Reijs, Babette L R; Claassen, Jurgen A H R; Kessels, Roy P C

    2014-01-01

    Older adults show more bilateral prefrontal activation during cognitive performance than younger adults, who typically show unilateral activation. This over-recruitment has been interpreted as compensation for declining structure and function of the brain. Here we examined how the relationship between behavioral performance and prefrontal activation is modulated by different levels of working-memory load. Eighteen healthy older adults (70.8 ± 5.0 years; MMSE 29.3 ± 0.9) performed a spatial working-memory task (n-back). Oxygenated ([O2Hb]) and deoxygenated ([HHb]) hemoglobin concentration changes were registered by two functional Near-Infrared Spectroscopy (fNIRS) channels located over the left and right prefrontal cortex. Increased working-memory load resulted in worse performance compared to the control condition. [O2Hb] increased with rising working-memory load in both fNIRS channels. Based on the performance in the high working-memory load condition, the group was divided into low and high performers. A significant interaction effect of performance level and hemisphere on [O2Hb] increase was found, indicating that high performers were better able to keep the right prefrontal cortex engaged under high cognitive demand. Furthermore, in the low performers group, individuals with a larger decline in task performance from the control to the high working-memory load condition had a larger bilateral increase of [O2Hb]. The high performers did not show a correlation between performance decline and working-memory load related prefrontal activation changes. Thus, additional bilateral prefrontal activation in low performers did not necessarily result in better cognitive performance. Our study showed that bilateral prefrontal activation may not always be successfully compensatory. Individual behavioral performance should be taken into account to be able to distinguish successful and unsuccessful compensation or declined neural efficiency.

  19. NASA's 3D Flight Computer for Space Applications

    NASA Technical Reports Server (NTRS)

    Alkalai, Leon

    2000-01-01

    The New Millennium Program (NMP) Integrated Product Development Team (IPDT) for Microelectronics Systems was planning to validate a newly developed 3D Flight Computer system on its first deep-space flight, DS1, launched in October 1998. This computer, developed in the 1995-97 time frame, contains many new computer technologies never previously used in deep-space systems. They include: an advanced 3D packaging architecture for future low-mass and low-volume avionics systems; high-density 3D packaged chip-stacks for both volatile and non-volatile mass memory (400 Mbytes of local DRAM memory and 128 Mbytes of Flash memory); a high-bandwidth Peripheral Component Interconnect (PCI) local bus with a bridge to VME; a high-bandwidth (20 Mbps) fiber-optic serial bus; and other attributes, such as standard support for Design for Testability (DFT). Even though this computer system was not completed in time for delivery to the DS1 project, it was an important development along a technology roadmap towards highly integrated and highly miniaturized avionics systems for deep-space applications. This continued technology development is now being performed by NASA's Deep Space System Development Program (also known as X2000) and within JPL's Center for Integrated Space Microsystems (CISM).

  20. GaAs Supercomputing: Architecture, Language, And Algorithms For Image Processing

    NASA Astrophysics Data System (ADS)

    Johl, John T.; Baker, Nick C.

    1988-10-01

    The application of high-speed GaAs processors in a parallel system matches the demanding computational requirements of image processing. The architecture of the McDonnell Douglas Astronautics Company (MDAC) vector processor is described along with the algorithms and language translator. Most image and signal processing algorithms can utilize parallel processing and show a significant performance improvement over sequential versions. The parallelization performed by this system is within each vector instruction. Since each vector has many elements, each requiring some computation, useful concurrent arithmetic operations can easily be performed. Balancing the memory bandwidth with the computation rate of the processors is an important design consideration for high efficiency and utilization. The architecture features a bus-based execution unit consisting of four to eight 32-bit GaAs RISC microprocessors running at a 200 MHz clock rate for a peak performance of 1.6 BOPS. The execution unit is connected to a vector memory with three buses capable of transferring two input words and one output word every 10 nsec. The address generators inside the vector memory perform different vector addressing modes and feed the data to the execution unit. The functions discussed in this paper include basic MATRIX OPERATIONS, 2-D SPATIAL CONVOLUTION, HISTOGRAM, and FFT. For each of these algorithms, assembly language programs were run on a behavioral model of the system to obtain performance figures.

  1. An ASIC memory buffer controller for a high speed disk system

    NASA Technical Reports Server (NTRS)

    Hodson, Robert F.; Campbell, Steve

    1993-01-01

    The need for large capacity, high speed mass memory storage devices has become increasingly evident at NASA during the past decade. High performance mass storage systems are crucial to present and future NASA systems. Spaceborne data storage system requirements have grown in response to the increasing amounts of data generated and processed by orbiting scientific experiments. Predictions indicate increases in the volume of data by orders of magnitude during the next decade. Current predictions are for storage capacities on the order of terabits (Tb), with data rates exceeding one gigabit per second (Gbps). As part of the design effort for a state of the art mass storage system, NASA Langley has designed a 144 CMOS ASIC to support high speed data transfers. This paper discusses the system architecture, ASIC design and some of the lessons learned in the development process.

  2. Method and apparatus for managing access to a memory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeBenedictis, Erik

    A method and apparatus for managing access to a memory of a computing system. A controller transforms a plurality of operations that represent a computing job into an operational memory layout that reduces a size of a selected portion of the memory that needs to be accessed to perform the computing job. The controller stores the operational memory layout in a plurality of memory cells within the selected portion of the memory. The controller controls a sequence by which a processor in the computing system accesses the memory to perform the computing job using the operational memory layout. The operational memory layout reduces an amount of energy consumed by the processor to perform the computing job.

  3. Simulation of reaction diffusion processes over biologically relevant size and time scales using multi-GPU workstations

    PubMed Central

    Hallock, Michael J.; Stone, John E.; Roberts, Elijah; Fry, Corey; Luthey-Schulten, Zaida

    2014-01-01

    Simulation of in vivo cellular processes with the reaction-diffusion master equation (RDME) is a computationally expensive task. Our previous software enabled simulation of inhomogeneous biochemical systems for small bacteria over long time scales using the MPD-RDME method on a single GPU. Simulations of larger eukaryotic systems exceed the on-board memory capacity of individual GPUs, and long time simulations of modest-sized cells such as yeast are impractical on a single GPU. We present a new multi-GPU parallel implementation of the MPD-RDME method based on a spatial decomposition approach that supports dynamic load balancing for workstations containing GPUs of varying performance and memory capacity. We take advantage of high-performance features of CUDA for peer-to-peer GPU memory transfers and evaluate the performance of our algorithms on state-of-the-art GPU devices. We present parallel efficiency and performance results for simulations using multiple GPUs as system size, particle counts, and number of reactions grow. We also demonstrate multi-GPU performance in simulations of the Min protein system in E. coli. Moreover, our multi-GPU decomposition and load balancing approach can be generalized to other lattice-based problems. PMID:24882911
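
    The authors' CUDA implementation is not shown here; the sketch below (all names and numbers invented) illustrates only the spatial-decomposition and dynamic load-balancing idea: slabs of the lattice are assigned in proportion to each GPU's throughput and re-partitioned from observed per-step times.

```python
# Assign z-slabs of an nz-deep lattice to devices in proportion to throughput,
# then re-partition from measured step times (dynamic load balancing).
def partition_lattice(nz, throughputs):
    """Return per-device (start, stop) slab bounds along the z axis."""
    total = sum(throughputs)
    bounds, start = [], 0
    for i, t in enumerate(throughputs):
        stop = nz if i == len(throughputs) - 1 else start + round(nz * t / total)
        bounds.append((start, stop))
        start = stop
    return bounds

def rebalance(bounds, step_times, nz):
    """Use observed per-device step times to derive new relative throughputs."""
    planes_per_second = [(stop - start) / t
                         for (start, stop), t in zip(bounds, step_times)]
    return partition_lattice(nz, planes_per_second)

if __name__ == "__main__":
    slabs = partition_lattice(256, throughputs=[1.0, 1.0, 0.5])  # two fast GPUs, one slow
    print("initial slabs:   ", slabs)
    print("after rebalance: ", rebalance(slabs, step_times=[0.10, 0.11, 0.30], nz=256))
```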

  4. Simulation of reaction diffusion processes over biologically relevant size and time scales using multi-GPU workstations.

    PubMed

    Hallock, Michael J; Stone, John E; Roberts, Elijah; Fry, Corey; Luthey-Schulten, Zaida

    2014-05-01

    Simulation of in vivo cellular processes with the reaction-diffusion master equation (RDME) is a computationally expensive task. Our previous software enabled simulation of inhomogeneous biochemical systems for small bacteria over long time scales using the MPD-RDME method on a single GPU. Simulations of larger eukaryotic systems exceed the on-board memory capacity of individual GPUs, and long time simulations of modest-sized cells such as yeast are impractical on a single GPU. We present a new multi-GPU parallel implementation of the MPD-RDME method based on a spatial decomposition approach that supports dynamic load balancing for workstations containing GPUs of varying performance and memory capacity. We take advantage of high-performance features of CUDA for peer-to-peer GPU memory transfers and evaluate the performance of our algorithms on state-of-the-art GPU devices. We present parallel efficiency and performance results for simulations using multiple GPUs as system size, particle counts, and number of reactions grow. We also demonstrate multi-GPU performance in simulations of the Min protein system in E. coli. Moreover, our multi-GPU decomposition and load balancing approach can be generalized to other lattice-based problems.

  5. NRL Fact Book

    DTIC Science & Technology

    2008-01-01

    Distributed network-based battle management; High performance computing supporting uniform and nonuniform memory access with single and multithreaded... pallet; Airborne EO/IR and radar sensors; VNIR through SWIR hyperspectral systems; VNIR, MWIR, and LWIR high-resolution systems; Wideband SAR systems... meteorological sensors; Hyperspectral sensor systems (PHILLS); Mid-wave infrared (MWIR) Indium Antimonide (InSb) imaging system; Long-wave infrared (LWIR...

  6. The effects of GABAA and NMDA receptors in the shell-accumbens on spatial memory of METH-treated rats.

    PubMed

    Heysieattalab, Soomaayeh; Naghdi, Nasser; Zarrindast, Mohammad-Reza; Haghparast, Abbas; Mehr, Shahram Ejtemaei; Khoshbouei, Habibeh

    2016-03-01

    Methamphetamine (METH) is a highly addictive and neurotoxic psychostimulant. Its use in humans is often associated with neurocognitive impairment and deficits in hippocampal plasticity. The striatal dopamine system is one of the main targets of METH. The dopamine neurons in the striatum directly or indirectly regulate GABA and glutamatergic signaling in this region and thus its outputs. This is consistent with previous reports showing that modification of neuronal activity in the striatum modulates the expression of hippocampal LTP and hippocampal-dependent memory tasks such as the Morris water maze (MWM). Therefore, reversing or preventing METH-induced synaptic modifications via pharmacological manipulations of the shell of the nucleus accumbens (shell-NAc) may provide a viable therapeutic target to attenuate METH-induced memory deficits. This study was designed to investigate the role of intra-shell NAc manipulation of GABAA and NMDA receptors and their interaction with METH on memory performance in the MWM task. Pharmacological manipulations were performed in rats that received METH or saline. We found that systemic saline plus intra-shell NAc infusions of muscimol dose-dependently impaired performance, while bicuculline had no effect. Surprisingly, intra-NAc infusion of 0.005 μg/rat muscimol, a dose that has no effect on memory performance (ineffective dose), prevented METH-induced memory impairment. By contrast, intra-NAc infusion of bicuculline (0.2 μg/rat) increased METH-induced memory impairment. However, pre-training intra-NAc infusions of D-AP5 dose-dependently impaired performance, while NMDA had no effect in rats that received systemic saline (control group). Intra-NAc infusion of an ineffective dose of NMDA (0.1 μg/rat) increased METH-induced memory impairment. Furthermore, intra-NAc infusion of D-AP5 at an ineffective dose (0.1 μg/rat) prevented METH-induced memory impairment. Our results are consistent with the interpretation that the METH-mediated learning deficit might be due to modification of the hippocampus-VTA loop and that augmentation of GABAA receptor function in the shell-NAc may provide a new therapeutic target for alleviating METH-induced memory deficits. Copyright © 2015. Published by Elsevier Inc.

  7. Non-Markovianity-assisted high-fidelity Deutsch-Jozsa algorithm in diamond

    NASA Astrophysics Data System (ADS)

    Dong, Yang; Zheng, Yu; Li, Shen; Li, Cong-Cong; Chen, Xiang-Dong; Guo, Guang-Can; Sun, Fang-Wen

    2018-01-01

    The memory effects in non-Markovian quantum dynamics can induce the revival of quantum coherence, which is believed to provide important physical resources for quantum information processing (QIP). However, no real quantum algorithms have been demonstrated with the help of such memory effects. Here, we experimentally implemented a non-Markovianity-assisted high-fidelity refined Deutsch-Jozsa algorithm (RDJA) with a solid spin in diamond. The memory effects can induce pronounced non-monotonic variations in the RDJA results, which were confirmed to follow a non-Markovian quantum process by measuring the non-Markovianity of the spin system. By applying the memory effects as physical resources with the assistance of dynamical decoupling, the probability of success of RDJA was elevated above 97% in the open quantum system. This study not only demonstrates that the non-Markovianity is an important physical resource but also presents a feasible way to employ this physical resource. It will stimulate the application of the memory effects in non-Markovian quantum dynamics to improve the performance of practical QIP.

  8. The Association of Aging and Aerobic Fitness With Memory

    PubMed Central

    Bullock, Alexis M.; Mizzi, Allison L.; Kovacevic, Ana; Heisz, Jennifer J.

    2018-01-01

    The present study examined the differential effects of aging and fitness on memory. Ninety-five young adults (YA) and 81 older adults (OA) performed the Mnemonic Similarity Task (MST) to assess high-interference memory and general recognition memory. Age-related differences in high-interference memory were observed across the lifespan, with performance progressively worsening from young to old. In contrast, age-related differences in general recognition memory were not observed until after 60 years of age. Furthermore, OA with higher aerobic fitness had better high-interference memory, suggesting that exercise may be an important lifestyle factor influencing this aspect of memory. Overall, these findings suggest different trajectories of decline for high-interference and general recognition memory, with a selective role for physical activity in promoting high-interference memory. PMID:29593524

  9. Department of Defense In-House RDT and E Activities: Management Analysis Report for Fiscal Year 1993

    DTIC Science & Technology

    1994-11-01

    A worldwide unique lab because it houses a high-speed modeling and simulation system, a prototype... E Division, San Diego, CA: High Performance Computing Laboratory providing a wide range of advanced computer systems for the scientific investigation... Machines CM-200 and a 256-node Thinking Machines CM-5. The CM-5 is in a very large-memory, high-performance (32 Gbytes, >40 GFlop) configuration,...

  10. On the impact of communication complexity in the design of parallel numerical algorithms

    NASA Technical Reports Server (NTRS)

    Gannon, D.; Vanrosendale, J.

    1984-01-01

    This paper describes two models of the cost of data movement in parallel numerical algorithms. One model is a generalization of an approach due to Hockney, and is suitable for shared memory multiprocessors where each processor has vector capabilities. The other model is applicable to highly parallel nonshared memory MIMD systems. In the second model, algorithm performance is characterized in terms of the communication network design. Techniques used in VLSI complexity theory are also brought in, and algorithm independent upper bounds on system performance are derived for several problems that are important to scientific computation.
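
    For reference, a Hockney-style point-to-point model expresses transfer time as a startup latency plus message size divided by asymptotic bandwidth; the sketch below evaluates it with made-up parameters to show the latency-bound and bandwidth-bound regimes. It illustrates the modeling style only, not the paper's generalized model.

```python
# Hockney-style communication cost: time(n) = t0 + n / r_inf.
# t0 and r_inf below are assumed values for illustration, not measured figures.
def hockney_time(n_bytes, t0=2e-6, r_inf=10e9):
    """Estimated transfer time in seconds for an n-byte message."""
    return t0 + n_bytes / r_inf

if __name__ == "__main__":
    for n in (64, 4096, 1 << 20):
        t = hockney_time(n)
        print(f"{n:>8} B: {t * 1e6:8.2f} us, effective bandwidth {n / t / 1e9:.2f} GB/s")
```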

  11. On the impact of communication complexity on the design of parallel numerical algorithms

    NASA Technical Reports Server (NTRS)

    Gannon, D. B.; Van Rosendale, J.

    1984-01-01

    This paper describes two models of the cost of data movement in parallel numerical algorithms. One model is a generalization of an approach due to Hockney, and is suitable for shared memory multiprocessors where each processor has vector capabilities. The other model is applicable to highly parallel nonshared memory MIMD systems. In this second model, algorithm performance is characterized in terms of the communication network design. Techniques used in VLSI complexity theory are also brought in, and algorithm-independent upper bounds on system performance are derived for several problems that are important to scientific computation.

  12. Software Coherence in Multiprocessor Memory Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Bolosky, William Joseph

    1993-01-01

    Processors are becoming faster and multiprocessor memory interconnection systems are not keeping up. Therefore, it is necessary to have threads and the memory they access as near one another as possible. Typically, this involves putting memory or caches with the processors, which gives rise to the problem of coherence: if one processor writes an address, any other processor reading that address must see the new value. This coherence can be maintained by the hardware or with software intervention. Systems of both types have been built in the past; the hardware-based systems tended to outperform the software ones. However, the ratio of processor to interconnect speed is now so high that the extra overhead of the software systems may no longer be significant. This issue is explored both by implementing a software maintained system and by introducing and using the technique of offline optimal analysis of memory reference traces. It finds that in properly built systems, software maintained coherence can perform comparably to or even better than hardware maintained coherence. The architectural features necessary for efficient software coherence to be profitable include a small page size, a fast trap mechanism, and the ability to execute instructions while remote memory references are outstanding.

  13. Does visual short-term memory have a high-capacity stage?

    PubMed

    Matsukura, Michi; Hollingworth, Andrew

    2011-12-01

    Visual short-term memory (VSTM) has long been considered a durable, limited-capacity system for the brief retention of visual information. However, a recent work by Sligte et al. (Plos One 3:e1699, 2008) reported that, relatively early after the removal of a memory array, a cue allowed participants to access a fragile, high-capacity stage of VSTM that is distinct from iconic memory. In the present study, we examined whether this stage division is warranted by attempting to corroborate the existence of an early, high-capacity form of VSTM. The results of four experiments did not support Sligte et al.'s claim, since we did not obtain evidence for VSTM retention that exceeded traditional estimates of capacity. However, performance approaching that observed in Sligte et al. can be achieved through extensive practice, providing a clear explanation for their findings. Our evidence favors the standard view of VSTM as a limited-capacity system that maintains a few object representations in a relatively durable form.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bender, Michael A.; Berry, Jonathan W.; Hammond, Simon D.

    A challenge in computer architecture is that processors often cannot be fed data from DRAM as fast as CPUs can consume it. Therefore, many applications are memory-bandwidth bound. With this motivation and the realization that traditional architectures (with all DRAM reachable only via bus) are insufficient to feed groups of modern processing units, vendors have introduced a variety of non-DDR 3D memory technologies (Hybrid Memory Cube (HMC), Wide I/O 2, High Bandwidth Memory (HBM)). These offer higher bandwidth and lower power by stacking DRAM chips on the processor or nearby on a silicon interposer. We will call these solutions “near-memory,” and if user-addressable, “scratchpad.” High-performance systems on the market now offer two levels of main memory: near-memory on package and traditional DRAM further away. In the near term we expect the latencies of near-memory and DRAM to be similar. Here, it is natural to think of near-memory as another module on the DRAM level of the memory hierarchy. Vendors are expected to offer modes in which the near memory is used as cache, but we believe that this will be inefficient.
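
    As a rough illustration (not from the report) of why cache-mode near-memory can disappoint, the sketch below applies a simple harmonic-mean bandwidth model: with hit fraction h served from near-memory, effective streaming bandwidth falls quickly toward the far-DRAM figure as h drops. Both bandwidth numbers are assumed.

```python
# Two-level memory as a cache: average time per byte is the hit-weighted sum of
# the two per-byte costs, so effective bandwidth is a harmonic-style combination.
def effective_bandwidth(h, bw_near=400e9, bw_far=90e9):
    """Bytes/s seen by a streaming kernel with near-memory hit fraction h."""
    return 1.0 / (h / bw_near + (1.0 - h) / bw_far)

if __name__ == "__main__":
    for h in (1.0, 0.9, 0.5, 0.0):
        print(f"hit fraction {h:.1f}: ~{effective_bandwidth(h) / 1e9:5.1f} GB/s")
```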

  15. Flash memory management system and method utilizing multiple block list windows

    NASA Technical Reports Server (NTRS)

    Chow, James (Inventor); Gender, Thomas K. (Inventor)

    2005-01-01

    The present invention provides a flash memory management system and method with increased performance. The flash memory management system provides the ability to efficiently manage and allocate flash memory use in a way that improves reliability and longevity, while maintaining good performance levels. The flash memory management system includes a free block mechanism, a disk maintenance mechanism, a bad block detection mechanism, a flash status mechanism, and a new bank detection mechanism. The free block mechanism provides efficient sorting of free blocks to facilitate selecting low-use blocks for writing. The disk maintenance mechanism provides the ability to efficiently clean flash memory blocks during processor idle times. The bad block detection mechanism provides the ability to better detect when a block of flash memory is likely to go bad. The flash status mechanism stores information in fast access memory that describes the content and status of the data on the flash disk. The new bank detection mechanism provides the ability to automatically detect when new banks of flash memory are added to the system. Together, these mechanisms provide a flash memory management system that can improve the operational efficiency of systems that utilize flash memory.
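
    A minimal, hypothetical sketch of the "free block" and "bad block detection" ideas described above: free blocks sit in a min-heap keyed by erase count so low-use blocks are handed out first, and a block that repeatedly fails to erase is retired. The names and the retirement threshold are illustrative, not taken from the patent.

```python
# Free-block bookkeeping: hand out the least-worn block for each new write and
# retire blocks whose erases keep failing (a crude "likely to go bad" signal).
import heapq

ERASE_FAILURE_LIMIT = 3            # assumed retirement threshold

class FreeBlockManager:
    def __init__(self, block_ids):
        self.heap = [(0, b) for b in sorted(block_ids)]   # (erase_count, block_id)
        heapq.heapify(self.heap)
        self.failures = {b: 0 for b in block_ids}
        self.bad = set()

    def allocate(self):
        """Pop the free block with the lowest erase count for the next write."""
        erase_count, block = heapq.heappop(self.heap)
        return block, erase_count

    def release(self, block, erase_count, erase_ok=True):
        """Return an erased block to the pool, or retire it if it keeps failing."""
        if erase_ok:
            self.failures[block] = 0
            heapq.heappush(self.heap, (erase_count + 1, block))
            return
        self.failures[block] += 1
        if self.failures[block] >= ERASE_FAILURE_LIMIT:
            self.bad.add(block)    # repeated erase failures: never hand out again

if __name__ == "__main__":
    mgr = FreeBlockManager(range(8))
    blk, wear = mgr.allocate()
    mgr.release(blk, wear)                           # normal erase and reuse
    blk, wear = mgr.allocate()
    for _ in range(ERASE_FAILURE_LIMIT):
        mgr.release(blk, wear, erase_ok=False)       # simulate a failing block
    print("retired as bad:", mgr.bad)
```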

  16. Spaceborne Processor Array

    NASA Technical Reports Server (NTRS)

    Chow, Edward T.; Schatzel, Donald V.; Whitaker, William D.; Sterling, Thomas

    2008-01-01

    A Spaceborne Processor Array in Multifunctional Structure (SPAMS) can lower the total mass of the electronic and structural overhead of spacecraft, resulting in reduced launch costs, while increasing the science return through dynamic onboard computing. SPAMS integrates the multifunctional structure (MFS) and the Gilgamesh Memory, Intelligence, and Network Device (MIND) multi-core in-memory computer architecture into a single-system super-architecture. This transforms every inch of a spacecraft into a sharable, interconnected, smart computing element to increase computing performance while simultaneously reducing mass. The MIND in-memory architecture provides a foundation for high-performance, low-power, and fault-tolerant computing. The MIND chip has an internal structure that includes memory, processing, and communication functionality. The Gilgamesh is a scalable system comprising multiple MIND chips interconnected to operate as a single, tightly coupled, parallel computer. The array of MIND components shares a global, virtual name space for program variables and tasks that are allocated at run time to the distributed physical memory and processing resources. Individual processor-memory nodes can be activated or powered down at run time to provide active power management and to configure around faults. A SPAMS system is comprised of a distributed Gilgamesh array built into MFS, interfaces into instrument and communication subsystems, a mass storage interface, and a radiation-hardened flight computer.

  17. Out-of-Core Streamline Visualization on Large Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Ueng, Shyh-Kuang; Sikorski, K.; Ma, Kwan-Liu

    1997-01-01

    It's advantageous for computational scientists to have the capability to perform interactive visualization on their desktop workstations. For data on large unstructured meshes, this capability is not generally available. In particular, particle tracing on unstructured grids can result in a high percentage of non-contiguous memory accesses and therefore may perform very poorly with virtual memory paging schemes. The alternative of visualizing a lower resolution of the data degrades the original high-resolution calculations. This paper presents an out-of-core approach for interactive streamline construction on large unstructured tetrahedral meshes containing millions of elements. The out-of-core algorithm uses an octree to partition and restructure the raw data into subsets stored into disk files for fast data retrieval. A memory management policy tailored to the streamline calculations is used such that during the streamline construction only a very small amount of data are brought into the main memory on demand. By carefully scheduling computation and data fetching, the overhead of reading data from the disk is significantly reduced and good memory performance results. This out-of-core algorithm makes possible interactive streamline visualization of large unstructured-grid data sets on a single mid-range workstation with relatively low main-memory capacity: 5-20 megabytes. Our test results also show that this approach is much more efficient than relying on virtual memory and operating system's paging algorithms.
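
    The authors' implementation is not shown here; the sketch below illustrates only the demand-loading side of such an approach, with a hypothetical one-file-per-octree-leaf layout: leaves are read from disk only when a streamline enters them, and a small LRU-bounded cache keeps the resident set within a fixed budget.

```python
# Demand-loaded octree leaves with a tiny LRU cache. The file naming scheme and
# pickle format are invented for this sketch; the demo below writes a few fake
# leaf files and then accesses them, showing reuse and eviction.
from collections import OrderedDict
import pickle

CACHE_BUDGET = 2                       # tiny budget so the demo shows eviction

class LeafCache:
    def __init__(self, path_pattern="octree_leaf_{:05d}.pkl"):
        self.path_pattern = path_pattern
        self.resident = OrderedDict()  # leaf_id -> cell data, in LRU order

    def get(self, leaf_id):
        if leaf_id in self.resident:
            self.resident.move_to_end(leaf_id)        # mark most recently used
            return self.resident[leaf_id]
        with open(self.path_pattern.format(leaf_id), "rb") as f:
            cells = pickle.load(f)                    # fetch only this leaf from disk
        self.resident[leaf_id] = cells
        if len(self.resident) > CACHE_BUDGET:
            self.resident.popitem(last=False)         # evict least recently used
        return cells

if __name__ == "__main__":
    for i in range(4):                                # fake pre-partitioned leaf files
        with open("octree_leaf_{:05d}.pkl".format(i), "wb") as f:
            pickle.dump({"leaf": i, "cells": list(range(10))}, f)
    cache = LeafCache()
    for leaf in (0, 1, 0, 2, 3, 0):                   # order a streamline might visit
        hit = leaf in cache.resident
        cache.get(leaf)
        print(f"leaf {leaf}: {'hit' if hit else 'loaded'}, resident={list(cache.resident)}")
```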

  18. PIMS: Memristor-Based Processing-in-Memory-and-Storage.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cook, Jeanine

    Continued progress in computing has augmented the quest for higher performance with a new quest for higher energy efficiency. This has led to the re-emergence of Processing-In-Memory (PIM) architectures that offer higher density and performance with some boost in energy efficiency. Past PIM work either integrated a standard CPU with a conventional DRAM to improve the CPU-memory link, or used a bit-level processor with Single Instruction Multiple Data (SIMD) control, but neither matched the energy consumption of the memory to the computation. We originally proposed to develop a new architecture derived from PIM that more effectively addressed energy efficiency for high performance scientific, data analytics, and neuromorphic applications. We also originally planned to implement a von Neumann architecture with arithmetic/logic units (ALUs) that matched the power consumption of an advanced storage array to maximize energy efficiency. Implementing this architecture in storage was our original idea, since by augmenting storage (instead of memory), the system could address both in-memory computation and applications that accessed larger data sets directly from storage, hence Processing-in-Memory-and-Storage (PIMS). However, as our research matured, we discovered several things that changed our original direction, the most important being that a PIM that implements a standard von Neumann-type architecture results in a significant energy efficiency improvement, but only about an O(10) performance improvement. In addition to this, the emergence of new memory technologies moved us to proposing a non-von Neumann architecture, called Superstrider, implemented not in storage, but in a new DRAM technology called High Bandwidth Memory (HBM). HBM is a stacked DRAM technology that includes a logic layer where an architecture such as Superstrider could potentially be implemented.

  19. Multicore Architecture-aware Scientific Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Srinivasa, Avinash

    Modern high performance systems are becoming increasingly complex and powerful due to advancements in processor and memory architecture. In order to keep up with this increasing complexity, applications have to be augmented with certain capabilities to fully exploit such systems. These may be at the application level, such as static or dynamic adaptations, or at the system level, like having strategies in place to override some of the default operating system policies, the main objective being to improve the computational performance of the application. The current work proposes two such capabilities with respect to multi-threaded scientific applications, in particular a large-scale physics application computing ab-initio nuclear structure. The first involves using a middleware tool to invoke dynamic adaptations in the application, so as to be able to adjust to changing computational resource availability at run-time. The second involves a strategy for effective placement of data in main memory, to optimize memory access latencies and bandwidth. These capabilities, when included, were found to have a significant impact on application performance, resulting in average speedups of as much as two to four times.

  20. Data Movement Dominates: Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jacob, Bruce L.

    Over the past three years in this project, what we have observed is that the primary reason for data movement in large-scale systems is that the per-node capacity is not large enough—i.e., one of the solutions to the data-movement problem (certainly not the only solution that is required, but a significant one nonetheless) is to increase per-node capacity so that inter-node traffic is reduced. This unfortunately is not as simple as it sounds. Today’s main memory systems for datacenters, enterprise computing systems, and supercomputers fail to provide high per-socket capacity [Dirik & Jacob 2009; Cooper-Balis et al. 2012], except at extremely high price points (factors of 10–100x the cost/bit of consumer main-memory systems) [Stokes 2008]. The reason is that our choice of technology for today’s main memory systems—i.e., DRAM, which we have used as a main-memory technology since the 1970s [Jacob et al. 2007]—can no longer keep up with our needs for density and price per bit. Main memory systems have always been built from the cheapest, densest, lowest-power memory technology available, and DRAM is no longer the cheapest, the densest, nor the lowest-power storage technology out there. It is now time for DRAM to go the way that SRAM went: move out of the way for a cheaper, slower, denser storage technology, and become a cache instead. This inflection point has happened before, in the context of SRAM yielding to DRAM. There was once a time that SRAM was the storage technology of choice for all main memories [Tomasulo 1967; Thornton 1970; Kidder 1981]. However, once DRAM hit volume production in the 1970s and 80s, it supplanted SRAM as a main memory technology because it was cheaper, and it was denser. It also happened to be lower power, but that was not the primary consideration of the day. At the time, it was recognized that DRAM was much slower than SRAM, but it was only at the supercomputer level (for instance, the Cray X-MP in the 1980s and its follow-on, the Cray Y-MP, in the 1990s) that one could afford to build ever-larger main memories out of SRAM—the reasoning for moving to DRAM was that an appropriately designed memory hierarchy, built of DRAM as main memory and SRAM as a cache, would approach the performance of SRAM, at the price-per-bit of DRAM [Mashey 1999]. Today it is quite clear that, were one to build an entire multi-gigabyte main memory out of SRAM instead of DRAM, one could improve the performance of almost any computer system by up to an order of magnitude—but this option is not even considered, because to build that system would be prohibitively expensive. It is now time to revisit the same design choice in the context of modern technologies and modern systems. For reasons both technical and economic, we can no longer afford to build ever-larger main memory systems out of DRAM. Flash memory, on the other hand, is significantly cheaper and denser than DRAM and therefore should take its place. While it is true that flash is significantly slower than DRAM, one can afford to build much larger main memories out of flash than out of DRAM, and we show that an appropriately designed memory hierarchy, built of flash as main memory and DRAM as a cache, will approach the performance of DRAM, at the price-per-bit of flash. In our studies as part of this project, we have investigated Non-Volatile Main Memory (NVMM), a new main-memory architecture for large-scale computing systems, one that is specifically designed to address the weaknesses described previously.
    In particular, it provides the following features. (1) Non-volatility: the bulk of the storage is composed of NAND flash, and in this organization DRAM is used only as a cache, not as main memory; furthermore, the flash is journaled, which means that operations such as checkpoint/restore are already built into the system. (2) 1+ terabytes of storage per socket: SSDs and DRAM DIMMs have roughly the same form factor (several square inches of PCB surface area), and terabyte SSDs are now commonplace. (3) Performance approaching that of DRAM: DRAM is used as a cache to the flash system. (4) Price-per-bit approaching that of NAND: flash is currently well under $0.50 per gigabyte, whereas DDR3 SDRAM is currently just over $10 per gigabyte [Newegg 2014]. Even today, one can build an easily affordable main memory system with a terabyte or more of NAND storage per CPU socket (which would be extremely expensive were one to use DRAM), and our cycle-accurate, full-system experiments show that this can be done at a performance point that lies within a factor of two of DRAM.
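
    The performance claim above follows from a standard average-access-time argument for a two-level hierarchy. The sketch below (mine, not code from the project) plugs assumed, purely illustrative latencies and DRAM-cache hit rates into that formula to show how a flash-backed main memory can stay within a small factor of DRAM as long as the DRAM cache absorbs most accesses.

    ```c
    /* Minimal sketch (illustrative, not from the report): average access time of a
     * hypothetical flash-backed main memory fronted by a DRAM cache.
     * The latencies and hit rates are assumptions, not measured values. */
    #include <stdio.h>

    int main(void) {
        const double dram_ns  = 100.0;    /* assumed DRAM access latency (ns)     */
        const double flash_ns = 50000.0;  /* assumed NAND flash read latency (ns) */
        const double hits[]   = {0.90, 0.95, 0.99, 0.999};

        for (size_t i = 0; i < sizeof hits / sizeof hits[0]; ++i) {
            /* AMAT = hit_rate * DRAM latency + miss_rate * flash latency */
            double amat = hits[i] * dram_ns + (1.0 - hits[i]) * flash_ns;
            printf("hit rate %.3f -> effective access time %7.1f ns (%5.1fx DRAM)\n",
                   hits[i], amat, amat / dram_ns);
        }
        return 0;
    }
    ```

    With these assumed numbers, a 99.9% hit rate keeps the effective access time within roughly 1.5x of DRAM; the real figure depends on the actual flash latency and workload locality.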

  1. Optoelectronic associative recall using motionless-head parallel readout optical disk

    NASA Astrophysics Data System (ADS)

    Marchand, P. J.; Krishnamoorthy, A. V.; Ambs, P.; Esener, S. C.

    1990-12-01

    High data rates, low retrieval times, and simple implementation are presently shown to be obtainable by means of a motionless-head 2D parallel-readout system for optical disks. Since the optical disk obviates mechanical head motions for access, focusing, and tracking, addressing is performed exclusively through the disk's rotation. Attention is given to a high-performance associative memory system configuration which employs a parallel readout disk.

  2. Better together: Left and right hemisphere engagement to reduce age-related memory loss.

    PubMed

    Brambilla, Michela; Manenti, Rosa; Ferrari, Clarissa; Cotelli, Maria

    2015-10-15

    Episodic memory is a cognitive function that appears more susceptible than others to the effects of aging. The main aim of this study was to investigate whether the magnitude of functional hemispheric lateralization during an episodic memory test is positively correlated with memory performance, which would indicate a beneficial pattern of neural processing in high-performing older adults but not in low-performing participants. We applied anodal transcranial Direct Current Stimulation (tDCS) or sham stimulation over the left and right hemispheres in a group of young subjects and in high-performing and low-performing older participants during an experimental verbal episodic memory task. Remarkably, young individuals and high-performing older adults exhibited similar performances on episodic memory tasks, and both groups showed symmetrical recruitment of left and right areas during memory retrieval. In contrast, low-performing older adults, who obtained lower scores on the memory tasks, demonstrated a greater engagement of the left hemisphere during the verbal memory task. Furthermore, a structural equation model was used to analyze the interrelations between the index of interhemispheric asymmetry and several neuropsychological domains. We found that the bilateral engagement of dorsolateral prefrontal cortex and parietal cortex regions had a direct correlation with memory and executive functions evaluated as latent constructs. These findings draw attention to the brain maintenance hypothesis. The potential of neurostimulation for cognitive enhancement is particularly promising for preventing memory loss during aging. Copyright © 2015 Elsevier B.V. All rights reserved.

  3. System-Level Radiation Hardening

    NASA Technical Reports Server (NTRS)

    Ladbury, Ray

    2014-01-01

    Although system-level radiation hardening can enable the use of high-performance components and enhance the capabilities of a spacecraft, hardening techniques can be costly and can compromise the very performance designers sought from the high-performance components. Moreover, such techniques often result in a complicated design, especially if several complex commercial microcircuits are used, each posing its own hardening challenges. The latter risk is particularly acute for Commercial-Off-The-Shelf components since high-performance parts (e.g. double-data-rate synchronous dynamic random access memories - DDR SDRAMs) may require other high-performance commercial parts (e.g. processors) to support their operation. For these reasons, it is essential that system-level radiation hardening be a coordinated effort, from setting requirements through testing up to and including validation.

  4. GABA level, gamma oscillation, and working memory performance in schizophrenia

    PubMed Central

    Chen, Chi-Ming A.; Stanford, Arielle D.; Mao, Xiangling; Abi-Dargham, Anissa; Shungu, Dikoma C.; Lisanby, Sarah H.; Schroeder, Charles E.; Kegeles, Lawrence S.

    2014-01-01

    A relationship between working memory impairment, disordered neuronal oscillations, and abnormal prefrontal GABA function has been hypothesized in schizophrenia; however, in vivo GABA measurements and gamma band neural synchrony have not yet been compared in schizophrenia. This case–control pilot study (N = 24) compared baseline and working memory task-induced neuronal oscillations acquired with high-density electroencephalograms (EEGs) to GABA levels measured in vivo with magnetic resonance spectroscopy. Working memory performance, baseline GABA level in the left dorsolateral prefrontal cortex (DLPFC), and measures of gamma oscillations from EEGs at baseline and during a working memory task were obtained. A major limitation of this study is a relatively small sample size for several analyses due to the integration of diverse methodologies and participant compliance. Working memory performance was significantly lower for patients than for controls. During the working memory task, patients (n = 7) had significantly lower amplitudes in gamma oscillations than controls (n = 9). However, both at rest and across working memory stages, there were significant correlations between gamma oscillation amplitude and left DLPFC GABA level. Peak gamma frequency during the encoding stage of the working memory task (n = 16) significantly correlated with GABA level and working memory performance. Despite gamma band amplitude deficits in patients across working memory stages, both baseline and working memory-induced gamma oscillations showed strong dependence on baseline GABA levels in patients and controls. These findings suggest a critical role for GABA function in gamma band oscillations, even under conditions of system and cognitive impairments as seen in schizophrenia. PMID:24749063

  5. GABA level, gamma oscillation, and working memory performance in schizophrenia.

    PubMed

    Chen, Chi-Ming A; Stanford, Arielle D; Mao, Xiangling; Abi-Dargham, Anissa; Shungu, Dikoma C; Lisanby, Sarah H; Schroeder, Charles E; Kegeles, Lawrence S

    2014-01-01

    A relationship between working memory impairment, disordered neuronal oscillations, and abnormal prefrontal GABA function has been hypothesized in schizophrenia; however, in vivo GABA measurements and gamma band neural synchrony have not yet been compared in schizophrenia. This case-control pilot study (N = 24) compared baseline and working memory task-induced neuronal oscillations acquired with high-density electroencephalograms (EEGs) to GABA levels measured in vivo with magnetic resonance spectroscopy. Working memory performance, baseline GABA level in the left dorsolateral prefrontal cortex (DLPFC), and measures of gamma oscillations from EEGs at baseline and during a working memory task were obtained. A major limitation of this study is a relatively small sample size for several analyses due to the integration of diverse methodologies and participant compliance. Working memory performance was significantly lower for patients than for controls. During the working memory task, patients (n = 7) had significantly lower amplitudes in gamma oscillations than controls (n = 9). However, both at rest and across working memory stages, there were significant correlations between gamma oscillation amplitude and left DLPFC GABA level. Peak gamma frequency during the encoding stage of the working memory task (n = 16) significantly correlated with GABA level and working memory performance. Despite gamma band amplitude deficits in patients across working memory stages, both baseline and working memory-induced gamma oscillations showed strong dependence on baseline GABA levels in patients and controls. These findings suggest a critical role for GABA function in gamma band oscillations, even under conditions of system and cognitive impairments as seen in schizophrenia.

  6. Circuit-Switched Memory Access in Photonic Interconnection Networks for High-Performance Embedded Computing

    DTIC Science & Technology

    2010-07-22

    dependent, providing a natural bandwidth match between compute cores and the memory subsystem. • High Bandwidth Density. Waveguides crossing the chip...simulate this memory access architecture on a 256-core chip with a concentrated 64-node network using detailed traces of high-performance embedded...memory modules, we place memory access points (MAPs) around the periphery of the chip connected to the network. These MAPs, shown in Figure 4, contain

  7. Two-level main memory co-design: Multi-threaded algorithmic primitives, analysis, and simulation

    DOE PAGES

    Bender, Michael A.; Berry, Jonathan W.; Hammond, Simon D.; ...

    2017-01-03

    A challenge in computer architecture is that processors often cannot be fed data from DRAM as fast as CPUs can consume it. Therefore, many applications are memory-bandwidth bound. With this motivation and the realization that traditional architectures (with all DRAM reachable only via bus) are insufficient to feed groups of modern processing units, vendors have introduced a variety of non-DDR 3D memory technologies (Hybrid Memory Cube (HMC), Wide I/O 2, High Bandwidth Memory (HBM)). These offer higher bandwidth and lower power by stacking DRAM chips on the processor or nearby on a silicon interposer. We will call these solutions “near-memory,” and, if user-addressable, “scratchpad.” High-performance systems on the market now offer two levels of main memory: near-memory on package and traditional DRAM further away. In the near term we expect the latencies of near-memory and DRAM to be similar. Here, it is natural to think of near-memory as another module on the DRAM level of the memory hierarchy. Vendors are expected to offer modes in which the near-memory is used as a cache, but we believe that this will be inefficient.
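
    As a rough illustration of the user-addressable scratchpad model the authors contrast with a hardware-managed cache, the following sketch (mine, not from the paper) stages a large array held in far memory through a small, explicitly managed buffer and processes it block by block. The plain malloc standing in for near-memory and the block size are assumptions; a real system would place the buffer with a vendor-specific allocator for on-package memory.

    ```c
    /* Hypothetical sketch of explicit scratchpad staging: process a large array
     * resident in "far" DRAM by copying fixed-size blocks into a small buffer
     * standing in for user-addressable near-memory. Sizes are illustrative. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define N     (1 << 24)   /* elements resident in far (capacity) memory     */
    #define BLOCK (1 << 16)   /* elements that fit in the near-memory buffer    */

    int main(void) {
        double *far_mem  = malloc(N * sizeof *far_mem);       /* far memory         */
        double *near_buf = malloc(BLOCK * sizeof *near_buf);  /* scratchpad stand-in */
        if (!far_mem || !near_buf) return 1;

        for (size_t i = 0; i < N; ++i) far_mem[i] = (double)i;

        double sum = 0.0;
        for (size_t base = 0; base < N; base += BLOCK) {
            size_t len = (N - base < BLOCK) ? N - base : BLOCK;
            memcpy(near_buf, far_mem + base, len * sizeof *near_buf); /* stage block */
            for (size_t i = 0; i < len; ++i)      /* operate on the staged copy */
                sum += near_buf[i] * near_buf[i];
        }
        printf("sum of squares = %.3e\n", sum);

        free(near_buf);
        free(far_mem);
        return 0;
    }
    ```

    The point of the explicit copy loop is that software, not the cache hardware, decides what occupies the scarce high-bandwidth space.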

  8. Performance model-directed data sieving for high-performance I/O

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yong; Lu, Yin; Amritkar, Prathamesh

    2014-09-10

    Many scientific computing applications and engineering simulations exhibit noncontiguous I/O access patterns. Data sieving is an important technique for improving the performance of noncontiguous I/O accesses by combining small, noncontiguous requests into a large, contiguous request. It has proven effective even though more data are potentially accessed than demanded. In this study, we propose a new data sieving approach, namely performance model-directed data sieving, or PMD data sieving for short. It improves the existing data sieving approach in two respects: (1) it dynamically determines when it is beneficial to perform data sieving; and (2) it dynamically determines how to perform data sieving if beneficial. It improves the performance of the existing data sieving approach considerably and reduces memory consumption, as verified by both theoretical analysis and experimental results. Given the importance of supporting noncontiguous accesses effectively and reducing memory pressure in a large-scale system, the proposed PMD data sieving approach holds great promise and will have an impact on high-performance I/O systems.
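
    To make the underlying data sieving idea concrete, here is a minimal, self-contained sketch (not the paper's PMD implementation, which lives inside the I/O middleware layer): instead of issuing one small read per noncontiguous request, it performs a single contiguous read that covers all requests and then copies out the wanted pieces. The file name, offsets, and lengths are hypothetical.

    ```c
    /* Minimal data-sieving sketch: service several small, noncontiguous reads
     * with one large contiguous read plus in-memory extraction. Offsets and
     * lengths are illustrative; error handling is kept minimal for brevity. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct req { long offset; size_t len; };

    int main(void) {
        /* Hypothetical noncontiguous requests, sorted by offset. */
        struct req reqs[] = { {100, 16}, {4096, 32}, {9000, 8} };
        size_t nreq = sizeof reqs / sizeof reqs[0];

        FILE *f = fopen("data.bin", "rb");   /* assumed local input file */
        if (!f) { perror("fopen"); return 1; }

        /* One contiguous read covering [first offset, last offset + last length). */
        long start = reqs[0].offset;
        long end   = reqs[nreq - 1].offset + (long)reqs[nreq - 1].len;
        char *buf  = malloc((size_t)(end - start));
        if (!buf) { fclose(f); return 1; }

        fseek(f, start, SEEK_SET);
        size_t got = fread(buf, 1, (size_t)(end - start), f);

        /* Extract each requested piece from the sieving buffer. */
        for (size_t i = 0; i < nreq; ++i) {
            char piece[64];
            long rel = reqs[i].offset - start;
            if ((size_t)rel + reqs[i].len <= got) {
                memcpy(piece, buf + rel, reqs[i].len);
                printf("request %zu: %zu bytes at offset %ld\n",
                       i, reqs[i].len, reqs[i].offset);
            }
        }

        free(buf);
        fclose(f);
        return 0;
    }
    ```

    Whether the single large read pays off depends on how much unwanted data sits in the gaps between requests, which is exactly the trade-off the paper's performance model evaluates at run time.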

  9. The hard fall effect: high working memory capacity leads to a higher, but less robust short-term memory performance.

    PubMed

    Thomassin, Noémylle; Gonthier, Corentin; Guerraz, Michel; Roulin, Jean-Luc

    2015-01-01

    Participants with a high working memory span tend to perform better than low spans in a variety of tasks. However, their performance is paradoxically more impaired when they have to perform two tasks at once, a phenomenon that could be labeled the "hard fall effect." The present study tested whether this effect exists in a short-term memory task, and investigated the proposal that the effect is due to high spans using efficient facilitative strategies under simple task conditions. Ninety-eight participants performed a spatial short-term memory task under simple and dual task conditions; stimuli presentation times either allowed for the use of complex facilitative strategies or not. High spans outperformed low spans only under simple task conditions when presentation times allowed for the use of facilitative strategies. These results indicate that the hard fall effect exists on a short-term memory task and may be caused by individual differences in strategy use.

  10. The future of memory

    NASA Astrophysics Data System (ADS)

    Marinella, M.

    In the not too distant future, the traditional memory and storage hierarchy may be replaced by a single Storage Class Memory (SCM) device integrated on or near the logic processor. Traditional magnetic hard drives, NAND flash, DRAM, and higher-level caches (L2 and up) will be replaced with a single high-performance memory device. The Storage Class Memory paradigm will require high speed (< 100 ns read/write), excellent endurance (> 10^12), nonvolatility (retention > 10 years), and low switching energies (< 10 pJ per switch). The International Technology Roadmap for Semiconductors (ITRS) has recently evaluated several candidate SCM technologies, including Resistive (or Redox) RAM, Spin Torque Transfer RAM (STT-MRAM), and phase change memory (PCM). All of these devices show potential well beyond that of current flash technologies, and research efforts are underway to improve their endurance, write speeds, and scalability to be on par with DRAM. This progress has interesting implications for space electronics: each of these emerging device technologies shows excellent resistance to the types of radiation typically found in space applications. Commercially developed, high-density storage class memory-based systems may include a memory that is physically radiation hard, and suitable for space applications without major shielding efforts. This paper reviews the Storage Class Memory concept, emerging memory devices, and possible applicability to radiation-hardened electronics for space.

  11. Engineering non-linear resonator mode interactions in circuit QED by continuous driving: Manipulation of a photonic quantum memory

    NASA Astrophysics Data System (ADS)

    Reagor, Matthew; Pfaff, Wolfgang; Heeres, Reinier; Ofek, Nissim; Chou, Kevin; Blumoff, Jacob; Leghtas, Zaki; Touzard, Steven; Sliwa, Katrina; Holland, Eric; Albert, Victor V.; Frunzio, Luigi; Devoret, Michel H.; Jiang, Liang; Schoelkopf, Robert J.

    2015-03-01

    Recent advances in circuit QED have shown great potential for using microwave resonators as quantum memories. In particular, it is possible to encode the state of a quantum bit in non-classical photonic states inside a high-Q linear resonator. An outstanding challenge is to perform controlled operations on such a photonic state. We demonstrate experimentally how a continuous drive on a transmon qubit coupled to a high-Q storage resonator can be used to induce non-linear dynamics of the resonator. Tailoring the drive properties allows us to cancel or enhance non-linearities in the system such that we can manipulate the state stored in the cavity. This approach can be used to either counteract undesirable evolution due to the bare Hamiltonian of the system or, ultimately, to perform logical operations on the state encoded in the cavity field. Our method provides a promising pathway towards performing universal control for quantum states stored in high-coherence resonators in the circuit QED platform.

  12. Event parallelism: Distributed memory parallel computing for high energy physics experiments

    NASA Astrophysics Data System (ADS)

    Nash, Thomas

    1989-12-01

    This paper describes the present and expected future development of distributed memory parallel computers for high energy physics experiments. It covers the use of event-parallel microprocessor farms, particularly at Fermilab, including both ACP multiprocessors and farms of MicroVAXes. These systems have proven very cost effective in the past. A case is made for moving to the more open environment of UNIX and RISC processors. The 2nd Generation ACP Multiprocessor System, which is based on powerful RISC processors, is described. Given the promise of still more extraordinary increases in processor performance, a new emphasis on point-to-point, rather than bussed, communication will be required. Developments in this direction are described.

  13. Peregrine System Configuration | High-Performance Computing | NREL

    Science.gov Websites

    Compute nodes and storage are connected by a high-speed InfiniBand network. Compute nodes are diskless; directories are mounted on all nodes, along with a file system dedicated to shared projects. Nodes have processors with 64 GB of memory.

  14. Unraveling the complexities of circadian and sleep interactions with memory formation through invertebrate research

    PubMed Central

    Michel, Maximilian; Lyons, Lisa C.

    2014-01-01

    Across phylogeny, the endogenous biological clock has been recognized as providing adaptive advantages to organisms through coordination of physiological and behavioral processes. Recent research has emphasized the role of circadian modulation of memory in generating peaks and troughs in cognitive performance. The circadian clock along with homeostatic processes also regulates sleep, which itself impacts the formation and consolidation of memory. Thus, the circadian clock, sleep and memory form a triad with ongoing dynamic interactions. With technological advances and the development of a global 24/7 society, understanding the mechanisms underlying these connections becomes pivotal for development of therapeutic treatments for memory disorders and to address issues in cognitive performance arising from non-traditional work schedules. Invertebrate models, such as Drosophila melanogaster and the mollusks Aplysia and Lymnaea, have proven invaluable tools for identification of highly conserved molecular processes in memory. Recent research from invertebrate systems has outlined the influence of sleep and the circadian clock upon synaptic plasticity. In this review, we discuss the effects of the circadian clock and sleep on memory formation in invertebrates drawing attention to the potential of in vivo and in vitro approaches that harness the power of simple invertebrate systems to correlate individual cellular processes with complex behaviors. In conclusion, this review highlights how studies in invertebrates with relatively simple nervous systems can provide mechanistic insights into corresponding behaviors in higher organisms and can be used to outline possible therapeutic options to guide further targeted inquiry. PMID:25136297

  15. Central Executive Dysfunction and Deferred Prefrontal Processing in Veterans with Gulf War Illness.

    PubMed

    Hubbard, Nicholas A; Hutchison, Joanna L; Motes, Michael A; Shokri-Kojori, Ehsan; Bennett, Ilana J; Brigante, Ryan M; Haley, Robert W; Rypma, Bart

    2014-05-01

    Gulf War Illness is associated with toxic exposure to cholinergic disruptive chemicals. The cholinergic system has been shown to mediate the central executive of working memory (WM). The current work proposes that impairment of the cholinergic system in Gulf War Illness patients (GWIPs) leads to behavioral and neural deficits of the central executive of WM. A large sample of GWIPs and matched controls (MCs) underwent functional magnetic resonance imaging during a varied-load working memory task. Compared to MCs, GWIPs showed a greater decline in performance as WM-demand increased. Functional imaging suggested that GWIPs evinced separate processing strategies, deferring prefrontal cortex activity from encoding to retrieval for high demand conditions. Greater activity during high-demand encoding predicted greater WM performance. Behavioral data suggest that WM executive strategies are impaired in GWIPs. Functional data further support this hypothesis and suggest that GWIPs utilize less effective strategies during high-demand WM.

  16. Central Executive Dysfunction and Deferred Prefrontal Processing in Veterans with Gulf War Illness

    PubMed Central

    Hubbard, Nicholas A.; Hutchison, Joanna L.; Motes, Michael A.; Shokri-Kojori, Ehsan; Bennett, Ilana J.; Brigante, Ryan M.; Haley, Robert W.; Rypma, Bart

    2015-01-01

    Gulf War Illness is associated with toxic exposure to cholinergic disruptive chemicals. The cholinergic system has been shown to mediate the central executive of working memory (WM). The current work proposes that impairment of the cholinergic system in Gulf War Illness patients (GWIPs) leads to behavioral and neural deficits of the central executive of WM. A large sample of GWIPs and matched controls (MCs) underwent functional magnetic resonance imaging during a varied-load working memory task. Compared to MCs, GWIPs showed a greater decline in performance as WM-demand increased. Functional imaging suggested that GWIPs evinced separate processing strategies, deferring prefrontal cortex activity from encoding to retrieval for high demand conditions. Greater activity during high-demand encoding predicted greater WM performance. Behavioral data suggest that WM executive strategies are impaired in GWIPs. Functional data further support this hypothesis and suggest that GWIPs utilize less effective strategies during high-demand WM. PMID:25767746

  17. An integrative view of storage of low- and high-level visual dimensions in visual short-term memory.

    PubMed

    Magen, Hagit

    2017-03-01

    Efficient performance in an environment filled with complex objects is often achieved through the temporal maintenance of conjunctions of features from multiple dimensions. The most striking finding in the study of binding in visual short-term memory (VSTM) is equal memory performance for single features and for integrated multi-feature objects, a finding that has been central to several theories of VSTM. Nevertheless, research on binding in VSTM focused almost exclusively on low-level features, and little is known about how items from low- and high-level visual dimensions (e.g., colored manmade objects) are maintained simultaneously in VSTM. The present study tested memory for combinations of low-level features and high-level representations. In agreement with previous findings, Experiments 1 and 2 showed decrements in memory performance when non-integrated low- and high-level stimuli were maintained simultaneously compared to maintaining each dimension in isolation. However, contrary to previous findings the results of Experiments 3 and 4 showed decrements in memory performance even when integrated objects of low- and high-level stimuli were maintained in memory, compared to maintaining single-dimension objects. Overall, the results demonstrate that low- and high-level visual dimensions compete for the same limited memory capacity, and offer a more comprehensive view of VSTM.

  18. Memory characteristics of metal-oxide-semiconductor structures based on Ge nanoclusters-embedded GeO(x) films grown at low temperature.

    PubMed

    Lin, Tzu-Shun; Lou, Li-Ren; Lee, Ching-Ting; Tsai, Tai-Cheng

    2012-03-01

    Memory devices based on a Ge-nanocluster-embedded GeO(x) layer deposited by a laser-assisted chemical vapor deposition (LACVD) system were fabricated. The Ge nanoclusters were observed by high-resolution transmission electron microscopy. Using the capacitance versus voltage (C-V) and conductance versus voltage (G-V) characteristics measured at various frequencies, the memory effect observed in the C-V curves was attributed predominantly to charge storage in the Ge nanoclusters. Furthermore, the defects existing in the deposited film and the interface states were insignificant to the memory performance. Capacitance versus time (C-t) measurements were also performed to evaluate the charge retention characteristics. The charge storage and retention behaviors of the devices demonstrated that Ge nanoclusters grown by the LACVD system at low temperature are promising for memory device applications.

  19. One declarative memory system or two? The relationship between episodic and semantic memory in children with temporal lobe epilepsy.

    PubMed

    Smith, Mary Lou; Lah, Suncica

    2011-09-01

    This study explored verbal semantic and episodic memory in children with unilateral temporal lobe epilepsy to determine whether they had impairments in both or only 1 aspect of memory, and to examine relations between performance in the 2 domains. Sixty-six children and adolescents (37 with seizures of left temporal lobe onset, 29 with right-sided onset) were given 4 tasks assessing different aspects of semantic memory (picture naming, fluency, knowledge of facts, knowledge of word meanings) and 2 episodic memory tasks (story recall, word list recall). High rates of impairments were observed across tasks, and no differences were found related to the laterality of the seizures. Individual patient analyses showed that there was a double dissociation between the 2 aspects of memory in that some children were impaired on episodic but not semantic memory, whereas others showed intact episodic but impaired semantic memory. This double dissociation suggests that these 2 memory systems may develop independently in the context of temporal lobe pathology, perhaps related to differential effects of dysfunction in the lateral and mesial temporal lobe structures. PsycINFO Database Record (c) 2011 APA, all rights reserved.

  20. FPGA-based prototype storage system with phase change memory

    NASA Astrophysics Data System (ADS)

    Li, Gezi; Chen, Xiaogang; Chen, Bomy; Li, Shunfen; Zhou, Mi; Han, Wenbing; Song, Zhitang

    2016-10-01

    With the ever-increasing amount of data being stored via social media, mobile telephony base stations, network devices, etc., database systems face severe bandwidth bottlenecks when moving vast amounts of data from storage to the processing nodes. At the same time, Storage Class Memory (SCM) technologies such as Phase Change Memory (PCM), with unique features like fast read access, high density, non-volatility, byte-addressability, positive response to increasing temperature, superior scalability, and zero standby leakage, have changed the landscape of modern computing and storage systems. In such a scenario, we present a storage system called FLEET which can off-load partial or whole SQL queries from the CPU to the storage engine. FLEET uses an FPGA rather than conventional CPUs to implement the off-load engine due to its highly parallel nature. We have implemented an initial prototype of FLEET with PCM-based storage. The results demonstrate that significant performance and CPU utilization gains can be achieved by pushing selected query processing components into the PCM-based storage.

  1. Synaptophysin and the dopaminergic system in hippocampus are involved in the protective effect of rutin against trimethyltin-induced learning and memory impairment.

    PubMed

    Zhang, Lei; Zhao, Qi; Chen, Chun-Hai; Qin, Qi-Zhong; Zhou, Zhou; Yu, Zheng-Ping

    2014-09-01

    This study aimed to investigate the protective effect of rutin against trimethyltin-induced spatial learning and memory impairment in mice. It focused on the roles of synaptophysin, growth-associated protein 43, and the dopaminergic system in the mechanisms associated with rutin protection and trimethyltin-induced spatial learning and memory impairment. Spatial learning and memory were measured with the Morris water maze. The expression of synaptophysin and growth-associated protein 43 in hippocampus was analyzed by western blot. The concentrations of dopamine, homovanillic acid, and dihydroxyphenylacetic acid in hippocampus were detected using reversed-phase high-performance liquid chromatography with electrochemical detection. Trimethyltin-induced spatial learning impairment was dose dependent. Synaptophysin, but not growth-associated protein 43, was decreased in the hippocampus after trimethyltin administration. The concentration of dopamine decreased, while homovanillic acid increased, in the hippocampus after trimethyltin administration. Mice pretreated with 20 mg/kg of rutin for 7 consecutive days exhibited improved water maze performance. Moreover, rutin pretreatment reversed the decrease in synaptophysin expression and the dopamine alteration. These results suggest that rutin may protect against spatial memory impairment induced by trimethyltin. Synaptophysin and the dopaminergic system may be involved in trimethyltin-induced neuronal damage in hippocampus.

  2. Light-erasable embedded charge-trapping memory based on MoS2 for system-on-panel applications

    NASA Astrophysics Data System (ADS)

    He, Long-Fei; Zhu, Hao; Xu, Jing; Liu, Hao; Nie, Xin-Ran; Chen, Lin; Sun, Qing-Qing; Xia, Yang; Wei Zhang, David

    2017-11-01

    The continuous scaling and challenges in device integration in modern portable electronic products have aroused much scientific interest, and a great deal of effort has been made in seeking solutions towards a more microminiaturized package assembled from smaller and more powerful components. In this study, an embedded light-erasable charge-trapping memory with a high-k dielectric stack (Al2O3/HfO2/Al2O3) and an atomically thin MoS2 channel has been fabricated and fully characterized. The memory exhibits a sufficient memory window, fast programming and erasing (P/E) speed, and a high On/Off current ratio of up to 10^7. Less than 25% memory window degradation is observed after projected 10-year retention, and the device functions perfectly after 8000 P/E operation cycles. Furthermore, the programmed device can be fully erased by incident light without electrical assistance. Such excellent memory performance originates from the intrinsic properties of two-dimensional (2D) MoS2 and the engineered back-gate dielectric stack. Our integration of 2D semiconductors into the infrastructure of light-erasable charge-trapping memory is very promising for future system-on-panel applications such as storage of metadata and flexible imaging arrays.

  3. Building a Terabyte Memory Bandwidth Compute Node with Four Consumer Electronics GPUs

    NASA Astrophysics Data System (ADS)

    Omlin, Samuel; Räss, Ludovic; Podladchikov, Yuri

    2014-05-01

    GPUs released for consumer electronics are generally built with the same chip architectures as the GPUs released for professional use. With regard to scientific computing, there are no obvious important differences in functionality or performance between the two types of releases, yet the price can differ by up to one order of magnitude. For example, the consumer electronics release of the most recent NVIDIA Kepler architecture (GK110), named GeForce GTX TITAN, performed as well in our memory bandwidth tests as the professional release, named Tesla K20; the consumer electronics release costs about one third of the professional release. We explain how to design and assemble a well-adjusted computer with four high-end consumer electronics GPUs (GeForce GTX TITAN) combining more than 1 terabyte/s of memory bandwidth. We compare the system's performance and precision with those of hardware released for professional use. The system can be used as a powerful workstation for scientific computing or as a compute node in a home-built GPU cluster.
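
    Memory bandwidth figures like the ones quoted above are typically measured with STREAM-style kernels. The following is a minimal, CPU-side sketch of a triad measurement (my illustration, not the authors' benchmark; their tests ran on the GPUs). A GPU version follows the same pattern with device buffers and a kernel, and the array size here is an assumption to adjust to the machine.

    ```c
    /* Minimal STREAM-triad-style bandwidth sketch (CPU-side, illustrative only):
     * a[i] = b[i] + s*c[i] moves three arrays' worth of data, so the sustained
     * bandwidth is roughly 3*N*sizeof(double) divided by the elapsed time. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N (1 << 26)   /* ~64M doubles per array (~512 MB each); adjust to taste */

    int main(void) {
        double *a = malloc(N * sizeof *a), *b = malloc(N * sizeof *b), *c = malloc(N * sizeof *c);
        if (!a || !b || !c) return 1;
        for (size_t i = 0; i < N; ++i) { b[i] = 1.0; c[i] = 2.0; }

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        const double s = 3.0;
        for (size_t i = 0; i < N; ++i) a[i] = b[i] + s * c[i];  /* triad kernel */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double secs   = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
        double gbytes = 3.0 * N * sizeof(double) / 1e9;
        printf("triad: %.3f s, ~%.1f GB/s (a[0]=%f)\n", secs, gbytes / secs, a[0]);

        free(a); free(b); free(c);
        return 0;
    }
    ```

    Sustained bandwidth is estimated as bytes moved divided by elapsed time; for the triad, each iteration reads two arrays and writes one.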

  4. Serotonin is critical for rewarded olfactory short-term memory in Drosophila.

    PubMed

    Sitaraman, Divya; LaFerriere, Holly; Birman, Serge; Zars, Troy

    2012-06-01

    The biogenic amines dopamine, octopamine, and serotonin are critical in establishing normal memories. A common view for the amines in insect memory performance has emerged in which dopamine and octopamine are largely responsible for aversive and appetitive memories. Examination of the function of serotonin begins to challenge the notion of one amine type per memory because altering serotonin function also reduces aversive olfactory memory and place memory levels. Could the function of serotonin be restricted to the aversive domain, suggesting a more specific dopamine/serotonin system interaction? The function of the serotonergic system in appetitive olfactory memory was examined. By targeting the tetanus toxin light chain (TNT) and the human inwardly rectifying potassium channel (Kir2.1) to the serotonin neurons with two different GAL4 driver combinations, the serotonergic system was inhibited. Additional use of the GAL80(ts1) system to control expression of transgenes to the adult stage of the life cycle addressed a potential developmental role of serotonin in appetitive memory. Reduction in appetitive olfactory memory performance in flies with these transgenic manipulations, without altering control behaviors, showed that the serotonergic system is also required for normal appetitive memory. Thus, serotonin appears to have a more general role in Drosophila memory, and implies an interaction with both the dopaminergic and octopaminergic systems.

  5. Behavioral and Neural Manifestations of Reward Memory in Carriers of Low-Expressing versus High-Expressing Genetic Variants of the Dopamine D2 Receptor

    PubMed Central

    Richter, Anni; Barman, Adriana; Wüstenberg, Torsten; Soch, Joram; Schanze, Denny; Deibele, Anna; Behnisch, Gusalija; Assmann, Anne; Klein, Marieke; Zenker, Martin; Seidenbecher, Constanze; Schott, Björn H.

    2017-01-01

    Dopamine is critically important in the neural manifestation of motivated behavior, and alterations in the human dopaminergic system have been implicated in the etiology of motivation-related psychiatric disorders, most prominently addiction. Patients with chronic addiction exhibit reduced dopamine D2 receptor (DRD2) availability in the striatum, and the DRD2 TaqIA (rs1800497) and C957T (rs6277) genetic polymorphisms have previously been linked to individual differences in striatal dopamine metabolism and clinical risk for alcohol and nicotine dependence. Here, we investigated the hypothesis that the variants of these polymorphisms would show increased reward-related memory formation, which has previously been shown to jointly engage the mesolimbic dopaminergic system and the hippocampus, as a potential intermediate phenotype for addiction memory. To this end, we performed functional magnetic resonance imaging (fMRI) in 62 young, healthy individuals genotyped for DRD2 TaqIA and C957T variants. Participants performed an incentive delay task, followed by a recognition memory task 24 h later. We observed effects of both genotypes on the overall recognition performance with carriers of low-expressing variants, namely TaqIA A1 carriers and C957T C homozygotes, showing better performance than the other genotype groups. In addition to the better memory performance, C957T C homozygotes also exhibited a response bias for cues predicting monetary reward. At the neural level, the C957T polymorphism was associated with a genotype-related modulation of right hippocampal and striatal fMRI responses predictive of subsequent recognition confidence for reward-predicting items. Our results indicate that genetic variations associated with DRD2 expression affect explicit memory, specifically for rewarded stimuli. We suggest that the relatively better memory for rewarded stimuli in carriers of low-expressing DRD2 variants may reflect an intermediate phenotype of addiction memory. PMID:28507526

  6. Incorporation of memory effects in coarse-grained modeling via the Mori-Zwanzig formalism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Zhen; Bian, Xin; Karniadakis, George Em, E-mail: george-karniadakis@brown.edu

    2015-12-28

    The Mori-Zwanzig formalism for coarse-graining a complex dynamical system typically introduces memory effects. The Markovian assumption of delta-correlated fluctuating forces is often employed to simplify the formulation of coarse-grained (CG) models and numerical implementations. However, when the time scales of a system are not clearly separated, the memory effects become strong and the Markovian assumption becomes inaccurate. To this end, we incorporate memory effects into CG modeling by preserving non-Markovian interactions between CG variables, and the memory kernel is evaluated directly from microscopic dynamics. For a specific example, molecular dynamics (MD) simulations of star polymer melts are performed while the corresponding CG system is defined by grouping many bonded atoms into single clusters. Then, the effective interactions between CG clusters as well as the memory kernel are obtained from the MD simulations. The constructed CG force field with a memory kernel leads to a non-Markovian dissipative particle dynamics (NM-DPD). Quantitative comparisons between the CG models with Markovian and non-Markovian approximations indicate that including the memory effects using NM-DPD yields similar results as the Markovian-based DPD if the system has clear time scale separation. However, for systems with small separation of time scales, NM-DPD can reproduce correct short-time properties that are related to how the system responds to high-frequency disturbances, which cannot be captured by the Markovian-based DPD model.
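
    For readers unfamiliar with the formalism, the coarse-grained equation of motion with a memory kernel is usually written as a generalized Langevin equation. The schematic form below is the standard textbook version (my summary, not an equation copied from this paper), with K the memory kernel and delta F the fluctuating force related to K by the second fluctuation-dissipation theorem.

    ```latex
    % Schematic generalized Langevin equation for a coarse-grained velocity V(t):
    % conservative force, friction with memory kernel K, and a fluctuating force.
    M \frac{d\mathbf{V}(t)}{dt}
      = \mathbf{F}^{C}\!\big(\mathbf{R}(t)\big)
      - \int_{0}^{t} \mathbf{K}(t-s)\,\mathbf{V}(s)\,\mathrm{d}s
      + \delta\mathbf{F}(t),
    \qquad
    \big\langle \delta\mathbf{F}(t)\,\delta\mathbf{F}(0)^{\mathsf{T}} \big\rangle
      \;\propto\; k_{B}T\,\mathbf{K}(t).
    ```

    The Markovian (standard DPD) limit corresponds to replacing K(t-s) with a delta-correlated kernel, which is exactly the approximation the paper relaxes.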

  7. Using VirtualGL/TurboVNC Software on the Peregrine System |

    Science.gov Websites

    VirtualGL/TurboVNC software on the Peregrine system allows users to access and share large-memory visualization nodes with high-end graphics processing units. It may perform better than plain X11 forwarding when connecting from a remote site with low bandwidth.

  8. Memory Systems Do Not Divide on Consciousness: Reinterpreting Memory in Terms of Activation and Binding

    ERIC Educational Resources Information Center

    Reder, Lynne M.; Park, Heekyeong; Kieffaber, Paul D.

    2009-01-01

    There is a popular hypothesis that performance on implicit and explicit memory tasks reflects 2 distinct memory systems. Explicit memory is said to store those experiences that can be consciously recollected, and implicit memory is said to store experiences and affect subsequent behavior but to be unavailable to conscious awareness. Although this…

  9. Application of morphological associative memories and Fourier descriptors for classification of noisy subsurface signatures

    NASA Astrophysics Data System (ADS)

    Ortiz, Jorge L.; Parsiani, Hamed; Tolstoy, Leonid

    2004-02-01

    This paper presents a method for recognition of noisy subsurface images using Morphological Associative Memories (MAM). MAM are a type of associative memory implemented as neural networks based on the algebraic system known as a semiring. The operations performed in this algebraic system are highly nonlinear, providing additional strength when compared to other transformations, and they give MAM robust performance with noisy inputs. Two representations of morphological associative memories are used, called the M and W matrices. The M associative memory provides robust association for input patterns corrupted by dilative random noise, while the W associative memory performs robust recognition for patterns corrupted with erosive random noise. The robust performance of MAM is used in combination with Fourier descriptors for the recognition of underground objects in Ground Penetrating Radar (GPR) images. Multiple 2-D GPR images of a site were made available by the NASA-SSC center. The buried objects in these images appear in the form of hyperbolas, which are the result of radar backscatter from the artifacts or objects. Fourier descriptors of the prototype hyperbola-like shapes and of non-hyperbola shapes in the subsurface images are used to make these shapes scale-, shift-, and rotation-invariant. Typical hyperbola-like and non-hyperbola shapes are used to compute the morphological associative memories. The trained MAMs are then used to process other noisy images to detect the presence of underground objects. The outputs from the MAM for the noisy patterns may be equal to the training prototypes, providing a positive identification of the artifacts. The results are images with recognized hyperbolas, which indicate the presence of buried artifacts. A model using MATLAB has been developed and results are presented.
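
    For concreteness, here is a tiny self-contained sketch of the standard min/max (lattice) construction behind the W and M memories, using toy integer patterns rather than the paper's GPR-derived Fourier descriptors. The recall step uses the max-plus product with W, which is the robust choice for erosively corrupted inputs.

    ```c
    /* Minimal sketch of morphological associative memories (M and W matrices),
     * following the standard max-plus/min-plus construction; toy integer
     * patterns, not the paper's GPR data. Autoassociative case: X == Y. */
    #include <stdio.h>

    #define P 3   /* number of stored patterns */
    #define N 4   /* pattern dimensionality    */

    int main(void) {
        /* Toy prototype patterns (rows). */
        double x[P][N] = { {1, 4, 2, 7}, {3, 0, 5, 1}, {6, 2, 1, 3} };
        double W[N][N], M[N][N];

        /* W(i,j) = min over patterns of (x_i - x_j); M(i,j) = max over patterns. */
        for (int i = 0; i < N; ++i)
            for (int j = 0; j < N; ++j) {
                double wmin = x[0][i] - x[0][j], mmax = wmin;
                for (int k = 1; k < P; ++k) {
                    double d = x[k][i] - x[k][j];
                    if (d < wmin) wmin = d;
                    if (d > mmax) mmax = d;
                }
                W[i][j] = wmin;
                M[i][j] = mmax;
            }

        /* Recall pattern 0 corrupted by erosive noise (one value pushed down);
         * W with the max-plus product is the robust choice for erosive noise. */
        double noisy[N] = {1, 2, 2, 7};   /* second component eroded from 4 to 2 */
        double out[N];
        for (int i = 0; i < N; ++i) {
            double best = W[i][0] + noisy[0];
            for (int j = 1; j < N; ++j)
                if (W[i][j] + noisy[j] > best) best = W[i][j] + noisy[j];
            out[i] = best;
        }

        printf("recalled:");
        for (int i = 0; i < N; ++i) printf(" %.0f", out[i]);
        printf("   (stored: 1 4 2 7)\n");
        return 0;
    }
    ```

    With these toy patterns the eroded input is restored exactly; recall with M uses the dual min-plus product and tolerates dilative noise instead.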

  10. Individual differences in event-based prospective memory: Evidence for multiple processes supporting cue detection.

    PubMed

    Brewer, Gene A; Knight, Justin B; Marsh, Richard L; Unsworth, Nash

    2010-04-01

    The multiprocess view proposes that different processes can be used to detect event-based prospective memory cues, depending in part on the specificity of the cue. According to this theory, attentional processes are not necessary to detect focal cues, whereas detection of nonfocal cues requires some form of controlled attention. This notion was tested using a design in which we compared performance on a focal and on a nonfocal prospective memory task by participants with high or low working memory capacity. An interaction was found, such that participants with high and low working memory performed equally well on the focal task, whereas the participants with high working memory performed significantly better on the nonfocal task than did their counterparts with low working memory. Thus, controlled attention was only necessary for detecting event-based prospective memory cues in the nonfocal task. These results have implications for theories of prospective memory, the processes necessary for cue detection, and the successful fulfillment of intentions.

  11. Scalability Analysis of Gleipnir: A Memory Tracing and Profiling Tool, on Titan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Janjusic, Tommy; Kartsaklis, Christos; Wang, Dali

    2013-01-01

    Application performance is hindered by a variety of factors, but is most notably driven by the well-known CPU-memory speed gap (also known as the memory wall). Understanding an application's memory behavior is key when trying to optimize performance. Understanding application performance properties is facilitated by various performance profiling tools. The scope of profiling tools varies in complexity, ease of deployment, profiling performance, and the detail of profiled information. Specifically, using profiling tools for performance analysis is a common task when optimizing and understanding scientific applications on complex and large-scale systems such as Cray's XK7. This paper describes the performance characteristics of using Gleipnir, a memory tracing tool, on the Titan Cray XK7 system when instrumenting large applications such as the Community Earth System Model. Gleipnir is a memory tracing tool built as a plug-in for the Valgrind instrumentation framework. The goal of Gleipnir is to provide fine-grained trace information. The generated traces are a stream of executed memory transactions mapped to internal structures per process, thread, function, and finally the data structure or variable. Our focus was to expose tool performance characteristics when using Gleipnir in combination with external tools such as a cache simulator, Gl CSim, to characterize the tool's overall performance. In this paper we describe our experience with deploying Gleipnir on the Titan Cray XK7 system, report on the tool's ease of use, and analyze run-time performance characteristics under various workloads. While all performance aspects are important, we mainly focus on I/O characteristics analysis due to the emphasis on the tool's output, which consists of trace files. Moreover, the tool is dependent on the run-time system to provide the necessary infrastructure to expose low-level system detail; therefore, we also discuss theoretical benefits that could be achieved if such modules were present.

  12. The sensory components of high-capacity iconic memory and visual working memory.

    PubMed

    Bradley, Claire; Pearson, Joel

    2012-01-01

    Early visual memory can be split into two primary components: a high-capacity, short-lived iconic memory followed by a limited-capacity visual working memory that can last many seconds. Whereas a large number of studies have investigated visual working memory for low-level sensory features, much research on iconic memory has used more "high-level" alphanumeric stimuli such as letters or numbers. These two forms of memory are typically examined separately, despite an intrinsic overlap in their characteristics. Here, we used a purely sensory paradigm to examine visual short-term memory for 10 homogeneous items of three different visual features (color, orientation and motion) across a range of durations from 0 to 6 s. We found that the amount of information stored in iconic memory is smaller for motion than for color or orientation. Performance declined exponentially with longer storage durations and reached chance levels after ∼2 s. Further experiments showed that performance for the 10 items at 1 s was contingent on unperturbed attentional resources. In addition, for orientation stimuli, performance was contingent on the location of stimuli in the visual field, especially for short cue delays. Overall, our results suggest a smooth transition between an automatic, high-capacity, feature-specific sensory-iconic memory, and an effortful "lower-capacity" visual working memory.

  13. Exploring performance and energy tradeoffs for irregular applications: A case study on the Tilera many-core architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Panyala, Ajay; Chavarría-Miranda, Daniel; Manzano, Joseph B.

    High-performance parallel applications with irregular data accesses are becoming a critical workload class for modern systems. In particular, the execution of such workloads on emerging many-core systems is expected to be a significant component of applications in data mining, machine learning, scientific computing and graph analytics. However, power and energy constraints limit the capabilities of individual cores, the memory hierarchy and the on-chip interconnect of such systems, thus leading to architectural and software trade-offs that must be understood in the context of the intended application's behavior. Irregular applications are notoriously hard to optimize given their data-dependent access patterns, lack of structured locality and complex data structures and code patterns. We have ported two irregular applications, graph community detection using the Louvain method (Grappolo) and high-performance conjugate gradient (HPCCG), to the Tilera many-core system and have conducted a detailed study of platform-independent and platform-specific optimizations that improve their performance as well as reduce their overall energy consumption. To conduct this study, we employ an auto-tuning based approach that explores the optimization design space along three dimensions - memory layout schemes, GCC compiler flag choices and OpenMP loop scheduling options. We leverage MIT's OpenTuner auto-tuning framework to explore and recommend energy-optimal choices for different combinations of parameters. We then conduct an in-depth architectural characterization to understand the memory behavior of the selected workloads. Finally, we perform a correlation study to demonstrate the interplay between the hardware behavior and application characteristics. Using auto-tuning, we demonstrate whole-node energy savings and performance improvements of up to 49.6% and 60% relative to a baseline instantiation, and up to 31% and 45.4% relative to manually optimized variants.
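
    The compiler-flag dimension of such a search can be illustrated with a deliberately naive exhaustive loop (my sketch, not OpenTuner, which searches far more cleverly and also measures energy). It assumes gcc is on the PATH and that a hypothetical kernel.c contains the benchmark to tune.

    ```c
    /* Naive sketch of the flag-search dimension of auto-tuning: compile a kernel
     * with each candidate flag set, time the resulting binary, keep the fastest.
     * gcc and kernel.c are assumptions; timing includes process-launch overhead. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void) {
        const char *flags[] = { "-O1", "-O2", "-O3", "-O3 -funroll-loops" };
        double best = 1e30;
        int best_i = -1;
        char cmd[256];

        for (size_t i = 0; i < sizeof flags / sizeof flags[0]; ++i) {
            snprintf(cmd, sizeof cmd, "gcc %s -o kernel_bin kernel.c", flags[i]);
            if (system(cmd) != 0) continue;            /* skip configs that fail to build */

            struct timespec t0, t1;
            clock_gettime(CLOCK_MONOTONIC, &t0);
            if (system("./kernel_bin") != 0) continue; /* run the candidate binary */
            clock_gettime(CLOCK_MONOTONIC, &t1);

            double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
            printf("%-20s %.3f s\n", flags[i], secs);
            if (secs < best) { best = secs; best_i = (int)i; }
        }
        if (best_i >= 0) printf("best flags: %s\n", flags[best_i]);
        return 0;
    }
    ```

    A real tuner would treat the flag set, the memory layout, and the OpenMP schedule as a joint search space and use timing plus energy counters as the objective.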

  14. Performance and scalability evaluation of "Big Memory" on Blue Gene Linux.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoshii, K.; Iskra, K.; Naik, H.

    2011-05-01

    We address memory performance issues observed in Blue Gene Linux and discuss the design and implementation of 'Big Memory' - an alternative, transparent memory space introduced to eliminate the memory performance issues. We evaluate the performance of Big Memory using custom memory benchmarks, NAS Parallel Benchmarks, and the Parallel Ocean Program, at a scale of up to 4,096 nodes. We find that Big Memory successfully resolves the performance issues normally encountered in Blue Gene Linux. For the ocean simulation program, we even find that Linux with Big Memory provides better scalability than does the lightweight compute node kernel designed solely for high-performance applications. Originally intended exclusively for compute node tasks, our new memory subsystem dramatically improves the performance of certain I/O node applications as well. We demonstrate this performance using the central processor of the LOw Frequency ARray radio telescope as an example.

  15. Effects of high-dose ethanol intoxication and hangover on cognitive flexibility.

    PubMed

    Wolff, Nicole; Gussek, Philipp; Stock, Ann-Kathrin; Beste, Christian

    2018-01-01

    The effects of high-dose ethanol intoxication on cognitive flexibility processes are not well understood, and processes related to hangover after intoxication have remained even more elusive. Similarly, it is unknown in how far the complexity of cognitive flexibility processes is affected by intoxication and hangover effects. We performed a neurophysiological study applying high density electroencephalography (EEG) recording to analyze event-related potentials (ERPs) and perform source localization in a task switching paradigm which varied the complexity of task switching by means of memory demands. The results show that high-dose ethanol intoxication only affects task switching (i.e. cognitive flexibility processes) when memory processes are required to control task switching mechanisms, suggesting that even high doses of ethanol compromise cognitive processes when they are highly demanding. The EEG and source localization data show that these effects unfold by modulating response selection processes in the anterior cingulate cortex. Perceptual and attentional selection processes as well as working memory processes were only unspecifically modulated. In all subprocesses examined, there were no differences between the sober and hangover states, thus suggesting a fast recovery of cognitive flexibility after high-dose ethanol intoxication. We assume that the gamma-aminobutyric acid (GABAergic) system accounts for the observed effects, while they can hardly be explained by the dopaminergic system. © 2016 Society for the Study of Addiction.

  16. Electrical Stimulation Modulates High γ Activity and Human Memory Performance

    PubMed Central

    Berry, Brent M.; Miller, Laura R.; Khadjevand, Fatemeh; Ezzyat, Youssef; Wanda, Paul; Sperling, Michael R.; Lega, Bradley; Stead, S. Matt

    2018-01-01

    Direct electrical stimulation of the brain has emerged as a powerful treatment for multiple neurological diseases, and as a potential technique to enhance human cognition. Despite its application in a range of brain disorders, it remains unclear how stimulation of discrete brain areas affects memory performance and the underlying electrophysiological activities. Here, we investigated the effect of direct electrical stimulation in four brain regions known to support declarative memory: hippocampus (HP), parahippocampal region (PH) neocortex, prefrontal cortex (PF), and lateral temporal cortex (TC). Intracranial EEG recordings with stimulation were collected from 22 patients during performance of verbal memory tasks. We found that high γ (62–118 Hz) activity induced by word presentation was modulated by electrical stimulation. This modulatory effect was greatest for trials with “poor” memory encoding. The high γ modulation correlated with the behavioral effect of stimulation in a given brain region: it was negative, i.e., the induced high γ activity was decreased, in the regions where stimulation decreased memory performance, and positive in the lateral TC where memory enhancement was observed. Our results suggest that the effect of electrical stimulation on high γ activity induced by word presentation may be a useful biomarker for mapping memory networks and guiding therapeutic brain stimulation. PMID:29404403

  17. Modulation of memory and visuospatial processes by biperiden and rivastigmine in elderly healthy subjects.

    PubMed

    Wezenberg, E; Verkes, R J; Sabbe, B G C; Ruigt, G S F; Hulstijn, W

    2005-09-01

    The central cholinergic system is implicated in cognitive functioning. Dysfunction of this system is expressed in many diseases such as Alzheimer's disease, dementia with Lewy bodies, Parkinson's disease and vascular dementia. In recent animal studies, it was found that selective cholinergic modulation affects visuospatial processes even more than memory function. In the current study, we tried to replicate those findings. In order to investigate the acute effects of cholinergic drugs on memory and visuospatial functions, a selective anticholinergic drug, biperiden, was compared to a selective acetylcholinesterase-inhibiting drug, rivastigmine, in healthy elderly subjects. A double-blind, placebo-controlled, randomised, cross-over study was performed in 16 healthy, elderly volunteers (eight men, eight women; mean age 66.1, SD 4.46 years). All subjects received biperiden (2 mg), rivastigmine (3 mg) and placebo with an interval of 7 days between them. Testing took place 1 h after drug intake (which was around Tmax for both drugs). Subjects were presented with tests of episodic memory (wordlist and picture memory), working memory tasks (N-back, symbol recall) and motor learning (maze task, pursuit rotor). Visuospatial abilities were assessed by tests with high visual scanning components (tangled lines and Symbol Digit Substitution Test). Episodic memory was impaired by biperiden. Rivastigmine impaired the recognition components of episodic memory performance. Working memory was non-significantly impaired by biperiden and not affected by rivastigmine. Motor learning as well as visuospatial processes were impaired by biperiden and improved by rivastigmine. These results implicate acetylcholine as a modulator not only of memory but also of visuospatial abilities.

  18. A test of the reward-value hypothesis.

    PubMed

    Smith, Alexandra E; Dalecki, Stefan J; Crystal, Jonathon D

    2017-03-01

    Rats retain source memory (memory for the origin of information) over a retention interval of at least 1 week, whereas their spatial working memory (radial maze locations) decays within approximately 1 day. We have argued that different forgetting functions dissociate memory systems. However, the two tasks, in our previous work, used different reward values. The source memory task used multiple pellets of a preferred food flavor (chocolate), whereas the spatial working memory task provided access to a single pellet of standard chow-flavored food at each location. Thus, according to the reward-value hypothesis, enhanced performance in the source memory task stems from enhanced encoding/memory of a preferred reward. We tested the reward-value hypothesis by using a standard 8-arm radial maze task to compare spatial working memory accuracy of rats rewarded with either multiple chocolate or chow pellets at each location using a between-subjects design. The reward-value hypothesis predicts superior accuracy for high-valued rewards. We documented equivalent spatial memory accuracy for high- and low-value rewards. Importantly, a 24-h retention interval produced equivalent spatial working memory accuracy for both flavors. These data are inconsistent with the reward-value hypothesis and suggest that reward value does not explain our earlier findings that source memory survives unusually long retention intervals.

  19. Improved memory for reward cues following acute buprenorphine administration in humans.

    PubMed

    Syal, Supriya; Ipser, Jonathan; Terburg, David; Solms, Mark; Panksepp, Jaak; Malcolm-Smith, Susan; Bos, Peter A; Montoya, Estrella R; Stein, Dan J; van Honk, Jack

    2015-03-01

    In rodents, there is abundant evidence for the involvement of the opioid system in the processing of reward cues, but this system has remained understudied in humans. In humans, the happy facial expression is a pivotal reward cue. Happy facial expressions activate the brain's reward system and are disregarded by subjects who score high on depressive mood and low on reward drive. We investigated whether a single 0.2 mg administration of the mixed mu-opioid agonist/kappa-antagonist, buprenorphine, would influence short-term memory for happy, angry or fearful expressions relative to neutral faces. Healthy human subjects (n = 38) participated in a randomized placebo-controlled within-subject design, and performed an emotional face relocation task after administration of buprenorphine and placebo. We show that, compared to placebo, buprenorphine administration results in a significant improvement of memory for happy faces. Our data demonstrate that acute manipulation of the opioid system by buprenorphine increases short-term memory for social reward cues. Copyright © 2015. Published by Elsevier Ltd.

  20. A parallel implementation of a multisensor feature-based range-estimation method

    NASA Technical Reports Server (NTRS)

    Suorsa, Raymond E.; Sridhar, Banavar

    1993-01-01

    There are many proposed vision-based methods to perform obstacle detection and avoidance for autonomous or semi-autonomous vehicles. All methods, however, will require very high processing rates to achieve real-time performance. A system capable of supporting autonomous helicopter navigation will need to extract obstacle information from imagery at rates varying from ten frames per second to thirty or more frames per second depending on the vehicle speed. Such a system will need to sustain billions of operations per second. To reach such high processing rates using current technology, a parallel implementation of the obstacle detection/ranging method is required. This paper describes an efficient and flexible parallel implementation of a multisensor feature-based range-estimation algorithm, targeted for helicopter flight, realized on both distributed-memory and shared-memory parallel computers.
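
    As an illustration of the data-parallel pattern such an implementation relies on, the sketch below splits tracked image features across worker processes and estimates a range for each; the focal length, vehicle speed, and flow-to-range relation are simplifying assumptions for illustration, not the paper's algorithm.

    ```python
    # Minimal data-parallel sketch (not the paper's implementation): distribute
    # per-feature range estimation across worker processes. The range model is a
    # simplified motion-stereo relation with assumed camera parameters.
    from multiprocessing import Pool

    FOCAL_PX = 600.0        # assumed focal length, pixels
    CAMERA_SPEED = 5.0      # assumed forward speed, m/s
    FRAME_DT = 1.0 / 30.0   # 30 frames per second

    def estimate_range(feature):
        """Estimate range to one tracked feature from its optical-flow rate."""
        flow_px_per_s = feature["flow_px"] / FRAME_DT
        # Closer obstacles produce faster image motion, so range ~ 1 / flow rate.
        return FOCAL_PX * CAMERA_SPEED / max(flow_px_per_s, 1e-6)

    if __name__ == "__main__":
        features = [{"id": i, "flow_px": 2.0 + 0.5 * i} for i in range(1000)]
        with Pool(processes=4) as pool:          # one worker per core
            ranges = pool.map(estimate_range, features)
        print(f"nearest obstacle ~ {min(ranges):.1f} m")
    ```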

  1. Dose-dependent effects of hydrocortisone infusion on autobiographical memory recall

    PubMed Central

    Young, Kymberly; Drevets, Wayne C.; Schulkin, Jay; Erickson, Kristine

    2011-01-01

    The glucocorticoid hormone cortisol has been shown to impair episodic memory performance. The present study examined the effect of two doses of hydrocortisone (synthetic cortisol) administration on autobiographical memory retrieval. Healthy volunteers (n=66) were studied on two separate visits, during which they received placebo and either moderate-dose (0.15 mg/kg IV; n=33) or high-dose (0.45 mg/kg IV; n=33) hydrocortisone infusion. From 75 to 150 min post-infusion subjects performed an Autobiographical Memory Test and the California Verbal Learning Test (CVLT). The high-dose hydrocortisone administration reduced the percent of specific memories recalled (p = 0.04), increased the percent of categorical (nonspecific) memories recalled, and slowed response times for categorical memories (p < 0.001), compared to placebo performance (p < 0.001). Under moderate-dose hydrocortisone the autobiographical memory performance did not change significantly with respect to percent of specific or categorical memories recalled or reaction times. Performance on the CVLT was not affected by hydrocortisone. These findings suggest that cortisol affects accessibility of autobiographical memories in a dose-dependent manner. Specifically, administration of hydrocortisone at doses analogous to those achieved under severe psychosocial stress impaired the specificity and speed of retrieval of autobiographical memories. PMID:21942435

  2. Because difficulty is not the same for everyone: the impact of complexity in working memory is associated with cannabinoid 1 receptor genetic variation in young adults.

    PubMed

    Ruiz-Contreras, Alejandra E; Román-López, Talía V; Caballero-Sánchez, Ulises; Rosas-Escobar, Cintia B; Ortega-Mora, E Ivett; Barrera-Tlapa, Miguel A; Romero-Hidalgo, Sandra; Carrillo-Sánchez, Karol; Hernández-Morales, Salvador; Vadillo-Ortega, Felipe; González-Barrios, Juan Antonio; Méndez-Díaz, Mónica; Prospéro-García, Oscar

    2017-03-01

    Individual differences in working memory ability are mainly revealed when a demanding challenge is imposed. Here, we have associated cannabinoid 1 (CB1) receptor genetic variation rs2180619 (AA, AG, GG), which is located in a potential CNR1 regulatory sequence, with performance in working memory. Two hundred and nine Mexican-mestizo healthy young participants (89 women, 120 men, mean age: 23.26 years, SD = 2.85) were challenged to solve medium-difficulty (2-back) vs. high-difficulty (3-back) N-back tasks. As expected, all subjects performed better on the medium-demand than on the high-demand task version, but no differences were found among genotypes while performing each working memory (WM) task. However, the cost of the level of complexity in the N-back paradigm was twice as large for GG subjects as for AA subjects. Notably, an additive allele-dosage relation was found for the G allele in terms of the cost of the level of complexity. These genetic variation results support the involvement of the endocannabinoid system, as indexed by the rs2180619 polymorphism, in WM ability in humans.

  3. Memory protection

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.

    1988-01-01

    Accidental overwriting of files or of memory regions belonging to other programs, browsing of personal files by superusers, Trojan horses, and viruses are examples of breakdowns in workstations and personal computers that would be significantly reduced by memory protection. Memory protection is the capability of an operating system and supporting hardware to delimit segments of memory, to control whether segments can be read from or written into, and to confine accesses of a program to its segments alone. The absence of memory protection in many operating systems today is the result of a bias toward a narrow definition of performance as maximum instruction-execution rate. A broader definition, including the time to get the job done, makes clear that cost of recovery from memory interference errors reduces expected performance. The mechanisms of memory protection are well understood, powerful, efficient, and elegant. They add to performance in the broad sense without reducing instruction execution rate.
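
    The segment-table mechanism described here is straightforward to model in software. The sketch below is a toy illustration (the names and the flat segment table are assumptions, not any particular operating system's API): each access must fall inside a segment that grants the requested permission, otherwise a fault is raised.

    ```python
    # Toy model of segment-based memory protection: every access is checked
    # against a per-program table of (base, limit, permissions) entries.
    from dataclasses import dataclass

    @dataclass
    class Segment:
        base: int
        limit: int        # segment length in bytes
        readable: bool
        writable: bool

    class ProtectionFault(Exception):
        pass

    def check_access(segments, address, size, write):
        """Permit the access only if it lies entirely inside a segment that
        grants the requested permission; otherwise raise a fault."""
        for seg in segments:
            if seg.base <= address and address + size <= seg.base + seg.limit:
                allowed = seg.writable if write else seg.readable
                if allowed:
                    return seg
                raise ProtectionFault("permission denied")
        raise ProtectionFault("address outside all segments")

    # Example: a read-only code segment and a read/write data segment.
    table = [Segment(0x1000, 0x400, True, False), Segment(0x2000, 0x800, True, True)]
    check_access(table, 0x2010, 8, write=True)       # allowed
    try:
        check_access(table, 0x1010, 4, write=True)   # write into read-only code
    except ProtectionFault as err:
        print("blocked:", err)
    ```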

  4. Working memory overload: fronto-limbic interactions and effects on subsequent working memory function.

    PubMed

    Yun, Richard J; Krystal, John H; Mathalon, Daniel H

    2010-03-01

    The human working memory system provides an experimentally useful model for examination of neural overload effects on subsequent functioning of the overloaded system. This study employed functional magnetic resonance imaging in conjunction with a parametric working memory task to characterize the behavioral and neural effects of cognitive overload on subsequent cognitive performance, with particular attention to cognitive-limbic interactions. Overloading the working memory system was associated with varying degrees of subsequent decline in performance accuracy and reduced activation of brain regions central to both task performance and suppression of negative affect. The degree of performance decline was independently predicted by three separate factors operating during the overload condition: the degree of task failure, the degree of amygdala activation, and the degree of inverse coupling between the amygdala and dorsolateral prefrontal cortex. These findings suggest that vulnerability to overload effects in cognitive functioning may be mediated by reduced amygdala suppression and subsequent amygdala-prefrontal interaction.

  5. Hormonal modulation of novelty processing in women: Enhanced under working memory load with high dehydroepiandrosterone-sulfate-to-dehydroepiandrosterone ratios.

    PubMed

    do Vale, Sónia; Selinger, Lenka; Martins, João Martin; Bicho, Manuel; do Carmo, Isabel; Escera, Carles

    2016-11-10

    Several studies have suggested that dehydroepiandrosterone (DHEA) and dehydroepiandrosterone-sulfate (DHEAS) may enhance working memory and attention, yet current evidence is still inconclusive. The balance between both forms of the hormone might be crucial regarding the effects that DHEA and DHEAS exert on the central nervous system. To test the hypothesis that higher DHEAS-to-DHEA ratios might enhance working memory and/or involuntary attention, we studied the DHEAS-to-DHEA ratio in relation to involuntary attention and working memory processing by recording the electroencephalogram of 22 young women while performing a working memory load task and a task without working memory load in an audio-visual oddball paradigm. DHEA and DHEAS were measured in saliva before each task. We found that a higher DHEAS-to-DHEA ratio was related to enhanced auditory novelty-P3 amplitudes during performance of the working memory task, indicating an increased processing of the distracter, while on the other hand there was no difference in the processing of the visual target. These results suggest that the balance between DHEAS and DHEA levels modulates involuntary attention during the performance of a task with cognitive load without interfering with the processing of the task-relevant visual stimulus. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  6. Systems and methods for rapid processing and storage of data

    DOEpatents

    Stalzer, Mark A.

    2017-01-24

    Systems and methods of building massively parallel computing systems using low power computing complexes in accordance with embodiments of the invention are disclosed. A massively parallel computing system in accordance with one embodiment of the invention includes at least one Solid State Blade configured to communicate via a high performance network fabric. In addition, each Solid State Blade includes a processor configured to communicate with a plurality of low power computing complexes interconnected by a router, and each low power computing complex includes at least one general processing core, an accelerator, an I/O interface, and cache memory and is configured to communicate with non-volatile solid state memory.

  7. Memory systems in schizophrenia: Modularity is preserved but deficits are generalized.

    PubMed

    Haut, Kristen M; Karlsgodt, Katherine H; Bilder, Robert M; Congdon, Eliza; Freimer, Nelson B; London, Edythe D; Sabb, Fred W; Ventura, Joseph; Cannon, Tyrone D

    2015-10-01

    Schizophrenia patients exhibit impaired working and episodic memory, but this may represent generalized impairment across memory modalities or performance deficits restricted to particular memory systems in subgroups of patients. Furthermore, it is unclear whether deficits are unique from those associated with other disorders. Healthy controls (n=1101) and patients with schizophrenia (n=58), bipolar disorder (n=49) and attention-deficit-hyperactivity-disorder (n=46) performed 18 tasks addressing primarily verbal and spatial episodic and working memory. Effect sizes for group contrasts were compared across tasks and the consistency of subjects' distributional positions across memory domains was measured. Schizophrenia patients performed poorly relative to the other groups on every test. While low to moderate correlation was found between memory domains (r=.320), supporting modularity of these systems, there was limited agreement between measures regarding each individual's task performance (ICC=.292) and in identifying those individuals falling into the lowest quintile (kappa=0.259). A general ability factor accounted for nearly all of the group differences in performance and agreement across measures in classifying low performers. Pathophysiological processes involved in schizophrenia appear to act primarily on general abilities required in all tasks rather than on specific abilities within different memory domains and modalities. These effects represent a general shift in the overall distribution of general ability (i.e., each case functioning at a lower level than they would have if not for the illness), rather than presence of a generally low-performing subgroup of patients. There is little evidence that memory impairments in schizophrenia are shared with bipolar disorder and ADHD. Copyright © 2015 Elsevier B.V. All rights reserved.

  8. Memory systems in schizophrenia: Modularity is preserved but deficits are generalized

    PubMed Central

    Haut, Kristen M.; Karlsgodt, Katherine H.; Bilder, Robert M.; Congdon, Eliza; Freimer, Nelson; London, Edythe D.; Sabb, Fred W.; Ventura, Joseph; Cannon, Tyrone D.

    2015-01-01

    Objective Schizophrenia patients exhibit impaired working and episodic memory, but this may represent generalized impairment across memory modalities or performance deficits restricted to particular memory systems in subgroups of patients. Furthermore, it is unclear whether deficits are unique from those associated with other disorders. Method Healthy controls (n=1101) and patients with schizophrenia (n=58), bipolar disorder (n=49) and attention-deficit-hyperactivity-disorder (n=46) performed 18 tasks addressing primarily verbal and spatial episodic and working memory. Effect sizes for group contrasts were compared across tasks and the consistency of subjects’ distributional positions across memory domains was measured. Results Schizophrenia patients performed poorly relative to the other groups on every test. While low to moderate correlation was found between memory domains (r=.320), supporting modularity of these systems, there was limited agreement between measures regarding each individual’s task performance (ICC=.292) and in identifying those individuals falling into the lowest quintile (kappa=0.259). A general ability factor accounted for nearly all of the group differences in performance and agreement across measures in classifying low performers. Conclusions Pathophysiological processes involved in schizophrenia appear to act primarily on general abilities required in all tasks rather than on specific abilities within different memory domains and modalities. These effects represent a general shift in the overall distribution of general ability (i.e., each case functioning at a lower level than they would have if not for the illness), rather than presence of a generally low-performing subgroup of patients. There is little evidence that memory impairments in schizophrenia are shared with bipolar disorder and ADHD. PMID:26299707

  9. Effects of motor congruence on visual working memory.

    PubMed

    Quak, Michel; Pecher, Diane; Zeelenberg, Rene

    2014-10-01

    Grounded-cognition theories suggest that memory shares processing resources with perception and action. The motor system could be used to help memorize visual objects. In two experiments, we tested the hypothesis that people use motor affordances to maintain object representations in working memory. Participants performed a working memory task on photographs of manipulable and nonmanipulable objects. The manipulable objects were objects that required either a precision grip (i.e., small items) or a power grip (i.e., large items) to use. A concurrent motor task that could be congruent or incongruent with the manipulable objects caused no difference in working memory performance relative to nonmanipulable objects. Moreover, the precision- or power-grip motor task did not affect memory performance on small and large items differently. These findings suggest that the motor system plays no part in visual working memory.

  10. When we test, do we stress? Impact of the testing environment on cortisol secretion and memory performance in older adults.

    PubMed

    Sindi, Shireen; Fiocco, Alexandra J; Juster, Robert-Paul; Pruessner, Jens; Lupien, Sonia J

    2013-08-01

    The majority of studies find that older adults have worse memory performance than young adults. However, contextual features in the testing environment may be perceived as stressful by older adults, increasing their stress hormone levels. Given the evidence that older adults are highly sensitive to the effects of stress hormones (cortisol) on memory performance, it is postulated that a stressful testing environment in older adults can lead to an acute stress response and to memory impairments. The current study compared salivary cortisol levels and memory performance in young and older adults tested in environments manipulated to be stressful (unfavourable condition) or not stressful (favourable condition) for each age group. 28 young adults and 32 older adults were tested in two testing conditions: (1) a condition favouring young adults (constructed to be less stressful for young adults), and (2) a condition favouring older adults (constructed to be less stressful for older adults). The main outcome measure was salivary cortisol levels. Additionally, immediate and delayed memory performances were assessed during each condition. In older adults only, we found significantly higher cortisol levels and lower memory performance in the condition favouring young adults. In contrast, cortisol levels were lower and memory performance was better when older adults were tested in conditions favouring them. There was no effect of testing condition in young adults. The results demonstrate that older adults' memory performance is highly sensitive to the testing environment. These findings have important implications for both research and clinical settings in which older adults are tested for memory performance. Copyright © 2012 Elsevier Ltd. All rights reserved.

  11. Performance Modeling and Measurement of Parallelized Code for Distributed Shared Memory Multiprocessors

    NASA Technical Reports Server (NTRS)

    Waheed, Abdul; Yan, Jerry

    1998-01-01

    This paper presents a model to evaluate the performance and overhead of parallelizing sequential code using compiler directives for multiprocessing on distributed shared memory (DSM) systems. With increasing popularity of shared address space architectures, it is essential to understand their performance impact on programs that benefit from shared memory multiprocessing. We present a simple model to characterize the performance of programs that are parallelized using compiler directives for shared memory multiprocessing. We parallelized the sequential implementation of NAS benchmarks using native Fortran77 compiler directives for an Origin2000, which is a DSM system based on a cache-coherent Non Uniform Memory Access (ccNUMA) architecture. We report measurement-based performance of these parallelized benchmarks from four perspectives: efficacy of the parallelization process; scalability; parallelization overhead; and comparison with hand-parallelized and -optimized versions of the same benchmarks. Our results indicate that sequential programs can conveniently be parallelized for DSM systems using compiler directives but realizing performance gains as predicted by the performance model depends primarily on minimizing architecture-specific data locality overhead.
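
    The point about architecture-specific data-locality overhead can be made concrete with a small analytic model. The sketch below is a hypothetical speedup model in the same spirit (the paper's exact model is not reproduced here): a serial fraction, a per-directive fork/join cost, and a ccNUMA remote-access penalty that grows with processor count.

    ```python
    # Hypothetical model: predicted speedup of directive-parallelized code on a
    # DSM system, with an explicit term for data-locality (remote memory) overhead.
    def predicted_speedup(p, t_seq, serial_frac, fork_join_cost, remote_penalty):
        t_parallel = (t_seq * serial_frac
                      + t_seq * (1.0 - serial_frac) / p
                      + fork_join_cost * p           # directive (fork/join) overhead
                      + remote_penalty * (p - 1))    # ccNUMA data-locality overhead
        return t_seq / t_parallel

    for p in (1, 2, 4, 8, 16, 32):
        s = predicted_speedup(p, t_seq=100.0, serial_frac=0.05,
                              fork_join_cost=0.05, remote_penalty=0.2)
        print(f"{p:2d} processors -> predicted speedup {s:4.1f}x")
    ```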

  12. Hardware enabled performance counters with support for operating system context switching

    DOEpatents

    Salapura, Valentina; Wisniewski, Robert W.

    2015-06-30

    A device for supporting hardware-enabled performance counters with support for context switching includes a plurality of performance counters operable to collect information associated with one or more computer system related activities, a first register operable to store a memory address, a second register operable to store a mode indication, and a state machine operable to read the second register and cause the plurality of performance counters to copy the information to the memory area indicated by the memory address, based on the mode indication.
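
    A software model makes the claimed mechanism easier to follow. The sketch below is an illustration only (the patent describes hardware, and the names here are invented): a mode register decides whether the counters are spilled to the memory area named by an address register when the operating system switches context.

    ```python
    # Software model of the described device: counters, an address register, a
    # mode register, and a state machine that spills counters on context switch.
    class CounterUnit:
        def __init__(self, n_counters, memory):
            self.counters = [0] * n_counters
            self.addr_reg = 0       # where to copy counters (the first register)
            self.mode_reg = 0       # 0 = discard on switch, 1 = save (the second register)
            self.memory = memory    # flat memory modeled as a dict: address -> value

        def event(self, idx, count=1):
            self.counters[idx] += count

        def context_switch(self):
            if self.mode_reg == 1:                       # state machine reads the mode
                for i, value in enumerate(self.counters):
                    self.memory[self.addr_reg + i] = value
            self.counters = [0] * len(self.counters)     # start fresh for the next task

    mem = {}
    pmu = CounterUnit(4, mem)
    pmu.addr_reg, pmu.mode_reg = 0x100, 1
    pmu.event(0, 42); pmu.event(3, 7)
    pmu.context_switch()
    print(mem)   # {256: 42, 257: 0, 258: 0, 259: 7}
    ```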

  13. Metal oxide resistive random access memory based synaptic devices for brain-inspired computing

    NASA Astrophysics Data System (ADS)

    Gao, Bin; Kang, Jinfeng; Zhou, Zheng; Chen, Zhe; Huang, Peng; Liu, Lifeng; Liu, Xiaoyan

    2016-04-01

    The traditional Boolean computing paradigm based on the von Neumann architecture is facing great challenges for future information technology applications such as big data, the Internet of Things (IoT), and wearable devices, due to limited processing capability issues such as binary data storage and computing, non-parallel data processing, and the bus requirements between memory units and logic units. The brain-inspired neuromorphic computing paradigm is believed to be one of the promising solutions for realizing more complex functions at a lower cost. To perform such brain-inspired computing with a low cost and low power consumption, novel devices for use as electronic synapses are needed. Metal oxide resistive random access memory (ReRAM) devices have emerged as the leading candidate for electronic synapses. This paper comprehensively addresses the recent work on the design and optimization of metal oxide ReRAM-based synaptic devices. A performance enhancement methodology and optimized operation scheme to achieve analog resistive switching and low-energy training behavior are provided. A three-dimensional vertical synapse network architecture is proposed for high-density integration and low-cost fabrication. The impacts of the ReRAM synaptic device features on the performances of neuromorphic systems are also discussed on the basis of a constructed neuromorphic visual system with a pattern recognition function. Possible solutions to achieve the high recognition accuracy and efficiency of neuromorphic systems are presented.
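
    The notion of an analog synaptic weight stored as device conductance can be illustrated with a simple update rule. The sketch below is a generic behavioral model with assumed parameters, not a model of any specific device in the paper: potentiating (SET) pulses nudge the conductance toward its upper bound with a saturating nonlinearity, and depressing (RESET) pulses do the opposite.

    ```python
    # Generic behavioral model of an analog ReRAM synapse (parameters assumed).
    import math

    GMIN, GMAX = 1e-6, 1e-4      # conductance window in siemens (assumed)
    NONLINEARITY = 3.0           # larger value -> more gradual, linear-like updates

    def apply_pulses(g, n_pulses, potentiate=True):
        step_frac = 1.0 - math.exp(-1.0 / NONLINEARITY)
        for _ in range(n_pulses):
            if potentiate:
                g = min(GMAX, g + (GMAX - g) * step_frac)   # saturates near GMAX
            else:
                g = max(GMIN, g - (g - GMIN) * step_frac)   # saturates near GMIN
        return g

    for n in (1, 5, 20):
        print(f"after {n:2d} SET pulses: G = {apply_pulses(GMIN, n):.2e} S")
    ```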

  14. FMRI hypoactivation during verbal learning and memory in former high school football players with multiple concussions.

    PubMed

    Terry, Douglas P; Adams, T Eric; Ferrara, Michael S; Miller, L Stephen

    2015-06-01

    Multiple concussions before the age of 18 may be associated with late-life memory deficits. This study examined neural activation associated with verbal encoding and memory retrieval in former athletes aged 40-65 who had received at least two concussions (median = 3; range = 2-15) playing high school football and a group of former high school football players with no reported history of concussions matched on age, education, and pre-morbid IQ. Functional magnetic resonance imaging data collected during a modified verbal paired associates paradigm indicated that those with concussive histories had hypoactivation in left hemispheric language regions, including the inferior/middle frontal gyri and angular gyrus compared with controls. However, concussive history was not associated with worse memory functioning on neuropsychological tests or worse behavioral performance during the paradigm, suggesting that multiple early-life concussions may be associated with subtle changes in the verbal encoding system that limit access to higher-order semantic networks, but this difference does not translate into measurable cognitive performance deficits. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  15. Benchmarking and Evaluating Unified Memory for OpenMP GPU Offloading

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mishra, Alok; Li, Lingda; Kong, Martin

    Here, the latest OpenMP standard offers automatic device offloading capabilities which facilitate GPU programming. Despite this, there remain many challenges. One of these is the unified memory feature introduced in recent GPUs. GPUs in current and future HPC systems have enhanced support for a unified memory space. In such systems, the CPU and GPU can access each other's memory transparently; that is, data movement is managed automatically by the underlying system software and hardware. Memory oversubscription is also possible in these systems. However, there is a significant lack of knowledge about how this mechanism performs and how programmers should use it. We have modified several benchmark codes in the Rodinia benchmark suite to study the behavior of OpenMP accelerator extensions and have used them to explore the impact of unified memory in an OpenMP context. We also modified the open-source LLVM compiler to allow OpenMP programs to exploit unified memory. The results of our evaluation reveal that, while the performance of unified memory is comparable with that of normal GPU offloading for benchmarks with little data reuse, it suffers from significant overhead when GPU memory is oversubscribed for benchmarks with large amounts of data reuse. Based on these results, we provide several guidelines for programmers to achieve better performance with unified memory.
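
    The reuse/oversubscription trade-off reported here can be pictured with a back-of-the-envelope transfer model. The sketch below only illustrates the qualitative effect (the numbers and the thrashing assumption are ours, not the paper's measurements): once the working set exceeds GPU memory, the overflowing pages are re-migrated on every reuse pass.

    ```python
    # Toy transfer model for unified memory: data that fits migrates once; the
    # overflow beyond GPU memory is assumed to thrash once per reuse pass.
    def estimated_transfer_gb(working_set_gb, gpu_mem_gb, reuse_passes):
        if working_set_gb <= gpu_mem_gb:
            return working_set_gb                          # first-touch migration only
        overflow = working_set_gb - gpu_mem_gb
        return working_set_gb + overflow * reuse_passes    # repeated page migration

    for ws in (8, 16, 32):                                 # GB, against a 16 GB GPU
        moved = estimated_transfer_gb(ws, gpu_mem_gb=16, reuse_passes=10)
        print(f"{ws:2d} GB working set -> ~{moved} GB migrated")
    ```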

  16. Fornix as an imaging marker for episodic memory deficits in healthy aging and in various neurological disorders

    PubMed Central

    Douet, Vanessa; Chang, Linda

    2015-01-01

    The fornix is a part of the limbic system and constitutes the major efferent and afferent white matter tracts from the hippocampi. The underdevelopment of or injuries to the fornix are strongly associated with memory deficits. Its role in memory impairments was suggested long ago with cases of surgical forniceal transections. However, recent advances in brain imaging techniques, such as diffusion tensor imaging, have revealed that macrostructural and microstructural abnormalities of the fornix correlated highly with declarative and episodic memory performance. This structure appears to provide a robust and early imaging predictor for memory deficits not only in neurodegenerative and neuroinflammatory diseases, such as Alzheimer's disease and multiple sclerosis, but also in schizophrenia and psychiatric disorders, and during neurodevelopment and “typical” aging. The objective of the manuscript is to present a systematic review regarding published brain imaging research on the fornix, including the development of its tracts, its role in various neurological diseases, and its relationship to neurocognitive performance in human studies. PMID:25642186

  17. Massive Memory Revisited: Limitations on Storage Capacity for Object Details in Visual Long-Term Memory

    ERIC Educational Resources Information Center

    Cunningham, Corbin A.; Yassa, Michael A.; Egeth, Howard E.

    2015-01-01

    Previous work suggests that visual long-term memory (VLTM) is highly detailed and has a massive capacity. However, memory performance is subject to the effects of the type of testing procedure used. The current study examines detail memory performance by probing the same memories within the same subjects, but using divergent probing methods. The…

  18. The Sensory Components of High-Capacity Iconic Memory and Visual Working Memory

    PubMed Central

    Bradley, Claire; Pearson, Joel

    2012-01-01

    Early visual memory can be split into two primary components: a high-capacity, short-lived iconic memory followed by a limited-capacity visual working memory that can last many seconds. Whereas a large number of studies have investigated visual working memory for low-level sensory features, much research on iconic memory has used more “high-level” alphanumeric stimuli such as letters or numbers. These two forms of memory are typically examined separately, despite an intrinsic overlap in their characteristics. Here, we used a purely sensory paradigm to examine visual short-term memory for 10 homogeneous items of three different visual features (color, orientation and motion) across a range of durations from 0 to 6 s. We found that the amount of information stored in iconic memory is smaller for motion than for color or orientation. Performance declined exponentially with longer storage durations and reached chance levels after ∼2 s. Further experiments showed that performance for the 10 items at 1 s was contingent on unperturbed attentional resources. In addition, for orientation stimuli, performance was contingent on the location of stimuli in the visual field, especially for short cue delays. Overall, our results suggest a smooth transition between an automatic, high-capacity, feature-specific sensory-iconic memory, and an effortful “lower-capacity” visual working memory. PMID:23055993

  19. A High Performance Micro Channel Interface for Real-Time Industrial Image Processing

    Treesearch

    Thomas H. Drayer; Joseph G. Tront; Richard W. Conners

    1995-01-01

    Data collection and transfer devices are critical to the performance of any machine vision system. The interface described in this paper collects image data from a color line scan camera and transfers the data obtained into the system memory of a Micro Channel-based host computer. A maximum data transfer rate of 20 Mbytes/sec can be achieved using the DMA capabilities...

  20. Performance in working memory and attentional control is associated with the rs2180619 SNP in the CNR1 gene.

    PubMed

    Ruiz-Contreras, A E; Carrillo-Sánchez, K; Ortega-Mora, I; Barrera-Tlapa, M A; Román-López, T V; Rosas-Escobar, C B; Flores-Barrera, L; Caballero-Sánchez, U; Muñoz-Torres, Z; Romero-Hidalgo, S; Hernández-Morales, S; González-Barrios, J A; Vadillo-Ortega, F; Méndez-Díaz, M; Aguilar-Roblero, R; Prospéro-García, O

    2014-02-01

    Individual differences in cognitive performance are partly dependent on genetic polymorphisms. One of the single-nucleotide polymorphisms (SNPs) of the CNR1 gene, which codes for the cannabinoid receptor 1 (CB1R), is rs2180619, located in a regulatory region of this gene (6q14-q15). The alleles of rs2180619 are A > G; the G allele has been associated with addiction and high levels of anxiety (when the G allele interacts with the SS genotype of the 5-HTTLPR gene). However, the GG genotype is also observed in healthy subjects. Considering the G allele a risk factor for 'psychopathological conditions', it is possible that healthy GG subjects are neither addicted nor anxious but nevertheless show reduced performance, compared with AA subjects, in attentional control and working memory processing. One hundred and sixty-four healthy young Mexican-Mestizo subjects (100 women and 64 men; mean age: 22.86 years, SD = 2.72) participated in this study, solving a task in which attentional control and working memory were required. GG subjects, compared to AA subjects, showed: (1) a generally lower performance in the task (P = 0.02); (2) lower performance only when a high load of information was held in working memory (P = 0.02); and (3) a higher vulnerability to distractors (P = 0.03). Our results suggest that, although the performance of GG subjects was at normal levels, a lower efficiency of the endocannabinoid system, probably due to lowered expression of CB1R, reduced the performance of these subjects when attentional control and working memory processing were challenged. © 2013 John Wiley & Sons Ltd and International Behavioural and Neural Genetics Society.

  1. Optical memories in digital computing

    NASA Technical Reports Server (NTRS)

    Alford, C. O.; Gaylord, T. K.

    1979-01-01

    High-capacity optical memories with relatively high data-transfer rates and multiport simultaneous-access capability may serve as the basis for new computer architectures. Several computer structures that might profitably use such memories are: a) a simultaneous record-access system, b) a simultaneously-shared memory computer system, and c) a parallel digital processing structure.

  2. Design and Implementation of High-Performance GIS Dynamic Objects Rendering Engine

    NASA Astrophysics Data System (ADS)

    Zhong, Y.; Wang, S.; Li, R.; Yun, W.; Song, G.

    2017-12-01

    Spatio-temporal dynamic visualization is more vivid than static visualization, and dynamic visualization techniques are important for revealing the variation process and trend of geographical phenomena vividly and comprehensively. Dealing with the challenges posed by dynamic visualization of both 2D and 3D spatial dynamic targets, especially across different spatial data types, requires a high-performance GIS dynamic-object rendering engine. The main approach to improving a rendering engine that handles vast numbers of dynamic targets relies on key technologies of high-performance GIS, including in-memory computing, parallel computing, GPU computing and high-performance algorithms. In this study, a high-performance GIS dynamic-object rendering engine is designed and implemented to solve this problem using hybrid acceleration techniques. The engine combines GPU computing, OpenGL, and high-performance algorithms with the advantages of 64-bit in-memory computing. It processes 2D and 3D dynamic target data efficiently and runs smoothly with vast amounts of dynamic target data. A prototype of the rendering engine was developed based on SuperMap GIS iObjects. Experiments designed for large-scale spatial data visualization showed that the engine delivers high performance: rendering of two-dimensional and three-dimensional dynamic objects is up to 20 times faster on the GPU than on the CPU.

  3. Optical Associative Processors For Visual Perception

    NASA Astrophysics Data System (ADS)

    Casasent, David; Telfer, Brian

    1988-05-01

    We consider various associative processor modifications required to allow these systems to be used for visual perception, scene analysis, and object recognition. For these applications, decisions on the class of the objects present in the input image are required and thus heteroassociative memories are necessary (rather than the autoassociative memories that have been given most attention). We analyze the performance of both associative processors and note that there is considerable difference between heteroassociative and autoassociative memories. We describe associative processors suitable for realizing functions such as: distortion invariance (using linear discriminant function memory synthesis techniques), noise and image processing performance (using autoassociative memories in cascade with a heteroassociative processor and with a finite number of autoassociative memory iterations employed), shift invariance (achieved through the use of associative processors operating on feature space data), and the analysis of multiple objects in high noise (which is achieved using associative processing of the output from symbolic correlators). We detail and provide initial demonstrations of the use of associative processors operating on iconic, feature space and symbolic data, as well as adaptive associative processors.
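
    The distinction the authors draw between autoassociative and heteroassociative memories can be shown with the classic linear (outer-product) construction; the sketch below is that textbook construction in NumPy, not the optical implementation discussed in the paper.

    ```python
    # Linear heteroassociative memory: W is a sum of outer products of
    # (class code, input pattern) pairs, so a stored input recalls its class
    # code rather than a cleaned-up copy of itself (as an autoassociator would).
    import numpy as np

    rng = np.random.default_rng(0)
    inputs = rng.choice([-1.0, 1.0], size=(3, 64))    # three stored bipolar patterns
    labels = np.eye(3)                                # one-hot class codes

    W = labels.T @ inputs / inputs.shape[1]           # Hebbian outer-product sum

    noisy = inputs[1] * rng.choice([1, 1, 1, -1], size=64)   # ~25% of bits flipped
    recall = W @ noisy
    print("recalled class:", int(np.argmax(recall)))  # -> 1 despite the noise
    ```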

  4. Evidence for a double dissociation of articulatory rehearsal and non-articulatory maintenance of phonological information in human verbal working memory.

    PubMed

    Trost, Sarah; Gruber, Oliver

    2012-01-01

    Recent functional neuroimaging studies have provided evidence that human verbal working memory is represented by two complementary neural systems, a left lateralized premotor-parietal network implementing articulatory rehearsal and a presumably phylogenetically older bilateral anterior-prefrontal/inferior-parietal network subserving non-articulatory maintenance of phonological information. In order to corroborate these findings from functional neuroimaging, we performed a targeted behavioural study in patients with very selective and circumscribed brain lesions to key regions suggested to support these different subcomponents of human verbal working memory. Within a sample of over 500 neurological patients assessed with high-resolution structural magnetic resonance imaging, we identified 2 patients with corresponding brain lesions, one with an isolated lesion to Broca's area and the other with a selective lesion bilaterally to the anterior middle frontal gyrus. These 2 patients as well as groups of age-matched healthy controls performed two circuit-specific verbal working memory tasks. In this way, we systematically assessed the hypothesized selective behavioural effects of these brain lesions on the different subcomponents of verbal working memory in terms of a double dissociation. Confirming prior findings, the lesion to Broca's area led to reduced performance under articulatory rehearsal, whereas the non-articulatory maintenance of phonological information was unimpaired. Conversely, the bifrontopolar brain lesion was associated with impaired non-articulatory phonological working memory, whereas performance under articulatory rehearsal was unaffected. The present experimental neuropsychological study in patients with specific and circumscribed brain lesions confirms the hypothesized double dissociation of two complementary brain systems underlying verbal working memory in humans. In particular, the results demonstrate the functional relevance of the anterior prefrontal cortex for non-articulatory maintenance of phonological information and, in this way, provide further support for the evolutionary-based functional-neuroanatomical model of human working memory. Copyright © 2012 S. Karger AG, Basel.

  5. LittleQuickWarp: an ultrafast image warping tool.

    PubMed

    Qu, Lei; Peng, Hanchuan

    2015-02-01

    Warping images into a standard coordinate space is critical for many image computing related tasks. However, for multi-dimensional and high-resolution images, an accurate warping operation itself is often very expensive in terms of computer memory and computational time. For high-throughput image analysis studies such as brain mapping projects, it is desirable to have high performance image warping tools that are compatible with common image analysis pipelines. In this article, we present LittleQuickWarp, a swift and memory efficient tool that boosts 3D image warping performance dramatically and at the same time has high warping quality similar to the widely used thin plate spline (TPS) warping. Compared to the TPS, LittleQuickWarp can improve the warping speed 2-5 times and reduce the memory consumption 6-20 times. We have implemented LittleQuickWarp as an Open Source plug-in program on top of the Vaa3D system (http://vaa3d.org). The source code and a brief tutorial can be found in the Vaa3D plugin source code repository. Copyright © 2014 Elsevier Inc. All rights reserved.
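
    For readers who want to see the kind of baseline the tool is compared against, the sketch below fits a thin-plate-spline-style displacement field from matched landmarks with SciPy's radial basis function interpolator; it is a generic illustration, not LittleQuickWarp's algorithm or API.

    ```python
    # Thin-plate-spline-style landmark warping with SciPy (illustrative baseline).
    import numpy as np
    from scipy.interpolate import Rbf

    # Matched control points: source (subject) space -> target (standard) space.
    src = np.array([[0, 0], [0, 10], [10, 0], [10, 10], [5, 5]], dtype=float)
    dst = np.array([[1, 0], [0, 11], [11, 1], [10, 10], [6, 6]], dtype=float)

    warp_x = Rbf(src[:, 0], src[:, 1], dst[:, 0], function='thin_plate')
    warp_y = Rbf(src[:, 0], src[:, 1], dst[:, 1], function='thin_plate')

    query = np.array([[2.0, 3.0], [7.5, 7.5]])        # points to map into target space
    warped = np.column_stack([warp_x(query[:, 0], query[:, 1]),
                              warp_y(query[:, 0], query[:, 1])])
    print(warped)
    ```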

  6. Consumption of an acute dose of caffeine reduces acquisition but not memory in the honey bee.

    PubMed

    Mustard, Julie A; Dews, Lauren; Brugato, Arlana; Dey, Kevin; Wright, Geraldine A

    2012-06-15

    Caffeine affects several molecules that are also involved in the processes underlying learning and memory such as cAMP and calcium. However, studies of caffeine's influence on learning and memory in mammals are often contradictory. Invertebrate model systems have provided valuable insight into the actions of many neuroactive compounds including ethanol and cocaine. We use the honey bee (Apis mellifera) to investigate how the ingestion of acute doses of caffeine before, during, and after conditioning influences performance in an appetitive olfactory learning and memory task. Consumption of caffeine doses of 0.01 M or greater during or prior to conditioning causes a significant reduction in response levels during acquisition. Although bees find the taste of caffeine to be aversive at high concentrations, the bitter taste does not explain the reduction in acquisition observed for bees fed caffeine before conditioning. While high doses of caffeine reduced performance during acquisition, the response levels of bees given caffeine were the same as those of the sucrose-only control group in a recall test 24 h after conditioning. In addition, caffeine administered after conditioning had no effect on recall. These results suggest that caffeine specifically affects performance during acquisition and not the processes involved in the formation of early long-term memory. Copyright © 2012 Elsevier B.V. All rights reserved.

  7. Identification of Ginkgo biloba supplements adulteration using high performance thin layer chromatography and ultra high performance liquid chromatography-diode array detector-quadrupole time of flight-mass spectrometry

    USDA-ARS?s Scientific Manuscript database

    Ginkgo biloba is one of the most widely sold herbal supplements and medicines in the world. Its popularity stems from reported positive effects on memory and the circulatory system in clinical studies. As ginkgo's popularity increased, non-proprietary extracts were introduced claiming to have similar phyto...

  8. Semi-automatic sparse preconditioners for high-order finite element methods on non-uniform meshes

    NASA Astrophysics Data System (ADS)

    Austin, Travis M.; Brezina, Marian; Jamroz, Ben; Jhurani, Chetan; Manteuffel, Thomas A.; Ruge, John

    2012-05-01

    High-order finite elements often have a higher accuracy per degree of freedom than the classical low-order finite elements. However, in the context of implicit time-stepping methods, high-order finite elements present challenges to the construction of efficient simulations due to the high cost of inverting the denser finite element matrix. There are many cases where simulations are limited by the memory required to store the matrix and/or the algorithmic components of the linear solver. We are particularly interested in preconditioned Krylov methods for linear systems generated by discretization of elliptic partial differential equations with high-order finite elements. Using a preconditioner like Algebraic Multigrid can be costly in terms of memory due to the need to store matrix information at the various levels. We present a novel method for defining a preconditioner for systems generated by high-order finite elements that is based on a much sparser system than the original high-order finite element system. We investigate the performance for non-uniform meshes on a cube and a cubed sphere mesh, showing that the sparser preconditioner is more efficient and uses significantly less memory. Finally, we explore new methods to construct the sparse preconditioner and examine their effectiveness for non-uniform meshes. We compare results to a direct use of Algebraic Multigrid as a preconditioner and to a two-level additive Schwarz method.
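
    The general recipe, preconditioning the denser high-order system with a factorization of a much sparser surrogate, is easy to demonstrate at small scale. The sketch below uses a banded SPD stand-in for the high-order matrix and its tridiagonal part as the sparse surrogate; it illustrates the idea only and is not the preconditioner constructed in the paper.

    ```python
    # Precondition a denser SPD system with an LU factorization of a much
    # sparser approximation (here: its tridiagonal part), wrapped for CG.
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    n = 400
    # Stand-in for a high-order stiffness matrix: SPD with a wider band.
    bands = [8.0 * np.ones(n)] + [(-1.0 / k) * np.ones(n - k) for k in range(1, 5)]
    A = sp.diags(bands, [0, 1, 2, 3, 4], format='csr')
    A = (A + A.T) * 0.5 + sp.eye(n)                    # symmetrize, keep it SPD

    A_sparse = sp.diags([A.diagonal(-1), A.diagonal(), A.diagonal(1)],
                        [-1, 0, 1], format='csc')      # much sparser surrogate
    lu = spla.splu(A_sparse)
    M = spla.LinearOperator((n, n), matvec=lu.solve)   # preconditioner action

    b = np.ones(n)
    x, info = spla.cg(A, b, M=M, atol=1e-10)
    print("converged" if info == 0 else f"cg returned info={info}")
    ```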

  9. Interference control in working memory: comparing groups of children with atypical development.

    PubMed

    Palladino, Paola; Ferrari, Marcella

    2013-01-01

    The study aimed to test whether working memory deficits in children at risk of Learning Disabilities (LD) and/or attention deficit/hyperactivity disorder (ADHD) can be attributed to deficits in interference control, thereby implicating prefrontal systems. Two groups of children known for showing poor working memory (i.e., children with poor comprehension and children with ADHD) were compared to a group of children with specific reading decoding problems (i.e., having severe problems in phonological rather than working memory) and to a control group. All children were tested with a verbal working memory task. Interference control of irrelevant items was examined by a lexical decision task presented immediately after the final recall in about half the trials, selected at random. The interference control measure was therefore directly related to working memory performance. Results confirmed deficient working memory performance in poor comprehenders and children at risk of ADHD + LD. More interestingly, this working memory deficit was associated with greater activation of irrelevant information than in the control group. Poor decoders showed more efficient interference control, in contrast to poor comprehenders and ADHD + LD children. These results indicated that interfering items were still highly accessible to working memory in children who fail the working memory task. In turn, these findings strengthen and clarify the role of interference control, one of the most critical prefrontal functions, in working memory.

  10. A novel binary shape context for 3D local surface description

    NASA Astrophysics Data System (ADS)

    Dong, Zhen; Yang, Bisheng; Liu, Yuan; Liang, Fuxun; Li, Bijun; Zang, Yufu

    2017-08-01

    3D local surface description is now at the core of many computer vision technologies, such as 3D object recognition, intelligent driving, and 3D model reconstruction. However, most of the existing 3D feature descriptors still suffer from low descriptiveness, weak robustness, and inefficiency in both time and memory. To overcome these challenges, this paper presents a robust and descriptive 3D Binary Shape Context (BSC) descriptor with high efficiency in both time and memory. First, a novel BSC descriptor is generated for 3D local surface description, and the performance of the BSC descriptor under different settings of its parameters is analyzed. Next, the descriptiveness, robustness, and efficiency in both time and memory of the BSC descriptor are evaluated and compared to those of several state-of-the-art 3D feature descriptors. Finally, the performance of the BSC descriptor for 3D object recognition is also evaluated on a number of popular benchmark datasets and on an urban-scene dataset collected by a terrestrial laser scanner system. Comprehensive experiments demonstrate that the proposed BSC descriptor obtained high descriptiveness, strong robustness, and high efficiency in both time and memory and achieved high recognition rates of 94.8%, 94.1% and 82.1% on the considered UWA, Queen, and WHU datasets, respectively.
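
    The "binary" part of such descriptors comes down to thresholding local histogram bins into bits and matching with Hamming distance, which the sketch below illustrates in a generic way (the bin layout, thresholding rule, and sizes are assumptions, not the BSC construction itself).

    ```python
    # Generic binary descriptor matching: binarize histogram bins against their
    # median, pack to bytes, and compare by Hamming distance (XOR + popcount).
    import numpy as np

    def binarize(histogram):
        bits = histogram > np.median(histogram)
        return np.packbits(bits.astype(np.uint8))

    def hamming(a, b):
        return int(np.unpackbits(np.bitwise_xor(a, b)).sum())

    rng = np.random.default_rng(1)
    hist = rng.random(256)                                # shape-context-style bins
    desc_a = binarize(hist)
    desc_b = binarize(hist + rng.normal(0.0, 0.02, 256))  # same surface, mild noise
    desc_c = binarize(rng.random(256))                    # unrelated surface
    print("match distance    :", hamming(desc_a, desc_b))
    print("non-match distance:", hamming(desc_a, desc_c))
    ```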

  11. Temperature and leakage aware techniques to improve cache reliability

    NASA Astrophysics Data System (ADS)

    Akaaboune, Adil

    Decreasing power consumption in small devices such as handhelds, cell phones and high-performance processors is now one of the most critical design concerns. On-chip cache memories dominate the chip area in microprocessors, and thus arises the need for power-efficient cache memories. Cache is the simplest cost-effective method to attain a high-speed memory hierarchy, and its performance is extremely critical for high-speed computers. Cache is used by the microprocessor to bridge the performance gap between the processor and main memory (RAM); hence memory bandwidth is frequently a bottleneck that can affect peak throughput significantly. In the design of any cache system, the tradeoffs of area/cost, performance, power consumption, and thermal management must be taken into consideration. Previous work has mainly concentrated on performance and area/cost constraints. More recent work has focused on low-power design, especially for portable devices and media-processing systems; however, less research has been done on the relationship between heat management, leakage power and cost per die. Lately, the focus of power dissipation in new generations of microprocessors has shifted from dynamic power to idle power, a previously underestimated form of power loss that drains battery charge and forces early shutdown through wasted energy. The problem has been aggravated by aggressive process scaling, a device-level method originally used by designers to enhance performance, contain dissipation and reduce the size of increasingly dense digital circuits. This dissertation studies the impact of hotspots in the cache memory on leakage consumption and on microprocessor reliability and durability. The work first demonstrates that by eliminating hotspots in the cache memory, leakage power is reduced and reliability is therefore improved. The second technique studied is data quality management, which improves the quality of the data stored in the cache to reduce power consumption. The initial work on this subject focuses on the types of data that increase leakage consumption and on ways to manage them without impacting the performance of the microprocessor. The second phase of the project focuses on managing data storage in different blocks of the cache to smooth leakage power as well as dynamic power consumption. The last technique is a voltage-controlled cache that reduces leakage consumption both during execution and in the idle state. Two blocks of the 4-way set-associative cache go through a voltage regulator before reaching the voltage well, and the other two are directly connected to the voltage well. The idea behind this technique is to use the replacement-algorithm information to increase or decrease the voltage of the two blocks depending on the need for the information stored in them.
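
    The voltage-controlled cache idea lends itself to a small behavioral model. The sketch below is a toy 4-way set-associative cache in which the two regulated ways are parked at a low-retention voltage whenever the replacement statistics show they contribute few hits; the policy, threshold, and geometry are assumptions for illustration, not the dissertation's design.

    ```python
    # Toy model of a voltage-controlled 4-way set-associative cache: two ways sit
    # behind a regulator and are parked when their recent hit share is low.
    from collections import deque

    WAYS, SETS, LINE = 4, 64, 64
    REGULATED_WAYS = (2, 3)                 # the two ways behind the regulator

    class Cache:
        def __init__(self):
            self.tags = [[None] * WAYS for _ in range(SETS)]
            self.lru = [deque(range(WAYS)) for _ in range(SETS)]   # front = LRU
            self.way_hits = [0] * WAYS
            self.low_voltage = False        # True -> regulated ways are parked

        def access(self, addr):
            s, tag = (addr // LINE) % SETS, addr // (LINE * SETS)
            active = 2 if self.low_voltage else WAYS
            for w in range(active):
                if self.tags[s][w] == tag:
                    self.way_hits[w] += 1
                    self.lru[s].remove(w); self.lru[s].append(w)
                    return True
            victim = next(w for w in self.lru[s] if w < active)    # LRU active way
            self.tags[s][victim] = tag
            self.lru[s].remove(victim); self.lru[s].append(victim)
            return False

        def adapt_voltage(self):
            total = sum(self.way_hits) or 1
            share = sum(self.way_hits[w] for w in REGULATED_WAYS) / total
            self.low_voltage = share < 0.10                        # assumed threshold
            self.way_hits = [0] * WAYS

    c = Cache()
    hits = sum(c.access((LINE * i) % 4096) for i in range(10_000))
    c.adapt_voltage()
    print(hits, "hits; regulated ways parked:", c.low_voltage)
    ```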

  12. Ringo: Interactive Graph Analytics on Big-Memory Machines

    PubMed Central

    Perez, Yonathan; Sosič, Rok; Banerjee, Arijit; Puttagunta, Rohan; Raison, Martin; Shah, Pararth; Leskovec, Jure

    2016-01-01

    We present Ringo, a system for analysis of large graphs. Graphs provide a way to represent and analyze systems of interacting objects (people, proteins, webpages) with edges between the objects denoting interactions (friendships, physical interactions, links). Mining graphs provides valuable insights about individual objects as well as the relationships among them. In building Ringo, we take advantage of the fact that machines with large memory and many cores are widely available and also relatively affordable. This allows us to build an easy-to-use interactive high-performance graph analytics system. Graphs also need to be built from input data, which often resides in the form of relational tables. Thus, Ringo provides rich functionality for manipulating raw input data tables into various kinds of graphs. Furthermore, Ringo also provides over 200 graph analytics functions that can then be applied to constructed graphs. We show that a single big-memory machine provides a very attractive platform for performing analytics on all but the largest graphs as it offers excellent performance and ease of use as compared to alternative approaches. With Ringo, we also demonstrate how to integrate graph analytics with an iterative process of trial-and-error data exploration and rapid experimentation, common in data mining workloads. PMID:27081215
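
    The table-to-graph step the abstract describes can be illustrated without the system itself; the sketch below (plain Python, not the Ringo API) turns a relational table of interactions into an in-memory adjacency structure and runs a simple analytic over it.

    ```python
    # Build an in-memory graph from relational rows and query a basic statistic.
    from collections import defaultdict

    # A toy "table" of (user, friend) rows, as might come from a relational source.
    rows = [("alice", "bob"), ("bob", "carol"), ("carol", "alice"), ("dave", "alice")]

    adjacency = defaultdict(set)
    for src, dst in rows:                  # undirected graph from the edge table
        adjacency[src].add(dst)
        adjacency[dst].add(src)

    degrees = {node: len(nbrs) for node, nbrs in adjacency.items()}
    top = max(degrees, key=degrees.get)
    print(f"highest-degree node: {top} (degree {degrees[top]})")
    ```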

  13. Ringo: Interactive Graph Analytics on Big-Memory Machines.

    PubMed

    Perez, Yonathan; Sosič, Rok; Banerjee, Arijit; Puttagunta, Rohan; Raison, Martin; Shah, Pararth; Leskovec, Jure

    2015-01-01

    We present Ringo, a system for analysis of large graphs. Graphs provide a way to represent and analyze systems of interacting objects (people, proteins, webpages) with edges between the objects denoting interactions (friendships, physical interactions, links). Mining graphs provides valuable insights about individual objects as well as the relationships among them. In building Ringo, we take advantage of the fact that machines with large memory and many cores are widely available and also relatively affordable. This allows us to build an easy-to-use interactive high-performance graph analytics system. Graphs also need to be built from input data, which often resides in the form of relational tables. Thus, Ringo provides rich functionality for manipulating raw input data tables into various kinds of graphs. Furthermore, Ringo also provides over 200 graph analytics functions that can then be applied to constructed graphs. We show that a single big-memory machine provides a very attractive platform for performing analytics on all but the largest graphs as it offers excellent performance and ease of use as compared to alternative approaches. With Ringo, we also demonstrate how to integrate graph analytics with an iterative process of trial-and-error data exploration and rapid experimentation, common in data mining workloads.

  14. The Effects of Physical Exercise and Cognitive Training on Memory and Neurotrophic Factors.

    PubMed

    Heisz, Jennifer J; Clark, Ilana B; Bonin, Katija; Paolucci, Emily M; Michalski, Bernadeta; Becker, Suzanna; Fahnestock, Margaret

    2017-11-01

    This study examined the combined effect of physical exercise and cognitive training on memory and neurotrophic factors in healthy, young adults. Ninety-five participants completed 6 weeks of exercise training, combined exercise and cognitive training, or no training (control). Both the exercise and combined training groups improved performance on a high-interference memory task, whereas the control group did not. In contrast, neither training group improved on general recognition performance, suggesting that exercise training selectively increases high-interference memory that may be linked to hippocampal function. Individuals who experienced greater fitness improvements from the exercise training (i.e., high responders to exercise) also had greater increases in the serum neurotrophic factors brain-derived neurotrophic factor and insulin-like growth factor-1. These high responders to exercise also had better high-interference memory performance as a result of the combined exercise and cognitive training compared with exercise alone, suggesting that potential synergistic effects might depend on the availability of neurotrophic factors. These findings are especially important, as memory benefits accrued from a relatively short intervention in high-functioning young adults.

  15. Resilient and Robust High Performance Computing Platforms for Scientific Computing Integrity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Yier

    As technology advances, computer systems are subject to increasingly sophisticated cyber-attacks that compromise both their security and integrity. High performance computing platforms used in commercial and scientific applications involving sensitive, or even classified data, are frequently targeted by powerful adversaries. This situation is made worse by a lack of fundamental security solutions that both perform efficiently and are effective at preventing threats. Current security solutions fail to address the threat landscape and ensure the integrity of sensitive data. As challenges rise, both private and public sectors will require robust technologies to protect their computing infrastructure. The research outcomes from this project address these challenges. For example, we present LAZARUS, a novel technique to harden kernel Address Space Layout Randomization (KASLR) against paging-based side-channel attacks. In particular, our scheme allows for fine-grained protection of the virtual memory mappings that implement the randomization. We demonstrate the effectiveness of our approach by hardening a recent Linux kernel with LAZARUS, mitigating all of the previously presented side-channel attacks on KASLR. Our extensive evaluation shows that LAZARUS incurs only 0.943% overhead for standard benchmarks, and is therefore highly practical. We also introduced HA2lloc, a hardware-assisted allocator that is capable of leveraging an extended memory management unit to detect memory errors in the heap. We also performed testing using HA2lloc in a simulation environment and found that the approach is capable of preventing common memory vulnerabilities.

  16. Learning disabled and average readers' working memory and comprehension: does metacognition play a role?

    PubMed

    Swanson, H L; Trahan, M

    1996-09-01

    The present study investigates (a) whether learning disabled readers' working memory deficits that underlie poor reading comprehension are related to a general system, and (b) whether metacognition contributes to comprehension beyond what is predicted by working memory and word knowledge. To this end, performance of learning disabled (N = 60) and average readers (N = 60) was compared on the reading comprehension, reading rate, and vocabulary subtests of the Nelson Skills Reading Test, a Sentence Span test composed of high- and low-imagery words, and a Metacognitive Questionnaire. As expected, differences between groups in working memory, vocabulary, and reading measures emerged, whereas ability groups were statistically comparable on the Metacognitive Questionnaire. A within-group analysis indicated that the correlation patterns between working memory, vocabulary, metacognition, and reading comprehension were not the same between ability groups. For predicting reading comprehension, the metacognitive questionnaire best predicted learning disabled readers' performance, whereas the working memory span measure that included low-imagery words best predicted average achieving readers' comprehension. Overall, the results suggest that the relationship between learning disabled readers' generalised working memory deficits and poor reading comprehension may be mediated by metacognition.

  17. Effect of visual and tactile feedback on kinematic synergies in the grasping hand.

    PubMed

    Patel, Vrajeshri; Burns, Martin; Vinjamuri, Ramana

    2016-08-01

    The human hand uses a combination of feedforward and feedback mechanisms to accomplish high-degree-of-freedom grasp control efficiently. In this study, we used a synergy-based control model to determine the effect of sensory feedback on kinematic synergies in the grasping hand. Ten subjects performed two types of grasps: one that included feedback (real) and one without feedback (memory-guided), at two different speeds (rapid and natural). Kinematic synergies were extracted from rapid real and rapid memory-guided grasps using principal component analysis. Synergies extracted from memory-guided grasps revealed greater preservation of natural inter-finger relationships than those found in corresponding synergies extracted from real grasps. Reconstruction of natural real and natural memory-guided grasps was used to test performance and generalizability of synergies. A temporal analysis of reconstruction patterns revealed the differing contribution of individual synergies in real grasps versus memory-guided grasps. Finally, the results showed that memory-guided synergies could not reconstruct real grasps as accurately as real synergies could reconstruct memory-guided grasps. These results demonstrate how visual and tactile feedback affects a closed-loop synergy-based motor control system.

  18. Shape memory alloy resistance behaviour at high altitude for feedback control

    NASA Astrophysics Data System (ADS)

    Ng, W. T.; Sedan, M. F.; Abdullah, E. J.; Azrad, S.; Harithuddin, A. S. M.

    2017-12-01

    Many recent aerospace technologies use smart actuators to reduce system complexity and increase reliability. One such actuator is the shape memory alloy (SMA) actuator, which is lightweight and produces high force and large deflection. However, some disadvantages of SMA actuators have been identified, including the nonlinear response of strain to input current, a hysteresis characteristic that results in inaccurate control and less-than-optimum system performance, high operating temperatures, slow response, and a high electrical power requirement to obtain the desired actuation forces. It is still unknown whether SMA actuators can perform effectively at high altitude with low surrounding temperature. The work presented here covers the preliminary process of verifying the feasibility of using resistance as feedback control at high altitude for aerospace applications. The temperature and resistance of an SMA actuator at high altitude were investigated by conducting an experiment onboard a high altitude balloon. The results from the high altitude experiment indicate that the resistance or voltage drop of the SMA wire is not significantly affected by the low surrounding temperature at high altitude, as compared with the temperature of the SMA. Resistance feedback control for SMA actuators may therefore be suitable for aerospace applications.

  19. Effects of Steady-State Noise on Verbal Working Memory in Young Adults

    PubMed Central

    Alt, Mary; DeDe, Gayle; Olson, Sarah; Shehorn, James

    2015-01-01

    Purpose: We set out to examine the impact of perceptual, linguistic, and capacity demands on performance of verbal working-memory tasks. The Ease of Language Understanding model (Rönnberg et al., 2013) provides a framework for testing the dynamics of these interactions within the auditory-cognitive system. Methods: Adult native speakers of English (n = 45) participated in verbal working-memory tasks requiring processing and storage of words involving different linguistic demands (closed/open set). Capacity demand ranged from 2 to 7 words per trial. Participants performed the tasks in quiet and in speech-spectrum-shaped noise. Separate groups of participants were tested at different signal-to-noise ratios. Word-recognition measures were obtained to determine effects of noise on intelligibility. Results: Contrary to predictions, steady-state noise did not have an adverse effect on working-memory performance in every situation. Noise negatively influenced performance for the task with high linguistic demand. Of particular importance is the finding that the adverse effects of background noise were not confined to conditions involving declines in recognition. Conclusions: Perceptual, linguistic, and cognitive demands can dynamically affect verbal working-memory performance even in a population of healthy young adults. Results suggest that researchers and clinicians need to carefully analyze task demands to understand the independent and combined auditory-cognitive factors governing performance in everyday listening situations. PMID:26384291

  20. Temporal context memory in high-functioning autism.

    PubMed

    Gras-Vincendon, Agnès; Mottron, Laurent; Salamé, Pierre; Bursztejn, Claude; Danion, Jean-Marie

    2007-11-01

    Episodic memory, i.e. memory for specific episodes situated in space and time, seems impaired in individuals with autism. According to weak central coherence theory, individuals with autism have general difficulty connecting contextual and item information which then impairs their capacity to memorize information in context. This study investigated temporal context memory for visual information in individuals with autism. Eighteen adolescents and adults with high-functioning autism (HFA) or Asperger syndrome (AS) and age- and IQ-matched typically developing participants were tested using a recency judgement task. The performance of the autistic group did not differ from that of the control group, nor did the performance between the AS and HFA groups. We conclude that autism in high-functioning individuals does not impair temporal context memory as assessed on this task. We suggest that individuals with autism are as efficient on this task as typically developing subjects because contextual memory performance here involves more automatic than organizational processing.

  1. New data acquisition system for the focal plane polarimeter of the Grand Raiden spectrometer

    NASA Astrophysics Data System (ADS)

    Tamii, A.; Sakaguchi, H.; Takeda, H.; Yosoi, M.; Akimune, H.; Fujiwara, M.; Ogata, H.; Tanaka, M.; Togawa, H.

    1996-10-01

    This paper describes a new data acquisition system for the focal plane polarimeter of the Grand Raiden spectrometer at the Research Center for Nuclear Physics (RCNP) in Osaka, Japan. Data are acquired by a Creative Electronic Systems (CES) Starburst, which is a CAMAC auxiliary crate controller equipped with a Digital Equipment Corporation (DEC) J11 microprocessor. The data on the Starburst are transferred to a VME single-board computer. A VME reflective memory module broadcasts the data to other systems through a fiber-optic link. A data transfer rate of 2.0 Mbytes/s between VME modules has been achieved by reflective memories. This rate includes the overhead of buffer management. The overall transfer rate, however, is limited by the performance of the Starburst to about 160 Kbytes/s at maximum. In order to further improve the system performance, we developed a new readout module called the Rapid Data Transfer Module (RDTM). RDTMs transfer data from LeCroy PCOS III or 4298 modules and from FERA/FERETs directly to CES 8170 High Speed Memories (HSM) in VME crates; the data transfer rate of the RDTM from the PCOS III to the HSM is about 4 Mbytes/s.

  2. Recognition memory span in autopsy-confirmed Dementia with Lewy Bodies and Alzheimer's Disease.

    PubMed

    Salmon, David P; Heindel, William C; Hamilton, Joanne M; Vincent Filoteo, J; Cidambi, Varun; Hansen, Lawrence A; Masliah, Eliezer; Galasko, Douglas

    2015-08-01

    Evidence from patients with amnesia suggests that recognition memory span tasks engage both long-term memory (i.e., secondary memory) processes mediated by the diencephalic-medial temporal lobe memory system and working memory processes mediated by fronto-striatal systems. Thus, the recognition memory span task may be particularly effective for detecting memory deficits in disorders that disrupt both memory systems. The presence of unique pathology in fronto-striatal circuits in Dementia with Lewy Bodies (DLB) compared to AD suggests that performance on the recognition memory span task might be differentially affected in the two disorders even though they have quantitatively similar deficits in secondary memory. In the present study, patients with autopsy-confirmed DLB or AD, and Normal Control (NC) participants, were tested on separate recognition memory span tasks that required them to retain increasing amounts of verbal, spatial, or visual object (i.e., faces) information across trials. Results showed that recognition memory spans for verbal and spatial stimuli, but not face stimuli, were lower in patients with DLB than in those with AD, and more impaired relative to NC performance. This was despite similar deficits in the two patient groups on independent measures of secondary memory such as the total number of words recalled from long-term storage on the Buschke Selective Reminding Test. The disproportionate vulnerability of recognition memory span task performance in DLB compared to AD may be due to greater fronto-striatal involvement in DLB and a corresponding decrement in cooperative interaction between working memory and secondary memory processes. Assessment of recognition memory span may contribute to the ability to distinguish between DLB and AD relatively early in the course of disease. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. Recognition Memory Span in Autopsy-Confirmed Dementia with Lewy Bodies and Alzheimer’s Disease

    PubMed Central

    Salmon, David P.; Heindel, William C.; Hamilton, Joanne M.; Filoteo, J. Vincent; Cidambi, Varun; Hansen, Lawrence A.; Masliah, Eliezer; Galasko, Douglas

    2016-01-01

    Evidence from patients with amnesia suggests that recognition memory span tasks engage both long-term memory (i.e., secondary memory) processes mediated by the diencephalic-medial temporal lobe memory system and working memory processes mediated by fronto-striatal systems. Thus, the recognition memory span task may be particularly effective for detecting memory deficits in disorders that disrupt both memory systems. The presence of unique pathology in fronto-striatal circuits in Dementia with Lewy Bodies (DLB) compared to AD suggests that performance on the recognition memory span task might be differentially affected in the two disorders even though they have quantitatively similar deficits in secondary memory. In the present study, patients with autopsy-confirmed DLB or AD, and normal control (NC) participants, were tested on separate recognition memory span tasks that required them to retain increasing amounts of verbal, spatial, or visual object (i.e., faces) information across trials. Results showed that recognition memory spans for verbal and spatial stimuli, but not face stimuli, were lower in patients with DLB than in those with AD, and more impaired relative to NC performance. This was despite similar deficits in the two patient groups on independent measures of secondary memory such as the total number of words recalled from Long-Term Storage on the Buschke Selective Reminding Test. The disproportionate vulnerability of recognition memory span task performance in DLB compared to AD may be due to greater fronto-striatal involvement in DLB and a corresponding decrement in cooperative interaction between working memory and secondary memory processes. Assessment of recognition memory span may contribute to the ability to distinguish between DLB and AD relatively early in the course of disease. PMID:26184443

  4. PCI-based WILDFIRE reconfigurable computing engines

    NASA Astrophysics Data System (ADS)

    Fross, Bradley K.; Donaldson, Robert L.; Palmer, Douglas J.

    1996-10-01

    WILDFORCE is the first PCI-based custom reconfigurable computer that is based on the Splash 2 technology transferred from the National Security Agency and the Institute for Defense Analyses, Supercomputing Research Center (SRC). The WILDFORCE architecture has many of the features of the WILDFIRE computer, such as field-programmable gate array (FPGA) based processing elements, linear array and crossbar interconnection, and high-performance memory and I/O subsystems. New features introduced in the PCI-based WILDFIRE systems include memory/processor options that can be added to any processing element. These options include static and dynamic memory, digital signal processors (DSPs), FPGAs, and microprocessors. In addition to memory/processor options, many different application specific connectors can be used to extend the I/O capabilities of the system, including systolic I/O, camera input and video display output. This paper also discusses how this new PCI-based reconfigurable computing engine is used for rapid-prototyping, real-time video processing and other DSP applications.

  5. Interactive communication channel

    NASA Astrophysics Data System (ADS)

    Chan, R. H.; Mann, M. R.; Ciarrocchi, J. A.

    1985-10-01

    Discussed is an interactive communications channel (ICC) for providing a digital computer with high-performance multi-channel interfaces. Sixteen full duplex channels can be serviced in the ICC, with the sequence or scan pattern being programmable and dependent upon the number of channels and their speed. A channel buffer system, operating on a byte basis, is used for line interface and character exchange. The ICC performs frame start and frame end functions, bit stripping and bit stuffing. Data are stored in memory in block format (256 bytes maximum) under program control, and the ICC maintains byte address information and a block byte count. Data exchange with memory is made by cycle stealing. Error detection is also provided using a cyclic redundancy check (CRC) technique.
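
    The record states only that error detection uses a cyclic redundancy check; the polynomial and frame layout are not given. The sketch below is therefore a generic, assumed example (CRC-16-CCITT over a 256-byte block) showing how a receiver could recompute the CRC and flag a corrupted block.

    ```c
    /*
     * Minimal sketch of CRC-based error detection over a 256-byte block.
     * CRC-16-CCITT (polynomial 0x1021) is chosen only for illustration;
     * the actual polynomial used by the ICC is not given in the record.
     */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static uint16_t crc16_ccitt(const uint8_t *data, size_t len)
    {
        uint16_t crc = 0xFFFF;                 /* common initial value */
        for (size_t i = 0; i < len; i++) {
            crc ^= (uint16_t)data[i] << 8;
            for (int bit = 0; bit < 8; bit++)
                crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                     : (uint16_t)(crc << 1);
        }
        return crc;
    }

    int main(void)
    {
        uint8_t block[256];
        memset(block, 0xA5, sizeof block);     /* dummy payload */

        uint16_t sent = crc16_ccitt(block, sizeof block);
        block[17] ^= 0x04;                     /* inject a single-bit error */
        uint16_t received = crc16_ccitt(block, sizeof block);

        printf("CRC match: %s\n", sent == received ? "yes" : "no (error detected)");
        return 0;
    }
    ```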

  6. Influence of anxiety on memory performance in temporal lobe epilepsy

    PubMed Central

    Brown, Franklin C.; Westerveld, Michael; Langfitt, John T.; Hamberger, Marla; Hamid, Hamada; Shinnar, Shlomo; Sperling, Michael R.; Devinsky, Orrin; Barr, William; Tracy, Joseph; Masur, David; Bazil, Carl W.; Spencer, Susan S.

    2013-01-01

    This study examined the degree to which anxiety contributed to inconsistent material-specific memory difficulties among 243 temporal lobe epilepsy patients from the Multisite Epilepsy Study. Visual memory performance on the Rey Complex Figure Test (RCFT) was lower for those with high versus low level of anxiety, but was not found to be related to side of TLE. Verbal memory on the California Verbal Learning Test (CVLT) was significantly lower for left than right TLE patients with low anxiety, but equally impaired for those with high anxiety. These results suggest that we can place more confidence in the ability of verbal memory tests like the CVLT to lateralize to left TLE for those with low anxiety, but that verbal memory will be less likely to produce lateralizing information for those with high anxiety. This suggests that more caution is needed when interpreting verbal memory tests for those with high anxiety. These results indicated that RCFT performance was significantly affected by anxiety and did not lateralize to either side, regardless of anxiety level. This study adds to the existing literature which suggests that drawing-based visual memory tests do not lateralize among TLE patients, regardless of anxiety level. PMID:24291525

  7. Neurobiological findings associated with high cognitive performance in older adults: a systematic review.

    PubMed

    Borelli, Wyllians Vendramini; Schilling, Lucas Porcello; Radaelli, Graciane; Ferreira, Luciana Borges; Pisani, Leonardo; Portuguez, Mirna Wetters; da Costa, Jaderson Costa

    2018-04-18

    Objectives: To perform a comprehensive literature review of studies on older adults with exceptional cognitive performance. We performed a systematic review using two major databases (MEDLINE and Web of Science) from January 2002 to November 2017. Quantitative analysis included nine of 4,457 studies and revealed that high-performing older adults have global preservation of the cortex, especially the anterior cingulate region, and hippocampal volumes larger than those of normal agers. Histological analysis of this group also revealed decreased amyloid burden and neurofibrillary tangles compared to cognitively normal older controls. High performers who maintained memory ability after three years showed reduced amyloid load on positron emission tomography at baseline compared with high performers who declined. A single study on blood plasma found a set of 12 metabolites predicting memory maintenance in this group. Structural and molecular brain preservation of older adults with high cognitive performance may be associated with brain maintenance. The operationalized definition of high-performing older adults must be carefully addressed using an appropriate age cut-off and cognitive evaluation, including memory and non-memory tests. Further studies with a longitudinal approach that include a younger control group are essential.

  8. Checkpoint repair for high-performance out-of-order execution machines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hwu, W.M.W.; Patt, Y.N.

    Out-of-order execution and branch prediction are two mechanisms that can be used profitably in the design of supercomputers to increase performance. Proper exception handling and branch prediction miss handling in an out-of-order execution machine require some kind of repair mechanism that can restore the machine to a known previous state. In this paper the authors present a class of repair mechanisms using the concept of checkpointing. The authors derive several properties of checkpoint repair mechanisms. In addition, they provide algorithms for performing checkpoint repair that incur little overhead in time and modest cost in hardware, and which also require no additional complexity or time for use with write-back cache memory systems than they do with write-through cache memory systems, contrary to statements made by previous researchers.
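
    The following toy sketch, written as a software simulation rather than the authors' hardware mechanism, shows the basic checkpoint-repair idea: snapshot the architectural register state when a prediction is made, and roll back to that snapshot if the speculation turns out to be wrong. Structure sizes and names are illustrative assumptions.

    ```c
    /*
     * Toy illustration of checkpoint repair (not the paper's design):
     * a copy of the architectural register file is saved when a branch
     * is predicted, and restored if the prediction turns out wrong.
     */
    #include <stdio.h>

    #define NREGS        8
    #define MAX_CHECKPTS 4

    typedef struct { int regs[NREGS]; } RegFile;

    static RegFile checkpoints[MAX_CHECKPTS];
    static int     ncheckpoints = 0;

    static int take_checkpoint(const RegFile *rf)
    {
        checkpoints[ncheckpoints] = *rf;       /* snapshot current state      */
        return ncheckpoints++;                 /* id used for later repair    */
    }

    static void repair(RegFile *rf, int id)
    {
        *rf = checkpoints[id];                 /* roll back to known state    */
        ncheckpoints = id;                     /* discard younger checkpoints */
    }

    int main(void)
    {
        RegFile rf = { .regs = {0} };

        rf.regs[1] = 42;
        int id = take_checkpoint(&rf);         /* at a predicted branch       */

        rf.regs[1] = 99;                       /* speculative updates ...     */
        rf.regs[2] = 7;

        repair(&rf, id);                       /* ... squashed on mispredict  */
        printf("r1=%d r2=%d\n", rf.regs[1], rf.regs[2]);  /* prints 42 0      */
        return 0;
    }
    ```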

  9. Both a Nicotinic Single Nucleotide Polymorphism (SNP) and a Noradrenergic SNP Modulate Working Memory Performance when Attention Is Manipulated

    ERIC Educational Resources Information Center

    Greenwood, Pamela M.; Sundararajan, Ramya; Lin, Ming-Kuan; Kumar, Reshma; Fryxell, Karl J.; Parasuraman, Raja

    2009-01-01

    We investigated the relation between the two systems of visuospatial attention and working memory by examining the effect of normal variation in cholinergic and noradrenergic genes on working memory performance under attentional manipulation. We previously reported that working memory for location was impaired following large location precues,…

  10. A Case Study on Neural Inspired Dynamic Memory Management Strategies for High Performance Computing.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vineyard, Craig Michael; Verzi, Stephen Joseph

    As high performance computing architectures pursue more computational power, there is a need for increased memory capacity and bandwidth as well. A multi-level memory (MLM) architecture addresses this need by combining multiple memory types with different characteristics as varying levels of the same architecture. How to efficiently utilize this memory infrastructure is an open challenge, and in this research we sought to investigate whether neural-inspired approaches can meaningfully help with memory management. In particular, we explored neurogenesis-inspired resource allocation and were able to show that a neural-inspired mixed controller policy can beneficially impact how MLM architectures utilize memory.
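
    The abstract does not describe the neural-inspired controller in detail, so the sketch below shows only a conventional baseline for multi-level memory management: a simple policy that promotes frequently accessed pages to a small fast tier and leaves the rest in a larger slow tier. Thresholds, capacities, and names are assumptions made for illustration.

    ```c
    /*
     * Generic sketch of multi-level memory (MLM) page placement, not the
     * neural-inspired controller from the study: pages whose access count
     * exceeds a threshold are promoted to the small fast tier, subject to
     * its capacity; all others stay in the large slow tier.
     */
    #include <stdio.h>

    #define NPAGES        16
    #define FAST_CAPACITY 4
    #define HOT_THRESHOLD 100

    enum tier { SLOW = 0, FAST = 1 };

    struct page { unsigned accesses; enum tier placement; };

    static void place_pages(struct page *pages, int n)
    {
        int fast_used = 0;
        for (int i = 0; i < n && fast_used < FAST_CAPACITY; i++) {
            if (pages[i].accesses >= HOT_THRESHOLD) {
                pages[i].placement = FAST;     /* promote hot page */
                fast_used++;
            }
        }
    }

    int main(void)
    {
        struct page pages[NPAGES] = {0};
        pages[3].accesses = 250;               /* hot  */
        pages[9].accesses = 180;               /* hot  */
        pages[5].accesses = 12;                /* cold */

        place_pages(pages, NPAGES);
        for (int i = 0; i < NPAGES; i++)
            if (pages[i].placement == FAST)
                printf("page %d -> fast tier\n", i);
        return 0;
    }
    ```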

  11. Enhanced oscillatory activity in the hippocampal-prefrontal network is related to short-term memory function after early-life seizures

    PubMed Central

    Kleen, Jonathan K.; Wu, Edie X.; Holmes, Gregory L.; Scott, Rod C.; Lenck-Santini, Pierre-Pascal

    2011-01-01

    Neurological insults during development are associated with later impairments in learning and memory. Although remedial training can help restore cognitive function, the neural mechanisms of this recovery in memory systems are largely unknown. To examine this issue, we measured electrophysiological oscillatory activity in the hippocampus (both CA3 and CA1) and prefrontal cortex of adult rats that had experienced repeated seizures in the first weeks of life, while they were remedially trained on a delayed-nonmatch-to-sample memory task. Seizure-exposed rats showed initial difficulties learning the task but performed similarly to control rats after extra training. Whole-session analyses illustrated enhanced theta power in all three structures while seizure rats learned response tasks prior to the memory task. Whilst performing the memory task, dynamic oscillation patterns revealed that prefrontal cortex theta power was increased among seizure-exposed rats. This enhancement appeared after the first memory training steps using short delays and plateaued at the most difficult steps, which included both short and long delays. Further, seizure rats showed enhanced CA1-prefrontal theta coherence in correct trials compared to incorrect trials when long delays were imposed, suggesting increased hippocampal-prefrontal synchrony for the task in this group when memory demand was high. Seizure-exposed rats also showed heightened gamma power and coherence among all three structures during the trials. Our results demonstrate the first evidence of hippocampal-prefrontal enhancements following seizures in early development. Dynamic compensatory changes in this network and interconnected circuits may underpin cognitive rehabilitation following other neurological insults to higher cognitive systems. PMID:22031886

  12. Combining Distributed and Shared Memory Models: Approach and Evolution of the Global Arrays Toolkit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nieplocha, Jarek; Harrison, Robert J.; Kumar, Mukul

    2002-07-29

    Both shared memory and distributed memory models have advantages and shortcomings. The shared memory model is much easier to use, but it ignores data locality/placement. Given the hierarchical nature of the memory subsystems in modern computers, this characteristic might have a negative impact on performance and scalability. Various techniques, such as code restructuring to increase data reuse and introducing blocking in data accesses, can address the problem and yield performance competitive with message passing [Singh], however at the cost of compromising the ease-of-use feature. Distributed memory models such as message passing or one-sided communication offer performance and scalability, but they compromise ease of use. In this context, the message-passing model is sometimes referred to as "assembly programming for scientific computing". The Global Arrays toolkit [GA1, GA2] attempts to offer the best features of both models. It implements a shared-memory programming model in which data locality is managed explicitly by the programmer. This management is achieved by explicit calls to functions that transfer data between a global address space (a distributed array) and local storage. In this respect, the GA model has similarities to the distributed shared-memory models that provide an explicit acquire/release protocol. However, the GA model acknowledges that remote data is slower to access than local data and allows data locality to be explicitly specified and hence managed. The GA model exposes to the programmer the hierarchical memory of modern high-performance computer systems, and by recognizing the communication overhead for remote data transfer, it promotes data reuse and locality of reference. This paper describes the characteristics of the Global Arrays programming model and the capabilities of the toolkit, and discusses its evolution.
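
    A minimal sketch of the get/compute/put pattern that the Global Arrays model encourages is shown below. The functions global_get, global_put, and global_sync are hypothetical stand-ins for GA's one-sided operations (not the real toolkit API), backed here by a process-local array so the example runs in a single process; the point is only the explicit movement of data between global and local storage.

    ```c
    /*
     * Sketch of the get/compute/put pattern: copy a patch of a globally
     * addressable array into local storage, compute there, and write it
     * back. The "global" array is simulated with a process-local array.
     */
    #include <stdio.h>
    #include <string.h>

    #define GLOBAL_N 16

    static double global_array[GLOBAL_N];      /* stand-in for a distributed array */

    static void global_get(int lo, int hi, double *buf)
    {
        memcpy(buf, &global_array[lo], (size_t)(hi - lo + 1) * sizeof(double));
    }

    static void global_put(int lo, int hi, const double *buf)
    {
        memcpy(&global_array[lo], buf, (size_t)(hi - lo + 1) * sizeof(double));
    }

    static void global_sync(void) { /* no-op in a single process */ }

    int main(void)
    {
        for (int i = 0; i < GLOBAL_N; i++)
            global_array[i] = i;

        /* "My" block of the global array: locality is managed explicitly. */
        int my_lo = 4, my_hi = 7;
        double buf[4];

        global_get(my_lo, my_hi, buf);          /* bring remote data into local storage */
        for (int i = 0; i < 4; i++)
            buf[i] *= 2.0;                      /* compute on the fast local copy       */
        global_put(my_lo, my_hi, buf);          /* publish the result back              */
        global_sync();

        printf("global_array[5] = %.1f\n", global_array[5]);   /* prints 10.0 */
        return 0;
    }
    ```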

  13. Spatial Navigation Impairments Among Intellectually High-Functioning Adults With Autism Spectrum Disorder: Exploring Relations With Theory of Mind, Episodic Memory, and Episodic Future Thinking

    PubMed Central

    2013-01-01

    Research suggests that spatial navigation relies on the same neural network as episodic memory, episodic future thinking, and theory of mind (ToM). Such findings have stimulated theories (e.g., the scene construction and self-projection hypotheses) concerning possible common underlying cognitive capacities. Consistent with such theories, autism spectrum disorder (ASD) is characterized by concurrent impairments in episodic memory, episodic future thinking, and ToM. However, it is currently unclear whether spatial navigation is also impaired. Hence, ASD provides a test case for the scene construction and self-projection theories. The study of spatial navigation in ASD also provides a test of the extreme male brain theory of ASD, which predicts intact or superior navigation (purportedly a systemizing skill) performance among individuals with ASD. Thus, the aim of the current study was to establish whether spatial navigation in ASD is impaired, intact, or superior. Twenty-seven intellectually high-functioning adults with ASD and 28 sex-, age-, and IQ-matched neurotypical comparison adults completed the memory island virtual navigation task. Tests of episodic memory, episodic future thinking, and ToM were also completed. Participants with ASD showed significantly diminished performance on the memory island task, and performance was positively related to ToM and episodic memory, but not episodic future thinking. These results suggest that (contra the extreme male brain theory) individuals with ASD have impaired survey-based navigation skills—that is, difficulties generating cognitive maps of the environment—and adds weight to the idea that scene construction/self-projection are impaired in ASD. The theoretical and clinical implications of these results are discussed. PMID:24364620

  14. High Performance Computing (HPC)-Enabled Computational Study on the Feasibility of using Shape Memory Alloys for Gas Turbine Blade Actuation

    DTIC Science & Technology

    2016-11-01

    Report by Kathryn Esham, Luis Bravo, Anindya Ghoshal, Muthuvel Murugan, and Michael ...; only fragmentary cover-page text (repeated title and author fields) is available in this record.

  15. Sleep spindles during a nap correlate with post sleep memory performance for highly rewarded word-pairs.

    PubMed

    Studte, Sara; Bridger, Emma; Mecklinger, Axel

    2017-04-01

    The consolidation of new associations is thought to depend in part on physiological processes engaged during non-REM (NREM) sleep, such as slow oscillations and sleep spindles. Moreover, NREM sleep is thought to selectively benefit associations that are adaptive for the future. In line with this, the current study investigated whether different reward cues at encoding are associated with changes in sleep physiology and memory retention. Participants' associative memory was tested after learning a list of arbitrarily paired words both before and after taking a 90-min nap. During learning, word-pairs were preceded by a cue indicating either a high or a low reward for correct memory performance at test. The motivation manipulation successfully impacted retention such that memory declined to a greater extent from pre- to post sleep for low rewarded than for high rewarded word-pairs. In line with previous studies, positive correlations between spindle density during NREM sleep and general memory performance pre- and post-sleep were found. In addition to this, however, a selective positive relationship between memory performance for highly rewarded word-pairs at posttest and spindle density during NREM sleep was also observed. These results support the view that motivationally salient memories are preferentially consolidated and that sleep spindles may be an important underlying mechanism for selective consolidation. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. Silicon photonics for high-performance interconnection networks

    NASA Astrophysics Data System (ADS)

    Biberman, Aleksandr

    2011-12-01

    We assert in the course of this work that silicon photonics has the potential to be a key disruptive technology in computing and communication industries. The enduring pursuit of performance gains in computing, combined with stringent power constraints, has fostered the ever-growing computational parallelism associated with chip multiprocessors, memory systems, high-performance computing systems, and data centers. Sustaining these parallelism growths introduces unique challenges for on- and off-chip communications, shifting the focus toward novel and fundamentally different communication approaches. This work showcases that chip-scale photonic interconnection networks, enabled by high-performance silicon photonic devices, enable unprecedented bandwidth scalability with reduced power consumption. We demonstrate that the silicon photonic platforms have already produced all the high-performance photonic devices required to realize these types of networks. Through extensive empirical characterization in much of this work, we demonstrate such feasibility of waveguides, modulators, switches, and photodetectors. We also demonstrate systems that simultaneously combine many functionalities to achieve more complex building blocks. Furthermore, we leverage the unique properties of available silicon photonic materials to create novel silicon photonic devices, subsystems, network topologies, and architectures to enable unprecedented performance of these photonic interconnection networks and computing systems. We show that the advantages of photonic interconnection networks extend far beyond the chip, offering advanced communication environments for memory systems, high-performance computing systems, and data centers. Furthermore, we explore the immense potential of all-optical functionalities implemented using parametric processing in the silicon platform, demonstrating unique methods that have the ability to revolutionize computation and communication. Silicon photonics enables new sets of opportunities that we can leverage for performance gains, as well as new sets of challenges that we must solve. Leveraging its inherent compatibility with standard fabrication techniques of the semiconductor industry, combined with its capability of dense integration with advanced microelectronics, silicon photonics also offers a clear path toward commercialization through low-cost mass-volume production. Combining empirical validations of feasibility, demonstrations of massive performance gains in large-scale systems, and the potential for commercial penetration of silicon photonics, the impact of this work will become evident in the many decades that follow.

  17. The Influence of Colour on Memory Performance: A Review

    PubMed Central

    Dzulkifli, Mariam Adawiah; Mustafar, Muhammad Faiz

    2013-01-01

    Human cognition involves many mental processes that are highly interrelated, such as perception, attention, memory, and thinking. An important and core cognitive process is memory, which is commonly associated with the storing and remembering of environmental information. An interesting issue in memory research is on ways to enhance memory performance, and thus, remembering of information. Can colour result in improved memory abilities? The present paper highlights the relationship between colours, attention, and memory performance. The significance of colour in different settings is presented first, followed by a description on the nature of human memory. The role of attention and emotional arousal on memory performance is discussed next. The review of several studies on colours and memory are meant to explain some empirical works done in the area and related issues that arise from such studies. PMID:23983571

  18. The influence of colour on memory performance: a review.

    PubMed

    Dzulkifli, Mariam Adawiah; Mustafar, Muhammad Faiz

    2013-03-01

    Human cognition involves many mental processes that are highly interrelated, such as perception, attention, memory, and thinking. An important and core cognitive process is memory, which is commonly associated with the storing and remembering of environmental information. An interesting issue in memory research is on ways to enhance memory performance, and thus, remembering of information. Can colour result in improved memory abilities? The present paper highlights the relationship between colours, attention, and memory performance. The significance of colour in different settings is presented first, followed by a description on the nature of human memory. The role of attention and emotional arousal on memory performance is discussed next. The review of several studies on colours and memory are meant to explain some empirical works done in the area and related issues that arise from such studies.

  19. Effects of Δ9-tetrahydrocannabinol administration on human encoding and recall memory function: a pharmacological FMRI study.

    PubMed

    Bossong, Matthijs G; Jager, Gerry; van Hell, Hendrika H; Zuurman, Lineke; Jansma, J Martijn; Mehta, Mitul A; van Gerven, Joop M A; Kahn, René S; Ramsey, Nick F

    2012-03-01

    Deficits in memory function are an incapacitating aspect of various psychiatric and neurological disorders. Animal studies have recently provided strong evidence for involvement of the endocannabinoid (eCB) system in memory function. Neuropsychological studies in humans have shown less convincing evidence but suggest that administration of cannabinoid substances affects encoding rather than recall of information. In this study, we examined the effects of perturbation of the eCB system on memory function during both encoding and recall. We performed a pharmacological MRI study with a placebo-controlled, crossover design, investigating the effects of Δ9-tetrahydrocannabinol (THC) inhalation on associative memory-related brain function in 13 healthy volunteers. Performance and brain activation during associative memory were assessed using a pictorial memory task, consisting of separate encoding and recall conditions. Administration of THC caused reductions in activity during encoding in the right insula, the right inferior frontal gyrus, and the left middle occipital gyrus and a network-wide increase in activity during recall, which was most prominent in bilateral cuneus and precuneus. THC administration did not affect task performance, but while during placebo recall activity significantly explained variance in performance, this effect disappeared after THC. These findings suggest eCB involvement in encoding of pictorial information. Increased precuneus activity could reflect impaired recall function, but the absence of THC effects on task performance suggests a compensatory mechanism. These results further emphasize the eCB system as a potential novel target for treatment of memory disorders and a promising target for development of new therapies to reduce memory deficits in humans.

  20. Phenylethanoid glycosides of Pedicularis muscicola Maxim ameliorate high altitude-induced memory impairment.

    PubMed

    Zhou, Baozhu; Li, Maoxing; Cao, Xinyuan; Zhang, Quanlong; Liu, Yantong; Ma, Qiang; Qiu, Yan; Luan, Fei; Wang, Xianmin

    2016-04-01

    Exposure to hypobaric hypoxia causes oxidative stress, neuronal degeneration and apoptosis that leads to memory impairment. Though oxidative stress contributes to neuronal degeneration and apoptosis in hypobaric hypoxia, the ability for phenylethanoid glycosides of Pedicularis muscicola Maxim (PhGs) to reverse high altitude memory impairment has not been studied. Rats were supplemented with PhGs orally for a week. After the fourth day of drug administration, rats were exposed to a 7500 m altitude simulation in a specially designed animal decompression chamber for 3 days. Spatial memory was assessed by the 8-arm radial maze test before and after exposure to hypobaric hypoxia. Histological assessment of neuronal degeneration was performed by hematoxylin-eosin (HE) staining. Changes in oxidative stress markers and changes in the expression of the apoptotic marker, caspase-3, were assessed in the hippocampus. Our results demonstrated that after exposure to hypobaric hypoxia, PhGs ameliorated high altitude memory impairment, as shown by the decreased values obtained for reference memory error (RME), working memory error (WME), and total error (TE). Meanwhile, administration of PhGs decreased hippocampal reactive oxygen species levels and consequent lipid peroxidation by elevating reduced glutathione levels and enhancing the free radical scavenging enzyme system. There was also a decrease in the number of pyknotic neurons and a reduction in caspase-3 expression in the hippocampus. These findings suggest that PhGs may be used therapeutically to ameliorate high altitude memory impairment. Copyright © 2016 Elsevier Inc. All rights reserved.

  1. Word frequency influences on the list length effect and associative memory in young and older adults.

    PubMed

    Badham, Stephen P; Whitney, Cora; Sanghera, Sumeet; Maylor, Elizabeth A

    2017-07-01

    Many studies show that age deficits in memory are smaller for information supported by pre-experimental experience. Many studies also find dissociations in memory tasks between words that occur with high and low frequencies in language, but the literature is mixed regarding the extent of word frequency effects in normal ageing. We examined whether age deficits in episodic memory could be influenced by manipulations of word frequency. In Experiment 1, young and older adults studied short and long lists of high- and low-frequency words for free recall. The list length effect (the drop in proportion recalled for longer lists) was larger in young compared to older adults and for high- compared to low-frequency words. In Experiment 2, young and older adults completed item and associative recognition memory tests with high- and low-frequency words. Age deficits were greater for associative memory than for item memory, demonstrating an age-related associative deficit. High-frequency words led to better associative memory performance whilst low-frequency words resulted in better item memory performance. In neither experiment was there any evidence for age deficits to be smaller for high- relative to low-frequency words, suggesting that word frequency effects on memory operate independently from effects due to cognitive ageing.

  2. Smart photodetector arrays for error control in page-oriented optical memory

    NASA Astrophysics Data System (ADS)

    Schaffer, Maureen Elizabeth

    1998-12-01

    Page-oriented optical memories (POMs) have been proposed to meet high speed, high capacity storage requirements for input/output intensive computer applications. This technology offers the capability for storage and retrieval of optical data in two-dimensional pages resulting in high throughput data rates. Since currently measured raw bit error rates for these systems fall several orders of magnitude short of industry requirements for binary data storage, powerful error control codes must be adopted. These codes must be designed to take advantage of the two-dimensional memory output. In addition, POMs require an optoelectronic interface to transfer the optical data pages to one or more electronic host systems. Conventional charge coupled device (CCD) arrays can receive optical data in parallel, but the relatively slow serial electronic output of these devices creates a system bottleneck thereby eliminating the POM advantage of high transfer rates. Also, CCD arrays are "unintelligent" interfaces in that they offer little data processing capabilities. The optical data page can be received by two-dimensional arrays of "smart" photo-detector elements that replace conventional CCD arrays. These smart photodetector arrays (SPAs) can perform fast parallel data decoding and error control, thereby providing an efficient optoelectronic interface between the memory and the electronic computer. This approach optimizes the computer memory system by combining the massive parallelism and high speed of optics with the diverse functionality, low cost, and local interconnection efficiency of electronics. In this dissertation we examine the design of smart photodetector arrays for use as the optoelectronic interface for page-oriented optical memory. We review options and technologies for SPA fabrication, develop SPA requirements, and determine SPA scalability constraints with respect to pixel complexity, electrical power dissipation, and optical power limits. Next, we examine data modulation and error correction coding for the purpose of error control in the POM system. These techniques are adapted, where possible, for 2D data and evaluated as to their suitability for a SPA implementation in terms of BER, code rate, decoder time and pixel complexity. Our analysis shows that differential data modulation combined with relatively simple block codes known as array codes provide a powerful means to achieve the desired data transfer rates while reducing error rates to industry requirements. Finally, we demonstrate the first smart photodetector array designed to perform parallel error correction on an entire page of data and satisfy the sustained data rates of page-oriented optical memories. Our implementation integrates a monolithic PN photodiode array and differential input receiver for optoelectronic signal conversion with a cluster error correction code using 0.35-μm CMOS. This approach provides high sensitivity, low electrical power dissipation, and fast parallel correction of 2 x 2-bit cluster errors in an 8 x 8 bit code block to achieve corrected output data rates scalable to 102 Gbps in the current technology increasing to 1.88 Tbps in 0.1-μm CMOS.
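
    As a much-simplified illustration of the kind of two-dimensional block coding discussed above (not the cluster-correcting array code actually implemented in the SPA), the sketch below stores row and column parities for an 8 x 8 bit block and uses them to locate and flip a single erroneous bit on readout.

    ```c
    /*
     * Simplified illustration of array-code-style error control on an
     * 8 x 8 bit page block: row and column parities locate one flipped
     * bit, which is then corrected. The SPA described above corrects
     * 2 x 2-bit cluster errors with a stronger code; this sketch only
     * shows the row/column-parity idea behind such 2-D block codes.
     */
    #include <stdio.h>
    #include <stdint.h>

    #define N 8

    static int row_parity(const uint8_t b[N][N], int r)
    {
        int p = 0;
        for (int c = 0; c < N; c++) p ^= b[r][c];
        return p;
    }

    static int col_parity(const uint8_t b[N][N], int c)
    {
        int p = 0;
        for (int r = 0; r < N; r++) p ^= b[r][c];
        return p;
    }

    int main(void)
    {
        uint8_t block[N][N] = {{0}};
        block[2][5] = 1;                        /* some payload bits        */
        block[6][1] = 1;

        int rp[N], cp[N];
        for (int i = 0; i < N; i++) {           /* parities stored at write */
            rp[i] = row_parity(block, i);
            cp[i] = col_parity(block, i);
        }

        block[3][4] ^= 1;                       /* single-bit readout error */

        int bad_r = -1, bad_c = -1;
        for (int i = 0; i < N; i++) {           /* re-check on read         */
            if (row_parity(block, i) != rp[i]) bad_r = i;
            if (col_parity(block, i) != cp[i]) bad_c = i;
        }
        if (bad_r >= 0 && bad_c >= 0) {
            block[bad_r][bad_c] ^= 1;           /* correct the located bit  */
            printf("corrected bit at (%d,%d)\n", bad_r, bad_c);
        }
        return 0;
    }
    ```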

  3. High performance data transfer

    NASA Astrophysics Data System (ADS)

    Cottrell, R.; Fang, C.; Hanushevsky, A.; Kreuger, W.; Yang, W.

    2017-10-01

    The exponentially increasing need for high-speed data transfer is driven by big data and cloud computing, together with the needs of data-intensive science, High Performance Computing (HPC), defense, the oil and gas industry, etc. We report on the Zettar ZX software. This has been developed since 2013 to meet these growing needs by providing high-performance data transfer and encryption in a scalable, balanced way that is easy to deploy and use while minimizing power and space utilization. In collaboration with several commercial vendors, Proofs of Concept (PoC) consisting of clusters have been put together using off-the-shelf components to test ZX scalability and its ability to balance services across multiple cores and links. The PoCs are based on SSD flash storage managed by a parallel file system. Each cluster occupies 4 rack units. Using the PoCs, we have achieved almost 200 Gbps memory to memory between clusters over two 100 Gbps links, and 70 Gbps parallel file to parallel file with encryption over a 5000-mile 100 Gbps link.

  4. The effect of mild acute stress during memory consolidation on emotional recognition memory.

    PubMed

    Corbett, Brittany; Weinberg, Lisa; Duarte, Audrey

    2017-11-01

    Stress during consolidation improves recognition memory performance. Generally, this memory benefit is greater for emotionally arousing stimuli than for neutral stimuli. The strength of the stressor also plays a role in memory performance, with memory performance improving up to a moderate level of stress and thereafter worsening. As our daily stressors are generally minimal in strength, we chose to induce mild acute stress to determine its effect on memory performance. In the current study, we investigated whether mild acute stress during consolidation improves memory performance for emotionally arousing images. To investigate this, we had participants encode highly arousing negative, minimally arousing negative, and neutral images. We induced stress in half of the participants using the Montreal Imaging Stress Task (MIST) and administered a control task to the other half directly after encoding (i.e., during consolidation), and tested recognition 48 h later. We found no difference in memory performance between the stress and control groups. We found a graded pattern in confidence, with responders in the stress group having the least amount of confidence in their hits and controls having the most. Across groups, we found that highly arousing negative images were better remembered than minimally arousing negative or neutral images. Although stress did not affect memory accuracy, responders, as defined by cortisol reactivity, were less confident in their decisions. Our results suggest that the daily stressors humans experience, regardless of their emotional affect, do not have adverse effects on memory. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. High-Density, High-Bandwidth, Multilevel Holographic Memory

    NASA Technical Reports Server (NTRS)

    Chao, Tien-Hsin

    2008-01-01

    A proposed holographic memory system would be capable of storing data at unprecedentedly high density, and its data transfer performance in both reading and writing would be characterized by exceptionally high bandwidth. The capabilities of the proposed system would greatly exceed even those of a state-of-the-art memory system, based on binary holograms (in which each pixel value represents 0 or 1), that can hold 1 terabyte of data and can support a reading or writing rate as high as 1 Gb/s. The storage capacity of the state-of-the-art system cannot be increased without also increasing the volume and mass of the system. However, in principle, the storage capacity could be increased greatly, without significantly increasing the volume and mass, if multilevel holograms were used instead of binary holograms. For example, a 3-bit (8-level) hologram could store 8 terabytes, or an 8-bit (256-level) hologram could store 256 terabytes, in a system having little or no more size and mass than does the state-of-the-art 1-terabyte binary holographic memory. The proposed system would utilize multilevel holograms. The system would include lasers, imaging lenses and other beam-forming optics, a block photorefractive crystal wherein the holograms would be formed, and two multilevel spatial light modulators in the form of commercially available deformable-mirror-device spatial light modulators (DMDSLMs) made for use in high-speed input conversion of data up to 12 bits. For readout, the system would also include two arrays of complementary metal oxide/semiconductor (CMOS) photodetectors matching the spatial light modulators. The system would further include a reference-beam steering device (the equivalent of a scanning mirror), containing no sliding parts, that could be either a liquid-crystal phased-array device or a microscopic mirror actuated by a high-speed microelectromechanical system. Time-multiplexing and the multilevel nature of the DMDSLM would be exploited to enable writing and reading of multilevel holograms. The DMDSLM would also enable transfer of data at a rate of 7.6 Gb/s or perhaps somewhat higher.

  6. Memory outcomes following cognitive interventions in children with neurological deficits: A review with a focus on under-studied populations.

    PubMed

    Schaffer, Yael; Geva, Ronny

    2016-01-01

    Given the primary role of memory in children's learning and well-being, the aim of this review was to examine the outcomes of memory remediation interventions in children with neurological deficits as a function of the affected memory system and intervention method. Fifty-seven studies that evaluated the outcome of memory interventions in children were identified. Thirty-four studies met the inclusion criteria, and were included in a systematic review. Diverse rehabilitation methods for improving explicit and implicit memory in children were reviewed. The analysis indicates that teaching restoration strategies may improve, and result in the generalisation of, semantic memory and working memory performance in children older than 7 years with mild to moderate memory deficits. Factors such as longer protocols, emotional support, and personal feedback contribute to intervention efficacy. In addition, the use of compensation aids seems to be highly effective in prospective memory tasks. Finally, the review unveiled a lack of studies with young children and the absence of group interventions. These findings point to the importance of future evidence-based intervention protocols in these areas.

  7. Declarative memory performance is associated with the number of sleep spindles in elderly women.

    PubMed

    Seeck-Hirschner, Mareen; Baier, Paul Christian; Weinhold, Sara Lena; Dittmar, Manuela; Heiermann, Steffanie; Aldenhoff, Josef B; Göder, Robert

    2012-09-01

    Recent evidence suggests that the sleep-dependent consolidation of declarative memory relies on the nonrapid eye movement rather than the rapid eye movement phase of sleep. In addition, it is known that aging is accompanied by changes in sleep and memory processes. Hence, the purpose of this study was to investigate the overnight consolidation of declarative memory in healthy elderly women. The study was conducted in a university sleep laboratory with nineteen healthy elderly women (age range: 61-74 years), using laboratory-based measures of sleep. To test declarative memory, the Rey-Osterrieth Complex Figure Test was performed. Declarative memory performance in elderly women was associated with Stage 2 sleep spindle density. Women characterized by high memory performance exhibited significantly higher numbers of sleep spindles and higher spindle density compared with women with generally low memory performance. The data strongly support theories suggesting a link between sleep spindle activity and declarative memory consolidation.

  8. Proceedings: Sisal `93

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feo, J.T.

    1993-10-01

    This report contains papers on: Programmability and performance issues; The case of an iterative partial differential equation solver; Implementing the kernel of the Australian Region Weather Prediction Model in Sisal; Even and quarter-even prime length symmetric FFTs and their Sisal implementations; Top-down thread generation for Sisal; Overlapping communications and computations on NUMA architectures; Compiling technique based on dataflow analysis for the functional programming language Valid; Copy elimination for true multidimensional arrays in Sisal 2.0; Increasing parallelism for an optimization that reduces copying in IF2 graphs; Caching in on Sisal; Cache performance of Sisal vs. FORTRAN; FFT algorithms on a shared-memory multiprocessor; A parallel implementation of nonnumeric search problems in Sisal; Computer vision algorithms in Sisal; Compilation of Sisal for a high-performance data-driven vector processor; Sisal on distributed memory machines; A virtual shared addressing system for distributed memory Sisal; Developing a high-performance FFT algorithm in Sisal for a vector supercomputer; Implementation issues for IF2 on a static data-flow architecture; and Systematic control of parallelism in array-based data-flow computation. Selected papers have been indexed separately for inclusion in the Energy Science and Technology Database.

  9. Achieving enlightenment: what do we know about the implicit learning system and its interaction with explicit knowledge?

    PubMed

    Vidoni, Eric D; Boyd, Lara A

    2007-09-01

    Two major memory and learning systems operate in the brain: one for facts and ideas (ie, the declarative or explicit system), one for habits and behaviors (ie, the procedural or implicit system). Broadly speaking these two memory systems can operate either in concert or entirely independently of one another during the performance and learning of skilled motor behaviors. This Special Issue article has two parts. In the first, we present a review of implicit motor skill learning that is largely centered on the interactions between declarative and procedural learning and memory. Because distinct neuroanatomical substrates support unique aspects of learning and memory and thus focal injury can cause impairments that are dependent on lesion location, we also broadly consider which brain regions mediate implicit and explicit learning and memory. In the second part of this article, the interactive nature of these two memory systems is illustrated by the presentation of new data that reveal that both learning implicitly and acquiring explicit knowledge through physical practice lead to motor sequence learning. In our new data, we discovered that for healthy individuals use of the implicit versus explicit memory system differently affected variability of performance during acquisition practice; variability was higher early in practice for the implicit group and later in practice for the acquired explicit group. Despite the difference in performance variability, by retention both groups demonstrated comparable change in tracking accuracy and thus, motor sequence learning. Clinicians should be aware of the potential effects of implicit and explicit interactions when designing rehabilitation interventions, particularly when delivering explicit instructions before task practice, working with individuals with focal brain damage, and/or adjusting therapeutic parameters based on acquisition performance variability.

  10. Threshold relationship between lesion extent of the cholinergic basal forebrain in the rat and working memory impairment in the radial maze.

    PubMed

    Wrenn, C C; Lappi, D A; Wiley, R G

    1999-11-20

    The cholinergic basal forebrain (CBF) degenerates in Alzheimer's Disease (AD), and the degree of this degeneration correlates with the degree of dementia. In the present study we have modeled this degeneration in the rat by injecting various doses of the highly selective immunotoxin 192 IgG-saporin (192-sap) into the ventricular system. The ability of 192-sap-treated rats to perform in a previously learned radial maze working memory task was then tested. We report here that 192-sap created lesions of the CBF and, to a lesser extent, cerebellar Purkinje cells in a dose-dependent fashion. Furthermore, we found that rats harboring lesions of the entire CBF greater than 75% had impaired spatial working memory in the radial maze. Correlational analysis of working memory impairment and lesion extent of the component parts of the CBF revealed that high-grade lesions of the hippocampal-projecting neurons of the CBF were not sufficient to impair working memory. Only rats with high-grade lesions of the hippocampal and cortical projecting neurons of the CBF had impaired working memory. These data are consistent with other 192-sap reports that found behavioral deficits only with high-grade CBF lesions and indicate that the relationship between CBF lesion extent and working memory impairment is a threshold relationship in which a high degree of neuronal loss can be tolerated without detectable consequences. Additionally, the data suggest that the CBF modulates spatial working memory via its connections to both the hippocampus and cortex.

  11. Visual-spatial processing and working-memory load as a function of negative and positive psychotic-like experiences.

    PubMed

    Abu-Akel, A; Reniers, R L E P; Wood, S J

    2016-09-01

    Patients with schizophrenia show impairments in working-memory and visual-spatial processing, but little is known about the dynamic interplay between the two. To provide insight into this important question, we examined the effect of positive and negative symptom expressions in healthy adults on perceptual processing while concurrently performing a working-memory task that requires the allocations of various degrees of cognitive resources. The effect of positive and negative symptom expressions in healthy adults (N = 91) on perceptual processing was examined in a dual-task paradigm of visual-spatial working memory (VSWM) under three conditions of cognitive load: a baseline condition (with no concurrent working-memory demand), a low VSWM load condition, and a high VSWM load condition. Participants overall performed more efficiently (i.e., faster) with increasing cognitive load. This facilitation in performance was unrelated to symptom expressions. However, participants with high-negative, low-positive symptom expressions were less accurate in the low VSWM condition compared to the baseline and the high VSWM load conditions. Attenuated, subclinical expressions of psychosis affect cognitive performance that is impaired in schizophrenia. The "resource limitations hypothesis" may explain the performance of the participants with high-negative symptom expressions. The dual-task of visual-spatial processing and working memory may be beneficial to assessing the cognitive phenotype of individuals with high risk for schizophrenia spectrum disorders.

  12. Blackcomb: Hardware-Software Co-design for Non-Volatile Memory in Exascale Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schreiber, Robert

    Summary of technical results of Blackcomb Memory Devices. We explored various memory technologies (STT-RAM, PCRAM, FeRAM, and ReRAM). The progress can be classified into the three categories below. (1) Modeling and Tool Releases. Various modeling tools have been developed over the last decade to help in the design of SRAM- or DRAM-based memory hierarchies. To explore the new design opportunities that NVM technologies can bring to designers, we developed similar high-level models for NVM, including PCRAMsim [Dong 2009], NVSim [Dong 2012], and NVMain [Poremba 2012]. NVSim is a circuit-level model for NVM performance, energy, and area estimation that supports various NVM technologies, including STT-RAM, PCRAM, ReRAM, and legacy NAND Flash. NVSim has been successfully validated against industrial NVM prototypes and is expected to help boost architecture-level NVM-related studies. NVMain, in turn, is a cycle-accurate main memory simulator designed to simulate emerging nonvolatile memories at the architectural level. We have released these models as open-source tools and provided continuous support for them. We also proposed PS3-RAM, a fast, portable, and scalable statistical STT-RAM reliability analysis model [Wen 2012]. (2) Design Space Exploration and Optimization. With the support of these models, we explored different device/circuit optimization techniques. For example, in [Niu 2012a] we studied power reduction for ECC schemes in ReRAM designs and proposed using ECC to relax the bit error rate (BER) requirement of a single memory cell, improving write energy consumption and latency for both 1T1R and cross-point ReRAM designs. In [Xu 2011], we proposed a methodology to design STT-RAM for different optimization goals, such as read performance, write performance, and write energy, by leveraging the trade-off between the write current and write time of the MTJ. We also studied the trade-offs in building a reliable cross-point ReRAM array [Niu 2012b]. We conducted an in-depth analysis of the circuit- and system-level design implications of multi-level cell resistive RAM (MLC ReRAM) from performance, power, and reliability perspectives [Xu 2013]. The objective of this study was to understand the design trade-offs of this technology with respect to MLC phase-change memory (MLC PCM). Our MLC ReRAM design at the circuit and system levels indicates that different resistance allocation schemes, programming strategies, peripheral designs, and material selections profoundly affect the area, latency, power, and reliability of MLC ReRAM. Based on this analysis, we conducted two case studies: first, we compared MLC ReRAM design against MLC phase-change memory (PCM) and multi-layer cross-point ReRAM design and pointed out why multi-level ReRAM is appealing; second, we further explored the design space for MLC ReRAM. (3) Architecture and Application. We explored hybrid checkpointing using phase-change memory for future exascale systems [Dong 2011] and showed that the use of nonvolatile memory for local checkpointing significantly increases the number of faults covered by local checkpoints and reduces the probability of a global failure in the middle of a global checkpoint to less than 1%. We also proposed a technique called i2WAP to mitigate write variations in NVM-based last-level caches and improve NVM lifetime [Wang 2013].
Our wear-leveling technique attempts to work around the limitations of write endurance by arranging data accesses so that write operations are distributed evenly across all the storage cells. During our research on fault-tolerant NVM design, we found that ECC cannot effectively tolerate hard errors arising from limited write endurance and process imperfection. Therefore, we devised a novel Point and Discard (PAD) architecture in [ 2012] as a hard-error-tolerant architecture for ReRAM-based last-level caches. PAD improves the lifetime of ReRAM caches by 1.6X-440X under different process variations without performance overhead in the system's early life. We also investigated the applicability of NVM to persistent memory design [Zhao 2013]. New byte-addressable NVM enables fast persistent memory that allows in-memory persistent data objects to be updated with much higher throughput. Despite the significant improvement, the performance of these designs is only 50% of that of a native system with no persistence support, due to the logging or copy-on-write mechanisms used to update the persistent memory. A key challenge is therefore how to efficiently enable atomic, consistent, and durable updates that ensure data persistence surviving application and/or system failures. We designed a persistent memory system, called Kiln, that can provide performance close to that of the native system. The Kiln design adopts a non-volatile cache and a non-volatile main memory to construct a multi-versioned durable memory system, enabling atomic updates without logging or copy-on-write. Our evaluation shows that the proposed Kiln mechanism can achieve up to 2X performance improvement over NVRAM-based persistent memory employing write-ahead logging. In addition, our design has numerous practical advantages: a simple and intuitive abstract interface, microarchitecture-level optimizations, fast recovery from failures, and no redundant writes to slow non-volatile storage media. The work was published in MICRO 2013 and received a Best Paper Honorable Mention award.
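    To make the wear-leveling idea above concrete, here is a toy sketch (invented for this summary, not the i2WAP scheme itself): the logical-to-physical line mapping is rotated every few writes, so even a write stream that always hits one logical line ends up spread evenly over the physical cells. Class and parameter names are hypothetical.

```python
import collections

class ToyWearLeveler:
    """Toy wear-leveling remapper (illustrative only, not i2WAP):
    every `rotate_every` writes, the logical->physical mapping is rotated
    by one line so hot logical lines do not always wear the same cells."""

    def __init__(self, n_lines, rotate_every=1000):
        self.map = list(range(n_lines))       # map[logical] -> physical line
        self.rotate_every = rotate_every
        self.writes = 0
        self.wear = collections.Counter()     # physical line -> write count

    def write(self, logical):
        self.wear[self.map[logical]] += 1
        self.writes += 1
        if self.writes % self.rotate_every == 0:
            # Rotate the mapping; real hardware would also migrate line contents.
            self.map = self.map[1:] + self.map[:1]

# Worst case: every write targets the same logical line.
wl = ToyWearLeveler(8, rotate_every=10)
for _ in range(10_000):
    wl.write(0)
print(sorted(wl.wear.values()))   # write counts end up roughly equal per cell
```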

  13. Sam2bam: High-Performance Framework for NGS Data Preprocessing Tools

    PubMed Central

    Cheng, Yinhe; Tzeng, Tzy-Hwa Kathy

    2016-01-01

    This paper introduces a high-throughput software tool framework called sam2bam that enables users to significantly speed up pre-processing of next-generation sequencing (NGS) data. sam2bam is especially efficient on single-node, multi-core, large-memory systems. It can reduce the runtime of data pre-processing for marking duplicate reads on a single-node system by 156–186x compared with de facto standard tools. sam2bam consists of parallel software components that can fully utilize multiple processors, available memory, high-bandwidth storage, and hardware compression accelerators, if available. As a basic feature, sam2bam provides file format conversion between well-known genome file formats, from SAM to BAM. Additional features such as analyzing, filtering, and converting input data are provided by plug-in tools, e.g., duplicate marking, which can be attached to sam2bam at runtime. We demonstrated that sam2bam could significantly reduce the runtime of NGS data pre-processing from about two hours to about one minute for a whole-exome data set on a 16-core single-node system using up to 130 GB of memory. sam2bam could reduce the runtime of NGS data pre-processing from about 20 hours to about nine minutes for a whole-genome sequencing data set on the same system using up to 711 GB of memory. PMID:27861637
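    For readers unfamiliar with the basic operation sam2bam accelerates, the sketch below performs the same SAM-to-BAM conversion (plus sorting and indexing) single-threaded with the widely used pysam library; it is not sam2bam's own API, and the file names are placeholders.

```python
import pysam

# Baseline, single-threaded SAM -> BAM conversion with pysam (placeholder file
# names). This is the operation that sam2bam parallelizes across cores, memory,
# and optional hardware compression accelerators; it is not sam2bam's API.
with pysam.AlignmentFile("sample.sam", "r") as sam_in, \
     pysam.AlignmentFile("sample.bam", "wb", template=sam_in) as bam_out:
    for read in sam_in:
        bam_out.write(read)

# Sort and index so downstream plug-ins (e.g., duplicate marking) can run.
pysam.sort("-o", "sample.sorted.bam", "sample.bam")
pysam.index("sample.sorted.bam")
```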

  14. Experiments and Analyses of Data Transfers Over Wide-Area Dedicated Connections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, Nageswara S.; Liu, Qiang; Sen, Satyabrata

    Dedicated wide-area network connections are increasingly employed in high-performance computing and big data scenarios. One might expect the performance and dynamics of data transfers over such connections to be easy to analyze due to the lack of competing traffic. However, non-linear transport dynamics and end-system complexities (e.g., multi-core hosts and distributed filesystems) can in fact make analysis surprisingly challenging. We present extensive measurements of memory-to-memory and disk-to-disk file transfers over 10 Gbps physical and emulated connections with 0–366 ms round trip times (RTTs). For memory-to-memory transfers, profiles of both TCP and UDT throughput as a function of RTT show concave and convex regions; large buffer sizes and more parallel flows lead to wider concave regions, which are highly desirable. TCP and UDT both also display complex throughput dynamics, as indicated by their Poincare maps and Lyapunov exponents. For disk-to-disk transfers, we determine that high throughput can be achieved via a combination of parallel I/O threads, parallel network threads, and direct I/O mode. Our measurements also show that Lustre filesystems can be mounted over long-haul connections using LNet routers, although challenges remain in jointly optimizing file I/O and transport method parameters to achieve peak throughput.
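    As an illustration of how throughput dynamics of this kind can be examined, the sketch below builds a Poincare (return) map from a throughput time series and computes a crude nearest-neighbour divergence rate in the spirit of a largest-Lyapunov-exponent estimate. The series is synthetic and the estimator is deliberately simplified; it is not the analysis used in the paper.

```python
import numpy as np

# Synthetic stand-in for a measured per-interval throughput series (Gbps).
rng = np.random.default_rng(0)
throughput = 9.0 + np.clip(np.cumsum(rng.normal(0, 0.05, 2000)), -2, 1)

# Poincare (return) map: pairs (x_t, x_{t+1}). Visible structure suggests
# deterministic dynamics; a diffuse cloud suggests noise.
return_map = np.column_stack([throughput[:-1], throughput[1:]])

def divergence_rate(series, horizon=10, exclude=5):
    """Average log growth of the gap between each sample and its nearest
    (non-adjacent) neighbour after `horizon` steps -- a rough, scalar
    stand-in for a largest-Lyapunov-exponent estimate."""
    s = np.asarray(series, dtype=float)
    rates = []
    for i in range(len(s) - horizon):
        d0 = np.abs(s[:len(s) - horizon] - s[i])
        d0[max(0, i - exclude):i + exclude + 1] = np.inf   # skip temporal neighbours
        j = int(np.argmin(d0))
        if np.isfinite(d0[j]) and d0[j] > 0:
            d1 = abs(s[i + horizon] - s[j + horizon])
            if d1 > 0:
                rates.append(np.log(d1 / d0[j]) / horizon)
    return float(np.mean(rates))

print(return_map[:3])
print("divergence rate per step:", round(divergence_rate(throughput), 4))
```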

  15. The potential of multi-port optical memories in digital computing

    NASA Technical Reports Server (NTRS)

    Alford, C. O.; Gaylord, T. K.

    1975-01-01

    A high-capacity memory with a relatively high data transfer rate and multi-port simultaneous access capability may serve as the basis for new computer architectures. The implementation of a multi-port optical memory is discussed. Several computer structures are presented that might profitably use such a memory. These structures include (1) a simultaneous record access system, (2) a simultaneously shared memory computer system, and (3) a parallel digital processing structure.

  16. Selectivity in Postencoding Connectivity with High-Level Visual Cortex Is Associated with Reward-Motivated Memory

    PubMed Central

    Murty, Vishnu P.; Tompary, Alexa; Adcock, R. Alison

    2017-01-01

    Reward motivation has been demonstrated to enhance declarative memory by facilitating systems-level consolidation. Although high-reward information is often intermixed with lower-reward information during an experience, memory for high-value information is prioritized. How is this selectivity achieved? One possibility is that postencoding consolidation processes bias memory strengthening toward those representations associated with higher reward. To test this hypothesis, we investigated the influence of differential reward motivation on the selectivity of postencoding markers of systems-level memory consolidation. Human participants encoded intermixed, trial-unique memoranda that were associated with either high or low value during fMRI acquisition. Encoding was interleaved with periods of rest, allowing us to investigate experience-dependent changes in connectivity as they related to later memory. Behaviorally, we found that reward motivation enhanced 24 h associative memory. Analysis of patterns of postencoding connectivity showed that, even though learning trials were intermixed, there was significantly greater connectivity with regions of high-level, category-selective visual cortex associated with high-reward trials. Specifically, increased connectivity of category-selective visual cortex with both the VTA and the anterior hippocampus predicted associative memory for high- but not low-reward memories. Critically, these results were independent of encoding-related connectivity and univariate activity measures. Thus, these findings support a model by which the selective stabilization of memories for salient events is supported by postencoding interactions with sensory cortex associated with reward. SIGNIFICANCE STATEMENT Reward motivation is thought to promote memory by supporting memory consolidation. Yet, little is known as to how the brain selects relevant information for subsequent consolidation based on reward. We show that experience-dependent changes in connectivity of both the anterior hippocampus and the VTA with high-level visual cortex selectively predict memory for high-reward memoranda at a 24 h delay. These findings provide evidence for a novel mechanism guiding the consolidation of memories for valuable events, namely, postencoding interactions between neural systems supporting mesolimbic dopamine activation, episodic memory, and perception. PMID:28100737

  17. The Effect of NUMA Tunings on CPU Performance

    NASA Astrophysics Data System (ADS)

    Hollowell, Christopher; Caramarcu, Costin; Strecker-Kellogg, William; Wong, Antonio; Zaytsev, Alexandr

    2015-12-01

    Non-Uniform Memory Access (NUMA) is a memory architecture for symmetric multiprocessing (SMP) systems where each processor is directly connected to separate memory. Indirect access to other CPUs' (remote) RAM is still possible, but such requests are slower as they must also pass through that memory's controlling CPU. In concert with a NUMA-aware operating system, the NUMA hardware architecture can help eliminate the memory performance reductions generally seen in SMP systems when multiple processors simultaneously attempt to access memory. The x86 CPU architecture has supported NUMA for a number of years. Modern operating systems such as Linux support NUMA-aware scheduling, where the OS attempts to schedule a process to the CPU directly attached to the majority of its RAM. In Linux, it is possible to further manually tune the NUMA subsystem using the numactl utility. With the release of Red Hat Enterprise Linux (RHEL) 6.3, the numad daemon became available in this distribution. This daemon monitors a system's NUMA topology and utilization, and automatically makes adjustments to optimize locality. As the number of cores in x86 servers continues to grow, efficient NUMA mappings of processes to CPUs/memory will become increasingly important. This paper gives a brief overview of NUMA and discusses the effects of manual tunings and numad on the performance of the HEPSPEC06 benchmark and ATLAS software.
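    As a concrete example of the manual tuning described above, the snippet below runs a workload under numactl with both its CPUs and its memory bound to NUMA node 0, and prints the machine's NUMA topology; the workload path is a placeholder, and numad would make comparable placement decisions automatically.

```python
import subprocess

# Bind both execution and allocations to NUMA node 0 ('./hepspec_workload'
# is a placeholder path). numad, in contrast, adjusts placement automatically.
subprocess.run(
    ["numactl", "--cpunodebind=0", "--membind=0", "./hepspec_workload"],
    check=True,
)

# Show the NUMA topology: nodes, their CPUs, and per-node free memory.
subprocess.run(["numactl", "--hardware"], check=True)
```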

  18. Influence of anxiety on memory performance in temporal lobe epilepsy.

    PubMed

    Brown, Franklin C; Westerveld, Michael; Langfitt, John T; Hamberger, Marla; Hamid, Hamada; Shinnar, Shlomo; Sperling, Michael R; Devinsky, Orrin; Barr, William; Tracy, Joseph; Masur, David; Bazil, Carl W; Spencer, Susan S

    2014-02-01

    This study examined the degree to which anxiety contributed to inconsistent material-specific memory difficulties among 243 patients with temporal lobe epilepsy from the Multisite Epilepsy Study. Visual memory performance on the Rey Complex Figure Test (RCFT) was poorer for those with high versus low levels of anxiety but was not found to be related to the TLE side. The verbal memory score on the California Verbal Learning Test (CVLT) was significantly lower for patients with left-sided TLE than for patients with right-sided TLE with low anxiety levels but equally impaired for those with high anxiety levels. These results suggest that we can place more confidence in the ability of verbal memory tests like the CVLT to lateralize to left-sided TLE for those with low anxiety levels, but that verbal memory will be less likely to produce lateralizing information for those with high anxiety levels. This suggests that more caution is needed when interpreting verbal memory tests for those with high anxiety levels. These results indicated that RCFT performance was significantly affected by anxiety and did not lateralize to either side, regardless of anxiety levels. This study adds to the existing literature which suggests that drawing-based visual memory tests do not lateralize among patients with TLE, regardless of anxiety levels. © 2013.

  19. A Very Low Cost BCH Decoder for High Immunity of On-Chip Memories

    NASA Astrophysics Data System (ADS)

    Seo, Haejun; Han, Sehwan; Heo, Yoonseok; Cho, Taewon

    BCH (Bose-Chaudhuri-Hocquenghem) codes, a class of cyclic block codes, have very strong error-correcting ability, which is vital for error protection in memory systems. Among the many decoding algorithms for BCH codes, the PGZ (Peterson-Gorenstein-Zierler) algorithm is advantageous because it corrects errors through a simple calculation for a given t value. However, it becomes problematic (division by zero) when ν ≠ t. In this paper, the circuit is simplified by a multi-mode hardware architecture that handles the cases ν = 0-3. First, production cost is lower thanks to the smaller number of gates. Second, the reduced power consumption can lengthen the recharging period. The very low cost and simple datapath make our design a good choice as the ECC (error correction code/circuit) block for on-chip memories in small-footprint SoCs (systems on chip).
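    To see where the division by zero comes from, recall the standard linear system that the PGZ algorithm solves (a textbook formulation, not taken from this paper): assuming ν errors and given the syndromes S_1, ..., S_2t, the error-locator coefficients Λ_1, ..., Λ_ν satisfy

$$
\begin{pmatrix}
S_1 & S_2 & \cdots & S_\nu \\
S_2 & S_3 & \cdots & S_{\nu+1} \\
\vdots & \vdots & & \vdots \\
S_\nu & S_{\nu+1} & \cdots & S_{2\nu-1}
\end{pmatrix}
\begin{pmatrix}
\Lambda_\nu \\ \Lambda_{\nu-1} \\ \vdots \\ \Lambda_1
\end{pmatrix}
=
\begin{pmatrix}
-S_{\nu+1} \\ -S_{\nu+2} \\ \vdots \\ -S_{2\nu}
\end{pmatrix},
$$

    so the decoder must invert the ν × ν syndrome matrix. When fewer errors occurred than the decoder assumed, that matrix is singular and a naive datapath ends up dividing by a zero determinant, which is exactly the degenerate case a multi-mode ν = 0-3 architecture can handle explicitly.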

  20. A class Hierarchical, object-oriented approach to virtual memory management

    NASA Technical Reports Server (NTRS)

    Russo, Vincent F.; Campbell, Roy H.; Johnston, Gary M.

    1989-01-01

    The Choices family of operating systems exploits class hierarchies and object-oriented programming to facilitate the construction of customized operating systems for shared memory and networked multiprocessors. The software is being used in the Tapestry laboratory to study the performance of algorithms, mechanisms, and policies for parallel systems. Described here are the architectural design and class hierarchy of the Choices virtual memory management system. The software and hardware mechanisms and policies of a virtual memory system implement a memory hierarchy that exploits the trade-off between response times and storage capacities. In Choices, the notion of a memory hierarchy is captured by abstract classes. Concrete subclasses of those abstractions implement a virtual address space, segmentation, paging, physical memory management, secondary storage, and remote (that is, networked) storage. Captured in the notion of a memory hierarchy are classes that represent memory objects. These classes provide a storage mechanism that contains encapsulated data and have methods to read or write the memory object. Each of these classes provides specializations to represent the memory hierarchy.
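    A minimal sketch of the design idea (hypothetical names, not Choices' actual C++ classes): an abstract memory-object class exposes read/write methods, and concrete subclasses realize successive levels of the hierarchy, such as physical memory and a paged store that faults pages in from the level below.

```python
from abc import ABC, abstractmethod

class MemoryObject(ABC):
    """Abstract storage mechanism: encapsulated data plus read/write methods.
    Hypothetical stand-ins for the Choices C++ abstractions."""

    @abstractmethod
    def read(self, offset: int, length: int) -> bytes: ...

    @abstractmethod
    def write(self, offset: int, data: bytes) -> None: ...

class PhysicalMemory(MemoryObject):
    """Concrete level: RAM-backed storage."""
    def __init__(self, size: int):
        self._buf = bytearray(size)

    def read(self, offset, length):
        return bytes(self._buf[offset:offset + length])

    def write(self, offset, data):
        self._buf[offset:offset + len(data)] = data

class PagedStore(MemoryObject):
    """Concrete level: pages are faulted in on demand from the level below.
    For brevity, accesses are assumed not to cross page boundaries."""
    def __init__(self, backing: MemoryObject, page_size: int = 4096):
        self.backing, self.page_size, self.pages = backing, page_size, {}

    def _page(self, number):
        if number not in self.pages:          # "page fault": fetch from backing store
            data = self.backing.read(number * self.page_size, self.page_size)
            self.pages[number] = bytearray(data)
        return self.pages[number]

    def read(self, offset, length):
        page, off = divmod(offset, self.page_size)
        return bytes(self._page(page)[off:off + length])

    def write(self, offset, data):
        page, off = divmod(offset, self.page_size)
        self._page(page)[off:off + len(data)] = data

store = PagedStore(PhysicalMemory(1 << 20))
store.write(8192, b"hello")
print(store.read(8192, 5))                    # b'hello'
```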

  1. Matrix-addressed analog ferroelectric memory

    NASA Astrophysics Data System (ADS)

    Lemons, R. A.; Grogan, J. K.; Thompson, J. S.

    1980-08-01

    A matrix-addressed analog memory, which uses multiple ferroelectric domain walls to address columns of words, is demonstrated. It is shown that the analog information is stored as a pattern in the metallization on the surface of the crystal, making a read-only memory. The pattern is defined photolithographically in a way compatible with the simultaneous fabrication of many devices. Attention is given to the performance results, noting that the advantage of the device is that analog information can be stored at high density in a single mask step. Finally, it is shown that potential applications are in systems which require repetitive output from a limited vocabulary of spoken words.

  2. A system-level approach for embedded memory robustness

    NASA Astrophysics Data System (ADS)

    Mariani, Riccardo; Boschi, Gabriele

    2005-11-01

    New ultra-deep submicron technologies are bringing not only new advantages, such as extraordinary transistor densities and unforeseen performance, but also new uncertainties, such as soft-error susceptibility, modelling complexity, coupling effects, leakage contribution, and increased sensitivity to internal and external disturbances. Nowadays, embedded memories take advantage of such new technologies and are used more and more in systems; therefore, as robustness and reliability requirements increase, memory systems must be protected against different kinds of faults (permanent and transient), and that should be done in an efficient way. This means that reliability and costs, such as overhead and performance degradation, must be efficiently tuned based on the system and on the application. Moreover, emerging norms for safety-critical applications such as IEC 61508 require precise answers in terms of robustness also in the case of memory systems. In this paper, classical protection techniques for error detection and correction are enriched with a system-aware approach, where the memory system is analyzed based on its role in the application. A configurable memory protection system is presented, together with the results of its application to a proof-of-concept architecture. This work has been developed in the framework of the MEDEA+ T126 project called BLUEBERRIES.
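    As a minimal example of the classical error detection and correction techniques the paper builds on, here is a Hamming(7,4) single-error-correcting code in a few lines; this is the textbook scheme, not the configurable protection system proposed in the paper.

```python
# Hamming(7,4): encode 4 data bits with 3 parity bits; any single bit flip
# in the 7-bit codeword is located by the syndrome and corrected.
def hamming74_encode(d):                      # d = [d1, d2, d3, d4]
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p4 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p4, d[1], d[2], d[3]]   # codeword positions 1..7

def hamming74_correct(c):
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]            # checks positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]            # checks positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]            # checks positions 4,5,6,7
    syndrome = s1 + 2 * s2 + 4 * s3           # 0 = no error, else error position
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]           # recovered data bits

word = [1, 0, 1, 1]
code = hamming74_encode(word)
code[4] ^= 1                                  # inject a single-bit fault
assert hamming74_correct(code) == word
```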

  3. NMF-mGPU: non-negative matrix factorization on multi-GPU systems.

    PubMed

    Mejía-Roa, Edgardo; Tabas-Madrid, Daniel; Setoain, Javier; García, Carlos; Tirado, Francisco; Pascual-Montano, Alberto

    2015-02-13

    In the last few years, the non-negative matrix factorization (NMF) technique has gained great interest in the Bioinformatics community, since it is able to extract interpretable parts from high-dimensional datasets. However, the computing time required to process large data matrices may become impractical, even for a parallel application running on a multiprocessor cluster. In this paper, we present NMF-mGPU, an efficient and easy-to-use implementation of the NMF algorithm that takes advantage of the high computing performance delivered by Graphics Processing Units (GPUs). Driven by the ever-growing demands of the video-games industry, graphics cards usually provided in PCs and laptops have evolved from simple graphics-drawing platforms into high-performance programmable systems that can be used as coprocessors for linear-algebra operations. However, these devices may have a limited amount of on-board memory, which is not considered by other NMF implementations on GPUs. NMF-mGPU is based on CUDA (Compute Unified Device Architecture), NVIDIA's framework for GPU computing. On devices with low available memory, large input matrices are blockwise transferred from the system's main memory to the GPU's memory and processed accordingly. In addition, NMF-mGPU has been explicitly optimized for the different CUDA architectures. Finally, platforms with multiple GPUs can be synchronized through MPI (Message Passing Interface). In a four-GPU system, this implementation is about 120 times faster than a single conventional processor, and more than four times faster than a single GPU device (i.e., a super-linear speedup). Applications of GPUs in Bioinformatics are getting more and more attention due to their outstanding performance when compared to traditional processors. In addition, their relatively low price represents a highly cost-effective alternative to conventional clusters. In life sciences, this results in an excellent opportunity to facilitate the daily work of bioinformaticians who are trying to extract biological meaning out of hundreds of gigabytes of experimental information. NMF-mGPU can be used "out of the box" by researchers with little or no expertise in GPU programming on a variety of platforms, such as PCs, laptops, or high-end GPU clusters. NMF-mGPU is freely available at https://github.com/bioinfo-cnb/bionmf-gpu.
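    For readers who want to see what the factorization itself computes, below is a minimal CPU sketch of NMF with the standard multiplicative updates in numpy; it shows only the numerical core and none of the blockwise host-to-GPU transfers, CUDA kernels, or MPI synchronization that NMF-mGPU implements.

```python
import numpy as np

def nmf(V, k, iters=200, eps=1e-9, seed=0):
    """Factorize a non-negative matrix V (m x n) as W (m x k) @ H (k x n)
    using Lee & Seung multiplicative updates. CPU/numpy sketch only."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k)) + eps
    H = rng.random((k, n)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)    # update H with W fixed
        W *= (V @ H.T) / (W @ H @ H.T + eps)    # update W with H fixed
    return W, H

# Toy usage: a small random non-negative "expression matrix" with 3 hidden parts.
V = np.abs(np.random.default_rng(1).random((100, 40)))
W, H = nmf(V, k=3)
print("relative reconstruction error:",
      np.linalg.norm(V - W @ H) / np.linalg.norm(V))
```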

  4. Raster Scan Computer Image Generation (CIG) System Based On Refresh Memory

    NASA Astrophysics Data System (ADS)

    Dichter, W.; Doris, K.; Conkling, C.

    1982-06-01

    A full color, Computer Image Generation (CIG) raster visual system has been developed which provides a high level of training sophistication by utilizing advanced semiconductor technology and innovative hardware and firmware techniques. Double buffered refresh memory and efficient algorithms eliminate the problem of conventional raster line ordering by allowing the generated image to be stored in a random fashion. Modular design techniques and simplified architecture provide significant advantages in reduced system cost, standardization of parts, and high reliability. The major system components are a general purpose computer to perform interfacing and data base functions; a geometric processor to define the instantaneous scene image; a display generator to convert the image to a video signal; an illumination control unit which provides final image processing; and a CRT monitor for display of the completed image. Additional optional enhancements include texture generators, increased edge and occultation capability, curved surface shading, and data base extensions.

  5. The association of perceived stress and verbal memory is greater in HIV-infected versus HIV-uninfected women.

    PubMed

    Rubin, Leah H; Cook, Judith A; Weber, Kathleen M; Cohen, Mardge H; Martin, Eileen; Valcour, Victor; Milam, Joel; Anastos, Kathryn; Young, Mary A; Alden, Christine; Gustafson, Deborah R; Maki, Pauline M

    2015-08-01

    In contrast to findings from cohorts comprised primarily of HIV-infected men, verbal memory deficits are the largest cognitive deficit found in HIV-infected women from the Women's Interagency HIV Study (WIHS), and this deficit is not explained by depressive symptoms or substance abuse. HIV-infected women may be at greater risk for verbal memory deficits due to a higher prevalence of cognitive risk factors such as high psychosocial stress and lower socioeconomic status. Here, we investigate the association between perceived stress using the Perceived Stress Scale (PSS-10) and verbal memory performance using the Hopkins Verbal Learning Test (HVLT) in 1009 HIV-infected and 496 at-risk HIV-uninfected WIHS participants. Participants completed a comprehensive neuropsychological test battery which yielded seven cognitive domain scores, including a primary outcome of verbal memory. HIV infection was not associated with a higher prevalence of high perceived stress (i.e., PSS-10 score in the top tertile) but was associated with worse performance on verbal learning (p < 0.01) and memory (p < 0.001), as well as attention (p = 0.02). Regardless of HIV status, high stress was associated with poorer performance in those cognitive domains (p's < 0.05) as well as processing speed (p = 0.01) and executive function (p < 0.01). A significant HIV by stress interaction was found only for the verbal memory domain (p = 0.02); among HIV-infected women only, high stress was associated with lower performance (p's < 0.001). That association was driven by the delayed verbal memory measure in particular. These findings suggest that high levels of perceived stress contribute to the deficits in verbal memory observed in WIHS women.

  6. Virtual data

    NASA Astrophysics Data System (ADS)

    Bjorklund, E.

    1994-12-01

    In the 1970s, when computers were memory limited, operating system designers created the concept of "virtual memory", which gave users the ability to address more memory than physically existed. In the 1990s, many large control systems have the potential of becoming data limited. We propose that many of the principles behind virtual memory systems (working sets, locality, caching and clustering) can also be applied to data-limited systems, creating, in effect, "virtual data systems". At the Los Alamos National Laboratory's Clinton P. Anderson Meson Physics Facility (LAMPF), we have applied these principles to a moderately sized (10 000 data points) data acquisition and control system. To test the principles, we measured the system's performance during tune-up, production, and maintenance periods. In this paper, we present a general discussion of the principles of a virtual data system along with some discussion of our own implementation and the results of our performance measurements.

  7. Stress and binge drinking: A toxic combination for the teenage brain.

    PubMed

    Goldstein, Aaron; Déry, Nicolas; Pilgrim, Malcolm; Ioan, Miruna; Becker, Suzanna

    2016-09-01

    Young adult university students frequently binge on alcohol and have high stress levels. Based on findings in rodents, we predicted that heavy current alcohol use and elevated stress and depression scores would be associated with deficits on high-interference memory tasks, while early-onset, prolonged binge patterns would lead to broader cognitive deficits on tests of associative encoding and executive functions. We developed the Concentration Memory Task, a novel computerized version of the Concentration card game with a high degree of interference. We found that young adults with elevated stress, depression, and alcohol consumption scores were impaired in the Concentration Memory Task. We also analyzed data from a previous study and found that higher alcohol consumption scores were associated with impaired performance on another high-interference memory task, based on Kirwan and Stark's Mnemonic Similarity Test. On the other hand, adolescent onset of binge drinking predicted poorer performance on a broader range of memory tests, including a more systematic test of spatial recognition memory and an associative learning task. Our results are broadly consistent with findings in rodents that acute alcohol and stress exposure suppress neurogenesis in the adult hippocampus, which in turn impairs performance in high-interference memory tasks, while adolescent-onset binge drinking causes more extensive brain damage and cognitive deficits. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. Massively parallel support for a case-based planning system

    NASA Technical Reports Server (NTRS)

    Kettler, Brian P.; Hendler, James A.; Anderson, William A.

    1993-01-01

    Case-based planning (CBP), a kind of case-based reasoning, is a technique in which previously generated plans (cases) are stored in memory and can be reused to solve similar planning problems in the future. CBP can save considerable time over generative planning, in which a new plan is produced from scratch. CBP thus offers a potential (heuristic) mechanism for handling intractable problems. One drawback of CBP systems has been the need for a highly structured memory to reduce retrieval times. This approach requires significant domain engineering and complex memory indexing schemes to make these planners efficient. In contrast, our CBP system, CaPER, uses a massively parallel frame-based AI language (PARKA) and can do extremely fast retrieval of complex cases from a large, unindexed memory. The ability to do fast, frequent retrievals has many advantages: indexing is unnecessary; very large case bases can be used; memory can be probed in numerous alternate ways; and queries can be made at several levels, allowing more specific retrieval of stored plans that better fit the target problem with less adaptation. In this paper we describe CaPER's case retrieval techniques and some experimental results showing its good performance, even on large case bases.
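    A toy sketch of the retrieval style described above: every stored case is scored against the target in parallel, with no index over the case base. The feature representation, scoring function, and use of Python multiprocessing are illustrative stand-ins; CaPER actually matches frame structures in the PARKA language.

```python
from multiprocessing import Pool

# Each case is a stored plan with a set of features describing the problem it solved.
CASE_BASE = [
    {"id": i, "features": {f"f{i % 7}", f"f{(i * 3) % 11}", "goal:deliver"}}
    for i in range(100_000)
]

def score(args):
    case, target = args
    # Simple feature-overlap score; a flat scan means no indexing scheme is needed.
    return len(case["features"] & target), case["id"]

def retrieve(target_features, top=5):
    with Pool() as pool:
        scored = pool.map(score, ((c, target_features) for c in CASE_BASE))
    return sorted(scored, reverse=True)[:top]

if __name__ == "__main__":
    print(retrieve({"f3", "f9", "goal:deliver"}))
```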

  9. [Development of a video image system for wireless capsule endoscopes based on DSP].

    PubMed

    Yang, Li; Peng, Chenglin; Wu, Huafeng; Zhao, Dechun; Zhang, Jinhua

    2008-02-01

    A video image recorder was designed to record video pictures from wireless capsule endoscopes. A Texas Instruments TMS320C6211 DSP is the core processor of the system. Images are periodically acquired from a Composite Video Broadcast Signal (CVBS) source and scaled by a video decoder (SAA7114H). Video data are transported from a high-speed first-in first-out (FIFO) buffer to the Digital Signal Processor (DSP) under the control of a Complex Programmable Logic Device (CPLD). This paper adopts the JPEG algorithm for image coding, and the compressed data in the DSP are stored to a Compact Flash (CF) card. The TMS320C6211 DSP is mainly used for image compression and data transport. A fast Discrete Cosine Transform (DCT) algorithm and a fast coefficient quantization algorithm are used to accelerate the operation speed of the DSP and to reduce the executable code. At the same time, proper addresses are assigned to each memory, which have different speeds; the memory structure is also optimized. In addition, the system makes extensive use of Enhanced Direct Memory Access (EDMA) to transport and process image data, which results in stable and high performance.
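    To make the JPEG stage concrete, the sketch below runs the 2-D DCT and a uniform quantization on one 8x8 block using scipy; the embedded system instead uses a fast fixed-point DCT on the DSP, and the single quantization step size here stands in for JPEG's 8x8 quantization tables.

```python
import numpy as np
from scipy.fft import dctn, idctn

# One 8x8 block of pixel values, level-shifted as in baseline JPEG.
block = np.random.default_rng(0).integers(0, 256, (8, 8)).astype(float) - 128

# Forward 2-D DCT (type-II, orthonormal), the transform step of JPEG coding.
coeffs = dctn(block, norm="ortho")

# Placeholder uniform quantization (real JPEG uses a perceptual 8x8 table).
q = 16
quantized = np.round(coeffs / q)

# Decoder side: dequantize and inverse DCT; the block is recovered approximately.
restored = idctn(quantized * q, norm="ortho")
print("max abs reconstruction error:", np.abs(block - restored).max())
```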

  10. Memory function and supportive technology

    PubMed Central

    Charness, Neil; Best, Ryan; Souders, Dustin

    2013-01-01

    Episodic and working memory processes show pronounced age-related decline, with other memory processes such as semantic, procedural, and metamemory less affected. Older adults tend to complain the most about prospective and retrospective memory failures. We introduce a framework for deciding how to mitigate memory decline using augmentation and substitution and discuss techniques that change the user, through mnemonics training, and change the tool or environment, by providing environmental support. We provide examples of low-tech and high-tech memory supports and discuss constraints on the utility of high-tech systems including effectiveness of devices, attitudes toward memory aids, and reliability of systems. PMID:24379752

  11. Effects of Age and Working Memory Capacity on Speech Recognition Performance in Noise Among Listeners With Normal Hearing.

    PubMed

    Gordon-Salant, Sandra; Cole, Stacey Samuels

    2016-01-01

    This study aimed to determine if younger and older listeners with normal hearing who differ on working memory span perform differently on speech recognition tests in noise. Older adults typically exhibit poorer speech recognition scores in noise than younger adults, which is attributed primarily to poorer hearing sensitivity and more limited working memory capacity in older than younger adults. Previous studies typically tested older listeners with poorer hearing sensitivity and shorter working memory spans than younger listeners, making it difficult to discern the importance of working memory capacity on speech recognition. This investigation controlled for hearing sensitivity and compared speech recognition performance in noise by younger and older listeners who were subdivided into high and low working memory groups. Performance patterns were compared for different speech materials to assess whether or not the effect of working memory capacity varies with the demands of the specific speech test. The authors hypothesized that (1) normal-hearing listeners with low working memory span would exhibit poorer speech recognition performance in noise than those with high working memory span; (2) older listeners with normal hearing would show poorer speech recognition scores than younger listeners with normal hearing, when the two age groups were matched for working memory span; and (3) an interaction between age and working memory would be observed for speech materials that provide contextual cues. Twenty-eight older (61 to 75 years) and 25 younger (18 to 25 years) normal-hearing listeners were assigned to groups based on age and working memory status. Northwestern University Auditory Test No. 6 words and Institute of Electrical and Electronics Engineers sentences were presented in noise using an adaptive procedure to measure the signal-to-noise ratio corresponding to 50% correct performance. Cognitive ability was evaluated with two tests of working memory (Listening Span Test and Reading Span Test) and two tests of processing speed (Paced Auditory Serial Addition Test and The Letter Digit Substitution Test). Significant effects of age and working memory capacity were observed on the speech recognition measures in noise, but these effects were mediated somewhat by the speech signal. Specifically, main effects of age and working memory were revealed for both words and sentences, but the interaction between the two was significant for sentences only. For these materials, effects of age were observed for listeners in the low working memory groups only. Although all cognitive measures were significantly correlated with speech recognition in noise, working memory span was the most important variable accounting for speech recognition performance. The results indicate that older adults with high working memory capacity are able to capitalize on contextual cues and perform as well as young listeners with high working memory capacity for sentence recognition. The data also suggest that listeners with normal hearing and low working memory capacity are less able to adapt to distortion of speech signals caused by background noise, which requires the allocation of more processing resources to earlier processing stages. These results indicate that both younger and older adults with low working memory capacity and normal hearing are at a disadvantage for recognizing speech in noise.

  12. Depressive Mood and Testosterone Related to Declarative Verbal Memory Decline in Middle-Aged Caregivers of Children with Eating Disorders.

    PubMed

    Romero-Martínez, Ángel; Ruiz-Robledillo, Nicolás; Moya-Albiol, Luis

    2016-03-04

    Caring for children diagnosed with a chronic psychological disorder such as an eating disorder (ED) can be used as a model of chronic stress. This kind of stress has been reported to have deleterious effects on caregivers' cognition, particularly on the verbal declarative memory of women caregivers. Moreover, high depressive mood and variations in testosterone (T) levels moderate this cognitive decline. The purpose of this study was to characterize whether caregivers of individuals with EDs (n = 27) show declarative memory impairments compared to non-caregivers (n = 27), using a standardized memory test (Rey's Auditory Verbal Learning Test). Its purpose was also to examine the role of depressive mood and T in memory decline. Results showed that ED caregivers presented high depressive mood, which was associated with worse verbal memory performance, especially in the case of women. In addition, all caregivers showed high T levels. Nonetheless, only in the case of women caregivers did T show a curvilinear relationship with verbal memory performance, meaning that increases in T were associated with improvements in verbal memory performance, but only up to a certain point, beyond which T continued to increase while memory performance decreased. Thus, chronic stress due to caregiving was associated with disturbances in mood and T levels, which in turn were associated with verbal memory decline. These findings should be taken into account in the implementation of intervention programs for helping ED caregivers cope with caregiving situations and to prevent the risk of a pronounced verbal memory decline.

  13. Interconnect Performance Evaluation of SGI Altix 3700 BX2, Cray X1, Cray Opteron Cluster, and Dell PowerEdge

    NASA Technical Reports Server (NTRS)

    Fatoohi, Rod; Saini, Subbash; Ciotti, Robert

    2006-01-01

    We study the performance of inter-process communication on four high-speed multiprocessor systems using a set of communication benchmarks. The goal is to identify certain limiting factors and bottlenecks in the interconnects of these systems as well as to compare these interconnects. We measured network bandwidth using different numbers of communicating processors and different communication patterns, such as point-to-point communication, collective communication, and dense communication patterns. The four platforms are: a 512-processor SGI Altix 3700 BX2 shared-memory machine with 3.2 GB/s links; a 64-processor (single-streaming) Cray X1 shared-memory machine with 32 1.6 GB/s links; a 128-processor Cray Opteron cluster using a Myrinet network; and a 1280-node Dell PowerEdge cluster with an InfiniBand network. Our results show the impact of network bandwidth and topology on the overall performance of each interconnect.
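    The point-to-point measurements described above reduce to a ping-pong loop between two ranks; here is a minimal mpi4py sketch of one (message size and repetition count are arbitrary choices).

```python
# Run with, e.g.: mpiexec -n 2 python pingpong.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
nbytes = 8 * 1024 * 1024                      # 8 MiB messages
reps = 50
buf = np.zeros(nbytes, dtype="u1")

comm.Barrier()
t0 = MPI.Wtime()
for _ in range(reps):
    if rank == 0:
        comm.Send(buf, dest=1)
        comm.Recv(buf, source=1)
    elif rank == 1:
        comm.Recv(buf, source=0)
        comm.Send(buf, dest=0)
elapsed = MPI.Wtime() - t0

if rank == 0:
    # Each repetition moves the message twice (there and back).
    gbps = 2 * reps * nbytes * 8 / elapsed / 1e9
    print(f"point-to-point bandwidth: {gbps:.2f} Gbit/s")
```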

  14. Effects of Working Memory Capacity on Metacognitive Monitoring: A Study of Group Differences Using a Listening Span Test.

    PubMed

    Komori, Mie

    2016-01-01

    Monitoring is an executive function of working memory that serves to update novel information, focusing attention on task-relevant targets, and eliminating task-irrelevant noise. The present research used a verbal working memory task to examine how working memory capacity limits affect monitoring. Participants performed a Japanese listening span test that included maintenance of target words and listening comprehension. On each trial, participants responded to the target word and then immediately estimated confidence in recall performance for that word (metacognitive judgment). The results confirmed significant differences in monitoring accuracy between high and low capacity groups in a multi-task situation. That is, confidence judgments were superior in high vs. low capacity participants in terms of absolute accuracy and discrimination. The present research further investigated how memory load and interference affect underestimation of successful recall. The results indicated that the level of memory load that reduced word recall performance and led to an underconfidence bias varied according to participants' memory capacity. In addition, irrelevant information associated with incorrect true/ false decisions (secondary task) and word recall within the current trial impaired monitoring accuracy in both participant groups. These findings suggest that interference from unsuccessful decisions only influences low, but not high, capacity participants. Therefore, monitoring accuracy, which requires high working memory capacity, improves metacognitive abilities by inhibiting task-irrelevant noise and focusing attention on detecting task-relevant targets or useful retrieval cues, which could improve actual cognitive performance.

  15. High-Performance Flexible Organic Nano-Floating Gate Memory Devices Functionalized with Cobalt Ferrite Nanoparticles.

    PubMed

    Jung, Ji Hyung; Kim, Sunghwan; Kim, Hyeonjung; Park, Jongnam; Oh, Joon Hak

    2015-10-07

    Nano-floating gate memory (NFGM) devices are transistor-type memory devices that use nanostructured materials as charge trap sites. They have recently attracted a great deal of attention due to their excellent performance, capability for multilevel programming, and suitability as platforms for integrated circuits. Herein, novel NFGM devices have been fabricated using semiconducting cobalt ferrite (CoFe2O4) nanoparticles (NPs) as charge trap sites and pentacene as a p-type semiconductor. Monodisperse CoFe2O4 NPs with different diameters have been synthesized by thermal decomposition and embedded in NFGM devices. The particle size effects on the memory performance have been investigated in terms of energy levels and particle-particle interactions. CoFe2O4 NP-based memory devices exhibit a large memory window (≈73.84 V), a high read-current on/off ratio (I_on/I_off) of ≈2.98 × 10^3, and excellent data retention. Fast switching behaviors are observed due to the exceptional charge trapping/release capability of CoFe2O4 NPs surrounded by the oleate layer, which acts as an alternative tunneling dielectric layer and simplifies the device fabrication process. Furthermore, the NFGM devices show excellent thermal stability, and flexible memory devices fabricated on plastic substrates exhibit remarkable mechanical and electrical stability. This study demonstrates a viable means of fabricating highly flexible, high-performance organic memory devices. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. BLACKCOMB2: Hardware-software co-design for non-volatile memory in exascale systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mudge, Trevor

    This work was part of a larger project, Blackcomb2, centered at Oak Ridge National Labs (Jeff Vetter, PI), to investigate the opportunities for replacing or supplementing DRAM main memory with nonvolatile memory (NVmemory) in Exascale memory systems. The goal was to reduce the energy consumed by future supercomputer memory systems and to improve their resiliency. Building on the accomplishments of the original Blackcomb Project, funded in 2010, the goal for Blackcomb2 was to identify, evaluate, and optimize the most promising emerging memory technologies and architecture, hardware, and software technologies, which are essential to provide the necessary memory capacity, performance, resilience, and energy efficiency in Exascale systems. Capacity and energy are the key drivers.

  17. Memory Transformation Enhances Reinforcement Learning in Dynamic Environments.

    PubMed

    Santoro, Adam; Frankland, Paul W; Richards, Blake A

    2016-11-30

    Over the course of systems consolidation, there is a switch from a reliance on detailed episodic memories to generalized schematic memories. This switch is sometimes referred to as "memory transformation." Here we demonstrate a previously unappreciated benefit of memory transformation, namely, its ability to enhance reinforcement learning in a dynamic environment. We developed a neural network that is trained to find rewards in a foraging task where reward locations are continuously changing. The network can use memories for specific locations (episodic memories) and statistical patterns of locations (schematic memories) to guide its search. We find that switching from an episodic to a schematic strategy over time leads to enhanced performance due to the tendency for the reward location to be highly correlated with itself in the short-term, but regress to a stable distribution in the long-term. We also show that the statistics of the environment determine the optimal utilization of both types of memory. Our work recasts the theoretical question of why memory transformation occurs, shifting the focus from the avoidance of memory interference toward the enhancement of reinforcement learning across multiple timescales. As time passes, memories transform from a highly detailed state to a more gist-like state, in a process called "memory transformation." Theories of memory transformation speak to its advantages in terms of reducing memory interference, increasing memory robustness, and building models of the environment. However, the role of memory transformation from the perspective of an agent that continuously acts and receives reward in its environment is not well explored. In this work, we demonstrate a view of memory transformation that defines it as a way of optimizing behavior across multiple timescales. Copyright © 2016 the authors 0270-6474/16/3612228-15$15.00/0.
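    The trade-off the model captures can be seen in a deliberately tiny toy (invented here, not the authors' network): an agent guesses the next reward location either from the last observed location (episodic) or from the long-run distribution of past locations (schematic); which guess is better depends on how sticky the reward location is.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 20
# Long-run ("schematic") distribution of reward locations: one favoured site.
pi = np.full(N, 0.4 / (N - 1)); pi[0] = 0.6

def first_guess_accuracy(strategy, stickiness, trials=20_000):
    """Reward stays put with prob `stickiness`, else is redrawn from pi.
    After every trial the agent is told the true location (as if it eventually
    found the reward); we score whether its *first* guess was right."""
    loc = rng.choice(N, p=pi)
    counts = np.ones(N)                       # schematic memory of past locations
    prev = loc                                # episodic memory of the last location
    hits = 0
    for _ in range(trials):
        if rng.random() > stickiness:
            loc = rng.choice(N, p=pi)
        guess = prev if strategy == "episodic" else int(np.argmax(counts))
        hits += (guess == loc)
        counts[loc] += 1                      # update schematic memory
        prev = loc                            # update episodic memory
    return hits / trials

for stickiness in (0.9, 0.2):
    print("stickiness", stickiness,
          "episodic:", round(first_guess_accuracy("episodic", stickiness), 2),
          "schematic:", round(first_guess_accuracy("schematic", stickiness), 2))
```

    With a sticky reward the episodic guess dominates; once the location regresses quickly to its stable distribution, the schematic guess wins, mirroring the timescale argument in the abstract.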

  18. Recognition without Awareness: An Elusive Phenomenon

    ERIC Educational Resources Information Center

    Jeneson, Annette; Kirwan, C. Brock; Squire, Larry R.

    2010-01-01

    Two recent studies described conditions under which recognition memory performance appeared to be driven by nondeclarative memory. Specifically, participants successfully discriminated old images from highly similar new images even when no conscious memory for the images could be retrieved. Paradoxically, recognition performance was better when…

  19. SKIRT: Hybrid parallelization of radiative transfer simulations

    NASA Astrophysics Data System (ADS)

    Verstocken, S.; Van De Putte, D.; Camps, P.; Baes, M.

    2017-07-01

    We describe the design, implementation and performance of the new hybrid parallelization scheme in our Monte Carlo radiative transfer code SKIRT, which has been used extensively for modelling the continuum radiation of dusty astrophysical systems including late-type galaxies and dusty tori. The hybrid scheme combines distributed memory parallelization, using the standard Message Passing Interface (MPI) to communicate between processes, and shared memory parallelization, providing multiple execution threads within each process to avoid duplication of data structures. The synchronization between multiple threads is accomplished through atomic operations without high-level locking (also called lock-free programming). This improves the scaling behaviour of the code and substantially simplifies the implementation of the hybrid scheme. The result is an extremely flexible solution that adjusts to the number of available nodes, processors and memory, and consequently performs well on a wide variety of computing architectures.

  20. Interplay between affect and arousal in recognition memory.

    PubMed

    Greene, Ciara M; Bahri, Pooja; Soto, David

    2010-07-23

    Emotional states linked to arousal and mood are known to affect the efficiency of cognitive performance. However, the extent to which memory processes may be affected by arousal, mood or their interaction is poorly understood. Following a study phase of abstract shapes, we altered the emotional state of participants by means of exposure to music that varied in both mood and arousal dimensions, leading to four different emotional states: (i) positive mood-high arousal; (ii) positive mood-low arousal; (iii) negative mood-high arousal; (iv) negative mood-low arousal. Following the emotional induction, participants performed a memory recognition test. Critically, there was an interaction between mood and arousal on recognition performance. Memory was enhanced in the positive mood-high arousal and in the negative mood-low arousal states, relative to the other emotional conditions. Neither mood nor arousal alone but their interaction appears most critical to understanding the emotional enhancement of memory.

  1. Reading Comprehension and Working Memory in Learning-Disabled Readers: Is the Phonological Loop More Important Than the Executive System?

    ERIC Educational Resources Information Center

    Swanson, H. Lee

    1999-01-01

    Investigated the contribution of two working-memory systems (the articulatory loop and the central executive) to the performance differences between learning-disabled and skilled readers. Found that, compared to skilled readers, learning-disabled readers experienced constraints in the articulatory and long-term memory system, and suffered…

  2. Multiple Systems of Spatial Memory: Evidence from Described Scenes

    ERIC Educational Resources Information Center

    Avraamides, Marios N.; Kelly, Jonathan W.

    2010-01-01

    Recent models in spatial cognition posit that distinct memory systems are responsible for maintaining transient and enduring spatial relations. The authors used perspective-taking performance to assess the presence of these enduring and transient spatial memories for locations encoded through verbal descriptions. Across 3 experiments, spatial…

  3. Mental Imagery and Visual Working Memory

    PubMed Central

    Keogh, Rebecca; Pearson, Joel

    2011-01-01

    Visual working memory provides an essential link between past and future events. Despite recent efforts, capacity limits, their genesis and the underlying neural structures of visual working memory remain unclear. Here we show that performance in visual working memory - but not iconic visual memory - can be predicted by the strength of mental imagery as assessed with binocular rivalry in a given individual. In addition, for individuals with strong imagery, modulating the background luminance diminished performance on visual working memory and imagery tasks, but not working memory for number strings. This suggests that luminance signals were disrupting sensory-based imagery mechanisms and not a general working memory system. Individuals with poor imagery still performed above chance in the visual working memory task, but their performance was not affected by the background luminance, suggesting a dichotomy in strategies for visual working memory: individuals with strong mental imagery rely on sensory-based imagery to support mnemonic performance, while those with poor imagery rely on different strategies. These findings could help reconcile current controversy regarding the mechanism and location of visual mnemonic storage. PMID:22195024

  4. Mental imagery and visual working memory.

    PubMed

    Keogh, Rebecca; Pearson, Joel

    2011-01-01

    Visual working memory provides an essential link between past and future events. Despite recent efforts, capacity limits, their genesis and the underlying neural structures of visual working memory remain unclear. Here we show that performance in visual working memory--but not iconic visual memory--can be predicted by the strength of mental imagery as assessed with binocular rivalry in a given individual. In addition, for individuals with strong imagery, modulating the background luminance diminished performance on visual working memory and imagery tasks, but not working memory for number strings. This suggests that luminance signals were disrupting sensory-based imagery mechanisms and not a general working memory system. Individuals with poor imagery still performed above chance in the visual working memory task, but their performance was not affected by the background luminance, suggesting a dichotomy in strategies for visual working memory: individuals with strong mental imagery rely on sensory-based imagery to support mnemonic performance, while those with poor imagery rely on different strategies. These findings could help reconcile current controversy regarding the mechanism and location of visual mnemonic storage.

  5. A site-oriented supercomputer for theoretical physics: The Fermilab Advanced Computer Program Multi Array Processor System (ACPMAPS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nash, T.; Atac, R.; Cook, A.

    1989-03-06

    The ACPMAPS multiprocessor is a highly cost-effective, local-memory parallel computer with a hypercube or compound-hypercube architecture. Communication requires the attention of only the two communicating nodes. The design is aimed at floating-point-intensive, grid-like problems, particularly those with extreme computing requirements. The processing nodes of the system are single-board array processors, each with a peak power of 20 Mflops, supported by 8 Mbytes of data memory and 2 Mbytes of instruction memory. The system currently being assembled has a peak power of 5 Gflops. The nodes are based on the Weitek XL chip set. The system delivers performance at approximately $300/Mflop. 8 refs., 4 figs.
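    Because communication in a hypercube involves only the two endpoint nodes, a node's direct neighbours are simply the addresses that differ from its own in one bit; a tiny sketch:

```python
def hypercube_neighbours(node: int, dim: int) -> list[int]:
    """Neighbours of `node` in a dim-dimensional hypercube: flip one address bit."""
    return [node ^ (1 << k) for k in range(dim)]

# In an 8-dimensional (256-node) hypercube each node has 8 direct neighbours,
# and any message needs at most 8 hops (the Hamming distance between addresses).
print(hypercube_neighbours(0b00000101, 8))

# Cost check from the abstract: 5 Gflops at roughly $300/Mflop
# implies a machine cost on the order of 5000 * 300 = $1.5M.
```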

  6. Worrying Thoughts Limit Working Memory Capacity in Math Anxiety

    PubMed Central

    Shi, Zhan; Liu, Peiru

    2016-01-01

    Sixty-one high-math-anxious persons and sixty-one low-math-anxious persons completed a modified working memory capacity task, designed to measure working memory capacity under a dysfunctional math-related context and working memory capacity under a valence-neutral context. Participants were required to perform simple tasks with emotionally benign material (i.e., lists of letters) over short intervals while simultaneously reading and making judgments about sentences describing dysfunctional math-related thoughts or sentences describing emotionally-neutral facts about the world. Working memory capacity for letters under the dysfunctional math-related context, relative to working memory capacity performance under the valence-neutral context, was poorer overall in the high-math-anxious group compared with the low-math-anxious group. The findings show a particular difficulty employing working memory in math-related contexts in high-math-anxious participants. Theories that can provide reasonable interpretations for these findings and interventions that can reduce anxiety-induced worrying intrusive thoughts or improve working memory capacity for math anxiety are discussed. PMID:27788235

  7. Worrying Thoughts Limit Working Memory Capacity in Math Anxiety.

    PubMed

    Shi, Zhan; Liu, Peiru

    2016-01-01

    Sixty-one high-math-anxious persons and sixty-one low-math-anxious persons completed a modified working memory capacity task, designed to measure working memory capacity under a dysfunctional math-related context and working memory capacity under a valence-neutral context. Participants were required to perform simple tasks with emotionally benign material (i.e., lists of letters) over short intervals while simultaneously reading and making judgments about sentences describing dysfunctional math-related thoughts or sentences describing emotionally-neutral facts about the world. Working memory capacity for letters under the dysfunctional math-related context, relative to working memory capacity performance under the valence-neutral context, was poorer overall in the high-math-anxious group compared with the low-math-anxious group. The findings show a particular difficulty employing working memory in math-related contexts in high-math-anxious participants. Theories that can provide reasonable interpretations for these findings and interventions that can reduce anxiety-induced worrying intrusive thoughts or improve working memory capacity for math anxiety are discussed.

  8. NAS Parallel Benchmark. Results 11-96: Performance Comparison of HPF and MPI Based NAS Parallel Benchmarks. 1.0

    NASA Technical Reports Server (NTRS)

    Saini, Subash; Bailey, David; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    High Performance Fortran (HPF), the high-level language for parallel Fortran programming, is based on Fortran 90. HPF was defined by an informal standards committee known as the High Performance Fortran Forum (HPFF) in 1993, and modeled on TMC's CM Fortran language. Several HPF features have since been incorporated into the draft ANSI/ISO Fortran 95, the next formal revision of the Fortran standard. HPF allows users to write a single parallel program that can execute on a serial machine, a shared-memory parallel machine, or a distributed-memory parallel machine. HPF eliminates the complex, error-prone task of explicitly specifying how, where, and when to pass messages between processors on distributed-memory machines, or when to synchronize processors on shared-memory machines. HPF is designed in a way that allows the programmer to code an application at a high level, and then selectively optimize portions of the code by dropping into message-passing or calling tuned library routines as 'extrinsics'. Compilers supporting High Performance Fortran features first appeared in late 1994 and early 1995 from Applied Parallel Research (APR), Digital Equipment Corporation, and The Portland Group (PGI). IBM introduced an HPF compiler for the IBM RS/6000 SP/2 in April of 1996. Over the past two years, these implementations have shown steady improvement in terms of both features and performance. The performance of various hardware/programming model (HPF and MPI (Message Passing Interface)) combinations will be compared, based on the latest NAS (NASA Advanced Supercomputing) Parallel Benchmark (NPB) results, thus providing a cross-machine and cross-model comparison. Specifically, HPF-based NPB results will be compared with MPI-based NPB results to provide perspective on the performance currently obtainable using HPF versus MPI or versus hand-tuned implementations such as those supplied by the hardware vendors. In addition, we also present NPB (Version 1.0) performance results for the following systems: DEC Alpha Server 8400 5/440, Fujitsu VPP Series (VX, VPP300, and VPP700), HP/Convex Exemplar SPP2000, IBM RS/6000 SP P2SC node (120 MHz), NEC SX-4/32, SGI/CRAY T3E, and SGI Origin2000.

  9. Do subjective memory complaints predict senile Alzheimer dementia?

    PubMed

    Jungwirth, Susanne; Zehetmayer, Sonja; Weissgram, Silvia; Weber, Germain; Tragl, Karl Heinz; Fischer, Peter

    2008-01-01

    Many elderly people complain about their memory and undergo dementia screening with the Mini-Mental State Examination (MMSE). While objective memory impairment always precedes Alzheimer dementia (AD), it is unclear whether subjective memory complaints predict AD. We tried to answer this question in a prospective cohort study. The 75-year-old non-demented inhabitants of Vienna-Transdanube were investigated for conversion to AD after 30 months. The predictive value of subjective memory complaints was analysed in two groups: subjects with a high MMSE score (28-30) and subjects with a low MMSE score (23-27). Only in subjects with a high MMSE score did univariate analyses show an association between subjective memory complaints and incident AD. In both groups the verbal memory test was the main predictor of AD in multivariate analyses. We suggest performing memory testing in subjects complaining about memory, irrespective of their performance in a screening procedure such as the MMSE.

  10. The increase in medial prefrontal glutamate/glutamine concentration during memory encoding is associated with better memory performance and stronger functional connectivity in the human medial prefrontal–thalamus–hippocampus network

    PubMed Central

    Hong, Donghyun; Rohani Rankouhi, Seyedmorteza; Wiltfang, Jens; Fernández, Guillén; Norris, David G.; Tendolkar, Indira

    2018-01-01

    Abstract The classical model of the declarative memory system describes the hippocampus and its interactions with representational brain areas in posterior neocortex as being essential for the formation of long‐term episodic memories. However, new evidence suggests an extension of this classical model by assigning the medial prefrontal cortex (mPFC) a specific, yet not fully defined role in episodic memory. In this study, we utilized 1H magnetic resonance spectroscopy (MRS) and psychophysiological interaction (PPI) analysis to lend further support for the idea of a mnemonic role of the mPFC in humans. By using MRS, we measured mPFC γ‐aminobutyric acid (GABA) and glutamate/glutamine (GLx) concentrations before and after volunteers memorized face–name association. We demonstrate that mPFC GLx but not GABA levels increased during the memory task, which appeared to be related to memory performance. Regarding functional connectivity, we used the subsequent memory paradigm and found that the GLx increase was associated with stronger mPFC connectivity to thalamus and hippocampus for associations subsequently recognized with high confidence as opposed to subsequently recognized with low confidence/forgotten. Taken together, we provide new evidence for an mPFC involvement in episodic memory by showing a memory‐related increase in mPFC excitatory neurotransmitter levels that was associated with better memory and stronger memory‐related functional connectivity in a medial prefrontal–thalamus–hippocampus network. PMID:29488277

  11. The increase in medial prefrontal glutamate/glutamine concentration during memory encoding is associated with better memory performance and stronger functional connectivity in the human medial prefrontal-thalamus-hippocampus network.

    PubMed

    Thielen, Jan-Willem; Hong, Donghyun; Rohani Rankouhi, Seyedmorteza; Wiltfang, Jens; Fernández, Guillén; Norris, David G; Tendolkar, Indira

    2018-06-01

    The classical model of the declarative memory system describes the hippocampus and its interactions with representational brain areas in posterior neocortex as being essential for the formation of long-term episodic memories. However, new evidence suggests an extension of this classical model by assigning the medial prefrontal cortex (mPFC) a specific, yet not fully defined role in episodic memory. In this study, we utilized 1H magnetic resonance spectroscopy (MRS) and psychophysiological interaction (PPI) analysis to lend further support for the idea of a mnemonic role of the mPFC in humans. By using MRS, we measured mPFC γ-aminobutyric acid (GABA) and glutamate/glutamine (GLx) concentrations before and after volunteers memorized face-name association. We demonstrate that mPFC GLx but not GABA levels increased during the memory task, which appeared to be related to memory performance. Regarding functional connectivity, we used the subsequent memory paradigm and found that the GLx increase was associated with stronger mPFC connectivity to thalamus and hippocampus for associations subsequently recognized with high confidence as opposed to subsequently recognized with low confidence/forgotten. Taken together, we provide new evidence for an mPFC involvement in episodic memory by showing a memory-related increase in mPFC excitatory neurotransmitter levels that was associated with better memory and stronger memory-related functional connectivity in a medial prefrontal-thalamus-hippocampus network. © 2018 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.

  12. Design, Implementation, and Evaluation of a Virtual Shared Memory System in a Multi-Transputer Network.

    DTIC Science & Technology

    1987-12-01

    The system aims at high performance, fault tolerance, and extensibility; these features are attained by synchronizing and coordinating the distributed multicomputer network and all processors in it. In a multi-transputer network, processes that communicate with each other do so synchronously.

  13. Overlap in the functional neural systems involved in semantic and episodic memory retrieval.

    PubMed

    Rajah, M N; McIntosh, A R

    2005-03-01

    Neuroimaging and neuropsychological data suggest that episodic and semantic memory may be mediated by distinct neural systems. However, an alternative perspective is that episodic and semantic memory represent different modes of processing within a single declarative memory system. To examine whether the multiple or the unitary system view better represents the data, we conducted a network analysis using multivariate partial least squares (PLS) activation analysis followed by covariance structural equation modeling (SEM) of positron emission tomography data obtained while healthy adults performed episodic and semantic verbal retrieval tasks. It is argued that if performance of episodic and semantic retrieval tasks is mediated by different memory systems, then there should be differences in both regional activations and interregional correlations related to each type of retrieval task, respectively. The PLS results identified brain regions that were differentially active during episodic retrieval versus semantic retrieval. Regions that showed maximal differences in regional activity between episodic retrieval tasks were used to construct separate functional models for episodic and semantic retrieval. Omnibus tests of these functional models failed to find a significant difference across tasks for both functional models. The pattern of path coefficients for the episodic retrieval model was not different across tasks, nor were the path coefficients for the semantic retrieval model. The SEM results suggest that the same memory network/system was engaged across tasks, given the similarities in path coefficients. Therefore, activation differences between episodic and semantic retrieval may reflect variation along a continuum of processing during task performance within the context of a single memory system.

  14. Task demands moderate stereotype threat effects on memory performance.

    PubMed

    Hess, Thomas M; Emery, Lisa; Queen, Tara L

    2009-06-01

    Previous research has demonstrated that older adults' memory performance is adversely affected by the explicit activation of negative stereotypes about aging. In this study, we examined the impact of stereotype threat on recognition memory, with specific interest in (a) the generalizability of previously observed effects, (b) the subjective experience of memory, and (c) the moderating effects of task demands. Older participants subjected to threat performed worse than did those in a nonthreat condition but only when performance constraints were high (i.e., memory decisions had to be made within a limited time frame). This effect was reflected in the subjective experience of memory, with participants in this condition having a lower ratio of "remember" to "know" responses. The absence of threat effects when constraints were minimal provides important boundary information regarding stereotype influences on memory performance.

  15. Experimental evaluation of shape memory alloy actuation technique in adaptive antenna design concepts

    NASA Astrophysics Data System (ADS)

    Kefauver, W. Neill; Carpenter, Bernie F.

    1994-09-01

    Creation of an antenna system that could autonomously adapt contours of reflecting surfaces to compensate for structural loads induced by a variable environment would maximize performance of space-based communication systems. Design of such a system requires the comprehensive development and integration of advanced actuator, sensor, and control technologies. As an initial step in this process, a test has been performed to assess the use of a shape memory alloy as a potential actuation technique. For this test, an existing offset Cassegrain antenna system was retrofitted with a subreflector equipped with shape memory alloy actuators for surface contour control. The impacts that the actuators had on both the subreflector contour and the antenna system patterns were measured. The results of this study indicate the potential for using shape memory alloy actuation techniques to adaptively control antenna performance; both variations in gain and beam steering capabilities were demonstrated. Future development effort is required to evolve this potential into a useful technology for satellite applications.

  16. Experimental evaluation of shape memory alloy actuation technique in adaptive antenna design concepts

    NASA Technical Reports Server (NTRS)

    Kefauver, W. Neill; Carpenter, Bernie F.

    1994-01-01

    Creation of an antenna system that could autonomously adapt contours of reflecting surfaces to compensate for structural loads induced by a variable environment would maximize performance of space-based communication systems. Design of such a system requires the comprehensive development and integration of advanced actuator, sensor, and control technologies. As an initial step in this process, a test has been performed to assess the use of a shape memory alloy as a potential actuation technique. For this test, an existing offset Cassegrain antenna system was retrofitted with a subreflector equipped with shape memory alloy actuators for surface contour control. The impacts that the actuators had on both the subreflector contour and the antenna system patterns were measured. The results of this study indicate the potential for using shape memory alloy actuation techniques to adaptively control antenna performance; both variations in gain and beam steering capabilities were demonstrated. Future development effort is required to evolve this potential into a useful technology for satellite applications.

  17. New data acquisition system for the focal plane polarimeter of the Grand Raiden spectrometer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tamii, A.; Sakaguchi, H.; Takeda, H.

    1996-10-01

    This paper describes a new data acquisition system for the focal plane polarimeter of the Grand Raiden spectrometer at the Research Center for Nuclear Physics (RCNP) in Osaka, Japan. Data are acquired by a Creative Electronic Systems (CES) Starburst, which is a CAMAC auxiliary crate controller equipped with a Digital Equipment Corporation (DEC) J11 microprocessor. The data on the Starburst are transferred to a VME single-board computer. A VME reflective memory module broadcasts the data to other systems through a fiber-optic link. A data transfer rate of 2.0 Mbytes/s between VME modules has been achieved by the reflective memories. This rate includes the overhead of buffer management. The overall transfer rate, however, is limited by the performance of the Starburst to about 160 Kbytes/s at maximum. In order to further improve the system performance, the authors developed a new readout module called the Rapid Data Transfer Module (RDTM). RDTMs transfer data from LeCroy PCOS IIIs or 4298s, and FERA/FERETs, directly to CES 8170 High Speed Memories (HSM) in VME crates. The data transfer rate of the RDTM from PCOS IIIs to the HSM is about 4 Mbytes/s.

  18. A Hybrid Task Graph Scheduler for High Performance Image Processing Workflows.

    PubMed

    Blattner, Timothy; Keyrouz, Walid; Bhattacharyya, Shuvra S; Halem, Milton; Brady, Mary

    2017-12-01

    Designing applications for scalability is key to improving their performance in hybrid and cluster computing. Scheduling code to utilize parallelism is difficult, particularly when dealing with data dependencies, memory management, data motion, and processor occupancy. The Hybrid Task Graph Scheduler (HTGS) is an abstract execution model, framework, and API that improves programmer productivity when implementing hybrid workflows for multi-core and multi-GPU systems. HTGS manages dependencies between tasks, represents CPU and GPU memories independently, overlaps computations with disk I/O and memory transfers, keeps multiple GPUs occupied, and uses all available compute resources. Through these abstractions, data motion and memory are explicit; this makes data locality decisions more accessible. To demonstrate the HTGS application program interface (API), we present implementations of two example algorithms: (1) a matrix multiplication that shows how easily task graphs can be used; and (2) a hybrid implementation of microscopy image stitching that reduces code size by ≈ 43% compared to a manually coded hybrid workflow implementation and showcases the minimal overhead of task graphs in HTGS. Both of the HTGS-based implementations show good performance. In image stitching, the HTGS implementation achieves similar performance to the hybrid workflow implementation. Matrix multiplication with HTGS achieves 1.3× and 1.8× speedup over the multi-threaded OpenBLAS library for 16k × 16k and 32k × 32k size matrices, respectively.
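
    The abstract does not reproduce the HTGS API, so the sketch below only illustrates the general idea of a task graph whose stages stream data through queues, letting I/O and computation overlap. It is written in Python with hypothetical names and is not the actual HTGS interface.

        import queue
        import threading

        def stage(fn, inq, outq):
            # Each stage runs in its own thread and streams items downstream,
            # so reading, computing, and result handling overlap in time.
            while True:
                item = inq.get()
                if item is None:          # poison pill: propagate and stop
                    if outq is not None:
                        outq.put(None)
                    break
                if outq is not None:
                    outq.put(fn(item))

        read_q, work_q, done_q = queue.Queue(), queue.Queue(), queue.Queue()
        threads = [
            threading.Thread(target=stage, args=(lambda name: "pixels of " + name, read_q, work_q)),
            threading.Thread(target=stage, args=(lambda data: data.upper(), work_q, done_q)),
        ]
        for t in threads:
            t.start()
        for tile in ["tile0", "tile1", "tile2"]:
            read_q.put(tile)              # simulated image tiles entering the graph
        read_q.put(None)
        for t in threads:
            t.join()
        while not done_q.empty():
            result = done_q.get()
            if result is not None:
                print(result)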

  19. Performance Study of the First 2D Prototype of Vertically Integrated Pattern Recognition Associative Memory (VIPRAM)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deptuch, Gregory; Hoff, James; Jindariani, Sergo

    Extremely fast pattern recognition capabilities are necessary to find and fit the billions of tracks produced every second at the hardware trigger level under the anticipated high-luminosity LHC (HL-LHC) running conditions. Associative Memory (AM) based approaches for fast pattern recognition have been proposed as a potential solution to the tracking trigger. However, at the HL-LHC, there is much less time available, and speed performance must be improved over previous systems while maintaining a comparable number of patterns. The Vertically Integrated Pattern Recognition Associative Memory (VIPRAM) Project aims to achieve the target pattern density and performance goal using 3DIC technology. The first step taken in the VIPRAM work was the development of a 2D prototype (protoVIPRAM00) in which the associative memory building blocks were designed to be compatible with 3D integration. In this paper, we present the results from extensive performance studies of the protoVIPRAM00 chip in both realistic HL-LHC and extreme conditions. Results indicate that the chip operates at the design frequency of 100 MHz with perfect correctness in realistic conditions, and we conclude that the building blocks are ready for 3D stacking. We also present a performance boundary characterization of the chip under extreme conditions.

  20. MemAxes: Visualization and Analytics for Characterizing Complex Memory Performance Behaviors.

    PubMed

    Gimenez, Alfredo; Gamblin, Todd; Jusufi, Ilir; Bhatele, Abhinav; Schulz, Martin; Bremer, Peer-Timo; Hamann, Bernd

    2018-07-01

    Memory performance is often a major bottleneck for high-performance computing (HPC) applications. Deepening memory hierarchies, complex memory management, and non-uniform access times have made memory performance behavior difficult to characterize, and users require novel, sophisticated tools to analyze and optimize this aspect of their codes. Existing tools target only specific factors of memory performance, such as hardware layout, allocations, or access instructions. However, today's tools do not suffice to characterize the complex relationships between these factors. Further, they require advanced expertise to be used effectively. We present MemAxes, a tool based on a novel approach for analytic-driven visualization of memory performance data. MemAxes uniquely allows users to analyze the different aspects related to memory performance by providing multiple visual contexts for a centralized dataset. We define mappings of sampled memory access data to new and existing visual metaphors, each of which enables a user to perform different analysis tasks. We present methods to guide user interaction by scoring subsets of the data based on known performance problems. This scoring is used to provide visual cues and automatically extract clusters of interest. We designed MemAxes in collaboration with experts in HPC and demonstrate its effectiveness in case studies.

  1. Strategy use fully mediates the relationship between working memory capacity and performance on Raven's matrices.

    PubMed

    Gonthier, Corentin; Thomassin, Noémylle

    2015-10-01

    Working memory capacity consistently correlates with fluid intelligence. It has been suggested that this relationship is partly attributable to strategy use: Participants with high working memory capacity would use more effective strategies, in turn leading to higher performance on fluid intelligence tasks. However, this idea has never been directly investigated. In 2 experiments, we tested this hypothesis by directly manipulating strategy use in a combined experimental-correlational approach (Experiment 1; N = 250) and by measuring strategy use with a self-report questionnaire (Experiment 2; N = 93). Inducing all participants to use an effective strategy in Raven's matrices decreased the correlation between working memory capacity and performance; the strategy use measure fully mediated the relationship between working memory capacity and performance on the matrices task. These findings indicate that individual differences in strategic behavior drive the predictive utility of working memory. We interpret the results within a theoretical framework integrating the multiple mediators of the relationship between working memory capacity and high-level cognition. (c) 2015 APA, all rights reserved.

  2. Accounting for Change in Declarative Memory: A Cognitive Neuroscience Perspective

    ERIC Educational Resources Information Center

    Richmond, Jenny; Nelson, Charles A.

    2007-01-01

    The medial temporal lobe memory system matures relatively early and supports rudimentary declarative memory in young infants. There is considerable development, however, in the memory processes that underlie declarative memory performance during infancy. Here we consider age-related changes in encoding, retention, and retrieval in the context of…

  3. Spontaneous ripples in the hippocampus correlate with epileptogenicity and not memory function in patients with refractory epilepsy.

    PubMed

    Jacobs, Julia; Banks, Sarah; Zelmann, Rina; Zijlmans, Maeike; Jones-Gotman, Marilyn; Gotman, Jean

    2016-09-01

    High-frequency oscillations (HFOs, 80-500Hz) are newly-described EEG markers of epileptogenicity. The proportion of physiological and pathological HFOs is unclear, as frequency analysis is insufficient for separating the two types of events. For instance, ripples (80-250Hz) also occur physiologically during memory consolidation processes in medial temporal lobe structures. We investigated the correlation between HFO rates and memory performance. Patients investigated with bilateral medial temporal electrodes and an intellectual capacity allowing for memory testing were included. High-frequency oscillations were visually marked, and rates of HFOs were calculated for each channel during slow-wave sleep. Patients underwent three verbal and three nonverbal memory tests. They were grouped into severe impairment, some impairment, mostly intact, or intact for verbal and nonverbal memory. We calculated a Pearson correlation between HFO rates in the hippocampi and the memory category and compared HFO rates in each hippocampus with the corresponding (verbal - left, nonverbal - right) memory result using Wilcoxon rank-sum test. Twenty patients were included; ten had bilateral, five had unilateral, and five had no memory impairment. Unilateral memory impairment was verbal in one patient and nonverbal in four. There was no correlation between HFO rates and memory performance in seizure onset areas. There was, however, a significant negative correlation between the overall memory performance and ripple rates (r=-0.50, p=0.03) outside the seizure onset zone. Our results suggest that the majority of spontaneous hippocampal ripples, as defined in the present study, may reflect pathological activity, taking into account the association with memory impairment. The absence of negative correlation between memory performance and HFO rates in seizure onset areas could be explained by HFO rates in the SOZ being generally so high that differences between areas with remaining and impaired memory function cannot be seen. Copyright © 2016 Elsevier Inc. All rights reserved.

  4. Digital MOS integrated circuits

    NASA Astrophysics Data System (ADS)

    Elmasry, M. I.

    MOS in digital circuit design is considered along with aspects of digital VLSI, taking into account a comparison of MOSFET logic circuits, 1-micrometer MOSFET VLSI technology, a generalized guide for MOSFET miniaturization, processing technologies, novel circuit structures for VLSI, and questions of circuit and system design for VLSI. MOS memory cells and circuits are discussed, giving attention to a survey of high-density dynamic RAM cell concepts, one-device cells for dynamic random-access memories, variable-resistance polysilicon for high-density CMOS RAM, high-performance MOS EPROMs using a stacked-gate cell, and the optimization of the latching pulse for dynamic flip-flop sensors. Programmable logic arrays are considered along with digital signal processors, microprocessors, static RAMs, and dynamic RAMs.

  5. The Development of Attention Systems and Working Memory in Infancy

    PubMed Central

    Reynolds, Greg D.; Romano, Alexandra C.

    2016-01-01

    In this article, we review research and theory on the development of attention and working memory in infancy using a developmental cognitive neuroscience framework. We begin with a review of studies examining the influence of attention on neural and behavioral correlates of an earlier developing and closely related form of memory (i.e., recognition memory). Findings from studies measuring attention utilizing looking measures, heart rate, and event-related potentials (ERPs) indicate significant developmental change in sustained and selective attention across the infancy period. For example, infants show gains in the magnitude of the attention related response and spend a greater proportion of time engaged in attention with increasing age (Richards and Turner, 2001). Throughout infancy, attention has a significant impact on infant performance on a variety of tasks tapping into recognition memory; however, this approach to examining the influence of infant attention on memory performance has yet to be utilized in research on working memory. In the second half of the article, we review research on working memory in infancy focusing on studies that provide insight into the developmental timing of significant gains in working memory as well as research and theory related to neural systems potentially involved in working memory in early development. We also examine issues related to measuring and distinguishing between working memory and recognition memory in infancy. To conclude, we discuss relations between the development of attention systems and working memory. PMID:26973473

  6. The Development of Attention Systems and Working Memory in Infancy.

    PubMed

    Reynolds, Greg D; Romano, Alexandra C

    2016-01-01

    In this article, we review research and theory on the development of attention and working memory in infancy using a developmental cognitive neuroscience framework. We begin with a review of studies examining the influence of attention on neural and behavioral correlates of an earlier developing and closely related form of memory (i.e., recognition memory). Findings from studies measuring attention utilizing looking measures, heart rate, and event-related potentials (ERPs) indicate significant developmental change in sustained and selective attention across the infancy period. For example, infants show gains in the magnitude of the attention related response and spend a greater proportion of time engaged in attention with increasing age (Richards and Turner, 2001). Throughout infancy, attention has a significant impact on infant performance on a variety of tasks tapping into recognition memory; however, this approach to examining the influence of infant attention on memory performance has yet to be utilized in research on working memory. In the second half of the article, we review research on working memory in infancy focusing on studies that provide insight into the developmental timing of significant gains in working memory as well as research and theory related to neural systems potentially involved in working memory in early development. We also examine issues related to measuring and distinguishing between working memory and recognition memory in infancy. To conclude, we discuss relations between the development of attention systems and working memory.

  7. Two Unipolar Terminal-Attractor-Based Associative Memories

    NASA Technical Reports Server (NTRS)

    Liu, Hua-Kuang; Wu, Chwan-Hwa

    1995-01-01

    Two unipolar mathematical models of an electronic neural network functioning as a terminal-attractor-based associative memory (TABAM) have been developed. The models comprise sets of equations describing interactions between the time-varying inputs and outputs of the neural-network memory, regarded as a dynamical system. They simplify the design and operation of an optoelectronic processor implementing a TABAM that performs associative recall of images. The TABAM concept is described in "Optoelectronic Terminal-Attractor-Based Associative Memory" (NPO-18790). An experimental optoelectronic apparatus that performed associative recall of binary images is described in "Optoelectronic Inner-Product Neural Associative Memory" (NPO-18491).
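
    The TABAM equations themselves are not given in this brief; as a generic illustration of associative recall in a discrete neural-network memory, a conventional Hopfield-style sketch (Python/NumPy, not the terminal-attractor formulation) could look like this:

        import numpy as np

        def store(patterns):
            # Hebbian outer-product storage of bipolar (+1/-1) patterns.
            n = patterns.shape[1]
            w = sum(np.outer(p, p) for p in patterns).astype(float)
            np.fill_diagonal(w, 0.0)
            return w / n

        def recall(w, probe, steps=10):
            s = probe.copy()
            for _ in range(steps):        # synchronous updates toward a stored attractor
                s = np.where(w @ s >= 0, 1, -1)
            return s

        patterns = np.array([[1, -1, 1, -1, 1, -1], [1, 1, 1, -1, -1, -1]])
        w = store(patterns)
        noisy = np.array([1, -1, 1, -1, 1, 1])   # first pattern with one flipped bit
        print(recall(w, noisy))                  # expected to settle near the first stored pattern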

  8. Set-relevance determines the impact of distractors on episodic memory retrieval.

    PubMed

    Kwok, Sze Chai; Shallice, Tim; Macaluso, Emiliano

    2014-09-01

    We investigated the interplay between stimulus-driven attention and memory retrieval with a novel interference paradigm that engaged both systems concurrently on each trial. Participants encoded a 45-min movie on Day 1 and, on Day 2, performed a temporal order judgment task during fMRI. Each retrieval trial comprised three images presented sequentially, and the task required participants to judge the temporal order of the first and the last images ("memory probes") while ignoring the second image, which was task irrelevant ("attention distractor"). We manipulated the content relatedness and the temporal proximity between the distractor and the memory probes, as well as the temporal distance between two probes. Behaviorally, short temporal distances between the probes led to reduced retrieval performance. Distractors that at encoding were temporally close to the first probe image reduced these costs, specifically when the distractor was content unrelated to the memory probes. The imaging results associated the distractor probe temporal proximity with activation of the right ventral attention network. By contrast, the precuneus was activated for high-content relatedness between distractors and probes and in trials including a short distance between the two memory probes. The engagement of the right ventral attention network by specific types of distractors suggests a link between stimulus-driven attention control and episodic memory retrieval, whereas the activation pattern of the precuneus implicates this region in memory search within knowledge/content-based hierarchies.

  9. MOBS - A modular on-board switching system

    NASA Astrophysics Data System (ADS)

    Berner, W.; Grassmann, W.; Piontek, M.

    The authors describe a multibeam satellite system that is designed for business services and for communications at a high bit rate. The repeater is regenerative, with a modular onboard switching system. It acts not only as a baseband switch but also as the central node of the network, performing network control and protocol evaluation. The hardware is based on a modular bus/memory architecture with associated processors.

  10. Efficient Graph Based Assembly of Short-Read Sequences on Hybrid Core Architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sczyrba, Alex; Pratap, Abhishek; Canon, Shane

    2011-03-22

    Advanced architectures can deliver dramatically increased throughput for genomics and proteomics applications, reducing time-to-completion in some cases from days to minutes. One such architecture, hybrid-core computing, marries a traditional x86 environment with a reconfigurable coprocessor based on field programmable gate array (FPGA) technology. In addition to higher throughput, increased performance can fundamentally improve research quality by allowing more accurate, previously impractical approaches. We will discuss the approach used by Convey's de Bruijn graph constructor for short-read, de-novo assembly. Bioinformatics applications that have random access patterns to large memory spaces, such as graph-based algorithms, experience memory performance limitations on cache-based x86 servers. Convey's highly parallel memory subsystem allows application-specific logic to simultaneously access 8192 individual words in memory, significantly increasing effective memory bandwidth over cache-based memory systems. Many algorithms, such as Velvet and other de Bruijn graph based, short-read, de-novo assemblers, can greatly benefit from this type of memory architecture. Furthermore, small data type operations (four nucleotides can be represented in two bits) make more efficient use of logic gates than the data types dictated by conventional programming models. JGI is comparing the performance of Convey's graph constructor and Velvet on both synthetic and real data. We will present preliminary results on memory usage and run time metrics for various data sets of different sizes, from small microbial and fungal genomes to a very large cow rumen metagenome. For genomes with references we will also present assembly quality comparisons between the two assemblers.
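
    The remark that four nucleotides fit in two bits can be made concrete with a small packing sketch. Python is used here purely for illustration; the actual Convey design implements this kind of packing in FPGA logic.

        # Pack a DNA string into an integer using 2 bits per base: A=00, C=01, G=10, T=11.
        CODE = {"A": 0, "C": 1, "G": 2, "T": 3}

        def pack(seq):
            value = 0
            for base in seq:
                value = (value << 2) | CODE[base]   # shift in two bits per nucleotide
            return value

        def unpack(value, length):
            bases = "ACGT"
            out = []
            for _ in range(length):
                out.append(bases[value & 0b11])
                value >>= 2
            return "".join(reversed(out))

        kmer = "GATTACA"
        packed = pack(kmer)                         # 7 bases -> 14 bits instead of 7 bytes
        assert unpack(packed, len(kmer)) == kmer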

  11. Memory Systems Do Not Divide on Consciousness: Reinterpreting Memory in Terms of Activation and Binding

    PubMed Central

    Reder, Lynne M.; Park, Heekyeong; Kieffaber, Paul D.

    2009-01-01

    There is a popular hypothesis that performance on implicit and explicit memory tasks reflects 2 distinct memory systems. Explicit memory is said to store those experiences that can be consciously recollected, and implicit memory is said to store experiences and affect subsequent behavior but to be unavailable to conscious awareness. Although this division based on awareness is a useful taxonomy for memory tasks, the authors review the evidence that the unconscious character of implicit memory does not necessitate that it be treated as a separate system of human memory. They also argue that some implicit and explicit memory tasks share the same memory representations and that the important distinction is whether the task (implicit or explicit) requires the formation of a new association. The authors review and critique dissociations from the behavioral, amnesia, and neuroimaging literatures that have been advanced in support of separate explicit and implicit memory systems by highlighting contradictory evidence and by illustrating how the data can be accounted for using a simple computational memory model that assumes the same memory representation for those disparate tasks. PMID:19210052

  12. The Effects of Working Memory on Brain-Computer Interface Performance

    PubMed Central

    Sprague, Samantha A.; McBee, Matthew; Sellers, Eric W.

    2015-01-01

    Objective: The purpose of the present study is to evaluate the relationship between working memory and BCI performance. Methods: Participants took part in two separate sessions. The first session consisted of three computerized tasks. The LSWM was used to measure working memory, the TPVT was used to measure general intelligence, and the DCCS was used to measure executive function, specifically cognitive flexibility. The second session consisted of a P300-based BCI copy-spelling task. Results: The results indicate that both working memory and general intelligence are significant predictors of BCI performance. Conclusions: This suggests that working memory training could be used to improve performance on a BCI task. Significance: Working memory training may help to reduce a portion of the individual differences that exist in BCI performance, allowing for a wider range of users to successfully operate the BCI system as well as increase the BCI performance of current users. PMID:26620822

  13. Effects of Ordering Strategies and Programming Paradigms on Sparse Matrix Computations

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid; Li, Xiaoye; Husbands, Parry; Biswas, Rupak; Biegel, Bryan (Technical Monitor)

    2002-01-01

    The Conjugate Gradient (CG) algorithm is perhaps the best-known iterative technique to solve sparse linear systems that are symmetric and positive definite. For systems that are ill-conditioned, it is often necessary to use a preconditioning technique. In this paper, we investigate the effects of various ordering and partitioning strategies on the performance of parallel CG and ILU(0)-preconditioned CG (PCG) using different programming paradigms and architectures. Results show that, for this class of applications: ordering significantly improves overall performance on both distributed and distributed shared-memory systems; cache reuse may be more important than reducing communication; it is possible to achieve message-passing performance using shared-memory constructs through careful data ordering and distribution; and a hybrid MPI+OpenMP paradigm increases programming complexity with little performance gain. An implementation of CG on the Cray MTA does not require special ordering or partitioning to obtain high efficiency and scalability, giving it a distinct advantage for adaptive applications; however, it shows limited scalability for PCG due to a lack of thread-level parallelism.
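
    For reference, the kernel being parallelized is the textbook unpreconditioned CG iteration, sketched below in Python/NumPy; this is not the papers' parallel implementation, and the small test matrix is illustrative.

        import numpy as np

        def conjugate_gradient(a, b, tol=1e-10, max_iter=1000):
            # Solve a @ x = b for symmetric positive-definite a.
            x = np.zeros_like(b)
            r = b - a @ x                 # residual
            p = r.copy()                  # search direction
            rs = r @ r
            for _ in range(max_iter):
                ap = a @ p
                alpha = rs / (p @ ap)
                x += alpha * p
                r -= alpha * ap
                rs_new = r @ r
                if np.sqrt(rs_new) < tol:
                    break
                p = r + (rs_new / rs) * p
                rs = rs_new
            return x

        a = np.array([[4.0, 1.0], [1.0, 3.0]])
        b = np.array([1.0, 2.0])
        print(conjugate_gradient(a, b))   # ~ [0.0909, 0.6364]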

  14. Ferroelectric memory evaluation and development system

    NASA Astrophysics Data System (ADS)

    Bondurant, David W.

    Attention is given to the Ramtron FEDS-1, an IBM PC/AT compatible single-board 16-b microcomputer with 8-kbyte program/data memory implemented with nonvolatile ferroelectric dynamic RAM. This is the first demonstration of a new type of solid state nonvolatile read/write memory, the ferroelectric RAM (FRAM). It is suggested that this memory technology will have a significant impact on avionics system performance and reliability.

  15. Ultranarrow Optical Inhomogeneous Linewidth in a Stoichiometric Rare-Earth Crystal.

    PubMed

    Ahlefeldt, R L; Hush, M R; Sellars, M J

    2016-12-16

    We obtain a low optical inhomogeneous linewidth of 25 MHz in the stoichiometric rare-earth crystal EuCl_{3}·6H_{2}O by isotopically purifying the crystal in ^{35}Cl. With this linewidth, an important limit for stoichiometric rare-earth crystals is surpassed: the hyperfine structure of ^{153}Eu is spectrally resolved, allowing the whole population of ^{153}Eu^{3+} ions to be prepared in the same hyperfine state using hole-burning techniques. This material also has a very high optical density, and can have long coherence times when deuterated. This combination of properties offers new prospects for quantum information applications. We consider two of these: quantum memories and quantum many-body studies. We detail the improvements in the performance of current memory protocols possible in these high optical depth crystals, and describe how certain memory protocols, such as off-resonant Raman memories, can be implemented for the first time in a solid-state system. We explain how the strong excitation-induced interactions observed in this material resemble those seen in Rydberg systems, and describe how these interactions can lead to quantum many-body states that could be observed using standard optical spectroscopy techniques.

  16. Memory recall in arousing situations - an emotional von Restorff effect?

    PubMed

    Wiswede, Daniel; Rüsseler, Jascha; Hasselbach, Simone; Münte, Thomas F

    2006-07-24

    Previous research has demonstrated a relationship between memory recall and P300 amplitude in list learning tasks, but the variables mediating this P300-recall relationship are not well understood. In the present study, subjects were required to recall items from lists consisting of 12 words, which were presented in front of pictures taken from the IAPS collection. One word per list is made distinct either by font color or by a highly arousing background IAPS picture. This isolation procedure was first used by von Restorff. Brain potentials were recorded during list presentation. Recall performance was enhanced for color but not for emotional isolates. Event-related brain potentials (ERP) showed a more positive P300-component for recalled non-isolated words and color-isolated words, compared to the respective non-remembered words, but not for words isolated by arousing background. Our findings indicate that it is crucial to take emotional mediator variables into account, when using the P300 to predict later recall. Highly arousing environments might force the cognitive system to interrupt rehearsal processes in working memory, which might benefit transfer into other, more stable memory systems. The impact of attention-capturing properties of arousing background stimuli is also discussed.

  17. Toxoplasma gondii impairs memory in infected seniors.

    PubMed

    Gajewski, Patrick D; Falkenstein, Michael; Hengstler, Jan G; Golka, Klaus

    2014-02-01

    Almost 30% of humans present a Toxoplasma gondii positive antibody status and its prevalence increases with age. The central nervous system is the main target. However, little is known about the influence of asymptomatic i.e. latent Toxoplasmosis on cognitive functions in humans. To investigate neurocognitive dysfunctions in asymptomatic older adults with T. gondii positive antibody status a double-blinded neuropsychological study was conducted. The participants were classified from a population-based sample (N=131) of healthy participants with an age of 65 years and older into two groups with 42 individuals each: Toxoplasmosis positive (T-pos; IgG>50 IU/ml) and Toxoplasmosis negative (T-neg; IgG=0 IU/ml). The outcome measures were a computer-based working-memory test (2-back) and several standardized psychometric tests of memory and executive cognitive functions. T-pos seniors showed an impairment of different aspects of memory. The rate of correctly detected target symbols in a 2-back task was decreased by nearly 9% (P=0.020), corresponding to a performance reduction of about 35% in working memory relative to the T-neg group. Moreover, T-pos seniors had a lower performance in a verbal memory test, both regarding immediate recall (10% reduction; P=0.022), delayed recognition (6%; P=0.037) and recall from long-term memory assessed by the word fluency tests (12%; P=0.029). In contrast, executive functions were not affected. The effects remained mostly unchanged after controlling for medication. The impairment of memory functions in T-pos seniors was accompanied by a decreased self-reported quality of life. Because of the high prevalence of asymptomatic Toxoplasmosis and an increasing population of older adults this finding is of high relevance for public health. Copyright © 2013 Elsevier Inc. All rights reserved.

  18. Investigation of Hafnium oxide/Copper resistive memory for advanced encryption applications

    NASA Astrophysics Data System (ADS)

    Briggs, Benjamin D.

    The Advanced Encryption Standard (AES) is a widely used encryption algorithm to protect data and communications in today's digital age. Modern AES CMOS implementations require large amounts of dedicated logic and must be tuned for either performance or power consumption. A high-throughput, low-power, and low-die-area AES implementation is required in the growing mobile sector. An emerging non-volatile memory device known as resistive memory (ReRAM) is a simple metal-insulator-metal capacitor device structure with the ability to switch between two stable resistance states. Currently, ReRAM is targeted as a non-volatile memory replacement technology to eventually replace flash. Its advantages over flash include ease of fabrication, speed, and lower power consumption. In addition to memory, ReRAM can also be used in advanced logic implementations given its purely resistive behavior. The combination of a new non-volatile memory element, ReRAM, along with high-performance, low-power CMOS opens new avenues for logic implementations. This dissertation will cover the design and process implementation of a ReRAM-CMOS hybrid circuit, built using IBM's 10LPe process, for the improvement of hardware AES implementations. Further, the device characteristics of ReRAM, specifically the HfO2/Cu memory system, and the mechanisms of operation are not fully correlated. Of particular interest to this work is the role of material properties such as the stoichiometry, crystallinity, and doping of the HfO2 layer and their effect on the switching characteristics of resistive memory. Material properties were varied by a combination of atomic layer deposition and reactive sputtering of the HfO2 layer. Several studies will be discussed on how the above-mentioned material properties influence switching parameters and change the underlying physics of device operation.

  19. From Brown-Peterson to continual distractor via operation span: A SIMPLE account of complex span.

    PubMed

    Neath, Ian; VanWormer, Lisa A; Bireta, Tamra J; Surprenant, Aimée M

    2014-09-01

    Three memory tasks (Brown-Peterson, complex span, and continual distractor) all alternate presentation of a to-be-remembered item and a distractor activity, but each task is associated with a different memory system: short-term memory, working memory, and long-term memory, respectively. SIMPLE, a relative local distinctiveness model, has previously been fit to data from both the Brown-Peterson and continual distractor tasks; here we use the same version of the model to fit data from a complex span task. Despite the many differences between the tasks, including unpredictable list length, SIMPLE fit the data well. Because SIMPLE posits a single memory system, these results constitute yet another demonstration that performance on tasks originally thought to tap different memory systems can be explained without invoking multiple memory systems.
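
    As a rough illustration of how SIMPLE treats all three tasks alike, the sketch below computes relative distinctiveness from log-transformed retention intervals. The exponential similarity form, the parameter value, and the example delays are assumptions for illustration only, not the fits reported in the paper.

        import numpy as np

        def simple_discriminability(times_since_presentation, c=10.0):
            # Each item is located on a log-transformed temporal dimension;
            # similarity between items falls off exponentially with distance,
            # and an item's discriminability is its self-similarity relative to
            # its summed similarity to all items (a Luce-choice-style ratio).
            m = np.log(np.asarray(times_since_presentation, dtype=float))
            sim = np.exp(-c * np.abs(m[:, None] - m[None, :]))
            return sim.diagonal() / sim.sum(axis=1)

        # Six list items retrieved after a filled delay: items presented long ago
        # are compressed together on the log scale, so the most recent items
        # remain the most distinctive (a recency effect).
        delays = [32, 27, 22, 17, 12, 7]   # seconds since each item's presentation
        print(np.round(simple_discriminability(delays), 3))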

  20. An Ideal Observer Analysis of Visual Working Memory

    ERIC Educational Resources Information Center

    Sims, Chris R.; Jacobs, Robert A.; Knill, David C.

    2012-01-01

    Limits in visual working memory (VWM) strongly constrain human performance across many tasks. However, the nature of these limits is not well understood. In this article we develop an ideal observer analysis of human VWM by deriving the expected behavior of an optimally performing but limited-capacity memory system. This analysis is framed around…

  1. Working Memory Load and Reminder Effect on Event-Based Prospective Memory of High- and Low-Achieving Students in Math.

    PubMed

    Chen, Youzhen; Lian, Rong; Yang, Lixian; Liu, Jianrong; Meng, Yingfang

    The effects of working memory (WM) demand and reminders on an event-based prospective memory (PM) task were compared between students with low and high achievement in math. WM load (1- and 2-back tasks) was manipulated as a within-subject factor and reminder (with or without reminder) as a between-subject factor. Results showed that high-achieving students outperformed low-achieving students on all PM and n-back tasks. Use of a reminder improved PM performance and thus reduced prospective interference; the performance of ongoing tasks also improved for all students. Both PM and n-back performances in low WM load were better than in high WM load. High WM load had more influence on low-achieving students than on high-achieving students. Results suggest that low-achieving students in math were weak at PM and influenced more by high WM load. Thus, it is important to train these students to set up an obvious reminder for their PM and improve their WM.

  2. Kmerind: A Flexible Parallel Library for K-mer Indexing of Biological Sequences on Distributed Memory Systems.

    PubMed

    Pan, Tony; Flick, Patrick; Jain, Chirag; Liu, Yongchao; Aluru, Srinivas

    2017-10-09

    Counting and indexing fixed length substrings, or k-mers, in biological sequences is a key step in many bioinformatics tasks including genome alignment and mapping, genome assembly, and error correction. While advances in next generation sequencing technologies have dramatically reduced the cost and improved latency and throughput, few bioinformatics tools can efficiently process the datasets at the current generation rate of 1.8 terabases every 3 days. We present Kmerind, a high performance parallel k-mer indexing library for distributed memory environments. The Kmerind library provides a set of simple and consistent APIs with sequential semantics and parallel implementations that are designed to be flexible and extensible. Kmerind's k-mer counter performs similarly to or better than the best existing k-mer counting tools, even on shared memory systems. In a distributed memory environment, Kmerind counts k-mers in a 120 GB sequence read dataset in less than 13 seconds on 1024 Xeon CPU cores, and fully indexes their positions in approximately 17 seconds. Querying for 1% of the k-mers in these indices can be completed in 0.23 seconds and 28 seconds, respectively. Kmerind is the first k-mer indexing library for distributed memory environments, and the first extensible library for general k-mer indexing and counting. Kmerind is available at https://github.com/ParBLiSS/kmerind.
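
    Kmerind's own API is not reproduced here; the operation it parallelizes can be sketched on a single node as a plain k-mer counting loop (Python, illustrative reads only):

        from collections import Counter

        def count_kmers(reads, k):
            # Slide a window of length k over every read and tally each substring.
            counts = Counter()
            for read in reads:
                for i in range(len(read) - k + 1):
                    counts[read[i:i + k]] += 1
            return counts

        reads = ["ACGTACGT", "CGTACGTA"]
        print(count_kmers(reads, 4).most_common(3))
        # A distributed version would hash each k-mer to an owner rank and
        # exchange partial counts, which is the communication Kmerind optimizes.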

  3. The effects of working memory on brain-computer interface performance.

    PubMed

    Sprague, Samantha A; McBee, Matthew T; Sellers, Eric W

    2016-02-01

    The purpose of the present study is to evaluate the relationship between working memory and BCI performance. Participants took part in two separate sessions. The first session consisted of three computerized tasks. The List Sorting Working Memory Task was used to measure working memory, the Picture Vocabulary Test was used to measure general intelligence, and the Dimensional Change Card Sort Test was used to measure executive function, specifically cognitive flexibility. The second session consisted of a P300-based BCI copy-spelling task. The results indicate that both working memory and general intelligence are significant predictors of BCI performance. This suggests that working memory training could be used to improve performance on a BCI task. Working memory training may help to reduce a portion of the individual differences that exist in BCI performance allowing for a wider range of users to successfully operate the BCI system as well as increase the BCI performance of current users. Copyright © 2015 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  4. A Bandwidth-Optimized Multi-Core Architecture for Irregular Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Secchi, Simone; Tumeo, Antonino; Villa, Oreste

    This paper presents an architecture template for next-generation high performance computing systems specifically targeted to irregular applications. We start from the expectation that full-system interconnection and memory bandwidth will grow by a factor of 10 in future generations. In order to keep up with such a communication capacity, while still resorting to fine-grained multithreading as the main way to tolerate the unpredictable memory access latencies of irregular applications, we show how overall performance scaling can benefit from the multi-core paradigm. At the same time, we also show how such an architecture template must be coupled with specific techniques in order to optimize bandwidth utilization and achieve the maximum scalability. We propose a technique based on memory reference aggregation, together with the related hardware implementation, as one such optimization technique. We explore the proposed architecture template by focusing on the Cray XMT architecture and, using a dedicated simulation infrastructure, validate the performance of our template with two typical irregular applications. Our experimental results prove the benefits provided by both the multi-core approach and the bandwidth-optimizing reference aggregation technique.
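
    The hardware aggregation logic itself is not detailed in this record; the underlying idea of coalescing fine-grained references into wider memory transactions can be sketched in software as below, where the 64-byte line size and the addresses are illustrative assumptions.

        from collections import defaultdict

        LINE_BYTES = 64          # assumed memory-transaction granularity

        def aggregate(addresses):
            # Group outstanding word references by the memory line they fall in,
            # so one wide request replaces many narrow ones.
            lines = defaultdict(list)
            for addr in addresses:
                lines[addr // LINE_BYTES * LINE_BYTES].append(addr)
            return lines

        refs = [0x1000, 0x1008, 0x1038, 0x2040, 0x2048]
        for base, words in aggregate(refs).items():
            print(hex(base), "->", [hex(a) for a in words])
        # 0x1000-0x1038 collapse into one 64-byte request; 0x2040/0x2048 into another.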

  5. A wide bandwidth CCD buffer memory system

    NASA Technical Reports Server (NTRS)

    Siemens, K.; Wallace, R. W.; Robinson, C. R.

    1978-01-01

    A prototype system was implemented to demonstrate that CCD's can be applied advantageously to the problem of low power digital storage and particularly to the problem of interfacing widely varying data rates. CCD shift register memories (8K bit) were used to construct a feasibility model 128 K-bit buffer memory system. Serial data that can have rates between 150 kHz and 4.0 MHz can be stored in 4K-bit, randomly-accessible memory blocks. Peak power dissipation during a data transfer is less than 7 W, while idle power is approximately 5.4 W. The system features automatic data input synchronization with the recirculating CCD memory block start address. System expansion to accommodate parallel inputs or a greater number of memory blocks can be performed in a modular fashion. Since the control logic does not increase proportionally to increase in memory capacity, the power requirements per bit of storage can be reduced significantly in a larger system.

  6. Consistency across Repeated Eyewitness Interviews: Contrasting Police Detectives’ Beliefs with Actual Eyewitness Performance

    PubMed Central

    Krix, Alana C.; Sauerland, Melanie; Lorei, Clemens; Rispens, Imke

    2015-01-01

    In the legal system, inconsistencies in eyewitness accounts are often used to discredit witnesses’ credibility. This is at odds with research findings showing that witnesses frequently report reminiscent details (details previously unrecalled) at an accuracy rate that is nearly as high as for consistently recalled information. The present study sought to put the validity of beliefs about recall consistency to a test by directly comparing them with actual memory performance in two recall attempts. All participants watched a film of a staged theft. Subsequently, the memory group (N = 84) provided one statement immediately after the film (either with the Self-Administered Interview or free recall) and one after a one-week delay. The estimation group (N = 81) consisting of experienced police detectives estimated the recall performance of the memory group. The results showed that actual recall performance was consistently underestimated. Also, a sharp decline of memory performance between recall attempts was assumed by the estimation group whereas actual accuracy remained stable. While reminiscent details were almost as accurate as consistent details, they were estimated to be much less accurate than consistent information and as inaccurate as direct contradictions. The police detectives expressed a great concern that reminiscence was the result of suggestive external influences. In conclusion, it seems that experienced police detectives hold many implicit beliefs about recall consistency that do not correspond with actual recall performance. Recommendations for police trainings are provided. These aim at fostering a differentiated view on eyewitness performance and the inclusion of more comprehensive classes on human memory structure. PMID:25695428

  7. Consistency across repeated eyewitness interviews: contrasting police detectives' beliefs with actual eyewitness performance.

    PubMed

    Krix, Alana C; Sauerland, Melanie; Lorei, Clemens; Rispens, Imke

    2015-01-01

    In the legal system, inconsistencies in eyewitness accounts are often used to discredit witnesses' credibility. This is at odds with research findings showing that witnesses frequently report reminiscent details (details previously unrecalled) at an accuracy rate that is nearly as high as for consistently recalled information. The present study sought to put the validity of beliefs about recall consistency to a test by directly comparing them with actual memory performance in two recall attempts. All participants watched a film of a staged theft. Subsequently, the memory group (N = 84) provided one statement immediately after the film (either with the Self-Administered Interview or free recall) and one after a one-week delay. The estimation group (N = 81) consisting of experienced police detectives estimated the recall performance of the memory group. The results showed that actual recall performance was consistently underestimated. Also, a sharp decline of memory performance between recall attempts was assumed by the estimation group whereas actual accuracy remained stable. While reminiscent details were almost as accurate as consistent details, they were estimated to be much less accurate than consistent information and as inaccurate as direct contradictions. The police detectives expressed a great concern that reminiscence was the result of suggestive external influences. In conclusion, it seems that experienced police detectives hold many implicit beliefs about recall consistency that do not correspond with actual recall performance. Recommendations for police trainings are provided. These aim at fostering a differentiated view on eyewitness performance and the inclusion of more comprehensive classes on human memory structure.

  8. Vector Quantization Algorithm Based on Associative Memories

    NASA Astrophysics Data System (ADS)

    Guzmán, Enrique; Pogrebnyak, Oleksiy; Yáñez, Cornelio; Manrique, Pablo

    This paper presents a vector quantization algorithm for image compression based on extended associative memories. The proposed algorithm is divided into two stages. First, an associative network is generated by applying the learning phase of the extended associative memories (EAM) between a codebook generated by the LBG algorithm and a training set. This associative network is named the EAM-codebook and represents a new codebook which is used in the next stage. The EAM-codebook establishes a relation between the training set and the LBG codebook. Second, the vector quantization process is performed by means of the recalling stage of the EAM, using the EAM-codebook as the associative memory. This process generates the set of class indices to which each input vector belongs. With respect to the LBG algorithm, the main advantages offered by the proposed algorithm are high processing speed and low demand on resources (system memory); results on image compression and quality are presented.
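
    The EAM recall rule is not given in the abstract; standard codebook-based vector quantization, the operation the EAM-codebook is trained to carry out in the second stage, can be sketched as follows (Python/NumPy, illustrative values only):

        import numpy as np

        def quantize(vectors, codebook):
            # Classical VQ: assign each input vector the index of its nearest codeword
            # (the stage the extended associative memory is trained to replace).
            dists = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
            return dists.argmin(axis=1)

        codebook = np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 1.0]])   # e.g. from LBG training
        blocks = np.array([[0.1, 0.2], [0.9, 0.8], [0.2, 0.9]])     # image blocks as vectors
        print(quantize(blocks, codebook))    # indices of the closest codewords: [0 1 2]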

  9. Association between exposure to work stressors and cognitive performance.

    PubMed

    Vuori, Marko; Akila, Ritva; Kalakoski, Virpi; Pentti, Jaana; Kivimäki, Mika; Vahtera, Jussi; Härmä, Mikko; Puttonen, Sampsa

    2014-04-01

    To examine the association between work stress and cognitive performance. Cognitive performance of a total of 99 women (mean age = 47.3 years) working in hospital wards at either the top or bottom quartiles of job strain was assessed using validated tests that measured learning, short-term memory, and speed of memory retrieval. The high job strain group (n = 43) had lower performance than the low job strain group (n = 56) in learning (P = 0.025), short-term memory (P = 0.027), and speed of memory retrieval (P = 0.003). After controlling for education level, only the difference in speed of memory retrieval remained statistically significant (P = 0.010). The association found between job strain and speed of memory retrieval might be one important factor explaining the effect of stress on work performance.

  10. Impairment on a self-ordered working memory task in patients with early-acquired hippocampal atrophy.

    PubMed

    Geva, Sharon; Cooper, Janine M; Gadian, David G; Mishkin, Mortimer; Vargha-Khadem, Faraneh

    2016-08-01

    One of the features of both adult-onset and developmental forms of amnesia resulting from bilateral medial temporal lobe damage, or even from relatively selective damage to the hippocampus, is the sparing of working memory. Recently, however, a number of studies have reported deficits on working memory tasks in patients with damage to the hippocampus and in macaque monkeys with neonatal hippocampal lesions. These studies suggest that successful performance on working memory tasks with high memory load requires the contribution of the hippocampus. Here we compared performance on a working memory task (the Self-ordered Pointing Task) between patients with early-onset hippocampal damage and a group of healthy controls. Consistent with the findings in the monkeys with neonatal lesions, we found that the patients were impaired on the task, but only on blocks of trials with intermediate memory load. Importantly, only intermediate to high memory load blocks yielded significant correlations between task performance and hippocampal volume. Additionally, we found no evidence of proactive interference in either group, and no evidence of an effect of time since injury on performance. We discuss the role of the hippocampus and its interactions with the prefrontal cortex in serving working memory. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  11. Acute stress impairs recall after interference in older people, but not in young people.

    PubMed

    Hidalgo, Vanesa; Almela, Mercedes; Villada, Carolina; Salvador, Alicia

    2014-03-01

    Stress has been associated with negative changes observed during the aging process. However, very little research has been carried out on the role of age in acute stress effects on memory. We aimed to explore the role of age and sex in the relationship between hypothalamus-pituitary-adrenal axis (HPA-axis) and sympathetic nervous system (SNS) reactivity to psychosocial stress and short-term declarative memory performance. To do so, sixty-seven participants divided into two age groups (each group with a similar number of men and women) were exposed to the Trier Social Stress Test (TSST) and a control condition in a crossover design. Memory performance was assessed by the Rey Auditory Verbal Learning Test (RAVLT). As expected, worse memory performance was associated with age; but more interestingly, the stressor impaired recall after interference only in the older group. In addition, this effect was negatively correlated with the alpha-amylase over cortisol ratio, which has recently been suggested as a good marker of stress system dysregulation. However, we failed to find sex differences in memory performance. These results show that age moderates stress-induced effects on declarative memory, and they point out the importance of studying both of the physiological systems involved in the stress response together. Copyright © 2014 Elsevier Inc. All rights reserved.

  12. The Nature of Individual Differences in Working Memory Capacity: Active Maintenance in Primary Memory and Controlled Search from Secondary Memory

    ERIC Educational Resources Information Center

    Unsworth, Nash; Engle, Randall W.

    2007-01-01

    Studies examining individual differences in working memory capacity have suggested that individuals with low working memory capacities demonstrate impaired performance on a variety of attention and memory tasks compared with individuals with high working memory capacities. This working memory limitation can be conceived of as arising from 2…

  13. Efficacy of Code Optimization on Cache-based Processors

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob F.; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    The current common wisdom in the U.S. is that the powerful, cost-effective supercomputers of tomorrow will be based on commodity (RISC) micro-processors with cache memories. Already, most distributed systems in the world use such hardware as building blocks. This shift away from vector supercomputers and towards cache-based systems has brought about a change in programming paradigm, even when ignoring issues of parallelism. Vector machines require inner-loop independence and regular, non-pathological memory strides (usually this means: non-power-of-two strides) to allow efficient vectorization of array operations. Cache-based systems require spatial and temporal locality of data, so that data once read from main memory and stored in high-speed cache memory is used optimally before being written back to main memory. This means that the most cache-friendly array operations are those that feature zero or unit stride, so that each unit of data read from main memory (a cache line) contains information for the next iteration in the loop. Moreover, loops ought to be 'fat', meaning that as many operations as possible are performed on cached data, provided instruction caches do not overflow and enough registers are available. If unit stride is not possible, for example because of some data dependency, then care must be taken to avoid pathological strides, just as on vector computers. For cache-based systems the issues are more complex, due to the effects of associativity and of non-unit block (cache line) size. But there is more to the story. Most modern micro-processors are superscalar, which means that they can issue several (arithmetic) instructions per clock cycle, provided that there are enough independent instructions in the loop body. This is another argument for providing fat loop bodies. With these restrictions, it appears fairly straightforward to produce code that will run efficiently on any cache-based system. It can be argued that although some of the important computational algorithms employed at NASA Ames require different programming styles on vector machines and cache-based machines, respectively, neither architecture class appeared to be favored by particular algorithms in principle. Practice tells us that the situation is more complicated. This report presents observations and some analysis of performance tuning for cache-based systems. We point out several counterintuitive results that serve as a cautionary reminder that memory accesses are not the only factors that determine performance, and that within the class of cache-based systems, significant differences exist.
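
    As an illustration of the unit-stride guidance above, the C sketch below sums a row-major matrix two ways: the row-major traversal walks consecutive addresses and reuses each cache line, while the column-major traversal strides by the row length (a power of two here, chosen only to mirror the pathological case the report mentions) and typically touches a new cache line on every access. The array size is an illustrative assumption.

      #include <stdio.h>

      #define N 1024   /* power-of-two dimension, illustrative */

      static double a[N][N];

      /* Unit-stride (cache-friendly): the inner loop walks consecutive addresses. */
      static double sum_row_major(void)
      {
          double s = 0.0;
          for (int i = 0; i < N; ++i)
              for (int j = 0; j < N; ++j)
                  s += a[i][j];
          return s;
      }

      /* Stride-N (cache-hostile): the inner loop jumps N*sizeof(double) bytes each step. */
      static double sum_col_major(void)
      {
          double s = 0.0;
          for (int j = 0; j < N; ++j)
              for (int i = 0; i < N; ++i)
                  s += a[i][j];
          return s;
      }

      int main(void)
      {
          for (int i = 0; i < N; ++i)
              for (int j = 0; j < N; ++j)
                  a[i][j] = 1.0;
          printf("row-major sum: %.0f\n", sum_row_major());
          printf("col-major sum: %.0f\n", sum_col_major());
          return 0;
      }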

  14. Software/hardware distributed processing network supporting the Ada environment

    NASA Astrophysics Data System (ADS)

    Wood, Richard J.; Pryk, Zen

    1993-09-01

    A high-performance, fault-tolerant, distributed network has been developed, tested, and demonstrated. The network is based on the MIPS Computer Systems, Inc. R3000 RISC processor for processing, VHSIC ASICs for high-speed, reliable inter-node communications, and compatible commercial memory and I/O boards. The network is an evolution of the Advanced Onboard Signal Processor (AOSP) architecture. It supports Ada application software with an Ada-implemented operating system. A six-node implementation (capable of expansion up to 256 nodes) of the RISC multiprocessor architecture provides 120 MIPS of scalar throughput, 96 Mbytes of RAM and 24 Mbytes of non-volatile memory. The network provides for all ground processing applications, has merit as a space-qualified RISC-based network, and interfaces to advanced Computer Aided Software Engineering (CASE) tools for application software development.

  15. RTSJ Memory Areas and Their Affects on the Performance of a Flight-Like Attitude Control System

    NASA Technical Reports Server (NTRS)

    Niessner, Albert F.; Benowitz, Edward G.

    2003-01-01

    The two most important factors in improving performance in any software system, but especially a real-time, embedded system, are knowing which components are the low performers and knowing what can be done to improve their performance. The word performance with respect to a real-time, embedded system does not necessarily mean fast execution, which is the common definition when discussing non-real-time systems. It also includes meeting all of the specified execution deadlines and executing at the correct time without sacrificing non-real-time performance. Using a Java prototype of an existing control system used on Deep Space 1 [1], the effects of adding memory areas are measured and evaluated with respect to improving performance.

  16. Long lifetime and high-fidelity quantum memory of photonic polarization qubit by lifting zeeman degeneracy.

    PubMed

    Xu, Zhongxiao; Wu, Yuelong; Tian, Long; Chen, Lirong; Zhang, Zhiying; Yan, Zhihui; Li, Shujing; Wang, Hai; Xie, Changde; Peng, Kunchi

    2013-12-13

    Long-lived and high-fidelity memory for a photonic polarization qubit (PPQ) is crucial for constructing quantum networks. We present a millisecond storage system based on electromagnetically induced transparency, in which a moderate magnetic field is applied to a cold-atom cloud to lift Zeeman degeneracy and, thus, the PPQ states are stored as two magnetic-field-insensitive spin waves. In particular, the influence of magnetic-field-sensitive spin waves on the storage performance is almost totally avoided. The measured average fidelities of the polarization states are 98.6% at 200 μs and 78.4% at 4.5 ms, respectively.

  17. Working memory capacity and retrieval limitations from long-term memory: an examination of differences in accessibility.

    PubMed

    Unsworth, Nash; Spillers, Gregory J; Brewer, Gene A

    2012-01-01

    In two experiments, the locus of individual differences in working memory capacity and long-term memory recall was examined. Participants performed categorical cued and free recall tasks, and individual differences in the dynamics of recall were interpreted in terms of a hierarchical-search framework. The results from this study are in accordance with recent theorizing suggesting a strong relation between working memory capacity and retrieval from long-term memory. Furthermore, the results also indicate that individual differences in categorical recall are partially due to differences in accessibility. In terms of accessibility of target information, two important factors drive the difference between high- and low-working-memory-capacity participants. Low-working-memory-capacity participants fail to utilize appropriate retrieval strategies to access cues, and they also have difficulty resolving cue overload. Thus, when low-working-memory-capacity participants were given specific cues that activated a smaller set of potential targets, their recall performance was the same as that of high-working-memory-capacity participants.

  18. Verbal working memory performance correlates with regional white matter structures in the frontoparietal regions.

    PubMed

    Takeuchi, Hikaru; Taki, Yasuyuki; Sassa, Yuko; Hashizume, Hiroshi; Sekiguchi, Atsushi; Fukushima, Ai; Kawashima, Ryuta

    2011-10-01

    Working memory is the limited capacity storage system involved in the maintenance and manipulation of information over short periods of time. Previous imaging studies have suggested that the frontoparietal regions are activated during working memory tasks; a putative association between the structure of the frontoparietal regions and working memory performance has been suggested based on the analysis of individuals with varying pathologies. This study aimed to identify correlations between white matter and individual differences in verbal working memory performance in normal young subjects. We performed voxel-based morphometry (VBM) analyses using T1-weighted structural images as well as voxel-based analyses of fractional anisotropy (FA) using diffusion tensor imaging. Using the letter span task, we measured verbal working memory performance in normal young adult men and women (mean age, 21.7 years, SD=1.44; 42 men and 13 women). We observed positive correlations between working memory performance and regional white matter volume (rWMV) in the frontoparietal regions. In addition, FA was found to be positively correlated with verbal working memory performance in a white matter region adjacent to the right precuneus. These regions are consistently recruited by working memory. Our findings suggest that, among normal young subjects, verbal working memory performance is associated with various regions that are recruited during working memory tasks, and this association is not limited to specific parts of the working memory network. Copyright © 2011 Elsevier Ltd. All rights reserved.

  19. Selectivity in Postencoding Connectivity with High-Level Visual Cortex Is Associated with Reward-Motivated Memory.

    PubMed

    Murty, Vishnu P; Tompary, Alexa; Adcock, R Alison; Davachi, Lila

    2017-01-18

    Reward motivation has been demonstrated to enhance declarative memory by facilitating systems-level consolidation. Although high-reward information is often intermixed with lower-reward information during an experience, memory for high-value information is prioritized. How is this selectivity achieved? One possibility is that postencoding consolidation processes bias memory strengthening to those representations associated with higher reward. To test this hypothesis, we investigated the influence of differential reward motivation on the selectivity of postencoding markers of systems-level memory consolidation. Human participants encoded intermixed, trial-unique memoranda that were associated with either high or low value during fMRI acquisition. Encoding was interleaved with periods of rest, allowing us to investigate experience-dependent changes in connectivity as they related to later memory. Behaviorally, we found that reward motivation enhanced 24 h associative memory. Analysis of patterns of postencoding connectivity showed that, even though learning trials were intermixed, there was significantly greater connectivity with regions of high-level, category-selective visual cortex associated with high-reward trials. Specifically, increased connectivity of category-selective visual cortex with both the VTA and the anterior hippocampus predicted associative memory for high- but not low-reward memories. Critically, these results were independent of encoding-related connectivity and univariate activity measures. Thus, these findings support a model by which the selective stabilization of memories for salient events is supported by postencoding interactions with sensory cortex associated with reward. Reward motivation is thought to promote memory by supporting memory consolidation. Yet, little is known as to how the brain selects relevant information for subsequent consolidation based on reward. We show that experience-dependent changes in connectivity of both the anterior hippocampus and the VTA with high-level visual cortex selectively predict memory for high-reward memoranda at a 24 h delay. These findings provide evidence for a novel mechanism guiding the consolidation of memories for valuable events, namely, postencoding interactions between neural systems supporting mesolimbic dopamine activation, episodic memory, and perception. Copyright © 2017 the authors 0270-6474/17/370537-09$15.00/0.

  20. Evaluation of the Cedar memory system: Configuration of 16 by 16

    NASA Technical Reports Server (NTRS)

    Gallivan, K.; Jalby, W.; Wijshoff, H.

    1991-01-01

    Some basic results on the performance of the Cedar multiprocessor system are presented. Empirical results on the 16-processor, 16-memory-bank system configuration, which show the behavior of the Cedar system under different modes of operation, are presented.

  1. GPU-Accelerated Forward and Back-Projections with Spatially Varying Kernels for 3D DIRECT TOF PET Reconstruction.

    PubMed

    Ha, S; Matej, S; Ispiryan, M; Mueller, K

    2013-02-01

    We describe a GPU-accelerated framework that efficiently models spatially (shift) variant system response kernels and performs forward- and back-projection operations with these kernels for the DIRECT (Direct Image Reconstruction for TOF) iterative reconstruction approach. Inherent challenges arise from the poor memory cache performance at non-axis-aligned TOF directions. Focusing on the GPU memory access patterns, we utilize different kinds of GPU memory according to these patterns in order to maximize the memory cache performance. We also exploit the GPU instruction-level parallelism to efficiently hide long latencies from the memory operations. Our experiments indicate that our GPU implementation of the projection operators is slightly faster than, or approximately comparable in time performance to, FFT-based approaches using state-of-the-art FFTW routines. However, most importantly, our GPU framework can also efficiently handle any generic system response kernels, such as spatially symmetric and shift-variant as well as spatially asymmetric and shift-variant, both of which an FFT-based approach cannot cope with.

  2. GPU-Accelerated Forward and Back-Projections With Spatially Varying Kernels for 3D DIRECT TOF PET Reconstruction

    NASA Astrophysics Data System (ADS)

    Ha, S.; Matej, S.; Ispiryan, M.; Mueller, K.

    2013-02-01

    We describe a GPU-accelerated framework that efficiently models spatially (shift) variant system response kernels and performs forward- and back-projection operations with these kernels for the DIRECT (Direct Image Reconstruction for TOF) iterative reconstruction approach. Inherent challenges arise from the poor memory cache performance at non-axis-aligned TOF directions. Focusing on the GPU memory access patterns, we utilize different kinds of GPU memory according to these patterns in order to maximize the memory cache performance. We also exploit the GPU instruction-level parallelism to efficiently hide long latencies from the memory operations. Our experiments indicate that our GPU implementation of the projection operators is slightly faster than, or approximately comparable in time performance to, FFT-based approaches using state-of-the-art FFTW routines. However, most importantly, our GPU framework can also efficiently handle any generic system response kernels, such as spatially symmetric and shift-variant as well as spatially asymmetric and shift-variant, both of which an FFT-based approach cannot cope with.

  3. The match/mismatch of visuo-spatial cues between acquisition and retrieval contexts influences the expression of response vs. place memory in rats.

    PubMed

    Cassel, Raphaelle; Kelche, Christian; Lecourtier, Lucas; Cassel, Jean-Christophe

    2012-05-01

    Animals can perform goal-directed tasks by using response cues or place cues. The underlying memory systems are occasionally presented as competing. Using the double-H maze test (Pol-Bodetto et al.), we trained rats for response learning and, 24 h later, tested their memory in a 60-s probe trial using a new start place. A modest shift of the start place (translation: 60-cm to the left) provided a high misleading potential, whereas a marked shift (180° rotation; shift to the opposite) provided a low misleading potential. We analyzed each rat's first arm choice (to assess response vs. place memory retrieval) and its subsequent search for the former platform location (to assess the persistence in place memory or the shift from response to place memory). After the translation, response memory-based behavior was found in more than 90% rats (24/26). After the rotation, place memory-based behavior was observed in 50% rats, the others showing response memory or failing. Rats starting to use response cues were nevertheless able to subsequently shift to place ones. A posteriori behavioral analyses showed more and longer stops in rats starting their probe trial on the basis of place (vs. response) cues. These observations qualify the idea of competing memory systems for responses and places and are compatible with that of a cooperation between both systems according to principles of match/mismatch computation (at the start of a probe trial) and of error-driven adjustment (during the ongoing probe trial). Copyright © 2012 Elsevier B.V. All rights reserved.

  4. A study of the relationship between the performance and dependability of a fault-tolerant computer

    NASA Technical Reports Server (NTRS)

    Goswami, Kumar K.

    1994-01-01

    This thesis studies the relationship between performance and dependability by creating a tool (FTAPE) that integrates a high-stress workload generator with fault injection, and by using the tool to evaluate system performance under error conditions. The workloads are composed of processes which are formed from atomic components that represent CPU, memory, and I/O activity. The fault injector is software-implemented and is capable of injecting faults into any memory-addressable location, including special registers and caches. This tool has been used to study a Tandem Integrity S2 computer. Workloads with varying numbers of processes and varying compositions of CPU, memory, and I/O activity are first characterized in terms of performance. Then faults are injected into these workloads. The results show that as the number of concurrent processes increases, the mean fault latency initially increases due to increased contention for the CPU. However, for even higher numbers of processes (more than 3 processes), the mean latency decreases because long-latency faults are paged out before they can be activated.

  5. Articulatory rehearsal in verbal working memory: a possible neurocognitive endophenotype that differentiates between schizophrenia and schizoaffective disorder.

    PubMed

    Gruber, Oliver; Gruber, Eva; Falkai, Peter

    2006-09-11

    Recent fMRI studies have identified brain systems underlying different components of working memory in healthy individuals. The aim of this study was to compare the functional integrity of these neural networks in terms of behavioural performance in patients with schizophrenia, schizoaffective disorder and healthy controls. In order to detect specific working memory deficits based on dysfunctions of underlying brain circuits we used the same verbal and visuospatial Sternberg item-recognition tasks as in previous neuroimaging studies. Clinical and performance data from matched groups consisting of 14 subjects each were statistically analyzed. Schizophrenic patients exhibited pronounced impairments of both verbal and visuospatial working memory, whereas verbal working memory performance was preserved in schizoaffective patients. The findings provide first evidence that dysfunction of a brain system subserving articulatory rehearsal could represent a biological marker which differentiates between schizophrenia and schizoaffective disorder.

  6. High-performance black phosphorus top-gate ferroelectric transistor for nonvolatile memory applications

    NASA Astrophysics Data System (ADS)

    Lee, Young Tack; Hwang, Do Kyung; Choi, Won Kook

    2016-10-01

    Two-dimensional (2D) van der Waals (vdW) atomic crystals have been extensively studied and significant progress has been made. The newest 2D vdW material, called black phosphorus (BP), has attracted considerable attention due to its unique physical properties, such as being a single-component material like graphene and having a high mobility and a direct band gap. Here, we report on a high-performance BP nanosheet-based ferroelectric field-effect transistor (FeFET) with a poly(vinylidenefluoride-trifluoroethylene) top-gate insulator for a nonvolatile memory application. The BP FeFETs show the highest linear hole mobility of 563 cm2/Vs and a clear memory window of more than 15 V. For more advanced nonvolatile memory circuit applications, two different types of resistive-load and complementary ferroelectric memory inverters were implemented, which showed distinct memory on/off switching characteristics.

  7. Visuospatial working memory in very preterm and term born children--impact of age and performance.

    PubMed

    Mürner-Lavanchy, I; Ritter, B C; Spencer-Smith, M M; Perrig, W J; Schroth, G; Steinlin, M; Everts, R

    2014-07-01

    Working memory is crucial for meeting the challenges of daily life and performing academic tasks, such as reading or arithmetic. Very preterm born children are at risk of low working memory capacity. The aim of this study was to examine the visuospatial working memory network of school-aged preterm children and to determine the effect of age and performance on the neural working memory network. Working memory was assessed in 41 very preterm born children and 36 term born controls (aged 7-12 years) using functional magnetic resonance imaging (fMRI) and neuropsychological assessment. While preterm children and controls showed equal working memory performance, preterm children showed less involvement of the right middle frontal gyrus, but higher fMRI activation in superior frontal regions than controls. The younger and low-performing preterm children presented an atypical working memory network whereas the older high-performing preterm children recruited a working memory network similar to the controls. Results suggest that younger and low-performing preterm children show signs of less neural efficiency in frontal brain areas. With increasing age and performance, compensational mechanisms seem to occur, so that in preterm children, the typical visuospatial working memory network is established by the age of 12 years. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.

  8. Modulation of competing memory systems by distraction.

    PubMed

    Foerde, Karin; Knowlton, Barbara J; Poldrack, Russell A

    2006-08-01

    Different forms of learning and memory depend on functionally and anatomically separable neural circuits [Squire, L. R. (1992) Psychol. Rev. 99, 195-231]. Declarative memory relies on a medial temporal lobe system, whereas habit learning relies on the striatum [Cohen, N. J. & Eichenbaum, H. (1993) Memory, Amnesia, and the Hippocampal System (MIT Press, Cambridge, MA)]. How these systems are engaged to optimize learning and behavior is not clear. Here, we present results from functional neuroimaging showing that the presence of a demanding secondary task during learning modulates the degree to which subjects solve a problem using either declarative memory or habit learning. Dual-task conditions did not reduce accuracy but reduced the amount of declarative learning about the task. Medial temporal lobe activity was correlated with task performance and declarative knowledge after learning under single-task conditions, whereas performance was correlated with striatal activity after dual-task learning conditions. These results demonstrate a fundamental difference in these memory systems in their sensitivity to concurrent distraction. The results are consistent with the notion that declarative and habit learning compete to mediate task performance, and they suggest that the presence of distraction can bias this competition. These results have implications for learning in multitask situations, suggesting that, even if distraction does not decrease the overall level of learning, it can result in the acquisition of knowledge that can be applied less flexibly in new situations.

  9. Does the presence of priming hinder subsequent recognition or recall performance?

    PubMed

    Stark, Shauna M; Gordon, Barry; Stark, Craig E L

    2008-02-01

    Declarative and non-declarative memories are thought to be supported by two distinct memory systems that are often posited not to interact. However, Wagner, Maril, and Schacter (2000a) reported that at the time priming was assessed, greater behavioural and neural priming was associated with lower levels of subsequent recognition memory, demonstrating an interaction between declarative and non-declarative memory. We examined this finding using a similar paradigm, in which participants made the same or different semantic word judgements following a short or long lag and subsequent memory test. We found a similar overall pattern of results, with greater behavioural priming associated with a decrease in recognition and recall performance. However, neither various within-participant nor various between-participant analyses revealed significant correlations between priming and subsequent memory performance. These data suggest that both lag and task have effects on priming and declarative memory performance, but that they are largely independent and occur in parallel.

  10. Algorithms for Performance, Dependability, and Performability Evaluation using Stochastic Activity Networks

    NASA Technical Reports Server (NTRS)

    Deavours, Daniel D.; Qureshi, M. Akber; Sanders, William H.

    1997-01-01

    Modeling tools and technologies are important for aerospace development. At the University of Illinois, we have worked on advancing the state of the art in modeling with Markov reward models in two important areas: reducing the memory necessary to numerically solve systems represented as stochastic activity networks and other stochastic Petri net extensions while still obtaining solutions in a reasonable amount of time, and finding numerically stable and memory-efficient methods to solve for the reward accumulated during a finite mission time. A long-standing problem when modeling with high-level formalisms such as stochastic activity networks is the so-called state space explosion, where the number of states increases exponentially with the size of the high-level model. Thus, the corresponding Markov model becomes prohibitively large and solution is constrained by the size of primary memory. To reduce the memory necessary to numerically solve complex systems, we propose new methods that can tolerate such large state spaces and do not require any special structure in the model (as many other techniques do). First, we develop methods that generate rows and columns of the state transition-rate matrix on the fly, eliminating the need to explicitly store the matrix at all. Next, we introduce a new iterative solution method, called modified adaptive Gauss-Seidel, that exhibits locality in its use of data from the state transition-rate matrix, permitting us to cache portions of the matrix and hence reduce the solution time. Finally, we develop a new memory- and computationally efficient technique for Gauss-Seidel-based solvers that avoids the need for generating rows of A in order to solve Ax = b. This is a significant performance improvement for on-the-fly methods as well as other recent solution techniques based on Kronecker operators. Taken together, these new results show that one can solve very large models without any special structure.
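
    The matrix-free idea can be sketched as follows: instead of storing A, the solver asks a callback for entries as they are needed, so memory scales with the number of states rather than the number of transitions. The C sketch below is a generic on-the-fly Gauss-Seidel iteration under that assumption, not the modified adaptive scheme described above; the diagonally dominant entry function and problem size are purely illustrative.

      #include <stdio.h>

      #define N 8   /* number of states (illustrative) */

      /* Entry generator: returns A[i][j] on the fly instead of reading a stored matrix.
         Here a diagonally dominant 1-D Laplacian-like matrix, purely for illustration;
         a real solver would enumerate only the nonzero transitions of each state. */
      static double entry(int i, int j)
      {
          if (i == j) return 4.0;
          if (j == i - 1 || j == i + 1) return -1.0;
          return 0.0;
      }

      /* Gauss-Seidel sweeps for Ax = b using the entry() callback; x is updated in place. */
      static void gauss_seidel(double x[N], const double b[N], int sweeps)
      {
          for (int s = 0; s < sweeps; ++s) {
              for (int i = 0; i < N; ++i) {
                  double sum = b[i];
                  for (int j = 0; j < N; ++j)
                      if (j != i) sum -= entry(i, j) * x[j];
                  x[i] = sum / entry(i, i);
              }
          }
      }

      int main(void)
      {
          double x[N] = { 0.0 }, b[N];
          for (int i = 0; i < N; ++i) b[i] = 1.0;
          gauss_seidel(x, b, 50);
          for (int i = 0; i < N; ++i) printf("x[%d] = %.6f\n", i, x[i]);
          return 0;
      }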

  11. Efficient Aho-Corasick String Matching on Emerging Multicore Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tumeo, Antonino; Villa, Oreste; Secchi, Simone

    String matching algorithms are critical to several scientific fields. Besides text processing and databases, emerging applications such as DNA/protein sequence analysis, data mining, information security software, antivirus, and machine learning all exploit string matching algorithms [3]. All these applications usually process large quantities of textual data and require high performance and/or predictable execution times. Among all the string matching algorithms, one of the most studied, especially for text processing and security applications, is the Aho-Corasick algorithm. Aho-Corasick is an exact, multi-pattern string matching algorithm which performs the search in a time linearly proportional to the length of the input text, independently of pattern set size. However, depending on the implementation, when the number of patterns increases, the memory occupation may rise drastically. In turn, this can lead to significant variability in performance, due to memory access times and caching effects. This is a significant concern for many mission-critical applications and modern high-performance architectures. For example, security applications such as Network Intrusion Detection Systems (NIDS) must be able to scan network traffic against very large dictionaries in real time. Modern Ethernet links reach up to 10 Gbps, and malicious threats already number well over 1 million and are growing exponentially [28]. When performing the search, a NIDS should not slow down the network or let network packets pass unchecked. Nevertheless, on current state-of-the-art cache-based processors, there may be large performance variability when dealing with big dictionaries and inputs that have different frequencies of matching patterns. In particular, when few patterns are matched and they are all in the cache, the procedure is fast. Instead, when they are not in the cache, often because many patterns are matched and the caches are continuously thrashed, they must be retrieved from system memory and the procedure is slowed down by the increased latency. Efficient implementations of string matching algorithms have been the focus of several works, targeting Field Programmable Gate Arrays [4, 25, 15, 5], highly multi-threaded solutions like the Cray XMT [34], multicore processors [19], or heterogeneous processors like the Cell Broadband Engine [35, 22]. Recently, several researchers have also started to investigate the use of Graphics Processing Units (GPUs) for string matching algorithms in security applications [20, 10, 32, 33]. Most of these approaches mainly focus on reaching high peak performance, or try to optimize memory occupation, rather than looking at performance stability. However, hardware solutions support only small dictionary sizes due to lack of memory and are difficult to customize, while platforms such as the Cell/B.E. are very complex to program.
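
    For reference, a minimal Aho-Corasick automaton consists of a goto table built from the pattern trie plus failure links computed by a breadth-first pass. The C sketch below assumes a lowercase a-z alphabet and a small fixed node budget (both illustrative simplifications, with no overflow checks) and reports matches as a bitmask of pattern indices; it is a textbook-style sketch, not the memory-optimized layouts discussed in the record.

      #include <stdio.h>

      #define ALPHA 26
      #define MAXNODES 1024   /* illustrative cap; no overflow check for brevity */

      static int next_state[MAXNODES][ALPHA]; /* goto function (0 = root / no edge) */
      static int fail[MAXNODES];              /* failure links */
      static int out[MAXNODES];               /* bitmask of patterns ending here */
      static int nodes = 1;                   /* node 0 is the root */

      static void add_pattern(const char *p, int idx)
      {
          int s = 0;
          for (; *p; ++p) {
              int c = *p - 'a';
              if (!next_state[s][c]) next_state[s][c] = nodes++;
              s = next_state[s][c];
          }
          out[s] |= 1 << idx;
      }

      static void build_failure_links(void)
      {
          int queue[MAXNODES], head = 0, tail = 0;
          for (int c = 0; c < ALPHA; ++c)
              if (next_state[0][c]) queue[tail++] = next_state[0][c];
          while (head < tail) {
              int s = queue[head++];
              for (int c = 0; c < ALPHA; ++c) {
                  int t = next_state[s][c];
                  if (!t) { next_state[s][c] = next_state[fail[s]][c]; continue; }
                  fail[t] = next_state[fail[s]][c];
                  out[t] |= out[fail[t]];          /* inherit matches along the failure chain */
                  queue[tail++] = t;
              }
          }
      }

      static void search(const char *text)
      {
          int s = 0;
          for (int i = 0; text[i]; ++i) {
              s = next_state[s][text[i] - 'a'];
              if (out[s]) printf("match ending at position %d (mask 0x%x)\n", i, out[s]);
          }
      }

      int main(void)
      {
          const char *patterns[] = { "he", "she", "his", "hers" };
          for (int i = 0; i < 4; ++i) add_pattern(patterns[i], i);
          build_failure_links();
          search("ushers");
          return 0;
      }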

  12. Long-range interactions and parallel scalability in molecular simulations

    NASA Astrophysics Data System (ADS)

    Patra, Michael; Hyvönen, Marja T.; Falck, Emma; Sabouri-Ghomi, Mohsen; Vattulainen, Ilpo; Karttunen, Mikko

    2007-01-01

    Typical biomolecular systems such as cellular membranes, DNA, and protein complexes are highly charged. Thus, efficient and accurate treatment of electrostatic interactions is of great importance in computational modeling of such systems. We have employed the GROMACS simulation package to perform extensive benchmarking of different commonly used electrostatic schemes on a range of computer architectures (Pentium-4, IBM Power 4, and Apple/IBM G5) for single processor and parallel performance up to 8 nodes—we have also tested the scalability on four different networks, namely Infiniband, GigaBit Ethernet, Fast Ethernet, and nearly uniform memory architecture, i.e. communication between CPUs is possible by directly reading from or writing to other CPUs' local memory. It turns out that the particle-mesh Ewald method (PME) performs surprisingly well and offers competitive performance unless parallel runs on PC hardware with older network infrastructure are needed. Lipid bilayers of sizes 128, 512 and 2048 lipid molecules were used as the test systems representing typical cases encountered in biomolecular simulations. Our results enable an accurate prediction of computational speed on most current computing systems, both for serial and parallel runs. These results should be helpful in, for example, choosing the most suitable configuration for a small departmental computer cluster.

  13. Stress and multiple memory systems: from 'thinking' to 'doing'.

    PubMed

    Schwabe, Lars; Wolf, Oliver T

    2013-02-01

    Although it has been known for decades that stress influences memory performance, it was only recently shown that stress may alter the contribution of multiple, anatomically and functionally distinct memory systems to behavior. Here, we review recent animal and human studies demonstrating that stress promotes a shift from flexible 'cognitive' to rather rigid 'habit' memory systems and discuss, based on recent neuroimaging data in humans, the underlying brain mechanisms. We argue that, despite being generally adaptive, this stress-induced shift towards 'habit' memory may, in vulnerable individuals, be a risk factor for psychopathology. Copyright © 2012 Elsevier Ltd. All rights reserved.

  14. Impaired Memory Retrieval Correlates with Individual Differences in Cortisol Response but Not Autonomic Response

    ERIC Educational Resources Information Center

    Tranel, Daniel; Adolphs, Ralph; Buchanan, Tony W.

    2006-01-01

    Stress can enhance or impair memory performance. Both cortisol release and sympathetic nervous system responses have been implicated in these differential effects. Here we investigated how memory retrieval might be affected by stress-induced cortisol release, independently of sympathetic nervous system stress responses. Thirty-two healthy…

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allada, Veerendra, Benjegerdes, Troy; Bode, Brett

    Commodity clusters augmented with application accelerators are evolving into competitive high-performance computing systems. The Graphics Processing Unit (GPU), with its very high arithmetic density and performance-per-price ratio, is a good platform for scientific application acceleration. In addition to the interconnect bottlenecks among the cluster compute nodes, the cost of memory copies between the host and the GPU device has to be carefully amortized to improve the overall efficiency of the application. Scientific applications also rely on efficient implementations of the Basic Linear Algebra Subroutines (BLAS), among which the General Matrix Multiply (GEMM) is considered the workhorse subroutine. In this paper, they study the performance of the memory copies and GEMM subroutines that are critical for porting computational chemistry algorithms to GPU clusters. To that end, a benchmark based on the NetPIPE framework is developed to evaluate the latency and bandwidth of the memory copies between the host and the GPU device. The performance of the single- and double-precision GEMM subroutines from the NVIDIA CUBLAS 2.0 library is studied. The results have been compared with those of the BLAS routines from the Intel Math Kernel Library (MKL) to understand the computational trade-offs. The test bed is an Intel Xeon cluster equipped with NVIDIA Tesla GPUs.
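
    The measurement idea behind such a NetPIPE-style sweep can be sketched in portable C: time repeated copies over a range of message sizes and report the achieved bandwidth. The sketch below is a CPU-only analogue under that assumption; memcpy stands in for the host-device transfer call, since the GPU API specifics are not given in the record, and the size range and repetition count are illustrative.

      #define _POSIX_C_SOURCE 199309L
      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>
      #include <time.h>

      /* Time repeated copies of a buffer and report bandwidth in GB/s.
         On a GPU system the same harness would wrap the vendor's copy call. */
      static double copy_bandwidth_gbps(size_t bytes, int reps)
      {
          char *src = malloc(bytes), *dst = malloc(bytes);
          if (!src || !dst) { perror("malloc"); exit(1); }
          memset(src, 1, bytes);

          struct timespec t0, t1;
          clock_gettime(CLOCK_MONOTONIC, &t0);
          for (int r = 0; r < reps; ++r)
              memcpy(dst, src, bytes);
          clock_gettime(CLOCK_MONOTONIC, &t1);

          double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
          free(src); free(dst);
          return (double)bytes * reps / secs / 1e9;
      }

      int main(void)
      {
          /* Sweep message sizes from 1 KB to 16 MB, NetPIPE-style. */
          for (size_t bytes = 1024; bytes <= 16u * 1024 * 1024; bytes *= 4)
              printf("%10zu bytes: %6.2f GB/s\n", bytes, copy_bandwidth_gbps(bytes, 100));
          return 0;
      }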

  16. Enabling MPEG-2 video playback in embedded systems through improved data cache efficiency

    NASA Astrophysics Data System (ADS)

    Soderquist, Peter; Leeser, Miriam E.

    1999-01-01

    Digital video decoding, enabled by the MPEG-2 Video standard, is an important future application for embedded systems, particularly PDAs and other information appliances. Many such systems require portability and wireless communication capabilities, and thus face severe limitations in size and power consumption. This places a premium on integration and efficiency, and favors software solutions for video functionality over specialized hardware. The processors in most embedded systems currently lack the computational power needed to perform video decoding, but a related and equally important problem is the required data bandwidth, and the need to cost-effectively ensure adequate data supply. MPEG data sets are very large, and generate significant amounts of excess memory traffic for standard data caches, up to 100 times the amount required for decoding. Meanwhile, cost and power limitations restrict cache sizes in embedded systems. Some systems, including many media processors, eliminate caches in favor of memories under direct, painstaking software control in the manner of digital signal processors. Yet MPEG data has locality which caches can exploit if properly optimized, providing fast, flexible, and automatic data supply. We propose a set of enhancements which target the specific needs of the heterogeneous types within the MPEG decoder working set. These optimizations significantly improve the efficiency of small caches, reducing cache-memory traffic by almost 70 percent, and can make an enhanced 4 KB cache perform better than a standard 1 MB cache. This performance improvement can enable high-resolution, full frame rate video playback in cheaper, smaller systems than would otherwise be possible.

  17. Long-term memory-based control of attention in multi-step tasks requires working memory: evidence from domain-specific interference

    PubMed Central

    Foerster, Rebecca M.; Carbone, Elena; Schneider, Werner X.

    2014-01-01

    Evidence for long-term memory (LTM)-based control of attention has been found during the execution of highly practiced multi-step tasks. However, does LTM control attention directly, or are working memory (WM) processes involved? In the present study, this question was investigated with a dual-task paradigm. Participants executed either a highly practiced visuospatial sensorimotor task (speed stacking) or a verbal task (high-speed poem reciting), while maintaining visuospatial or verbal information in WM. Results revealed unidirectional and domain-specific interference. Neither speed stacking nor high-speed poem reciting was influenced by WM retention. Stacking disrupted the retention of visuospatial locations, but did not modify memory performance of verbal material (letters). Reciting reduced the retention of verbal material substantially whereas it affected the memory performance of visuospatial locations to a smaller degree. We suggest that the selection of task-relevant information from LTM for the execution of overlearned multi-step tasks recruits domain-specific WM. PMID:24847304

  18. Symbolic Model of Perception in Dynamic 3D Environments

    DTIC Science & Technology

    2006-11-01

    can retrieve memories, work on goals, recognize visual or aural percepts, and perform actions. ACT-R has been selected for the current...types of memory. Procedural memory is the store of condition-action productions that are selected and executed by the core production system...a declarative memory chunk that is made available to the core production system through the vision module. The vision module has been

  19. Social influence on associative learning: double dissociation in high-functioning autism, early-stage behavioural variant frontotemporal dementia and Alzheimer's disease.

    PubMed

    Kéri, Szabolcs

    2014-05-01

    Most of our learning activity takes place in a social context. I examined how social interactions influence associative learning in neurodegenerative diseases and atypical neurodevelopmental conditions primarily characterised by social cognitive and memory dysfunctions. Participants were individuals with high-functioning autism (HFA, n = 18), early-stage behavioural variant frontotemporal dementia (bvFTD, n = 16) and Alzheimer's disease (AD, n = 20). The leading symptoms in HFA and bvFTD were social and behavioural dysfunctions, whereas AD was characterised by memory deficits. Participants received three versions of a paired associates learning task. In the game with boxes test, objects were hidden in six candy boxes placed in different locations on the computer screen. In the game with faces, each box was labelled by a photo of a person. In the real-life version of the game, participants played with real persons. Individuals with HFA and bvFTD performed well in the computer games, but failed on the task including real persons. In contrast, in patients with early-stage AD, social interactions boosted paired associates learning up to the level of healthy control volunteers. Worse performance in the real life game was associated with less successful recognition of complex emotions and mental states in the Reading the Mind in the Eyes Test. Spatial span did not affect the results. When social cognition is impaired, but memory systems are less compromised (HFA and bvFTD), real-life interactions disrupt associative learning; when disease process impairs memory systems but social cognition is relatively intact (early-stage AD), social interactions have a beneficial effect on learning and memory. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. Reversal of age-related learning deficiency by the vertebrate PACAP and IGF-1 in a novel invertebrate model of aging: the pond snail (Lymnaea stagnalis).

    PubMed

    Pirger, Zsolt; Naskar, Souvik; László, Zita; Kemenes, György; Reglődi, Dóra; Kemenes, Ildikó

    2014-11-01

    With the increase of life span, nonpathological age-related memory decline is affecting an increasing number of people. However, there is evidence that age-associated memory impairment only suspends, rather than irreversibly extinguishes, the intrinsic capacity of the aging nervous system for plasticity (1). Here, using a molluscan model system, we show that the age-related decline in memory performance can be reversed by administration of the pituitary adenylate cyclase activating polypeptide (PACAP). Our earlier findings showed that a homolog of the vertebrate PACAP38 and its receptors exist in the pond snail (Lymnaea stagnalis) brain (2), and it is both necessary and instructive for memory formation after reward conditioning in young animals (3). Here we show that exogenous PACAP38 boosts memory formation in aged Lymnaea, where endogenous PACAP38 levels are low in the brain. Treatment with insulin-like growth factor-1, which in vertebrates was shown to transactivate PACAP type I (PAC1) receptors (4) also boosts memory formation in aged pond snails. Due to the evolutionarily conserved nature of these polypeptides and their established role in memory and synaptic plasticity, there is a very high probability that they could also act as "memory rejuvenating" agents in humans. © The Author 2014. Published by Oxford University Press on behalf of The Gerontological Society of America.

  1. Applications Performance on NAS Intel Paragon XP/S-15

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Simon, Horst D.; Copper, D. M. (Technical Monitor)

    1994-01-01

    The Numerical Aerodynamic Simulation (NAS) Systems Division received an Intel Touchstone Sigma prototype model Paragon XP/S-15 in February 1993. The i860 XP microprocessor, with an integrated floating point unit and operating in dual-instruction mode, gives a peak performance of 75 million floating point operations (MFLOPS) per second for 64-bit floating point arithmetic. It is used in the Paragon XP/S-15 which has been installed at NAS, NASA Ames Research Center. The NAS Paragon has 208 nodes and its peak performance is 15.6 GFLOPS. Here, we report on early experience using the Paragon XP/S-15. We have tested its performance using both kernels and applications of interest to NAS. We have measured the performance of BLAS 1, 2, and 3, both assembly-coded and Fortran-coded, on the NAS Paragon XP/S-15. Furthermore, we have investigated the performance of a single-node one-dimensional FFT, a distributed two-dimensional FFT, and a distributed three-dimensional FFT. Finally, we measured the performance of the NAS Parallel Benchmarks (NPB) on the Paragon and compared it with the performance obtained on other highly parallel machines, such as the CM-5, CRAY T3D, IBM SP1, etc. In particular, we investigated the following issues, which can strongly affect the performance of the Paragon: (a) Impact of the operating system: Intel currently uses OSF/1 AD from the Open Software Foundation as the default operating system. Paging of the Open Software Foundation (OSF) server, at 22 MB, to make more memory available for the application degrades performance. We found that when the limit of 26 MB per node, out of the 32 MB available, is reached, the application is paged out of main memory using virtual memory. When the application starts paging, the performance is considerably reduced. We found that dynamic memory allocation can help application performance under certain circumstances. (b) Impact of the data cache on the i860/XP: We measured the performance of the BLAS, both assembly-coded and Fortran-coded. We found that the measured performance of the assembly-coded BLAS is much less than what the memory bandwidth limitation would predict. The influence of the data cache on different vector sizes was also investigated using one-dimensional FFTs. (c) Impact of processor layout: There are several different ways processors can be laid out within the two-dimensional grid of processors on the Paragon. We used the FFT example to investigate performance differences based on processor layout.

  2. A Framework for Cognitive Interventions Targeting Everyday Memory Performance and Memory Self-efficacy

    PubMed Central

    McDougall, Graham J.

    2009-01-01

    The human brain has the potential for self-renewal through adult neurogenesis, which is the birth of new neurons. Neural plasticity implies that the nervous system can change and grow. This understanding has created new possibilities for cognitive enhancement and rehabilitation. However, as individuals age, they have decreased confidence, or memory self-efficacy, which is directly related to their everyday memory performance. In this article, a developmental account of studies about memory self-efficacy and nonpharmacologic cognitive intervention models is presented and a cognitive intervention model, called the cognitive behavioral model of everyday memory, is proposed. PMID:19065089

  3. Design and implementation of laser target simulator in hardware-in-the-loop simulation system based on LabWindows/CVI and RTX

    NASA Astrophysics Data System (ADS)

    Tong, Qiujie; Wang, Qianqian; Li, Xiaoyang; Shan, Bin; Cui, Xuntai; Li, Chenyu; Peng, Zhong

    2016-11-01

    In order to satisfy the requirements of real-time performance and generality, a laser target simulator for a semi-physical (hardware-in-the-loop) simulation system based on an RTX + LabWindows/CVI platform is proposed in this paper. Compared with the upper/lower-computer simulation platform architecture used in most current real-time systems, this system has better maintainability and portability. The system runs on the Windows platform and uses the RTX real-time extension subsystem, combined with a reflective memory network, to guarantee real-time performance and to carry out real-time tasks such as evaluating the simulation model, transmitting simulation data, and maintaining real-time communication. The real-time tasks of the simulation system run in the RTSS process. At the same time, LabWindows/CVI is used to build a graphical interface and to handle the non-real-time tasks of the simulation, such as human-machine interaction and the display and storage of simulation data, which run in a Win32 process. Through the design of RTX shared memory and a task scheduling algorithm, data interaction between the real-time RTSS process and the non-real-time Win32 process is accomplished. The experimental results show that the system has strong real-time performance, high stability, and high simulation accuracy, as well as good human-computer interaction.
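
    A hand-off of this kind between a real-time producer and a non-real-time consumer is commonly built on a shared region plus a lock-free queue. The sketch below is a generic single-producer/single-consumer ring buffer in portable C11; it does not use the RTX or reflective-memory APIs, and the frame layout is an illustrative assumption, with the real-time side pushing simulation frames and the GUI side draining them.

      #include <stdatomic.h>
      #include <stdio.h>

      #define RING_SIZE 256          /* queue capacity, illustrative */

      typedef struct {
          double sim_time;           /* illustrative frame contents */
          double target_x, target_y;
      } frame_t;

      typedef struct {
          frame_t slots[RING_SIZE];
          atomic_uint head;          /* advanced by the real-time producer */
          atomic_uint tail;          /* advanced by the non-real-time consumer */
      } ring_t;

      /* Producer side (real-time task): returns 0 if the ring is full. */
      static int ring_push(ring_t *r, frame_t f)
      {
          unsigned head = atomic_load_explicit(&r->head, memory_order_relaxed);
          unsigned tail = atomic_load_explicit(&r->tail, memory_order_acquire);
          if (head - tail == RING_SIZE) return 0;      /* full: drop or overwrite per policy */
          r->slots[head % RING_SIZE] = f;
          atomic_store_explicit(&r->head, head + 1, memory_order_release);
          return 1;
      }

      /* Consumer side (GUI task): returns 0 if no frame is available. */
      static int ring_pop(ring_t *r, frame_t *out)
      {
          unsigned tail = atomic_load_explicit(&r->tail, memory_order_relaxed);
          unsigned head = atomic_load_explicit(&r->head, memory_order_acquire);
          if (tail == head) return 0;                  /* empty */
          *out = r->slots[tail % RING_SIZE];
          atomic_store_explicit(&r->tail, tail + 1, memory_order_release);
          return 1;
      }

      int main(void)
      {
          static ring_t ring;                          /* zero-initialized shared region */
          ring_push(&ring, (frame_t){ .sim_time = 0.001, .target_x = 1.5, .target_y = -0.3 });
          frame_t f;
          if (ring_pop(&ring, &f))
              printf("t=%.3f s  target=(%.2f, %.2f)\n", f.sim_time, f.target_x, f.target_y);
          return 0;
      }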

  4. NRAM: a disruptive carbon-nanotube resistance-change memory.

    PubMed

    Gilmer, D C; Rueckes, T; Cleveland, L

    2018-04-03

    Advanced memory technology based on carbon nanotubes (CNTs) (NRAM) possesses desired properties for implementation in a host of integrated systems due to the demonstrated advantages of its operation, including high speed (nanotubes can switch state in picoseconds), high endurance (over a trillion), and low power (with essentially zero standby power). The applicable integrated systems for NRAM have markets that will see compound annual growth rates (CAGR) of over 62% between 2018 and 2023, with an embedded systems CAGR of 115% in 2018-2023 (http://bccresearch.com/pressroom/smc/bcc-research-predicts:-nram-(finally)-to-revolutionize-computer-memory). These opportunities are helping drive the realization of a shift from silicon-based to carbon-based (NRAM) memories. NRAM is a memory cell made up of an interlocking matrix of CNTs, either touching or slightly separated, leading to low or higher resistance states, respectively. The small movement of atoms, as opposed to the movement of electrons in traditional silicon-based memories, gives NRAM more robust endurance and high-temperature retention and operation, which, along with high speed and low power, is expected to allow this memory technology to blossom into a disruptive replacement for the current status quo of DRAM (dynamic RAM), SRAM (static RAM), and NAND flash memories.

  5. NRAM: a disruptive carbon-nanotube resistance-change memory

    NASA Astrophysics Data System (ADS)

    Gilmer, D. C.; Rueckes, T.; Cleveland, L.

    2018-04-01

    Advanced memory technology based on carbon nanotubes (CNTs) (NRAM) possesses desired properties for implementation in a host of integrated systems due to the demonstrated advantages of its operation, including high speed (nanotubes can switch state in picoseconds), high endurance (over a trillion), and low power (with essentially zero standby power). The applicable integrated systems for NRAM have markets that will see compound annual growth rates (CAGR) of over 62% between 2018 and 2023, with an embedded systems CAGR of 115% in 2018-2023 (http://bccresearch.com/pressroom/smc/bcc-research-predicts:-nram-(finally)-to-revolutionize-computer-memory). These opportunities are helping drive the realization of a shift from silicon-based to carbon-based (NRAM) memories. NRAM is a memory cell made up of an interlocking matrix of CNTs, either touching or slightly separated, leading to low or higher resistance states, respectively. The small movement of atoms, as opposed to the movement of electrons in traditional silicon-based memories, gives NRAM more robust endurance and high-temperature retention and operation, which, along with high speed and low power, is expected to allow this memory technology to blossom into a disruptive replacement for the current status quo of DRAM (dynamic RAM), SRAM (static RAM), and NAND flash memories.

  6. Extreme-scale Algorithms and Solver Resilience

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dongarra, Jack

    A widening gap exists between the peak performance of high-performance computers and the performance achieved by complex applications running on these platforms. Over the next decade, extreme-scale systems will present major new challenges to algorithm development that could amplify this mismatch in such a way that it prevents the productive use of future DOE Leadership computers, due to the following: extreme levels of parallelism due to multicore processors; an increase in system fault rates, requiring algorithms to be resilient beyond just checkpoint/restart; complex memory hierarchies and costly data movement, in both energy and performance; heterogeneous system architectures (mixing CPUs, GPUs, etc.); and conflicting goals of performance, resilience, and power requirements.

  7. Working Memory, Short-Term Memory, and Naming Speed as Predictors of Children's Mathematical Performance

    ERIC Educational Resources Information Center

    Swanson, Lee; Kim, Kenny

    2007-01-01

    Working memory (WM) has been associated with the acquisition of arithmetic skills, however, the components of WM that underlie this acquisition have not been explored. This study explored the contribution of two WM systems (the phonological loop and the central executive) to mathematical performance in young children. The results showed that a…

  8. Effects of the Knowledge Base on Children's Rehearsal and Organizational Strategies.

    ERIC Educational Resources Information Center

    Ornstein, Peter A.; Naus, Mary J.

    In addition to the important role of memory strategies in mediating age changes in recall performance, it is clear that the permanent memory system (or information available in the knowledge base) exerts a significant influence on the acquisition and retention of information. Age changes in memory performance will be fully understood only through…

  9. Declarative verbal memory impairments in middle-aged women who are caregivers of offspring with autism spectrum disorders: The role of negative affect and testosterone.

    PubMed

    Romero-Martínez, A; González-Bono, E; Salvador, A; Moya-Albiol, L

    2016-01-01

    Caring for offspring diagnosed with a chronic psychological disorder such as autism spectrum disorder (ASD) is used in research as a model of chronic stress. This chronic stress has been reported to have deleterious effects on caregivers' cognition, particularly in verbal declarative memory. Moreover, such cognitive decline may be mediated by testosterone (T) levels and negative affect, understood as depressive mood together with high anxiety and anger. This study aimed to compare declarative memory function in middle-aged women who were caregivers for individuals with ASD (n = 24; mean age = 45) and female controls (n = 22; mean age = 45), using a standardised memory test (Rey's Auditory Verbal Learning Test). It also sought to examine the role of care recipient characteristics, negative mood and T levels in memory impairments. ASD caregivers were highly sensitive to proactive interference and verbal forgetting. In addition, they had higher negative affect and T levels, both of which have been associated with poorer verbal memory performance. Moreover, the number of years of caregiving affected memory performance and negative affect, especially, in terms of anger feelings. On the other hand, T levels in caregivers had a curvilinear relationship with verbal memory performance; that is, increases in T were associated with improvements in verbal memory performance up to a certain point, but subsequently, memory performance decreased with increasing T. Chronic stress may produce disturbances in mood and hormonal levels, which in turn might increase the likelihood of developing declarative memory impairments although caregivers do not show a generalised decline in memory. These findings should be taken into account for understanding the impact of cognitive impairments on the ability to provide optimal caregiving.

  10. Individual prediction of change in delayed recall of prose passages after left-sided anterior temporal lobectomy.

    PubMed

    Jokeit, H; Ebner, A; Holthausen, H; Markowitsch, H J; Moch, A; Pannek, H; Schulz, R; Tuxhorn, I

    1997-08-01

    Prognostic variables for individual memory outcome after left anterior temporal lobectomy (ATL) were studied in 27 patients with refractory temporal lobe epilepsy. The difference between pre- and postoperative performance in the delayed recall of two prose passages (Story A and B) from the Wechsler Memory Scale served as measure of postoperative memory change. Fifteen independent clinical, neuropsychological, and electrophysiological variables were submitted to a multiple linear regression analysis. Preoperative immediate and delayed recall of story content and right hemisphere Wada memory performance for pictorial and verbal items explained very well postoperative memory changes in recall of Story B. Delayed recall of Story B, but not of Story A, had high concurrent validity to other measures of memory. Patients who became seizure-free did not differ in memory change from patients who continued to have seizures after ATL. The variables age at epilepsy onset and probable age at temporal lobe damage provided complementary information for individual prediction but with less effectiveness than Wada test data. Our model confirmed that good preoperative memory functioning and impaired right hemispheric Wada memory performance for pictorial items predict a high risk of memory loss after left ATL. The analyses demonstrate that the combination of independent measures delivers more information than Wada test performance or any other variable alone. The suggested function can be used routinely to estimate the individual severity of verbal episodic memory impairment that might occur after left-sided ATL and offers a rational basis for the counseling of patients.

  11. Weather prediction using a genetic memory

    NASA Technical Reports Server (NTRS)

    Rogers, David

    1990-01-01

    Kanerva's sparse distributed memory (SDM) is an associative memory model based on the mathematical properties of high-dimensional binary address spaces. Holland's genetic algorithms are a search technique for high-dimensional spaces inspired by the evolutionary processes of DNA. Genetic Memory is a hybrid of the above two systems, in which the memory uses a genetic algorithm to dynamically reconfigure its physical storage locations to reflect correlations between the stored addresses and data. This architecture is designed to maximize the ability of the system to scale up to handle real-world problems.

  12. The storage system of PCM based on random access file system

    NASA Astrophysics Data System (ADS)

    Han, Wenbing; Chen, Xiaogang; Zhou, Mi; Li, Shunfen; Li, Gezi; Song, Zhitang

    2016-10-01

    Emerging memory technologies such as phase change memory (PCM) tend to offer fast, random access to persistent storage with better scalability. Establishing PCM in the storage hierarchy to narrow the performance gap is a hot topic of academic and industrial research. However, existing file systems do not perform well with emerging PCM storage, because they access the storage medium via a slow, block-based interface. In this paper, we propose a novel file system, RAFS, built on an embedded platform, to bring out the performance of PCM. We attach PCM chips to the memory bus and build RAFS on the physical address space. In the proposed file system, we simplify the traditional system architecture to eliminate block-related operations and layers. Furthermore, we adopt memory mapping and a bypassed page cache to reduce copy overhead between the process address space and the storage device. XIP mechanisms are also supported in RAFS. To the best of our knowledge, we are among the first to implement a file system on real PCM chips. We have analyzed and evaluated its performance with the IOZONE benchmark tools. Our experimental results show that RAFS on PCM outperforms Ext4fs on SDRAM with small record lengths. Running on DRAM, RAFS is significantly faster than Ext4fs, by 18% to 250%.
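    As a rough illustration of the memory-mapping idea described above, the following minimal C sketch maps a byte-addressable device directly into the process address space so that loads and stores bypass the block layer and page cache. It is not the RAFS interface; the device path /dev/pcm0 and the region size are hypothetical.

      /* Minimal sketch (not the RAFS API): map a byte-addressable persistent-memory
       * device into the process address space so reads and writes bypass the block
       * layer and page cache. "/dev/pcm0" and the 1 MiB size are hypothetical. */
      #include <fcntl.h>
      #include <stdio.h>
      #include <string.h>
      #include <sys/mman.h>
      #include <unistd.h>

      int main(void)
      {
          const size_t len = 1 << 20;              /* 1 MiB region, assumed size */
          int fd = open("/dev/pcm0", O_RDWR);      /* hypothetical PCM device node */
          if (fd < 0) { perror("open"); return 1; }

          /* MAP_SHARED: stores go to the device-backed mapping, not a private copy. */
          char *pm = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
          if (pm == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

          memcpy(pm, "hello, persistent world", 24);   /* load/store access, no read()/write() */
          msync(pm, len, MS_SYNC);                     /* flush the range to the medium */

          munmap(pm, len);
          close(fd);
          return 0;
      }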

  13. Iconic memory requires attention

    PubMed Central

    Persuh, Marjan; Genzer, Boris; Melara, Robert D.

    2012-01-01

    Two experiments investigated whether attention plays a role in iconic memory, employing either a change detection paradigm (Experiment 1) or a partial-report paradigm (Experiment 2). In each experiment, attention was taxed during initial display presentation, focusing the manipulation on consolidation of information into iconic memory, prior to transfer into working memory. Observers were able to maintain high levels of performance (accuracy of change detection or categorization) even when concurrently performing an easy visual search task (low load). However, when the concurrent search was made difficult (high load), observers' performance dropped to almost chance levels, while search accuracy held at single-task levels. The effects of attentional load remained the same across paradigms. The results suggest that, without attention, participants consolidate in iconic memory only gross representations of the visual scene, information too impoverished for successful detection of perceptual change or categorization of features. PMID:22586389

  14. Iconic memory requires attention.

    PubMed

    Persuh, Marjan; Genzer, Boris; Melara, Robert D

    2012-01-01

    Two experiments investigated whether attention plays a role in iconic memory, employing either a change detection paradigm (Experiment 1) or a partial-report paradigm (Experiment 2). In each experiment, attention was taxed during initial display presentation, focusing the manipulation on consolidation of information into iconic memory, prior to transfer into working memory. Observers were able to maintain high levels of performance (accuracy of change detection or categorization) even when concurrently performing an easy visual search task (low load). However, when the concurrent search was made difficult (high load), observers' performance dropped to almost chance levels, while search accuracy held at single-task levels. The effects of attentional load remained the same across paradigms. The results suggest that, without attention, participants consolidate in iconic memory only gross representations of the visual scene, information too impoverished for successful detection of perceptual change or categorization of features.

  15. Interfacing a high performance disk array file server to a Gigabit LAN

    NASA Technical Reports Server (NTRS)

    Seshan, Srinivasan; Katz, Randy H.

    1993-01-01

    Our previous prototype, RAID-1, identified several bottlenecks in typical file server architectures. The most important bottleneck was the lack of a high-bandwidth path between disk, memory, and the network. Workstation servers, such as the Sun-4/280, have very slow access to peripherals on busses far from the CPU. For the RAID-2 system, we addressed this problem by designing a crossbar interconnect, Xbus board, that provides a 40MB/s path between disk, memory, and the network interfaces. However, this interconnect does not provide the system CPU with low latency access to control the various interfaces. To provide a high data rate to clients on the network, we were forced to carefully and efficiently design the network software. A block diagram of the system hardware architecture is given. In the following subsections, we describe pieces of the RAID-2 file server hardware that had a significant impact on the design of the network interface.

  16. Multi-range force sensors utilizing shape memory alloys

    DOEpatents

    Varma, Venugopal K.

    2003-04-15

    The present invention provides a multi-range force sensor comprising a load cell made of a shape memory alloy, a strain sensing system, a temperature modulating system, and a temperature monitoring system. The ability of the force sensor to measure contact forces in multiple ranges is effected by the change in temperature of the shape memory alloy. The heating and cooling system functions to place the shape memory alloy of the load cell in either a low temperature, low strength phase for measuring small contact forces, or a high temperature, high strength phase for measuring large contact forces. Once the load cell is in the desired phase, the strain sensing system is utilized to obtain the applied contact force. The temperature monitoring system is utilized to ensure that the shape memory alloy is in one phase or the other.
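    A minimal C sketch of the multi-range measurement logic described above, assuming hypothetical transition-temperature and stiffness values: the temperature reading selects which phase's calibration converts the measured strain into force. This is illustrative only and is not taken from the patent.

      /* Illustrative sketch only (not from the patent): convert a measured strain
       * into force using the calibration for whichever SMA phase the temperature
       * monitor reports. All constants are hypothetical placeholders. */
      #include <stdio.h>

      #define TRANSITION_C  70.0    /* assumed phase-transition temperature, deg C */
      #define K_LOW_N       200.0   /* assumed stiffness, low-temperature phase (N per unit strain) */
      #define K_HIGH_N      5000.0  /* assumed stiffness, high-temperature phase */

      double force_from_strain(double strain, double temperature_c)
      {
          /* Low-temperature, low-strength phase: sensitive small-force range.
             High-temperature, high-strength phase: large-force range. */
          double k = (temperature_c < TRANSITION_C) ? K_LOW_N : K_HIGH_N;
          return k * strain;
      }

      int main(void)
      {
          printf("small-force reading: %.2f N\n", force_from_strain(0.010, 25.0));
          printf("large-force reading: %.2f N\n", force_from_strain(0.010, 95.0));
          return 0;
      }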

  17. Prospective memory: effects of divided attention on spontaneous retrieval.

    PubMed

    Harrison, Tyler L; Mullet, Hillary G; Whiffen, Katie N; Ousterhout, Hunter; Einstein, Gilles O

    2014-02-01

    We examined the effects of divided attention on the spontaneous retrieval of a prospective memory intention. Participants performed an ongoing lexical decision task with an embedded prospective memory demand, and also performed a divided-attention task during some segments of lexical decision trials. In all experiments, monitoring was highly discouraged, and we observed no evidence that participants engaged monitoring processes. In Experiment 1, performing a moderately demanding divided-attention task (a digit detection task) did not affect prospective memory performance. In Experiment 2, performing a more challenging divided-attention task (random number generation) impaired prospective memory. Experiment 3 showed that this impairment was eliminated when the prospective memory cue was perceptually salient. Taken together, the results indicate that spontaneous retrieval is not automatic and that challenging divided-attention tasks interfere with spontaneous retrieval and not with the execution of a retrieved intention.

  18. A VLSI VAX chip set

    NASA Astrophysics Data System (ADS)

    Johnson, W. N.; Herrick, W. V.; Grundmann, W. J.

    1984-10-01

    For the first time, VLSI technology is used to compress the full functionality and comparable performance of the VAX 11/780 super-minicomputer into a 1.2 M transistor microprocessor chip set. There was no subsetting of the 304-instruction set and the 17 data types, nor reduction in hardware support for the 4 Gbyte virtual memory management architecture. The chip set supports an integral 8 kbyte memory cache, a 13.3 Mbyte/s system bus, and sophisticated multiprocessing. High performance is achieved through microcode optimizations afforded by the large control store, tightly coupled address and data caches, the use of internal and external 32-bit datapaths, the extensive application of both microlevel and macrolevel pipelining, and the use of specialized hardware assists.

  19. Neural Network Model For Fast Learning And Retrieval

    NASA Astrophysics Data System (ADS)

    Arsenault, Henri H.; Macukow, Bohdan

    1989-05-01

    An approach to learning in a multilayer neural network is presented. The proposed network learns by creating interconnections between the input layer and the intermediate layer. In one of the new storage prescriptions proposed, interconnections are excitatory (positive) only and the weights depend on the stored patterns. In the intermediate layer each mother cell is responsible for one stored pattern. Mutually interconnected neurons in the intermediate layer perform a winner-take-all operation, taking into account correlations between stored vectors. The performance of networks using this interconnection prescription is compared with two previously proposed schemes, one using inhibitory connections at the output and one using all-or-nothing interconnections. The network can be used as a content-addressable memory or as a symbolic substitution system that yields an arbitrarily defined output for any input. The training of a model to perform Boolean logical operations is also described. Computer simulations using the network as an autoassociative content-addressable memory show the model to be efficient. Content-addressable associative memories and neural logic modules can be combined to perform logic operations on highly corrupted data.
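    The winner-take-all recall step described above can be illustrated with a minimal C sketch (not the authors' exact storage prescription): each intermediate "mother cell" holds one stored pattern, the probe's correlation with every pattern is computed, and the best-matching cell wins. Bipolar (+1/-1) coding and the tiny pattern set are assumptions made for illustration.

      /* Minimal sketch of winner-take-all recall in a two-layer associative memory:
       * one "mother cell" per stored pattern; the cell with the highest correlation
       * to the probe wins. Bipolar coding and the pattern set are assumptions. */
      #include <stdio.h>

      #define N_PAT 3   /* stored patterns (one mother cell each) */
      #define N_BIT 8   /* pattern length */

      static const int stored[N_PAT][N_BIT] = {
          { 1,  1, -1, -1,  1, -1,  1, -1},
          {-1,  1,  1, -1, -1,  1, -1,  1},
          { 1, -1,  1,  1, -1, -1,  1,  1},
      };

      int recall(const int *probe)
      {
          int best = 0, best_score = -N_BIT - 1;
          for (int p = 0; p < N_PAT; p++) {
              int score = 0;                      /* correlation of probe with pattern p */
              for (int i = 0; i < N_BIT; i++)
                  score += probe[i] * stored[p][i];
              if (score > best_score) {           /* winner-take-all selection */
                  best_score = score;
                  best = p;
              }
          }
          return best;   /* index of the winning mother cell; stored[best] is the recalled pattern */
      }

      int main(void)
      {
          int noisy[N_BIT] = {1, 1, -1, -1, 1, -1, -1, -1};   /* pattern 0 with one flipped bit */
          printf("winning mother cell: %d\n", recall(noisy));
          return 0;
      }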

  20. Power/Performance Trade-offs of Small Batched LU Based Solvers on GPUs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Villa, Oreste; Fatica, Massimiliano; Gawande, Nitin A.

    In this paper we propose and analyze a set of batched linear solvers for small matrices on Graphics Processing Units (GPUs), evaluating the various alternatives depending on the size of the systems to solve. We discuss three different solutions that operate with different levels of parallelization and GPU features. The first, exploiting the CUBLAS library, manages matrices of size up to 32x32 and employs Warp-level (one matrix, one Warp) parallelism and shared memory. The second works at Thread-block-level parallelism (one matrix, one Thread-block), still exploiting shared memory but managing matrices up to 76x76. The third is Thread-level parallel (one matrix, one thread) and can reach sizes up to 128x128, but it does not exploit shared memory and relies only on the high memory bandwidth of the GPU. The first and second solutions support only partial pivoting; the third one easily supports partial and full pivoting, making it attractive for problems that require greater numerical stability. We analyze the trade-offs in terms of performance and power consumption as a function of the size of the linear systems that are simultaneously solved. We execute the three implementations on a Tesla M2090 (Fermi) and on a Tesla K20 (Kepler).
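    For illustration, the per-matrix work that the thread-level variant assigns to a single thread can be sketched in plain C as an unblocked, in-place LU factorization with partial pivoting; this is a generic textbook kernel, not the paper's CUDA code.

      /* Plain-C sketch (not the paper's CUDA kernels) of the per-matrix work the
       * thread-level variant assigns to one thread: an unblocked, in-place LU
       * factorization with partial pivoting of a single small n x n matrix. */
      #include <math.h>
      #include <stdio.h>

      /* Factor A (row-major, n x n) in place as P*A = L*U; piv records row swaps.
       * Returns 0 on success, -1 if a zero pivot is encountered. */
      int lu_factor(double *A, int *piv, int n)
      {
          for (int k = 0; k < n; k++) {
              int p = k;                                   /* partial pivoting: largest |A[i][k]| */
              for (int i = k + 1; i < n; i++)
                  if (fabs(A[i * n + k]) > fabs(A[p * n + k])) p = i;
              piv[k] = p;
              if (A[p * n + k] == 0.0) return -1;
              if (p != k)                                  /* swap rows k and p */
                  for (int j = 0; j < n; j++) {
                      double t = A[k * n + j];
                      A[k * n + j] = A[p * n + j];
                      A[p * n + j] = t;
                  }
              for (int i = k + 1; i < n; i++) {            /* eliminate below the pivot */
                  A[i * n + k] /= A[k * n + k];
                  for (int j = k + 1; j < n; j++)
                      A[i * n + j] -= A[i * n + k] * A[k * n + j];
              }
          }
          return 0;
      }

      int main(void)
      {
          double A[9] = {2, 1, 1,  4, 3, 3,  8, 7, 9};
          int piv[3];
          if (lu_factor(A, piv, 3) == 0)
              printf("U diagonal: %.3f %.3f %.3f\n", A[0], A[4], A[8]);
          return 0;
      }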

  1. Behavioral and neuroanatomical investigation of Highly Superior Autobiographical Memory (HSAM)

    PubMed Central

    LePort, Aurora K.R.; Mattfeld, Aaron T.; Dickinson-Anson, Heather; Fallon, James H.; Stark, Craig E.L.; Kruggel, Frithjof; Cahill, Larry; McGaugh, James L.

    2013-01-01

    A single case study recently documented one woman’s ability to recall accurately vast amounts of autobiographical information, spanning most of her lifetime, without the use of practiced mnemonics (Parker, Cahill, & McGaugh, 2006). The current study reports findings based on eleven participants expressing this same memory ability, now referred to as Highly Superior Autobiographical Memory (HSAM). Participants were identified and subsequently characterized based on screening for memory of public events. They were then tested for personal autobiographical memories as well as for memory assessed by laboratory memory tests. Additionally, whole-brain structural MRI scans were obtained. Results indicated that HSAM participants performed significantly better at recalling public as well as personal autobiographical events as well as the days and dates on which these events occurred. However, their performance was comparable to age- and sex-matched controls on most standard laboratory memory tests. Neuroanatomical results identified nine structures as being morphologically different from those of control participants. The study of HSAM may provide new insights into the neurobiology of autobiographical memory. PMID:22652113

  2. Working memory, math performance, and math anxiety.

    PubMed

    Ashcraft, Mark H; Krause, Jeremy A

    2007-04-01

    The cognitive literature now shows how critically math performance depends on working memory, for any form of arithmetic and math that involves processes beyond simple memory retrieval. The psychometric literature is also very clear on the global consequences of mathematics anxiety. People who are highly math anxious avoid math: They avoid elective coursework in math, both in high school and college, they avoid college majors that emphasize math, and they avoid career paths that involve math. We go beyond these psychometric relationships to examine the cognitive consequences of math anxiety. We show how performance on a standardized math achievement test varies as a function of math anxiety, and that math anxiety compromises the functioning of working memory. High math anxiety works much like a dual task setting: Preoccupation with one's math fears and anxieties functions like a resource-demanding secondary task. We comment on developmental and educational factors related to math and working memory, and on factors that may contribute to the development of math anxiety.

  3. Regional brain activity that determines successful and unsuccessful working memory formation.

    PubMed

    Teramoto, Shohei; Inaoka, Tsubasa; Ono, Yumie

    2016-08-01

    Using EEG source reconstruction with Multiple Sparse Priors (MSP), we investigated the regional brain activity that determines successful memory encoding in two participant groups with high and low accuracy rates. Eighteen healthy young adults performed a sequential visual Sternberg memory task. The 32-channel EEG was measured continuously while participants performed two 70-trial blocks of the memory task. The regional brain activity corresponding to oscillatory EEG activity in the alpha band (8-13 Hz) during the encoding period was analyzed by MSP implemented in SPM8. We divided the data of all participants into two groups (low- and high-performance groups) and analyzed differences in regional brain activity between trials in which participants answered correctly and incorrectly within each group. Participants in the low-performance group showed a significant activity increase in the visual cortices in their successful trials compared to unsuccessful ones. On the other hand, those in the high-performance group showed a significant activity increase in widely distributed cortical regions in the frontal, temporal, and parietal areas, including those proposed in Baddeley's working memory model. Further comparison of activated cortical volumes and mean current source intensities within the cortical regions of Baddeley's model during memory encoding demonstrated that participants in the high-performance group showed enhanced activity in the right premotor cortex, which plays an important role in maintaining visuospatial attention, compared to those in the low-performance group. Our results suggest that better memory encoding is associated with stronger and more distributed regional brain activity, including in the premotor cortex, possibly indicating efficient allocation of cognitive load and maintenance of attention.

  4. WinHPC System Policies | High-Performance Computing | NREL

    Science.gov Websites

    … requiring high CPU utilization or large amounts of memory should be run on the worker nodes. WinHPC02 is not … Associated data are removed when NREL worker status is discontinued; users should make arrangements to save … Licenses are returned to the license pool when other users close the application or after …

  5. Conceptual Distinctiveness Supports Detailed Visual Long-Term Memory for Real-World Objects

    PubMed Central

    Konkle, Talia; Brady, Timothy F.; Alvarez, George A.; Oliva, Aude

    2012-01-01

    Humans have a massive capacity to store detailed information in visual long-term memory. The present studies explored the fidelity of these visual long-term memory representations and examined how conceptual and perceptual features of object categories support this capacity. Observers viewed 2,800 object images with a different number of exemplars presented from each category. At test, observers indicated which of 2 exemplars they had previously studied. Memory performance was high and remained quite high (82% accuracy) with 16 exemplars from a category in memory, demonstrating a large memory capacity for object exemplars. However, memory performance decreased as more exemplars were held in memory, implying systematic categorical interference. Object categories with conceptually distinctive exemplars showed less interference in memory as the number of exemplars increased. Interference in memory was not predicted by the perceptual distinctiveness of exemplars from an object category, though these perceptual measures predicted visual search rates for an object target among exemplars. These data provide evidence that observers’ capacity to remember visual information in long-term memory depends more on conceptual structure than perceptual distinctiveness. PMID:20677899

  6. Using NVMe Gen3 PCIe SSD Cards in High-density Servers for High-performance Big Data Transfer Over Multiple Network Channels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fang, Chin

    This Technical Note describes how the Zettar team came up with a data transfer cluster design that convincingly proved the feasibility of using high-density servers for high-performance Big Data transfers. It then outlines the tests, operations, and observations that address a potential over-heating concern regarding the use of Non-Volatile Memory Host Controller Interface Specification (NVMHCI aka NVM Express or NVMe) Gen 3 PCIe SSD cards in high-density servers. Finally, it points out the possibility of developing a new generation of high-performance Science DMZ data transfer system for the data-intensive research community and commercial enterprises.

  7. Menstrual cycle phase effects on memory and Stroop task performance.

    PubMed

    Hatta, Takeshi; Nagaya, Keiko

    2009-10-01

    The present study examined differences in Stroop and memory task performances modulated by gonadal steroid hormones during the menstrual cycle in women. Thirty women with regular menstrual cycles performed a logical memory task (Wechsler Memory Scale) and the Stroop task. The results showed a significant difference in Stroop task performance between low and high levels of estradiol and progesterone during the menstrual cycle, but there was no significant difference in memory performance between the two phases, nor was there any significant mood change that might have influenced cognitive performance. These findings suggest that sex-related hormone modulation selectively affects cognitive functions depending on the type of task and low level secretion of estradiol appears to contribute to reducing the level of attention that relates to the prefrontal cortex.

  8. Memory and learning behaviors mimicked in nanogranular SiO2-based proton conductor gated oxide-based synaptic transistors

    NASA Astrophysics Data System (ADS)

    Wan, Chang Jin; Zhu, Li Qiang; Zhou, Ju Mei; Shi, Yi; Wan, Qing

    2013-10-01

    In neuroscience, signal processing, memory and learning function are established in the brain by modifying ionic fluxes in neurons and synapses. Emulation of memory and learning behaviors of biological systems by nanoscale ionic/electronic devices is highly desirable for building neuromorphic systems or even artificial neural networks. Here, novel artificial synapses based on junctionless oxide-based protonic/electronic hybrid transistors gated by nanogranular phosphorus-doped SiO2-based proton-conducting films are fabricated on glass substrates by a room-temperature process. Short-term memory (STM) and long-term memory (LTM) are mimicked by tuning the pulse gate voltage amplitude. The LTM process in such an artificial synapse is due to the proton-related interfacial electrochemical reaction. Our results are highly desirable for building future neuromorphic systems or even artificial networks via electronic elements. Electronic supplementary information (ESI) available. See DOI: 10.1039/c3nr02987e

  9. The contribution of attentional lapses to individual differences in visual working memory capacity.

    PubMed

    Adam, Kirsten C S; Mance, Irida; Fukuda, Keisuke; Vogel, Edward K

    2015-08-01

    Attentional control and working memory capacity are important cognitive abilities that substantially vary between individuals. Although much is known about how attentional control and working memory capacity relate to each other and to constructs like fluid intelligence, little is known about how trial-by-trial fluctuations in attentional engagement impact trial-by-trial working memory performance. Here, we employ a novel whole-report memory task that allowed us to distinguish between varying levels of attentional engagement in humans performing a working memory task. By characterizing low-performance trials, we can distinguish between models in which working memory performance failures are caused by either (1) complete lapses of attention or (2) variations in attentional control. We found that performance failures increase with set-size and strongly predict working memory capacity. Performance variability was best modeled by an attentional control model of attention, not a lapse model. We examined neural signatures of performance failures by measuring EEG activity while participants performed the whole-report task. The number of items correctly recalled in the memory task was predicted by frontal theta power, with decreased frontal theta power associated with poor performance on the task. In addition, we found that poor performance was not explained by failures of sensory encoding; the P1/N1 response and ocular artifact rates were equivalent for high- and low-performance trials. In all, we propose that attentional lapses alone cannot explain individual differences in working memory performance. Instead, we find that graded fluctuations in attentional control better explain the trial-by-trial differences in working memory that we observe.

  10. The Contribution of Attentional Lapses to Individual Differences in Visual Working Memory Capacity

    PubMed Central

    Adam, Kirsten C. S.; Mance, Irida; Fukuda, Keisuke; Vogel, Edward K.

    2015-01-01

    Attentional control and working memory capacity are important cognitive abilities that substantially vary between individuals. Although much is known about how attentional control and working memory capacity relate to each other and to constructs like fluid intelligence, little is known about how trial-by-trial fluctuations in attentional engagement impact trial-by-trial working memory performance. Here, we employ a novel whole-report memory task that allowed us to distinguish between varying levels of attentional engagement in humans performing a working memory task. By characterizing low-performance trials, we can distinguish between models in which working memory performance failures are caused by either (1) complete lapses of attention or (2) variations in attentional control. We found that performance failures increase with set-size and strongly predict working memory capacity. Performance variability was best modeled by an attentional control model of attention, not a lapse model. We examined neural signatures of performance failures by measuring EEG activity while participants performed the whole-report task. The number of items correctly recalled in the memory task was predicted by frontal theta power, with decreased frontal theta power associated with poor performance on the task. In addition, we found that poor performance was not explained by failures of sensory encoding; the P1/N1 response and ocular artifact rates were equivalent for high- and low-performance trials. In all, we propose that attentional lapses alone cannot explain individual differences in working memory performance. Instead, we find that graded fluctuations in attentional control better explain the trial-by-trial differences in working memory that we observe. PMID:25811710

  11. Integration of image capture and processing: beyond single-chip digital camera

    NASA Astrophysics Data System (ADS)

    Lim, SukHwan; El Gamal, Abbas

    2001-05-01

    An important trend in the design of digital cameras is the integration of capture and processing onto a single CMOS chip. Although integrating the components of a digital camera system onto a single chip significantly reduces system size and power, it does not fully exploit the potential advantages of integration. We argue that a key advantage of integration is the ability to exploit the high-speed imaging capability of the CMOS image sensor to enable new applications, such as multiple capture for enhancing dynamic range, and to improve the performance of existing applications, such as optical flow estimation. Conventional digital cameras operate at low frame rates, and it would be too costly, if not infeasible, to operate their chips at high frame rates. Integration solves this problem. The idea is to capture images at much higher frame rates than the standard frame rate, process the high frame rate data on chip, and output the video sequence and the application-specific data at the standard frame rate. This idea is applied to optical flow estimation, where significant performance improvements are demonstrated over methods using standard frame rate sequences. We then investigate the constraints on memory size and processing power that can be integrated with a CMOS image sensor in a 0.18 micrometer process and below. We show that enough memory and processing power can be integrated not only to perform the functions of a conventional camera system but also to perform applications such as real-time optical flow estimation.

  12. Spatial memory: a Rosetta stone for rat and human hippocampal discourse: Theoretical comment on Goodrich-Hunsaker and Hopkins (2010).

    PubMed

    Sutherland, Robert J

    2010-06-01

    The article by Goodrich-Hunsaker and Hopkins (2010, this issue) takes up an important place among the recent contributions on the role of the hippocampus in memory. They evaluate the effect of bilateral damage to the hippocampus on performance by human participants in a virtual 8-arm radial maze. The hippocampal damage appears to be highly selective and nearly complete. Exactly as with selective hippocampal damage in rats, the human participants showed a deficit in accurately choosing rewarded versus never-rewarded arms and a deficit in avoiding reentering recently visited arms. The results are triply significant: (1) They provide good support for the idea that the wealth of neurobiological information, from network to synapse to gene, on spatial memory in the rat may apply as a whole to the human hippocampal memory system; (2) They affirm the utility of human virtual task models of rat spatial memory tasks; (3) They support one interpretation of the dampening of the hippocampal functional MRI (fMRI) blood oxygen level-dependent (BOLD) signal during performance of the virtual radial arm maze observed by Astur et al. (2005).

  13. An exploratory study of phonological awareness and working memory differences and literacy performance of people that use AAC.

    PubMed

    Gómez Taibo, María Luisa; Vieiro Iglesias, Pilar; González Raposo, María del Salvador; Sotillo Méndez, María

    2010-11-01

    Twelve cerebral palsied adolescents and young adults with complex communicative needs who used augmentative and alternative communication were studied. They were classified according to their working memory capacity (high vs. low) into two groups of 6 participants. They were also divided into two groups of 6 participants according to their high vs. low phonological skills. These groups were compared on their performance in reading tests (orthographic knowledge, a word test, and a pseudoword reading test) and in the spelling of words, pseudowords and pictures' names. Statistical differences were found between the high and low phonological skills groups, and between the high and low working memory groups. The high working memory capacity group scored significantly higher than the low working memory group in the orthographic and word reading tests. The high phonological skills group outperformed the low phonological skills group in the word reading test and in the spelling of pseudowords and pictures' names. From a descriptive point of view, phonological skills and working memory, factors known to be highly predictive of literacy skills in people without disabilities, also hold as factors for the participants who used AAC in our study. Implications of the results are discussed.

  14. Noise reduction in optically controlled quantum memory

    NASA Astrophysics Data System (ADS)

    Ma, Lijun; Slattery, Oliver; Tang, Xiao

    2018-05-01

    Quantum memory is an essential tool for quantum communications systems and quantum computers. An important category of quantum memory, called optically controlled quantum memory, uses a strong classical beam to control the storage and re-emission of a single-photon signal through an atomic ensemble. In this type of memory, the residual light from the strong classical control beam can cause severe noise and degrade the system performance significantly. Efficiently suppressing this noise is a requirement for the successful implementation of optically controlled quantum memories. In this paper, we briefly introduce the latest and most common approaches to quantum memory and review the various noise-reduction techniques used in implementing them.

  15. Distributed simulation using a real-time shared memory network

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Mattern, Duane L.; Wong, Edmond; Musgrave, Jeffrey L.

    1993-01-01

    The Advanced Control Technology Branch of the NASA Lewis Research Center performs research in the area of advanced digital controls for aeronautic and space propulsion systems. This work requires the real-time implementation of both control software and complex dynamical models of the propulsion system. We are implementing these systems in a distributed, multi-vendor computer environment. Therefore, a need exists for real-time communication and synchronization between the distributed multi-vendor computers. A shared memory network is a potential solution which offers several advantages over other real-time communication approaches. A candidate shared memory network was tested for basic performance. The shared memory network was then used to implement a distributed simulation of a ramjet engine. The accuracy and execution time of the distributed simulation was measured and compared to the performance of the non-partitioned simulation. The ease of partitioning the simulation, the minimal time required to develop for communication between the processors and the resulting execution time all indicate that the shared memory network is a real-time communication technique worthy of serious consideration.
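    The paper's communication layer is a hardware shared-memory network linking separate computers; as a rough software analogue only, the following C sketch shows how two processes on one host could exchange a small block of simulation state through POSIX shared memory. The segment name and state layout are hypothetical.

      /* Rough software analogue only (the paper used a hardware shared-memory
       * network between separate computers): processes on one host exchanging a
       * small block of simulation state through POSIX shared memory. */
      #include <fcntl.h>
      #include <stdio.h>
      #include <sys/mman.h>
      #include <unistd.h>

      struct engine_state {          /* assumed shared layout */
          double time_s;
          double thrust_n;
      };

      int main(void)
      {
          int fd = shm_open("/ramjet_sim", O_CREAT | O_RDWR, 0600);   /* hypothetical segment name */
          if (fd < 0) { perror("shm_open"); return 1; }
          if (ftruncate(fd, sizeof(struct engine_state)) != 0) { perror("ftruncate"); return 1; }

          struct engine_state *s = mmap(NULL, sizeof *s, PROT_READ | PROT_WRITE,
                                        MAP_SHARED, fd, 0);
          if (s == MAP_FAILED) { perror("mmap"); return 1; }

          s->time_s = 0.01;          /* one partition writes its outputs ...           */
          s->thrust_n = 1234.5;      /* ... the other partition reads them each frame */
          printf("t=%.3f s, thrust=%.1f N\n", s->time_s, s->thrust_n);

          munmap(s, sizeof *s);
          close(fd);
          shm_unlink("/ramjet_sim");
          return 0;
      }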

  16. Space-Bounded Church-Turing Thesis and Computational Tractability of Closed Systems.

    PubMed

    Braverman, Mark; Schneider, Jonathan; Rojas, Cristóbal

    2015-08-28

    We report a new limitation on the ability of physical systems to perform computation, one that is based on generalizing the notion of memory, or storage space, available to the system to perform the computation. Roughly, we define memory as the maximal amount of information that the evolving system can carry from one instant to the next. We show that memory is a limiting factor in computation even in the absence of any time limitations on the evolving system, such as when considering its equilibrium regime. We call this limitation the space-bounded Church-Turing thesis (SBCT). The SBCT is supported by a simulation assertion (SA), which states that predicting the long-term behavior of bounded-memory systems is computationally tractable. In particular, one corollary of SA is an explicit bound on the computational hardness of the long-term behavior of a discrete-time finite-dimensional dynamical system that is affected by noise. We prove such a bound explicitly.

  17. Advanced Mail Systems Scanner Technology. Executive Summary and Appendixes A-E.

    DTIC Science & Technology

    1980-10-01

    data base. 6. Perform color acquisition studies. 7. Investigate address and bar code reading. MASS MEMORY TECHNOLOGY 1. Collect performance data on...area of the 1728-by-2200 ICAS image memory and to transmit the data to any of the three color memories of the Comtal. Function table information can...for printing color images. The software allows the transmission of data from the ICAS frame-store memory via the MCU to the Dicomed. Software test

  18. Emotion processing facilitates working memory performance.

    PubMed

    Lindström, Björn R; Bohlin, Gunilla

    2011-11-01

    The effect of emotional stimulus content on working memory performance has been investigated with conflicting results, as both emotion-dependent facilitation and impairments are reported in the literature. To clarify this issue, 52 adult participants performed a modified visual 2-back task with highly arousing positive stimuli (sexual scenes), highly arousing negative stimuli (violent death) and low-arousal neutral stimuli. Emotional stimulus processing was found to facilitate task performance relative to that of neutral stimuli, both in regards to response accuracy and reaction times. No emotion-dependent differences in false-alarm rates were found. These results indicate that emotional information can have a facilitating effect on working memory maintenance and processing of information.

  19. Subthreshold pharmacological and genetic approaches to analyzing CaV2.1-mediated NMDA receptor signaling in short-term memory.

    PubMed

    Takahashi, Eiki; Niimi, Kimie; Itakura, Chitoshi

    2010-10-25

    Ca(V)2.1 is highly expressed in the nervous system and plays an essential role in the presynaptic modulation of neurotransmitter release machinery. Recently, the antiepileptic drug levetiracetam was reported to inhibit presynaptic Ca(V)2.1 functions, reducing glutamate release in the hippocampus, although the precise physiological role of Ca(V)2.1-regulated synaptic functions in cognitive performance at the system level remains unknown. This study examined whether Ca(V)2.1 mediates hippocampus-dependent spatial short-term memory using the object location and Y-maze tests, and perirhinal cortex-dependent nonspatial short-term memory using the object recognition test, via a combined pharmacological and genetic approach. Heterozygous rolling Nagoya (rol/+) mice carrying the Ca(V)2.1alpha(1) mutation had normal spatial and nonspatial short-term memory. A 100mg/kg dose of levetiracetam, which is ineffective in wild-type controls, blocked spatial short-term memory in rol/+ mice. At 5mg/kg, the N-methyl-D-aspartate (NMDA) receptor blocker (+/-)-3-(2-carboxypiperazin-4-yl)-propyl-1-phosphonic acid (CPP), which is ineffective in wild-type controls, also blocked the spatial short-term memory in rol/+ mice. Furthermore, a combination of subthreshold doses of levetiracetam (25 mg/kg) and CPP (2.5mg/kg) triggered a spatial short-term memory deficit in rol/+ mice, but not in wild-type controls. Similar patterns of nonspatial short-term memory were observed in wild-type and rol/+ mice when injected with levetiracetam (0-300 mg/kg). These results indicate that Ca(V)2.1-mediated NMDA receptor signaling is critical in hippocampus-dependent spatial short-term memory and differs in various regions. The combination subthreshold pharmacological and genetic approach presented here is easily performed and can be used to study functional signaling pathways in neuronal circuits. Copyright © 2010 Elsevier B.V. All rights reserved.

  20. Neural activity in the hippocampus predicts individual visual short-term memory capacity.

    PubMed

    von Allmen, David Yoh; Wurmitzer, Karoline; Martin, Ernst; Klaver, Peter

    2013-07-01

    Although the hippocampus had been traditionally thought to be exclusively involved in long-term memory, recent studies raised controversial explanations why hippocampal activity emerged during short-term memory tasks. For example, it has been argued that long-term memory processes might contribute to performance within a short-term memory paradigm when memory capacity has been exceeded. It is still unclear, though, whether neural activity in the hippocampus predicts visual short-term memory (VSTM) performance. To investigate this question, we measured BOLD activity in 21 healthy adults (age range 19-27 yr, nine males) while they performed a match-to-sample task requiring processing of object-location associations (delay period  =  900 ms; set size conditions 1, 2, 4, and 6). Based on individual memory capacity (estimated by Cowan's K-formula), two performance groups were formed (high and low performers). Within whole brain analyses, we found a robust main effect of "set size" in the posterior parietal cortex (PPC). In line with a "set size × group" interaction in the hippocampus, a subsequent Finite Impulse Response (FIR) analysis revealed divergent hippocampal activation patterns between performance groups: Low performers (mean capacity  =  3.63) elicited increased neural activity at set size two, followed by a drop in activity at set sizes four and six, whereas high performers (mean capacity  =  5.19) showed an incremental activity increase with larger set size (maximal activation at set size six). Our data demonstrated that performance-related neural activity in the hippocampus emerged below capacity limit. In conclusion, we suggest that hippocampal activity reflected successful processing of object-location associations in VSTM. Neural activity in the PPC might have been involved in attentional updating. Copyright © 2013 Wiley Periodicals, Inc.
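    For reference, the "Cowan's K" capacity estimate mentioned above is commonly computed from change-detection accuracy in a form such as the following. This is a standard textbook expression, not copied from this paper; the symbols are generic.

      \[ K \;=\; N\,(H - F) \]

    where $N$ is the set size, $H$ the hit rate, and $F$ the false-alarm rate.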

  1. Impact of Recent Hardware and Software Trends on High Performance Transaction Processing and Analytics

    NASA Astrophysics Data System (ADS)

    Mohan, C.

    In this paper, I survey briefly some of the recent and emerging trends in hardware and software features which impact high performance transaction processing and data analytics applications. These features include multicore processor chips, ultra large main memories, flash storage, storage class memories, database appliances, field programmable gate arrays, transactional memory, key-value stores, and cloud computing. While some applications, e.g., Web 2.0 ones, were initially built without traditional transaction processing functionality in mind, slowly system architects and designers are beginning to address such previously ignored issues. The availability, analytics and response time requirements of these applications were initially given more importance than ACID transaction semantics and resource consumption characteristics. A project at IBM Almaden is studying the implications of phase change memory on transaction processing, in the context of a key-value store. Bitemporal data management has also become an important requirement, especially for financial applications. Power consumption and heat dissipation properties are also major considerations in the emergence of modern software and hardware architectural features. Considerations relating to ease of configuration, installation, maintenance and monitoring, and improvement of total cost of ownership have resulted in database appliances becoming very popular. The MapReduce paradigm is now quite popular for large scale data analysis, in spite of the major inefficiencies associated with it.

  2. On the Floating Point Performance of the i860 Microprocessor

    NASA Technical Reports Server (NTRS)

    Lee, King; Kutler, Paul (Technical Monitor)

    1997-01-01

    The i860 microprocessor is a pipelined processor that can deliver two double-precision floating point results every clock cycle. It is being used in the Touchstone project to develop a teraflop computer by the year 2000. With such high computational capabilities, it was expected that memory bandwidth would limit performance on many kernels. Measured performance of three kernels showed that performance is lower than memory bandwidth limitations alone would predict. This paper develops a model that explains the discrepancy in terms of memory latencies and points to some problems involved in moving data from memory to the arithmetic pipelines.
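    The kind of bound discussed above can be illustrated with a small C sketch (not the paper's actual model): predicted kernel time is taken as the larger of compute time and memory-transfer time, plus a latency penalty for accesses that stall the pipelines. All machine parameters in the example are hypothetical placeholders.

      /* Illustrative bound only (not the paper's model): runtime estimated as the
       * larger of compute time and memory-transfer time, plus a latency penalty
       * for stalls. All machine parameters below are hypothetical placeholders. */
      #include <stdio.h>

      double predicted_seconds(double flops, double bytes, double stalls,
                               double peak_flops, double bw_bytes_s, double stall_latency_s)
      {
          double t_compute = flops / peak_flops;
          double t_memory  = bytes / bw_bytes_s;
          double t_overlap = (t_compute > t_memory) ? t_compute : t_memory;
          return t_overlap + stalls * stall_latency_s;   /* latency not hidden by overlap */
      }

      int main(void)
      {
          /* hypothetical DAXPY-like kernel: 2 flops and 24 bytes per element */
          double n = 1e6;
          double t = predicted_seconds(2 * n, 24 * n, n / 8,
                                       80e6 /* flop/s */, 160e6 /* B/s */, 200e-9 /* s */);
          printf("predicted time: %.4f s\n", t);
          return 0;
      }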

  3. Effects of domain knowledge, working memory capacity, and age on cognitive performance: an investigation of the knowledge-is-power hypothesis.

    PubMed

    Hambrick, David Z; Engle, Randall W

    2002-06-01

    Domain knowledge facilitates performance in many cognitive tasks. However, very little is known about the interplay between domain knowledge and factors that are believed to reflect general, and relatively stable, characteristics of the individual. The primary goal of this study was to investigate the interplay between domain knowledge and one such factor: working memory capacity. Adults from wide ranges of working memory capacity, age, and knowledge about the game of baseball listened to, and then answered questions about, simulated radio broadcasts of baseball games. There was a strong facilitative effect of preexisting knowledge of baseball on memory performance, particularly for information judged to be directly relevant to the baseball games. However, there was a positive effect of working memory capacity on memory performance as well, and there was no indication that domain knowledge attenuated this effect. That is, working memory capacity contributed to memory performance even at high levels of domain knowledge. Similarly, there was no evidence that domain knowledge attenuated age-related differences (favoring young adults) in memory performance. We discuss implications of the results for understanding proficiency in cognitive domains from an individual-differences perspective. Copyright 2001 Elsevier Science (USA).

  4. Music-related reward responses predict episodic memory performance.

    PubMed

    Ferreri, Laura; Rodriguez-Fornells, Antoni

    2017-12-01

    Music represents a special type of reward involving the recruitment of the mesolimbic dopaminergic system. According to recent theories on episodic memory formation, as dopamine strengthens the synaptic potentiation produced by learning, stimuli triggering dopamine release could result in long-term memory improvements. Here, we behaviourally test whether music-related reward responses could modulate episodic memory performance. Thirty participants rated (in terms of arousal, familiarity, emotional valence, and reward) and encoded unfamiliar classical music excerpts. Twenty-four hours later, their episodic memory was tested (old/new recognition and remember/know paradigm). Results revealed an influence of music-related reward responses on memory: excerpts rated as more rewarding were significantly better recognized and remembered. Furthermore, inter-individual differences in the ability to experience musical reward, measured through the Barcelona Music Reward Questionnaire, positively predicted memory performance. Taken together, these findings shed new light on the relationship between music, reward and memory, showing for the first time that music-driven reward responses are directly implicated in higher cognitive functions and can account for individual differences in memory performance.

  5. The ILLIAC IV memory system: Current status and future possibilities

    NASA Technical Reports Server (NTRS)

    Stevenson, D. K.

    1978-01-01

    The future needs of researchers who will use the Illiac were examined and the requirements they will place on the memory system were evaluated. Various alternatives to replacing critical memory components were considered with regard to cost, risk, system impact, software requirements, and implementation schedules. The current system, its performance and status, and the limitations it places on possible enhancements are discussed as well as the planned enhancements to the Illiac processor. After a brief technology survey, different implementations are presented for each system memory component. Three different memory systems are proposed to meet the identified needs of the Illiac user community. These three alternatives differ considerably with respect to storage capacity and accessing capabilities, but they all offer significant improvements over the current system. The proposed systems and their relative merits are analyzed.

  6. Calculating Reuse Distance from Source Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Narayanan, Sri Hari Krishna; Hovland, Paul

    The efficient use of a system is of paramount importance in high-performance computing. Applications need to be engineered for future systems even before the architecture of such a system is clearly known. Static performance analysis that generates performance bounds is one way to approach the task of understanding application behavior. Performance bounds provide an upper limit on the performance of an application on a given architecture. Predicting cache hierarchy behavior and accesses to main memory is a requirement for accurate performance bounds. This work presents our static reuse distance algorithm to generate reuse distance histograms. We then use these histograms to predict cache miss rates. Experimental results for the kernels studied show that the approach is accurate.
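    As a companion to the static analysis described above, the following C sketch computes reuse distances dynamically from an address trace: the distance of an access is the number of distinct addresses touched since the previous access to the same address, and a fully associative LRU cache of C lines misses exactly when that distance is at least C (or the address is new). This is a generic illustration, not the report's algorithm.

      /* Sketch of the dynamic (trace-based) counterpart of static reuse-distance
       * analysis, using a simple O(N*M) LRU stack over an address trace. */
      #include <stdio.h>

      #define MAX_DISTINCT 1024

      static long stack_addrs[MAX_DISTINCT];   /* most recently used at index 0 */
      static int  stack_len = 0;

      /* Returns the reuse distance of addr, or -1 for a first-time (cold) access. */
      int reuse_distance(long addr)
      {
          int pos = -1;                        /* position in the LRU stack = #distinct addrs above it */
          for (int i = 0; i < stack_len; i++)
              if (stack_addrs[i] == addr) { pos = i; break; }

          if (pos == -1) {                     /* cold access: push onto the stack */
              if (stack_len < MAX_DISTINCT) stack_len++;
              for (int i = stack_len - 1; i > 0; i--) stack_addrs[i] = stack_addrs[i - 1];
              stack_addrs[0] = addr;
              return -1;
          }
          for (int i = pos; i > 0; i--)        /* move addr back to the top */
              stack_addrs[i] = stack_addrs[i - 1];
          stack_addrs[0] = addr;
          return pos;
      }

      int main(void)
      {
          long trace[] = {1, 2, 3, 1, 2, 1};   /* toy address trace */
          int cache_lines = 2, misses = 0, n = 6;
          for (int i = 0; i < n; i++) {
              int d = reuse_distance(trace[i]);
              if (d < 0 || d >= cache_lines) misses++;   /* LRU miss condition */
          }
          printf("predicted LRU miss rate: %.2f\n", (double)misses / n);
          return 0;
      }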

  7. Facing the future: Memory as an evolved system for planning future acts

    PubMed Central

    Klein, Stanley B.; Robertson, Theresa E.; Delton, Andrew W.

    2013-01-01

    All organisms capable of long-term memory are necessarily oriented toward the future. We propose that one of the most important adaptive functions of long-term episodic memory is to store information about the past in the service of planning for the personal future. Because a system should have especially efficient performance when engaged in a task that makes maximal use of its evolved machinery, we predicted that future-oriented planning would result in especially good memory relative to other memory tasks. We tested recall performance of a word list, using encoding tasks with different temporal perspectives (e.g., past, future) but a similar context. Consistent with our hypothesis, future-oriented encoding produced superior recall. We discuss these findings in light of their implications for the thesis that memory evolved to enable its possessor to anticipate and respond to future contingencies that cannot be known with certainty. PMID:19966234

  8. Distributed state-space generation of discrete-state stochastic models

    NASA Technical Reports Server (NTRS)

    Ciardo, Gianfranco; Gluckman, Joshua; Nicol, David

    1995-01-01

    High-level formalisms such as stochastic Petri nets can be used to model complex systems. Analysis of logical and numerical properties of these models often requires the generation and storage of the entire underlying state space. This imposes practical limitations on the types of systems which can be modeled. Because of the vast amount of memory consumed, we investigate distributed algorithms for the generation of state-space graphs. The distributed construction allows us to take advantage of the combined memory readily available on a network of workstations. The key technical problem is to find effective methods for on-the-fly partitioning, so that the state space is evenly distributed among processors. In this paper we report on the implementation of a distributed state-space generator that may be linked to a number of existing system modeling tools. We discuss partitioning strategies in the context of Petri net models, and report on performance observed on a network of workstations, as well as on a distributed-memory multi-computer.
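    One simple way to realize the on-the-fly partitioning described above, sketched here in C purely for illustration (it is not the paper's partitioning strategy), is a hash-based owner function that maps each encoded state vector to the processor responsible for storing and expanding it.

      /* Minimal sketch: a hash-based owner function mapping an encoded state
       * vector to one of P processors, so each newly generated state is sent to,
       * and stored by, exactly one node. Not the paper's partitioning strategy. */
      #include <stdint.h>
      #include <stdio.h>

      /* FNV-1a hash over the bytes of the state encoding. */
      static uint64_t hash_state(const uint8_t *state, size_t len)
      {
          uint64_t h = 1469598103934665603ull;
          for (size_t i = 0; i < len; i++) {
              h ^= state[i];
              h *= 1099511628211ull;
          }
          return h;
      }

      int owner_of(const uint8_t *state, size_t len, int nprocs)
      {
          return (int)(hash_state(state, len) % (uint64_t)nprocs);
      }

      int main(void)
      {
          uint8_t marking[4] = {1, 0, 2, 5};   /* toy Petri-net marking as a state vector */
          printf("state owned by processor %d of 8\n", owner_of(marking, sizeof marking, 8));
          return 0;
      }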

  9. Divergence in Morris Water Maze-Based Cognitive Performance under Chronic Stress Is Associated with the Hippocampal Whole Transcriptomic Modification in Mice

    PubMed Central

    Jung, Seung H.; Brownlow, Milene L.; Pellegrini, Matteo; Jankord, Ryan

    2017-01-01

    Individual susceptibility determines the magnitude of stress effects on cognitive function. The hippocampus, a brain region of memory consolidation, is vulnerable to stressful environments, and the impact of stress on hippocampus may determine individual variability in cognitive performance. Therefore, the purpose of this study was to define the relationship between the divergence in spatial memory performance under chronically unpredictable stress and an associated transcriptomic alternation in hippocampus, the brain region of spatial memory consolidation. Multiple strains of BXD (B6 × D2) recombinant inbred mice went through a 4-week chronic variable stress (CVS) paradigm, and the Morris water maze (MWM) test was conducted during the last week of CVS to assess hippocampal-dependent spatial memory performance and grouped animals into low and high performing groups based on the cognitive performance. Using hippocampal whole transcriptome RNA-sequencing data, differential expression, PANTHER analysis, WGCNA, Ingenuity's upstream regulator analysis in the Ingenuity Pathway Analysis® and phenotype association analysis were conducted. Our data identified multiple genes and pathways that were significantly associated with chronic stress-associated cognitive modification and the divergence in hippocampal dependent memory performance under chronic stress. Biological pathways associated with memory performance following chronic stress included metabolism, neurotransmitter and receptor regulation, immune response and cellular process. The Ingenuity's upstream regulator analysis identified 247 upstream transcriptional regulators from 16 different molecule types. Transcripts predictive of cognitive performance under high stress included genes that are associated with a high occurrence of Alzheimer's and cognitive impairments (e.g., Ncl, Eno1, Scn9a, Slc19a3, Ncstn, Fos, Eif4h, Copa, etc.). Our results show that the variable effects of chronic stress on the hippocampal transcriptome are related to the ability to complete the MWM task and that the modulations of specific pathways are indicative of hippocampal dependent memory performance. Thus, the divergence in spatial memory performance following chronic stress is related to the unique pattern of gene expression within the hippocampus. PMID:28912681

  10. High-Performance Nonvolatile Organic Field-Effect Transistor Memory Based on Organic Semiconductor Heterostructures of Pentacene/P13/Pentacene as Both Charge Transport and Trapping Layers.

    PubMed

    Li, Wen; Guo, Fengning; Ling, Haifeng; Zhang, Peng; Yi, Mingdong; Wang, Laiyuan; Wu, Dequn; Xie, Linghai; Huang, Wei

    2017-08-01

    Nonvolatile organic field-effect transistor (OFET) memory devices based on pentacene/N,N'-ditridecylperylene-3,4,9,10-tetracarboxylic diimide (P13)/pentacene trilayer organic heterostructures have been proposed. The discontinuous n-type P13 embedded in the p-type pentacene layers not only provides electrons in the semiconductor layer, facilitating the electron-trapping process, but also acts as charge-trapping sites, which is attributed to the quantum-well-like pentacene/P13/pentacene organic heterostructures. The synergistic effect of charge trapping in the discontinuous P13 and the charge-trapping property of the poly(4-vinylphenol) (PVP) layer remarkably improves the memory performance. In addition, the trilayer organic heterostructures have also been successfully applied to multilevel and flexible nonvolatile memory devices. The results provide a novel design strategy for achieving high-performance nonvolatile OFET memory devices and open up potential applications for different combinations of organic semiconductor materials in OFET memory.

  11. Optimization of a PCRAM Chip for high-speed read and highly reliable reset operations

    NASA Astrophysics Data System (ADS)

    Li, Xiaoyun; Chen, Houpeng; Li, Xi; Wang, Qian; Fan, Xi; Hu, Jiajun; Lei, Yu; Zhang, Qi; Tian, Zhen; Song, Zhitang

    2016-10-01

    The widely used traditional Flash memory suffers from performance limits such as serious crosstalk problems and the increasing complexity of floating-gate scaling. Phase-change random access memory (PCRAM) is one of the most promising of the emerging nonvolatile memory technologies. In this paper, a 1M-bit PCRAM chip is designed in the SMIC 40 nm CMOS technology. Focusing on read and write performance, two new circuits with high-speed read operation and highly reliable reset operation are proposed. The high-speed read circuit effectively reduces the read time from 74 ns to 40 ns. The double-mode reset circuit improves the chip yield. This 1M-bit PCRAM chip has been simulated in Cadence. After the layout design is completed, the chip will be taped out for post-silicon testing.

  12. Is functional integration of resting state brain networks an unspecific biomarker for working memory performance?

    PubMed

    Alavash, Mohsen; Doebler, Philipp; Holling, Heinz; Thiel, Christiane M; Gießing, Carsten

    2015-03-01

    Is there one optimal topology of functional brain networks at rest from which our cognitive performance would profit? Previous studies suggest that functional integration of resting state brain networks is an important biomarker for cognitive performance. However, it is still unknown whether higher network integration is an unspecific predictor for good cognitive performance or, alternatively, whether specific network organization during rest predicts only specific cognitive abilities. Here, we investigated the relationship between network integration at rest and cognitive performance using two tasks that measured different aspects of working memory; one task assessed visual-spatial and the other numerical working memory. Network clustering, modularity and efficiency were computed to capture network integration on different levels of network organization, and to statistically compare their correlations with the performance in each working memory test. The results revealed that each working memory aspect profits from a different resting state topology, and the tests showed significantly different correlations with each of the measures of network integration. While higher global network integration and modularity predicted significantly better performance in visual-spatial working memory, both measures showed no significant correlation with numerical working memory performance. In contrast, numerical working memory was superior in subjects with highly clustered brain networks, predominantly in the intraparietal sulcus, a core brain region of the working memory network. Our findings suggest that a specific balance between local and global functional integration of resting state brain networks facilitates special aspects of cognitive performance. In the context of working memory, while visual-spatial performance is facilitated by globally integrated functional resting state brain networks, numerical working memory profits from increased capacities for local processing, especially in brain regions involved in working memory performance. Copyright © 2014 Elsevier Inc. All rights reserved.
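
    For readers unfamiliar with the graph measures, the sketch below shows how clustering, modularity, and global efficiency are commonly computed from a functional-connectivity matrix with NetworkX. The binarization threshold and the greedy community detection are illustrative choices, not the authors' exact pipeline.

        import numpy as np
        import networkx as nx
        from networkx.algorithms.community import greedy_modularity_communities, modularity

        def network_integration_measures(corr, threshold=0.3):
            """Binarize a functional-connectivity matrix and return the three
            graph measures discussed above (threshold and method are illustrative)."""
            adj = (np.abs(corr) > threshold).astype(int)
            np.fill_diagonal(adj, 0)                      # no self-connections
            G = nx.from_numpy_array(adj)
            communities = greedy_modularity_communities(G)
            return {
                "clustering": nx.average_clustering(G),          # local integration
                "modularity": modularity(G, communities),        # community structure
                "global_efficiency": nx.global_efficiency(G),    # global integration
            }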

  13. Investigation of fast initialization of spacecraft bubble memory systems

    NASA Technical Reports Server (NTRS)

    Looney, K. T.; Nichols, C. D.; Hayes, P. J.

    1984-01-01

    Bubble domain technology offers significant improvements in reliability and functionality for spacecraft onboard memory applications. In potential memory system organizations, minimizing power in high-capacity bubble memory systems requires activating only the desired portions of the memory. Power-strobing arbitrary memory segments therefore demands a fast turn-on capability. Bubble device architectures that provide redundant loop coding in the bubble devices limit the initialization speed. Alternative initialization techniques are investigated to overcome this design limitation, and an initialization technique using a small amount of external storage is demonstrated.

  14. System-Level Integration of Mass Memory

    NASA Technical Reports Server (NTRS)

    Cox, Brian; Mellstrom, Jeffrey; Wysocky, Terry

    2008-01-01

    A report discusses integrating multiple memory modules on the high-speed serial interconnect (IEEE 1393) that is used by a spacecraft's inter-module communications in order to ease data congestion and provide for a scalable, strong, flexible system that can meet new system-level mass memory requirements.

  15. Multiprocessor switch with selective pairing

    DOEpatents

    Gara, Alan; Gschwind, Michael K; Salapura, Valentina

    2014-03-11

    System, method, and computer program product for a multiprocessing system to offer selective pairing of processor cores for increased processing reliability. A selective pairing facility is provided that selectively connects, i.e., pairs, multiple microprocessor or processor cores to provide one highly reliable thread (or thread group). Each pair of cores providing one highly reliable thread connects with system components such as a memory "nest" (or memory hierarchy), an optional system controller, an optional interrupt controller, and optional I/O or peripheral devices. The memory nest is attached to the selective pairing facility via a switch or a bus.

  16. Episodic and working memory deficits in alcoholic Korsakoff patients: the continuity theory revisited.

    PubMed

    Pitel, Anne Lise; Beaunieux, Hélène; Witkowski, Thomas; Vabret, François; de la Sayette, Vincent; Viader, Fausto; Desgranges, Béatrice; Eustache, Francis

    2008-07-01

    The exact nature of episodic and working memory impairments in alcoholic Korsakoff patients (KS) remains unclear, as does the specificity of these neuropsychological deficits compared with those of non-Korsakoff alcoholics (AL). The goals of the present study were therefore to (1) specify the nature of episodic and working memory impairments in KS, (2) determine the specificity of the KS neuropsychological profile compared with the AL profile, and (3) observe the distribution of individual performances within the 2 patient groups. We investigated episodic memory (encoding and retrieval abilities, contextual memory and state of consciousness associated with memories), the slave systems of working memory (phonological loop, visuospatial sketchpad and episodic buffer) and executive functions (inhibition, flexibility, updating and integration abilities) in 14 strictly selected KS, 40 AL and 55 control subjects (CS). Compared with CS, KS displayed impairments of episodic memory encoding and retrieval, contextual memory, recollection, the slave systems of working memory and executive functions. Although episodic memory was more severely impaired in KS than in AL, the single specificity of the KS profile was a disproportionately large encoding deficit. Apart from organizational and updating abilities, the slave systems of working memory and inhibition, flexibility and integration abilities were impaired to the same extent in both alcoholic groups. However, some KS were unable to complete the most difficult executive tasks. There was only a partial overlap of individual performances by KS and AL for episodic memory and a total mixture of the 2 groups for working memory. Korsakoff's syndrome encompasses impairments of the different episodic and working memory components. AL and KS displayed similar profiles of episodic and working memory deficits, in accordance with neuroimaging investigations showing similar patterns of brain damage in both alcoholic groups.

  17. Shape memory system with integrated actuation using embedded particles

    DOEpatents

    Buckley, Patrick R [New York, NY; Maitland, Duncan J [Pleasant Hill, CA

    2009-09-22

    A shape memory material with integrated actuation using embedded particles. One embodiment provides a shape memory material apparatus comprising a shape memory material body and magnetic pieces in the shape memory material body. Another embodiment provides a method of actuating a device to perform an activity on a subject comprising the steps of positioning a shape memory material body in a desired position with regard to the subject, the shape memory material body capable of being formed in a specific primary shape, reformed into a secondary stable shape, and controllably actuated to recover the specific primary shape; including pieces in the shape memory material body; and actuating the shape memory material body using the pieces causing the shape memory material body to be controllably actuated to recover the specific primary shape and perform the activity on the subject.

  18. Shape memory system with integrated actuation using embedded particles

    DOEpatents

    Buckley, Patrick R [New York, NY; Maitland, Duncan J [Pleasant Hill, CA

    2012-05-29

    A shape memory material with integrated actuation using embedded particles. One embodiment provides a shape memory material apparatus comprising a shape memory material body and magnetic pieces in the shape memory material body. Another embodiment provides a method of actuating a device to perform an activity on a subject comprising the steps of positioning a shape memory material body in a desired position with regard to the subject, the shape memory material body capable of being formed in a specific primary shape, reformed into a secondary stable shape, and controllably actuated to recover the specific primary shape; including pieces in the shape memory material body; and actuating the shape memory material body using the pieces causing the shape memory material body to be controllably actuated to recover the specific primary shape and perform the activity on the subject.

  19. Shape memory system with integrated actuation using embedded particles

    DOEpatents

    Buckley, Patrick R.; Maitland, Duncan J.

    2014-04-01

    A shape memory material with integrated actuation using embedded particles. One embodiment provides a shape memory material apparatus comprising a shape memory material body and magnetic pieces in the shape memory material body. Another embodiment provides a method of actuating a device to perform an activity on a subject comprising the steps of positioning a shape memory material body in a desired position with regard to the subject, the shape memory material body capable of being formed in a specific primary shape, reformed into a secondary stable shape, and controllably actuated to recover the specific primary shape; including pieces in the shape memory material body; and actuating the shape memory material body using the pieces causing the shape memory material body to be controllably actuated to recover the specific primary shape and perform the activity on the subject.

  20. Improved Functional Properties and Efficiencies of Nitinol Wires Under High-Performance Shape Memory Effect (HP-SME)

    NASA Astrophysics Data System (ADS)

    Casati, R.; Saghafi, F.; Biffi, C. A.; Vedani, M.; Tuissi, A.

    2017-10-01

    Martensitic Ti-rich NiTi intermetallics are broadly used in various cyclic applications as actuators, which exploit the shape memory effect (SME). Recently, a new approach for exploiting austenitic Ni-rich NiTi shape memory alloys as actuators was proposed and named the high-performance shape memory effect (HP-SME). HP-SME is based on thermal recovery of de-twinned martensite produced by mechanical loading of the parent phase. The aim of this work is to evaluate and compare the fatigue and actuation properties of austenitic HP-SME wires and conventional martensitic SME wires. The effect of thermomechanical cycling on the actuation response and on the electrical resistivity of both shape memory materials was studied by performing actuation tests at different stages of the fatigue life. Finally, the changes in the transition temperatures before and after cycling were also investigated by differential calorimetric tests.

  1. Effects of verbal and nonverbal interference on spatial and object visual working memory.

    PubMed

    Postle, Bradley R; Desposito, Mark; Corkin, Suzanne

    2005-03-01

    We tested the hypothesis that a verbal coding mechanism is necessarily engaged by object, but not spatial, visual working memory tasks. We employed a dual-task procedure that paired n-back working memory tasks with domain-specific distractor trials inserted into each interstimulus interval of the n-back tasks. In two experiments, object n-back performance demonstrated greater sensitivity to verbal distraction, whereas spatial n-back performance demonstrated greater sensitivity to motion distraction. Visual object and spatial working memory may differ fundamentally in that the mnemonic representation of featural characteristics of objects incorporates a verbal (perhaps semantic) code, whereas the mnemonic representation of the location of objects does not. Thus, the processes supporting working memory for these two types of information may differ in more ways than those dictated by the "what/where" organization of the visual system, a fact more easily reconciled with a component process than a memory systems account of working memory function.

  2. Effects of verbal and nonverbal interference on spatial and object visual working memory

    PubMed Central

    POSTLE, BRADLEY R.; D’ESPOSITO, MARK; CORKIN, SUZANNE

    2005-01-01

    We tested the hypothesis that a verbal coding mechanism is necessarily engaged by object, but not spatial, visual working memory tasks. We employed a dual-task procedure that paired n-back working memory tasks with domain-specific distractor trials inserted into each interstimulus interval of the n-back tasks. In two experiments, object n-back performance demonstrated greater sensitivity to verbal distraction, whereas spatial n-back performance demonstrated greater sensitivity to motion distraction. Visual object and spatial working memory may differ fundamentally in that the mnemonic representation of featural characteristics of objects incorporates a verbal (perhaps semantic) code, whereas the mnemonic representation of the location of objects does not. Thus, the processes supporting working memory for these two types of information may differ in more ways than those dictated by the “what/where” organization of the visual system, a fact more easily reconciled with a component process than a memory systems account of working memory function. PMID:16028575

  3. Performance Analysis of Multilevel Parallel Applications on Shared Memory Architectures

    NASA Technical Reports Server (NTRS)

    Biegel, Bryan A. (Technical Monitor); Jost, G.; Jin, H.; Labarta J.; Gimenez, J.; Caubet, J.

    2003-01-01

    Parallel programming paradigms include process level parallelism, thread level parallelization, and multilevel parallelism. This viewgraph presentation describes a detailed performance analysis of these paradigms for Shared Memory Architecture (SMA). This analysis uses the Paraver Performance Analysis System. The presentation includes diagrams of a flow of useful computations.

  4. Feedforward hysteresis compensation in trajectory control of piezoelectrically-driven nanostagers

    NASA Astrophysics Data System (ADS)

    Bashash, Saeid; Jalili, Nader

    2006-03-01

    Complex structural nonlinearities of piezoelectric materials drastically degrade their performance in a variety of micro- and nano-positioning applications. From the precision positioning and control perspective, the multi-path, time-history-dependent hysteresis phenomenon is the nonlinearity of greatest concern in piezoelectric actuators. To understand the underlying physics of this phenomenon and to develop an efficient compensation strategy, the memory-like properties of hysteresis, including the effects of non-local memories, are discussed. Through a set of experiments on a piezoelectrically driven nanostager with a high-resolution capacitive position sensor, it is shown that precise prediction of the hysteresis path requires memory units that store the previous hysteresis trajectory data. Based on the experimental observations, a constitutive memory-based mathematical modeling framework is developed and trained for precise prediction of the hysteresis path for arbitrarily assigned input profiles. Using the inverse hysteresis model, a feedforward control strategy is then developed and implemented on the nanostager to compensate for the system's ever-present nonlinearity. Experimental results demonstrate that the controller effectively eliminates the nonlinear effect when sufficient memory units are chosen for the inverse model.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Langer, Steven H.; Karlin, Ian; Marinak, Marty M.

    HYDRA is used to simulate a variety of experiments carried out at the National Ignition Facility (NIF) [4] and other high energy density physics facilities. HYDRA has packages to simulate radiation transfer, atomic physics, hydrodynamics, laser propagation, and a number of other physics effects. HYDRA has over one million lines of code and includes both MPI and thread-level (OpenMP and pthreads) parallelism. This paper measures the performance characteristics of HYDRA using hardware counters on an IBM Blue Gene/Q system. We report key ratios such as bytes/instruction and memory bandwidth for several different physics packages. The total number of bytes read and written per time step is also reported. We show that none of the packages which use significant time are memory bandwidth limited on a Blue Gene/Q. HYDRA currently issues very few SIMD instructions. The pressure on memory bandwidth will increase if high levels of SIMD instructions can be achieved.
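
    As a rough illustration of how such ratios are derived from raw counter totals, the sketch below computes bytes/instruction and sustained memory bandwidth for one timestep. The counter field names and the 64-byte line size are placeholders, not the actual Blue Gene/Q counter names used in the paper.

        def derived_metrics(counters, elapsed_seconds, line_bytes=64):
            """Turn raw hardware-counter totals for one timestep into derived ratios.

            `counters` is assumed to hold totals such as
            {"instructions": ..., "dram_line_reads": ..., "dram_line_writes": ...};
            the field names and the 64-byte line size are placeholders.
            """
            bytes_moved = (counters["dram_line_reads"] + counters["dram_line_writes"]) * line_bytes
            return {
                "bytes_per_instruction": bytes_moved / counters["instructions"],
                "memory_bandwidth_GB_per_s": bytes_moved / elapsed_seconds / 1e9,
            }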

  6. The touchscreen operant platform for testing learning and memory in rats and mice

    PubMed Central

    Horner, Alexa E.; Heath, Christopher J.; Hvoslef-Eide, Martha; Kent, Brianne A.; Kim, Chi Hun; Nilsson, Simon R. O.; Alsiö, Johan; Oomen, Charlotte A.; Holmes, Andrew; Saksida, Lisa M.; Bussey, Timothy J.

    2014-01-01

    Summary An increasingly popular method of assessing cognitive functions in rodents is the automated touchscreen platform, on which a number of different cognitive tests can be run in a manner very similar to touchscreen methods currently used to test human subjects. This methodology is low stress (using appetitive, rather than aversive reinforcement), has high translational potential, and lends itself to a high degree of standardisation and throughput. Applications include the study of cognition in rodent models of psychiatric and neurodegenerative diseases (e.g., Alzheimer’s disease, schizophrenia, Huntington’s disease, frontotemporal dementia), and characterisation of the role of select brain regions, neurotransmitter systems and genes in rodents. This protocol describes how to perform four touchscreen assays of learning and memory: Visual Discrimination, Object-Location Paired-Associates Learning, Visuomotor Conditional Learning and Autoshaping. It is accompanied by two further protocols using the touchscreen platform to assess executive function, working memory and pattern separation. PMID:24051959

  7. Design of a memory-access controller with 3.71-times-enhanced energy efficiency for Internet-of-Things-oriented nonvolatile microcontroller unit

    NASA Astrophysics Data System (ADS)

    Natsui, Masanori; Hanyu, Takahiro

    2018-04-01

    In realizing a nonvolatile microcontroller unit (MCU) for sensor nodes in Internet-of-Things (IoT) applications, it is important to solve the data-transfer bottleneck between the central processing unit (CPU) and the nonvolatile memory constituting the MCU. As one circuit-oriented approach to solving this problem, we propose a memory access minimization technique for magnetoresistive-random-access-memory (MRAM)-embedded nonvolatile MCUs. In addition to multiplexing and prefetching of memory access, the proposed technique realizes efficient instruction fetch by eliminating redundant memory access while considering the code length of the instruction to be fetched and the transition of the memory address to be accessed. As a result, the performance of the MCU can be improved while relaxing the performance requirement for the embedded MRAM, and compact and low-power implementation can be performed as compared with the conventional cache-based one. Through the evaluation using a system consisting of a general purpose 32-bit CPU and embedded MRAM, it is demonstrated that the proposed technique increases the peak efficiency of the system up to 3.71 times, while a 2.29-fold area reduction is achieved compared with the cache-based one.
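
    A toy software model of the redundant-fetch elimination idea is sketched below: a wide line is read from the nonvolatile memory once and reused as long as subsequent instruction addresses fall inside it, so sequential and short instructions avoid extra MRAM accesses. This only illustrates the principle; the parameters and structure are not taken from the proposed circuit.

        class FetchBuffer:
            """Toy model of eliminating redundant nonvolatile-memory accesses:
            one wide line is fetched and reused while instruction addresses
            stay inside it. Parameters are illustrative only."""

            def __init__(self, memory, line_words=4):
                self.memory = memory          # maps line address -> list of words
                self.line_words = line_words
                self.line_addr = None
                self.line = []
                self.accesses = 0             # counts actual memory reads

            def fetch(self, addr):
                line_addr = addr - (addr % self.line_words)
                if line_addr != self.line_addr:           # miss: go to MRAM
                    self.line_addr = line_addr
                    self.line = self.memory[line_addr]
                    self.accesses += 1
                return self.line[addr % self.line_words]  # hit: no memory access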

  8. Accurate forced-choice recognition without awareness of memory retrieval.

    PubMed

    Voss, Joel L; Baym, Carol L; Paller, Ken A

    2008-06-01

    Recognition confidence and the explicit awareness of memory retrieval commonly accompany accurate responding in recognition tests. Memory performance in recognition tests is widely assumed to measure explicit memory, but the generality of this assumption is questionable. Indeed, whether recognition in nonhumans is always supported by explicit memory is highly controversial. Here we identified circumstances wherein highly accurate recognition was unaccompanied by hallmark features of explicit memory. When memory for kaleidoscopes was tested using a two-alternative forced-choice recognition test with similar foils, recognition was enhanced by an attentional manipulation at encoding known to degrade explicit memory. Moreover, explicit recognition was most accurate when the awareness of retrieval was absent. These dissociations between accuracy and phenomenological features of explicit memory are consistent with the notion that correct responding resulted from experience-dependent enhancements of perceptual fluency with specific stimuli--the putative mechanism for perceptual priming effects in implicit memory tests. This mechanism may contribute to recognition performance in a variety of frequently-employed testing circumstances. Our results thus argue for a novel view of recognition, in that analyses of its neurocognitive foundations must take into account the potential for both (1) recognition mechanisms allied with implicit memory and (2) recognition mechanisms allied with explicit memory.

  9. Space-Bounded Church-Turing Thesis and Computational Tractability of Closed Systems

    NASA Astrophysics Data System (ADS)

    Braverman, Mark; Schneider, Jonathan; Rojas, Cristóbal

    2015-08-01

    We report a new limitation on the ability of physical systems to perform computation—one that is based on generalizing the notion of memory, or storage space, available to the system to perform the computation. Roughly, we define memory as the maximal amount of information that the evolving system can carry from one instant to the next. We show that memory is a limiting factor in computation even in lieu of any time limitations on the evolving system—such as when considering its equilibrium regime. We call this limitation the space-bounded Church-Turing thesis (SBCT). The SBCT is supported by a simulation assertion (SA), which states that predicting the long-term behavior of bounded-memory systems is computationally tractable. In particular, one corollary of SA is an explicit bound on the computational hardness of the long-term behavior of a discrete-time finite-dimensional dynamical system that is affected by noise. We prove such a bound explicitly.

  10. Performance of asynchronous transfer mode (ATM) local area and wide area networks for medical imaging transmission in clinical environment.

    PubMed

    Huang, H K; Wong, A W; Zhu, X

    1997-01-01

    Asynchronous transfer mode (ATM) technology emerges as a leading candidate for medical image transmission in both local area network (LAN) and wide area network (WAN) applications. This paper describes the performance of an ATM LAN and WAN network at the University of California, San Francisco. The measurements were obtained using an intensive care unit (ICU) server connecting to four image workstations (WS) at four different locations of a hospital-integrated picture archiving and communication system (HI-PACS) in a daily regular clinical environment. Four types of performance were evaluated: magnetic disk-to-disk, disk-to-redundant array of inexpensive disks (RAID), RAID-to-memory, and memory-to-memory. Results demonstrate that the transmission rate between two workstations can reach 5-6 Mbytes/s from RAID-to-memory, and 8-10 Mbytes/s from memory-to-memory. When the server has to send images to all four workstations simultaneously, the transmission rate to each WS is about 4 Mbytes/s. Both situations are adequate for radiologic image communications for picture archiving and communication systems (PACS) and teleradiology applications.
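
    The kind of measurement reported here can be approximated with very simple timing code. The sketch below (not the original HI-PACS software) times an in-memory buffer copy and a disk read to produce Mbytes/s figures comparable in spirit to the memory-to-memory and disk-to-memory numbers above.

        import os, time

        def mbytes_per_second(nbytes, seconds):
            return nbytes / seconds / 1e6

        def memory_to_memory(n=64 * 1024 * 1024):
            """Time a simple in-memory copy as a crude memory-to-memory figure."""
            src = bytearray(os.urandom(n))
            t0 = time.perf_counter()
            dst = bytes(src)                      # one full copy of the buffer
            return mbytes_per_second(len(dst), time.perf_counter() - t0)

        def disk_to_memory(path, chunk=1 << 20):
            """Time reading an image file from disk (or RAID) into memory."""
            total, t0 = 0, time.perf_counter()
            with open(path, "rb") as f:
                while block := f.read(chunk):
                    total += len(block)
            return mbytes_per_second(total, time.perf_counter() - t0)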

  11. Long-term associative learning predicts verbal short-term memory performance.

    PubMed

    Jones, Gary; Macken, Bill

    2018-02-01

    Studies using tests such as digit span and nonword repetition have implicated short-term memory across a range of developmental domains. Such tests ostensibly assess specialized processes for the short-term manipulation and maintenance of information that are often argued to enable long-term learning. However, there is considerable evidence for an influence of long-term linguistic learning on performance in short-term memory tasks that brings into question the role of a specialized short-term memory system separate from long-term knowledge. Using natural language corpora, we show experimentally and computationally that performance on three widely used measures of short-term memory (digit span, nonword repetition, and sentence recall) can be predicted from simple associative learning operating on the linguistic environment to which a typical child may have been exposed. The findings support the broad view that short-term verbal memory performance reflects the application of long-term language knowledge to the experimental setting.

  12. Histopathologic subtype of hippocampal sclerosis and episodic memory performance before and after temporal lobectomy for epilepsy.

    PubMed

    Saghafi, Shahram; Ferguson, Lisa; Hogue, Olivia; Gales, Jordan M; Prayson, Richard; Busch, Robyn M

    2018-04-01

    The International League Against Epilepsy (ILAE) proposed a classification system for hippocampal sclerosis (HS) based on location and extent of hippocampal neuron loss. The literature debates the usefulness of this classification system when studying memory in people with temporal lobe epilepsy (TLE) and determining memory outcome after temporal lobe resection (TLR). This study further explores the relationship between HS ILAE subtypes and episodic memory performance in patients with TLE and examines memory outcomes after TLR. This retrospective study identified 213 patients with TLE who underwent TLR and had histopathological evidence of HS (HS ILAE type 1a = 92; type 1b = 103; type 2 = 18). Patients completed the Wechsler Memory Scale-3rd Edition prior to surgery, and 78% of patients had postoperative scores available. Linear regressions examined differences in preoperative memory scores as a function of pathology classification, controlling for potential confounders. Fisher's exact tests were used to compare pathology subtypes on the magnitude of preoperative memory impairment and the proportion of patients who experienced clinically meaningful postoperative memory decline. Individuals with HS ILAE type 2 demonstrated better preoperative verbal memory performance than patients with HS ILAE type 1; however, individual data revealed verbal and visual episodic memory impairments in many patients with HS ILAE type 2. The base rate of postoperative memory decline was similar among all 3 pathology groups. This is the largest reported overall sample and the largest subset of patients with HS ILAE type 2. Group data suggest that patients with HS ILAE type 2 perform better on preoperative memory measures, but individually there were no differences in the magnitude of memory impairment. Following surgery, there were no statistically significant differences between groups in the proportion of patients who declined. Future research should focus on quantitative measurements of hippocampal neuronal loss, and multicenter collaboration is encouraged. Wiley Periodicals, Inc. © 2018 International League Against Epilepsy.

  13. The Influence of Genetic and Environmental Factors among MDMA Users in Cognitive Performance

    PubMed Central

    Cuyàs, Elisabet; Verdejo-García, Antonio; Fagundo, Ana Beatriz; Khymenets, Olha; Rodríguez, Joan; Cuenca, Aida; de Sola Llopis, Susana; Langohr, Klaus; Peña-Casanova, Jordi; Torrens, Marta; Martín-Santos, Rocío; Farré, Magí; de la Torre, Rafael

    2011-01-01

    This study is aimed to clarify the association between MDMA cumulative use and cognitive dysfunction, and the potential role of candidate genetic polymorphisms in explaining individual differences in the cognitive effects of MDMA. Gene polymorphisms related to reduced serotonin function, poor competency of executive control and memory consolidation systems, and high enzymatic activity linked to bioactivation of MDMA to neurotoxic metabolites may contribute to explain variations in the cognitive impact of MDMA across regular users of this drug. Sixty ecstasy polydrug users, 110 cannabis users and 93 non-drug users were assessed using cognitive measures of Verbal Memory (California Verbal Learning Test, CVLT), Visual Memory (Rey-Osterrieth Complex Figure Test, ROCFT), Semantic Fluency, and Perceptual Attention (Symbol Digit Modalities Test, SDMT). Participants were also genotyped for polymorphisms within the 5HTT, 5HTR2A, COMT, CYP2D6, BDNF, and GRIN2B genes using polymerase chain reaction and TaqMan polymerase assays. Lifetime cumulative MDMA use was significantly associated with poorer performance on visuospatial memory and perceptual attention. Heavy MDMA users (>100 tablets lifetime use) interacted with candidate gene polymorphisms in explaining individual differences in cognitive performance between MDMA users and controls. MDMA users carrying COMT val/val and SERT s/s had poorer performance than paired controls on visuospatial attention and memory, and MDMA users with CYP2D6 ultra-rapid metabolizers performed worse than controls on semantic fluency. Both MDMA lifetime use and gene-related individual differences influence cognitive dysfunction in ecstasy users. PMID:22110616

  14. A shared resource between declarative memory and motor memory.

    PubMed

    Keisler, Aysha; Shadmehr, Reza

    2010-11-03

    The neural systems that support motor adaptation in humans are thought to be distinct from those that support the declarative system. Yet, during motor adaptation changes in motor commands are supported by a fast adaptive process that has important properties (rapid learning, fast decay) that are usually associated with the declarative system. The fast process can be contrasted to a slow adaptive process that also supports motor memory, but learns gradually and shows resistance to forgetting. Here we show that after people stop performing a motor task, the fast motor memory can be disrupted by a task that engages declarative memory, but the slow motor memory is immune from this interference. Furthermore, we find that the fast/declarative component plays a major role in the consolidation of the slow motor memory. Because of the competitive nature of declarative and nondeclarative memory during consolidation, impairment of the fast/declarative component leads to improvements in the slow/nondeclarative component. Therefore, the fast process that supports formation of motor memory is not only neurally distinct from the slow process, but it shares critical resources with the declarative memory system.

  15. A shared resource between declarative memory and motor memory

    PubMed Central

    Keisler, Aysha; Shadmehr, Reza

    2010-01-01

    The neural systems that support motor adaptation in humans are thought to be distinct from those that support the declarative system. Yet, during motor adaptation changes in motor commands are supported by a fast adaptive process that has important properties (rapid learning, fast decay) that are usually associated with the declarative system. The fast process can be contrasted to a slow adaptive process that also supports motor memory, but learns gradually and shows resistance to forgetting. Here we show that after people stop performing a motor task, the fast motor memory can be disrupted by a task that engages declarative memory, but the slow motor memory is immune from this interference. Furthermore, we find that the fast/declarative component plays a major role in the consolidation of the slow motor memory. Because of the competitive nature of declarative and non-declarative memory during consolidation, impairment of the fast/declarative component leads to improvements in the slow/non-declarative component. Therefore, the fast process that supports formation of motor memory is not only neurally distinct from the slow process, but it shares critical resources with the declarative memory system. PMID:21048140

  16. Building Intrusion Detection with a Wireless Sensor Network

    NASA Astrophysics Data System (ADS)

    Wälchli, Markus; Braun, Torsten

    This paper addresses the detection and reporting of abnormal building access with a wireless sensor network. A common office room, offering space for two working persons, has been monitored with ten sensor nodes and a base station. The task of the system is to report suspicious office occupation, such as office searching by thieves. On the other hand, normal office occupation should not trigger alarms. In order to save energy for communication, the system provides all nodes with some adaptive short-term memory. Thus, a set of sensor activation patterns can be temporarily learned. The local memory is implemented as an Adaptive Resonance Theory (ART) neural network. Unknown event patterns detected at the sensor node level are reported to the base station, where the system-wide anomaly detection is performed. The anomaly detector is lightweight and completely self-learning. The system can run autonomously, or it could be used as a triggering system to turn on an additional high-resolution system on demand. Our building monitoring system has proven to work reliably in the different evaluated scenarios. Communication costs of up to 90% could be saved compared to a threshold-based approach without local memory.
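
    The ART-based short-term memory can be pictured with a small sketch: an incoming binary sensor-activation pattern is compared against stored prototypes, and only patterns that no prototype matches within a vigilance threshold are reported upward (and then learned). The vigilance and learning-rate values below are illustrative, not those of the deployed network.

        import numpy as np

        class NoveltyMemory:
            """ART-inspired short-term memory for binary sensor-activation patterns.

            A new pattern is matched against stored prototypes; if the best match
            falls below the vigilance level, the pattern is reported as unknown
            (and learned), otherwise the matching prototype is refined.
            Simplified sketch, not the exact network from the paper."""

            def __init__(self, vigilance=0.75, learning_rate=0.5):
                self.vigilance = vigilance
                self.rate = learning_rate
                self.prototypes = []

            def observe(self, pattern):
                x = np.asarray(pattern, dtype=float)
                for i, p in enumerate(self.prototypes):
                    match = np.minimum(x, p).sum() / max(x.sum(), 1e-9)
                    if match >= self.vigilance:
                        # known pattern: move prototype toward the input, no report
                        self.prototypes[i] = (1 - self.rate) * p + self.rate * np.minimum(x, p)
                        return False
                self.prototypes.append(x.copy())   # unknown pattern: store and report
                return True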

  17. Contralateral Delay Activity Tracks Fluctuations in Working Memory Performance.

    PubMed

    Adam, Kirsten C S; Robison, Matthew K; Vogel, Edward K

    2018-01-08

    Neural measures of working memory storage, such as the contralateral delay activity (CDA), are powerful tools in working memory research. CDA amplitude is sensitive to working memory load, reaches an asymptote at known behavioral limits, and predicts individual differences in capacity. An open question, however, is whether neural measures of load also track trial-by-trial fluctuations in performance. Here, we used a whole-report working memory task to test the relationship between CDA amplitude and working memory performance. If working memory failures are due to decision-based errors and retrieval failures, CDA amplitude would not differentiate good and poor performance trials when load is held constant. If failures arise during storage, then CDA amplitude should track both working memory load and trial-by-trial performance. As expected, CDA amplitude tracked load (Experiment 1), reaching an asymptote at three items. In Experiment 2, we tracked fluctuations in trial-by-trial performance. CDA amplitude was larger (more negative) for high-performance trials compared with low-performance trials, suggesting that fluctuations in performance were related to the successful storage of items. During working memory failures, participants oriented their attention to the correct side of the screen (lateralized P1) and maintained covert attention to the correct side during the delay period (lateralized alpha power suppression). Despite the preservation of attentional orienting, we found impairments consistent with an executive attention theory of individual differences in working memory capacity; fluctuations in executive control (indexed by pretrial frontal theta power) may be to blame for storage failures.

  18. Multiprocessor architectural study

    NASA Technical Reports Server (NTRS)

    Kosmala, A. L.; Stanten, S. F.; Vandever, W. H.

    1972-01-01

    An architectural design study was made of a multiprocessor computing system intended to meet functional and performance specifications appropriate to a manned space station application. Intermetrics' previous experience and accumulated knowledge of the multiprocessor field are used to generate a baseline philosophy for the design of a future SUMC* multiprocessor. Interrupts are defined, and the crucial questions of interrupt structure, such as processor selection and response time, are discussed. Memory hierarchy and performance are discussed extensively, with particular attention to the design approach that associates a cache memory with each processor. The ability of an individual processor to approach its theoretical maximum performance is then analyzed in terms of the cache hit ratio. Memory management is envisioned as a virtual memory system implemented through either segmentation or paging. Addressing is discussed in terms of the various register designs adopted by current computers and those of advanced design.
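
    The hit-ratio analysis mentioned above reduces to a short worked example. The sketch below evaluates the effective memory access time for a single-level cache model; the 100 ns cache and 1000 ns shared-memory latencies are hypothetical numbers chosen only to show the shape of the relationship.

        def effective_access_time(hit_ratio, t_cache, t_main):
            """Average access time for a single-level cache model."""
            return hit_ratio * t_cache + (1.0 - hit_ratio) * t_main

        # Hypothetical numbers: a 100 ns cache backed by 1000 ns shared memory.
        for h in (0.80, 0.90, 0.95, 0.99):
            t = effective_access_time(h, 100, 1000)
            print(f"hit ratio {h:.2f}: {t:6.1f} ns, {1000 / t:.1f}x faster than no cache")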

  19. Automatic weld torch guidance control system

    NASA Technical Reports Server (NTRS)

    Smaith, H. E.; Wall, W. A.; Burns, M. R., Jr.

    1982-01-01

    A highly reliable, fully digital, closed-circuit-television optical automatic weld seam tracking control system was developed. This automatic tracking equipment is used to reduce weld tooling costs and increase overall automatic welding reliability. The system utilizes a charge injection device digital camera with 60,512 individual pixels as the light-sensing elements. Through conventional scanning means, each pixel in the focal plane is sequentially scanned, the light level signal digitized, and an 8-bit word transmitted to scratch-pad memory. From memory, the microprocessor analyzes the digital signal and computes the tracking error. Lastly, the corrective signal is transmitted to a cross-seam actuator digital drive motor controller to complete the closed-loop feedback tracking system. This weld seam tracking control system is capable of a tracking accuracy of ±0.2 mm or better. As configured, the system is applicable to square butt, V-groove, and lap joint weldments.

  20. Memory recall in arousing situations – an emotional von Restorff effect?

    PubMed Central

    Wiswede, Daniel; Rüsseler, Jascha; Hasselbach, Simone; Münte, Thomas F

    2006-01-01

    Background: Previous research has demonstrated a relationship between memory recall and P300 amplitude in list learning tasks, but the variables mediating this P300-recall relationship are not well understood. In the present study, subjects were required to recall items from lists consisting of 12 words, which were presented in front of pictures taken from the IAPS collection. One word per list is made distinct either by font color or by a highly arousing background IAPS picture. This isolation procedure was first used by von Restorff. Brain potentials were recorded during list presentation. Results: Recall performance was enhanced for color but not for emotional isolates. Event-related brain potentials (ERP) showed a more positive P300 component for recalled non-isolated words and color-isolated words, compared to the respective non-remembered words, but not for words isolated by arousing background. Conclusion: Our findings indicate that it is crucial to take emotional mediator variables into account when using the P300 to predict later recall. Highly arousing environments might force the cognitive system to interrupt rehearsal processes in working memory, which might benefit transfer into other, more stable memory systems. The impact of attention-capturing properties of arousing background stimuli is also discussed. PMID:16863589

  1. Fractional Steps methods for transient problems on commodity computer architectures

    NASA Astrophysics Data System (ADS)

    Krotkiewski, M.; Dabrowski, M.; Podladchikov, Y. Y.

    2008-12-01

    Fractional Steps methods are suitable for modeling transient processes that are central to many geological applications. Low memory requirements and modest computational complexity facilitate calculations on high-resolution three-dimensional models. An efficient implementation of Alternating Direction Implicit/Locally One-Dimensional schemes for an Opteron-based shared memory system is presented. The memory bandwidth usage, the main bottleneck on modern computer architectures, is specially addressed. High efficiency of above 2 GFlops per CPU is sustained for problems of 1 billion degrees of freedom. The optimized sequential implementation of all 1D sweeps is comparable in execution time to copying the data used in memory. Scalability of the parallel implementation on up to 8 CPUs is close to perfect. Performing one timestep of the Locally One-Dimensional scheme on a system of 1000³ unknowns on 8 CPUs takes only 11 s. We validate the LOD scheme using a computational model of an isolated inclusion subject to a constant far-field flux. Next, we study numerically the evolution of a diffusion front and the effective thermal conductivity of composites consisting of multiple inclusions and compare the results with predictions based on the differential effective medium approach. Finally, application of the developed parabolic solver is suggested for a real-world problem of fluid transport and reactions inside a reservoir.
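
    A minimal, unoptimized sketch of one LOD sweep for the implicit heat-equation step along one axis is shown below; applying it along x, then y, then z gives one timestep. It uses one banded tridiagonal solve per grid line with Dirichlet boundaries, and is meant only to illustrate the scheme, not the cache- and bandwidth-optimized implementation described in the abstract.

        import numpy as np
        from scipy.linalg import solve_banded

        def lod_sweep_x(u, alpha):
            """One implicit 1D sweep (I - alpha * d2/dx2) u_new = u_old along axis 0.

            alpha = D * dt / dx**2; boundary values are held fixed (Dirichlet).
            `u` is a 3D array; the same routine applied along y and z (e.g., by
            transposing) completes one LOD timestep.
            """
            n = u.shape[0]
            ab = np.zeros((3, n))                  # tridiagonal matrix in banded form
            ab[0, 1:] = -alpha                     # superdiagonal
            ab[1, :] = 1.0 + 2.0 * alpha           # diagonal
            ab[2, :-1] = -alpha                    # subdiagonal
            ab[1, 0] = ab[1, -1] = 1.0             # Dirichlet rows: u_new = u_old
            ab[0, 1] = ab[2, -2] = 0.0
            out = np.empty_like(u)
            for j in range(u.shape[1]):            # one tridiagonal solve per grid line
                for k in range(u.shape[2]):
                    out[:, j, k] = solve_banded((1, 1), ab, u[:, j, k])
            return out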

  2. Investigation of High-k Dielectrics and Metal Gate Electrodes for Non-volatile Memory Applications

    NASA Astrophysics Data System (ADS)

    Jayanti, Srikant

    Due to the increasing demand of non-volatile flash memories in the portable electronics, the device structures need to be scaled down drastically. However, the scalability of traditional floating gate structures beyond 20 nm NAND flash technology node is uncertain. In this regard, the use of metal gates and high-k dielectrics as the gate and interpoly dielectrics respectively, seem to be promising substitutes in order to continue the flash scaling beyond 20nm. Furthermore, research of novel memory structures to overcome the scaling challenges need to be explored. Through this work, the use of high-k dielectrics as IPDs in a memory structure has been studied. For this purpose, IPD process optimization and barrier engineering were explored to determine and improve the memory performance. Specifically, the concept of high-k / low-k barrier engineering was studied in corroboration with simulations. In addition, a novel memory structure comprising a continuous metal floating gate was investigated in combination with high-k blocking oxides. Integration of thin metal FGs and high-k dielectrics into a dual floating gate memory structure to result in both volatile and non-volatile modes of operation has been demonstrated, for plausible application in future unified memory architectures. The electrical characterization was performed on simple MIS/MIM and memory capacitors, fabricated through CMOS compatible processes. Various analytical characterization techniques were done to gain more insight into the material behavior of the layers in the device structure. In the first part of this study, interfacial engineering was investigated by exploring La2O3 as SiO2 scavenging layer. Through the silicate formation, the consumption of low-k SiO2 was controlled and resulted in a significant improvement in dielectric leakage. The performance improvement was also gauged through memory capacitors. In the second part of the study, a novel memory structure consisting of continuous metal FG in the form of PVD TaN was investigated along with high-k blocking dielectric. The material properties of TaN metal and high-k / low-k dielectric engineering were systematically studied. And the resulting memory structures exhibit excellent memory characteristics and scalability of the metal FG down to ˜1nm, which is promising in order to reduce the unwanted FG-FG interferences. In the later part of the study, the thermal stability of the combined stack was examined and various approaches to improve the stability and understand the cause of instability were explored. The performance of the high-k IPD metal FG memory structure was observed to degrade with higher annealing conditions and the deteriorated behavior was attributed to the leakage instability of the high-k /TaN capacitor. While the degradation is pronounced in both MIM and MIS capacitors, a higher leakage increment was seen in MIM, which was attributed to the higher degree of dielectric crystallization. In an attempt to improve the thermal stability, the trade-off in using amorphous interlayers to reduce the enhanced dielectric crystallization on metal was highlighted. Also, the effect of oxygen vacancies and grain growth on the dielectric leakage was studied through a multi-deposition-multi-anneal technique. Multi step deposition and annealing in a more electronegative ambient was observed to have a positive impact on the dielectric performance.

  3. The role of short-term memory impairment in nonword repetition, real word repetition, and nonword decoding: A case study.

    PubMed

    Peter, Beate

    2018-01-01

    In a companion study, adults with dyslexia and adults with a probable history of childhood apraxia of speech showed evidence of difficulty with processing sequential information during nonword repetition, multisyllabic real word repetition and nonword decoding. Results suggested that some errors arose in visual encoding during nonword reading, all levels of processing but especially short-term memory storage/retrieval during nonword repetition, and motor planning and programming during complex real word repetition. To further investigate the role of short-term memory, a participant with short-term memory impairment (MI) was recruited. MI was confirmed with poor performance during a sentence repetition and three nonword repetition tasks, all of which have a high short-term memory load, whereas typical performance was observed during tests of reading, spelling, and static verbal knowledge, all with low short-term memory loads. Experimental results show error-free performance during multisyllabic real word repetition but high counts of sequence errors, especially migrations and assimilations, during nonword repetition, supporting short-term memory as a locus of sequential processing deficit during nonword repetition. Results are also consistent with the hypothesis that during complex real word repetition, short-term memory is bypassed as the word is recognized and retrieved from long-term memory prior to producing the word.

  4. Low lifetime stress exposure is associated with reduced stimulus–response memory

    PubMed Central

    Goldfarb, Elizabeth V.; Shields, Grant S.; Daw, Nathaniel D.; Slavich, George M.; Phelps, Elizabeth A.

    2017-01-01

    Exposure to stress throughout life can cumulatively influence later health, even among young adults. The negative effects of high cumulative stress exposure are well-known, and a shift from episodic to stimulus–response memory has been proposed to underlie forms of psychopathology that are related to high lifetime stress. At the other extreme, effects of very low stress exposure are mixed, with some studies reporting that low stress leads to better outcomes, while others demonstrate that low stress is associated with diminished resilience and negative outcomes. However, the influence of very low lifetime stress exposure on episodic and stimulus–response memory is unknown. Here we use a lifetime stress assessment system (STRAIN) to assess cumulative lifetime stress exposure and measure memory performance in young adults reporting very low and moderate levels of lifetime stress exposure. Relative to moderate levels of stress, very low levels of lifetime stress were associated with reduced use and retention (24 h later) of stimulus–response (SR) associations, and a higher likelihood of using context memory. Further, computational modeling revealed that participants with low levels of stress exhibited worse expression of memory for SR associations than those with moderate stress. These results demonstrate that very low levels of stress exposure can have negative effects on cognition. PMID:28298555

  5. Design of Unstructured Adaptive (UA) NAS Parallel Benchmark Featuring Irregular, Dynamic Memory Accesses

    NASA Technical Reports Server (NTRS)

    Feng, Hui-Yu; VanderWijngaart, Rob; Biswas, Rupak; Biegel, Bryan (Technical Monitor)

    2001-01-01

    We describe the design of a new method for the measurement of the performance of modern computer systems when solving scientific problems featuring irregular, dynamic memory accesses. The method involves the solution of a stylized heat transfer problem on an unstructured, adaptive grid. A Spectral Element Method (SEM) with an adaptive, nonconforming mesh is selected to discretize the transport equation. The relatively high order of the SEM lowers the fraction of wall clock time spent on inter-processor communication, which eases the load balancing task and allows us to concentrate on the memory accesses. The benchmark is designed to be three-dimensional. Parallelization and load balance issues of a reference implementation will be described in detail in future reports.

  6. Fault tolerant onboard packet switch architecture for communication satellites: Shared memory per beam approach

    NASA Technical Reports Server (NTRS)

    Shalkhauser, Mary JO; Quintana, Jorge A.; Soni, Nitin J.

    1994-01-01

    The NASA Lewis Research Center is developing a multichannel communication signal processing satellite (MCSPS) system which will provide low data rate, direct to user, commercial communications services. The focus of current space segment developments is a flexible, high-throughput, fault tolerant onboard information switching processor. This information switching processor (ISP) is a destination-directed packet switch which performs both space and time switching to route user information among numerous user ground terminals. Through both industry study contracts and in-house investigations, several packet switching architectures were examined. A contention-free approach, the shared memory per beam architecture, was selected for implementation. The shared memory per beam architecture, fault tolerance insertion, implementation, and demonstration plans are described.
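
    The shared-memory-per-beam idea can be pictured as one queue (memory) per downlink beam, with each incoming packet routed by the destination field carried in its own header, so packets never contend for a single shared buffer. The sketch below is a functional toy model, not the hardware design; field names are illustrative.

        from collections import deque

        class SharedMemoryPerBeamSwitch:
            """Toy model of a destination-directed, shared-memory-per-beam switch."""

            def __init__(self, n_beams):
                self.beam_memory = [deque() for _ in range(n_beams)]

            def route(self, packet):
                # space switching: the destination beam is read from the packet header
                self.beam_memory[packet["dest_beam"]].append(packet)

            def serve(self, beam):
                # time switching: packets leave each beam memory in downlink slots
                return self.beam_memory[beam].popleft() if self.beam_memory[beam] else None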

  7. A high performance linear equation solver on the VPP500 parallel supercomputer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakanishi, Makoto; Ina, Hiroshi; Miura, Kenichi

    1994-12-31

    This paper describes the implementation of two high performance linear equation solvers developed for the Fujitsu VPP500, a distributed memory parallel supercomputer system. The solvers take advantage of the key architectural features of the VPP500--(1) scalability for an arbitrary number of processors up to 222 processors, (2) flexible data transfer among processors provided by a crossbar interconnection network, (3) vector processing capability on each processor, and (4) overlapped computation and transfer. The general linear equation solver based on the blocked LU decomposition method achieves 120.0 GFLOPS performance with 100 processors on the LINPACK Highly Parallel Computing benchmark.
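
    For illustration, a right-looking blocked LU factorization (the method named above) can be sketched as follows. This version omits pivoting and the distribution of panels across processors for brevity, so it is only safe for well-conditioned (e.g., diagonally dominant) matrices; the production solver on the VPP500 necessarily does more.

        import numpy as np
        from scipy.linalg import solve_triangular

        def blocked_lu_inplace(A, nb=64):
            """Right-looking blocked LU without pivoting (illustration only).

            Factor the diagonal block, compute the block row (U12) and panel (L21)
            with triangular solves, then apply a rank-nb update to the trailing
            matrix. `A` must be a float array and is overwritten with L and U.
            """
            n = A.shape[0]
            for k in range(0, n, nb):
                e = min(k + nb, n)
                # unblocked LU of the diagonal block (assumes it is nonsingular)
                for j in range(k, e):
                    A[j + 1:e, j] /= A[j, j]
                    A[j + 1:e, j + 1:e] -= np.outer(A[j + 1:e, j], A[j, j + 1:e])
                if e < n:
                    # block row: solve L11 * U12 = A12
                    A[k:e, e:] = solve_triangular(A[k:e, k:e], A[k:e, e:],
                                                  lower=True, unit_diagonal=True)
                    # panel: solve L21 * U11 = A21  (via U11^T x = A21^T)
                    A[e:, k:e] = solve_triangular(A[k:e, k:e], A[e:, k:e].T,
                                                  lower=False, trans='T').T
                    # trailing update: A22 -= L21 * U12
                    A[e:, e:] -= A[e:, k:e] @ A[k:e, e:]
            return A

    The trailing-matrix update dominates the operation count, which is why the blocked form maps well onto vector processors and allows computation to be overlapped with inter-processor transfer.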

  8. Williams Syndrome and Memory: A Neuroanatomic and Cognitive Approach

    ERIC Educational Resources Information Center

    Sampaio, Adriana; Sousa, Nuno; Fernandez, Montse; Vasconcelos, Cristiana; Shenton, Martha E.; Goncalves, Oscar F.

    2010-01-01

    Williams Syndrome (WS) is described as displaying a dissociation within memory systems. As the integrity of the hippocampal formation (HF) is a determinant of memory performance, we examined HF volumes and their association with memory measures in a group with WS and in a typically developing group. A significantly reduced intracranial content was found…

  9. Cooperative Data Sharing: Simple Support for Clusters of SMP Nodes

    NASA Technical Reports Server (NTRS)

    DiNucci, David C.; Balley, David H. (Technical Monitor)

    1997-01-01

    Libraries like PVM and MPI send typed messages to allow for heterogeneous cluster computing. Lower-level libraries, such as GAM, provide more efficient access to communication by removing the need to copy messages between the interface and user space in some cases. Still lower-level interfaces, such as UNET, get right down to the hardware level to provide maximum performance. However, these are all still interfaces for passing messages from one process to another, and they have limited utility in a shared-memory environment, primarily because message passing is just another term for copying. This drawback is made more pertinent by today's hybrid architectures (e.g. clusters of SMPs), where it is difficult to know beforehand whether two communicating processes will share memory. As a result, even portable language tools (like HPF compilers) must either map all interprocess communication into message passing, with the accompanying performance degradation in shared-memory environments, or they must check each communication at run time and implement the shared-memory case separately for efficiency. Cooperative Data Sharing (CDS) is a single user-level API which abstracts all communication between processes into the sharing and access coordination of memory regions, in a model which might be described as "distributed shared messages" or "large-grain distributed shared memory". As a result, the user programs to a simple, latency-tolerant, abstract communication specification which can be mapped efficiently to either a shared-memory or message-passing based run-time system, depending upon the available architecture. Unlike some distributed shared memory interfaces, the user still has complete control over the assignment of data to processors, the forwarding of data to its next likely destination, and the queuing of data until it is needed, so even the relatively high latency present in clusters can be accommodated. CDS does not require special use of an MMU, which can add overhead to some DSM systems, and does not require an SPMD programming model. Unlike some message-passing interfaces, CDS allows the user to implement efficient demand-driven applications where processes must "fight" over data, and it does not perform copying if processes share memory and do not attempt concurrent writes. CDS also supports heterogeneous computing, dynamic process creation, handlers, and a very simple thread-arbitration mechanism. Additional support for array subsections is currently being considered. The CDS1 API, which forms the kernel of CDS, is built primarily upon only two communication primitives, one process-initiation primitive, and some data translation (and marshalling) routines, memory allocation routines, and priority control routines. The entire current collection of 28 routines provides enough functionality to implement most (or all) of MPI 1 and 2, which has a much larger interface consisting of hundreds of routines. Still, the API is small enough to consider integrating into standard OS interfaces for handling inter-process communication in a network-independent way. This approach would also help to solve many of the problems plaguing other higher-level standards such as MPI and PVM, which must, in some cases, "play OS" to adequately address progress and process control issues. The CDS2 API, a higher level of interface roughly equivalent in functionality to MPI and to be built entirely upon CDS1, is still being designed. It is intended to add support for the equivalent of communicators, reduction and other collective operations, process topologies, additional support for process creation, and some automatic memory management. CDS2 will not exactly match MPI, because the copy-free semantics of communication from CDS1 will be supported. CDS2 application programs will also be free to use CDS1 directly, with care. CDS1 has been implemented on networks of workstations running unmodified Unix-based operating systems, using UDP/IP and vendor-supplied high-performance locks. Although its inter-node performance is currently unimpressive due to the rudimentary implementation technique, it even now outperforms highly optimized MPI implementations on intra-node communication due to its support for non-copy communication. The similarity of the CDS1 architecture to that of other projects such as UNET and TRAP suggests that the inter-node performance can be increased significantly to surpass MPI or PVM, and it may be possible to migrate some of its functionality to communication controllers.
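
    The CDS1 routines themselves are not listed in this record. As a rough, hypothetical illustration of the "distributed shared messages" idea described above, the following Python sketch shows how a single region-sharing call can hand over a reference when two endpoints occupy the same memory space and fall back to copying (message passing) when they do not; all names here (Region, CdsLikeRuntime, share, acquire) are invented for illustration and are not part of the actual CDS1 API.

      # Hypothetical sketch of a "distributed shared messages" abstraction.
      # Names are invented for illustration; this is not the CDS1 interface.

      class Region:
          """A contiguous block of data whose access is coordinated, not copied."""
          def __init__(self, data: bytearray):
              self.data = data

      class CdsLikeRuntime:
          def __init__(self):
              self.queues = {}          # per-process queues of shared regions
              self.memory_space = {}    # process id -> memory-space id

          def register(self, pid, space_id):
              self.memory_space[pid] = space_id
              self.queues[pid] = []

          def share(self, src, dst, region):
              """Make `region` available to `dst`, copying only when required."""
              if self.memory_space[src] == self.memory_space[dst]:
                  # Same memory space: hand over a reference, no copy.
                  self.queues[dst].append(region)
              else:
                  # Distinct memory spaces: fall back to message passing (copy).
                  self.queues[dst].append(Region(bytearray(region.data)))

          def acquire(self, pid):
              """Take the next region shared with `pid` (None if nothing queued)."""
              return self.queues[pid].pop(0) if self.queues[pid] else None

      rt = CdsLikeRuntime()
      rt.register(0, space_id="node0")
      rt.register(1, space_id="node0")
      rt.register(2, space_id="node1")
      buf = Region(bytearray(b"gradient block 17"))
      rt.share(0, 1, buf)   # intra-node: zero-copy hand-off
      rt.share(0, 2, buf)   # inter-node: data is copied

    In a real run-time system this decision would be made per communication at run time, which is exactly the check the record argues portable tools should not have to reimplement themselves.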

  10. Optical memory development. Volume 1: prototype memory system

    NASA Technical Reports Server (NTRS)

    Cosentino, L. S.; Mezrich, R. S.; Nagle, E. M.; Stewart, W. C.; Wendt, F. S.

    1972-01-01

    The design, development, and implementation of a prototype, partially populated, million-bit read-write holographic memory system using state-of-the-art components are described. The system employs an argon ion laser, acoustooptic beam deflectors, a holographic beam splitter (hololens), a nematic liquid crystal page composer, a photoconductor-thermoplastic erasable storage medium, and a silicon P-I-N photodiode array, with lenses and electronics of both conventional and custom design. Operation of the prototype memory system was successfully demonstrated. Careful attention is given to the analysis from which the design criteria were developed. Specifications for the major components are listed, along with the details of their construction and performance. The primary conclusion resulting from this program is that the basic principles of a read-write holographic memory system are well understood and are reducible to practice.

  11. SiGe epitaxial memory for neuromorphic computing with reproducible high performance based on engineered dislocations

    NASA Astrophysics Data System (ADS)

    Choi, Shinhyun; Tan, Scott H.; Li, Zefan; Kim, Yunjo; Choi, Chanyeol; Chen, Pai-Yu; Yeon, Hanwool; Yu, Shimeng; Kim, Jeehwan

    2018-01-01

    Although several types of architecture combining memory cells and transistors have been used to demonstrate artificial synaptic arrays, they usually present limited scalability and high power consumption. Transistor-free analog switching devices may overcome these limitations, yet the typical switching process they rely on—formation of filaments in an amorphous medium—is not easily controlled and hence hampers the spatial and temporal reproducibility of the performance. Here, we demonstrate analog resistive switching devices that possess desired characteristics for neuromorphic computing networks with minimal performance variations using a single-crystalline SiGe layer epitaxially grown on Si as a switching medium. Such epitaxial random access memories utilize threading dislocations in SiGe to confine metal filaments in a defined, one-dimensional channel. This confinement results in drastically enhanced switching uniformity and long retention/high endurance with a high analog on/off ratio. Simulations using the MNIST handwritten recognition data set prove that epitaxial random access memories can operate with an online learning accuracy of 95.1%.

  12. Geopotential Error Analysis from Satellite Gradiometer and Global Positioning System Observables on Parallel Architecture

    NASA Technical Reports Server (NTRS)

    Schutz, Bob E.; Baker, Gregory A.

    1997-01-01

    The recovery of a high resolution geopotential from satellite gradiometer observations motivates the examination of high performance computational techniques. The primary subject matter addresses specifically the use of satellite gradiometer and GPS observations to form and invert the normal matrix associated with a large degree and order geopotential solution. Memory resident and out-of-core parallel linear algebra techniques along with data parallel batch algorithms form the foundation of the least squares application structure. A secondary topic includes the adoption of object oriented programming techniques to enhance modularity and reusability of code. Applications implementing the parallel and object oriented methods successfully calculate the degree variance for a degree and order 110 geopotential solution on 32 processors of the Cray T3E. The memory resident gradiometer application exhibits an overall application performance of 5.4 Gflops, and the out-of-core linear solver exhibits an overall performance of 2.4 Gflops. The combination solution derived from a sun-synchronous gradiometer orbit produces average geoid height variances of 17 millimeters.
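
    As a toy illustration of the normal-equations step described in this record, the following serial NumPy sketch forms and factors a normal matrix for a small synthetic least-squares problem; the dimensions, data, and solver here are invented for illustration, whereas the actual application distributes a degree-and-order-110 problem across 32 Cray T3E processors using memory-resident and out-of-core solvers.

      import numpy as np

      # Toy, serial illustration of the normal-equations step: observations y
      # are modeled as y = A x + noise, where x holds the geopotential
      # coefficients. Dimensions are made up; the real degree/order 110
      # problem has roughly 12,000 coefficients and is distributed.
      rng = np.random.default_rng(0)
      m, n = 5000, 200                      # observations, coefficients
      A = rng.standard_normal((m, n))       # partial-derivative (design) matrix
      x_true = rng.standard_normal(n)
      y = A @ x_true + 0.01 * rng.standard_normal(m)

      N = A.T @ A                           # normal matrix (n x n, symmetric)
      b = A.T @ y                           # right-hand side
      L = np.linalg.cholesky(N)             # factorization: the parallel solver's job
      x_hat = np.linalg.solve(L.T, np.linalg.solve(L, b))

      # Formal error (variance) estimates come from the inverse normal matrix.
      sigma2 = np.sum((y - A @ x_hat) ** 2) / (m - n)
      coeff_var = sigma2 * np.diag(np.linalg.inv(N))
      print(x_hat[:3], coeff_var[:3])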

  13. Geopotential error analysis from satellite gradiometer and global positioning system observables on parallel architectures

    NASA Astrophysics Data System (ADS)

    Baker, Gregory Allen

    The recovery of a high resolution geopotential from satellite gradiometer observations motivates the examination of high performance computational techniques. The primary subject matter addresses specifically the use of satellite gradiometer and GPS observations to form and invert the normal matrix associated with a large degree and order geopotential solution. Memory resident and out-of-core parallel linear algebra techniques along with data parallel batch algorithms form the foundation of the least squares application structure. A secondary topic includes the adoption of object oriented programming techniques to enhance modularity and reusability of code. Applications implementing the parallel and object oriented methods successfully calculate the degree variance for a degree and order 110 geopotential solution on 32 processors of the Cray T3E. The memory resident gradiometer application exhibits an overall application performance of 5.4 Gflops, and the out-of-core linear solver exhibits an overall performance of 2.4 Gflops. The combination solution derived from a sun-synchronous gradiometer orbit produces average geoid height variances of 17 millimeters.

  14. Effects of language dominance on item and order memory in free recall, serial recall and order reconstruction.

    PubMed

    Francis, Wendy S; Baca, Yuzeth

    2014-01-01

    Spanish-English bilinguals (N = 144) performed free recall, serial recall and order reconstruction tasks in both English and Spanish. Long-term memory for both item and order information was worse in the less fluent language (L2) than in the more fluent language (L1). Item scores exhibited a stronger disadvantage for the L2 in serial recall than in free recall. Relative order scores were lower in the L2 for all three tasks, but adjusted scores for free and serial recall were equivalent across languages. Performance of English-speaking monolinguals (N = 72) was comparable to bilingual performance in the L1, except that monolinguals had higher adjusted order scores in free recall. Bilingual performance patterns in the L2 were consistent with the established effects of concurrent task performance on these memory tests, suggesting that the cognitive resources required for processing words in the L2 encroach on resources needed to commit item and order information to memory. These findings are also consistent with a model in which item memory is connected to the language system, order information is processed by separate mechanisms and attention can be allocated differentially to these two systems.

  15. Implementing Access to Data Distributed on Many Processors

    NASA Technical Reports Server (NTRS)

    James, Mark

    2006-01-01

    A reference architecture is defined for an object-oriented implementation of domains, arrays, and distributions written in the programming language Chapel. This technology primarily addresses domains that contain arrays with regular index sets; the low-level implementation details are beyond the scope of this discussion. What is defined is a complete set of object-oriented operators that allows one to perform data distributions for domain arrays involving regular arithmetic index sets. What is unique is that these operators allow arbitrary regions of the arrays to be fragmented and distributed across multiple processors with a single point of access, giving the programmer the illusion that all the elements are collocated on a single processor. Today's massively parallel High Productivity Computing Systems (HPCS) are characterized by a modular structure, with a large number of processing and memory units connected by a high-speed network. Locality of access as well as load balancing are primary concerns in these systems, which are typically used for high-performance scientific computation. Data distributions address these issues by providing a range of methods for spreading large data sets across the components of a system. Over the past two decades, many languages, systems, tools, and libraries have been developed for the support of distributions. Since the performance of data parallel applications is directly influenced by the distribution strategy, users often resort to low-level programming models that allow fine-tuning of the distribution aspects affecting performance but, at the same time, are tedious and error-prone. This technology presents a reusable design of a data-distribution framework for data parallel high-performance applications. Distributions are a means to express locality in systems composed of large numbers of processor and memory components connected by a network. Since distributions have a great effect on the performance of applications, it is important that the distribution strategy be flexible, so that its behavior can change depending on the needs of the application. At the same time, high productivity concerns require that the user be shielded from error-prone, tedious details such as communication and synchronization.
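
    As a conceptual sketch only (not the Chapel implementation described above), the following Python class shows the arithmetic behind a simple block distribution: data are fragmented across several locales, yet every access uses a single global index, which is the "single point of access" illusion the record refers to.

      # Conceptual sketch of a block distribution (not the Chapel framework):
      # one global index space, fragmented across several "locales", with
      # reads and writes still expressed in global indices.

      class BlockDistributedArray:
          def __init__(self, n, num_locales):
              self.n = n
              self.num_locales = num_locales
              self.block = (n + num_locales - 1) // num_locales   # ceiling division
              # Each locale owns one contiguous fragment (plain lists here).
              self.fragments = [
                  [0] * (min(self.block * (p + 1), n) - self.block * p)
                  for p in range(num_locales)
              ]

          def owner(self, i):
              """Map a global index to (locale id, local index)."""
              return i // self.block, i % self.block

          def __getitem__(self, i):
              p, k = self.owner(i)
              return self.fragments[p][k]      # a remote access in a real system

          def __setitem__(self, i, value):
              p, k = self.owner(i)
              self.fragments[p][k] = value

      a = BlockDistributedArray(10, num_locales=4)
      a[7] = 42
      print(a.owner(7), a[7])    # (2, 1) 42 -- locale 2 owns global index 7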

  16. Memorial Hermann: high reliability from board to bedside.

    PubMed

    Shabot, M Michael; Monroe, Douglas; Inurria, Juan; Garbade, Debbi; France, Anne-Claire

    2013-06-01

    In 2006 the Memorial Hermann Health System (MHHS), which includes 12 hospitals, began applying principles embraced by high reliability organizations (HROs). Three factors support its HRO journey: (1) aligned organizational structure with transparent management systems and compressed reporting processes; (2) Robust Process Improvement (RPI) with high-reliability interventions; and (3) cultural establishment, sustainment, and evolution. The Quality and Safety strategic plan contains three domains, each with a specific set of measures that provide goals for performance: (1) "Clinical Excellence;" (2) "Do No Harm;" and (3) "Saving Lives," as measured by the Serious Safety Event rate. MHHS uses a uniform approach to performance improvement--RPI, which includes Six Sigma, Lean, and change management--to solve difficult safety and quality problems. The 9 acute care hospitals provide multiple opportunities to integrate high-reliability interventions and best practices across MHHS. For example, MHHS partnered with the Joint Commission Center for Transforming Healthcare in its inaugural project to establish reliable hand hygiene behaviors, which improved MHHS's average hand hygiene compliance rate from 44% to 92% currently. Soon after compliance exceeded 85% at all 12 hospitals, the average rate of central line-associated bloodstream infections and ventilator-associated pneumonias decreased to essentially zero. MHHS's size and diversity require a disciplined approach to performance improvement and systemwide achievement of measurable success. The most significant cultural change at MHHS has been the expectation for 100% compliance with evidence-based quality measures and 0% incidence of patient harm.

  17. Integrated semiconductor-magnetic random access memory system

    NASA Technical Reports Server (NTRS)

    Katti, Romney R. (Inventor); Blaes, Brent R. (Inventor)

    2001-01-01

    The present disclosure describes a non-volatile magnetic random access memory (RAM) system having a semiconductor control circuit and a magnetic array element. The integrated magnetic RAM system uses a CMOS control circuit to read and write data magnetoresistively. The system provides a fast-access, non-volatile, radiation-hard, high-density RAM for high-speed computing.

  18. Neural network based feed-forward high density associative memory

    NASA Technical Reports Server (NTRS)

    Daud, T.; Moopenn, A.; Lamb, J. L.; Ramesham, R.; Thakoor, A. P.

    1987-01-01

    A novel thin film approach to neural-network-based high-density associative memory is described. The information is stored locally in a memory matrix of passive, nonvolatile, binary connection elements with a potential to achieve a storage density of 10 to the 9th bits/sq cm. Microswitches based on memory switching in thin film hydrogenated amorphous silicon, and alternatively in manganese oxide, have been used as programmable read-only memory elements. Low-energy switching has been ascertained in both these materials. Fabrication and testing of the memory matrix are described. High-speed associative recall approaching 10 to the 7th bits/sec and high storage capacity in such a connection matrix memory system are also described.

  19. Short-term memory and dual task performance

    NASA Technical Reports Server (NTRS)

    Regan, J. E.

    1982-01-01

    Two hypotheses concerning the way in which short-term memory interacts with another task in a dual task situation are considered. It is noted that when two tasks are combined, the activity of controlling and organizing performance on both tasks simultaneously may compete with either task for a resource; this resource may be space in a central mechanism or general processing capacity, or it may be some task-specific resource. If a special relationship exists between short-term memory and control, especially if there is an identity relationship between short-term memory and a central controlling mechanism, then short-term memory performance should show a decrement in a dual task situation. Even if short-term memory does not have any particular identity with a controlling mechanism, but both tasks draw on some common resource or resources, then a tradeoff between the two tasks in allocating resources is possible and could be reflected in performance. The persistent concurrence cost in memory performance in these experiments suggests that short-term memory may have a unique status in the information processing system.

  20. Processing efficiency theory in children: working memory as a mediator between trait anxiety and academic performance.

    PubMed

    Owens, Matthew; Stevenson, Jim; Norgate, Roger; Hadwin, Julie A

    2008-10-01

    Working memory skills are positively associated with academic performance. In contrast, high levels of trait anxiety are linked with educational underachievement. Based on Eysenck and Calvo's (1992) processing efficiency theory (PET), the present study investigated whether associations between anxiety and educational achievement were mediated via poor working memory performance. Fifty children aged 11-12 years completed verbal (backwards digit span; tapping the phonological store/central executive) and spatial (Corsi blocks; tapping the visuospatial sketchpad/central executive) working memory tasks. Trait anxiety was measured using the State-Trait Anxiety Inventory for Children. Academic performance was assessed using school administered tests of reasoning (Cognitive Abilities Test) and attainment (Standard Assessment Tests). The results showed that the association between trait anxiety and academic performance was significantly mediated by verbal working memory for three of the six academic performance measures (math, quantitative and non-verbal reasoning). Spatial working memory did not significantly mediate the relationship between trait anxiety and academic performance. On average verbal working memory accounted for 51% of the association between trait anxiety and academic performance, while spatial working memory only accounted for 9%. The findings indicate that PET is a useful framework to assess the impact of children's anxiety on educational achievement.
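
    The mediation logic described above (trait anxiety acting on academic performance through working memory) follows the standard product-of-coefficients approach; the sketch below reproduces that arithmetic on synthetic data and is not the study's data or its exact analysis.

      import numpy as np

      # Synthetic illustration of simple mediation (X: trait anxiety,
      # M: verbal working memory, Y: academic performance); not the study data.
      rng = np.random.default_rng(1)
      n = 50
      X = rng.standard_normal(n)                     # trait anxiety (standardized)
      M = -0.6 * X + 0.8 * rng.standard_normal(n)    # anxiety lowers working memory
      Y = 0.5 * M - 0.1 * X + 0.8 * rng.standard_normal(n)

      def ols_slopes(y, predictors):
          """Regression slopes of y on the given predictors (with intercept)."""
          A = np.column_stack([np.ones(len(y))] + list(predictors))
          return np.linalg.lstsq(A, y, rcond=None)[0][1:]

      c_total = ols_slopes(Y, [X])[0]        # total effect: anxiety -> performance
      a = ols_slopes(M, [X])[0]              # a path: anxiety -> working memory
      b, c_prime = ols_slopes(Y, [M, X])     # b path and direct effect

      indirect = a * b
      print(f"total={c_total:.3f} direct={c_prime:.3f} indirect={indirect:.3f}")
      print(f"proportion mediated ~ {indirect / c_total:.2f}")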

  1. Acute Exercise and Motor Memory Consolidation: The Role of Exercise Timing

    PubMed Central

    Christiansen, Lasse; Roig, Marc

    2016-01-01

    High intensity aerobic exercise amplifies offline gains in procedural memory acquired during motor practice. This effect seems to be evident when exercise is placed immediately after acquisition, during the first stages of memory consolidation, but the importance of temporal proximity of the exercise bout used to stimulate improvements in procedural memory is unknown. The effects of three different temporal placements of high intensity exercise were investigated following visuomotor skill acquisition on the retention of motor memory in 48 young (24.0 ± 2.5 yrs), healthy male subjects randomly assigned to one of four groups either performing a high intensity (90% Maximal Power Output) exercise bout at 20 min (EX90), 1 h (EX90+1), or 2 h (EX90+2) after acquisition, or resting (CON). Retention tests were performed at 1 d (R1) and 7 d (R7). At R1, changes in performance scores after acquisition were greater for EX90 than CON (p < 0.001) and EX90+2 (p = 0.001). At R7, changes in performance scores for EX90, EX90+1, and EX90+2 were higher than CON (p < 0.001, p = 0.008, and p = 0.008, respectively). Changes for EX90 at R7 were greater than for EX90+2 (p = 0.049). Exercise-induced improvements in procedural memory diminish as the temporal proximity of exercise from acquisition is increased. Timing of exercise following motor practice is important for motor memory consolidation. PMID:27446616

  2. Merlin - Massively parallel heterogeneous computing

    NASA Technical Reports Server (NTRS)

    Wittie, Larry; Maples, Creve

    1989-01-01

    Hardware and software for Merlin, a new kind of massively parallel computing system, are described. Eight computers are linked as a 300-MIPS prototype to develop system software for a larger Merlin network with 16 to 64 nodes, totaling 600 to 3000 MIPS. These working prototypes help refine a mapped reflective memory technique that offers a new, very general way of linking many types of computer to form supercomputers. Processors share data selectively and rapidly on a word-by-word basis. Fast firmware virtual circuits are reconfigured to match topological needs of individual application programs. Merlin's low-latency memory-sharing interfaces solve many problems in the design of high-performance computing systems. The Merlin prototypes are intended to run parallel programs for scientific applications and to determine hardware and software needs for a future Teraflops Merlin network.

  3. A high-speed DAQ framework for future high-level trigger and event building clusters

    NASA Astrophysics Data System (ADS)

    Caselle, M.; Ardila Perez, L. E.; Balzer, M.; Dritschler, T.; Kopmann, A.; Mohr, H.; Rota, L.; Vogelgesang, M.; Weber, M.

    2017-03-01

    Modern data acquisition and trigger systems require a throughput of several GB/s and latencies of the order of microseconds. To satisfy such requirements, a heterogeneous readout system based on FPGA readout cards and GPU-based computing nodes coupled by InfiniBand has been developed. The incoming data from the back-end electronics is delivered directly into the internal memory of GPUs through a dedicated peer-to-peer PCIe communication. High performance DMA engines have been developed for direct communication between FPGAs and GPUs using "DirectGMA (AMD)" and "GPUDirect (NVIDIA)" technologies. The proposed infrastructure is a candidate for future generations of event building clusters, high-level trigger filter farms and low-level trigger systems. In this paper the heterogeneous FPGA-GPU architecture is presented and its performance discussed.

  4. Frequency set on systems

    NASA Astrophysics Data System (ADS)

    Wilby, W. A.; Brett, A. R. H.

    Frequency set on techniques used in ECM applications include repeater jammers, frequency memory loops (RF and optical), coherent digital RF memories, and closed loop VCO set on systems. Closed loop frequency set on systems using analog phase and frequency locking are considered to have a number of cost and performance advantages. Their performance is discussed in terms of frequency accuracy, bandwidth, locking time, stability, and simultaneous signals. Some experimental results are presented which show typical locking performance. Future ECM systems might require a response to very short pulses. Acoustooptic and fiber-optic pulse stretching techniques can be used to meet such requirements.

  5. Performance Analysis of Multilevel Parallel Applications on Shared Memory Architectures

    NASA Technical Reports Server (NTRS)

    Jost, Gabriele; Jin, Haoqiang; Labarta, Jesus; Gimenez, Judit; Caubet, Jordi; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    In this paper we describe how to apply powerful performance analysis techniques to understand the behavior of multilevel parallel applications. We use the Paraver/OMPItrace performance analysis system for our study. This system consists of two major components: the OMPItrace dynamic instrumentation mechanism, which allows the tracing of processes and threads, and the Paraver graphical user interface for inspection and analysis of the generated traces. We describe how to use the system to conduct a detailed comparative study of a benchmark code implemented in five different programming paradigms applicable for shared-memory architectures.

  6. Programming distributed memory architectures using Kali

    NASA Technical Reports Server (NTRS)

    Mehrotra, Piyush; Vanrosendale, John

    1990-01-01

    Programming nonshared memory systems is more difficult than programming shared memory systems, in part because of the relatively low level of current programming environments for such machines. A new programming environment is presented, Kali, which provides a global name space and allows direct access to remote data values. In order to retain efficiency, Kali provides a system of annotations, allowing the user to control those aspects of the program critical to performance, such as data distribution and load balancing. The primitives and constructs provided by the language are described, and some of the issues raised in translating a Kali program for execution on distributed memory systems are also discussed.

  7. Individual differences in associative memory among older adults explained by hippocampal subfield structure and function.

    PubMed

    Carr, Valerie A; Bernstein, Jeffrey D; Favila, Serra E; Rutt, Brian K; Kerchner, Geoffrey A; Wagner, Anthony D

    2017-11-07

    Older adults experience impairments in episodic memory, ranging from mild to clinically significant. Given the critical role of the medial temporal lobe (MTL) in episodic memory, age-related changes in MTL structure and function may partially account for individual differences in memory. Using ultra-high-field 7T structural MRI and high-resolution 3T functional MRI (hr-fMRI), we evaluated MTL subfield thickness and function in older adults representing a spectrum of cognitive health. Participants performed an associative memory task during hr-fMRI in which they encoded and later retrieved face-name pairs. Motivated by prior research, we hypothesized that differences in performance would be explained by the following: (i) entorhinal cortex (ERC) and CA1 apical neuropil layer [CA1-stratum radiatum lacunosum moleculare (SRLM)] thickness, and (ii) activity in ERC and the dentate gyrus (DG)/CA3 region. Regression analyses revealed that this combination of factors significantly accounted for variability in memory performance. Among these metrics, CA1-SRLM thickness was positively associated with memory, whereas DG/CA3 retrieval activity was negatively associated with memory. Furthermore, including structural and functional metrics in the same model better accounted for performance than did single-modality models. These results advance the understanding of how independent but converging influences of both MTL subfield structure and function contribute to age-related memory impairment, complementing findings in the rodent and human postmortem literatures.

  8. Plated wire memory subsystem

    NASA Technical Reports Server (NTRS)

    Carpenter, K. H.

    1974-01-01

    The design, construction, and test history of a 4096 word by 18 bit random access NDRO Plated Wire Memory for use in conjunction with a spacecraft input/output and central processing unit is reported. A technical and functional description is given along with diagrams illustrating layout and systems operation. Test data is shown on the procedures and results of system level and memory stack testing, and hybrid circuit screening. A comparison of the most significant physical and performance characteristics of the memory unit versus the specified requirements is also included.

  9. Influence of transactive memory on perceived performance, job satisfaction and identification in anaesthesia teams.

    PubMed

    Michinov, E; Olivier-Chiron, E; Rusch, E; Chiron, B

    2008-03-01

    There is an increasing awareness in the medical community that human factors are involved in the effectiveness of anaesthesia teams. Communication and coordination between physicians and nurses seem to play a crucial role in maintaining a good level of performance under time pressure, particularly for anaesthesia teams, who are confronted with uncertainty, rapid changes in the environment, and multi-tasking. The aim of this study was to examine the relationship between a specific form of implicit coordination--the transactive memory system--and perceptions of team effectiveness and work attitudes such as job satisfaction and team identification. A cross-sectional study was conducted among 193 nurse and physician anaesthetists from eight French public hospitals. The questionnaire included measures of the transactive memory system (coordination, specialization, and credibility components), perception of team effectiveness, and work attitudes (Minnesota Job Satisfaction Questionnaire, team identification scale). The questionnaire was designed to be completed anonymously, asking only for biographical data relating to sex, age, status, and tenure. Hierarchical multiple regression analyses revealed, as predicted, that the transactive memory system predicted members' perceptions of team effectiveness, as well as affective outcomes such as job satisfaction and team identification. Moreover, the results demonstrated that transactive memory processes, and especially the coordination component, were a better predictor of teamwork perceptions than socio-demographic (i.e. gender or status) or contextual variables (i.e. tenure and size of team). These findings provided empirical evidence of the existence of a transactive memory system among real anaesthesia teams, and highlight the need to investigate whether transactive memory is actually linked with objective measures of performance.

  10. Modulation of working memory updating: Does long-term memory lexical association matter?

    PubMed

    Artuso, Caterina; Palladino, Paola

    2016-02-01

    The aim of the present study was to investigate how working memory updating for verbal material is modulated by enduring properties of long-term memory. Two coexisting perspectives that account for the relation between long-term representation and short-term performance were addressed. First, evidence suggests that performance is more closely linked to lexical properties, that is, co-occurrences within the language. Conversely, other evidence suggests that performance is linked more to long-term representations which do not entail lexical/linguistic representations. Our aim was to investigate how these two kinds of long-term memory associations (i.e., lexical or nonlexical) modulate ongoing working memory activity. Therefore, we manipulated (between participants) the strength of the association in letters based on either frequency of co-occurrences (lexical) or contiguity along the sequence of the alphabet (nonlexical). Results showed a cost in working memory updating for strongly lexically associated stimuli only. Our findings advance knowledge of how lexical long-term memory associations between consonants affect working memory updating and, in turn, contribute to the study of factors which impact the updating process across memory systems.

  11. Integration of lead-free ferroelectric on HfO2/Si (100) for high performance non-volatile memory applications

    PubMed Central

    Kundu, Souvik; Maurya, Deepam; Clavel, Michael; Zhou, Yuan; Halder, Nripendra N.; Hudait, Mantu K.; Banerji, Pallab; Priya, Shashank

    2015-01-01

    We introduce a novel lead-free ferroelectric thin film (1-x)BaTiO3-xBa(Cu1/3Nb2/3)O3 (x = 0.025) (BT-BCN) integrated onto HfO2-buffered Si for non-volatile memory (NVM) applications. Piezoelectric force microscopy (PFM), x-ray diffraction, and high resolution transmission electron microscopy were employed to establish the ferroelectricity in BT-BCN thin films. The PFM study reveals that domain reversal occurs with a 180° phase change on applying an external voltage, demonstrating its effectiveness for NVM device applications. X-ray photoelectron spectroscopy was used to investigate the band alignments between atomic layer deposited HfO2 and pulsed laser deposited BT-BCN films. Programming and erasing operations were explained on the basis of band alignments. The structure offers a large memory window, low leakage current, and high and low capacitance values that were easily distinguishable even after ~10^6 s, indicating strong charge storage potential. This study explains a new approach towards the realization of ferroelectric-based memory devices integrated on a Si platform and also opens up a new possibility to embed the system within current complementary metal-oxide-semiconductor processing technology. PMID:25683062

  12. Electrically and Optically Readable Light Emitting Memories

    PubMed Central

    Chang, Che-Wei; Tan, Wei-Chun; Lu, Meng-Lin; Pan, Tai-Chun; Yang, Ying-Jay; Chen, Yang-Fang

    2014-01-01

    Electrochemical metallization memories based on redox-induced resistance switching have been considered as the next-generation electronic storage devices. However, the electronic signals suffer from the interconnect delay and the limited reading speed, which are the major obstacles for memory performance. To solve this problem, here we demonstrate the first attempt of light-emitting memory (LEM) that uses SiO2 as the resistive switching material in tandem with graphene-insulator-semiconductor (GIS) light-emitting diode (LED). By utilizing the excellent properties of graphene, such as high conductivity, high robustness and high transparency, our proposed LEM enables data communication via electronic and optical signals simultaneously. Both the bistable light-emission state and the resistance switching properties can be attributed to the conducting filament mechanism. Moreover, on the analysis of current-voltage characteristics, we further confirm that the electroluminescence signal originates from the carrier tunneling, which is quite different from the standard p-n junction model. We stress here that the newly developed LEM device possesses a simple structure with mature fabrication processes, which integrates advantages of all composed materials and can be extended to many other material systems. It should be able to attract academic interest as well as stimulate industrial application. PMID:24894723

  13. Incidental Memory Encoding Assessed with Signal Detection Theory and Functional Magnetic Resonance Imaging (fMRI).

    PubMed

    Clemens, Benjamin; Regenbogen, Christina; Koch, Kathrin; Backes, Volker; Romanczuk-Seiferth, Nina; Pauly, Katharina; Shah, N Jon; Schneider, Frank; Habel, Ute; Kellermann, Thilo

    2015-01-01

    In functional magnetic resonance imaging (fMRI) studies that apply a "subsequent memory" approach, successful encoding is indicated by increased fMRI activity during the encoding phase for hits vs. misses, in areas underlying memory encoding such as the hippocampal formation. Signal-detection theory (SDT) can be used to analyze memory-related fMRI activity as a function of the participant's memory trace strength (d′). The goal of the present study was to use SDT to examine the relationship between fMRI activity during incidental encoding and participants' recognition performance. To implement a new approach, post-experimental group assignment into High or Low Performers (HP or LP) was based on 29 healthy participants' recognition performance, assessed with SDT. The analyses focused on the interaction between the factors group (HP vs. LP) and recognition performance (hits vs. misses). A whole-brain analysis revealed increased activation for HP vs. LP during incidental encoding for remembered vs. forgotten items (hits > misses) in the insula/temporo-parietal junction (TPJ) and the fusiform gyrus (FFG). Parameter estimates in these regions exhibited a significant positive correlation with d′. As these brain regions are highly relevant for salience detection (insula), stimulus-driven attention (TPJ), and content-specific processing of mnemonic stimuli (FFG), we suggest that HPs' elevated memory performance was associated with enhanced attentional and content-specific sensory processing during the encoding phase. We provide the first correlative evidence that encoding-related activity in content-specific sensory areas and content-independent attention and salience detection areas influences memory performance in a task with incidental encoding of facial stimuli. Based on our findings, we discuss whether the aforementioned group differences in brain activity during incidental encoding might constitute the basis of general differences in memory performance between HP and LP.
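
    The trace-strength index d′ referred to above is computed from hit and false-alarm rates under the equal-variance Gaussian signal-detection model; the minimal sketch below uses that standard formula with a simple correction for extreme rates, which may differ in detail from the study's exact procedure.

      from scipy.stats import norm

      def d_prime(hits, misses, false_alarms, correct_rejections):
          """Equal-variance Gaussian d' = z(hit rate) - z(false-alarm rate).

          Adding 0.5 to each cell keeps the z-transform finite when a rate
          would otherwise be exactly 0 or 1.
          """
          hit_rate = (hits + 0.5) / (hits + misses + 1.0)
          fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
          return norm.ppf(hit_rate) - norm.ppf(fa_rate)

      # Example: 40 old and 40 new faces at a recognition test.
      print(d_prime(hits=32, misses=8, false_alarms=10, correct_rejections=30))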

  14. Loganin enhances long-term potentiation and recovers scopolamine-induced learning and memory impairments.

    PubMed

    Hwang, Eun-Sang; Kim, Hyun-Bum; Lee, Seok; Kim, Min-Ji; Lee, Sung-Ok; Han, Seung-Moo; Maeng, Sungho; Park, Ji-Ho

    2017-03-15

    Although the incidence rate of dementia is rapidly growing in the aged population, therapeutic and preventive reagents are still suboptimal. Various model systems are used for the development of such reagents, among which scopolamine is one of the favorable pharmacological tools widely applied. Loganin is a major iridoid glycoside obtained from Corni fructus (Cornus officinalis et Zucc) and has been demonstrated to have anti-inflammatory, anti-tumor and osteoporosis prevention effects. It has also been found to attenuate Aβ-induced inflammatory reactions and ameliorate memory deficits induced by scopolamine. However, there has been limited information available on how loganin affects learning and memory both electrophysiologically and behaviorally. To assess its effect on learning and memory, we investigated the influence of acute loganin administration on long-term potentiation (LTP) using organotypic cultured hippocampal tissues. In addition, we measured the effects of loganin on behavioral performance related to avoidance memory, short-term spatial navigation memory and long-term spatial learning and memory in the passive avoidance, Y-maze, and Morris water maze learning paradigms, respectively. Loganin dose-dependently increased the total activity of the fEPSP after high frequency stimulation and attenuated the scopolamine-induced blockade of the fEPSP in the hippocampal CA1 area. In accordance with these findings, loganin behaviorally attenuated scopolamine-induced shortening of step-through latency in the passive avoidance test, reduced the percent alternation in the Y-maze, and increased memory retention in the Morris water maze test. These results indicate that loganin can effectively block cholinergic muscarinic receptor blockade-induced deterioration of LTP and memory-related behavioral performance. Based on these findings, loganin may aid in the prevention and treatment of Alzheimer's disease and learning and memory-deficit disorders in the future. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. Implementing An Image Understanding System Architecture Using Pipe

    NASA Astrophysics Data System (ADS)

    Luck, Randall L.

    1988-03-01

    This paper will describe PIPE and how it can be used to implement an image understanding system. Image understanding is the process of developing a description of an image in order to make decisions about its contents. The tasks of image understanding are generally split into low level vision and high level vision. Low level vision is performed by PIPE, a high performance parallel processor with an architecture specifically designed for processing video images at up to 60 fields per second. High level vision is performed by one of several types of serial or parallel computers, depending on the application. An additional processor called ISMAP performs the conversion from iconic image space to symbolic feature space. ISMAP plugs into one of PIPE's slots and is memory mapped into the high level processor. Thus it forms the high speed link between the low and high level vision processors. The mechanisms for bottom-up, data driven processing and top-down, model driven processing are discussed.

  16. Learning and Memory Impairments in Patients with Minimal Hepatic Encephalopathy are Associated with Structural and Functional Connectivity Alterations in Hippocampus.

    PubMed

    García-García, Raquel; Cruz-Gómez, Álvaro Javier; Urios, Amparo; Mangas-Losada, Alba; Forn, Cristina; Escudero-García, Desamparados; Kosenko, Elena; Torregrosa, Isidro; Tosca, Joan; Giner-Durán, Remedios; Serra, Miguel Angel; Avila, César; Belloch, Vicente; Felipo, Vicente; Montoliu, Carmina

    2018-06-25

    Patients with minimal hepatic encephalopathy (MHE) show mild cognitive impairment associated with alterations in attentional and executive networks. There are no studies evaluating the relationship between memory in MHE and structural and functional connectivity (FC) changes in the hippocampal system. This study aimed to evaluate verbal learning and long-term memory in cirrhotic patients with (C-MHE) and without MHE (C-NMHE) and healthy controls. We assessed the relationship between alterations in memory and the structural integrity and FC of the hippocampal system. C-MHE patients showed impairments in learning, long-term memory, and recognition, compared to C-NMHE patients and controls. Cirrhotic patients showed reduced fimbria volume compared to controls. Larger volumes in hippocampus subfields were related to better memory performance in C-NMHE patients and controls. C-MHE patients presented lower FC between the L-presubiculum and L-precuneus than C-NMHE patients. Compared to controls, C-MHE patients had reduced FC between L-presubiculum and subiculum seeds and bilateral precuneus, which correlated with cognitive impairment and memory performance. Alterations in the FC of the hippocampal system could contribute to learning and long-term memory impairments in C-MHE patients. This study demonstrates the association between alterations in learning and long-term memory and structural and FC disturbances in hippocampal structures in cirrhotic patients.

  17. Application of shape memory alloy (SMA) spars for aircraft maneuver enhancement

    NASA Astrophysics Data System (ADS)

    Nam, Changho; Chattopadhyay, Aditi; Kim, Youdan

    2002-07-01

    Modern combat aircraft are required to achieve aggressive maneuverability and high agility performance, while maintaining handling qualities over a wide range of flight conditions. Recently, a new adaptive-structural concept called the variable stiffness spar has been proposed in order to increase the maneuverability of flexible aircraft. The variable stiffness spar controls wing torsional stiffness to enhance roll performance over the complete flight envelope. However, the variable stiffness spar requires a mechanical actuation system to rotate the spar during flight, and this actuation system may cause an additional weight increase. In this paper, we apply Shape Memory Alloy (SMA) spars for aeroelastic performance enhancement. In order to explore the potential of the SMA spar design, the roll performance of composite smart wings is investigated using ASTROS. A parametric study is conducted to investigate the SMA spar effects by changing the spar locations and geometry. The results show that with activation of the SMA spar, the roll effectiveness can be increased by up to 61% compared with the baseline model.

  18. Hardware for Accelerating N-Modular Redundant Systems for High-Reliability Computing

    NASA Technical Reports Server (NTRS)

    Dobbs, Carl, Sr.

    2012-01-01

    A hardware unit has been designed that reduces the cost, in terms of performance and power consumption, of implementing N-modular redundancy (NMR) in a multiprocessor device. The innovation monitors transactions to memory and calculates a form of sumcheck on the fly, thereby relieving the processors of calculating the sumcheck in software.
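
    The hardware design itself is not described in detail in this record; as a purely conceptual software analogue, the sketch below reduces each redundant copy's stream of memory writes to a running checksum and majority-votes across copies, so that only the checksums, rather than every individual transaction, need to be compared.

      import zlib

      # Conceptual software analogue (not the hardware unit above): each
      # redundant copy's memory-write stream is folded into one checksum.

      def stream_checksum(writes):
          """Fold a sequence of (address, value) memory writes into one CRC32."""
          crc = 0
          for addr, value in writes:
              crc = zlib.crc32(addr.to_bytes(4, "little") +
                               value.to_bytes(4, "little"), crc)
          return crc

      def vote(checksums):
          """Majority vote across N redundant copies: (winner, dissenting indices)."""
          winner = max(set(checksums), key=checksums.count)
          dissenters = [i for i, c in enumerate(checksums) if c != winner]
          return winner, dissenters

      good = [(0x1000, 7), (0x1004, 8), (0x2000, 99)]
      faulty = [(0x1000, 7), (0x1004, 9), (0x2000, 99)]   # one corrupted write
      sums = [stream_checksum(good), stream_checksum(good), stream_checksum(faulty)]
      print(vote(sums))   # copy 2 disagrees and is flagged for recovery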

  19. Age-related individual variability in memory performance is associated with amygdala-hippocampal circuit function and emotional pattern separation.

    PubMed

    Leal, Stephanie L; Noche, Jessica A; Murray, Elizabeth A; Yassa, Michael A

    2017-01-01

    While aging is generally associated with episodic memory decline, not all older adults exhibit memory loss. Furthermore, emotional memories are not subject to the same extent of forgetting and appear preserved in aging. We conducted high-resolution fMRI during a task involving pattern separation of emotional information in older adults with and without age-related memory impairment (characterized by performance on a word-list learning task: low performers: LP vs. high performers: HP). We found signals consistent with emotional pattern separation in hippocampal dentate (DG)/CA3 in HP but not in LP individuals, suggesting a deficit in emotional pattern separation. During false recognition, we found increased DG/CA3 activity in LP individuals, suggesting that hyperactivity may be associated with overgeneralization. We additionally observed a selective deficit in basolateral amygdala-lateral entorhinal cortex-DG/CA3 functional connectivity in LP individuals during pattern separation of negative information. During negative false recognition, LP individuals showed increased medial temporal lobe functional connectivity, consistent with overgeneralization. Overall, these results suggest a novel mechanistic account of individual differences in emotional memory alterations exhibited in aging. Copyright © 2016 Elsevier Inc. All rights reserved.

  20. Age-related individual variability in memory performance is associated with amygdala-hippocampal circuit function and emotional pattern separation

    PubMed Central

    Leal, Stephanie L.; Noche, Jessica A.; Murray, Elizabeth A.; Yassa, Michael A.

    2018-01-01

    While aging is generally associated with episodic memory decline, not all older adults exhibit memory loss. Furthermore, emotional memories are not subject to the same extent of forgetting and appear preserved in aging. We conducted high-resolution fMRI during a task involving pattern separation of emotional information in older adults with and without age-related memory impairment (characterized by performance on a word-list learning task: low performers: LP vs. high performers: HP). We found signals consistent with emotional pattern separation in hippocampal dentate (DG)/CA3 in HP but not in LP individuals, suggesting a deficit in emotional pattern separation. During false recognition, we found increased DG/CA3 activity in LP individuals, suggesting that hyperactivity may be associated with overgeneralization. We additionally observed a selective deficit in basolateral amygdala—lateral entorhinal cortex—DG/CA3 functional connectivity in LP individuals during pattern separation of negative information. During negative false recognition, LP individuals showed increased medial temporal lobe functional connectivity, consistent with overgeneralization. Overall, these results suggest a novel mechanistic account of individual differences in emotional memory alterations exhibited in aging. PMID:27723500

  1. RAID-2: Design and implementation of a large scale disk array controller

    NASA Technical Reports Server (NTRS)

    Katz, R. H.; Chen, P. M.; Drapeau, A. L.; Lee, E. K.; Lutz, K.; Miller, E. L.; Seshan, S.; Patterson, D. A.

    1992-01-01

    We describe the implementation of a large scale disk array controller and subsystem incorporating over 100 high performance 3.5 inch disk drives. It is designed to provide 40 MB/s sustained performance and 40 GB capacity in three 19 inch racks. The array controller forms an integral part of a file server that attaches to a Gb/s local area network. The controller implements a high bandwidth interconnect between an interleaved memory, an XOR calculation engine, the network interface (HIPPI), and the disk interfaces (SCSI). The system is now functionally operational, and we are tuning its performance. We review the design decisions, history, and lessons learned from this three year university implementation effort to construct a truly large scale system assembly.
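
    As a minimal software sketch of the parity arithmetic that the controller's XOR calculation engine performs in hardware (this is not the RAID-2 firmware), the following Python shows how a lost data block in a stripe is reconstructed from the surviving blocks and the parity block.

      # Minimal sketch of XOR parity over a stripe of equal-length data blocks.

      def xor_blocks(blocks):
          """XOR a list of equal-length data blocks byte by byte."""
          parity = bytearray(len(blocks[0]))
          for block in blocks:
              for i, byte in enumerate(block):
                  parity[i] ^= byte
          return bytes(parity)

      stripe = [b"AAAA", b"BBBB", b"CCCC"]          # data blocks on three disks
      parity = xor_blocks(stripe)                   # stored on a parity disk

      # If one disk fails, its block is the XOR of the survivors and the parity.
      rebuilt = xor_blocks([stripe[0], stripe[2], parity])
      assert rebuilt == stripe[1]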

  2. Multifunctional wearable devices for diagnosis and therapy of movement disorders.

    PubMed

    Son, Donghee; Lee, Jongha; Qiao, Shutao; Ghaffari, Roozbeh; Kim, Jaemin; Lee, Ji Eun; Song, Changyeong; Kim, Seok Joo; Lee, Dong Jun; Jun, Samuel Woojoo; Yang, Shixuan; Park, Minjoon; Shin, Jiho; Do, Kyungsik; Lee, Mincheol; Kang, Kwanghun; Hwang, Cheol Seong; Lu, Nanshu; Hyeon, Taeghwan; Kim, Dae-Hyeong

    2014-05-01

    Wearable systems that monitor muscle activity, store data and deliver feedback therapy are the next frontier in personalized medicine and healthcare. However, technical challenges, such as the fabrication of high-performance, energy-efficient sensors and memory modules that are in intimate mechanical contact with soft tissues, in conjunction with controlled delivery of therapeutic agents, limit the wide-scale adoption of such systems. Here, we describe materials, mechanics and designs for multifunctional, wearable-on-the-skin systems that address these challenges via monolithic integration of nanomembranes fabricated with a top-down approach, nanoparticles assembled by bottom-up methods, and stretchable electronics on a tissue-like polymeric substrate. Representative examples of such systems include physiological sensors, non-volatile memory and drug-release actuators. Quantitative analyses of the electronics, mechanics, heat-transfer and drug-diffusion characteristics validate the operation of individual components, thereby enabling system-level multifunctionalities.

  3. Cognitive load and task condition in event- and time-based prospective memory: an experimental investigation.

    PubMed

    Khan, Azizuddin; Sharma, Narendra K; Dixit, Shikha

    2008-09-01

    Prospective memory is memory for the realization of delayed intention. Researchers distinguish 2 kinds of prospective memory: event- and time-based (G. O. Einstein & M. A. McDaniel, 1990). Taking that distinction into account, the present authors explored participants' comparative performance under event- and time-based tasks. In an experimental study of 80 participants, the authors investigated the roles of cognitive load and task condition in prospective memory. Cognitive load (low vs. high) and task condition (event- vs. time-based task) were the independent variables. Accuracy in prospective memory was the dependent variable. Results showed significant differential effects under event- and time-based tasks. However, the effect of cognitive load was more detrimental in time-based prospective memory. Results also revealed that time monitoring is critical in successful performance of time estimation and so in time-based prospective memory. Similarly, participants' better performance on the event-based prospective memory task showed that they acted on the basis of environment cues. Event-based prospective memory was environmentally cued; time-based prospective memory required self-initiation.

  4. Development of 3-Year Roadmap to Transform the Discipline of Systems Engineering

    DTIC Science & Technology

    2010-03-31

    ... quickly humans could physically construct them. Indeed, magnetic core memory was entirely constructed by human hands until it was superseded by ... For their mainframe computers, IBM develops the applications, operating system, computer hardware and microprocessors (off the shelf standard memory ... processor developers work on potential computational and memory pipelines to support the required performance capabilities and use the available transistors ...

  5. Application of source biasing technique for energy efficient DECODER circuit design: memory array application

    NASA Astrophysics Data System (ADS)

    Gupta, Neha; Parihar, Priyanka; Neema, Vaibhav

    2018-04-01

    Researchers have proposed many circuit techniques to reduce leakage power dissipation in memory cells. If we want to reduce the overall power in the memory system, we have to work on the input circuitry of the memory architecture, i.e. the row and column decoders. In this research work, a low-leakage, high-speed row and column DECODER for memory array applications is designed and four new techniques are proposed. Cluster DECODER, body bias DECODER, source bias DECODER, and source coupling DECODER designs are analyzed and compared for memory array application. Simulation is performed for the comparative analysis of the different DECODER design parameters at the 180 nm GPDK technology node using the CADENCE tool. Simulation results show that the proposed source bias DECODER circuit technique decreases leakage current by 99.92% and static energy by 99.92% at a supply voltage of 1.2 V. The proposed circuit also improves dynamic power dissipation by 5.69%, dynamic PDP/EDP by 65.03%, and delay by 57.25% at a 1.2 V supply voltage.

  6. Using Rollback Avoidance to Mitigate Failures in Next-Generation Extreme-Scale Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Levy, Scott N.

    2016-05-01

    High-performance computing (HPC) systems enable scientists to numerically model complex phenomena in many important physical systems. The next major milestone in the development of HPC systems is the construction of the first supercomputer capable of executing more than an exaflop, 10^18 floating point operations per second. On systems of this scale, failures will occur much more frequently than on current systems. As a result, resilience is a key obstacle to building next-generation extreme-scale systems. Coordinated checkpointing is currently the most widely-used mechanism for handling failures on HPC systems. Although coordinated checkpointing remains effective on current systems, increasing the scale of today's systems to build next-generation systems will increase the cost of fault tolerance as more and more time is taken away from the application to protect against or recover from failure. Rollback avoidance techniques seek to mitigate the cost of checkpoint/restart by allowing an application to continue its execution rather than rolling back to an earlier checkpoint when failures occur. These techniques include failure prediction and preventive migration, replicated computation, fault-tolerant algorithms, and software-based memory fault correction. In this thesis, we examine how rollback avoidance techniques can be used to address failures on extreme-scale systems. Using a combination of analytic modeling and simulation, we evaluate the potential impact of rollback avoidance on these systems. We then present a novel rollback avoidance technique that exploits similarities in application memory. Finally, we examine the feasibility of using this technique to protect against memory faults in kernel memory.
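
    The growing cost of checkpoint/restart that rollback avoidance targets is often estimated with the Young/Daly approximation for the optimal checkpoint interval; the sketch below uses that standard analytic model with made-up parameters and is not the model developed in the thesis.

      import math

      # Young's approximation for the optimal checkpoint interval, a standard
      # analytic model of checkpoint/restart cost (made-up numbers).

      def optimal_interval(checkpoint_seconds, system_mtbf_seconds):
          """Young's approximation: t_opt ~ sqrt(2 * C * MTBF)."""
          return math.sqrt(2.0 * checkpoint_seconds * system_mtbf_seconds)

      def efficiency(interval, checkpoint_seconds, system_mtbf_seconds):
          """Rough fraction of time spent on useful work (checkpointing plus
          expected recomputation after failures)."""
          rework = interval / 2.0                 # expected lost work per failure
          overhead = checkpoint_seconds / interval + rework / system_mtbf_seconds
          return max(0.0, 1.0 - overhead)

      node_mtbf = 5 * 365 * 24 * 3600.0           # assume 5 years per node
      for nodes in (10_000, 100_000, 1_000_000):
          mtbf = node_mtbf / nodes                # system MTBF shrinks with scale
          t = optimal_interval(checkpoint_seconds=600, system_mtbf_seconds=mtbf)
          print(nodes, round(t), round(efficiency(t, 600, mtbf), 2))

    With these illustrative numbers the useful-work fraction collapses as the node count grows, which is the scaling problem the record cites as motivation for rollback avoidance.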

  7. Mapping the Proxies of Memory and Learning Function in Senior Adults with High-performing, Normal Aging and Neurocognitive Disorders.

    PubMed

    Lu, Hanna; Xi, Ni; Fung, Ada W T; Lam, Linda C W

    2018-06-09

    Memory and learning, as core brain functions, show conflicting results across studies focusing on aging and dementia. One of the reasons is the multi-faceted nature of memory and learning. However, there is still a dearth of comparable proxies with a psychometric and morphometric portrait in clinical and non-clinical populations. We aim to investigate the proxies of memory and learning function with direct and derived measures and examine their associations with morphometric features in senior adults with different cognitive status. Based on two modality-driven tests, we assessed component-specific memory and learning in individuals with high-performing (HP), normal aging, and neurocognitive disorders (NCD) (n = 488). Structural magnetic resonance imaging was used to measure regional cortical thickness with surface-based morphometry analysis in a subsample (n = 52). Compared with HP elderly, those with normal aging and minor NCD showed declined recognition memory and working memory, whereas they had better learning performance (derived scores). Meanwhile, major NCD patients showed more breakdowns of memory and learning function. The correlations between proxies of memory and learning and cortical thickness revealed both overlapping and unique neural underpinnings. The proxies of memory and learning could be characterized by component-specific constructs with psychometric and morphometric bases. Overall, the constructs of memory are more likely related to pathological changes, and the constructs of learning tend to reflect the cognitive abilities of compensation.

  8. Digital Equipment Corporation VAX/VMS Version 4.3

    DTIC Science & Technology

    1986-07-30

    ... operating system performs process-oriented paging that allows execution of programs that may be larger than the physical memory allocated to them ... to higher privileged modes. (For an explanation of how the four access modes provide memory access protection see page 9, "Memory Management".) ... to optimize program performance for real-time applications or interactive environments.

  9. Acute effects of alcohol on the development of intrusive memories.

    PubMed

    Bisby, James A; Brewin, Chris R; Leitz, Julie R; Valerie Curran, H

    2009-07-01

    Post-traumatic stress disorder is characterised by repeated intrusive imagery of the traumatic event. Despite alcohol's impairing effect on memory and frequent involvement in real-life trauma, virtually nothing is known of the interaction between alcohol and trauma memory. We aimed to investigate the acute alcohol effects on spontaneous memories following a trauma film as well as explicit memory for the film. Utilising an independent-group double-blind design, 48 healthy volunteers were randomly allocated to receive alcohol of 0.4 or 0.8 g/kg or a matched placebo drink. A stressful film was viewed post-drink. Skin conductance was monitored throughout and mood and dissociative symptoms were indexed. Volunteers recorded their spontaneous memories of the film daily in an online diary over the following week. Their explicit memory for both gist and details of the film was tested on day 7. Intriguingly, an inverted 'U' alcohol dose-response was observed on intrusive memories with a low dose of alcohol increasing memory intrusions while a high dose decreased intrusions. In contrast, explicit memory performance after 7 days showed a linear dose-response effect of alcohol with both recall and recognition decreasing as dose increased. These findings highlight a striking differential pattern of alcohol's effects on spontaneous memories as compared with explicit memories. Alcohol's effect on spontaneous memories may reflect a dose-dependent impairment of two separate memory systems integral to the processing of different aspects of a traumatic event.

  10. High performance non-volatile ferroelectric copolymer memory based on a ZnO nanowire transistor fabricated on a transparent substrate

    NASA Astrophysics Data System (ADS)

    Nedic, Stanko; Tea Chun, Young; Hong, Woong-Ki; Chu, Daping; Welland, Mark

    2014-01-01

    A high performance ferroelectric non-volatile memory device based on a top-gate ZnO nanowire (NW) transistor fabricated on a glass substrate is demonstrated. The ZnO NW channel was spin-coated with a poly(vinylidenefluoride-co-trifluoroethylene) (P(VDF-TrFE)) layer acting as a top-gate dielectric without a buffer layer. Electrical conductance modulation and memory hysteresis are achieved by a gate electric field induced reversible electrical polarization switching of the P(VDF-TrFE) thin film. Furthermore, the fabricated device exhibits a memory window of ~16.5 V, a high drain current on/off ratio of ~10^5, a gate leakage current below ~300 pA, and excellent retention characteristics for over 10^4 s.

  11. Effects of exercise intensity on spatial memory performance and hippocampal synaptic plasticity in transient brain ischemic rats.

    PubMed

    Shih, Pei-Cheng; Yang, Yea-Ru; Wang, Ray-Yau

    2013-01-01

    Memory impairment is commonly noted in stroke survivors, and can lead to delay of functional recovery. Exercise has been proved to improve memory in adult healthy subjects. Such beneficial effects are often suggested to relate to hippocampal synaptic plasticity, which is important for memory processing. Previous evidence showed that in normal rats, low intensity exercise can improve synaptic plasticity better than high intensity exercise. However, the effects of exercise intensities on hippocampal synaptic plasticity and spatial memory after brain ischemia remain unclear. In this study, we investigated such effects in brain ischemic rats. The middle cerebral artery occlusion (MCAO) procedure was used to induce brain ischemia. After the MCAO procedure, rats were randomly assigned to sedentary (Sed), low-intensity exercise (Low-Ex), or high-intensity exercise (High-Ex) group. Treadmill training began from the second day post MCAO procedure, 30 min/day for 14 consecutive days for the exercise groups. The Low-Ex group was trained at the speed of 8 m/min, while the High-Ex group at the speed of 20 m/min. The spatial memory, hippocampal brain-derived neurotrophic factor (BDNF), synapsin-I, postsynaptic density protein 95 (PSD-95), and dendritic structures were examined to document the effects. Serum corticosterone level was also quantified as stress marker. Our results showed the Low-Ex group, but not the High-Ex group, demonstrated better spatial memory performance than the Sed group. Dendritic complexity and the levels of BDNF and PSD-95 increased significantly only in the Low-Ex group as compared with the Sed group in bilateral hippocampus. Notably, increased level of corticosterone was found in the High-Ex group, implicating higher stress response. In conclusion, after brain ischemia, low intensity exercise may result in better synaptic plasticity and spatial memory performance than high intensity exercise; therefore, the intensity is suggested to be considered during exercise training.

  12. Transactive memory systems scale for couples: development and validation

    PubMed Central

    Hewitt, Lauren Y.; Roberts, Lynne D.

    2015-01-01

    People in romantic relationships can develop shared memory systems by pooling their cognitive resources, allowing each person access to more information but with less cognitive effort. Research examining such memory systems in romantic couples largely focuses on remembering word lists or performing lab-based tasks, but these types of activities do not capture the processes underlying couples’ transactive memory systems, and may not be representative of the ways in which romantic couples use their shared memory systems in everyday life. We adapted an existing measure of transactive memory systems for use with romantic couples (TMSS-C), and conducted an initial validation study. In total, 397 participants who each identified as being a member of a romantic relationship of at least 3 months duration completed the study. The data provided a good fit to the anticipated three-factor structure of the components of couples’ transactive memory systems (specialization, credibility and coordination), and there was reasonable evidence of both convergent and divergent validity, as well as strong evidence of test–retest reliability across a 2-week period. The TMSS-C provides a valuable tool that can quickly and easily capture the underlying components of romantic couples’ transactive memory systems. It has potential to help us better understand this intriguing feature of romantic relationships, and how shared memory systems might be associated with other important features of romantic relationships. PMID:25999873

  13. Hysteresis and memory factor of the Kerr effect in blue phases

    NASA Astrophysics Data System (ADS)

    Nordendorf, Gaby; Lorenz, Alexander; Hoischen, Andreas; Schmidtke, Jürgen; Kitzerow, Heinz; Wilkes, David; Wittek, Michael

    2013-11-01

    The performance of a polymer-stabilized blue phase system based on a nematic host with large dielectric anisotropy and a chiral dopant with high helical twisting power is investigated and the influence of the reactive monomer composition on the electro-optic characteristics is studied. Field-induced birefringence with a Kerr coefficient greater than 1 nm V^-2 can be achieved in a large temperature range from well below 20 °C to above 55 °C. The disturbing influences of electro-optic hysteresis and memory effects can be reduced by diligent choice of the composition and appropriate electric addressing.

  14. Enabling the First Ever Measurement of Coherent Neutrino Scattering Through Background Neutron Measurements.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reyna, David; Betty, Rita

    Using High Performance Computing to Examine the Processes of Neurogenesis Underlying Pattern Separation/Completion of Episodic Information - Sandia researchers developed novel methods and metrics for studying the computational function of neurogenesis, thus generating substantial impact on the neuroscience and neural computing communities. This work could benefit applications in machine learning and other analysis activities. The purpose of this project was to computationally model the impact of neural population dynamics within the neurobiological memory system in order to examine how subareas in the brain enable pattern separation and completion of information in memory across time as associated experiences.

  15. Hold-up power supply for flash memory

    NASA Technical Reports Server (NTRS)

    Ott, William E. (Inventor)

    2004-01-01

    A hold-up power supply for flash memory systems is provided. The hold-up power supply provides the flash memory with the power needed to operate temporarily when a power loss occurs. This allows the flash memory system to complete any erasures and writes, and thus allows it to shut down gracefully. The hold-up power supply detects when a power loss on a power supply bus is occurring and supplies the power needed for the flash memory system to operate temporarily. The hold-up power supply stores power in at least one capacitor. During normal operation, power from a high voltage supply bus is used to charge the storage capacitors. When a power supply loss is detected, the power supply bus is disconnected from the flash memory system. A hold-up controller controls the power flow from the storage capacitors to the flash memory system. The hold-up controller uses feedback to ensure that the proper voltage is provided from the storage capacitors to the flash memory system. The power supplied by the storage capacitors allows the flash memory system to complete any erasures and writes, and thus allows it to shut down gracefully.
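
    The record above describes a control sequence: charge a capacitor bank during normal operation, detect bus power loss, disconnect the bus, and regulate capacitor energy to the flash system until pending erasures and writes complete. The following is a minimal, self-contained Python simulation of that sequence; the thresholds, the one-write-per-step discharge model, and all names are illustrative assumptions, not details taken from the patent.

        # Illustrative sketch only: a simplified hold-up sequence. Every name,
        # threshold, and the discharge model are assumptions for demonstration.
        BUS_LOSS_V = 4.5      # assumed level below which the bus counts as lost
        FLASH_V = 3.3         # assumed voltage the flash system needs for writes

        def run_holdup(bus_trace, cap_voltage=12.0, pending_writes=3):
            """Step through bus-voltage samples, switching to the storage
            capacitor on power loss and finishing writes before shutdown."""
            for v_bus in bus_trace:
                if v_bus >= BUS_LOSS_V:
                    cap_voltage = min(cap_voltage + 0.5, 12.0)  # charge capacitors
                    state = "NORMAL"
                elif pending_writes > 0 and cap_voltage > FLASH_V:
                    pending_writes -= 1                         # complete one write
                    cap_voltage -= 0.8                          # energy drawn via regulator
                    state = "HOLD_UP"
                else:
                    state = "SHUTDOWN"
                print(f"bus={v_bus:4.1f}V cap={cap_voltage:4.1f}V "
                      f"writes_left={pending_writes} state={state}")

        run_holdup([5.0, 5.0, 2.0, 2.0, 2.0, 2.0, 2.0])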

  16. JPRS Report, Science & Technology, China, High-Performance Computer Systems

    DTIC Science & Technology

    1992-10-28

    microprocessor array The microprocessor array in the AP85 system is composed of 16 completely identical array element microprocessors. Each array element...microprocessors and capable of host machine reading and writing. The memory capacity of the array element microprocessors as a whole can be expanded...transmission functions to carry out data transmission from array element microprocessor to array element microprocessor, from array element

  17. Transferable and flexible label-like macromolecular memory on arbitrary substrates with high performance and a facile methodology.

    PubMed

    Lai, Ying-Chih; Hsu, Fang-Chi; Chen, Jian-Yu; He, Jr-Hau; Chang, Ting-Chang; Hsieh, Ya-Ping; Lin, Tai-Yuan; Yang, Ying-Jay; Chen, Yang-Fang

    2013-05-21

    A newly designed transferable and flexible label-like organic memory based on a graphene electrode behaves like a sticker, and can be readily placed on desired substrates or devices for diversified purposes. The memory label reveals excellent performance despite its physical presentation. This may greatly extend the memory applications in various advanced electronics and provide a simple scheme to integrate with other electronics. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Functional Brain Network Modularity Captures Inter- and Intra-Individual Variation in Working Memory Capacity

    PubMed Central

    Stevens, Alexander A.; Tappon, Sarah C.; Garg, Arun; Fair, Damien A.

    2012-01-01

    Background Cognitive abilities, such as working memory, differ among people; however, individuals also vary in their own day-to-day cognitive performance. One potential source of cognitive variability may be fluctuations in the functional organization of neural systems. The degree to which the organization of these functional networks is optimized may relate to the effective cognitive functioning of the individual. Here we specifically examine how changes in the organization of large-scale networks measured via resting state functional connectivity MRI and graph theory track changes in working memory capacity. Methodology/Principal Findings Twenty-two participants performed a test of working memory capacity and then underwent resting-state fMRI. Seventeen subjects repeated the protocol three weeks later. We applied graph theoretic techniques to measure network organization on 34 brain regions of interest (ROI). Network modularity, which measures the level of integration and segregation across sub-networks, and small-worldness, which measures global network connection efficiency, both predicted individual differences in memory capacity; however, only modularity predicted intra-individual variation across the two sessions. Partial correlations controlling for the component of working memory that was stable across sessions revealed that modularity was almost entirely associated with the variability of working memory at each session. Analyses of specific sub-networks and individual circuits were unable to consistently account for working memory capacity variability. Conclusions/Significance The results suggest that the intrinsic functional organization of an a priori defined cognitive control network measured at rest provides substantial information about actual cognitive performance. The association of network modularity to the variability in an individual's working memory capacity suggests that the organization of this network into high connectivity within modules and sparse connections between modules may reflect effective signaling across brain regions, perhaps through the modulation of signal or the suppression of the propagation of noise. PMID:22276205
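
    As an illustration of the graph-theoretic measure central to this record, the sketch below builds a functional network from a thresholded correlation matrix and computes its modularity with networkx. The synthetic time series, the built-in module structure, the node count, and the 0.3 threshold are assumptions for demonstration only, not the study's preprocessing or analysis pipeline.

        # Illustrative sketch only: modularity of a thresholded correlation network.
        import numpy as np
        import networkx as nx
        from networkx.algorithms import community

        rng = np.random.default_rng(0)
        n_rois, n_vols, n_modules = 32, 200, 4            # assumed sizes, not the study's
        latent = rng.standard_normal((n_vols, n_modules))  # shared per-module signals
        labels = np.repeat(np.arange(n_modules), n_rois // n_modules)
        ts = latent[:, labels] + 0.8 * rng.standard_normal((n_vols, n_rois))

        corr = np.corrcoef(ts, rowvar=False)               # ROI-by-ROI correlations
        np.fill_diagonal(corr, 0.0)
        G = nx.from_numpy_array((corr > 0.3).astype(int))  # keep only stronger edges

        parts = community.greedy_modularity_communities(G)
        Q = community.modularity(G, parts)                 # higher Q = more modular
        print(f"{len(parts)} modules detected, modularity Q = {Q:.3f}")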

  19. Conversion of short-term to long-term memory in the novel object recognition paradigm

    PubMed Central

    Moore, Shannon J.; Deshpande, Kaivalya; Stinnett, Gwen S.; Seasholtz, Audrey F.; Murphy, Geoffrey G.

    2013-01-01

    It is well-known that stress can significantly impact learning; however, whether this effect facilitates or impairs the resultant memory depends on the characteristics of the stressor. Investigation of these dynamics can be confounded by the role of the stressor in motivating performance in a task. Positing a cohesive model of the effect of stress on learning and memory necessitates elucidating the consequences of stressful stimuli independently from task-specific functions. Therefore, the goal of this study was to examine the effect of manipulating a task-independent stressor (elevated light level) on short-term and long-term memory in the novel object recognition paradigm. Short-term memory was elicited in both low light and high light conditions, but long-term memory specifically required high light conditions during the acquisition phase (familiarization trial) and was independent of the light level during retrieval (test trial). Additionally, long-term memory appeared to be independent of stress-mediated glucocorticoid release, as both low and high light produced similar levels of plasma corticosterone, which further did not correlate with subsequent memory performance. Finally, both short-term and long-term memory showed no savings between repeated experiments suggesting that this novel object recognition paradigm may be useful for longitudinal studies, particularly when investigating treatments to stabilize or enhance weak memories in neurodegenerative diseases or during age-related cognitive decline. PMID:23835143

  20. Conversion of short-term to long-term memory in the novel object recognition paradigm.

    PubMed

    Moore, Shannon J; Deshpande, Kaivalya; Stinnett, Gwen S; Seasholtz, Audrey F; Murphy, Geoffrey G

    2013-10-01

    It is well-known that stress can significantly impact learning; however, whether this effect facilitates or impairs the resultant memory depends on the characteristics of the stressor. Investigation of these dynamics can be confounded by the role of the stressor in motivating performance in a task. Positing a cohesive model of the effect of stress on learning and memory necessitates elucidating the consequences of stressful stimuli independently from task-specific functions. Therefore, the goal of this study was to examine the effect of manipulating a task-independent stressor (elevated light level) on short-term and long-term memory in the novel object recognition paradigm. Short-term memory was elicited in both low light and high light conditions, but long-term memory specifically required high light conditions during the acquisition phase (familiarization trial) and was independent of the light level during retrieval (test trial). Additionally, long-term memory appeared to be independent of stress-mediated glucocorticoid release, as both low and high light produced similar levels of plasma corticosterone, which further did not correlate with subsequent memory performance. Finally, both short-term and long-term memory showed no savings between repeated experiments suggesting that this novel object recognition paradigm may be useful for longitudinal studies, particularly when investigating treatments to stabilize or enhance weak memories in neurodegenerative diseases or during age-related cognitive decline. Copyright © 2013 Elsevier Inc. All rights reserved.

  1. Highly Efficient Coherent Optical Memory Based on Electromagnetically Induced Transparency

    NASA Astrophysics Data System (ADS)

    Hsiao, Ya-Fen; Tsai, Pin-Ju; Chen, Hung-Shiue; Lin, Sheng-Xiang; Hung, Chih-Chiao; Lee, Chih-Hsi; Chen, Yi-Hsin; Chen, Yong-Fan; Yu, Ite A.; Chen, Ying-Cheng

    2018-05-01

    Quantum memory is an important component in long-distance quantum communication based on the quantum repeater protocol. To outperform the direct transmission of photons with quantum repeaters, it is crucial to develop quantum memories with high fidelity, high efficiency and a long storage time. Here, we achieve a storage efficiency of 92.0 (1.5)% for a coherent optical memory based on the electromagnetically induced transparency scheme in optically dense cold atomic media. We also obtain a useful time-bandwidth product of 1200, considering only storage where the retrieval efficiency remains above 50%. Both are the best records to date among all schemes for the realization of optical memory. Our work significantly advances the pursuit of a high-performance optical memory and should have important applications in quantum information science.

  2. Highly Efficient Coherent Optical Memory Based on Electromagnetically Induced Transparency.

    PubMed

    Hsiao, Ya-Fen; Tsai, Pin-Ju; Chen, Hung-Shiue; Lin, Sheng-Xiang; Hung, Chih-Chiao; Lee, Chih-Hsi; Chen, Yi-Hsin; Chen, Yong-Fan; Yu, Ite A; Chen, Ying-Cheng

    2018-05-04

    Quantum memory is an important component in long-distance quantum communication based on the quantum repeater protocol. To outperform the direct transmission of photons with quantum repeaters, it is crucial to develop quantum memories with high fidelity, high efficiency and a long storage time. Here, we achieve a storage efficiency of 92.0 (1.5)% for a coherent optical memory based on the electromagnetically induced transparency scheme in optically dense cold atomic media. We also obtain a useful time-bandwidth product of 1200, considering only storage where the retrieval efficiency remains above 50%. Both are the best records to date among all schemes for the realization of optical memory. Our work significantly advances the pursuit of a high-performance optical memory and should have important applications in quantum information science.

  3. Ontogeny of sensorimotor gating and short-term memory processing throughout the adolescent period in rats.

    PubMed

    Goepfrich, Anja A; Friemel, Chris M; Pauen, Sabina; Schneider, Miriam

    2017-06-01

    Adolescence and puberty are highly susceptible developmental periods during which the neuronal organization and maturation of the brain is completed. The endocannabinoid (eCB) system, which is well known to modulate cognitive processing, undergoes profound and transient developmental changes during adolescence. With the present study we aimed to examine the ontogeny of cognitive skills throughout adolescence in male rats and clarify the potential modulatory role of CB1 receptor signalling. Cognitive skills were assessed repeatedly every 10th day in rats throughout adolescence. All animals were tested for object recognition memory and prepulse inhibition of the acoustic startle reflex. Although cognitive performance in short-term memory as well as sensorimotor gating abilities were decreased during puberty compared to adulthood, both tasks were found to show different developmental trajectories throughout adolescence. A low dose of the CB1 receptor antagonist/inverse agonist SR141716 was found to improve recognition memory specifically in pubertal animals while not affecting behavioral performance at other ages tested. The present findings demonstrate that the developmental trajectory of cognitive abilities does not occur linearly for all cognitive processes and is strongly influenced by pubertal maturation. Developmental alterations within the eCB system at puberty onset may be involved in these changes in cognitive processing. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  4. Optimizing Irregular Applications for Energy and Performance on the Tilera Many-core Architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chavarría-Miranda, Daniel; Panyala, Ajay R.; Halappanavar, Mahantesh

    Optimizing applications simultaneously for energy and performance is a complex problem. High performance, parallel, irregular applications are notoriously hard to optimize due to their data-dependent memory accesses, lack of structured locality and complex data structures and code patterns. Irregular kernels are growing in importance in applications such as machine learning, graph analytics and combinatorial scientific computing. Performance- and energy-efficient implementation of these kernels on modern, energy efficient, multicore and many-core platforms is therefore an important and challenging problem. We present results from optimizing two irregular applications - the Louvain method for community detection (Grappolo), and high-performance conjugate gradient (HPCCG) - on the Tilera many-core system. We have significantly extended MIT's OpenTuner auto-tuning framework to conduct a detailed study of platform-independent and platform-specific optimizations to improve performance as well as reduce total energy consumption. We explore the optimization design space along three dimensions: memory layout schemes, compiler-based code transformations, and optimization of parallel loop schedules. Using auto-tuning, we demonstrate whole node energy savings of up to 41% relative to a baseline instantiation, and up to 31% relative to manually optimized variants.
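
    The design-space exploration described above can be illustrated with a toy search that times an irregular-access kernel under a few data orderings and loop chunk sizes and keeps the fastest configuration. This sketch does not use OpenTuner, and none of the parameters correspond to the Grappolo or HPCCG experiments; it only mimics the shape of the tuning loop.

        # Toy sketch of a tuning loop over layout and schedule choices; all
        # parameters are illustrative, not the paper's configuration space.
        import time
        import random

        N = 200_000
        data = [0.0] * N
        idx = list(range(N))
        random.seed(1)
        random.shuffle(idx)                   # irregular, data-dependent access pattern

        def kernel(order, chunk):
            """Traverse `data` through the index list in chunks of `chunk`."""
            for start in range(0, N, chunk):
                for i in order[start:start + chunk]:
                    data[i] += 1.0

        best = None
        for layout in ("shuffled", "sorted"):     # stand-in for memory-layout choices
            order = sorted(idx) if layout == "sorted" else idx
            for chunk in (256, 4096, 65536):      # stand-in for loop-schedule choices
                t0 = time.perf_counter()
                kernel(order, chunk)
                dt = time.perf_counter() - t0
                if best is None or dt < best[0]:
                    best = (dt, layout, chunk)
                print(f"layout={layout:8s} chunk={chunk:6d} time={dt:.4f}s")

        print("best configuration:", best)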

  5. Long-term consolidation of declarative memory: insight from temporal lobe epilepsy.

    PubMed

    Tramoni, Eve; Felician, Olivier; Barbeau, Emmanuel J; Guedj, Eric; Guye, Maxime; Bartolomei, Fabrice; Ceccaldi, Mathieu

    2011-03-01

    Several experiments carried out with a subset of patients with temporal lobe epilepsy have demonstrated normal memory performance at standard delays of recall (i.e. minutes to hours) but impaired performance over longer delays (i.e. days or weeks), suggesting altered long-term consolidation mechanisms. These mechanisms were specifically investigated in a group of five adult-onset pharmaco-sensitive patients with temporal lobe epilepsy, exhibiting severe episodic memory complaints despite normal performance at standardized memory assessment. In a first experiment, the magnitude of autobiographical memory loss was evaluated using retrograde personal memory tasks based on verbal and visual cues. In both conditions, results showed an unusual U-shaped pattern of personal memory impairment, encompassing most of the patients' life, sparing however, periods of the childhood, early adulthood and past several weeks. This profile was suggestive of a long-term consolidation impairment of personal episodes, adequately consolidated over 'short-term' delays but gradually forgotten thereafter. Therefore, in a subsequent experiment, patients were submitted to a protocol specifically devised to investigate short and long-term consolidation of contextually-bound experiences (episodic memory) and context-free information (semantic knowledge and single-items). In the short term (1 h), performance at both contextually-free and contextually-bound memory tasks was intact. After a 6-week delay, however, contextually-bound memory performance was impaired while contextually-free memory performance remained preserved. This effect was independent of task difficulty and the modality of retrieval (recall and recognition). Neuroimaging studies revealed the presence of mild metabolic changes within medial temporal lobe structures. Taken together, these results show the existence of different consolidation systems within declarative memory. They suggest that mild medial temporal lobe dysfunction can impede the building and stabilization of episodic memories but leaves long-term semantic and single-items mnemonic traces intact.

  6. HEC Applications on Columbia Project

    NASA Technical Reports Server (NTRS)

    Taft, Jim

    2004-01-01

    NASA's Columbia system consists of a cluster of twenty 512-processor SGI Altix systems. Each of these systems delivers 3 TFLOP/s of peak performance - approximately the same as the entire compute capability at NAS just one year ago. Each 512p system is a single-system-image machine with one Linux OS, one high-performance file system, and one globally shared memory. The NAS Terascale Applications Group (TAG) is chartered to assist in scaling NASA's mission-critical codes to at least 512p in order to significantly improve emergency response during flight operations, as well as to provide significant improvements in the codes and in the rate of scientific discovery across the scientific disciplines within NASA's missions. Recent accomplishments are 4x improvements to codes in the ocean modeling community, 10x performance improvements in a number of computational fluid dynamics codes used in aero-vehicle design, and 5x improvements in a number of space science codes dealing in extreme physics. The TAG group will continue its scaling work to 2048p and beyond (10240 CPUs) as the Columbia system becomes fully operational and the upgrades to the SGI NUMAlink memory fabric are in place. The NUMAlink upgrades dramatically improve system scalability for a single application. These upgrades will allow a number of codes to execute faster at higher fidelity than ever before on any other system, thus increasing the rate of scientific discovery even further.

  7. Scalable, High-performance 3D Imaging Software Platform: System Architecture and Application to Virtual Colonoscopy

    PubMed Central

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli; Brett, Bevin

    2013-01-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time, which is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth due to the massive amount of data that need to be processed. In this work, we have developed a software platform that is designed to support high-performance 3D medical image processing for a wide range of applications using increasingly available and affordable commodity computing systems: multi-core, clusters, and cloud computing systems. To achieve scalable, high-performance computing, our platform (1) employs size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D image processing algorithms; (2) supports task scheduling for efficient load distribution and balancing; and (3) consists of layered parallel software libraries that allow a wide range of medical applications to share the same functionalities. We evaluated the performance of our platform by applying it to an electronic cleansing system in virtual colonoscopy, with initial experimental results showing a 10 times performance improvement on an 8-core workstation over the original sequential implementation of the system. PMID:23366803
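
    A minimal sketch of the block-volume idea described above: partition a 3D volume into fixed-size sub-blocks and process them in a worker pool. The block size, the clipping filter, and the thread-pool scheduler are illustrative assumptions, not the platform's actual data structures or API.

        # Illustrative sketch only: block-wise parallel processing of a 3D volume.
        import numpy as np
        from concurrent.futures import ThreadPoolExecutor

        def blocks(shape, bsize):
            """Yield slice tuples covering a volume of `shape` in cubes of `bsize`."""
            zs, ys, xs = shape
            for z in range(0, zs, bsize):
                for y in range(0, ys, bsize):
                    for x in range(0, xs, bsize):
                        yield (slice(z, z + bsize), slice(y, y + bsize), slice(x, x + bsize))

        volume = np.random.rand(128, 128, 128).astype(np.float32)
        out = np.empty_like(volume)

        def process(sl):
            out[sl] = np.clip(volume[sl], 0.2, 0.8)   # stand-in per-block image operation

        with ThreadPoolExecutor(max_workers=8) as pool:   # simple task scheduling
            list(pool.map(process, blocks(volume.shape, 32)))

        print("processed", volume.shape, "volume in",
              len(list(blocks(volume.shape, 32))), "blocks")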

  8. Process Performance of Optima XEx Single Wafer High Energy Implanter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, J. H.; Yoon, Jongyoon; Kondratenko, S.

    2011-01-07

    To meet the process requirements for well formation in future CMOS memory production, high energy implanters require more robust angle, dose, and energy control while maintaining high productivity. The Optima XEx high energy implanter meets these requirements by integrating a traditional LINAC beamline with a robust single wafer handling system. To achieve beam angle control, Optima XEx can control both the horizontal and vertical beam angles to within 0.1 degrees using advanced beam angle measurement and correction. Accurate energy calibration and energy trim functions accelerate process matching by eliminating energy calibration errors. The large volume process chamber and UDC (upstream dose control) using Faraday cups outside of the process chamber precisely control implant dose regardless of any chamber pressure increase due to PR (photoresist) outgassing. An optimized RF LINAC accelerator improves reliability and enables singly charged phosphorus and boron energies up to 1200 keV and 1500 keV respectively with higher beam currents. A new single wafer endstation combined with increased beam performance leads to overall increased productivity. We report on the advanced performance of Optima XEx observed during tool installation and volume production at an advanced memory fab.

  9. [Occupational complexity and late-life memory and reasoning abilities].

    PubMed

    Ishioka, Yoshiko; Gondo, Yasuyuki; Masui, Yukie; Nakagawa, Takeshi; Tabuchi, Megumi; Ogawa, Madoka; Kamide, Kei; Ikebe, Kazunori; Arai, Yasumichi; Ishizaki, Tatsuro; Takahashi, Ryutaro

    2015-08-01

    This study examined the associations between the complexity of an individual's primary lifetime occupation and his or her late-life memory and reasoning performance, using data from 824 community-dwelling participants aged 69-72 years. The complexity of work with data, people, and things was evaluated based on the Japanese job complexity score. The associations between occupational complexity and participants' memory and reasoning abilities were examined in multiple regression analyses. An association was found between more complex work with people and higher memory performance, as well as between more complex work with data and higher reasoning performance, after controlling for gender, school records, and education. Further, an interaction effect was observed between gender and complexity of work with data in relation to reasoning performance: work involving a high degree of complexity with data was associated with high reasoning performance in men. These findings suggest the need to consider late-life cognitive functioning within the context of adulthood experiences, specifically those related to occupation and gender.

  10. Relationship between cardiac autonomic function and cognitive function in Alzheimer's disease.

    PubMed

    Nonogaki, Zen; Umegaki, Hiroyuki; Makino, Taeko; Suzuki, Yusuke; Kuzuya, Masafumi

    2017-01-01

    Alzheimer's disease (AD) affects many central nervous structures and neurotransmitter systems. These changes affect not only cognitive function, but also cardiac autonomic function. However, the functional relationship between cardiac autonomic function and cognition in AD has not yet been investigated. The objective of the present study was to evaluate the association between cardiac autonomic function measured by heart rate variability and cognitive function in AD. A total of 78 AD patients were recruited for this study. Cardiac autonomic function was evaluated using heart rate variability analysis. Multiple linear regression analysis was used to model the association between heart rate variability and cognitive function (global cognitive function, memory, executive function and processing speed), after adjustment for covariates. Global cognitive function was negatively associated with sympathetic modulation (low-to-high frequency power ratio). Memory performance was positively associated with parasympathetic modulation (high frequency power) and negatively associated with sympathetic modulation (low-to-high frequency power ratio). These associations were independent of age, sex, educational years, diabetes, hypertension and cholinesterase inhibitor use. Cognitive function, especially in the areas of memory, is associated with cardiac autonomic function in AD. Specifically, lower cognitive performance was found to be associated with significantly higher cardiac sympathetic and lower parasympathetic function in AD. Geriatr Gerontol Int 2017; 17: 92-98. © 2015 Japan Geriatrics Society.

  11. Federal Plan for High-End Computing. Report of the High-End Computing Revitalization Task Force (HECRTF)

    DTIC Science & Technology

    2004-07-01

    steadily for the past fifteen years, while memory latency and bandwidth have improved much more slowly. For example, Intel processor clock rates have... processor and memory performance) all greatly restrict the ability to achieve high levels of performance for science, engineering, and national...sub-nuclear distances. Guide experiments to identify transition from quantum chromodynamics to quark-gluon plasma. Accelerator Physics Accurate

  12. Systemic lupus erythematosus with organic brain syndrome: serial electroencephalograms accurately evaluate therapeutic efficacy.

    PubMed

    Kato, Takashi; Shiratori, Kyoji; Kobashigawa, Tsuyoshi; Hidaka, Yuji

    2006-01-01

    A 48-year-old man with systemic lupus erythematosus developed organic brain syndrome. High-dose prednisolone was ineffective, and somnolence without focal signs rapidly developed. Electroencephalogram (EEG) demonstrated a slow basic rhythm (3 Hz), but brain magnetic resonance imaging was normal. Somnolence resolved soon after performing plasma exchange (two sessions). However, memory dysfunction persisted, with EEG demonstrating mild abnormalities (7-8 Hz basic rhythm). Double-filtration plasmapheresis (three sessions) was done, followed by intravenous cyclophosphamide. Immediately after the first plasmapheresis session, memory dysfunction began to improve. After the second dose of cyclophosphamide, intellectual function resolved completely and EEG findings also normalized (basic rhythm of 10 Hz waves). Serial EEG findings precisely reflected the neurological condition and therapeutic efficacy in this patient. In contrast, protein levels in cerebrospinal fluid remained high and did not seem to appropriately reflect the neurological condition in this patient.

  13. Results from prototype die-to-database reticle inspection system

    NASA Astrophysics Data System (ADS)

    Mu, Bo; Dayal, Aditya; Broadbent, Bill; Lim, Phillip; Goonesekera, Arosha; Chen, Chunlin; Yeung, Kevin; Pinto, Becky

    2009-03-01

    A prototype die-to-database high-resolution reticle defect inspection system has been developed for 32nm and below logic reticles, and 4X Half Pitch (HP) production and 3X HP development memory reticles. These nodes will use predominantly 193nm immersion lithography (with some layers double patterned), although EUV may also be used. Many different reticle types may be used for these generations including: binary (COG, EAPSM), simple tritone, complex tritone, high transmission, dark field alternating (APSM), mask enhancer, CPL, and EUV. Finally, aggressive model based OPC is typically used, which includes many small structures such as jogs, serifs, and SRAF (sub-resolution assist features), accompanied by very small gaps between adjacent structures. The architecture and performance of the prototype inspection system is described. This system is designed to inspect the aforementioned reticle types in die-to-database mode. Die-to-database inspection results are shown on standard programmed defect test reticles, as well as advanced 32nm logic, and 4X HP and 3X HP memory reticles from industry sources. Direct comparisons with current-generation inspection systems show measurable sensitivity improvement and a reduction in false detections.

  14. InSync Adaptive Traffic Control System for the Veterans Memorial Hwy Corridor on Long Island, NY

    DOT National Transportation Integrated Search

    2012-08-01

    This report documents Rhythm Engineering's adaptive traffic control system field installation performed by New York State Department of Transportation (NYSDOT) along Veterans Memorial Hwy in Long Island, NY. This report reviews the reason for t...

  15. A Computerized Evaluation of Sensory Memory and Short-term Memory Impairment After Rapid Ascent to 4280 m.

    PubMed

    Shi, Qing Hai; Ge, Di; Zhao, Wei; Ma, Xue; Hu, Ke Yan; Lu, Yao; Liu, Zheng Xiang; Ran, Ji Hua; Li, Xiao Ling; Zhou, Yu; Fu, Jian Feng

    2016-06-01

    To evaluate the effect of acute high-altitude exposure on sensory and short-term memory using interactive software, we transported 30 volunteers in a sport utility vehicle to a 4280 m plateau within 3 h. We measured their memory performance on the plain (initial arrival) and 3 h after arrival on the plateau using six measures. Memory performance was significantly poorer on the plateau by four of the six measures. Furthermore, memory performance was significantly poorer in the acute mountain sickness (AMS) group than in the non-AMS group by five of the six measures. These findings indicate that rapid ascent to 4280 m and remaining at this altitude for 3 h resulted in decreased sensory and short-term memory, particularly among participants who developed AMS. Copyright © 2016 The Editorial Board of Biomedical and Environmental Sciences. Published by China CDC. All rights reserved.

  16. A critical evaluation of monkey models of amnesia and dementia.

    PubMed

    Ridley, R M; Baker, H F

    1991-01-01

    In this review we consider various models of amnesia and dementia in monkeys and examine the validity of such models. In Section 2 we describe the various types of memory tests (tasks) available for use with monkeys and discuss the extent to which these tasks assess different facets of memory according to present theories of human memory. We argue that the rules which govern correct task performance are best regarded as a form of semantic rather than procedural memory, and that when information about stimulus attributes or reward associations is stored long-term then that knowledge is semantic. The demonstration of episodic memory in monkeys is problematic and the term recognition memory has been used too loosely. In particular, it is difficult to dissociate episodic memory for stimulus events from the use of semantic memory for the rule of the task, since dysfunction of either can produce impairment on performance of the same task. Tasks can also be divided into those which assess memory for stimulus-reward associations (evaluative memory) and those which tax stimulus-response associations including spatial and conditional responding (non-evaluative memory). This dissociation cuts across the distinction between semantic and episodic memory. In Section 3 we examine the usefulness of the classification of tasks described in Section 2 in clarifying our understanding of the contribution of the temporal lobes and the cholinergic system to memory. We conclude that evaluative and non-evaluative memory are mediated by separate parallel systems involving the amygdala and hippocampus, respectively.

  17. Fusiform gyrus volume reduction and facial recognition in chronic schizophrenia.

    PubMed

    Onitsuka, Toshiaki; Shenton, Martha E; Kasai, Kiyoto; Nestor, Paul G; Toner, Sarah K; Kikinis, Ron; Jolesz, Ferenc A; McCarley, Robert W

    2003-04-01

    The fusiform gyrus (FG), or occipitotemporal gyrus, is thought to subserve the processing and encoding of faces. Of note, several studies have reported that patients with schizophrenia show deficits in facial processing. It is thus hypothesized that the FG might be one brain region underlying abnormal facial recognition in schizophrenia. The objectives of this study were to determine whether there are abnormalities in gray matter volumes for the anterior and the posterior FG in patients with chronic schizophrenia and to investigate relationships between FG subregions and immediate and delayed memory for faces. Patients were recruited from the Boston VA Healthcare System, Brockton Division, and control subjects were recruited through newspaper advertisement. Study participants included 21 male patients diagnosed as having chronic schizophrenia and 28 male controls. Participants underwent high-spatial-resolution magnetic resonance imaging, and facial recognition memory was evaluated. Main outcome measures included anterior and posterior FG gray matter volumes based on high-spatial-resolution magnetic resonance imaging, a detailed and reliable manual delineation using 3-dimensional information, and correlation coefficients between FG subregions and raw scores on immediate and delayed facial memory derived from the Wechsler Memory Scale III. Patients with chronic schizophrenia had overall smaller FG gray matter volumes (10%) than normal controls. Additionally, patients with schizophrenia performed more poorly than normal controls in both immediate and delayed facial memory tests. Moreover, the degree of poor performance on delayed memory for faces was significantly correlated with the degree of bilateral anterior FG reduction in patients with schizophrenia. These results suggest that neuroanatomic FG abnormalities underlie at least some of the deficits associated with facial recognition in schizophrenia.

  18. Feedforward-Feedback Hybrid Control for Magnetic Shape Memory Alloy Actuators Based on the Krasnosel'skii-Pokrovskii Model

    PubMed Central

    Zhou, Miaolei; Zhang, Qi; Wang, Jingyuan

    2014-01-01

    As a new type of smart material, magnetic shape memory alloy has the advantages of a fast response frequency and outstanding strain capability in the field of microdrive and microposition actuators. The hysteresis nonlinearity in magnetic shape memory alloy actuators, however, limits system performance and further application. Here we propose a feedforward-feedback hybrid control method to improve control precision and mitigate the effects of the hysteresis nonlinearity of magnetic shape memory alloy actuators. First, hysteresis nonlinearity compensation for the magnetic shape memory alloy actuator is implemented by establishing a feedforward controller which is an inverse hysteresis model based on Krasnosel'skii-Pokrovskii operator. Secondly, the paper employs the classical Proportion Integration Differentiation feedback control with feedforward control to comprise the hybrid control system, and for further enhancing the adaptive performance of the system and improving the control accuracy, the Radial Basis Function neural network self-tuning Proportion Integration Differentiation feedback control replaces the classical Proportion Integration Differentiation feedback control. Utilizing self-learning ability of the Radial Basis Function neural network obtains Jacobian information of magnetic shape memory alloy actuator for the on-line adjustment of parameters in Proportion Integration Differentiation controller. Finally, simulation results show that the hybrid control method proposed in this paper can greatly improve the control precision of magnetic shape memory alloy actuator and the maximum tracking error is reduced from 1.1% in the open-loop system to 0.43% in the hybrid control system. PMID:24828010

  19. Feedforward-feedback hybrid control for magnetic shape memory alloy actuators based on the Krasnosel'skii-Pokrovskii model.

    PubMed

    Zhou, Miaolei; Zhang, Qi; Wang, Jingyuan

    2014-01-01

    As a new type of smart material, magnetic shape memory alloy has the advantages of a fast response frequency and outstanding strain capability in the field of microdrive and microposition actuators. The hysteresis nonlinearity in magnetic shape memory alloy actuators, however, limits system performance and further application. Here we propose a feedforward-feedback hybrid control method to improve control precision and mitigate the effects of the hysteresis nonlinearity of magnetic shape memory alloy actuators. First, hysteresis nonlinearity compensation for the magnetic shape memory alloy actuator is implemented by establishing a feedforward controller which is an inverse hysteresis model based on Krasnosel'skii-Pokrovskii operator. Secondly, the paper employs the classical Proportion Integration Differentiation feedback control with feedforward control to comprise the hybrid control system, and for further enhancing the adaptive performance of the system and improving the control accuracy, the Radial Basis Function neural network self-tuning Proportion Integration Differentiation feedback control replaces the classical Proportion Integration Differentiation feedback control. Utilizing self-learning ability of the Radial Basis Function neural network obtains Jacobian information of magnetic shape memory alloy actuator for the on-line adjustment of parameters in Proportion Integration Differentiation controller. Finally, simulation results show that the hybrid control method proposed in this paper can greatly improve the control precision of magnetic shape memory alloy actuator and the maximum tracking error is reduced from 1.1% in the open-loop system to 0.43% in the hybrid control system.
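
    The control structure described in the two records above (a feedforward inverse model supplying most of the drive signal, with a feedback loop correcting the residual error) can be sketched on a toy plant as follows. The linear first-order plant, its simple gain inversion, and the fixed PID gains are stand-ins for the papers' Krasnosel'skii-Pokrovskii inverse hysteresis model and RBF-tuned PID; they are assumptions for illustration only.

        # Toy feedforward-feedback hybrid loop; plant, inverse model, and gains
        # are illustrative stand-ins, not the papers' actual models or values.
        PLANT_GAIN = 0.8

        def plant(u, state):
            """First-order plant: output slowly follows the commanded input."""
            return state + 0.3 * (PLANT_GAIN * u - state)

        def inverse_model(ref):
            return ref / PLANT_GAIN           # feedforward: invert the nominal gain

        kp, ki, kd = 2.0, 0.5, 0.1            # fixed gains (RBF tuning omitted)
        integral, prev_err, y = 0.0, 0.0, 0.0
        reference = 1.0                       # desired displacement (arbitrary units)

        for step in range(30):
            err = reference - y
            integral += err
            derivative = err - prev_err
            u = inverse_model(reference) + kp * err + ki * integral + kd * derivative
            y = plant(u, y)
            prev_err = err

        print(f"final output = {y:.4f} (target {reference})")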

  20. Primary motor and premotor cortex in implicit sequence learning--evidence for competition between implicit and explicit human motor memory systems.

    PubMed

    Kantak, Shailesh S; Mummidisetty, Chaithanya K; Stinear, James W

    2012-09-01

    Implicit and explicit memory systems for motor skills compete with each other during and after motor practice. Primary motor cortex (M1) is known to be engaged during implicit motor learning, while dorsal premotor cortex (PMd) is critical for explicit learning. To elucidate the neural substrates underlying the interaction between implicit and explicit memory systems, adults underwent a randomized crossover experiment of anodal transcranial direct current stimulation (AtDCS) applied over M1, PMd or sham stimulation during implicit motor sequence (serial reaction time task, SRTT) practice. We hypothesized that M1-AtDCS during practice will enhance online performance and offline learning of the implicit motor sequence. In contrast, we also hypothesized that PMd-AtDCS will attenuate performance and retention of the implicit motor sequence. Implicit sequence performance was assessed at baseline, at the end of acquisition (EoA), and 24 h after practice (retention test, RET). M1-AtDCS during practice significantly improved practice performance and supported offline stabilization compared with Sham tDCS. Performance change from EoA to RET revealed that PMd-AtDCS during practice attenuated offline stabilization compared with M1-AtDCS and sham stimulation. The results support the role of M1 in implementing online performance gains and offline stabilization for implicit motor sequence learning. In contrast, enhancing the activity within explicit motor memory network nodes such as the PMd during practice may be detrimental to offline stabilization of the learned implicit motor sequence. These results support the notion of competition between implicit and explicit motor memory systems and identify underlying neural substrates that are engaged in this competition. © 2012 The Authors. European Journal of Neuroscience © 2012 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.
