Sample records for efficient memory management

  1. Flash memory management system and method utilizing multiple block list windows

    NASA Technical Reports Server (NTRS)

    Chow, James (Inventor); Gender, Thomas K. (Inventor)

    2005-01-01

    The present invention provides a flash memory management system and method with increased performance. The flash memory management system provides the ability to efficiently manage and allocate flash memory use in a way that improves reliability and longevity, while maintaining good performance levels. The flash memory management system includes a free block mechanism, a disk maintenance mechanism, and a bad block detection mechanism. The free block mechanism provides efficient sorting of free blocks to facilitate selecting low use blocks for writing. The disk maintenance mechanism provides for the ability to efficiently clean flash memory blocks during processor idle times. The bad block detection mechanism provides the ability to better detect when a block of flash memory is likely to go bad. The flash status mechanism stores information in fast access memory that describes the content and status of the data in the flash disk. The new bank detection mechanism provides the ability to automatically detect when new banks of flash memory are added to the system. Together, these mechanisms provide a flash memory management system that can improve the operational efficiency of systems that utilize flash memory.
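
The free block mechanism described above sorts free blocks so that low-use blocks are selected for writing. A minimal sketch of that idea, keeping free blocks in a min-heap keyed by erase count (names and structure are illustrative, not taken from the patent):

```python
import heapq

class FreeBlockPool:
    """Free blocks kept in a min-heap keyed by erase count, so the
    least-worn block is always selected first (wear leveling).
    Hypothetical sketch, not the patented implementation."""

    def __init__(self):
        self._heap = []  # entries are (erase_count, block_id)

    def release(self, block_id, erase_count):
        # A block returns to the pool after it is erased.
        heapq.heappush(self._heap, (erase_count, block_id))

    def acquire(self):
        # Select the lowest-use block for the next write.
        erase_count, block_id = heapq.heappop(self._heap)
        return block_id, erase_count

pool = FreeBlockPool()
pool.release(block_id=7, erase_count=120)
pool.release(block_id=3, erase_count=15)
pool.release(block_id=9, erase_count=64)
print(pool.acquire())  # least-worn block comes out first
```

Because the heap orders tuples lexicographically, ties on erase count fall back to block id, which keeps selection deterministic.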

  2. VOP memory management in MPEG-4

    NASA Astrophysics Data System (ADS)

    Vaithianathan, Karthikeyan; Panchanathan, Sethuraman

    2001-03-01

    MPEG-4 is a multimedia standard that requires Video Object Planes (VOPs). Generating VOPs for arbitrary video sequences is still a challenging problem that largely remains unsolved. Nevertheless, if the problem is constrained, solutions for specific application domains can be found. MPEG-4 applications in mobile devices are one such domain, where the competing goals of low power and high throughput must both be met. Efficient memory management plays a major role in reducing power consumption. Specifically, efficient memory management for VOPs is difficult because the lifetimes of these objects vary and may overlap. Varying object lifetimes require dynamic memory management, where memory fragmentation is a key problem that needs to be addressed. In general, memory management systems address this problem through a combination of strategy, policy, and mechanism. For MPEG-4-based mobile devices that lack instruction processors, a hardware-based memory management solution is necessary. In MPEG-4-based mobile devices that have a RISC processor, using a real-time operating system (RTOS) for this memory management task is not expected to be efficient, because the strategies and policies used by the RTOS are often tuned for handling memory segments of smaller sizes than these objects. Hence, a memory management scheme specifically tuned for VOPs is important. In this paper, different strategies, policies, and mechanisms for memory management are considered, and an efficient combination is proposed for VOP memory management, along with a hardware architecture that can handle the proposed combination.
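
The fragmentation problem the abstract raises can be made concrete with a tiny free-list allocator that coalesces adjacent free ranges, so memory released by objects with different lifetimes becomes reusable as one contiguous region. This is only an illustrative sketch of the general mechanism, not the scheme the paper proposes:

```python
class VOPArena:
    """First-fit allocation over a sorted free list, with coalescing
    of adjacent free ranges to limit fragmentation. Hypothetical
    names; a sketch of the general technique only."""

    def __init__(self, size):
        self.free = [(0, size)]  # sorted list of (offset, length)

    def alloc(self, length):
        for i, (off, flen) in enumerate(self.free):
            if flen >= length:           # first fit
                if flen == length:
                    self.free.pop(i)
                else:
                    self.free[i] = (off + length, flen - length)
                return off
        raise MemoryError("arena fragmented or full")

    def release(self, off, length):
        self.free.append((off, length))
        self.free.sort()
        merged = [self.free[0]]
        for o, l in self.free[1:]:
            po, pl = merged[-1]
            if po + pl == o:             # adjacent ranges: coalesce
                merged[-1] = (po, pl + l)
            else:
                merged.append((o, l))
        self.free = merged

arena = VOPArena(1024)
a = arena.alloc(256)       # buffer for one object
b = arena.alloc(512)       # buffer with an overlapping lifetime
arena.release(a, 256)
arena.release(b, 512)      # coalesces back into one 1024-byte range
```

After both releases the arena holds a single contiguous free range again, which is exactly what coalescing buys over a naive free list.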

  3. Energy-aware Thread and Data Management in Heterogeneous Multi-core, Multi-memory Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Su, Chun-Yi

    By 2004, microprocessor design focused on multicore scaling—increasing the number of cores per die in each generation—as the primary strategy for improving performance. These multicore processors are typically equipped with multiple memory subsystems to improve data throughput. In addition, these systems employ heterogeneous processors such as GPUs and heterogeneous memories such as non-volatile memory to improve performance, capacity, and energy efficiency. With the increasing volume of hardware resources and the system complexity caused by heterogeneity, future systems will require intelligent ways to manage hardware resources. Early research to improve performance and energy efficiency on heterogeneous, multi-core, multi-memory systems focused on tuning a single primitive, or at best a few primitives, in the systems. The key limitation of past efforts is their lack of a holistic approach to resource management that balances the tradeoff between performance and energy consumption. In addition, the shift from simple, homogeneous systems to these heterogeneous, multi-core, multi-memory systems requires in-depth understanding of efficient resource management for scalable execution, including new models that capture the interchange between performance and energy, smarter resource management strategies, and novel low-level performance/energy tuning primitives and runtime systems. Tuning an application to control available resources efficiently has become a daunting challenge; managing resources automatically is still a dark art, since the tradeoffs among programming, energy, and performance remain insufficiently understood. In this dissertation, I have developed theories, models, and resource management techniques to enable energy-efficient execution of parallel applications through thread and data management in these heterogeneous multi-core, multi-memory systems.
    I study the effect of dynamic concurrency throttling on the performance and energy of multi-core, non-uniform memory access (NUMA) systems. I use critical path analysis to quantify memory contention in the NUMA memory system and determine thread mappings. In addition, I implement a runtime system that combines concurrency throttling and a novel thread mapping algorithm to manage thread resources and improve energy-efficient execution in multi-core, NUMA systems.

  4. Improving Working Memory Efficiency by Reframing Metacognitive Interpretation of Task Difficulty

    ERIC Educational Resources Information Center

    Autin, Frederique; Croizet, Jean-Claude

    2012-01-01

    Working memory capacity, our ability to manage incoming information for processing purposes, predicts achievement on a wide range of intellectual abilities. Three randomized experiments (N = 310) tested the effectiveness of a brief psychological intervention designed to boost working memory efficiency (i.e., state working memory capacity) by…

  5. Memory management in genome-wide association studies

    PubMed Central

    2009-01-01

    Genome-wide association is a powerful tool for the identification of genes that underlie common diseases. Genome-wide association studies generate billions of genotypes and pose significant computational challenges for most users including limited computer memory. We applied a recently developed memory management tool to two analyses of North American Rheumatoid Arthritis Consortium studies and measured the performance in terms of central processing unit and memory usage. We conclude that our memory management approach is simple, efficient, and effective for genome-wide association studies. PMID:20018047

  6. Healthcare knowledge management through building and operationalising healthcare enterprise memory.

    PubMed

    Cheah, Y N; Abidi, S S

    1999-01-01

    In this paper we suggest that the healthcare enterprise needs to be more conscious of its vast knowledge resources vis-à-vis the exploitation of knowledge management techniques to efficiently manage its knowledge. The development of healthcare enterprise memory is suggested as a solution, together with a novel approach advocating the operationalisation of healthcare enterprise memories leading to the modelling of healthcare processes for strategic planning. As an example, we present a simulation of Service Delivery Time in a hospital's OPD.

  7. A Case Study on Neural Inspired Dynamic Memory Management Strategies for High Performance Computing.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vineyard, Craig Michael; Verzi, Stephen Joseph

    As high performance computing architectures pursue more computational power, there is a need for increased memory capacity and bandwidth as well. A multi-level memory (MLM) architecture addresses this need by combining multiple memory types with different characteristics as varying levels of the same architecture. How to efficiently utilize this memory infrastructure is an open challenge, and in this research we sought to investigate whether neural inspired approaches can meaningfully help with memory management. In particular, we explored neurogenesis inspired resource allocation, and were able to show that a neural inspired mixed controller policy can beneficially impact how MLM architectures utilize memory.

  8. Minimizing the Disruptive Effects of Prospective Memory in Simulated Air Traffic Control

    PubMed Central

    Loft, Shayne; Smith, Rebekah E.; Remington, Roger

    2015-01-01

    Prospective memory refers to remembering to perform an intended action in the future. Failures of prospective memory can occur in air traffic control. In two experiments, we examined the utility of external aids for facilitating air traffic management in a simulated air traffic control task with prospective memory requirements. Participants accepted and handed off aircraft and detected aircraft conflicts. The prospective memory task involved remembering to deviate from a routine operating procedure when accepting target aircraft. External aids that contained details of the prospective memory task appeared and flashed when target aircraft needed acceptance. In Experiment 1, external aids presented either adjacent or non-adjacent to each of the 20 target aircraft presented over the 40-min test phase reduced prospective memory error by 11% compared to a condition without external aids. In Experiment 2, only a single target aircraft was presented a significant time (39-42 min) after presentation of the prospective memory instruction, and the external aids reduced prospective memory error by 34%. In both experiments, costs to the efficiency of non-prospective memory air traffic management (non-target aircraft acceptance response time, conflict detection response time) were reduced by non-adjacent aids compared to no aids or adjacent aids. In contrast, in both experiments, the efficiency of prospective memory air traffic management (target aircraft acceptance response time) was facilitated by adjacent aids compared to non-adjacent aids. Together, these findings have potential implications for the design of automated alerting systems to maximize multi-task performance in work settings where operators monitor and control demanding perceptual displays. PMID:24059825

  9. Configurable memory system and method for providing atomic counting operations in a memory device

    DOEpatents

    Bellofatto, Ralph E.; Gara, Alan G.; Giampapa, Mark E.; Ohmacht, Martin

    2010-09-14

    A memory system and method for providing atomic memory-based counter operations to operating systems and applications that make most efficient use of counter-backing memory and virtual and physical address space, while simplifying operating system memory management, and enabling the counter-backing memory to be used for purposes other than counter-backing storage when desired. The encoding and address decoding enabled by the invention provides all this functionality through a combination of software and hardware.

  10. Scalable Motion Estimation Processor Core for Multimedia System-on-Chip Applications

    NASA Astrophysics Data System (ADS)

    Lai, Yeong-Kang; Hsieh, Tian-En; Chen, Lien-Fei

    2007-04-01

    In this paper, we describe a high-throughput and scalable motion estimation processor architecture for multimedia system-on-chip applications. The number of processing elements (PEs) is scalable according to the variable algorithm parameters and the performance required for different applications. Using the PE rings efficiently and an intelligent memory-interleaving organization, the efficiency of the architecture can be increased. Moreover, using efficient on-chip memories and a data management technique can effectively decrease the power consumption and memory bandwidth. Techniques for reducing the number of interconnections and external memory accesses are also presented. Our results demonstrate that the proposed scalable PE-ringed architecture is a flexible and high-performance processor core in multimedia system-on-chip applications.

  11. APINetworks Java. A Java approach to the efficient treatment of large-scale complex networks

    NASA Astrophysics Data System (ADS)

    Muñoz-Caro, Camelia; Niño, Alfonso; Reyes, Sebastián; Castillo, Miriam

    2016-10-01

    We present a new version of the core structural package of our Application Programming Interface, APINetworks, for the treatment of complex networks in arbitrary computational environments. The new version is written in Java and presents several advantages over the previous C++ version: the portability of Java code, the ease of object-oriented design and implementation, and the simplicity of memory management. In addition, new data structures are introduced for storing the sets of nodes and edges. Also, by resorting to the different garbage collectors currently available in the JVM, the Java version is much more efficient than the C++ one with respect to memory management. In particular, the G1 collector is the most efficient because it executes in parallel with the Java application. Using G1, APINetworks Java outperforms the C++ version and the well-known NetworkX and JGraphT packages in the building and BFS traversal of linear and complete networks. The better memory management of the present version allows for the modeling of much larger networks.
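
The BFS-traversal benchmark mentioned above is easy to picture with a plain adjacency-list traversal. The sketch below is in Python for illustration only; it is not APINetworks' Java API, and the graph layout is an assumption:

```python
from collections import deque

def bfs_order(adj, start):
    """Breadth-first traversal of a network stored as an adjacency
    list: dict mapping node -> list of neighbors."""
    seen = {start}
    order = []
    queue = deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for nbr in adj[node]:
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return order

# A linear network 0-1-2-3, one of the two topologies benchmarked.
linear = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(bfs_order(linear, 0))
```

On a linear network the traversal visits nodes in chain order; on a complete network every node is reached in a single level.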

  12. Efficient accesses of data structures using processing near memory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jayasena, Nuwan S.; Zhang, Dong Ping; Diez, Paula Aguilera

    Systems, apparatuses, and methods for implementing efficient queues and other data structures. A queue may be shared among multiple processors and/or threads without using explicit software atomic instructions to coordinate access to the queue. System software may allocate an atomic queue and corresponding queue metadata in system memory and return, to the requesting thread, a handle referencing the queue metadata. Any number of threads may utilize the handle for accessing the atomic queue. The logic for ensuring the atomicity of accesses to the atomic queue may reside in a management unit in the memory controller coupled to the memory where the atomic queue is allocated.

  13. User-Assisted Store Recycling for Dynamic Task Graph Schedulers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurt, Mehmet Can; Krishnamoorthy, Sriram; Agrawal, Gagan

    The emergence of the multi-core era has led to increased interest in designing effective yet practical parallel programming models. Models based on task graphs that operate on single-assignment data are attractive in several ways: they can support dynamic applications and precisely represent the available concurrency. However, they also require nuanced algorithms for scheduling and memory management for efficient execution. In this paper, we consider memory-efficient dynamic scheduling of task graphs. Specifically, we present a novel approach for dynamically recycling the memory locations assigned to data items as they are produced by tasks. We develop algorithms to identify memory-efficient store recycling functions by systematically evaluating the validity of a set of (user-provided or automatically generated) alternatives. Because the recycling function can be input-data-dependent, we have also developed support for continued correct execution of a task graph in the presence of a potentially incorrect store recycling function. Experimental evaluation demonstrates that our approach to automatic store recycling incurs little to no overhead, achieves memory usage comparable to the best manually derived solutions, often produces recycling functions valid across problem sizes and input parameters, and efficiently recovers from an incorrect choice of store recycling functions.
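
The validity check at the heart of this approach can be sketched as follows: a recycling function is invalid if it maps a newly produced item to a slot whose current occupant still has pending consumers. This is a simplified sequential sketch under assumed data structures, not the paper's algorithm:

```python
def validate_recycling(tasks, recycle):
    """`tasks` is a topologically ordered list of
    (produced_item, consumed_items); `recycle` maps an item to a
    memory slot. Returns False if any slot would be overwritten
    while a previously stored item is still live."""
    live_in_slot = {}   # slot -> item currently stored there
    pending = {}        # item -> number of remaining consumers
    for _, consumed in tasks:
        for item in consumed:
            pending[item] = pending.get(item, 0) + 1
    for produced, consumed in tasks:
        for item in consumed:
            pending[item] -= 1
            if pending[item] == 0 and live_in_slot.get(recycle(item)) == item:
                del live_in_slot[recycle(item)]   # slot may be reused
        slot = recycle(produced)
        if slot in live_in_slot:
            return False   # would overwrite a live item
        live_in_slot[slot] = produced
    return True

# Chain a -> b -> c: each item dies before the next is produced,
# so recycling every item into a single slot is valid here.
chain = [("a", []), ("b", ["a"]), ("c", ["b"])]
print(validate_recycling(chain, lambda item: 0))
```

Running the same single-slot function on a graph where two live items coexist returns False, which is the situation the paper's recovery support handles at runtime.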

  14. Efficient parallelization for AMR MHD multiphysics calculations; implementation in AstroBEAR

    NASA Astrophysics Data System (ADS)

    Carroll-Nellenback, Jonathan J.; Shroyer, Brandon; Frank, Adam; Ding, Chen

    2013-03-01

    Current adaptive mesh refinement (AMR) simulations require algorithms that are highly parallelized and manage memory efficiently. As compute engines grow larger, AMR simulations will require algorithms that achieve new levels of efficient parallelization and memory management. We have attempted to employ new techniques to achieve both of these goals. Patch or grid based AMR often employs ghost cells to decouple the hyperbolic advances of each grid on a given refinement level. This decoupling allows each grid to be advanced independently. In AstroBEAR we utilize this independence by threading the grid advances on each level with preference going to the finer level grids. This allows for global load balancing instead of level by level load balancing and allows for greater parallelization across both physical space and AMR level. Threading of level advances can also improve performance by interleaving communication with computation, especially in deep simulations with many levels of refinement. While we see improvements of up to 30% on deep simulations run on a few cores, the speedup is typically more modest (5-20%) for larger scale simulations. To improve memory management we have employed a distributed tree algorithm that requires processors to only store and communicate local sections of the AMR tree structure with neighboring processors. Using this distributed approach we are able to get reasonable scaling efficiency (>80%) out to 12288 cores and up to 8 levels of AMR - independent of the use of threading.

  15. Multithreaded implicitly dealiased convolutions

    NASA Astrophysics Data System (ADS)

    Roberts, Malcolm; Bowman, John C.

    2018-03-01

    Implicit dealiasing is a method for computing in-place linear convolutions via fast Fourier transforms that decouples work memory from input data. It offers easier memory management and, for long one-dimensional input sequences, greater efficiency than conventional zero-padding. Furthermore, for convolutions of multidimensional data, the segregation of data and work buffers can be exploited to reduce memory usage and execution time significantly. This is accomplished by processing and discarding data as it is generated, allowing work memory to be reused, for greater data locality and performance. A multithreaded implementation of implicit dealiasing that accepts an arbitrary number of input and output vectors and a general multiplication operator is presented, along with an improved one-dimensional Hermitian convolution that avoids the loop dependency inherent in previous work. An alternate data format that can accommodate a Nyquist mode and enhance cache efficiency is also proposed.
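
For context on what implicit dealiasing improves upon, here is the conventional zero-padded FFT approach to linear convolution that the abstract compares against. This baseline allocates padded work arrays the size of the full output (implicit dealiasing avoids exactly that coupling); it is a sketch of the standard method, not of the paper's algorithm:

```python
import numpy as np

def linear_convolution_fft(f, g):
    """Linear convolution via FFT with explicit zero-padding:
    padding both inputs to length len(f)+len(g)-1 makes the
    circular convolution computed by the FFT equal the linear one."""
    n = len(f) + len(g) - 1
    F = np.fft.fft(f, n)      # the n argument zero-pads the input
    G = np.fft.fft(g, n)
    return np.fft.ifft(F * G).real

f = [1.0, 2.0, 3.0]
g = [4.0, 5.0]
print(np.round(linear_convolution_fft(f, g), 6))  # -> [4. 13. 22. 15.]
```

The padded buffers here are what zero-padding spends memory on; implicit dealiasing instead keeps separate, reusable work buffers decoupled from the input data.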

  16. Creative Classroom Assignment Through Database Management.

    ERIC Educational Resources Information Center

    Shah, Vivek; Bryant, Milton

    1987-01-01

    The Faculty Scheduling System (FSS), a database management system designed to give administrators the ability to schedule faculty in a fast and efficient manner is described. The FSS, developed using dBASE III, requires an IBM compatible microcomputer with a minimum of 256K memory. (MLW)

  17. A GPU-Accelerated Approach for Feature Tracking in Time-Varying Imagery Datasets.

    PubMed

    Peng, Chao; Sahani, Sandip; Rushing, John

    2017-10-01

    We propose a novel parallel connected component labeling (CCL) algorithm along with efficient out-of-core data management to detect and track feature regions of large time-varying imagery datasets. Our approach contributes to the big data field with parallel algorithms tailored for GPU architectures. We remove the data dependency between frames and achieve pixel-level parallelism. Due to the large size, the entire dataset cannot fit into cached memory. Frames have to be streamed through the memory hierarchy (disk to CPU main memory and then to GPU memory), partitioned, and processed as batches, where each batch is small enough to fit into the GPU. To reconnect the feature regions that are separated due to data partitioning, we present a novel batch merging algorithm to extract the region connection information across multiple batches in a parallel fashion. The information is organized in a memory-efficient structure and supports fast indexing on the GPU. Our experiment uses a commodity workstation equipped with a single GPU. The results show that our approach can efficiently process a weather dataset composed of terabytes of time-varying radar images. The advantages of our approach are demonstrated by comparing to the performance of an efficient CPU cluster implementation which is being used by the weather scientists.
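
The batch-merging step, reconnecting feature regions split across partitions, is naturally expressed with a union-find (disjoint-set) structure over per-batch labels. The sketch below is sequential and simplified; the paper's version extracts and merges the connection information in parallel on the GPU:

```python
class DisjointSet:
    """Union-find over (batch_id, local_label) pairs, used to merge
    region labels that touch at a batch boundary."""
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra

# Pairs of labels observed to touch across batch boundaries
# (hypothetical data): the chained unions yield one global region.
ds = DisjointSet()
for a, b in [((0, 1), (1, 4)), ((1, 4), (2, 2))]:
    ds.union(a, b)
print(ds.find((2, 2)) == ds.find((0, 1)))
```

After the unions, any per-batch label can be mapped to a single global region id by calling `find`, which is what allows features to be tracked across the whole time-varying dataset.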

  18. Memory Management of Multimedia Services in Smart Homes

    NASA Astrophysics Data System (ADS)

    Kamel, Ibrahim; Muhaureq, Sanaa A.

    Nowadays there is a wide spectrum of applications that run in smart home environments. Consequently, the home gateway, which is a central component of the smart home, must manage many applications despite limited memory resources. OSGi is a middleware standard for home gateways that models services as dependent components. Moreover, these applications might differ in their importance. Services collaborate and complement each other to achieve the required results. This paper addresses the following problem: given a home gateway that hosts several applications with different priorities and arbitrary dependencies among them, when the gateway runs out of memory, which application or service should be stopped or evicted from memory to start a new service? Note that stopping a given service means that all the services that depend on it will be stopped too. Because of these service dependencies, traditional memory management techniques from the operating systems literature might not be efficient. Our goal is to stop the least important and the least number of services. The paper presents a novel algorithm for home gateway memory management that takes into consideration the priority of each application and the dependencies between services, in addition to the amount of memory occupied by each service. We implemented the proposed algorithm as part of the OSGi (Open Service Gateway initiative) framework and performed many experiments to evaluate its performance and execution time. We used best fit and worst fit as yardsticks to show the effectiveness of the proposed algorithm.
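
The eviction decision described above, stop the least important and fewest services while respecting dependency closures, can be sketched greedily. All names and the cost function here are assumptions for illustration; this is not the paper's algorithm or the OSGi API:

```python
def dependents_closure(service, deps):
    """All services that transitively depend on `service`, plus the
    service itself. `deps` maps a service to the services it needs."""
    closure = {service}
    changed = True
    while changed:
        changed = False
        for s, required in deps.items():
            if s not in closure and closure & set(required):
                closure.add(s)
                changed = True
    return closure

def pick_victims(services, deps, needed):
    """Pick the candidate whose dependent closure frees at least
    `needed` memory at the lowest total priority (ties broken by
    stopping fewer services). `services`: name -> (priority, memory)."""
    best = None
    for s in services:
        closure = dependents_closure(s, deps)
        freed = sum(services[m][1] for m in closure)
        if freed < needed:
            continue
        cost = (sum(services[m][0] for m in closure), len(closure))
        if best is None or cost < best[0]:
            best = (cost, closure)
    return best[1] if best else None

services = {"log": (1, 10), "ui": (5, 30), "media": (3, 40)}
deps = {"ui": ["log"], "media": ["log"]}   # both depend on "log"
print(pick_victims(services, deps, needed=35))
```

Note that stopping "log" would drag down "ui" and "media" with it, so even though "log" itself is low priority, its closure is expensive; the greedy choice is "media" alone.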

  19. An enhanced Ada run-time system for real-time embedded processors

    NASA Technical Reports Server (NTRS)

    Sims, J. T.

    1991-01-01

    An enhanced Ada run-time system has been developed to support real-time embedded processor applications. The primary focus of this development effort has been on the tasking system and the memory management facilities of the run-time system. The tasking system has been extended to support efficient and precise periodic task execution as required for control applications. Event-driven task execution providing a means of task-asynchronous control and communication among Ada tasks is supported in this system. Inter-task control is even provided among tasks distributed on separate physical processors. The memory management system has been enhanced to provide object allocation and protected access support for memory shared between disjoint processors, each of which is executing a distinct Ada program.

  20. Primary Care-Based Memory Clinics: Expanding Capacity for Dementia Care.

    PubMed

    Lee, Linda; Hillier, Loretta M; Heckman, George; Gagnon, Micheline; Borrie, Michael J; Stolee, Paul; Harvey, David

    2014-09-01

    The implementation in Ontario of 15 primary-care-based interprofessional memory clinics represented a unique model of team-based case management aimed at increasing capacity for dementia care at the primary-care level. Each clinic tracked referrals; in a subset of clinics, charts were audited by geriatricians, clinic members were interviewed, and patients, caregivers, and referring physicians completed satisfaction surveys. Across all clinics, 582 patients were assessed, and 8.9 per cent were referred to a specialist. Patients and caregivers were very satisfied with the care received, as were referring family physicians, who reported increased capacity to manage dementia. Geriatricians' chart audits revealed a high level of agreement with diagnosis and management. This study demonstrated acceptability, feasibility, and preliminary effectiveness of the primary-care memory clinic model. Led by specially trained family physicians, it provided timely access to high-quality collaborative dementia care, impacting health service utilization by more-efficient use of scarce geriatric specialist resources.

  1. An Efficient Identity-Based Key Management Scheme for Wireless Sensor Networks Using the Bloom Filter

    PubMed Central

    Qin, Zhongyuan; Zhang, Xinshuai; Feng, Kerong; Zhang, Qunfang; Huang, Jie

    2014-01-01

    With the rapid development and widespread adoption of wireless sensor networks (WSNs), security has become an increasingly prominent problem. How to establish a session key in node communication is a challenging task for WSNs. Considering the limitations in WSNs, such as low computing capacity, small memory, power supply limitations and price, we propose an efficient identity-based key management (IBKM) scheme, which exploits the Bloom filter to authenticate the communication sensor node with storage efficiency. The security analysis shows that IBKM can prevent several attacks effectively with acceptable computation and communication overhead. PMID:25264955
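
A Bloom filter of the kind IBKM exploits answers set-membership queries in a fixed, small memory budget, at the cost of a tunable false-positive rate. The parameters and hashing scheme below are illustrative assumptions, not the paper's construction:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash positions per item over an
    m-bit array. Supports false positives but never false
    negatives, which suits storage-constrained membership checks."""
    def __init__(self, num_bits=1024, num_hashes=4):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(num_bits // 8)

    def _positions(self, item):
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.num_bits

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

bf = BloomFilter()
bf.add("node-17-identity")
print(bf.might_contain("node-17-identity"))
```

A sensor node can thus verify whether a claimed identity was registered using only `num_bits` of storage, which is the storage-efficiency property the scheme relies on.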

  2. MRI, Battelle and Bechtel to Manage National Renewable Energy Lab

    Science.gov Websites

    A team of Midwest Research Institute (MRI), Battelle Memorial Institute, and Bechtel Corp. won the competition to manage and operate NREL for the next five years, with funding over that period depending on congressional appropriations for renewable energy and energy efficiency.

  3. Generalized enhanced suffix array construction in external memory.

    PubMed

    Louza, Felipe A; Telles, Guilherme P; Hoffmann, Steve; Ciferri, Cristina D A

    2017-01-01

    Suffix arrays, augmented by additional data structures, allow many string processing problems to be solved efficiently. The external memory construction of the generalized suffix array for a string collection is a fundamental task when the size of the input collection or of the data structure exceeds the available internal memory. In this article we present and analyze [Formula: see text], introduced in CPM ("External memory generalized suffix and [Formula: see text] arrays construction," Proceedings of CPM, pp. 201-10, 2013), the first external memory algorithm to construct generalized suffix arrays augmented with the longest common prefix array for a string collection. Our algorithm relies on a combination of buffers, induced sorting, and a heap to avoid direct string comparisons. We performed experiments that covered different aspects of our algorithm, including running time, efficiency, external memory accesses, internal phases, and the influence of different optimization strategies. On real datasets of size up to 24 GB and using 2 GB of internal memory, [Formula: see text] showed competitive performance when compared to [Formula: see text] and [Formula: see text], which are efficient algorithms for a single string according to the related literature. We also show the effect of disk caching managed by the operating system on our algorithm. The proposed algorithm was validated through performance tests using real datasets from different domains, in various combinations, and showed competitive performance. Our algorithm can also construct the generalized Burrows-Wheeler transform of a string collection with no additional cost except for the output time.
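
For intuition about what a generalized suffix array is (independent of the external-memory construction the article contributes), here is a naive in-memory version: sort the suffixes of the concatenated collection, with a distinct low-valued sentinel terminating each string. The sentinel choice is an assumption for illustration:

```python
def generalized_suffix_array(strings):
    """Naive generalized suffix array for a string collection:
    suffixes of the concatenation (with per-string sentinels)
    sorted lexicographically. Quadratic-ish and memory-hungry;
    only for intuition, unlike the article's external memory
    algorithm, which avoids direct string comparisons."""
    # chr(1+i) sentinels sort below letters and keep strings separate.
    text = "".join(s + chr(1 + i) for i, s in enumerate(strings))
    suffixes = sorted(range(len(text)), key=lambda i: text[i:])
    return text, suffixes

text, sa = generalized_suffix_array(["banana", "ban"])
print(len(sa), "suffixes indexed")
```

Every suffix of every string in the collection appears exactly once in the sorted order, which is the invariant the external-memory algorithm must also maintain while streaming data through buffers.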

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mittal, Sparsh; Zhang, Zhao

    With each CMOS technology generation, leakage energy consumption has increased dramatically, and hence managing the leakage power of large last-level caches (LLCs) has become a critical issue in modern processor design. In this paper, we present EnCache, a novel software-based technique which uses dynamic profiling-based cache reconfiguration for saving cache leakage energy. EnCache uses a simple hardware component called a profiling cache, which dynamically predicts the energy efficiency of an application for 32 possible cache configurations. Using these estimates, system software reconfigures the cache to the most energy efficient configuration. EnCache uses dynamic cache reconfiguration and hence does not require offline profiling or tuning the parameter for each application. Furthermore, EnCache optimizes directly for the overall memory subsystem (LLC and main memory) energy efficiency instead of the LLC energy efficiency alone. The experiments performed with an x86-64 simulator and workloads from the SPEC2006 suite confirm that EnCache provides larger energy savings than a conventional energy saving scheme. For single-core and dual-core system configurations, the average savings in memory subsystem energy over a shared baseline configuration are 30.0% and 27.3%, respectively.

  5. AQUAdexIM: highly efficient in-memory indexing and querying of astronomy time series images

    NASA Astrophysics Data System (ADS)

    Hong, Zhi; Yu, Ce; Wang, Jie; Xiao, Jian; Cui, Chenzhou; Sun, Jizhou

    2016-12-01

    Astronomy has always been, and will continue to be, a data-based science, and astronomers nowadays are faced with increasingly massive datasets; one key problem is to efficiently retrieve the desired cup of data from this ocean. AQUAdexIM, an innovative spatial indexing and querying method, performs highly efficient on-the-fly queries on the server side, searching existing observation data for time series images and returning only the desired FITS images to users, so users no longer need to download entire datasets to their local machines, which will only become more and more impractical as data sizes keep increasing. Moreover, AQUAdexIM maintains a very low storage space overhead, and its specially designed in-memory index structure enables it to search for time series images of a given area of the sky 10 times faster than using Redis, a state-of-the-art in-memory database.

  6. Generic Entity Resolution in Relational Databases

    NASA Astrophysics Data System (ADS)

    Sidló, Csaba István

    Entity Resolution (ER) covers the problem of identifying distinct representations of real-world entities in heterogeneous databases. We consider the generic formulation of ER problems (GER) with exact outcome. In practice, input data usually resides in relational databases and can grow to huge volumes. Yet, typical solutions described in the literature employ standalone memory resident algorithms. In this paper we utilize facilities of standard, unmodified relational database management systems (RDBMS) to enhance the efficiency of GER algorithms. We study and revise the problem formulation, and propose practical and efficient algorithms optimized for RDBMS external memory processing. We outline a real-world scenario and demonstrate the advantage of algorithms by performing experiments on insurance customer data.

  7. Changes in Brain Network Efficiency and Working Memory Performance in Aging

    PubMed Central

    Stanley, Matthew L.; Simpson, Sean L.; Dagenbach, Dale; Lyday, Robert G.; Burdette, Jonathan H.; Laurienti, Paul J.

    2015-01-01

Working memory is a complex psychological construct referring to the temporary storage and active processing of information. We used functional connectivity brain network metrics quantifying local and global efficiency of information transfer for predicting individual variability in working memory performance on an n-back task in both young (n = 14) and older (n = 15) adults. Individual differences in both local and global efficiency during the working memory task were significant predictors of working memory performance in addition to age (and an interaction between age and global efficiency). Decreases in local efficiency during the working memory task were associated with better working memory performance in both age cohorts. In contrast, increases in global efficiency were associated with much better working memory performance for young participants; however, increases in global efficiency were associated with a slight decrease in working memory performance for older participants. Individual differences in local and global efficiency during resting-state sessions were not significant predictors of working memory performance. Significant group whole-brain functional network decreases in local efficiency also were observed during the working memory task compared to rest, whereas no significant differences were observed in network global efficiency. These results are discussed in relation to recently developed models of age-related differences in working memory. PMID:25875001

  8. Changes in brain network efficiency and working memory performance in aging.

    PubMed

    Stanley, Matthew L; Simpson, Sean L; Dagenbach, Dale; Lyday, Robert G; Burdette, Jonathan H; Laurienti, Paul J

    2015-01-01

Working memory is a complex psychological construct referring to the temporary storage and active processing of information. We used functional connectivity brain network metrics quantifying local and global efficiency of information transfer for predicting individual variability in working memory performance on an n-back task in both young (n = 14) and older (n = 15) adults. Individual differences in both local and global efficiency during the working memory task were significant predictors of working memory performance in addition to age (and an interaction between age and global efficiency). Decreases in local efficiency during the working memory task were associated with better working memory performance in both age cohorts. In contrast, increases in global efficiency were associated with much better working memory performance for young participants; however, increases in global efficiency were associated with a slight decrease in working memory performance for older participants. Individual differences in local and global efficiency during resting-state sessions were not significant predictors of working memory performance. Significant group whole-brain functional network decreases in local efficiency also were observed during the working memory task compared to rest, whereas no significant differences were observed in network global efficiency. These results are discussed in relation to recently developed models of age-related differences in working memory.

  9. Twin Neurons for Efficient Real-World Data Distribution in Networks of Neural Cliques: Applications in Power Management in Electronic Circuits.

    PubMed

    Boguslawski, Bartosz; Gripon, Vincent; Seguin, Fabrice; Heitzmann, Frédéric

    2016-02-01

    Associative memories are data structures that allow retrieval of previously stored messages given part of their content. They, thus, behave similarly to the human brain's memory that is capable, for instance, of retrieving the end of a song, given its beginning. Among different families of associative memories, sparse ones are known to provide the best efficiency (ratio of the number of bits stored to that of the bits used). Recently, a new family of sparse associative memories achieving almost optimal efficiency has been proposed. Their structure, relying on binary connections and neurons, induces a direct mapping between input messages and stored patterns. Nevertheless, it is well known that nonuniformity of the stored messages can lead to a dramatic decrease in performance. In this paper, we show the impact of nonuniformity on the performance of this recent model, and we exploit the structure of the model to improve its performance in practical applications, where data are not necessarily uniform. In order to approach the performance of networks with uniformly distributed messages presented in theoretical studies, twin neurons are introduced. To assess the adapted model, twin neurons are used with the real-world data to optimize power consumption of electronic circuits in practical test cases.

  10. Implementation of relational data base management systems on micro-computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, C.L.

    1982-01-01

This dissertation describes an implementation of a relational database management system on a microcomputer. A specific floppy-disk-based hardware platform called TERAK is used, and a high-level query interface similar to a subset of the SEQUEL language is provided. The system contains subsystems for I/O, file management, virtual memory management, the query system, B-tree management, the scanner, the command interpreter, the expression compiler, garbage collection, linked-list manipulation, disk space management, etc. The software has been implemented to fulfill the following goals: (1) it is highly modularized; (2) the system is physically segmented into 16 logically independent, overlayable segments, such that a minimal amount of memory is needed at execution time; (3) a virtual memory system is simulated that provides the system with seemingly unlimited memory space; (4) a language translator is applied to recognize user requests in the query language, whose code generator produces compact code for the execution of UPDATE, DELETE, and QUERY commands; (5) a complete set of basic functions needed for on-line database manipulation is provided through a friendly query interface; (6) dependency on the environment (both software and hardware) is eliminated as much as possible, so that it would be easy to port the system to other computers; and (7) each relation is simulated as a sequential file. It is intended to be a highly efficient, single-user system suited for use by small or medium-sized organizations for, say, administrative purposes. Experiments show that quite satisfying results have indeed been achieved.

  11. Teuchos C++ memory management classes, idioms, and related topics, the complete reference : a comprehensive strategy for safe and efficient memory management in C++ for high performance computing.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bartlett, Roscoe Ainsworth

    2010-05-01

The ubiquitous use of raw pointers in higher-level code is the primary cause of all memory usage problems and memory leaks in C++ programs. This paper describes what might be considered a radical approach to the problem, which is to encapsulate the use of all raw pointers and all raw calls to new and delete in higher-level C++ code. Instead, a set of cooperating template classes developed in the Trilinos package Teuchos is used to encapsulate every use of raw C++ pointers in every use case where they appear in high-level code. Included in the set of memory management classes is the typical reference-counted smart-pointer class similar to boost::shared_ptr (and therefore C++0x std::shared_ptr). However, what is missing in boost and the new standard library are non-reference-counted classes for the remaining use cases where raw C++ pointers would need to be used. These classes have a debug build mode where nearly all programmer errors are caught and gracefully reported at runtime. The default optimized build mode strips all runtime checks and allows the code to perform as efficiently as raw C++ pointers with reasonable usage. Also included is a novel approach for dealing with the circular-references problem that imparts little extra overhead and is almost completely invisible to most of the code (unlike the boost and therefore C++0x approach). Rather than being a radical approach, encapsulating all raw C++ pointers is simply the logical progression of a trend in the C++ development and standards community that started with std::auto_ptr and is continued (but not finished) with std::shared_ptr in C++0x. Using the Teuchos reference-counted memory management classes allows one to remove unnecessary constraints in the use of objects by removing arbitrary lifetime ordering constraints, which are a type of unnecessary coupling [23].
The code one writes with these classes will be more likely to be correct on first writing, will be less likely to contain silent (but deadly) memory usage errors, and will be much more robust to later refactoring and maintenance. The level of debug-mode runtime checking provided by the Teuchos memory management classes is stronger in many respects than what is provided by memory checking tools like Valgrind and Purify, while being much less expensive. However, tools like Valgrind and Purify perform a number of types of checks (such as usage of uninitialized memory) that make these tools very valuable, and they therefore complement the Teuchos memory management debug-mode runtime checking. The Teuchos memory management classes and idioms largely address the technical issues in resolving the fragile built-in C++ memory management model (with the exception of circular references, which has no easy solution but can be managed as discussed). All that remains is to teach these classes and idioms and expand their usage in C++ codes. The long-term viability of C++ as a usable and productive language depends on it. Otherwise, if C++ is no safer than C, then is the greater complexity of C++ worth what one gets as extra features? Given that C is smaller and easier to learn than C++, and since most programmers don't know object orientation (or templates or other advanced features of C++) all that well anyway, what really are most programmers getting out of C++ that would outweigh its extra complexity over C? C++ zealots will argue this point, but the reality is that C++ popularity has peaked and is declining while the popularity of C has remained fairly stable over the last decade. Idioms like those advocated in this paper can help to avert this trend, but it will require wide community buy-in and a change in the way C++ is taught in order to have the greatest impact. To make these programs more secure, compiler vendors or static analysis tools (e.g., Klocwork) could implement a preprocessor-like annotation language similar to OpenMP that would allow the programmer to declare (in comments) that certain blocks of code should be 'pointer-free' or to allow smaller blocks to be 'pointers allowed'. This would significantly improve the robustness of code that uses the memory management classes described here.
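The circular-reference problem mentioned above is not specific to C++: any purely reference-counted scheme leaks cycles unless one link in the cycle is made non-owning. A language-neutral sketch of that principle, in Python, whose `weakref` module plays the role a weak-pointer class plays in C++ (the `Node` class here is invented for illustration and is not part of Teuchos):

```python
import weakref

class Node:
    """Parent/child pair: the parent owns the child, but the child's
    back-pointer is weak, so the pair forms no ownership cycle."""
    def __init__(self, name):
        self.name = name
        self.child = None     # strong reference: keeps the child alive
        self._parent = None   # weak reference: does not keep the parent alive

    @property
    def parent(self):
        # Dereference the weak back-pointer; None once the parent is gone.
        return self._parent() if self._parent is not None else None

    def set_child(self, child):
        self.child = child
        child._parent = weakref.ref(self)
```

Because the back edge is non-owning, dropping the last strong reference to the parent reclaims it immediately even though the child still points "up" at it, which is exactly the behavior a reference-counted design needs to avoid cycle leaks.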

  12. Scalable PGAS Metadata Management on Extreme Scale Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chavarría-Miranda, Daniel; Agarwal, Khushbu; Straatsma, TP

Programming models intended to run on exascale systems have a number of challenges to overcome, especially the sheer size of the system as measured by the number of concurrent software entities created and managed by the underlying runtime. It is clear from the size of these systems that any state maintained by the programming model has to be strictly sub-linear in size, in order not to overwhelm memory usage with pure overhead. A principal feature of Partitioned Global Address Space (PGAS) models is providing easy access to global-view distributed data structures. In order to provide efficient access to these distributed data structures, PGAS models must keep track of metadata such as where array sections are located with respect to processes/threads running on the HPC system. As PGAS models and applications become ubiquitous on very large trans-petascale systems, a key component of their performance and scalability will be efficient and judicious use of memory for model overhead (metadata) compared to application data. We present an evaluation of several strategies to manage PGAS metadata that exhibit different space/time tradeoffs. We use two real-world PGAS applications to capture metadata usage patterns and gain insight into their communication behavior.
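One standard way to keep distribution metadata sub-linear, indeed constant, is to compute ownership from a closed-form data distribution rather than storing per-element lookup tables. A sketch for a block-cyclic distribution (one possible point in the space/time tradeoff; the paper evaluates several strategies, not necessarily this one):

```python
def owner(global_index, block_size, nprocs):
    """Block-cyclic distribution: which process holds this element?
    O(1) arithmetic replaces a per-element metadata table."""
    return (global_index // block_size) % nprocs

def local_offset(global_index, block_size, nprocs):
    """Position of the element within its owner's local storage."""
    block = global_index // block_size          # global block number
    local_block = block // nprocs               # how many of my blocks precede it
    return local_block * block_size + global_index % block_size
```

The tradeoff is typical: a closed-form rule uses almost no memory but constrains the layout, whereas explicit directories support irregular distributions at the cost of metadata that grows with the data.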

  13. Physicians' perceptions of capacity building for managing chronic disease in seniors using integrated interprofessional care models.

    PubMed

    Lee, Linda; Heckman, George; McKelvie, Robert; Jong, Philip; D'Elia, Teresa; Hillier, Loretta M

    2015-03-01

To explore the barriers to and facilitators of adapting and expanding a primary care memory clinic model to integrate care of additional complex chronic geriatric conditions (heart failure, falls, chronic obstructive pulmonary disease, and frailty) into care processes with the goal of improving outcomes for seniors. Mixed-methods study using quantitative (questionnaires) and qualitative (interviews) methods. Ontario. Family physicians currently working in primary care memory clinic teams and supporting geriatric specialists. Family physicians currently working in memory clinic teams (n = 29) and supporting geriatric specialists (n = 9) were recruited as survey participants. Interviews were conducted with memory clinic lead physicians (n = 16). Statistical analysis was done to assess differences between family physician ratings and geriatric specialist ratings related to the capacity for managing complex chronic geriatric conditions, the role of interprofessional collaboration within primary care, and funding and staffing to support geriatric care. Results from both study methods were compared to identify common findings. Results indicate overall support for expanding the memory clinic model to integrate care for other complex conditions. However, the current primary care structure is challenged to support optimal management of patients with multiple comorbidities, particularly as related to limited funding and staffing resources. Structured training, interprofessional teams, and an active role of geriatric specialists within primary care were identified as important facilitators. The memory clinic model, as applied to other complex chronic geriatric conditions, has the potential to build capacity for high-quality primary care, improve health outcomes, promote efficient use of health care resources, and reduce health care costs.

  14. Makalu: fast recoverable allocation of non-volatile memory

    DOE PAGES

    Bhandari, Kumud; Chakrabarti, Dhruva R.; Boehm, Hans-J.

    2016-10-19

Byte addressable non-volatile memory (NVRAM) is likely to supplement, and perhaps eventually replace, DRAM. Applications can then persist data structures directly in memory instead of serializing them and storing them onto a durable block device. However, failures during execution can leave data structures in NVRAM unreachable or corrupt. In this paper, we present Makalu, a system that addresses non-volatile memory management. Makalu offers an integrated allocator and recovery-time garbage collector that maintains internal consistency, avoids NVRAM memory leaks, and is efficient, all in the face of failures. We show that a careful allocator design can support a less restrictive and a much more familiar programming model than existing persistent memory allocators. Our allocator significantly reduces the per allocation persistence overhead by lazily persisting non-essential metadata and by employing a post-failure recovery-time garbage collector. Experimental results show that the resulting online speed and scalability of our allocator are comparable to well-known transient allocators, and significantly better than state-of-the-art persistent allocators.
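A recovery-time garbage collector of the kind described can be pictured as a mark-and-sweep pass over the persistent heap after a restart: any allocation unreachable from the persistent roots was leaked by the crash and is returned to the allocator. A toy sketch (the heap-as-dictionary representation is invented for illustration and says nothing about Makalu's actual data structures):

```python
def recover(roots, heap):
    """heap: dict mapping block address -> list of child addresses it points to.
    Mark everything reachable from the persistent roots, then sweep (reclaim)
    the rest -- blocks orphaned by a crash mid-operation."""
    marked = set()
    stack = list(roots)
    while stack:
        addr = stack.pop()
        if addr in marked or addr not in heap:
            continue
        marked.add(addr)
        stack.extend(heap[addr])       # follow outgoing pointers
    leaked = set(heap) - marked
    for addr in leaked:
        del heap[addr]                 # return the block to the free list
    return leaked
```

Deferring leak detection to recovery time is what lets the online allocator skip persisting non-essential metadata on every allocation.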

  15. Makalu: fast recoverable allocation of non-volatile memory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhandari, Kumud; Chakrabarti, Dhruva R.; Boehm, Hans-J.

Byte addressable non-volatile memory (NVRAM) is likely to supplement, and perhaps eventually replace, DRAM. Applications can then persist data structures directly in memory instead of serializing them and storing them onto a durable block device. However, failures during execution can leave data structures in NVRAM unreachable or corrupt. In this paper, we present Makalu, a system that addresses non-volatile memory management. Makalu offers an integrated allocator and recovery-time garbage collector that maintains internal consistency, avoids NVRAM memory leaks, and is efficient, all in the face of failures. We show that a careful allocator design can support a less restrictive and a much more familiar programming model than existing persistent memory allocators. Our allocator significantly reduces the per allocation persistence overhead by lazily persisting non-essential metadata and by employing a post-failure recovery-time garbage collector. Experimental results show that the resulting online speed and scalability of our allocator are comparable to well-known transient allocators, and significantly better than state-of-the-art persistent allocators.

  16. An abstraction layer for efficient memory management of tabulated chemistry and flamelet solutions

    NASA Astrophysics Data System (ADS)

    Weise, Steffen; Messig, Danny; Meyer, Bernd; Hasse, Christian

    2013-06-01

A large number of methods for simulating reactive flows exist; some, for example, directly use detailed chemical kinetics, while others use precomputed and tabulated flame solutions. Both approaches couple the research fields of computational fluid dynamics and chemistry tightly together, using either an online or offline approach to solve the chemistry domain. The offline approach usually involves a method of generating databases or so-called Lookup-Tables (LUTs). As these LUTs are extended to contain not only material properties but also interactions between chemistry and turbulent flow, the number of parameters and thus dimensions increases. Given a reasonable discretisation, file sizes can increase drastically. The main goal of this work is to provide methods that handle large database files efficiently. A Memory Abstraction Layer (MAL) has been developed that handles requested LUT entries efficiently by splitting the database file into several smaller blocks. It keeps the total memory usage at a minimum using thin allocation methods and compression to minimise filesystem operations. The MAL has been evaluated using three different test cases. The first, rather generic one is a sequential reading operation on an LUT to evaluate the runtime behaviour as well as the memory consumption of the MAL. The second test case is a simulation of a non-premixed turbulent flame, the so-called HM1 flame, which is a well-known test case in the turbulent combustion community. The third test case is a simulation of a non-premixed laminar flame as described by McEnally in 1996 and Bennett in 2000. Using the previously developed solver 'flameletFoam' in conjunction with the MAL, memory consumption and the performance penalty introduced were studied. The total memory used while running a parallel simulation was reduced significantly while the CPU time overhead associated with the MAL remained low.
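The blockwise strategy described above can be illustrated with a small lazy-loading cache: the table is split into fixed-size blocks, a block is read from disk only on first access, and a bound on resident blocks keeps memory usage low. A hedged sketch (the `BlockCache` class and its LRU eviction policy are illustrative choices, not a description of the MAL's internals):

```python
from collections import OrderedDict

class BlockCache:
    """Lazily load fixed-size blocks of a large table, keeping at most
    max_blocks resident (LRU eviction) -- a toy analogue of the MAL."""
    def __init__(self, load_block, block_size, max_blocks):
        self.load_block = load_block    # function: block index -> list of entries
        self.block_size = block_size
        self.max_blocks = max_blocks
        self.cache = OrderedDict()      # block index -> entries, in LRU order

    def __getitem__(self, i):
        b, off = divmod(i, self.block_size)
        if b in self.cache:
            self.cache.move_to_end(b)               # mark as recently used
        else:
            self.cache[b] = self.load_block(b)      # first touch: read from disk
            if len(self.cache) > self.max_blocks:
                self.cache.popitem(last=False)      # evict least recently used
        return self.cache[b][off]
```

With a table-aware access pattern (as in a flamelet lookup, where successive queries hit nearby states), most accesses land in resident blocks, so memory stays bounded while the reload overhead stays low.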

  17. HTMT-class Latency Tolerant Parallel Architecture for Petaflops Scale Computation

    NASA Technical Reports Server (NTRS)

    Sterling, Thomas; Bergman, Larry

    2000-01-01

Computational Aero Sciences and other numeric intensive computation disciplines demand computing throughputs substantially greater than the Teraflops scale systems only now becoming available. The related fields of fluids, structures, thermal, combustion, and dynamic controls are among the interdisciplinary areas that in combination with sufficient resolution and advanced adaptive techniques may force performance requirements towards Petaflops. This will be especially true for compute intensive models such as Navier-Stokes, or when such system models are only part of a larger design optimization computation involving many design points. Yet recent experience with conventional MPP configurations comprising commodity processing and memory components has shown that larger scale frequently results in higher programming difficulty and lower system efficiency. While important advances in system software and algorithms techniques have had some impact on efficiency and programmability for certain classes of problems, in general it is unlikely that software alone will resolve the challenges to higher scalability. As in the past, future generations of high-end computers may require a combination of hardware architecture and system software advances to enable efficient operation at a Petaflops level. The NASA led HTMT project has engaged the talents of a broad interdisciplinary team to develop a new strategy in high-end system architecture to deliver petaflops scale computing in the 2004/5 timeframe. The Hybrid-Technology, MultiThreaded parallel computer architecture incorporates several advanced technologies in combination with an innovative dynamic adaptive scheduling mechanism to provide unprecedented performance and efficiency within practical constraints of cost, complexity, and power consumption.
The emerging superconductor Rapid Single Flux Quantum electronics can operate at 100 GHz (the record is 770 GHz) and one percent of the power required by conventional semiconductor logic. Wave Division Multiplexing optical communications can approach a peak per fiber bandwidth of 1 Tbps, and the new Data Vortex network topology employing this technology can connect tens of thousands of ports, providing a bi-section bandwidth on the order of a Petabyte per second with latencies well below 100 nanoseconds, even under heavy loads. Processor-in-Memory (PIM) technology combines logic and memory on the same chip, exposing the internal bandwidth of the memory row buffers at low latency. And holographic photorefractive storage technologies provide high-density memory with access a thousand times faster than conventional disk technologies. Together these technologies enable a new class of shared memory system architecture with a peak performance in the range of a Petaflops but size and power requirements comparable to today's largest Teraflops scale systems. To achieve high-sustained performance, HTMT combines an advanced multithreading processor architecture with a memory-driven coarse-grained latency management strategy called "percolation", yielding high efficiency while reducing much of the parallel programming burden. This paper will present the basic system architecture characteristics made possible through this series of advanced technologies and then give a detailed description of the new percolation approach to runtime latency management.

  18. An energy-efficient MAC protocol using dynamic queue management for delay-tolerant mobile sensor networks.

    PubMed

    Li, Jie; Li, Qiyue; Qu, Yugui; Zhao, Baohua

    2011-01-01

Conventional MAC protocols for wireless sensor networks perform poorly when faced with a delay-tolerant mobile network environment. Characterized by a highly dynamic and sparse topology, poor network connectivity as well as data delay-tolerance, delay-tolerant mobile sensor networks exacerbate the severe power constraints and memory limitations of nodes. This paper proposes an energy-efficient MAC protocol using dynamic queue management (EQ-MAC) for power saving and data queue management. Via data transfers initiated by the target sink and the use of a dynamic queue management strategy based on priority, EQ-MAC effectively avoids untargeted transfers, increases the chance of successful data transmission, and makes useful data reach the target terminal in a timely manner. Experimental results show that EQ-MAC has high energy efficiency in comparison with a conventional MAC protocol. It also achieves a 46% decrease in packet drop probability, 79% increase in system throughput, and 25% decrease in mean packet delay.
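A priority-based queue management strategy of the kind described can be sketched as a bounded buffer that, when full, sacrifices the lowest-priority packet. This is an illustrative reconstruction only; EQ-MAC's actual policy is defined in the paper:

```python
import heapq

class PriorityBuffer:
    """Bounded data queue for a memory-constrained node: when full, an
    arriving packet displaces the lowest-priority queued packet (or is
    itself dropped if nothing queued is lower priority)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.heap = []   # min-heap on priority: lowest-priority packet on top
        self.seq = 0     # tie-breaker preserving arrival order

    def enqueue(self, priority, packet):
        """Returns the dropped packet, or None if nothing was dropped."""
        item = (priority, self.seq, packet)
        self.seq += 1
        if len(self.heap) < self.capacity:
            heapq.heappush(self.heap, item)
            return None
        if priority <= self.heap[0][0]:
            return packet                                # arrival itself dropped
        return heapq.heapreplace(self.heap, item)[2]     # evict lowest priority

    def drain(self):
        """Transmit order: highest priority first."""
        return [p for _, _, p in sorted(self.heap, reverse=True)]
```

Keeping high-priority data at the expense of low-priority data is one way such a queue can raise the chance that useful data survives long disconnection periods to reach the sink.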

  19. An Energy-Efficient MAC Protocol Using Dynamic Queue Management for Delay-Tolerant Mobile Sensor Networks

    PubMed Central

    Li, Jie; Li, Qiyue; Qu, Yugui; Zhao, Baohua

    2011-01-01

Conventional MAC protocols for wireless sensor networks perform poorly when faced with a delay-tolerant mobile network environment. Characterized by a highly dynamic and sparse topology, poor network connectivity as well as data delay-tolerance, delay-tolerant mobile sensor networks exacerbate the severe power constraints and memory limitations of nodes. This paper proposes an energy-efficient MAC protocol using dynamic queue management (EQ-MAC) for power saving and data queue management. Via data transfers initiated by the target sink and the use of a dynamic queue management strategy based on priority, EQ-MAC effectively avoids untargeted transfers, increases the chance of successful data transmission, and makes useful data reach the target terminal in a timely manner. Experimental results show that EQ-MAC has high energy efficiency in comparison with a conventional MAC protocol. It also achieves a 46% decrease in packet drop probability, 79% increase in system throughput, and 25% decrease in mean packet delay. PMID:22319385

  20. Efficient Numeric and Geometric Computations using Heterogeneous Shared Memory Architectures

    DTIC Science & Technology

    2017-10-04

Final report: Efficient Numeric and Geometric Computations using Heterogeneous Shared Memory Architectures (University of North Carolina at Chapel Hill). The project developed algorithms for scientific and geometric computing by exploiting the power and performance efficiency of heterogeneous shared memory architectures.

  1. The development of strategy use in elementary school children: working memory and individual differences.

    PubMed

    Imbo, Ineke; Vandierendonck, André

    2007-04-01

    The current study tested the development of working memory involvement in children's arithmetic strategy selection and strategy efficiency. To this end, an experiment in which the dual-task method and the choice/no-choice method were combined was administered to 10- to 12-year-olds. Working memory was needed in retrieval, transformation, and counting strategies, but the ratio between available working memory resources and arithmetic task demands changed across development. More frequent retrieval use, more efficient memory retrieval, and more efficient counting processes reduced the working memory requirements. Strategy efficiency and strategy selection were also modified by individual differences such as processing speed, arithmetic skill, gender, and math anxiety. Short-term memory capacity, in contrast, was not related to children's strategy selection or strategy efficiency.

  2. Mobile Thread Task Manager

    NASA Technical Reports Server (NTRS)

    Clement, Bradley J.; Estlin, Tara A.; Bornstein, Benjamin J.

    2013-01-01

The Mobile Thread Task Manager (MTTM) is being applied to parallelizing existing flight software to understand the benefits and to develop new techniques and architectural concepts for adapting software to multicore architectures. It allocates and load-balances tasks for a group of threads that migrate across processors to improve cache performance. In order to balance load across threads, the MTTM augments a basic map-reduce strategy to draw jobs from a global queue. In a multicore processor, memory may be "homed" to the cache of a specific processor and must be accessed from that processor. The MTTM architecture wraps access to data with thread management to move threads to the home processor for that data so that the computation follows the data in an attempt to avoid L2 cache misses. Cache homing is also handled by a memory manager that translates identifiers to processor IDs where the data will be homed (according to rules defined by the user). The user can also specify the number of threads and processors separately, which is important for tuning performance for different patterns of computation and memory access. MTTM efficiently processes tasks in parallel on a multiprocessor computer. It also provides an interface to make it easier to adapt existing software to a multiprocessor environment.
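The homing idea, routing each task to the processor that owns its data so the computation follows the data, reduces at its simplest to partitioning tasks by a user-defined homing rule. A minimal sketch (the `schedule` function and the modulo homing rule are invented for illustration; MTTM's actual mechanism migrates live threads rather than dispatching tasks):

```python
from collections import defaultdict, deque

def schedule(tasks, home, nprocs):
    """Assign each (data_id, work) task to the queue of the processor that
    'homes' its data, so the computation runs where the data lives."""
    queues = defaultdict(deque)
    for data_id, work in tasks:
        queues[home(data_id, nprocs)].append(work)
    return queues
```

The memory manager described in the abstract plays the role of `home` here: a user-defined translation from data identifiers to the processor whose cache holds the data.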

  3. Physicians’ perceptions of capacity building for managing chronic disease in seniors using integrated interprofessional care models

    PubMed Central

    Lee, Linda; Heckman, George; McKelvie, Robert; Jong, Philip; D’Elia, Teresa; Hillier, Loretta M.

    2015-01-01

    Abstract Objective To explore the barriers to and facilitators of adapting and expanding a primary care memory clinic model to integrate care of additional complex chronic geriatric conditions (heart failure, falls, chronic obstructive pulmonary disease, and frailty) into care processes with the goal of improving outcomes for seniors. Design Mixed-methods study using quantitative (questionnaires) and qualitative (interviews) methods. Setting Ontario. Participants Family physicians currently working in primary care memory clinic teams and supporting geriatric specialists. Methods Family physicians currently working in memory clinic teams (n = 29) and supporting geriatric specialists (n = 9) were recruited as survey participants. Interviews were conducted with memory clinic lead physicians (n = 16). Statistical analysis was done to assess differences between family physician ratings and geriatric specialist ratings related to the capacity for managing complex chronic geriatric conditions, the role of interprofessional collaboration within primary care, and funding and staffing to support geriatric care. Results from both study methods were compared to identify common findings. Main findings Results indicate overall support for expanding the memory clinic model to integrate care for other complex conditions. However, the current primary care structure is challenged to support optimal management of patients with multiple comorbidities, particularly as related to limited funding and staffing resources. Structured training, interprofessional teams, and an active role of geriatric specialists within primary care were identified as important facilitators. Conclusion The memory clinic model, as applied to other complex chronic geriatric conditions, has the potential to build capacity for high-quality primary care, improve health outcomes, promote efficient use of health care resources, and reduce health care costs. PMID:25932482

  4. How phonological awareness mediates the relation between working memory and word reading efficiency in children with dyslexia.

    PubMed

    Knoop-van Campen, Carolien A N; Segers, Eliane; Verhoeven, Ludo

    2018-05-01

This study examined the relation between working memory, phonological awareness, and word reading efficiency in fourth-grade children with dyslexia. To test whether the relation between phonological awareness and word reading efficiency differed for children with dyslexia versus typically developing children, we assessed phonological awareness and word reading efficiency in 50 children with dyslexia (aged 9;10, 35 boys) and 613 typically developing children (aged 9;5, 279 boys). Phonological awareness was found to be associated with word reading efficiency, similarly for children with dyslexia and typically developing children. To find out whether the relation between working memory and word reading efficiency in the group with dyslexia could be explained by phonological awareness, the children with dyslexia were also tested on working memory. Results of a mediation analysis showed a significant indirect effect of working memory on word reading efficiency via phonological awareness. Working memory predicted reading efficiency via its relation with phonological awareness in children with dyslexia. This indicates that working memory is necessary for word reading efficiency via its impact on phonological awareness and that phonological awareness continues to be important for word reading efficiency in older children with dyslexia. © 2018 The Authors Dyslexia Published by John Wiley & Sons Ltd.
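The mediation logic reported here, working memory affecting reading efficiency via phonological awareness, boils down to estimating two regression paths and multiplying them. A deliberately simplified sketch with toy data (a proper mediation analysis also controls for the predictor in the second regression and tests the indirect effect for significance, both omitted here):

```python
import statistics

def slope(x, y):
    """Ordinary least-squares slope of y regressed on x."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

def indirect_effect(wm, pa, reading):
    """Simplified mediation: a = path from working memory (wm) to
    phonological awareness (pa); b = path from pa to reading efficiency;
    the indirect effect is the product a * b."""
    a = slope(wm, pa)
    b = slope(pa, reading)
    return a * b
```

With constructed data where each unit of working memory adds two units of phonological awareness, and each unit of phonological awareness adds half a unit of reading efficiency, the indirect effect comes out to 2 × 0.5 = 1.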

  5. Perspectives in astrophysical databases

    NASA Astrophysics Data System (ADS)

    Frailis, Marco; de Angelis, Alessandro; Roberto, Vito

    2004-07-01

    Astrophysics has become a domain extremely rich in scientific data. Data mining tools are needed to extract information from such large data sets. This calls for an approach to data management that emphasizes the efficiency and simplicity of data access; efficiency is obtained using multidimensional access methods, and simplicity is achieved by properly handling metadata. Moreover, clustering and classification techniques on large data sets pose additional requirements in terms of computation and memory scalability and interpretability of results. In this study we review some possible solutions.

  6. Time-varying long term memory in the European Union stock markets

    NASA Astrophysics Data System (ADS)

    Sensoy, Ahmet; Tabak, Benjamin M.

    2015-10-01

    This paper proposes a new efficiency index to model time-varying inefficiency in stock markets. We focus on European stock markets and show that they have different degrees of time-varying efficiency. We observe that the 2008 global financial crisis has an adverse effect on almost all EU stock markets. However, the Eurozone sovereign debt crisis has a significant adverse effect only on the markets in France, Spain and Greece. For the late members, joining EU does not have a uniform effect on stock market efficiency. Our results have important implications for policy makers, investors, risk managers and academics.

  7. LSG: An External-Memory Tool to Compute String Graphs for Next-Generation Sequencing Data Assembly.

    PubMed

    Bonizzoni, Paola; Vedova, Gianluca Della; Pirola, Yuri; Previtali, Marco; Rizzi, Raffaella

    2016-03-01

    The large amount of short read data that has to be assembled in future applications, such as metagenomics or cancer genomics, strongly motivates the investigation of disk-based approaches to indexing next-generation sequencing (NGS) data. Positive results in this direction stimulate the investigation of efficient external-memory algorithms for de novo assembly from NGS data. Our article is also motivated by the open problem of designing a space-efficient algorithm to compute a string graph using an indexing procedure based on the Burrows-Wheeler transform (BWT). We have developed a disk-based algorithm for computing string graphs in external memory: the light string graph (LSG). LSG relies on a new representation of the FM-index that keeps the main-memory requirement independent of the size of the data set. Moreover, we have developed a pipeline for genome assembly from NGS data that integrates LSG with the assembly step of SGA (Simpson and Durbin, 2012), a state-of-the-art string-graph-based assembler, and uses BEETL for indexing the input data. LSG is open source software and is available online. We have analyzed our implementation on an 875-million-read whole-genome dataset, on which LSG built the string graph using only 1 GB of main memory (reducing memory occupation by a factor of 50 with respect to SGA), while requiring slightly more than twice the time of SGA. The analysis of the entire pipeline shows an important decrease in memory usage, with only a moderate increase in running time.
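The FM-index machinery that LSG builds on can be sketched in a few lines: backward search on the Burrows-Wheeler transform counts pattern occurrences without scanning the text. This toy is entirely in-memory and quadratic-time for clarity; LSG's contribution is precisely a disk-based representation of these structures, which this sketch does not attempt.

```python
# Toy of the FM-index idea underlying LSG: backward search on the BWT.
def bwt(s):
    """Burrows-Wheeler transform of s (with terminator '$' appended)."""
    s += "$"
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)

def fm_index(b):
    """Build C (count of chars smaller than c) and Occ (prefix counts) tables."""
    counts = {}
    for ch in b:
        counts[ch] = counts.get(ch, 0) + 1
    C, total = {}, 0
    for ch in sorted(counts):
        C[ch] = total
        total += counts[ch]
    occ = {ch: [0] * (len(b) + 1) for ch in counts}
    for i, ch in enumerate(b):
        for c in occ:
            occ[c][i + 1] = occ[c][i] + (c == ch)
    return C, occ

def count(pattern, C, occ, n):
    """Number of occurrences of pattern, via FM backward search."""
    lo, hi = 0, n
    for ch in reversed(pattern):
        if ch not in C:
            return 0
        lo = C[ch] + occ[ch][lo]
        hi = C[ch] + occ[ch][hi]
        if lo >= hi:
            return 0
    return hi - lo

b = bwt("banana")
C, occ = fm_index(b)
print(count("ana", C, occ, len(b)))  # 2: "ana" occurs twice in "banana"
```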

  8. Highly Efficient Coherent Optical Memory Based on Electromagnetically Induced Transparency

    NASA Astrophysics Data System (ADS)

    Hsiao, Ya-Fen; Tsai, Pin-Ju; Chen, Hung-Shiue; Lin, Sheng-Xiang; Hung, Chih-Chiao; Lee, Chih-Hsi; Chen, Yi-Hsin; Chen, Yong-Fan; Yu, Ite A.; Chen, Ying-Cheng

    2018-05-01

    Quantum memory is an important component in the long-distance quantum communication based on the quantum repeater protocol. To outperform the direct transmission of photons with quantum repeaters, it is crucial to develop quantum memories with high fidelity, high efficiency and a long storage time. Here, we achieve a storage efficiency of 92.0 (1.5)% for a coherent optical memory based on the electromagnetically induced transparency scheme in optically dense cold atomic media. We also obtain a useful time-bandwidth product of 1200, considering only storage where the retrieval efficiency remains above 50%. Both are the best record to date in all kinds of schemes for the realization of optical memory. Our work significantly advances the pursuit of a high-performance optical memory and should have important applications in quantum information science.

  9. Highly Efficient Coherent Optical Memory Based on Electromagnetically Induced Transparency.

    PubMed

    Hsiao, Ya-Fen; Tsai, Pin-Ju; Chen, Hung-Shiue; Lin, Sheng-Xiang; Hung, Chih-Chiao; Lee, Chih-Hsi; Chen, Yi-Hsin; Chen, Yong-Fan; Yu, Ite A; Chen, Ying-Cheng

    2018-05-04

    Quantum memory is an important component in the long-distance quantum communication based on the quantum repeater protocol. To outperform the direct transmission of photons with quantum repeaters, it is crucial to develop quantum memories with high fidelity, high efficiency and a long storage time. Here, we achieve a storage efficiency of 92.0 (1.5)% for a coherent optical memory based on the electromagnetically induced transparency scheme in optically dense cold atomic media. We also obtain a useful time-bandwidth product of 1200, considering only storage where the retrieval efficiency remains above 50%. Both are the best record to date in all kinds of schemes for the realization of optical memory. Our work significantly advances the pursuit of a high-performance optical memory and should have important applications in quantum information science.

  10. Developing a Physician Management & Leadership Program (PMLP) in Newfoundland and Labrador.

    PubMed

    Maddalena, Victor; Fleet, Lisa

    2015-01-01

    This article aims to document the process the province of Newfoundland and Labrador used to develop an innovative Physician Management and Leadership Program (PMLP). The PMLP is a collaborative initiative among Memorial University (Faculty of Medicine and Faculty of Business), the Government of Newfoundland and Labrador, and the Regional Health Authorities. As challenges facing health-care systems become more complex there is a growing need for management and leadership training for physicians. Memorial University Faculty of Medicine and the Gardiner Centre in the Faculty of Business, in partnership with Regional Health Authorities and the Government of Newfoundland and Labrador, identified the need for a leadership and management education program for physician leaders. A provincial needs assessment of physician leaders was conducted to identify educational needs to fill this identified gap. A Steering Committee was formed to guide the design and implementation and monitor delivery of the 10-module Physician Management and Leadership Program (PMLP). Designing management and leadership education programs to serve physicians who practice in a large, predominantly rural geographic area can be challenging and requires efficient use of available resources and technology. While there are many physician management and leadership programs available in Canada and abroad, the PMLP was designed to meet the specific educational needs of physician leaders in Newfoundland and Labrador.

  11. Highly-efficient quantum memory for polarization qubits in a spatially-multiplexed cold atomic ensemble.

    PubMed

    Vernaz-Gris, Pierre; Huang, Kun; Cao, Mingtao; Sheremet, Alexandra S; Laurat, Julien

    2018-01-25

    Quantum memory for flying optical qubits is a key enabler for a wide range of applications in quantum information. A critical figure of merit is the overall storage and retrieval efficiency. So far, despite the recent achievements of efficient memories for light pulses, the storage of qubits has suffered from limited efficiency. Here we report on a quantum memory for polarization qubits that combines an average conditional fidelity above 99% and efficiency around 68%, thereby demonstrating a reversible qubit mapping where more information is retrieved than lost. The qubits are encoded with weak coherent states at the single-photon level and the memory is based on electromagnetically-induced transparency in an elongated laser-cooled ensemble of cesium atoms, spatially multiplexed for dual-rail storage. This implementation preserves high optical depth on both rails, without compromise between multiplexing and storage efficiency. Our work provides an efficient node for future tests of quantum network functionalities and advanced photonic circuits.

  12. Time: a vital resource.

    PubMed

    Collins, Sandra K; Collins, Kevin S

    2004-01-01

    Resolving problems with time management requires an understanding of the concept of working smarter rather than harder. Therefore, managing time effectively is a vital responsibility of department managers. When developing a plan for more effectively managing time, it is important to carefully analyze where time is currently being used or lost. Keeping a daily log can be a time-consuming effort. However, the log can provide information about ways that time may be saved and how to organize personal schedules to maximize time efficiency. The next step is to develop a strategy to decrease wasted time and create a more cohesive radiology department. The following time management strategies provide some suggestions for developing a plan: get focused; set goals and priorities; get organized; monitor individual motivation factors; develop memory techniques. In healthcare, success means delivering the highest quality of care by getting organized, meeting deadlines, creating efficient schedules and appropriately budgeting resources. Effective time management focuses on knowing what needs to be done when. The managerial challenge is to shift the emphasis from doing everything all at once to orchestrating the departmental activities in order to maximize the time given in a normal workday.

  13. Two-layer symbolic representation for stochastic models with phase-type distributed events

    NASA Astrophysics Data System (ADS)

    Longo, Francesco; Scarpa, Marco

    2015-07-01

    Among the techniques that have been proposed for the analysis of non-Markovian models, the state space expansion approach has shown great flexibility in terms of modelling capacity. Its principal drawback is the explosion of the state space. This paper proposes a two-layer symbolic method for efficiently storing the expanded reachability graph of a non-Markovian model in the case in which continuous phase-type distributions are associated with the firing times of system events and different memory policies are considered. At the lower layer, the reachability graph is symbolically represented as a set of Kronecker matrices, while, at the higher layer, all the information needed to correctly manage event memory is stored in a multi-terminal multi-valued decision diagram. This information is collected by applying a symbolic algorithm based on two theorems. The efficiency of the proposed approach, in terms of memory occupation and execution time, is shown by applying it to a set of non-Markovian stochastic Petri nets and comparing it with a classical explicit expansion algorithm. Moreover, a comparison with a classical symbolic approach is performed whenever possible.
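The lower-layer idea, storing the transition structure of a composed model as small per-component matrices and operating on the Kronecker product without ever materializing it, can be sketched as follows. The two 2-state matrices are invented for illustration; this is not the paper's two-layer algorithm.

```python
import numpy as np

# Apply (A ⊗ B) to a vector WITHOUT building the Kronecker product.
# Matrices here are invented 2-state examples.
A = np.array([[0.9, 0.1], [0.2, 0.8]])   # component 1 (2 states)
B = np.array([[0.7, 0.3], [0.5, 0.5]])   # component 2 (2 states)

def kron_matvec(A, B, x):
    """(A ⊗ B) @ x via the identity (A ⊗ B) vec(X) = vec(A X Bᵀ),
    where vec stacks rows (NumPy's C-order reshape)."""
    m, n = A.shape[0], B.shape[0]
    X = x.reshape(m, n)
    return (A @ X @ B.T).reshape(-1)

x = np.arange(4, dtype=float)
# Same result as building the full 4x4 Kronecker product explicitly:
assert np.allclose(kron_matvec(A, B, x), np.kron(A, B) @ x)
```

For K components of size n each, this keeps storage at K·n² entries instead of n^(2K), which is the point of the Kronecker representation.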

  14. Simple Atomic Quantum Memory Suitable for Semiconductor Quantum Dot Single Photons

    NASA Astrophysics Data System (ADS)

    Wolters, Janik; Buser, Gianni; Horsley, Andrew; Béguin, Lucas; Jöckel, Andreas; Jahn, Jan-Philipp; Warburton, Richard J.; Treutlein, Philipp

    2017-08-01

    Quantum memories matched to single photon sources will form an important cornerstone of future quantum network technology. We demonstrate such a memory in warm Rb vapor with on-demand storage and retrieval, based on electromagnetically induced transparency. With an acceptance bandwidth of δf = 0.66 GHz, the memory is suitable for single photons emitted by semiconductor quantum dots. In this regime, vapor cell memories offer an excellent compromise between storage efficiency, storage time, noise level, and experimental complexity, and atomic collisions have negligible influence on the optical coherences. Operation of the memory is demonstrated using attenuated laser pulses on the single photon level. For a 50 ns storage time, we measure η_{e2e}^{50 ns} = 3.4(3)% end-to-end efficiency of the fiber-coupled memory, with a total intrinsic efficiency η_{int} = 17(3)%. Straightforward technological improvements can boost the end-to-end efficiency to η_{e2e} ≈ 35%; beyond that, increasing the optical depth and exploiting the Zeeman substructure of the atoms will allow such a memory to approach near unity efficiency. In the present memory, the unconditional read-out noise level of 9×10^{-3} photons is dominated by atomic fluorescence, and for input pulses containing on average μ_{1} = 0.27(4) photons, the signal to noise level would be unity.

  15. Simple Atomic Quantum Memory Suitable for Semiconductor Quantum Dot Single Photons.

    PubMed

    Wolters, Janik; Buser, Gianni; Horsley, Andrew; Béguin, Lucas; Jöckel, Andreas; Jahn, Jan-Philipp; Warburton, Richard J; Treutlein, Philipp

    2017-08-11

    Quantum memories matched to single photon sources will form an important cornerstone of future quantum network technology. We demonstrate such a memory in warm Rb vapor with on-demand storage and retrieval, based on electromagnetically induced transparency. With an acceptance bandwidth of δf=0.66  GHz, the memory is suitable for single photons emitted by semiconductor quantum dots. In this regime, vapor cell memories offer an excellent compromise between storage efficiency, storage time, noise level, and experimental complexity, and atomic collisions have negligible influence on the optical coherences. Operation of the memory is demonstrated using attenuated laser pulses on the single photon level. For a 50 ns storage time, we measure η_{e2e}^{50  ns}=3.4(3)% end-to-end efficiency of the fiber-coupled memory, with a total intrinsic efficiency η_{int}=17(3)%. Straightforward technological improvements can boost the end-to-end-efficiency to η_{e2e}≈35%; beyond that, increasing the optical depth and exploiting the Zeeman substructure of the atoms will allow such a memory to approach near unity efficiency. In the present memory, the unconditional read-out noise level of 9×10^{-3} photons is dominated by atomic fluorescence, and for input pulses containing on average μ_{1}=0.27(4) photons, the signal to noise level would be unity.

  16. An Efficient Means of Adaptive Refinement Within Systems of Overset Grids

    NASA Technical Reports Server (NTRS)

    Meakin, Robert L.

    1996-01-01

    An efficient means of adaptive refinement within systems of overset grids is presented. Problem domains are segregated into near-body and off-body fields. Near-body fields are discretized via overlapping body-fitted grids that extend only a short distance from body surfaces. Off-body fields are discretized via systems of overlapping uniform Cartesian grids of varying levels of refinement. A novel off-body grid generation and management scheme provides the mechanism for carrying out adaptive refinement of off-body flow dynamics and solid body motion. The scheme allows for very efficient use of memory resources, and flow solvers and domain connectivity routines that can exploit the structure inherent to uniform Cartesian grids.

  17. KITTEN Lightweight Kernel 0.1 Beta

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pedretti, Kevin; Levenhagen, Michael; Kelly, Suzanne

    2007-12-12

    The Kitten Lightweight Kernel is a simplified OS (operating system) kernel that is intended to manage a compute node's hardware resources. It provides a set of mechanisms to user-level applications for utilizing hardware resources (e.g., allocating memory, creating processes, accessing the network). Kitten is much simpler than general-purpose OS kernels, such as Linux or Windows, but includes all of the essential functionality needed to support HPC (high-performance computing) MPI, PGAS and OpenMP applications. Kitten provides unique capabilities such as physically contiguous application memory, transparent large page support, and noise-free tick-less operation, which enable HPC applications to obtain greater efficiency and scalability than with general-purpose OS kernels.

  18. Adaptive mesh refinement for characteristic grids

    NASA Astrophysics Data System (ADS)

    Thornburg, Jonathan

    2011-05-01

    I consider techniques for Berger-Oliger adaptive mesh refinement (AMR) when numerically solving partial differential equations with wave-like solutions, using characteristic (double-null) grids. Such AMR algorithms are naturally recursive, and the best-known past Berger-Oliger characteristic AMR algorithm, that of Pretorius and Lehner (J Comp Phys 198:10, 2004), recurses on individual "diamond" characteristic grid cells. This leads to the use of fine-grained memory management, with individual grid cells kept in two-dimensional linked lists at each refinement level. This complicates the implementation and adds overhead in both space and time. Here I describe a Berger-Oliger characteristic AMR algorithm which instead recurses on null slices. This algorithm is very similar to the usual Cauchy Berger-Oliger algorithm, and uses relatively coarse-grained memory management, allowing entire null slices to be stored in contiguous arrays in memory. The algorithm is very efficient in both space and time. I describe discretizations yielding both second and fourth order global accuracy. My code implementing the algorithm described here is included in the electronic supplementary materials accompanying this paper, and is freely available to other researchers under the terms of the GNU general public license.
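The slice-based recursion described above can be caricatured in a few lines: each refinement level keeps its slice in one contiguous array (the coarse-grained memory management the abstract contrasts with per-cell linked lists), and each finer level takes two substeps per parent step (refinement ratio 2). The "PDE" update and grid sizes below are placeholders, not the paper's characteristic scheme, and regridding, interpolation, and error estimation are omitted.

```python
import numpy as np

def advance(u, dt):
    """One explicit step of the toy equation u_t = -u (placeholder update)."""
    return u * (1.0 - dt)

def evolve(levels, level, dt):
    """Recursively advance `level` and all finer levels through time dt."""
    levels[level] = advance(levels[level], dt)
    if level + 1 < len(levels):
        evolve(levels, level + 1, dt / 2)  # two fine substeps
        evolve(levels, level + 1, dt / 2)  # per coarse step
    # (a real AMR code would inject the fine solution back into the coarse grid here)

# Two levels: a coarse slice and a twice-as-fine slice, each one contiguous array.
levels = [np.ones(8), np.ones(16)]
evolve(levels, 0, dt=0.1)
print(levels[0][0], levels[1][0])  # 0.9 and 0.95**2 = 0.9025
```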

  19. SAR processing on the MPP

    NASA Technical Reports Server (NTRS)

    Batcher, K. E.; Eddey, E. E.; Faiss, R. O.; Gilmore, P. A.

    1981-01-01

    The processing of synthetic aperture radar (SAR) signals using the massively parallel processor (MPP) is discussed. The fast Fourier transform convolution procedures employed in the algorithms are described. The MPP architecture comprises an array unit (ARU) which processes arrays of data; an array control unit which controls the operation of the ARU and performs scalar arithmetic; a program and data management unit which controls the flow of data; and a unique staging memory (SM) which buffers and permutes data. The ARU contains a 128 by 128 array of bit-serial processing elements (PEs). Two-by-four subarrays of PEs are packaged in a custom VLSI HCMOS chip. The staging memory is a large multidimensional-access memory which buffers and permutes data flowing through the system. Efficient SAR processing is achieved via ARU communication paths and SM data manipulation. Real time processing capability can be realized via a multiple ARU, multiple SM configuration.

  20. The Optimization of In-Memory Space Partitioning Trees for Cache Utilization

    NASA Astrophysics Data System (ADS)

    Yeo, Myung Ho; Min, Young Soo; Bok, Kyoung Soo; Yoo, Jae Soo

    In this paper, a novel cache-conscious indexing technique based on space partitioning trees is proposed. Many researchers have recently investigated efficient cache-conscious indexing techniques that improve the retrieval performance of in-memory database management systems. However, most studies considered data partitioning and targeted fast information retrieval. Existing data-partitioning-based index structures significantly degrade performance due to redundant accesses of overlapped spaces. In particular, R-tree-based index structures suffer from the propagation of MBR (minimum bounding rectangle) information when data are updated frequently. In this paper, we propose an in-memory space partitioning index structure for optimal cache utilization. The proposed index structure is compared with existing index structures in terms of update performance, insertion performance, and cache-utilization rate in a variety of environments. The results demonstrate that the proposed index structure offers better performance than existing index structures.

  1. Antiferromagnetic CuMnAs multi-level memory cell with microelectronic compatibility

    NASA Astrophysics Data System (ADS)

    Olejník, K.; Schuler, V.; Marti, X.; Novák, V.; Kašpar, Z.; Wadley, P.; Campion, R. P.; Edmonds, K. W.; Gallagher, B. L.; Garces, J.; Baumgartner, M.; Gambardella, P.; Jungwirth, T.

    2017-05-01

    Antiferromagnets offer a unique combination of properties, including radiation and magnetic-field hardness, the absence of stray magnetic fields, and spin dynamics on a terahertz frequency scale. Recent experiments have demonstrated that relativistic spin-orbit torques can provide the means for an efficient electric control of antiferromagnetic moments. Here we show that elementary-shape memory cells fabricated from a single-layer antiferromagnet CuMnAs deposited on a III-V or Si substrate have deterministic multi-level switching characteristics. They allow for counting and recording thousands of input pulses and responding to pulses of lengths downscaled to hundreds of picoseconds. To demonstrate the compatibility with common microelectronic circuitry, we implemented the antiferromagnetic bit cell in a standard printed circuit board managed and powered at ambient conditions by a computer via a USB interface. Our results open a path towards specialized embedded memory-logic applications and ultra-fast components based on antiferromagnets.

  2. Drainage Basins as Large-Scale Field Laboratories of Change: Hydro-biogeochemical- economic Model Study Support for Water Pollution and Eutrophication Management Under Uncertainty

    NASA Astrophysics Data System (ADS)

    Destouni, G.

    2008-12-01

    Excess nutrient and pollutant releases from various point and diffuse sources at and below the land surface, associated with land use, industry and households, pose serious eutrophication and pollution risks to inland and coastal water ecosystems worldwide. These risks must be assessed, for instance according to the EU Water Framework Directive (WFD). The WFD demands economically efficient, basin-scale water management for achieving and maintaining good physico-chemical and ecological status in all the inland and coastal waters of EU member states. This paper synthesizes a series of hydro-biogeochemical and linked economic efficiency studies of basin-scale waterborne nutrient and pollutant flows, the development over the last decades up to the current levels of these flows, the main monitoring and modelling uncertainties associated with their quantification, and the effectiveness and economic efficiency of different possible abatement strategies for reducing them in order to meet WFD requirements and other environmental goals on local, national and international levels under climate and other regional change. The studies include different Swedish and Baltic Sea drainage basins. Main findings include quantification of near-coastal monitoring gaps and of long-term nutrient and pollutant memory in the subsurface (soil-groundwater-sediment) water systems of drainage basins. The former may significantly mask nutrient and pollutant loads to the sea, while the latter may continue to uphold large loads to inland and coastal waters a long time after source mitigation. A methodology is presented for finding a rational trade-off between the two resource-demanding options to reduce, or to accept and explicitly account for, the uncertainties implied by these monitoring gaps and long-term nutrient-pollution memories and time lags, and other knowledge, data and model uncertainties that limit the effectiveness and efficiency of water pollution and eutrophication management.

  3. Managing Chemotherapy Side Effects: Memory Changes

    MedlinePlus

    ... Cancer Institute Managing Chemotherapy Side Effects Memory Changes What is causing these changes? Your doctor ... thinking or remembering things Managing Chemotherapy Side Effects: Memory Changes Get help to remember things. Write down ...

  4. Extreme Quantum Memory Advantage for Rare-Event Sampling

    NASA Astrophysics Data System (ADS)

    Aghamohammadi, Cina; Loomis, Samuel P.; Mahoney, John R.; Crutchfield, James P.

    2018-02-01

    We introduce a quantum algorithm for memory-efficient biased sampling of rare events generated by classical memoryful stochastic processes. Two efficiency metrics are used to compare quantum and classical resources for rare-event sampling. For a fixed stochastic process, the first is the classical-to-quantum ratio of required memory. We show for two example processes that there exists an infinite number of rare-event classes for which the memory ratio for sampling is larger than r, for any large real number r. Then, for a sequence of processes each labeled by an integer size N, we compare how the classical and quantum required memories scale with N. In this setting, since both memories can diverge as N → ∞, the efficiency metric tracks how fast they diverge. An extreme quantum memory advantage exists when the classical memory diverges in the limit N → ∞, but the quantum memory has a finite bound. We then show that finite-state Markov processes and spin chains exhibit memory advantage for sampling of almost all of their rare-event classes.

  5. Extended memory management under RTOS

    NASA Technical Reports Server (NTRS)

    Plummer, M.

    1981-01-01

    A technique for extended memory management in ROLM 1666 computers using FORTRAN is presented. A general software system is described for which the technique can be ideally applied. The memory manager interface with the system is described. The protocols by which the manager is invoked are presented, as well as the methods used by the manager.

  6. CMS event processing multi-core efficiency status

    NASA Astrophysics Data System (ADS)

    Jones, C. D.; CMS Collaboration

    2017-10-01

    In 2015, CMS was the first LHC experiment to begin using a multi-threaded framework for event processing. This new framework uses Intel's Threading Building Blocks library to manage concurrency via a task-based processing model. During the 2015 LHC run period, CMS ran only reconstruction jobs with multiple threads, because only those jobs were sufficiently thread-efficient. Recent work now allows simulation and digitization to be thread-efficient as well. In addition, during 2015 the multi-threaded framework could run events in parallel but could use only one thread per event. Work done in 2016 now allows multiple threads to be used while processing one event. In this presentation we will show how these recent changes have improved CMS's overall threading and memory efficiency, and we will discuss work to be done to further increase those efficiencies.

  7. Improving attention control in dysphoria through cognitive training: transfer effects on working memory capacity and filtering efficiency.

    PubMed

    Owens, Max; Koster, Ernst H W; Derakshan, Nazanin

    2013-03-01

    Impaired filtering of irrelevant information from working memory is thought to underlie reduced working memory capacity for relevant information in dysphoria. The current study investigated whether training-related gains in working memory performance on the adaptive dual n-back task could result in improved inhibitory function. Efficacy of training was monitored in a change detection paradigm allowing measurement of a sustained event-related potential asymmetry sensitive to working memory capacity and the efficient filtering of irrelevant information. Dysphoric participants in the training group showed training-related gains in working memory that were accompanied by gains in working memory capacity and filtering efficiency compared to an active control group. Results provide important initial evidence that behavioral performance and neural function in dysphoria can be improved by facilitating greater attentional control. Copyright © 2013 Society for Psychophysiological Research.

  8. Thermally efficient and highly scalable In2Se3 nanowire phase change memory

    NASA Astrophysics Data System (ADS)

    Jin, Bo; Kang, Daegun; Kim, Jungsik; Meyyappan, M.; Lee, Jeong-Soo

    2013-04-01

    The electrical characteristics of nonvolatile In2Se3 nanowire phase change memory are reported. Size-dependent memory switching behavior was observed in nanowires of varying diameters and the reduction in set/reset threshold voltage was as low as 3.45 V/6.25 V for a 60 nm nanowire, which is promising for highly scalable nanowire memory applications. Also, size-dependent thermal resistance of In2Se3 nanowire memory cells was estimated, with values as high as 5.86×10^13 and 1.04×10^6 K/W for a 60 nm nanowire memory cell in the amorphous and crystalline phases, respectively. Such high thermal resistances are beneficial for improvement of thermal efficiency and thus reduction in programming power consumption based on Fourier's law. The evaluation of thermal resistance provides an avenue to develop thermally efficient memory cell architecture.
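The Fourier's-law argument can be made concrete with a back-of-envelope estimate, ΔT = P · R_th: the higher the cell's thermal resistance, the less power is needed for a given temperature rise. The R_th value is the one reported in the abstract; the target temperature rise is a hypothetical round number, not a figure from the paper.

```python
# Back-of-envelope Fourier's law estimate. R_th is from the abstract
# (60 nm cell, amorphous phase); delta_T is an assumed, illustrative value.
R_th = 5.86e13          # K/W
delta_T = 600.0         # K, hypothetical rise needed to switch the cell
P_required = delta_T / R_th
print(f"P ≈ {P_required:.2e} W")  # on the order of 1e-11 W
```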

  9. Data Movement Dominates: Advanced Memory Technology to Address the Real Exascale Power Problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bergman, Keren

    Energy is the fundamental barrier to Exascale supercomputing and is dominated by the cost of moving data from one point to another, not computation. Similarly, performance is dominated by data movement, not computation. The solution to this problem requires three critical technologies: 3D integration, optical chip-to-chip communication, and a new communication model. The Sandia-led "Data Movement Dominates" project aimed to develop memory systems and new architectures based on these technologies that have the potential to lower the cost of local memory accesses by orders of magnitude and provide substantially more bandwidth. Only through such transformational advances can future systems reach the goals of Exascale computing with manageable power budgets. The Sandia-led team included co-PIs from Columbia University, Lawrence Berkeley Lab, and the University of Maryland. The Columbia effort of Data Movement Dominates focused on developing a physically accurate simulation environment and experimental verification for optically-connected memory (OCM) systems that can enable continued performance scaling through high-bandwidth capacity, energy-efficient bit-rate transparency, and time-of-flight latency. With OCM, memory device parallelism and total capacity can scale to match future high-performance computing requirements without sacrificing data-movement efficiency. When we consider systems with integrated photonics, links to memory can be seamlessly integrated with the interconnection network; in a sense, memory becomes a primary aspect of the interconnection network. At the core of the Columbia effort, toward expanding our understanding of OCM-enabled computing, we have created an integrated modeling and simulation environment that uniquely captures the physical behavior of the optical layer. The PhoenixSim suite of design and software tools developed under this effort has enabled the co-design and performance evaluation of photonics-enabled OCM architectures on Exascale computing systems.

  10. On supertaskers and the neural basis of efficient multitasking.

    PubMed

    Medeiros-Ward, Nathan; Watson, Jason M; Strayer, David L

    2015-06-01

    The present study used brain imaging to determine the neural basis of individual differences in multitasking, the ability to successfully perform at least two attention-demanding tasks at once. Multitasking is mentally taxing and, therefore, should recruit the prefrontal cortex to maintain task goals when coordinating attentional control and managing the cognitive load. To investigate this possibility, we used functional neuroimaging to assess neural activity in both extraordinary multitaskers (Supertaskers) and control subjects who were matched on working memory capacity. Participants performed a challenging dual N-back task in which auditory and visual stimuli were presented simultaneously, requiring independent and continuous maintenance, updating, and verification of the contents of verbal and spatial working memory. With the task requirements and considerable cognitive load that accompanied increasing N-back, relative to the controls, the multitasking of Supertaskers was characterized by more efficient recruitment of anterior cingulate and posterior frontopolar prefrontal cortices. Results are interpreted using neuropsychological and evolutionary perspectives on individual differences in multitasking ability and the neural correlates of attentional control.

  11. The benefits of physical activities on cognitive and mental health in healthy and pathological aging.

    PubMed

    Blanchet, Sophie; Chikhi, Samy; Maltais, Désirée

    2018-06-01

    Aging is associated with decreased efficiency of various cognitive functions, as well as with perceptive, physical, and physiological changes. Age-related cognitive decline mainly concerns attention, executive control, and episodic memory. Some factors, such as being physically active, protect against age-related decline. This review discusses how physical activity can positively affect the cognitive efficiency and mental health of healthy older individuals, and possibly reduce the risk of progression to dementia and depression. Underlying neurophysiological mechanisms play an important role in improving attention and episodic memory, which are the most sensitive to the effects of aging. We also present recommendations for the management of physical activity for the prevention of cognitive deficits and the reduction of depressive symptoms in older persons. Given the benefits of physical activity for the prevention of neurodegenerative disease and the improvement of well-being, it appears to be an important low-cost therapeutic approach that should be integrated into clinical practice.

  12. Does constraining memory maintenance reduce visual search efficiency?

    PubMed

    Buttaccio, Daniel R; Lange, Nicholas D; Thomas, Rick P; Dougherty, Michael R

    2018-03-01

    We examine whether constraining memory retrieval processes affects performance in a cued recall visual search task. In the visual search task, participants are first presented with a memory prompt followed by a search array. The memory prompt provides diagnostic information regarding a critical aspect of the target (its colour). We assume that upon the presentation of the memory prompt, participants retrieve and maintain hypotheses (i.e., potential target characteristics) in working memory in order to improve their search efficiency. By constraining retrieval through the manipulation of time pressure (Experiments 1A and 1B) or a concurrent working memory task (Experiments 2A, 2B, and 2C), we directly test the involvement of working memory in visual search. We find some evidence that visual search is less efficient under conditions in which participants were likely to be maintaining fewer hypotheses in working memory (Experiments 1A, 2A, and 2C), suggesting that the retrieval of representations from long-term memory into working memory can improve visual search. However, these results should be interpreted with caution, as the data from two experiments (Experiments 1B and 2B) did not lend support for this conclusion.

  13. Network resiliency through memory health monitoring and proactive management

    DOEpatents

    Andrade Costa, Carlos H.; Cher, Chen-Yong; Park, Yoonho; Rosenburg, Bryan S.; Ryu, Kyung D.

    2017-11-21

    A method for managing a network queue memory includes receiving sensor information about the network queue memory, predicting a memory failure in the network queue memory based on the sensor information, and outputting a notification through a plurality of nodes forming a network and using the network queue memory, the notification configuring communications between the nodes.
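    The patented method follows a simple sense-predict-notify loop. The sketch below is a hedged illustration of that loop only; the thresholds, field names, and notification payload are assumptions for demonstration, not details from the patent.

```python
# Illustrative sketch of proactive memory-health management: predict a
# failure from sensor readings and notify every node sharing the queue
# memory so communications can be reconfigured. All names are assumed.

FAILURE_TEMP_C = 85.0      # assumed temperature threshold
FAILURE_ERR_RATE = 1e-3    # assumed correctable-error-rate threshold

def predict_failure(sensor):
    """Flag a likely failure when temperature or the correctable-error
    rate crosses its threshold (a deliberately simple predictor)."""
    return (sensor["temp_c"] > FAILURE_TEMP_C or
            sensor["corr_err_rate"] > FAILURE_ERR_RATE)

def monitor(sensor, nodes):
    """Check the queue memory's sensors; on a predicted failure, push a
    reconfiguration notice to every node using that memory."""
    if predict_failure(sensor):
        for node in nodes:
            node.setdefault("notices", []).append("avoid-queue-memory")
        return True
    return False

nodes = [{"id": i} for i in range(3)]
assert monitor({"temp_c": 60.0, "corr_err_rate": 1e-6}, nodes) is False
assert monitor({"temp_c": 92.0, "corr_err_rate": 1e-6}, nodes) is True
```

    In a real system the predictor would of course be trained on historical sensor traces rather than fixed thresholds.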

  14. Intrinsic retrieval efficiency for quantum memories: A three-dimensional theory of light interaction with an atomic ensemble

    NASA Astrophysics Data System (ADS)

    Gujarati, Tanvi P.; Wu, Yukai; Duan, Luming

    2018-03-01

    The Duan-Lukin-Cirac-Zoller quantum repeater protocol, which was proposed to realize long-distance quantum communication, requires the use of quantum memories. Atomic ensembles interacting with optical beams via off-resonant Raman scattering serve as convenient on-demand quantum memories. Here, a complete free-space, three-dimensional theory of the associated read and write processes for this quantum memory is worked out with the aim of understanding intrinsic retrieval efficiency. We develop a formalism to calculate the transverse mode structure of the signal and idler photons and use it to study the intrinsic retrieval efficiency under various configurations. The effects of atomic density fluctuations and atomic motion are incorporated by numerically simulating the system for a range of realistic experimental parameters. We obtain results that describe the variation in intrinsic retrieval efficiency as a function of memory storage time for a skewed-beam configuration at finite temperature, which provides valuable information for optimizing retrieval efficiency in experiments.

  15. Cognitive load during route selection increases reliance on spatial heuristics.

    PubMed

    Brunyé, Tad T; Martis, Shaina B; Taylor, Holly A

    2018-05-01

    Planning routes from maps involves perceiving the symbolic environment, identifying alternate routes, and applying explicit strategies and implicit heuristics to select an option. Two implicit heuristics have received considerable attention: the southern route preference and the initial segment strategy. This study tested a prediction from decision-making theory that increasing cognitive load during route planning will increase reliance on these heuristics. In two experiments, participants planned routes while under conditions of minimal (0-back) or high (2-back) working memory load. In Experiment 1, we examined how memory load impacts the southern route heuristic. In Experiment 2, we examined how memory load impacts the initial segment heuristic. Results replicated earlier findings demonstrating a southern route preference (Experiment 1) and initial segment strategy (Experiment 2), and further demonstrated that evidence for heuristic reliance is more likely under conditions of concurrent working memory load. Furthermore, the extent to which participants maintained efficient route selection latencies in the 2-back condition predicted the magnitude of this effect. Together, these results demonstrate that working memory load increases the application of heuristics during spatial decision making, particularly when participants attempt to maintain quick decisions while managing concurrent task demands.

  16. Out-of-Core Streamline Visualization on Large Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Ueng, Shyh-Kuang; Sikorski, K.; Ma, Kwan-Liu

    1997-01-01

    It's advantageous for computational scientists to have the capability to perform interactive visualization on their desktop workstations. For data on large unstructured meshes, this capability is not generally available. In particular, particle tracing on unstructured grids can result in a high percentage of non-contiguous memory accesses and therefore may perform very poorly with virtual memory paging schemes. The alternative of visualizing a lower resolution of the data degrades the original high-resolution calculations. This paper presents an out-of-core approach for interactive streamline construction on large unstructured tetrahedral meshes containing millions of elements. The out-of-core algorithm uses an octree to partition and restructure the raw data into subsets stored in disk files for fast data retrieval. A memory management policy tailored to the streamline calculations is used such that during streamline construction only a very small amount of data is brought into main memory on demand. By carefully scheduling computation and data fetching, the overhead of reading data from the disk is significantly reduced and good memory performance results. This out-of-core algorithm makes possible interactive streamline visualization of large unstructured-grid data sets on a single mid-range workstation with relatively low main-memory capacity: 5-20 megabytes. Our test results also show that this approach is much more efficient than relying on virtual memory and the operating system's paging algorithms.
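    The core of the on-demand policy (load only the octree block containing the current streamline position, under a fixed memory budget) can be sketched generically. This is an illustration of the idea, not the paper's implementation; the block granularity, cache size, and names are assumptions.

```python
from collections import OrderedDict

# Sketch: cells are grouped into octree blocks stored on disk; tracing a
# streamline loads only the block it currently needs, keeping at most
# `max_blocks` blocks resident (least-recently-used eviction).

class BlockCache:
    def __init__(self, load_block, max_blocks=4):
        self.load_block = load_block        # reads one block from "disk"
        self.max_blocks = max_blocks        # in-memory budget in blocks
        self.cache = OrderedDict()
        self.disk_reads = 0

    def get(self, block_id):
        if block_id in self.cache:
            self.cache.move_to_end(block_id)    # mark recently used
        else:
            self.disk_reads += 1
            self.cache[block_id] = self.load_block(block_id)
            if len(self.cache) > self.max_blocks:
                self.cache.popitem(last=False)  # evict least recently used
        return self.cache[block_id]

# Simulated disk files: block_id -> cell data.
disk = {b: f"cells-of-block-{b}" for b in range(100)}
cache = BlockCache(disk.__getitem__, max_blocks=4)

# A streamline revisiting nearby blocks mostly hits the cache.
for b in [0, 0, 1, 0, 1, 2, 1, 2, 3, 2]:
    cache.get(b)
assert cache.disk_reads == 4    # only the four distinct blocks were read
```

    The locality of streamline traversal is what makes such a small resident set effective.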

  17. High efficiency coherent optical memory with warm rubidium vapour

    PubMed Central

    Hosseini, M.; Sparkes, B.M.; Campbell, G.; Lam, P.K.; Buchler, B.C.

    2011-01-01

    By harnessing aspects of quantum mechanics, communication and information processing could be radically transformed. Promising forms of quantum information technology include optical quantum cryptographic systems and computing using photons for quantum logic operations. As with current information processing systems, some form of memory will be required. Quantum repeaters, which are required for long distance quantum key distribution, require quantum optical memory as do deterministic logic gates for optical quantum computing. Here, we present results from a coherent optical memory based on warm rubidium vapour and show 87% efficient recall of light pulses, the highest efficiency measured to date for any coherent optical memory suitable for quantum information applications. We also show storage and recall of up to 20 pulses from our system. These results show that simple warm atomic vapour systems have clear potential as a platform for quantum memory. PMID:21285952

  18. High efficiency coherent optical memory with warm rubidium vapour.

    PubMed

    Hosseini, M; Sparkes, B M; Campbell, G; Lam, P K; Buchler, B C

    2011-02-01

    By harnessing aspects of quantum mechanics, communication and information processing could be radically transformed. Promising forms of quantum information technology include optical quantum cryptographic systems and computing using photons for quantum logic operations. As with current information processing systems, some form of memory will be required. Quantum repeaters, which are required for long distance quantum key distribution, require quantum optical memory as do deterministic logic gates for optical quantum computing. Here, we present results from a coherent optical memory based on warm rubidium vapour and show 87% efficient recall of light pulses, the highest efficiency measured to date for any coherent optical memory suitable for quantum information applications. We also show storage and recall of up to 20 pulses from our system. These results show that simple warm atomic vapour systems have clear potential as a platform for quantum memory.

  19. ClimateSpark: An in-memory distributed computing framework for big climate data analytics

    NASA Astrophysics Data System (ADS)

    Hu, Fei; Yang, Chaowei; Schnase, John L.; Duffy, Daniel Q.; Xu, Mengchao; Bowen, Michael K.; Lee, Tsengdar; Song, Weiwei

    2018-06-01

    The unprecedented growth of climate data creates new opportunities for climate studies, yet big climate data pose a grand challenge for climatologists to manage and analyze efficiently. The complexity of climate data content and analytical algorithms increases the difficulty of implementing algorithms on high performance computing systems. This paper proposes an in-memory, distributed computing framework, ClimateSpark, to facilitate complex big data analytics and time-consuming computational tasks. A chunked data structure improves parallel I/O efficiency, while a spatiotemporal index built over the chunks avoids unnecessary data reading and preprocessing. An integrated, multi-dimensional, array-based data model (ClimateRDD) and ETL operations are developed to address big climate data variety by integrating the processing components of the climate data lifecycle. ClimateSpark utilizes Spark SQL and Apache Zeppelin to provide a web portal that facilitates interaction among climatologists, climate data, analytic operations, and computing resources (e.g., using SQL queries and Scala/Python notebooks). Experimental results show that ClimateSpark conducts different spatiotemporal data queries/analytics with high efficiency and data locality. ClimateSpark is easily adaptable to other big multi-dimensional, array-based datasets in various geoscience domains.
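    The chunking-plus-index idea is simple to state: each chunk records its spatiotemporal bounds, and a query touches only overlapping chunks instead of scanning the whole dataset. The sketch below illustrates that idea generically; it is not ClimateSpark code, and the two-dimensional (time, latitude) bounds are a simplifying assumption.

```python
# Minimal spatiotemporal chunk index: store each chunk's bounds, and
# answer a query by returning only the chunks whose bounds overlap it.

class ChunkIndex:
    def __init__(self):
        self.chunks = []    # entries: (t0, t1, lat0, lat1, chunk_id)

    def add(self, t0, t1, lat0, lat1, chunk_id):
        self.chunks.append((t0, t1, lat0, lat1, chunk_id))

    def query(self, t0, t1, lat0, lat1):
        """Ids of chunks whose time and latitude intervals both overlap
        the query box; everything else is skipped without being read."""
        return [cid for (a, b, lo, hi, cid) in self.chunks
                if a <= t1 and b >= t0 and lo <= lat1 and hi >= lat0]

idx = ChunkIndex()
idx.add(0, 9, -90, 0, "south-early")
idx.add(0, 9, 0, 90, "north-early")
idx.add(10, 19, -90, 0, "south-late")
idx.add(10, 19, 0, 90, "north-late")

# A query for early northern-hemisphere data reads one chunk, not four.
assert idx.query(2, 5, 10, 20) == ["north-early"]
```

    A production index would add the remaining dimensions (longitude, altitude, variable) and a tree structure so lookups avoid the linear scan.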

  20. Multi-core processing and scheduling performance in CMS

    NASA Astrophysics Data System (ADS)

    Hernández, J. M.; Evans, D.; Foulkes, S.

    2012-12-01

    Commodity hardware is going many-core. We might soon not be able to satisfy the job memory needs per core in the current single-core processing model in High Energy Physics. In addition, an ever increasing number of independent and incoherent jobs running on the same physical hardware without sharing resources might significantly affect processing performance. It will be essential to effectively utilize the multi-core architecture. CMS has incorporated support for multi-core processing in the event processing framework and the workload management system. Multi-core processing jobs share common data in memory, such as the code libraries, detector geometry, and conditions data, resulting in much lower memory usage than standard single-core independent jobs. Exploiting this new processing model requires a new model of computing resource allocation, departing from the standard single-core allocation per job. The experiment job management system needs to control a larger quantum of resource, since multi-core aware jobs require the scheduling of multiple cores simultaneously. CMS is exploring the approach of using whole nodes as the unit in the workload management system, where all cores of a node are allocated to a multi-core job. Whole-node scheduling allows for optimization of the data/workflow management (e.g. I/O caching, local merging), but efficient utilization of all scheduled cores is challenging. Dedicated whole-node queues have been set up at all Tier-1 centers for exploring multi-core processing workflows in CMS. We present an evaluation of the performance of scheduling and executing multi-core workflows in whole-node queues, compared to the standard single-core processing workflows.

  1. PIYAS-proceeding to intelligent service oriented memory allocation for flash based data centric sensor devices in wireless sensor networks.

    PubMed

    Rizvi, Sanam Shahla; Chung, Tae-Sun

    2010-01-01

    Flash memory has become a widespread storage medium for modern wireless devices because of its effective characteristics: non-volatility, small size, light weight, fast access speed, shock resistance, high reliability, and low power consumption. Sensor nodes are highly resource constrained in terms of limited processing speed, runtime memory, persistent storage, communication bandwidth, and finite energy. Therefore, for wireless sensor networks supporting sense, store, merge, and send schemes, an efficient and reliable file system is required that takes sensor node constraints into consideration. In this paper, we propose a novel log-structured, external NAND flash memory based file system, called Proceeding to Intelligent service oriented memorY Allocation for flash based data centric Sensor devices in wireless sensor networks (PIYAS). This is the extended version of our previously proposed PIYA [1]. The main goals of the PIYAS scheme are to achieve instant mounting and a reduced SRAM footprint by keeping the memory mapping information very small, and to provide high query response throughput by allocating memory to sensor data according to network business rules. The scheme intelligently samples and stores the raw data and provides high in-network data availability by keeping the aggregate data for a longer period of time than any previous scheme. We propose effective garbage collection and wear-leveling schemes as well. The experimental results show that PIYAS is an optimized memory management scheme allowing high performance for wireless sensor networks.
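    One of the flash-specific mechanisms the abstract mentions, wear leveling, amounts to steering writes toward the least-worn free blocks so no block's erase budget is exhausted early. The sketch below shows that core idea only; the data structures and names are illustrative assumptions, not PIYAS internals.

```python
import heapq

# Wear-leveling free-block allocator: keep free blocks in a min-heap
# keyed by erase count, so allocation always returns the least-worn
# block and erasures spread evenly across the flash.

class WearLeveler:
    def __init__(self, n_blocks):
        self.erase_count = [0] * n_blocks
        self.free = [(0, b) for b in range(n_blocks)]  # (erases, block)
        heapq.heapify(self.free)

    def allocate(self):
        """Hand out the free block with the fewest erase cycles."""
        _, block = heapq.heappop(self.free)
        return block

    def reclaim(self, block):
        """Garbage collection erased this block; return it, more worn."""
        self.erase_count[block] += 1
        heapq.heappush(self.free, (self.erase_count[block], block))

wl = WearLeveler(3)
a = wl.allocate()    # an unworn block
wl.reclaim(a)        # now erased once
b = wl.allocate()    # a different, still-unworn block is preferred
assert a != b
```

    A real flash file system would combine this with the log-structured layout so that hot (frequently rewritten) data does not pin itself to a few blocks.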

  2. Prospective memory in schizophrenia: relationship to medication management skills, neurocognition, and symptoms in individuals with schizophrenia.

    PubMed

    Raskin, Sarah A; Maye, Jacqueline; Rogers, Alexandra; Correll, David; Zamroziewicz, Marta; Kurtz, Matthew

    2014-05-01

    Impaired adherence to medication regimens is a serious concern for individuals with schizophrenia, linked to relapse and poorer outcomes. One possible reason for poor adherence to medication is poor ability to remember future intentions, labeled prospective memory skills. It has been demonstrated in several studies that individuals with schizophrenia have impairments in prospective memory that are linked to everyday life skills. However, there have been no studies, to our knowledge, examining the relationship of a clinical measure of prospective memory to medication management skills, a key element of successful adherence. In this study, 41 individuals with schizophrenia and 25 healthy adults were administered a standardized test battery that included measures of prospective memory, medication management skills, neurocognition, and symptoms. Individuals with schizophrenia demonstrated impairments in prospective memory (both time and event-based) relative to healthy controls. Performance on the test of prospective memory was correlated with the standardized measure of medication management in individuals with schizophrenia. Moreover, the test of prospective memory predicted skills in medication adherence even after measures of neurocognition were accounted for. This suggests that prospective memory may play a key role in medication management skills and thus should be a target of cognitive remediation programs.

  3. A model of memory impairment in schizophrenia: cognitive and clinical factors associated with memory efficiency and memory errors.

    PubMed

    Brébion, Gildas; Bressan, Rodrigo A; Ohlsen, Ruth I; David, Anthony S

    2013-12-01

    Memory impairments in patients with schizophrenia have been associated with various cognitive and clinical factors. Hallucinations have been more specifically associated with errors stemming from source monitoring failure. We conducted a broad investigation of verbal memory and visual memory as well as source memory functioning in a sample of patients with schizophrenia. Various memory measures were tallied, and we studied their associations with processing speed, working memory span, and positive, negative, and depressive symptoms. Superficial and deep memory processes were differentially associated with processing speed, working memory span, avolition, depression, and attention disorders. Auditory/verbal and visual hallucinations were differentially associated with specific types of source memory error. We integrated all the results into a revised version of a previously published model of memory functioning in schizophrenia. The model describes the factors that affect memory efficiency, as well as the cognitive underpinnings of hallucinations within the source monitoring framework.

  4. Temperature and leakage aware techniques to improve cache reliability

    NASA Astrophysics Data System (ADS)

    Akaaboune, Adil

    Decreasing power consumption in small devices such as handhelds, cell phones, and high-performance processors is now one of the most critical design concerns. On-chip cache memories dominate chip area in microprocessors, so the need arises for power-efficient cache memories. Cache is the simplest cost-effective method of attaining a high-speed memory hierarchy, and its performance is extremely critical for high-speed computers. The cache is used by the microprocessor to bridge the performance gap between the processor and main memory (RAM); hence memory bandwidth is frequently a bottleneck that can significantly affect peak throughput. In the design of any cache system, the tradeoffs of area/cost, performance, power consumption, and thermal management must be taken into consideration. Previous work has mainly concentrated on performance and area/cost constraints. More recent work has focused on low-power design, especially for portable devices and media-processing systems; however, less research has been done on the relationship between heat management, leakage power, and cost per die. Lately, the focus of power dissipation in new generations of microprocessors has shifted from dynamic power to idle power, a previously underestimated form of power loss that causes the battery to drain and the device to shut down too early due to the waste of energy. The problem has been aggravated by the aggressive scaling of process technology, a device-level method originally used by designers to enhance performance, reduce dissipation, and shrink increasingly dense digital circuits. This dissertation studies the impact of hotspots in the cache memory on leakage consumption and on microprocessor reliability and durability. The work first shows that by eliminating hotspots in the cache memory, leakage power is reduced and, therefore, reliability is improved.
The second technique studied is data quality management, which improves the quality of the data stored in the cache to reduce power consumption. The initial work on this subject focuses on the types of data that increase leakage consumption and on ways to manage them without impacting the performance of the microprocessor. The second phase of the project focuses on managing data storage across different blocks of the cache to smooth leakage power as well as dynamic power consumption. The last technique is a voltage-controlled cache that reduces the leakage consumption of the cache during execution and even in the idle state. Two blocks of the 4-way set-associative cache are powered through a voltage regulator before reaching the voltage well, and the other two are directly connected to the voltage well. The idea behind this technique is to use the replacement algorithm's information to increase or decrease the voltage of the two regulated blocks depending on the need for the information stored in them.

  5. An FPGA-Based High-Speed Error Resilient Data Aggregation and Control for High Energy Physics Experiment

    NASA Astrophysics Data System (ADS)

    Mandal, Swagata; Saini, Jogender; Zabołotny, Wojciech M.; Sau, Suman; Chakrabarti, Amlan; Chattopadhyay, Subhasis

    2017-03-01

    Due to the dramatic increase of data volume in modern high energy physics (HEP) experiments, a robust high-speed data acquisition (DAQ) system is very much needed to gather the data generated during different nuclear interactions. As the DAQ works in a harsh radiation environment, there is a fair chance of data corruption from various energetic particles such as alpha or beta particles or neutrons. Hence, a major challenge in the development of a DAQ for an HEP experiment is to establish an error-resilient communication system between front-end sensors or detectors and back-end data-processing computing nodes. Here, we have implemented the DAQ using a field-programmable gate array (FPGA) because of some of its inherent advantages over application-specific integrated circuits. A novel orthogonal concatenated code and a cyclic redundancy check (CRC) have been used to mitigate the effects of data corruption in the user data. Scrubbing with a 32-bit CRC has been used against errors in the configuration memory of the FPGA. Data from front-end sensors reach the back-end processing nodes through multiple stages that may add an uncertain amount of delay to different data packets. We have also proposed a novel memory management algorithm that helps process the data at the back-end computing nodes by removing the added path delays. To the best of our knowledge, the proposed FPGA-based DAQ, utilizing an optical link with channel coding and efficient memory management modules, is the first of its kind. Performance of the implemented DAQ system is estimated in terms of resource utilization, bit error rate, efficiency, and robustness to radiation.
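    The back-end problem the abstract describes (packets arriving with different, uncertain path delays) is classically solved with a reorder buffer keyed by sequence number. The sketch below is a generic illustration of that technique, not the paper's FPGA algorithm; the API is assumed.

```python
import heapq

# Reorder buffer: out-of-order packets are held in a min-heap by
# sequence number and released only when they become in-order, hiding
# the variable path delays added between front end and back end.

class ReorderBuffer:
    def __init__(self):
        self.expected = 0      # next sequence number to release
        self.pending = []      # min-heap of (seq, payload)

    def push(self, seq, payload):
        """Accept a packet; return all packets now releasable in order."""
        heapq.heappush(self.pending, (seq, payload))
        out = []
        while self.pending and self.pending[0][0] == self.expected:
            out.append(heapq.heappop(self.pending)[1])
            self.expected += 1
        return out

rb = ReorderBuffer()
assert rb.push(1, "b") == []             # early arrival, buffered
assert rb.push(0, "a") == ["a", "b"]     # gap filled, both released
assert rb.push(2, "c") == ["c"]
```

    A hardware version would additionally bound the buffer depth and time out lost packets; both are omitted here for clarity.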

  6. Hard Real-Time: C++ Versus RTSJ

    NASA Technical Reports Server (NTRS)

    Dvorak, Daniel L.; Reinholtz, William K.

    2004-01-01

    In the domain of hard real-time systems, which language is better: C++ or the Real-Time Specification for Java (RTSJ)? Although ordinary Java provides a more productive programming environment than C++ due to its automatic memory management, that benefit does not apply to RTSJ when using NoHeapRealtimeThread and non-heap memory areas. As a result, RTSJ programmers must manage non-heap memory explicitly. While that's not a deterrent for veteran real-time programmers-where explicit memory management is common-the lack of certain language features in RTSJ (and Java) makes that manual memory management harder to accomplish safely than in C++. This paper illustrates the problem for practitioners in the context of moving data and managing memory in a real-time producer/consumer pattern. The relative ease of implementation and safety of the C++ programming model suggests that RTSJ has a struggle ahead in the domain of hard real-time applications, despite its other attractive features.
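    The producer/consumer pattern at issue avoids garbage-collected allocation by recycling preallocated buffers through an explicit pool, which is what RTSJ's non-heap areas and C++'s manual management both amount to. The following Python sketch only illustrates the pattern's shape (fixed pool, explicit acquire/release, no steady-state allocation); it is not RTSJ or C++ code, and the class and method names are invented for the example.

```python
from collections import deque

# Pooled producer/consumer: buffers are allocated once up front and
# recycled explicitly, so the steady state performs no new allocation
# (mimicking non-heap memory discipline in a GC-free real-time system).

class BufferPool:
    def __init__(self, n_buffers, size):
        self.free = deque(bytearray(size) for _ in range(n_buffers))

    def acquire(self):
        if not self.free:
            # A real-time system must fail (or block) explicitly here;
            # there is no garbage collector to fall back on.
            raise RuntimeError("pool exhausted")
        return self.free.popleft()

    def release(self, buf):
        self.free.append(buf)

pool = BufferPool(n_buffers=2, size=64)
queue = deque()              # producer -> consumer channel

buf = pool.acquire()         # producer fills a pooled buffer
buf[:5] = b"hello"
queue.append(buf)

out = queue.popleft()        # consumer reads, then recycles the buffer
assert bytes(out[:5]) == b"hello"
pool.release(out)
assert len(pool.free) == 2   # buffer returned to the pool
```

    The paper's point is that C++ destructors and ownership conventions make this recycling easier to get right than RTSJ's scoped-memory rules.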

  7. Protection of Mission-Critical Applications from Untrusted Execution Environment: Resource Efficient Replication and Migration of Virtual Machines

    DTIC Science & Technology

    2015-09-28

    The performance of log-and-replay can degrade significantly for VMs configured with multiple virtual CPUs, since the shared memory communication... Whether based on checkpoint replication or log-and-replay, existing HA approaches use in-memory backups. The backup VM sits in the memory of a... efficiently. Subject terms: high-availability virtual machines, live migration, memory and traffic overheads, application suspension, Java.

  8. PIMS: Memristor-Based Processing-in-Memory-and-Storage.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cook, Jeanine

    Continued progress in computing has augmented the quest for higher performance with a new quest for higher energy efficiency. This has led to the re-emergence of Processing-In-Memory (PIM) architectures that offer higher density and performance with some boost in energy efficiency. Past PIM work either integrated a standard CPU with a conventional DRAM to improve the CPU-memory link, or used a bit-level processor with Single Instruction Multiple Data (SIMD) control, but neither matched the energy consumption of the memory to the computation. We originally proposed to develop a new architecture derived from PIM that more effectively addressed energy efficiency for high performance scientific, data analytics, and neuromorphic applications. We also originally planned to implement a von Neumann architecture with arithmetic/logic units (ALUs) that matched the power consumption of an advanced storage array to maximize energy efficiency. Implementing this architecture in storage was our original idea, since by augmenting storage (instead of memory), the system could address both in-memory computation and applications that access larger data sets directly from storage, hence Processing-in-Memory-and-Storage (PIMS). However, as our research matured, we discovered several things that changed our original direction, the most important being that a PIM implementing a standard von Neumann-type architecture yields a significant energy efficiency improvement, but only about an O(10) performance improvement. In addition, the emergence of new memory technologies moved us to propose a non-von Neumann architecture, called Superstrider, implemented not in storage but in a new DRAM technology called High Bandwidth Memory (HBM). HBM is a stacked DRAM technology that includes a logic layer where an architecture such as Superstrider could potentially be implemented.

  9. Job Management Requirements for NAS Parallel Systems and Clusters

    NASA Technical Reports Server (NTRS)

    Saphir, William; Tanner, Leigh Ann; Traversat, Bernard

    1995-01-01

    A job management system is a critical component of a production supercomputing environment, permitting oversubscribed resources to be shared fairly and efficiently. Job management systems that were originally designed for traditional vector supercomputers are not appropriate for the distributed-memory parallel supercomputers that are becoming increasingly important in the high performance computing industry. Newer job management systems offer new functionality but do not solve fundamental problems. We address some of the main issues in resource allocation and job scheduling we have encountered on two parallel computers - a 160-node IBM SP2 and a cluster of 20 high performance workstations located at the Numerical Aerodynamic Simulation facility. We describe the requirements for resource allocation and job management that are necessary to provide a production supercomputing environment on these machines, prioritizing according to difficulty and importance, and advocating a return to fundamental issues.

  10. Comparing memory-efficient genome assemblers on stand-alone and cloud infrastructures.

    PubMed

    Kleftogiannis, Dimitrios; Kalnis, Panos; Bajic, Vladimir B

    2013-01-01

    A fundamental problem in bioinformatics is genome assembly. Next-generation sequencing (NGS) technologies produce large volumes of fragmented genome reads, which require large amounts of memory to assemble the complete genome efficiently. With recent improvements in DNA sequencing technologies, it is expected that the memory footprint required for the assembly process will increase dramatically and will emerge as a limiting factor in processing widely available NGS-generated reads. In this report, we compare current memory-efficient techniques for genome assembly with respect to quality, memory consumption and execution time. Our experiments prove that it is possible to generate draft assemblies of reasonable quality on conventional multi-purpose computers with very limited available memory by choosing suitable assembly methods. Our study reveals the minimum memory requirements for different assembly programs even when data volume exceeds memory capacity by orders of magnitude. By combining existing methodologies, we propose two general assembly strategies that can improve short-read assembly approaches and result in reduction of the memory footprint. Finally, we discuss the possibility of utilizing cloud infrastructures for genome assembly and we comment on some findings regarding suitable computational resources for assembly.

  11. Spatial working memory load affects counting but not subitizing in enumeration.

    PubMed

    Shimomura, Tomonari; Kumada, Takatsune

    2011-08-01

    The present study investigated whether subitizing reflects capacity limitations associated with two types of working memory tasks. Under a dual-task situation, participants performed an enumeration task in conjunction with either a spatial (Experiment 1) or a nonspatial visual (Experiment 2) working memory task. Experiment 1 showed that spatial working memory load affected the slope of a counting function but did not affect subitizing performance or subitizing range. Experiment 2 showed that nonspatial visual working memory load affected neither enumeration efficiency nor subitizing range. Furthermore, in both spatial and nonspatial memory tasks, neither subitizing efficiency nor subitizing range was affected by amount of imposed memory load. In all the experiments, working memory load failed to influence slope, subitizing range, or overall reaction time. These findings suggest that subitizing is performed without either spatial or nonspatial working memory. A possible mechanism of subitizing with independent capacity of working memory is discussed.

  12. A class Hierarchical, object-oriented approach to virtual memory management

    NASA Technical Reports Server (NTRS)

    Russo, Vincent F.; Campbell, Roy H.; Johnston, Gary M.

    1989-01-01

    The Choices family of operating systems exploits class hierarchies and object-oriented programming to facilitate the construction of customized operating systems for shared memory and networked multiprocessors. The software is being used in the Tapestry laboratory to study the performance of algorithms, mechanisms, and policies for parallel systems. Described here are the architectural design and class hierarchy of the Choices virtual memory management system. The software and hardware mechanisms and policies of a virtual memory system implement a memory hierarchy that exploits the trade-off between response times and storage capacities. In Choices, the notion of a memory hierarchy is captured by abstract classes. Concrete subclasses of those abstractions implement a virtual address space, segmentation, paging, physical memory management, secondary storage, and remote (that is, networked) storage. Captured in the notion of a memory hierarchy are classes that represent memory objects. These classes provide a storage mechanism that contains encapsulated data and have methods to read or write the memory object. Each of these classes provides specializations to represent the memory hierarchy.
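    The Choices design (abstract memory-hierarchy classes specialized by concrete subclasses, with memory objects encapsulating storage behind read/write methods) can be rendered as a toy class hierarchy. This sketch is illustrative only; Choices is written in C++, and the class names and trivial one-to-one paging below are assumptions for the example.

```python
from abc import ABC, abstractmethod

# Toy object-oriented memory hierarchy: an abstract MemoryObject
# encapsulates storage behind read/write methods, and concrete
# subclasses specialize it for different hierarchy levels.

class MemoryObject(ABC):
    @abstractmethod
    def read(self, offset, length): ...
    @abstractmethod
    def write(self, offset, data): ...

class PhysicalMemory(MemoryObject):
    """Leaf of the hierarchy: a flat array of bytes."""
    def __init__(self, size):
        self.data = bytearray(size)
    def read(self, offset, length):
        return bytes(self.data[offset:offset + length])
    def write(self, offset, data):
        self.data[offset:offset + len(data)] = data

class PagedMemory(MemoryObject):
    """A virtual address space layered over a backing memory object.
    The identity mapping here stands in for real page translation."""
    def __init__(self, backing, page_size=4096):
        self.backing, self.page_size = backing, page_size
    def read(self, offset, length):
        return self.backing.read(offset, length)
    def write(self, offset, data):
        self.backing.write(offset, data)

ram = PhysicalMemory(16)
vm = PagedMemory(ram)
vm.write(0, b"abcd")
assert vm.read(0, 4) == b"abcd"
```

    Because every level shares the MemoryObject interface, levels compose freely: a paging layer, a segment, or a networked store can each back any other, which is the point of capturing the hierarchy in classes.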

  13. Efficient High Performance Collective Communication for Distributed Memory Environments

    ERIC Educational Resources Information Center

    Ali, Qasim

    2009-01-01

    Collective communication allows efficient communication and synchronization among a collection of processes, unlike point-to-point communication that only involves a pair of communicating processes. Achieving high performance for both kernels and full-scale applications running on a distributed memory system requires an efficient implementation of…

  14. Prospective memory in schizophrenia: Relationship to medication management skills, neurocognition and symptoms in individuals with schizophrenia

    PubMed Central

    Raskin, S.; Maye, J.; Rogers, A.; Correll, D.; Zamroziewicz, M.; Kurtz, M.

    2014-01-01

    Objective: Impaired adherence to medication regimens is a serious concern for individuals with schizophrenia, linked to relapse and poorer outcomes. One possible reason for poor adherence to medication is a poor ability to remember future intentions, termed prospective memory. Several studies have demonstrated that individuals with schizophrenia have impairments in prospective memory that are linked to everyday life skills. However, there have been no studies, to our knowledge, examining the relationship of a clinical measure of prospective memory to medication management skills, a key element of successful adherence. Methods: In this study, 41 individuals with schizophrenia and 25 healthy adults were administered a standardized test battery that included measures of prospective memory, medication management skills, neurocognition, and symptoms. Results: Individuals with schizophrenia demonstrated impairments in prospective memory (both time- and event-based) relative to healthy controls. Performance on the test of prospective memory was correlated with the standardized measure of medication management in individuals with schizophrenia. Moreover, the test of prospective memory predicted medication adherence skills even after measures of neurocognition were accounted for. Conclusions: These findings suggest that prospective memory may play a key role in medication management skills and thus should be a target of cognitive remediation programs. PMID:24188118

  15. Radiation-Hardened Solid-State Drive

    NASA Technical Reports Server (NTRS)

    Sheldon, Douglas J.

    2010-01-01

    A method is provided for a radiation-hardened (rad-hard) solid-state drive for space-mission memory applications that combines rad-hard and commercial off-the-shelf (COTS) non-volatile memories (NVMs) into a hybrid architecture. The architecture is controlled by a rad-hard ASIC (application-specific integrated circuit) or an FPGA (field-programmable gate array). Specific error-handling and data-management protocols are developed for use in a rad-hard environment. The rad-hard memories are smaller in overall memory density, but are used to control and manage radiation-induced errors in the main, much larger density, non-rad-hard COTS memory devices. Small amounts of rad-hard memory serve as error buffers and temporary caches for radiation-induced errors in the large COTS memories. The rad-hard ASIC/FPGA implements a variety of error-handling protocols to manage these radiation-induced errors. The large COTS memory is triplicated for protection, and CRC-based counters are calculated for sub-areas in each COTS NVM array. These counters are stored in the rad-hard non-volatile memory. Through monitoring, rewriting, regeneration, triplication, and long-term storage, radiation-induced errors in the large NV memory are managed. The rad-hard ASIC/FPGA also interfaces with the external computer buses.
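The abstract names two generic ingredients, triplication of the COTS memory and CRC counters over sub-areas, without giving the patent's protocols. A minimal sketch of those two ingredients (function names and byte-level voting are illustrative assumptions, not the patented design) might look like:

```python
import zlib

def majority_vote(copy_a: bytes, copy_b: bytes, copy_c: bytes) -> bytes:
    """Bitwise 2-of-3 majority vote across three replicas of a memory page."""
    return bytes((a & b) | (a & c) | (b & c)
                 for a, b, c in zip(copy_a, copy_b, copy_c))

def subarea_dirty(data: bytes, stored_crc: int) -> bool:
    """Compare a sub-area against its stored CRC counter (illustrative)."""
    return zlib.crc32(data) != stored_crc

page = b"\x5a\x3c"
hit = bytes([page[0] ^ 0x04]) + page[1:]   # single-event upset in one replica
assert majority_vote(hit, page, page) == page   # outvoted by the clean copies
assert subarea_dirty(hit, zlib.crc32(page))     # CRC scrub flags the damage
```

Voting masks a single upset immediately, while the CRC scrub identifies which sub-area needs rewriting before a second upset accumulates.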

  16. Working memory capacity and redundant information processing efficiency.

    PubMed

    Endres, Michael J; Houpt, Joseph W; Donkin, Chris; Finn, Peter R

    2015-01-01

    Working memory capacity (WMC) is typically measured by the amount of task-relevant information an individual can keep in mind while resisting distraction or interference from task-irrelevant information. The current research investigated the extent to which differences in WMC were associated with performance on a novel redundant memory probes (RMP) task that systematically varied the amount of to-be-remembered (targets) and to-be-ignored (distractor) information. The RMP task was designed to both facilitate and inhibit working memory search processes, as evidenced by differences in accuracy, response time, and Linear Ballistic Accumulator (LBA) model estimates of information processing efficiency. Participants (N = 170) completed standard intelligence tests and dual-span WMC tasks, along with the RMP task. As expected, accuracy, response-time, and LBA model results indicated memory search and retrieval processes were facilitated under redundant-target conditions, but also inhibited under mixed target/distractor and redundant-distractor conditions. Repeated measures analyses also indicated that, while individuals classified as high (n = 85) and low (n = 85) WMC did not differ in the magnitude of redundancy effects, groups did differ in the efficiency of memory search and retrieval processes overall. Results suggest that redundant information reliably facilitates and inhibits the efficiency or speed of working memory search, and these effects are independent of more general limits and individual differences in the capacity or space of working memory.

  17. Dynamic Forest: An Efficient Index Structure for NAND Flash Memory

    NASA Astrophysics Data System (ADS)

    Yang, Chul-Woong; Yong Lee, Ki; Ho Kim, Myoung; Lee, Yoon-Joon

    In this paper, we present an efficient index structure for NAND flash memory, called the Dynamic Forest (D-Forest). Since write operations incur high overhead on NAND flash memory, D-Forest is designed to minimize write operations for index updates. The experimental results show that D-Forest significantly reduces write operations compared to the conventional B+-tree.
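The abstract does not detail D-Forest's internals, but the flash-index principle it appeals to, trading cheap RAM buffering for fewer expensive flash writes, can be sketched with a hypothetical `BufferedIndex` (a generic write-buffering scheme, not D-Forest itself):

```python
class BufferedIndex:
    """Illustrative write-buffering index: updates accumulate in RAM and
    are flushed to flash in batches, so many logical updates cost one
    physical flash page write."""

    def __init__(self, flush_threshold=64):
        self.buffer = {}        # pending key -> value updates in RAM
        self.flash_pages = []   # each flush appends one simulated flash page
        self.flush_threshold = flush_threshold

    def insert(self, key, value):
        self.buffer[key] = value
        if len(self.buffer) >= self.flush_threshold:
            self.flush()

    def flush(self):
        self.flash_pages.append(dict(self.buffer))  # one batched flash write
        self.buffer.clear()

    def lookup(self, key):
        if key in self.buffer:
            return self.buffer[key]
        for page in reversed(self.flash_pages):     # newest page wins
            if key in page:
                return page[key]
        return None

idx = BufferedIndex(flush_threshold=4)
for i in range(8):
    idx.insert(i, i * i)
assert idx.lookup(5) == 25
assert len(idx.flash_pages) == 2   # 8 logical inserts, only 2 physical writes
```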

  18. The effect of nonadiabaticity on the efficiency of quantum memory based on an optical cavity

    NASA Astrophysics Data System (ADS)

    Veselkova, N. G.; Sokolov, I. V.

    2017-07-01

    Quantum efficiency is an important characteristic of quantum memory devices aimed at recording, storing, and reading the quantum state of light signals. In the case of memory based on an ensemble of cold atoms placed in an optical cavity, the efficiency is restricted, in particular, by relaxation processes in the system of active atomic levels. We show how the effect of relaxation on the quantum efficiency can be determined in a regime of memory usage in which the evolution of signals in time is not arbitrarily slow on the scale of the field lifetime in the cavity, and in which the frequently used approximation of adiabatic elimination of the quantized cavity-mode field cannot be applied. Accounting for the effect of nonadiabaticity on memory quality is of interest because increasing the field-medium coupling parameter requires a higher cavity quality factor, whereas storing and processing sequences of many signals in the memory implies that their duration is reduced. We examine the applicability of the well-known efficiency estimates via the system cooperativity parameter and derive a more general estimate. In connection with the theoretical description of this type of memory, we also discuss qualitative differences in the behavior of a random source introduced into the Heisenberg-Langevin equations for atomic variables in the cases of a large and a small number of atoms.

  19. Collective memory in primate conflict implied by temporal scaling collapse.

    PubMed

    Lee, Edward D; Daniels, Bryan C; Krakauer, David C; Flack, Jessica C

    2017-09-01

    In biological systems, prolonged conflict is costly, whereas contained conflict permits strategic innovation and refinement. Causes of variation in conflict size and duration are not well understood. We use a well-studied primate society model system to study how conflicts grow. We find conflict duration is a 'first to fight' growth process that scales superlinearly with the number of possible pairwise interactions. This contrasts with a 'first to fail' process that characterizes peaceful durations. Rescaling conflict distributions reveals a universal curve, showing that the typical time scale of correlated interactions exceeds nearly all individual fights. This temporal correlation implies collective memory across pairwise interactions beyond those assumed in standard models of contagion growth or iterated evolutionary games. By accounting for memory, we make quantitative predictions for interventions that mitigate or enhance the spread of conflict. Managing conflict involves balancing the efficient use of limited resources with an intervention strategy that allows for conflict while keeping it contained and controlled. © 2017 The Author(s).

  20. High-performance Raman memory with spatio-temporal reversal

    NASA Astrophysics Data System (ADS)

    Vernaz-Gris, Pierre; Tranter, Aaron D.; Everett, Jesse L.; Leung, Anthony C.; Paul, Karun V.; Campbell, Geoff T.; Lam, Ping Koy; Buchler, Ben C.

    2018-05-01

    A number of techniques exist to use an ensemble of atoms as a quantum memory for light. Many of these propose to use backward retrieval as a way to improve the storage and recall efficiency. We report on a demonstration of an off-resonant Raman memory that uses backward retrieval to achieve an efficiency of 65 ± 6% at a storage time of one pulse duration. The memory has a characteristic decay time of 60 µs, corresponding to a delay-bandwidth product of 160.
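The two reported figures relate simply, assuming the delay-bandwidth product is the memory decay time divided by the pulse duration (a common convention; the paper may define it differently):

```python
# Implied pulse duration from the abstract's figures,
# assuming delay-bandwidth product = decay time / pulse duration.
decay_time_us = 60.0      # characteristic memory decay time, in µs
delay_bandwidth = 160     # reported delay-bandwidth product

pulse_duration_us = decay_time_us / delay_bandwidth
print(f"implied pulse duration ≈ {pulse_duration_us:.3f} µs")  # ≈ 0.375 µs
```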

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sancho Pitarch, Jose Carlos; Kerbyson, Darren; Lang, Mike

    Increasing the core count on current and future processors poses critical challenges to the memory subsystem in efficiently handling concurrent memory requests. The current trend to cope with this challenge is to increase the number of memory channels available to the processor's memory controller. In this paper we investigate the effectiveness of this approach on the performance of parallel scientific applications. Specifically, we explore the trade-off between employing multiple memory channels per memory controller and the use of multiple memory controllers. Experiments conducted on two current state-of-the-art multicore processors, a 6-core AMD Istanbul and a 4-core Intel Nehalem-EP, for a wide range of production applications show that there is a diminishing return when increasing the number of memory channels per memory controller. In addition, we show that this performance degradation can be efficiently addressed by increasing the ratio of memory controllers to channels while keeping the number of memory channels constant. Significant performance improvements, up to 28%, can be achieved in this scheme in the case of using two memory controllers, each with one channel, compared with one controller with two memory channels.

  2. k(+)-buffer: An Efficient, Memory-Friendly and Dynamic k-buffer Framework.

    PubMed

    Vasilakis, Andreas-Alexandros; Papaioannou, Georgios; Fudos, Ioannis

    2015-06-01

    Depth-sorted fragment determination is fundamental for a host of image-based techniques that simulate complex rendering effects. It is also a challenging task in terms of the time and space required when rasterizing scenes with high depth complexity. When low graphics-memory requirements are of utmost importance, the k-buffer can be considered the most suitable framework, since it ensures correct depth order on a subset of all generated fragments. Although various alternatives have been introduced to partially or completely alleviate the noticeable quality artifacts produced by the initial k-buffer algorithm, at the expense of increased memory or degraded performance, tools to automatically and dynamically compute the most suitable value of k are still missing. To this end, we introduce k(+)-buffer, a fast framework that accurately simulates the behavior of the k-buffer in a single rendering pass. Two memory-bounded data structures, (i) the max-array and (ii) the max-heap, are developed on the GPU to concurrently maintain the k-foremost fragments per pixel by exploiting pixel synchronization and fragment culling. Memory-friendly strategies are further introduced to dynamically (a) lessen the wasteful memory allocation of individual pixels with low depth-complexity frequencies, (b) minimize the allocated size of the k-buffer according to different application goals and hardware limitations via a straightforward depth-histogram analysis, and (c) manage the local GPU cache with a fixed-memory depth-sorting mechanism. Finally, an extensive experimental evaluation demonstrates the advantages of our work over all prior k-buffer variants in terms of memory usage, performance, and image quality.
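The per-pixel core of any k-buffer is keeping only the k nearest fragments as they stream in. A CPU-side sketch of that bounded selection using a max-heap (the paper's GPU max-heap with pixel synchronization is far more involved; this shows only the selection logic):

```python
import heapq

def k_foremost(depths, k):
    """Keep the k nearest (smallest-depth) fragments of a pixel using a
    bounded max-heap, as a k-buffer does per pixel."""
    heap = []                      # max-heap via negated depths
    for d in depths:
        if len(heap) < k:
            heapq.heappush(heap, -d)
        elif d < -heap[0]:         # closer than the current farthest kept
            heapq.heapreplace(heap, -d)   # evict farthest, keep this one
    return sorted(-d for d in heap)

fragments = [0.9, 0.2, 0.7, 0.1, 0.5, 0.3]   # depths in arrival order
assert k_foremost(fragments, 3) == [0.1, 0.2, 0.3]
```

Each incoming fragment costs O(log k), and memory stays fixed at k entries per pixel regardless of depth complexity, which is exactly the property that makes the value of k worth tuning.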

  3. A comparison of the Cray-2 performance before and after the installation of memory pseudo-banking

    NASA Technical Reports Server (NTRS)

    Schmickley, Ronald D.; Bailey, David H.

    1987-01-01

    A suite of 13 large Fortran benchmark codes was run on a Cray-2 configured with memory pseudo-banking circuits, and floating-point operation rates were measured for each under a variety of system load configurations. These were compared with similar measurements taken on the same system before installation of the pseudo-banking. A memory-access efficiency parameter was defined and calculated for both sets of performance rates, allowing a crude quantitative measure of the improvement in efficiency due to pseudo-banking. Programs were categorized as either highly scalar (S) or highly vectorized (V), and as either memory-intensive or register-intensive, giving four categories: S-memory, S-register, V-memory, and V-register. Using flop rates as a simple quantifier of these four categories, a scatter plot of efficiency gain vs. Mflops roughly illustrates the improvement in floating-point processing speed due to pseudo-banking. On the Cray-2 system tested, this improvement ranged from 1 percent for S-memory codes to about 12 percent for V-memory codes. No significant gains were observed for V-register codes, which was to be expected.

  4. Performance Analysis of Garbage Collection and Dynamic Reordering in a Lisp System. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Llames, Rene Lim

    1991-01-01

    Generation-based garbage collection and dynamic reordering of objects are two techniques for improving the efficiency of memory management in Lisp and similar dynamic-language systems. An analysis of the effect of generation configuration is presented, focusing on the effects of the number of generations and the generation capacities. Analytic timing and survival models are used to represent garbage-collection runtime and to derive structural results on its behavior. The survival model provides bounds on the age of objects surviving a garbage collection at a particular level. Empirical results show that execution time is most sensitive to the capacity of the youngest generation. A technique called scanning for transport statistics, for evaluating the effectiveness of reordering independent of main memory size, is presented.
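The generational idea the thesis analyzes, collect the small youngest generation often and promote survivors, can be illustrated with a toy two-generation collector (the class and the root-set reachability model are simplified assumptions, not the thesis's system):

```python
class GenerationalHeap:
    """Toy two-generation collector: most objects die young, so collecting
    only the small young generation is usually enough; survivors are
    promoted to the old generation, which is scanned far less often."""

    def __init__(self, young_capacity=4):
        self.young, self.old = [], []
        self.young_capacity = young_capacity

    def allocate(self, obj, roots):
        if len(self.young) >= self.young_capacity:
            self.minor_collect(roots)          # collect only the nursery
        self.young.append(obj)

    def minor_collect(self, roots):
        survivors = [o for o in self.young if o in roots]
        self.old.extend(survivors)             # promotion to old generation
        self.young = []                        # unreachable objects reclaimed

heap = GenerationalHeap(young_capacity=2)
roots = {"a"}                                  # only "a" stays reachable
for obj in ["a", "b", "c", "d"]:
    heap.allocate(obj, roots)
assert "a" in heap.old + heap.young            # survivor kept (promoted)
assert "b" not in heap.old + heap.young        # garbage reclaimed in a minor GC
```

The nursery capacity (`young_capacity` here) is the knob the thesis's empirical results single out: execution time is most sensitive to the capacity of the youngest generation.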

  5. Multi-core processing and scheduling performance in CMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hernandez, J. M.; Evans, D.; Foulkes, S.

    2012-01-01

    Commodity hardware is going many-core. We might soon not be able to satisfy the job memory needs per core in the current single-core processing model in High Energy Physics. In addition, an ever increasing number of independent and incoherent jobs running on the same physical hardware without sharing resources might significantly affect processing performance. It will be essential to effectively utilize the multi-core architecture. CMS has incorporated support for multi-core processing in the event processing framework and the workload management system. Multi-core processing jobs share common data in memory, such as the code libraries, detector geometry, and conditions data, resulting in much lower memory usage than standard single-core independent jobs. Exploiting this new processing model requires a new model of computing resource allocation, departing from the standard single-core allocation for a job. The experiment job management system needs control over a larger quantum of resource, since multi-core-aware jobs require the scheduling of multiple cores simultaneously. CMS is exploring the approach of using whole nodes as the unit in the workload management system, where all cores of a node are allocated to a multi-core job. Whole-node scheduling allows for optimization of the data/workflow management (e.g. I/O caching, local merging), but efficient utilization of all scheduled cores is challenging. Dedicated whole-node queues have been set up at all Tier-1 centers for exploring multi-core processing workflows in CMS. We present an evaluation of the performance of scheduling and executing multi-core workflows in whole-node queues compared with the standard single-core processing workflows.

  6. Computer memory management system

    DOEpatents

    Kirk, III, Whitson John

    2002-01-01

    A computer memory management system utilizing a memory structure system of "intelligent" pointers in which information related to the use status of the memory structure is designed into the pointer. Through this pointer system, the invention provides essentially automatic memory management (often referred to as garbage collection) by allowing relationships between objects to have definite memory-management behavior, using a coding protocol that describes when relationships should be maintained and when they should be broken. In one aspect, the system allows automatic breaking of strong links to facilitate object garbage collection, coupled with relationship adjectives that define deletion of associated objects. In another aspect, the invention includes simple-to-use infinite undo/redo functionality: through a simple function call, it can undo all of the changes made to a data model since the previous `valid state` was noted.
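The patent's pointer scheme is not spelled out in the abstract, but the "valid state" undo/redo behavior it mentions can be sketched with snapshot stacks (a generic checkpointing technique, not necessarily the patented mechanism):

```python
class UndoableModel:
    """Snapshot-based undo/redo: each marked 'valid state' is a checkpoint
    that a single undo() call can restore, however many edits followed it."""

    def __init__(self):
        self.state, self.undo_stack, self.redo_stack = {}, [], []

    def mark_valid_state(self):
        self.undo_stack.append(dict(self.state))   # checkpoint a snapshot
        self.redo_stack.clear()                    # new edits invalidate redo

    def undo(self):
        if self.undo_stack:
            self.redo_stack.append(dict(self.state))
            self.state = self.undo_stack.pop()

    def redo(self):
        if self.redo_stack:
            self.undo_stack.append(dict(self.state))
            self.state = self.redo_stack.pop()

m = UndoableModel()
m.mark_valid_state()
m.state["x"] = 1          # edits after the checkpoint
m.undo()
assert m.state == {}      # back to the previous valid state
m.redo()
assert m.state == {"x": 1}
```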

  7. Brain reserve and cognitive reserve protect against cognitive decline over 4.5 years in MS

    PubMed Central

    Rocca, Maria A.; Leavitt, Victoria M.; Dackovic, Jelena; Mesaros, Sarlota; Drulovic, Jelena; DeLuca, John; Filippi, Massimo

    2014-01-01

    Objective: Based on the theories of brain reserve and cognitive reserve, we investigated whether larger maximal lifetime brain growth (MLBG) and/or greater lifetime intellectual enrichment protect against cognitive decline over time. Methods: Forty patients with multiple sclerosis (MS) underwent baseline and 4.5-year follow-up evaluations of cognitive efficiency (Symbol Digit Modalities Test, Paced Auditory Serial Addition Task) and memory (Selective Reminding Test, Spatial Recall Test). Baseline and follow-up MRIs quantified disease progression: percentage brain volume change (cerebral atrophy) and percentage change in T2 lesion volume. MLBG (brain reserve) was estimated with intracranial volume; intellectual enrichment (cognitive reserve) was estimated with vocabulary. We performed repeated-measures analyses of covariance to investigate whether larger MLBG and/or greater intellectual enrichment moderate/attenuate cognitive decline over time, controlling for disease progression. Results: Patients with MS declined in cognitive efficiency and memory (p < 0.001). MLBG moderated decline in cognitive efficiency (p = 0.031, ηp² = 0.122), with larger MLBG protecting against decline. MLBG did not moderate memory decline (p = 0.234, ηp² = 0.039). Intellectual enrichment moderated decline in cognitive efficiency (p = 0.031, ηp² = 0.126) and memory (p = 0.037, ηp² = 0.115), with greater intellectual enrichment protecting against decline. MS disease progression was more negatively associated with change in cognitive efficiency and memory among patients with lower vs. higher MLBG and intellectual enrichment. Conclusion: We provide longitudinal support for theories of brain reserve and cognitive reserve in MS. Larger MLBG protects against decline in cognitive efficiency, and greater intellectual enrichment protects against decline in cognitive efficiency and memory. Consideration of these protective factors should improve prediction of future cognitive decline in patients with MS. PMID:24748670

  8. Efficient entanglement distillation without quantum memory.

    PubMed

    Abdelkhalek, Daniela; Syllwasschy, Mareike; Cerf, Nicolas J; Fiurášek, Jaromír; Schnabel, Roman

    2016-05-31

    Entanglement distribution between distant parties is an essential component to most quantum communication protocols. Unfortunately, decoherence effects such as phase noise in optical fibres are known to demolish entanglement. Iterative (multistep) entanglement distillation protocols have long been proposed to overcome decoherence, but their probabilistic nature makes them inefficient since the success probability decays exponentially with the number of steps. Quantum memories have been contemplated to make entanglement distillation practical, but suitable quantum memories are not realised to date. Here, we present the theory for an efficient iterative entanglement distillation protocol without quantum memories and provide a proof-of-principle experimental demonstration. The scheme is applied to phase-diffused two-mode-squeezed states and proven to distil entanglement for up to three iteration steps. The data are indistinguishable from those that an efficient scheme using quantum memories would produce. Since our protocol includes the final measurement it is particularly promising for enhancing continuous-variable quantum key distribution.

  9. Efficient entanglement distillation without quantum memory

    PubMed Central

    Abdelkhalek, Daniela; Syllwasschy, Mareike; Cerf, Nicolas J.; Fiurášek, Jaromír; Schnabel, Roman

    2016-01-01

    Entanglement distribution between distant parties is an essential component to most quantum communication protocols. Unfortunately, decoherence effects such as phase noise in optical fibres are known to demolish entanglement. Iterative (multistep) entanglement distillation protocols have long been proposed to overcome decoherence, but their probabilistic nature makes them inefficient since the success probability decays exponentially with the number of steps. Quantum memories have been contemplated to make entanglement distillation practical, but suitable quantum memories are not realised to date. Here, we present the theory for an efficient iterative entanglement distillation protocol without quantum memories and provide a proof-of-principle experimental demonstration. The scheme is applied to phase-diffused two-mode-squeezed states and proven to distil entanglement for up to three iteration steps. The data are indistinguishable from those that an efficient scheme using quantum memories would produce. Since our protocol includes the final measurement it is particularly promising for enhancing continuous-variable quantum key distribution. PMID:27241946

  10. Asymmetric soft-error resistant memory

    NASA Technical Reports Server (NTRS)

    Buehler, Martin G. (Inventor); Perlman, Marvin (Inventor)

    1991-01-01

    A memory system is provided, of the type that includes an error-correcting circuit that detects and corrects errors, that more efficiently utilizes the capacity of a memory formed of groups of binary cells whose states can be inadvertently switched by ionizing radiation. Each memory cell has an asymmetric geometry, so that ionizing radiation causes a significantly greater probability of errors in one state than in the opposite state (e.g., an erroneous switch from '1' to '0' is far more likely than a switch from '0' to '1'). An asymmetric error-correcting coding circuit can be used with the asymmetric memory cells, which requires fewer bits than an efficient symmetric error-correcting code.
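The patent does not name its code, but a classic code tailored to unidirectional (asymmetric) errors is the Berger code, which appends the count of zero bits and thereby detects any number of same-direction bit flips; it is offered here purely as an illustration of asymmetric coding, not as the patented circuit:

```python
def berger_encode(data_bits):
    """Append the count of 0-bits as a fixed-width binary check field.
    1->0 upsets reduce the 1s in the data but can only raise the apparent
    zero count, so any set of unidirectional errors is detected."""
    zeros = data_bits.count(0)
    width = max(1, len(data_bits).bit_length())
    check = [int(b) for b in format(zeros, f"0{width}b")]
    return data_bits + check

def berger_check(word, data_len):
    data, check = word[:data_len], word[data_len:]
    return data.count(0) == int("".join(map(str, check)), 2)

word = berger_encode([1, 0, 1, 1])
assert berger_check(word, 4)       # clean word passes
word[0] = 0                        # a radiation-induced 1 -> 0 upset
assert not berger_check(word, 4)   # zero count no longer matches
```

For n data bits the check field needs only about log2(n) bits, which is the kind of saving over symmetric codes the abstract alludes to.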

  11. Cognitive Rehabilitation of Episodic Memory Disorders: From Theory to Practice

    PubMed Central

    Ptak, Radek; der Linden, Martial Van; Schnider, Armin

    2010-01-01

    Memory disorders are among the most frequent and most debilitating cognitive impairments following acquired brain damage. Cognitive remediation strategies attempt to restore lost memory capacity, provide compensatory techniques or teach the use of external memory aids. Memory rehabilitation has strongly been influenced by memory theory, and the interaction between both has stimulated the development of techniques such as spaced retrieval, vanishing cues or errorless learning. These techniques partly rely on implicit memory and therefore enable even patients with dense amnesia to acquire new information. However, knowledge acquired in this way is often strongly domain-specific and inflexible. In addition, individual patients with amnesia respond differently to distinct interventions. The factors underlying these differences have not yet been identified. Behavioral management of memory failures therefore often relies on a careful description of environmental factors and measurement of associated behavioral disorders such as unawareness of memory failures. The current evidence suggests that patients with less severe disorders benefit from self-management techniques and mnemonics whereas rehabilitation of severely amnesic patients should focus on behavior management, the transmission of domain-specific knowledge through implicit memory processes and the compensation for memory deficits with memory aids. PMID:20700383

  12. Method and apparatus for managing access to a memory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeBenedictis, Erik

    A method and apparatus for managing access to a memory of a computing system. A controller transforms a plurality of operations that represent a computing job into an operational memory layout that reduces the size of a selected portion of the memory that needs to be accessed to perform the computing job. The controller stores the operational memory layout in a plurality of memory cells within the selected portion of the memory. The controller controls the sequence by which a processor in the computing system accesses the memory to perform the computing job using the operational memory layout. The operational memory layout reduces the amount of energy consumed by the processor to perform the computing job.

  13. I/O-Efficient Scientific Computation Using TPIE

    NASA Technical Reports Server (NTRS)

    Vengroff, Darren Erik; Vitter, Jeffrey Scott

    1996-01-01

    In recent years, input/output (I/O)-efficient algorithms for a wide variety of problems have appeared in the literature. However, systems specifically designed to assist programmers in implementing such algorithms have remained scarce. TPIE is a system designed to support I/O-efficient paradigms for problems from a variety of domains, including computational geometry, graph algorithms, and scientific computation. The TPIE interface frees programmers from having to deal not only with explicit read and write calls, but also with the complex memory management that must be performed for I/O-efficient computation. In this paper we discuss applications of TPIE to problems in scientific computation. We discuss algorithmic issues underlying the design and implementation of the relevant components of TPIE and present performance results of programs written to solve a series of benchmark problems using our current TPIE prototype. Some of the benchmarks we present are based on the NAS parallel benchmarks, while others are of our own creation. We demonstrate that the central processing unit (CPU) overhead required to manage I/O is small and that, even with just a single disk, the I/O overhead of I/O-efficient computation ranges from negligible to the same order of magnitude as CPU time. We conjecture that if a number of disks are used in parallel, this overhead can be all but eliminated.
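The canonical I/O-efficient pattern systems like TPIE support is forming memory-sized sorted runs and then merging them from disk. A small Python sketch of external merge sort, illustrative of the paradigm rather than of TPIE's actual C++ API:

```python
import heapq, os, tempfile

def external_sort(values, memory_limit=4):
    """Sort a stream using at most `memory_limit` items of RAM plus disk:
    sort fixed-size runs in memory, spill each run to a temp file, then
    k-way merge the runs -- the core pattern of I/O-efficient computation."""
    run_files = []
    for i in range(0, len(values), memory_limit):
        run = sorted(values[i:i + memory_limit])       # in-memory run
        f = tempfile.NamedTemporaryFile("w+", delete=False)
        f.write("\n".join(map(str, run)))
        f.close()
        run_files.append(f.name)

    def read_run(path):                                # stream a run back
        with open(path) as fh:
            for line in fh:
                yield int(line)

    merged = list(heapq.merge(*(read_run(p) for p in run_files)))
    for p in run_files:
        os.unlink(p)
    return merged

assert external_sort([9, 3, 7, 1, 8, 2, 6, 5], memory_limit=3) == \
       [1, 2, 3, 5, 6, 7, 8, 9]
```

Each element is read and written O(1) times per merge pass, so with a single merge level the I/O volume stays linear in the data size, matching the paper's observation that I/O overhead can be kept near CPU time.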

  14. Is less really more: Does a prefrontal efficiency genotype actually confer better performance when working memory becomes difficult?

    PubMed

    Ihne, Jessica L; Gallagher, Natalie M; Sullivan, Marie; Callicott, Joseph H; Green, Adam E

    2016-01-01

    Perhaps the most widely studied effect to emerge from the combination of neuroimaging and human genetics is the association of the COMT-Val(108/158)Met polymorphism with prefrontal activity during working memory. COMT-Val is a putative risk factor in schizophrenia, which is characterized by disordered prefrontal function. Work in healthy populations has sought to characterize mechanisms by which the valine (Val) allele may lead to disadvantaged prefrontal cognition. Lower activity in methionine (Met) carriers has been interpreted as advantageous neural efficiency. Notably, however, studies reporting COMT effects on neural efficiency have generally not reported working memory performance effects. Those studies have employed relatively low/easy working memory loads. Higher loads are known to elicit individual differences in working memory performance that are not visible at lower loads. If COMT-Met confers greater neural efficiency when working memory is easy, a reasonable prediction is that Met carriers will be better able to cope with increasing demand for neural resources when working memory becomes difficult. To our knowledge, this prediction has thus far gone untested. Here, we tested performance on three working memory tasks. Performance on each task was measured at multiple levels of load/difficulty, including loads more demanding than those used in prior studies. We found no genotype-by-load interactions or main effects of COMT genotype on accuracy or reaction time. Indeed, even testing for performance differences at each load of each task failed to find a single significant effect of COMT genotype. Thus, even if COMT genotype has the effects on prefrontal efficiency that prior work has suggested, such effects may not directly impact high-load working memory ability. The present findings accord with previous evidence that behavioral effects of COMT are small or nonexistent and, more broadly, with a growing consensus that substantial effects on phenotype will not emerge from candidate gene studies. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. BLESS 2: accurate, memory-efficient and fast error correction method.

    PubMed

    Heo, Yun; Ramachandran, Anand; Hwu, Wen-Mei; Ma, Jian; Chen, Deming

    2016-08-01

    The most important features of error-correction tools for sequencing data are accuracy, memory efficiency, and fast runtime. The previous version of BLESS was highly memory-efficient and accurate, but it was too slow to handle reads from large genomes. We have developed a new version of BLESS to improve runtime and accuracy while maintaining a small memory usage. The new version, called BLESS 2, has an error-correction algorithm that is more accurate than BLESS, and the algorithm has been parallelized using hybrid MPI and OpenMP programming. BLESS 2 was compared with five top-performing tools, and it was found to be the fastest when executed on two computing nodes using MPI, with each node containing twelve cores. Also, BLESS 2 showed at least 11% higher gain while retaining the memory efficiency of the previous version for large genomes. Freely available at https://sourceforge.net/projects/bless-ec. Contact: dchen@illinois.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
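BLESS belongs to the k-mer-spectrum family of error correctors. The underlying idea, that k-mers occurring rarely across reads are likely sequencing errors while well-supported k-mers are "solid", can be sketched as follows (thresholds and function names are illustrative assumptions, not BLESS's actual algorithm or its memory-efficient Bloom-filter machinery):

```python
from collections import Counter

def solid_kmers(reads, k=3, min_count=2):
    """k-mer-spectrum view of error correction: k-mers seen at least
    `min_count` times are 'solid'; rare ones likely contain a sequencing
    error and mark the positions a corrector would try to repair."""
    counts = Counter(
        read[i:i + k] for read in reads for i in range(len(read) - k + 1)
    )
    return {kmer for kmer, c in counts.items() if c >= min_count}

reads = ["ACGTAC", "ACGTAC", "ACGGAC"]   # third read has a likely error
solid = solid_kmers(reads, k=3)
assert "CGT" in solid                    # supported by two reads: solid
assert "CGG" not in solid                # seen once: suspect
```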

  16. Efficiency of Energy Harvesting in Ni-Mn-Ga Shape Memory Alloys

    NASA Astrophysics Data System (ADS)

    Lindquist, Paul; Hobza, Tony; Patrick, Charles; Müllner, Peter

    2018-03-01

    Many researchers have reported on the voltage and power generated while energy harvesting using Ni-Mn-Ga shape memory alloys; few researchers report on the power conversion efficiency of energy harvesting. We measured the magneto-mechanical behavior and energy harvesting of Ni-Mn-Ga shape memory alloys to quantify the efficiency of energy harvesting using the inverse magneto-plastic effect. At low frequencies, less than 150 Hz, the power conversion efficiency is less than 0.1%. Power conversion efficiency increases with (i) increasing actuation frequency, (ii) increasing actuation stroke, and (iii) decreasing twinning stress. Extrapolating the results of low-frequency experiments to the kHz actuation regime yields a power conversion factor of about 20% for 3 kHz actuation frequency, 7% actuation strain, and 0.05 MPa twinning stress.

  17. Using virtual machine monitors to overcome the challenges of monitoring and managing virtualized cloud infrastructures

    NASA Astrophysics Data System (ADS)

    Bamiah, Mervat Adib; Brohi, Sarfraz Nawaz; Chuprat, Suriayati

    2012-01-01

    Virtualization is one of the most active research topics today. Researchers in academia and developers in the IT industry are designing approaches for solving the security and manageability issues of Virtual Machines (VMs) residing on virtualized cloud infrastructures. Moving an application from a physical to a virtual platform increases efficiency and flexibility and reduces management cost as well as effort. Cloud computing adopts the paradigm of virtualization: using this technique, memory, CPU and computational power are provided to clients' VMs by utilizing the underlying physical hardware. Besides these advantages, there are a few challenges in adopting virtualization, such as management of VMs and network traffic, unexpected additional costs and resource allocation. The Virtual Machine Monitor (VMM), or hypervisor, is the tool used by cloud providers to manage VMs in the cloud. Several heterogeneous hypervisors are provided by various vendors, including VMware, Hyper-V, Xen and Kernel-based Virtual Machine (KVM). Considering the challenge of VM management, this paper describes several techniques to monitor and manage virtualized cloud infrastructures.

  18. Architectural Techniques For Managing Non-volatile Caches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mittal, Sparsh

    As chip power dissipation becomes a critical challenge in scaling processor performance, computer architects are forced to fundamentally rethink the design of modern processors and hence, the chip-design industry is now at a major inflection point in its hardware roadmap. The high leakage power and low density of SRAM pose serious obstacles to its use in designing large on-chip caches, and for this reason, researchers are exploring non-volatile memory (NVM) devices, such as spin torque transfer RAM, phase change RAM and resistive RAM. However, since NVMs are not strictly superior to SRAM, effective architectural techniques are required for making them a universal memory solution. This book discusses techniques for designing processor caches using NVM devices. It presents algorithms and architectures for improving their energy efficiency, performance and lifetime. It also provides both qualitative and quantitative evaluation to help readers gain insights and motivate them to explore further. This book will be highly useful for beginners as well as veterans in computer architecture, chip designers, product managers and technical marketing professionals.

  19. Design of a memory-access controller with 3.71-times-enhanced energy efficiency for Internet-of-Things-oriented nonvolatile microcontroller unit

    NASA Astrophysics Data System (ADS)

    Natsui, Masanori; Hanyu, Takahiro

    2018-04-01

    In realizing a nonvolatile microcontroller unit (MCU) for sensor nodes in Internet-of-Things (IoT) applications, it is important to resolve the data-transfer bottleneck between the central processing unit (CPU) and the nonvolatile memory constituting the MCU. As one circuit-oriented approach to this problem, we propose a memory access minimization technique for magnetoresistive-random-access-memory (MRAM)-embedded nonvolatile MCUs. In addition to multiplexing and prefetching of memory access, the proposed technique realizes efficient instruction fetch by eliminating redundant memory accesses while considering the code length of the instruction to be fetched and the transition of the memory address to be accessed. As a result, the performance of the MCU can be improved while relaxing the performance requirement for the embedded MRAM, and a compact, low-power implementation can be achieved compared with the conventional cache-based one. Through evaluation using a system consisting of a general-purpose 32-bit CPU and embedded MRAM, it is demonstrated that the proposed technique increases the peak efficiency of the system by up to 3.71 times, while a 2.29-fold area reduction is achieved compared with the cache-based design.
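    The idea of suppressing redundant fetches can be sketched with a toy fetch buffer: a memory read is issued only when the requested address leaves the currently buffered memory word. This is a generic illustration under assumed parameters (32-bit memory words holding 16-bit instructions), not the circuit proposed in the paper.

```python
class FetchBuffer:
    """Skip redundant memory reads when successive fetches fall in the
    same wide memory word (hypothetical 32-bit word, 16-bit instructions)."""
    def __init__(self, memory, word_bytes=4):
        self.memory = memory          # word-aligned address -> word (bytes)
        self.word_bytes = word_bytes
        self.buffered_addr = None
        self.buffered_word = None
        self.accesses = 0             # count of actual memory reads

    def fetch(self, addr):
        base = addr - (addr % self.word_bytes)
        if base != self.buffered_addr:
            self.buffered_word = self.memory[base]   # real memory access
            self.buffered_addr = base
            self.accesses += 1
        offset = addr % self.word_bytes
        return self.buffered_word[offset:offset + 2]  # 16-bit instruction
```

    Two sequential 16-bit fetches from the same 32-bit word then cost a single memory access, which is the kind of address-transition tracking the abstract describes.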

  20. From network heterogeneities to familiarity detection and hippocampal memory management

    PubMed Central

    Wang, Jane X.; Poe, Gina; Zochowski, Michal

    2009-01-01

    Hippocampal-neocortical interactions are key to the rapid formation of novel associative memories in the hippocampus and consolidation to long term storage sites in the neocortex. We investigated the role of network correlates during information processing in hippocampal-cortical networks. We found that changes in the intrinsic network dynamics due to the formation of structural network heterogeneities alone act as a dynamical and regulatory mechanism for stimulus novelty and familiarity detection, thereby controlling memory management in the context of memory consolidation. This network dynamic, coupled with an anatomically established feedback between the hippocampus and the neocortex, recovered heretofore unexplained properties of neural activity patterns during memory management tasks which we observed during sleep in multiunit recordings from behaving animals. Our simple dynamical mechanism shows an experimentally matched progressive shift of memory activation from the hippocampus to the neocortex and thus provides the means to achieve an autonomous off-line progression of memory consolidation. PMID:18999453

  1. Memory Efficient Sequence Analysis Using Compressed Data Structures (Metagenomics Informatics Challenges Workshop: 10K Genomes at a Time)

    ScienceCinema

    Simpson, Jared

    2018-01-24

    Wellcome Trust Sanger Institute's Jared Simpson on Memory efficient sequence analysis using compressed data structures at the Metagenomics Informatics Challenges Workshop held at the DOE JGI on October 12-13, 2011.

  2. Contribution of auditory working memory to speech understanding in mandarin-speaking cochlear implant users.

    PubMed

    Tao, Duoduo; Deng, Rui; Jiang, Ye; Galvin, John J; Fu, Qian-Jie; Chen, Bing

    2014-01-01

    To investigate how auditory working memory relates to speech perception performance by Mandarin-speaking cochlear implant (CI) users. Auditory working memory and speech perception were measured in Mandarin-speaking CI and normal-hearing (NH) participants. Working memory capacity was measured using forward digit span and backward digit span; working memory efficiency was measured using articulation rate. Speech perception was assessed with: (a) word-in-sentence recognition in quiet, (b) word-in-sentence recognition in speech-shaped steady noise at +5 dB signal-to-noise ratio, (c) Chinese disyllable recognition in quiet, (d) Chinese lexical tone recognition in quiet. Self-reported school rank was also collected regarding performance in schoolwork. There was large inter-subject variability in auditory working memory and speech performance for CI participants. Working memory and speech performance were significantly poorer for CI than for NH participants. All three working memory measures were strongly correlated with each other for both CI and NH participants. Partial correlation analyses were performed on the CI data while controlling for demographic variables. Working memory efficiency was significantly correlated only with sentence recognition in quiet when working memory capacity was partialled out. Working memory capacity was correlated with disyllable recognition and school rank when efficiency was partialled out. There was no correlation between working memory and lexical tone recognition in the present CI participants. Mandarin-speaking CI users experience significant deficits in auditory working memory and speech performance compared with NH listeners. The present data suggest that auditory working memory may contribute to CI users' difficulties in speech understanding.
The present pattern of results with Mandarin-speaking CI users is consistent with previous auditory working memory studies with English-speaking CI users, suggesting that the lexical importance of voice pitch cues (albeit poorly coded by the CI) did not influence the relationship between working memory and speech perception.
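    The partial correlation analyses described above follow the standard first-order formula r_xy·z = (r_xy − r_xz·r_yz) / √((1 − r_xz²)(1 − r_yz²)). A minimal sketch of that generic statistic (not the study's analysis code):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(vx * vy)

def partial_corr(x, y, z):
    """Correlation of x and y after removing the linear effect of z."""
    rxy, rxz, ryz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz**2) * (1 - ryz**2))
```

    In the study's terms, x and y would be a working memory measure and a speech score, and z the controlled demographic variable.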

  3. Coherent spin control of a nanocavity-enhanced qubit in diamond

    DOE PAGES

    Li, Luozhou; Lu, Ming; Schroder, Tim; ...

    2015-01-28

    A central aim of quantum information processing is the efficient entanglement of multiple stationary quantum memories via photons. Among solid-state systems, the nitrogen-vacancy centre in diamond has emerged as an excellent optically addressable memory with second-scale electron spin coherence times. Recently, quantum entanglement and teleportation have been shown between two nitrogen-vacancy memories, but scaling to larger networks requires more efficient spin-photon interfaces such as optical resonators. Here we report such nitrogen-vacancy nanocavity systems in the strong Purcell regime with optical quality factors approaching 10,000 and electron spin coherence times exceeding 200 µs using a silicon hard-mask fabrication process. This spin-photon interface is integrated with on-chip microwave striplines for coherent spin control, providing an efficient quantum memory for quantum networks.

  4. Gender differences in navigational memory: pilots vs. nonpilots.

    PubMed

    Verde, Paola; Piccardi, Laura; Bianchini, Filippo; Guariglia, Cecilia; Carrozzo, Paolo; Morgagni, Fabio; Boccia, Maddalena; Di Fiore, Giacomo; Tomao, Enrico

    2015-02-01

    The coding of space as near and far is not only determined by arm-reaching distance, but is also dependent on how the brain represents the extension of the body space. Recent reports suggest that the dissociation between reaching and navigational space is not limited to perception and action but also involves memory systems. It has been reported that gender differences emerged only in adverse learning conditions that required strong spatial ability. In this study we investigated navigational versus reaching memory in air force pilots and a control group without flight experience. We took into account temporal duration (working memory and long-term memory) and focused on working memory, which is considered critical in the gender differences literature. We found no gender effects or flight hour effects in pilots but observed gender effects in working memory (but not in learning and delayed recall) in the nonpilot population (women's mean = 5.33, SD = 0.90; men's mean = 5.54, SD = 0.90). We also observed a difference between pilots and nonpilots in the maintenance of on-line reaching information: pilots (mean = 5.85; SD = 0.76) were more efficient than nonpilots (mean = 5.21; SD = 0.83) and managed this type of information similarly to that concerning navigational space. In the navigational learning phase they also showed better navigational memory (mean = 137.83; SD = 5.81) than nonpilots (mean = 126.96; SD = 15.81) and were significantly more proficient than the latter group. There is no gender difference in navigational abilities in a population of pilots, while one emerges in a control group without flight experience. We also found that pilots performed better than nonpilots. This study suggests that once selected, male and female pilots do not differ from each other in visuo-spatial abilities and spatial navigation.

  5. Surveillance and Outbreak Response Management System (SORMAS) to support the control of the Ebola virus disease outbreak in West Africa.

    PubMed

    Fähnrich, C; Denecke, K; Adeoye, O O; Benzler, J; Claus, H; Kirchner, G; Mall, S; Richter, R; Schapranow, M P; Schwarz, N; Tom-Aba, D; Uflacker, M; Poggensee, G; Krause, G

    2015-03-26

    In the context of controlling the current outbreak of Ebola virus disease (EVD), the World Health Organization claimed that 'critical determinant of epidemic size appears to be the speed of implementation of rigorous control measures', i.e. immediate follow-up of contact persons during 21 days after exposure, isolation and treatment of cases, decontamination, and safe burials. We developed the Surveillance and Outbreak Response Management System (SORMAS) to improve efficiency and timeliness of these measures. We used the Design Thinking methodology to systematically analyse experiences from field workers and the Ebola Emergency Operations Centre (EOC) after successful control of the EVD outbreak in Nigeria. We developed a process model with seven personas representing the procedures of EVD outbreak control. The SORMAS system architecture combines latest In-Memory Database (IMDB) technology via SAP HANA (in-memory, relational database management system), enabling interactive data analyses, and established SAP cloud tools, such as SAP Afaria (a mobile device management software). The user interface consists of specific front-ends for smartphones and tablet devices, which are independent from physical configurations. SORMAS allows real-time, bidirectional information exchange between field workers and the EOC, ensures supervision of contact follow-up, automated status reports, and GPS tracking. SORMAS may become a platform for outbreak management and improved routine surveillance of any infectious disease. Furthermore, the SORMAS process model may serve as framework for EVD outbreak modeling.

  6. Efficient and flexible memory architecture to alleviate data and context bandwidth bottlenecks of coarse-grained reconfigurable arrays

    NASA Astrophysics Data System (ADS)

    Yang, Chen; Liu, LeiBo; Yin, ShouYi; Wei, ShaoJun

    2014-12-01

    The computational capability of a coarse-grained reconfigurable array (CGRA) can be significantly restrained by data and context memory bandwidth bottlenecks. Traditionally, two methods have been used to resolve this problem. One method loads the context into the CGRA at run time. This method occupies very little on-chip memory but induces very large latency, which leads to low computational efficiency. The other method adopts a multi-context structure. This method loads the context into the on-chip context memory at the boot phase; broadcasting the pointer of a set of contexts then changes the hardware configuration on a cycle-by-cycle basis. The size of the context memory induces a large area overhead in multi-context structures, which places major restrictions on application complexity. This paper proposes a Predictable Context Cache (PCC) architecture to address these context issues by buffering the context inside the CGRA. In this architecture, context is dynamically transferred into the CGRA. Utilizing a PCC significantly reduces the on-chip context memory, and the complexity of the applications running on the CGRA is no longer restricted by its size. For the data bandwidth issue, data preloading is the most frequently used approach to hide input data latency and speed up data transmission. Rather than fundamentally reducing the amount of input data, this approach processes data transfer and computation in parallel. However, data preloading cannot work efficiently because data transmission becomes the critical path as the reconfigurable array scales up. This paper also presents a Hierarchical Data Memory (HDM) architecture as a solution to this efficiency problem. In this architecture, high internal bandwidth is provided to buffer both reused input data and intermediate data. 
The HDM architecture relieves the external memory of the data transfer burden, so performance is significantly improved. As a result of using PCC and HDM, experiments running mainstream video decoding programs achieved performance improvements of 13.57%-19.48% with a reasonable memory size. Thus, 1080p@35.7fps H.264 high-profile video decoding can be achieved on the PCC and HDM architecture at a 200 MHz working frequency. Further, the size of the on-chip context memory no longer restricts complex applications, which execute efficiently on the PCC and HDM architecture.
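    The benefit of buffering context inside the array can be illustrated with a toy context cache: a hit avoids an off-chip context transfer. The replacement policy below is plain LRU, chosen only for brevity; the paper's PCC uses its own predictable replacement scheme, which the abstract does not detail, and all names here are hypothetical.

```python
from collections import OrderedDict

class ContextCache:
    """Minimal LRU cache for configuration contexts (illustrative sketch,
    not the paper's PCC design)."""
    def __init__(self, capacity, backing_store):
        self.capacity = capacity
        self.backing = backing_store   # context id -> configuration bits
        self.cache = OrderedDict()
        self.misses = 0

    def get(self, ctx_id):
        if ctx_id in self.cache:
            self.cache.move_to_end(ctx_id)   # hit: refresh recency
        else:
            self.misses += 1                 # miss: off-chip transfer needed
            self.cache[ctx_id] = self.backing[ctx_id]
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)   # evict least recently used
        return self.cache[ctx_id]
```

    With locality in the context access stream, only a fraction of fetches reach external memory, which is why on-chip context storage can shrink without restricting application complexity.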

  7. Some comments on Hurst exponent and the long memory processes on capital markets

    NASA Astrophysics Data System (ADS)

    Sánchez Granero, M. A.; Trinidad Segovia, J. E.; García Pérez, J.

    2008-09-01

    The analysis of long memory processes in capital markets has been a recurring topic in finance, since the existence of market memory would imply rejection of the efficient market hypothesis. These processes are studied through the Hurst exponent, and the most classical estimation method is R/S analysis. In this paper we discuss the efficiency of this methodology, as well as some of its more important modifications for detecting long memory. We also propose applying a classical geometrical method with slight modifications, and we compare both approaches.
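    Classical R/S analysis estimates the Hurst exponent as the slope of log(R/S) against log(n) over a range of window sizes n, where R is the range of mean-adjusted cumulative sums in a window and S its standard deviation. A minimal sketch of the textbook method (not the authors' modified procedure):

```python
import math

def hurst_rs(series, min_chunk=8):
    """Estimate the Hurst exponent by classical rescaled-range analysis."""
    n = len(series)
    sizes, rs_vals = [], []
    size = min_chunk
    while size <= n // 2:
        rs_list = []
        for start in range(0, n - size + 1, size):
            chunk = series[start:start + size]
            mean = sum(chunk) / size
            dev = [x - mean for x in chunk]
            cum, s = [], 0.0
            for d in dev:                      # cumulative deviation series
                s += d
                cum.append(s)
            r = max(cum) - min(cum)            # range R
            sd = math.sqrt(sum(d * d for d in dev) / size)  # std dev S
            if sd > 0:
                rs_list.append(r / sd)
        if rs_list:
            sizes.append(math.log(size))
            rs_vals.append(math.log(sum(rs_list) / len(rs_list)))
        size *= 2
    # least-squares slope of log(R/S) vs log(n) is the Hurst estimate
    m = len(sizes)
    mx, my = sum(sizes) / m, sum(rs_vals) / m
    num = sum((a - mx) * (b - my) for a, b in zip(sizes, rs_vals))
    den = sum((a - mx) ** 2 for a in sizes)
    return num / den
```

    Uncorrelated returns should give H near 0.5, while H significantly above 0.5 suggests persistence; the small-sample bias of this estimator is one of the efficiency issues the paper discusses.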

  8. 3D Kirchhoff depth migration algorithm: A new scalable approach for parallelization on multicore CPU based cluster

    NASA Astrophysics Data System (ADS)

    Rastogi, Richa; Londhe, Ashutosh; Srivastava, Abhishek; Sirasala, Kirannmayi M.; Khonde, Kiran

    2017-03-01

    In this article, a new scalable 3D Kirchhoff depth migration algorithm is presented on a state-of-the-art multicore CPU based cluster. Parallelization of 3D Kirchhoff depth migration is challenging due to its high demand for compute time, memory, storage and I/O, along with the need for their effective management. The most resource-intensive modules of the algorithm are traveltime calculation and migration summation, which exhibit an inherent trade-off between compute time and other resources. The parallelization strategy of the algorithm largely depends on the storage of calculated traveltimes and the mechanism for feeding them to the migration process. The presented work is an extension of our previous work, wherein a 3D Kirchhoff depth migration application for a multicore CPU based parallel system had been developed. Recently, we have improved the parallel performance of this application by re-designing the parallelization approach. The new algorithm is capable of efficiently migrating both prestack and poststack 3D data. It exhibits the flexibility to migrate a large number of traces within the available node memory and with minimal requirements for storage, I/O and inter-node communication. The resultant application is tested using 3D Overthrust data on PARAM Yuva II, which is a Xeon E5-2670 based multicore CPU cluster with 16 cores/node and 64 GB shared memory. Parallel performance of the algorithm is studied using different numerical experiments, and the scalability results show striking improvement over its previous version. An impressive 49.05X speedup with 76.64% efficiency is achieved for 3D prestack data and a 32.00X speedup with 50.00% efficiency for 3D poststack data, using 64 nodes. The results also demonstrate the effectiveness and robustness of the improved algorithm, with high scalability and efficiency on a multicore CPU cluster.
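    The quoted figures are consistent with the usual definition of parallel efficiency: achieved speedup divided by the ideal speedup (the number of nodes).

```python
def parallel_efficiency(speedup, workers):
    """Parallel efficiency = achieved speedup / ideal speedup."""
    return speedup / workers

# Figures quoted in the abstract, on 64 nodes:
prestack = parallel_efficiency(49.05, 64)    # ~0.7664, i.e. 76.64%
poststack = parallel_efficiency(32.00, 64)   # 0.50, i.e. 50.00%
```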

  9. Motor Action and Emotional Memory

    ERIC Educational Resources Information Center

    Casasanto, Daniel; Dijkstra, Katinka

    2010-01-01

    Can simple motor actions affect how efficiently people retrieve emotional memories, and influence what they choose to remember? In Experiment 1, participants were prompted to retell autobiographical memories with either positive or negative valence, while moving marbles either upward or downward. They retrieved memories faster when the direction…

  10. An extended continuum model considering optimal velocity change with memory and numerical tests

    NASA Astrophysics Data System (ADS)

    Qingtao, Zhai; Hongxia, Ge; Rongjun, Cheng

    2018-01-01

    In this paper, an extended continuum model of traffic flow is proposed that considers optimal velocity changes with memory. The new model's stability condition and KdV-Burgers equation accounting for optimal velocity changes with memory are deduced through linear stability theory and nonlinear analysis, respectively. Numerical simulation is carried out to study the extended continuum model, exploring how optimal velocity changes with memory affect velocity, density and energy consumption. Numerical results show that when the effects of optimal velocity changes with memory are considered, traffic jams can be suppressed efficiently. Both the memory step and the sensitivity parameter of optimal velocity changes with memory enhance the stability of traffic flow. Furthermore, the numerical results demonstrate that the effect of optimal velocity changes with memory can avoid the disadvantages of relying on historical information, which increases the stability of traffic flow on the road and thereby improves traffic flow stability and minimizes cars' energy consumption.

  11. An Implicit Algorithm for the Numerical Simulation of Shape-Memory Alloys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Becker, R; Stolken, J; Jannetti, C

    Shape-memory alloys (SMA) have the potential to be used in a variety of interesting applications due to their unique properties of pseudoelasticity and the shape-memory effect. However, in order to design SMA devices efficiently, a physics-based constitutive model is required to accurately simulate the behavior of shape-memory alloys. The scope of this work is to extend the numerical capabilities of the SMA constitutive model developed by Jannetti et al. (2003) to handle large-scale polycrystalline simulations. The constitutive model is implemented within the finite-element software ABAQUS/Standard using a user-defined material subroutine, or UMAT. To improve the efficiency of the numerical simulations, so that polycrystalline specimens of shape-memory alloys can be modeled, a fully implicit algorithm has been implemented to integrate the constitutive equations. Using an implicit integration scheme increases the efficiency of the UMAT over the previously implemented explicit integration method by a factor of more than 100 for single-crystal simulations.
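    The stability advantage that makes implicit integration pay off can be seen on a generic stiff linear ODE dy/dt = -λy: forward (explicit) Euler is stable only for step sizes h < 2/λ, while backward (implicit) Euler is unconditionally stable and so tolerates much larger steps. This toy example stands in for the far more complex SMA constitutive equations, which require a nonlinear solve at each implicit step.

```python
def forward_euler(lam, y0, h, steps):
    """Explicit update y_{n+1} = y_n + h * (-lam * y_n)."""
    y = y0
    for _ in range(steps):
        y = y + h * (-lam * y)
    return y

def backward_euler(lam, y0, h, steps):
    """Implicit update y_{n+1} = y_n + h * (-lam * y_{n+1}),
    solved here in closed form: y_{n+1} = y_n / (1 + h * lam)."""
    y = y0
    for _ in range(steps):
        y = y / (1 + h * lam)
    return y
```

    With λ = 1000 and h = 0.01 (so hλ = 10, well past the explicit limit), forward Euler diverges wildly while backward Euler decays toward the true solution; being able to take such large stable steps is where the reported factor-of-100 efficiency gain comes from.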

  12. GPU-Accelerated Forward and Back-Projections with Spatially Varying Kernels for 3D DIRECT TOF PET Reconstruction.

    PubMed

    Ha, S; Matej, S; Ispiryan, M; Mueller, K

    2013-02-01

    We describe a GPU-accelerated framework that efficiently models spatially (shift) variant system response kernels and performs forward- and back-projection operations with these kernels for the DIRECT (Direct Image Reconstruction for TOF) iterative reconstruction approach. Inherent challenges arise from the poor memory cache performance at non-axis-aligned TOF directions. Focusing on the GPU memory access patterns, we utilize different kinds of GPU memory according to these patterns in order to maximize the memory cache performance. We also exploit the GPU instruction-level parallelism to efficiently hide the long latencies of the memory operations. Our experiments indicate that our GPU implementation of the projection operators is slightly faster than, or approximately comparable in runtime to, FFT-based approaches using state-of-the-art FFTW routines. Most importantly, however, our GPU framework can also efficiently handle any generic system response kernels, such as spatially symmetric and shift-variant as well as spatially asymmetric and shift-variant, neither of which an FFT-based approach can cope with.

  14. Successful Training of Filtering Mechanisms in Multiple Object Tracking Does Not Transfer to Filtering Mechanisms in a Visual Working Memory Task: Behavioral and Electrophysiological Evidence

    ERIC Educational Resources Information Center

    Arend, Anna M.; Zimmer, Hubert D.

    2012-01-01

    In this training study, we aimed to selectively train participants' filtering mechanisms to enhance visual working memory (WM) efficiency. The highly restricted nature of visual WM capacity renders efficient filtering mechanisms crucial for its successful functioning. Filtering efficiency in visual WM can be measured via the lateralized change…

  15. Light storage in a cold atomic ensemble with a high optical depth

    NASA Astrophysics Data System (ADS)

    Park, Kwang-Kyoon; Chough, Young-Tak; Kim, Yoon-Ho

    2017-06-01

    A quantum memory with a high storage efficiency and a long coherence time is an essential element in quantum information applications. Here, we report our recent development of an optical quantum memory with a rubidium-87 cold atom ensemble. By increasing the optical depth of the medium, we have achieved a storage efficiency of 65% and a coherence time of 51 μs for a weak laser pulse. The result of a numerical analysis based on the Maxwell-Bloch equations agrees well with the experimental results. Our result paves the way toward an efficient optical quantum memory and may find applications in photonic quantum information processing.

  16. The effects of two types of sleep deprivation on visual working memory capacity and filtering efficiency.

    PubMed

    Drummond, Sean P A; Anderson, Dane E; Straus, Laura D; Vogel, Edward K; Perez, Veronica B

    2012-01-01

    Sleep deprivation has adverse consequences for a variety of cognitive functions. The exact effects of sleep deprivation, though, are dependent upon the cognitive process examined. Within working memory, for example, some component processes are more vulnerable to sleep deprivation than others. Additionally, the differential impacts on cognition of different types of sleep deprivation have not been well studied. The aim of this study was to examine the effects of one night of total sleep deprivation and 4 nights of partial sleep deprivation (4 hours in bed/night) on two components of visual working memory: capacity and filtering efficiency. Forty-four healthy young adults were randomly assigned to one of the two sleep deprivation conditions. All participants were studied: 1) in a well-rested condition (following 6 nights of 9 hours in bed/night); and 2) following sleep deprivation, in a counter-balanced order. Visual working memory testing consisted of two related tasks. The first measured visual working memory capacity and the second measured the ability to ignore distractor stimuli in a visual scene (filtering efficiency). Results showed neither type of sleep deprivation reduced visual working memory capacity. Partial sleep deprivation also generally did not change filtering efficiency. Total sleep deprivation, on the other hand, did impair performance in the filtering task. These results suggest components of visual working memory are differentially vulnerable to the effects of sleep deprivation, and different types of sleep deprivation impact visual working memory to different degrees. Such findings have implications for operational settings where individuals may need to perform with inadequate sleep and whose jobs involve receiving an array of visual information and discriminating the relevant from the irrelevant prior to making decisions or taking actions (e.g., baggage screeners, air traffic controllers, military personnel, health care providers).

  17. Anatomical Coupling between Distinct Metacognitive Systems for Memory and Visual Perception

    PubMed Central

    McCurdy, Li Yan; Maniscalco, Brian; Metcalfe, Janet; Liu, Ka Yuet; de Lange, Floris P.; Lau, Hakwan

    2015-01-01

    A recent study found that, across individuals, gray matter volume in the frontal polar region was correlated with visual metacognition capacity (i.e., how well one’s confidence ratings distinguish between correct and incorrect judgments). A question arises as to whether the putative metacognitive mechanisms in this region are also used in other metacognitive tasks involving, for example, memory. A novel psychophysical measure allowed us to assess metacognitive efficiency separately in a visual and a memory task, while taking variations in basic task performance capacity into account. We found that, across individuals, metacognitive efficiencies positively correlated between the two tasks. However, voxel-based morphometry analysis revealed distinct brain structures for the two kinds of metacognition. Replicating a previous finding, variation in visual metacognitive efficiency was correlated with volume of frontal polar regions. However, variation in memory metacognitive efficiency was correlated with volume of the precuneus. There was also a weak correlation between visual metacognitive efficiency and precuneus volume, which may account for the behavioral correlation between visual and memory metacognition (i.e., the precuneus may contain common mechanisms for both types of metacognition). However, we also found that gray matter volumes of the frontal polar and precuneus regions themselves correlated across individuals, and a formal model comparison analysis suggested that this structural covariation was sufficient to account for the behavioral correlation of metacognition in the two tasks. These results highlight the importance of the precuneus in higher-order memory processing and suggest that there may be functionally distinct metacognitive systems in the human brain. PMID:23365229

  18. Exploration of depth modeling mode one lossless wedgelets storage strategies for 3D-high efficiency video coding

    NASA Astrophysics Data System (ADS)

    Sanchez, Gustavo; Marcon, César; Agostini, Luciano Volcan

    2018-01-01

The 3D-high efficiency video coding standard has introduced tools to obtain higher efficiency in 3-D video coding, most of them related to depth map coding. Among these tools, depth modeling mode-1 (DMM-1) focuses on better encoding the edge regions of depth maps. The large memory required for storing all wedgelet patterns is one of the bottlenecks in DMM-1 hardware design for both encoder and decoder, since many patterns must be stored. Three algorithms to reduce the DMM-1 memory requirements, and a hardware design targeting the most efficient of these algorithms, are presented. Experimental results demonstrate that the proposed solutions surpass related works, reducing the wedgelet memory by up to 78.8% without degrading encoding efficiency. Synthesis results demonstrate that the proposed algorithm reduces power dissipation by almost 75% compared to the standard approach.
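One way to see why lossless reductions of wedgelet storage are possible is that many endpoint pairs generate identical or complementary binary patterns. The toy enumeration below (an illustrative sketch on a 4x4 block, not the paper's algorithms) makes that redundancy visible by deduplicating the line-based partitions, treating a pattern and its inverse as equivalent:

```python
from itertools import combinations

N = 4  # toy block size; real DMM-1 uses 4x4 up to 32x32 blocks
border = [(r, c) for r in range(N) for c in range(N)
          if r in (0, N - 1) or c in (0, N - 1)]

def pattern(p0, p1):
    """Binary partition of the block by the line through p0 and p1:
    each pixel is classified by the sign of a cross product."""
    (r0, c0), (r1, c1) = p0, p1
    return tuple(1 if (r1 - r0) * (c - c0) - (c1 - c0) * (r - r0) >= 0 else 0
                 for r in range(N) for c in range(N))

raw = [pattern(a, b) for a, b in combinations(border, 2)]
unique = set()
for p in raw:
    inv = tuple(1 - b for b in p)  # a pattern and its inverse split the block identically
    if inv not in unique:
        unique.add(p)
# len(unique) is substantially smaller than len(raw): collinear endpoint
# pairs and inverse patterns are redundant and need not be stored.
```
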

  19. Disk-based k-mer counting on a PC

    PubMed Central

    2013-01-01

Background: The k-mer counting problem, which is to build the histogram of occurrences of every k-symbol-long substring in a given text, is important for many bioinformatics applications, including de Bruijn graph genome assembly, fast multiple sequence alignment and repeat detection. Results: We propose a simple, yet efficient, parallel disk-based algorithm for counting k-mers. Experiments show that it usually offers the fastest solution to the considered problem while demanding a relatively small amount of memory. In particular, it can count the statistics for short-read human genome data, from a gzipped FASTQ input file, in less than 40 minutes on a PC with 16 GB of RAM and 6 CPU cores, and for long-read human genome data in less than 70 minutes. On a more powerful machine with 32 GB of RAM and 32 CPU cores, the tasks are accomplished in less than half the time. For most tested settings of this problem on mammalian-size data, no other algorithm can accomplish the task in comparable time. Our solution is also memory-frugal; most competing algorithms cannot work efficiently on a PC with 16 GB of memory for such massive data. Conclusions: By making use of cheap disk space and exploiting CPU and I/O parallelism, we propose a very competitive k-mer counting procedure, called KMC. Our results suggest that judicious resource management may make it possible to solve at least some bioinformatics problems with massive data on a commodity personal computer. PMID:23679007
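The counting kernel itself is just a histogram; a disk-based tool like KMC additionally partitions k-mers into disk bins so that each bin's histogram fits in RAM. A minimal in-memory sketch (not KMC's implementation) of the core step:

```python
from collections import Counter

def count_kmers(seq, k):
    """Histogram of every k-symbol substring of seq."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

counts = count_kmers("ACGTACGTAC", 3)  # e.g. counts["ACG"] == 2
```

A disk-based variant would first hash each k-mer to a bin file, then run this counter over one bin at a time, trading cheap disk space for bounded memory.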

  20. Targeted Memory Reactivation during Sleep Adaptively Promotes the Strengthening or Weakening of Overlapping Memories.

    PubMed

    Oyarzún, Javiera P; Morís, Joaquín; Luque, David; de Diego-Balaguer, Ruth; Fuentemilla, Lluís

    2017-08-09

    System memory consolidation is conceptualized as an active process whereby newly encoded memory representations are strengthened through selective memory reactivation during sleep. However, our learning experience is highly overlapping in content (i.e., shares common elements), and memories of these events are organized in an intricate network of overlapping associated events. It remains to be explored whether and how selective memory reactivation during sleep has an impact on these overlapping memories acquired during awake time. Here, we test in a group of adult women and men the prediction that selective memory reactivation during sleep entails the reactivation of associated events and that this may lead the brain to adaptively regulate whether these associated memories are strengthened or pruned from memory networks on the basis of their relative associative strength with the shared element. Our findings demonstrate the existence of efficient regulatory neural mechanisms governing how complex memory networks are shaped during sleep as a function of their associative memory strength. SIGNIFICANCE STATEMENT Numerous studies have demonstrated that system memory consolidation is an active, selective, and sleep-dependent process in which only subsets of new memories become stabilized through their reactivation. However, the learning experience is highly overlapping in content and thus events are encoded in an intricate network of related memories. It remains to be explored whether and how memory reactivation has an impact on overlapping memories acquired during awake time. Here, we show that sleep memory reactivation promotes strengthening and weakening of overlapping memories based on their associative memory strength. These results suggest the existence of an efficient regulatory neural mechanism that avoids the formation of cluttered memory representation of multiple events and promotes stabilization of complex memory networks. 
Copyright © 2017 the authors 0270-6474/17/377748-11$15.00/0.

  1. A processing architecture for associative short-term memory in electronic noses

    NASA Astrophysics Data System (ADS)

    Pioggia, G.; Ferro, M.; Di Francesco, F.; DeRossi, D.

    2006-11-01

    Electronic nose (e-nose) architectures usually consist of several modules that process various tasks such as control, data acquisition, data filtering, feature selection and pattern analysis. Heterogeneous techniques derived from chemometrics, neural networks, and fuzzy rules used to implement such tasks may lead to issues concerning module interconnection and cooperation. Moreover, a new learning phase is mandatory once new measurements have been added to the dataset, thus causing changes in the previously derived model. Consequently, if a loss in the previous learning occurs (catastrophic interference), real-time applications of e-noses are limited. To overcome these problems this paper presents an architecture for dynamic and efficient management of multi-transducer data processing techniques and for saving an associative short-term memory of the previously learned model. The architecture implements an artificial model of a hippocampus-based working memory, enabling the system to be ready for real-time applications. Starting from the base models available in the architecture core, dedicated models for neurons, maps and connections were tailored to an artificial olfactory system devoted to analysing olive oil. In order to verify the ability of the processing architecture in associative and short-term memory, a paired-associate learning test was applied. The avoidance of catastrophic interference was observed.

  2. Strategies for Energy Efficient Resource Management of Hybrid Programming Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Dong; Supinski, Bronis de; Schulz, Martin

    2013-01-01

Many scientific applications are programmed using hybrid programming models that use both message passing and shared memory, due to the increasing prevalence of large-scale systems with multicore, multisocket nodes. Previous work has shown that energy efficiency can be improved using software-controlled execution schemes that consider both the programming model and the power-aware execution capabilities of the system. However, such approaches have focused on identifying optimal resource utilization for one programming model, either shared memory or message passing, in isolation. The potential solution space, and thus the challenge, increases substantially when optimizing hybrid models, since the possible resource configurations grow exponentially. Nonetheless, with the accelerating adoption of hybrid programming models, we increasingly need improved energy efficiency for hybrid parallel applications on large-scale systems. In this work, we present new software-controlled execution schemes that consider the effects of dynamic concurrency throttling (DCT) and dynamic voltage and frequency scaling (DVFS) in the context of hybrid programming models. Specifically, we present predictive models and novel algorithms based on statistical analysis that anticipate application power and time requirements under different concurrency and frequency configurations. We apply our models and methods to the NPB MZ benchmarks and selected applications from the ASC Sequoia codes. Overall, we achieve substantial energy savings (8.74% on average and up to 13.8%) with some performance gain (up to 7.5%) or negligible performance loss.
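The kind of configuration search such predictive models enable can be sketched with placeholder power and time models (invented for illustration; the paper's models are statistical fits to measured data): grid-search the DCT x DVFS space for the minimum-energy configuration that stays within a performance bound.

```python
# Placeholder predictive models (invented; the paper fits these statistically).
def pred_time(threads, freq):
    # Amdahl-style: 10% serial work, time inversely proportional to frequency
    return (0.1 + 0.9 / threads) / freq

def pred_power(threads, freq):
    # static floor plus dynamic power growing with threads and ~freq^3
    return 10 + 2 * threads * freq ** 3

def best_config(thread_opts, freq_opts, max_slowdown=1.05):
    """Grid-search (DCT x DVFS) for the minimum-energy configuration whose
    predicted time stays within max_slowdown of the fastest setting."""
    baseline = pred_time(max(thread_opts), max(freq_opts))
    best = None
    for t in thread_opts:
        for f in freq_opts:
            tm = pred_time(t, f)
            if tm > baseline * max_slowdown:
                continue  # performance constraint violated
            energy = pred_power(t, f) * tm
            if best is None or energy < best[0]:
                best = (energy, t, f)
    return best

choice = best_config([1, 2, 4, 8], [1.0, 1.5, 2.0])
```
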

  3. Non-volatile main memory management methods based on a file system.

    PubMed

    Oikawa, Shuichi

    2014-01-01

There are upcoming non-volatile (NV) memory technologies that provide byte addressability and high performance; PCM, MRAM, and STT-RAM are examples. Such NV memory can be used as storage because of its data persistency without power supply, and it can be used as main memory because its performance matches that of DRAM. A number of studies have investigated its use for main memory and for storage; they were, however, conducted independently. This paper presents methods that enable the integration of main memory and file system management for NV memory. Such integration makes NV memory simultaneously usable as both main memory and storage. The presented methods use a file system as the basis of NV memory management. We implemented the proposed methods in the Linux kernel and performed an evaluation on the QEMU system emulator. The evaluation results show that 1) the proposed methods can perform comparably to the existing DRAM memory allocator and significantly better than page swapping, 2) their performance is affected by the internal data structures of a file system, and 3) data structures appropriate for traditional hard disk drives do not always work effectively for byte-addressable NV memory. We also evaluated the effects of the longer access latency of NV memory by cycle-accurate full-system simulation. The results show that the effect on page allocation cost is limited if the increase in latency is moderate.
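The general idea of managing byte-addressable memory through a file system can be sketched in user space with a file-backed mmap standing in for NV memory (an illustration of the approach, not the paper's Linux-kernel implementation; the file name and size are arbitrary):

```python
import mmap
import os
import tempfile

# A file standing in for a byte-addressable NV memory region.
path = os.path.join(tempfile.mkdtemp(), "nvm.img")
size = 4096
with open(path, "wb") as f:
    f.truncate(size)  # reserve the "memory" region via the file system

f = open(path, "r+b")
mem = mmap.mmap(f.fileno(), size)  # load/store access through the mapping

mem[0:5] = b"hello"  # an ordinary memory write...
mem.flush()          # ...made durable, as NV media would guarantee
mem.close()
f.close()

with open(path, "rb") as f:
    stored = f.read(5)  # the write persists in the backing file
```

The same bytes are reachable both as memory (through the mapping) and as storage (through the file system), which is the dual role the paper's integration targets.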

  4. Memory for conversation and the development of common ground.

    PubMed

    McKinley, Geoffrey L; Brown-Schmidt, Sarah; Benjamin, Aaron S

    2017-11-01

    Efficient conversation is guided by the mutual knowledge, or common ground, that interlocutors form as a conversation progresses. Characterized from the perspective of commonly used measures of memory, efficient conversation should be closely associated with item memory-what was said-and context memory-who said what to whom. However, few studies have explicitly probed memory to evaluate what type of information is maintained following a communicative exchange. The current study examined how item and context memory relate to the development of common ground over the course of a conversation, and how these forms of memory vary as a function of one's role in a conversation as speaker or listener. The process of developing common ground was positively related to both item and context memory. In addition, content that was spoken was remembered better than content that was heard. Our findings illustrate how memory assessments can complement language measures by revealing the impact that basic conversational processes have on memory for what has been discussed. By taking this approach, we show that not only does the process of forming common ground facilitate communication in the present, but it also promotes an enduring record of that event, facilitating conversation into the future.

  5. Optical storage with electromagnetically induced transparency in cold atoms at a high optical depth

    NASA Astrophysics Data System (ADS)

    Zhang, Shanchao; Zhou, Shuyu; Liu, Chang; Chen, J. F.; Wen, Jianming; Loy, M. M. T.; Wong, G. K. L.; Du, Shengwang

    2012-06-01

We report an experimental demonstration of efficient optical storage with electromagnetically induced transparency (EIT) in a dense cold 85Rb atomic ensemble trapped in a two-dimensional magneto-optical trap. By varying the optical depth (OD) from 0 to 140, we observe that the optimal storage efficiency for coherent optical pulses saturates at 50% when OD > 50. Our result is consistent with results from hot vapor cell experiments, which suggest that a four-wave mixing nonlinear process degrades EIT storage coherence and efficiency. We apply this EIT quantum memory to narrowband single photons with controllable waveforms and obtain an optimal storage efficiency of 49±3% for single-photon wave packets. This is the highest single-photon storage efficiency reported to date and brings the EIT atomic quantum memory close to practical application, because an efficiency above 50% is necessary to operate the memory within the no-cloning regime and beat the classical limit.

  6. Forensic Analysis of Windows® Virtual Memory Incorporating the System’s Page-File

    DTIC Science & Technology

    2008-12-01

    One reason it is difficult to interpret captured memory data in a meaningful way is how memory is managed by the operating system: data belonging to one process can be distributed arbitrarily across physical memory.

  7. Operating systems and network protocols for wireless sensor networks.

    PubMed

    Dutta, Prabal; Dunkels, Adam

    2012-01-13

    Sensor network protocols exist to satisfy the communication needs of diverse applications, including data collection, event detection, target tracking and control. Network protocols to enable these services are constrained by the extreme resource scarcity of sensor nodes-including energy, computing, communications and storage-which must be carefully managed and multiplexed by the operating system. These challenges have led to new protocols and operating systems that are efficient in their energy consumption, careful in their computational needs and miserly in their memory footprints, all while discovering neighbours, forming networks, delivering data and correcting failures.

  8. Dementia

    MedlinePlus

    ... living. Functions affected include memory, language skills, visual perception, problem solving, self-management, and the ability to ...

  9. High Storage Efficiency and Large Fractional Delay of EIT-Based Memory

    NASA Astrophysics Data System (ADS)

    Chen, Yi-Hsin; Lee, Meng-Jung; Wang, I.-Chung; Du, Shengwang; Chen, Yong-Fan; Chen, Ying-Cheng; Yu, Ite

    2013-05-01

    In long-distance quantum communication and optical quantum computation, an efficient and long-lived quantum memory is an important component. We first experimentally demonstrated that a time-space-reversing method plus an optimized pulse shape can improve the storage efficiency (SE) of light pulses to 78% in a cold atomic medium, based on the effect of electromagnetically induced transparency (EIT). We obtain a large fractional delay of 74 at 50% SE, the best record so far. The measured classical fidelity of the recalled pulse is higher than 90% and nearly independent of the storage time, implying that the optical memory maintains excellent phase coherence. Owing to the quantum nature of the EIT light-matter interaction, the current result may be readily applied to single-photon quantum states. This study advances EIT-based quantum memory toward practical quantum information applications.

  10. A Comprehensive Study on Energy Efficiency and Performance of Flash-based SSD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Seon-Yeon; Kim, Youngjae; Urgaonkar, Bhuvan

    2011-01-01

    Use of flash memory as a storage medium is becoming popular in diverse computing environments. However, because of interface differences, flash memory requires a hard-disk-emulation layer called the FTL (flash translation layer). Although the FTL enables flash memory storage to replace conventional hard disks, it induces significant computational and space overhead. Despite the low power consumption of flash memory itself, this overhead leads to significant power consumption in the overall storage system. In this paper, we analyze the characteristics of flash-based storage devices from the viewpoint of power consumption and energy efficiency using various methodologies. First, we use simulation to investigate the interior operation of flash-based storage. Subsequently, we measure the performance and energy efficiency of commodity flash-based SSDs, using microbenchmarks to identify their block-device-level characteristics and macrobenchmarks to reveal their filesystem-level characteristics.
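The bookkeeping an FTL performs, which is the source of the computational and space overhead discussed here, can be sketched as a toy page-mapping layer (illustrative only; real FTLs add garbage collection, wear leveling, and block-level structure):

```python
class ToyFTL:
    """Minimal page-mapping FTL: logical pages map to physical pages,
    writes go out-of-place, and stale pages are tracked for later erase."""
    def __init__(self, n_phys):
        self.l2p = {}                    # logical page -> physical page
        self.free = list(range(n_phys))  # free physical pages
        self.invalid = set()             # stale pages awaiting erase/GC

    def write(self, lpn, data_store, data):
        if lpn in self.l2p:
            self.invalid.add(self.l2p[lpn])  # old copy becomes stale
        ppn = self.free.pop(0)               # out-of-place write
        data_store[ppn] = data
        self.l2p[lpn] = ppn

    def read(self, lpn, data_store):
        return data_store.get(self.l2p[lpn])

store = {}
ftl = ToyFTL(8)
ftl.write(0, store, "v1")
ftl.write(0, store, "v2")  # overwrite goes to a fresh page; old page invalidated
```

Every host I/O pays for this indirection, which is why the abstract attributes part of the storage system's power draw to the FTL rather than the flash chips themselves.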

  11. Seizure Control and Memory Impairment Are Related to Disrupted Brain Functional Integration in Temporal Lobe Epilepsy.

    PubMed

    Park, Chang-Hyun; Choi, Yun Seo; Jung, A-Reum; Chung, Hwa-Kyoung; Kim, Hyeon Jin; Yoo, Jeong Hyun; Lee, Hyang Woon

    2017-01-01

    Brain functional integration can be disrupted in patients with temporal lobe epilepsy (TLE), but the clinical relevance of this disruption is not completely understood. The authors hypothesized that disrupted functional integration over brain regions remote from, as well as adjacent to, the seizure focus could be related to clinical severity in terms of seizure control and memory impairment. Using resting-state functional MRI data acquired from 48 TLE patients and 45 healthy controls, the authors mapped functional brain networks and assessed changes in a network parameter of brain functional integration, efficiency, to examine the distribution of disrupted functional integration within and between brain regions. The authors assessed whether the extent of altered efficiency was influenced by seizure control status and whether the degree of altered efficiency was associated with the severity of memory impairment. Alterations in the efficiency were observed primarily near the subcortical region ipsilateral to the seizure focus in TLE patients. The extent of regional involvement was greater in patients with poor seizure control: it reached the frontal, temporal, occipital, and insular cortices in TLE patients with poor seizure control, whereas it was limited to the limbic and parietal cortices in TLE patients with good seizure control. Furthermore, TLE patients with poor seizure control experienced more severe memory impairment, and this was associated with lower efficiency in the brain regions with altered efficiency. These findings indicate that the distribution of disrupted brain functional integration is clinically relevant, as it is associated with seizure control status and comorbid memory impairment.

  12. Set processing in a network environment. [data bases and magnetic disks and tapes

    NASA Technical Reports Server (NTRS)

    Hardgrave, W. T.

    1975-01-01

    A combination of a local network, a mass storage system, and an autonomous set processor serving as a data/storage management machine is described. Its characteristics include: content-accessible data bases usable from all connected devices; efficient storage/access of large data bases; simple and direct programming with data manipulation and storage management handled by the set processor; simple data base design and entry from source representation to set processor representation with no predefinition necessary; capability available for user sort/order specification; significant reduction in tape/disk pack storage and mounts; flexible environment that allows upgrading hardware/software configuration without causing major interruptions in service; minimal traffic on data communications network; and improved central memory usage on large processors.

  13. Route selection by rats and humans in a navigational traveling salesman problem.

    PubMed

    Blaser, Rachel E; Ginchansky, Rachel R

    2012-03-01

    Spatial cognition is typically examined in non-human animals from the perspective of learning and memory. For this reason, spatial tasks are often constrained by the time necessary for training or the capacity of the animal's short-term memory. A spatial task with limited learning and memory demands could allow for more efficient study of some aspects of spatial cognition. The traveling salesman problem (TSP), used to study human visuospatial problem solving, is a simple task with modifiable learning and memory requirements. In the current study, humans and rats were characterized in a navigational version of the TSP. Subjects visited each of 10 baited targets in any sequence from a set starting location. Unlike similar experiments, the roles of learning and memory were purposely minimized; all targets were perceptually available, no distracters were used, and each configuration was tested only once. The task yielded a variety of behavioral measures, including target revisits and omissions, route length, and frequency of transitions between each pair of targets. Both humans and rats consistently chose routes that were more efficient than chance, but less efficient than optimal, and generally less efficient than routes produced by the nearest-neighbor strategy. We conclude that the TSP is a useful and flexible task for the study of spatial cognition in human and non-human animals.
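The nearest-neighbor strategy used as a comparison baseline in this study can be sketched directly (the coordinates and start point below are invented for illustration):

```python
import math

def nearest_neighbor_route(start, targets):
    """Greedy heuristic: from the current position, always visit the
    closest unvisited target next."""
    route, here = [], start
    remaining = list(targets)
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(here, p))
        route.append(nxt)
        remaining.remove(nxt)
        here = nxt
    return route

# Invented coordinates for illustration.
route = nearest_neighbor_route((0, 0), [(5, 0), (1, 0), (2, 0)])
```

The heuristic is fast and usually good, but not optimal in general, which is why subject routes can fall short of it without being random.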

  14. Representational Constraints on the Development of Memory and Metamemory: A Developmental-Representational Theory

    ERIC Educational Resources Information Center

    Ceci, Stephen J.; Fitneva, Stanka A.; Williams, Wendy M.

    2010-01-01

    Traditional accounts of memory development suggest that maturation of prefrontal cortex (PFC) enables efficient metamemory, which enhances memory. An alternative theory is described, in which changes in early memory and metamemory are mediated by representational changes, independent of PFC maturation. In a pilot study and Experiment 1, younger…

  15. [Serotonin receptor (5-HTR2A) and dysbindin (DTNBP1) genes and component process variables of short-term verbal memory in schizophrenia].

    PubMed

    Alfimova, M V; Monakhov, M V; Abramova, L I; Golubev, S A; Golimbet, V E

    2009-01-01

    An association study of variations in the DTNBP1 (P1763 and P1578) and 5-HTR2A (T102C and A-1438G) genes with short-term verbal memory efficiency and its component process variables was carried out in 405 patients with schizophrenia and 290 healthy controls. All subjects were asked to immediately recall two sets of 10 words. Total recall, List 1 recall, immediate recall (attention span), proactive interference, and the number of intrusions were measured. Patients differed significantly from controls on all memory variables: the efficiency of test performance, the efficiency of immediate memory, the effect of proactive interference, and the number of intrusions were decreased in the patient group. Both 5-HTR2A polymorphisms were associated with short-term verbal memory efficiency in the combined sample, with the worst performance observed in carriers of homozygous CC (T102C) and GG (A-1438G) genotypes. A significant effect of the P1763 (DTNBP1) marker on the component process variables (proactive interference and intrusions) was found, while its effect on total recall was non-significant; homozygotes for GG (P1763) had the worst scores. Overall, the data are in line with the involvement of DTNBP1 and 5-HTR2A in different component processes of memory in healthy subjects and patients with schizophrenia.

  16. Optimizing NEURON Simulation Environment Using Remote Memory Access with Recursive Doubling on Distributed Memory Systems.

    PubMed

    Shehzad, Danish; Bozkuş, Zeki

    2016-01-01

    The increasing complexity of neuronal network models has escalated efforts to make the NEURON simulation environment more efficient. Computational neuroscientists divide the equations into subnets across multiple processors to achieve better hardware performance. On parallel machines, interprocessor spike exchange consumes a large fraction of the overall simulation time. NEURON uses the Message Passing Interface (MPI) for communication between processors; the MPI_Allgather collective is used to exchange spikes after each interval across distributed-memory systems. Although increasing the number of processors improves concurrency and performance, it also increases MPI_Allgather communication time between processors. This necessitates an improved communication method to decrease spike-exchange time on distributed-memory systems. This work improves the MPI_Allgather method using Remote Memory Access (RMA), moving from two-sided to one-sided communication, and uses a recursive doubling mechanism to achieve efficient communication between processors in a logarithmic number of steps. The approach enhances communication concurrency and improves overall runtime, making NEURON more efficient for simulating large neuronal network models.
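The recursive doubling idea can be simulated in plain Python to show why the exchange completes in log2(P) rounds: in round i, each rank trades its entire accumulated buffer with the rank differing in bit i. This is a sketch of the communication pattern only, not the MPI/RMA implementation:

```python
def recursive_doubling_allgather(values):
    """Simulate an allgather over P ranks (P a power of two): in round i,
    rank r merges in the accumulated buffer of rank r XOR 2**i."""
    P = len(values)
    assert P > 0 and P & (P - 1) == 0, "requires a power-of-two rank count"
    buf = [{r: values[r]} for r in range(P)]  # per-rank gathered data
    step, rounds = 1, 0
    while step < P:
        new = [dict(b) for b in buf]
        for r in range(P):
            new[r].update(buf[r ^ step])  # one-sided "get" from the partner
        buf = new
        step *= 2
        rounds += 1
    return buf, rounds

buf, rounds = recursive_doubling_allgather(list(range(8)))
```

With 8 ranks the simulation finishes in 3 rounds, and every rank ends up holding all 8 values; the amount of data exchanged doubles each round, which is the "precise steps" property the abstract refers to.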

  17. Optimizing NEURON Simulation Environment Using Remote Memory Access with Recursive Doubling on Distributed Memory Systems

    PubMed Central

    Bozkuş, Zeki

    2016-01-01

    The increasing complexity of neuronal network models has escalated efforts to make the NEURON simulation environment more efficient. Computational neuroscientists divide the equations into subnets across multiple processors to achieve better hardware performance. On parallel machines, interprocessor spike exchange consumes a large fraction of the overall simulation time. NEURON uses the Message Passing Interface (MPI) for communication between processors; the MPI_Allgather collective is used to exchange spikes after each interval across distributed-memory systems. Although increasing the number of processors improves concurrency and performance, it also increases MPI_Allgather communication time between processors. This necessitates an improved communication method to decrease spike-exchange time on distributed-memory systems. This work improves the MPI_Allgather method using Remote Memory Access (RMA), moving from two-sided to one-sided communication, and uses a recursive doubling mechanism to achieve efficient communication between processors in a logarithmic number of steps. The approach enhances communication concurrency and improves overall runtime, making NEURON more efficient for simulating large neuronal network models. PMID:27413363

  18. Brain oscillatory substrates of visual short-term memory capacity.

    PubMed

    Sauseng, Paul; Klimesch, Wolfgang; Heise, Kirstin F; Gruber, Walter R; Holz, Elisa; Karim, Ahmed A; Glennon, Mark; Gerloff, Christian; Birbaumer, Niels; Hummel, Friedhelm C

    2009-11-17

    The amount of information that can be stored in visual short-term memory is strictly limited to about four items. Therefore, memory capacity relies not only on the successful retention of relevant information but also on efficient suppression of distracting information, visual attention, and executive functions. However, completely separable neural signatures for these memory capacity-limiting factors remain to be identified. Because of its functional diversity, oscillatory brain activity may offer a utile solution. In the present study, we show that capacity-determining mechanisms, namely retention of relevant information and suppression of distracting information, are based on neural substrates independent of each other: the successful maintenance of relevant material in short-term memory is associated with cross-frequency phase synchronization between theta (rhythmical neural activity around 5 Hz) and gamma (> 50 Hz) oscillations at posterior parietal recording sites. On the other hand, electroencephalographic alpha activity (around 10 Hz) predicts memory capacity based on efficient suppression of irrelevant information in short-term memory. Moreover, repetitive transcranial magnetic stimulation at alpha frequency can modulate short-term memory capacity by influencing the ability to suppress distracting information. Taken together, the current study provides evidence for a double dissociation of brain oscillatory correlates of visual short-term memory capacity.

  19. Orchid: a novel management, annotation and machine learning framework for analyzing cancer mutations.

    PubMed

    Cario, Clinton L; Witte, John S

    2018-03-15

    As whole-genome tumor sequence and biological annotation datasets grow in size, number and content, there is an increasing basic-science and clinical need for efficient and accurate data management and analysis software. With the emergence of increasingly sophisticated data stores, execution environments and machine learning algorithms, there is also a need to integrate functionality across frameworks. We present orchid, a Python-based software package for the management, annotation and machine learning of cancer mutations. Building on technologies for parallel workflow execution, in-memory database storage and machine learning analytics, orchid efficiently handles millions of mutations and hundreds of features in an easy-to-use manner. We describe the implementation of orchid and demonstrate its ability to distinguish tissue of origin in 12 tumor types, based on 339 features, using a random forest classifier. Orchid and our annotated tumor mutation database are freely available at https://github.com/wittelab/orchid. The software is implemented in Python 2.7 and makes use of MySQL or MemSQL databases; Groovy 2.4.5 is optionally required for parallel workflow execution. Contact: JWitte@ucsf.edu. Supplementary data are available at Bioinformatics online.
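The mutation-table storage pattern can be sketched with the standard library's in-memory SQLite in place of orchid's MySQL/MemSQL backend (the schema, column names, and rows below are invented for illustration):

```python
import sqlite3

# In-memory database standing in for orchid's MySQL/MemSQL backend.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE mutations (
    chrom TEXT, pos INTEGER, ref TEXT, alt TEXT,
    tissue TEXT, conservation REAL)""")

# Invented example rows: each mutation carries annotation features.
rows = [("chr1", 12345, "A", "G", "breast", 0.91),
        ("chr2", 67890, "C", "T", "lung",   0.12)]
db.executemany("INSERT INTO mutations VALUES (?,?,?,?,?,?)", rows)

# An annotation-feature query of the kind that would feed a classifier.
high = db.execute(
    "SELECT chrom, pos FROM mutations WHERE conservation > 0.5").fetchall()
```

Feature rows selected this way would then be handed to a classifier such as a random forest, as the abstract describes.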

  20. A univariate model of river water nitrate time series

    NASA Astrophysics Data System (ADS)

    Worrall, F.; Burt, T. P.

    1999-01-01

    Four time series were taken from three catchments in the North and South of England. The sites chosen included two in predominantly agricultural catchments, one at the tidal limit and one downstream of a sewage treatment works. A time series model was constructed for each of these series as a means of decomposing the elements controlling river water nitrate concentrations and to assess whether this approach could provide a simple management tool for protecting water abstractions. Autoregressive (AR) modelling of the detrended and deseasoned time series showed a "memory effect": an increase in the winter-summer difference in nitrate levels that depended upon the nitrate concentration 12 or 6 months previously. Autoregressive moving average (ARMA) modelling showed that one of the series contained seasonal, non-stationary elements that appeared as an increasing trend in the winter-summer difference. The ARMA model was used to predict nitrate levels, and predictions were tested against data held back from the model construction process; they gave average percentage errors of less than 10%. Empirical modelling can therefore provide a simple, efficient method for constructing management models for downstream water abstraction.
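The autoregressive core of such a model can be sketched with a least-squares AR(1) fit on a detrended, deseasoned series (illustrative only; the paper's models also include seasonal and moving-average terms, and the series below is synthetic):

```python
def fit_ar1(x):
    """Least-squares estimate of phi in x[t] = phi * x[t-1] + e[t]
    for a detrended, deseasoned series."""
    num = sum(x[t - 1] * x[t] for t in range(1, len(x)))
    den = sum(x[t - 1] ** 2 for t in range(1, len(x)))
    return num / den

def forecast(x_last, phi, steps):
    """Iterate the fitted recurrence forward to predict future values."""
    out = []
    for _ in range(steps):
        x_last = phi * x_last
        out.append(x_last)
    return out

# Synthetic noiseless AR(1) series with phi = 0.8; the fit recovers it.
series = [1.0]
for _ in range(20):
    series.append(0.8 * series[-1])
phi = fit_ar1(series)
```

In the paper's setting, the fitted coefficients at lags 6 and 12 would quantify the "memory effect", and held-back observations would validate the forecasts.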

  1. The cost of misremembering: Inferring the loss function in visual working memory.

    PubMed

    Sims, Chris R

    2015-03-04

    Visual working memory (VWM) is a highly limited storage system. A basic consequence of this fact is that visual memories cannot perfectly encode or represent the veridical structure of the world. However, in natural tasks, some memory errors might be more costly than others. This raises the intriguing possibility that the nature of memory error reflects the costs of committing different kinds of errors. Many existing theories assume that visual memories are noise-corrupted versions of afferent perceptual signals. However, this additive noise assumption oversimplifies the problem. Implicit in the behavioral phenomena of visual working memory is the concept of a loss function: a mathematical entity that describes the relative cost to the organism of making different types of memory errors. An optimally efficient memory system is one that minimizes the expected loss according to a particular loss function, while subject to a constraint on memory capacity. This paper describes a novel theoretical framework for characterizing visual working memory in terms of its implicit loss function. Using inverse decision theory, the empirical loss function is estimated from the results of a standard delayed recall visual memory experiment. These results are compared to the predicted behavior of a visual working memory system that is optimally efficient for a previously identified natural task, gaze correction following saccadic error. Finally, the approach is compared to alternative models of visual working memory, and shown to offer a superior account of the empirical data across a range of experimental datasets. © 2015 ARVO.
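The central role of the loss function can be illustrated with a toy instance of the same decision-theoretic logic (samples and grid below are invented): the report that minimizes expected loss depends on which loss function is assumed, so observed error patterns carry information about the implicit loss.

```python
def expected_loss(report, samples, loss):
    """Mean loss of issuing `report` when the true value is drawn from samples."""
    return sum(loss(report - s) for s in samples) / len(samples)

def best_report(samples, loss, grid):
    """Grid search for the report minimizing expected loss."""
    return min(grid, key=lambda r: expected_loss(r, samples, loss))

samples = [0.0, 0.0, 1.0, 5.0]            # skewed error distribution (invented)
grid = [i / 10 for i in range(0, 61)]
sq = best_report(samples, lambda e: e * e, grid)   # quadratic loss -> the mean
ab = best_report(samples, lambda e: abs(e), grid)  # absolute loss -> near the median
```

Quadratic loss is minimized at the mean (1.5 here), absolute loss at the median; inverse decision theory runs this logic backwards, inferring the loss function from the errors people actually make.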

  2. Working memory load predicts visual search efficiency: Evidence from a novel pupillary response paradigm.

    PubMed

    Attar, Nada; Schneps, Matthew H; Pomplun, Marc

    2016-10-01

    An observer's pupil dilates and constricts in response to variables such as ambient and focal luminance, cognitive effort, the emotional stimulus content, and working memory load. The pupil's memory load response is of particular interest, as it might be used for estimating observers' memory load while they are performing a complex task, without adding an interruptive and confounding memory test to the protocol. One important task in which working memory's involvement is still being debated is visual search, and indeed a previous experiment by Porter, Troscianko, and Gilchrist (Quarterly Journal of Experimental Psychology, 60, 211-229, 2007) analyzed observers' pupil sizes during search to study this issue. These authors found that pupil size increased over the course of the search, and they attributed this finding to accumulating working memory load. However, since the pupil response is slow and does not depend on memory load alone, this conclusion is rather speculative. In the present study, we estimated working memory load in visual search during the presentation of intermittent fixation screens, thought to induce a low, stable level of arousal and cognitive effort. Using standard visual search and control tasks, we showed that this paradigm reduces the influence of non-memory-related factors on pupil size. Furthermore, we found an early increase in working memory load to be associated with more efficient search, indicating a significant role of working memory in the search process.

  3. Relationships between self-reported sleep quality components and cognitive functioning in breast cancer survivors up to 10 years following chemotherapy.

    PubMed

    Henneghan, Ashley M; Carter, Patricia; Stuifbergan, Alexa; Parmelee, Brennan; Kesler, Shelli

    2018-04-23

    Links have been made between aspects of sleep quality and cognitive function in breast cancer survivors (BCS), but findings are heterogeneous. The objective of this study is to examine relationships between specific sleep quality components (latency, duration, efficiency, daytime sleepiness, sleep disturbance, use of sleep aids) and cognitive impairment (performance and perceived), and determine which sleep quality components are the most significant contributors to cognitive impairments in BCS 6 months to 10 years post chemotherapy. Women 21 to 65 years old with a history of non-metastatic breast cancer following chemotherapy completion were recruited. Data collection included surveys to evaluate sleep quality and perceived cognitive impairments, and neuropsychological testing to evaluate verbal fluency and memory. Descriptive statistics, bivariate correlations, and hierarchical multiple regression were calculated. 90 women (mean age 49) completed data collection. Moderate significant correlations were found between daytime dysfunction, sleep efficiency, sleep latency, and sleep disturbance and perceived cognitive impairment (Rs = -0.37 to -0.49, Ps<.00049), but not objective cognitive performance of verbal fluency, memory or attention. After accounting for individual and clinical characteristics, the strongest predictors of perceived cognitive impairments were daytime dysfunction, sleep efficiency, and sleep disturbance. Findings support links between sleep quality and perceived cognitive impairments in BCS and suggest specific components of sleep quality (daytime dysfunction, sleep efficiency, and sleep disturbance) are associated with perceived cognitive functioning in this population. Findings can assist clinicians in guiding survivors to manage sleep and cognitive problems and aid in the design of interventional research. This article is protected by copyright. All rights reserved.

  4. A generalized memory test algorithm

    NASA Technical Reports Server (NTRS)

    Milner, E. J.

    1982-01-01

    A general algorithm for testing digital computer memory is presented. The test checks that (1) every bit can be cleared and set in each memory word, and (2) bits are not erroneously cleared and/or set elsewhere in memory at the same time. The algorithm can be applied to any size memory block and any size memory word. It is concise and efficient, requiring very few cycles through memory. For example, a test of 16-bit-word-size memory requires only 384 cycles through memory. Approximately 15 seconds were required to test a 32K block of such memory, using a microcomputer having a cycle time of 133 nanoseconds.

  5. Concurrent Memory Load Can Make RSVP Search More Efficient

    ERIC Educational Resources Information Center

    Gil-Gomez de Liano, Beatriz; Botella, Juan

    2011-01-01

    The detrimental effect of increased memory load on selective attention has been demonstrated in many situations. However, in search tasks over time using RSVP methods, it is not clear how memory load affects attentional processes; null, beneficial, and detrimental effects of memory load have all been found in these types of tasks. The…

  6. Genetic algorithms with memory- and elitism-based immigrants in dynamic environments.

    PubMed

    Yang, Shengxiang

    2008-01-01

    In recent years the genetic algorithm community has shown a growing interest in studying dynamic optimization problems. Several approaches have been devised. The random immigrants and memory schemes are two major ones. The random immigrants scheme addresses dynamic environments by maintaining the population diversity while the memory scheme aims to adapt genetic algorithms quickly to new environments by reusing historical information. This paper investigates a hybrid memory and random immigrants scheme, called memory-based immigrants, and a hybrid elitism and random immigrants scheme, called elitism-based immigrants, for genetic algorithms in dynamic environments. In these schemes, the best individual from memory or the elite from the previous generation is retrieved as the base to create immigrants into the population by mutation. In this way, population diversity is maintained while the immigrants remain adapted to the current environment. Based on a series of systematically constructed dynamic problems, experiments are carried out to compare genetic algorithms with the memory-based and elitism-based immigrants schemes against genetic algorithms with traditional memory and random immigrants schemes and a hybrid memory and multi-population scheme. The sensitivity analysis regarding some key parameters is also carried out. Experimental results show that the memory-based and elitism-based immigrants schemes efficiently improve the performance of genetic algorithms in dynamic environments.
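
    The core mechanism, mutating the best stored individual to create immigrants that replace the worst of the current population, can be sketched as follows. The bit-string encoding, the OneMax-style fitness, and all parameter values are illustrative assumptions, not the paper's benchmark setup.

```python
import random

def mutate(bits, rate, rng):
    """Flip each bit independently with probability `rate`."""
    return [b ^ (rng.random() < rate) for b in bits]

def inject_immigrants(pop, fitness, base, n_imm, rate, rng):
    """Replace the n_imm worst individuals with mutants of `base`
    (the elite or the best-matching memory entry)."""
    pop = sorted(pop, key=fitness, reverse=True)
    return pop[:-n_imm] + [mutate(base, rate, rng) for _ in range(n_imm)]

rng = random.Random(0)
target = [1] * 20                                  # current (dynamic) optimum
fitness = lambda ind: sum(a == b for a, b in zip(ind, target))
pop = [[rng.randint(0, 1) for _ in range(20)] for _ in range(10)]
elite = max(pop, key=fitness)
pop = inject_immigrants(pop, fitness, elite, n_imm=3, rate=0.1, rng=rng)
```

    Because the immigrants are mutants of a good individual rather than random strings, they add diversity near the current optimum instead of scattering search effort uniformly.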

  7. MIROS: A Hybrid Real-Time Energy-Efficient Operating System for the Resource-Constrained Wireless Sensor Nodes

    PubMed Central

    Liu, Xing; Hou, Kun Mean; de Vaulx, Christophe; Shi, Hongling; Gholami, Khalid El

    2014-01-01

    Operating system (OS) technology is significant for the proliferation of the wireless sensor network (WSN). With an outstanding OS, the constrained WSN resources (processor, memory and energy) can be utilized efficiently, and user application development can be well supported. In this article, a new hybrid, real-time, memory-efficient, energy-efficient, user-friendly and fault-tolerant WSN OS, MIROS, is designed and implemented. MIROS implements a hybrid scheduler and a dynamic memory allocator. Real-time scheduling can thus be achieved with low memory consumption. In addition, it implements a mid-layer software, EMIDE (Efficient Mid-layer Software for User-Friendly Application Development Environment), to decouple the WSN application from the low-level system. The application programming process can consequently be simplified and the application reprogramming performance improved. Moreover, it combines both software and multi-core hardware techniques to conserve the energy resources, improve the node reliability, and achieve a new debugging method. To evaluate the performance of MIROS, it is compared with other WSN OSes (TinyOS, Contiki, SOS, openWSN and mantisOS) from different OS concerns. The final evaluation results prove that MIROS is suitable to be used even on tightly resource-constrained WSN nodes. It can support real-time WSN applications. Furthermore, it is energy efficient, user friendly and fault tolerant. PMID:25248069

  8. MIROS: a hybrid real-time energy-efficient operating system for the resource-constrained wireless sensor nodes.

    PubMed

    Liu, Xing; Hou, Kun Mean; de Vaulx, Christophe; Shi, Hongling; El Gholami, Khalid

    2014-09-22

    Operating system (OS) technology is significant for the proliferation of the wireless sensor network (WSN). With an outstanding OS, the constrained WSN resources (processor, memory and energy) can be utilized efficiently, and user application development can be well supported. In this article, a new hybrid, real-time, memory-efficient, energy-efficient, user-friendly and fault-tolerant WSN OS, MIROS, is designed and implemented. MIROS implements a hybrid scheduler and a dynamic memory allocator. Real-time scheduling can thus be achieved with low memory consumption. In addition, it implements a mid-layer software, EMIDE (Efficient Mid-layer Software for User-Friendly Application Development Environment), to decouple the WSN application from the low-level system. The application programming process can consequently be simplified and the application reprogramming performance improved. Moreover, it combines both software and multi-core hardware techniques to conserve the energy resources, improve the node reliability, and achieve a new debugging method. To evaluate the performance of MIROS, it is compared with other WSN OSes (TinyOS, Contiki, SOS, openWSN and mantisOS) from different OS concerns. The final evaluation results prove that MIROS is suitable to be used even on tightly resource-constrained WSN nodes. It can support real-time WSN applications. Furthermore, it is energy efficient, user friendly and fault tolerant.

  9. The Fritz Roethlisberger Memorial Award Goes to "Using Leadered Groups in Organizational Behavior and Management Survey Courses"

    ERIC Educational Resources Information Center

    Amoroso, Lisa M.; Loyd, Denise Lewin; Hoobler, Jenny M.

    2012-01-01

    The Fritz J. Roethlisberger Memorial Award for the best article in the 2011 "Journal of Management Education" goes to Rae Andre for her article, Using Leadered Groups in Organizational Behavior and Management Survey Courses ("Journal of Management Education," Volume 35, Number 5, pp. 596-619). In keeping with Roethlisberger's legacy, this year's…

  10. Encoding: The Keystone to Efficient Functioning of Verbal Short-Term Memory

    ERIC Educational Resources Information Center

    Barry, Johanna G.; Sabisch, Beate; Friederici, Angela D.; Brauer, Jens

    2011-01-01

    Verbal short-term memory (VSTM) is thought to play a critical role in language learning. It is indexed by the nonword repetition task where listeners are asked to repeat meaningless words like "blonterstaping". The present study investigated the effect on nonword repetition performance of differences in efficiency of functioning of some part of…

  11. High speed finite element simulations on the graphics card

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huthwaite, P.; Lowe, M. J. S.

    A software package is developed to perform explicit time domain finite element simulations of ultrasonic propagation on the graphical processing unit, using Nvidia’s CUDA. Of critical importance for this problem is the arrangement of nodes in memory, allowing data to be loaded efficiently and minimising communication between the independently executed blocks of threads. The initial stage of memory arrangement is partitioning the mesh; both a well established ‘greedy’ partitioner and a new, more efficient ‘aligned’ partitioner are investigated. A method is then developed to efficiently arrange the memory within each partition. The technique is compared to a commercial CPU equivalent, demonstrating an overall speedup of at least 100 for a non-destructive testing weld model.
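
    A 'greedy' mesh partitioner of the kind compared above typically grows each partition from a seed node by breadth-first search, so that nodes stored together in memory are neighbours in the mesh. The sketch below is a hedged illustration of that idea; the 1-D chain mesh and the deterministic seed choice are assumptions, not the paper's implementation.

```python
from collections import deque

def greedy_partition(adjacency, part_size):
    """Grow partitions of up to `part_size` nodes by BFS from a seed."""
    unassigned = set(adjacency)
    parts = []
    while unassigned:
        seed = min(unassigned)             # deterministic seed choice
        part, queue = [], deque([seed])
        while queue and len(part) < part_size:
            node = queue.popleft()
            if node in unassigned:
                unassigned.remove(node)
                part.append(node)
                queue.extend(n for n in adjacency[node] if n in unassigned)
        parts.append(part)
    return parts

# 1-D chain mesh of 10 nodes: 0-1-2-...-9
chain = {i: [j for j in (i - 1, i + 1) if 0 <= j < 10] for i in range(10)}
parts = greedy_partition(chain, 4)
```

    Keeping mesh neighbours in the same partition is what lets a GPU thread block load its nodal data contiguously and minimise cross-block communication.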

  12. Assessment of short- and long-term memory in trends of major climatic variables over Iran: 1966-2015

    NASA Astrophysics Data System (ADS)

    Mianabadi, Ameneh; Shirazi, Pooya; Ghahraman, Bijan; Coenders-Gerrits, A. M. J.; Alizadeh, Amin; Davary, Kamran

    2018-02-01

    In arid and semi-arid regions, water scarcity is the crucial issue for crop production. Identifying the spatial and temporal trends in aridity, especially during the crop-growing season, is important for farmers to manage their agricultural practices. This will become especially relevant when considering climate change projections. To reliably determine the actual trends, the influence of short- and long-term memory should be removed from the trend analysis. The objective of this study is to investigate the effect of short- and long-term memory on estimates of trends in two aridity indicators: the inverted De Martonne (ϕ_IDM) and Budyko (ϕ_B) indices. The analysis is done using precipitation and temperature data over Iran for a 50-year period (1966-2015) at three temporal scales: annual, wheat-growing season (October-June), and maize-growing season (May-November). For this purpose, the original and the modified Mann-Kendall tests (i.e., modified by three methods: trend-free pre-whitening (TFPW), effective sample size (ESS), and long-term persistence (LTP)) are used to investigate the temporal trends in aridity indices, precipitation, and temperature while taking into account the effect of short- and long-term memory. Precipitation and temperature data were provided by the Islamic Republic of Iran Meteorological Organization (IRIMO). The temporal trend analysis showed that aridity increased from 1966 to 2015 at the annual and wheat-growing season scales, which is due to a decreasing trend in precipitation and an increasing trend in mean temperature at these two timescales. The trend in aridity indices was decreasing in the maize-growing season, since precipitation has an increasing trend for most parts of Iran in that season. The increasing trend in aridity indices is significant in Western Iran, which can be related to the significantly more negative trend in precipitation in the West. This increasing trend in aridity could result in an increasing crop water requirement and a significant reduction in crop production and water use efficiency. Furthermore, the modified Mann-Kendall tests indicated that unlike the temperature series, the precipitation, ϕ_IDM, and ϕ_B series are not affected by short- and long-term memory. Our results can help decision makers and water resource managers to adopt appropriate policy strategies for sustainable development in the field of irrigated agriculture and water resources management.
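
    The test at the heart of the analysis above is the Mann-Kendall trend test. A minimal sketch of its S statistic (the original, unmodified form; the pre-whitening and long-term-persistence corrections are omitted here) is:

```python
from itertools import combinations

def mann_kendall_s(series):
    """Mann-Kendall S statistic: sum of sign(x_j - x_i) over all pairs
    i < j. S > 0 suggests an increasing trend, S < 0 a decreasing one."""
    sign = lambda d: (d > 0) - (d < 0)
    return sum(sign(xj - xi) for xi, xj in combinations(series, 2))

s_up = mann_kendall_s([1, 2, 3, 4, 5])     # strictly increasing series
s_down = mann_kendall_s([5, 4, 3, 2, 1])   # strictly decreasing series
```

    Serial correlation ("memory") inflates the variance of S, which is exactly why the modified versions (TFPW, ESS, LTP) are needed before declaring a trend significant.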

  13. Centrally managed unified shared virtual address space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilkes, John

    Systems, apparatuses, and methods for managing a unified shared virtual address space. A host may execute system software and manage a plurality of nodes coupled to the host. The host may send work tasks to the nodes, and for each node, the host may externally manage the node's view of the system's virtual address space. Each node may have a central processing unit (CPU) style memory management unit (MMU) with an internal translation lookaside buffer (TLB). In one embodiment, the host may be coupled to a given node via an input/output memory management unit (IOMMU) interface, where the IOMMU frontend interface shares the TLB with the given node's MMU. In another embodiment, the host may control the given node's view of virtual address space via memory-mapped control registers.

  14. Performance analysis and comparison of a minimum interconnections direct storage model with traditional neural bidirectional memories.

    PubMed

    Bhatti, A Aziz

    2009-12-01

    This study proposes an efficient and improved model of a direct storage bidirectional memory, the improved bidirectional associative memory (IBAM), and emphasises the use of nanotechnology for efficient implementation of such large-scale neural network structures at considerably lower cost, reduced complexity, and less area required for implementation. This memory model directly stores the X and Y associated sets of M bipolar binary vectors in the form of (MxN(x)) and (MxN(y)) memory matrices, requires O(N), or about 30%, of the interconnections with weight strengths ranging between +/-1, and is computationally very efficient compared to sequential, intraconnected and other bidirectional associative memory (BAM) models of outer-product type that require O(N(2)) complex interconnections with weight strengths ranging between +/-M. It is shown that the proposed model is functionally equivalent to and possesses all attributes of a BAM of outer-product type, and yet it is a simple and robust structure: a very large scale integration (VLSI), optical and nanotechnology realisable, modular and expandable neural network bidirectional associative memory model in which the addition or deletion of a pair of vectors does not require changes in the strength of interconnections of the entire memory matrix. The analysis of the retrieval process, signal-to-noise ratio, storage capacity and stability of the proposed model, as well as of the traditional BAM, has been carried out. Constraints on and characteristics of unipolar and bipolar binaries for improved storage and retrieval are discussed. The simulation results show that it has log_e(N) times higher storage capacity, superior performance, and faster convergence and retrieval time when compared to traditional sequential and intraconnected bidirectional memories.
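
    The traditional outer-product BAM that serves as the baseline above can be sketched in a few lines: the weight matrix is the sum of outer products W = Σ_k y_k x_k^T, and recall thresholds W·x. The tiny vector sizes below are illustrative; this is the classical Kosko-style construction, not the paper's IBAM.

```python
def train_bam(xs, ys):
    """Outer-product weight matrix W[i][j] = sum_k y_k[i] * x_k[j]."""
    nx, ny = len(xs[0]), len(ys[0])
    W = [[0] * nx for _ in range(ny)]
    for x, y in zip(xs, ys):
        for i in range(ny):
            for j in range(nx):
                W[i][j] += y[i] * x[j]
    return W

def recall(W, x):
    """Forward recall: threshold each row's dot product with x."""
    sign = lambda s: 1 if s >= 0 else -1
    return [sign(sum(wij * xj for wij, xj in zip(row, x))) for row in W]

# Two associated pairs of bipolar vectors (orthogonal, so recall is exact).
xs = [[1, -1, 1, -1], [1, 1, -1, -1]]
ys = [[1, -1], [-1, 1]]
W = train_bam(xs, ys)
```

    Note how every stored pair touches the entire matrix, which is why adding or deleting a pair in an outer-product BAM changes all the interconnection strengths; avoiding that is precisely the direct-storage model's advantage.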

  15. Coherent Optical Memory with High Storage Efficiency and Large Fractional Delay

    NASA Astrophysics Data System (ADS)

    Chen, Yi-Hsin; Lee, Meng-Jung; Wang, I.-Chung; Du, Shengwang; Chen, Yong-Fan; Chen, Ying-Cheng; Yu, Ite A.

    2013-02-01

    A high-storage efficiency and long-lived quantum memory for photons is an essential component in long-distance quantum communication and optical quantum computation. Here, we report a 78% storage efficiency of light pulses in a cold atomic medium based on the effect of electromagnetically induced transparency. At 50% storage efficiency, we obtain a fractional delay of 74, which is the best up-to-date record. The classical fidelity of the recalled pulse is better than 90% and nearly independent of the storage time, as confirmed by the direct measurement of phase evolution of the output light pulse with a beat-note interferometer. Such excellent phase coherence between the stored and recalled light pulses suggests that the current result may be readily applied to single photon wave packets. Our work significantly advances the technology of electromagnetically induced transparency-based optical memory and may find practical applications in long-distance quantum communication and optical quantum computation.

  16. Coherent optical memory with high storage efficiency and large fractional delay.

    PubMed

    Chen, Yi-Hsin; Lee, Meng-Jung; Wang, I-Chung; Du, Shengwang; Chen, Yong-Fan; Chen, Ying-Cheng; Yu, Ite A

    2013-02-22

    A high-storage efficiency and long-lived quantum memory for photons is an essential component in long-distance quantum communication and optical quantum computation. Here, we report a 78% storage efficiency of light pulses in a cold atomic medium based on the effect of electromagnetically induced transparency. At 50% storage efficiency, we obtain a fractional delay of 74, which is the best up-to-date record. The classical fidelity of the recalled pulse is better than 90% and nearly independent of the storage time, as confirmed by the direct measurement of phase evolution of the output light pulse with a beat-note interferometer. Such excellent phase coherence between the stored and recalled light pulses suggests that the current result may be readily applied to single photon wave packets. Our work significantly advances the technology of electromagnetically induced transparency-based optical memory and may find practical applications in long-distance quantum communication and optical quantum computation.

  17. Technology support of the handover: promoting observability, flexibility and efficiency.

    PubMed

    Patterson, Emily S

    2012-12-01

    Efforts to standardise data elements and increase the comprehensiveness of information included in patient handovers have produced a growing interest in augmenting the verbal exchange of information with written communications conducted through health information technology (HIT). The aim of this perspective is to offer recommendations to optimise technology support of handovers, based on a review of the relevant scientific literature. Review of the literature on human factors and the study of communication produced three recommendations. The first entails making available "shared knowledge" relevant to the handover and subsequent clinical management with intended and unintended recipients. The second is to create a flexible narrative structure (unstructured text fields) for human-human communications facilitated by technology. The third recommendation is to avoid reliance on real-time data entry during busy periods. Implementing these recommendations is anticipated to increase the observability (the ability to readily determine current status), flexibility, and efficiency of HIT-supported patient handovers. Anticipated benefits of technology-supported handovers include reducing reliance on human memory, increasing the efficiency and structure of the verbal exchange, avoiding readbacks of numeric data, and aiding clinical management following the handover. In cases when verbal handovers are delayed, do not occur, or involve members of the health care team without first-hand access to critical information, making 'common ground' observable for all recipients, creating a flexible narrative structure for communication and avoiding reliance on real-time data entry during the busiest times has implications for HIT design and day to day data entry and management operations. Benefits include increased observability, flexibility, and efficiency of HIT-supported patient handovers.

  18. The Development of Memory Efficiency and Value-Directed Remembering Across the Lifespan: A Cross-Sectional Study of Memory and Selectivity

    PubMed Central

    Castel, Alan D.; Humphreys, Kathryn L.; Lee, Steve S.; Galván, Adriana; Balota, David A.; McCabe, David P.

    2012-01-01

    Although attentional control and memory change considerably across the lifespan, no research has examined how the ability to strategically remember important information (i.e., value-directed remembering) changes from childhood to old age. The present study examined this in different age groups across the lifespan (N=320, 5 to 96 years old). We employed a selectivity task where participants were asked to study and recall items worth different point values in order to maximize their point score. This procedure allowed for measures of memory quantity/capacity (number of words recalled) and memory efficiency/selectivity (the recall of high-value items relative to low-value items). Age-related differences were found for memory capacity, as young adults recalled more words than the other groups. However, in terms of selectivity, younger and older adults were more selective than adolescents and children. The dissociation between these measures across the lifespan illustrates important age-related differences in terms of memory capacity and the ability to selectively remember high-value information. PMID:21942664

  19. The development of memory efficiency and value-directed remembering across the life span: a cross-sectional study of memory and selectivity.

    PubMed

    Castel, Alan D; Humphreys, Kathryn L; Lee, Steve S; Galván, Adriana; Balota, David A; McCabe, David P

    2011-11-01

    Although attentional control and memory change considerably across the life span, no research has examined how the ability to strategically remember important information (i.e., value-directed remembering) changes from childhood to old age. The present study examined this in different age groups across the life span (N = 320, 5-96 years old). A selectivity task was used in which participants were asked to study and recall items worth different point values in order to maximize their point score. This procedure allowed for measures of memory quantity/capacity (number of words recalled) and memory efficiency/selectivity (the recall of high-value items relative to low-value items). Age-related differences were found for memory capacity, as young adults recalled more words than the other groups. However, in terms of selectivity, younger and older adults were more selective than adolescents and children. The dissociation between these measures across the life span illustrates important age-related differences in terms of memory capacity and the ability to selectively remember high-value information.
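
    The memory efficiency/selectivity measure described above can be made concrete with one common form of selectivity index: how close the points earned are to those of an ideal selective learner, relative to chance. The exact formula used in the paper may differ; this version, and the point values, are illustrative assumptions (it also assumes the participant recalls at least one but not all items).

```python
def selectivity_index(values, recalled):
    """(earned - chance) / (ideal - chance): 1.0 = perfectly selective,
    0.0 = value-blind recall, negative = preferentially low-value recall."""
    earned = sum(v for v, r in zip(values, recalled) if r)
    k = sum(recalled)                              # number of items recalled
    ideal = sum(sorted(values, reverse=True)[:k])  # best possible for k items
    chance = k * sum(values) / len(values)         # value-blind expectation
    return (earned - chance) / (ideal - chance)

values = list(range(1, 13))            # point values 1..12
perfect = [v > 6 for v in values]      # recalls exactly the 6 highest
si = selectivity_index(values, perfect)
```

    The index separates *how much* is remembered (k) from *what* is remembered, which is the dissociation the lifespan results above turn on.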

  20. Blanket Gate Would Address Blocks Of Memory

    NASA Technical Reports Server (NTRS)

    Lambe, John; Moopenn, Alexander; Thakoor, Anilkumar P.

    1988-01-01

    Circuit-chip area used more efficiently. Proposed gate structure selectively allows and restricts access to blocks of memory in electronic neural-type network. By breaking memory into independent blocks, gate greatly simplifies problem of reading from and writing to memory. Since blocks not used simultaneously, share operational amplifiers that prompt and read information stored in memory cells. Fewer operational amplifiers needed, and chip area occupied reduced correspondingly. Cost per bit drops as result.

  1. Memory Efficient Ranking.

    ERIC Educational Resources Information Center

    Moffat, Alistair; And Others

    1994-01-01

    Describes an approximate document ranking process that uses a compact array of in-memory, low-precision approximations for document length. Combined with another rule for reducing the memory required by partial similarity accumulators, the approximation heuristic allows the ranking of large document collections using less than one byte of memory…
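
    The key trick, storing each document length in a single byte, can be sketched with geometric quantisation: lengths are mapped to a byte code on a logarithmic scale and decoded approximately at ranking time. The base and rounding scheme here are assumptions for illustration; the paper's exact encoding may differ.

```python
import math

def encode_length(length, base=1.05):
    """Map a positive document length to a one-byte code 0..255."""
    return min(255, max(0, round(math.log(length, base))))

def decode_length(code, base=1.05):
    """Approximate the original length from its byte code."""
    return base ** code

doc_len = 5000.0
code = encode_length(doc_len)      # fits in one byte
approx = decode_length(code)       # recovers the length to within a few percent
```

    A one-byte code per document shrinks the in-memory length array by a factor of four or eight versus floats, at the cost of a small, bounded relative error that barely perturbs the ranking.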

  2. Molecular Mechanisms Underlying Formation of Long-Term Reward Memories and Extinction Memories in the Honeybee ("Apis Mellifera")

    ERIC Educational Resources Information Center

    Eisenhardt, Dorothea

    2014-01-01

    The honeybee ("Apis mellifera") has long served as an invertebrate model organism for reward learning and memory research. Its capacity for learning and memory formation is rooted in the ecological need to efficiently collect nectar and pollen during summer to ensure survival of the hive during winter. Foraging bees learn to associate a…

  3. Memory Compression Techniques for Network Address Management in MPI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Yanfei; Archer, Charles J.; Blocksome, Michael

    MPI allows applications to treat processes as a logical collection of integer ranks for each MPI communicator, while internally translating these logical ranks into actual network addresses. In current MPI implementations the management and lookup of such network addresses use memory sizes that are proportional to the number of processes in each communicator. In this paper, we propose a new mechanism, called AV-Rankmap, for managing such translation. AV-Rankmap takes advantage of logical patterns in rank-address mapping that most applications naturally tend to have, and it exploits the fact that some parts of network address structures are naturally more performance critical than others. It uses this information to compress the memory used for network address management. We demonstrate that AV-Rankmap can achieve performance similar to or better than that of other MPI implementations while using significantly less memory.
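
    The underlying idea of exploiting logical patterns in the rank-address mapping can be sketched as follows: when the addresses form a regular (here, arithmetic) sequence, store the pattern instead of the full per-rank table. This is an illustrative toy, not AV-Rankmap's actual representation; real network addresses are structured records, not integers.

```python
def compress_rankmap(addresses):
    """Return ('stride', base, step) when the addresses form an arithmetic
    sequence, else ('table', addresses) as the uncompressed fallback."""
    if len(addresses) >= 2:
        step = addresses[1] - addresses[0]
        if all(addresses[i] - addresses[i - 1] == step
               for i in range(2, len(addresses))):
            return ('stride', addresses[0], step)
    return ('table', list(addresses))

def lookup(compressed, rank):
    """Translate a logical rank back to its address, O(1) either way."""
    if compressed[0] == 'stride':
        _, base, step = compressed
        return base + rank * step
    return compressed[1][rank]

# Regular layout: 1024 ranks compress to two integers instead of 1024.
regular = [1000 + 4 * r for r in range(1024)]
c = compress_rankmap(regular)
```

    The memory saving is the point: a strided communicator needs O(1) storage per communicator rather than O(processes), while irregular mappings still fall back to the full table.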

  4. Efficiency at rest: magnetoencephalographic resting-state connectivity and individual differences in verbal working memory.

    PubMed

    del Río, David; Cuesta, Pablo; Bajo, Ricardo; García-Pacios, Javier; López-Higes, Ramón; del-Pozo, Francisco; Maestú, Fernando

    2012-11-01

    Inter-individual differences in cognitive performance are based on an efficient use of task-related brain resources. However, little is known yet on how these differences might be reflected on resting-state brain networks. Here we used Magnetoencephalography resting-state recordings to assess the relationship between a behavioral measurement of verbal working memory and functional connectivity as measured through Mutual Information. We studied theta (4-8 Hz), low alpha (8-10 Hz), high alpha (10-13 Hz), low beta (13-18 Hz) and high beta (18-30 Hz) frequency bands. A higher verbal working memory capacity was associated with a lower mutual information in the low alpha band, prominently among right-anterior and left-lateral sensors. The results suggest that an efficient brain organization in the domain of verbal working memory might be related to a lower resting-state functional connectivity across large-scale brain networks possibly involving right prefrontal and left perisylvian areas. Copyright © 2012 Elsevier B.V. All rights reserved.

  5. GenomicTools: a computational platform for developing high-throughput analytics in genomics.

    PubMed

    Tsirigos, Aristotelis; Haiminen, Niina; Bilal, Erhan; Utro, Filippo

    2012-01-15

    Recent advances in sequencing technology have resulted in a dramatic increase of sequencing data, which, in turn, requires efficient management of computational resources such as computing time and memory, as well as rapid prototyping of computational pipelines. We present GenomicTools, a flexible computational platform, comprising both a command-line set of tools and a C++ API, for the analysis and manipulation of high-throughput sequencing data such as DNA-seq, RNA-seq, ChIP-seq and MethylC-seq. GenomicTools implements a variety of mathematical operations between sets of genomic regions, thereby enabling the prototyping of computational pipelines that can address a wide spectrum of tasks ranging from pre-processing and quality control to meta-analyses. Additionally, the GenomicTools platform is designed to analyze large datasets of any size by minimizing memory requirements. In practical applications, where comparable, GenomicTools outperforms existing tools in terms of both time and memory usage. The GenomicTools platform (version 2.0.0) was implemented in C++. The source code, documentation, user manual, example datasets and scripts are available online at http://code.google.com/p/ibm-cbc-genomic-tools.
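
    One of the "mathematical operations between sets of genomic regions" that such platforms provide is interval intersection, computable memory-efficiently in a single sweep over sorted inputs. GenomicTools itself is C++; this Python sketch of the sweep is illustrative only, and assumes each input list is sorted and non-overlapping.

```python
def intersect(a, b):
    """Intersect two sorted, non-overlapping lists of (start, end) regions,
    half-open coordinates, in O(len(a) + len(b)) time and O(1) extra state."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        start = max(a[i][0], b[j][0])
        end = min(a[i][1], b[j][1])
        if start < end:
            out.append((start, end))
        # Advance whichever region finishes first.
        if a[i][1] < b[j][1]:
            i += 1
        else:
            j += 1
    return out

peaks = [(100, 200), (300, 400)]   # e.g. ChIP-seq peaks
genes = [(150, 350)]               # e.g. a gene body
```

    Because the sweep never materialises more than the current pair of regions, the same pattern scales to whole-genome inputs streamed from disk, which is how such tools keep memory usage flat.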

  6. Changing organizational structure and organizational memory in primary care practices: a qualitative interview study.

    PubMed

    Alyahya, Mohammad

    2012-02-01

    Organizational structure is built through dynamic processes which blend historical force and management decisions, as a part of a broader process of constructing organizational memory (OM). OM is considered to be one of the main competences leading to the organization's success. This study focuses on the impact of the Quality and Outcome Framework (QOF), which is a Pay-for-Performance scheme, on general practitioner (GP) practices in the UK. The study is based on semistructured interviews with four GP practices in the north of England involving 39 informants. The findings show that the way practices assigned different functions into specialized units, divisions or departments shows the degree of specialization in their organizational structures. More specialized unit arrangements, such as an IT division, particular chronic disease clinics or competence-based job distributions enhanced procedural memory development through enabling regular use of knowledge in specific context, which led to competence building. In turn, such competence at particular functions or jobs made it possible for the practices to achieve their goals more efficiently. This study concludes that organizational structure contributed strongly to the enhancement of OM, which in turn led to better organizational competence.

  7. A wearable multiplexed silicon nonvolatile memory array using nanocrystal charge confinement

    PubMed Central

    Kim, Jaemin; Son, Donghee; Lee, Mincheol; Song, Changyeong; Song, Jun-Kyul; Koo, Ja Hoon; Lee, Dong Jun; Shim, Hyung Joon; Kim, Ji Hoon; Lee, Minbaek; Hyeon, Taeghwan; Kim, Dae-Hyeong

    2016-01-01

    Strategies for efficient charge confinement in nanocrystal floating gates to realize high-performance memory devices have been investigated intensively. However, few studies have reported nanoscale experimental validations of charge confinement in closely packed uniform nanocrystals and related device performance characterization. Furthermore, the system-level integration of the resulting devices with wearable silicon electronics has not yet been realized. We introduce a wearable, fully multiplexed silicon nonvolatile memory array with nanocrystal floating gates. The nanocrystal monolayer is assembled over a large area using the Langmuir-Blodgett method. Efficient particle-level charge confinement is verified with the modified atomic force microscopy technique. Uniform nanocrystal charge traps evidently improve the memory window margin and retention performance. Furthermore, the multiplexing of memory devices in conjunction with the amplification of sensor signals based on ultrathin silicon nanomembrane circuits in stretchable layouts enables wearable healthcare applications such as long-term data storage of monitored heart rates. PMID:26763827

  8. A wearable multiplexed silicon nonvolatile memory array using nanocrystal charge confinement.

    PubMed

    Kim, Jaemin; Son, Donghee; Lee, Mincheol; Song, Changyeong; Song, Jun-Kyul; Koo, Ja Hoon; Lee, Dong Jun; Shim, Hyung Joon; Kim, Ji Hoon; Lee, Minbaek; Hyeon, Taeghwan; Kim, Dae-Hyeong

    2016-01-01

    Strategies for efficient charge confinement in nanocrystal floating gates to realize high-performance memory devices have been investigated intensively. However, few studies have reported nanoscale experimental validations of charge confinement in closely packed uniform nanocrystals and related device performance characterization. Furthermore, the system-level integration of the resulting devices with wearable silicon electronics has not yet been realized. We introduce a wearable, fully multiplexed silicon nonvolatile memory array with nanocrystal floating gates. The nanocrystal monolayer is assembled over a large area using the Langmuir-Blodgett method. Efficient particle-level charge confinement is verified with the modified atomic force microscopy technique. Uniform nanocrystal charge traps evidently improve the memory window margin and retention performance. Furthermore, the multiplexing of memory devices in conjunction with the amplification of sensor signals based on ultrathin silicon nanomembrane circuits in stretchable layouts enables wearable healthcare applications such as long-term data storage of monitored heart rates.

  9. Efficiency Enhancement in DC Pulsed Gas Discharge Memory Panel

    NASA Astrophysics Data System (ADS)

    Okamoto, Yukio

    1983-01-01

    A substantial improvement in the luminous efficiency of a dc pulsed gas discharge memory panel for color TV display was achieved by shortening the sustaining pulse duration. High-energy electrons can thus be produced in the pulsed discharge with fast rise times. The calculated optimum value of E/P in a Xe gas discharge is 7-8 V/(cm·Torr).

  10. Scalable Triadic Analysis of Large-Scale Graphs: Multi-Core vs. Multi-Processor vs. Multi-Threaded Shared Memory Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chin, George; Marquez, Andres; Choudhury, Sutanay

    2012-09-01

    Triadic analysis encompasses a useful set of graph mining methods centered on the concept of a triad: a subgraph of three nodes together with the configuration of directed edges among them. Such methods are often applied in the social sciences as well as many other diverse fields. Triadic methods commonly operate on a triad census, which counts the number of triads of every possible edge configuration in a graph. Like other graph algorithms, triad census algorithms do not scale well when graphs reach tens of millions to billions of nodes. To enable the triadic analysis of large-scale graphs, we developed and optimized a triad census algorithm to execute efficiently on shared memory architectures. We retrace the development and evolution of a parallel triad census algorithm. Over the course of several versions, we continually adapted the code's data structures and program logic to expose more opportunities to exploit parallelism on shared memory that would translate into improved computational performance. We recall the critical steps and modifications that occurred during code development and optimization. Furthermore, we compare the performance of the triad census algorithm versions on three specific systems: the Cray XMT, the HP Superdome, and an AMD multi-core NUMA machine. These three systems have shared memory architectures but markedly different hardware capabilities for managing parallelism.
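
    What a triad census computes can be shown with a brute-force sketch: classify every 3-node subgraph by its directed-edge configuration, canonicalized over node relabelings. This hypothetical Python is O(n^3) and purely illustrative; it is not the optimized shared-memory algorithm the abstract describes, and it keys the census by bitmask codes rather than the standard triad-type labels:

```python
from itertools import combinations, permutations

def triad_census(nodes, edges):
    """Count every 3-node subgraph by its directed-edge pattern.

    Each triple's 6 possible directed edges are encoded as a 6-bit
    mask; taking the minimum mask over all node orderings makes
    isomorphic configurations share one canonical code.
    """
    edgeset = set(edges)
    census = {}
    for triple in combinations(nodes, 3):
        code = min(
            sum(1 << k
                for k, pair in enumerate([(a, b) for a in p for b in p if a != b])
                if pair in edgeset)
            for p in permutations(triple)
        )
        census[code] = census.get(code, 0) + 1
    return census
```

    The scalability problem in the abstract is visible here: the triple loop alone is cubic, which is why practical census algorithms iterate over edges and their shared neighborhoods instead.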

  11. On the predictability of extreme events in records with linear and nonlinear long-range memory: Efficiency and noise robustness

    NASA Astrophysics Data System (ADS)

    Bogachev, Mikhail I.; Bunde, Armin

    2011-06-01

    We study the predictability of extreme events in records with linear and nonlinear long-range memory in the presence of additive white noise, using two different approaches: (i) the precursory pattern recognition technique (PRT), which exploits solely the information about short-term precursors, and (ii) the return interval approach (RIA), which exploits the long-range memory incorporated in the time elapsed since the last extreme event. We find that the PRT always performs better when only linear memory is present. In the presence of nonlinear memory, both methods demonstrate comparable efficiency in the absence of white noise. When additional white noise is present in the record (which is the case in most observational records), the efficiency of the PRT decreases monotonically with increasing noise level. In contrast, the RIA shows an abrupt transition between a phase of low-level noise, where the prediction is as good as in the absence of noise, and a phase of high-level noise, where the prediction becomes poor. In the phase of low and intermediate noise the RIA predicts considerably better than the PRT, which explains our recent findings in physiological and financial records.

  12. Incidental recall on WAIS-R digit symbol discriminates Alzheimer's and Parkinson's diseases.

    PubMed

    Demakis, G J; Sawyer, T P; Fritz, D; Sweet, J J

    2001-03-01

    The purpose of this study was to examine how Alzheimer's (n = 37) and Parkinson's (n = 21) patients perform on the incidental recall adaptation of the Digit Symbol subtest of the Wechsler Adult Intelligence Scale-Revised (WAIS-R), and how such performance relates to established measures of cognitive efficiency and memory. This adaptation requires the examinee to complete the entire subtest and then, without warning, immediately recall the symbols associated with each number. Groups did not differ significantly on the standard Digit Symbol administration (90 seconds), but on recall Parkinson's patients recalled significantly more symbols and symbol-number pairs than Alzheimer's patients. Using only the number of symbols recalled, discriminant function analysis correctly classified 76% of these patients. Correlations between age-corrected scaled scores, symbols incidentally recalled, and established measures of cognitive efficiency and memory provided evidence of convergent and divergent validity. Age-corrected scaled scores were more consistently and strongly related to cognitive efficiency, whereas symbols recalled were more consistently and strongly related to memory measures. These findings suggest that the Digit Symbol recall adaptation actually assesses memory and can be another useful way to detect memory impairment. Copyright 2001 John Wiley & Sons, Inc.

  13. Simple and Efficient Single Photon Filter for a Rb-based Quantum Memory

    NASA Astrophysics Data System (ADS)

    Stack, Daniel; Li, Xiao; Quraishi, Qudsia

    2015-05-01

    Distribution of entangled quantum states over significant distances is important to the development of future quantum technologies such as long-distance cryptography, networks of atomic clocks, and distributed quantum computing. Long-lived quantum memories and single photons are building blocks for systems capable of realizing such applications. The ability to store and retrieve quantum information while filtering unwanted light signals is critical to the operation of quantum memories based on neutral-atom ensembles. We report on an efficient frequency filter which uses a glass cell filled with 85Rb vapor to attenuate noise photons by an order of magnitude with little loss to the single photons associated with the operation of our cold 87Rb quantum memory. An Ar buffer gas is required to differentiate between signal and noise photons. Our simple, passive filter requires no optical pumping or external frequency references and provides an additional 18 dB attenuation of our pump laser for every 1 dB loss of the single photon signal. We observe improved non-classical correlations, and our data show that the addition of the frequency filter increases the non-classical correlations and readout efficiency of our quantum memory by ~35%.

  14. Aging, Memory Efficiency and the Strategic Control of Attention at Encoding: Impairments of Value-Directed Remembering in Alzheimer's Disease

    PubMed Central

    Castel, Alan D.; Balota, David A.; McCabe, David P.

    2009-01-01

    Selecting what is important to remember, attending to this information, and then later recalling it can be thought of in terms of the strategic control of attention and the efficient use of memory. In order to examine whether aging and Alzheimer's disease (AD) influenced this ability, the present study used a selectivity task, where studied items were worth various point values and participants were asked to maximize the value of the items they recalled. Relative to younger adults (N=35) and healthy older adults (N=109), individuals with very mild AD (N=41) and mild AD (N=13) showed impairments in the strategic and efficient encoding and recall of high value items. Although individuals with AD recalled more high value items than low value items, they did not efficiently maximize memory performance (as measured by a selectivity index) relative to healthy older adults. Performance on complex working memory span tasks was related to the recall of the high value items but not low value items. This pattern suggests that relative to healthy aging, AD leads to impairments in strategic control at encoding and value-directed remembering. PMID:19413444

  15. DANoC: An Efficient Algorithm and Hardware Codesign of Deep Neural Networks on Chip.

    PubMed

    Zhou, Xichuan; Li, Shengli; Tang, Fang; Hu, Shengdong; Lin, Zhi; Zhang, Lei

    2017-07-18

    Deep neural networks (NNs) are the state-of-the-art models for understanding the content of images and videos. However, implementing deep NNs in embedded systems is a challenging task; e.g., a typical deep belief network could exhaust gigabytes of memory and result in bandwidth and computational bottlenecks. To address this challenge, this paper presents an algorithm and hardware codesign for efficient deep neural computation. A hardware-oriented deep learning algorithm, named the deep adaptive network, is proposed to exploit the sparsity of neural connections. By adaptively removing the majority of neural connections and robustly representing the reserved connections using binary integers, the proposed algorithm can save up to 99.9% of memory and computational resources without undermining classification accuracy. An efficient sparse-mapping-memory-based hardware architecture is proposed to take full advantage of the algorithmic optimization. Unlike the traditional Von Neumann architecture, the deep adaptive network on chip (DANoC) brings communication and computation into close proximity to avoid power-hungry parameter transfers between on-board memory and on-chip computational units. Experiments on different image classification benchmarks show that the DANoC system achieves competitively high accuracy and efficiency compared with state-of-the-art approaches.
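
    The core compression idea, keeping only the strongest connections and representing the survivors as binary integers plus a scale, can be sketched in toy form. This is an assumption-laden illustration (magnitude-based pruning, one shared scale factor), not the deep adaptive network's actual training procedure:

```python
def adaptive_sparsify(weights, keep_ratio=0.1):
    """Prune all but the largest-magnitude fraction of connections and
    store survivors as (index, sign) pairs plus one shared scale.

    Ties at the threshold may keep slightly more than the requested
    fraction; this toy keeps the logic short.
    """
    k = max(1, int(keep_ratio * len(weights)))
    threshold = sorted((abs(w) for w in weights), reverse=True)[k - 1]
    kept = [(i, 1 if w > 0 else -1)
            for i, w in enumerate(weights) if abs(w) >= threshold]
    scale = sum(abs(weights[i]) for i, _ in kept) / len(kept)
    return kept, scale
```

    The memory saving comes from storing a sparse list of signs and a single float instead of a dense array of full-precision weights, the same trade the abstract describes at much larger scale.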

  16. Research on the key technology of update of land survey spatial data based on embedded GIS and GPS

    NASA Astrophysics Data System (ADS)

    Chen, Dan; Liu, Yanfang; Yu, Hai; Xia, Yin

    2009-10-01

    To meet the practical needs of the second land-use survey, and given a PDA's small size and limited memory, the key technology in a GPS-PDA-based field survey data collection system is fast data access. To improve the speed and efficiency of spatial data analysis on mobile devices, we organize spatial data into layers; build a layer-grid index by partitioning each layer into levels and blocks; and then build an R-tree index over the spatial data objects. Different scale levels of space are managed at different index levels, and the grid method is used for block management.
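
    The grid-blocking idea, bucketing spatial objects into cells so that a query touches only nearby blocks, can be sketched as a toy index. This is hypothetical Python, not the paper's PDA implementation (which also layers levels and an R-tree index on top of the grid):

```python
class GridIndex:
    """Toy uniform grid index over 2-D points."""

    def __init__(self, cell_size):
        self.cell_size = cell_size
        self.cells = {}                      # (col, row) -> [(id, x, y), ...]

    def _cell(self, x, y):
        return (int(x // self.cell_size), int(y // self.cell_size))

    def insert(self, obj_id, x, y):
        self.cells.setdefault(self._cell(x, y), []).append((obj_id, x, y))

    def query(self, xmin, ymin, xmax, ymax):
        """Return ids of points inside the rectangle, visiting only
        candidate cells instead of scanning every stored object."""
        c0, r0 = self._cell(xmin, ymin)
        c1, r1 = self._cell(xmax, ymax)
        hits = []
        for cx in range(c0, c1 + 1):
            for cy in range(r0, r1 + 1):
                for obj_id, x, y in self.cells.get((cx, cy), []):
                    if xmin <= x <= xmax and ymin <= y <= ymax:
                        hits.append(obj_id)
        return hits
```

    On a memory-constrained device the payoff is that read cost scales with the query window, not with the full layer, which matches the paper's motivation of improving read speed on the PDA.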

  17. Energy-efficient miniature-scale heat pumping based on shape memory alloys

    NASA Astrophysics Data System (ADS)

    Ossmer, Hinnerk; Wendler, Frank; Gueltig, Marcel; Lambrecht, Franziska; Miyazaki, Shuichi; Kohl, Manfred

    2016-08-01

    Cooling and thermal management comprise a major part of global energy consumption. The by far most widespread cooling technology today is vapor compression, which reaches rather high efficiencies but promotes global warming due to the use of environmentally harmful refrigerants. For widespread emerging applications using microelectronics and micro-electro-mechanical systems, thermoelectrics is the most advanced technology, which however hardly reaches coefficients of performance (COP) above 2.0. Here, we introduce a new approach for energy-efficient heat pumping using the elastocaloric effect in shape memory alloys. This development is mainly targeted at applications on miniature scales, while larger scales are envisioned by massive parallelization. Base materials are cold-rolled textured Ti49.1Ni50.5Fe0.4 foils of 30 μm thickness showing an adiabatic temperature change of +20/-16 K upon superelastic loading/unloading. Different demonstrator layouts consisting of mechanically coupled bridge structures with large surface-to-volume ratios are developed, allowing for control by a single actuator as well as work recovery. Heat transfer times are on the order of 1 s, orders of magnitude faster than for bulk geometries. First demonstrators thus achieve specific heating and cooling powers of 4.5 and 2.9 W g⁻¹, respectively. A maximum temperature difference of 9.4 K between heat source and sink is reached within 2 min. The corresponding device-level COPs are 4.9 (heating) and 3.1 (cooling).

  18. Identifying High-Rate Flows Based on Sequential Sampling

    NASA Astrophysics Data System (ADS)

    Zhang, Yu; Fang, Binxing; Luo, Hao

    We consider the problem of fast identification of high-rate flows in backbone links carrying possibly millions of flows. Accurate identification of high-rate flows is important for active queue management, traffic measurement, and network security, such as detection of distributed denial-of-service attacks. It is difficult to identify high-rate flows directly in backbone links because tracking the possibly millions of flows requires correspondingly large high-speed memories. To reduce the measurement overhead, we adopt the deterministic 1-out-of-k sampling technique, which is also implemented in Cisco routers (NetFlow). Ideally, a high-rate flow identification method should have short identification time, low memory cost, and low processing cost. Most importantly, it should be able to specify the identification accuracy. We develop two such methods. The first is based on the fixed sample size test (FSST), which can identify high-rate flows with user-specified identification accuracy. However, since FSST has to record every sampled flow during the measurement period, it is not memory efficient. We therefore propose a second, novel method based on the truncated sequential probability ratio test (TSPRT). Through sequential sampling, TSPRT is able to remove low-rate flows and identify high-rate flows at an early stage, which reduces the memory cost and identification time, respectively. According to the way the parameters in TSPRT are determined, two versions are proposed: TSPRT-M, which is suitable when low memory cost is preferred, and TSPRT-T, which is suitable when short identification time is preferred. The experimental results show that, compared to previously proposed methods, TSPRT requires less memory and identification time in identifying high-rate flows while satisfying the accuracy requirement.
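
    TSPRT builds on Wald's sequential probability ratio test. A minimal sketch of the underlying Bernoulli SPRT, deciding whether a flow's sampling-hit rate is closer to a low rate p0 or a high rate p1, is shown below; the truncation rule and per-flow bookkeeping from the paper are omitted, and all parameter values in the usage example are hypothetical:

```python
import math

def sprt(samples, p0, p1, alpha=0.05, beta=0.05):
    """Wald's SPRT on Bernoulli samples (1 = packet sampled from the flow).

    H0: rate p0 (low-rate flow), H1: rate p1 (high-rate flow);
    alpha/beta are the tolerated false-positive/false-negative rates.
    Returns ("high", n), ("low", n), or ("undecided", n), where n is
    the number of samples consumed before stopping.
    """
    A = math.log((1 - beta) / alpha)     # upper boundary: accept H1
    B = math.log(beta / (1 - alpha))     # lower boundary: accept H0
    llr = 0.0                            # running log-likelihood ratio
    for n, x in enumerate(samples, 1):
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr >= A:
            return ("high", n)
        if llr <= B:
            return ("low", n)
    return ("undecided", len(samples))
```

    The memory advantage the abstract claims follows from the early exit: a clearly low-rate flow crosses the lower boundary after a few dozen samples and can be evicted from the flow table immediately.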

  19. Havens: Explicit Reliable Memory Regions for HPC Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hukerikar, Saurabh; Engelmann, Christian

    2016-01-01

    Supporting error resilience in future exascale-class supercomputing systems is a critical challenge. Due to transistor scaling trends and increasing memory density, scientific simulations are expected to experience more interruptions caused by transient errors in system memory. Existing hardware-based detection and recovery techniques will be inadequate to manage the presence of high memory fault rates. In this paper we propose a partial memory protection scheme based on region-based memory management. We define the concept of regions called havens that provide fault protection for program objects, and we provide reliability for the regions through a software-based parity protection mechanism. Our approach enables critical program objects to be placed in these havens. The fault coverage provided by our approach is application agnostic, unlike algorithm-based fault tolerance techniques.
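
    The software-based parity idea can be illustrated in spirit with word-level XOR parity: one stored parity word lets a region reconstruct a single corrupted word whose location is known (an erasure). This is a hypothetical sketch, not the paper's implementation:

```python
def parity_word(words):
    """XOR parity across a region's words (RAID-5-style erasure code)."""
    p = 0
    for w in words:
        p ^= w
    return p

def recover(words, parity, bad_index):
    """Reconstruct the word at bad_index from the surviving words and
    the stored parity: XOR of everything except the lost word."""
    p = parity
    for i, w in enumerate(words):
        if i != bad_index:
            p ^= w
    return p
```

    The scheme is "partial protection" in the abstract's sense: only objects placed in a parity-backed region pay the extra storage and update cost, while the rest of memory is left unprotected.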

  20. Age Differences in the Effects of Domain Knowledge on Reading Efficiency

    PubMed Central

    Miller, Lisa M. Soederberg

    2009-01-01

    The present study investigated age differences in the effects of knowledge on the efficiency with which information is processed while reading. Individuals between 18 and 85 years of age, with varying levels of cooking knowledge, read and recalled a series of short passages within the domain of cooking. Reading efficiency was operationalized as time spent reading divided by the amount recalled for each passage. Results showed that reading efficiency increased with increasing levels of knowledge among older but not younger adults. Similarly, those with smaller working memory capacities showed increasing efficiency with increasing knowledge. These findings suggest that knowledge promotes a more efficient allocation policy which is particularly helpful in later life, perhaps due to age-related declines in working memory capacity. PMID:19290738

  1. Memories for life: a review of the science and technology

    PubMed Central

    O'Hara, Kieron; Morris, Richard; Shadbolt, Nigel; Hitch, Graham J; Hall, Wendy; Beagrie, Neil

    2006-01-01

    This paper discusses scientific, social and technological aspects of memory. Recent developments in our understanding of memory processes and mechanisms, and their digital implementation, have placed the encoding, storage, management and retrieval of information at the forefront of several fields of research. At the same time, the divisions between the biological, physical and the digital worlds seem to be dissolving. Hence, opportunities for interdisciplinary research into memory are being created, between the life sciences, social sciences and physical sciences. Such research may benefit from immediate application into information management technology as a testbed. The paper describes one initiative, memories for life, as a potential common problem space for the various interested disciplines. PMID:16849265

  2. Learning and memory performance in a cohort of clinically referred breast cancer survivors: the role of attention versus forgetting in patient-reported memory complaints.

    PubMed

    Root, James C; Ryan, Elizabeth; Barnett, Gregory; Andreotti, Charissa; Bolutayo, Kemi; Ahles, Tim

    2015-05-01

    While forgetfulness is widely reported by breast cancer survivors, studies documenting objective memory performance yield mixed, largely inconsistent, results. Failure to find consistent, objective memory issues may be due to the possibility that cancer survivors misattribute their experience of forgetfulness to primary memory issues rather than to difficulties in attention at the time of learning. To clarify potential attention issues, factor scores for Attention Span, Learning Efficiency, Delayed Memory, and Inaccurate Memory were analyzed for the California Verbal Learning Test-Second Edition (CVLT-II) in 64 clinically referred breast cancer survivors with self-reported cognitive complaints; item analysis was conducted to clarify specific contributors to observed effects, and contrasts between learning and recall trials were compared with normative data. Performance on broader cognitive domains is also reported. The Attention Span factor, but not Learning Efficiency, Delayed Memory, or Inaccurate Memory factors, was significantly affected in this clinical sample. Contrasts between trials were consistent with normative data and did not indicate greater loss of information over time than in the normative sample. Results of this analysis suggest that attentional dysfunction may contribute to subjective and objective memory complaints in breast cancer survivors. These results are discussed in the context of broader cognitive effects following treatment for clinicians who may see cancer survivors for assessment. Copyright © 2014 John Wiley & Sons, Ltd.

  3. Training of Attentional Filtering, but Not of Memory Storage, Enhances Working Memory Efficiency by Strengthening the Neuronal Gatekeeper Network.

    PubMed

    Schmicker, Marlen; Schwefel, Melanie; Vellage, Anne-Katrin; Müller, Notger G

    2016-04-01

    Memory training (MT) in older adults with memory deficits often leads to frustration and, therefore, is usually not recommended. Here, we pursued an alternative approach and looked for transfer effects of 1-week attentional filter training (FT) on working memory performance and its neuronal correlates in young healthy humans. The FT effects were compared with pure MT, which lacked the necessity to filter out irrelevant information. Before and after training, all participants performed an fMRI experiment that included a combined task in which stimuli had to be both filtered based on color and stored in memory. We found that training induced processing changes by biasing either filtering or storage. FT induced larger transfer effects on the untrained cognitive function than MT. FT increased neuronal activity in frontal parts of the neuronal gatekeeper network, which is proposed to hinder irrelevant information from being unnecessarily stored in memory. MT decreased neuronal activity in the BG part of the gatekeeper network but enhanced activity in the parietal storage node. We take these findings as evidence that FT renders working memory more efficient by strengthening the BG-prefrontal gatekeeper network. MT, on the other hand, simply stimulates storage of any kind of information. These findings illustrate a tight connection between working memory and attention, and they may open up new avenues for ameliorating memory deficits in patients with cognitive impairments.

  4. Prevalence of impaired memory in hospitalized adults and associations with in-hospital sleep loss.

    PubMed

    Calev, Hila; Spampinato, Lisa M; Press, Valerie G; Meltzer, David O; Arora, Vineet M

    2015-07-01

    Effective inpatient teaching requires intact patient memory, but studies suggest hospitalized adults may have memory deficits. Sleep loss among inpatients could contribute to memory impairment. Objective: to assess memory in older hospitalized adults, and to test the association between sleep quantity, sleep quality, and memory, in order to identify a possible contributor to memory deficits in these patients. Design: prospective cohort study. Setting: general medicine and hematology/oncology inpatient wards. Patients: fifty-nine hospitalized adults at least 50 years of age with no diagnosed sleep disorder. Immediate memory and memory after a 24-hour delay were assessed using a word recall and word recognition task from the University of Southern California Repeatable Episodic Memory Test. A vignette-based memory task was piloted as an alternative test more closely resembling discharge instructions. Sleep duration and efficiency overnight in the hospital were measured using actigraphy. Mean immediate recall was 3.8 words out of 15 (standard deviation = 2.1). Forty-nine percent of subjects had poor memory, defined as an immediate recall score of 3 or lower. Median immediate recognition was 11 words out of 15 (interquartile range [IQR] = 9-13). Median delayed recall was 1 word, and median delayed recognition was 10 words (IQR = 8-12). In-hospital sleep duration and efficiency were not significantly associated with memory. The medical vignette score was correlated with immediate recall (r = 0.49, P < 0.01). About half of the inpatients studied had poor memory while in the hospital, signaling that hospitalization might not be an ideal teachable moment. In-hospital sleep was not associated with memory scores. © 2015 Society of Hospital Medicine.

  5. Prevalence of Impaired Memory in Hospitalized Adults and Associations with In-Hospital Sleep Loss

    PubMed Central

    Calev, Hila; Spampinato, Lisa M; Press, Valerie G; Meltzer, David O; Arora, Vineet M

    2015-01-01

    Background: Effective inpatient teaching requires intact patient memory, but studies suggest hospitalized adults may have memory deficits. Sleep loss among inpatients could contribute to memory impairment. Objective: To assess memory in older hospitalized adults, and to test the association between sleep quantity, sleep quality and memory, in order to identify a possible contributor to memory deficits in these patients. Design: Prospective cohort study. Setting: General medicine and hematology/oncology inpatient wards. Patients: 59 hospitalized adults at least 50 years of age with no diagnosed sleep disorder. Measurements: Immediate memory and memory after a 24-hour delay were assessed using a word recall and word recognition task from the University of Southern California Repeatable Episodic Memory Test (USC-REMT). A vignette-based memory task was piloted as an alternative test more closely resembling discharge instructions. Sleep duration and efficiency overnight in the hospital were measured using actigraphy. Results: Mean immediate recall was 3.8 words out of 15 (SD = 2.1). Forty-nine percent of subjects had poor memory, defined as an immediate recall score of 3 or lower. Median immediate recognition was 11 words out of 15 (IQR = 9-13). Median delayed recall was 1 word and median delayed recognition was 10 words (IQR = 8-12). In-hospital sleep duration and efficiency were not significantly associated with memory. The medical vignette score was correlated with immediate recall (r = 0.49, p < 0.01). Conclusions: About half of inpatients studied had poor memory while in the hospital, signaling that hospitalization might not be an ideal teachable moment. In-hospital sleep was not associated with memory scores. PMID:25872763

  6. Recurrent Neural Networks With Auxiliary Memory Units.

    PubMed

    Wang, Jianyong; Zhang, Lei; Guo, Quan; Yi, Zhang

    2018-05-01

    Memory is one of the most important mechanisms in recurrent neural network (RNN) learning. It plays a crucial role in practical applications, such as sequence learning. With a good memory mechanism, long-term history can be fused with current information and can thus improve RNN learning. Developing a suitable memory mechanism is always desirable in the field of RNNs. This paper proposes a novel memory mechanism for RNNs. The main contributions of this paper are: 1) an auxiliary memory unit (AMU) is proposed, which results in a new special RNN model (AMU-RNN), separating the memory and output explicitly, and 2) an efficient learning algorithm is developed by employing the technique of error flow truncation. The proposed AMU-RNN model, together with the developed learning algorithm, can learn and maintain stable memory over a long time range. This method overcomes both the learning conflict problem and the gradient vanishing problem. Unlike the traditional approach, which mixes the memory and output in a single neuron within a recurrent unit, the AMU provides a dedicated auxiliary memory neuron for maintaining memory. By separating the memory and output in a recurrent unit, the problem of learning conflicts can be eliminated easily. Moreover, by using the technique of error flow truncation, each auxiliary memory neuron ensures constant error flow during the learning process. The experiments demonstrate the good performance of the proposed AMU-RNNs and the developed learning algorithm, which exhibits efficient learning with stable convergence and outperforms state-of-the-art RNN models in sequence generation and sequence classification tasks.

  7. The public hospital of the future.

    PubMed

    Zajac, Jeffrey D

    2003-09-01

    Public hospitals designed for the past are not changing rapidly enough to meet the needs of the future. Changing work practices, increased pressure on bed occupancy, and greater numbers of patients with complex diseases and comorbidities will determine the functions of future hospitals. To maximise the use of resources, hospital "down times" on weekends and public holidays will be a distant memory. Elective surgery will increase in the traditionally "quiet times", such as summer, and decrease in the busy winter period. The patient will be the focus of an efficient information flow, streamlining patient care in hospital and enhancing communication between hospitals and community-based health providers. General and specialty units will need to work more efficiently together, as general physicians take on the role of patient case managers for an increasing proportion of patients. Funding needs to be adequate, and system management should involve clinicians. Safety will be enshrined in hospital systems and procedures, as well as in the minds of hospital staff. If these changes are not implemented successfully, public hospitals will not survive in the future.

  8. Working Memory Capacity and Recall from Long-Term Memory: Examining the Influences of Encoding Strategies, Study Time Allocation, Search Efficiency, and Monitoring Abilities

    ERIC Educational Resources Information Center

    Unsworth, Nash

    2016-01-01

    The relation between working memory capacity (WMC) and recall from long-term memory (LTM) was examined in the current study. Participants performed multiple measures of delayed free recall varying in presentation duration and self-reported their strategy usage after each task. Participants also performed multiple measures of WMC. The results…

  9. Working Memory Intervention: A Reading Comprehension Approach

    ERIC Educational Resources Information Center

    Perry, Tracy L.; Malaia, Evguenia

    2013-01-01

    For any complex mental task, people rely on working memory. Working memory capacity (WMC) is one predictor of success in learning. Historically, attempts to improve verbal WM through training have not been effective. This study provided elementary students with WM consolidation efficiency training to answer the question, Can reading comprehension…

  10. Neural Correlates of Prospective Memory across the Lifespan

    ERIC Educational Resources Information Center

    Zollig, Jacqueline; West, Robert; Martin, Mike; Altgassen, Mareike; Lemke, Ulrike; Kliegel, Matthias

    2007-01-01

    Behavioural data reveal an inverted U-shaped function in the efficiency of prospective memory from childhood to young adulthood to later adulthood. However, prior research has not directly compared processes contributing to age-related variation in prospective memory across the lifespan; hence, it is unclear whether the same factors…

  11. Cerebellar models of associative memory: Three papers from IEEE COMPCON spring 1989

    NASA Technical Reports Server (NTRS)

    Raugh, Michael R. (Editor)

    1989-01-01

    Three papers are presented on the following topics: (1) a cerebellar-model associative memory as a generalized random-access memory; (2) theories of the cerebellum - two early models of associative memory; and (3) intelligent network management and functional cerebellum synthesis.

  12. Weighing the value of memory loss in the surgical evaluation of left temporal lobe epilepsy: A decision analysis

    PubMed Central

    Akama-Garren, Elliot H.; Bianchi, Matt T.; Leveroni, Catherine; Cole, Andrew J.; Cash, Sydney S.; Westover, M. Brandon

    2016-01-01

    Objectives: Anterior temporal lobectomy is curative for many patients with disabling medically refractory temporal lobe epilepsy, but carries an inherent risk of disabling verbal memory loss. Although accurate prediction of iatrogenic memory loss is becoming increasingly possible, it remains unclear how much weight such predictions should have in surgical decision making. Here we aim to create a framework that facilitates a systematic and integrated assessment of the relative risks and benefits of surgery versus medical management for patients with left temporal lobe epilepsy. Methods: We constructed a Markov decision model to evaluate the probabilistic outcomes and associated health utilities associated with choosing to undergo a left anterior temporal lobectomy versus continuing with medical management for patients with medically refractory left temporal lobe epilepsy. Three base-cases were considered, representing a spectrum of surgical candidates encountered in practice, with varying degrees of epilepsy-related disability and potential for decreased quality of life in response to post-surgical verbal memory deficits. Results: For patients with moderately severe seizures and moderate risk of verbal memory loss, medical management was the preferred decision, with increased quality-adjusted life expectancy. However, the preferred choice was sensitive to clinically meaningful changes in several parameters, including quality of life impact of verbal memory decline, quality of life with seizures, mortality rate with medical management, probability of remission following surgery, and probability of remission with medical management. Significance: Our decision model suggests that for patients with left temporal lobe epilepsy, quantitative assessment of risk and benefit should guide recommendation of therapy. In particular, risk for and potential impact of verbal memory decline should be carefully weighed against the degree of disability conferred by continued seizures on a patient-by-patient basis. PMID:25244498

  13. Weighing the value of memory loss in the surgical evaluation of left temporal lobe epilepsy: a decision analysis.

    PubMed

    Akama-Garren, Elliot H; Bianchi, Matt T; Leveroni, Catherine; Cole, Andrew J; Cash, Sydney S; Westover, M Brandon

    2014-11-01

    Anterior temporal lobectomy is curative for many patients with disabling medically refractory temporal lobe epilepsy, but carries an inherent risk of disabling verbal memory loss. Although accurate prediction of iatrogenic memory loss is becoming increasingly possible, it remains unclear how much weight such predictions should have in surgical decision making. Here we aim to create a framework that facilitates a systematic and integrated assessment of the relative risks and benefits of surgery versus medical management for patients with left temporal lobe epilepsy. We constructed a Markov decision model to evaluate the probabilistic outcomes and associated health utilities associated with choosing to undergo a left anterior temporal lobectomy versus continuing with medical management for patients with medically refractory left temporal lobe epilepsy. Three base-cases were considered, representing a spectrum of surgical candidates encountered in practice, with varying degrees of epilepsy-related disability and potential for decreased quality of life in response to post-surgical verbal memory deficits. For patients with moderately severe seizures and moderate risk of verbal memory loss, medical management was the preferred decision, with increased quality-adjusted life expectancy. However, the preferred choice was sensitive to clinically meaningful changes in several parameters, including quality of life impact of verbal memory decline, quality of life with seizures, mortality rate with medical management, probability of remission following surgery, and probability of remission with medical management. Our decision model suggests that for patients with left temporal lobe epilepsy, quantitative assessment of risk and benefit should guide recommendation of therapy. In particular, risk for and potential impact of verbal memory decline should be carefully weighed against the degree of disability conferred by continued seizures on a patient-by-patient basis. 

  14. Selective scanpath repetition during memory-guided visual search.

    PubMed

    Wynn, Jordana S; Bone, Michael B; Dragan, Michelle C; Hoffman, Kari L; Buchsbaum, Bradley R; Ryan, Jennifer D

    2016-01-02

    Visual search efficiency improves with repetition of a search display, yet the mechanisms behind these processing gains remain unclear. According to Scanpath Theory, memory retrieval is mediated by repetition of the pattern of eye movements or "scanpath" elicited during stimulus encoding. Using this framework, we tested the prediction that scanpath recapitulation reflects relational memory guidance during repeated search events. Younger and older subjects were instructed to find changing targets within flickering naturalistic scenes. Search efficiency (search time, number of fixations, fixation duration) and scanpath similarity (repetition) were compared across age groups for novel (V1) and repeated (V2) search events. Younger adults outperformed older adults on all efficiency measures at both V1 and V2, while the search time benefit for repeated viewing (V1-V2) did not differ by age. Fixation-binned scanpath similarity analyses revealed repetition of initial and final (but not middle) V1 fixations at V2, with older adults repeating more initial V1 fixations than young adults. In young adults only, early scanpath similarity correlated negatively with search time at test, indicating increased efficiency, whereas the similarity of V2 fixations to middle V1 fixations predicted poor search performance. We conclude that scanpath compression mediates increased search efficiency by selectively recapitulating encoding fixations that provide goal-relevant input. Extending Scanpath Theory, results suggest that scanpath repetition varies as a function of time and memory integrity.
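
    The fixation-binned scanpath similarity analysis described above can be illustrated with a toy computation (a sketch, not the authors' actual method): fixations are quantized to grid cells, and the early, middle, and late thirds of two viewings are compared position by position. The grid size and binning scheme below are invented for illustration.

```python
# Toy fixation-binned scanpath similarity: quantize fixations to grid cells,
# then score early/middle/late thirds of two scanpaths against each other.

def quantize(fixations, cell=100):
    """Map (x, y) fixation coordinates to grid-cell identifiers."""
    return [(int(x // cell), int(y // cell)) for x, y in fixations]

def binned_similarity(scan_a, scan_b, n_bins=3):
    """Return one score per bin: fraction of positions whose cells match."""
    a, b = quantize(scan_a), quantize(scan_b)
    scores = []
    for i in range(n_bins):
        seg_a = a[i * len(a) // n_bins:(i + 1) * len(a) // n_bins]
        seg_b = b[i * len(b) // n_bins:(i + 1) * len(b) // n_bins]
        n = min(len(seg_a), len(seg_b))
        if n == 0:
            scores.append(0.0)
            continue
        scores.append(sum(seg_a[j] == seg_b[j] for j in range(n)) / n)
    return scores

# Two viewings that share initial and final fixations but diverge mid-search:
v1 = [(50, 50), (60, 55), (300, 300), (320, 310), (700, 700), (710, 705)]
v2 = [(52, 48), (58, 60), (500, 100), (510, 120), (702, 698), (715, 702)]
early, middle, late = binned_similarity(v1, v2)
# early repetition is high, middle is low, mirroring the selective
# recapitulation of initial and final fixations reported above
```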

  15. Memory capacity, selective control, and value-directed remembering in children with and without attention-deficit/hyperactivity disorder (ADHD).

    PubMed

    Castel, Alan D; Lee, Steve S; Humphreys, Kathryn L; Moore, Amy N

    2011-01-01

    The ability to select what is important to remember, to attend to this information, and to recall high-value items leads to the efficient use of memory. The present study examined how children with and without attention-deficit/hyperactivity disorder (ADHD) performed on an incentive-based selectivity task in which to-be-remembered items were worth different point values. Participants were 6- to 9-year-old children with ADHD (n = 57) and without ADHD (n = 59). Using a selectivity task, participants studied words paired with point values and were asked to maximize their score, which was the overall value of the items they recalled. This task allows for measures of memory capacity and the ability to selectively remember high-value items. Although there were no significant between-groups differences in the number of words recalled (memory capacity), children with ADHD were less selective than children in the control group in terms of the value of the items they recalled (control of memory). All children recalled more high-value items than low-value items and showed some learning with task experience, but children with ADHD Combined type did not efficiently maximize memory performance (as measured by a selectivity index) relative to children with ADHD Inattentive type and healthy controls, who did not differ significantly from one another. Children with ADHD Combined type exhibit impairments in the strategic and efficient encoding and recall of high-value items. The findings have implications for theories of memory dysfunction in childhood ADHD and the key role of metacognition, cognitive control, and value-directed remembering when considering the strategic use of memory.
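
    The selectivity index mentioned above can be sketched as follows, assuming the common formulation in which the obtained score is compared against the chance and ideal scores for the same number of recalled items; this is an illustration, not the study's exact scoring code.

```python
# Selectivity index sketch: SI = (actual - chance) / (ideal - chance),
# where chance is the expected score for recalling the same number of items
# at random, and ideal is the score for recalling the highest-valued items.

def selectivity_index(item_values, recalled_values):
    n = len(recalled_values)
    actual = sum(recalled_values)
    chance = n * sum(item_values) / len(item_values)
    ideal = sum(sorted(item_values, reverse=True)[:n])
    if ideal == chance:
        return 0.0
    return (actual - chance) / (ideal - chance)

values = list(range(1, 13))                         # 12 words worth 1..12 points
perfect = selectivity_index(values, [12, 11, 10])   # top items only -> 1.0
unselective = selectivity_index(values, [7, 6, 8])  # mid-value items -> near 0
```

A score near 1 indicates highly value-directed recall; a score near 0 indicates recall indifferent to item value.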

  16. Why are You Late?: Investigating the Role of Time Management in Time-Based Prospective Memory

    PubMed Central

    Waldum, Emily R; McDaniel, Mark A.

    2016-01-01

    Time-based prospective memory (TBPM) tasks are those that are to be performed at a specific future time. In contrast to typical laboratory TBPM tasks (e.g., “hit the ‘z’ key every 5 minutes”), many real-world TBPM tasks require more complex time-management processes. For instance, to attend an appointment on time, one must estimate the duration of the drive to the appointment and then utilize this estimate to create and execute a secondary TBPM intention (e.g., “I need to start driving by 1:30 to make my 2:00 appointment on time”). Under- and overestimates of drive time can lead to inefficient TBPM performance, with the former leading to missed appointments and the latter to long stints in the waiting room. Despite the common occurrence of complex TBPM tasks in everyday life, to date, no studies have investigated how components of time management, including time estimation, affect behavior in such complex TBPM tasks. Therefore, the current study aimed to investigate timing biases in both older and younger adults, and further to determine how such biases, along with additional time-management components including planning and plan fidelity, influence complex TBPM performance. Results suggest for the first time that younger and older adults do not always utilize similar timing strategies and, as a result, can produce differential timing biases under the exact same environmental conditions. These timing biases, in turn, play a vital role in how efficiently both younger and older adults perform a later TBPM task that requires them to utilize their earlier time estimate. PMID:27336325

  17. Large efficiency at telecom wavelength for optical quantum memories.

    PubMed

    Dajczgewand, Julián; Le Gouët, Jean-Louis; Louchet-Chauvet, Anne; Chanelière, Thierry

    2014-05-01

    We implement the ROSE protocol in an erbium-doped solid, compatible with the telecom range. The ROSE scheme is an adaptation of the standard two-pulse photon echo to make it suitable for a quantum memory. We observe a retrieval efficiency of 40% for a weak laser pulse in the forward direction by using specific orientations of the light polarizations, magnetic field, and crystal axes.

  18. Feasibility study of current pulse induced 2-bit/4-state multilevel programming in phase-change memory

    NASA Astrophysics Data System (ADS)

    Liu, Yan; Fan, Xi; Chen, Houpeng; Wang, Yueqing; Liu, Bo; Song, Zhitang; Feng, Songlin

    2017-08-01

    Multilevel data storage for phase-change memory (PCM) has attracted growing attention in the memory market as a way to implement high-capacity memory systems and reduce cost-per-bit. In this work, we present a universal programming method using SET staircase current pulses in PCM cells, which can exploit the optimum programming scheme to achieve 2-bit/4-state resistance levels with equal logarithmic intervals. The SET staircase waveform can be optimized by TCAD real-time simulation to realize multilevel data storage efficiently in an arbitrary phase-change material. Experimental results from a 1 k-bit PCM test chip have validated the proposed multilevel programming scheme, which improves information storage density, resistance-level robustness, and energy efficiency while avoiding process complexity.
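
    The equal-logarithmic-interval resistance levels and the staircase programming idea can be sketched numerically. All resistances, current amplitudes, and step counts below are invented placeholders, not the paper's device parameters.

```python
# Numerical sketch of 2-bit/4-state multilevel PCM programming with equal
# logarithmic resistance spacing. R_MIN, R_MAX, currents, and step counts
# are illustrative, not measured device values.

import math

R_MIN, R_MAX = 1e3, 1e6   # assumed SET / RESET resistance bounds (ohms)
N_STATES = 4              # 2 bits -> 4 resistance levels

def target_resistance(state):
    """State k sits at an equal log interval between R_MIN and R_MAX."""
    step = (math.log10(R_MAX) - math.log10(R_MIN)) / (N_STATES - 1)
    return 10 ** (math.log10(R_MIN) + state * step)

def staircase_pulse(state, i_start=100e-6, i_step=20e-6):
    """Descending SET staircase; lower-resistance states get more steps."""
    n_steps = N_STATES - state
    return [i_start - k * i_step for k in range(n_steps)]

levels = [target_resistance(s) for s in range(N_STATES)]
# adjacent levels are spaced by a constant factor of 10 (equal log interval)
```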

  19. Enabling the High Level Synthesis of Data Analytics Accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Minutoli, Marco; Castellana, Vito G.; Tumeo, Antonino

    Conventional High Level Synthesis (HLS) tools mainly target compute-intensive kernels typical of digital signal processing applications. We are developing techniques and architectural templates to enable HLS of data analytics applications. These applications are memory intensive, present fine-grained, unpredictable data accesses, and exhibit irregular, dynamic task parallelism. We discuss an architectural template based around a distributed controller to efficiently exploit thread-level parallelism. We present a memory interface that supports parallel memory subsystems and enables implementing atomic memory operations. We introduce a dynamic task scheduling approach to efficiently execute heavily unbalanced workloads. The templates are validated by synthesizing queries from the Lehigh University Benchmark (LUBM), a well-known SPARQL benchmark.

  20. Research about Memory Detection Based on the Embedded Platform

    NASA Astrophysics Data System (ADS)

    Sun, Hao; Chu, Jian

    As is well known, the memory resources of embedded systems are very limited. Taking a Linux-based embedded ARM board as the platform, this article puts forward two efficient memory detection technologies suited to the characteristics of embedded software. In particular, for programs that need specific libraries, the article puts forward portable memory detection methods to help program designers reduce human errors, improve programming quality, and therefore make better use of the valuable embedded memory resources.

  1. Simplified Parallel Domain Traversal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erickson III, David J

    2011-01-01

    Many data-intensive scientific analysis techniques require global domain traversal, which over the years has been a bottleneck for efficient parallelization across distributed-memory architectures. Inspired by MapReduce and other simplified parallel programming approaches, we have designed DStep, a flexible system that greatly simplifies efficient parallelization of domain traversal techniques at scale. In order to deliver both simplicity to users as well as scalability on HPC platforms, we introduce a novel two-tiered communication architecture for managing and exploiting asynchronous communication loads. We also integrate our design with advanced parallel I/O techniques that operate directly on native simulation output. We demonstrate DStep by performing teleconnection analysis across ensemble runs of terascale atmospheric CO2 and climate data, and we show scalability results on up to 65,536 IBM BlueGene/P cores.
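
    The MapReduce-inspired traversal pattern can be sketched serially as follows. The function names are illustrative, not DStep's actual API; in DStep the per-block work runs in parallel with asynchronous communication.

```python
# Toy MapReduce-style domain traversal: the domain is split into blocks, a
# user "step" function emits (key, value) pairs while traversing its block,
# and values are reduced globally by key.

from collections import defaultdict

def dstep(blocks, step_fn, reduce_fn):
    grouped = defaultdict(list)
    for block in blocks:                 # in a real system: parallel workers
        for key, value in step_fn(block):
            grouped[key].append(value)
    return {k: reduce_fn(vs) for k, vs in grouped.items()}

# Example: per-parity sums over a block-decomposed 1-D domain.
blocks = [range(0, 4), range(4, 8), range(8, 12)]
out = dstep(blocks,
            step_fn=lambda blk: ((i % 2, i) for i in blk),
            reduce_fn=sum)
# out[0] sums the even indices, out[1] the odd indices
```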

  2. Transformational electronics: a powerful way to revolutionize our information world

    NASA Astrophysics Data System (ADS)

    Rojas, Jhonathan P.; Torres Sevilla, Galo A.; Ghoneim, Mohamed T.; Hussain, Aftab M.; Ahmed, Sally M.; Nassar, Joanna M.; Bahabry, Rabab R.; Nour, Maha; Kutbee, Arwa T.; Byas, Ernesto; Al-Saif, Bidoor; Alamri, Amal M.; Hussain, Muhammad M.

    2014-06-01

    With the emergence of cloud computation, we are facing the rising waves of big data. It is time to leverage this opportunity by increasing data usage both by man and machine. We need ultra-mobile computation with high data processing speed, ultra-large memory, energy efficiency, and multi-functionality. Additionally, we have to deploy energy-efficient multi-functional 3D ICs for robust cyber-physical system establishment. To achieve such lofty goals we have to mimic the human brain, which is inarguably the world's most powerful and energy-efficient computer. The brain's cortex has a folded architecture to increase surface area in an ultra-compact space to contain its neurons and synapses. Therefore, it is imperative to overcome two integration challenges: (i) finding a low-cost 3D IC fabrication process and (ii) creating foldable substrates with ultra-large-scale integration of high-performance, energy-efficient electronics. Hence, we show a low-cost generic batch process based on trench-protect-peel-recycle to fabricate rigid and flexible 3D ICs as well as high-performance flexible electronics. As of today we have made every single component needed for a fully flexible computer, including non-planar state-of-the-art FinFETs. Additionally, we have demonstrated various solid-state memories, movable MEMS devices, and energy harvesting and storage components. To show the versatility of our process, we have extended it towards other inorganic semiconductor substrates such as silicon germanium and III-V materials. Finally, we report the first-ever fully flexible programmable silicon-based microprocessor towards foldable brain computation, and a wirelessly programmable stretchable and flexible thermal patch for pain management for smart bionics.

  3. Goal-Driven Autonomy and Robust Architecture for Long-Duration Missions (Year 1: 1 July 2013 - 31 July 2014)

    DTIC Science & Technology

    2014-09-30

    [Indexed excerpt; the original text is an architecture diagram whose labels were garbled in extraction.] The architecture couples a goal-management component (goal change, goal input) in the mental domain with a world model, episodic memory, semantic memory, and mission goals, under meta-level control comprising introspective monitoring, memory and reasoning traces, strategies, metaknowledge, and a self model. Failures can arise from incorrect or missing memory associations (i.e., indices); similarly, correct information may exist in the input stream but may not be…

  4. Event-Based Prospective Memory Is Independently Associated with Self-Report of Medication Management in Older Adults

    PubMed Central

    Woods, Steven Paul; Weinborn, Michael; Maxwell, Brenton R.; Gummery, Alice; Mo, Kevin; Ng, Amanda R. J.; Bucks, Romola S.

    2014-01-01

    Background Identifying potentially modifiable risk factors for medication non-adherence in older adults is important in order to enhance screening and intervention efforts designed to improve medication-taking behavior and health outcomes. The current study sought to determine the unique contribution of prospective memory (i.e., “remembering to remember”) to successful self-reported medication management in older adults. Methods Sixty-five older adults with current medication prescriptions completed a comprehensive research evaluation of sociodemographic, psychiatric, and neurocognitive functioning, which included the Memory for Adherence to Medication Scale (MAMS), Prospective and Retrospective Memory Questionnaire (PRMQ), and a performance-based measure of prospective memory that measured both semantically-related and semantically-unrelated cue-intention (i.e., when-what) pairings. Results A series of hierarchical regressions controlling for biopsychosocial, other neurocognitive, and medication-related factors showed that elevated complaints on the PM scale of the PRMQ and worse performance on an objective semantically-unrelated event-based prospective memory task were independent predictors of poorer medication adherence as measured by the MAMS. Conclusions Prospective memory plays an important role in self-report of successful medication management among older adults. Findings may have implications for screening for older individuals “at risk” of non-adherence, as well as the development of prospective memory-based interventions to improve medication adherence and, ultimately, long-term health outcomes in older adults. PMID:24410357

  5. The Cognitive Bases of Intelligence Analysis.

    DTIC Science & Technology

    1984-01-01

    [Indexed excerpt; the original text is garbled.] …the truth of a single proposition or to discriminate among several propositions. Indicators represent the potentially observable events that form the… serves as a checklist against which to evaluate an actual intelligence product. If the ideal product is specified in sufficient detail for a particular… Interference in accessing memory occurs for both recognition and recall. Memory retrieval is most efficient when the memories are discriminable. Memories for…

  6. Representational constraints on the development of memory and metamemory: a developmental-representational theory.

    PubMed

    Ceci, Stephen J; Fitneva, Stanka A; Williams, Wendy M

    2010-04-01

    Traditional accounts of memory development suggest that maturation of prefrontal cortex (PFC) enables efficient metamemory, which enhances memory. An alternative theory is described, in which changes in early memory and metamemory are mediated by representational changes, independent of PFC maturation. In a pilot study and Experiment 1, younger children failed to recognize previously presented pictures, yet the children could identify the context in which they occurred, suggesting these failures resulted from inefficient metamemory. Older children seldom exhibited such failure. Experiment 2 established that this was not due to retrieval-time recoding. Experiment 3 suggested that young children's representation of a picture's attributes explained their metamemory failure. Experiment 4 demonstrated that metamemory is age-invariant when representational quality is controlled: When stimuli were equivalently represented, age differences in memory and metamemory declined. These findings do not support the traditional view that as children develop, neural maturation permits more efficient monitoring, which leads to improved memory. These findings support a theory based on developmental-representational synthesis, in which constraints on metamemory are independent of neurological development; representational features drive early memory to a greater extent than previously acknowledged, suggesting that neural maturation has been overimputed as a source of early metamemory and memory failure.

  7. Processing efficiency theory in children: working memory as a mediator between trait anxiety and academic performance.

    PubMed

    Owens, Matthew; Stevenson, Jim; Norgate, Roger; Hadwin, Julie A

    2008-10-01

    Working memory skills are positively associated with academic performance. In contrast, high levels of trait anxiety are linked with educational underachievement. Based on Eysenck and Calvo's (1992) processing efficiency theory (PET), the present study investigated whether associations between anxiety and educational achievement were mediated via poor working memory performance. Fifty children aged 11-12 years completed verbal (backwards digit span; tapping the phonological store/central executive) and spatial (Corsi blocks; tapping the visuospatial sketchpad/central executive) working memory tasks. Trait anxiety was measured using the State-Trait Anxiety Inventory for Children. Academic performance was assessed using school administered tests of reasoning (Cognitive Abilities Test) and attainment (Standard Assessment Tests). The results showed that the association between trait anxiety and academic performance was significantly mediated by verbal working memory for three of the six academic performance measures (math, quantitative and non-verbal reasoning). Spatial working memory did not significantly mediate the relationship between trait anxiety and academic performance. On average verbal working memory accounted for 51% of the association between trait anxiety and academic performance, while spatial working memory only accounted for 9%. The findings indicate that PET is a useful framework to assess the impact of children's anxiety on educational achievement.
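
    The mediation logic tested in this study can be sketched with ordinary least squares on synthetic data, estimating the proportion of the anxiety-performance association carried through working memory as (c - c')/c, where c is the total effect and c' is the direct effect controlling for working memory. This is a standard mediation decomposition on fabricated numbers, not the authors' exact analysis.

```python
# Simple mediation decomposition on synthetic data: total effect c of
# anxiety on performance, direct effect c' controlling for working memory,
# and proportion mediated (c - c') / c. Data are fabricated for illustration.

def mean(xs):
    return sum(xs) / len(xs)

def cov(xs, ys):
    """Population covariance (variance when xs is ys)."""
    mx, my = mean(xs), mean(ys)
    return sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / len(xs)

def mediation(x, m, y):
    c = cov(x, y) / cov(x, x)                      # total effect: Y ~ X
    det = cov(x, x) * cov(m, m) - cov(x, m) ** 2
    c_prime = (cov(x, y) * cov(m, m)               # direct effect: Y ~ X + M
               - cov(m, y) * cov(x, m)) / det
    return c, c_prime, (c - c_prime) / c

anxiety = [1, 2, 3, 4, 5, 6, 7, 8]
wm = [8, 8, 7, 6, 5, 5, 3, 2]      # higher anxiety tracks lower WM
perf = [2 * w for w in wm]         # performance driven entirely by WM
c, c_prime, prop = mediation(anxiety, wm, perf)
# here the association is fully mediated by working memory (prop = 1)
```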

  8. Efficient Parallelization of a Dynamic Unstructured Application on the Tera MTA

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid; Biswas, Rupak

    1999-01-01

    The success of parallel computing in solving real-life computationally-intensive problems relies on their efficient mapping and execution on large-scale multiprocessor architectures. Many important applications are both unstructured and dynamic in nature, making their efficient parallel implementation a daunting task. This paper presents the parallelization of a dynamic unstructured mesh adaptation algorithm using three popular programming paradigms on three leading supercomputers. We examine an MPI message-passing implementation on the Cray T3E and the SGI Origin2000, a shared-memory implementation using cache coherent nonuniform memory access (CC-NUMA) of the Origin2000, and a multi-threaded version on the newly-released Tera Multi-threaded Architecture (MTA). We compare several critical factors of this parallel code development, including runtime, scalability, programmability, and memory overhead. Our overall results demonstrate that multi-threaded systems offer tremendous potential for quickly and efficiently solving some of the most challenging real-life problems on parallel computers.

  9. Cooperative Data Sharing: Simple Support for Clusters of SMP Nodes

    NASA Technical Reports Server (NTRS)

    DiNucci, David C.; Balley, David H. (Technical Monitor)

    1997-01-01

    Libraries like PVM and MPI send typed messages to allow for heterogeneous cluster computing. Lower-level libraries, such as GAM, provide more efficient access to communication by removing the need to copy messages between the interface and user space in some cases. Still lower-level interfaces, such as UNET, get right down to the hardware level to provide maximum performance. However, these are all still interfaces for passing messages from one process to another, and have limited utility in a shared-memory environment, due primarily to the fact that message passing is just another term for copying. This drawback is made more pertinent by today's hybrid architectures (e.g. clusters of SMPs), where it is difficult to know beforehand whether two communicating processes will share memory. As a result, even portable language tools (like HPF compilers) must either map all interprocess communication into message passing, with the accompanying performance degradation in shared-memory environments, or they must check each communication at run-time and implement the shared-memory case separately for efficiency. Cooperative Data Sharing (CDS) is a single user-level API which abstracts all communication between processes into the sharing and access coordination of memory regions, in a model which might be described as "distributed shared messages" or "large-grain distributed shared memory". As a result, the user programs to a simple latency-tolerant abstract communication specification which can be mapped efficiently to either a shared-memory or message-passing based run-time system, depending upon the available architecture. Unlike some distributed shared memory interfaces, the user still has complete control over the assignment of data to processors, the forwarding of data to its next likely destination, and the queuing of data until it is needed, so even the relatively high latency present in clusters can be accommodated. 
    CDS does not require special use of an MMU, which can add overhead to some DSM systems, and does not require an SPMD programming model. Unlike some message-passing interfaces, CDS allows the user to implement efficient demand-driven applications where processes must "fight" over data, and does not perform copying if processes share memory and do not attempt concurrent writes. CDS also supports heterogeneous computing, dynamic process creation, handlers, and a very simple thread-arbitration mechanism. Additional support for array subsections is currently being considered. The CDS1 API, which forms the kernel of CDS, is built primarily upon only 2 communication primitives, one process initiation primitive, and some data translation (and marshalling) routines, memory allocation routines, and priority control routines. The entire current collection of 28 routines provides enough functionality to implement most (or all) of MPI 1 and 2, which has a much larger interface consisting of hundreds of routines. Still, the API is small enough to consider integrating into standard OS interfaces for handling inter-process communication in a network-independent way. This approach would also help to solve many of the problems plaguing other higher-level standards such as MPI and PVM, which must, in some cases, "play OS" to adequately address progress and process control issues. The CDS2 API, a higher level of interface roughly equivalent in functionality to MPI and to be built entirely upon CDS1, is still being designed. It is intended to add support for the equivalent of communicators, reduction and other collective operations, process topologies, additional support for process creation, and some automatic memory management. CDS2 will not exactly match MPI, because the copy-free semantics of communication from CDS1 will be supported. CDS2 application programs will be free to also make careful use of CDS1. 
    CDS1 has been implemented on networks of workstations running unmodified Unix-based operating systems, using UDP/IP and vendor-supplied high-performance locks. Although its inter-node performance is currently unimpressive due to rudimentary implementation techniques, it even now outperforms highly optimized MPI implementations on intra-node communication due to its support for non-copy communication. The similarity of the CDS1 architecture to that of other projects such as UNET and TRAP suggests that the inter-node performance can be increased significantly to surpass MPI or PVM, and it may be possible to migrate some of its functionality to communication controllers.
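
    The copy-free, region-based communication model described above can be caricatured in a few lines. The class and method names are invented for illustration, not the CDS1 API: a producer releases a region, a local consumer acquires the very same buffer with no copy, while a (simulated) remote acquisition behaves like message passing and copies.

```python
# Toy "distributed shared messages" model: a region store where release()
# publishes a buffer and acquire() hands it back zero-copy locally, or as a
# copy when the consumer is (simulated as) remote.

class RegionStore:
    def __init__(self):
        self._regions = {}

    def release(self, name, buf):
        """Producer publishes a region, conceptually giving up write access."""
        self._regions[name] = buf

    def acquire(self, name, remote=False):
        """Consumer gets the region: zero-copy locally, a copy remotely."""
        buf = self._regions[name]
        return bytearray(buf) if remote else buf

store = RegionStore()
data = bytearray(b"simulation frame 0")
store.release("frame0", data)

local = store.acquire("frame0")                # same object: no copy
remote = store.acquire("frame0", remote=True)  # behaves like message passing
```

The point of the sketch is the run-time choice: the same API call maps to a reference hand-off when memory is shared and to a copy when it is not, which is the ambiguity CDS is designed to absorb.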

  10. Statistical Learning Induces Discrete Shifts in the Allocation of Working Memory Resources

    ERIC Educational Resources Information Center

    Umemoto, Akina; Scolari, Miranda; Vogel, Edward K.; Awh, Edward

    2010-01-01

    Observers can voluntarily select which items are encoded into working memory, and the efficiency of this process strongly predicts memory capacity. Nevertheless, the present work suggests that voluntary intentions do not exclusively determine what is encoded into this online workspace. Observers indicated whether any items from a briefly stored…

  11. NAS Applications and Advanced Algorithms

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Biswas, Rupak; VanDerWijngaart, Rob; Kutler, Paul (Technical Monitor)

    1997-01-01

    This paper examines the applications most commonly run on the supercomputers at the Numerical Aerospace Simulation (NAS) facility. It analyzes the extent to which such applications are fundamentally oriented to vector computers, and whether or not they can be efficiently implemented on hierarchical memory machines, such as systems with cache memories and highly parallel, distributed memory systems.

  12. Aging Memory Is "Not" a Limiting Factor for Lifelong Learning

    ERIC Educational Resources Information Center

    Lalovic, Dejan; Gvozdenovic, Vasilije

    2015-01-01

    Efficient memory is one of the necessary cognitive potentials required for virtually every form of lifelong learning. In this contribution we first briefly review and summarize state of the art of knowledge on memory and related cognitive functions in normal aging. Then we critically discuss a relatively short inventory of clinical, psychometric,…

  13. Cache write generate for parallel image processing on shared memory architectures.

    PubMed

    Wittenbrink, C M; Somani, A K; Chen, C H

    1996-01-01

    We investigate cache write generate, our cache mode invention. We demonstrate that for parallel image processing applications, the new mode improves main memory bandwidth, CPU efficiency, cache hits, and cache latency. We use register level simulations validated by the UW-Proteus system. Many memory, cache, and processor configurations are evaluated.

  14. Kinetic energy classification and smoothing for compact B-spline basis sets in quantum Monte Carlo

    DOE PAGES

    Krogel, Jaron T.; Reboredo, Fernando A.

    2018-01-25

    Quantum Monte Carlo calculations of defect properties of transition metal oxides have become feasible in recent years due to increases in computing power. As the system size has grown, availability of on-node memory has become a limiting factor. Saving memory while minimizing computational cost is now a priority. The main growth in memory demand stems from the B-spline representation of the single particle orbitals, especially for heavier elements such as transition metals where semi-core states are present. Despite the associated memory costs, splines are computationally efficient. In this paper, we explore alternatives to reduce the memory usage of splined orbitals without significantly affecting numerical fidelity or computational efficiency. We make use of the kinetic energy operator to both classify and smooth the occupied set of orbitals prior to splining. By using a partitioning scheme based on the per-orbital kinetic energy distributions, we show that memory savings of about 50% is possible for select transition metal oxide systems. Finally, for production supercells of practical interest, our scheme incurs a performance penalty of less than 5%.
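The memory-saving idea described above, keeping sharp semi-core orbitals on the full B-spline grid while storing smooth, low-kinetic-energy orbitals on a coarser one, can be sketched roughly as follows. The threshold, grid sizes, orbital counts, and function names here are illustrative only and are not taken from the paper:

```python
# Hypothetical sketch of per-orbital kinetic-energy partitioning for
# B-spline storage. All numeric values are made up for illustration.

def partition_orbitals(kinetic_energies, threshold):
    """Split orbital indices into a coarse-grid set (smooth, low KE)
    and a fine-grid set (sharp, high KE, e.g. semi-core states)."""
    coarse = [i for i, ke in enumerate(kinetic_energies) if ke <= threshold]
    fine = [i for i, ke in enumerate(kinetic_energies) if ke > threshold]
    return coarse, fine

def spline_memory(n_points, n_orbitals, bytes_per_coeff=8):
    """Memory (bytes) to spline n_orbitals on an n_points**3 grid."""
    return n_points ** 3 * n_orbitals * bytes_per_coeff

# Example: 4 semi-core orbitals need the full 64^3 grid; 12 smooth valence
# orbitals tolerate twice the grid spacing (one eighth of the points).
kes = [40.0] * 4 + [5.0] * 12          # per-orbital KE, illustrative values
coarse, fine = partition_orbitals(kes, threshold=10.0)

full = spline_memory(64, len(kes))                       # everything fine-grained
split = spline_memory(64, len(fine)) + spline_memory(32, len(coarse))
print(f"savings: {1 - split / full:.0%}")                # -> savings: 66%
```

With these made-up numbers the split storage uses about a third of the memory; the roughly 50% figure reported in the abstract reflects the actual orbital composition of the systems studied.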

  15. Kinetic energy classification and smoothing for compact B-spline basis sets in quantum Monte Carlo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krogel, Jaron T.; Reboredo, Fernando A.

    Quantum Monte Carlo calculations of defect properties of transition metal oxides have become feasible in recent years due to increases in computing power. As the system size has grown, availability of on-node memory has become a limiting factor. Saving memory while minimizing computational cost is now a priority. The main growth in memory demand stems from the B-spline representation of the single particle orbitals, especially for heavier elements such as transition metals where semi-core states are present. Despite the associated memory costs, splines are computationally efficient. In this paper, we explore alternatives to reduce the memory usage of splined orbitals without significantly affecting numerical fidelity or computational efficiency. We make use of the kinetic energy operator to both classify and smooth the occupied set of orbitals prior to splining. By using a partitioning scheme based on the per-orbital kinetic energy distributions, we show that memory savings of about 50% is possible for select transition metal oxide systems. Finally, for production supercells of practical interest, our scheme incurs a performance penalty of less than 5%.

  16. Kinetic energy classification and smoothing for compact B-spline basis sets in quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Krogel, Jaron T.; Reboredo, Fernando A.

    2018-01-01

    Quantum Monte Carlo calculations of defect properties of transition metal oxides have become feasible in recent years due to increases in computing power. As the system size has grown, availability of on-node memory has become a limiting factor. Saving memory while minimizing computational cost is now a priority. The main growth in memory demand stems from the B-spline representation of the single particle orbitals, especially for heavier elements such as transition metals where semi-core states are present. Despite the associated memory costs, splines are computationally efficient. In this work, we explore alternatives to reduce the memory usage of splined orbitals without significantly affecting numerical fidelity or computational efficiency. We make use of the kinetic energy operator to both classify and smooth the occupied set of orbitals prior to splining. By using a partitioning scheme based on the per-orbital kinetic energy distributions, we show that memory savings of about 50% is possible for select transition metal oxide systems. For production supercells of practical interest, our scheme incurs a performance penalty of less than 5%.

  17. Public Sector Reform and Governance for Adaptation: Implications of New Public Management for Adaptive Capacity in Mexico and Norway

    NASA Astrophysics Data System (ADS)

    Eakin, Hallie; Eriksen, Siri; Eikeland, Per-Ove; Øyen, Cecilie

    2011-03-01

    Although many governments are assuming the responsibility of initiating adaptation policy in relation to climate change, the compatibility of "governance-for-adaptation" with the current paradigms of public administration has generally been overlooked. Over the last several decades, countries around the globe have embraced variants of the philosophy of administration broadly called "New Public Management" (NPM) in an effort to improve administrative efficiencies and the provision of public services. Using evidence from a case study of reforms in the building sector in Norway, and a case study of water and flood risk management in central Mexico, we analyze the implications of the adoption of the tenets of NPM for adaptive capacity. Our cases illustrate that some of the key attributes associated with governance for adaptation—namely, technical and financial capacities; institutional memory, learning and knowledge; and participation and accountability—have been eroded by NPM reforms. Despite improvements in specific operational tasks of the public sector in each case, we show that the success of NPM reforms presumes the existence of core elements of governance that have often been found lacking, including solid institutional frameworks and accountability. Our analysis illustrates the importance of considering both longer-term adaptive capacities and short-term efficiency goals in public sector administration reform.

  18. Public sector reform and governance for adaptation: implications of new public management for adaptive capacity in Mexico and Norway.

    PubMed

    Eakin, Hallie; Eriksen, Siri; Eikeland, Per-Ove; Øyen, Cecilie

    2011-03-01

    Although many governments are assuming the responsibility of initiating adaptation policy in relation to climate change, the compatibility of "governance-for-adaptation" with the current paradigms of public administration has generally been overlooked. Over the last several decades, countries around the globe have embraced variants of the philosophy of administration broadly called "New Public Management" (NPM) in an effort to improve administrative efficiencies and the provision of public services. Using evidence from a case study of reforms in the building sector in Norway, and a case study of water and flood risk management in central Mexico, we analyze the implications of the adoption of the tenets of NPM for adaptive capacity. Our cases illustrate that some of the key attributes associated with governance for adaptation--namely, technical and financial capacities; institutional memory, learning and knowledge; and participation and accountability--have been eroded by NPM reforms. Despite improvements in specific operational tasks of the public sector in each case, we show that the success of NPM reforms presumes the existence of core elements of governance that have often been found lacking, including solid institutional frameworks and accountability. Our analysis illustrates the importance of considering both longer-term adaptive capacities and short-term efficiency goals in public sector administration reform.

  19. Noise reduction in optically controlled quantum memory

    NASA Astrophysics Data System (ADS)

    Ma, Lijun; Slattery, Oliver; Tang, Xiao

    2018-05-01

    Quantum memory is an essential tool for quantum communications systems and quantum computers. An important category of quantum memory, called optically controlled quantum memory, uses a strong classical beam to control the storage and re-emission of a single-photon signal through an atomic ensemble. In this type of memory, the residual light from the strong classical control beam can cause severe noise and degrade the system performance significantly. Efficiently suppressing this noise is a requirement for the successful implementation of optically controlled quantum memories. In this paper, we briefly introduce the latest and most common approaches to quantum memory and review the various noise-reduction techniques used in implementing them.

  20. Investigation and design of a Project Management Decision Support System for the 4950th Test Wing.

    DTIC Science & Technology

    1986-03-01

    all decision makers is the need for memory aids (reports, hand written notes, mental memory joggers, etc.). 4. Even in similar decision making ... memories to synthesize a decision- making process based on their individual styles, skills, and knowledge (Sprague, 1982: 106). Control mechanisms...representations shown in Figures 4.9 and 4.10 provide a means to this objective. By enabling a manager to make and record reasonable changes to

  1. Herbal medicine as a promising therapeutic approach for the management of vascular dementia: A systematic literature review.

    PubMed

    Ghorani-Azam, Adel; Sepahi, Samaneh; Khodaverdi, Elham; Mohajeri, Seyed Ahmad

    2018-05-22

    Vascular dementia (VaD) generally refers to memory deficits and cognitive abnormalities that result from vascular disease. In this study, we aimed to systematically review the literature in which the therapeutic effects of medicinal plants on VaD have been studied. A systematic literature search was performed in the PubMed, Scopus, Web of Science, Google Scholar, and other databases using VaD and medicinal plants as key terms. No strict inclusion criteria were defined, and almost all clinical studies were included. A total of 524 articles were found, of which only 28 relevant articles, covering 3461 patients, were included in this systematic review. The results showed that medicinal plants, particularly Sancaijiangtang and Ginkgo biloba, could improve behavioral and psychological symptoms, working memory, Mini-Mental State Examination scores, and activities of daily living, as well as neuropsychiatric features. It was also shown that patient age, the average progression of the disease, and the type of folk medicine used are important factors in the management of VaD. The results of this review indicate that herbal therapy is a potential candidate for the treatment of VaD; however, further studies are needed to confirm this efficacy. Copyright © 2018 John Wiley & Sons, Ltd.

  2. On the practical efficiency of shape memory engines

    NASA Astrophysics Data System (ADS)

    McCormick, P. G.

    1987-02-01

    The effects of non-ideal behavior, i.e., thermal efficiencies less than the ideal, on the efficiency of shape memory effect (SME) engines are analyzed. Account is taken of the temperature hysteresis between the forward and reverse transformations and the finite elastic compliance of the SM element and the engine. The temperature difference produced by a particular stress cycle and necessary to complete the cycle is quantified, along with the temperature penalty which arises from non-ideal behavior. The hysteresis, elastic compliance, and low working strains in cycled materials are shown to yield low thermal efficiencies, e.g., 1.95 pct instead of 6.74 pct in the case of a 20 K hysteresis. Heat recycling can theoretically improve the efficiency to about 3.23 pct.

  3. Progress towards broadband Raman quantum memory in Bose-Einstein condensates

    NASA Astrophysics Data System (ADS)

    Saglamyurek, Erhan; Hrushevskyi, Taras; Smith, Benjamin; Leblanc, Lindsay

    2017-04-01

    Optical quantum memories are building blocks for quantum information technologies. Efficient and long-lived storage combined with high-speed (broadband) operation are key features required for practical applications. While their realization has been a great challenge, Raman memory in Bose-Einstein condensates (BECs) is a promising approach, owing to negligible decoherence from diffusion and collisions (which allows seconds-scale memory times), high efficiency due to large atomic density, the possibility of atom-chip integration with micro-photonics, and the suitability of the far off-resonant Raman approach for storing broadband (GHz-scale) photons [5]. Here we report our progress towards Raman memory in a BEC. We describe our apparatus, recently built for producing BECs of 87Rb atoms, and present the observation of a nearly pure BEC of 5x10^5 atoms at 40 nK. After presenting our initial characterizations, we discuss the suitability of our system for Raman-based light storage in a BEC.

  4. Distributed Saturation

    NASA Technical Reports Server (NTRS)

    Chung, Ming-Ying; Ciardo, Gianfranco; Siminiceanu, Radu I.

    2007-01-01

    The Saturation algorithm for symbolic state-space generation has been a recent breakthrough in the exhaustive verification of complex systems, in particular globally-asynchronous/locally-synchronous systems. The algorithm uses a very compact Multiway Decision Diagram (MDD) encoding for states and the fastest symbolic exploration algorithm to date. The distributed version of Saturation uses the overall memory available on a network of workstations (NOW) to efficiently spread the memory load during the highly irregular exploration. A crucial factor in limiting memory consumption during symbolic state-space generation is the ability to perform garbage collection to free the memory occupied by dead nodes. However, garbage collection over a NOW requires nontrivial communication overhead. In addition, operation cache policies become critical when analyzing large-scale systems using the symbolic approach. In this technical report, we develop a garbage collection scheme and several operation cache policies to help solve extremely complex systems. Experiments show that our schemes improve the performance of the original distributed implementation, SmArTNow, in terms of time and memory efficiency.
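The core of any such scheme is reclaiming decision-diagram nodes the moment they become dead. A minimal single-machine sketch of reference-counted node collection (this is not the SmArTNow implementation, and it ignores the distributed communication overhead the report addresses) might look like:

```python
# Illustrative sketch: reference-counted garbage collection for
# decision-diagram nodes. A freshly created node carries one implicit
# reference held by its creator; releasing it once can free it.

class NodeTable:
    def __init__(self):
        self.nodes = {}      # node id -> tuple of child ids
        self.refs = {}       # node id -> count of *parent* references
        self.next_id = 0

    def make_node(self, children):
        nid = self.next_id
        self.next_id += 1
        self.nodes[nid] = children
        self.refs[nid] = 0                 # creator holds the implicit ref
        for c in children:
            self.refs[c] += 1
        return nid

    def release(self, nid):
        """Drop one reference; recursively free nodes whose count drops
        below the creator's implicit reference."""
        self.refs[nid] -= 1
        if self.refs[nid] <= 0:
            for c in self.nodes[nid]:
                self.release(c)
            del self.nodes[nid], self.refs[nid]   # node memory reclaimed

t = NodeTable()
leaf = t.make_node(())
a = t.make_node((leaf,))
b = t.make_node((leaf,))
root = t.make_node((a, b))
t.release(root)          # whole diagram becomes garbage at once
print(len(t.nodes))      # -> 0
```

Over a NOW, each decrement that crosses a node boundary would become a message, which is exactly the communication overhead the report's scheme tries to keep acceptable.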

  5. Sleep-Dependent Memory Consolidation and Reconsolidation

    PubMed Central

    Stickgold, Robert; Walker, Matthew P.

    2009-01-01

    Molecular, cellular, and systems-level processes convert initial, labile memory representations into more permanent ones, available for continued reactivation and recall over extended periods of time. These processes of memory consolidation and reconsolidation are not all-or-none phenomena, but rather a continuing series of biological adjustments that enhance both the efficiency and utility of stored memories over time. In this chapter, we review the role of sleep in supporting these disparate but related processes. PMID:17470412

  6. A fast sequence assembly method based on compressed data structures.

    PubMed

    Liang, Peifeng; Zhang, Yancong; Lin, Kui; Hu, Jinglu

    2014-01-01

    Assembling a large genome from next-generation sequencing reads requires large computer memory and a long execution time. To reduce these requirements, a memory- and time-efficient assembler, the FMJ-Assembler, is presented; it applies an FM-index within JR-Assembler, where FM stands for the FMR-index (derived from the FM-index and the BWT) and J for jumping extension. The FMJ-Assembler uses the expanded FM-index and BWT to compress read data to save memory, and its jumping-extension method makes it faster in CPU time. An extensive comparison of the FMJ-Assembler with current assemblers shows that it achieves better or comparable overall assembly quality while requiring less memory and less CPU time. These advantages indicate that the FMJ-Assembler will be an efficient assembly method for next-generation sequencing technology.
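The FM-index/BWT machinery such assemblers build on can be illustrated with a toy backward-search counter. This shows only the generic textbook technique; the FMR-index adds further compression, and the jumping-extension step is not shown:

```python
# Toy FM-index-style exact matching: build the BWT of a reference, then
# count pattern occurrences by backward search without storing the
# uncompressed suffix array.

def bwt(text):
    """Burrows-Wheeler transform of text (must end with the sentinel '$')."""
    rotations = sorted(text[i:] + text[:i] for i in range(len(text)))
    return "".join(row[-1] for row in rotations)

def fm_count(bwt_str, pattern):
    """Count occurrences of pattern via backward search on the BWT."""
    counts = {}
    for ch in bwt_str:
        counts[ch] = counts.get(ch, 0) + 1
    C, total = {}, 0                    # C[c] = # characters smaller than c
    for ch in sorted(counts):
        C[ch] = total
        total += counts[ch]

    def rank(c, i):                     # occurrences of c in bwt_str[:i]
        return bwt_str[:i].count(c)     # O(n) here; real indexes use tables

    lo, hi = 0, len(bwt_str)
    for c in reversed(pattern):         # extend the match right-to-left
        if c not in C:
            return 0
        lo = C[c] + rank(c, lo)
        hi = C[c] + rank(c, hi)
        if lo >= hi:
            return 0
    return hi - lo

genome = "GATTACA$"
b = bwt(genome)
print(fm_count(b, "TA"))   # -> 1
```

A production index replaces the linear-time `rank` with sampled occurrence tables, which is where the memory/speed trade-off the abstract describes is actually won.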

  7. Memory effect, resolution, and efficiency measurements of an Al2O3 coated plastic scintillator used for radioxenon detection

    NASA Astrophysics Data System (ADS)

    Bläckberg, L.; Fritioff, T.; Mårtensson, L.; Nielsen, F.; Ringbom, A.; Sjöstrand, H.; Klintenberg, M.

    2013-06-01

    A cylindrical plastic scintillator cell, used for radioxenon monitoring within the verification regime of the Comprehensive Nuclear-Test-Ban Treaty, has been coated with 425 nm Al2O3 using low temperature Atomic Layer Deposition, and its performance has been evaluated. The motivation is to reduce the memory effect caused by radioxenon diffusing into the plastic scintillator material during measurements, resulting in an elevated detection limit. Measurements with the coated detector show both energy resolution and efficiency comparable to uncoated detectors, and a memory effect reduction of a factor of 1000. Provided that the quality of the detector is maintained for a longer period of time, Al2O3 coatings are believed to be a viable solution to the memory effect problem in question.

  8. Hi-Corrector: a fast, scalable and memory-efficient package for normalizing large-scale Hi-C data.

    PubMed

    Li, Wenyuan; Gong, Ke; Li, Qingjiao; Alber, Frank; Zhou, Xianghong Jasmine

    2015-03-15

    Genome-wide proximity ligation assays, e.g. Hi-C and its variant TCC, have recently become important tools to study spatial genome organization. Removing biases from chromatin contact matrices generated by such techniques is a critical preprocessing step of subsequent analyses. The continuing decline of sequencing costs has led to an ever-improving resolution of the Hi-C data, resulting in very large matrices of chromatin contacts. Such large matrices, however, pose a great challenge to the memory usage and speed of normalization. Therefore, there is an urgent need for fast and memory-efficient methods for normalization of Hi-C data. We developed Hi-Corrector, an easy-to-use, open source implementation of the Hi-C data normalization algorithm. Its salient features are (i) scalability-the software is capable of normalizing Hi-C data of any size in reasonable times; (ii) memory efficiency-the sequential version can run on any single computer with very limited memory, no matter how little; (iii) fast speed-the parallel version can run very fast on multiple computing nodes with limited local memory. The sequential version is implemented in ANSI C and can be easily compiled on any system; the parallel version is implemented in ANSI C with the MPI library (a standardized and portable parallel environment designed for solving large-scale scientific problems). The package is freely available at http://zhoulab.usc.edu/Hi-Corrector/. © The Author 2014. Published by Oxford University Press.

  9. A memory efficient user interface for CLIPS micro-computer applications

    NASA Technical Reports Server (NTRS)

    Sterle, Mark E.; Mayer, Richard J.; Jordan, Janice A.; Brodale, Howard N.; Lin, Min-Jin

    1990-01-01

    The goal of the Integrated Southern Pine Beetle Expert System (ISPBEX) is to provide expert level knowledge concerning treatment advice that is convenient and easy to use for Forest Service personnel. ISPBEX was developed in CLIPS and delivered on an IBM PC AT class micro-computer, operating with an MS/DOS operating system. This restricted the size of the run time system to 640K. In order to provide a robust expert system, with on-line explanation, help, and alternative actions menus, as well as features that allow the user to back up or execute 'what if' scenarios, a memory efficient menuing system was developed to interface with the CLIPS programs. By robust, we mean an expert system that (1) is user friendly, (2) provides reasonable solutions for a wide variety of domain specific problems, (3) explains why some solutions were suggested but others were not, and (4) provides technical information relating to the problem solution. Several advantages were gained by using this type of user interface (UI). First, by storing the menus on the hard disk (instead of main memory) during program execution, a more robust system could be implemented. Second, since the menus were built rapidly, development time was reduced. Third, the user may try a new scenario by backing up to any of the input screens and revising segments of the original input without having to retype all the information. And fourth, asserting facts from the menus provided for a dynamic and flexible fact base. This UI technology has been applied successfully in expert systems applications in forest management, agriculture, and manufacturing. This paper discusses the architecture of the UI system, human factors considerations, and the menu syntax design.

  10. Working Memory and Processing Efficiency in Children's Reasoning.

    ERIC Educational Resources Information Center

    Halford, Graeme S.; And Others

    A series of studies was conducted to determine whether children's reasoning is capacity-limited and whether any such capacity, if it exists, is based on the working memory system. An N-term series (transitive inference) was used as the primary task in an interference paradigm. A concurrent short-term memory load was employed as the secondary task.…

  11. The Development of Strategy Use in Elementary School Children: Working Memory and Individual Differences

    ERIC Educational Resources Information Center

    Imbo, Ineke; Vandierendonck, Andre

    2007-01-01

    The current study tested the development of working memory involvement in children's arithmetic strategy selection and strategy efficiency. To this end, an experiment in which the dual-task method and the choice/no-choice method were combined was administered to 10- to 12-year-olds. Working memory was needed in retrieval, transformation, and…

  12. Compression in Visual Working Memory: Using Statistical Regularities to Form More Efficient Memory Representations

    ERIC Educational Resources Information Center

    Brady, Timothy F.; Konkle, Talia; Alvarez, George A.

    2009-01-01

    The information that individuals can hold in working memory is quite limited, but researchers have typically studied this capacity using simple objects or letter strings with no associations between them. However, in the real world there are strong associations and regularities in the input. In an information theoretic sense, regularities…

  13. Word-Decoding Skill Interacts with Working Memory Capacity to Influence Inference Generation during Reading

    ERIC Educational Resources Information Center

    Hamilton, Stephen; Freed, Erin; Long, Debra L.

    2016-01-01

    The aim of this study was to examine predictions derived from a proposal about the relation between word-decoding skill and working memory capacity, called verbal efficiency theory. The theory states that poor word representations and slow decoding processes consume resources in working memory that would otherwise be used to execute high-level…

  14. Effects of cacheing on multitasking efficiency and programming strategy on an ELXSI 6400

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Montry, G.R.; Benner, R.E.

    1985-12-01

    The impact of a cache/shared memory architecture, and, in particular, the cache coherency problem, upon concurrent algorithm and program development is discussed. In this context, a simple set of programming strategies is proposed that streamlines code development and improves code performance when multitasking in a cache/shared memory or distributed memory environment.

  15. How Quickly They Forget: The Relationship between Forgetting and Working Memory Performance

    ERIC Educational Resources Information Center

    Bayliss, Donna M.; Jarrold, Christopher

    2015-01-01

    This study examined the contribution of individual differences in rate of forgetting to variation in working memory performance in children. One hundred and twelve children (mean age 9 years 4 months) completed 2 tasks designed to measure forgetting, as well as measures of working memory, processing efficiency, and short-term storage ability.…

  16. Seven-Year-Olds Allocate Attention Like Adults Unless Working Memory Is Overloaded

    ERIC Educational Resources Information Center

    Cowan, Nelson; Morey, Candice C.; AuBuchon, Angela M.; Zwilling, Christopher E.; Gilchrist, Amanda L.

    2010-01-01

    Previous studies have indicated that visual working memory performance increases with age in childhood, but it is not clear why. One main hypothesis has been that younger children are less efficient in their attention; specifically, they are less able to exclude irrelevant items from working memory to make room for relevant items. We examined this…

  17. Appearance of the two-way shape-memory effect in a nitinol spring subjected to temperature and deformation cycling

    NASA Astrophysics Data System (ADS)

    Manjavidze, A. G.; Barnov, V. A.; Jorjishvili, L. I.; Sobolevskaya, S. V.

    2008-03-01

    The properties of a cylindrical spiral spring of nitinol (shape-memory alloy) are studied. When this spring is used as a working element in a rotary martensitic engine, the appearance of the two-way shape-memory effect in it is shown to decrease the engine operation efficiency.

  18. Implementation of a parallel unstructured Euler solver on shared and distributed memory architectures

    NASA Technical Reports Server (NTRS)

    Mavriplis, D. J.; Das, Raja; Saltz, Joel; Vermeland, R. E.

    1992-01-01

    An efficient three dimensional unstructured Euler solver is parallelized on a Cray Y-MP C90 shared memory computer and on an Intel Touchstone Delta distributed memory computer. This paper relates the experiences gained and describes the software tools and hardware used in this study. Performance comparisons between two differing architectures are made.

  19. Spermidine boosts autophagy to protect from synapse aging.

    PubMed

    Bhukel, Anuradha; Madeo, Frank; Sigrist, Stephan J

    2017-02-01

    All animals form memories to adapt their behavior in a context-dependent manner. With increasing age, however, forming new memories becomes less efficient. While synaptic plasticity promotes memory formation, the etiology of age-induced memory impairment has remained enigmatic. Previous work showed that simply feeding the polyamine spermidine protects from age-induced memory impairment in Drosophila. Most recent work now shows that spermidine operates directly at synapses, allowing for an autophagy-dependent homeostatic regulation of presynaptic specializations. How exactly autophagic regulation intersects with synaptic plasticity should be an interesting subject for future research.

  20. A Memory Efficient Network Encryption Scheme

    NASA Astrophysics Data System (ADS)

    El-Fotouh, Mohamed Abo; Diepold, Klaus

    In this paper, we studied the two encryption schemes most widely used in network applications. Shortcomings were found in both: each scheme either consumes more memory to gain high throughput, or uses little memory at the cost of low throughput. As the number of internet users grows each day, the need has arisen for a scheme that has low memory requirements while at the same time delivering high speed. We used the SSM model [1] to construct an encryption scheme based on the AES. The proposed scheme possesses high throughput together with low memory requirements.
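The low-memory property the abstract aims for comes from counter-mode-style streaming: each fixed-size chunk is encrypted independently of the rest of the stream, so memory use does not grow with message length. The toy below illustrates only that structural idea; SHA-256 stands in for a real block cipher, this is NOT the paper's AES-based SSM scheme, and it is not secure cryptography:

```python
# Toy counter-mode keystream: constant-memory streaming encryption.
# hashlib.sha256 is a stand-in for a block cipher; do not use for security.

import hashlib

def keystream_block(key, counter):
    """Derive one 32-byte keystream block from (key, counter)."""
    return hashlib.sha256(key + counter.to_bytes(8, "big")).digest()

def xor_stream(key, data):
    """Encrypt/decrypt chunk by chunk; memory use is independent of len(data)."""
    out = bytearray()
    for counter in range((len(data) + 31) // 32):
        block = keystream_block(key, counter)
        chunk = data[counter * 32:(counter + 1) * 32]
        out.extend(b ^ k for b, k in zip(chunk, block))
    return bytes(out)

key = b"example-key"
msg = b"a message longer than one keystream block...."
ct = xor_stream(key, msg)
assert xor_stream(key, ct) == msg   # XOR stream is its own inverse
```

Because each chunk's keystream depends only on the counter, chunks can also be processed in parallel or out of order, which is the same property that lets counter-mode ciphers trade memory for throughput.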

  1. Selective scanpath repetition during memory-guided visual search

    PubMed Central

    Wynn, Jordana S.; Bone, Michael B.; Dragan, Michelle C.; Hoffman, Kari L.; Buchsbaum, Bradley R.; Ryan, Jennifer D.

    2016-01-01

    Visual search efficiency improves with repetition of a search display, yet the mechanisms behind these processing gains remain unclear. According to Scanpath Theory, memory retrieval is mediated by repetition of the pattern of eye movements or “scanpath” elicited during stimulus encoding. Using this framework, we tested the prediction that scanpath recapitulation reflects relational memory guidance during repeated search events. Younger and older subjects were instructed to find changing targets within flickering naturalistic scenes. Search efficiency (search time, number of fixations, fixation duration) and scanpath similarity (repetition) were compared across age groups for novel (V1) and repeated (V2) search events. Younger adults outperformed older adults on all efficiency measures at both V1 and V2, while the search time benefit for repeated viewing (V1–V2) did not differ by age. Fixation-binned scanpath similarity analyses revealed repetition of initial and final (but not middle) V1 fixations at V2, with older adults repeating more initial V1 fixations than young adults. In young adults only, early scanpath similarity correlated negatively with search time at test, indicating increased efficiency, whereas the similarity of V2 fixations to middle V1 fixations predicted poor search performance. We conclude that scanpath compression mediates increased search efficiency by selectively recapitulating encoding fixations that provide goal-relevant input. Extending Scanpath Theory, results suggest that scanpath repetition varies as a function of time and memory integrity. PMID:27570471
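One common way to quantify scanpath repetition, shown here purely as an illustration (the study's fixation-binned analysis is more elaborate), is a normalized string-edit distance over region-coded fixation sequences:

```python
# Illustrative scanpath-similarity measure: Levenshtein distance between
# two sequences of fixated regions, normalized to a 0..1 similarity score.

def edit_distance(a, b):
    """Levenshtein distance between two fixation-region sequences."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def scanpath_similarity(p1, p2):
    """1.0 = identical scanpaths, 0.0 = maximally different."""
    if not p1 and not p2:
        return 1.0
    return 1 - edit_distance(p1, p2) / max(len(p1), len(p2))

v1 = ["A", "B", "C", "D"]   # regions fixated at first viewing (V1)
v2 = ["A", "B", "D", "D"]   # regions fixated at repeated viewing (V2)
print(round(scanpath_similarity(v1, v2), 2))   # -> 0.75
```

Binning fixations by position in the sequence, as the study does, would amount to computing such similarities separately for the initial, middle, and final segments of the two scanpaths.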

  2. Factors affecting reorganisation of memory encoding networks in temporal lobe epilepsy

    PubMed Central

    Sidhu, M.K.; Stretton, J.; Winston, G.P.; Symms, M.; Thompson, P.J.; Koepp, M.J.; Duncan, J.S.

    2015-01-01

    Aims: In temporal lobe epilepsy (TLE) due to hippocampal sclerosis, reorganisation in the memory encoding network has been consistently described. Distinct areas of reorganisation have been shown to be efficient when associated with successful subsequent memory formation, or inefficient when not. We investigated the effect of clinical parameters that modulate memory functions (age at onset of epilepsy, epilepsy duration, and seizure frequency) in a large cohort of patients. Methods: We studied 53 patients with unilateral TLE and hippocampal sclerosis (29 left). All participants performed a functional magnetic resonance imaging memory encoding paradigm of faces and words. A continuous regression analysis was used to investigate the effects of age at onset of epilepsy, epilepsy duration and seizure frequency on the activation patterns in the memory encoding network. Results: Earlier age at onset of epilepsy was associated with left posterior hippocampus activations that were involved in successful subsequent memory formation in left hippocampal sclerosis patients. No association of age at onset of epilepsy was seen with face encoding in right hippocampal sclerosis patients. In both left hippocampal sclerosis patients during word encoding and right hippocampal sclerosis patients during face encoding, shorter duration of epilepsy and lower seizure frequency were associated with medial temporal lobe activations that were involved in successful memory formation. Longer epilepsy duration and higher seizure frequency were associated with contralateral extra-temporal activations that were not associated with successful memory formation. Conclusion: Age at onset of epilepsy influenced verbal memory encoding in patients with TLE due to hippocampal sclerosis in the speech-dominant hemisphere. 
Shorter duration of epilepsy and lower seizure frequency were associated with less disruption of the efficient memory encoding network whilst longer duration and higher seizure frequency were associated with greater, inefficient, extra-temporal reorganisation. PMID:25616449

  3. 78 FR 23866 - Airworthiness Directives; the Boeing Company

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-23

    ... operational software in the cabin management system, and loading new software into the mass memory card. The...-200 and -300 series airplanes. The proposed AD would have required installing new operational software in the cabin management system, and loading new software into the mass memory card. Since the...

  4. Hemiboreal forest: natural disturbances and the importance of ecosystem legacies to management

    Treesearch

    Kalev Jogiste; Henn Korjus; John Stanturf; Lee E. Frelich; Endijs Baders; Janis Donis; Aris Jansons; Ahto Kangur; Kajar Koster; Diana Laarmann; Tiit Maaten; Vitas Marozas; Marek Metslaid; Kristi Nigul; Olga Polyachenko; Tiit Randveer; Floortje Vodde

    2017-01-01

    The condition of forest ecosystems depends on the temporal and spatial pattern of management interventions and natural disturbances. Remnants of previous conditions persisting after disturbances, or ecosystem legacies, collectively comprise ecosystem memory. Ecosystem memory in turn contributes to resilience and possibilities of ecosystem reorganization...

  5. Resilient and Robust High Performance Computing Platforms for Scientific Computing Integrity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Yier

    As technology advances, computer systems are subject to increasingly sophisticated cyber-attacks that compromise both their security and integrity. High performance computing platforms used in commercial and scientific applications involving sensitive, or even classified data, are frequently targeted by powerful adversaries. This situation is made worse by a lack of fundamental security solutions that both perform efficiently and are effective at preventing threats. Current security solutions fail to address the threat landscape and ensure the integrity of sensitive data. As challenges rise, both private and public sectors will require robust technologies to protect their computing infrastructure. The research outcomes from this project try to address all these challenges. For example, we present LAZARUS, a novel technique to harden kernel Address Space Layout Randomization (KASLR) against paging-based side-channel attacks. In particular, our scheme allows for fine-grained protection of the virtual memory mappings that implement the randomization. We demonstrate the effectiveness of our approach by hardening a recent Linux kernel with LAZARUS, mitigating all of the previously presented side-channel attacks on KASLR. Our extensive evaluation shows that LAZARUS incurs only 0.943% overhead for standard benchmarks, and is therefore highly practical. We also introduced HA2lloc, a hardware-assisted allocator that is capable of leveraging an extended memory management unit to detect memory errors in the heap. We also performed testing of HA2lloc in a simulation environment and found that the approach is capable of preventing common memory vulnerabilities.

  6. Cross talk and diffraction efficiency in angular multiplexed memories using improved polypeptide

    NASA Astrophysics Data System (ADS)

    Ramenah, Harry K.; Bertrand, Paul; Soubari, E. H.; Meyrueis, Patrick

    1996-12-01

    We studied energy coupling between gratings and angularly multiplexed 20 gratings with uniform diffraction efficiency within a 25-micrometer-thick layer of dichromated gelatin. The dependence of diffraction efficiency on beam ratio is given. We recorded a memory in matrix form of n×m×p elements, where n and m are the rows and columns and p is the number of multiplexes. For indication only, with n = m = 10 and p = 20, the surface area of the matrix is 1 cm². Color diffractive images and digital data are illustrated, as well as video, cartography and medical applications.

  7. The impact of Moore's Law and loss of Dennard scaling: Are DSP SoCs an energy efficient alternative to x86 SoCs?

    NASA Astrophysics Data System (ADS)

    Johnsson, L.; Netzer, G.

    2016-10-01

    Moore's law, the doubling of transistors per unit area with each CMOS technology generation, is expected to continue throughout the decade, while Dennard voltage scaling, which resulted in constant power per unit area, stopped about a decade ago. The semiconductor industry's response to the loss of Dennard scaling and the consequent challenges in managing power distribution and dissipation has been leveled-off clock rates, a die performance gain reduced from about a factor of 2.8 to 1.4 per technology generation, and multi-core processor dies with increased cache sizes. Increased cache sizes offer performance benefits for many applications, as well as energy savings: accessing data in cache is considerably more energy efficient than accessing main memory, and caches consume less power than a corresponding amount of functional logic. As feature sizes continue to be scaled down, an increasing fraction of the die must be “underutilized” or “dark” due to power constraints. With power a prime design constraint, there is a concerted effort to find significantly more energy efficient chip architectures than those dominant in servers today, with chips potentially incorporating several types of cores to cover a range of applications, or different functions within an application, as is already common in the mobile processor market. Digital Signal Processors (DSPs), largely targeting the embedded and mobile processor markets, have typically been designed for a power consumption of 10% or less of a typical x86 CPU, yet with much more than 10% of the floating-point capability of x86 CPUs of the same technology generation. Thus, DSPs could potentially offer an energy efficient alternative to x86 CPUs. Here we report an assessment of the Texas Instruments TMS320C6678 DSP with regard to its energy efficiency for two common HPC benchmarks: STREAM (memory system benchmark) and HPL (CPU benchmark).

  8. Carbon nanomaterials for non-volatile memories

    NASA Astrophysics Data System (ADS)

    Ahn, Ethan C.; Wong, H.-S. Philip; Pop, Eric

    2018-03-01

    Carbon can create various low-dimensional nanostructures with remarkable electronic, optical, mechanical and thermal properties. These features make carbon nanomaterials especially interesting for next-generation memory and storage devices, such as resistive random access memory, phase-change memory, spin-transfer-torque magnetic random access memory and ferroelectric random access memory. Non-volatile memories greatly benefit from the use of carbon nanomaterials in terms of bit density and energy efficiency. In this Review, we discuss sp2-hybridized carbon-based low-dimensional nanostructures, such as fullerene, carbon nanotubes and graphene, in the context of non-volatile memory devices and architectures. Applications of carbon nanomaterials as memory electrodes, interfacial engineering layers, resistive-switching media, and scalable, high-performance memory selectors are investigated. Finally, we compare the different memory technologies in terms of writing energy and time, and highlight major challenges in the manufacturing, integration and understanding of the physical mechanisms and material properties.

  9. Memory-efficient table look-up optimized algorithm for context-based adaptive variable length decoding in H.264/advanced video coding

    NASA Astrophysics Data System (ADS)

    Wang, Jianhua; Cheng, Lianglun; Wang, Tao; Peng, Xiaodong

    2016-03-01

    Table look-up operation plays a very important role during the decoding processing of context-based adaptive variable length decoding (CAVLD) in H.264/advanced video coding (AVC). However, frequent table look-up operations result in many table memory accesses, which in turn lead to high table power consumption. To address the heavy table memory access of current methods and thereby reduce power consumption, a memory-efficient table look-up optimized algorithm is presented for CAVLD. The contribution of this paper is that index search technology is introduced to reduce memory accesses for table look-up, and thus table power consumption. Specifically, in our scheme, index search technology reduces memory accesses by cutting down the searching and matching operations for code_word, taking advantage of the internal relationship among the length of the zero run in code_prefix, the value of code_suffix and code_length, thus saving the power consumption of table look-up. The experimental results show that our proposed table look-up algorithm based on index search can lower memory access consumption by about 60% compared with table look-up by sequential search, thereby saving considerable power for CAVLD in H.264/AVC.
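    The index-search idea can be illustrated with a toy Python sketch. The codeword table below is hypothetical (NOT the real H.264 VLCTs): it only shows how the zero-run length of the code prefix can serve as a direct index into a small sub-table, so no sequential scan-and-match over candidate codewords is needed.

```python
def leading_zeros(bits: str) -> int:
    """Length of the zero run at the start of the code prefix."""
    n = 0
    while n < len(bits) and bits[n] == '0':
        n += 1
    return n

# Hypothetical codeword table (not the actual H.264 VLCTs), organised so the
# zero-run length of the prefix is a direct index:
#   zero_run -> (number of suffix bits, suffix value -> decoded symbol)
INDEXED_TABLE = {
    0: (0, {0: 'A'}),          # codeword "1"
    1: (1, {0: 'B', 1: 'C'}),  # codewords "010", "011"
    2: (1, {0: 'D', 1: 'E'}),  # codewords "0010", "0011"
}

def decode(bits: str):
    """Return (symbol, bits consumed) without scanning the whole table."""
    zr = leading_zeros(bits)
    pos = zr + 1                           # skip the zeros and the '1' marker
    n_suffix, symbols = INDEXED_TABLE[zr]  # one indexed access, no search
    suffix = int(bits[pos:pos + n_suffix], 2) if n_suffix else 0
    return symbols[suffix], pos + n_suffix

print(decode("011"), decode("0010"))  # ('C', 3) ('D', 4)
```

    A sequential-search decoder would compare the bitstream against each table entry in turn; here every decode touches exactly one table entry, which is the source of the memory-access savings the paper reports.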

  10. Compression in visual working memory: using statistical regularities to form more efficient memory representations.

    PubMed

    Brady, Timothy F; Konkle, Talia; Alvarez, George A

    2009-11-01

    The information that individuals can hold in working memory is quite limited, but researchers have typically studied this capacity using simple objects or letter strings with no associations between them. However, in the real world there are strong associations and regularities in the input. In an information theoretic sense, regularities introduce redundancies that make the input more compressible. The current study shows that observers can take advantage of these redundancies, enabling them to remember more items in working memory. In 2 experiments, covariance was introduced between colors in a display so that over trials some color pairs were more likely to appear than other color pairs. Observers remembered more items from these displays than from displays where the colors were paired randomly. The improved memory performance cannot be explained by simply guessing the high-probability color pair, suggesting that observers formed more efficient representations to remember more items. Further, as observers learned the regularities, their working memory performance improved in a way that is quantitatively predicted by a Bayesian learning model and optimal encoding scheme. These results suggest that the underlying capacity of the individuals' working memory is unchanged, but the information they have to remember can be encoded in a more compressed fashion. Copyright 2009 APA
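    The information-theoretic argument can be made concrete with a small calculation (a sketch with hypothetical numbers, not the study's actual stimuli or capacity estimates): when only a few color pairs ever occur, each pair carries fewer bits, so a fixed-capacity store holds more pairs.

```python
import math

def entropy(probs):
    """Shannon entropy in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

n_colors = 8
# Random pairing: all 64 color pairs equally likely.
independent = [1 / n_colors**2] * n_colors**2
# Strong regularity: only 8 high-probability pairs ever appear.
structured = [1 / n_colors] * n_colors

bits_independent = entropy(independent)  # 6.0 bits per pair
bits_structured = entropy(structured)    # 3.0 bits per pair

# With a fixed hypothetical memory "budget" of 12 bits, an optimal encoder
# stores 2 independent pairs but 4 structured pairs.
budget = 12
print(budget / bits_independent, budget / bits_structured)  # 2.0 4.0
```

    This mirrors the paper's conclusion: the underlying capacity is unchanged, but a more compressed encoding lets more items fit inside it.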

  11. Efficient computation of aerodynamic influence coefficients for aeroelastic analysis on a transputer network

    NASA Technical Reports Server (NTRS)

    Janetzke, David C.; Murthy, Durbha V.

    1991-01-01

    Aeroelastic analysis is multi-disciplinary and computationally expensive. Hence, it can greatly benefit from parallel processing. As part of an effort to develop an aeroelastic capability on a distributed memory transputer network, a parallel algorithm for the computation of aerodynamic influence coefficients is implemented on a network of 32 transputers. The aerodynamic influence coefficients are calculated using a 3-D unsteady aerodynamic model and a parallel discretization. Efficiencies up to 85 percent were demonstrated using 32 processors. The effect of subtask ordering, problem size, and network topology are presented. A comparison to results on a shared memory computer indicates that higher speedup is achieved on the distributed memory system.
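    As a quick check of the reported figure, parallel efficiency is speedup divided by processor count; a one-line Python helper (with hypothetical timings) shows that 85% efficiency on 32 processors corresponds to a speedup of about 27×:

```python
def parallel_efficiency(t_serial, t_parallel, n_procs):
    """Efficiency = (t_serial / t_parallel) / n_procs."""
    speedup = t_serial / t_parallel
    return speedup / n_procs

# Hypothetical timings chosen so the speedup is 27.2x on 32 processors.
eff = parallel_efficiency(t_serial=32.0, t_parallel=32.0 / 27.2, n_procs=32)
print(round(eff, 2))  # 0.85
```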

  12. High efficiency Raman memory by suppressing radiation trapping

    NASA Astrophysics Data System (ADS)

    Thomas, S. E.; Munns, J. H. D.; Kaczmarek, K. T.; Qiu, C.; Brecht, B.; Feizpour, A.; Ledingham, P. M.; Walmsley, I. A.; Nunn, J.; Saunders, D. J.

    2017-06-01

    Raman interactions in alkali vapours are used in applications such as atomic clocks, optical signal processing, generation of squeezed light and Raman quantum memories for temporal multiplexing. To achieve a strong interaction the alkali ensemble needs both a large optical depth and a high level of spin-polarisation. We implement a technique known as quenching using a molecular buffer gas which allows near-perfect spin-polarisation of over 99.5% in caesium vapour at high optical depths of up to ~2×10^5, a factor of 4 higher than can be achieved without quenching. We use this system to explore efficient light storage with high gain in a GHz bandwidth Raman memory.

  13. A direct method for unfolding the resolution function from measurements of neutron induced reactions

    NASA Astrophysics Data System (ADS)

    Žugec, P.; Colonna, N.; Sabate-Gilarte, M.; Vlachoudis, V.; Massimi, C.; Lerendegui-Marco, J.; Stamatopoulos, A.; Bacak, M.; Warren, S. G.; n TOF Collaboration

    2017-12-01

    The paper explores the numerical stability and the computational efficiency of a direct method for unfolding the resolution function from measurements of neutron induced reactions. A detailed resolution function formalism is laid out, followed by an overview of challenges present in a practical implementation of the method. A special matrix storage scheme is developed in order both to facilitate the memory management of the resolution function matrix and to increase the computational efficiency of the matrix multiplication and decomposition procedures. Due to its admirable computational properties, a Cholesky decomposition is at the heart of the unfolding procedure. With the smallest but necessary modification of the matrix to be decomposed, the method is successfully applied to systems of size 10^5 × 10^5. However, the amplification of the uncertainties during the direct inversion procedures limits the applicability of the method to high-precision measurements of neutron induced reactions.
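    The Cholesky route at the heart of such an unfolding can be sketched in a few lines of pure Python (illustrative only; the paper's implementation relies on a dedicated storage scheme to handle matrices of order 10^5, and the tiny matrix below is a made-up example):

```python
def cholesky(A):
    """Lower-triangular L with A = L Lᵀ (A symmetric positive definite)."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = (A[i][i] - s) ** 0.5
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L

def solve(A, b):
    """Solve A x = b via Cholesky, then forward/backward substitution."""
    n = len(b)
    L = cholesky(A)
    y = [0.0] * n
    for i in range(n):                       # forward:  L y = b
        y[i] = (b[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    x = [0.0] * n
    for i in reversed(range(n)):             # backward: Lᵀ x = y
        x[i] = (y[i] - sum(L[k][i] * x[k] for k in range(i + 1, n))) / L[i][i]
    return x

# Unfolding amounts to solving R m = c for the true spectrum m, given a
# (SPD-adjusted) resolution matrix R and measured counts c (toy values).
R = [[4.0, 2.0], [2.0, 3.0]]
c = [10.0, 8.0]
print(solve(R, c))  # ≈ [1.75, 1.5]
```

    Cholesky is attractive here because it halves the work and storage relative to a general LU factorisation, which matters at the matrix sizes the paper targets.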

  14. User-Defined Data Distributions in High-Level Programming Languages

    NASA Technical Reports Server (NTRS)

    Diaconescu, Roxana E.; Zima, Hans P.

    2006-01-01

    One of the characteristic features of today's high performance computing systems is a physically distributed memory. Efficient management of locality is essential for meeting key performance requirements for these architectures. The standard technique for dealing with this issue has involved the extension of traditional sequential programming languages with explicit message passing, in the context of a processor-centric view of parallel computation. This has resulted in complex and error-prone assembly-style codes in which algorithms and communication are inextricably interwoven. This paper presents a high-level approach to the design and implementation of data distributions. Our work is motivated by the need to improve the current parallel programming methodology by introducing a paradigm supporting the development of efficient and reusable parallel code. This approach is currently being implemented in the context of a new programming language called Chapel, which is designed in the HPCS project Cascade.

  15. Distributed Memory Parallel Computing with SEAWAT

    NASA Astrophysics Data System (ADS)

    Verkaik, J.; Huizer, S.; van Engelen, J.; Oude Essink, G.; Ram, R.; Vuik, K.

    2017-12-01

    Fresh groundwater reserves in coastal aquifers are threatened by sea-level rise, extreme weather conditions, increasing urbanization and associated groundwater extraction rates. To counteract these threats, accurate high-resolution numerical models are required to optimize the management of these precious reserves. The major model drawbacks are long run times and large memory requirements, limiting the predictive power of these models. Distributed memory parallel computing is an efficient technique for reducing run times and memory requirements, where the problem is divided over multiple processor cores. A new Parallel Krylov Solver (PKS) for SEAWAT is presented. PKS has recently been applied to MODFLOW and includes Conjugate Gradient (CG) and Biconjugate Gradient Stabilized (BiCGSTAB) linear accelerators. Both accelerators are preconditioned by an overlapping additive Schwarz preconditioner in a way that: a) subdomains are partitioned using Recursive Coordinate Bisection (RCB) load balancing, b) each subdomain uses local memory only and communicates with other subdomains by Message Passing Interface (MPI) within the linear accelerator, c) it is fully integrated in SEAWAT. Within SEAWAT, the PKS-CG solver replaces the Preconditioned Conjugate Gradient (PCG) solver for solving the variable-density groundwater flow equation and the PKS-BiCGSTAB solver replaces the Generalized Conjugate Gradient (GCG) solver for solving the advection-diffusion equation. PKS supports the third-order Total Variation Diminishing (TVD) scheme for computing advection. Benchmarks were performed on the Dutch national supercomputer (https://userinfo.surfsara.nl/systems/cartesius) using up to 128 cores, for a synthetic 3D Henry model (100 million cells) and the real-life Sand Engine model (10 million cells).
The Sand Engine model was used to investigate the potential effect of the long-term morphological evolution of a large sand replenishment and climate change on fresh groundwater resources. Speed-ups up to 40 were obtained with the new PKS solver.
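    Stripped of the MPI communication, Schwarz preconditioning and RCB load balancing, the CG accelerator at the core of a solver like PKS reduces to the textbook iteration. A serial pure-Python sketch (illustrative only, not the SEAWAT/PKS code; the 2×2 system is a made-up example):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(A, x):
    return [dot(row, x) for row in A]

def cg(A, b, tol=1e-10, max_iter=100):
    """Unpreconditioned Conjugate Gradient for a symmetric positive
    definite A; in PKS this loop runs per subdomain with MPI exchanges."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                     # residual r = b - A x, with x = 0
    p = r[:]
    rs = dot(r, r)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rs / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
print(cg(A, b))  # ≈ [0.0909, 0.6364]
```

    In the distributed setting, the dot products become global MPI reductions and the matrix-vector product requires halo exchanges between neighbouring subdomains; everything else stays local, which is what makes the method scale.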

  16. Memory and Learning: A Case Study.

    ERIC Educational Resources Information Center

    Webster, Raymond E.

    1986-01-01

    The usefulness of the Learning Efficiency Test (LET), an approach to assessing learning efficiency and short-term memory recall capacity in children, is described via a case study demonstrating the test's use to develop instructional strategies. (CL)

  17. Face classification using electronic synapses

    NASA Astrophysics Data System (ADS)

    Yao, Peng; Wu, Huaqiang; Gao, Bin; Eryilmaz, Sukru Burc; Huang, Xueyao; Zhang, Wenqiang; Zhang, Qingtian; Deng, Ning; Shi, Luping; Wong, H.-S. Philip; Qian, He

    2017-05-01

    Conventional hardware platforms consume huge amounts of energy for cognitive learning due to the data movement between the processor and the off-chip memory. Brain-inspired device technologies using analogue weight storage allow cognitive tasks to be completed more efficiently. Here we present an analogue non-volatile resistive memory (an electronic synapse) with foundry friendly materials. The device shows bidirectional continuous weight modulation behaviour. Grey-scale face classification is experimentally demonstrated using an integrated 1024-cell array with parallel online training. The energy consumption within the analogue synapses for each iteration is 1,000 × (20 ×) lower compared to an implementation using Intel Xeon Phi processor with off-chip memory (with hypothetical on-chip digital resistive random access memory). The accuracy on test sets is close to the result using a central processing unit. These experimental results consolidate the feasibility of analogue synaptic array and pave the way toward building an energy efficient and large-scale neuromorphic system.

  18. Modulation of selective attention by polarity-specific tDCS effects.

    PubMed

    Pecchinenda, Anna; Ferlazzo, Fabio; Lavidor, Michal

    2015-02-01

    Selective attention relies on working memory to maintain an attention set of task priorities. Consequently, selective attention is more efficient when working memory resources are not depleted. However, there is some evidence that distractors are processed even when working memory load is low. We used tDCS to assess whether boosting the activity of the Dorsolateral Prefrontal Cortex (DLPFC), involved in selective attention and working memory, would reduce interference from emotional distractors. Findings showed that anodal tDCS over the DLPFC was not sufficient to reduce interference from angry distractors. In contrast, cathodal tDCS over the DLPFC reduced interference from happy distractors. These findings show that altering the DLPFC activity is not sufficient to establish top-down control and increase selective attention efficiency. Although, when the neural signal in the DLPFC is altered by cathodal tDCS, interference from emotional distractors is reduced, leading to an improved performance. Copyright © 2014 Elsevier Ltd. All rights reserved.

  19. Face classification using electronic synapses.

    PubMed

    Yao, Peng; Wu, Huaqiang; Gao, Bin; Eryilmaz, Sukru Burc; Huang, Xueyao; Zhang, Wenqiang; Zhang, Qingtian; Deng, Ning; Shi, Luping; Wong, H-S Philip; Qian, He

    2017-05-12

    Conventional hardware platforms consume huge amounts of energy for cognitive learning due to the data movement between the processor and the off-chip memory. Brain-inspired device technologies using analogue weight storage allow cognitive tasks to be completed more efficiently. Here we present an analogue non-volatile resistive memory (an electronic synapse) with foundry friendly materials. The device shows bidirectional continuous weight modulation behaviour. Grey-scale face classification is experimentally demonstrated using an integrated 1024-cell array with parallel online training. The energy consumption within the analogue synapses for each iteration is 1,000 × (20 ×) lower compared to an implementation using Intel Xeon Phi processor with off-chip memory (with hypothetical on-chip digital resistive random access memory). The accuracy on test sets is close to the result using a central processing unit. These experimental results consolidate the feasibility of analogue synaptic array and pave the way toward building an energy efficient and large-scale neuromorphic system.

  20. Non-tables look-up search algorithm for efficient H.264/AVC context-based adaptive variable length coding decoding

    NASA Astrophysics Data System (ADS)

    Han, Yishi; Luo, Zhixiao; Wang, Jianhua; Min, Zhixuan; Qin, Xinyu; Sun, Yunlong

    2014-09-01

    In general, context-based adaptive variable length coding (CAVLC) decoding in H.264/AVC standard requires frequent access to the unstructured variable length coding tables (VLCTs) and significant memory accesses are consumed. Heavy memory accesses will cause high power consumption and time delays, which are serious problems for applications in portable multimedia devices. We propose a method for high-efficiency CAVLC decoding by using a program instead of all the VLCTs. The decoded codeword from VLCTs can be obtained without any table look-up and memory access. The experimental results show that the proposed algorithm achieves 100% memory access saving and 40% decoding time saving without degrading video quality. Additionally, the proposed algorithm shows a better performance compared with conventional CAVLC decoding, such as table look-up by sequential search, table look-up by binary search, Moon's method, and Kim's method.

  1. Broadband multiresonator quantum memory-interface.

    PubMed

    Moiseev, S A; Gerasimov, K I; Latypov, R R; Perminov, N S; Petrovnin, K V; Sherstyukov, O N

    2018-03-05

    In this paper we experimentally demonstrated a broadband scheme of the multiresonator quantum memory-interface. The microwave photonic scheme consists of the system of mini-resonators strongly interacting with a common broadband resonator coupled with the external waveguide. We have implemented the impedance matched quantum storage in this scheme via controllable tuning of the mini-resonator frequencies and coupling of the common resonator with the external waveguide. A proof-of-principle experiment has been demonstrated for broadband microwave pulses, in which a quantum efficiency of 16.3% was achieved at room temperature. By using the obtained experimental spectroscopic data, the dynamics of the signal retrieval has been simulated and promising results were found for high-Q mini-resonators in microwave and optical frequency ranges. The results pave the way for the experimental implementation of a broadband quantum memory-interface with quite high efficiency η > 0.99 on the basis of modern technologies, including optical quantum memory at room temperature.

  2. Interactive Effects of Working Memory Self-Regulatory Ability and Relevance Instructions on Text Processing

    ERIC Educational Resources Information Center

    Hamilton, Nancy Jo

    2012-01-01

    Reading is a process that requires the enactment of many cognitive processes. Each of these processes uses a certain amount of working memory resources, which are severely constrained by biology. More efficiency in the function of working memory may mediate the biological limits of same. Reading relevancy instructions may be one such method to…

  3. A system-level approach for embedded memory robustness

    NASA Astrophysics Data System (ADS)

    Mariani, Riccardo; Boschi, Gabriele

    2005-11-01

    New ultra-deep submicron technologies bring not only new advantages, such as extraordinary transistor densities and unforeseen performance, but also new uncertainties, such as soft-error susceptibility, modelling complexity, coupling effects, leakage contribution and increased sensitivity to internal and external disturbances. Embedded memories now take advantage of these technologies and are used ever more widely in systems; therefore, as robustness and reliability requirements increase, memory systems must be protected against different kinds of faults (permanent and transient), and this should be done efficiently. This means that reliability and costs, such as overhead and performance degradation, must be carefully tuned to the system and the application. Moreover, emerging norms for safety-critical applications, such as IEC 61508, require precise answers in terms of robustness for memory systems as well. In this paper, classical protection techniques for error detection and correction are enriched with a system-aware approach, in which the memory system is analyzed based on its role in the application. A configurable memory protection system is presented, together with the results of its application to a proof-of-concept architecture. This work has been developed in the framework of the MEDEA+ T126 project called BLUEBERRIES.
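    A classical building block behind such error detection and correction is a single-error-correcting Hamming code. A minimal Hamming(7,4) sketch in Python (illustrative only; the paper's configurable, system-aware protection scheme is not reproduced here):

```python
def hamming74_encode(d):
    """Encode 4 data bits as 7 bits; parity bits sit at positions 1, 2, 4."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # covers codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # covers codeword positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4          # covers codeword positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """Return (corrected data bits, 1-based error position or 0 if clean)."""
    c = c[:]
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3    # position of the flipped bit
    if syndrome:
        c[syndrome - 1] ^= 1           # correct the single-bit error
    return [c[2], c[4], c[5], c[6]], syndrome

word = hamming74_encode([1, 0, 1, 1])
word[5] ^= 1                           # inject a single bit-flip (a "soft error")
data, pos = hamming74_correct(word)
print(data, pos)  # [1, 0, 1, 1] 6
```

    Real embedded-memory protection typically uses an extended (SEC-DED) variant with one extra parity bit, so double-bit errors are at least detected; the system-aware tuning the paper proposes decides per memory how much of this overhead is justified.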

  4. Boosting the FM-Index on the GPU: Effective Techniques to Mitigate Random Memory Access.

    PubMed

    Chacón, Alejandro; Marco-Sola, Santiago; Espinosa, Antonio; Ribeca, Paolo; Moure, Juan Carlos

    2015-01-01

    The recent advent of high-throughput sequencing machines producing large volumes of short reads has boosted the interest in efficient string searching techniques. As of today, many mainstream sequence alignment software tools rely on a special data structure, called the FM-index, which allows for fast exact searches in large genomic references. However, such searches translate into a pseudo-random memory access pattern, thus making memory access the limiting factor of all computation-efficient implementations, both on CPUs and GPUs. Here, we show that several strategies can be put in place to remove the memory bottleneck on the GPU: more compact indexes can be implemented by having more threads work cooperatively on larger memory blocks, and a k-step FM-index can be used to further reduce the number of memory accesses. The combination of these and other optimisations yields an implementation that is able to process about two Gbases of queries per second on our test platform, being about 8× faster than a comparable multi-core CPU version, and about 3× to 5× faster than the FM-index implementation on the GPU provided by the recently announced Nvidia NVBIO bioinformatics library.
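    The backward-search loop whose rank (Occ) queries dominate the memory traffic can be sketched in Python (a naive reference version: real FM-indexes replace the linear-scan rank below with sampled Occ tables, which is precisely the access pattern the paper optimises on the GPU):

```python
from collections import Counter

def bwt_index(text):
    """Build the BWT plus the C[] table of an FM-index (naive construction)."""
    text += "$"
    rotations = sorted(range(len(text)), key=lambda i: text[i:] + text[:i])
    bwt = "".join(text[i - 1] for i in rotations)
    counts = Counter(bwt)
    # C[ch]: number of characters in the text strictly smaller than ch
    C, total = {}, 0
    for ch in sorted(counts):
        C[ch] = total
        total += counts[ch]
    return bwt, C

def occ(bwt, ch, i):
    """Occurrences of ch in bwt[:i] (the rank query; sampled in real indexes)."""
    return bwt[:i].count(ch)

def backward_search(bwt, C, pattern):
    """Count exact matches of pattern; each step costs two rank queries."""
    lo, hi = 0, len(bwt)
    for ch in reversed(pattern):
        if ch not in C:
            return 0
        lo = C[ch] + occ(bwt, ch, lo)
        hi = C[ch] + occ(bwt, ch, hi)
        if lo >= hi:
            return 0
    return hi - lo

bwt, C = bwt_index("abracadabra")
print(backward_search(bwt, C, "abra"))  # 2
```

    Each character of the pattern narrows a suffix-array interval via two Occ lookups at essentially unpredictable addresses, which is why the pseudo-random memory access pattern, rather than arithmetic, limits throughput.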

  5. Why are you late? Investigating the role of time management in time-based prospective memory.

    PubMed

    Waldum, Emily R; McDaniel, Mark A

    2016-08-01

    Time-based prospective memory tasks (TBPM) are those that are to be performed at a specific future time. Contrary to typical laboratory TBPM tasks (e.g., hit the Z key every 5 min), many real-world TBPM tasks require more complex time-management processes. For instance, to attend an appointment on time, one must estimate the duration of the drive to the appointment and then use this estimate to create and execute a secondary TBPM intention (e.g., "I need to start driving by 1:30 to make my 2:00 appointment on time"). Under- and overestimates of drive time can lead to inefficient TBPM performance, with the former leading to missed appointments and the latter to long stints in the waiting room. Despite the common occurrence of complex TBPM tasks in everyday life, to date, no studies have investigated how components of time management, including time estimation, affect behavior in such complex TBPM tasks. Therefore, the current study aimed to investigate timing biases in both older and younger adults and, further, to determine how such biases along with additional time management components including planning and plan fidelity influence complex TBPM performance. Results suggest for the first time that younger and older adults do not always utilize similar timing strategies, and as a result, can produce differential timing biases under the exact same environmental conditions. These timing biases, in turn, play a vital role in how efficiently both younger and older adults perform a later TBPM task that requires them to utilize their earlier time estimate. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  6. Improving family medicine resident training in dementia care: an experiential learning opportunity in Primary Care Collaborative Memory Clinics.

    PubMed

    Lee, Linda; Weston, W Wayne; Hillier, Loretta; Archibald, Douglas; Lee, Joseph

    2018-06-21

    Family physicians often find themselves inadequately prepared to manage dementia. This article describes the curriculum for a resident training intervention in Primary Care Collaborative Memory Clinics (PCCMC), outlines its underlying educational principles, and examines its impact on residents' ability to provide dementia care. PCCMCs are family physician-led interprofessional clinic teams that provide evidence-informed comprehensive assessment and management of memory concerns. Within PCCMCs residents learn to apply a structured approach to assessment, diagnosis, and management; training consists of a tutorial covering various topics related to dementia followed by work-based learning within the clinic. Significantly more residents who trained in PCCMCs (sample = 98), as compared to those in usual training programs (sample = 35), reported positive changes in knowledge, ability, and confidence in ability to assess and manage memory problems. The PCCMC training intervention for family medicine residents provides a significant opportunity for residents to learn about best clinical practices and interprofessional care needed for optimal dementia care integrated within primary care practice.

  7. An Investigation of Unified Memory Access Performance in CUDA

    PubMed Central

    Landaverde, Raphael; Zhang, Tiansheng; Coskun, Ayse K.; Herbordt, Martin

    2015-01-01

    Managing memory between the CPU and GPU is a major challenge in GPU computing. A programming model, Unified Memory Access (UMA), has been recently introduced by Nvidia to simplify the complexities of memory management while claiming good overall performance. In this paper, we investigate this programming model and evaluate its performance and programming model simplifications based on our experimental results. We find that beyond on-demand data transfers to the CPU, the GPU is also able to request subsets of data it requires on demand. This feature allows UMA to outperform full data transfer methods for certain parallel applications and small data sizes. We also find, however, that for the majority of applications and memory access patterns, the performance overheads associated with UMA are significant, while the simplifications to the programming model restrict flexibility for adding future optimizations. PMID:26594668

  8. A 12-Week Physical and Cognitive Exercise Program Can Improve Cognitive Function and Neural Efficiency in Community-Dwelling Older Adults: A Randomized Controlled Trial.

    PubMed

    Nishiguchi, Shu; Yamada, Minoru; Tanigawa, Takanori; Sekiyama, Kaoru; Kawagoe, Toshikazu; Suzuki, Maki; Yoshikawa, Sakiko; Abe, Nobuhito; Otsuka, Yuki; Nakai, Ryusuke; Aoyama, Tomoki; Tsuboyama, Tadao

    2015-07-01

    To investigate whether a 12-week physical and cognitive exercise program can improve cognitive function and brain activation efficiency in community-dwelling older adults. Randomized controlled trial. Kyoto, Japan. Community-dwelling older adults (N = 48) were randomized into an exercise group (n = 24) and a control group (n = 24). Exercise group participants received a weekly dual task-based multimodal exercise class in combination with pedometer-based daily walking exercise during the 12-week intervention phase. Control group participants did not receive any intervention and were instructed to spend their time as usual during the intervention phase. The outcome measures were global cognitive function, memory function, executive function, and brain activation (measured using functional magnetic resonance imaging) associated with visual short-term memory. Exercise group participants had significantly greater postintervention improvement in memory and executive functions than the control group (P < .05). In addition, after the intervention, less activation was found in several brain regions associated with visual short-term memory, including the prefrontal cortex, in the exercise group (P < .001, uncorrected). A 12-week physical and cognitive exercise program can improve the efficiency of brain activation during cognitive tasks in older adults, which is associated with improvements in memory and executive function. © 2015, Copyright the Authors Journal compilation © 2015, The American Geriatrics Society.

  9. Combining Distributed and Shared Memory Models: Approach and Evolution of the Global Arrays Toolkit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nieplocha, Jarek; Harrison, Robert J.; Kumar, Mukul

    2002-07-29

    Both shared memory and distributed memory models have advantages and shortcomings. The shared memory model is much easier to use, but it ignores data locality and placement. Given the hierarchical nature of the memory subsystems in modern computers, this characteristic can have a negative impact on performance and scalability. Various techniques, such as code restructuring to increase data reuse and introducing blocking in data accesses, can address the problem and yield performance competitive with message passing [Singh], although at the cost of compromising ease of use. Distributed memory models such as message passing or one-sided communication offer performance and scalability, but they compromise ease of use. In this context, the message-passing model is sometimes referred to as "assembly programming for scientific computing". The Global Arrays toolkit [GA1, GA2] attempts to offer the best features of both models. It implements a shared-memory programming model in which data locality is managed explicitly by the programmer. This management is achieved by explicit calls to functions that transfer data between a global address space (a distributed array) and local storage. In this respect, the GA model has similarities to distributed shared-memory models that provide an explicit acquire/release protocol. However, the GA model acknowledges that remote data is slower to access than local data and allows data locality to be explicitly specified and hence managed. The GA model exposes to the programmer the hierarchical memory of modern high-performance computer systems and, by recognizing the communication overhead for remote data transfer, promotes data reuse and locality of reference. This paper describes the characteristics of the Global Arrays programming model and the capabilities of the toolkit, and discusses its evolution.
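The programming model described above, a logically shared array accessed through explicit transfers between global and local storage, can be sketched in a few lines. The class and method names below are illustrative stand-ins, not the real Global Arrays C/Fortran API.

```python
class GlobalArray:
    """Toy model of a shared array whose blocks live on different processes."""

    def __init__(self, n, nprocs):
        self.block = (n + nprocs - 1) // nprocs
        # Each process owns one contiguous block: data placement is explicit.
        self.parts = [[0] * self.block for _ in range(nprocs)]

    def owner(self, i):
        """Locality query: which process holds element i?"""
        return i // self.block

    def put(self, i, value):
        """Local -> global transfer."""
        self.parts[self.owner(i)][i % self.block] = value

    def get(self, lo, hi):
        """Global -> local transfer (an explicit copy into local storage)."""
        return [self.parts[self.owner(i)][i % self.block] for i in range(lo, hi)]

ga = GlobalArray(8, nprocs=4)
ga.put(5, 42)
assert ga.owner(5) == 2          # element 5 lives on process 2
assert ga.get(4, 6) == [0, 42]   # an explicit local copy, not shared state
```

The point of the sketch is that locality is visible (`owner`) and every remote access is an explicit, potentially slow transfer (`get`/`put`), which is the GA model's central design choice.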

  10. Causal evidence for mnemonic metacognition in human precuneus.

    PubMed

    Ye 叶群, Qun; Zou 邹富渟, Futing; Lau 劉克頑, Hakwan; Hu 胡谊, Yi; Kwok 郭思齊, Sze Chai

    2018-06-19

    Metacognition is the capacity to introspectively monitor and control one's own cognitive processes. Previous anatomical and functional neuroimaging findings implicated the important role of the precuneus in metacognition processing, especially during mnemonic tasks. However, the issue of whether this medial parietal cortex is a domain-specific region that supports mnemonic metacognition remains controversial. Here, we focally disrupted this parietal area with repetitive transcranial magnetic stimulation in healthy human participants of both sexes, seeking to ascertain its functional necessity for metacognition in memory versus perceptual decisions. Perturbing precuneal activity selectively impaired metacognitive efficiency of temporal-order memory judgement, but not perceptual discrimination. Moreover, the correlation in individuals' metacognitive efficiency between domains disappeared when the precuneus was perturbed. Taken together, these findings provide evidence reinforcing the notion that the precuneal region plays an important role in mediating metacognition of episodic memory retrieval. SIGNIFICANCE STATEMENT Theories on the neural basis of metacognition have thus far been largely centered on the role of the prefrontal cortex. Here we refined the theoretical framework through characterizing a unique precuneal involvement in mnemonic metacognition with a noninvasive but inferentially powerful method: transcranial magnetic stimulation. By quantifying meta-cognitive efficiency across two distinct domains (memory vs. perception) that are matched for stimulus characteristics, we reveal an instrumental role of the precuneus in mnemonic metacognition. This causal evidence corroborates ample clinical reports that parietal lobe lesions often produce inaccurate self-reports of confidence in memory recollection and establish the precuneus as a nexus for the introspective ability to evaluate the success of memory judgment in humans. Copyright © 2018 the authors.

  11. The effect of interference on temporal order memory for random and fixed sequences in nondemented older adults.

    PubMed

    Tolentino, Jerlyn C; Pirogovsky, Eva; Luu, Trinh; Toner, Chelsea K; Gilbert, Paul E

    2012-05-21

    Two experiments tested the effect of temporal interference on order memory for fixed and random sequences in young adults and nondemented older adults. The results demonstrate that temporal order memory for fixed and random sequences is impaired in nondemented older adults, particularly when temporal interference is high. However, temporal order memory for fixed sequences is comparable between older adults and young adults when temporal interference is minimized. The results suggest that temporal order memory is less efficient and more susceptible to interference in older adults, possibly due to impaired temporal pattern separation.

  12. Processing-in-Memory Enabled Graphics Processors for 3D Rendering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Chenhao; Song, Shuaiwen; Wang, Jing

    2017-02-06

    The performance of 3D rendering on a Graphics Processing Unit, which converts 3D vector streams into 2D frames with 3D image effects, significantly impacts users' gaming experience on modern computer systems. Due to the high texture throughput in 3D rendering, main memory bandwidth becomes a critical obstacle to improving overall rendering performance. 3D-stacked memory systems such as the Hybrid Memory Cube (HMC) provide opportunities to significantly overcome the memory wall by directly connecting logic controllers to DRAM dies. Based on the observation that texel fetches significantly impact off-chip memory traffic, we propose two architectural designs to enable Processing-In-Memory-based GPUs for efficient 3D rendering.

  13. BLACKCOMB2: Hardware-software co-design for non-volatile memory in exascale systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mudge, Trevor

    This work was part of a larger project, Blackcomb2, centered at Oak Ridge National Labs (Jeff Vetter, PI), that investigated the opportunities for replacing or supplementing DRAM main memory with nonvolatile memory (NVmemory) in Exascale memory systems. The goal was to reduce the energy consumed by future supercomputer memory systems and to improve their resiliency. Building on the accomplishments of the original Blackcomb project, funded in 2010, the goal for Blackcomb2 was to identify, evaluate, and optimize the most promising emerging memory technologies and the architecture, hardware, and software technologies that are essential to provide the necessary memory capacity, performance, resilience, and energy efficiency in Exascale systems. Capacity and energy are the key drivers.

  14. Optical mass memories

    NASA Technical Reports Server (NTRS)

    Bailey, G. A.

    1976-01-01

    Optical and magnetic variants in the design of trillion-bit read/write memories are compared and tabulated. Components and materials suitable for a random access read/write nonmoving memory system are examined, with preference given to holography and photoplastic materials. Advantages and deficiencies of photoplastics are reviewed. Holographic page composer design, essential features of an optical memory with no moving parts, fiche-oriented random access memory design, and materials suitable for an efficient photoplastic fiche are considered. The optical variants offer advantages in lower volume and weight at data transfer rates near 1 Mbit/sec, but power drain is of the same order as for the magnetic variants (tape memory, disk memory). The mechanical properties of photoplastic film materials still leave much to be desired.

  15. Community-based memorials to September 11, 2001: environmental stewardship as memory work

    Treesearch

    Erika S. Svendsen; Lindsay K. Campbell

    2014-01-01

    This chapter investigates how people use trees, parks, gardens, and other natural resources as raw materials in and settings for memorials to September 11, 2001. In particular, we focus on 'found space living memorials', which we define as sites that are community-managed, re-appropriated from their prior use, often carved out of the public right-of-way, and...

  16. Memory dynamics under stress.

    PubMed

    Quaedflieg, Conny W E M; Schwabe, Lars

    2018-03-01

    Stressful events have a major impact on memory. They modulate memory formation in a time-dependent manner, closely linked to the temporal profile of action of major stress mediators, in particular catecholamines and glucocorticoids. Shortly after stressor onset, rapidly acting catecholamines and fast, non-genomic glucocorticoid actions direct cognitive resources to the processing and consolidation of the ongoing threat. In parallel, control of memory is biased towards rather rigid systems, promoting habitual forms of memory allowing efficient processing under stress, at the expense of "cognitive" systems supporting memory flexibility and specificity. In this review, we discuss the implications of this shift in the balance of multiple memory systems for the dynamics of the memory trace. Specifically, stress appears to hinder the incorporation of contextual details into the memory trace, to impede the integration of new information into existing knowledge structures, to impair the flexible generalisation across past experiences, and to hamper the modification of memories in light of new information. Delayed, genomic glucocorticoid actions might reverse the control of memory, thus restoring homeostasis and "cognitive" control of memory again.

  17. Adaptive efficient compression of genomes

    PubMed Central

    2012-01-01

    Modern high-throughput sequencing technologies are able to generate DNA sequences at an ever-increasing rate. In parallel with the decreasing experimental time and cost necessary to produce DNA sequences, the computational requirements for analysis and storage of the sequences are steeply increasing. Compression is a key technology for dealing with this challenge. Recently, referential compression schemes, which store only the differences between a to-be-compressed input and a known reference sequence, have gained a lot of interest in this field. However, the memory requirements of current algorithms are high and their run times are often slow. In this paper, we propose an adaptive, parallel, and highly efficient referential sequence compression method which allows fine-tuning of the trade-off between required memory and compression speed. When using 12 MB of memory, our method is on par with the best previous algorithms for human genomes in terms of compression ratio (400:1) and compression speed. In contrast, it compresses a complete human genome in just 11 seconds when provided with 9 GB of main memory, which is almost three times faster than the best competitor while using less main memory. PMID:23146997
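The referential scheme the paper builds on, storing only the differences between the input and a reference, can be illustrated with a minimal sketch. For simplicity it assumes equal-length sequences and substitutions only; the actual method also handles insertions, deletions, and parallel execution.

```python
def compress(seq, ref):
    """Store only (position, base) pairs where the input differs from the reference."""
    assert len(seq) == len(ref)  # simplification: substitutions only
    return [(i, c) for i, (c, r) in enumerate(zip(seq, ref)) if c != r]

def decompress(diffs, ref):
    """Rebuild the input by applying the stored substitutions to the reference."""
    out = list(ref)
    for i, c in diffs:
        out[i] = c
    return "".join(out)

ref = "ACGTACGTACGT"
seq = "ACGTTCGTACGA"
diffs = compress(seq, ref)
assert diffs == [(4, "T"), (11, "A")]       # two differences stored, not 12 bases
assert decompress(diffs, ref) == seq        # lossless round trip
```

Because two human genomes differ in only a small fraction of positions, the diff list is far smaller than the sequence itself, which is where compression ratios on the order of 400:1 come from.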

  18. Long-term moderate elevation of corticosterone facilitates avian food-caching behaviour and enhances spatial memory.

    PubMed

    Pravosudov, Vladimir V

    2003-12-22

    It is widely assumed that chronic stress and corresponding chronic elevations of glucocorticoid levels have deleterious effects on animals' brain functions such as learning and memory. Some animals, however, appear to maintain moderately elevated levels of glucocorticoids over long periods of time under natural energetically demanding conditions, and it is not clear whether such chronic but moderate elevations may be adaptive. I implanted wild-caught food-caching mountain chickadees (Poecile gambeli), which rely at least in part on spatial memory to find their caches, with 90-day continuous time-release corticosterone pellets designed to approximately double the baseline corticosterone levels. Corticosterone-implanted birds cached and consumed significantly more food and showed more efficient cache recovery and superior spatial memory performance compared with placebo-implanted birds. Thus, contrary to prevailing assumptions, long-term moderate elevations of corticosterone appear to enhance spatial memory in food-caching mountain chickadees. These results suggest that moderate chronic elevation of corticosterone may serve as an adaptation to unpredictable environments by facilitating feeding and food-caching behaviour and by improving cache-retrieval efficiency in food-caching birds.

  19. Accessing sparse arrays in parallel memories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Banerjee, U.; Gajski, D.; Kuck, D.

    The concept of dense and sparse execution of arrays is introduced. Arrays themselves can be stored in a dense or sparse manner in a parallel memory with m memory modules. The paper proposes hardware for speeding up the execution of array operations of the form c(c0 + ci) = a(a0 + ai) op b(b0 + bi), where a0, a, b0, b, c0, and c are integer constants and i is an index variable. The hardware handles 'sparse execution', in which the operation op is not executed for every value of i. The hardware also makes provision for 'sparse storage', in which memory space is not provided for every array element. It is shown how to access array elements of the above form without conflict in an efficient way. The efficiency is obtained by using specialised units which are basically smart memories with priority detection, one's counting, or associative searching. Generalisation to multidimensional arrays is shown to be possible under restrictions defined in the paper. 12 references.
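One reason conflict-free access of the form a(a0 + ai) is delicate is the interaction between the stride a and the number of memory modules m. The sketch below illustrates that background point with the usual address-mod-m module mapping; it is not the paper's specialised hardware, and the numbers are illustrative.

```python
def modules_hit(a0, a, count, m):
    """Module touched by each access in the stride pattern a0 + a*i, i = 0..count-1."""
    return [(a0 + a * i) % m for i in range(count)]

m = 8
# Stride 3 is coprime to 8: eight consecutive accesses hit all eight modules once.
assert sorted(modules_hit(0, 3, 8, m)) == list(range(8))
# Stride 4 shares a factor with 8: the same eight accesses pile onto two modules.
assert len(set(modules_hit(0, 4, 8, m))) == 2
```

When accesses concentrate on few modules they serialise, which is the conflict the paper's smart-memory units are designed to avoid even under sparse storage.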

  20. The relationship between dominance, corticosterone, memory, and food caching in mountain chickadees (Poecile gambeli).

    PubMed

    Pravosudov, Vladimir V; Mendoza, Sally P; Clayton, Nicola S

    2003-08-01

    It has been hypothesized that in avian social groups subordinate individuals should maintain more energy reserves than dominants, as an insurance against increased perceived risk of starvation. Subordinates might also have elevated baseline corticosterone levels because corticosterone is known to facilitate fattening in birds. Recent experiments showed that moderately elevated corticosterone levels resulting from unpredictable food supply are correlated with enhanced cache retrieval efficiency and more accurate performance on a spatial memory task. Given the correlation between corticosterone and memory, a further prediction is that subordinates might be more efficient at cache retrieval and show more accurate performance on spatial memory tasks. We tested these predictions in dominant-subordinate pairs of mountain chickadees (Poecile gambeli). Each pair was housed in the same cage but caching behavior was tested individually in an adjacent aviary to avoid the confounding effects of small spaces in which birds could unnaturally and directly influence each other's behavior. In sharp contrast to our hypothesis, we found that subordinate chickadees cached less food, showed less efficient cache retrieval, and performed significantly worse on the spatial memory task than dominants. Although the behavioral differences could have resulted from social stress of subordination, and dominant birds reached significantly higher levels of corticosterone during their response to acute stress compared to subordinates, there were no significant differences between dominants and subordinates in baseline levels or in the pattern of adrenocortical stress response. We find no evidence, therefore, to support the hypothesis that subordinate mountain chickadees maintain elevated baseline corticosterone levels whereas lower caching rates and inferior cache retrieval efficiency might contribute to reduced survival of subordinates commonly found in food-caching parids.

  1. Fast Magnetoresistive Random-Access Memory

    NASA Technical Reports Server (NTRS)

    Wu, Jiin-Chuan; Stadler, Henry L.; Katti, Romney R.

    1991-01-01

    Magnetoresistive binary digital memories of proposed new type expected to feature high speed, nonvolatility, ability to withstand ionizing radiation, high density, and low power. In memory cell, magnetoresistive effect exploited more efficiently by use of ferromagnetic material to store datum and adjacent magnetoresistive material to sense datum for readout. Because relative change in sensed resistance between "zero" and "one" states greater, shorter sampling and readout access times achievable.

  2. Cache directory look-up re-use as conflict check mechanism for speculative memory requests

    DOEpatents

    Ohmacht, Martin

    2013-09-10

    In a cache memory, energy and other efficiencies can be realized by saving a result of a cache directory lookup for sequential accesses to a same memory address. Where the cache is a point of coherence for speculative execution in a multiprocessor system, with directory lookups serving as the point of conflict detection, such saving becomes particularly advantageous.
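The saving described, reusing one directory lookup across sequential accesses to the same address, can be modelled in a few lines. The class below is an illustrative software analogue of the idea, not the patented hardware.

```python
class Directory:
    """Toy cache directory that remembers its most recent lookup result."""

    def __init__(self, entries):
        self.entries = entries   # address -> cache way
        self.lookups = 0         # count of real (energy-costly) searches
        self._last = None        # (address, way) saved from the previous lookup

    def find(self, addr):
        if self._last is not None and self._last[0] == addr:
            return self._last[1]           # reuse saved result: no new search
        self.lookups += 1                  # otherwise perform a real search
        way = self.entries.get(addr)
        self._last = (addr, way)
        return way

d = Directory({0x40: 3})
assert d.find(0x40) == 3 and d.find(0x40) == 3
assert d.lookups == 1   # the second access reused the saved lookup
```

In the patent's setting the saved result doubles as the conflict-detection state for speculative accesses, which is why the reuse is described as particularly advantageous there.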

  3. The Development of Memory Efficiency and Value-Directed Remembering across the Life Span: A Cross-Sectional Study of Memory and Selectivity

    ERIC Educational Resources Information Center

    Castel, Alan D.; Humphreys, Kathryn L.; Lee, Steve S.; Galvan, Adriana; Balota, David A.; McCabe, David P.

    2011-01-01

    Although attentional control and memory change considerably across the life span, no research has examined how the ability to strategically remember important information (i.e., value-directed remembering) changes from childhood to old age. The present study examined this in different age groups across the life span (N = 320, 5-96 years old). A…

  4. Interference due to shared features between action plans is influenced by working memory span.

    PubMed

    Fournier, Lisa R; Behmer, Lawrence P; Stubblefield, Alexandra M

    2014-12-01

    In this study, we examined the interactions between the action plans that we hold in memory and the actions that we carry out, asking whether the interference due to shared features between action plans is due to selection demands imposed on working memory. Individuals with low and high working memory spans learned arbitrary motor actions in response to two different visual events (A and B), presented in a serial order. They planned a response to the first event (A) and while maintaining this action plan in memory they then executed a speeded response to the second event (B). Afterward, they executed the action plan for the first event (A) maintained in memory. Speeded responses to the second event (B) were delayed when it shared an action feature (feature overlap) with the first event (A), relative to when it did not (no feature overlap). The size of the feature-overlap delay was greater for low-span than for high-span participants. This indicates that interference due to overlapping action plans is greater when fewer working memory resources are available, suggesting that this interference is due to selection demands imposed on working memory. Thus, working memory plays an important role in managing current and upcoming action plans, at least for newly learned tasks. Also, managing multiple action plans is compromised in individuals who have low versus high working memory spans.

  5. Dynamic Load-Balancing for Distributed Heterogeneous Computing of Parallel CFD Problems

    NASA Technical Reports Server (NTRS)

    Ecer, A.; Chien, Y. P.; Boenisch, T.; Akay, H. U.

    2000-01-01

    The developed methodology is aimed at improving the efficiency of executing block-structured algorithms on parallel, distributed, heterogeneous computers. The basic approach of these algorithms is to divide the flow domain into many subdomains called blocks, and to solve the governing equations over these blocks. The dynamic load balancing problem is defined as the efficient distribution of the blocks among the available processors over a period of several hours of computation. In environments with computers of different architectures, operating systems, CPU speeds, memory sizes, loads, and network speeds, balancing the loads and managing the communication between processors becomes crucial. Load balancing software tools for mutually dependent parallel processes have been created to efficiently utilize an advanced computation environment and algorithms. These tools are dynamic in nature because of changes in the computing environment during execution. More recently, these tools were extended to a second operating system, NT. In this paper, the problems associated with this application are discussed. The developed algorithms were also combined with the load-sharing capability of LSF to efficiently utilize workstation clusters for parallel computing. Finally, results are presented on running a NASA-based code, ADPAC, to demonstrate the developed tools for dynamic load balancing.
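The core placement decision, distributing blocks across processors of unequal speed so that no machine finishes late, can be sketched with a simple greedy heuristic. This is an illustrative stand-in for the paper's tools, with assumed cost and speed numbers.

```python
def balance(block_costs, speeds):
    """Greedy block placement: each block goes where it would finish soonest."""
    finish = [0.0] * len(speeds)   # projected finish time per processor
    for cost in sorted(block_costs, reverse=True):   # place big blocks first
        p = min(range(len(speeds)), key=lambda j: finish[j] + cost / speeds[j])
        finish[p] += cost / speeds[p]
    return finish

# Four equal blocks on three machines, one of which is twice as fast:
finish = balance([1.0, 1.0, 1.0, 1.0], speeds=[2.0, 1.0, 1.0])
assert max(finish) == 1.0   # the fast machine absorbs two blocks; all finish together
```

A dynamic balancer would rerun a decision like this periodically as measured speeds and loads change, which is the "dynamic in nature" behaviour the abstract refers to.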

  6. A Case for Tamper-Resistant and Tamper-Evident Computer Systems

    DTIC Science & Technology

    2007-02-01

    (The abstract for this record was garbled in extraction; the recoverable fragments discuss authenticating and encrypting data blocks read from DRAM and cite work by Gassend, Suh, Clarke, van Dijk, and Devadas on caches and Merkle hash trees for efficient memory integrity verification.)

  7. Efficient detection of dangling pointer error for C/C++ programs

    NASA Astrophysics Data System (ADS)

    Zhang, Wenzhe

    2017-08-01

    Dangling pointer errors are pervasive in C/C++ programs and very hard to detect. This paper introduces an efficient detector for dangling pointer errors in C/C++ programs. By selectively leaving some memory accesses unmonitored, our method reduces the memory-monitoring overhead and thus achieves better performance than previous methods. Experiments show that our method achieves an average speedup of 9% over a previous compiler-instrumentation-based method and of more than 50% over a previous page-protection-based method.
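The trade-off at the heart of such detectors, checking pointer liveness only at monitored accesses, can be modelled abstractly. The sketch below treats pointers as allocation ids; it is a conceptual analogue, not the paper's compiler instrumentation, and all names are illustrative.

```python
class Heap:
    """Toy heap that tracks which allocations are still live."""

    def __init__(self):
        self.live = set()    # ids of live allocations
        self.next_id = 0

    def malloc(self):
        self.next_id += 1
        self.live.add(self.next_id)
        return self.next_id  # a "pointer" is just the allocation id here

    def free(self, ptr):
        self.live.discard(ptr)

    def access(self, ptr, monitored=True):
        # Selective monitoring: where analysis proves a pointer cannot
        # dangle, the check (and its runtime cost) is skipped entirely.
        if monitored and ptr not in self.live:
            raise RuntimeError("dangling pointer access")

h = Heap()
p = h.malloc()
h.access(p)     # fine: allocation is live
h.free(p)
try:
    h.access(p)  # caught: use-after-free on a monitored access
    caught = False
except RuntimeError:
    caught = True
assert caught
```

The performance claim in the abstract comes from shrinking the set of accesses that pay for the liveness check, not from making the check itself cheaper.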

  8. Aging and Memory as Discrimination: Influences of Encoding Specificity, Cue Overload, and Prior Knowledge

    PubMed Central

    2016-01-01

    From the perspective of memory-as-discrimination, whether a cue leads to correct retrieval simultaneously depends on the cue’s relationship to (a) the memory target and (b) the other retrieval candidates. A corollary of the view is that increasing encoding-retrieval match may only help memory if it improves the cue’s capacity to discriminate the target from competitors. Here, age differences in this discrimination process were assessed by manipulating the overlap between cues present at encoding and retrieval orthogonally with cue–target distinctiveness. In Experiment 1, associative memory differences for cue–target sets between young and older adults were minimized through training and retrieval efficiency was assessed through response time. In Experiment 2, age-group differences in associative memory were left to vary and retrieval efficiency was assessed through accuracy. Both experiments showed age-invariance in memory-as-discrimination: cues increasing encoding-retrieval match did not benefit memory unless they also improved discrimination between the target and competitors. Predictions based on the age-related associative deficit were also supported: prior knowledge alleviated age-related associative deficits (Experiment 1), and increasing encoding-retrieval match benefited older more than young adults (Experiment 2). We suggest that the latter occurred because older adults’ associative memory deficits reduced the impact of competing retrieval candidates—hence the age-related benefit was not attributable to encoding-retrieval match per se, but rather it was a joint function of an increased probability of the cue connecting to the target combined with a decrease in competing retrieval candidates. PMID:27831714

  9. The Control of Single-color and Multiple-color Visual Search by Attentional Templates in Working Memory and in Long-term Memory.

    PubMed

    Grubert, Anna; Carlisle, Nancy B; Eimer, Martin

    2016-12-01

    The question whether target selection in visual search can be effectively controlled by simultaneous attentional templates for multiple features is still under dispute. We investigated whether multiple-color attentional guidance is possible when target colors remain constant and can thus be represented in long-term memory but not when they change frequently and have to be held in working memory. Participants searched for one, two, or three possible target colors that were specified by cue displays at the start of each trial. In constant-color blocks, the same colors remained task-relevant throughout. In variable-color blocks, target colors changed between trials. The contralateral delay activity (CDA) to cue displays increased in amplitude as a function of color memory load in variable-color blocks, which indicates that cued target colors were held in working memory. In constant-color blocks, the CDA was much smaller, suggesting that color representations were primarily stored in long-term memory. N2pc components to targets were measured as a marker of attentional target selection. Target N2pcs were attenuated and delayed during multiple-color search, demonstrating less efficient attentional deployment to color-defined target objects relative to single-color search. Importantly, these costs were the same in constant-color and variable-color blocks. These results demonstrate that attentional guidance by multiple-feature as compared with single-feature templates is less efficient both when target features remain constant and can be represented in long-term memory and when they change across trials and therefore have to be maintained in working memory.

  10. The effects of aging on ERP correlates of source memory retrieval for self-referential information.

    PubMed

    Dulas, Michael R; Newsome, Rachel N; Duarte, Audrey

    2011-03-04

    Numerous behavioral studies have suggested that normal aging negatively affects source memory accuracy for various kinds of associations. Neuroimaging evidence suggests that less efficient retrieval processing (temporally delayed and attenuated) may contribute to these impairments. Previous aging studies have not compared source memory accuracy and corresponding neural activity for different kinds of source details; namely, those that have been encoded via a more or less effective strategy. Thus, it is not yet known whether encoding source details in a self-referential manner, a strategy suggested to promote successful memory in the young and old, may enhance source memory accuracy and reduce the commonly observed age-related changes in neural activity associated with source memory retrieval. Here, we investigated these issues by using event-related potentials (ERPs) to measure the effects of aging on the neural correlates of successful source memory retrieval ("old-new effects") for objects encoded either self-referentially or self-externally. Behavioral results showed that both young and older adults demonstrated better source memory accuracy for objects encoded self-referentially. ERP results showed that old-new effects onset earlier for self-referentially encoded items in both groups and that age-related differences in the onset latency of these effects were reduced for self-referentially, compared to self-externally, encoded items. These results suggest that the implementation of an effective encoding strategy, like self-referential processing, may lead to more efficient retrieval, which in turn may improve source memory accuracy in both young and older adults. Published by Elsevier B.V.

  11. Aging and memory as discrimination: Influences of encoding specificity, cue overload, and prior knowledge.

    PubMed

    Badham, Stephen P; Poirier, Marie; Gandhi, Navina; Hadjivassiliou, Anna; Maylor, Elizabeth A

    2016-11-01

    From the perspective of memory-as-discrimination, whether a cue leads to correct retrieval simultaneously depends on the cue's relationship to (a) the memory target and (b) the other retrieval candidates. A corollary of the view is that increasing encoding-retrieval match may only help memory if it improves the cue's capacity to discriminate the target from competitors. Here, age differences in this discrimination process were assessed by manipulating the overlap between cues present at encoding and retrieval orthogonally with cue-target distinctiveness. In Experiment 1, associative memory differences for cue-target sets between young and older adults were minimized through training and retrieval efficiency was assessed through response time. In Experiment 2, age-group differences in associative memory were left to vary and retrieval efficiency was assessed through accuracy. Both experiments showed age-invariance in memory-as-discrimination: cues increasing encoding-retrieval match did not benefit memory unless they also improved discrimination between the target and competitors. Predictions based on the age-related associative deficit were also supported: prior knowledge alleviated age-related associative deficits (Experiment 1), and increasing encoding-retrieval match benefited older more than young adults (Experiment 2). We suggest that the latter occurred because older adults' associative memory deficits reduced the impact of competing retrieval candidates-hence the age-related benefit was not attributable to encoding-retrieval match per se, but rather it was a joint function of an increased probability of the cue connecting to the target combined with a decrease in competing retrieval candidates. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  12. Multipulse addressing of a Raman quantum memory: configurable beam splitting and efficient readout.

    PubMed

    Reim, K F; Nunn, J; Jin, X-M; Michelberger, P S; Champion, T F M; England, D G; Lee, K C; Kolthammer, W S; Langford, N K; Walmsley, I A

    2012-06-29

    Quantum memories are vital to the scalability of photonic quantum information processing (PQIP), since the storage of photons enables repeat-until-success strategies. On the other hand, the key element of all PQIP architectures is the beam splitter, which allows us to coherently couple optical modes. Here, we show how to combine these crucial functionalities by addressing a Raman quantum memory with multiple control pulses. The result is a coherent optical storage device with an extremely large time-bandwidth product that functions as an array of dynamically configurable beam splitters and that can be read out with arbitrarily high efficiency. Networks of such devices would allow fully scalable PQIP, with applications in quantum computation, long-distance quantum communication and quantum metrology.

  13. Three-dimensional optical memory systems based on photochromic materials: polarization control of two-color data writing and the possibility of nondestructive data reading

    NASA Astrophysics Data System (ADS)

    Akimov, D. A.; Fedotov, Andrei B.; Koroteev, Nikolai I.; Magnitskii, S. A.; Naumov, A. N.; Sidorov-Biryukov, Dmitri A.; Sokoluk, N. T.; Zheltikov, Alexei M.

    1998-04-01

    The possibilities of optimizing data writing and reading in 3D optical memory devices based on photochromic materials are discussed. We quantitatively analyze the linear and nonlinear optical properties of indoline spiropyran molecules, which allows us to estimate the efficiency of using such materials to implement 3D optical-memory devices. It is demonstrated that, with an appropriate choice of the polarization vectors of the laser beams, one can considerably improve the efficiency of two-photon writing in photochromic materials. The problem of reading the data stored in a photochromic material is analyzed, and data-reading methods based on fluorescence and four-photon techniques are compared.

  14. Sex, estradiol, and spatial memory in a food-caching corvid.

    PubMed

    Rensel, Michelle A; Ellis, Jesse M S; Harvey, Brigit; Schlinger, Barney A

    2015-09-01

    Estrogens significantly impact spatial memory function in mammalian species. Songbirds express the estrogen synthetic enzyme aromatase at relatively high levels in the hippocampus and there is evidence from zebra finches that estrogens facilitate performance on spatial learning and/or memory tasks. It is unknown, however, whether estrogens influence hippocampal function in songbirds that naturally exhibit memory-intensive behaviors, such as cache recovery observed in many corvid species. To address this question, we examined the impact of estradiol on spatial memory in non-breeding Western scrub-jays, a species that routinely participates in food caching and retrieval in nature and in captivity. We also asked if there were sex differences in performance or responses to estradiol. Utilizing a combination of an aromatase inhibitor, fadrozole, with estradiol implants, we found that while overall cache recovery rates were unaffected by estradiol, several other indices of spatial memory, including searching efficiency and efficiency to retrieve the first item, were impaired in the presence of estradiol. In addition, males and females differed in some performance measures, although these differences appeared to be a consequence of the nature of the task as neither sex consistently out-performed the other. Overall, our data suggest that a sustained estradiol elevation in a food-caching bird impairs some, but not all, aspects of spatial memory on an innate behavioral task, at times in a sex-specific manner. Copyright © 2015 Elsevier Inc. All rights reserved.

  16. Influence of personality and neuropsychological ability on social functioning and self-management in bipolar disorder.

    PubMed

    Vierck, Esther; Joyce, Peter R

    2015-10-30

    A majority of patients with bipolar disorder (BD) show functional difficulties even in remission. In recent years, cognitive functions and personality characteristics have been associated with occupational and psychosocial outcomes, but findings are not consistent. We assessed personality and cognitive functioning through a range of tests in BD and control participants. Three cognitive domains (verbal memory, facial-executive, and spatial memory) were extracted by principal component analysis. These factors and selected personality dimensions were included in a hierarchical regression analysis to predict psychosocial functioning and the use of self-management strategies while controlling for mood status. The best determinants of good psychosocial functioning were good verbal memory and high self-directedness. The use of self-management techniques was associated with a low level of harm avoidance. Our findings indicate that strategies to improve memory and self-directedness may be useful for increasing functioning in individuals with bipolar disorder. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  17. Berkeley lab checkpoint/restart (BLCR) for Linux clusters

    DOE PAGES

    Hargrove, Paul H.; Duell, Jason C.

    2006-09-01

    This article describes the motivation, design and implementation of Berkeley Lab Checkpoint/Restart (BLCR), a system-level checkpoint/restart implementation for Linux clusters that targets the space of typical High Performance Computing applications, including MPI. Application-level solutions, including both checkpointing and fault-tolerant algorithms, are recognized as more time and space efficient than system-level checkpoints, which cannot make use of any application-specific knowledge. However, system-level checkpointing allows for preemption, making it suitable for responding to fault precursors (for instance, elevated error rates from ECC memory or network CRCs, or elevated temperature from sensors). Preemption can also increase the efficiency of batch scheduling; for instance, reducing idle cycles (by allowing for shutdown without any queue draining period or reallocation of resources to eliminate idle nodes when better fitting jobs are queued), and reducing the average queued time (by limiting large jobs to running during off-peak hours, without the need to limit the length of such jobs). Each of these potential uses makes BLCR a valuable tool for efficient resource management in Linux clusters. © 2006 IOP Publishing Ltd.

  18. 76 FR 24409 - Proposed Amendment of Class E Airspace; Ava, MO

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-02

    ...) at Bill Martin Memorial Airport, Ava, MO, has made this action necessary for the safety and management of Instrument Flight Rules (IFR) operations at Bill Martin Memorial Airport. DATES: Comments must... from 700 feet above the surface for standard instrument approach procedures at Bill Martin Memorial...

  19. Power and Performance Trade-offs for Space Time Adaptive Processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gawande, Nitin A.; Manzano Franco, Joseph B.; Tumeo, Antonino

    Computational efficiency – performance relative to power or energy – is one of the most important concerns when designing RADAR processing systems. This paper analyzes power and performance trade-offs for a typical Space Time Adaptive Processing (STAP) application. We study STAP implementations for CUDA and OpenMP on two computationally efficient architectures, Intel Haswell Core I7-4770TE and NVIDIA Kayla with a GK208 GPU. We analyze the power and performance of STAP's computationally intensive kernels across the two hardware testbeds. We also show the impact and trade-offs of GPU optimization techniques. We show that data parallelism can be exploited for efficient implementation on the Haswell CPU architecture. The GPU architecture is able to process large data sets without an increase in power requirements. The use of shared memory has a significant impact on the power requirement for the GPU. A balance between the use of shared memory and main memory access leads to improved performance in a typical STAP application.

  20. A highly efficient silole-containing dithienylethene with excellent thermal stability and fatigue resistance: a promising candidate for optical memory storage materials.

    PubMed

    Chan, Jacky Chi-Hung; Lam, Wai Han; Yam, Vivian Wing-Wah

    2014-12-10

    Diarylethene compounds are potential candidates for applications in optical memory storage systems and photoswitchable molecular devices; however, they usually show low photocycloreversion quantum yields, which result in ineffective erasure processes. Here, we present the first highly efficient photochromic silole-containing dithienylethene with excellent thermal stability and fatigue resistance. The photochemical quantum yields for photocyclization and photocycloreversion of the compound are found to be high and comparable to each other; the latter is rarely found in diarylethene compounds. This gives rise to a highly efficient photoswitchable material with effective writing and erasure processes. Incorporation of the silole moiety into the photochromic dithienylethene backbone was also demonstrated to enhance the thermal stability of the closed form, whose thermal backward reaction to the open form was found to be negligible even at 100 °C, making the compound a promising candidate for use in photoswitchable materials and optical memory storage.

  1. Rambrain - a library for virtually extending physical memory

    NASA Astrophysics Data System (ADS)

    Imgrund, Maximilian; Arth, Alexander

    2017-08-01

    We introduce Rambrain, a user-space library that manages the memory consumption of your code. Using Rambrain you can overcommit memory beyond the amount of physical memory present in the system. Rambrain takes care of temporarily swapping data out to disk and can handle data amounting to multiples of the physical memory size. Rambrain is thread-safe, OpenMP- and MPI-compatible, and supports asynchronous I/O. The library was designed to require minimal changes to existing programs and to be easy to use.
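
The record does not show Rambrain's actual C++ interface, so as a purely conceptual sketch (in Python, with invented names such as `SwappingStore`), the core idea of transparently spilling least-recently-used data to disk and faulting it back in on access might look like:

```python
import os
import pickle
import tempfile
from collections import OrderedDict

class SwappingStore:
    """Conceptual sketch of an out-of-core store: keeps at most
    `max_resident` objects in RAM and swaps the least recently used
    ones to temporary files on disk. Hypothetical API for illustration
    only -- this is not Rambrain's real interface."""

    def __init__(self, max_resident=2):
        self.max_resident = max_resident
        self.resident = OrderedDict()   # key -> in-memory object (LRU order)
        self.swapped = {}               # key -> path of its swap file
        self.swap_dir = tempfile.mkdtemp(prefix="swapstore_")

    def put(self, key, obj):
        self.resident[key] = obj
        self.resident.move_to_end(key)
        self._evict_if_needed()

    def get(self, key):
        if key in self.resident:
            self.resident.move_to_end(key)   # mark as recently used
            return self.resident[key]
        # Fault the object back in from disk.
        path = self.swapped.pop(key)
        with open(path, "rb") as f:
            obj = pickle.load(f)
        os.remove(path)
        self.put(key, obj)
        return obj

    def _evict_if_needed(self):
        while len(self.resident) > self.max_resident:
            old_key, obj = self.resident.popitem(last=False)  # evict LRU
            path = os.path.join(self.swap_dir, f"{old_key}.bin")
            with open(path, "wb") as f:
                pickle.dump(obj, f)
            self.swapped[old_key] = path
```

With `max_resident=2`, storing three arrays keeps the two most recent in RAM and spills the oldest to disk; reading the spilled one back faults it in (and evicts another), so the caller never deals with the disk explicitly.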

  2. Comparison of memory and meta-memory abilities of children with cochlear implant and normal hearing peers.

    PubMed

    Engel-Yeger, Batya; Durr, Doris H; Josman, Naomi

    2011-01-01

    This study aimed (1) to compare the visual memory and meta-memory abilities, including the use of context as a memorisation strategy, of children with cochlear implants (CI) and children with normal hearing; and (2) to examine the concurrent and construct validity of the Contextual Memory Test for Children (CMT-CH). Twenty children with CI and 20 children with normal hearing, aged 8-10 years, participated in this study. Memory abilities were measured by two subtests of the Children's Memory Scale (CMS) and by the CMT-CH, which also measures meta-memory abilities. Children with CI scored significantly lower on both tests of memory and meta-memory and showed less efficient use of context to memorise. Significant positive correlations were found between the CMS and CMT-CH memory tests in both groups. Visual memory and meta-memory abilities may be impaired in children with CI. Evaluation and intervention for children with CI should address their memory and meta-memory abilities in order to measure the outcomes of CIs and to enhance language development and academic achievement. Although more studies on the CMT-CH should be performed, it may be used for the evaluation of the visual memory of children with CI.

  3. Nanophotonic rare-earth quantum memory with optically controlled retrieval

    NASA Astrophysics Data System (ADS)

    Zhong, Tian; Kindem, Jonathan M.; Bartholomew, John G.; Rochman, Jake; Craiciu, Ioana; Miyazono, Evan; Bettinelli, Marco; Cavalli, Enrico; Verma, Varun; Nam, Sae Woo; Marsili, Francesco; Shaw, Matthew D.; Beyer, Andrew D.; Faraon, Andrei

    2017-09-01

    Optical quantum memories are essential elements in quantum networks for long-distance distribution of quantum entanglement. Scalable development of quantum network nodes requires on-chip qubit storage functionality with control of the readout time. We demonstrate a high-fidelity nanophotonic quantum memory based on a mesoscopic neodymium ensemble coupled to a photonic crystal cavity. The nanocavity enables >95% spin polarization for efficient initialization of the atomic frequency comb memory and time bin-selective readout through an enhanced optical Stark shift of the comb frequencies. Our solid-state memory is integrable with other chip-scale photon source and detector devices for multiplexed quantum and classical information processing at the network nodes.

  4. Design and DSP implementation of star image acquisition and star point fast acquiring and tracking

    NASA Astrophysics Data System (ADS)

    Zhou, Guohui; Wang, Xiaodong; Hao, Zhihang

    2006-02-01

    A star sensor is a special high-accuracy photoelectric sensor, and attitude acquisition time is an important performance index for it. In this paper, the design target is a dynamic performance of 10 attitude samples per second. On the basis of analyzing the CCD signal timing and star image processing, a new design and a special parallel architecture for improving star image processing are presented. In the design, the operation of moving the data in the expanded windows containing stars to the on-chip memory of the DSP is arranged in the invalid period of the CCD frame signal. While the CCD saves the star image to memory, the DSP processes the data already in its on-chip memory. This parallelism greatly improves processing efficiency, and the scheme results in enormous savings of the memory normally required. In the scheme, the DSP HOLD mode and CPLD technology are used to implement a memory shared between the CCD and the DSP. The efficiency of processing is assessed in numerical tests: the five brightest stars are acquired in only 3.5 ms in the star acquisition stage, the data in the five expanded windows containing stars are moved into the internal memory of the DSP in 43 µs, and the five star coordinates are obtained in 1.6 ms in the star tracking stage.
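
The overlap described here (the CCD filling memory while the DSP processes the previous frame) is classic ping-pong (double) buffering. A toy software illustration using Python threads and queues follows; the actual design uses DSP HOLD mode and CPLD-arbitrated shared memory, not software queues, so treat this strictly as a sketch of the scheduling idea:

```python
import threading
import queue

# Ping-pong buffering: while the "CCD" thread fills one buffer, the
# "DSP" thread processes the other, so transfer and processing overlap.
N_FRAMES = 8
free_buffers = queue.Queue()
full_buffers = queue.Queue()
for _ in range(2):                     # exactly two buffers: ping and pong
    free_buffers.put([0] * 16)

results = []

def ccd_transfer():
    for frame in range(N_FRAMES):
        buf = free_buffers.get()       # wait for an idle buffer
        for i in range(16):
            buf[i] = frame * 16 + i    # "transfer" pixel data into it
        full_buffers.put((frame, buf))
    full_buffers.put(None)             # end-of-stream marker

def dsp_process():
    while True:
        item = full_buffers.get()
        if item is None:
            break
        frame, buf = item
        results.append((frame, sum(buf)))  # "process" the frame
        free_buffers.put(buf)              # hand the buffer back

t1 = threading.Thread(target=ccd_transfer)
t2 = threading.Thread(target=dsp_process)
t1.start(); t2.start()
t1.join(); t2.join()
```

Because only two buffers circulate, the producer can never overwrite a frame the consumer is still processing, which is the same exclusivity the shared-memory arbitration provides in hardware.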

  5. Coherence time of over a second in a telecom-compatible quantum memory storage material

    NASA Astrophysics Data System (ADS)

    Rančić, Miloš; Hedges, Morgan P.; Ahlefeldt, Rose L.; Sellars, Matthew J.

    2018-01-01

    Quantum memories for light will be essential elements in future long-range quantum communication networks. These memories operate by reversibly mapping the quantum state of light onto the quantum transitions of a material system. For networks, the quantum coherence times of these transitions must be long compared to the network transmission times, approximately 100 ms for a global communication network. Due to a lack of a suitable storage material, a quantum memory that operates in the 1,550 nm optical fibre communication band with a storage time greater than 1 μs has not been demonstrated. Here we describe the spin dynamics of 167Er3+: Y2SiO5 in a high magnetic field and demonstrate that this material has the characteristics for a practical quantum memory in the 1,550 nm communication band. We observe a hyperfine coherence time of 1.3 s. We also demonstrate efficient spin pumping of the entire ensemble into a single hyperfine state, a requirement for broadband spin-wave storage. With an absorption of 70 dB cm-1 at 1,538 nm and Λ transitions enabling spin-wave storage, this material is the first candidate identified for an efficient, broadband quantum memory at telecommunication wavelengths.

  6. Formal verification of a set of memory management units

    NASA Technical Reports Server (NTRS)

    Schubert, E. Thomas; Levitt, K.; Cohen, Gerald C.

    1992-01-01

    This document describes the verification of a set of memory management units (MMU). The verification effort demonstrates the use of hierarchical decomposition and abstract theories. The MMUs can be organized into a complexity hierarchy. Each new level in the hierarchy adds a few significant features or modifications to the lower level MMU. The units described include: (1) a page check translation look-aside module (TLM); (2) a page check TLM with supervisor line; (3) a base bounds MMU; (4) a virtual address translation MMU; and (5) a virtual address translation MMU with memory resident segment table.

  7. Optical mass memory system (AMM-13). AMM/DBMS interface control document

    NASA Technical Reports Server (NTRS)

    Bailey, G. A.

    1980-01-01

    The baseline for the external interfaces of a 10^13-bit optical archival mass memory system (AMM-13) is established. The types of interfaces addressed include data transfer; the AMM-13, Data Base Management System, and NASA End-to-End Data System computer interconnect; data/control input and output interfaces; the test input data source; file management; and the facilities interface.

  8. Memory management and compiler support for rapid recovery from failures in computer systems

    NASA Technical Reports Server (NTRS)

    Fuchs, W. K.

    1991-01-01

    This paper describes recent developments in the use of memory management and compiler technology to support rapid recovery from failures in computer systems. The techniques described include cache coherence protocols for user transparent checkpointing in multiprocessor systems, compiler-based checkpoint placement, compiler-based code modification for multiple instruction retry, and forward recovery in distributed systems utilizing optimistic execution.
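
As a minimal illustration of the checkpoint/rollback idea underlying these techniques, the sketch below saves a snapshot of program state and restores it after a simulated failure. This is a hand-written Python toy with invented names, not the cache-coherence or compiler-assisted mechanisms the paper describes:

```python
import pickle

class Checkpointer:
    """Minimal application-level checkpoint/rollback sketch.
    Real systems checkpoint at compiler-chosen points and restore
    full process state; here we just snapshot a state dictionary."""

    def __init__(self):
        self._saved = None

    def checkpoint(self, state):
        # Serialize a deep snapshot so later mutations cannot alter it.
        self._saved = pickle.dumps(state)

    def restore(self):
        return pickle.loads(self._saved)

# Simulated computation with a failure after the checkpoint.
state = {"step": 0, "partial_sums": []}
cp = Checkpointer()
for step in range(10):
    state["step"] = step
    state["partial_sums"].append(step * step)
    if step == 4:
        cp.checkpoint(state)      # periodic checkpoint
    if step == 7:
        state = cp.restore()      # simulated failure: roll back to step 4
        break
```

After the rollback, the computation resumes from the step-4 snapshot rather than from scratch, which is the whole point of checkpoint placement: bounding the amount of lost work.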

  9. Rehabilitation of executive dysfunction following brain injury: "content-free" cueing improves everyday prospective memory performance.

    PubMed

    Fish, Jessica; Evans, Jonathan J; Nimmo, Morag; Martin, Emma; Kersel, Denyse; Bateman, Andrew; Wilson, Barbara A; Manly, Tom

    2007-03-25

    Prospective memory (PM) is often claimed to rely upon executive as well as mnemonic resources. Here, we examined the contribution of executive functions towards PM by providing intermittent support for monitoring processes using "content-free" cues, which carried no direct information regarding the PM task itself. Twenty participants with non-progressive brain injury and PM difficulties received brief training in linking a cue phrase "STOP!" with pausing current activity and reviewing stored goals. The efficacy of this strategy was examined with a PM task requiring participants to make telephone calls to a voicemail service at four set times each day for 10 days. Task content was encoded using errorless learning to minimise retrospective memory-based failures. On five randomly selected days, eight text messages reading simply "STOP!" were sent to participants' mobile telephones, but crucially not within an hour of a target time. Striking improvements in performance were observed on cued days, thus demonstrating a within-subjects experimental modulation of PM performance using cues that carry no information other than by association with participants' stored memory of their intentions. In addition to the theoretical insights, the time course over which the effect was observed constitutes encouraging evidence that such strategies are useful in helping to remediate some negative consequences of executive dysfunction. It is proposed that this benefit results from enhanced efficiency of goal management via increased monitoring of current and future goals, and the steps necessary to achieve them, perhaps compensating for under-functioning fronto-parietal attention systems.

  10. Prospective memory performance in individuals with Parkinson's disease who have mild cognitive impairment.

    PubMed

    Costa, Alberto; Peppe, Antonella; Zabberoni, Silvia; Serafini, Francesca; Barban, Francesco; Scalici, Francesco; Caltagirone, Carlo; Carlesimo, Giovanni Augusto

    2015-09-01

    Prospective memory (PM) is the ability to keep in memory and realize future intentions. We aimed to investigate whether PM deficits in Parkinson's disease (PD) are related to mild cognitive impairment (MCI). Further aims were to investigate the cognitive abilities underlying PM performance and the association between PM performance and measures of daily living functioning. The study included 15 PD patients with single-domain MCI, 15 with multiple-domain MCI, 17 PD patients without cognitive disorders (PDNC) and 25 healthy controls (HCs). All subjects were administered a PM procedure that included focal (the PM cue is processed in the ongoing task) and nonfocal (the PM cue is not processed in the ongoing task) conditions. PD patients were administered an extensive neuropsychological battery and scales to assess daily living abilities. PD patients with MCI (both single and multiple domain) showed lower accuracy in all PM conditions than both HC and PDNC participants. This was predicted by their scores on shifting indices. Conversely, the PM accuracy of PDNC patients was comparable to that of HCs. Regression analyses revealed that PD patients' PM performance significantly predicted scores on daily living scales. Conclusions: The results suggest that PM efficiency is not reduced across the board in PD patients; rather, it specifically depends on the presence of MCI. Moreover, decreased executive functioning, but not episodic memory failure, accounts for a significant proportion of variance in PM performance. Finally, PM accuracy indices were found to be associated with measures of global daily living functioning and management of medication. (c) 2015 APA, all rights reserved.

  11. Architecture of security management unit for safe hosting of multiple agents

    NASA Astrophysics Data System (ADS)

    Gilmont, Tanguy; Legat, Jean-Didier; Quisquater, Jean-Jacques

    1999-04-01

    In such growing areas as remote applications in large public networks, electronic commerce, digital signatures, intellectual property and copyright protection, and even operating system extensibility, the hardware security level offered by existing processors is insufficient. They lack protection mechanisms that prevent the user from tampering with critical data owned by those applications. Some devices are exceptions, but they have neither the processing power nor the memory to stand up to such applications (e.g. smart cards). This paper proposes an architecture for a secure processor in which the classical memory management unit is extended into a new security management unit. It allows ciphered code execution and ciphered data processing. An internal permanent memory can store cipher keys and critical data for several client agents simultaneously. The ordinary supervisor privilege scheme is replaced by a privilege inheritance mechanism that is better suited to operating system extensibility. The result is a secure processor that has hardware support for extensible multitask operating systems and can be used for both general applications and critical applications needing strong protection. The security management unit and the internal permanent memory can be added to an existing CPU core without loss of performance and do not require the core to be modified.

  12. Cognitive correlates of verbal memory and verbal fluency in schizophrenia, and differential effects of various clinical symptoms between male and female patients.

    PubMed

    Brébion, Gildas; Villalta-Gil, Victoria; Autonell, Jaume; Cervilla, Jorge; Dolz, Montserrat; Foix, Alexandrina; Haro, Josep Maria; Usall, Judith; Vilaplana, Miriam; Ochoa, Susana

    2013-06-01

    Impairment of higher cognitive functions in patients with schizophrenia might stem from perturbation of more basic functions, such as processing speed. Various clinical symptoms might affect cognitive efficiency as well. Notably, previous research has revealed the role of affective symptoms on memory performance in this population, and suggested sex-specific effects. We conducted a post-hoc analysis of an extensive neuropsychological study of 88 patients with schizophrenia. Regression analyses were conducted on verbal memory and verbal fluency data to investigate the contribution of semantic organisation and processing speed to performance. The role of negative and affective symptoms and of attention disorders in verbal memory and verbal fluency was investigated separately in male and female patients. Semantic clustering contributed to verbal recall, and a measure of reading speed contributed to verbal recall as well as to phonological and semantic fluency. Negative symptoms affected verbal recall and verbal fluency in the male patients, whereas attention disorders affected these abilities in the female patients. Furthermore, depression affected verbal recall in women, whereas anxiety affected it in men. These results confirm the association of processing speed with cognitive efficiency in patients with schizophrenia. They also confirm the previously observed sex-specific associations of depression and anxiety with memory performance in these patients, and suggest that negative symptoms and attention disorders likewise are related to cognitive efficiency differently in men and women. Copyright © 2013 Elsevier B.V. All rights reserved.

  13. Hierarchically clustered adaptive quantization CMAC and its learning convergence.

    PubMed

    Teddy, S D; Lai, E M K; Quek, C

    2007-11-01

    The cerebellar model articulation controller (CMAC) neural network (NN) is a well-established computational model of the human cerebellum. Nevertheless, there are two major drawbacks associated with the uniform quantization scheme of the CMAC network: (1) a constant output resolution across the entire input space and (2) the generalization-accuracy dilemma. Moreover, the size of the CMAC network is an exponential function of the number of inputs. Depending on the characteristics of the training data, only a small percentage of the entire set of CMAC memory cells is utilized, so efficient utilization of the CMAC memory is a crucial issue. One approach is to quantize the input space nonuniformly. For existing nonuniformly quantized CMAC systems, there is a tradeoff between memory efficiency and computational complexity. Inspired by the underlying organizational mechanism of the human brain, this paper presents a novel CMAC architecture named hierarchically clustered adaptive quantization CMAC (HCAQ-CMAC). HCAQ-CMAC employs hierarchical clustering for the nonuniform quantization of the input space, identifying significant input segments and subsequently allocating more memory cells to these regions. The stability of the HCAQ-CMAC network is theoretically guaranteed by a proof of its learning convergence. The performance of the proposed network is benchmarked against the original CMAC network, as well as two other existing CMAC variants, on two real-life applications, namely, automated control of car maneuvers and modeling of human blood glucose dynamics. The experimental results demonstrate that the HCAQ-CMAC network offers an efficient memory allocation scheme and improves the generalization and accuracy of the network output, achieving better or comparable performance with smaller memory usage.
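
The density-adaptive idea behind HCAQ-CMAC (more quantization cells where training data is dense) can be sketched with a simplified stand-in that places cell edges at sample quantiles rather than using the paper's hierarchical clustering. All names below are invented for illustration:

```python
import random
import statistics

def uniform_edges(samples, n_cells):
    """Uniform quantization: equal-width cells over the input range."""
    lo, hi = min(samples), max(samples)
    step = (hi - lo) / n_cells
    return [lo + i * step for i in range(n_cells + 1)]

def density_adaptive_edges(samples, n_cells):
    """Nonuniform quantization: interior edges at sample quantiles,
    so dense input regions receive more (narrower) cells. A simplified
    stand-in for HCAQ-CMAC's clustering, not the paper's algorithm."""
    inner = statistics.quantiles(samples, n=n_cells)  # n-1 cut points
    return [min(samples)] + inner + [max(samples)]

random.seed(0)
# Training data concentrated near 0, with sparse tails out to +/-10.
samples = ([random.gauss(0, 0.5) for _ in range(900)] +
           [random.uniform(-10, 10) for _ in range(100)])

u = uniform_edges(samples, 16)
a = density_adaptive_edges(samples, 16)

def cells_inside(edges, lo, hi):
    """Count cells lying entirely within [lo, hi]."""
    return sum(1 for left, right in zip(edges, edges[1:])
               if left >= lo and right <= hi)

dense_uniform = cells_inside(u, -1.0, 1.0)
dense_adaptive = cells_inside(a, -1.0, 1.0)
```

With the same 16-cell budget, the adaptive scheme concentrates most cells in the dense region around zero, while the uniform scheme spends nearly all of them on nearly empty tails: the resolution-where-it-matters effect the abstract describes.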

  14. Exploring heterogeneous market hypothesis using realized volatility

    NASA Astrophysics Data System (ADS)

    Chin, Wen Cheong; Isa, Zaidi; Mohd Nor, Abu Hassan Shaari

    2013-04-01

    This study investigates the heterogeneous market hypothesis using high-frequency data. The cascaded heterogeneous trading activities with different time durations are modelled by the heterogeneous autoregressive framework. The empirical study indicated the presence of long-memory behaviour and predictability elements in the financial time series, which supports the heterogeneous market hypothesis. Besides the common sum-of-squares intraday realized volatility, we also advocate two power-variation realized volatilities for forecast evaluation and risk measurement in order to overcome the possible abrupt jumps during the credit crisis. Finally, the empirical results are used to determine market risk using the value-at-risk approach. The findings of this study have implications for informational market efficiency analysis, portfolio strategies and risk management.
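
The sum-of-squares realized volatility mentioned here is simple to compute from intraday returns, and the power-variation form generalizes the exponent. A small sketch with synthetic data (the 5-minute grid and volatility level are illustrative assumptions, not the paper's data):

```python
import math
import random

def realized_variance(returns):
    """Daily realized variance: sum of squared intraday returns."""
    return sum(r * r for r in returns)

def realized_power_variation(returns, p=1.0):
    """Realized power variation: sum of |r|^p. For p < 2 this is less
    sensitive to abrupt jumps than the sum of squares (p = 2)."""
    return sum(abs(r) ** p for r in returns)

random.seed(1)
# 78 five-minute returns (a 6.5-hour trading day), ~1% daily volatility.
sigma_5min = 0.01 / math.sqrt(78)
returns = [random.gauss(0.0, sigma_5min) for _ in range(78)]

rv = realized_variance(returns)
daily_vol = math.sqrt(rv)   # realized volatility estimate for the day
```

A single large jump return inflates `realized_variance` quadratically but `realized_power_variation(..., p=1)` only linearly, which is why power variation is preferred when abrupt jumps (as during the credit crisis) are a concern.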

  15. Optimizing SIEM Throughput on the Cloud Using Parallelization.

    PubMed

    Alam, Masoom; Ihsan, Asif; Khan, Muazzam A; Javaid, Qaisar; Khan, Abid; Manzoor, Jawad; Akhundzada, Adnan; Khan, Muhammad Khurram; Farooq, Sajid

    2016-01-01

    Processing large amounts of data in real time to identify security issues poses several performance challenges, especially when hardware infrastructure is limited. Managed Security Service Providers (MSSPs), mostly hosting their applications on the Cloud, receive events at a very high rate that varies from a few hundred to a couple of thousand events per second (EPS). It is critical to process this data efficiently so that attacks can be identified quickly and the necessary response initiated. This paper evaluates the performance of a security framework, OSTROM, built on the Esper complex event processing (CEP) engine under parallel and non-parallel computational frameworks. We explain three architectures under which Esper can be used to process events, and we investigated the effect on throughput, memory and CPU usage in each configuration. The results indicate that the performance of the engine is limited by the number of incoming events rather than by the queries being processed. The architecture in which one quarter of the total events is submitted to each instance and all the queries are processed by all the units shows the best results in terms of throughput, memory and CPU usage.
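
The best-performing architecture (each of four instances receives a quarter of the event stream and runs every query) is essentially a partition-then-merge computation. The sketch below caricatures it with Python threads and invented toy queries; it does not use the Esper API:

```python
from concurrent.futures import ThreadPoolExecutor

# Toy "queries": each predicate scans the events routed to one instance.
# (Illustrative stand-ins for Esper CEP queries.)
QUERIES = {
    "failed_logins": lambda e: e["type"] == "login" and not e["ok"],
    "high_severity": lambda e: e["severity"] >= 8,
}

def run_instance(events):
    """One engine instance: runs ALL queries over ITS slice of events."""
    return {name: sum(1 for e in events if pred(e))
            for name, pred in QUERIES.items()}

def parallel_process(events, n_instances=4):
    # Round-robin partition: instance i receives every n-th event.
    slices = [events[i::n_instances] for i in range(n_instances)]
    with ThreadPoolExecutor(max_workers=n_instances) as pool:
        partials = list(pool.map(run_instance, slices))
    # Counts are associative, so partial results merge by addition.
    return {name: sum(p[name] for p in partials) for name in QUERIES}

events = [{"type": "login", "ok": i % 3 != 0, "severity": i % 10}
          for i in range(1000)]
counts = parallel_process(events)
```

The merged counts match a single-instance scan of the full stream; the partitioning only pays off when the queries are stateless or partition-compatible, which is one reason the paper compares several architectures.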

  16. Command and Control Software Development Memory Management

    NASA Technical Reports Server (NTRS)

    Joseph, Austin Pope

    2017-01-01

    This internship was initially meant to cover the implementation of unit test automation for a NASA ground control project. As is often the case with large development projects, the scope and breadth of the internship changed. Instead, the internship focused on finding and correcting memory leaks and errors as reported by a COTS software product meant to track such issues. Memory leaks come in many different flavors and some of them are more benign than others. On the extreme end a program might be dynamically allocating memory and not correctly deallocating it when it is no longer in use. This is called a direct memory leak and in the worst case can use all the available memory and crash the program. If the leaks are small they may simply slow the program down which, in a safety critical system (a system for which a failure or design error can cause a risk to human life), is still unacceptable. The ground control system is managed in smaller sub-teams, referred to as CSCIs. The CSCI that this internship focused on is responsible for monitoring the health and status of the system. This team's software had several methods/modules that were leaking significant amounts of memory. Since most of the code in this system is safety-critical, correcting memory leaks is a necessity.

  17. Memory-related brain lateralisation in birds and humans.

    PubMed

    Moorman, Sanne; Nicol, Alister U

    2015-03-01

    Visual imprinting in chicks and song learning in songbirds are prominent model systems for the study of the neural mechanisms of memory. In both systems, neural lateralisation has been found to be involved in memory formation. Although many processes in the human brain are lateralised--spatial memory and musical processing involve mostly right-hemisphere dominance, whilst language is mostly left-hemisphere dominant--it is unclear what the function of lateralisation is. It might enhance brain capacity, make processing more efficient, or prevent the occurrence of conflicting signals. In both avian paradigms we find memory-related lateralisation. We will discuss avian lateralisation findings and propose that birds provide a strong model for studying neural mechanisms of memory-related lateralisation. Copyright © 2014. Published by Elsevier Ltd.

  18. LOD-based clustering techniques for efficient large-scale terrain storage and visualization

    NASA Astrophysics Data System (ADS)

    Bao, Xiaohong; Pajarola, Renato

    2003-05-01

    Large multi-resolution terrain data sets are usually stored out-of-core. To visualize terrain data at interactive frame rates, the data needs to be organized on disk, loaded into main memory part by part, and then rendered efficiently. Many main-memory algorithms have been proposed for efficient vertex selection and mesh construction. Organization of terrain data on disk is quite difficult because the error, the triangulation dependency and the spatial location of each vertex all need to be considered. Previous terrain clustering algorithms did not consider the per-vertex approximation error of individual terrain data sets. Therefore, the vertex sequences on disk are exactly the same for any terrain. In this paper, we propose a novel clustering algorithm which introduces level-of-detail (LOD) information into terrain data organization to map multi-resolution terrain data to external memory. In our approach the LOD parameters of the terrain elevation points are reflected during clustering. The experiments show that dynamic loading and paging of terrain data at varying LOD is very efficient and minimizes page faults. Additionally, the preprocessing of this algorithm is very fast and works out-of-core.
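    One way to reflect per-vertex LOD parameters during clustering, as described above, is to key disk pages on both a discrete LOD level (derived from the per-vertex error) and a coarse spatial cell, so that vertices needed at the same refinement stage and location page in together. This is a hedged sketch; the error-to-level mapping and the cell size are illustrative assumptions, not the paper's algorithm.

```python
import math
from collections import defaultdict

def lod_level(error, base=1.0, levels=6):
    """Map a per-vertex approximation error to a discrete LOD level:
    larger error -> coarser level, needed earlier during refinement."""
    if error <= 0 or error >= base:
        return 0 if error >= base else levels - 1
    return max(0, min(levels - 1, int(math.log2(base / error))))

def cluster_vertices(vertices, cell=64.0):
    """Group (x, y, error) vertices into disk pages keyed by
    (LOD level, spatial cell); each page is a unit of paging I/O."""
    pages = defaultdict(list)
    for x, y, err in vertices:
        key = (lod_level(err), int(x // cell), int(y // cell))
        pages[key].append((x, y, err))
    return pages

vertices = [(10.0, 20.0, 0.9), (12.0, 21.0, 0.8), (500.0, 20.0, 0.05)]
pages = cluster_vertices(vertices)
```

The two high-error (coarse) vertices land on the same page, while the low-error vertex is deferred to a fine-LOD page that is only fetched when the view demands it.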

  19. Intelligence and working memory systems: evidence of neural efficiency in alpha band ERD.

    PubMed

    Grabner, R H; Fink, A; Stipacek, A; Neuper, C; Neubauer, A C

    2004-07-01

    Starting from the well-established finding that brighter individuals display a more efficient brain function when performing cognitive tasks (i.e., neural efficiency), we investigated the relationship between intelligence and cortical activation in the context of working memory (WM) tasks. Fifty-five male (n=28) and female (n=27) participants worked on (1) a classical forward digit span task demanding only short-term memory (STM), (2) an attention-switching task drawing on the central executive (CE) of WM and (3) a WM task involving both STM storage and CE processes. During performance of these three types of tasks, cortical activation was quantified by the extent of Event-Related Desynchronization (ERD) in the alpha band of the human EEG. Correlational analyses revealed associations between the amount of ERD in the upper alpha band and intelligence in several brain regions. In all tasks, the males were more likely to display the negative intelligence-cortical activation relationship. Furthermore, stronger associations between ERD and intelligence were found for fluid rather than crystallized intelligence. Analyses also point to topographical differences in neural efficiency depending on sex, task type and the associated cognitive subsystems engaged during task performance.

  20. Contextual knowledge reduces demands on working memory during reading.

    PubMed

    Miller, Lisa M Soederberg; Cohen, Jason A; Wingfield, Arthur

    2006-09-01

    An experiment is reported in which young, middle-aged, and older adults read and recalled ambiguous texts either with or without the topic title that supplied contextual knowledge. Within each of the age groups, the participants were divided into those with high or low working memory (WM) spans, with available WM capacity further manipulated by the presence or absence of an auditory target detection task concurrent with the reading task. Differences in reading efficiency (reading time per proposition recalled) between low WM span and high WM span groups were greater among readers who had access to contextual knowledge relative to those who did not, suggesting that contextual knowledge reduces demands on WM capacity. This position was further supported by the finding that increased age and attentional demands, two factors associated with reduced WM capacity, exaggerated the benefits of contextual knowledge on reading efficiency. The relative strengths of additional potential predictors of reading efficiency (e.g., interest, effort, and memory beliefs), along with knowledge, WM span, and age, are reported. Findings showed that contextual knowledge was the strongest predictor of reading efficiency even after controlling for the effects of all of the other predictors.

  1. Strategic search from long-term memory: an examination of semantic and autobiographical recall.

    PubMed

    Unsworth, Nash; Brewer, Gene A; Spillers, Gregory J

    2014-01-01

    Searching long-term memory is theoretically driven by both directed (search strategies) and random components. In the current study we conducted four experiments evaluating strategic search in semantic and autobiographical memory. Participants were required to generate either exemplars from the category of animals or the names of their friends for several minutes. Self-reported strategies suggested that participants typically relied on visualization strategies for both tasks and were less likely to rely on ordered strategies (e.g., alphabetic search). When participants were instructed to use particular strategies, the visualization strategy resulted in the highest levels of performance and the most efficient search, whereas ordered strategies resulted in the lowest levels of performance and fairly inefficient search. These results are consistent with the notion that retrieval from long-term memory is driven, in part, by search strategies employed by the individual, and that one particularly efficient strategy is to visualize various situational contexts that one has experienced in the past in order to constrain the search and generate the desired information.

  2. Working memory capacity and recall from long-term memory: Examining the influences of encoding strategies, study time allocation, search efficiency, and monitoring abilities.

    PubMed

    Unsworth, Nash

    2016-01-01

    The relation between working memory capacity (WMC) and recall from long-term memory (LTM) was examined in the current study. Participants performed multiple measures of delayed free recall varying in presentation duration and self-reported their strategy usage after each task. Participants also performed multiple measures of WMC. The results suggested that WMC and LTM recall were related, and part of this relation was due to effective strategy use. However, adaptive changes in strategy use and study time allocation were not related to WMC. Examining multiple variables with structural equation modeling suggested that the relation between WMC and LTM recall was due to variation in effective strategy use, search efficiency, and monitoring abilities. Furthermore, all variables were shown to account for individual differences in LTM recall. These results suggest that the relation between WMC and recall from LTM is due to multiple strategic factors operating at both encoding and retrieval. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  3. Intelligence as the efficiency of cue-driven retrieval from secondary memory.

    PubMed

    Liesefeld, Heinrich René; Hoffmann, Eugenia; Wentura, Dirk

    2016-01-01

    Complex-span (working-memory-capacity) tasks are among the most successful predictors of intelligence. One important contributor to this relationship is the ability to efficiently employ cues for the retrieval from secondary memory. Presumably, intelligent individuals can considerably restrict their memory search sets by using such cues and can thereby improve recall performance. We here test this assumption by experimentally manipulating the validity of retrieval cues. When memoranda are drawn from the same semantic category on two successive trials of a verbal complex-span task, the category is a very strong retrieval cue on its first occurrence (strong-cue trial) but loses some of its validity on its second occurrence (weak-cue trial). If intelligent individuals make better use of semantic categories as retrieval cues, their recall accuracy suffers more from this loss of cue validity. Accordingly, our results show that less variance in intelligence is explained by recall accuracy on weak-cue compared with strong-cue trials.

  4. Efficient ICCG on a shared memory multiprocessor

    NASA Technical Reports Server (NTRS)

    Hammond, Steven W.; Schreiber, Robert

    1989-01-01

    Different approaches are discussed for exploiting parallelism in the ICCG (Incomplete Cholesky Conjugate Gradient) method for solving large sparse symmetric positive definite systems of equations on a shared memory parallel computer. Techniques for efficiently solving triangular systems and computing sparse matrix-vector products are explored. Three methods for scheduling the tasks in solving triangular systems are implemented on the Sequent Balance 21000. Sample problems that are representative of a large class of problems solved using iterative methods are used. We show that a static analysis to determine data dependences in the triangular solve can greatly improve its parallel efficiency. We also show that ignoring symmetry and storing the whole matrix can reduce solution time substantially.
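    The static analysis of data dependences in the triangular solve can be illustrated with level scheduling: each row's level is one more than the deepest row it depends on, and rows within a level have no mutual dependences, so they can be solved in parallel. A sketch under the assumption of standard level scheduling; the abstract does not detail the three scheduling methods actually compared.

```python
def triangular_levels(L_cols):
    """Static dependence analysis for a sparse lower-triangular solve.
    L_cols[i] lists the column indices j < i with a nonzero L[i][j];
    row i can only be solved after all such rows j."""
    n = len(L_cols)
    level = [0] * n
    for i in range(n):
        for j in L_cols[i]:
            level[i] = max(level[i], level[j] + 1)
    schedule = {}
    for i, lvl in enumerate(level):
        schedule.setdefault(lvl, []).append(i)
    return schedule

# Toy 6x6 pattern: rows 0,1 independent; 2 depends on 0; 3 on 1;
# 4 on 2 and 3; 5 on 4.
deps = [[], [], [0], [1], [2, 3], [4]]
schedule = triangular_levels(deps)
# → {0: [0, 1], 1: [2, 3], 2: [4], 3: [5]}
```

Computing this schedule once, before the iterative solver starts, is what lets the per-iteration triangular solves run without repeated dependence checks.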

  5. Neuroanatomical and Cognitive Mediators of Age-Related Differences in Episodic Memory

    PubMed Central

    Head, Denise; Rodrigue, Karen M.; Kennedy, Kristen M.; Raz, Naftali

    2009-01-01

    Aging is associated with declines in episodic memory. In this study, the authors used a path analysis framework to explore the mediating role of differences in brain structure, executive functions, and processing speed in age-related differences in episodic memory. Measures of regional brain volume (prefrontal gray and white matter, caudate, hippocampus, visual cortex), executive functions (working memory, inhibitory control, task switching, temporal processing), processing speed, and episodic memory were obtained in a sample of young and older adults. As expected, age was linked to reduction in regional brain volumes and cognitive performance. Moreover, neural and cognitive factors completely mediated age differences in episodic memory. Whereas hippocampal shrinkage directly affected episodic memory, prefrontal volumetric reductions influenced episodic memory via limitations in working memory and inhibitory control. Age-related slowing predicted reduced efficiency in temporal processing, working memory, and inhibitory control. Lastly, poorer temporal processing directly affected episodic memory. No direct effects of age on episodic memory remained once these factors were taken into account. These analyses highlight the value of a multivariate approach with the understanding of complex relationships in cognitive and brain aging. PMID:18590361

  6. A novel binary shape context for 3D local surface description

    NASA Astrophysics Data System (ADS)

    Dong, Zhen; Yang, Bisheng; Liu, Yuan; Liang, Fuxun; Li, Bijun; Zang, Yufu

    2017-08-01

    3D local surface description is now at the core of many computer vision technologies, such as 3D object recognition, intelligent driving, and 3D model reconstruction. However, most existing 3D feature descriptors still suffer from low descriptiveness, weak robustness, and inefficiency in both time and memory. To overcome these challenges, this paper presents a robust and descriptive 3D Binary Shape Context (BSC) descriptor with high efficiency in both time and memory. First, a novel BSC descriptor is generated for 3D local surface description, and the performance of the BSC descriptor under different settings of its parameters is analyzed. Next, the descriptiveness, robustness, and efficiency in both time and memory of the BSC descriptor are evaluated and compared to those of several state-of-the-art 3D feature descriptors. Finally, the performance of the BSC descriptor for 3D object recognition is evaluated on a number of popular benchmark datasets and on an urban-scene dataset collected by a terrestrial laser scanner system. Comprehensive experiments demonstrate that the proposed BSC descriptor achieves high descriptiveness, strong robustness, and high efficiency in both time and memory, with recognition rates of 94.8%, 94.1%, and 82.1% on the UWA, Queen, and WHU datasets, respectively.

  7. Memory protection

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.

    1988-01-01

    Accidental overwriting of files or of memory regions belonging to other programs, browsing of personal files by superusers, Trojan horses, and viruses are examples of breakdowns in workstations and personal computers that would be significantly reduced by memory protection. Memory protection is the capability of an operating system and supporting hardware to delimit segments of memory, to control whether segments can be read from or written into, and to confine accesses of a program to its segments alone. The absence of memory protection in many operating systems today is the result of a bias toward a narrow definition of performance as maximum instruction-execution rate. A broader definition, including the time to get the job done, makes clear that cost of recovery from memory interference errors reduces expected performance. The mechanisms of memory protection are well understood, powerful, efficient, and elegant. They add to performance in the broad sense without reducing instruction execution rate.
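    The read/write control that memory protection provides can be demonstrated from user space with memory-mapped segments. A small illustration using Python's `mmap` module as a modern analogue; this is not the mechanism discussed in the article, just the same idea of delimiting a segment and restricting access to it.

```python
import mmap

# A writable anonymous mapping: the OS grants read/write access.
rw = mmap.mmap(-1, 4096)  # default access allows both reads and writes
rw[0:5] = b"hello"

# A read-only mapping: the protection bits block any write attempt.
ro = mmap.mmap(-1, 4096, access=mmap.ACCESS_READ)
try:
    ro[0:1] = b"x"   # denied: this segment is delimited read-only
    protected = False
except TypeError:
    protected = True  # the runtime refuses to modify a read-only map
```

The same delimit-and-confine principle, enforced in hardware by the MMU, is what prevents one program's accidental overwrite from reaching another program's segments.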

  8. Extending the BEAGLE library to a multi-FPGA platform.

    PubMed

    Jin, Zheming; Bakos, Jason D

    2013-01-19

    Maximum Likelihood (ML)-based phylogenetic inference using Felsenstein's pruning algorithm is a standard method for estimating the evolutionary relationships amongst a set of species based on DNA sequence data, and is used in popular applications such as RAxML, PHYLIP, GARLI, BEAST, and MrBayes. The Phylogenetic Likelihood Function (PLF) and its associated scaling and normalization steps comprise the computational kernel for these tools. These computations are data intensive but contain fine grain parallelism that can be exploited by coprocessor architectures such as FPGAs and GPUs. A general purpose API called BEAGLE has recently been developed that includes optimized implementations of Felsenstein's pruning algorithm for various data parallel architectures. In this paper, we extend the BEAGLE API to a multiple Field Programmable Gate Array (FPGA)-based platform called the Convey HC-1. The core calculation of our implementation, which includes both the phylogenetic likelihood function (PLF) and the tree likelihood calculation, has an arithmetic intensity of 130 floating-point operations per 64 bytes of I/O, or 2.03 ops/byte. Its performance can thus be calculated as a function of the host platform's peak memory bandwidth and the implementation's memory efficiency, as 2.03 × peak bandwidth × memory efficiency. Our FPGA-based platform has a peak bandwidth of 76.8 GB/s and our implementation achieves a memory efficiency of approximately 50%, which gives an average throughput of 78 Gflops. This represents a ~40X speedup when compared with BEAGLE's CPU implementation on a dual Xeon 5520 and 3X speedup versus BEAGLE's GPU implementation on a Tesla T10 GPU for very large data sizes. The power consumption is 92 W, yielding a power efficiency of 1.7 Gflops per Watt. The use of data parallel architectures to achieve high performance for likelihood-based phylogenetic inference requires high memory bandwidth and a design methodology that emphasizes high memory efficiency. 
To achieve this objective, we integrated 32 pipelined processing elements (PEs) across four FPGAs. For the design of each PE, we developed a specialized synthesis tool to generate a floating-point pipeline with resource and throughput constraints to match the target platform. We have found that using low-latency floating-point operators can significantly reduce FPGA area and still meet timing requirement on the target platform. We found that this design methodology can achieve performance that exceeds that of a GPU-based coprocessor.
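    The reported throughput follows directly from the stated formula. A quick check of the arithmetic:

```python
# Arithmetic intensity: 130 floating-point ops per 64 bytes of I/O.
ops_per_byte = 130 / 64            # = 2.03125 ops/byte (the "2.03" above)

# Attainable throughput = intensity x peak bandwidth x memory efficiency.
peak_bandwidth_gb_s = 76.8         # Convey HC-1 peak memory bandwidth
memory_efficiency = 0.50           # achieved fraction of peak
throughput_gflops = ops_per_byte * peak_bandwidth_gb_s * memory_efficiency
# → 78.0 Gflops, matching the reported average throughput
```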

  9. Disrupted rapid eye movement sleep predicts poor declarative memory performance in post-traumatic stress disorder.

    PubMed

    Lipinska, Malgorzata; Timol, Ridwana; Kaminer, Debra; Thomas, Kevin G F

    2014-06-01

    Successful memory consolidation during sleep depends on healthy slow-wave and rapid eye movement sleep, and on successful transition across sleep stages. In post-traumatic stress disorder, sleep is disrupted and memory is impaired, but relations between these two variables in the psychiatric condition remain unexplored. We examined whether disrupted sleep, and consequent disrupted memory consolidation, is a mechanism underlying declarative memory deficits in post-traumatic stress disorder. We recruited three matched groups of participants: post-traumatic stress disorder (n = 16); trauma-exposed non-post-traumatic stress disorder (n = 15); and healthy control (n = 14). They completed memory tasks before and after 8 h of sleep. We measured sleep variables using sleep-adapted electroencephalography. Post-traumatic stress disorder-diagnosed participants experienced significantly less sleep efficiency and rapid eye movement sleep percentage, and experienced more awakenings and wake percentage in the second half of the night than did participants in the other two groups. After sleep, post-traumatic stress disorder-diagnosed participants retained significantly less information on a declarative memory task than controls. Rapid eye movement percentage, wake percentage and sleep efficiency correlated with retention of information over the night. Furthermore, lower rapid eye movement percentage predicted poorer retention in post-traumatic stress disorder-diagnosed individuals. Our results suggest that declarative memory consolidation is disrupted during sleep in post-traumatic stress disorder. These data are consistent with theories suggesting that sleep benefits memory consolidation via predictable neurobiological mechanisms, and that rapid eye movement disruption is more than a symptom of post-traumatic stress disorder. © 2014 European Sleep Research Society.

  10. Elevated stress is associated with prefrontal cortex dysfunction during a verbal memory task in women with HIV

    PubMed Central

    Rubin, Leah H.; Wu, Minjie; Sundermann, Erin E.; Meyer, Vanessa J.; Smith, Rachael; Weber, Kathleen M.; Cohen, Mardge H.; Little, Deborah M.; Maki, Pauline M.

    2016-01-01

    HIV-infected women may be particularly vulnerable to verbal learning and memory deficits. One factor contributing to these deficits is high perceived stress, which is associated with prefrontal cortical (PFC) atrophy and memory outcomes sensitive to PFC function, including retrieval and semantic clustering. We examined the association between stress and PFC activation during a verbal memory task in 36 HIV-infected women from the Chicago Consortium of the Women’s Interagency HIV Study (WIHS) to better understand the role of the PFC in this stress-related impairment. Participants completed standardized measures of verbal learning and memory and stress (Perceived Stress Scale-10). We used functional magnetic resonance imaging to assess brain function while participants completed encoding and recognition phases of a verbal memory task. HIV-infected women with higher stress (scores in top tertile) performed worse on all verbal memory outcomes including strategic encoding (p’s<0.05) compared to HIV-infected women with lower stress (scores in lower two tertiles). Patterns of brain activation during recognition (but not encoding) differed between women with higher versus lower stress. During recognition, women with higher stress demonstrated greater deactivation in medial PFC and posterior cingulate cortex compared to women with lower stress (p’s<0.05). Greater deactivation in medial PFC marginally related to less efficient strategic retrieval (p=0.06). Similar results were found in analyses focusing on PTSD symptoms. Results suggest that stress might alter the function of the medial PFC in HIV-infected women resulting in less efficient strategic retrieval and deficits in verbal memory. PMID:27094924

  11. Elevated stress is associated with prefrontal cortex dysfunction during a verbal memory task in women with HIV.

    PubMed

    Rubin, Leah H; Wu, Minjie; Sundermann, Erin E; Meyer, Vanessa J; Smith, Rachael; Weber, Kathleen M; Cohen, Mardge H; Little, Deborah M; Maki, Pauline M

    2016-12-01

    HIV-infected women may be particularly vulnerable to verbal learning and memory deficits. One factor contributing to these deficits is high perceived stress, which is associated with prefrontal cortical (PFC) atrophy and memory outcomes sensitive to PFC function, including retrieval and semantic clustering. We examined the association between stress and PFC activation during a verbal memory task in 36 HIV-infected women from the Chicago Consortium of the Women's Interagency HIV Study (WIHS) to better understand the role of the PFC in this stress-related impairment. Participants completed standardized measures of verbal learning and memory and stress (perceived stress scale-10). We used functional magnetic resonance imaging to assess brain function while participants completed encoding and recognition phases of a verbal memory task. HIV-infected women with higher stress (scores in top tertile) performed worse on all verbal memory outcomes including strategic encoding (p < 0.05) compared to HIV-infected women with lower stress (scores in lower two tertiles). Patterns of brain activation during recognition (but not encoding) differed between women with higher vs. lower stress. During recognition, women with higher stress demonstrated greater deactivation in medial PFC and posterior cingulate cortex compared to women with lower stress (p < 0.05). Greater deactivation in medial PFC marginally related to less efficient strategic retrieval (p = 0.06). Similar results were found in analyses focusing on PTSD symptoms. Results suggest that stress might alter the function of the medial PFC in HIV-infected women resulting in less efficient strategic retrieval and deficits in verbal memory.

  12. Efficient Graph Based Assembly of Short-Read Sequences on Hybrid Core Architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sczyrba, Alex; Pratap, Abhishek; Canon, Shane

    2011-03-22

    Advanced architectures can deliver dramatically increased throughput for genomics and proteomics applications, reducing time-to-completion in some cases from days to minutes. One such architecture, hybrid-core computing, marries a traditional x86 environment with a reconfigurable coprocessor, based on field programmable gate array (FPGA) technology. In addition to higher throughput, increased performance can fundamentally improve research quality by allowing more accurate, previously impractical approaches. We will discuss the approach used by Convey's de Bruijn graph constructor for short-read, de-novo assembly. Bioinformatics applications that have random access patterns to large memory spaces, such as graph-based algorithms, experience memory performance limitations on cache-based x86 servers. Convey's highly parallel memory subsystem allows application-specific logic to simultaneously access 8192 individual words in memory, significantly increasing effective memory bandwidth over cache-based memory systems. Many algorithms, such as Velvet and other de Bruijn graph based, short-read, de-novo assemblers, can greatly benefit from this type of memory architecture. Furthermore, small data type operations (four nucleotides can be represented in two bits) make more efficient use of logic gates than the data types dictated by conventional programming models. JGI is comparing the performance of Convey's graph constructor and Velvet on both synthetic and real data. We will present preliminary results on memory usage and run time metrics for various data sets with different sizes, from small microbial and fungal genomes to a very large cow rumen metagenome. For genomes with references we will also present assembly quality comparisons between the two assemblers.
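    The two-bit nucleotide encoding mentioned above is straightforward to sketch; this is an illustrative packing, not Convey's implementation.

```python
CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
BASE = "ACGT"

def pack(seq):
    """Pack a nucleotide string into an integer, two bits per base."""
    bits = 0
    for ch in seq:
        bits = (bits << 2) | CODE[ch]
    return bits

def unpack(bits, length):
    """Recover the nucleotide string from its 2-bit packed form."""
    out = []
    for _ in range(length):
        out.append(BASE[bits & 0b11])
        bits >>= 2
    return "".join(reversed(out))

kmer = "GATTACA"
packed = pack(kmer)
assert unpack(packed, len(kmer)) == kmer
# 7 bases fit in 14 bits instead of 7 bytes of ASCII text.
```

In FPGA logic this 4x density reduction translates directly into narrower datapaths and fewer gates per k-mer comparison, which is the efficiency the abstract refers to.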

  13. Combined Cognitive Training vs. Memory Strategy Training in Healthy Older Adults.

    PubMed

    Li, Bing; Zhu, Xinyi; Hou, Jianhua; Chen, Tingji; Wang, Pengyun; Li, Juan

    2016-01-01

    As the mnemonic utilization deficit in older adults is associated with age-related decline in executive function, we hypothesized that memory strategy training combined with executive function training might induce a larger training effect in memory and broader training effects in non-memory outcomes than pure memory training. The present study compared the effects of combined cognitive training (executive function training plus memory strategy training) to pure memory strategy training. Forty healthy older adults were randomly assigned to a combined cognitive training group or a memory strategy training group. A control group receiving no training was also included. The combined cognitive training group received 16 sessions of training (eight sessions of executive function training followed by eight sessions of memory strategy training). The memory training group received 16 sessions of memory strategy training. The results partly supported our hypothesis: improved performance on executive function was found only in the combined training group, whereas memory performance increased less with combined training than with memory strategy training. The results suggest that combined cognitive training may be less efficient than pure memory training for memory outcomes, although the influence of insufficient training time and of the weaker match between the trained executive functions and working memory could not be excluded; however, it has broader training effects in non-memory outcomes. www.chictr.org.cn, identifier ChiCTR-OON-16007793.

  14. Energy-efficient writing scheme for magnetic domain-wall motion memory

    NASA Astrophysics Data System (ADS)

    Kim, Kab-Jin; Yoshimura, Yoko; Ham, Woo Seung; Ernst, Rick; Hirata, Yuushou; Li, Tian; Kim, Sanghoon; Moriyama, Takahiro; Nakatani, Yoshinobu; Ono, Teruo

    2017-04-01

    We present an energy-efficient magnetic domain-writing scheme for domain wall (DW) motion-based memory devices. A cross-shaped nanowire is employed to inject a domain into the nanowire through current-induced DW propagation. The energy required for injecting the magnetic domain is more than one order of magnitude lower than that for the conventional field-based writing scheme. The proposed scheme is beneficial for device miniaturization because the threshold current for DW propagation scales with the device size, which cannot be achieved in the conventional field-based technique.

  15. Compiling for Application Specific Computational Acceleration in Reconfigurable Architectures Final Report CRADA No. TSB-2033-01

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De Supinski, B.; Caliga, D.

    2017-09-28

    The primary objective of this project was to develop memory optimization technology to efficiently deliver data to, and distribute data within, the SRC-6's Field Programmable Gate Array ("FPGA")-based Multi-Adaptive Processors (MAPs). The hardware/software approach was to explore efficient MAP configurations and generate the compiler technology to exploit those configurations. This memory accessing technology represents an important step towards making reconfigurable symmetric multi-processor (SMP) architectures a cost-effective solution for large-scale scientific computing.

  16. The Focus of Attention in Visual Working Memory: Protection of Focused Representations and Its Individual Variation.

    PubMed

    Heuer, Anna; Schubö, Anna

    2016-01-01

    Visual working memory can be modulated according to changes in the cued task relevance of maintained items. Here, we investigated the mechanisms underlying this modulation. In particular, we studied the consequences of attentional selection for selected and unselected items, and the role of individual differences in the efficiency with which attention is deployed. To this end, we analysed performance in a visual working memory task as well as the CDA/SPCN and the N2pc, ERP components associated with visual working memory and attentional processes. Selection during the maintenance stage was manipulated by means of two successively presented retrocues providing spatial information as to which items were most likely to be tested. The results show that attentional selection serves to robustly protect relevant representations in the focus of attention, while unselected representations that may become relevant again still remain available. Individuals with larger retrocueing benefits showed higher efficiency of attentional selection, as indicated by the N2pc, and stronger maintenance-associated activity (CDA/SPCN). The findings add to converging evidence that focused representations are protected, and highlight the flexibility of visual working memory, in which information can be weighted according to its relevance.

  17. Efficient Bayesian inference for natural time series using ARFIMA processes

    NASA Astrophysics Data System (ADS)

    Graves, T.; Gramacy, R. B.; Franzke, C. L. E.; Watkins, N. W.

    2015-11-01

    Many geophysical quantities, such as atmospheric temperature, water levels in rivers, and wind speeds, have shown evidence of long memory (LM). LM implies that these quantities experience non-trivial temporal memory, which potentially not only enhances their predictability, but also hampers the detection of externally forced trends. Thus, it is important to reliably identify whether or not a system exhibits LM. In this paper we present a modern and systematic approach to the inference of LM. We use the flexible autoregressive fractional integrated moving average (ARFIMA) model, which is widely used in time series analysis, and of increasing interest in climate science. Unlike most previous work on the inference of LM, which is frequentist in nature, we provide a systematic treatment of Bayesian inference. In particular, we provide a new approximate likelihood for efficient parameter inference, and show how nuisance parameters (e.g., short-memory effects) can be integrated over in order to focus on long-memory parameters and hypothesis testing more directly. We illustrate our new methodology on the Nile water level data and the central England temperature (CET) time series, with favorable comparison to the standard estimators. For CET we also extend our method to seasonal long memory.
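    The long memory that ARFIMA captures comes from the fractional differencing operator (1 - B)^d, whose power-series expansion in the lag operator B has hyperbolically decaying weights. A sketch of the standard recursion for those weights; this is illustrative background, not the authors' approximate likelihood.

```python
import numpy as np

def frac_diff_weights(d, n):
    """Coefficients of (1 - B)^d expanded as a power series in the lag
    operator B: pi_0 = 1, pi_k = pi_{k-1} * (k - 1 - d) / k."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

w = frac_diff_weights(d=0.4, n=200)
# Long memory: the weights decay hyperbolically (roughly like k**(-1-d)),
# not geometrically as in a short-memory ARMA model, so distant past
# values retain non-trivial influence on the present.
```

It is this slow decay that both enhances predictability and makes externally forced trends hard to distinguish from internal persistence, as the abstract notes.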

  18. Spin-transfer torque switched magnetic tunnel junctions in magnetic random access memory

    NASA Astrophysics Data System (ADS)

    Sun, Jonathan Z.

    2016-10-01

    The spin-transfer torque (or spin-torque, STT) switched magnetic tunnel junction (MTJ) is at the heart of a new generation of magnetism-based solid-state memory, the so-called spin-transfer-torque magnetic random access memory, or STT-MRAM. Over the past decades, the STT-switchable magnetic tunnel junction has seen progress on many fronts, including the discovery of (001) MgO as the most favored tunnel barrier, which together with (bcc) Fe or FeCo alloys yields the best demonstrated tunnel magneto-resistance (TMR); the development of perpendicularly magnetized ultrathin CoFeB-type thin films sufficient to support high-density memories, with junction sizes demonstrated down to 11 nm in diameter; and record-low spin-torque switching threshold currents, giving a best reported switching efficiency of over 5 kBT/μA. Here we review the basic device properties, focusing on perpendicularly magnetized MTJs, both in terms of switching efficiency as measured by sub-threshold, quasi-static methods, and of switching speed under super-threshold, forced switching. We focus on device behaviors important for memory applications that are rooted in fundamental device physics, highlighting the trade-offs among device parameters for the best system integration.

  19. How Managers' everyday decisions create or destroy your company's strategy.

    PubMed

    Bower, Joseph L; Gilbert, Clark G

    2007-02-01

    Senior executives have long been frustrated by the disconnection between the plans and strategies they devise and the actual behavior of the managers throughout the company. This article approaches the problem from the ground up, recognizing that every time a manager allocates resources, that decision moves the company either into or out of alignment with its announced strategy. A well-known story, Intel's exit from the memory business, illustrates this point. When discussing what businesses Intel should be in, Andy Grove asked Gordon Moore what they would do if Intel were a company that they had just acquired. When Moore answered, "Get out of memory," they decided to do just that. It turned out, though, that Intel's revenues from memory were by this time only 4% of total sales. Intel's lower-level managers had already exited the business. What Intel hadn't done was to shut down the flow of research funding into memory (which was still eating up one-third of all research expenditures); nor had the company announced its exit to the outside world. Because divisional and operating managers, as well as customers and capital markets, have such a powerful impact on the realized strategy of the firm, senior management might consider focusing less on the company's formal strategy and more on the processes by which the company allocates resources. Top managers must know the track record of the people who are making resource allocation proposals; recognize the strategic issues at stake; reach down to operational managers to work across division lines; frame resource questions to reflect the corporate perspective, especially when large sums of money are involved and conditions are highly uncertain; and create a new context that allows top executives to circumvent the regular resource allocation process when necessary.

  20. Efficient Machine Learning Approach for Optimizing Scientific Computing Applications on Emerging HPC Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arumugam, Kamesh

    Efficient parallel implementation of scientific applications on multi-core CPUs with accelerators such as GPUs and Xeon Phis is challenging. It requires exploiting the data-parallel architecture of the accelerator along with the vector pipelines of modern x86 CPU architectures, load balancing, and efficient memory transfer between different devices. It is relatively easy to meet these requirements for highly structured scientific applications. In contrast, a number of scientific and engineering applications are unstructured. Getting performance on accelerators for these applications is extremely challenging because many of them employ irregular algorithms that exhibit data-dependent control flow and irregular memory accesses. Furthermore, these applications are often iterative with dependencies between steps, making it hard to parallelize across steps. As a result, parallelism in these applications is often limited to a single step. Numerical simulation of charged particle beam dynamics is one such application, where the distribution of work and the memory access pattern at each time step are irregular. Applications with these properties tend to present significant branch and memory divergence, load imbalance between processor cores, and poor compute and memory utilization. Prior research on parallelizing such irregular applications has focused on optimizing the irregular, data-dependent memory accesses and control flow during a single step of the application, independent of the other steps, under the assumption that these patterns are completely unpredictable. We observed that the structure of computation leading to control-flow divergence and irregular memory accesses in one step is similar to that in the next step. It is possible to predict this structure in the current step by observing the computation structure of previous steps.
In this dissertation, we present novel machine-learning-based optimization techniques to address the parallel implementation challenges of such irregular applications on different HPC architectures. In particular, we use supervised learning to predict the computation structure and use it to address the control-flow and memory access irregularities in the parallel implementation of such applications on GPUs, Xeon Phis, and heterogeneous architectures composed of multi-core CPUs with GPUs or Xeon Phis. We use numerical simulation of charged particle beam dynamics as a motivating example throughout the dissertation, though the techniques should be equally applicable to a wide range of irregular applications. The machine learning approach presented here uses predictive analytics and forecasting techniques to adaptively model and track the irregular memory access pattern at each time step of the simulation in order to anticipate future memory access patterns. Access pattern forecasts can then be used to make optimization decisions during application execution, improving the performance of the application at a future time step based on observations from earlier time steps. On heterogeneous architectures, forecasts can also be used to improve the memory performance and resource utilization of all the processing units to deliver good aggregate performance. We used these optimization techniques and this anticipation strategy to design a cache-aware, memory-efficient parallel algorithm addressing the irregularities in the parallel implementation of charged particle beam dynamics simulation on different HPC architectures. Experimental results using a diverse mix of HPC architectures show that our anticipation strategy is effective in maximizing data reuse, ensuring workload balance, minimizing branch and memory divergence, and improving resource utilization.
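The anticipation idea above, forecasting the next step's irregular work distribution from earlier steps and scheduling accordingly, can be sketched in a few lines. This is a toy model with hypothetical names (the dissertation's actual predictors are supervised-learning models, not shown here): per-chunk costs are extrapolated linearly from the last two time steps, and the forecast drives a longest-processing-time-first assignment to workers.

```python
def forecast_costs(history):
    """Linear extrapolation of per-chunk work from the last two time steps.

    Captures the key observation: the irregular structure of step t
    resembles that of step t-1, so past steps predict the next one.
    """
    if len(history) < 2:
        return list(history[-1])
    prev, last = history[-2], history[-1]
    return [max(0.0, 2.0 * l - p) for p, l in zip(prev, last)]


def schedule(costs, n_workers):
    """Longest-processing-time-first assignment using forecast costs.

    Sorting chunks by predicted cost before greedy assignment reduces
    load imbalance relative to arbitrary ordering.
    """
    loads = [0.0] * n_workers
    assignment = [[] for _ in range(n_workers)]
    for chunk in sorted(range(len(costs)), key=lambda i: -costs[i]):
        w = loads.index(min(loads))  # least-loaded worker
        loads[w] += costs[chunk]
        assignment[w].append(chunk)
    return assignment
```

In a real implementation the forecast would also steer data layout (to reduce memory divergence), not just work placement.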

  1. Neural Bases of Automaticity

    ERIC Educational Resources Information Center

    Servant, Mathieu; Cassey, Peter; Woodman, Geoffrey F.; Logan, Gordon D.

    2018-01-01

    Automaticity allows us to perform tasks in a fast, efficient, and effortless manner after sufficient practice. Theories of automaticity propose that across practice processing transitions from being controlled by working memory to being controlled by long-term memory retrieval. Recent event-related potential (ERP) studies have sought to test this…

  2. Optimal Recall from Bounded Metaplastic Synapses: Predicting Functional Adaptations in Hippocampal Area CA3

    PubMed Central

    Savin, Cristina; Dayan, Peter; Lengyel, Máté

    2014-01-01

    A venerable history of classical work on autoassociative memory has significantly shaped our understanding of several features of the hippocampus, and most prominently of its CA3 area, in relation to memory storage and retrieval. However, existing theories of hippocampal memory processing ignore a key biological constraint affecting memory storage in neural circuits: the bounded dynamical range of synapses. Recent treatments based on the notion of metaplasticity provide a powerful model for individual bounded synapses; however, their implications for the ability of the hippocampus to retrieve memories well and the dynamics of neurons associated with that retrieval are both unknown. Here, we develop a theoretical framework for memory storage and recall with bounded synapses. We formulate the recall of a previously stored pattern from a noisy recall cue and limited-capacity (and therefore lossy) synapses as a probabilistic inference problem, and derive neural dynamics that implement approximate inference algorithms to solve this problem efficiently. In particular, for binary synapses with metaplastic states, we demonstrate for the first time that memories can be efficiently read out with biologically plausible network dynamics that are completely constrained by the synaptic plasticity rule, and the statistics of the stored patterns and of the recall cue. Our theory organises into a coherent framework a wide range of existing data about the regulation of excitability, feedback inhibition, and network oscillations in area CA3, and makes novel and directly testable predictions that can guide future experiments. PMID:24586137

  3. Evaluating architecture impact on system energy efficiency

    PubMed Central

    Yu, Shijie; Wang, Rui; Luan, Zhongzhi; Qian, Depei

    2017-01-01

    As energy consumption has been surging in an unsustainable way, it is important to understand the impact of existing architecture designs from an energy efficiency perspective, which is especially valuable for High Performance Computing (HPC) and datacenter environments hosting tens of thousands of servers. One obstacle hindering comprehensive evaluation of energy efficiency is the deficiency of power measuring approaches. Most energy studies rely on either external power meters or power models, and both methods have intrinsic drawbacks in practical adoption and measuring accuracy. Fortunately, the advent of the Intel Running Average Power Limit (RAPL) interfaces has taken power measurement to the next level, with higher accuracy and finer time resolution. Therefore, we argue it is the right time to conduct an in-depth evaluation of existing architecture designs to understand their impact on system energy efficiency. In this paper, we leverage representative benchmark suites, including serial and parallel workloads from diverse domains, to evaluate architecture features such as Non-Uniform Memory Access (NUMA), Simultaneous Multithreading (SMT), and Turbo Boost. Energy is tracked at the subcomponent level, such as Central Processing Unit (CPU) cores, uncore components, and Dynamic Random-Access Memory (DRAM), by exploiting the power measurement capability exposed by RAPL.
The experiments reveal non-intuitive results: 1) the mismatch between local compute and remote memory nodes caused by the NUMA effect not only generates a dramatic power and energy surge but also deteriorates energy efficiency significantly; 2) for multithreaded applications such as the Princeton Application Repository for Shared-Memory Computers (PARSEC), most workloads gain a notable increase in energy efficiency from SMT, with more than a 40% decline in average power consumption; 3) Turbo Boost is effective in accelerating workload execution and thereby conserving energy, but it may not be applicable on systems with a tight power budget. PMID:29161317

  4. Evaluating architecture impact on system energy efficiency.

    PubMed

    Yu, Shijie; Yang, Hailong; Wang, Rui; Luan, Zhongzhi; Qian, Depei

    2017-01-01

    As energy consumption has been surging in an unsustainable way, it is important to understand the impact of existing architecture designs from an energy efficiency perspective, which is especially valuable for High Performance Computing (HPC) and datacenter environments hosting tens of thousands of servers. One obstacle hindering comprehensive evaluation of energy efficiency is the deficiency of power measuring approaches. Most energy studies rely on either external power meters or power models, and both methods have intrinsic drawbacks in practical adoption and measuring accuracy. Fortunately, the advent of the Intel Running Average Power Limit (RAPL) interfaces has taken power measurement to the next level, with higher accuracy and finer time resolution. Therefore, we argue it is the right time to conduct an in-depth evaluation of existing architecture designs to understand their impact on system energy efficiency. In this paper, we leverage representative benchmark suites, including serial and parallel workloads from diverse domains, to evaluate architecture features such as Non-Uniform Memory Access (NUMA), Simultaneous Multithreading (SMT), and Turbo Boost. Energy is tracked at the subcomponent level, such as Central Processing Unit (CPU) cores, uncore components, and Dynamic Random-Access Memory (DRAM), by exploiting the power measurement capability exposed by RAPL.
The experiments reveal non-intuitive results: 1) the mismatch between local compute and remote memory nodes caused by the NUMA effect not only generates a dramatic power and energy surge but also deteriorates energy efficiency significantly; 2) for multithreaded applications such as the Princeton Application Repository for Shared-Memory Computers (PARSEC), most workloads gain a notable increase in energy efficiency from SMT, with more than a 40% decline in average power consumption; 3) Turbo Boost is effective in accelerating workload execution and thereby conserving energy, but it may not be applicable on systems with a tight power budget.
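On Linux, the RAPL counters used in studies like the two above are exposed through the powercap sysfs tree as cumulative energy in microjoules; average power over an interval is the difference of two snapshots. A minimal sketch (the zone layout shown is the standard `intel-rapl` powercap convention; the base path is parameterized so the reader can point it at a test tree, and counter wraparound at `max_energy_range_uj` is deliberately ignored for brevity):

```python
import glob
import os
import time


def read_rapl_energy_uj(base="/sys/class/powercap"):
    """Return cumulative energy (microjoules) per top-level RAPL zone,
    keyed by zone name (e.g. "package-0")."""
    readings = {}
    for zone in glob.glob(os.path.join(base, "intel-rapl:*")):
        try:
            with open(os.path.join(zone, "name")) as f:
                name = f.read().strip()
            with open(os.path.join(zone, "energy_uj")) as f:
                readings[name] = int(f.read())
        except OSError:
            pass  # some zones are only readable by root
    return readings


def average_power_w(interval_s=1.0, base="/sys/class/powercap"):
    """Average power (watts) per zone from two counter snapshots."""
    before = read_rapl_energy_uj(base)
    time.sleep(interval_s)
    after = read_rapl_energy_uj(base)
    return {name: (after[name] - before[name]) / interval_s / 1e6
            for name in after if name in before}
```

A production tool would also read the per-zone `intel-rapl:N:M` subzones (cores, uncore, DRAM) and handle the counter wrapping back to zero.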

  5. Cache and energy efficient algorithms for Nussinov's RNA Folding.

    PubMed

    Zhao, Chunchun; Sahni, Sartaj

    2017-12-06

    An RNA folding/RNA secondary structure prediction algorithm determines the non-nested/pseudoknot-free structure by maximizing the number of complementary base pairs and minimizing the energy. Several implementations of Nussinov's classical RNA folding algorithm have been proposed. Our focus is to obtain run time and energy efficiency by reducing the number of cache misses. Three cache-efficient algorithms, ByRow, ByRowSegment and ByBox, for Nussinov's RNA folding are developed. Using a simple LRU cache model, we show that the Classical algorithm of Nussinov has the highest number of cache misses, followed by the algorithms Transpose (Li et al.), ByRow, ByRowSegment, and ByBox (in this order). Extensive experiments conducted on four computational platforms (Xeon E5, AMD Athlon 64 X2, Intel I7 and PowerPC A2) using two programming languages (C and Java) show that our cache-efficient algorithms are also efficient in terms of run time and energy. Our benchmarking shows that, depending on the computational platform and programming language, either ByRow or ByBox gives the best run time and energy performance. The C versions of these algorithms reduce run time by as much as 97.2% and energy consumption by as much as 88.8% relative to Classical, and by as much as 56.3% and 57.8% relative to Transpose. The Java versions reduce run time by as much as 98.3% relative to Classical and by as much as 75.2% relative to Transpose. Transpose achieves its run time and energy efficiency at the expense of memory, as it takes twice the memory required by Classical. The memory required by ByRow, ByRowSegment, and ByBox is the same as that of Classical. As a result, using the same amount of memory, the algorithms proposed by us can solve problems up to 40% larger than those solvable by Transpose.
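For reference, Nussinov's recurrence maximizes base pairs over all ways of pairing position i: M[i][j] = max(M[i+1][j], max over k in (i, j] with (i, k) complementary of 1 + M[i+1][k-1] + M[k+1][j]). A straightforward Classical-style Python sketch of that recurrence follows; the paper's contribution is the cache-efficient traversal order of this same DP table, which is not reproduced here.

```python
def nussinov(seq):
    """Maximum number of nested complementary base pairs (Nussinov DP).

    Allows Watson-Crick (A-U, G-C) and wobble (G-U) pairs; fills the
    upper-triangular table diagonal by diagonal, as in the Classical
    algorithm discussed in the abstract.
    """
    pairs = {("A", "U"), ("U", "A"), ("G", "C"),
             ("C", "G"), ("G", "U"), ("U", "G")}
    n = len(seq)
    M = [[0] * n for _ in range(n)]
    for span in range(1, n):           # subsequence length - 1
        for i in range(n - span):
            j = i + span
            best = M[i + 1][j]         # position i left unpaired
            for k in range(i + 1, j + 1):
                if (seq[i], seq[k]) in pairs:   # pair i with k
                    left = M[i + 1][k - 1] if k > i + 1 else 0
                    right = M[k + 1][j] if k < j else 0
                    best = max(best, 1 + left + right)
            M[i][j] = best
    return M[0][n - 1]
```

The diagonal-by-diagonal fill shown here is exactly the access pattern whose poor cache locality motivates the ByRow, ByRowSegment, and ByBox variants.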

  6. A review of emerging non-volatile memory (NVM) technologies and applications

    NASA Astrophysics Data System (ADS)

    Chen, An

    2016-11-01

    This paper will review emerging non-volatile memory (NVM) technologies, with the focus on phase change memory (PCM), spin-transfer-torque random-access-memory (STTRAM), resistive random-access-memory (RRAM), and ferroelectric field-effect-transistor (FeFET) memory. These promising NVM devices are evaluated in terms of their advantages, challenges, and applications. Their performance is compared based on reported parameters of major industrial test chips. Memory selector devices and cell structures are discussed. Changing market trends toward low power (e.g., mobile, IoT) and data-centric applications create opportunities for emerging NVMs. High-performance and low-cost emerging NVMs may simplify memory hierarchy, introduce non-volatility in logic gates and circuits, reduce system power, and enable novel architectures. Storage-class memory (SCM) based on high-density NVMs could fill the performance and density gap between memory and storage. Some unique characteristics of emerging NVMs can be utilized for novel applications beyond the memory space, e.g., neuromorphic computing, hardware security, etc. In the beyond-CMOS era, emerging NVMs have the potential to fulfill more important functions and enable more efficient, intelligent, and secure computing systems.

  7. An efficient spectral crystal plasticity solver for GPU architectures

    NASA Astrophysics Data System (ADS)

    Malahe, Michael

    2018-03-01

    We present a spectral crystal plasticity (CP) solver for graphics processing unit (GPU) architectures that achieves a tenfold increase in efficiency over prior GPU solvers. The approach makes use of a database containing a spectral decomposition of CP simulations performed using a conventional iterative solver over a parameter space of crystal orientations and applied velocity gradients. The key improvements in efficiency come from reducing global memory transactions, exposing more instruction-level parallelism, reducing integer instructions and performing fast range reductions on trigonometric arguments. The scheme also makes more efficient use of memory than prior work, allowing for larger problems to be solved on a single GPU. We illustrate these improvements with a simulation of 390 million crystal grains on a consumer-grade GPU, which executes at a rate of 2.72 s per strain step.

  8. Supporting shared data structures on distributed memory architectures

    NASA Technical Reports Server (NTRS)

    Koelbel, Charles; Mehrotra, Piyush; Vanrosendale, John

    1990-01-01

    Programming nonshared memory systems is more difficult than programming shared memory systems, since there is no support for shared data structures. Current programming languages for distributed memory architectures force the user to decompose all data structures into separate pieces, with each piece owned by one of the processors in the machine, and with all communication explicitly specified by low-level message-passing primitives. A new programming environment is presented for distributed memory architectures, providing a global name space and allowing direct access to remote parts of data values. The analysis and program transformations required to implement this environment are described, and the efficiency of the resulting code on the NCUBE/7 and IPSC/2 hypercubes is described.

  9. Immune memory: the basics and how to trigger an efficient long-term immune memory.

    PubMed

    Beverley, P C L

    2010-01-01

    Immunological memory consists of expanded clones of T and B lymphocytes that show an increased rate of cell division and shortened telomeres compared with naïve cells. However, exhaustion of clones is delayed by kinetic heterogeneity within clones, altered survival, and up-regulation of telomerase. Prolonged maintenance of protective B-cell immunity is T-cell dependent and requires a balance between plasma cells and memory B cells. Protective T-cell immunity likewise requires T cells of the correct quality that are located appropriately. Copyright 2009 Elsevier Ltd. All rights reserved.

  10. Memory monitoring by animals and humans

    NASA Technical Reports Server (NTRS)

    Smith, J. D.; Shields, W. E.; Allendoerfer, K. R.; Washburn, D. A.; Rumbaugh, D. M. (Principal Investigator)

    1998-01-01

    The authors asked whether animals and humans would similarly use an uncertain response to escape indeterminate memories. Monkeys and humans performed serial probe recognition tasks that produced differential memory difficulty across serial positions (e.g., primacy and recency effects). Participants were given an escape option that let them avoid any trials they wished and receive a hint to the trial's answer. Across species, across tasks, and even across conspecifics with sharper or duller memories, monkeys and humans used the escape option selectively when more indeterminate memory traces were probed. Their pattern of escaping always mirrored the pattern of their primary memory performance across serial positions. Signal-detection analyses confirm the similarity of the animals' and humans' performances. Optimality analyses assess their efficiency. Several aspects of the monkeys' performance suggest the cognitive sophistication of their decisions to escape.

  11. Practical Verification & Safeguard Tools for C/C++

    DTIC Science & Technology

    2007-11-01

    735; RDDC Valcartier; November 2007. This document is the final report of a research project conducted in 2005-2006. The goal of this project... Contents excerpt: 2.8 On Defects; 2.9 Memory Management Problems; 2.9.1 Use of Freed Memory; 2.9.2 Underallocated Memory for a

  12. Microprogramming Handbook. Second Edition.

    ERIC Educational Resources Information Center

    Microdata Corp., Santa Ana, CA.

    Instead of instructions residing in main memory as in a fixed-instruction computer, a microprogrammable computer has a separate read-only memory which is alterable so that the system can be efficiently adapted to the application at hand. Microprogrammable computers are faster than fixed-instruction computers for several reasons: instruction…

  13. Basic and Exceptional Calculation Abilities in a Calculating Prodigy: A Case Study.

    ERIC Educational Resources Information Center

    Pesenti, Mauro; Seron, Xavier; Samson, Dana; Duroux, Bruno

    1999-01-01

    Describes the basic and exceptional calculation abilities of a calculating prodigy whose performances were investigated in single- and multi-digit number multiplication, numerical comparison, raising of powers, and short-term memory tasks. Shows how his highly efficient long-term memory storage and retrieval processes, knowledge of calculation…

  14. Efficiency of the Prefrontal Cortex during Working Memory in Attention-Deficit/Hyperactivity Disorder

    ERIC Educational Resources Information Center

    Sheridan, Margaret A.; Hinshaw, Stephen; D'Esposito, Mark

    2007-01-01

    Objective: Previous research has demonstrated that during task conditions requiring an increase in inhibitory function or working memory, children and adults with attention-deficit/hyperactivity disorder (ADHD) exhibit greater and more varied prefrontal cortical (PFC) activation compared to age-matched control participants. This pattern may reflect…

  15. Adult Word Recognition and Visual Sequential Memory

    ERIC Educational Resources Information Center

    Holmes, V. M.

    2012-01-01

    Two experiments were conducted investigating the role of visual sequential memory skill in the word recognition efficiency of undergraduate university students. Word recognition was assessed in a lexical decision task using regularly and strangely spelt words, and nonwords that were either standard orthographically legal strings or items made from…

  16. Research on memory management in embedded systems

    NASA Astrophysics Data System (ADS)

    Huang, Xian-ying; Yang, Wu

    2005-12-01

    Memory is a scarce resource in embedded systems due to cost and size constraints. Unlike desktop applications, embedded applications cannot use memory freely, yet data and code must still be stored in memory to run. The purpose of this paper is to save memory when developing embedded applications and to guarantee operation under limited-memory conditions. Embedded systems often have little memory and are required to run for long periods. Thus, a further goal of this study is to construct an allocator that allocates memory effectively, withstands long-running operation, and reduces memory fragmentation and exhaustion. Fragmentation and exhaustion depend on the memory allocation algorithm; static memory allocation cannot produce fragmentation. This paper therefore seeks an effective dynamic allocation algorithm that reduces memory fragmentation. Data is the critical part that ensures an application can run regularly, and it takes up a large amount of memory; the amount of data that can be stored in a given amount of memory depends on the selected data structure. Techniques for designing application data in mobile phones are also explained and discussed.
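One standard way to get fragmentation-free behavior from a dynamic allocator, in the spirit of what the paper seeks, is a fixed-block pool: every block has the same size, so freeing and reallocating in any order can never produce external fragmentation, and both operations are O(1). An illustrative sketch (Python is used here for readability; a real embedded allocator would be C over a static buffer, and this simple design is my illustration, not the paper's algorithm):

```python
class FixedBlockPool:
    """Fixed-block pool allocator over a preallocated buffer.

    Because all blocks are the same size, any free/alloc sequence leaves
    the pool unfragmented; the trade-off is internal fragmentation when
    requests are smaller than the block size.
    """

    def __init__(self, block_size, n_blocks):
        self.block_size = block_size
        self.buffer = bytearray(block_size * n_blocks)  # the whole arena
        self.free = list(range(n_blocks))               # stack of free indices

    def alloc(self):
        """Return the index of a free block in O(1), or raise if exhausted."""
        if not self.free:
            raise MemoryError("pool exhausted")
        return self.free.pop()

    def release(self, idx):
        """Return a block to the pool in O(1)."""
        self.free.append(idx)

    def view(self, idx):
        """A writable view of block idx within the arena."""
        start = idx * self.block_size
        return memoryview(self.buffer)[start:start + self.block_size]
```

For mixed request sizes, embedded allocators typically keep several such pools (one per size class), which bounds fragmentation at the cost of some internal waste.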

  17. Preventing messaging queue deadlocks in a DMA environment

    DOEpatents

    Blocksome, Michael A; Chen, Dong; Gooding, Thomas; Heidelberger, Philip; Parker, Jeff

    2014-01-14

    Embodiments of the invention may be used to manage message queues in a parallel computing environment to prevent message queue deadlock. A direct memory access (DMA) controller of a compute node may determine when a messaging queue is full. In response, the DMA may generate an interrupt. An interrupt handler may stop the DMA and swap all descriptors from the full messaging queue into a larger queue (or enlarge the original queue). The interrupt handler then restarts the DMA. Alternatively, the interrupt handler stops the DMA, allocates a memory block to hold queue data, and then moves descriptors from the full messaging queue into the allocated memory block. The interrupt handler then restarts the DMA. During a normal messaging advance cycle, a messaging manager attempts to inject the descriptors in the memory block into other messaging queues until the descriptors have all been processed.
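The recovery path described above can be sketched schematically: when an injection finds the queue full, a handler moves all descriptors into a larger queue and processing resumes. This is a toy model with illustrative names, not code from the patent; it shows only the first alternative (swap into a larger queue), while the second alternative parks descriptors in an allocated memory block and re-injects them on later advance cycles.

```python
from collections import deque


class InjectionQueue:
    """Bounded descriptor queue standing in for a DMA injection FIFO."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.descriptors = deque()

    def inject(self, desc):
        """Enqueue a descriptor; a full queue raises, modeling the
        condition that triggers the DMA interrupt."""
        if len(self.descriptors) >= self.capacity:
            raise OverflowError("messaging queue full")
        self.descriptors.append(desc)


def on_queue_full(queue, growth_factor=2):
    """'Interrupt handler': swap all descriptors into a larger queue.

    In the patent this happens with the DMA stopped; here it is just a
    drain into a new queue of larger capacity, which is then returned
    so injection can resume without deadlock.
    """
    bigger = InjectionQueue(queue.capacity * growth_factor)
    while queue.descriptors:
        bigger.descriptors.append(queue.descriptors.popleft())
    return bigger
```

The essential deadlock-avoidance property is that the handler never blocks waiting for queue space; it always creates space before restarting the producer.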

  18. CoNNeCT Baseband Processor Module

    NASA Technical Reports Server (NTRS)

    Yamamoto, Clifford K; Jedrey, Thomas C.; Gutrich, Daniel G.; Goodpasture, Richard L.

    2011-01-01

    A document describes the CoNNeCT Baseband Processor Module (BPM) based on an updated processor, memory technology, and field-programmable gate arrays (FPGAs). The BPM was developed from a requirement to provide sufficient computing power and memory storage to conduct experiments for a Software Defined Radio (SDR) to be implemented. The flight SDR uses the AT697 SPARC processor with on-chip data and instruction caches. The non-volatile memory has been increased from a 20-Mbit EEPROM (electrically erasable programmable read-only memory) to a 4-Gbit Flash, managed by the RTAX2000 Housekeeper, allowing more programs and FPGA bit-files to be stored. The volatile memory has been increased from a 20-Mbit SRAM (static random access memory) to a 1.25-Gbit SDRAM (synchronous dynamic random access memory), providing additional memory space for more complex operating systems and programs to be executed on the SPARC. All memory is EDAC (error detection and correction) protected, while the SPARC processor implements fault protection via a TMR (triple modular redundancy) architecture. Further capability over prior BPM designs includes the addition of a second FPGA to implement features beyond the resources of a single FPGA. Both FPGAs are implemented with Xilinx Virtex-II parts and are interconnected by a 96-bit bus to facilitate data exchange. Dedicated 1.25-Gbit SDRAMs are wired to each Xilinx FPGA to accommodate high-rate data buffering for SDR applications as well as independent SpaceWire interfaces. The RTAX2000 manages scrubbing and configuration of each Xilinx FPGA.

  19. Differential effects of ADORA2A gene variations in pre-attentive visual sensory memory subprocesses.

    PubMed

    Beste, Christian; Stock, Ann-Kathrin; Ness, Vanessa; Epplen, Jörg T; Arning, Larissa

    2012-08-01

    The ADORA2A gene encodes the adenosine A(2A) receptor that is highly expressed in the striatum where it plays a role in modulating glutamatergic and dopaminergic transmission. Glutamatergic signaling has been suggested to play a pivotal role in cognitive functions related to the pre-attentive processing of external stimuli. Yet, the precise molecular mechanism of these processes is poorly understood. Therefore, we aimed to investigate whether ADORA2A gene variation has modulating effects on visual pre-attentive sensory memory processing. Studying two polymorphisms, rs5751876 and rs2298383, in 199 healthy control subjects who performed a partial-report paradigm, we find that ADORA2A variation is associated with differences in the efficiency of pre-attentive sensory memory sub-processes. We show that especially the initial visual availability of stimulus information is rendered more efficiently in the homozygous rare genotype groups. Processes related to the transfer of information into working memory and the duration of visual sensory (iconic) memory are compromised in the homozygous rare genotype groups. Our results show a differential genotype-dependent modulation of pre-attentive sensory memory sub-processes. Hence, we assume that this modulation may be due to differential effects of increased adenosine A(2A) receptor signaling on glutamatergic transmission and striatal medium spiny neuron (MSN) interaction. Copyright © 2011 Elsevier B.V. and ECNP. All rights reserved.

  20. Optical tomographic memories: algorithms for the efficient information readout

    NASA Astrophysics Data System (ADS)

    Pantelic, Dejan V.

    1990-07-01

    Tomographic algorithms are modified in order to reconstruct information previously stored by focusing laser radiation in a volume of photosensitive media. A priori information about the position of bits of information is used. 1. THE PRINCIPLES OF TOMOGRAPHIC MEMORIES. Tomographic principles can be used to store and reconstruct information artificially stored in the bulk of a photosensitive medium. The information is stored by changing some characteristic of the memory material (e.g., refractive index). Radiation from two independent light sources (e.g., lasers) is focused inside the memory material. In this way the intensity of the light is above threshold only at the localized point where the light rays intersect. By scanning the material, the information can be stored in binary or n-ary format. Once the information is stored, it can be read by tomographic methods. However, the situation is quite different from the classical tomographic problem: here a great deal of a priori information is available regarding the positions of the bits of information, the profile representing a single bit, and the mode of operation (binary or n-ary). 2. ALGORITHMS FOR THE READOUT OF TOMOGRAPHIC MEMORIES. This a priori information enables efficient reconstruction of the memory contents. In this paper a few methods for information readout, together with simulation results, are presented. Special attention is given to noise considerations. Two different

  1. Working memory and the strategic control of attention in older and younger adults.

    PubMed

    Hayes, Melissa G; Kelly, Andrew J; Smith, Anderson D

    2013-03-01

    The objective of this study was to investigate the effects of aging on the strategic control of attention and the extent to which this relationship is mediated by working memory capacity (WMC). This study also sought to investigate boundary conditions wherein age differences in selectivity may occur. Across 2 studies, the value-directed remembering task used by Castel and colleagues (Castel, A. D., Balota, D. A., & McCabe, D. P. (2009). Memory efficiency and the strategic control of attention at encoding: Impairments of value-directed remembering in Alzheimer's Disease. Neuropsychology, 23, 297-306) was modified to include value-directed forgetting. Study 2 incorporated valence as an additional task demand, and age differences were predicted in both studies due to increased demands of controlled processing. Automated operation span and Stroop span were included as working memory measures, and working memory was predicted to mediate performance. Results confirmed these predictions, as older adults were less efficient in maximizing selectivity scores when high demands were placed on selectivity processes, and working memory was found to mediate performance on this task. When list length was increased from previous studies and participants were required to actively forget negative-value words, older adults were not able to selectively encode high-value information to the same degree as younger adults. Furthermore, WMC appears to support the ability to selectively encode information.

  2. Detailed Design and Implementation of a Multiprogramming Operating System for Sixteen-Bit Microprocessors.

    DTIC Science & Technology

    1983-12-01

    Multiuser Support ... User Interface ... Inter-User Communications ... Memory ... user will greatly help facilitate the learning process. Inter-User Communication: The inter-user communications of the operating system can be done using ... inter-user communications would be met by using one or both of them. Memory and File Management: Memory and file management is concerned with four basic ...

  3. Advanced Development of Certified OS Kernels

    DTIC Science & Technology

    2015-06-01

    It provides an infrastructure to map a physical page into multiple processes’ page maps in different address spaces. Their ownership mechanism ensures ... of their shared memory infrastructure. Trap module: The trap module specifies the behaviors of exception handlers and mCertiKOS system calls. In ... layers), 1 pm for the shared memory infrastructure (3 layers), 3.5 pm for the thread management (10 layers), 1 pm for the process management (4 layers ...

  4. Efficient frequent pattern mining algorithm based on node sets in cloud computing environment

    NASA Astrophysics Data System (ADS)

    Billa, V. N. Vinay Kumar; Lakshmanna, K.; Rajesh, K.; Reddy, M. Praveen Kumar; Nagaraja, G.; Sudheer, K.

    2017-11-01

    The ultimate goal of data mining is to discover hidden information that is useful for decision making in the large databases collected by an organization. Data mining involves many tasks, and mining frequent itemsets is one of the most important for transactional databases. These databases hold data at a very large scale, so mining them consumes physical memory and time in proportion to the size of the database. A frequent pattern mining algorithm is efficient only if it consumes little memory and time to mine the frequent itemsets from a given large database. With these points in mind, we propose a system that mines frequent itemsets in a way optimized for memory and time, using cloud computing to parallelize the process and providing the application as a service. The complete framework uses a proven, efficient algorithm called FIN, which operates on Nodesets and a POC (pre-order coding) tree. To evaluate the performance of the system, we conduct experiments comparing the efficiency of the same algorithm applied in a standalone manner and in a cloud computing environment, on a real-world data set of traffic accidents. The results show that the memory consumption and execution time of the proposed system are much lower than those of the standalone system.
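
    The abstract does not detail FIN's Nodeset and POC-tree machinery, so as a point of reference, here is a minimal, naive level-wise (Apriori-style) frequent-itemset miner in Python; the transaction data and `min_support` threshold are hypothetical, and this sketches only the task FIN solves, not the FIN algorithm itself.

```python
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Naive level-wise (Apriori-style) frequent-itemset mining.

    Returns a dict mapping each frequent itemset (as a frozenset) to its
    support count. Illustrates the task FIN solves; this is NOT the
    FIN/Nodeset algorithm itself.
    """
    items = {i for t in transactions for i in t}
    frequent = {}
    k = 1
    candidates = [frozenset([i]) for i in items]
    while candidates:
        # Count support of each candidate (subset test per transaction).
        counts = {c: sum(c <= t for t in transactions) for c in candidates}
        level = {c: n for c, n in counts.items() if n >= min_support}
        frequent.update(level)
        k += 1
        # Join step: next-level candidates are unions of frequent itemsets.
        keys = list(level)
        candidates = [c for c in {a | b for a, b in combinations(keys, 2)}
                      if len(c) == k]
    return frequent
```

    On a toy set of five transactions with `min_support = 3`, this returns the frequent single items and pairs with their exact support counts.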

  5. Spin-transfer-torque efficiency enhanced by edge-damage of perpendicular magnetic random access memories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Kyungmi; Lee, Kyung-Jin, E-mail: kj-lee@korea.ac.kr; Department of Materials Science and Engineering, Korea University, Seoul 136-713

    2015-08-07

    We numerically investigate the effect of magnetic and electrical damage at the edge of a perpendicular magnetic random access memory (MRAM) cell on the spin-transfer-torque (STT) efficiency, defined as the ratio of the thermal stability factor to the switching current. We find that the switching mode of an edge-damaged cell is different from that of an undamaged cell, which results in a sizable reduction in the switching current. Together with a marginal reduction of the thermal stability factor of an edge-damaged cell, this feature makes the STT efficiency large. Our results suggest that precise edge control is viable for the optimization of STT-MRAM.

  6. SLIC superpixels compared to state-of-the-art superpixel methods.

    PubMed

    Achanta, Radhakrishna; Shaji, Appu; Smith, Kevin; Lucchi, Aurelien; Fua, Pascal; Süsstrunk, Sabine

    2012-11-01

    Computer vision applications have come to rely increasingly on superpixels in recent years, but it is not always clear what constitutes a good superpixel algorithm. In an effort to understand the benefits and drawbacks of existing methods, we empirically compare five state-of-the-art superpixel algorithms for their ability to adhere to image boundaries, speed, memory efficiency, and their impact on segmentation performance. We then introduce a new superpixel algorithm, simple linear iterative clustering (SLIC), which adapts a k-means clustering approach to efficiently generate superpixels. Despite its simplicity, SLIC adheres to boundaries as well as or better than previous methods. At the same time, it is faster and more memory efficient, improves segmentation performance, and is straightforward to extend to supervoxel generation.
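
    As a rough illustration of the k-means adaptation behind SLIC, the following is a simplified sketch for a 2-D grayscale image. Real SLIC works in CIELAB colour, limits each centre's search to a 2S x 2S window, and enforces connectivity; the grid spacing `S`, `compactness` weighting, and iteration count here are simplified assumptions.

```python
import numpy as np

def slic_like(image, n_segments=4, compactness=10.0, n_iters=5):
    """Simplified SLIC-style superpixels on a 2-D grayscale image.

    Centers start on a regular grid; each pixel is assigned to the
    nearest center under a joint (intensity, spatial) distance, and
    centers are updated k-means style. Unlike real SLIC, this toy
    version searches globally instead of within 2S x 2S windows.
    """
    h, w = image.shape
    S = int(np.sqrt(h * w / n_segments))        # grid interval
    ys, xs = np.mgrid[0:h, 0:w]
    # Scale spatial coordinates so `compactness` trades colour vs. space.
    feats = np.stack([image.astype(float),
                      ys * compactness / S,
                      xs * compactness / S], axis=-1)
    cy = np.arange(S // 2, h, S)
    cx = np.arange(S // 2, w, S)
    centers = np.array([feats[y, x] for y in cy for x in cx])
    flat = feats.reshape(-1, 3)
    for _ in range(n_iters):
        d = ((flat[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(len(centers)):           # k-means center update
            pts = flat[labels == j]
            if len(pts):
                centers[j] = pts.mean(0)
    return labels.reshape(h, w)
```

    On an image split into a dark and a bright half, pixels on opposite sides of the edge end up in different superpixels, since the intensity term dominates the joint distance.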

  7. An FMM-FFT Accelerated SIE Simulator for Analyzing EM Wave Propagation in Mine Environments Loaded With Conductors

    PubMed Central

    Sheng, Weitian; Zhou, Chenming; Liu, Yang; Bagci, Hakan; Michielssen, Eric

    2018-01-01

    A fast and memory efficient three-dimensional full-wave simulator for analyzing electromagnetic (EM) wave propagation in electrically large and realistic mine tunnels/galleries loaded with conductors is proposed. The simulator relies on Muller and combined field surface integral equations (SIEs) to account for scattering from mine walls and conductors, respectively. During the iterative solution of the system of SIEs, the simulator uses a fast multipole method-fast Fourier transform (FMM-FFT) scheme to reduce CPU and memory requirements. The memory requirement is further reduced by compressing large data structures via singular value and Tucker decompositions. The efficiency, accuracy, and real-world applicability of the simulator are demonstrated through characterization of EM wave propagation in electrically large mine tunnels/galleries loaded with conducting cables and mine carts. PMID:29726545
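
    The abstract mentions compressing large data structures via singular value and Tucker decompositions; the matrix (SVD) case can be sketched generically with NumPy as follows. This is a textbook truncated SVD, not the simulator's actual compression code.

```python
import numpy as np

def svd_compress(A, rank):
    """Compress matrix A by keeping its top-`rank` singular triplets.

    Storage drops from m*n values to rank*(m + n + 1), and by the
    Eckart-Young theorem the truncated product is the best rank-`rank`
    approximation of A in the Frobenius norm.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :rank], s[:rank], Vt[:rank, :]

def svd_decompress(U, s, Vt):
    """Rebuild the (approximate) matrix from its truncated factors."""
    return (U * s) @ Vt
```

    For a matrix that is exactly low rank, the round trip is lossless up to floating-point error; for full-rank data it degrades gracefully as `rank` shrinks.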

  8. Wide-Range Motion Estimation Architecture with Dual Search Windows for High Resolution Video Coding

    NASA Astrophysics Data System (ADS)

    Dung, Lan-Rong; Lin, Meng-Chun

    This paper presents a memory-efficient motion estimation (ME) technique for high-resolution video compression. The main objective is to reduce external memory access, especially when local memory is limited; reducing memory access in turn saves substantial power. The key to reducing memory accesses is a center-biased algorithm, which performs the motion vector (MV) search with the minimum of search data. To preserve data reusability, the proposed dual-search-windowing (DSW) approach loads the secondary search window only when the search requires it. This alleviates the loading of search windows and hence reduces the required external memory bandwidth. The proposed techniques save up to 81% of external memory bandwidth and require only 135 MBytes/sec, while quality degradation is less than 0.2 dB for 720p HDTV clips coded at 8 Mbits/sec.
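
    A center-biased search of the kind the paper builds on can be sketched as a small-diamond greedy descent over SAD costs. The block size and step limit below are illustrative assumptions, and the dual-search-window logic itself is not reproduced.

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equally sized blocks."""
    return int(np.abs(a.astype(np.int64) - b.astype(np.int64)).sum())

def center_biased_search(ref, cur, bx, by, bs=8, max_steps=16):
    """Small-diamond, center-biased motion search for one block of `cur`.

    Starts at the zero motion vector and greedily moves to the best of
    the four diamond neighbours until no neighbour improves the SAD.
    Illustrative of center-biased search only -- not the paper's
    dual-search-window hardware design.
    """
    block = cur[by:by + bs, bx:bx + bs]
    mvx = mvy = 0
    best = sad(ref[by:by + bs, bx:bx + bs], block)
    for _ in range(max_steps):
        cand = [(mvx + dx, mvy + dy)
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
        improved = False
        for nx, ny in cand:
            x, y = bx + nx, by + ny
            if 0 <= x <= ref.shape[1] - bs and 0 <= y <= ref.shape[0] - bs:
                cost = sad(ref[y:y + bs, x:x + bs], block)
                if cost < best:
                    best, mvx, mvy, improved = cost, nx, ny, True
        if not improved:
            break
    return mvx, mvy
```

    Because only the current diamond neighbourhood is examined, the search touches far fewer reference pixels than an exhaustive window scan, which is exactly the property that lets such algorithms cut external memory traffic.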

  9. Fast, noise-free memory for photon synchronization at room temperature.

    PubMed

    Finkelstein, Ran; Poem, Eilon; Michel, Ohad; Lahad, Ohr; Firstenberg, Ofer

    2018-01-01

    Future quantum photonic networks require coherent optical memories for synchronizing quantum sources and gates of probabilistic nature. We demonstrate a fast ladder memory (FLAME) mapping the optical field onto the superposition between electronic orbitals of rubidium vapor. Using a ladder-level system of orbital transitions with nearly degenerate frequencies simultaneously enables high bandwidth, low noise, and long memory lifetime. We store and retrieve 1.7-ns-long pulses, containing 0.5 photons on average, and observe short-time external efficiency of 25%, memory lifetime (1/ e ) of 86 ns, and below 10 -4 added noise photons. Consequently, coupling this memory to a probabilistic source would enhance the on-demand photon generation probability by a factor of 12, the highest number yet reported for a noise-free, room temperature memory. This paves the way toward the controlled production of large quantum states of light from probabilistic photon sources.

  10. Cognitive Control Network Contributions to Memory-Guided Visual Attention

    PubMed Central

    Rosen, Maya L.; Stern, Chantal E.; Michalka, Samantha W.; Devaney, Kathryn J.; Somers, David C.

    2016-01-01

    Visual attentional capacity is severely limited, but humans excel in familiar visual contexts, in part because long-term memories guide efficient deployment of attention. To investigate the neural substrates that support memory-guided visual attention, we performed a set of functional MRI experiments that contrast long-term, memory-guided visuospatial attention with stimulus-guided visuospatial attention in a change detection task. Whereas the dorsal attention network was activated for both forms of attention, the cognitive control network (CCN) was preferentially activated during memory-guided attention. Three posterior nodes in the CCN, posterior precuneus, posterior callosal sulcus/mid-cingulate, and lateral intraparietal sulcus exhibited the greatest specificity for memory-guided attention. These 3 regions exhibit functional connectivity at rest, and we propose that they form a subnetwork within the broader CCN. Based on the task activation patterns, we conclude that the nodes of this subnetwork are preferentially recruited for long-term memory guidance of visuospatial attention. PMID:25750253

  11. Neuroanatomic organization of sound memory in humans.

    PubMed

    Kraut, Michael A; Pitcock, Jeffery A; Calhoun, Vince; Li, Juan; Freeman, Thomas; Hart, John

    2006-11-01

    The neural interface between sensory perception and memory is a central issue in neuroscience, particularly initial memory organization following perceptual analyses. We used functional magnetic resonance imaging to identify anatomic regions extracting initial auditory semantic memory information related to environmental sounds. Two distinct anatomic foci were detected in the right superior temporal gyrus when subjects identified sounds representing either animals or threatening items. Threatening animal stimuli elicited signal changes in both foci, suggesting a distributed neural representation. Our results demonstrate both category- and feature-specific responses to nonverbal sounds in early stages of extracting semantic memory information from these sounds. This organization allows for these category-feature detection nodes to extract early, semantic memory information for efficient processing of transient sound stimuli. Neural regions selective for threatening sounds are similar to those of nonhuman primates, demonstrating semantic memory organization for basic biological/survival primitives are present across species.

  12. Impaired quality and efficiency of sleep impairs cognitive functioning in Addison's disease.

    PubMed

    Henry, Michelle; Ross, Ian Louis; Wolf, Pedro Sofio Abril; Thomas, Kevin Garth Flusk

    2017-04-01

    Standard replacement therapy for Addison's disease (AD) does not restore a normal circadian rhythm. Periods of sub- and supra- physiological cortisol levels experienced by patients with AD likely induce disrupted sleep. Given that healthy sleep plays an important role in memory consolidation, the novelty of the current study was to characterise, using objective measures, the relationship between sleep and memory in patients with AD, and to examine the hypothesis that poor sleep is a biological mechanism underlying memory impairment in those patients. We used a within-subjects design. Ten patients with AD and 10 matched healthy controls completed standardised neuropsychological tests assessing declarative memory (Rey Auditory Verbal Learning Test) and procedural memory (Finger Tapping Task) before and after a period of actigraphy-measured sleep, and before and after a period of waking. Relative to healthy controls, patients with AD experienced disrupted sleep characterised by poorer sleep efficiency and more time spent awake. Patients also showed impaired verbal learning and memory relative to healthy controls (p=0.007). Furthermore, whereas healthy controls' declarative memory performance benefited from a period of sleep compared to waking (p=0.032), patients with AD derived no such benefit from sleep (p=0.448). Regarding the procedural memory task, analyses detected no significant between-group differences (all p's<0.065), and neither group showed significant sleep-enhanced performance. We demonstrated, using actigraphy and standardized measures of memory performance, an association between sleep disturbances and cognitive deficits in patients with AD. These results suggest that, in patients with AD, the source of memory deficits is, at least to some extent, disrupted sleep patterns that interfere with optimal consolidation of previously-learned declarative information. 
Hence, treating the sleep disturbances that are frequently experienced by patients with AD may improve their cognitive functioning. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Advanced light source technologies that enable high-volume manufacturing of DUV lithography extensions

    NASA Astrophysics Data System (ADS)

    Cacouris, Theodore; Rao, Rajasekhar; Rokitski, Rostislav; Jiang, Rui; Melchior, John; Burfeindt, Bernd; O'Brien, Kevin

    2012-03-01

    Deep UV (DUV) lithography is being applied to pattern increasingly finer geometries, leading to solutions like double- and multiple-patterning. Such process complexities lead to higher costs due to the increasing number of steps required to produce the desired results. One of the consequences is that the lithography equipment needs to provide higher operating efficiencies to minimize the cost increases, especially for producers of memory devices that experience a rapid decline in sales prices of these products over time. In addition to having introduced higher power 193nm light sources to enable higher throughput, we previously described technologies that also enable: higher tool availability via advanced discharge chamber gas management algorithms; improved process monitoring via enhanced on-board beam metrology; and increased depth of focus (DOF) via light source bandwidth modulation. In this paper we will report on the field performance of these technologies with data that supports the desired improvements in on-wafer performance and operational efficiencies.

  14. A GPU accelerated PDF transparency engine

    NASA Astrophysics Data System (ADS)

    Recker, John; Lin, I.-Jong; Tastl, Ingeborg

    2011-01-01

    As commercial printing presses become faster, cheaper and more efficient, so too must the Raster Image Processors (RIP) that prepare data for them to print. Digital press RIPs, however, have been challenged to on the one hand meet the ever increasing print performance of the latest digital presses, and on the other hand process increasingly complex documents with transparent layers and embedded ICC profiles. This paper explores the challenges encountered when implementing a GPU accelerated driver for the open source Ghostscript Adobe PostScript and PDF language interpreter targeted at accelerating PDF transparency for high speed commercial presses. It further describes our solution, including an image memory manager for tiling input and output images and documents, a PDF compatible multiple image layer blending engine, and a GPU accelerated ICC v4 compatible color transformation engine. The result, we believe, is the foundation for a scalable, efficient, distributed RIP system that can meet current and future RIP requirements for a wide range of commercial digital presses.
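
    The layer-blending step such an engine performs can be illustrated with the basic source-over ("Normal" blend mode) operator of the PDF transparency model. This toy sketch ignores the other blend modes, transparency groups, and ICC colour conversion that the full engine described above must handle.

```python
def composite_over(backdrop, layer, alpha):
    """Source-over compositing of one RGB layer onto a backdrop.

    `backdrop` and `layer` are (r, g, b) tuples in [0, 1]; `alpha` is
    the layer's opacity. This is only the 'Normal' blend mode of the
    PDF transparency model.
    """
    return tuple(alpha * c_l + (1.0 - alpha) * c_b
                 for c_l, c_b in zip(layer, backdrop))

def flatten(layers, background=(1.0, 1.0, 1.0)):
    """Flatten a bottom-to-top stack of (rgb, alpha) layers onto a page."""
    out = background
    for rgb, a in layers:
        out = composite_over(out, rgb, a)
    return out
```

    A GPU implementation applies the same per-pixel arithmetic across whole image tiles in parallel, which is why tiling the input and output images (as the memory manager above does) maps naturally onto this computation.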

  15. An Energy-Aware Runtime Management of Multi-Core Sensory Swarms.

    PubMed

    Kim, Sungchan; Yang, Hoeseok

    2017-08-24

    In sensory swarms, minimizing energy consumption under a performance constraint is one of the key objectives. One possible approach to this problem is to monitor application workload that is subject to change at runtime, and to adjust system configuration adaptively to satisfy the performance goal. As today's sensory swarms are usually implemented using multi-core processors with adjustable clock frequency, we propose to monitor the CPU workload periodically and adjust the task-to-core allocation or clock frequency in an energy-efficient way in response to the workload variations. In doing so, we present an online heuristic that determines the most energy-efficient adjustment that satisfies the performance requirement. The proposed method is based on a simple yet effective energy model that is built upon performance prediction using IPC (instructions per cycle) measured online and a power equation derived empirically. The use of IPC accounts for memory intensities of a given workload, enabling the accurate prediction of execution time. Hence, the model allows us to rapidly and accurately estimate the effect of the two control knobs, clock frequency adjustment and core allocation. The experiments show that the proposed technique delivers considerable energy saving of up to 45% compared to the state-of-the-art multi-core energy management technique.
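
    The control loop described above can be sketched as follows; the Amdahl-style time predictor, the `parallel_frac` value, and the `power_model` stand-in for the paper's empirically fitted power equation are all illustrative assumptions, not the authors' actual model.

```python
def predict_time(instructions, ipc, freq_hz, n_cores, parallel_frac=0.95):
    """Predict execution time from measured IPC (Amdahl-style scaling).

    IPC folds in memory intensity: memory-bound phases show low IPC,
    which automatically lengthens the predicted execution time.
    """
    cycles = instructions / ipc
    serial = cycles * (1 - parallel_frac)
    parallel = cycles * parallel_frac / n_cores
    return (serial + parallel) / freq_hz

def pick_config(instructions, ipc, deadline_s, freqs_hz, max_cores,
                power_model):
    """Return the (freq, cores) pair with the lowest predicted energy
    that still meets the deadline, or None if nothing is feasible.

    `power_model(freq, cores)` is a hypothetical stand-in for the
    paper's empirically derived power equation.
    """
    best = None
    for f in freqs_hz:
        for n in range(1, max_cores + 1):
            t = predict_time(instructions, ipc, f, n)
            if t <= deadline_s:
                energy = power_model(f, n) * t
                if best is None or energy < best[0]:
                    best = (energy, f, n)
    return None if best is None else (best[1], best[2])
```

    With a cubic-in-frequency power model, the search tends to prefer running more cores at a lower clock over fewer cores at a high clock, which is the classic race-to-idle vs. DVFS trade-off the heuristic navigates.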

  16. An Energy-Aware Runtime Management of Multi-Core Sensory Swarms

    PubMed Central

    Kim, Sungchan

    2017-01-01

    In sensory swarms, minimizing energy consumption under a performance constraint is one of the key objectives. One possible approach to this problem is to monitor application workload that is subject to change at runtime, and to adjust system configuration adaptively to satisfy the performance goal. As today’s sensory swarms are usually implemented using multi-core processors with adjustable clock frequency, we propose to monitor the CPU workload periodically and adjust the task-to-core allocation or clock frequency in an energy-efficient way in response to the workload variations. In doing so, we present an online heuristic that determines the most energy-efficient adjustment that satisfies the performance requirement. The proposed method is based on a simple yet effective energy model that is built upon performance prediction using IPC (instructions per cycle) measured online and a power equation derived empirically. The use of IPC accounts for memory intensities of a given workload, enabling the accurate prediction of execution time. Hence, the model allows us to rapidly and accurately estimate the effect of the two control knobs, clock frequency adjustment and core allocation. The experiments show that the proposed technique delivers considerable energy saving of up to 45% compared to the state-of-the-art multi-core energy management technique. PMID:28837094

  17. Context- and Template-Based Compression for Efficient Management of Data Models in Resource-Constrained Systems.

    PubMed

    Macho, Jorge Berzosa; Montón, Luis Gardeazabal; Rodriguez, Roberto Cortiñas

    2017-08-01

    The Cyber Physical Systems (CPS) paradigm is based on the deployment of interconnected heterogeneous devices and systems, so interoperability is at the heart of any CPS architecture design. In this sense, the adoption of standard and generic data formats for data representation and communication, e.g., XML or JSON, effectively addresses the interoperability problem among heterogeneous systems. Nevertheless, the verbosity of those standard data formats usually demands system resources that might suppose an overload for the resource-constrained devices that are typically deployed in CPS. In this work we present Context- and Template-based Compression (CTC), a data compression approach targeted to resource-constrained devices, which allows reducing the resources needed to transmit, store and process data models. Additionally, we provide a benchmark evaluation and comparison with current implementations of the Efficient XML Interchange (EXI) processor, which is promoted by the World Wide Web Consortium (W3C), and it is the most prominent XML compression mechanism nowadays. Interestingly, the results from the evaluation show that CTC outperforms EXI implementations in terms of memory usage and speed, keeping similar compression rates. As a conclusion, CTC is shown to be a good candidate for managing standard data model representation formats in CPS composed of resource-constrained devices.
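
    CTC's wire format is not specified in the abstract; the following toy codec conveys the underlying idea of schema-informed (context/template) compression -- sender and receiver share the key template out of band, so each message carries only its values. The key names are hypothetical, and this mimics the concept behind CTC and EXI, not their actual formats.

```python
import json

def make_codec(template_keys):
    """Toy template/context compression for flat JSON objects.

    Both ends agree on `template_keys` (the shared 'context') ahead of
    time, so the payload is just an ordered list of values -- the
    verbose, repeated key names never travel over the wire.
    """
    def compress(obj):
        assert set(obj) == set(template_keys)
        return json.dumps([obj[k] for k in template_keys],
                          separators=(",", ":"))

    def decompress(payload):
        return dict(zip(template_keys, json.loads(payload)))

    return compress, decompress
```

    Even in this trivial form, the payload is shorter than the plain JSON object, and decoding is a cheap positional zip rather than a full parse of key/value pairs -- the same memory/speed advantage the benchmark above measures at a much larger scale.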

  18. Context- and Template-Based Compression for Efficient Management of Data Models in Resource-Constrained Systems

    PubMed Central

    Montón, Luis Gardeazabal

    2017-01-01

    The Cyber Physical Systems (CPS) paradigm is based on the deployment of interconnected heterogeneous devices and systems, so interoperability is at the heart of any CPS architecture design. In this sense, the adoption of standard and generic data formats for data representation and communication, e.g., XML or JSON, effectively addresses the interoperability problem among heterogeneous systems. Nevertheless, the verbosity of those standard data formats usually demands system resources that might suppose an overload for the resource-constrained devices that are typically deployed in CPS. In this work we present Context- and Template-based Compression (CTC), a data compression approach targeted to resource-constrained devices, which allows reducing the resources needed to transmit, store and process data models. Additionally, we provide a benchmark evaluation and comparison with current implementations of the Efficient XML Interchange (EXI) processor, which is promoted by the World Wide Web Consortium (W3C), and it is the most prominent XML compression mechanism nowadays. Interestingly, the results from the evaluation show that CTC outperforms EXI implementations in terms of memory usage and speed, keeping similar compression rates. As a conclusion, CTC is shown to be a good candidate for managing standard data model representation formats in CPS composed of resource-constrained devices. PMID:28763013

  19. Impact of Remote Monitoring on Clinical Outcomes.

    PubMed

    Varma, Niraj; Ricci, Renato Pietro

    2015-12-01

    Follow-up of patients with cardiac implantable electronic devices is challenging due to both their increasing volume and technical complexity coupled to increasing clinical complexity of recipient patients. Remote monitoring (RM) offers an opportunity to resolve some of these difficulties by improving clinic efficiencies and providing a mechanism for device monitoring and patient management. Several recent randomized clinical trials and registries have demonstrated that RM may reduce in-hospital visit numbers, time required for patient follow-up, physician and nurse time, and hospital and social costs. Furthermore, patient retention and adherence to follow-up schedule are significantly improved by RM. Continuous wireless monitoring of data stored in the device memory with automatic alerts allows early detection of device malfunctions and of events, such as atrial fibrillation, ventricular arrhythmias, and heart failure suitable for clinical intervention. Early reaction may improve patient outcome. RM is easy to use and patients showed a high level of acceptance and satisfaction. Implementing RM in daily practice may require changes in clinic workflow. New organizational models promote significant efficiencies regarding physician and nursing time. Data management techniques are under development. Despite these demonstrable advantages of RM, adoption still remains modest, even in health care systems incentivized to use this follow-up method. © 2015 Wiley Periodicals, Inc.

  20. A Component-Based FPGA Design Framework for Neuronal Ion Channel Dynamics Simulations

    PubMed Central

    Mak, Terrence S. T.; Rachmuth, Guy; Lam, Kai-Pui; Poon, Chi-Sang

    2008-01-01

    Neuron-machine interfaces such as dynamic clamp and brain-implantable neuroprosthetic devices require real-time simulations of neuronal ion channel dynamics. Field Programmable Gate Array (FPGA) has emerged as a high-speed digital platform ideal for such application-specific computations. We propose an efficient and flexible component-based FPGA design framework for neuronal ion channel dynamics simulations, which overcomes certain limitations of the recently proposed memory-based approach. A parallel processing strategy is used to minimize computational delay, and a hardware-efficient factoring approach for calculating exponential and division functions in neuronal ion channel models is used to conserve resource consumption. Performances of the various FPGA design approaches are compared theoretically and experimentally in corresponding implementations of the AMPA and NMDA synaptic ion channel models. Our results suggest that the component-based design framework provides a more memory economic solution as well as more efficient logic utilization for large word lengths, whereas the memory-based approach may be suitable for time-critical applications where a higher throughput rate is desired. PMID:17190033
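
    The paper's hardware-efficient factoring of the exponential is not detailed in the abstract, but the standard range-reduction identity exp(x) = 2^k * exp(r) that such designs commonly exploit can be sketched in software. The 5th-order polynomial below is an illustrative choice, not the paper's FPGA pipeline.

```python
import math

def exp_approx(x):
    """Approximate exp(x) by factoring: exp(x) = 2**k * exp(r).

    Choosing k = round(x / ln 2) bounds |r| by ln(2)/2, so a short
    polynomial suffices for exp(r); multiplying by 2**k is a cheap
    shift in fixed-point hardware. Illustrates the factoring idea
    only -- the paper's FPGA implementation is not reproduced here.
    """
    ln2 = math.log(2.0)
    k = int(round(x / ln2))
    r = x - k * ln2                      # range-reduced argument
    # Horner-form Taylor polynomial for exp(r), |r| <= ln(2)/2
    poly = 1.0 + r * (1.0 + r / 2 * (1.0 + r / 3 * (1.0 + r / 4 * (1.0 + r / 5))))
    return (2.0 ** k) * poly
```

    Restricting the polynomial to a small argument range is what keeps the evaluation cheap: the same order of polynomial over an unreduced range would lose many digits of accuracy.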

  1. Pushing typists back on the learning curve: Memory chunking in the hierarchical control of skilled typewriting.

    PubMed

    Yamaguchi, Motonori; Logan, Gordon D

    2016-12-01

    Hierarchical control of skilled performance depends on the ability of higher level control to process several lower level units as a single chunk. The present study investigated the development of hierarchical control of skilled typewriting, focusing on the process of memory chunking. In the first 3 experiments, skilled typists typed words or nonwords under concurrent memory load. Memory chunks developed and consolidated into long-term memory when the same typing materials were repeated in 6 consecutive trials, but chunks did not develop when repetitions were spaced. However, when concurrent memory load was removed during training, memory chunks developed more efficiently with longer lags between repetitions than shorter lags. From these results, it is proposed that memory chunking requires 2 representations of the same letter string to be maintained simultaneously in short-term memory: 1 representation from the current trial, and the other from an earlier trial that is either retained from the immediately preceding trial or retrieved from long-term memory (i.e., study state retrieval). (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  2. Undermining belief in false memories leads to less efficient problem-solving behaviour.

    PubMed

    Wang, Jianqin; Otgaar, Henry; Howe, Mark L; Smeets, Tom; Merckelbach, Harald; Nahouli, Zacharia

    2017-08-01

    Memories of events for which the belief in the occurrence of those events is undermined, but recollection is retained, are called nonbelieved memories (NBMs). The present experiments examined the effects of NBMs on subsequent problem-solving behaviour. In Experiment 1, we challenged participants' beliefs in their memories and examined whether NBMs affected subsequent solution rates on insight-based problems. True and false memories were elicited using the Deese/Roediger-McDermott (DRM) paradigm. Then participants' belief in true and false memories was challenged by telling them the item had not been presented. We found that when the challenge led to undermining belief in false memories, fewer problems were solved than when belief was not challenged. In Experiment 2, a similar procedure was used except that some participants solved the problems one week rather than immediately after the feedback. Again, our results showed that undermining belief in false memories resulted in lower problem solution rates. These findings suggest that for false memories, belief is an important agent in whether memories serve as effective primes for immediate and delayed problem-solving.

  3. Combined Cognitive Training vs. Memory Strategy Training in Healthy Older Adults

    PubMed Central

    Li, Bing; Zhu, Xinyi; Hou, Jianhua; Chen, Tingji; Wang, Pengyun; Li, Juan

    2016-01-01

    As mnemonic utilization deficit in older adults associates with age-related decline in executive function, we hypothesized that memory strategy training combined with executive function training might induce larger training effect in memory and broader training effects in non-memory outcomes than pure memory training. The present study compared the effects of combined cognitive training (executive function training plus memory strategy training) to pure memory strategy training. Forty healthy older adults were randomly assigned to a combined cognitive training group or a memory strategy training group. A control group receiving no training was also included. Combined cognitive training group received 16 sessions of training (eight sessions of executive function training followed by eight sessions of memory strategy training). Memory training group received 16 sessions of memory strategy training. The results partly supported our hypothesis in that indeed improved performance on executive function was only found in combined training group, whereas memory performance increased less in combined training compared to memory strategy group. Results suggest that combined cognitive training may be less efficient than pure memory training in memory outcomes, though the influences from insufficient training time and less closeness between trained executive function and working memory could not be excluded; however it has broader training effects in non-memory outcomes. Clinical Trial Registration: www.chictr.org.cn, identifier ChiCTR-OON-16007793. PMID:27375521

  4. Functional cross‐hemispheric shift between object‐place paired associate memory and spatial memory in the human hippocampus

    PubMed Central

    Lee, Choong‐Hee; Ryu, Jungwon; Lee, Sang‐Hun; Kim, Hakjin

    2016-01-01

    ABSTRACT The hippocampus plays critical roles in both object‐based event memory and spatial navigation, but it is largely unknown whether the left and right hippocampi play functionally equivalent roles in these cognitive domains. To examine the hemispheric symmetry of human hippocampal functions, we used an fMRI scanner to measure BOLD activity while subjects performed tasks requiring both object‐based event memory and spatial navigation in a virtual environment. Specifically, the subjects were required to form object‐place paired associate memory after visiting four buildings containing discrete objects in a virtual plus maze. The four buildings were visually identical, and the subjects used distal visual cues (i.e., scenes) to differentiate the buildings. During testing, the subjects were required to identify one of the buildings when cued with a previously associated object, and when shifted to a random place, the subject was expected to navigate to the previously chosen building. We observed that the BOLD activity foci changed from the left hippocampus to the right hippocampus as task demand changed from identifying a previously seen object (object‐cueing period) to searching for its paired‐associate place (object‐cued place recognition period). Furthermore, the efficient retrieval of object‐place paired associate memory (object‐cued place recognition period) was correlated with the BOLD response of the left hippocampus, whereas the efficient retrieval of relatively pure spatial memory (spatial memory period) was correlated with the right hippocampal BOLD response. These findings suggest that the left and right hippocampi in humans might process qualitatively different information for remembering episodic events in space. © 2016 The Authors Hippocampus Published by Wiley Periodicals, Inc. PMID:27009679

  5. Reduced prefrontal efficiency for visuospatial working memory in attention-deficit/hyperactivity disorder.

    PubMed

    Bédard, Anne-Claude V; Newcorn, Jeffrey H; Clerkin, Suzanne M; Krone, Beth; Fan, Jin; Halperin, Jeffrey M; Schulz, Kurt P

    2014-09-01

    Visuospatial working memory impairments have been implicated in the pathophysiology of attention-deficit/hyperactivity disorder (ADHD). However, most ADHD research has focused on the neural correlates of nonspatial mnemonic processes. This study examined brain activation and functional connectivity for visuospatial working memory in youth with and without ADHD. Twenty-four youth with ADHD and 21 age- and sex-matched healthy controls were scanned with functional magnetic resonance imaging while performing an N-back test of working memory for spatial position. Block-design analyses contrasted activation and functional connectivity separately for high (2-back) and low (1-back) working memory load conditions versus the control condition (0-back). The effect of working memory load was modeled with linear contrasts. The 2 groups performed comparably on the task and demonstrated similar patterns of frontoparietal activation, with no differences in linear gains in activation as working memory load increased. However, youth with ADHD showed greater activation in the left dorsolateral prefrontal cortex (DLPFC) and left posterior cingulate cortex (PCC), greater functional connectivity between the left DLPFC and left intraparietal sulcus, and reduced left DLPFC connectivity with left midcingulate cortex and PCC for the high load contrast compared to controls (p < .01; k > 100 voxels). Reanalysis using a more conservative statistical approach (p < .001; k > 100 voxels) yielded group differences in PCC activation and DLPFC-midcingulate connectivity. Youth with ADHD show decreased efficiency of DLPFC for high-load visuospatial working memory and greater reliance on posterior spatial attention circuits to store and update spatial position than healthy control youth. Findings should be replicated in larger samples. Copyright © 2014 American Academy of Child and Adolescent Psychiatry. Published by Elsevier Inc. All rights reserved.

  6. Memory is Not Enough: The Neurobiological Substrates of Dynamic Cognitive Reserve.

    PubMed

    Serra, Laura; Bruschini, Michela; Di Domenico, Carlotta; Gabrielli, Giulia Bechi; Marra, Camillo; Caltagirone, Carlo; Cercignani, Mara; Bozzali, Marco

    2017-01-01

    Changes in the residual memory variance are considered a dynamic aspect of cognitive reserve (d-CR). We aimed to investigate for the first time the neural substrate associated with changes in the residual memory variance over time in patients with amnestic mild cognitive impairment (aMCI). Thirty-four aMCI patients followed up for 36 months and 48 healthy elderly individuals (HE) were recruited. All participants underwent 3T MRI, collecting T1-weighted images for voxel-based morphometry (VBM), and an extensive neuropsychological battery including six episodic memory tests. In patients and controls, factor analyses were used on the episodic memory scores to obtain a composite memory score (C-MS). Partial Least Squares analyses were used to decompose the variance of the C-MS into latent variables (LT scores), accounting for demographic variables and for the general cognitive efficiency level; linear regressions were applied to LT scores, stripping off any contribution of general cognitive abilities, to obtain the residual value of memory variance, considered an index of d-CR. LT scores and d-CR were used in discriminant analysis, in patients only. Finally, LT scores and d-CR were used as variables of interest in the VBM analysis. The d-CR score was not able to correctly classify patients. In both aMCI patients and HE, LT1st and d-CR scores showed correlations with grey matter volumes in common and in specific brain areas. CR measures limited to memory function are likely less sensitive for detecting cognitive decline and predicting the evolution of Alzheimer's disease. In conclusion, d-CR needs a measure of general cognition to identify conversion to Alzheimer's disease efficiently.

  7. Memory-Efficient Analysis of Dense Functional Connectomes.

    PubMed

    Loewe, Kristian; Donohue, Sarah E; Schoenfeld, Mircea A; Kruse, Rudolf; Borgelt, Christian

    2016-01-01

    The functioning of the human brain relies on the interplay and integration of numerous individual units within a complex network. To identify network configurations characteristic of specific cognitive tasks or mental illnesses, functional connectomes can be constructed based on the assessment of synchronous fMRI activity at separate brain sites, and then analyzed using graph-theoretical concepts. In most previous studies, relatively coarse parcellations of the brain were used to define regions as graphical nodes. Such parcellated connectomes are highly dependent on parcellation quality because regional and functional boundaries need to be relatively consistent for the results to be interpretable. In contrast, dense connectomes are not subject to this limitation, since the parcellation inherent to the data is used to define graphical nodes, also allowing for a more detailed spatial mapping of connectivity patterns. However, dense connectomes are associated with considerable computational demands in terms of both time and memory requirements. The memory required to explicitly store dense connectomes in main memory can render their analysis infeasible, especially when considering high-resolution data or analyses across multiple subjects or conditions. Here, we present an object-based matrix representation that achieves a very low memory footprint by computing matrix elements on demand instead of explicitly storing them. In doing so, memory required for a dense connectome is reduced to the amount needed to store the underlying time series data. Based on theoretical considerations and benchmarks, different matrix object implementations and additional programs (based on available Matlab functions and Matlab-based third-party software) are compared with regard to their computational efficiency. 
The matrix implementation based on on-demand computations has very low memory requirements, thus enabling analyses that would be otherwise infeasible to conduct due to insufficient memory. An open source software package containing the created programs is available for download.
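
    The on-demand idea described above can be illustrated with a small sketch (this is not the authors' Matlab code; the class name and layout are illustrative): a matrix object that stores only the z-scored time series and derives any correlation entry when it is indexed, so memory scales with the data rather than with the number of node pairs.

```python
import numpy as np

class LazyCorrelationMatrix:
    """Dense connectome whose entries are computed on demand.

    Only the (nodes x timepoints) time-series array is stored; a
    correlation r_ij is derived from it when indexed, so memory stays
    proportional to the data rather than to nodes**2.
    """

    def __init__(self, timeseries):
        ts = np.asarray(timeseries, dtype=float)
        # Z-score each node's series once; a correlation then reduces
        # to a dot product divided by the series length.
        self._z = (ts - ts.mean(axis=1, keepdims=True)) / ts.std(axis=1, keepdims=True)
        self.n_nodes = ts.shape[0]

    def __getitem__(self, idx):
        i, j = idx
        return float(self._z[i] @ self._z[j]) / self._z.shape[1]

rng = np.random.default_rng(0)
data = rng.standard_normal((5, 200))
lazy = LazyCorrelationMatrix(data)
full = np.corrcoef(data)          # explicit dense matrix, for comparison
assert abs(lazy[2, 4] - full[2, 4]) < 1e-12
```

    The trade-off is exactly the one the paper quantifies: each indexed entry costs one dot product, but the n-by-n matrix is never materialized.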

  8. Memory-Efficient Analysis of Dense Functional Connectomes

    PubMed Central

    Loewe, Kristian; Donohue, Sarah E.; Schoenfeld, Mircea A.; Kruse, Rudolf; Borgelt, Christian

    2016-01-01

    The functioning of the human brain relies on the interplay and integration of numerous individual units within a complex network. To identify network configurations characteristic of specific cognitive tasks or mental illnesses, functional connectomes can be constructed based on the assessment of synchronous fMRI activity at separate brain sites, and then analyzed using graph-theoretical concepts. In most previous studies, relatively coarse parcellations of the brain were used to define regions as graphical nodes. Such parcellated connectomes are highly dependent on parcellation quality because regional and functional boundaries need to be relatively consistent for the results to be interpretable. In contrast, dense connectomes are not subject to this limitation, since the parcellation inherent to the data is used to define graphical nodes, also allowing for a more detailed spatial mapping of connectivity patterns. However, dense connectomes are associated with considerable computational demands in terms of both time and memory requirements. The memory required to explicitly store dense connectomes in main memory can render their analysis infeasible, especially when considering high-resolution data or analyses across multiple subjects or conditions. Here, we present an object-based matrix representation that achieves a very low memory footprint by computing matrix elements on demand instead of explicitly storing them. In doing so, memory required for a dense connectome is reduced to the amount needed to store the underlying time series data. Based on theoretical considerations and benchmarks, different matrix object implementations and additional programs (based on available Matlab functions and Matlab-based third-party software) are compared with regard to their computational efficiency. 
The matrix implementation based on on-demand computations has very low memory requirements, thus enabling analyses that would be otherwise infeasible to conduct due to insufficient memory. An open source software package containing the created programs is available for download. PMID:27965565

  9. Impairments in Component Processes of Executive Function and Episodic Memory in Alcoholism, HIV Infection, and HIV Infection with Alcoholism Comorbidity.

    PubMed

    Fama, Rosemary; Sullivan, Edith V; Sassoon, Stephanie A; Pfefferbaum, Adolf; Zahr, Natalie M

    2016-12-01

    Executive functioning and episodic memory impairment occur in HIV infection (HIV) and chronic alcoholism (ALC). Comorbidity of these conditions (HIV + ALC) is prevalent and heightens risk of vulnerability to separate and compounded deficits. Age and disease-related variables can also serve as mediators of cognitive impairment and should be considered, given the extended longevity of HIV-infected individuals in this era of improved pharmacological therapy. HIV, ALC, HIV + ALC, and normal controls (NC) were administered traditional and computerized tests of executive function and episodic memory. Test scores were expressed as age- and education-corrected Z-scores; selective tests were averaged to compute Executive Function and Episodic Memory Composite scores. Efficiency scores were calculated for tests with accuracy and response times. HIV, ALC, and HIV + ALC had lower scores than NC on Executive Function and Episodic Memory Composites, with HIV + ALC even lower than ALC and HIV on the Episodic Memory Composite. Impairments in planning and free recall of visuospatial material were observed in ALC, whereas impairments in psychomotor speed, sequencing, narrative free recall, and pattern recognition were observed in HIV. Lower decision-making efficiency scores than NC occurred in all 3 clinical groups. In ALC, age and lifetime alcohol consumption were each unique predictors of Executive Function and Episodic Memory Composite scores. In HIV + ALC, age was a unique predictor of Episodic Memory Composite score. Disease-specific and disease-overlapping patterns of impairment in HIV, ALC, and HIV + ALC have implications regarding brain systems disrupted by each disease and clinical ramifications regarding the complexities and compounded damping of cognitive functioning associated with dual diagnosis that may be exacerbated with aging. Copyright © 2016 by the Research Society on Alcoholism.

  10. Bundle block adjustment of large-scale remote sensing data with Block-based Sparse Matrix Compression combined with Preconditioned Conjugate Gradient

    NASA Astrophysics Data System (ADS)

    Zheng, Maoteng; Zhang, Yongjun; Zhou, Shunping; Zhu, Junfeng; Xiong, Xiaodong

    2016-07-01

    In recent years, new platforms and sensors in the photogrammetry, remote sensing and computer vision areas have become available, such as Unmanned Aerial Vehicles (UAVs), oblique camera systems, common digital cameras and even mobile phone cameras. Images collected by all these kinds of sensors can be used as remote sensing data sources. These sensors can obtain large-scale remote sensing data consisting of a great number of images. Bundle block adjustment of large-scale data with conventional algorithms is very time- and memory-consuming due to the extremely large normal matrix arising from such data. In this paper, an efficient Block-based Sparse Matrix Compression (BSMC) method combined with the Preconditioned Conjugate Gradient (PCG) algorithm is used to develop a stable and efficient bundle block adjustment system for large-scale remote sensing data. The main contribution of this work is the BSMC-based PCG algorithm, which is more efficient in time and memory than the traditional algorithm without compromising accuracy. In total, eight real datasets are used to test the proposed method. Preliminary results show that the BSMC method can efficiently decrease the time and memory requirements of large-scale data.
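
    The PCG half of the method can be sketched generically (the BSMC compression itself is not reproduced here): a minimal preconditioned conjugate gradient with a diagonal (Jacobi) preconditioner standing in for the block preconditioners typically built from bundle-adjustment normal equations.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradient for an SPD system A x = b.

    M_inv applies the preconditioner (an approximation of A^{-1});
    for bundle adjustment, a block-Jacobi preconditioner built from
    the diagonal camera/point blocks is a common, memory-cheap choice.
    """
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Small SPD test system with a Jacobi (diagonal) preconditioner.
rng = np.random.default_rng(1)
Q = rng.standard_normal((8, 8))
A = Q @ Q.T + 8 * np.eye(8)       # symmetric positive definite
b = rng.standard_normal(8)
d = np.diag(A)
x = pcg(A, b, lambda r: r / d)
assert np.allclose(A @ x, b, atol=1e-8)
```

    Because PCG only needs matrix-vector products, the normal matrix never has to be stored densely, which is what makes it a natural partner for a compressed block-sparse representation.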

  11. Reimagining Reading: Creating a Classroom Culture That Embraces Independent Choice Reading

    ERIC Educational Resources Information Center

    Dickerson, Katie

    2015-01-01

    Many of us are plagued by negative memories of sustained silent reading. In some of these memories, we are the students, attempting to read a book that didn't hold our interest or trying to read over the din of our disengaged classmates. In other memories, we are the teachers, suffering through a ten-minute classroom management nightmare, deciding…

  12. Formal verification of an MMU and MMU cache

    NASA Technical Reports Server (NTRS)

    Schubert, E. T.

    1991-01-01

    We describe the formal verification of a hardware subsystem consisting of a memory management unit and a cache. These devices are verified independently and then shown to interact correctly when composed. The MMU authorizes memory requests and translates virtual addresses to real addresses. The cache improves performance by maintaining a LRU (least recently used) list from the memory resident segment table.
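
    As a rough software illustration of the behavior being verified (not the actual hardware or its formal models), the sketch below pairs virtual-to-real address translation with an LRU-maintained cache of segment-table entries; the page size, capacity, and data layout are assumptions.

```python
from collections import OrderedDict

PAGE_SIZE = 4096

class TranslationCache:
    """Tiny TLB-like cache that keeps the most recently used
    virtual-page -> frame mappings, evicting the least recently used."""

    def __init__(self, segment_table, capacity=4):
        self.table = segment_table        # authoritative mapping
        self.cache = OrderedDict()
        self.capacity = capacity
        self.misses = 0

    def translate(self, vaddr):
        page, offset = divmod(vaddr, PAGE_SIZE)
        if page in self.cache:
            self.cache.move_to_end(page)  # refresh LRU position
        else:
            self.misses += 1
            self.cache[page] = self.table[page]  # KeyError if unmapped
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)   # evict LRU entry
        return self.cache[page] * PAGE_SIZE + offset

table = {0: 7, 1: 3, 2: 9}
mmu = TranslationCache(table)
assert mmu.translate(4096 + 12) == 3 * PAGE_SIZE + 12   # miss, then cached
assert mmu.translate(4096 + 4) == 3 * PAGE_SIZE + 4     # hit
assert mmu.misses == 1
```

    The correctness property the verification establishes is essentially the invariant in this sketch: a cached translation must always agree with the segment table it shadows.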

  13. Reprogrammable field programmable gate array with integrated system for mitigating effects of single event upsets

    NASA Technical Reports Server (NTRS)

    Ng, Tak-kwong (Inventor); Herath, Jeffrey A. (Inventor)

    2010-01-01

    An integrated system mitigates the effects of a single event upset (SEU) on a reprogrammable field programmable gate array (RFPGA). The system includes (i) a RFPGA having an internal configuration memory, and (ii) a memory for storing a configuration associated with the RFPGA. Logic circuitry programmed into the RFPGA and coupled to the memory reloads a portion of the configuration from the memory into the RFPGA's internal configuration memory at predetermined times. Additional SEU mitigation can be provided by logic circuitry on the RFPGA that monitors and maintains synchronized operation of the RFPGA's digital clock managers.

  14. Working memory capacity predicts listwise directed forgetting in adults and children.

    PubMed

    Aslan, Alp; Zellner, Martina; Bäuml, Karl-Heinz T

    2010-05-01

    In listwise directed forgetting, participants are cued to forget previously studied material and to learn new material instead. Such cueing typically leads to forgetting of the first set of material and to memory enhancement of the second. The present study examined the role of working memory capacity in adults' and children's listwise directed forgetting. Working memory capacity was assessed with complex span tasks. In Experiment 1 working memory capacity predicted young adults' directed-forgetting performance, demonstrating a positive relationship between working memory capacity and each of the two directed-forgetting effects. In Experiment 2 we replicated the finding with a sample of first and a sample of fourth-grade children, and additionally showed that working memory capacity can account for age-related increases in directed-forgetting efficiency between the two age groups. Following the view that directed forgetting is mediated by inhibition of the first encoded list, the results support the proposal of a close link between working memory capacity and inhibitory function.

  15. Holographic implementation of a binary associative memory for improved recognition

    NASA Astrophysics Data System (ADS)

    Bandyopadhyay, Somnath; Ghosh, Ajay; Datta, Asit K.

    1998-03-01

    Neural network associative memory has found wide application in pattern recognition techniques. We propose an associative memory model for binary character recognition. The interconnection strengths of the memory are binary valued. The concept of sparse coding is used to enhance the storage efficiency of the model. The imposed preconditioning of pattern vectors, which is inherent in a sparsely coded conventional memory, is eliminated by using a multistep correlation technique, and the ability of correct association is enhanced in real-time application. A potential optoelectronic implementation of the proposed associative memory is also described. Learning and recall are performed using digital optical matrix-vector multiplication, making full use of the parallelism and connectivity of optics. In the experiment, a hologram is used as long-term memory (LTM) for storing all input information. The short-term memory, or interconnection weight matrix, required during the recall process is configured by retrieving the necessary information from the holographic LTM.
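
    A software analogue of such a binary-weight, sparsely coded associative memory is the classical Willshaw model, sketched below; the optical/holographic implementation is not modeled, and the pattern sizes and threshold rule are illustrative.

```python
import numpy as np

def store(patterns):
    """Willshaw-style learning: the binary weight matrix is the OR of
    the outer products of the stored (sparse) binary patterns."""
    n = patterns.shape[1]
    W = np.zeros((n, n), dtype=np.uint8)
    for p in patterns:
        W |= np.outer(p, p).astype(np.uint8)
    return W

def recall(W, cue):
    """Recall by thresholding the weighted sum at the number of active
    bits in the cue (the classical Willshaw threshold)."""
    s = W @ cue
    return (s >= cue.sum()).astype(np.uint8)

# Two sparse 16-bit patterns (sparse coding improves capacity).
n = 16
p1 = np.zeros(n, dtype=np.uint8); p1[[1, 5, 9]] = 1
p2 = np.zeros(n, dtype=np.uint8); p2[[2, 7, 12]] = 1
W = store(np.stack([p1, p2]))

cue = p1.copy(); cue[9] = 0        # partial (degraded) cue
assert np.array_equal(recall(W, cue), p1)
```

    The binary weights correspond to the binary interconnection strengths in the record above, and recall from a degraded cue mirrors the correct-association property the multistep correlation technique is designed to strengthen.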

  16. The role of memory for visual search in scenes

    PubMed Central

    Võ, Melissa Le-Hoa; Wolfe, Jeremy M.

    2014-01-01

    Many daily activities involve looking for something. The ease with which these searches are performed often allows one to forget that searching represents complex interactions between visual attention and memory. While a clear understanding exists of how search efficiency will be influenced by visual features of targets and their surrounding distractors or by the number of items in the display, the role of memory in search is less well understood. Contextual cueing studies have shown that implicit memory for repeated item configurations can facilitate search in artificial displays. When searching more naturalistic environments, other forms of memory come into play. For instance, semantic memory provides useful information about which objects are typically found where within a scene, and episodic scene memory provides information about where a particular object was seen the last time a particular scene was viewed. In this paper, we will review work on these topics, with special emphasis on the role of memory in guiding search in organized, real-world scenes. PMID:25684693

  17. High-speed noise-free optical quantum memory

    NASA Astrophysics Data System (ADS)

    Kaczmarek, K. T.; Ledingham, P. M.; Brecht, B.; Thomas, S. E.; Thekkadath, G. S.; Lazo-Arjona, O.; Munns, J. H. D.; Poem, E.; Feizpour, A.; Saunders, D. J.; Nunn, J.; Walmsley, I. A.

    2018-04-01

    Optical quantum memories are devices that store and recall quantum light and are vital to the realization of future photonic quantum networks. To date, much effort has been put into improving storage times and efficiencies of such devices to enable long-distance communications. However, less attention has been devoted to building quantum memories which add zero noise to the output. Even small additional noise can render the memory classical by destroying the fragile quantum signatures of the stored light. Therefore, noise performance is a critical parameter for all quantum memories. Here we introduce an intrinsically noise-free quantum memory protocol based on two-photon off-resonant cascaded absorption (ORCA). We demonstrate successful storage of GHz-bandwidth heralded single photons in a warm atomic vapor with no added noise, confirmed by the unaltered photon-number statistics upon recall. Our ORCA memory meets the stringent noise requirements for quantum memories while combining high-speed and room-temperature operation with technical simplicity, and therefore is immediately applicable to low-latency quantum networks.

  18. The role of memory for visual search in scenes.

    PubMed

    Le-Hoa Võ, Melissa; Wolfe, Jeremy M

    2015-03-01

    Many daily activities involve looking for something. The ease with which these searches are performed often allows one to forget that searching represents complex interactions between visual attention and memory. Although a clear understanding exists of how search efficiency will be influenced by visual features of targets and their surrounding distractors or by the number of items in the display, the role of memory in search is less well understood. Contextual cueing studies have shown that implicit memory for repeated item configurations can facilitate search in artificial displays. When searching more naturalistic environments, other forms of memory come into play. For instance, semantic memory provides useful information about which objects are typically found where within a scene, and episodic scene memory provides information about where a particular object was seen the last time a particular scene was viewed. In this paper, we will review work on these topics, with special emphasis on the role of memory in guiding search in organized, real-world scenes. © 2015 New York Academy of Sciences.

  19. Image detection and compression for memory efficient system analysis

    NASA Astrophysics Data System (ADS)

    Bayraktar, Mustafa

    2015-02-01

    The advances in digital signal processing have been progressing toward efficient use of memory and processing. Both factors can be addressed by feasible image-storage techniques that compute the minimum information of an image, which speeds up later processing. The Scale Invariant Feature Transform (SIFT) can be used to estimate and retrieve an image. In computer vision, SIFT can be implemented to recognize an image by comparing its key features against saved SIFT keypoint descriptors. The main advantage of SIFT is that it not only removes redundant information from an image but also reduces the number of key points by matching their orientations and merging them across different windows of the image [1]. Another key property of this approach is that it works more efficiently on highly contrasted images, because its design is based on collecting key points from the contrast shades of the image.

  20. Learning Efficiency: Identifying Individual Differences in Learning Rate and Retention in Healthy Adults.

    PubMed

    Zerr, Christopher L; Berg, Jeffrey J; Nelson, Steven M; Fishell, Andrew K; Savalia, Neil K; McDermott, Kathleen B

    2018-06-01

    People differ in how quickly they learn information and how long they remember it, yet individual differences in learning abilities within healthy adults have been relatively neglected. In two studies, we examined the relation between learning rate and subsequent retention using a new foreign-language paired-associates task (the learning-efficiency task), which was designed to eliminate ceiling effects that often accompany standardized tests of learning and memory in healthy adults. A key finding was that quicker learners were also more durable learners (i.e., exhibited better retention across a delay), despite studying the material for less time. Additionally, measures of learning and memory from this task were reliable in Study 1 (N = 281) across 30 hr and Study 2 (N = 92; follow-up n = 46) across 3 years. We conclude that people vary in how efficiently they learn, and we describe a reliable and valid method for assessing learning efficiency within healthy adults.

  1. Efficient computation of the Grünwald-Letnikov fractional diffusion derivative using adaptive time step memory

    NASA Astrophysics Data System (ADS)

    MacDonald, Christopher L.; Bhattacharya, Nirupama; Sprouse, Brian P.; Silva, Gabriel A.

    2015-09-01

    Computing numerical solutions to fractional differential equations can be computationally intensive due to the effect of non-local derivatives, in which all previous time points contribute to the current iteration. In general, numerical approaches that depend on truncating part of the system history, while efficient, can suffer from high degrees of error and inaccuracy. Here we present an adaptive time step memory method for smooth functions applied to the Grünwald-Letnikov fractional diffusion derivative. This method is computationally efficient and results in smaller errors during numerical simulations. Sampled points along the system's history at progressively longer intervals are assumed to reflect the values of neighboring time points. By including progressively fewer points backward in time, a temporally 'weighted' history is computed that includes contributions from the entire past of the system, maintaining accuracy but with fewer points actually calculated, greatly improving computational efficiency.
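
    The full-memory sum that the adaptive method accelerates can be written down directly. The sketch below computes the standard Grünwald-Letnikov binomial weights by recurrence and evaluates the full-history derivative; the paper's adaptive sampling of the distant history is not reproduced here.

```python
import numpy as np

def gl_weights(alpha, n):
    """Grünwald-Letnikov weights w_k = (-1)^k * C(alpha, k),
    computed by the standard recurrence w_k = w_{k-1} (k-1-alpha)/k."""
    w = np.empty(n + 1)
    w[0] = 1.0
    for k in range(1, n + 1):
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    return w

def gl_derivative(f_vals, alpha, dt):
    """Full-memory GL fractional derivative at the latest time point.

    Every previous sample contributes, which is what makes naive
    schemes O(N) per step and motivates adaptive-memory methods."""
    n = len(f_vals) - 1
    w = gl_weights(alpha, n)
    # w[k] multiplies f(t - k*dt), hence the reversed sample order.
    return dt ** (-alpha) * np.dot(w, np.asarray(f_vals)[::-1])

# Sanity check: for alpha = 1 the GL sum reduces to a backward difference.
t = np.linspace(0, 1, 101)
f = t ** 2
d = gl_derivative(f, 1.0, t[1] - t[0])
assert abs(d - (f[-1] - f[-2]) / (t[1] - t[0])) < 1e-9
```

    The adaptive scheme keeps this entire weighted history in play but evaluates progressively fewer of the distant terms, assigning their weights to sampled neighbors.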

  2. POLARIS: Agent-based modeling framework development and implementation for integrated travel demand and network and operations simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Auld, Joshua; Hope, Michael; Ley, Hubert

    This paper discusses the development of an agent-based modelling software development kit, and the implementation and validation of a model using it that integrates dynamic simulation of travel demand, network supply and network operations. A description is given of the core utilities in the kit: a parallel discrete event engine, interprocess exchange engine, and memory allocator, as well as a number of ancillary utilities: a visualization library, database IO library, and scenario manager. The overall framework emphasizes the design goals of generality, code agility, and high performance. This framework allows several aspects of the transportation system that are typically handled by separate stand-alone software applications to be modeled in a high-performance and extensible manner. The issue of integrating such models as dynamic traffic assignment and disaggregate demand models has been a long-standing one for transportation modelers. The integrated approach shows a possible way to resolve this difficulty. The simulation model built from the POLARIS framework is a single, shared-memory process for handling all aspects of the integrated urban simulation. The resulting gains in computational efficiency and performance allow planning models to be extended to include previously separate aspects of the urban system, enhancing the utility of such models from the planning perspective. Initial tests with case studies involving traffic management center impacts on various network events such as accidents, congestion and weather events show the potential of the system.
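
    POLARIS's internals are not shown in the abstract, but the core of any discrete event engine can be sketched in miniature: a priority queue of timestamped callbacks executed in order. The class and event names here are illustrative, not POLARIS APIs.

```python
import heapq

class EventEngine:
    """Minimal discrete-event engine: events are (time, seq, callback)
    tuples kept in a heap and executed in timestamp order; a sequence
    counter breaks ties deterministically. Agent-based frameworks
    schedule agent actions this way."""

    def __init__(self):
        self._queue = []
        self._seq = 0
        self.now = 0.0

    def schedule(self, delay, callback):
        heapq.heappush(self._queue, (self.now + delay, self._seq, callback))
        self._seq += 1

    def run(self):
        while self._queue:
            self.now, _, callback = heapq.heappop(self._queue)
            callback(self)

log = []
engine = EventEngine()
engine.schedule(2.0, lambda e: log.append(("depart", e.now)))
engine.schedule(1.0, lambda e: (log.append(("enter", e.now)),
                                e.schedule(0.5, lambda e2: log.append(("exit", e2.now)))))
engine.run()
assert log == [("enter", 1.0), ("exit", 1.5), ("depart", 2.0)]
```

    Handlers may schedule further events, so the simulation unfolds causally in time order regardless of the order in which events were posted, which is what lets demand, supply, and operations models share one clock.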

  3. Intraday return inefficiency and long memory in the volatilities of forex markets and the role of trading volume

    NASA Astrophysics Data System (ADS)

    Shahzad, Syed Jawad Hussain; Hernandez, Jose Areola; Hanif, Waqas; Kayani, Ghulam Mujtaba

    2018-09-01

    We investigate the dynamics of efficiency and long memory, and the impact of trading volume on the efficiency of returns and volatilities of four major traded currencies, namely, the EUR, GBP, CHF and JPY. We do so by implementing full sample and rolling window multifractal detrended fluctuation analysis (MF-DFA) and a quantile-on-quantile (QQ) approach. This paper sheds new light by employing high frequency (5-min interval) data spanning from Jan 1, 2007 to Dec 31, 2016. Realized volatilities are estimated using Andersen et al.'s (2001) measure, while the QQ method employed is drawn from Sim and Zhou (2015). We find evidence of higher efficiency levels in the JPY and CHF currency markets. The impact of trading volume on efficiency is only significant for the JPY and CHF currencies. The GBP currency appears to be the least efficient, followed by the EUR. Implications of the results are discussed.
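
    The monofractal core of the MF-DFA procedure used in the study can be sketched as plain DFA (a single moment rather than the full multifractal spectrum); the scales and synthetic data below are chosen purely for illustration.

```python
import numpy as np

def dfa(x, scales):
    """Detrended fluctuation analysis: returns the scaling exponent
    (the monofractal special case of MF-DFA, q = 2)."""
    y = np.cumsum(x - np.mean(x))          # integrated profile
    F = []
    for s in scales:
        n_seg = len(y) // s
        segs = y[:n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        ms = []
        for seg in segs:
            # Detrend each segment with a least-squares line.
            coef = np.polyfit(t, seg, 1)
            ms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(ms)))
    # The exponent is the slope of log F(s) versus log s.
    slope, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return slope

rng = np.random.default_rng(42)
alpha = dfa(rng.standard_normal(4096), scales=[16, 32, 64, 128, 256])
# Uncorrelated noise should scale with an exponent near 0.5;
# persistent long-memory series yield exponents above 0.5.
assert 0.35 < alpha < 0.65
```

    In the study's rolling-window variant, this exponent is recomputed over successive windows of the 5-min return and volatility series, and its drift over time is read as changing efficiency.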

  4. Tonic Inhibitory Control of Dentate Gyrus Granule Cells by α5-Containing GABAA Receptors Reduces Memory Interference.

    PubMed

    Engin, Elif; Zarnowska, Ewa D; Benke, Dietmar; Tsvetkov, Evgeny; Sigal, Maksim; Keist, Ruth; Bolshakov, Vadim Y; Pearce, Robert A; Rudolph, Uwe

    2015-10-07

    Interference between similar or overlapping memories formed at different times poses an important challenge to the hippocampal declarative memory system. Difficulties in managing interference are at the core of disabling cognitive deficits in neuropsychiatric disorders. Computational models have suggested that, in the normal brain, the sparse activation of the dentate gyrus granule cells maintained by tonic inhibitory control enables pattern separation, an orthogonalization process that allows distinct representations of memories despite interference. To test this mechanistic hypothesis, we generated mice with significantly reduced expression of the α5-containing GABAA (α5-GABAARs) receptors selectively in the granule cells of the dentate gyrus (α5DGKO mice). α5DGKO mice had reduced tonic inhibition of the granule cells without any change in fast phasic inhibition and showed increased activation in the dentate gyrus when presented with novel stimuli. α5DGKO mice showed impairments in cognitive tasks characterized by high interference, without any deficiencies in low-interference tasks, suggesting specific impairment of pattern separation. Reduction of fast phasic inhibition in the dentate gyrus through granule cell-selective knock-out of α2-GABAARs or the knock-out of the α5-GABAARs in the downstream CA3 area did not detract from pattern separation abilities, which confirms the anatomical and molecular specificity of the findings. In addition to lending empirical support to computational hypotheses, our findings have implications for the treatment of interference-related cognitive symptoms in neuropsychiatric disorders, particularly considering the availability of pharmacological agents selectively targeting α5-GABAARs. Interference between similar memories poses a significant limitation on the hippocampal declarative memory system, and impaired interference management is a cognitive symptom in many disorders. 
Thus, understanding mechanisms of successful interference management or processes that can lead to interference-related memory problems has high theoretical and translational importance. This study provides empirical evidence that tonic inhibition in the dentate gyrus (DG), which maintains sparseness of neuronal activation in the DG, is essential for management of interference. The specificity of findings to tonic, but not faster, more transient types of neuronal inhibition and to the DG, but not the neighboring brain areas, is presented through control experiments. Thus, the findings link interference management to a specific mechanism, proposed previously by computational models. Copyright © 2015 the authors 0270-6474/15/3513699-15$15.00/0.

  5. gadfly: A pandas-based Framework for Analyzing GADGET Simulation Data

    NASA Astrophysics Data System (ADS)

    Hummel, Jacob A.

    2016-11-01

    We present the first public release (v0.1) of the open-source gadget Dataframe Library: gadfly. The aim of this package is to leverage the capabilities of the broader python scientific computing ecosystem by providing tools for analyzing simulation data from the astrophysical simulation codes gadget and gizmo using pandas, a thoroughly documented, open-source library providing high-performance, easy-to-use data structures that is quickly becoming the standard for data analysis in python. Gadfly is a framework for analyzing particle-based simulation data stored in the HDF5 format using pandas DataFrames. The package enables efficient memory management, includes utilities for unit handling, coordinate transformations, and parallel batch processing, and provides highly optimized routines for visualizing smoothed-particle hydrodynamics data sets.
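
    gadfly's own API is not reproduced here, but the style of analysis it enables can be sketched with plain pandas: hypothetical particle data, an assumed unit conversion, and a vectorized coordinate transformation of the kind the package provides utilities for.

```python
import numpy as np
import pandas as pd

# Hypothetical particle snapshot: positions in comoving code units.
rng = np.random.default_rng(7)
particles = pd.DataFrame({
    "x": rng.uniform(0, 1, 100),
    "y": rng.uniform(0, 1, 100),
    "z": rng.uniform(0, 1, 100),
    "mass": np.full(100, 1e-4),
})

KPC_PER_CODE_UNIT = 1000.0   # assumed unit conversion, for illustration

# Unit conversion and a center-of-mass recentering: the kind of bulk
# column operation pandas vectorizes efficiently.
pos = particles[["x", "y", "z"]] * KPC_PER_CODE_UNIT
com = pos.mul(particles["mass"], axis=0).sum() / particles["mass"].sum()
centered = pos - com

# Radial distance of each particle from the center of mass.
particles["r_kpc"] = np.sqrt((centered ** 2).sum(axis=1))

# Mass-weighted centered positions must sum to (numerically) zero.
assert centered.mul(particles["mass"], axis=0).sum().abs().max() < 1e-8
```

    Wrapping snapshot fields in DataFrames like this is what lets the broader pandas ecosystem (grouping, joins, out-of-core backends) be applied to simulation output with little glue code.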

  6. Field Test of Boiler Primary Loop Temperature Controller

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Glanville, P.; Rowley, P.; Schroeder, D.

    Beyond these initial system efficiency upgrades is an emerging class of Advanced Load Monitoring (ALM) aftermarket controllers that dynamically respond to the boiler load, with claims of 10% to 30% fuel savings over a heating season. For hydronic boilers specifically, these devices perform load monitoring, with continuous measurement of supply and, in some cases, return water temperatures. Energy savings from these ALM controllers are derived from dynamic management of the boiler differential, where a microprocessor with memory of past boiler cycles prevents the boiler from firing for a period of time, to limit cycling losses and inefficient operation during perceived low-load conditions. These differ from OTR controllers, which vary boiler setpoint temperatures with ambient conditions while maintaining a fixed differential.

  7. Memory CD4+ T cells: beyond “helper” functions

    PubMed Central

    Boonnak, Kobporn; Subbarao, Kanta

    2012-01-01

    In influenza virus infection, antibodies, memory CD8+ T cells, and CD4+ T cells have all been shown to mediate immune protection, but how they operate and interact with one another to mediate efficient immune responses against virus infection is not well understood. In this issue of the JCI, McKinstry et al. have identified unique functions of memory CD4+ T cells beyond providing “help” for B cell and CD8+ T cell responses during influenza virus infection. PMID:22820285

  8. Development and Evaluation of a Casualty Evacuation Model for a European Conflict.

    DTIC Science & Technology

    1985-12-01

EVAC, the computer code which implements our technique, has been used to solve a series of test problems in less time and requiring less memory than...the order of 1/K the amount of main memory for a K-commodity problem, so it can solve significantly larger problems than MCNF. ...technique may require only half the memory of the general L.P. package [6]. These advances are due to the efficient data structures which have been

  9. Survey State of the Art: Electrical Load Management Techniques and Equipment.

    DTIC Science & Technology

    1986-10-31

automobiles and even appliances. Applications in the area of demand and energy management have been multifaceted, given the needs involved and rapid paybacks...copy of the programming to be reloaded into the controller at any time and by designing this module with erasable and reprogrammable memory, the...performs DDC (direct digital control) of output points; programming is stored in reprogrammable, permanent memory. A RIM may accommodate up

  10. Processing Efficiency in Preschoolers' Memory Span: Individual Differences Related to Age and Anxiety

    ERIC Educational Resources Information Center

    Visu-Petra, Laura; Miclea, Mircea; Cheie, Lavinia; Benga, Oana

    2009-01-01

    In self-paced auditory memory span tasks, the microanalysis of response timing measures represents a developmentally sensitive measure, providing insights into the development of distinct processing rates during recall performance. The current study first examined the effects of age and trait anxiety on span accuracy (effectiveness) and response…

  11. An Integrated Decision-Making Framework for Sustainability Assessment: A Case Study of Memorial University

    ERIC Educational Resources Information Center

    Waheed, Bushra; Khan, Faisal; Veitch, Brian; Hawboldt, Kelly

    2011-01-01

    This article presents an overview of the sustainability initiatives at the St. John's campus of Memorial University in Newfoundland and Labrador (Canada). The key initiatives include setting a realistic goal for energy efficiency, becoming carbon neutral, and conducting various research and outreach projects related to sustainability. As…

  12. Working Memory Deficits, Increased Anxiety-Like Traits, and Seizure Susceptibility in BDNF Overexpressing Mice

    ERIC Educational Resources Information Center

    Papaleo, Francesco; Silverman, Jill L.; Aney, Jordan; Tian, Qingjun; Barkan, Charlotte L.; Chadman, Kathryn K.; Crawley, Jacqueline N.

    2011-01-01

    BDNF regulates components of cognitive processes and has been implicated in psychiatric disorders. Here we report that genetic overexpression of the BDNF mature isoform (BDNF-tg) in female mice impaired working memory functions while sparing components of fear conditioning. BDNF-tg mice also displayed reduced breeding efficiency, higher…

  13. A multiresolution halftoning algorithm for progressive display

    NASA Astrophysics Data System (ADS)

    Mukherjee, Mithun; Sharma, Gaurav

    2005-01-01

    We describe and implement an algorithmic framework for memory efficient, 'on-the-fly' halftoning in a progressive transmission environment. Instead of a conventional approach which repeatedly recalls the continuous tone image from memory and subsequently halftones it for display, the proposed method achieves significant memory efficiency by storing only the halftoned image and updating it in response to additional information received through progressive transmission. Thus the method requires only a single frame-buffer of bits for storage of the displayed binary image and no additional storage is required for the contone data. The additional image data received through progressive transmission is accommodated through in-place updates of the buffer. The method is thus particularly advantageous for high resolution bi-level displays where it can result in significant savings in memory. The proposed framework is implemented using a suitable multi-resolution, multi-level modification of error diffusion that is motivated by the presence of a single binary frame-buffer. Aggregates of individual display bits constitute the multiple output levels at a given resolution. This creates a natural progression of increasing resolution with decreasing bit-depth.
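The paper's multiresolution, frame-buffer-driven variant is specific to the abstract above, but the underlying error-diffusion idea it modifies can be illustrated with classic single-resolution Floyd-Steinberg halftoning; this is a minimal sketch of the conventional algorithm, not the authors' method:

```python
import numpy as np

def floyd_steinberg(img):
    """Binary halftone of a grayscale image in [0, 1] via classic
    Floyd-Steinberg error diffusion (conventional single-resolution form,
    not the multiresolution variant described in the abstract)."""
    img = img.astype(float).copy()
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = int(new)
            err = old - new
            # Diffuse the quantization error to unprocessed neighbors
            # with the standard 7/16, 3/16, 5/16, 1/16 weights.
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out

# A flat mid-gray patch should halftone to roughly 50% on-pixels.
halftone = floyd_steinberg(np.full((32, 32), 0.5))
print(halftone.mean())
```

The paper's contribution is to run updates of this kind in place on a single binary frame buffer as progressive data arrives, rather than re-halftoning a stored contone image.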

  14. Visual Search Elicits the Electrophysiological Marker of Visual Working Memory

    PubMed Central

    Emrich, Stephen M.; Al-Aidroos, Naseem; Pratt, Jay; Ferber, Susanne

    2009-01-01

    Background Although limited in capacity, visual working memory (VWM) plays an important role in many aspects of visually-guided behavior. Recent experiments have demonstrated an electrophysiological marker of VWM encoding and maintenance, the contralateral delay activity (CDA), which has been shown in multiple tasks that have both explicit and implicit memory demands. Here, we investigate whether the CDA is evident during visual search, a thoroughly-researched task that is a hallmark of visual attention but has no explicit memory requirements. Methodology/Principal Findings The results demonstrate that the CDA is present during a lateralized search task, and that it is similar in amplitude to the CDA observed in a change-detection task, but peaks slightly later. The changes in CDA amplitude during search were strongly correlated with VWM capacity, as well as with search efficiency. These results were paralleled by behavioral findings showing a strong correlation between VWM capacity and search efficiency. Conclusions/Significance We conclude that the activity observed during visual search was generated by the same neural resources that subserve VWM, and that this activity reflects the maintenance of previously searched distractors. PMID:19956663

  15. Age differences in memory control: evidence from updating and retrieval-practice tasks.

    PubMed

    Lechuga, Maria Teresa; Moreno, Virginia; Pelegrina, Santiago; Gómez-Ariza, Carlos J; Bajo, Maria Teresa

    2006-11-01

Some contemporary approaches suggest that inhibitory mechanisms play an important role in cognitive development. In addition, several authors distinguish between intentional and unintentional inhibitory processes in cognition. We report two experiments aimed at exploring possible developmental changes in these two types of inhibitory mechanisms. In Experiment 1, an updating task was used. This task requires that participants intentionally suppress irrelevant information from working memory. In Experiment 2, the retrieval-practice task was used. Retrieval practice of a subset of studied items is thought to involve unintentional inhibitory processes to overcome interference from competing memories. As a result, suppressed items become forgotten in a later memory test. Results of the experiments indicated that younger children (aged 8) were less efficient than older children (aged 12) and adults at intentionally suppressing information (updating task). However, when the task required unintentional inhibition of competing items (retrieval-practice task), this developmental trend was not found and children and adults showed similar levels of retrieval-induced forgetting. The results are discussed in terms of the development of efficient inhibition and the distinction between intentional and unintentional inhibitions.

  16. Distributed-Memory Fast Maximal Independent Set

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kanewala Appuhamilage, Thejaka Amila J.; Zalewski, Marcin J.; Lumsdaine, Andrew

The Maximal Independent Set (MIS) graph problem arises in many applications such as computer vision, information theory, molecular biology, and process scheduling. The growing scale of MIS problems suggests the use of distributed-memory hardware as a cost-effective approach to providing necessary compute and memory resources. Luby proposed four randomized algorithms to solve the MIS problem. All of those algorithms were designed for shared-memory machines and are analyzed using the PRAM model; they do not have direct, efficient distributed-memory implementations. In this paper, we extend two of Luby’s seminal MIS algorithms, “Luby(A)” and “Luby(B),” to distributed-memory execution, and we evaluate their performance. We compare our results with the “Filtered MIS” implementation in the Combinatorial BLAS library for two types of synthetic graph inputs.
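For readers unfamiliar with Luby's approach, the random-priority round structure can be simulated sequentially; this is a "Luby(A)"-style sketch of the idea, not the paper's distributed implementation:

```python
import random

def luby_mis(adj, seed=42):
    """Sequential simulation of Luby's randomized MIS: each round every
    remaining vertex draws a random priority, local minima join the
    independent set, and winners plus their neighbors are removed."""
    rng = random.Random(seed)
    active = set(adj)
    mis = set()
    while active:
        prio = {v: rng.random() for v in active}
        winners = {v for v in active
                   if all(prio[v] < prio[u] for u in adj[v] if u in active)}
        mis |= winners
        # Remove winners and their neighbors from further rounds.
        removed = set(winners)
        for v in winners:
            removed |= adj[v] & active
        active -= removed
    return mis

# 5-cycle: every maximal independent set has exactly 2 vertices.
adj = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
mis = luby_mis(adj)
assert all(u not in adj[v] for v in mis for u in mis)  # independence
print(sorted(mis))
```

Each round is embarrassingly parallel across vertices, which is what makes the algorithm a natural candidate for the distributed-memory extension the paper describes.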

  17. An English Vocabulary Learning System Based on Fuzzy Theory and Memory Cycle

    NASA Astrophysics Data System (ADS)

    Wang, Tzone I.; Chiu, Ti Kai; Huang, Liang Jun; Fu, Ru Xuan; Hsieh, Tung-Cheng

This paper proposes an English Vocabulary Learning System based on the Fuzzy Theory and the Memory Cycle Theory to help a learner memorize vocabulary easily. By using fuzzy inferences and personal memory cycles, it is possible to find an article that best suits a learner. After reading an article, a quiz is provided for the learner to reinforce his/her memory of the vocabulary in the article. Earlier research used only explicit responses (e.g., quiz results) to update the memory cycles of newly learned vocabulary; in addition to that approach, this paper proposes a methodology that also implicitly modifies the memory cycles of learned words. Through intensive reading of articles recommended by our approach, a learner learns new words quickly and reviews learned words implicitly as well, thereby efficiently improving his/her vocabulary ability.

  18. Unconditional room-temperature quantum memory

    NASA Astrophysics Data System (ADS)

    Hosseini, M.; Campbell, G.; Sparkes, B. M.; Lam, P. K.; Buchler, B. C.

    2011-10-01

    Just as classical information systems require buffers and memory, the same is true for quantum information systems. The potential that optical quantum information processing holds for revolutionizing computation and communication is therefore driving significant research into developing optical quantum memory. A practical optical quantum memory must be able to store and recall quantum states on demand with high efficiency and low noise. Ideally, the platform for the memory would also be simple and inexpensive. Here, we present a complete tomographic reconstruction of quantum states that have been stored in the ground states of rubidium in a vapour cell operating at around 80°C. Without conditional measurements, we show recall fidelity up to 98% for coherent pulses containing around one photon. To unambiguously verify that our memory beats the quantum no-cloning limit we employ state-independent verification using conditional variance and signal-transfer coefficients.

  19. Fast maximum intensity projections of large medical data sets by exploiting hierarchical memory architectures.

    PubMed

    Kiefer, Gundolf; Lehmann, Helko; Weese, Jürgen

    2006-04-01

    Maximum intensity projections (MIPs) are an important visualization technique for angiographic data sets. Efficient data inspection requires frame rates of at least five frames per second at preserved image quality. Despite the advances in computer technology, this task remains a challenge. On the one hand, the sizes of computed tomography and magnetic resonance images are increasing rapidly. On the other hand, rendering algorithms do not automatically benefit from the advances in processor technology, especially for large data sets. This is due to the faster evolving processing power and the slower evolving memory access speed, which is bridged by hierarchical cache memory architectures. In this paper, we investigate memory access optimization methods and use them for generating MIPs on general-purpose central processing units (CPUs) and graphics processing units (GPUs), respectively. These methods can work on any level of the memory hierarchy, and we show that properly combined methods can optimize memory access on multiple levels of the hierarchy at the same time. We present performance measurements to compare different algorithm variants and illustrate the influence of the respective techniques. On current hardware, the efficient handling of the memory hierarchy for CPUs improves the rendering performance by a factor of 3 to 4. On GPUs, we observed that the effect is even larger, especially for large data sets. The methods can easily be adjusted to different hardware specifics, although their impact can vary considerably. They can also be used for other rendering techniques than MIPs, and their use for more general image processing task could be investigated in the future.
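For an axis-aligned view, the MIP itself is a single reduction along the viewing axis; the paper's contribution concerns memory-access ordering for arbitrary view directions and large volumes. A minimal NumPy sketch of the projection (the volume here is synthetic):

```python
import numpy as np

# A maximum intensity projection collapses a volume along the viewing
# axis by keeping the brightest voxel on each ray. NumPy traverses the
# array in memory order for this reduction, which is exactly the kind of
# cache-friendly access pattern the paper's optimizations target for the
# general (non-axis-aligned) case.
rng = np.random.default_rng(1)
volume = rng.integers(0, 256, size=(64, 128, 128), dtype=np.uint16)
mip = volume.max(axis=0)   # project along the first (slice) axis

print(mip.shape)
```

For oblique view directions the rays no longer follow memory order, and the reordering and blocking techniques the paper investigates become necessary to keep the working set inside the cache hierarchy.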

  20. Age-specific effects of voluntary exercise on memory and the older brain.

    PubMed

    Siette, Joyce; Westbrook, R Frederick; Cotman, Carl; Sidhu, Kuldip; Zhu, Wanlin; Sachdev, Perminder; Valenzuela, Michael J

    2013-03-01

    Physical exercise in early adulthood and mid-life improves cognitive function and enhances brain plasticity, but the effects of commencing exercise in late adulthood are not well-understood. We investigated the effects of voluntary exercise in the restoration of place recognition memory in aged rats and examined hippocampal changes of synaptic density and neurogenesis. We found a highly selective age-related deficit in place recognition memory that is stable across retest sessions and correlates strongly with loss of hippocampal synapses. Additionally, 12 weeks of voluntary running at 20 months of age removed the deficit in the hippocampally dependent place recognition memory. Voluntary running restored presynaptic density in the dentate gyrus and CA3 hippocampal subregions in aged rats to levels beyond those observed in younger animals, in which exercise had no functional or synaptic effects. By contrast, hippocampal neurogenesis, a possible memory-related mechanism, increased in both young and aged rats after physical exercise but was not linked with performance in the place recognition task. We used graph-based network analysis based on synaptic covariance patterns to characterize efficient intrahippocampal connectivity. This analysis revealed that voluntary running completely reverses the profound degradation of hippocampal network efficiency that accompanies sedentary aging. Furthermore, at an individual animal level, both overall hippocampal presynaptic density and subregional connectivity independently contribute to prediction of successful place recognition memory performance. Our findings emphasize the unique synaptic effects of exercise on the aged brain and their specific relevance to a hippocampally based memory system for place recognition. Copyright © 2013 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.

  1. Siemens, Philips megaproject to yield superchip in 5 years

    NASA Astrophysics Data System (ADS)

    1985-02-01

    The development of computer chips using complementary metal oxide semiconductor (CMOS) memory technology is described. The management planning and marketing strategy of the Philips and Siemens corporations with regard to the memory chip are discussed.

  2. Managing internode data communications for an uninitialized process in a parallel computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Archer, Charles J; Blocksome, Michael A; Miller, Douglas R

    2014-05-20

A parallel computer includes nodes, each having main memory and a messaging unit (MU). Each MU includes computer memory, which in turn includes MU message buffers. Each MU message buffer is associated with an uninitialized process on the compute node. In the parallel computer, managing internode data communications for an uninitialized process includes: receiving, by an MU of a compute node, one or more data communications messages in an MU message buffer associated with an uninitialized process on the compute node; determining, by an application agent, that the MU message buffer associated with the uninitialized process is full prior to initialization of the uninitialized process; establishing, by the application agent, a temporary message buffer for the uninitialized process in main computer memory; and moving, by the application agent, data communications messages from the MU message buffer associated with the uninitialized process to the temporary message buffer in main computer memory.

  3. Increasing available FIFO space to prevent messaging queue deadlocks in a DMA environment

    DOEpatents

    Blocksome, Michael A [Rochester, MN; Chen, Dong [Croton On Hudson, NY; Gooding, Thomas [Rochester, MN; Heidelberger, Philip [Cortlandt Manor, NY; Parker, Jeff [Rochester, MN

    2012-02-07

    Embodiments of the invention may be used to manage message queues in a parallel computing environment to prevent message queue deadlock. A direct memory access controller of a compute node may determine when a messaging queue is full. In response, the DMA may generate an interrupt. An interrupt handler may stop the DMA and swap all descriptors from the full messaging queue into a larger queue (or enlarge the original queue). The interrupt handler then restarts the DMA. Alternatively, the interrupt handler stops the DMA, allocates a memory block to hold queue data, and then moves descriptors from the full messaging queue into the allocated memory block. The interrupt handler then restarts the DMA. During a normal messaging advance cycle, a messaging manager attempts to inject the descriptors in the memory block into other messaging queues until the descriptors have all been processed.

  4. Managing internode data communications for an uninitialized process in a parallel computer

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Miller, Douglas R; Parker, Jeffrey J; Ratterman, Joseph D; Smith, Brian E

    2014-05-20

A parallel computer includes nodes, each having main memory and a messaging unit (MU). Each MU includes computer memory, which in turn includes MU message buffers. Each MU message buffer is associated with an uninitialized process on the compute node. In the parallel computer, managing internode data communications for an uninitialized process includes: receiving, by an MU of a compute node, one or more data communications messages in an MU message buffer associated with an uninitialized process on the compute node; determining, by an application agent, that the MU message buffer associated with the uninitialized process is full prior to initialization of the uninitialized process; establishing, by the application agent, a temporary message buffer for the uninitialized process in main computer memory; and moving, by the application agent, data communications messages from the MU message buffer associated with the uninitialized process to the temporary message buffer in main computer memory.

  5. Next Generation Mass Memory Architecture

    NASA Astrophysics Data System (ADS)

    Herpel, H.-J.; Stahle, M.; Lonsdorfer, U.; Binzer, N.

    2010-08-01

Future Mass Memory units will have to cope with various demanding requirements driven by onboard instruments (optical and SAR) that generate a huge amount of data (>10 Tbit) at data rates >6 Gbps. For the downlink, data rates around 3 Gbps will be feasible using the latest Ka-band technology together with Variable Coding and Modulation (VCM) techniques. These high data rates and storage capacities need to be effectively managed. Therefore, data structures and data management functions have to be improved and adapted to existing standards like the Packet Utilisation Standard (PUS). In this paper we will present a highly modular and scalable architectural approach for mass memories in order to support a wide range of mission requirements.

  6. Extending the BEAGLE library to a multi-FPGA platform

    PubMed Central

    2013-01-01

    Background Maximum Likelihood (ML)-based phylogenetic inference using Felsenstein’s pruning algorithm is a standard method for estimating the evolutionary relationships amongst a set of species based on DNA sequence data, and is used in popular applications such as RAxML, PHYLIP, GARLI, BEAST, and MrBayes. The Phylogenetic Likelihood Function (PLF) and its associated scaling and normalization steps comprise the computational kernel for these tools. These computations are data intensive but contain fine grain parallelism that can be exploited by coprocessor architectures such as FPGAs and GPUs. A general purpose API called BEAGLE has recently been developed that includes optimized implementations of Felsenstein’s pruning algorithm for various data parallel architectures. In this paper, we extend the BEAGLE API to a multiple Field Programmable Gate Array (FPGA)-based platform called the Convey HC-1. Results The core calculation of our implementation, which includes both the phylogenetic likelihood function (PLF) and the tree likelihood calculation, has an arithmetic intensity of 130 floating-point operations per 64 bytes of I/O, or 2.03 ops/byte. Its performance can thus be calculated as a function of the host platform’s peak memory bandwidth and the implementation’s memory efficiency, as 2.03 × peak bandwidth × memory efficiency. Our FPGA-based platform has a peak bandwidth of 76.8 GB/s and our implementation achieves a memory efficiency of approximately 50%, which gives an average throughput of 78 Gflops. This represents a ~40X speedup when compared with BEAGLE’s CPU implementation on a dual Xeon 5520 and 3X speedup versus BEAGLE’s GPU implementation on a Tesla T10 GPU for very large data sizes. The power consumption is 92 W, yielding a power efficiency of 1.7 Gflops per Watt. 
Conclusions The use of data parallel architectures to achieve high performance for likelihood-based phylogenetic inference requires high memory bandwidth and a design methodology that emphasizes high memory efficiency. To achieve this objective, we integrated 32 pipelined processing elements (PEs) across four FPGAs. For the design of each PE, we developed a specialized synthesis tool to generate a floating-point pipeline with resource and throughput constraints to match the target platform. We have found that using low-latency floating-point operators can significantly reduce FPGA area and still meet timing requirements on the target platform. We found that this design methodology can achieve performance that exceeds that of a GPU-based coprocessor. PMID:23331707
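The throughput figure quoted above follows directly from the roofline-style product of arithmetic intensity, peak bandwidth, and memory efficiency; a quick arithmetic check:

```python
# Worked check of the estimate quoted in the abstract:
# throughput ≈ arithmetic intensity × peak bandwidth × memory efficiency.
ops_per_byte = 130 / 64          # 130 flops per 64 bytes ≈ 2.03 ops/byte
peak_bw_gb_s = 76.8              # Convey HC-1 peak memory bandwidth (GB/s)
mem_efficiency = 0.5             # ~50% achieved memory efficiency
throughput_gflops = ops_per_byte * peak_bw_gb_s * mem_efficiency
print(round(throughput_gflops, 1))   # ≈ 78 Gflops, matching the abstract
```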

  7. Complementary-encoding holographic associative memory using a photorefractive crystal

    NASA Astrophysics Data System (ADS)

    Yuan, ShiFu; Wu, Minxian; Yan, Yingbai; Jin, Guofan

    1996-06-01

We present a holographic implementation of accurate associative memory with only one holographic memory system. In the implementation, the stored and test images are coded using a complementary-encoding method. The recalled complete image is also a coded image that can be decoded with a decoding mask to obtain the original image or its complement. The experiment shows that complementary encoding can efficiently increase the addressing accuracy in a simple way. In place of the above complementary-encoding method, a scheme that uses a complementary area-encoding method is also proposed for the holographic implementation of gray-level image associative memory with accurate addressing.

  8. Efficient packing of patterns in sparse distributed memory by selective weighting of input bits

    NASA Technical Reports Server (NTRS)

    Kanerva, Pentti

    1991-01-01

    When a set of patterns is stored in a distributed memory, any given storage location participates in the storage of many patterns. From the perspective of any one stored pattern, the other patterns act as noise, and such noise limits the memory's storage capacity. The more similar the retrieval cues for two patterns are, the more the patterns interfere with each other in memory, and the harder it is to separate them on retrieval. A method is described of weighting the retrieval cues to reduce such interference and thus to improve the separability of patterns that have similar cues.
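Kanerva's exact weighting scheme is not reproduced here, but the effect of weighting retrieval-cue bits can be illustrated with a toy weighted Hamming distance; all vectors and weights below are made-up examples:

```python
import numpy as np

# Illustrative sketch (not Kanerva's exact method): when two binary
# retrieval cues are similar, plain Hamming distance barely separates
# them, but up-weighting the bits known to discriminate between them
# widens the margin for an SDM-style nearest-match lookup.
cue_a = np.array([1, 0, 1, 1, 0, 0, 1, 0])
cue_b = np.array([1, 0, 1, 1, 0, 1, 1, 0])   # differs from cue_a in one bit

probe = cue_a.copy()
uniform_w = np.ones(8)
informative = (cue_a != cue_b)               # bits that discriminate
weighted_w = np.where(informative, 4.0, 1.0) # emphasize those bits

def dist(x, y, w):
    """Weighted Hamming distance: sum of weights over mismatched bits."""
    return float(np.sum(w * (x != y)))

# Uniform weights: margin of 1 bit. Weighted: the same probe is now
# 4 units closer to cue_a than to cue_b.
print(dist(probe, cue_a, uniform_w), dist(probe, cue_b, uniform_w))
print(dist(probe, cue_a, weighted_w), dist(probe, cue_b, weighted_w))
```

A larger distance margin between a probe and the wrong pattern's cue is what reduces inter-pattern interference at retrieval time.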

  9. Memory and Spin Injection Devices Involving Half Metals

    DOE PAGES

    Shaughnessy, M.; Snow, Ryan; Damewood, L.; ...

    2011-01-01

We suggest memory and spin injection devices fabricated with half-metallic materials and based on the anomalous Hall effect. Schematic diagrams of the memory chips, in thin film and bulk crystal form, are presented. Spin injection devices made in thin film form are also suggested. These devices do not need any external magnetic field but make use of their own magnetization. Only a gate voltage is needed. The carriers are 100% spin polarized. Memory devices may potentially be smaller, faster, and less volatile than existing ones, and the injection devices may be much smaller and more efficient than existing spin injection devices.

  10. Programming distributed memory architectures using Kali

    NASA Technical Reports Server (NTRS)

    Mehrotra, Piyush; Vanrosendale, John

    1990-01-01

Programming nonshared memory systems is more difficult than programming shared memory systems, in part because of the relatively low level of current programming environments for such machines. A new programming environment is presented, Kali, which provides a global name space and allows direct access to remote data values. In order to retain efficiency, Kali provides a system of annotations, allowing the user to control those aspects of the program critical to performance, such as data distribution and load balancing. The primitives and constructs provided by the language are described, and some of the issues raised in translating a Kali program for execution on distributed memory systems are also discussed.

  11. Portable Electromyograph

    NASA Technical Reports Server (NTRS)

    De Luca, Gianluca; De Luca, Carlo J.; Bergman, Per

    2004-01-01

A portable electronic apparatus records electromyographic (EMG) signals in as many as 16 channels at a sampling rate of 1,024 Hz in each channel. The apparatus (see figure) includes 16 differential EMG electrodes (each electrode corresponding to one channel) with cables and attachment hardware, reference electrodes, an input/output-and-power-adapter unit, a 16-bit analog-to-digital converter, and a hand-held computer that contains a removable 256-MB flash memory card. When all 16 EMG electrodes are in use, full-bandwidth data can be recorded in each channel for as long as 8 hours. The apparatus is powered by a battery and is small enough that it can be carried in a waist pouch. The computer is equipped with a small screen that can be used to display the incoming signals on each channel. Amplitude and time adjustments of this display can be made easily by use of touch buttons on the screen. The user can also set up a data-acquisition schedule to conform to experimental protocols or to manage battery energy and memory efficiently. Once the EMG data have been recorded, the flash memory card is removed from the EMG apparatus and placed in a flash-memory-card-reading external drive unit connected to a personal computer (PC). The PC can then read the data recorded in the 16 channels. Preferably, before further analysis, the data should be stored in the hard drive of the PC. The data files are opened and viewed on the PC by use of special-purpose software. The software for operation of the apparatus resides in a random-access memory (RAM), with backup power supplied by a small internal lithium cell. A backup copy of this software resides on the flash memory card. In the event of loss of both main and backup battery power and consequent loss of this software, the backup copy can be used to restore the RAM copy after power has been restored. Accessories for this device are also available. These include goniometers, accelerometers, foot switches, and force gauges.

  12. Evolutionary Metal Oxide Clusters for Novel Applications: Toward High-Density Data Storage in Nonvolatile Memories.

    PubMed

    Chen, Xiaoli; Zhou, Ye; Roy, Vellaisamy A L; Han, Su-Ting

    2018-01-01

    Because of current fabrication limitations, miniaturizing nonvolatile memory devices for managing the explosive increase in big data is challenging. Molecular memories constitute a promising candidate for next-generation memories because their properties can be readily modulated through chemical synthesis. Moreover, these memories can be fabricated through mild solution processing, which can be easily scaled up. Among the various materials, polyoxometalate (POM) molecules have attracted considerable attention for use as novel data-storage nodes for nonvolatile memories. Here, an overview of recent advances in the development of POMs for nonvolatile memories is presented. The general background knowledge of the structure and property diversity of POMs is also summarized. Finally, the challenges and perspectives in the application of POMs in memories are discussed. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. Implementing Connected Component Labeling as a User Defined Operator for SciDB

    NASA Technical Reports Server (NTRS)

    Oloso, Amidu; Kuo, Kwo-Sen; Clune, Thomas; Brown, Paul; Poliakov, Alex; Yu, Hongfeng

    2016-01-01

We have implemented a flexible User Defined Operator (UDO) for labeling connected components of a binary mask expressed as an array in SciDB, a parallel distributed database management system based on the array data model. This UDO is able to process very large multidimensional arrays by exploiting SciDB's memory management mechanism that efficiently manipulates arrays whose memory requirements far exceed available physical memory. The UDO takes as primary inputs a binary mask array and a binary stencil array that specifies the connectivity of a given cell to its neighbors. The UDO returns an array of the same shape as the input mask array with each foreground cell containing the label of the component it belongs to. By default, dimensions are treated as non-periodic, but the UDO also accepts optional input parameters to specify periodicity in any of the array dimensions. The UDO requires four stages to completely label connected components. In the first stage, labels are computed for each subarray or chunk of the mask array in parallel across SciDB instances using the weighted quick union (WQU) with half-path compression algorithm. In the second stage, labels around chunk boundaries from the first stage are stored in a temporary SciDB array that is then replicated across all SciDB instances. Equivalences are resolved by again applying the WQU algorithm to these boundary labels. In the third stage, relabeling is done for each chunk using the resolved equivalences. In the fourth stage, the resolved labels, which so far are "flattened" coordinates of the original binary mask array, are renamed with sequential integers for legibility. The UDO is demonstrated on a 3-D mask of O(10^11) elements, with O(10^8) foreground cells and O(10^6) connected components. The operator completes in 19 minutes using 84 SciDB instances.
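The weighted-quick-union equivalence resolution at the heart of the first and second stages can be sketched on a single in-memory chunk; this is an illustrative Python rendering of the technique, not the SciDB UDO itself:

```python
import numpy as np

def label_components(mask):
    """Label 4-connected components of a 2-D binary mask using
    union-find (weighted quick union with path compression), the same
    equivalence-resolution strategy the UDO applies per chunk."""
    h, w = mask.shape
    parent = np.arange(h * w)
    size = np.ones(h * w, dtype=int)

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression (halving)
            i = parent[i]
        return i

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra == rb:
            return
        if size[ra] < size[rb]:             # weighting: small under large
            ra, rb = rb, ra
        parent[rb] = ra
        size[ra] += size[rb]

    # Union each foreground cell with its left and upper neighbors.
    for y in range(h):
        for x in range(w):
            if not mask[y, x]:
                continue
            if x > 0 and mask[y, x - 1]:
                union(y * w + x, y * w + x - 1)
            if y > 0 and mask[y - 1, x]:
                union(y * w + x, (y - 1) * w + x)

    # Rename roots ("flattened" coordinates) to sequential labels,
    # mirroring the UDO's fourth stage.
    labels = np.zeros((h, w), dtype=int)
    next_label = {}
    for y in range(h):
        for x in range(w):
            if mask[y, x]:
                root = find(y * w + x)
                labels[y, x] = next_label.setdefault(root, len(next_label) + 1)
    return labels

mask = np.array([[1, 1, 0, 0],
                 [0, 1, 0, 1],
                 [0, 0, 0, 1]], dtype=bool)
labels = label_components(mask)
print(labels)
```

In the distributed setting, the same union-find pass runs once per chunk and a second time over the replicated boundary labels to merge components that span chunks.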

  14. Arra: Tas::89 0227::Tas Recovery Act 100g Ftp: An Ultra-High Speed Data Transfer Service Over Next Generation 100 Gigabit Per Second Network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Dantong; Jin, Shudong

    2014-03-01

    Data-intensive applications, including high energy and nuclear physics, astrophysics, climate modeling, nano-scale materials science, genomics, and financing, are expected to generate exabytes of data over the coming years, which must be transferred, visualized, and analyzed by geographically distributed teams of users. High-performance network capabilities must be available to these users at the application level in a transparent, virtualized manner. Moreover, the application users must have the capability to move large datasets from local and remote locations across network environments to their home institutions. To solve these challenges, the main goal of our project is to design and evaluate high-performance data transfer software to support various data-intensive applications. First, we have designed middleware software that provides access to Remote Direct Memory Access (RDMA) functionalities. This middleware integrates network access, memory management and multitasking in its core design. We address a number of issues related to its efficient implementation, for instance, explicit buffer management and memory registration, and parallelization of RDMA operations, which are vital to delivering the benefit of RDMA to the applications. Built on top of this middleware, the RDMA-based FTP software RFTP is described and experimentally evaluated. This application has been implemented by our team to exploit the full capabilities of advanced RDMA mechanisms for ultra-high speed bulk data transfer applications on the Energy Sciences Network (ESnet). Second, we designed our data transfer software to optimize TCP/IP-based data transfer performance so that RFTP is fully compatible with today's Internet. Our kernel optimization techniques, built on the Linux system calls sendfile and splice, reduce data-copy cost.
    In this report, we summarize the technical challenges of our project, the primary software design methods, the major project milestones achieved, and the testbed evaluation work and demonstrations carried out during the project lifetime.
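
    The sendfile-based zero-copy path mentioned above can be illustrated with a short generic sketch (assuming Linux; this is not the project's RFTP code):

```python
import os
import socket
import tempfile

def zero_copy_send(sock_fd, path):
    """Push a file to a socket with the Linux sendfile(2) system call:
    the kernel moves the data directly, avoiding the user-space copy
    that a read()/write() loop would incur."""
    with open(path, "rb") as f:
        size = os.fstat(f.fileno()).st_size
        offset = 0
        while offset < size:
            sent = os.sendfile(sock_fd, f.fileno(), offset, size - offset)
            if sent == 0:  # peer closed the connection
                break
            offset += sent
    return offset
```

    A `splice`-based pipe-to-socket path follows the same pattern via `os.splice` on recent Linux kernels.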

  15. Efficient linear algebra routines for symmetric matrices stored in packed form.

    PubMed

    Ahlrichs, Reinhart; Tsereteli, Kakha

    2002-01-30

    Quantum chemistry methods require various linear algebra routines for symmetric matrices, for example, diagonalization or Cholesky decomposition for positive matrices. We present a small set of these basic routines that are efficient and minimize memory requirements.

  16. Attractor neural networks with resource-efficient synaptic connectivity

    NASA Astrophysics Data System (ADS)

    Pehlevan, Cengiz; Sengupta, Anirvan

    Memories are thought to be stored in the attractor states of recurrent neural networks. Here we explore how resource constraints interplay with memory storage function to shape the synaptic connectivity of attractor networks. We propose that, given a set of memories in the form of population activity patterns, the neural circuit chooses a synaptic connectivity configuration that minimizes a resource usage cost. We argue that the total synaptic weight (l1-norm) in the network measures the resource cost, because synaptic weight is correlated with synaptic volume, which is a limited resource, and is proportional to neurotransmitter release and post-synaptic current, both of which cost energy. Using numerical simulations and replica theory, we characterize optimal connectivity profiles in resource-efficient attractor networks. Our theory explains several experimental observations on cortical connectivity profiles: (1) connectivity is sparse, because synapses are costly; (2) bidirectional connections are overrepresented; and (3) bidirectional connections are stronger, because attractor states need strong recurrence.
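
    The idea of memories as attractor fixed points, and of the l1-norm as the resource cost, can be illustrated with a toy Hopfield-style network. Note the hedge: the paper *optimizes* connectivity under the l1 cost, whereas this sketch uses the classical Hebbian rule as a simple stand-in and merely reports the same cost measure.

```python
def hebbian_weights(patterns):
    """Classical Hebbian weight matrix for a Hopfield-style attractor
    network (a stand-in for the paper's optimized connectivity)."""
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j] / n
    return W


def recall(W, x):
    """One synchronous update x -> sign(Wx); stored patterns are fixed
    points (attractors) of this dynamics."""
    n = len(x)
    return [1 if sum(W[i][j] * x[j] for j in range(n)) >= 0 else -1
            for i in range(n)]


def l1_cost(W):
    """Total synaptic weight: the resource-usage measure proposed in
    the abstract."""
    return sum(abs(wij) for row in W for wij in row)
```

    Storing two orthogonal 8-unit patterns and flipping one bit of the probe, a single update falls back into the stored attractor, while `l1_cost` gives the resource budget the network consumes.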

  17. Exploring the effect of depressive symptoms and ageing on metamemory in an Italian adult sample.

    PubMed

    Fastame, Maria Chiara

    2014-01-01

    The current study aimed to investigate the effect of depression and age-related factors on metamemory measures in an Italian adult sample. Fifty-eight healthy participants were recruited in Northern Italy and assigned to one of the following groups: Young (20-30 years old), Old (60-70 years old), and Very Old (71-84 years old). Participants were administered a battery of tests, including a word recall task, self-referent mnestic efficiency scales, general beliefs about memory, and depression measures. General beliefs about memory, self-efficacy, and beliefs about the control of personal memory were predicted by age, education, depression, and mnestic and cognitive efficiency. Finally, age-related differences were found in the metamemory measures: very old adults judged the accuracy of their mnestic control processes to be lower than old and young individuals did.

  18. Survival Processing Enhances Visual Search Efficiency.

    PubMed

    Cho, Kit W

    2018-05-01

    Words rated for their survival relevance are remembered better than words rated using other well-known memory mnemonics. This finding, which is known as the survival advantage effect and has been replicated in many studies, suggests that our memory systems are molded by natural selection pressures. In two experiments, the present study used a visual search task to examine whether there is likewise a survival advantage for our visual systems. Participants rated words for their survival relevance or for their pleasantness before locating that object's picture in a search array with 8 or 16 objects. Although there was no difference in search times between the two rating scenarios when the set size was 8, survival processing reduced visual search times when the set size was 16. These findings reflect a search efficiency effect and suggest that, similar to our memory systems, our visual systems are also tuned toward self-preservation.

  19. Design of a Variational Multiscale Method for Turbulent Compressible Flows

    NASA Technical Reports Server (NTRS)

    Diosady, Laslo Tibor; Murman, Scott M.

    2013-01-01

    A spectral-element framework is presented for the simulation of subsonic compressible high-Reynolds-number flows. The focus of the work is maximizing the efficiency of the computational schemes to enable unsteady simulations with a large number of spatial and temporal degrees of freedom. A collocation scheme is combined with optimized computational kernels to provide a residual evaluation with computational cost independent of order of accuracy up to 16th order. The optimized residual routines are used to develop a low-memory implicit scheme based on a matrix-free Newton-Krylov method. A preconditioner based on the finite-difference diagonalized ADI scheme is developed which maintains the low memory of the matrix-free implicit solver, while providing improved convergence properties. Emphasis on low memory usage throughout the solver development is leveraged to implement a coupled space-time DG solver which may offer further efficiency gains through adaptivity in both space and time.
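
    The low-memory, matrix-free idea can be sketched in a few lines: the Krylov solver only ever needs operator-vector products, so the operator is applied as a function and no matrix is assembled. A hedged simplification: the paper uses a Newton-Krylov method with an ADI-based preconditioner on compressible-flow residuals, while this sketch runs plain conjugate gradient on a symmetric 1-D Laplacian as a model operator.

```python
def laplacian_1d(x):
    """Apply the 1-D [-1, 2, -1] stencil without ever storing a matrix."""
    n = len(x)
    return [2 * x[i]
            - (x[i - 1] if i > 0 else 0.0)
            - (x[i + 1] if i < n - 1 else 0.0)
            for i in range(n)]


def conjugate_gradient(apply_A, b, tol=1e-12, max_iter=200):
    """Matrix-free Krylov solver: only matrix-vector products are
    required, so memory stays O(n) regardless of the operator."""
    x = [0.0] * len(b)
    r = list(b)
    p = list(r)
    rs = sum(v * v for v in r)
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(v * v for v in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x
```

    Swapping `laplacian_1d` for a residual-linearization Jacobian-vector product (e.g., by finite differences of the residual) gives the Newton-Krylov structure the paper describes.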

  20. Binary Associative Memories as a Benchmark for Spiking Neuromorphic Hardware

    PubMed Central

    Stöckel, Andreas; Jenzen, Christoph; Thies, Michael; Rückert, Ulrich

    2017-01-01

    Large-scale neuromorphic hardware platforms, specialized computer systems for the energy-efficient simulation of spiking neural networks, are being developed around the world, for example as part of the European Human Brain Project (HBP). Due to conceptual differences, a universal performance analysis of these systems in terms of runtime, accuracy and energy efficiency is non-trivial, yet indispensable for further hardware and software development. In this paper we describe a scalable benchmark based on a spiking neural network implementation of the binary neural associative memory. We treat neuromorphic hardware and software simulators as black boxes and execute exactly the same network description across all devices. Experiments on the HBP platforms under varying configurations of the associative memory show that the presented method makes it possible to test the quality of the neuron model implementation and to explain significant deviations from the expected reference output. PMID:28878642
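
    The benchmark's reference model, the binary (Willshaw-type) neural associative memory, has a very compact non-spiking description, sketched below; the actual benchmark runs a spiking implementation of this same store/recall logic on each platform.

```python
def store(pairs, m, n):
    """Binary (Willshaw-type) associative memory: the weight matrix is
    the element-wise OR of the outer products of the stored pairs."""
    M = [[0] * n for _ in range(m)]
    for x, y in pairs:
        for i in range(m):
            for j in range(n):
                if x[i] and y[j]:
                    M[i][j] = 1
    return M


def retrieve(M, x):
    """Recall: threshold each output sum at the number of active inputs."""
    theta = sum(x)
    m, n = len(M), len(M[0])
    sums = [sum(x[i] * M[i][j] for i in range(m)) for j in range(n)]
    return [1 if s >= theta else 0 for s in sums]
```

    Deviations between this reference output and a platform's spiking recall are exactly the kind of discrepancy the benchmark is designed to expose.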

  1. Entanglement distillation for quantum communication network with atomic-ensemble memories.

    PubMed

    Li, Tao; Yang, Guo-Jian; Deng, Fu-Guo

    2014-10-06

    Atomic ensembles are effective memory nodes for quantum communication networks due to their long coherence time and the collective enhancement effect in the nonlinear interaction between an ensemble and a photon. Here we investigate the possibility of achieving entanglement distillation for nonlocal atomic ensembles by the input-output process of a single photon as a result of cavity quantum electrodynamics. We give an optimal entanglement concentration protocol (ECP) for two-atomic-ensemble systems in a partially entangled pure state with known parameters, and an efficient ECP for systems in an unknown partially entangled pure state using a nondestructive parity-check detector (PCD). For systems in a mixed entangled state, we introduce an entanglement purification protocol with PCDs. These entanglement distillation protocols have high fidelity and efficiency with current experimental techniques, and they are useful for quantum communication networks with atomic-ensemble memories.

  2. Electro-Optic Quantum Memory for Light Using Two-Level Atoms

    NASA Astrophysics Data System (ADS)

    Hétet, G.; Longdell, J. J.; Alexander, A. L.; Lam, P. K.; Sellars, M. J.

    2008-01-01

    We present a simple quantum memory scheme that allows for the storage of a light field in an ensemble of two-level atoms. The technique is analogous to the NMR gradient echo for which the imprinting and recalling of the input field are performed by controlling a linearly varying broadening. Our protocol is perfectly efficient in the limit of high optical depths and the output pulse is emitted in the forward direction. We provide a numerical analysis of the protocol together with an experiment performed in a solid state system. In close agreement with our model, the experiment shows a total efficiency of up to 15%, and a recall efficiency of 26%. We suggest simple realizable improvements for the experiment to surpass the no-cloning limit.

  3. Electro-optic quantum memory for light using two-level atoms.

    PubMed

    Hétet, G; Longdell, J J; Alexander, A L; Lam, P K; Sellars, M J

    2008-01-18

    We present a simple quantum memory scheme that allows for the storage of a light field in an ensemble of two-level atoms. The technique is analogous to the NMR gradient echo for which the imprinting and recalling of the input field are performed by controlling a linearly varying broadening. Our protocol is perfectly efficient in the limit of high optical depths and the output pulse is emitted in the forward direction. We provide a numerical analysis of the protocol together with an experiment performed in a solid state system. In close agreement with our model, the experiment shows a total efficiency of up to 15%, and a recall efficiency of 26%. We suggest simple realizable improvements for the experiment to surpass the no-cloning limit.

  4. Synthesis of energy-efficient FSMs implemented in PLD circuits

    NASA Astrophysics Data System (ADS)

    Nawrot, Radosław; Kulisz, Józef; Kania, Dariusz

    2017-11-01

    The paper presents an outline of a simple synthesis method for energy-efficient FSMs. The idea consists of using local clock gating to selectively block the clock signal when no state transition of a memory element is required. The research was dedicated to logic circuits using Programmable Logic Devices as the implementation platform, but the conclusions can be applied to any synchronous circuit. The experimental section reports a comparison of three methods of implementing sequential circuits in PLDs with respect to clock distribution: the classical fully synchronous structure, the structure exploiting the Enable Clock inputs of memory elements, and the structure using clock gating. The results show that the approach based on clock gating is the most efficient one, and it leads to a significant reduction of the dynamic power consumed by the FSM.
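
    The gating idea is hardware-level, but its effect can be sketched with a small behavioral simulation: count how many clock edges actually reach the state register with and without gating, using delivered edges as a crude proxy for dynamic power (an illustrative assumption, not the paper's measurement method).

```python
def run_fsm(inputs, next_state, gated):
    """Simulate an FSM state register over a stream of inputs. With
    clock gating, the register only receives a clock edge when the
    next state differs from the current one; counting delivered edges
    is a rough stand-in for dynamic switching activity."""
    state = 0
    clock_edges = 0
    for x in inputs:
        nxt = next_state(state, x)
        if not gated or nxt != state:
            clock_edges += 1  # the register is clocked this cycle
            state = nxt
    return state, clock_edges
```

    For a toggle-on-1 FSM driven mostly by zeros, both variants reach the same final state, but the gated register sees far fewer clock edges.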

  5. Studies and applications of NiTi shape memory alloys in the medical field in China.

    PubMed

    Dai, K; Chu, Y

    1996-01-01

    The biomedical study of NiTi shape memory alloys has been undertaken in China since 1978. A series of simulated corrosion tests, histological observations, toxicity tests, carcinogenicity tests, trace nickel element analyses and a number of clinical trials have been conducted. The results showed that NiTi shape memory alloy is a good biomaterial with good biocompatibility; no obvious local tissue reaction, carcinogenesis or erosion of implants was found experimentally or clinically. In 1981, on the basis of these fundamental studies, a shape memory staple was used for the first time inside the human body. Subsequently, various shape memory devices were designed and applied clinically for internal fixation of fractures, spine surgery, endoprostheses, and gynaecological and craniofacial surgery. Since 1990, a series of internal stents has been developed for the management of biliary, tracheal and esophageal strictures and urethrostenosis, as well as a vascular obturator for tumour management. Several thousand cases have been treated, with 1-10 years of follow-up, and good clinical results with a rather low complication rate were obtained.

  6. Things to come: postmodern digital knowledge management and medical informatics.

    PubMed

    Matheson, N W

    1995-01-01

    The overarching informatics grand challenge facing society is the creation of knowledge management systems that can acquire, conserve, organize, retrieve, display, and distribute what is known today in a manner that informs and educates, facilitates the discovery and creation of new knowledge, and contributes to the health and welfare of the planet. At one time the private, national, and university libraries of the world collectively constituted the memory of society's intellectual history. In the future, these new digital knowledge management systems will constitute human memory in its entirety. The current model of multiple local collections of duplicated resources will give way to specialized sole-source servers. In this new environment all scholarly scientific knowledge should be public domain knowledge: managed by scientists, organized for the advancement of knowledge, and readily available to all. Over the next decade, the challenge for the field of medical informatics and for the libraries that serve as the continuous memory for the biomedical sciences will be to come together to form a new organization that will lead to the development of postmodern digital knowledge management systems for medicine. These systems will form a portion of the evolving world brain of the 21st century.

  7. Chip architecture - A revolution brewing

    NASA Astrophysics Data System (ADS)

    Guterl, F.

    1983-07-01

    Techniques being explored by microchip designers and manufacturers to speed up memory access and instruction execution while protecting memory are discussed. Attention is given to hardwiring control logic, pipelining for parallel processing, devising orthogonal instruction sets with interchangeable instruction fields, and the development of hardware implementations of virtual memory and multiuser systems to provide memory management and protection. The inclusion of microcode in mainframes eliminated logic circuits that control timing and gating of the CPU. However, improvements in memory architecture have reduced access time to below that needed for instruction execution. Hardwiring the functions of a virtual memory enhances memory protection. Parallelism involves a redundant architecture, which allows identical operations to be performed simultaneously, and can be directed with microcode to avoid aborting intermediate instructions once one set of instructions has been completed.

  8. An implicit scheme with memory reduction technique for steady state solutions of DVBE in all flow regimes

    NASA Astrophysics Data System (ADS)

    Yang, L. M.; Shu, C.; Yang, W. M.; Wu, J.

    2018-04-01

    High consumption of memory and computational effort is the major barrier preventing the widespread use of the discrete velocity method (DVM) in the simulation of flows in all flow regimes. To overcome this drawback, an implicit DVM with a memory reduction technique for solving a steady discrete velocity Boltzmann equation (DVBE) is presented in this work. In the method, the distribution functions in the whole discrete velocity space do not need to be stored; they are calculated from the macroscopic flow variables. As a result, its memory requirement is of the same order as that of a conventional Euler/Navier-Stokes solver. At the same time, it is more efficient than the explicit DVM for the simulation of various flows. To make the method efficient for solving flow problems in all flow regimes, a prediction step is introduced to estimate the local equilibrium state of the DVBE. In the prediction step, the distribution function at the cell interface is calculated from the local solution of the DVBE. When the cell size is less than the mean free path, the prediction step has almost no effect on the solution. However, when the cell size is much larger than the mean free path, the prediction step dominates the solution so as to provide reasonable results in such a flow regime. In addition, to further improve the computational efficiency of the developed scheme in the continuum flow regime, the implicit technique is also introduced into the prediction step. Numerical results showed that the proposed implicit scheme can provide reasonable results in all flow regimes and significantly increase the computational efficiency in the continuum flow regime as compared with existing DVM solvers.
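
    The memory reduction hinges on rebuilding distribution functions from macroscopic variables instead of storing the full discrete-velocity array. A heavily simplified 1-D sketch of that reconstruction (a plain Maxwellian with gas constant R = 1; the paper's prediction step and implicit machinery are omitted):

```python
import math

def maxwellian(rho, u, T, v):
    """Equilibrium distribution value at discrete velocity v,
    reconstructed on demand from the macroscopic variables (rho, u, T).
    Because f can be rebuilt this way, the whole discrete-velocity
    array never has to be kept in memory."""
    return rho / math.sqrt(2.0 * math.pi * T) * math.exp(-(v - u) ** 2 / (2.0 * T))


def moments(rho, u, T, vmin=-10.0, vmax=10.0, nv=801):
    """Recover density and momentum by quadrature over the velocity
    grid, confirming the reconstruction is consistent."""
    dv = (vmax - vmin) / (nv - 1)
    vs = [vmin + i * dv for i in range(nv)]
    f = [maxwellian(rho, u, T, v) for v in vs]
    return sum(f) * dv, sum(fi * v for fi, v in zip(f, vs)) * dv
```

    Integrating the reconstructed distribution over the velocity grid returns the macroscopic density and momentum it was built from, which is the consistency property the method relies on.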

  9. LightAssembler: fast and memory-efficient assembly algorithm for high-throughput sequencing reads.

    PubMed

    El-Metwally, Sara; Zakaria, Magdi; Hamza, Taher

    2016-11-01

    The deluge of current sequenced data has exceeded Moore's Law, more than doubling every 2 years since the next-generation sequencing (NGS) technologies were invented. Accordingly, we will be able to generate more and more data with high speed at fixed cost, but lack the computational resources to store, process and analyze it. With error-prone high-throughput NGS reads and genomic repeats, the assembly graph contains a massive amount of redundant nodes and branching edges. Most assembly pipelines require this large graph to reside in memory to start their workflows, which is intractable for mammalian genomes. Resource-efficient genome assemblers combine both the power of advanced computing techniques and innovative data structures to encode the assembly graph efficiently in computer memory. LightAssembler is a lightweight assembly algorithm designed to be executed on a desktop machine. It uses a pair of cache-oblivious Bloom filters, one holding a uniform sample of [Formula: see text]-spaced sequenced [Formula: see text]-mers and the other holding [Formula: see text]-mers classified as likely correct, using a simple statistical test. LightAssembler contains a light implementation of the graph traversal and simplification modules that achieves comparable assembly accuracy and contiguity to other competing tools. Our method reduces the memory usage by [Formula: see text] compared to the resource-efficient assemblers using benchmark datasets from the GAGE and Assemblathon projects. While LightAssembler can be considered a gap-based sequence assembler, different gap sizes result in an almost constant assembly size and genome coverage. https://github.com/SaraEl-Metwally/LightAssembler CONTACT: sarah_almetwally4@mans.edu.eg Supplementary information: Supplementary data are available at Bioinformatics online.
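
    The core data structure here is a Bloom filter over k-mers: a bit array that answers approximate membership queries in constant space, with false positives but no false negatives. A minimal sketch (LightAssembler's actual filters are cache-oblivious and paired with a statistical test; this shows only the membership logic):

```python
import hashlib

class BloomFilter:
    """Plain Bloom filter keyed by k-mer strings."""

    def __init__(self, n_bits=1 << 16, n_hashes=4):
        self.n_bits = n_bits
        self.n_hashes = n_hashes
        self.bits = bytearray(n_bits // 8)

    def _positions(self, kmer):
        # Derive n_hashes independent bit positions by salting one hash.
        for seed in range(self.n_hashes):
            h = hashlib.blake2b(kmer.encode(),
                                salt=seed.to_bytes(8, "little"))
            yield int.from_bytes(h.digest()[:8], "little") % self.n_bits

    def add(self, kmer):
        for p in self._positions(kmer):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, kmer):
        return all(self.bits[p // 8] >> (p % 8) & 1
                   for p in self._positions(kmer))


def kmers(read, k):
    """All overlapping k-mers of a read."""
    return [read[i:i + k] for i in range(len(read) - k + 1)]
```

    Inserting the k-mers of a read and querying them back illustrates the trade-off: membership of inserted k-mers is always reported, while absent k-mers are rejected with high probability at a fraction of the memory an exact set would need.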

  10. Multiprocessor architecture: Synthesis and evaluation

    NASA Technical Reports Server (NTRS)

    Standley, Hilda M.

    1990-01-01

    Multiprocessor computer architecture evaluation for structural computations is the focus of the research effort described. Results obtained are expected to lead to more efficient use of existing architectures and to suggest designs for new, application-specific architectures. The brief descriptions given outline a number of related efforts directed toward this purpose. The difficulty in analyzing an existing architecture or in designing a new computer architecture lies in the fact that the performance of a particular architecture, within the context of a given application, is determined by a number of factors. These include, but are not limited to, the efficiency of the computation algorithm, the programming language and support environment, the quality of the program written in the programming language, the multiplicity of the processing elements, the characteristics of the individual processing elements, the interconnection network connecting processors and non-local memories, and the shared memory organization, covering the spectrum from no shared memory (all local memory) to one global access memory. These performance determiners may be loosely classified as software or hardware related. The distinction is not clear, or even appropriate, in many cases. The effect of the choice of algorithm is ignored by assuming that the algorithm is specified as given. Effort directed toward removing the effect of the programming language and program resulted in the design of a high-level parallel programming language. Two characteristics of the fundamental structure of the architecture (memory organization and interconnection network) are examined.

  11. Efficient Bayesian inference for natural time series using ARFIMA processes

    NASA Astrophysics Data System (ADS)

    Graves, Timothy; Gramacy, Robert; Franzke, Christian; Watkins, Nicholas

    2016-04-01

    Many geophysical quantities, such as atmospheric temperature, water levels in rivers, and wind speeds, have shown evidence of long memory (LM). LM implies that these quantities experience non-trivial temporal memory, which potentially not only enhances their predictability, but also hampers the detection of externally forced trends. Thus, it is important to reliably identify whether or not a system exhibits LM. We present a modern and systematic approach to the inference of LM. We use the flexible autoregressive fractional integrated moving average (ARFIMA) model, which is widely used in time series analysis, and of increasing interest in climate science. Unlike most previous work on the inference of LM, which is frequentist in nature, we provide a systematic treatment of Bayesian inference. In particular, we provide a new approximate likelihood for efficient parameter inference, and show how nuisance parameters (e.g., short-memory effects) can be integrated over in order to focus on long-memory parameters and hypothesis testing more directly. We illustrate our new methodology on the Nile water level data and the central England temperature (CET) time series, with favorable comparison to the standard estimators [1]. In addition we show how the method can be used to perform joint inference of the stability exponent and the memory parameter when ARFIMA is extended to allow for alpha-stable innovations. Such models can be used to study systems where heavy tails and long range memory coexist. [1] Graves et al, Nonlin. Processes Geophys., 22, 679-700, 2015; doi:10.5194/npg-22-679-2015.
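
    The long-memory component of ARFIMA is the fractional differencing operator (1 - B)^d; the slow, hyperbolic decay of its expansion weights is what produces long-range dependence. A short sketch of the standard recursion for the MA(infinity) weights of the inverse operator (a textbook result, not the authors' Bayesian machinery):

```python
def frac_int_weights(d, n):
    """First n MA coefficients of the fractional integration operator
    (1 - B)^(-d): psi_0 = 1 and psi_k = psi_{k-1} * (k - 1 + d) / k.
    For 0 < d < 0.5 they decay hyperbolically (~ k^(d-1)), the slow
    decay of dependence that defines long memory (LM)."""
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (k - 1 + d) / k)
    return w
```

    Convolving white noise with these weights generates an ARFIMA(0, d, 0) series, the simplest LM process the paper's inference targets.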

  12. System level mechanisms of adaptation, learning, memory formation and evolvability: the role of chaperone and other networks.

    PubMed

    Gyurko, David M; Soti, Csaba; Stetak, Attila; Csermely, Peter

    2014-05-01

    During the last decade, network approaches became a powerful tool to describe protein structure and dynamics. Here, we first describe the protein structure networks of molecular chaperones, then characterize chaperone-containing sub-networks of interactomes, called chaperone-networks or chaperomes. We review the role of molecular chaperones in the short-term adaptation of cellular networks in response to stress, and in long-term adaptation, discussing their putative functions in the regulation of evolvability. We provide a general overview of possible network mechanisms of adaptation, learning and memory formation. We propose that changes of network rigidity play a key role in learning and memory formation processes. Flexible network topology provides a 'learning-competent' state. Here, networks may have much weaker modular boundaries than locally rigid, highly modular networks, where the learnt information has already been consolidated in a memory formation process. Since modular boundaries are efficient filters of information, in the 'learning-competent' state information filtering may be much weaker than after memory formation. This mechanism restricts high information transfer to the 'learning-competent' state. After memory formation, modular boundary-induced segregation and information filtering protect the stored information. The flexible networks of young organisms are generally in a 'learning-competent' state. On the contrary, the locally rigid networks of old organisms have lost their 'learning-competent' state, but store and protect their learnt information efficiently. We anticipate that the above mechanism may operate at the level of both protein-protein interaction and neuronal networks.

  13. Scientific developments of liquid crystal-based optical memory: a review

    NASA Astrophysics Data System (ADS)

    Prakash, Jai; Chandran, Achu; Biradar, Ashok M.

    2017-01-01

    The memory behavior in liquid crystals (LCs), although rarely observed, has made very significant headway over the past three decades since its discovery in nematic-type LCs. It has gone from a mere scientific curiosity to application in a variety of commodities. Memory elements formed by numerous LCs have been protected by patents, and some have been commercialized and used as a complement to non-volatile memory devices and as memory in personal computers and digital cameras. They also offer the low-cost, large-area, high-speed, and high-density memory needed for advanced computers and digital electronics. Short- and long-duration memory behavior for industrial applications has been obtained from several LC materials, and LC memories with interesting features and applications have been demonstrated using numerous LCs. However, considerable challenges still exist in the search for highly efficient, stable, and long-lifespan materials and methods to make the development of useful memory devices possible. This review focuses on the scientific and technological aspects of the fascinating applications of LC-based memory. We address the introduction, development status, novel design and engineering principles, and parameters of LC memory. We also address how the amalgamation of LCs could bring significant change/improvement in memory effects in the emerging field of nanotechnology, and the application of LC memory as the active component of futuristic memory devices.

  14. Scientific developments of liquid crystal-based optical memory: a review.

    PubMed

    Prakash, Jai; Chandran, Achu; Biradar, Ashok M

    2017-01-01

    The memory behavior in liquid crystals (LCs), although rarely observed, has made very significant headway over the past three decades since its discovery in nematic-type LCs. It has gone from a mere scientific curiosity to application in a variety of commodities. Memory elements formed by numerous LCs have been protected by patents, and some have been commercialized and used as a complement to non-volatile memory devices and as memory in personal computers and digital cameras. They also offer the low-cost, large-area, high-speed, and high-density memory needed for advanced computers and digital electronics. Short- and long-duration memory behavior for industrial applications has been obtained from several LC materials, and LC memories with interesting features and applications have been demonstrated using numerous LCs. However, considerable challenges still exist in the search for highly efficient, stable, and long-lifespan materials and methods to make the development of useful memory devices possible. This review focuses on the scientific and technological aspects of the fascinating applications of LC-based memory. We address the introduction, development status, novel design and engineering principles, and parameters of LC memory. We also address how the amalgamation of LCs could bring significant change/improvement in memory effects in the emerging field of nanotechnology, and the application of LC memory as the active component of futuristic memory devices.

  15. Optimizing SIEM Throughput on the Cloud Using Parallelization

    PubMed Central

    Alam, Masoom; Ihsan, Asif; Javaid, Qaisar; Khan, Abid; Manzoor, Jawad; Akhundzada, Adnan; Khan, M Khurram; Farooq, Sajid

    2016-01-01

    Processing large amounts of data in real time to identify security issues poses several performance challenges, especially when hardware infrastructure is limited. Managed Security Service Providers (MSSP), mostly hosting their applications on the Cloud, receive events at a very high rate that varies from a few hundred to a couple of thousand events per second (EPS). It is critical to process this data efficiently, so that attacks can be identified quickly and the necessary response initiated. This paper evaluates the performance of a security framework, OSTROM, built on the Esper complex event processing (CEP) engine under parallel and non-parallel computational frameworks. We explain three architectures under which Esper can be used to process events. We investigated the effect on throughput, memory and CPU usage in each configuration setting. The results indicate that the performance of the engine is limited by the number of incoming events rather than the queries being processed. The architecture in which 1/4th of the total events are submitted to each instance and all the queries are processed by all the units shows the best results in terms of throughput, memory and CPU usage. PMID:27851762
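
    The best-performing layout, events sharded four ways with every worker evaluating all queries, can be sketched generically with a thread pool (a hedged illustration of the partitioning scheme, not Esper or OSTROM code; `run_queries` and `sharded_counts` are hypothetical names):

```python
from concurrent.futures import ThreadPoolExecutor

def run_queries(events, queries):
    """Evaluate every query against one shard of the event stream,
    returning per-query match counts."""
    return [sum(1 for e in events if q(e)) for q in queries]


def sharded_counts(events, queries, n_shards=4):
    """Each worker receives 1/n_shards of the events and runs all
    queries; the partial counts are then merged."""
    shards = [events[i::n_shards] for i in range(n_shards)]
    with ThreadPoolExecutor(max_workers=n_shards) as pool:
        partials = list(pool.map(lambda s: run_queries(s, queries), shards))
    return [sum(col) for col in zip(*partials)]
```

    Because per-query counts are additive across shards, the merged result equals a single-worker pass, while the event load is split evenly, which is the property that lets this layout scale with the incoming event rate.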

  16. Deterministic ripple-spreading model for complex networks.

    PubMed

    Hu, Xiao-Bing; Wang, Ming; Leeson, Mark S; Hines, Evor L; Di Paolo, Ezequiel

    2011-04-01

    This paper proposes a deterministic complex network model, which is inspired by the natural ripple-spreading phenomenon. The motivations and main advantages of the model are the following: (i) The establishment of many real-world networks is a dynamic process, where it is often observed that the influence of a few local events spreads out through nodes, and then largely determines the final network topology. Obviously, this dynamic process involves many spatial and temporal factors. By simulating the natural ripple-spreading process, this paper reports a very natural way to set up a spatial and temporal model for such complex networks. (ii) Existing relevant network models are all stochastic models, i.e., with a given input, they cannot output a unique topology. In contrast, the proposed ripple-spreading model can uniquely determine the final network topology, and at the same time the stochastic feature of complex networks is captured by randomly initializing the ripple-spreading related parameters. (iii) The proposed model can use an easily manageable number of ripple-spreading related parameters to precisely describe a network topology, which is more memory-efficient than a traditional adjacency matrix or similar memory-expensive data structures. (iv) The ripple-spreading model has very good potential for both extensions and applications.

  17. Selecting Learning Tasks: Effects of Adaptation and Shared Control on Learning Efficiency and Task Involvement

    ERIC Educational Resources Information Center

    Corbalan, Gemma; Kester, Liesbeth; van Merrienboer, Jeroen J. G.

    2008-01-01

    Complex skill acquisition by performing authentic learning tasks is constrained by limited working memory capacity [Baddeley, A. D. (1992). Working memory. "Science, 255", 556-559]. To prevent cognitive overload, task difficulty and support of each newly selected learning task can be adapted to the learner's competence level and perceived task…

  18. Inhibition of Different Histone Acetyltransferases (HATs) Uncovers Transcription-Dependent and -Independent Acetylation-Mediated Mechanisms in Memory Formation

    ERIC Educational Resources Information Center

    Merschbaecher, Katja; Hatko, Lucyna; Folz, Jennifer; Mueller, Uli

    2016-01-01

    Acetylation of histones changes the efficiency of the transcription processes and thus contributes to the formation of long-term memory (LTM). In our comparative study, we used two inhibitors to characterize the contribution of different histone acetyl transferases (HATs) to appetitive associative learning in the honeybee. For one we applied…

  19. What Makes a Skilled Writer? Working Memory and Audience Awareness during Text Composition

    ERIC Educational Resources Information Center

    Alamargot, Denis; Caporossi, Gilles; Chesnet, David; Ros, Christine

    2011-01-01

    This study investigated the role of working memory capacity as a factor for individual differences in the ability to compose a text with communicative efficiency based on audience awareness. We analyzed its differential effects on the dynamics of the writing processes, as well as on the content of the finished product. Twenty-five graduate…

  20. Verbal Rehearsal and Short-Term Memory in Reading-disabled Children

    ERIC Educational Resources Information Center

    Torgesen, Joseph; Goldman, Tina

    1977-01-01

    To determine whether the frequently found short-term memory deficits in poor readers reflect a lack of ability or inclination to use efficient task strategies, the performances of second-grade good and poor readers were compared on a task which allowed direct observation of the use of verbal rehearsal as a mnemonic strategy. (Author/JMB)

  1. Low Working Memory Capacity Impedes both Efficiency and Learning of Number Transcoding in Children

    ERIC Educational Resources Information Center

    Camos, Valerie

    2008-01-01

    This study aimed to evaluate the impact of individual differences in working memory capacity on number transcoding. A recently proposed model, ADAPT (a developmental asemantic procedural transcoding model), accounts for the development of number transcoding from verbal form to Arabic form by two mechanisms: the learning of new production rules…

  2. Investigating Sentence Processing and Language Segmentation in Explaining Children's Performance on a Sentence-Span Task

    ERIC Educational Resources Information Center

    Mainela-Arnold, Elina; Misra, Maya; Miller, Carol; Poll, Gerard H.; Park, Ji Sook

    2012-01-01

    Background: Children with poor language abilities tend to perform poorly on verbal working memory tasks. This result has been interpreted as evidence that limitations in working memory capacity may interfere with the development of a mature linguistic system. However, it is possible that language abilities, such as the efficiency of sentence…

  3. A manual for PARTI runtime primitives

    NASA Technical Reports Server (NTRS)

    Berryman, Harry; Saltz, Joel

    1990-01-01

    Primitives are presented that are designed to help users efficiently program irregular problems (e.g., unstructured mesh sweeps, sparse matrix codes, adaptive mesh partial differential equations solvers) on distributed memory machines. These primitives are also designed for use in compilers for distributed memory multiprocessors. Communications patterns are captured at runtime, and the appropriate send and receive messages are automatically generated.
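    The runtime capture of communication patterns described above follows the inspector/executor idea: inspect the indirection data at runtime to derive a communication schedule, then execute the resulting sends and receives. The sketch below illustrates only that two-phase structure; the function names are hypothetical and a dict lookup stands in for PARTI's generated message traffic.

```python
def build_schedule(needed_indices, owner_of):
    """Inspector phase: examine the irregular index set at runtime and group
    the required global indices by the rank that owns them, yielding a
    communication schedule."""
    schedule = {}
    for idx in needed_indices:
        schedule.setdefault(owner_of(idx), []).append(idx)
    return schedule

def execute_gather(schedule, global_data):
    """Executor phase: fetch the scheduled values.  A dict lookup stands in
    for the send/receive message pairs the runtime would generate."""
    return {idx: global_data[idx]
            for indices in schedule.values() for idx in indices}

# Two ranks owning four elements each; this rank needs an irregular set.
owner_of = lambda i: i // 4
global_data = {i: i * 10 for i in range(8)}
schedule = build_schedule([1, 6, 2, 7], owner_of)
gathered = execute_gather(schedule, global_data)
```

    The payoff is that the (expensive) inspector runs once per mesh or sparsity pattern, while the executor is reused on every sweep or solver iteration.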

  4. Age-Related Differences in the Temporal Dynamics of Prospective Memory Retrieval: A Lifespan Approach

    ERIC Educational Resources Information Center

    Mattli, Florentina; Zollig, Jacqueline; West, Robert

    2011-01-01

    The efficiency of prospective memory (PM) typically increases from childhood to young adulthood and then decreases in later adulthood. The current study used event-related brain potentials (ERPs) to examine the development of the neural correlates of processes associated with the detection of a PM cue, switching from the ongoing activity to the…

  5. Searching while loaded: Visual working memory does not interfere with hybrid search efficiency but hybrid search uses working memory capacity.

    PubMed

    Drew, Trafton; Boettcher, Sage E P; Wolfe, Jeremy M

    2016-02-01

    In "hybrid search" tasks, such as finding items on a grocery list, one must search the scene for targets while also searching the list in memory. How is the representation of a visual item compared with the representations of items in the memory set? Predominant theories would propose a role for visual working memory (VWM) either as the site of the comparison or as a conduit between visual and memory systems. In seven experiments, we loaded VWM in different ways and found little or no effect on hybrid search performance. However, the presence of a hybrid search task did reduce the measured capacity of VWM by a constant amount regardless of the size of the memory or visual sets. These data are broadly consistent with an account in which VWM must dedicate a fixed amount of its capacity to passing visual representations to long-term memory for comparison to the items in the memory set. The data cast doubt on models in which the search template resides in VWM or where memory set item representations are moved from LTM through VWM to earlier areas for comparison to visual items.

  6. Multilevel radiative thermal memory realized by the hysteretic metal-insulator transition of vanadium dioxide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ito, Kota, E-mail: kotaito@mosk.tytlabs.co.jp; Nishikawa, Kazutaka; Iizuka, Hideo

    Thermal information processing is attracting much interest as an analog of electronic computing. We experimentally demonstrated a radiative thermal memory utilizing a phase change material. The hysteretic metal-insulator transition of vanadium dioxide (VO{sub 2}) allows us to obtain a multilevel memory. We developed a Preisach model to explain the hysteretic radiative heat transfer between a VO{sub 2} film and a fused quartz substrate. The transient response of our memory predicted by the Preisach model agrees well with the measured response. Our multilevel thermal memory paves the way for thermal information processing as well as contactless thermal management.

  7. In-memory interconnect protocol configuration registers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, Kevin Y.; Roberts, David A.

    Systems, apparatuses, and methods for moving the interconnect protocol configuration registers into the main memory space of a node. The region of memory used for storing the interconnect protocol configuration registers may also be made cacheable to reduce the latency of accesses to the interconnect protocol configuration registers. Interconnect protocol configuration registers which are used during a startup routine may be prefetched into the host's cache to make the startup routine more efficient. The interconnect protocol configuration registers for various interconnect protocols may include one or more of device capability tables, memory-side statistics (e.g., to support two-level memory data mapping decisions), advanced memory and interconnect features such as repair resources and routing tables, prefetching hints, error correcting code (ECC) bits, lists of device capabilities, set and store base address, capability, device ID, status, configuration, capabilities, and other settings.

  8. Facing the future: Memory as an evolved system for planning future acts

    PubMed Central

    Klein, Stanley B.; Robertson, Theresa E.; Delton, Andrew W.

    2013-01-01

    All organisms capable of long-term memory are necessarily oriented toward the future. We propose that one of the most important adaptive functions of long-term episodic memory is to store information about the past in the service of planning for the personal future. Because a system should have especially efficient performance when engaged in a task that makes maximal use of its evolved machinery, we predicted that future-oriented planning would result in especially good memory relative to other memory tasks. We tested recall performance of a word list, using encoding tasks with different temporal perspectives (e.g., past, future) but a similar context. Consistent with our hypothesis, future-oriented encoding produced superior recall. We discuss these findings in light of their implications for the thesis that memory evolved to enable its possessor to anticipate and respond to future contingencies that cannot be known with certainty. PMID:19966234

  9. Memory Detection 2.0: The First Web-Based Memory Detection Test

    PubMed Central

    Kleinberg, Bennett; Verschuere, Bruno

    2015-01-01

    There is accumulating evidence that reaction times (RTs) can be used to detect recognition of critical (e.g., crime) information. A limitation of this research base is its reliance upon small samples (average n = 24) and indications of publication bias. To advance RT-based memory detection, we report upon the development of the first web-based memory detection test. Participants in this research (Study 1: n = 255; Study 2: n = 262) tried to hide 2 high-salient (birthday, country of origin) and 2 low-salient (favourite colour, favourite animal) autobiographical details. RTs allowed detection of concealed autobiographical information and, as predicted, did so more successfully than error rates, and more successfully for high-salient than for low-salient items. While much remains to be learned, memory detection 2.0 seems to offer an interesting new platform for efficiently and validly conducting RT-based memory detection research. PMID:25874966

  10. Nonlinear analysis of an improved continuum model considering headway change with memory

    NASA Astrophysics Data System (ADS)

    Cheng, Rongjun; Wang, Jufeng; Ge, Hongxia; Li, Zhipeng

    2018-01-01

    Considering the effect of headway changes with memory, an improved continuum model of traffic flow is proposed in this paper. By means of linear stability theory, the new model's linear stability under the effect of headway changes with memory is obtained. Through nonlinear analysis, the KdV-Burgers equation is derived to describe the propagating behavior of the traffic density wave near the neutral stability line. Numerical simulation is carried out to study the improved traffic flow model, exploring how headway changes with memory affect each car's velocity, density and energy consumption. Numerical results show that when the effects of headway changes with memory are considered, traffic jams can be suppressed efficiently. Furthermore, the results demonstrate that the effect of headway changes with memory can avoid the disadvantage of historical information, which improves the stability of traffic flow and minimizes car energy consumption.
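    The abstract does not reproduce the derived equation. As a reference point only, the KdV-Burgers equation for a perturbation $u(x,t)$ near the neutral stability line takes the generic form below; the coefficients $\nu$ and $\mu$ are placeholders, not the values derived in the paper, where they would depend on the headway-memory parameters.

```latex
\frac{\partial u}{\partial t}
  + u\,\frac{\partial u}{\partial x}
  - \nu\,\frac{\partial^{2} u}{\partial x^{2}}
  + \mu\,\frac{\partial^{3} u}{\partial x^{3}} = 0
```

    The dissipative (Burgers) term with coefficient $\nu$ damps perturbations, while the dispersive (KdV) term with coefficient $\mu$ shapes the density wave near neutral stability.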

  11. Combating Memory Corruption Attacks On Scada Devices

    NASA Astrophysics Data System (ADS)

    Bellettini, Carlo; Rrushi, Julian

    Memory corruption attacks on SCADA devices can cause significant disruptions to control systems and the industrial processes they operate. However, despite the presence of numerous memory corruption vulnerabilities, few, if any, techniques have been proposed for addressing the vulnerabilities or for combating memory corruption attacks. This paper describes a technique for defending against memory corruption attacks by enforcing logical boundaries between potentially hostile data and safe data in protected processes. The technique encrypts all input data using random keys; the encrypted data is stored in main memory and is decrypted according to the principle of least privilege just before it is processed by the CPU. The defensive technique affects the precision with which attackers can corrupt control data and pure data, protecting against code injection and arc injection attacks, and alleviating problems posed by the incomparability of mitigation techniques. An experimental evaluation involving the popular Modbus protocol demonstrates the feasibility and efficiency of the defensive technique.
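    The core mechanism (encrypt input on arrival with a random key, store only the encrypted form, decrypt just before use) can be sketched minimally as follows. This uses a simple XOR mask purely for illustration; the paper's choice of cipher and its in-memory enforcement are not reproduced here, and all names are illustrative.

```python
import os

def xor_mask(data: bytes, key: bytes) -> bytes:
    """XOR the buffer with a key; applying the same mask twice restores it."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def receive(payload: bytes):
    """Encrypt input on arrival; only the masked form reaches main memory."""
    key = os.urandom(32)                # fresh random key per input buffer
    return xor_mask(payload, key), key

def process(stored: bytes, key: bytes) -> bytes:
    """Decrypt only at the point of use (principle of least privilege)."""
    return xor_mask(stored, key)

stored, key = receive(b"MODBUS request payload")
plaintext = process(stored, key)
```

    Because an attacker cannot predict the per-buffer key, injected bytes decrypt to unpredictable values at the point of use, which is what degrades the precision of control-data and pure-data corruption.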

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Janjusic, Tommy; Kartsaklis, Christos

    Memory scalability is an enduring problem and bottleneck that plagues many parallel codes. Parallel codes designed for high-performance systems are typically designed over the span of several, and in some instances 10+, years. As a result, optimization practices which were appropriate for earlier systems may no longer be valid and thus require careful reconsideration. Specifically, parallel codes whose memory footprint is a function of their scalability must be carefully considered for future exa-scale systems. In this paper we present a methodology and tool to study the memory scalability of parallel codes. Using our methodology we evaluate an application's memory footprint as a function of scalability, which we coined memory efficiency, and describe our results. In particular, using our in-house tools we can pinpoint the specific application components which contribute to the application's overall memory footprint (application data structures, libraries, etc.).
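    Attributing footprint to individual components, as the in-house tools above do for compiled HPC codes, can be approximated in Python with `tracemalloc` by diffing the traced-allocation counter around each component's construction. This is only an analogy to the paper's methodology; the component names below are invented.

```python
import tracemalloc

def footprint_by_component(builders):
    """Attribute allocated bytes to named components by diffing tracemalloc
    counters taken before and after each component builds its data."""
    tracemalloc.start()
    live, usage = [], {}
    for name, build in builders.items():
        before, _ = tracemalloc.get_traced_memory()
        live.append(build())            # keep the structure alive
        after, _ = tracemalloc.get_traced_memory()
        usage[name] = after - before
    tracemalloc.stop()
    return usage

usage = footprint_by_component({
    "mesh": lambda: [0.0] * 100_000,    # dominant data structure
    "halo": lambda: [0.0] * 10_000,     # smaller auxiliary structure
})
```

    Repeating the measurement at several process counts would give each component's footprint as a function of scale, which is the "memory efficiency" curve the paper studies.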

  13. Parallel computing for probabilistic fatigue analysis

    NASA Technical Reports Server (NTRS)

    Sues, Robert H.; Lua, Yuan J.; Smith, Mark D.

    1993-01-01

    This paper presents the results of Phase I research to investigate the most effective parallel processing software strategies and hardware configurations for probabilistic structural analysis. We investigate the efficiency of both shared and distributed-memory architectures via a probabilistic fatigue life analysis problem. We also present a parallel programming approach, the virtual shared-memory paradigm, that is applicable across both types of hardware. Using this approach, problems can be solved on a variety of parallel configurations, including networks of single or multiprocessor workstations. We conclude that it is possible to effectively parallelize probabilistic fatigue analysis codes; however, special strategies will be needed to achieve large-scale parallelism to keep large number of processors busy and to treat problems with the large memory requirements encountered in practice. We also conclude that distributed-memory architecture is preferable to shared-memory for achieving large scale parallelism; however, in the future, the currently emerging hybrid-memory architectures will likely be optimal.

  14. Optical interconnection network for parallel access to multi-rank memory in future computing systems.

    PubMed

    Wang, Kang; Gu, Huaxi; Yang, Yintang; Wang, Kun

    2015-08-10

    With the number of cores increasing, there is an emerging need for a high-bandwidth low-latency interconnection network, serving core-to-memory communication. In this paper, aiming at the goal of simultaneous access to multi-rank memory, we propose an optical interconnection network for core-to-memory communication. In the proposed network, the wavelength usage is delicately arranged so that cores can communicate with different ranks at the same time and broadcast for flow control can be achieved. A distributed memory controller architecture that works in a pipeline mode is also designed for efficient optical communication and transaction address processes. The scaling method and wavelength assignment for the proposed network are investigated. Compared with traditional electronic bus-based core-to-memory communication, the simulation results based on the PARSEC benchmark show that the bandwidth enhancement and latency reduction are apparent.

  15. Temporal context memory in high-functioning autism.

    PubMed

    Gras-Vincendon, Agnès; Mottron, Laurent; Salamé, Pierre; Bursztejn, Claude; Danion, Jean-Marie

    2007-11-01

    Episodic memory, i.e. memory for specific episodes situated in space and time, seems impaired in individuals with autism. According to weak central coherence theory, individuals with autism have general difficulty connecting contextual and item information, which then impairs their capacity to memorize information in context. This study investigated temporal context memory for visual information in individuals with autism. Eighteen adolescents and adults with high-functioning autism (HFA) or Asperger syndrome (AS) and age- and IQ-matched typically developing participants were tested using a recency judgement task. The performance of the autistic group did not differ from that of the control group, nor did performance differ between the AS and HFA groups. We conclude that autism in high-functioning individuals does not impair temporal context memory as assessed on this task. We suggest that individuals with autism are as efficient on this task as typically developing subjects because contextual memory performance here involves more automatic than organizational processing.

  16. The effect of strategic memory training in older adults: who benefits most?

    PubMed

    Rosi, Alessia; Del Signore, Federica; Canelli, Elisa; Allegri, Nicola; Bottiroli, Sara; Vecchi, Tomaso; Cavallini, Elena

    2017-12-07

    Previous research has suggested that there is a degree of variability among older adults' responses to memory training, such that some individuals benefit more than others. The aim of the present study was to identify the profile of older adults who were likely to benefit most from a strategic memory training program that has previously proved effective in improving memory in healthy older adults. In total, 44 older adults (60-83 years) participated in a strategic memory training. We examined memory training benefits by measuring changes in practiced (word list learning) and non-practiced memory tasks (grocery list and associative learning). In addition, a battery of cognitive measures was administered in order to assess crystallized and fluid abilities, short-term memory, working memory, and processing speed. Results confirmed the efficacy of the training in improving performance in both practiced and non-practiced memory tasks. For the practiced memory tasks, results showed that memory baseline performance and crystallized ability predicted training gains. For the non-practiced memory tasks, analyses showed that memory baseline performance was a significant predictor of gain in the grocery list learning task. For the associative learning task, the significant predictors were memory baseline performance, processing speed, and, marginally, age. Our results indicate that older adults with a higher baseline memory capacity and with more efficient cognitive resources were those who tended to benefit most from the training. The present study provides new avenues for designing personalized interventions according to older adults' cognitive profiles.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pin, F.G.; Bender, S.R.

    Most fuzzy logic-based reasoning schemes developed for robot control are fully reactive, i.e., the reasoning modules consist of fuzzy rule bases that represent direct mappings from the stimuli provided by the perception systems to the responses implemented by the motion controllers. Due to their totally reactive nature, such reasoning systems can encounter problems such as infinite loops and limit cycles. In this paper, we propose an approach to remedy these problems by adding a memory and memory-related behaviors to basic reactive systems. Three major types of memory behaviors are addressed: memory creation, memory management, and memory utilization. These are first presented, and examples of their implementation for the recognition of limit cycles during the navigation of an autonomous robot in a priori unknown environments are then discussed.

  18. More Efficient e-Learning through Design: Color of Text and Background

    ERIC Educational Resources Information Center

    Zufic, Janko; Kalpic, Damir

    2009-01-01

    Background: The area of research aimed for a more efficient e-learning is slowly widening from purely technical to the areas of psychology, didactics and methodology. The question is whether the text or background color influence the efficiency of memory, i.e. learning. If the answer to that question is positive, then another question arises which…

  19. Hippocampal gamma-band Synchrony and pupillary responses index memory during visual search.

    PubMed

    Montefusco-Siegmund, Rodrigo; Leonard, Timothy K; Hoffman, Kari L

    2017-04-01

    Memory for scenes is supported by the hippocampus, among other interconnected structures, but the neural mechanisms related to this process are not well understood. To assess the role of the hippocampus in memory-guided scene search, we recorded local field potentials and multiunit activity from the hippocampus of macaques as they performed goal-directed search tasks using natural scenes. We additionally measured pupil size during scene presentation, which in humans is modulated by recognition memory. We found that both pupil dilation and search efficiency accompanied scene repetition, thereby indicating memory for scenes. Neural correlates included a brief increase in hippocampal multiunit activity and a sustained synchronization of unit activity to gamma band oscillations (50-70 Hz). The repetition effects on hippocampal gamma synchronization occurred when pupils were most dilated, suggesting an interaction between aroused, attentive processing and hippocampal correlates of recognition memory. These results suggest that the hippocampus may support memory-guided visual search through enhanced local gamma synchrony. © 2016 Wiley Periodicals, Inc.

  20. In search of memory tests equivalent for experiments on animals and humans.

    PubMed

    Brodziak, Andrzej; Kołat, Estera; Różyk-Myrta, Alicja

    2014-12-19

    Older people often exhibit memory impairments. Contemporary demographic trends are causing aging of the society. In this situation, it is important to conduct clinical trials of drugs and use training methods to improve memory capacity. Development of new memory tests requires experiments on animals and then clinical trials in humans. Therefore, we decided to review the assessment methods and search for tests that evaluate analogous cognitive processes in animals and humans. This review has enabled us to propose 2 pairs of tests of the efficiency of working memory capacity in animals and humans. We propose a basic set of methods for complex clinical trials of drugs and training methods to improve memory, consisting of 2 pairs of tests: 1) the Novel Object Recognition Test - Sternberg Item Recognition Test and 2) the Object-Location Test - Visuospatial Memory Test. We postulate that further investigation of methods that are equivalent in animal experiments and observations performed on humans is necessary.
