Sample records for memory management techniques

  1. Extended memory management under RTOS

    NASA Technical Reports Server (NTRS)

    Plummer, M.

    1981-01-01

    A technique for extended memory management in ROLM 1666 computers using FORTRAN is presented. A general software system is described for which the technique can be ideally applied. The memory manager interface with the system is described. The protocols by which the manager is invoked are presented, as well as the methods used by the manager.

  2. Cognitive Rehabilitation of Episodic Memory Disorders: From Theory to Practice

    PubMed Central

    Ptak, Radek; Van der Linden, Martial; Schnider, Armin

    2010-01-01

    Memory disorders are among the most frequent and most debilitating cognitive impairments following acquired brain damage. Cognitive remediation strategies attempt to restore lost memory capacity, provide compensatory techniques or teach the use of external memory aids. Memory rehabilitation has strongly been influenced by memory theory, and the interaction between both has stimulated the development of techniques such as spaced retrieval, vanishing cues or errorless learning. These techniques partly rely on implicit memory and therefore enable even patients with dense amnesia to acquire new information. However, knowledge acquired in this way is often strongly domain-specific and inflexible. In addition, individual patients with amnesia respond differently to distinct interventions. The factors underlying these differences have not yet been identified. Behavioral management of memory failures therefore often relies on a careful description of environmental factors and measurement of associated behavioral disorders such as unawareness of memory failures. The current evidence suggests that patients with less severe disorders benefit from self-management techniques and mnemonics whereas rehabilitation of severely amnesic patients should focus on behavior management, the transmission of domain-specific knowledge through implicit memory processes and the compensation for memory deficits with memory aids. PMID:20700383

  3. Havens: Explicit Reliable Memory Regions for HPC Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hukerikar, Saurabh; Engelmann, Christian

    2016-01-01

    Supporting error resilience in future exascale-class supercomputing systems is a critical challenge. Due to transistor scaling trends and increasing memory density, scientific simulations are expected to experience more interruptions caused by transient errors in the system memory. Existing hardware-based detection and recovery techniques will be inadequate to manage the presence of high memory fault rates. In this paper we propose a partial memory protection scheme based on region-based memory management. We define the concept of regions called havens that provide fault protection for program objects. We provide reliability for the regions through a software-based parity protection mechanism. Our approach enables critical program objects to be placed in these havens. The fault coverage provided by our approach is application agnostic, unlike algorithm-based fault tolerance techniques.
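
The software parity protection described in this abstract can be illustrated with a toy sketch. The `Haven` class, its `put`/`recover` names, and the block layout below are illustrative assumptions, not the paper's actual design; the sketch only shows the core idea of keeping a running XOR parity over a region so a single lost or corrupted block can be reconstructed from the survivors (RAID-5-style single-erasure recovery).

```python
# Toy parity-protected memory region ("haven") sketch. Illustrative only:
# class and method names are invented, not the paper's API.

class Haven:
    def __init__(self, block_size=64):
        self.block_size = block_size
        self.blocks = []                      # stored object payloads
        self.parity = bytearray(block_size)   # running XOR parity of all blocks

    def put(self, data: bytes) -> int:
        block = bytearray(data.ljust(self.block_size, b"\0"))
        for i in range(self.block_size):      # fold the new block into parity
            self.parity[i] ^= block[i]
        self.blocks.append(block)
        return len(self.blocks) - 1           # handle for later recovery

    def recover(self, lost_index: int) -> bytes:
        # XOR of the parity with all surviving (intact) blocks reconstructs
        # the lost block; this tolerates one erasure per region.
        out = bytearray(self.parity)
        for j, block in enumerate(self.blocks):
            if j != lost_index:
                for i in range(self.block_size):
                    out[i] ^= block[i]
        return bytes(out)

h = Haven()
idx = h.put(b"critical state")
h.put(b"other object")
h.blocks[idx] = bytearray(h.block_size)       # simulate a corrupted block
restored = h.recover(idx)                     # rebuilt from parity + survivors
```

A real implementation would also need fault detection (e.g. a checksum per block) to know which block to treat as lost.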

  4. Healthcare knowledge management through building and operationalising healthcare enterprise memory.

    PubMed

    Cheah, Y N; Abidi, S S

    1999-01-01

    In this paper we suggest that the healthcare enterprise needs to be more conscious of its vast knowledge resources vis-à-vis the exploitation of knowledge management techniques to efficiently manage its knowledge. The development of healthcare enterprise memory is suggested as a solution, together with a novel approach advocating the operationalisation of healthcare enterprise memories leading to the modelling of healthcare processes for strategic planning. As an example, we present a simulation of Service Delivery Time in a hospital's OPD.

  5. C-MOS array design techniques: SUMC multiprocessor system study

    NASA Technical Reports Server (NTRS)

    Clapp, W. A.; Helbig, W. A.; Merriam, A. S.

    1972-01-01

    The current capabilities of LSI techniques for speed and reliability, plus the possibilities of assembling large configurations of LSI logic and storage elements, have demanded the study of multiprocessors and multiprocessing techniques, problems, and potentialities. Evaluated are three previous systems studies for a space ultrareliable modular computer multiprocessing system, and a new multiprocessing system is proposed that is flexibly configured with up to four central processors, four I/O processors, and 16 main memory units, plus auxiliary memory and peripheral devices. This multiprocessor system features a multilevel interrupt, qualified S/360 compatibility for ground-based generation of programs, virtual memory management of a storage hierarchy through I/O processors, and multiport access to multiple and shared memory units.

  6. Memory management and compiler support for rapid recovery from failures in computer systems

    NASA Technical Reports Server (NTRS)

    Fuchs, W. K.

    1991-01-01

    This paper describes recent developments in the use of memory management and compiler technology to support rapid recovery from failures in computer systems. The techniques described include cache coherence protocols for user transparent checkpointing in multiprocessor systems, compiler-based checkpoint placement, compiler-based code modification for multiple instruction retry, and forward recovery in distributed systems utilizing optimistic execution.

  7. Memory Compression Techniques for Network Address Management in MPI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Yanfei; Archer, Charles J.; Blocksome, Michael

    MPI allows applications to treat processes as a logical collection of integer ranks for each MPI communicator, while internally translating these logical ranks into actual network addresses. In current MPI implementations the management and lookup of such network addresses use memory sizes that are proportional to the number of processes in each communicator. In this paper, we propose a new mechanism, called AV-Rankmap, for managing such translation. AV-Rankmap takes advantage of logical patterns in rank-address mapping that most applications naturally tend to have, and it exploits the fact that some parts of network address structures are naturally more performance critical than others. It uses this information to compress the memory used for network address management. We demonstrate that AV-Rankmap can achieve performance similar to or better than that of other MPI implementations while using significantly less memory.
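
The compression idea behind this record can be sketched simply: when ranks map to addresses in a regular pattern, the per-rank lookup table collapses to a few parameters. The functions below are an illustrative simplification (an affine-pattern detector), not AV-Rankmap's actual data structure.

```python
# Sketch of rank-to-address map compression: store (base, stride) instead of
# a full table when the mapping is affine. Names are illustrative.

def compress_rank_map(addresses):
    """Return ('affine', base, stride) if the mapping is regular,
    otherwise ('table', addresses) as the uncompressed fallback."""
    if len(addresses) >= 2:
        stride = addresses[1] - addresses[0]
        if all(addresses[r] == addresses[0] + r * stride
               for r in range(len(addresses))):
            return ("affine", addresses[0], stride)
    return ("table", list(addresses))

def lookup(compressed, rank):
    if compressed[0] == "affine":
        _, base, stride = compressed
        return base + rank * stride        # O(1) space, no table needed
    return compressed[1][rank]             # irregular mapping: keep the table

# A communicator whose ranks land at a fixed address stride compresses to
# three integers regardless of the number of processes.
addrs = [0x1000 + 0x40 * r for r in range(1024)]
cmap = compress_rank_map(addrs)
```

A production design would recognize richer patterns (blocked, cyclic, per-node offsets) and fall back to tables only for the irregular remainder.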

  8. Survey State of the Art: Electrical Load Management Techniques and Equipment.

    DTIC Science & Technology

    1986-10-31

    automobiles and even appliances. Applications in the area of demand and energy management have been multifaceted, given the needs involved and rapid paybacks...copy of the programming to be reloaded into the controller at any time and by designing this module with erasable and reprogrammable memory, the...points and performs DDC (direct digital control) of output points; programming is stored in reprogrammable, permanent memory. A RIM may accommodate up

  9. Memory Forensics: Review of Acquisition and Analysis Techniques

    DTIC Science & Technology

    2013-11-01

    Management Overview Processes running on modern multitasking operating systems operate on an abstraction of RAM, called virtual memory [7]. In these systems...information such as user names, email addresses and passwords [7]. Analysts also use tools such as WinHex to identify headers or other suspicious data within

  10. Scalable Motion Estimation Processor Core for Multimedia System-on-Chip Applications

    NASA Astrophysics Data System (ADS)

    Lai, Yeong-Kang; Hsieh, Tian-En; Chen, Lien-Fei

    2007-04-01

    In this paper, we describe a high-throughput and scalable motion estimation processor architecture for multimedia system-on-chip applications. The number of processing elements (PEs) is scalable according to the variable algorithm parameters and the performance required for different applications. Using the PE rings efficiently and an intelligent memory-interleaving organization, the efficiency of the architecture can be increased. Moreover, using efficient on-chip memories and a data management technique can effectively decrease the power consumption and memory bandwidth. Techniques for reducing the number of interconnections and external memory accesses are also presented. Our results demonstrate that the proposed scalable PE-ringed architecture is a flexible and high-performance processor core in multimedia system-on-chip applications.

  11. Temperature and leakage aware techniques to improve cache reliability

    NASA Astrophysics Data System (ADS)

    Akaaboune, Adil

    Decreasing power consumption in small devices such as handhelds, cell phones and high-performance processors is now one of the most critical design concerns. On-chip cache memories dominate the chip area in microprocessors, and thus arises the need for power-efficient cache memories. Cache is the simplest cost-effective method to attain a high-speed memory hierarchy, and its performance is extremely critical for high-speed computers. The cache is used by the microprocessor to bridge the performance gap between the processor and main memory (RAM); hence memory bandwidth is frequently a bottleneck that can affect peak throughput significantly. In the design of any cache system, the tradeoffs of area/cost, performance, power consumption, and thermal management must be taken into consideration. Previous work has mainly concentrated on performance and area/cost constraints. More recent work has focused on low-power design, especially for portable devices and media-processing systems; however, less research has been done on the relationship between heat management, leakage power, and cost per die. Lately, the focus of power dissipation in the new generations of microprocessors has shifted from dynamic power to idle power, a previously underestimated form of power loss that drains battery charge and causes early shutdown due to the waste of energy. The problem has been aggravated by aggressive process scaling, a device-level method originally used by designers to enhance performance, conserve power, and reduce the size of increasingly dense digital circuits. This dissertation studies the impact of hotspots in the cache memory on leakage consumption and microprocessor reliability and durability. The work will first prove that by eliminating hotspots in the cache memory, leakage power will be reduced and therefore the reliability will be improved.
The second technique studied is data quality management that improves the quality of the data stored in the cache to reduce power consumption. The initial work done on this subject focuses on the type of data that increases leakage consumption and ways to manage without impacting the performance of the microprocessor. The second phase of the project focuses on managing the data storage in different blocks of the cache to smooth the leakage power as well as dynamic power consumption. The last technique is a voltage controlled cache to reduce the leakage consumption of the cache while in execution and even in idle state. Two blocks of the 4-way set associative cache go through a voltage regulator before getting to the voltage well, and the other two are directly connected to the voltage well. The idea behind this technique is to use the replacement algorithm information to increase or decrease voltage of the two blocks depending on the need of the information stored on them.

  12. Energy-aware Thread and Data Management in Heterogeneous Multi-core, Multi-memory Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Su, Chun-Yi

    By 2004, microprocessor design focused on multicore scaling—increasing the number of cores per die in each generation—as the primary strategy for improving performance. These multicore processors are typically equipped with multiple memory subsystems to improve data throughput. In addition, these systems employ heterogeneous processors such as GPUs and heterogeneous memories like non-volatile memory to improve performance, capacity, and energy efficiency. With the increasing volume of hardware resources and system complexity caused by heterogeneity, future systems will require intelligent ways to manage hardware resources. Early research to improve performance and energy efficiency on heterogeneous, multi-core, multi-memory systems focused on tuning a single primitive or at best a few primitives in the systems. The key limitation of past efforts is their lack of a holistic approach to resource management that balances the tradeoff between performance and energy consumption. In addition, the shift from simple, homogeneous systems to these heterogeneous, multicore, multi-memory systems requires in-depth understanding of efficient resource management for scalable execution, including new models that capture the interchange between performance and energy, smarter resource management strategies, and novel low-level performance/energy tuning primitives and runtime systems. Tuning an application to control available resources efficiently has become a daunting challenge; managing resources in automation is still a dark art since the tradeoffs among programming, energy, and performance remain insufficiently understood. In this dissertation, I have developed theories, models, and resource management techniques to enable energy-efficient execution of parallel applications through thread and data management in these heterogeneous multi-core, multi-memory systems.
I study the effect of dynamic concurrent throttling on the performance and energy of multi-core, non-uniform memory access (NUMA) systems. I use critical path analysis to quantify memory contention in the NUMA memory system and determine thread mappings. In addition, I implement a runtime system that combines concurrent throttling and a novel thread mapping algorithm to manage thread resources and improve energy efficient execution in multi-core, NUMA systems.

  13. Next Generation Mass Memory Architecture

    NASA Astrophysics Data System (ADS)

    Herpel, H.-J.; Stahle, M.; Lonsdorfer, U.; Binzer, N.

    2010-08-01

    Future mass memory units will have to cope with various demanding requirements driven by onboard instruments (optical and SAR) that generate a huge amount of data (>10 Tbit) at data rates >6 Gbps. For the downlink, data rates around 3 Gbps will be feasible using the latest Ka-band technology together with Variable Coding and Modulation (VCM) techniques. These high data rates and storage capacities need to be managed effectively. Therefore, data structures and data management functions have to be improved and adapted to existing standards like the Packet Utilisation Standard (PUS). In this paper we present a highly modular and scalable architectural approach for mass memories in order to support a wide range of mission requirements.

  14. Operating Spin Echo in the Quantum Regime for an Atomic-Ensemble Quantum Memory

    NASA Astrophysics Data System (ADS)

    Rui, Jun; Jiang, Yan; Yang, Sheng-Jun; Zhao, Bo; Bao, Xiao-Hui; Pan, Jian-Wei

    2015-09-01

    Spin echo is a powerful technique to extend atomic or nuclear coherence times by overcoming the dephasing due to inhomogeneous broadenings. However, there are disputes about the feasibility of applying this technique to an ensemble-based quantum memory at the single-quanta level. In this experimental study, we find that noise due to imperfections of the rephasing pulses has both intense superradiant and weak isotropic parts. By properly arranging the beam directions and optimizing the pulse fidelities, we successfully manage to operate the spin echo technique in the quantum regime by observing nonclassical photon-photon correlations as well as the quantum behavior of retrieved photons. Our work for the first time demonstrates the feasibility of harnessing the spin echo method to extend the lifetime of ensemble-based quantum memories at the single-quanta level.

  15. Influence of personality and neuropsychological ability on social functioning and self-management in bipolar disorder.

    PubMed

    Vierck, Esther; Joyce, Peter R

    2015-10-30

    A majority of bipolar patients (BD) show functional difficulties even in remission. In recent years cognitive functions and personality characteristics have been associated with occupational and psychosocial outcomes, but findings are not consistent. We assessed personality and cognitive functioning through a range of tests in BD and control participants. Three cognitive domains (verbal memory, facial-executive, and spatial memory) were extracted by principal component analysis. These factors and selected personality dimensions were included in hierarchical regression analysis to predict psychosocial functioning and the use of self-management strategies while controlling for mood status. The best determinants of good psychosocial functioning were good verbal memory and high self-directedness. The use of self-management techniques was associated with a low level of harm-avoidance. Our findings indicate that strategies to improve memory and self-directedness may be useful for increasing functioning in individuals with bipolar disorder. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  16. Memory Management of Multimedia Services in Smart Homes

    NASA Astrophysics Data System (ADS)

    Kamel, Ibrahim; Muhaureq, Sanaa A.

    Nowadays there is a wide spectrum of applications that run in smart home environments. Consequently, the home gateway, which is a central component in the smart home, must manage many applications despite limited memory resources. OSGi is a middleware standard for home gateways. OSGi models services as dependent components. Moreover, these applications might differ in their importance. Services collaborate and complement each other to achieve the required results. This paper addresses the following problem: given a home gateway that hosts several applications with different priorities and arbitrary dependencies among them, when the gateway runs out of memory, which application or service should be stopped or evicted from memory to start a new service? Note that stopping a given service means that all the services that depend on it will be stopped too. Because of these service dependencies, traditional memory management techniques from the operating-system literature might not be efficient. Our goal is to stop the least important and the least number of services. The paper presents a novel algorithm for home gateway memory management. The proposed algorithm takes into consideration the priority of the application and the dependencies between different services, in addition to the amount of memory occupied by each service. We implemented the proposed algorithm and performed many experiments to evaluate its performance and execution time. The proposed algorithm is implemented as a part of the OSGi framework (Open Service Gateway initiative). We used best fit and worst fit as yardsticks to show the effectiveness of the proposed algorithm.
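
The eviction problem described in this record can be sketched as a small greedy heuristic: stopping a service transitively stops its dependents, so each candidate is scored by the total priority and count of services its closure would take down. The data layout and the greedy choice below are illustrative assumptions, not the paper's actual algorithm.

```python
# Dependency-aware service eviction sketch for a memory-constrained gateway.
# Illustrative only: service names, fields, and the greedy scoring are invented.

def dependents_closure(svc, services):
    """Set of services that must stop if `svc` stops (including itself)."""
    closure, frontier = {svc}, [svc]
    while frontier:
        s = frontier.pop()
        for name, info in services.items():
            if s in info["deps"] and name not in closure:
                closure.add(name)
                frontier.append(name)
    return closure

def choose_eviction(services, needed):
    """Pick the stop-set that frees >= `needed` memory while losing the
    least total priority, breaking ties by stopping fewer services."""
    best = None
    for svc in services:
        closure = dependents_closure(svc, services)
        freed = sum(services[s]["mem"] for s in closure)
        if freed < needed:
            continue
        cost = (sum(services[s]["prio"] for s in closure), len(closure))
        if best is None or cost < best[0]:
            best = (cost, closure)
    return best[1] if best else None

services = {
    "log":   {"mem": 10, "prio": 1, "deps": []},
    "media": {"mem": 40, "prio": 5, "deps": ["log"]},
    "ui":    {"mem": 20, "prio": 3, "deps": ["media"]},
    "hvac":  {"mem": 30, "prio": 9, "deps": []},
}
victims = choose_eviction(services, needed=50)   # stops media and its dependent ui
```

Note the dependency effect: stopping "log" would free the most memory but drags down three services, so the heuristic prefers the cheaper "media" closure.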

  17. Chip architecture - A revolution brewing

    NASA Astrophysics Data System (ADS)

    Guterl, F.

    1983-07-01

    Techniques being explored by microchip designers and manufacturers to both speed up memory access and instruction execution while protecting memory are discussed. Attention is given to hardwiring control logic, pipelining for parallel processing, devising orthogonal instruction sets for interchangeable instruction fields, and the development of hardware for implementation of virtual memory and multiuser systems to provide memory management and protection. The inclusion of microcode in mainframes eliminated logic circuits that control timing and gating of the CPU. However, improvements in memory architecture have reduced access time to below that needed for instruction execution. Hardwiring the functions as a virtual memory enhances memory protection. Parallelism involves a redundant architecture, which allows identical operations to be performed simultaneously, and can be directed with microcode to avoid aborting intermediate instructions once one set of instructions has been completed.

  18. Nondirective meditation activates default mode network and areas associated with memory retrieval and emotional processing.

    PubMed

    Xu, Jian; Vik, Alexandra; Groote, Inge R; Lagopoulos, Jim; Holen, Are; Ellingsen, Oyvind; Håberg, Asta K; Davanger, Svend

    2014-01-01

    Nondirective meditation techniques are practiced with a relaxed focus of attention that permits spontaneously occurring thoughts, images, sensations, memories, and emotions to emerge and pass freely, without any expectation that mind wandering should abate. These techniques are thought to facilitate mental processing of emotional experiences, thereby contributing to wellness and stress management. The present study assessed brain activity by functional magnetic resonance imaging (fMRI) in 14 experienced practitioners of Acem meditation in two experimental conditions. In the first, nondirective meditation was compared to rest. Significantly increased activity was detected in areas associated with attention, mind wandering, retrieval of episodic memories, and emotional processing. In the second condition, participants carried out concentrative practicing of the same meditation technique, actively trying to avoid mind wandering. The contrast nondirective meditation > concentrative practicing was characterized by higher activity in the right medial temporal lobe (parahippocampal gyrus and amygdala). In conclusion, the present results support the notion that nondirective meditation, which permits mind wandering, involves more extensive activation of brain areas associated with episodic memories and emotional processing, than during concentrative practicing or regular rest.

  19. Nondirective meditation activates default mode network and areas associated with memory retrieval and emotional processing

    PubMed Central

    Xu, Jian; Vik, Alexandra; Groote, Inge R.; Lagopoulos, Jim; Holen, Are; Ellingsen, Øyvind; Håberg, Asta K.; Davanger, Svend

    2014-01-01

    Nondirective meditation techniques are practiced with a relaxed focus of attention that permits spontaneously occurring thoughts, images, sensations, memories, and emotions to emerge and pass freely, without any expectation that mind wandering should abate. These techniques are thought to facilitate mental processing of emotional experiences, thereby contributing to wellness and stress management. The present study assessed brain activity by functional magnetic resonance imaging (fMRI) in 14 experienced practitioners of Acem meditation in two experimental conditions. In the first, nondirective meditation was compared to rest. Significantly increased activity was detected in areas associated with attention, mind wandering, retrieval of episodic memories, and emotional processing. In the second condition, participants carried out concentrative practicing of the same meditation technique, actively trying to avoid mind wandering. The contrast nondirective meditation > concentrative practicing was characterized by higher activity in the right medial temporal lobe (parahippocampal gyrus and amygdala). In conclusion, the present results support the notion that nondirective meditation, which permits mind wandering, involves more extensive activation of brain areas associated with episodic memories and emotional processing, than during concentrative practicing or regular rest. PMID:24616684

  20. Overcoming learning barriers through knowledge management.

    PubMed

    Dror, Itiel E; Makany, Tamas; Kemp, Jonathan

    2011-02-01

    The ability to learn highly depends on how knowledge is managed. Specifically, different techniques for note-taking utilize different cognitive processes and strategies. In this paper, we compared dyslexic and control participants when using linear and non-linear note-taking. All our participants were professionals working in the banking and financial sector. We examined comprehension, accuracy, mental imagery & complexity, metacognition, and memory. We found that participants with dyslexia, when using a non-linear note-taking technique outperformed the control group using linear note-taking and matched the performance of the control group using non-linear note-taking. These findings emphasize how different knowledge management techniques can avoid some of the barriers to learners. Copyright © 2010 John Wiley & Sons, Ltd.

  1. The impact of corporate memory loss: What happens when a senior executive leaves?

    PubMed

    Lahaie, Denis

    2005-01-01

    The author is a nursing management practitioner, whose purpose in writing this paper is twofold: to examine the impact of corporate memory loss on a health care institution, caused by increasing retirement rates of senior executives; and to use this research as an opportunity for action learning where both the author and the institution can benefit from the learning outcomes. Using qualitative research methods based on ethnographic interviewing techniques and grounded theory, the author interviews 12 senior executives from four diverse health care facilities. The purpose is to determine the point at which corporate memory loss, in the form of tacit knowledge in the heads of departing executives, becomes a problem for the institution. The research determined that the requisite managerial competencies normally assumed for senior management positions are insufficient to minimize the negative impacts of corporate memory loss caused by departing senior executives. Effective knowledge management and knowledge transfer within the organization are fundamental for ongoing organizational effectiveness. The research is limited to 12 senior executives. The grounded theory nature of the research provides a framework for more research in other institutions to test and further explore some of the findings. One of the most significant threats facing the majority of health care organizations related to the aging workforce is the greater number of staff who are retiring from all levels within the organization. The development of techniques to reduce the impact of corporate memory loss on the culture of an organization will increase its effectiveness, help build continuity, and provide a more secure footing for the workforce of the future. The exit of knowledge workers is causing a major problem for Canada's health care organizations.
This study throws more light on to this problem from the point of view of senior executives who have been specifically impacted by the problem of corporate memory loss.

  2. Virtual memory support for distributed computing environments using a shared data object model

    NASA Astrophysics Data System (ADS)

    Huang, F.; Bacon, J.; Mapp, G.

    1995-12-01

    Conventional storage management systems provide one interface for accessing memory segments and another for accessing secondary storage objects. This hinders application programming and affects overall system performance due to mandatory data copying and user/kernel boundary crossings, which in the microkernel case may involve context switches. Memory-mapping techniques may be used to provide programmers with a unified view of the storage system. This paper extends such techniques to support a shared data object model for distributed computing environments in which good support for coherence and synchronization is essential. The approach is based on a microkernel, typed memory objects, and integrated coherence control. A microkernel architecture is used to support multiple coherence protocols and the addition of new protocols. Memory objects are typed, and applications can choose the most suitable protocols for different types of object to avoid protocol mismatch. Low-level coherence control is integrated with high-level concurrency control so that the number of messages required to maintain memory coherence is reduced and system-wide synchronization is realized without severely impacting system performance. Together, these features constitute a novel approach to the support of flexible coherence under application control.
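
The unified-view idea at the start of this abstract is the familiar memory-mapping pattern: the same bytes are reachable both as an in-memory buffer and as a secondary-storage object, with no explicit read/write calls at the application level. A minimal sketch using Python's `mmap` (file names and sizes are arbitrary):

```python
# Memory-mapping sketch: one set of bytes, two views (memory and file).
import mmap
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "store.bin")
with open(path, "wb") as f:
    f.write(b"\0" * 4096)                 # back the mapping with a file

with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 4096) as mem:
        mem[0:5] = b"hello"               # a plain in-memory write...
        mem.flush()                       # ...persisted to the storage object

with open(path, "rb") as f:               # the write is visible as file content
    data = f.read(5)
```

The paper's contribution layers typed objects and coherence protocols on top of this basic mechanism for the distributed case, which `mmap` alone does not provide.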

  3. Combining Distributed and Shared Memory Models: Approach and Evolution of the Global Arrays Toolkit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nieplocha, Jarek; Harrison, Robert J.; Kumar, Mukul

    2002-07-29

    Both shared memory and distributed memory models have advantages and shortcomings. The shared memory model is much easier to use but it ignores data locality/placement. Given the hierarchical nature of the memory subsystems in modern computers, this characteristic might have a negative impact on performance and scalability. Various techniques, such as code restructuring to increase data reuse and introducing blocking in data accesses, can address the problem and yield performance competitive with message passing [Singh], however at the cost of compromising the ease-of-use feature. Distributed memory models such as message passing or one-sided communication offer performance and scalability but they compromise ease of use. In this context, the message-passing model is sometimes referred to as "assembly programming for scientific computing". The Global Arrays toolkit [GA1, GA2] attempts to offer the best features of both models. It implements a shared-memory programming model in which data locality is managed explicitly by the programmer. This management is achieved by explicit calls to functions that transfer data between a global address space (a distributed array) and local storage. In this respect, the GA model has similarities to the distributed shared-memory models that provide an explicit acquire/release protocol. However, the GA model acknowledges that remote data is slower to access than local data and allows data locality to be explicitly specified and hence managed. The GA model exposes to the programmer the hierarchical memory of modern high-performance computer systems, and by recognizing the communication overhead for remote data transfer, it promotes data reuse and locality of reference. This paper describes the characteristics of the Global Arrays programming model, capabilities of the toolkit, and discusses its evolution.
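
The get/compute/put discipline the abstract describes can be sketched with a toy stand-in: a logically shared array partitioned into per-process chunks, with explicit transfers between global and local storage. The `GlobalArray` class below is an invented illustration, not the real GA API.

```python
# Toy sketch of the Global Arrays programming pattern. Illustrative only:
# the class and methods are stand-ins for the toolkit's get/put operations.

class GlobalArray:
    """Logically shared 1-D array partitioned across 'processes' (chunks)."""
    def __init__(self, n, nprocs):
        self.chunk_len = n // nprocs
        self.chunks = [[0.0] * self.chunk_len for _ in range(nprocs)]

    def get(self, lo, hi):
        """Copy global range [lo, hi) into local storage (explicit transfer)."""
        return [self.chunks[i // self.chunk_len][i % self.chunk_len]
                for i in range(lo, hi)]

    def put(self, lo, local):
        """Write a local buffer back into the global address space."""
        for off, v in enumerate(local):
            i = lo + off
            self.chunks[i // self.chunk_len][i % self.chunk_len] = v

ga = GlobalArray(n=8, nprocs=2)
buf = ga.get(2, 6)              # global -> local (may cross chunk boundaries)
buf = [x + 1.0 for x in buf]    # compute on local data (fast, local access)
ga.put(2, buf)                  # local -> global
```

The pattern makes locality explicit: the programmer decides which global range to fetch, computes locally, and writes back, mirroring the acquire/compute/release flow the paper contrasts with implicit shared memory.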

  4. Architectural Techniques For Managing Non-volatile Caches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mittal, Sparsh

    As chip power dissipation becomes a critical challenge in scaling processor performance, computer architects are forced to fundamentally rethink the design of modern processors; hence, the chip-design industry is now at a major inflection point in its hardware roadmap. The high leakage power and low density of SRAM pose serious obstacles to its use for designing large on-chip caches, and for this reason researchers are exploring non-volatile memory (NVM) devices, such as spin torque transfer RAM, phase change RAM and resistive RAM. However, since NVMs are not strictly superior to SRAM, effective architectural techniques are required for making them a universal memory solution. This book discusses techniques for designing processor caches using NVM devices. It presents algorithms and architectures for improving their energy efficiency, performance and lifetime. It also provides both qualitative and quantitative evaluation to help readers gain insights and motivate them to explore further. This book will be highly useful for beginners as well as veterans in computer architecture, chip designers, product managers and technical marketing professionals.

  5. The Optimization of In-Memory Space Partitioning Trees for Cache Utilization

    NASA Astrophysics Data System (ADS)

    Yeo, Myung Ho; Min, Young Soo; Bok, Kyoung Soo; Yoo, Jae Soo

    In this paper, a novel cache-conscious indexing technique based on space partitioning trees is proposed. Recently, many researchers have investigated efficient cache-conscious indexing techniques that improve the retrieval performance of in-memory database management systems. However, most studies considered data partitioning and targeted fast information retrieval. Existing data-partitioning-based index structures significantly degrade performance due to redundant accesses of overlapped spaces. In particular, R-tree-based index structures suffer from the propagation of MBR (Minimum Bounding Rectangle) information when data are updated frequently. In this paper, we propose an in-memory space partitioning index structure for optimal cache utilization. The proposed index structure is compared with existing index structures in terms of update performance, insertion performance and cache-utilization rate in a variety of environments. The results demonstrate that the proposed index structure offers better performance than existing index structures.

  6. Intellectual Self-Management in Old Age.

    ERIC Educational Resources Information Center

    Skinner, B.F.

    1983-01-01

    Holds that as people get older they can employ certain techniques to offset some of the physiological limitations on their intellectual abilities. Provides tips for overcoming some sensory deficiencies, memory loss, motivational changes, mental fatigue, and changes in social environment of the old. (Author/AOS)

  7. A microprocessor card software server to support the Quebec health microprocessor card project.

    PubMed

    Durant, P; Bérubé, J; Lavoie, G; Gamache, A; Ardouin, P; Papillon, M J; Fortin, J P

    1995-01-01

    The Quebec Health Smart Card Project is advocating the use of a memory card software server[1] (SCAM) to implement a portable medical record (PMR) on a smart card. The PMR is viewed as an object that can be manipulated by SCAM's services; in fact, we can talk about a pseudo-object-oriented approach. This software architecture provides a flexible and evolutive way to manage and optimize the PMR. SCAM is a generic software server; it can manage smart cards as well as optical (laser) cards or other types of memory cards. But in the specific case of the Quebec Health Card Project, SCAM is used to provide services between physicians' or pharmacists' software and IBM smart card technology. We propose to expose the concepts and techniques used to provide a generic environment for dealing with smart cards (and, more generally, with memory cards), to obtain a dynamic and evolutive PMR, to raise the system's overall security level and data integrity, to significantly optimize the management of the PMR, and to provide statistical information about the use of the PMR.

  8. Ruggedized minicomputer hardware and software topics, 1981: Proceedings of the 4th ROLM MIL-SPEC Computer User's Group Conference

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Presentations of a conference on the use of ruggedized minicomputers are summarized. The following topics are discussed: (1) the role of minicomputers in the development and/or certification of commercial or military airplanes in both the United States and Europe; (2) generalized software error detection techniques; (3) real time software development tools; (4) a redundancy management research tool for aircraft navigation/flight control sensors; (5) extended memory management techniques using a high order language; and (6) some comments on establishing a system maintenance scheme. Copies of presentation slides are also included.

  9. Don’t make cache too complex: A simple probability-based cache management scheme for SSDs

    PubMed Central

    Cho, Sangyeun; Choi, Jongmoo

    2017-01-01

    Solid-state drives (SSDs) have recently become a common storage component in computer systems, fueled by continued bit cost reductions achieved with smaller feature sizes and multiple-level cell technologies. However, as flash memory stores more bits per cell, its performance and reliability degrade substantially. To solve this problem, a fast non-volatile memory (NVM)-based cache has been employed within SSDs to reduce the long latency required to write data. Absorbing small writes in a fast NVM cache can also reduce the number of flash memory erase operations. To maximize the benefits of an NVM cache, it is important to increase the NVM cache utilization. In this paper, we propose and study ProCache, a simple NVM cache management scheme that makes cache-entrance decisions based on random probability testing. Our scheme is motivated by the observation that frequently written hot data will eventually enter the cache with high probability, while infrequently accessed cold data will not enter the cache easily. Owing to its simplicity, ProCache is easy to implement at a substantially smaller cost than similar previously studied techniques. We evaluate ProCache and conclude that it achieves performance comparable to a more complex reference-counter-based cache-management scheme. PMID:28358897
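    The probability-based admission idea can be illustrated in a few lines. This is a hedged sketch: the abstract does not specify ProCache's admission probability, eviction policy, or data structures, so the values and FIFO eviction below are assumptions for illustration only.

    ```python
    import random


    class ProbCache:
        """Sketch of probability-based cache admission: a block enters the
        cache only if a random test passes, so hot (frequently written)
        blocks eventually get cached while cold blocks rarely do."""

        def __init__(self, capacity, admit_prob=0.25, seed=None):
            self.capacity = capacity
            self.admit_prob = admit_prob     # assumed parameter, not from the paper
            self.entries = {}                # block id -> data
            self.order = []                  # FIFO eviction order (assumption)
            self.rng = random.Random(seed)

        def write(self, block, data):
            if block in self.entries:
                self.entries[block] = data   # already cached: update in place
                return True
            if self.rng.random() < self.admit_prob:   # probabilistic admission
                if len(self.order) >= self.capacity:  # evict the oldest entry
                    old = self.order.pop(0)
                    del self.entries[old]
                self.entries[block] = data
                self.order.append(block)
                return True
            return False                     # bypass cache; write goes to flash
    ```

    A block written n times is admitted with probability 1 - (1 - p)^n, which is why hot data ends up cached without any per-block reference counters.
    
    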

  10. Don't make cache too complex: A simple probability-based cache management scheme for SSDs.

    PubMed

    Baek, Seungjae; Cho, Sangyeun; Choi, Jongmoo

    2017-01-01

    Solid-state drives (SSDs) have recently become a common storage component in computer systems, fueled by continued bit cost reductions achieved with smaller feature sizes and multiple-level cell technologies. However, as flash memory stores more bits per cell, its performance and reliability degrade substantially. To solve this problem, a fast non-volatile memory (NVM)-based cache has been employed within SSDs to reduce the long latency required to write data. Absorbing small writes in a fast NVM cache can also reduce the number of flash memory erase operations. To maximize the benefits of an NVM cache, it is important to increase the NVM cache utilization. In this paper, we propose and study ProCache, a simple NVM cache management scheme that makes cache-entrance decisions based on random probability testing. Our scheme is motivated by the observation that frequently written hot data will eventually enter the cache with high probability, while infrequently accessed cold data will not enter the cache easily. Owing to its simplicity, ProCache is easy to implement at a substantially smaller cost than similar previously studied techniques. We evaluate ProCache and conclude that it achieves performance comparable to a more complex reference-counter-based cache-management scheme.

  11. Knowledge management in healthcare: towards 'knowledge-driven' decision-support services.

    PubMed

    Abidi, S S

    2001-09-01

    In this paper, we highlight the involvement of Knowledge Management in a healthcare enterprise. We argue that the 'knowledge quotient' of a healthcare enterprise can be enhanced by procuring diverse facets of knowledge from the seemingly placid healthcare data repositories, and subsequently operationalising the procured knowledge to derive a suite of Strategic Healthcare Decision-Support Services that can impact strategic decision-making, planning and management of the healthcare enterprise. We first present a reference Knowledge Management environment, a Healthcare Enterprise Memory, with the functionality to acquire, share and operationalise the various modalities of healthcare knowledge. Next, we present the functional and architectural specification of a Strategic Healthcare Decision-Support Services Info-structure, which effectuates a synergy between knowledge procurement (vis-à-vis Data Mining) and knowledge operationalisation (vis-à-vis Knowledge Management) techniques to generate a suite of strategic knowledge-driven decision-support services. In conclusion, we argue that the proposed Healthcare Enterprise Memory is an attempt to rethink the possible sources of leverage for improving healthcare delivery, thereby providing a valuable strategic planning and management resource to healthcare policy makers.

  12. Mobile Thread Task Manager

    NASA Technical Reports Server (NTRS)

    Clement, Bradley J.; Estlin, Tara A.; Bornstein, Benjamin J.

    2013-01-01

    The Mobile Thread Task Manager (MTTM) is being applied to parallelizing existing flight software, both to understand the benefits and to develop new techniques and architectural concepts for adapting software to multicore architectures. It allocates and load-balances tasks for a group of threads that migrate across processors to improve cache performance. To balance load across threads, the MTTM augments a basic map-reduce strategy in which workers draw jobs from a global queue. In a multicore processor, memory may be "homed" to the cache of a specific processor and must be accessed from that processor. The MTTM architecture wraps access to data with thread management that moves threads to the home processor for the needed data, so that the computation follows the data in an attempt to avoid L2 cache misses. Cache homing is also handled by a memory manager that translates identifiers to the processor IDs where the data will be homed (according to rules defined by the user). The user can also specify the number of threads and processors separately, which is important for tuning performance for different patterns of computation and memory access. MTTM efficiently processes tasks in parallel on a multiprocessor computer. It also provides an interface that makes it easier to adapt existing software to a multiprocessor environment.
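    The global-queue load balancing the abstract describes can be sketched with standard threads. This is an illustration only: the flight-software interfaces, thread migration, and cache-homing logic of MTTM are not shown here, and all names are hypothetical.

    ```python
    import queue
    import threading


    def run_tasks(tasks, num_threads=4):
        """Workers draw (function, argument) jobs from one shared global
        queue, so faster workers naturally take more work -- the basic
        load-balancing pattern described in the abstract."""
        jobs = queue.Queue()
        for t in tasks:
            jobs.put(t)

        results, lock = [], threading.Lock()

        def worker():
            while True:
                try:
                    fn, arg = jobs.get_nowait()  # pull the next job
                except queue.Empty:
                    return                       # queue drained: worker exits
                r = fn(arg)
                with lock:                       # collect results safely
                    results.append(r)

        threads = [threading.Thread(target=worker) for _ in range(num_threads)]
        for th in threads:
            th.start()
        for th in threads:
            th.join()
        return results
    ```

    Because every job is enqueued before the workers start, no worker can miss work; the queue itself provides the load balancing, with no static partitioning of tasks to threads.
    
    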

  13. Performance Analysis of Garbage Collection and Dynamic Reordering in a Lisp System. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Llames, Rene Lim

    1991-01-01

    Generation based garbage collection and dynamic reordering of objects are two techniques for improving the efficiency of memory management in Lisp and similar dynamic language systems. An analysis of the effect of generation configuration is presented, focusing on the effect of a number of generations and generation capabilities. Analytic timing and survival models are used to represent garbage collection runtime and to derive structural results on its behavior. The survival model provides bounds on the age of objects surviving a garbage collection at a particular level. Empirical results show that execution time is most sensitive to the capacity of the youngest generation. A technique called scanning for transport statistics, for evaluating the effectiveness of reordering independent of main memory size, is presented.

  14. The Redemptive Role of the Community College within a Rural Society.

    ERIC Educational Resources Information Center

    Brandon, Judy

    The Academic Career Studies (ACS) program at Clovis Community College focuses on time management, concentration and memory techniques, effective reading and note taking, test-taking skills, creativity, and the qualities of people who are "masters" at what they do. The program targets students who are working on their General Educational…

  15. Using virtual machine monitors to overcome the challenges of monitoring and managing virtualized cloud infrastructures

    NASA Astrophysics Data System (ADS)

    Bamiah, Mervat Adib; Brohi, Sarfraz Nawaz; Chuprat, Suriayati

    2012-01-01

    Virtualization is one of the hottest research topics nowadays. Several academic researchers and developers from the IT industry are designing approaches to solving the security and manageability issues of Virtual Machines (VMs) residing on virtualized cloud infrastructures. Moving an application from a physical to a virtual platform increases efficiency and flexibility and reduces management cost as well as effort. Cloud computing has adopted the paradigm of virtualization: using this technique, memory, CPU and computational power are provided to clients' VMs by utilizing the underlying physical hardware. Besides these advantages, there are a few challenges in adopting virtualization, such as management of VMs and network traffic, unexpected additional costs and resource allocation. The Virtual Machine Monitor (VMM), or hypervisor, is the tool used by cloud providers to manage VMs in the cloud. There are several heterogeneous hypervisors provided by various vendors, including VMware, Hyper-V, Xen and Kernel Virtual Machine (KVM). Considering the challenge of VM management, this paper describes several techniques to monitor and manage virtualized cloud infrastructures.

  16. Flash memory management system and method utilizing multiple block list windows

    NASA Technical Reports Server (NTRS)

    Chow, James (Inventor); Gender, Thomas K. (Inventor)

    2005-01-01

    The present invention provides a flash memory management system and method with increased performance. The flash memory management system provides the ability to efficiently manage and allocate flash memory use in a way that improves reliability and longevity, while maintaining good performance levels. The flash memory management system includes a free block mechanism, a disk maintenance mechanism, and a bad block detection mechanism. The free block mechanism provides efficient sorting of free blocks to facilitate selecting low use blocks for writing. The disk maintenance mechanism provides for the ability to efficiently clean flash memory blocks during processor idle times. The bad block detection mechanism provides the ability to better detect when a block of flash memory is likely to go bad. The flash status mechanism stores information in fast access memory that describes the content and status of the data in the flash disk. The new bank detection mechanism provides the ability to automatically detect when new banks of flash memory are added to the system. Together, these mechanisms provide a flash memory management system that can improve the operational efficiency of systems that utilize flash memory.
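    The free-block mechanism's "sorting of free blocks to facilitate selecting low use blocks" can be illustrated with a min-heap keyed by erase count. This is a hypothetical sketch of the general wear-leveling idea, not the patented implementation.

    ```python
    import heapq


    class FreeBlockPool:
        """Keep free flash blocks ordered by erase count so that writes
        always go to the least-worn (lowest-use) block."""

        def __init__(self, block_ids):
            # (erase_count, block_id) tuples; heapq keeps the minimum on top
            self.heap = [(0, b) for b in block_ids]
            heapq.heapify(self.heap)

        def allocate(self):
            # pop the least-erased free block for the next write
            erases, block = heapq.heappop(self.heap)
            return block, erases

        def free(self, block, erases):
            # the block returns to the pool after one more erase cycle
            heapq.heappush(self.heap, (erases + 1, block))
    ```

    Keeping the pool as a heap makes selecting the low-use block O(log n) per allocation, which is what lets wear be spread evenly across blocks and extends flash longevity.
    
    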

  17. VOP memory management in MPEG-4

    NASA Astrophysics Data System (ADS)

    Vaithianathan, Karthikeyan; Panchanathan, Sethuraman

    2001-03-01

    MPEG-4 is a multimedia standard that requires Video Object Planes (VOPs). Generation of VOPs for arbitrary video sequences is still a challenging problem that largely remains unsolved. Nevertheless, if the problem is treated by imposing certain constraints, solutions for specific application domains can be found. MPEG-4 applications in mobile devices are one such domain, where the opposing goals of low power and high throughput must both be met. Efficient memory management plays a major role in reducing power consumption. Memory management for VOPs is particularly difficult because the lifetimes of these objects vary and may overlap. Varying object lifetimes require dynamic memory management, where memory fragmentation is a key problem that must be addressed. In general, memory management systems address this problem through a combination of strategy, policy and mechanism. For MPEG-4-based mobile devices that lack instruction processors, a hardware-based memory management solution is necessary. In MPEG-4-based mobile devices that have a RISC processor, using a real-time operating system (RTOS) for this memory management task is not expected to be efficient, because the strategies and policies used by the RTOS are often tuned for handling memory segments of smaller sizes than typical object sizes. Hence, a memory management scheme specifically tuned for VOPs is important. In this paper, different strategies, policies and mechanisms for memory management are considered, and an efficient combination is proposed for VOP memory management, along with a hardware architecture that can handle the proposed combination.
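    One standard way to sidestep the fragmentation problem the abstract describes is a fixed-slot pool allocator, sketched below. This is only an illustration of the general idea (the slot sizes, names, and single-size pool are assumptions, not the paper's proposed architecture).

    ```python
    class VOPPool:
        """Fixed-slot pool for object buffers: allocating whole
        object-sized slots from a preallocated arena avoids the external
        fragmentation that general-purpose heaps suffer when object
        lifetimes vary and overlap."""

        def __init__(self, slot_size, num_slots):
            self.slot_size = slot_size
            self.free_slots = list(range(num_slots))      # all slots start free
            self.arena = bytearray(slot_size * num_slots) # preallocated backing store

        def alloc(self):
            if not self.free_slots:
                raise MemoryError("pool exhausted")
            return self.free_slots.pop()  # O(1); any free slot fits any object

        def release(self, slot):
            self.free_slots.append(slot)  # slot is immediately reusable
    ```

    Because every slot has the same size, freeing and reallocating in any order can never fragment the arena; the trade-off is internal waste when objects are smaller than a slot.
    
    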

  18. Multipurpose panel, phase 1, study report. [display utilizing multiplexing and digital techniques

    NASA Technical Reports Server (NTRS)

    Parkin, W.

    1975-01-01

    The feasibility of a multipurpose panel which provides a programmable electronic display for changeable panel nomenclature, multiplexes similar indicator display signals to the signal display, and demultiplexes command signals is examined. Topics discussed include: electronic display technology, miniaturized electronic and memory devices, and data management systems which employ digital address and multiplexing.

  19. The effectiveness of interventions in supporting self-management of informal caregivers of people with dementia; a systematic meta review.

    PubMed

    Huis In Het Veld, Judith G; Verkaik, Renate; Mistiaen, Patriek; van Meijel, Berno; Francke, Anneke L

    2015-11-11

    Informal caregivers of people with dementia are challenged in managing the consequences of dementia in daily life. The objective of this meta-review was to synthesize evidence from previous systematic reviews about professional self-management support interventions for this group. In March 2014, searches were conducted in PubMed, CINAHL, Cochrane Library, Embase and PsycINFO. The PRISMA Statement was followed. Interventions were grouped using Martin's targets of self-management, covering five targets: relationship with family, maintaining an active lifestyle, psychological wellbeing, techniques to cope with memory changes, and information about dementia. The outcomes of the included interventions were synthesized, and conclusions were drawn about the level of evidence for the effectiveness of interventions within each target. Ten high-quality systematic reviews were selected. Evidence exists for the effectiveness of professional self-management support interventions targeting psychological wellbeing on stress and social outcomes of informal caregivers. In addition, evidence exists for the effectiveness of interventions targeting information on ability/knowledge. Limited evidence was found for the effectiveness of interventions targeting techniques to cope with memory change on coping skills and mood, and for interventions targeting information on the outcomes sense of competence and decision-making confidence of informal caregivers. Scientific evidence exists for the effectiveness of a number of professional self-management support interventions targeting psychological wellbeing and information. Health care professionals could note that psycho-education was integrated into most of the self-management support interventions found to be effective in this meta-review. Furthermore, longer and more intensive interventions were associated with greater effects.

  20. A processing architecture for associative short-term memory in electronic noses

    NASA Astrophysics Data System (ADS)

    Pioggia, G.; Ferro, M.; Di Francesco, F.; DeRossi, D.

    2006-11-01

    Electronic nose (e-nose) architectures usually consist of several modules that process various tasks such as control, data acquisition, data filtering, feature selection and pattern analysis. Heterogeneous techniques derived from chemometrics, neural networks, and fuzzy rules used to implement such tasks may lead to issues concerning module interconnection and cooperation. Moreover, a new learning phase is mandatory once new measurements have been added to the dataset, thus causing changes in the previously derived model. Consequently, if a loss in the previous learning occurs (catastrophic interference), real-time applications of e-noses are limited. To overcome these problems this paper presents an architecture for dynamic and efficient management of multi-transducer data processing techniques and for saving an associative short-term memory of the previously learned model. The architecture implements an artificial model of a hippocampus-based working memory, enabling the system to be ready for real-time applications. Starting from the base models available in the architecture core, dedicated models for neurons, maps and connections were tailored to an artificial olfactory system devoted to analysing olive oil. In order to verify the ability of the processing architecture in associative and short-term memory, a paired-associate learning test was applied. The avoidance of catastrophic interference was observed.

  1. Combating Memory Corruption Attacks On Scada Devices

    NASA Astrophysics Data System (ADS)

    Bellettini, Carlo; Rrushi, Julian

    Memory corruption attacks on SCADA devices can cause significant disruptions to control systems and the industrial processes they operate. However, despite the presence of numerous memory corruption vulnerabilities, few, if any, techniques have been proposed for addressing the vulnerabilities or for combating memory corruption attacks. This paper describes a technique for defending against memory corruption attacks by enforcing logical boundaries between potentially hostile data and safe data in protected processes. The technique encrypts all input data using random keys; the encrypted data is stored in main memory and is decrypted according to the principle of least privilege just before it is processed by the CPU. The defensive technique affects the precision with which attackers can corrupt control data and pure data, protecting against code injection and arc injection attacks, and alleviating problems posed by the incomparability of mitigation techniques. An experimental evaluation involving the popular Modbus protocol demonstrates the feasibility and efficiency of the defensive technique.
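    The encrypt-at-rest, decrypt-just-before-use principle can be illustrated with a toy XOR mask. This sketch is only an illustration of the principle: the paper's actual cipher, key handling, and in-memory layout are not specified here, and the class and method names are hypothetical.

    ```python
    import os


    class GuardedInput:
        """Hold input data masked with a per-buffer random key while it sits
        in memory; unmask it only at the moment of use, so an attacker who
        corrupts or injects into the stored bytes cannot predict what the
        CPU will actually process."""

        def __init__(self, data: bytes):
            self._key = os.urandom(len(data))  # random per-buffer key
            # encrypted form is what resides in main memory
            self._masked = bytes(d ^ k for d, k in zip(data, self._key))

        def use(self) -> bytes:
            # decrypt on demand, "just before it is processed by the CPU"
            return bytes(m ^ k for m, k in zip(self._masked, self._key))
    ```

    Any attacker-controlled write to the masked buffer decodes to effectively random bytes when unmasked, which is what degrades the precision of control-data and pure-data corruption.
    
    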

  2. Associative Memory Synthesis, Performance, Storage Capacity And Updating: New Heteroassociative Memory Results

    NASA Astrophysics Data System (ADS)

    Casasent, David; Telfer, Brian

    1988-02-01

    The storage capacity, noise performance, and synthesis of associative memories for image analysis are considered. Associative memory synthesis is shown to be very similar to that of linear discriminant functions used in pattern recognition. These lead to new associative memories and new associative memory synthesis and recollection vector encodings. Heteroassociative memories are emphasized in this paper, rather than autoassociative memories, since heteroassociative memories provide scene analysis decisions, rather than merely enhanced output images. The analysis of heteroassociative memories has been given little attention. Heteroassociative memory performance and storage capacity are shown to be quite different from those of autoassociative memories, with much more dependence on the recollection vectors used and less dependence on M/N. This allows several different and preferable synthesis techniques to be considered for associative memories. These new associative memory synthesis techniques and new techniques to update associative memories are included. We also introduce a new SNR performance measure that is preferable to conventional noise standard deviation ratios.
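    One classical synthesis technique in this family, the outer-product (Hebbian) construction for a heteroassociative memory, can be sketched as follows. This illustrates the general idea only, not the specific discriminant-function-based syntheses or encodings proposed in the paper; bipolar (+1/-1) vectors are assumed.

    ```python
    def train_heteroassociative(pairs):
        """Build memory matrix M as the sum of outer products y * x^T over
        all (key, recollection) pairs -- the classical Hebbian synthesis."""
        n, m = len(pairs[0][0]), len(pairs[0][1])
        M = [[0] * n for _ in range(m)]
        for x, y in pairs:
            for i in range(m):
                for j in range(n):
                    M[i][j] += y[i] * x[j]
        return M


    def recall(M, x):
        """Recollection: threshold M @ x back to a bipolar output vector."""
        def sign(v):
            return 1 if v >= 0 else -1
        return [sign(sum(M[i][j] * x[j] for j in range(len(x))))
                for i in range(len(M))]
    ```

    With mutually orthogonal keys the cross-talk terms vanish and each key recalls its paired output exactly; with correlated keys, cross-talk degrades recall, which is one reason the paper considers alternative synthesis techniques.
    
    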

  3. Operational Exercise Integration Recommendations for DoD Cyber Ranges

    DTIC Science & Technology

    2015-08-05

    be the precision and recall of a security information and event management (SIEM) system's notifications of unauthorized access to that directory... network traffic, port scanning; Deplete Resources: TCP flooding, memory leak exploitation; Injection: cross-site scripting attacks, SQL injection; Deceptive... requirements for personnel development; tactics, techniques, and procedures (TTPs) development; and mission rehearsals. While unique in their own

  4. Managing Chemotherapy Side Effects: Memory Changes

    MedlinePlus

    ... Cancer Institute: Managing Chemotherapy Side Effects: Memory Changes. What is causing these changes? Your doctor ... thinking or remembering things. Managing Chemotherapy Side Effects: Memory Changes. Get help to remember things. Write down ...

  5. Everyday memory strategies for medication adherence.

    PubMed

    Boron, Julie Blaskewicz; Rogers, Wendy A; Fisk, Arthur D

    2013-01-01

    The need to manage chronic diseases and multiple medications increases for many older adults. Older adults are aware of memory declines and incorporate compensatory techniques. Everyday memory strategies used to support medication adherence were investigated. A survey distributed to 2000 households in the Atlanta metropolitan area yielded a 19.9% response rate, including 354 older adults aged 60-80 years. Older adults reported forgetting to take their medications, more so when their activities deviated from normal routines, for example during unexpected events. The majority of older adults endorsed at least two compensatory strategies, which they perceived to be more helpful in normal routines. Use of compensatory strategies was associated with higher education, more medications, concern about forgetting, and self-efficacy in taking medications. As memory changes, older adults rely on multiple cues, and perceive reliance on multiple cues to be helpful. These data have implications for the design and successful implementation of medication reminder systems and interventions. Copyright © 2013 Mosby, Inc. All rights reserved.

  6. Exploring Childhood Memories with Adult Survivors of Sexual Abuse: Concrete Reconstruction and Visualization Techniques.

    ERIC Educational Resources Information Center

    Roland, Catherine B.

    1993-01-01

    Describes two memory-enhancing techniques, visualization and concrete reconstruction, that have been successful in counseling adult survivors of sexual abuse. Includes suggested implementations, case examples, and implications for incorporating memory techniques into counseling process. Describes various risk factors involved in using these…

  7. Network resiliency through memory health monitoring and proactive management

    DOEpatents

    Andrade Costa, Carlos H.; Cher, Chen-Yong; Park, Yoonho; Rosenburg, Bryan S.; Ryu, Kyung D.

    2017-11-21

    A method for managing a network queue memory includes receiving sensor information about the network queue memory, predicting a memory failure in the network queue memory based on the sensor information, and outputting a notification through a plurality of nodes forming a network and using the network queue memory, the notification configuring communications between the nodes.
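    The predict-and-notify idea can be illustrated with a toy threshold rule. The window size, threshold, and sensor fields below are assumptions for illustration; the patent abstract does not specify how the prediction is made.

    ```python
    def predict_failure(error_counts, threshold=10, window=3):
        """Toy failure predictor: flag an impending memory failure when the
        last `window` correctable-error counts from the sensor all sit at
        or above `threshold`, i.e. the error rate is persistently high."""
        recent = error_counts[-window:]
        return len(recent) == window and all(c >= threshold for c in recent)


    def build_notification(queue_id, failing):
        """If failure is predicted, produce the notification that would be
        propagated to the nodes so they can reconfigure communications."""
        return {"queue": queue_id, "avoid": failing}
    ```

    Requiring several consecutive high readings, rather than one spike, is a simple way to avoid reconfiguring the network on transient sensor noise.
    
    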

  8. Cognitive rehabilitation of amnesia after virus encephalitis: a case report.

    PubMed

    Miotto, Eliane Correa

    2007-01-01

    A number of memory rehabilitation techniques have targeted people with various degrees of memory impairments. However, few studies have shown the contribution of preserved non-declarative memory capacity and errorless learning in the treatment of amnesic patients. The current case report describes the memory rehabilitation of a 44-year-old man with amnesia following viral encephalitis. The patient's procedural memory capacity had an important role in the use of a motor imagery strategy to remember people's names. It was further demonstrated that the application of a verbal learning technique was helpful in recalling new verbal information. These different memory rehabilitation techniques are discussed in terms of alternative possibilities in the rehabilitation of amnesic patients.

  9. Adaptive mesh refinement for characteristic grids

    NASA Astrophysics Data System (ADS)

    Thornburg, Jonathan

    2011-05-01

    I consider techniques for Berger-Oliger adaptive mesh refinement (AMR) when numerically solving partial differential equations with wave-like solutions, using characteristic (double-null) grids. Such AMR algorithms are naturally recursive, and the best-known past Berger-Oliger characteristic AMR algorithm, that of Pretorius and Lehner (J Comp Phys 198:10, 2004), recurses on individual "diamond" characteristic grid cells. This leads to the use of fine-grained memory management, with individual grid cells kept in two-dimensional linked lists at each refinement level. This complicates the implementation and adds overhead in both space and time. Here I describe a Berger-Oliger characteristic AMR algorithm which instead recurses on null slices. This algorithm is very similar to the usual Cauchy Berger-Oliger algorithm, and uses relatively coarse-grained memory management, allowing entire null slices to be stored in contiguous arrays in memory. The algorithm is very efficient in both space and time. I describe discretizations yielding both second and fourth order global accuracy. My code implementing the algorithm described here is included in the electronic supplementary materials accompanying this paper, and is freely available to other researchers under the terms of the GNU general public license.

  10. Debates to personal conclusion in peripheral nerve injury and reconstruction: A 30-year experience at Chang Gung Memorial Hospital

    PubMed Central

    Chuang, David Chwei-Chin

    2016-01-01

    Significant progress has been achieved in the science and management of peripheral nerve injuries over the past 40 years. Yet there are many questions and few answers. The author, with 30 years of experience in treating them at the Chang Gung Memorial Hospital, addresses debates on various issues with personal conclusions. These include: (1) degree of peripheral nerve injury; (2) timing of nerve repair; (3) technique of nerve repair; (4) level of brachial plexus injury; (5) level of radial nerve injury; (6) traction avulsion amputation of major limb; (7) proximal vs. distal nerve transfers in brachial plexus injuries; and (8) post-paralysis facial synkinesis. PMID:27833273

  11. Memory management in genome-wide association studies

    PubMed Central

    2009-01-01

    Genome-wide association is a powerful tool for the identification of genes that underlie common diseases. Genome-wide association studies generate billions of genotypes and pose significant computational challenges for most users including limited computer memory. We applied a recently developed memory management tool to two analyses of North American Rheumatoid Arthritis Consortium studies and measured the performance in terms of central processing unit and memory usage. We conclude that our memory management approach is simple, efficient, and effective for genome-wide association studies. PMID:20018047
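    One common memory-reduction technique in this setting, which may or may not be the tool's actual approach (the abstract does not say), is packing genotypes at 2 bits each instead of one byte each:

    ```python
    def pack_genotypes(genos):
        """Genotypes coded 0, 1, 2, or 3 (missing) fit in 2 bits each,
        packing four genotypes per byte -- roughly a 4x memory saving
        over one byte per genotype for billions of genotypes."""
        packed = bytearray((len(genos) + 3) // 4)
        for i, g in enumerate(genos):
            packed[i // 4] |= (g & 0b11) << (2 * (i % 4))
        return bytes(packed)


    def unpack_genotypes(packed, n):
        """Recover the first n genotypes from a packed byte string."""
        return [(packed[i // 4] >> (2 * (i % 4))) & 0b11 for i in range(n)]
    ```

    For a study with one million SNPs and two thousand samples, this packing drops the genotype matrix from about 2 GB to about 500 MB, often the difference between fitting in memory and not.
    
    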

  12. Prospective memory in schizophrenia: relationship to medication management skills, neurocognition, and symptoms in individuals with schizophrenia.

    PubMed

    Raskin, Sarah A; Maye, Jacqueline; Rogers, Alexandra; Correll, David; Zamroziewicz, Marta; Kurtz, Matthew

    2014-05-01

    Impaired adherence to medication regimens is a serious concern for individuals with schizophrenia, linked to relapse and poorer outcomes. One possible reason for poor adherence to medication is a poor ability to remember future intentions, labeled prospective memory skills. It has been demonstrated in several studies that individuals with schizophrenia have impairments in prospective memory that are linked to everyday life skills. However, there have been no studies, to our knowledge, examining the relationship of a clinical measure of prospective memory to medication management skills, a key element of successful adherence. In this study, 41 individuals with schizophrenia and 25 healthy adults were administered a standardized test battery that included measures of prospective memory, medication management skills, neurocognition, and symptoms. Individuals with schizophrenia demonstrated impairments in prospective memory (both time- and event-based) relative to healthy controls. Performance on the test of prospective memory was correlated with the standardized measure of medication management in individuals with schizophrenia. Moreover, the test of prospective memory predicted skills in medication adherence even after measures of neurocognition were accounted for. This suggests that prospective memory may play a key role in medication management skills and thus should be a target of cognitive remediation programs.

  13. A qualitative study on personal information management (PIM) in clinical and basic sciences faculty members of a medical university in Iran

    PubMed Central

    Sedghi, Shahram; Abdolahi, Nida; Azimi, Ali; Tahamtan, Iman; Abdollahi, Leila

    2015-01-01

    Background: Personal Information Management (PIM) refers to the tools and activities used to save and retrieve personal information for future use. This study examined the PIM activities of faculty members of Iran University of Medical Sciences (IUMS) regarding their preferred PIM tools and four aspects of acquiring, organizing, storing and retrieving personal information. Methods: The qualitative design was based on a phenomenology approach and we carried out 37 interviews with clinical and basic sciences faculty members of IUMS in 2014. The participants were selected using a random sampling method. All interviews were recorded by a digital voice recorder, and then transcribed, codified and finally analyzed using NVivo 8 software. Results: The use of PIM electronic tools (e-tools) was below expectation among the studied sample, and just 37% had reasonable knowledge of PIM e-tools such as external hard drives, flash memories, etc. However, all participants used both paper and electronic devices to store and access information. Internal mass memories (in laptops) and flash memories were the most used e-tools to save information. Most participants used "subject" (41.00%) and "file name" (33.7%) to save, organize and retrieve their stored information. Most users preferred paper-based rather than electronic tools to keep their personal information. Conclusion: Faculty members had little knowledge about PIM techniques and tools. Those who organized personal information could more easily retrieve the stored information for future use. Enhancing familiarity with PIM tools and offering training courses on PIM tools and techniques are suggested. PMID:26793648

  14. Hard Real-Time: C++ Versus RTSJ

    NASA Technical Reports Server (NTRS)

    Dvorak, Daniel L.; Reinholtz, William K.

    2004-01-01

    In the domain of hard real-time systems, which language is better: C++ or the Real-Time Specification for Java (RTSJ)? Although ordinary Java provides a more productive programming environment than C++ due to its automatic memory management, that benefit does not apply to RTSJ when using NoHeapRealtimeThread and non-heap memory areas. As a result, RTSJ programmers must manage non-heap memory explicitly. While that is not a deterrent for veteran real-time programmers, for whom explicit memory management is common, the lack of certain language features in RTSJ (and Java) makes that manual memory management harder to accomplish safely than in C++. This paper illustrates the problem for practitioners in the context of moving data and managing memory in a real-time producer/consumer pattern. The relative ease of implementation and safety of the C++ programming model suggests that RTSJ has a struggle ahead in the domain of hard real-time applications, despite its other attractive features.

  15. Alignment of high-throughput sequencing data inside in-memory databases.

    PubMed

    Firnkorn, Daniel; Knaup-Gregori, Petra; Lorenzo Bermejo, Justo; Ganzinger, Matthias

    2014-01-01

    In times of high-throughput DNA sequencing techniques, performance-capable analysis of DNA sequences is of high importance. Computer-supported DNA analysis is still an intensive, time-consuming task. In this paper we explore the potential of a new in-memory database technology by using SAP's High Performance Analytic Appliance (HANA). We focus on read alignment as one of the first steps in DNA sequence analysis. In particular, we examined the widely used Burrows-Wheeler Aligner (BWA) and implemented stored procedures in both HANA and the free database system MySQL to compare execution time and memory management. To ensure that the results are comparable, MySQL was run in memory as well, utilizing its integrated memory engine for database table creation. We implemented stored procedures containing exact and inexact searching of DNA reads within the reference genome GRCh37. Due to technical restrictions in SAP HANA concerning recursion, the inexact matching problem could not be implemented on this platform. Hence, performance analysis between HANA and MySQL was made by comparing the execution time of the exact search procedures. Here, HANA was approximately 27 times faster than MySQL, which means that there is high potential within the new in-memory concepts, leading to further developments of DNA analysis procedures in the future.
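
    The exact-search step benchmarked above can be sketched independently of any database engine. The following minimal matcher, in Python rather than a HANA or MySQL stored procedure, reports every offset at which a read occurs in a reference; the sequences are illustrative and not drawn from GRCh37:

```python
def exact_align(reference: str, read: str) -> list[int]:
    """Return all 0-based offsets where `read` occurs exactly in `reference`,
    including overlapping occurrences."""
    positions = []
    start = reference.find(read)
    while start != -1:
        positions.append(start)
        # Resume one position later so overlapping matches are found too.
        start = reference.find(read, start + 1)
    return positions

reference = "ACGTACGTGACCA"
print(exact_align(reference, "ACGT"))  # every occurrence of the read
print(exact_align(reference, "TTT"))   # no match: empty list
```

    An inexact (mismatch-tolerant) variant would recurse over candidate edits of the read, which is exactly the step that ran into HANA's recursion restrictions in the study.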

  16. Martin Mayman's early memories technique: bridging the gap between personality assessment and psychotherapy.

    PubMed

    Fowler, J C; Hilsenroth, M J; Handler, L

    2000-08-01

    In this article, we describe Martin Mayman's approach to early childhood memories as a projective technique, beginning with his scientific interest in learning theory, coupled with his interest in ego psychology and object relations theory. We describe Mayman's contributions to the use of the early memories technique to inform the psychotherapy process, tying assessment closely to psychotherapy and making assessment more useful in treatment. We then present a representative sample of research studies that demonstrate the reliability and validity of early memories, followed by case examples in which the early memories informed the therapy process, including issues of transference and countertransference.

  17. Fast Initialization of Bubble-Memory Systems

    NASA Technical Reports Server (NTRS)

    Looney, K. T.; Nichols, C. D.; Hayes, P. J.

    1986-01-01

    Improved scheme several orders of magnitude faster than normal initialization scheme. State-of-the-art commercial bubble-memory device used. Hardware interface designed connects controlling microprocessor to bubble-memory circuitry. System software written to exercise various functions of bubble-memory system; comparison made between normal and fast techniques. Future implementations of approach utilize E2PROM (electrically-erasable programmable read-only memory) to provide greater system flexibility. Fast-initialization technique applicable to all bubble-memory devices.

  18. A class hierarchical, object-oriented approach to virtual memory management

    NASA Technical Reports Server (NTRS)

    Russo, Vincent F.; Campbell, Roy H.; Johnston, Gary M.

    1989-01-01

    The Choices family of operating systems exploits class hierarchies and object-oriented programming to facilitate the construction of customized operating systems for shared memory and networked multiprocessors. The software is being used in the Tapestry laboratory to study the performance of algorithms, mechanisms, and policies for parallel systems. Described here are the architectural design and class hierarchy of the Choices virtual memory management system. The software and hardware mechanisms and policies of a virtual memory system implement a memory hierarchy that exploits the trade-off between response times and storage capacities. In Choices, the notion of a memory hierarchy is captured by abstract classes. Concrete subclasses of those abstractions implement a virtual address space, segmentation, paging, physical memory management, secondary storage, and remote (that is, networked) storage. Captured in the notion of a memory hierarchy are classes that represent memory objects. These classes provide a storage mechanism that contains encapsulated data and have methods to read or write the memory object. Each of these classes provides specializations to represent the memory hierarchy.
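
    The hierarchy of abstract memory-object classes described above can be illustrated with a small sketch. Choices itself is written in C++; the Python below is only an analogy, and the class names (MemoryObject, PhysicalMemory, CachedStore) are illustrative stand-ins, not the actual Choices classes:

```python
from abc import ABC, abstractmethod

class MemoryObject(ABC):
    """Abstract node in the memory hierarchy: encapsulated data
    plus methods to read or write the memory object."""
    @abstractmethod
    def read(self, offset: int, length: int) -> bytes: ...
    @abstractmethod
    def write(self, offset: int, data: bytes) -> None: ...

class PhysicalMemory(MemoryObject):
    """Fast, capacity-limited level of the hierarchy."""
    def __init__(self, size: int):
        self._buf = bytearray(size)
    def read(self, offset, length):
        return bytes(self._buf[offset:offset + length])
    def write(self, offset, data):
        self._buf[offset:offset + len(data)] = data

class CachedStore(MemoryObject):
    """Specialization pairing a fast level with slower backing storage:
    reads are served from the fast level, writes go to both (write-through)."""
    def __init__(self, fast: MemoryObject, slow: MemoryObject):
        self._fast, self._slow = fast, slow
    def read(self, offset, length):
        return self._fast.read(offset, length)
    def write(self, offset, data):
        self._fast.write(offset, data)
        self._slow.write(offset, data)

mem = CachedStore(PhysicalMemory(64), PhysicalMemory(64))
mem.write(8, b"choices")
print(mem.read(8, 7))
```

    The point of the subclassing is the one the abstract makes: each concrete class captures one response-time/capacity trade-off in the hierarchy behind a uniform read/write interface.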

  19. Expert system shell to reason on large amounts of data

    NASA Technical Reports Server (NTRS)

    Giuffrida, Giovanni

    1994-01-01

    The current database management systems (DBMS's) do not provide a sophisticated environment for developing rule-based expert system applications. Some of the new DBMS's come with some sort of rule mechanism; these are active and deductive database systems. However, neither is featured richly enough to support a full rule-based implementation. On the other hand, current expert system shells do not provide any link with external databases; that is, all the data are kept in the system working memory, which is maintained in main memory. For some applications the limited size of the available working memory can constrain development; typically these are applications which require reasoning on huge amounts of data that do not fit into the computer main memory. Moreover, in some cases these data may already be available in database systems and continuously updated while the expert system is running. This paper proposes an architecture which employs knowledge discovery techniques to reduce the amount of data to be stored in main memory; in this architecture a standard DBMS is coupled with a rule-based language. The data are stored in the DBMS. An interface between the two systems is responsible for inducing knowledge from the set of relations. Such induced knowledge is then transferred to the rule-based language's working memory.
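
    The proposed coupling can be sketched with an in-memory SQLite table standing in for the external DBMS. The schema, the sensor data, and the averaging "induction" step below are illustrative assumptions, not the paper's actual interface:

```python
import sqlite3

# A standard DBMS holds the raw facts; only induced, compact knowledge
# enters the rule engine's working memory (a plain dict here).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE readings (sensor TEXT, value REAL)")
db.executemany("INSERT INTO readings VALUES (?, ?)",
               [("temp", 71.0), ("temp", 75.0), ("pressure", 1.2)])

def induce_knowledge(db):
    """Interface layer: reduce many tuples to per-sensor summaries so the
    working memory stays small regardless of table size."""
    rows = db.execute(
        "SELECT sensor, AVG(value), COUNT(*) FROM readings GROUP BY sensor")
    return {sensor: {"avg": avg, "n": n} for sensor, avg, n in rows}

working_memory = induce_knowledge(db)
# A rule fires on the induced summary, never on the raw tuples.
alerts = [s for s, fact in working_memory.items() if fact["avg"] > 70.0]
print(alerts)
```

    The design choice mirrored here is that the DBMS can keep growing (and being updated) while the rule language only ever sees the bounded induced summaries.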

  20. Prospective memory in schizophrenia: Relationship to medication management skills, neurocognition and symptoms in individuals with schizophrenia

    PubMed Central

    Raskin, S.; Maye, J.; Rogers, A.; Correll, D.; Zamroziewicz, M.; Kurtz, M.

    2014-01-01

    Objective Impaired adherence to medication regimens is a serious concern for individuals with schizophrenia, linked to relapse and poorer outcomes. One possible reason for poor adherence to medication is poor ability to remember future intentions, labeled prospective memory skills. It has been demonstrated in several studies that individuals with schizophrenia have impairments in prospective memory that are linked to everyday life skills. However, there have been no studies, to our knowledge, examining the relationship of a clinical measure of prospective memory to medication management skills, a key element of successful adherence. Methods In this study, 41 individuals with schizophrenia and 25 healthy adults were administered a standardized test battery that included measures of prospective memory, medication management skills, neurocognition and symptoms. Results Individuals with schizophrenia demonstrated impairments in prospective memory (both time and event-based) relative to healthy controls. Performance on the test of prospective memory was correlated with the standardized measure of medication management in individuals with schizophrenia. Moreover, the test of prospective memory predicted skills in medication adherence even after measures of neurocognition were accounted for. Conclusions This suggests that prospective memory may play a key role in medication management skills and thus should be a target of cognitive remediation programs. PMID:24188118

  1. Radiation-Hardened Solid-State Drive

    NASA Technical Reports Server (NTRS)

    Sheldon, Douglas J.

    2010-01-01

    A method is provided for a radiation-hardened (rad-hard) solid-state drive for space mission memory applications by combining rad-hard and commercial off-the-shelf (COTS) non-volatile memories (NVMs) into a hybrid architecture. The architecture is controlled by a rad-hard ASIC (application specific integrated circuit) or an FPGA (field programmable gate array). Specific error handling and data management protocols are developed for use in a rad-hard environment. The rad-hard memories are smaller in overall memory density, but are used to control and manage radiation-induced errors in the main, and much larger density, non-rad-hard COTS memory devices. Small amounts of rad-hard memory are used as error buffers and temporary caches for radiation-induced errors in the large COTS memories. The rad-hard ASIC/FPGA implements a variety of error-handling protocols to manage these radiation-induced errors. The large COTS memory is triplicated for protection, and CRC-based counters are calculated for sub-areas in each COTS NVM array. These counters are stored in the rad-hard non-volatile memory. Through monitoring, rewriting, regeneration, triplication, and long-term storage, radiation-induced errors in the large NV memory are managed. The rad-hard ASIC/FPGA also interfaces with the external computer buses.
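
    The triplication-and-CRC idea can be sketched in a few lines. The dict-backed stores and the returned CRC below are illustrative stand-ins for the COTS NVM arrays and the rad-hard counter buffer; this is not the flight design:

```python
import zlib

def write_triplicated(stores, addr, data):
    """Write the same block to three independent memory regions and
    return a CRC of the block (stood in here for the counters kept
    in the rad-hard NVM)."""
    for store in stores:
        store[addr] = bytearray(data)  # independent copy per region
    return zlib.crc32(data)

def read_voted(stores, addr, expected_crc):
    """Majority-vote the three copies byte by byte, then verify the CRC.
    (Three-way ties are not resolved in this sketch.)"""
    copies = [stores[i][addr] for i in range(3)]
    voted = bytes(max(set(col), key=col.count) for col in zip(*copies))
    ok = zlib.crc32(voted) == expected_crc
    return voted, ok

stores = [dict(), dict(), dict()]
crc = write_triplicated(stores, 0x10, b"telemetry")
stores[1][0x10][3] ^= 0xFF          # simulate a radiation-induced bit flip
data, ok = read_voted(stores, 0x10, crc)
print(data, ok)
```

    The single flipped byte is outvoted two-to-one, and the CRC stored outside the large memory confirms the repaired block, mirroring the monitoring/regeneration loop the abstract describes.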

  2. Qualitative evaluation of a self-management intervention for people in the early stage of dementia.

    PubMed

    Martin, Faith; Turner, Andrew; Wallace, Louise M; Stanley, Damian; Jesuthasan, Jana; Bradbury, Nicola

    2015-07-01

    Self-management programs are effective for people living with chronic illnesses. However, there has been little research addressing self-management for people with dementia in the early stages. This study presents a qualitative evaluation of the experiences of attending a novel self-management program and an initial process evaluation. The program was designed with and for people with dementia. It addresses: (a) relationship with family, (b) maintenance of an active lifestyle, (c) psychological well-being, (d) techniques to cope with memory changes and (e) information about dementia. Six participants with early stage dementia completed the intervention, which was co-delivered by lay and clinical professional tutors. Participants and tutors attended focus groups and interviews at the end of the program to explore their perceptions of the intervention. These were audio-recorded, transcribed verbatim and analysed thematically. Participants reported enjoyment and benefits from the intervention, despite some reporting concerns relating to their memory difficulties. The program's flexible nature, focus on strengths and the opportunity to spend time with other people living with dementia were particularly well received. Participants and tutors outlined areas for further improvement. The program was feasible and its flexible delivery appeared to facilitate participant benefit. Emphasis should be placed on maintaining activity and relationships, improving positive well-being and social interaction during the program. Memory of the pleasant experience and the focus on strengths was evidenced, which may impact positively on quality of life. The results highlight the usefulness and acceptability of self-management for people with early stage dementia and provide initial support for the program's structure and content. © The Author(s) 2013.

  3. Investigation of fast initialization of spacecraft bubble memory systems

    NASA Technical Reports Server (NTRS)

    Looney, K. T.; Nichols, C. D.; Hayes, P. J.

    1984-01-01

    Bubble domain technology offers significant improvement in reliability and functionality for spacecraft onboard memory applications. In considering potential memory system organizations, minimization of power in high capacity bubble memory systems necessitates the activation of only the desired portions of the memory. In power strobing arbitrary memory segments, a capability of fast turn-on is required. Bubble device architectures, which provide redundant loop coding in the bubble devices, limit the initialization speed. Alternate initialization techniques are investigated to overcome this design limitation. An initialization technique using a small amount of external storage is demonstrated.

  4. Integrating Cache Performance Modeling and Tuning Support in Parallelization Tools

    NASA Technical Reports Server (NTRS)

    Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)

    1998-01-01

    With the resurgence of distributed shared memory (DSM) systems based on cache-coherent Non Uniform Memory Access (ccNUMA) architectures and the increasing disparity between memory and processor speeds, data locality overheads are becoming the greatest bottlenecks in the way of realizing the potential high performance of these systems. While parallelization tools and compilers facilitate porting sequential applications to a DSM system, a lot of time and effort is needed to tune the memory performance of these applications to achieve reasonable speedup. In this paper, we show that integrating cache performance modeling and tuning support within a parallelization environment can alleviate this problem. The Cache Performance Modeling and Prediction Tool (CPMP) employs trace-driven simulation techniques without the overhead of generating and managing detailed address traces. CPMP predicts the cache performance impact of source code level "what-if" modifications in a program to assist a user in the tuning process. CPMP is built on top of a customized version of the Computer Aided Parallelization Tools (CAPTools) environment. Finally, we demonstrate how CPMP can be applied to tune a real Computational Fluid Dynamics (CFD) application.
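
    The core of any such predictor is a cache model driven by memory references. The toy below is a generic direct-mapped simulator, far simpler than CPMP and with invented parameters, but it shows how a "what-if" hit-rate estimate is computed from a reference stream:

```python
def simulate_cache(addresses, lines=4, line_size=16):
    """Tiny trace-driven, direct-mapped cache model: count hits for a
    stream of byte addresses. Parameters are illustrative, not tuned
    to any real machine."""
    tags = [None] * lines
    hits = 0
    for addr in addresses:
        block = addr // line_size        # which memory block
        idx = block % lines              # which cache line it maps to
        if tags[idx] == block:
            hits += 1
        else:
            tags[idx] = block            # miss: fill the line
    return hits, len(addresses)

# A sequential sweep misses once per block; repeating the sweep
# hits everywhere because the working set fits in the cache.
trace = list(range(0, 64, 4)) * 2
hits, total = simulate_cache(trace)
print(hits, total)
```

    A tool like CPMP avoids materializing such traces for full applications, but the hit/miss accounting it predicts is of this form.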

  5. Assessing and treating non-compliance in brain-injured clients.

    PubMed

    Zencius, A H; Lane, I; Wesolowski, M D

    1991-01-01

    Assessment of non-compliance has been discussed. This included exploration of reinforcement contingencies, age appropriateness, cultural background and social background. Several perspectives on this have been addressed. Memory deficits are also critical when assessing non-compliance, specifically when the TBI person has severe memory deficits. Consequence management and antecedent control techniques have been shown to be highly effective in promoting participation. Additionally, non-compliance should not necessarily be viewed as undesirable; in fact, the client may be communicating preferred and non-preferred interests. It is important to recognize individual talents, interests and preferences. This is a significant point when you consider that TBI survivors had pre-injury lifestyles, i.e. full-time employment, a working social network, and preferred interests and activities.

  6. Characterization of sputtering deposited NiTi shape memory thin films using a temperature controllable atomic force microscope

    NASA Astrophysics Data System (ADS)

    He, Q.; Huang, W. M.; Hong, M. H.; Wu, M. J.; Fu, Y. Q.; Chong, T. C.; Chellet, F.; Du, H. J.

    2004-10-01

    NiTi shape memory thin films are potentially desirable for micro-electro-mechanical system (MEMS) actuators, because they have a much higher work output per volume and also a significantly improved response speed due to a larger surface-to-volume ratio. A new technique using a temperature controllable atomic force microscope (AFM) is presented in order to find the transformation temperatures of NiTi shape memory thin films of micrometer size, since traditional techniques, such as differential scanning calorimetry (DSC) and the curvature method, have difficulty in dealing with samples of such a scale. This technique is based on the surface relief phenomenon in shape memory alloys upon thermal cycling. The reliability of this technique is investigated and compared with the DSC result in terms of the transformation fraction (ξ). It appears that the new technique is nondestructive, in situ and capable of characterizing very small sputtering-deposited NiTi shape memory thin films.

  7. Solutions and debugging for data consistency in multiprocessors with noncoherent caches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernstein, D.; Mendelson, B.; Breternitz, M. Jr.

    1995-02-01

    We analyze two important problems that arise in shared-memory multiprocessor systems. The stale data problem involves ensuring that data items in local memory of individual processors are current, independent of writes done by other processors. False sharing occurs when two processors have copies of the same shared data block but update different portions of the block. The false sharing problem involves guaranteeing that subsequent writes are properly combined. In modern architectures these problems are usually solved in hardware, by exploiting mechanisms for hardware controlled cache consistency. This leads to more expensive and nonscalable designs. Therefore, we are concentrating on software methods for ensuring cache consistency that would allow for affordable and scalable multiprocessing systems. Unfortunately, providing software control is nontrivial, both for the compiler writer and for the application programmer. For this reason we are developing a debugging environment that will facilitate the development of compiler-based techniques and will help the programmer to tune his or her application using explicit cache management mechanisms. We extend the notion of a race condition for IBM Shared Memory System POWER/4, taking into consideration its noncoherent caches, and propose techniques for detection of false sharing problems. Identification of the stale data problem is discussed as well, and solutions are suggested.

  8. Computer memory management system

    DOEpatents

    Kirk, III, Whitson John

    2002-01-01

    A computer memory management system utilizing a memory structure system of "intelligent" pointers in which information related to the use status of the memory structure is designed into the pointer. Through this pointer system, the present invention provides essentially automatic memory management (often referred to as garbage collection) by allowing relationships between objects to have definite memory management behavior, by use of a coding protocol which describes when relationships should be maintained and when they should be broken. In one aspect, the system allows automatic breaking of strong links to facilitate object garbage collection, coupled with relationship adjectives which define deletion of associated objects. In another aspect, the present invention includes simple-to-use infinite undo/redo functionality in that it has the capability, through a simple function call, to undo all of the changes made to a data model since the previous "valid state" was noted.
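
    The distinction the patent draws, between links that keep an object alive and links that break automatically, is the strong/weak reference idea found in many languages. A minimal Python sketch using the standard weakref module (the Node class and link names are invented for illustration, not the patented design):

```python
import weakref

class Node:
    """An object whose relationships carry memory-management behavior:
    strong links keep the target alive, weak links do not."""
    def __init__(self, name):
        self.name = name
        self.strong = []          # ownership: prevents collection
        self._weak = []           # association: broken automatically

    def link_weak(self, other):
        self._weak.append(weakref.ref(other))

    def weak_targets(self):
        # Dereference weak links, dropping any already-collected targets.
        return [r() for r in self._weak if r() is not None]

owner = Node("owner")
doc = Node("document")
owner.strong.append(doc)          # strong relationship: doc stays alive
viewer = Node("viewer")
viewer.link_weak(doc)             # weak relationship only

print([n.name for n in viewer.weak_targets()])  # doc still reachable
owner.strong.clear()              # break the strong link...
del doc                           # ...and the last direct reference
# Collection is immediate here under CPython's reference counting.
print([n.name for n in viewer.weak_targets()])
```

    Once the strong link is broken, the weak link dissolves on its own, which is the "definite memory management behavior" the relationship encodes.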

  9. Efficient parallelization for AMR MHD multiphysics calculations; implementation in AstroBEAR

    NASA Astrophysics Data System (ADS)

    Carroll-Nellenback, Jonathan J.; Shroyer, Brandon; Frank, Adam; Ding, Chen

    2013-03-01

    Current adaptive mesh refinement (AMR) simulations require algorithms that are highly parallelized and manage memory efficiently. As compute engines grow larger, AMR simulations will require algorithms that achieve new levels of efficient parallelization and memory management. We have attempted to employ new techniques to achieve both of these goals. Patch or grid based AMR often employs ghost cells to decouple the hyperbolic advances of each grid on a given refinement level. This decoupling allows each grid to be advanced independently. In AstroBEAR we utilize this independence by threading the grid advances on each level with preference going to the finer level grids. This allows for global load balancing instead of level by level load balancing and allows for greater parallelization across both physical space and AMR level. Threading of level advances can also improve performance by interleaving communication with computation, especially in deep simulations with many levels of refinement. While we see improvements of up to 30% on deep simulations run on a few cores, the speedup is typically more modest (5-20%) for larger scale simulations. To improve memory management we have employed a distributed tree algorithm that requires processors to only store and communicate local sections of the AMR tree structure with neighboring processors. Using this distributed approach we are able to get reasonable scaling efficiency (>80%) out to 12288 cores and up to 8 levels of AMR - independent of the use of threading.

  10. Method and apparatus for managing access to a memory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeBenedictis, Erik

    A method and apparatus for managing access to a memory of a computing system. A controller transforms a plurality of operations that represent a computing job into an operational memory layout that reduces a size of a selected portion of the memory that needs to be accessed to perform the computing job. The controller stores the operational memory layout in a plurality of memory cells within the selected portion of the memory. The controller controls a sequence by which a processor in the computing system accesses the memory to perform the computing job using the operational memory layout. The operational memory layout reduces an amount of energy consumed by the processor to perform the computing job.

  11. Catching the engram: strategies to examine the memory trace.

    PubMed

    Sakaguchi, Masanori; Hayashi, Yasunori

    2012-09-21

    Memories are stored within neuronal ensembles in the brain. Modern genetic techniques can be used to not only visualize specific neuronal ensembles that encode memories (e.g., fear, craving) but also to selectively manipulate those neurons. These techniques are now being expanded for the study of various types of memory. In this review, we will summarize the genetic methods used to visualize and manipulate neurons involved in the representation of memory engrams. The methods will help clarify how memory is encoded, stored and processed in the brain. Furthermore, these approaches may contribute to our understanding of the pathological mechanisms associated with human memory disorders and, ultimately, may aid the development of therapeutic strategies to ameliorate these diseases.

  12. Application of source biasing technique for energy efficient DECODER circuit design: memory array application

    NASA Astrophysics Data System (ADS)

    Gupta, Neha; Parihar, Priyanka; Neema, Vaibhav

    2018-04-01

    Researchers have proposed many circuit techniques to reduce leakage power dissipation in memory cells. To reduce the overall power of the memory system, however, the input circuitry of the memory architecture, i.e. the row and column decoders, must also be addressed. In this research work, a low-leakage, high-speed row and column decoder for memory array application is designed and four new techniques are proposed. Cluster DECODER, body bias DECODER, source bias DECODER, and source coupling DECODER designs are compared and analyzed for memory array application. Simulation is performed for the comparative analysis of different DECODER design parameters at 180 nm GPDK technology using the CADENCE tool. Simulation results show that the proposed source bias DECODER circuit technique decreases the leakage current by 99.92% and static energy by 99.92% at a supply voltage of 1.2 V. The proposed circuit also improves dynamic power dissipation by 5.69%, dynamic PDP/EDP by 65.03% and delay by 57.25% at 1.2 V supply voltage.

  13. Static Memory Deduplication for Performance Optimization in Cloud Computing.

    PubMed

    Jia, Gangyong; Han, Guangjie; Wang, Hao; Yang, Xuan

    2017-04-27

    In a cloud computing environment, the number of virtual machines (VMs) on a single physical server and the number of applications running on each VM are continuously growing. This has led to an enormous increase in the demand for memory capacity and a subsequent increase in energy consumption in the cloud. Lack of enough memory has become a major bottleneck for scalability and performance of virtualization interfaces in cloud computing. To address this problem, memory deduplication techniques, which reduce memory demand through page sharing, are being adopted. However, such techniques suffer from overheads in terms of the number of online comparisons required for the memory deduplication. In this paper, we propose a static memory deduplication (SMD) technique which can reduce memory capacity requirements and provide performance optimization in cloud computing. The main innovation of SMD is that the process of page detection is performed offline, thus potentially reducing the performance cost, especially in terms of response time. In SMD, page comparisons are restricted to the code segment, which has the highest shared content. Our experimental results show that SMD efficiently reduces memory capacity requirements and improves performance. We demonstrate that, compared to other approaches, the cost in terms of response time is negligible.
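
    The page-sharing idea at the core of deduplication can be sketched with content hashing: identical pages collapse to one stored copy while every logical page keeps an entry in a sharing map. This offline scan is a generic illustration of the approach, not the SMD implementation:

```python
import hashlib

PAGE_SIZE = 4096

def deduplicate(pages):
    """Offline page deduplication: hash each page's content and keep a
    single stored copy per distinct content."""
    store = {}          # content hash -> the single stored page
    table = []          # logical page index -> content hash (sharing map)
    for page in pages:
        h = hashlib.sha256(page).hexdigest()
        store.setdefault(h, page)   # store only the first copy
        table.append(h)
    return store, table

code_page = b"\x90" * PAGE_SIZE     # e.g. identical code-segment pages
data_page = b"\x00" * PAGE_SIZE
store, table = deduplicate([code_page, code_page, data_page, code_page])
print(len(store))    # distinct contents actually stored
print(len(table))    # logical pages still addressable
```

    Restricting the scan to code segments, as SMD does, keeps this comparison cheap precisely because code pages are the most likely to collide.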

  14. Static Memory Deduplication for Performance Optimization in Cloud Computing

    PubMed Central

    Jia, Gangyong; Han, Guangjie; Wang, Hao; Yang, Xuan

    2017-01-01

    In a cloud computing environment, the number of virtual machines (VMs) on a single physical server and the number of applications running on each VM are continuously growing. This has led to an enormous increase in the demand for memory capacity and a subsequent increase in energy consumption in the cloud. Lack of enough memory has become a major bottleneck for scalability and performance of virtualization interfaces in cloud computing. To address this problem, memory deduplication techniques, which reduce memory demand through page sharing, are being adopted. However, such techniques suffer from overheads in terms of the number of online comparisons required for the memory deduplication. In this paper, we propose a static memory deduplication (SMD) technique which can reduce memory capacity requirements and provide performance optimization in cloud computing. The main innovation of SMD is that the process of page detection is performed offline, thus potentially reducing the performance cost, especially in terms of response time. In SMD, page comparisons are restricted to the code segment, which has the highest shared content. Our experimental results show that SMD efficiently reduces memory capacity requirements and improves performance. We demonstrate that, compared to other approaches, the cost in terms of response time is negligible. PMID:28448434

  15. A Randomized Controlled Trial of the Group-Based Modified Story Memory Technique in TBI

    DTIC Science & Technology

    2017-10-01

    Award number W81XWH-16-1-0726; title: A Randomized Controlled Trial of the Group-Based Modified Story Memory Technique in TBI. The current study addresses this need through a double-blind, placebo-controlled, randomized clinical trial (RCT) of the group-based intervention.

  16. k(+)-buffer: An Efficient, Memory-Friendly and Dynamic k-buffer Framework.

    PubMed

    Vasilakis, Andreas-Alexandros; Papaioannou, Georgios; Fudos, Ioannis

    2015-06-01

    Depth-sorted fragment determination is fundamental for a host of image-based techniques which simulate complex rendering effects. It is also a challenging task in terms of the time and space required when rasterizing scenes with high depth complexity. When low graphics memory requirements are of utmost importance, k-buffer can objectively be considered the most preferred framework, which advantageously ensures the correct depth order on a subset of all generated fragments. Although various alternatives have been introduced to partially or completely alleviate the noticeable quality artifacts produced by the initial k-buffer algorithm at the expense of memory increase or performance downgrade, appropriate tools to automatically and dynamically compute the most suitable value of k are still missing. To this end, we introduce k(+)-buffer, a fast framework that accurately simulates the behavior of k-buffer in a single rendering pass. Two memory-bounded data structures, (i) the max-array and (ii) the max-heap, are developed on the GPU to concurrently maintain the k-foremost fragments per pixel by exploring pixel synchronization and fragment culling. Memory-friendly strategies are further introduced to dynamically (a) lessen the wasteful memory allocation of individual pixels with low depth complexity frequencies, (b) minimize the allocated size of k-buffer according to different application goals and hardware limitations via a straightforward depth histogram analysis and (c) manage local GPU cache with a fixed-memory depth-sorting mechanism. Finally, an extensive experimental evaluation is provided demonstrating the advantages of our work over all prior k-buffer variants in terms of memory usage, performance cost and image quality.
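
    The bounded max-heap at the heart of the framework can be sketched for a single pixel. The GPU versions use fixed-size arrays, pixel synchronization and fragment culling; this CPU-side sketch only shows the k-foremost selection logic, with invented depth values:

```python
import heapq

def insert_fragment(pixel_heap, depth, k):
    """Maintain the k foremost (smallest-depth) fragments of one pixel
    with a bounded max-heap. Depths are stored negated so that heapq's
    min-heap behaves as a max-heap whose root is the farthest kept
    fragment."""
    if len(pixel_heap) < k:
        heapq.heappush(pixel_heap, -depth)
    elif depth < -pixel_heap[0]:            # closer than the farthest kept
        heapq.heapreplace(pixel_heap, -depth)
    # else: fragment is farther than all k kept ones and is culled

k = 3
pixel = []
for depth in [0.9, 0.2, 0.5, 0.7, 0.1, 0.4]:
    insert_fragment(pixel, depth, k)

foremost = sorted(-d for d in pixel)        # front-to-back order
print(foremost)
```

    Each incoming fragment costs O(log k) at most, and fragments behind the current k-th depth are rejected immediately, which is what makes the single-pass formulation practical.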

  17. From network heterogeneities to familiarity detection and hippocampal memory management

    PubMed Central

    Wang, Jane X.; Poe, Gina; Zochowski, Michal

    2009-01-01

    Hippocampal-neocortical interactions are key to the rapid formation of novel associative memories in the hippocampus and consolidation to long term storage sites in the neocortex. We investigated the role of network correlates during information processing in hippocampal-cortical networks. We found that changes in the intrinsic network dynamics due to the formation of structural network heterogeneities alone act as a dynamical and regulatory mechanism for stimulus novelty and familiarity detection, thereby controlling memory management in the context of memory consolidation. This network dynamic, coupled with an anatomically established feedback between the hippocampus and the neocortex, recovered heretofore unexplained properties of neural activity patterns during memory management tasks which we observed during sleep in multiunit recordings from behaving animals. Our simple dynamical mechanism shows an experimentally matched progressive shift of memory activation from the hippocampus to the neocortex and thus provides the means to achieve an autonomous off-line progression of memory consolidation. PMID:18999453

  18. Conquest of darkness by management of the stars

    NASA Astrophysics Data System (ADS)

    Wiseman, Robert S.

    This text was presented as the Thomas B. Dowd Memorial Lecture at the 1991 national Infrared Information Symposium (IRIS). The history of Army Night Vision from World War II to 1972 shows how the right organization, with talented people and proper support, can succeed. This presentation illustrates not only the growth of image-intensifier technology and families of equipment, but also the key events and stars that made it all happen. Described are the management techniques used and how to organize for effective research, development, engineering, and production programs; the evolution of the Far Infrared Common Module program; and how the Night Vision Laboratory was unique.

  19. Catching the engram: strategies to examine the memory trace

    PubMed Central

    2012-01-01

    Memories are stored within neuronal ensembles in the brain. Modern genetic techniques can be used to not only visualize specific neuronal ensembles that encode memories (e.g., fear, craving) but also to selectively manipulate those neurons. These techniques are now being expanded for the study of various types of memory. In this review, we will summarize the genetic methods used to visualize and manipulate neurons involved in the representation of memory engrams. The methods will help clarify how memory is encoded, stored and processed in the brain. Furthermore, these approaches may contribute to our understanding of the pathological mechanisms associated with human memory disorders and, ultimately, may aid the development of therapeutic strategies to ameliorate these diseases. PMID:22999350

  20. Using Instructional and Motivational Techniques in the Art Classroom To Increase Memory Retention.

    ERIC Educational Resources Information Center

    Calverley, Ann; Grafer, Bonnie; Hauser, Michelle

    This report describes a program for improving memory retention through instructional and motivational techniques in elementary art. The targeted population consisted of third-grade students at three sites in a middle-class suburb of a large midwestern city. The problems of memory retention were documented through teacher pre-surveys and art memory…

  1. Explaining the Development of False Memories.

    ERIC Educational Resources Information Center

    Reyna, Valerie F.; Holliday, Robyn; Marche, Tammy

    2002-01-01

    Reviews explanatory dimensions of children's false memory relevant to forensic practice: measurement, development, social factors, individual differences, varieties of memories and memory judgments, and varieties of procedures inducing false memories. Asserts that recent studies fail to use techniques that separate acquiescence from memory…

  2. Trichotomous processes in early memory development, aging, and neurocognitive impairment: a unified theory.

    PubMed

    Brainerd, C J; Reyna, V F; Howe, M L

    2009-10-01

    One of the most extensively investigated topics in the adult memory literature, dual memory processes, has had virtually no impact on the study of early memory development. The authors remove the key obstacles to such research by formulating a trichotomous theory of recall that combines the traditional dual processes of recollection and familiarity with a reconstruction process. The theory is then embedded in a hidden Markov model that measures all 3 processes with low-burden tasks that are appropriate for even young children. These techniques are applied to a large corpus of developmental studies of recall, yielding stable findings about the emergence of dual memory processes between childhood and young adulthood and generating tests of many theoretical predictions. The techniques are extended to the study of healthy aging and to the memory sequelae of common forms of neurocognitive impairment, resulting in a theoretical framework that is unified over 4 major domains of memory research: early development, mainstream adult research, aging, and neurocognitive impairment. The techniques are also extended to recognition, creating a unified dual process framework for recall and recognition.

  3. A Case Study on Neural Inspired Dynamic Memory Management Strategies for High Performance Computing.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vineyard, Craig Michael; Verzi, Stephen Joseph

    As high performance computing architectures pursue more computational power, there is a need for increased memory capacity and bandwidth as well. A multi-level memory (MLM) architecture addresses this need by combining multiple memory types with different characteristics as varying levels of the same architecture. How to efficiently utilize this memory infrastructure is an open challenge, and in this research we sought to investigate whether neural inspired approaches can meaningfully help with memory management. In particular we explored neurogenesis inspired resource allocation, and were able to show a neural inspired mixed controller policy can beneficially impact how MLM architectures utilize memory.

  4. Diminishing-cues retrieval practice: A memory-enhancing technique that works when regular testing doesn't.

    PubMed

    Fiechter, Joshua L; Benjamin, Aaron S

    2017-08-28

    Retrieval practice has been shown to be a highly effective tool for enhancing memory, a fact that has led to major changes to educational practice and technology. However, when initial learning is poor, initial retrieval practice is unlikely to be successful and long-term benefits of retrieval practice are compromised or nonexistent. Here, we investigate the benefit of a scaffolded retrieval technique called diminishing-cues retrieval practice (Finley, Benjamin, Hays, Bjork, & Kornell, Journal of Memory and Language, 64, 289-298, 2011). Under learning conditions that favored a strong testing effect, diminishing cues and standard retrieval practice both enhanced memory performance relative to restudy. Critically, under learning conditions where standard retrieval practice was not helpful, diminishing cues enhanced memory performance substantially. These experiments demonstrate that diminishing-cues retrieval practice can widen the range of conditions under which testing can benefit memory, and so can serve as a model for the broader application of testing-based techniques for enhancing learning.
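    The diminishing-cues idea of Finley et al. can be sketched as a cue schedule that reveals progressively fewer letters across practice trials (a hypothetical illustration; the cited studies' exact materials and schedules differ):

```python
def diminishing_cues(word, trials):
    """Return one cue per practice trial, revealing fewer letters each time.

    Trial 0 shows the whole word (a restudy), the last trial shows none
    of it (a full recall test), and intermediate trials blank out a
    growing suffix. This schedule is invented for illustration.
    """
    cues = []
    for t in range(trials):
        keep = round(len(word) * (1 - t / (trials - 1)))   # letters visible
        cues.append(word[:keep] + '_' * (len(word) - keep))
    return cues

print(diminishing_cues("memory", 4))   # → ['memory', 'memo__', 'me____', '______']
```

    The scaffolding is what lets retrieval succeed even when initial learning is poor: early trials are almost restudy, later trials are almost a test.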

  5. On Russian concepts of Soil Memory - expansion of Dokuchaev's pedological paradigm

    NASA Astrophysics Data System (ADS)

    Tsatskin, A.

    2012-04-01

    Having developed from Dokuchaev's research on chernozem soils on loess, the Russian school of pedology traditionally focused on soils as an essential component of landscape. Dokuchaev's soil-landscape paradigm (SLP) was later considerably advanced and expanded by Hans Jenny to include surface soils on other continents. In the 1970s, Sokolov and Targulian in Russia introduced the new term of soil memory: an inherent ability of soils to memorize, in their morphology and properties, the processes of earlier stages of development. This understanding was built upon ideas of soil organizational hierarchy and the different rates of specific soil processes proposed by Yaalon. Soil-memory terminology became particularly popular in Russia, as expressed in the 2008 multi-author monograph on soil memory. The Soil Memory book, edited by Targulian and Goryachkin and written by 34 authors, covers the following themes: general approaches (Section 1), mineral carriers of soil memory (Section 2), biological carriers of soil memory (Section 3) and anthropogenic soil memory (Section 4). The book presents an original account of new interdisciplinary projects on Russian soils and represents an important contribution to the classical Dokuchaev-Jenny SL paradigm. There is still controversy as to how the Russian term soil memory relates to the western terms of soil as a record or archive of earlier events and processes during the time of soil formation. Targulian and Goryachkin agree that the terms are close, albeit not entirely interchangeable. They insist that soil memory may have a more comprehensive meaning, e.g. applicable to complex cases in which soil properties of currently ambiguous origin cannot provide valid environmental reconstructions or be dated by available techniques. In any case, terminology is not the main issue.
The Russian soil-memory concept advances the frontiers of pedology by deepening the time-related soil functions and encouraging closer cooperation with isotope-dating experts. This approach will hopefully help us all toward better understanding, management and protection of the Earth's critical zone.

  6. Memory efficient solution of the primitive equations for numerical weather prediction on the CYBER 205

    NASA Technical Reports Server (NTRS)

    Tuccillo, J. J.

    1984-01-01

    Numerical Weather Prediction (NWP), for both operational and research purposes, requires not only fast computational speed but also large memory. A technique for solving the Primitive Equations for atmospheric motion on the CYBER 205, as implemented in the Mesoscale Atmospheric Simulation System, is discussed; it is fully vectorized and requires substantially less memory than other techniques such as the Leapfrog or Adams-Bashforth schemes. The technique presented uses the Euler-backward time-marching scheme. Also discussed are several techniques for reducing the computational time of the model by replacing slow intrinsic routines with faster algorithms that use only hardware vector instructions.
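    The memory argument can be seen on the scalar test equation du/dt = -λu: leapfrog must retain two previous time levels, whereas the Euler-backward update overwrites a single level in place, and for a full atmospheric grid that one-level saving scales with the number of grid points (a minimal sketch, not the Mesoscale Atmospheric Simulation System's implementation):

```python
# Scalar comparison on du/dt = -lam*u: leapfrog keeps the two previous
# time levels (u_nm1, u_n), while the Euler-backward (implicit) update
# overwrites a single level in place.
lam, dt, steps = 1.0, 0.01, 100

# Leapfrog: u_{n+1} = u_{n-1} - 2*dt*lam*u_n   (two levels stored)
u_nm1, u_n = 1.0, 1.0 - lam * dt
for _ in range(steps):
    u_nm1, u_n = u_n, u_nm1 - 2 * dt * lam * u_n

# Euler-backward: u_{n+1} = u_n / (1 + dt*lam)  (one level stored; the
# implicit step has this closed form only for the linear test equation)
u = 1.0
for _ in range(steps):
    u = u / (1 + dt * lam)

print(round(u, 4))   # → 0.3697 (close to exp(-1) ≈ 0.3679 at t = 1)
```

    For a 3-D grid each retained "level" is a full field, so dropping one level halves the state storage of the time-marching loop.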

  7. Perspectives in astrophysical databases

    NASA Astrophysics Data System (ADS)

    Frailis, Marco; de Angelis, Alessandro; Roberto, Vito

    2004-07-01

    Astrophysics has become a domain extremely rich in scientific data. Data mining tools are needed for information extraction from such large data sets. This calls for an approach to data management emphasizing the efficiency and simplicity of data access; efficiency is obtained using multidimensional access methods, and simplicity is achieved by properly handling metadata. Moreover, clustering and classification techniques on large data sets pose additional requirements in terms of computation and memory scalability and interpretability of results. In this study we review some possible solutions.

  8. Six Rehearsal Techniques for the Public Speaker: Improving Memory, Increasing Delivery Skills and Reducing Speech Stress.

    ERIC Educational Resources Information Center

    Crane, Loren D.

    This paper describes six specific techniques that speech communication students may use in rehearsals to improve memory, to increase delivery skills, and to reduce speech stress. The techniques are idea association, covert modeling, desensitization, language elaboration, overt modeling, and self-regulation. Recent research is reviewed that…

  9. Non-volatile main memory management methods based on a file system.

    PubMed

    Oikawa, Shuichi

    2014-01-01

    There are upcoming non-volatile (NV) memory technologies that provide byte addressability and high performance; PCM, MRAM, and STT-RAM are examples. Such NV memory can be used as storage because of its data persistency without power supply, while it can be used as main memory because of performance that matches DRAM. A number of studies have investigated its use for main memory and for storage, but they were conducted independently. This paper presents methods that enable the integration of main memory and file system management for NV memory. Such integration allows NV memory to be utilized simultaneously as both main memory and storage. The presented methods use a file system as their basis for NV memory management. We implemented the proposed methods in the Linux kernel and performed the evaluation on the QEMU system emulator. The evaluation results show that 1) the proposed methods can perform comparably to the existing DRAM memory allocator and significantly better than page swapping, 2) their performance is affected by the internal data structures of a file system, and 3) data structures appropriate for traditional hard disk drives do not always work effectively for byte-addressable NV memory. We also evaluated the effects of the longer access latency of NV memory by cycle-accurate full-system simulation. The results show that the effect on page allocation cost is limited if the increase in latency is moderate.
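    A user-space analogy of the integration described above can be sketched with a memory-mapped file: bytes stored through the mapping are byte-addressable like RAM yet persist in the file system (the paper's methods live inside the Linux kernel; this is only a hypothetical illustration):

```python
import mmap, os, tempfile

# File-backed "main memory": bytes stored through the mapping are byte-
# addressable like RAM yet live in the file system.
path = os.path.join(tempfile.mkdtemp(), "nvram.bin")
with open(path, "wb") as f:
    f.truncate(4096)                      # back one page of "memory"

with open(path, "r+b") as f:
    mem = mmap.mmap(f.fileno(), 4096)     # map the file into the address space
    mem[0:5] = b"hello"                   # byte-addressable store
    mem.flush()                           # persist, as NV memory would
    mem.close()

with open(path, "rb") as f:               # the data survives the mapping
    print(f.read(5))                      # → b'hello'
```

    The same bytes are reachable both as "memory" (through the mapping) and as "storage" (through the file), which is the dual use the paper's kernel-level methods provide for real NV memory.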

  10. Forensic Analysis of Window’s(Registered) Virtual Memory Incorporating the System’s Page-File

    DTIC Science & Technology

    2008-12-01

    …data in a meaningful way. One reason for this is how memory is managed by the operating system. Data belonging to one process can be distributed arbitrarily across…

  11. Sharpen Kids' Memory to Raise Test Scores

    ERIC Educational Resources Information Center

    Willis, Judy

    2005-01-01

    By understanding the different types of memory, the neurophysiology of the brain's chemical and anatomical changes associated with memory, and the ways to enhance the memory process, teachers can utilize proven techniques--and develop their own--to guide students over that bleak terrain of memorization. From simplest recall of awareness, memory skills…

  12. Dementia

    MedlinePlus

    ... living. Functions affected include memory, language skills, visual perception, problem solving, self-management, and the ability to ...

  13. Design of a memory-access controller with 3.71-times-enhanced energy efficiency for Internet-of-Things-oriented nonvolatile microcontroller unit

    NASA Astrophysics Data System (ADS)

    Natsui, Masanori; Hanyu, Takahiro

    2018-04-01

    In realizing a nonvolatile microcontroller unit (MCU) for sensor nodes in Internet-of-Things (IoT) applications, it is important to solve the data-transfer bottleneck between the central processing unit (CPU) and the nonvolatile memory constituting the MCU. As one circuit-oriented approach to this problem, we propose a memory-access minimization technique for magnetoresistive-random-access-memory (MRAM)-embedded nonvolatile MCUs. In addition to multiplexing and prefetching of memory access, the proposed technique realizes efficient instruction fetch by eliminating redundant memory access while considering the code length of the instruction to be fetched and the transition of the memory address to be accessed. As a result, the performance of the MCU can be improved while relaxing the performance requirement for the embedded MRAM, and a more compact, lower-power implementation is possible compared with a conventional cache-based design. Through evaluation using a system consisting of a general-purpose 32-bit CPU and embedded MRAM, it is demonstrated that the proposed technique increases the peak efficiency of the system by up to 3.71 times, while a 2.29-fold area reduction is achieved compared with the cache-based design.
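    The redundant-access elimination can be sketched in software as a one-line fetch buffer: sequential instruction fetches that fall within the buffered line skip the MRAM read (a hypothetical model; the proposed controller performs this in hardware, together with multiplexing and prefetching):

```python
class LineBufferFetch:
    """Instruction fetch through a one-line buffer (hypothetical sketch).

    Fetches that fall in the currently buffered memory line are served
    without another MRAM access, which is the kind of redundant-access
    elimination the proposed controller performs in hardware.
    """
    def __init__(self, memory, line_words=4):
        self.memory, self.line_words = memory, line_words
        self.line_tag, self.accesses = None, 0

    def fetch(self, addr):
        tag = addr // self.line_words
        if tag != self.line_tag:          # line miss: one wide MRAM read
            self.line_tag = tag
            self.accesses += 1
        return self.memory[addr]

mem = list(range(32))                     # toy instruction memory
fu = LineBufferFetch(mem)
for pc in range(8):                       # straight-line code, 8 fetches
    fu.fetch(pc)
print(fu.accesses)                        # → 2 MRAM reads instead of 8
```

    Fewer MRAM reads directly relaxes the latency and energy demanded of the embedded nonvolatile array, which is where the reported efficiency gain comes from.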

  14. High-speed reference-beam-angle control technique for holographic memory drive

    NASA Astrophysics Data System (ADS)

    Yamada, Ken-ichiro; Ogata, Takeshi; Hosaka, Makoto; Fujita, Koji; Okuyama, Atsushi

    2016-09-01

    We developed a holographic memory drive for next-generation optical memory. In this study, we present the key technology for achieving a high-speed transfer rate for reproduction, that is, a high-speed control technique for the reference beam angle. In reproduction in a holographic memory drive, there is the issue that the optimum reference beam angle during reproduction varies owing to distortion of the medium. The distortion is caused by, for example, temperature variation, beam irradiation, and moisture absorption. Therefore, a reference-beam-angle control technique to position the reference beam at the optimum angle is crucial. We developed a new optical system that generates an angle-error-signal to detect the optimum reference beam angle. To achieve the high-speed control technique using the new optical system, we developed a new control technique called adaptive final-state control (AFSC) that adds a second control input to the first one derived from conventional final-state control (FSC) at the time of angle-error-signal detection. We established an actual experimental system employing AFSC to achieve moving control between each page (Page Seek) within 300 µs. In sequential multiple Page Seeks, we were able to realize positioning to the optimum angles of the reference beam that maximize the diffracted beam intensity. We expect that applying the new control technique to the holographic memory drive will enable a giga-bit/s-class transfer rate.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mittal, Sparsh; Zhang, Zhao

    With each CMOS technology generation, leakage energy consumption has increased dramatically, and hence managing the leakage power of large last-level caches (LLCs) has become a critical issue in modern processor design. In this paper, we present EnCache, a novel software-based technique which uses dynamic profiling-based cache reconfiguration for saving cache leakage energy. EnCache uses a simple hardware component called a profiling cache, which dynamically predicts the energy efficiency of an application for 32 possible cache configurations. Using these estimates, system software reconfigures the cache to the most energy-efficient configuration. EnCache uses dynamic cache reconfiguration and hence does not require offline profiling or per-application parameter tuning. Furthermore, EnCache optimizes directly for overall memory-subsystem (LLC and main memory) energy efficiency instead of LLC energy efficiency alone. Experiments performed with an x86-64 simulator and workloads from the SPEC2006 suite confirm that EnCache provides larger energy savings than a conventional energy saving scheme. For single-core and dual-core system configurations, the average savings in memory subsystem energy over a shared baseline configuration are 30.0% and 27.3%, respectively.
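    Once the profiling cache has produced per-configuration energy estimates, the reconfiguration decision itself reduces to an argmin; a minimal sketch with hypothetical numbers (the configuration encoding and the estimates below are invented for illustration):

```python
def pick_cache_config(predicted_energy):
    """Return the cache configuration with the lowest predicted energy.

    `predicted_energy` maps a (ways, sets) tuple to the profiling cache's
    energy estimate for the running application; the reconfiguration step
    is then a simple argmin over the candidate configurations.
    """
    return min(predicted_energy, key=predicted_energy.get)

# Hypothetical estimates (joules) for four of the 32 configurations.
estimates = {(8, 4096): 4.1, (4, 4096): 3.2, (8, 2048): 3.6, (2, 1024): 5.0}
print(pick_cache_config(estimates))   # → (4, 4096)
```

    Because the estimates are refreshed at run time, the argmin tracks phase changes in the application without any offline profiling, which is the point of the dynamic scheme.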

  16. Rehearsal Training and Developmental Differences in Memory

    ERIC Educational Resources Information Center

    Ornstein, Peter A.; And Others

    1977-01-01

    This experiment investigated age differences in memory performance and the extent to which rehearsal techniques contribute to these differences. Second and sixth grade children were trained in a variety of rehearsal techniques in an overt-rehearsal free recall task. (Author/SB)

  17. Bubble memory module for spacecraft application

    NASA Technical Reports Server (NTRS)

    Hayes, P. J.; Looney, K. T.; Nichols, C. D.

    1985-01-01

    Bubble domain technology offers an all-solid-state alternative for data storage in onboard data systems. A versatile modular bubble memory concept was developed. The key module is the bubble memory module which contains all of the storage devices and circuitry for accessing these devices. This report documents the bubble memory module design and preliminary hardware designs aimed at memory module functional demonstration with available commercial bubble devices. The system architecture provides simultaneous operation of bubble devices to attain high data rates. Banks of bubble devices are accessed by a given bubble controller to minimize controller parts. A power strobing technique is discussed which could minimize the average system power dissipation. A fast initialization method using EEPROM (electrically erasable, programmable read-only memory) devices promotes fast access. Noise and crosstalk problems and implementations to minimize these are discussed. Flight memory systems which incorporate the concepts and techniques of this work could now be developed for applications.

  18. Wide-Range Motion Estimation Architecture with Dual Search Windows for High Resolution Video Coding

    NASA Astrophysics Data System (ADS)

    Dung, Lan-Rong; Lin, Meng-Chun

    This paper presents a memory-efficient motion estimation (ME) technique for high-resolution video compression. The main objective is to reduce external memory access, especially with limited local memory resources; reducing memory access in turn saves notorious power consumption. The key to reducing memory accesses is a center-biased algorithm that performs the motion-vector (MV) search with minimal search data. To preserve data reusability, the proposed dual-search-windowing (DSW) approach loads the secondary search window only when the search requires it. By doing so, the loading of search windows is alleviated, reducing the required external memory bandwidth. The proposed techniques can save up to 81% of external memory bandwidth and require only 135 MBytes/sec, while the quality degradation is less than 0.2 dB for 720p HDTV clips coded at 8 Mbits/sec.
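    The center-biased principle can be sketched with a small-diamond search that stops when the center candidate wins, so slow-moving blocks touch few search-window positions (a generic sketch on a smooth synthetic frame where the SAD surface is well behaved; it is not the paper's exact DSW algorithm, and random textures could trap such a search in a local minimum):

```python
import numpy as np

# A smooth synthetic reference frame so the SAD surface is unimodal
# along the search path (an assumption made for this illustration).
ref = np.add.outer(3 * np.arange(32), np.arange(32))
cur = np.roll(ref, shift=(2, 1), axis=(0, 1))   # content moved by (2, 1)
block = cur[8:16, 8:16]                         # matching offset is (-2, -1)

def sad(dy, dx):
    """Sum of absolute differences for candidate offset (dy, dx)."""
    return np.abs(block - ref[8 + dy:16 + dy, 8 + dx:16 + dx]).sum()

mv, probes = (0, 0), 0
while True:
    # Probe the center and its four diamond neighbors.
    cands = [mv] + [(mv[0] + dy, mv[1] + dx)
                    for dy, dx in [(-1, 0), (1, 0), (0, -1), (0, 1)]]
    probes += len(cands)
    best = min(cands, key=lambda c: sad(*c))
    if best == mv:                              # center wins: converged
        break
    mv = best

print(mv, probes)   # → (-2, -1) 20
```

    Twenty SAD probes against the hundreds a full search would need is the source of the bandwidth saving: most of the search window is simply never fetched.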

  19. Simplified Parallel Domain Traversal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erickson III, David J

    2011-01-01

    Many data-intensive scientific analysis techniques require global domain traversal, which over the years has been a bottleneck for efficient parallelization across distributed-memory architectures. Inspired by MapReduce and other simplified parallel programming approaches, we have designed DStep, a flexible system that greatly simplifies efficient parallelization of domain traversal techniques at scale. In order to deliver both simplicity to users as well as scalability on HPC platforms, we introduce a novel two-tiered communication architecture for managing and exploiting asynchronous communication loads. We also integrate our design with advanced parallel I/O techniques that operate directly on native simulation output. We demonstrate DStep by performing teleconnection analysis across ensemble runs of terascale atmospheric CO2 and climate data, and we show scalability results on up to 65,536 IBM BlueGene/P cores.

  20. Network acceleration techniques

    NASA Technical Reports Server (NTRS)

    Crowley, Patricia (Inventor); Maccabe, Arthur Barney (Inventor); Awrach, James Michael (Inventor)

    2012-01-01

    Splintered offloading techniques with receive batch processing are described for network acceleration. Such techniques offload specific functionality to a NIC while maintaining the bulk of the protocol processing in the host operating system ("OS"). The resulting protocol implementation allows the application to bypass the protocol processing of the received data. This can be accomplished by moving data from the NIC directly to the application through direct memory access ("DMA") and batch processing the receive headers in the host OS when the host OS is interrupted to perform other work. Batch processing receive headers allows the data path to be separated from the control path. Unlike operating system bypass, however, the operating system still fully manages the network resource and has relevant feedback about traffic and flows. Embodiments of the present disclosure can therefore address the challenges of networks with extreme bandwidth delay products (BWDP).

  1. Neuroimaging techniques for memory detection: scientific, ethical, and legal issues.

    PubMed

    Meegan, Daniel V

    2008-01-01

    There is considerable interest in the use of neuroimaging techniques for forensic purposes. Memory detection techniques, including the well-publicized Brain Fingerprinting technique (Brain Fingerprinting Laboratories, Inc., Seattle WA), exploit the fact that the brain responds differently to sensory stimuli to which it has been exposed before. When a stimulus is specifically associated with a crime, the resulting brain activity should differentiate between someone who was present at the crime and someone who was not. This article reviews the scientific literature on three such techniques: priming, old/new, and P300 effects. The forensic potential of these techniques is evaluated based on four criteria: specificity, automaticity, encoding flexibility, and longevity. This article concludes that none of the techniques are devoid of forensic potential, although much research is yet to be done. Ethical issues, including rights to privacy and against self-incrimination, are discussed. A discussion of legal issues concludes that current memory detection techniques do not yet meet United States standards of legal admissibility.

  2. Holographic implementation of a binary associative memory for improved recognition

    NASA Astrophysics Data System (ADS)

    Bandyopadhyay, Somnath; Ghosh, Ajay; Datta, Asit K.

    1998-03-01

    Neural network associative memory has found wide application in pattern recognition. We propose an associative memory model for binary character recognition. The interconnection strengths of the memory are binary valued. The concept of sparse coding is used to enhance the storage efficiency of the model. The imposed preconditioning of pattern vectors, which is inherent in a sparsely coded conventional memory, is eliminated by using a multistep correlation technique, and the ability of correct association is enhanced in real-time application. A potential optoelectronic implementation of the proposed associative memory is also described. Learning and recall are possible by using digital optical matrix-vector multiplication, where full use is made of the parallelism and connectivity of optics. A hologram is used in the experiment as a long-term memory (LTM) for storing all input information. The short-term memory, or interconnection weight matrix, required during the recall process is configured by retrieving the necessary information from the holographic LTM.
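    A Willshaw-style memory with binary interconnection weights and sparsely coded binary patterns, in the spirit of the model described, can be sketched as follows (a generic sketch, not the authors' exact multistep-correlation formulation):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, pairs = 64, 4, 5                      # neurons, active bits, stored pairs

def sparse(n, k, rng):
    """A sparsely coded binary pattern with exactly k active bits."""
    v = np.zeros(n, dtype=np.uint8)
    v[rng.choice(n, k, replace=False)] = 1
    return v

keys = [sparse(n, k, rng) for _ in range(pairs)]
vals = [sparse(n, k, rng) for _ in range(pairs)]

# Binary Hebbian storage: the weight matrix is OR-accumulated (clipped),
# so every interconnection strength stays binary valued.
W = np.zeros((n, n), dtype=np.uint8)
for x, y in zip(keys, vals):
    W |= np.outer(y, x)

# Recall: threshold the dendritic sums at the number of active input bits.
recalled = (W @ keys[0] >= k).astype(np.uint8)
print(bool(np.all(recalled >= vals[0])))    # → True: stored bits always recovered
```

    Recall is one matrix-vector product plus a threshold, which is exactly the operation the optical matrix-vector multiplier performs in parallel; spurious extra bits from crosstalk are possible when too many pairs are stored, which sparse coding keeps rare.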

  3. The Fritz Roethlisberger Memorial Award Goes to "Using Leadered Groups in Organizational Behavior and Management Survey Courses"

    ERIC Educational Resources Information Center

    Amoroso, Lisa M.; Loyd, Denise Lewin; Hoobler, Jenny M.

    2012-01-01

    The Fritz J. Roethlisberger Memorial Award for the best article in the 2011 "Journal of Management Education" goes to Rae Andre for her article, Using Leadered Groups in Organizational Behavior and Management Survey Courses ("Journal of Management Education," Volume 35, Number 5, pp. 596-619). In keeping with Roethlisberger's legacy, this year's…

  4. Usefulness of a single item in a mail survey to identify persons with possible dementia: a new strategy for finding high-risk elders.

    PubMed

    Brody, Kathleen K; Maslow, Katie; Perrin, Nancy A; Crooks, Valerie; DellaPenna, Richard; Kuang, Daniel

    2005-04-01

    The objective of this study was to examine the characteristics of elderly persons who responded positively to a question about "severe memory problems" on a mailed health questionnaire yet were missed by the existing health risk algorithm to identify vulnerable elderly persons. A total of 324,471 respondents aged 65 and older completed a primary care health status questionnaire that gathered clinical information to quickly identify members with functional impairment, multiple chronic diseases, and higher medical care needs. The respondents were part of a large, integrated, not-for-profit managed care organization that implemented a model of care for elders using a uniform risk identification method across eight regions. Respondents with severe memory problems were compared to general respondents by morbidity, geriatric syndromes, functional impairments, service utilization, sensory impairments, sociodemographic characteristics, and activities of daily living. Of the respondents, 13,902 persons (4.3%) reported severe memory problems; the existing health risk algorithm missed 47.1% of these. When severe memory problems were included in the risk algorithm, identification increased from 11% to 13%, and risk prevalence by age groups ranged from 4.4% to 40.5%; one third had severe memory problems, a finding that was fairly consistent within age groups (28.4% to 36.5%). A question about severe memory problems should be incorporated into population risk-identification techniques. While false-negative rates are unknown, the false-positive rate of a self-report mail survey appears to be minimal. Persons reporting severe memory problems clearly have multiple comorbidities, higher prevalence of geriatric syndromes, and greater functional and sensory impairments.

  5. Centrally managed unified shared virtual address space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilkes, John

    Systems, apparatuses, and methods for managing a unified shared virtual address space. A host may execute system software and manage a plurality of nodes coupled to the host. The host may send work tasks to the nodes, and for each node, the host may externally manage the node's view of the system's virtual address space. Each node may have a central processing unit (CPU) style memory management unit (MMU) with an internal translation lookaside buffer (TLB). In one embodiment, the host may be coupled to a given node via an input/output memory management unit (IOMMU) interface, where the IOMMU frontend interface shares the TLB with the given node's MMU. In another embodiment, the host may control the given node's view of virtual address space via memory-mapped control registers.

  6. HTMT-class Latency Tolerant Parallel Architecture for Petaflops Scale Computation

    NASA Technical Reports Server (NTRS)

    Sterling, Thomas; Bergman, Larry

    2000-01-01

    Computational Aero Sciences and other numerically intensive computation disciplines demand computing throughputs substantially greater than the Teraflops-scale systems only now becoming available. The related fields of fluids, structures, thermal, combustion, and dynamic controls are among the interdisciplinary areas that, in combination with sufficient resolution and advanced adaptive techniques, may force performance requirements towards Petaflops. This will be especially true for compute-intensive models such as Navier-Stokes, or when such system models are only part of a larger design-optimization computation involving many design points. Yet recent experience with conventional MPP configurations comprising commodity processing and memory components has shown that larger scale frequently results in higher programming difficulty and lower system efficiency. While important advances in system software and algorithm techniques have had some impact on efficiency and programmability for certain classes of problems, in general it is unlikely that software alone will resolve the challenges to higher scalability. As in the past, future generations of high-end computers may require a combination of hardware architecture and system software advances to enable efficient operation at a Petaflops level. The NASA-led HTMT project has engaged the talents of a broad interdisciplinary team to develop a new strategy in high-end system architecture to deliver petaflops-scale computing in the 2004/5 timeframe. The Hybrid-Technology MultiThreaded parallel computer architecture incorporates several advanced technologies in combination with an innovative dynamic adaptive scheduling mechanism to provide unprecedented performance and efficiency within practical constraints of cost, complexity, and power consumption.
The emerging superconductor Rapid Single Flux Quantum electronics can operate at 100 GHz (the record is 770 GHz) and at one percent of the power required by conventional semiconductor logic. Wave Division Multiplexing optical communications can approach a peak per-fiber bandwidth of 1 Tbps, and the new Data Vortex network topology employing this technology can connect tens of thousands of ports, providing a bi-section bandwidth on the order of a Petabyte per second with latencies well below 100 nanoseconds, even under heavy loads. Processor-in-Memory (PIM) technology combines logic and memory on the same chip, exposing the internal bandwidth of the memory row buffers at low latency. And holographic photorefractive storage technologies provide high-density memory with access a thousand times faster than conventional disk technologies. Together these technologies enable a new class of shared memory system architecture with a peak performance in the range of a Petaflops but size and power requirements comparable to today's largest Teraflops-scale systems. To achieve high sustained performance, HTMT combines an advanced multithreading processor architecture with a memory-driven coarse-grained latency management strategy called "percolation", yielding high efficiency while reducing much of the parallel programming burden. This paper will present the basic system architecture characteristics made possible through this series of advanced technologies and then give a detailed description of the new percolation approach to runtime latency management.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Hsien-Hsin S

    The overall objective of this research project is to develop novel architectural techniques as well as system software to achieve a highly secure and intrusion-tolerant computing system. Such a system will be autonomous, self-adapting, and introspective, with self-healing capability under the circumstances of improper operations, abnormal workloads, and malicious attacks. The scope of this research includes: (1) system-wide, unified introspection techniques for autonomic systems; (2) secure information-flow microarchitecture; (3) memory-centric security architecture; (4) authentication control and its implications for security; (5) digital rights management; and (6) microarchitectural denial-of-service attacks on shared resources. During the period of the project, we developed several architectural techniques and system software for achieving a robust, secure, and reliable computing system toward our goal.

  8. Memory assessment in patients with temporal lobe epilepsy to predict memory impairment after surgery: A systematic review.

    PubMed

    Parra-Díaz, P; García-Casares, N

    2017-04-19

    Given that surgical treatment of refractory mesial temporal lobe epilepsy may cause memory impairment, determining which patients are eligible for surgery is essential. However, there is little agreement on which presurgical memory assessment methods are best able to predict memory outcome after surgery and identify those patients with a greater risk of surgery-induced memory decline. We conducted a systematic literature review to determine which presurgical memory assessment methods best predict memory outcome. The literature search of PubMed gathered articles published between January 2005 and December 2015 addressing pre- and postsurgical memory assessment in mesial temporal lobe epilepsy patients by means of neuropsychological testing, functional MRI, and other neuroimaging techniques. We obtained 178 articles, 31 of which were included in our review. Most of the studies used neuropsychological tests and fMRI; these methods are considered to have the greatest predictive ability for memory impairment. Other less frequently used techniques included the Wada test and FDG-PET. Current evidence supports performing a presurgical assessment of memory function using both neuropsychological tests and functional MRI to predict memory outcome after surgery.

  9. Combat Maintenance Concepts and Repair Techniques Using Shape Memory Alloys for Fluid Lines, Control Tubes, and Drive Shafts.

    DTIC Science & Technology

    1983-03-01

    USAAVRADCOM-TR-82-D-37, Combat Maintenance Concepts and Repair Techniques Using Shape Memory Alloys. Applied Technology Laboratory position statement: the results of this effort determined the feasibility of using the full-ring shape memory alloy coupling for field repair of fluid lines, control tubes, and drive shafts.

  10. Up-to-date state of storage techniques used for large numerical data files

    NASA Technical Reports Server (NTRS)

    Chlouba, V.

    1975-01-01

    Methods for data storage and output in data banks and memory files are discussed along with a survey of equipment available for this. Topics discussed include magnetic tapes, magnetic disks, Terabit magnetic tape memory, Unicon 690 laser memory, IBM 1360 photostore, microfilm recording equipment, holographic recording, film readers, optical character readers, digital data storage techniques, and photographic recording. The individual types of equipment are summarized in tables giving the basic technical parameters.

  11. Runtime support for parallelizing data mining algorithms

    NASA Astrophysics Data System (ADS)

    Jin, Ruoming; Agrawal, Gagan

    2002-03-01

    With recent technological advances, shared memory parallel machines have become more scalable, and offer large main memories and high bus bandwidths. They are emerging as good platforms for data warehousing and data mining. In this paper, we focus on shared memory parallelization of data mining algorithms. We have developed a series of techniques for parallelization of data mining algorithms, including full replication, full locking, fixed locking, optimized full locking, and cache-sensitive locking. Unlike previous work on shared memory parallelization of specific data mining algorithms, all of our techniques apply to a large number of common data mining algorithms. In addition, we propose a reduction-object based interface for specifying a data mining algorithm. We show how our runtime system can apply any of the techniques we have developed starting from a common specification of the algorithm.
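The simplest of the techniques named above, full replication, avoids locking by giving each thread a private copy of the reduction object and merging the copies at the end. The sketch below illustrates the idea in Python with a word-count-style reduction; the paper's actual runtime interface is not given in the abstract, so the names `local_reduction` and `parallel_count` are hypothetical.

```python
# Full replication: each thread reduces into a private object (no locks),
# and the replicas are merged sequentially at the end.
from concurrent.futures import ThreadPoolExecutor
from collections import Counter

def local_reduction(chunk):
    # Private reduction object per task: no synchronization needed here.
    counts = Counter()
    for item in chunk:
        counts[item] += 1
    return counts

def parallel_count(data, n_threads=4):
    size = max(1, len(data) // n_threads)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        partials = list(pool.map(local_reduction, chunks))
    # Final merge of the replicated reduction objects.
    total = Counter()
    for p in partials:
        total.update(p)
    return total

print(parallel_count(["a", "b", "a", "c", "a"]))
```

The trade-off the paper studies is replication's extra memory (one object per thread) against the contention cost of the various locking schemes.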

  12. Memory and Study Strategies for Optimal Learning.

    ERIC Educational Resources Information Center

    Hamachek, Alice L.

    Study strategies are those specific reading skills that increase understanding, memory storage, and retrieval. Memory techniques are crucial to effective studying, and to subsequent performance in class and on written examinations. A major function of memory is to process information. Stimuli are picked up by sensory receptors and transferred to…

  13. Implications of the Declarative/Procedural Model for Improving Second Language Learning: The Role of Memory Enhancement Techniques

    ERIC Educational Resources Information Center

    Ullman, Michael T.; Lovelett, Jarrett T.

    2018-01-01

    The declarative/procedural (DP) model posits that the learning, storage, and use of language critically depend on two learning and memory systems in the brain: declarative memory and procedural memory. Thus, on the basis of independent research on the memory systems, the model can generate specific and often novel predictions for language. Till…

  14. A Cure for Short Memories.

    ERIC Educational Resources Information Center

    Instructor, 1983

    1983-01-01

    This article explains two techniques for helping students develop long-term memory skills and retain information taught in class. One technique relies on mental pictures to keep track of a numbered series of items; the other depends on key words derived from the material that must be memorized. (PP)

  15. Direct access inter-process shared memory

    DOEpatents

    Brightwell, Ronald B; Pedretti, Kevin; Hudson, Trammell B

    2013-10-22

    A technique for directly sharing physical memory between processes executing on processor cores is described. The technique includes loading a plurality of processes into the physical memory for execution on a corresponding plurality of processor cores sharing the physical memory. An address space is mapped to each of the processes by populating a first entry in a top level virtual address table for each of the processes. The address space of each of the processes is cross-mapped into each of the processes by populating one or more subsequent entries of the top level virtual address table with the first entry in the top level virtual address table from other processes.
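A user-space analogue of this kind of sharing can be sketched with Python's `multiprocessing.shared_memory` module: two mappings of one segment see each other's writes. This stands in for, and is much simpler than, the page-table cross-mapping the patent describes; the function name below is illustrative.

```python
# Two mappings of one shared segment: a second attach by name (as a
# cooperating process would do) sees writes made through the first.
from multiprocessing import shared_memory

def demo_shared_write(payload: bytes) -> bytes:
    seg = shared_memory.SharedMemory(create=True, size=len(payload))
    view = shared_memory.SharedMemory(name=seg.name)  # second mapping
    n = len(payload)
    seg.buf[:n] = payload        # write through one mapping
    out = bytes(view.buf[:n])    # read through the other
    view.close()
    seg.close()
    seg.unlink()
    return out

print(demo_shared_write(b"hello"))  # -> b'hello'
```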

  16. Minimizing the Disruptive Effects of Prospective Memory in Simulated Air Traffic Control

    PubMed Central

    Loft, Shayne; Smith, Rebekah E.; Remington, Roger

    2015-01-01

    Prospective memory refers to remembering to perform an intended action in the future. Failures of prospective memory can occur in air traffic control. In two experiments, we examined the utility of external aids for facilitating air traffic management in a simulated air traffic control task with prospective memory requirements. Participants accepted and handed off aircraft and detected aircraft conflicts. The prospective memory task involved remembering to deviate from a routine operating procedure when accepting target aircraft. External aids that contained details of the prospective memory task appeared and flashed when target aircraft needed acceptance. In Experiment 1, external aids presented either adjacent or non-adjacent to each of the 20 target aircraft presented over the 40-minute test phase reduced prospective memory error by 11% compared to a condition without external aids. In Experiment 2, only a single target aircraft was presented a significant time (39–42 minutes) after presentation of the prospective memory instruction, and the external aids reduced prospective memory error by 34%. In both experiments, costs to the efficiency of non-prospective memory air traffic management (non-target aircraft acceptance response time, conflict detection response time) were reduced by non-adjacent aids compared to no aids or adjacent aids. In contrast, in both experiments, the efficiency of prospective memory air traffic management (target aircraft acceptance response time) was facilitated by adjacent aids compared to non-adjacent aids. Together, these findings have potential implications for the design of automated alerting systems to maximize multi-task performance in work settings where operators monitor and control demanding perceptual displays. PMID:24059825

  17. Memories for life: a review of the science and technology

    PubMed Central

    O'Hara, Kieron; Morris, Richard; Shadbolt, Nigel; Hitch, Graham J; Hall, Wendy; Beagrie, Neil

    2006-01-01

    This paper discusses scientific, social and technological aspects of memory. Recent developments in our understanding of memory processes and mechanisms, and their digital implementation, have placed the encoding, storage, management and retrieval of information at the forefront of several fields of research. At the same time, the divisions between the biological, physical and the digital worlds seem to be dissolving. Hence, opportunities for interdisciplinary research into memory are being created, between the life sciences, social sciences and physical sciences. Such research may benefit from immediate application into information management technology as a testbed. The paper describes one initiative, memories for life, as a potential common problem space for the various interested disciplines. PMID:16849265

  18. Theta-burst microstimulation in the human entorhinal area improves memory specificity.

    PubMed

    Titiz, Ali S; Hill, Michael R H; Mankin, Emily A; M Aghajan, Zahra; Eliashiv, Dawn; Tchemodanov, Natalia; Maoz, Uri; Stern, John; Tran, Michelle E; Schuette, Peter; Behnke, Eric; Suthana, Nanthia A; Fried, Itzhak

    2017-10-24

    The hippocampus is critical for episodic memory, and synaptic changes induced by long-term potentiation (LTP) are thought to underlie memory formation. In rodents, hippocampal LTP may be induced through electrical stimulation of the perforant path. To test whether similar techniques could improve episodic memory in humans, we implemented a microstimulation technique that allowed delivery of low-current electrical stimulation via 100-μm-diameter microelectrodes. As thirteen neurosurgical patients performed a person recognition task, microstimulation was applied in a theta-burst pattern, shown to optimally induce LTP. Microstimulation in the right entorhinal area during learning significantly improved subsequent memory specificity for novel portraits; participants were able both to recognize previously viewed photos and to reject similar lures. These results suggest that microstimulation with physiologic-level currents, a radical departure from commonly used deep brain stimulation protocols, is sufficient to modulate human behavior and provides an avenue for refined interrogation of the circuits involved in human memory.

  19. Development and Evaluation of a Casualty Evacuation Model for a European Conflict.

    DTIC Science & Technology

    1985-12-01

    EVAC, the computer code which implements our technique, has been used to solve a series of test problems in less time and requiring less memory than... the order of 1/K the amount of main memory for a K-commodity problem, so it can solve significantly larger problems than MCNF. ...The technique may require only half the memory of the general L.P. package [6]. These advances are due to the efficient data structures which have been

  20. Enhancing Memory in Your Students: COMPOSE Yourself!

    ERIC Educational Resources Information Center

    Rotter, Kathleen M.

    2009-01-01

    The essence of teaching is, in fact, creating new memories for your students. The teacher's role is to help students store the correct information (memories) in ways that make recall and future access and use likely. Therefore, choosing techniques to enhance memory is possibly the most critical aspect of instructional design. COMPOSE is an acronym…

  1. Photonic Diagnostic Technique For Thin Photoactive Films

    NASA Technical Reports Server (NTRS)

    Thakoor, Sarita

    1996-01-01

    Photonic diagnostic technique developed for use in noninvasive, rapid evaluation of thin paraelectric/ferroelectric films. Method proves useful in basic research, on-line monitoring for quality control at any stage of fabrication, and development of novel optoelectronic systems. Used to predict imprint-prone memory cells, and to study time evolution of defects in ferroelectric memories during processing. Plays vital role in enabling high-density ferroelectric memory manufacturing. One potential application lies in use of photoresponse for nondestructive readout of polarization memory states in high-density, high-speed memory devices. In another application, extension of basic concept of method makes it possible to develop specially tailored ferrocapacitor to act as programmable detector, wherein remanent polarization is used to modulate photoresponse. Large arrays of such detectors useful in optoelectronic processing, computing, and communication.

  2. Cerebellar models of associative memory: Three papers from IEEE COMPCON spring 1989

    NASA Technical Reports Server (NTRS)

    Raugh, Michael R. (Editor)

    1989-01-01

    Three papers are presented on the following topics: (1) a cerebellar-model associative memory as a generalized random-access memory; (2) theories of the cerebellum - two early models of associative memory; and (3) intelligent network management and functional cerebellum synthesis.

  3. Weighing the value of memory loss in the surgical evaluation of left temporal lobe epilepsy: A decision analysis

    PubMed Central

    Akama-Garren, Elliot H.; Bianchi, Matt T.; Leveroni, Catherine; Cole, Andrew J.; Cash, Sydney S.; Westover, M. Brandon

    2016-01-01

    SUMMARY Objectives Anterior temporal lobectomy is curative for many patients with disabling medically refractory temporal lobe epilepsy, but carries an inherent risk of disabling verbal memory loss. Although accurate prediction of iatrogenic memory loss is becoming increasingly possible, it remains unclear how much weight such predictions should have in surgical decision making. Here we aim to create a framework that facilitates a systematic and integrated assessment of the relative risks and benefits of surgery versus medical management for patients with left temporal lobe epilepsy. Methods We constructed a Markov decision model to evaluate the probabilistic outcomes and associated health utilities associated with choosing to undergo a left anterior temporal lobectomy versus continuing with medical management for patients with medically refractory left temporal lobe epilepsy. Three base-cases were considered, representing a spectrum of surgical candidates encountered in practice, with varying degrees of epilepsy-related disability and potential for decreased quality of life in response to post-surgical verbal memory deficits. Results For patients with moderately severe seizures and moderate risk of verbal memory loss, medical management was the preferred decision, with increased quality-adjusted life expectancy. However, the preferred choice was sensitive to clinically meaningful changes in several parameters, including quality of life impact of verbal memory decline, quality of life with seizures, mortality rate with medical management, probability of remission following surgery, and probability of remission with medical management. Significance Our decision model suggests that for patients with left temporal lobe epilepsy, quantitative assessment of risk and benefit should guide recommendation of therapy. 
In particular, risk for and potential impact of verbal memory decline should be carefully weighed against the degree of disability conferred by continued seizures on a patient-by-patient basis. PMID:25244498
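The cohort-style calculation behind such a Markov decision model can be sketched in a few lines. In the sketch below, every transition probability and utility is an illustrative placeholder, not a parameter from the study; the two-state structure (remission vs. continued seizures, with a death transition) is a simplification of the paper's model.

```python
# Toy two-state Markov cohort: accumulate quality-adjusted life years
# while the cohort survives. All numbers below are illustrative only.
def qale(p_remission, u_remission, u_seizures, p_death, years=30):
    """Quality-adjusted life expectancy over a fixed horizon."""
    alive, total = 1.0, 0.0
    for _ in range(years):
        total += alive * (p_remission * u_remission
                          + (1 - p_remission) * u_seizures)
        alive *= 1 - p_death  # annual mortality
    return total

# Surgery trades a higher remission probability against a utility
# penalty reflecting possible verbal memory decline.
surgery = qale(p_remission=0.65, u_remission=0.85, u_seizures=0.70,
               p_death=0.010)
medical = qale(p_remission=0.20, u_remission=0.90, u_seizures=0.70,
               p_death=0.015)
print(f"QALE surgery={surgery:.1f}, medical={medical:.1f}")
```

Sensitivity analysis, as in the paper, then amounts to sweeping each parameter and checking where the preferred option flips.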

  4. Weighing the value of memory loss in the surgical evaluation of left temporal lobe epilepsy: a decision analysis.

    PubMed

    Akama-Garren, Elliot H; Bianchi, Matt T; Leveroni, Catherine; Cole, Andrew J; Cash, Sydney S; Westover, M Brandon

    2014-11-01

    Anterior temporal lobectomy is curative for many patients with disabling medically refractory temporal lobe epilepsy, but carries an inherent risk of disabling verbal memory loss. Although accurate prediction of iatrogenic memory loss is becoming increasingly possible, it remains unclear how much weight such predictions should have in surgical decision making. Here we aim to create a framework that facilitates a systematic and integrated assessment of the relative risks and benefits of surgery versus medical management for patients with left temporal lobe epilepsy. We constructed a Markov decision model to evaluate the probabilistic outcomes and associated health utilities associated with choosing to undergo a left anterior temporal lobectomy versus continuing with medical management for patients with medically refractory left temporal lobe epilepsy. Three base-cases were considered, representing a spectrum of surgical candidates encountered in practice, with varying degrees of epilepsy-related disability and potential for decreased quality of life in response to post-surgical verbal memory deficits. For patients with moderately severe seizures and moderate risk of verbal memory loss, medical management was the preferred decision, with increased quality-adjusted life expectancy. However, the preferred choice was sensitive to clinically meaningful changes in several parameters, including quality of life impact of verbal memory decline, quality of life with seizures, mortality rate with medical management, probability of remission following surgery, and probability of remission with medical management. Our decision model suggests that for patients with left temporal lobe epilepsy, quantitative assessment of risk and benefit should guide recommendation of therapy. In particular, risk for and potential impact of verbal memory decline should be carefully weighed against the degree of disability conferred by continued seizures on a patient-by-patient basis. 

  5. Artificial Intelligence and Information Management

    NASA Astrophysics Data System (ADS)

    Fukumura, Teruo

    After reviewing the recent popularization of information transmission and processing technologies, which are supported by the progress of electronics, the authors describe how the introduction of opto-electronics into information technology has created the possibility of applying artificial intelligence (AI) techniques to the mechanization of information management. It is pointed out that although AI deals with problems in the mental world, its basic methodology relies upon verification by evidence, so experiments on computers become indispensable for the study of AI. The authors also describe that, since computers operate by program, the basic intelligence with which AI is concerned is that expressed by languages. As a result, the main tool of AI is logical proof, and this involves an intrinsic limitation. To answer the question "Why do you employ AI in your problem solving?", one must have ill-structured problems and intend to conduct deep studies on thinking and inference, and on memory and knowledge representation. Finally, the authors discuss the application of AI techniques to information management, covering the possibility of expert systems, the processing of queries, and the necessity of a document knowledge base.

  6. Avoiding and tolerating latency in large-scale next-generation shared-memory multiprocessors

    NASA Technical Reports Server (NTRS)

    Probst, David K.

    1993-01-01

    A scalable solution to the memory-latency problem is necessary to prevent the large latencies of synchronization and memory operations inherent in large-scale shared-memory multiprocessors from limiting performance. We distinguish latency avoidance and latency tolerance. Latency is avoided when data is brought to nearby locales for future reference. Latency is tolerated when references are overlapped with other computation. Latency-avoiding locales include: processor registers, data caches used temporally, and nearby memory modules. Tolerating communication latency requires parallelism, allowing the overlap of communication and computation. Latency-tolerating techniques include: vector pipelining, data caches used spatially, prefetching in various forms, and multithreading in various forms. Relaxing the consistency model permits increased use of avoidance and tolerance techniques. Each model is a mapping from the program text to sets of partial orders on program operations; it is a convention about which temporal precedences among program operations are necessary. Information about temporal locality and parallelism constrains the use of avoidance and tolerance techniques. Suitable architectural primitives and compiler technology are required to exploit the increased freedom to reorder and overlap operations in relaxed models.
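One of the tolerance techniques named above, prefetching to overlap communication with computation, can be shown in miniature. The abstract is about hardware and compiler support; the thread-based Python sketch below is only an analogy, with `slow_fetch` standing in for a high-latency remote-memory access.

```python
# Latency tolerance by prefetching: issue the fetch of block i+1 before
# computing on block i, so communication and computation overlap.
from concurrent.futures import ThreadPoolExecutor
import time

def slow_fetch(start):
    time.sleep(0.01)                 # stands in for remote-memory latency
    return list(range(start, start + 4))

def process(block):
    return sum(block)

def run(n_blocks=8):
    total = 0
    with ThreadPoolExecutor(max_workers=1) as pool:
        nxt = pool.submit(slow_fetch, 0)          # prefetch first block
        for i in range(n_blocks):
            block = nxt.result()                  # wait only if not ready
            if i + 1 < n_blocks:
                nxt = pool.submit(slow_fetch, (i + 1) * 4)  # fetch ahead
            total += process(block)               # overlaps with the fetch
    return total

print(run())
```

With enough computation per block, the fetch latency is fully hidden; without prefetching, each iteration would pay the full latency serially.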

  7. Measuring Memory and Attention to Preview in Motion.

    PubMed

    Jagacinski, Richard J; Hammond, Gordon M; Rizzi, Emanuele

    2017-08-01

    Objective Use perceptual-motor responses to perturbations to reveal the spatio-temporal detail of memory for the recent past and attention to preview when participants track a winding roadway. Background Memory of the recently passed roadway can be inferred from feedback control models of the participants' manual movement patterns. Similarly, attention to preview of the upcoming roadway can be inferred from feedforward control models of manual movement patterns. Method Perturbation techniques were used to measure these memory and attention functions. Results In a laboratory tracking task, the bandwidth of lateral roadway deviations was found to primarily influence memory for the past roadway rather than attention to preview. A secondary auditory/verbal/vocal memory task resulted in higher velocity error and acceleration error in the tracking task but did not affect attention to preview. Attention to preview was affected by the frequency pattern of sinusoidal perturbations of the roadway. Conclusion Perturbation techniques permit measurement of the spatio-temporal span of memory and attention to preview that affect tracking a winding roadway. They also provide new ways to explore goal-directed forgetting and spatially distributed attention in the context of movement. More generally, these techniques provide sensitive measures of individual differences in cognitive aspects of action. Application Models of driving behavior and assessment of driving skill may benefit from more detailed spatio-temporal measurement of attention to preview.

  8. Plastic surgeons and the management of trauma: from the JFK assassination to the Boston Marathon bombing.

    PubMed

    Luce, Edward A; Hollier, Larry H; Lin, Samuel J

    2013-11-01

    The fiftieth anniversary of the death by assassination of President John Kennedy is an opportunity to pay homage to his memory and also reflect on the important role plastic surgeons have played in the management of trauma. That reflection included a hypothetical scenario, a discussion of the surgical treatment of Kennedy (if he survived) and Governor Connally. The scenario describes the management of cranioplasty in the presence of scalp soft-tissue contracture, reconstruction of the proximal trachea, reconstitution of the abdominal wall, and restoration of a combined radius and soft-tissue defect. The development of diagnostic and therapeutic advances over the past 50 years in the care of maxillofacial trauma is described, including the evolution of imaging, timing of surgery, and operative techniques. Finally, contemporary measures of triage in situations involving mass casualties, as in the Boston Marathon bombings, complete the dedication to President Kennedy.

  9. Goal-Driven Autonomy and Robust Architecture for Long-Duration Missions (Year 1: 1 July 2013 - 31 July 2014)

    DTIC Science & Technology

    2014-09-30

    [Figure residue: architecture diagram relating the Mental Domain (Ω) and the World (Ψ) through Goal Management, a World Model, Episodic and Semantic Memory, Meta-Level Control, and Introspective Monitoring over memory and reasoning traces.] ...it is from incorrect or missing memory associations (i.e., indices). Similarly, correct information may exist in the input stream, but may not be

  10. Event-Based Prospective Memory Is Independently Associated with Self-Report of Medication Management in Older Adults

    PubMed Central

    Woods, Steven Paul; Weinborn, Michael; Maxwell, Brenton R.; Gummery, Alice; Mo, Kevin; Ng, Amanda R. J.; Bucks, Romola S.

    2014-01-01

    Background Identifying potentially modifiable risk factors for medication non-adherence in older adults is important in order to enhance screening and intervention efforts designed to improve medication-taking behavior and health outcomes. The current study sought to determine the unique contribution of prospective memory (i.e., “remembering to remember”) to successful self-reported medication management in older adults. Methods Sixty-five older adults with current medication prescriptions completed a comprehensive research evaluation of sociodemographic, psychiatric, and neurocognitive functioning, which included the Memory for Adherence to Medication Scale (MAMS), Prospective and Retrospective Memory Questionnaire (PRMQ), and a performance-based measure of prospective memory that measured both semantically-related and semantically-unrelated cue-intention (i.e., when-what) pairings. Results A series of hierarchical regressions controlling for biopsychosocial, other neurocognitive, and medication-related factors showed that elevated complaints on the PM scale of the PRMQ and worse performance on an objective semantically-unrelated event-based prospective memory task were independent predictors of poorer medication adherence as measured by the MAMS. Conclusions Prospective memory plays an important role in self-report of successful medication management among older adults. Findings may have implications for screening for older individuals “at risk” of non-adherence, as well as the development of prospective memory-based interventions to improve medication adherence and, ultimately, long-term health outcomes in older adults. PMID:24410357

  11. Time: a vital resource.

    PubMed

    Collins, Sandra K; Collins, Kevin S

    2004-01-01

    Resolving problems with time management requires an understanding of the concept of working smarter rather than harder. Therefore, managing time effectively is a vital responsibility of department managers. When developing a plan for more effectively managing time, it is important to carefully analyze where time is currently being used/lost. Keeping a daily log can be a time-consuming effort. However, the log can provide information about ways that time may be saved and how to organize personal schedules to maximize time efficiency. The next step is to develop a strategy to decrease wasted time and create a more cohesive radiology department. The following list of time management strategies provides some suggestions for developing a plan. Get focused. Set goals and priorities. Get organized. Monitor individual motivation factors. Develop memory techniques. In healthcare, success means delivering the highest quality of care by getting organized, meeting deadlines, creating efficient schedules and appropriately budgeting resources. Effective time management focuses on knowing what needs to be done when. The managerial challenge is to shift the emphasis from doing everything all at once to orchestrating the departmental activities in order to maximize the time given in a normal workday.

  12. Improving Working Memory Efficiency by Reframing Metacognitive Interpretation of Task Difficulty

    ERIC Educational Resources Information Center

    Autin, Frederique; Croizet, Jean-Claude

    2012-01-01

    Working memory capacity, our ability to manage incoming information for processing purposes, predicts achievement on a wide range of intellectual abilities. Three randomized experiments (N = 310) tested the effectiveness of a brief psychological intervention designed to boost working memory efficiency (i.e., state working memory capacity) by…

  13. Impact of the Educational Boost Your Brain and Memory Program Among Senior Living Residents.

    PubMed

    Nicholson, Roscoe; O'Brien, Catherine

    2017-12-01

    This random assignment waitlist control intervention study examined an implementation of the educational Boost Your Brain and Memory cognitive fitness intervention in 12 senior living organizations. Older adult participants (n = 166) completed measures of brain health knowledge, use of memory techniques, physical and intellectual activity, and mindfulness, at baseline and after the intervention group's completion of the course. Changes in knowledge scores and in self-reported physical and intellectual activity increased significantly more for intervention participants than for waitlist controls at the conclusion of the course. There were no significant changes between the groups in mindfulness or use of memory techniques. This suggests that in senior living settings Boost Your Brain and Memory is effective in educating participants about brain healthy behaviors and in motivating behavioral change in the areas of physical and intellectual activity.

  14. Predicting Retrograde Autobiographical Memory Changes Following Electroconvulsive Therapy: Relationships between Individual, Treatment, and Early Clinical Factors.

    PubMed

    Martin, Donel M; Gálvez, Verònica; Loo, Colleen K

    2015-06-19

    Loss of personal memories experienced prior to receiving electroconvulsive therapy is common and distressing and in some patients can persist for many months following treatment. Improved understanding of the relationships between individual patient factors, electroconvulsive therapy treatment factors, and clinical indicators measured early in the electroconvulsive therapy course may help clinicians minimize these side effects through better management of the electroconvulsive therapy treatment approach. In this study we examined the associations between the above factors for predicting retrograde autobiographical memory changes following electroconvulsive therapy. Seventy-four depressed participants with major depressive disorder were administered electroconvulsive therapy 3 times per week using either a right unilateral or bitemporal electrode placement and brief or ultrabrief pulse width. Verbal fluency and retrograde autobiographical memory (assessed using the Columbia Autobiographical Memory Interview - Short Form) were tested at baseline and after the last electroconvulsive therapy treatment. Time to reorientation was measured immediately following the third and sixth electroconvulsive therapy treatments. Results confirmed the utility of measuring time to reorientation early during the electroconvulsive therapy treatment course as a predictor of greater retrograde amnesia and the importance of assessing baseline cognitive status for identifying patients at greater risk for developing later side effects. With increased number of electroconvulsive therapy treatments, older age was associated with increased time to reorientation. Consistency of verbal fluency performance was moderately correlated with change in Columbia Autobiographical Memory Interview - Short Form scores following right unilateral electroconvulsive therapy. Electroconvulsive therapy treatment techniques associated with lesser cognitive side effects should be particularly considered for patients with lower baseline cognitive status or older age. © The Author 2015. Published by Oxford University Press on behalf of CINP.

  15. Predicting Retrograde Autobiographical Memory Changes Following Electroconvulsive Therapy: Relationships between Individual, Treatment, and Early Clinical Factors

    PubMed Central

    Gálvez, Verònica; Loo, Colleen K.

    2015-01-01

    Background: Loss of personal memories experienced prior to receiving electroconvulsive therapy is common and distressing and in some patients can persist for many months following treatment. Improved understanding of the relationships between individual patient factors, electroconvulsive therapy treatment factors, and clinical indicators measured early in the electroconvulsive therapy course may help clinicians minimize these side effects through better management of the electroconvulsive therapy treatment approach. In this study we examined the associations between the above factors for predicting retrograde autobiographical memory changes following electroconvulsive therapy. Methods: Seventy-four depressed participants with major depressive disorder were administered electroconvulsive therapy 3 times per week using either a right unilateral or bitemporal electrode placement and brief or ultrabrief pulse width. Verbal fluency and retrograde autobiographical memory (assessed using the Columbia Autobiographical Memory Interview – Short Form) were tested at baseline and after the last electroconvulsive therapy treatment. Time to reorientation was measured immediately following the third and sixth electroconvulsive therapy treatments. Results: Results confirmed the utility of measuring time to reorientation early during the electroconvulsive therapy treatment course as a predictor of greater retrograde amnesia and the importance of assessing baseline cognitive status for identifying patients at greater risk for developing later side effects. With increased number of electroconvulsive therapy treatments, older age was associated with increased time to reorientation. Consistency of verbal fluency performance was moderately correlated with change in Columbia Autobiographical Memory Interview – Short Form scores following right unilateral electroconvulsive therapy. Conclusions: Electroconvulsive therapy treatment techniques associated with lesser cognitive side effects should be particularly considered for patients with lower baseline cognitive status or older age. PMID:26091817

  16. Identifying High-Rate Flows Based on Sequential Sampling

    NASA Astrophysics Data System (ADS)

    Zhang, Yu; Fang, Binxing; Luo, Hao

    We consider the problem of fast identification of high-rate flows in backbone links with possibly millions of flows. Accurate identification of high-rate flows is important for active queue management, traffic measurement and network security such as detection of distributed denial of service attacks. It is difficult to directly identify high-rate flows in backbone links because tracking the possible millions of flows needs correspondingly large high-speed memories. To reduce the measurement overhead, the deterministic 1-out-of-k sampling technique is adopted, which is also implemented in Cisco routers (NetFlow). Ideally, a high-rate flow identification method should have short identification time, low memory cost and low processing cost. Most importantly, it should be able to specify the identification accuracy. We develop two such methods. The first method is based on the fixed sample size test (FSST), which is able to identify high-rate flows with user-specified identification accuracy. However, since FSST has to record every sampled flow during the measurement period, it is not memory efficient. Therefore a second, novel method based on the truncated sequential probability ratio test (TSPRT) is proposed. Through sequential sampling, TSPRT is able to remove low-rate flows and identify high-rate flows at an early stage, which reduces the memory cost and identification time, respectively. Depending on how the parameters in TSPRT are determined, two versions of TSPRT are proposed: TSPRT-M, which is suitable when low memory cost is preferred, and TSPRT-T, which is suitable when short identification time is preferred. The experimental results show that TSPRT requires less memory and identification time in identifying high-rate flows while satisfying the accuracy requirement as compared to previously proposed methods.
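
    The sequential-testing idea behind TSPRT can be sketched with a plain Wald SPRT on Bernoulli observations: each sampled packet either belongs to the candidate flow or not, and the accumulated log-likelihood ratio declares "high" or "low" as soon as it crosses a boundary. The rates and thresholds below are illustrative only; the paper's TSPRT-M/TSPRT-T parameter choices are not reproduced here.

```python
import math

def sprt_classify(samples, p0, p1, alpha=0.01, beta=0.01, max_n=None):
    """Sequential probability ratio test on Bernoulli observations.
    samples: iterable of 0/1 (1 = sampled packet belongs to the flow).
    H0: flow probability <= p0 (low-rate); H1: >= p1 (high-rate).
    Returns 'high', 'low', or 'undecided' (when truncated at max_n)."""
    a = math.log(beta / (1 - alpha))   # lower boundary: accept H0
    b = math.log((1 - beta) / alpha)   # upper boundary: accept H1
    llr = 0.0
    for n, x in enumerate(samples, 1):
        # Accumulate the per-observation log-likelihood ratio.
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr >= b:
            return "high"   # identified early: keep tracking this flow
        if llr <= a:
            return "low"    # removed early: frees its memory entry
        if max_n is not None and n >= max_n:
            break
    return "undecided"      # truncation: a fallback rule would decide here
```

    The early "low" decision is what saves memory: most flows are low-rate and are evicted long before the measurement period ends.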

  17. Two-layer symbolic representation for stochastic models with phase-type distributed events

    NASA Astrophysics Data System (ADS)

    Longo, Francesco; Scarpa, Marco

    2015-07-01

    Among the techniques that have been proposed for the analysis of non-Markovian models, the state space expansion approach has shown great flexibility in terms of modelling capacities. The principal drawback is the explosion of the state space. This paper proposes a two-layer symbolic method for efficiently storing the expanded reachability graph of a non-Markovian model in the case in which continuous phase-type distributions are associated with the firing times of system events, and different memory policies are considered. At the lower layer, the reachability graph is symbolically represented in the form of a set of Kronecker matrices, while, at the higher layer, all the information needed to correctly manage event memory is stored in a multi-terminal multi-valued decision diagram. This information is collected by applying a symbolic algorithm based on a pair of theorems. The efficiency of the proposed approach, in terms of memory occupation and execution time, is shown by applying it to a set of non-Markovian stochastic Petri nets and comparing it with a classical explicit expansion algorithm. Moreover, a comparison with a classical symbolic approach is performed whenever possible.
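
    The space saving of a Kronecker representation can be seen in a toy example: for a model composed of independent components, the joint transition matrix over the product state space is the Kronecker product of the small per-component matrices, so only the factors need to be stored (k components of size n cost k·n² entries instead of the n^k × n^k joint matrix). The matrices below are invented for illustration; the paper's construction additionally tracks event memory in a decision diagram, which is not sketched here.

```python
import numpy as np

# Local one-step transition matrices of two independent two-state components.
A = np.array([[0.9, 0.1],
              [0.2, 0.8]])
B = np.array([[0.5, 0.5],
              [0.3, 0.7]])

# Joint chain over the 4-state product space. A symbolic analyzer never
# forms this product explicitly; it keeps only the small factors A and B.
joint = np.kron(A, B)
```

    Storing A and B costs 8 entries here versus 16 for the joint matrix; the gap grows exponentially with the number of components.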

  18. Psychosocial treatment of posttraumatic stress disorder: a practice-friendly review of outcome research.

    PubMed

    Solomon, Susan D; Johnson, Dawn M

    2002-08-01

    A review of the treatment research indicates that several forms of therapy appear to be useful in reducing the symptoms of posttraumatic stress disorder (PTSD). Strongest support is found for the treatments that combine cognitive and behavioral techniques. Hypnosis, psychodynamic, anxiety management, and group therapies also may produce short-term symptom reduction. Still unknown is whether any approach produces lasting effects. Imaginal exposure to trauma memories and hypnosis are techniques most likely to affect the intrusive symptoms of PTSD, while cognitive and psychodynamic approaches may address better the numbing and avoidance symptoms. Treatment should be tailored to the severity and type of presenting PTSD symptoms, to the type of trauma experience, and to the many likely comorbid diagnoses and adjustment problems. Copyright 2002 Wiley Periodicals, Inc.

  19. Comparison of Groundwater Level Models Based on Artificial Neural Networks and ANFIS

    PubMed Central

    Domazet, Milka; Stricevic, Ruzica; Pocuca, Vesna; Spalevic, Velibor; Pivic, Radmila; Gregoric, Enika; Domazet, Uros

    2015-01-01

    Water table forecasting plays an important role in the management of groundwater resources in agricultural regions where there are drainage systems in river valleys. The results presented in this paper pertain to an area along the left bank of the Danube River, in the Province of Vojvodina, which is the northern part of Serbia. Two soft computing techniques were used in this research: an adaptive neurofuzzy inference system (ANFIS) and an artificial neural network (ANN) model for one-month water table forecasts at several wells located at different distances from the river. The results suggest that both these techniques represent useful tools for modeling hydrological processes in agriculture, with similar computing and memory capabilities, such that they constitute an exceptionally good numerical framework for generating high-quality models. PMID:26759830

  20. Comparison of Groundwater Level Models Based on Artificial Neural Networks and ANFIS.

    PubMed

    Djurovic, Nevenka; Domazet, Milka; Stricevic, Ruzica; Pocuca, Vesna; Spalevic, Velibor; Pivic, Radmila; Gregoric, Enika; Domazet, Uros

    2015-01-01

    Water table forecasting plays an important role in the management of groundwater resources in agricultural regions where there are drainage systems in river valleys. The results presented in this paper pertain to an area along the left bank of the Danube River, in the Province of Vojvodina, which is the northern part of Serbia. Two soft computing techniques were used in this research: an adaptive neurofuzzy inference system (ANFIS) and an artificial neural network (ANN) model for one-month water table forecasts at several wells located at different distances from the river. The results suggest that both these techniques represent useful tools for modeling hydrological processes in agriculture, with similar computing and memory capabilities, such that they constitute an exceptionally good numerical framework for generating high-quality models.

  1. Investigation and design of a Project Management Decision Support System for the 4950th Test Wing.

    DTIC Science & Technology

    1986-03-01

    all decision makers is the need for memory aids (reports, hand written notes, mental memory joggers, etc.). 4. Even in similar decision making ... memories to synthesize a decision- making process based on their individual styles, skills, and knowledge (Sprague, 1982: 106). Control mechanisms...representations shown in Figures 4.9 and 4.10 provide a means to this objective. By enabling a manager to make and record reasonable changes to

  2. Nonvolatile memory thin film transistors using CdSe/ZnS quantum dot-poly(methyl methacrylate) composite layer formed by a two-step spin coating technique

    NASA Astrophysics Data System (ADS)

    Chen, Ying-Chih; Huang, Chun-Yuan; Yu, Hsin-Chieh; Su, Yan-Kuin

    2012-08-01

    Nonvolatile memory thin film transistors (TFTs) using a core/shell CdSe/ZnS quantum dot (QD)-poly(methyl methacrylate) (PMMA) composite layer as the floating gate have been demonstrated, with a device configuration of n+-Si gate/SiO2 insulator/QD-PMMA composite layer/pentacene channel/Au source-drain. To achieve the QD-PMMA composite layer, a two-step spin coating technique was used to successively deposit QD-PMMA composite and PMMA on the insulator. The variation in crystal quality and surface morphology of the subsequent pentacene films, characterized by x-ray diffraction spectra and atomic force microscopy, was correlated to the two-step spin coating. The crystalline size of pentacene was improved from 147.9 to 165.2 Å, while the degree of structural disorder decreased from 4.5% to 3.1% after the adoption of this technique. In pentacene-based TFTs, the improvement in performance was also significant, along with the appearance of strong memory characteristics. The memory behaviors were attributed to the charge storage/discharge effect in the QD-PMMA composite layer. Under programming and erasing operations, programmable memory devices with a memory window (ΔVth) of 23 V and long retention time were obtained.

  3. Evaluating Non-In-Place Update Techniques for Flash-Based Transaction Processing Systems

    NASA Astrophysics Data System (ADS)

    Wang, Yongkun; Goda, Kazuo; Kitsuregawa, Masaru

    Recently, flash memory has been emerging as a mainstream storage device. With its price sliding fast, the cost per capacity is approaching that of SATA disk drives. So far flash memory has been widely deployed in consumer electronics and partly in mobile computing environments; for enterprise systems, its deployment has been studied by many researchers and developers. In terms of access performance characteristics, flash memory is quite different from disk drives. Without mechanical components, flash memory has very high random read performance, whereas it has limited random write performance because of the erase-before-write design. The random write performance of flash memory is comparable with, or even worse than, that of disk drives. Due to this performance asymmetry, naive deployment in enterprise systems may not fully exploit the potential performance of flash memory. This paper studies the effectiveness of using non-in-place-update (NIPU) techniques through the IO path of flash-based transaction processing systems. Our deliberate experiments using both an open-source DBMS and a commercial DBMS validated the potential benefits; a 3.0x to 6.6x performance improvement was confirmed by incorporating non-in-place-update techniques into the file system without any modification of applications or storage devices.
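
    The essence of a non-in-place-update scheme can be sketched in a few lines: every logical-page update is appended sequentially to a log, and a mapping table redirects reads to the newest copy, so flash-hostile random overwrites become sequential appends. This is a generic illustration of the idea, not the specific NIPU file-system mechanism evaluated in the paper.

```python
class AppendOnlyStore:
    """Minimal non-in-place-update (NIPU) store: updates never overwrite
    a page; they are appended to a log and a mapping table tracks the
    latest copy of each logical page."""
    def __init__(self):
        self.log = []        # sequential log of (logical_page, data) entries
        self.mapping = {}    # logical page -> index of its newest log entry

    def write(self, page, data):
        self.mapping[page] = len(self.log)   # redirect reads to the new copy
        self.log.append((page, data))        # sequential append, no erase

    def read(self, page):
        return self.log[self.mapping[page]][1]

    def garbage_ratio(self):
        """Fraction of log entries that are stale (reclaimable by GC)."""
        return 0.0 if not self.log else 1 - len(self.mapping) / len(self.log)

store = AppendOnlyStore()
store.write(7, b"old")
store.write(7, b"new")     # supersedes the old copy without erasing it
store.write(8, b"other")
```

    The price of NIPU is the stale entries left behind, which is why such designs pair the log with a garbage-collection step.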

  4. Configurable memory system and method for providing atomic counting operations in a memory device

    DOEpatents

    Bellofatto, Ralph E.; Gara, Alan G.; Giampapa, Mark E.; Ohmacht, Martin

    2010-09-14

    A memory system and method for providing atomic memory-based counter operations to operating systems and applications that make most efficient use of counter-backing memory and virtual and physical address space, while simplifying operating system memory management, and enabling the counter-backing memory to be used for purposes other than counter-backing storage when desired. The encoding and address decoding enabled by the invention provides all this functionality through a combination of software and hardware.
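
    The primitive such a memory system exposes is an atomic fetch-and-add. Its semantics can be modeled in software; here a lock stands in for the read-modify-write that the patented hardware performs inside the memory device itself, so this is purely illustrative of the operation's guarantee, not of the mechanism.

```python
import threading

class AtomicCounter:
    """Software model of an atomic fetch-and-add counter. The lock plays
    the role of the in-memory-device read-modify-write: no two updates
    can interleave, so no increment is ever lost."""
    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()

    def fetch_and_add(self, delta=1):
        with self._lock:
            old = self._value
            self._value = old + delta
            return old          # returns the pre-update value, as hardware does

def hammer(counter, n):
    for _ in range(n):
        counter.fetch_and_add()

# Eight threads incrementing concurrently never lose an update.
counter = AtomicCounter()
threads = [threading.Thread(target=hammer, args=(counter, 1000)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

    Without the atomicity, the classic read-increment-write race would make the final count nondeterministically smaller than 8000.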

  5. Evidence-Based Practice for the Use of Internal Strategies as a Memory Compensation Technique After Brain Injury: A Systematic Review.

    PubMed

    OʼNeil-Pirozzi, Therese M; Kennedy, Mary R T; Sohlberg, McKay M

    2016-01-01

    To complete a systematic review of internal memory strategy use with people who have brain injury and provide practitioners with information that will impact their clinical work. A systematic literature search to identify published intervention studies that evaluated an internal memory strategy or technique to improve memory function of individuals with brain injury. Relevant data from reviewed articles were coded using 4 clinical questions targeting participants, interventions, research methods, and outcomes. A comprehensive search identified 130 study citations and abstracts. Forty-six met inclusion/exclusion criteria and were systematically reviewed. Visual imagery was most frequently studied, in isolation or in combination with other internal strategies. Despite significant variability in research methods and outcomes across studies, the evidence provides impetus for use of internal memory strategies with individuals following brain injury. Individuals with traumatic brain injury may benefit from internal memory strategy use, and clinicians should consider internal memory strategy instruction as part of intervention plans. Further research needs to better delineate influences on intervention candidacy and outcomes.

  6. State of the art on targeted memory reactivation: Sleep your way to enhanced cognition.

    PubMed

    Schouten, Daphne I; Pereira, Sofia I R; Tops, Mattie; Louzada, Fernando M

    2017-04-01

    Targeted memory reactivation is a fairly simple technique that has the potential to influence the course of memory formation through application of cues during sleep. Studies have shown that cueing memory during sleep can lead to either an enhanced or decreased representation of the information encoded in the targeted networks, depending on experimental variations. The effects have been associated with sleep parameters and accompanied by activation of memory related brain areas. The findings suggest a causal role of neuronal replay in memory consolidation and provide evidence for the active system consolidation hypothesis. However, the observed inconsistencies across studies suggest that further research is warranted regarding the underlying neural mechanisms and optimal conditions for the application of targeted memory reactivation. The goal of the present review is to integrate the currently available experimental data and to provide an overview of this technique's limitations and pitfalls, as well as its potential applications in everyday use and clinical treatment. Exploring the open questions herein identified should lead to insight into safer and more effective ways of adjusting memory representations to better suit individual needs. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Memory for self-generated narration in the elderly.

    PubMed

    Drevenstedt, J; Bellezza, F S

    1993-06-01

    The story mnemonic technique, an effective encoding and retrieval strategy for young adults, was used as a procedure to study encoding and recall in elderly women. Experiment 1 (15 undergraduate and 14 elderly women) showed the technique to be reliable over 3 weeks and without practice effects in both age groups. In Experiment 2, 67 elderly women (mean age = 72 years) were found to make up 3 distinctive subgroupings in patterns of narration cohesiveness and recall accuracy, consistent with pilot data on the technique. A stepwise multiple regression found narration cohesiveness, an adaptation of the Daneman-Carpenter (1980) working-memory measure, and vocabulary to predict word recall. Results suggested that a general memory factor differentiated the 3 elderly subgroups.

  8. Soft errors in commercial off-the-shelf static random access memories

    NASA Astrophysics Data System (ADS)

    Dilillo, L.; Tsiligiannis, G.; Gupta, V.; Bosser, A.; Saigne, F.; Wrobel, F.

    2017-01-01

    This article reviews state-of-the-art techniques for the evaluation of the effect of radiation on static random access memory (SRAM). We detail irradiation test techniques and present results from irradiation experiments with several types of particles. Two commercial SRAMs, in 90 and 65 nm technology nodes, were considered as case studies. Besides the basic static and dynamic test modes, advanced stimuli for the irradiation tests were introduced, as well as statistical post-processing techniques allowing for deeper analysis of the correlations between bit-flip cross-sections and design/architectural characteristics of the memory device. Further insight is provided on the response of irradiated stacked-layer devices and on the use of characterized SRAM devices as particle detectors.
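
    The headline quantity in such irradiation campaigns is the per-bit bit-flip cross-section: observed upsets normalized by particle fluence and by the number of bits exposed. The numbers below are invented for illustration.

```python
def bit_flip_cross_section(n_upsets, fluence_per_cm2, n_bits):
    """Per-bit SEU cross-section in cm^2/bit: upsets divided by the
    particle fluence (particles/cm^2) and the number of bits exposed."""
    return n_upsets / (fluence_per_cm2 * n_bits)

# Hypothetical run: 120 upsets in a 1-Mbit SRAM at a fluence of 1e10 /cm^2.
sigma = bit_flip_cross_section(120, 1e10, 2**20)
```

    Plotting this cross-section against particle energy or LET yields the device's characteristic upset curve.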

  9. Fast decoding techniques for extended single-and-double-error-correcting Reed Solomon codes

    NASA Technical Reports Server (NTRS)

    Costello, D. J., Jr.; Deng, H.; Lin, S.

    1984-01-01

    A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. For example, some 256K-bit dynamic random access memories are organized as 32K x 8 bit-bytes. Byte-oriented codes such as Reed Solomon (RS) codes provide efficient low-overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. Some special high-speed decoding techniques for extended single- and double-error-correcting RS codes are presented. These techniques are designed to find the error locations and the error values directly from the syndrome without having to form the error locator polynomial and solve for its roots.
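
    The flavor of direct syndrome decoding can be shown with a toy single-error-correcting code over the prime field GF(257); real memory codes work over GF(2^m), so this is only an analogy. With syndromes S0 = Σ c_i and S1 = Σ i·c_i, a single error of value e at position j gives S0 = e and S1 = j·e, so the location is simply S1/S0: no error-locator polynomial, no root search. The codeword below is constructed for illustration.

```python
P = 257  # toy prime field GF(257)

def syndromes(word):
    """S0 = sum of symbols, S1 = position-weighted sum, over GF(P)."""
    s0 = sum(word) % P
    s1 = sum(i * c for i, c in enumerate(word)) % P
    return s0, s1

def correct_single_error(word):
    """Direct single-error correction: location and value come straight
    from the syndromes, with no locator polynomial to solve."""
    s0, s1 = syndromes(word)
    if s0 == 0 and s1 == 0:
        return word                     # no error detected
    loc = (s1 * pow(s0, -1, P)) % P     # error position j = S1 / S0
    fixed = list(word)
    fixed[loc] = (fixed[loc] - s0) % P  # subtract error value e = S0
    return fixed

codeword = [140, 57, 10, 20, 30]        # satisfies S0 = S1 = 0 (mod 257)
hit = list(codeword)
hit[3] = (hit[3] + 99) % P              # inject a single symbol error
```

    Because both syndrome sums are computable in one pass, a decoder like this fits the tight timing budget of a memory read cycle.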

  10. Short-Term Memory; An Annotated Bibliography.

    ERIC Educational Resources Information Center

    Reynolds, Donald; Rosenblatt, Richard D.

    This annotated bibliography on memory is divided into 12 areas: information theory; proactive and retroactive interference and interpolated activities; set, subject strategies, and coding techniques; paired associate studies; simultaneous listening and memory span studies; rate and mode of stimulus presentation; rate and order of recall, and…

  11. On-orbit observations of single event upset in Harris HM-6508 1K RAMs, reissue A

    NASA Astrophysics Data System (ADS)

    Blake, J. B.; Mandel, R.

    1987-02-01

    The Harris HM-6508 1K x 1 RAMs are part of a subsystem of a satellite in a low, polar orbit. The memory module, used in the subsystem containing the RAMs, consists of three printed circuit cards, with each card containing eight 2K byte memory hybrids, for a total of 48K bytes. Each memory hybrid contains 16 HM-6508 RAM chips. On a regular basis all but 256 bytes of the 48K bytes are examined for bit errors. Two different techniques were used for detecting bit errors. The first technique, a memory check sum, was capable of automatically detecting all single bit and some double bit errors which occurred within a page of memory. A memory page consists of 256 bytes. Memory check sum tests are performed approximately every 90 minutes. To detect a multiple error or to determine the exact location of the bit error within the page the entire contents of the memory is dumped and compared to the load file. Memory dumps are normally performed once a month, or immediately after the check sum routine detects an error. Once the exact location of the error is found, the correct value is reloaded into memory. After the memory is reloaded, the contents of the memory location in question is verified in order to determine if the error was a soft error generated by an SEU or a hard error generated by a part failure or cosmic-ray induced latchup.
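
    The two-tier procedure described above (cheap per-page checksums scanned frequently, plus a full dump-compare against the load file to pinpoint the exact flipped bit) can be sketched as follows; the checksum width and demo data are illustrative, not the flight implementation.

```python
PAGE = 256  # bytes per page, as in the flight memory above

def page_checksums(memory):
    """Cheap periodic scrub: one 16-bit checksum per 256-byte page.
    Detects any single-bit flip within a page."""
    return [sum(memory[i:i + PAGE]) & 0xFFFF
            for i in range(0, len(memory), PAGE)]

def find_upsets(memory, load_file):
    """Full dump-compare against the load file: pinpoints the exact byte
    and bit so the correct value can be reloaded."""
    upsets = []
    for addr, (got, want) in enumerate(zip(memory, load_file)):
        if got ^ want:
            upsets.append((addr, got ^ want))  # (byte address, flipped-bit mask)
    return upsets

load_file = bytes(range(256)) * 2   # 2-page (512-byte) load image
memory = bytearray(load_file)
memory[300] ^= 0x10                 # simulate an SEU: one bit flips
```

    The checksum scan flags the bad page quickly and cheaply; the dump-compare is then run only when needed to locate and repair the bit.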

  12. Insights from neuropsychology: pinpointing the role of the posterior parietal cortex in episodic and working memory

    PubMed Central

    Berryhill, Marian E.

    2012-01-01

    The role of posterior parietal cortex (PPC) in various forms of memory is a current topic of interest in the broader field of cognitive neuroscience. This large cortical region has been linked with a wide range of mnemonic functions affecting each stage of memory processing: encoding, maintenance, and retrieval. Yet, the precise role of the PPC in memory remains mysterious and controversial. Progress in understanding PPC function will require researchers to incorporate findings in a convergent manner from multiple experimental techniques rather than emphasizing a particular type of data. To facilitate this process, here, we review findings from the human neuropsychological research and examine the consequences to memory following PPC damage. Recent patient-based research findings have investigated two typically disconnected fields: working memory (WM) and episodic memory. The findings from patient participants with unilateral and bilateral PPC lesions performing diverse experimental paradigms are summarized. These findings are then related to findings from other techniques including neurostimulation (TMS and tDCS) and the influential and more abundant functional neuroimaging literature. We then review the strengths and weaknesses of hypotheses proposed to account for PPC function in these forms of memory. Finally, we address what missing evidence is needed to clarify the role(s) of the PPC in memory. PMID:22701406

  13. Man-rated flight software for the F-8 DFBW program

    NASA Technical Reports Server (NTRS)

    Bairnsfather, R. R.

    1976-01-01

    The design, implementation, and verification of the flight control software used in the F-8 DFBW program are discussed. Since the DFBW utilizes an Apollo computer and hardware, the procedures, controls, and basic management techniques employed are based on those developed for the Apollo software system. Program assembly control, simulator configuration control, erasable-memory load generation, change procedures and anomaly reporting are discussed. The primary verification tools are described, as well as the program test plans and their implementation on the various simulators. Failure effects analysis and the creation of special failure generating software for testing purposes are described.

  14. 78 FR 23866 - Airworthiness Directives; the Boeing Company

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-23

    ... operational software in the cabin management system, and loading new software into the mass memory card. The...-200 and -300 series airplanes. The proposed AD would have required installing new operational software in the cabin management system, and loading new software into the mass memory card. Since the...

  15. Hemiboreal forest: natural disturbances and the importance of ecosystem legacies to management

    Treesearch

    Kalev Jogiste; Henn Korjus; John Stanturf; Lee E. Frelich; Endijs Baders; Janis Donis; Aris Jansons; Ahto Kangur; Kajar Koster; Diana Laarmann; Tiit Maaten; Vitas Marozas; Marek Metslaid; Kristi Nigul; Olga Polyachenko; Tiit Randveer; Floortje Vodde

    2017-01-01

    The condition of forest ecosystems depends on the temporal and spatial pattern of management interventions and natural disturbances. Remnants of previous conditions persisting after disturbances, or ecosystem legacies, collectively comprise ecosystem memory. Ecosystem memory in turn contributes to resilience and possibilities of ecosystem reorganization...

  16. Memory function and supportive technology

    PubMed Central

    Charness, Neil; Best, Ryan; Souders, Dustin

    2013-01-01

    Episodic and working memory processes show pronounced age-related decline, with other memory processes such as semantic, procedural, and metamemory less affected. Older adults tend to complain the most about prospective and retrospective memory failures. We introduce a framework for deciding how to mitigate memory decline using augmentation and substitution and discuss techniques that change the user, through mnemonics training, and change the tool or environment, by providing environmental support. We provide examples of low-tech and high-tech memory supports and discuss constraints on the utility of high-tech systems including effectiveness of devices, attitudes toward memory aids, and reliability of systems. PMID:24379752

  17. Mnemonic Strategies: Creating Schemata for Learning Enhancement

    ERIC Educational Resources Information Center

    Goll, Paulette S.

    2004-01-01

    This article investigates the process of remembering and presents techniques to improve memory retention. Examples of association, clustering, imagery, location, mnemonic devices and visualization illustrate strategies that can be used to encode and recall information from the long-term memory. Several memory games offer the opportunity to test…

  18. Preventing the return of fear in humans using reconsolidation update mechanisms.

    PubMed

    Schiller, Daniela; Monfils, Marie-H; Raio, Candace M; Johnson, David C; Ledoux, Joseph E; Phelps, Elizabeth A

    2010-01-07

    Recent research on changing fears has examined targeting reconsolidation. During reconsolidation, stored information is rendered labile after being retrieved. Pharmacological manipulations at this stage result in an inability to retrieve the memories at later times, suggesting that they are erased or persistently inhibited. Unfortunately, the use of these pharmacological manipulations in humans can be problematic. Here we introduce a non-invasive technique to target the reconsolidation of fear memories in humans. We provide evidence that old fear memories can be updated with non-fearful information provided during the reconsolidation window. As a consequence, fear responses are no longer expressed, an effect that lasted at least a year and was selective only to reactivated memories without affecting others. These findings demonstrate the adaptive role of reconsolidation as a window of opportunity to rewrite emotional memories, and suggest a non-invasive technique that can be used safely in humans to prevent the return of fear.

  19. Recurrent Neural Networks With Auxiliary Memory Units.

    PubMed

    Wang, Jianyong; Zhang, Lei; Guo, Quan; Yi, Zhang

    2018-05-01

    Memory is one of the most important mechanisms in recurrent neural networks (RNNs) learning. It plays a crucial role in practical applications, such as sequence learning. With a good memory mechanism, long-term history can be fused with current information, and can thus improve RNNs learning. Developing a suitable memory mechanism is always desirable in the field of RNNs. This paper proposes a novel memory mechanism for RNNs. The main contributions of this paper are: 1) an auxiliary memory unit (AMU) is proposed, which results in a new special RNN model (AMU-RNN), separating the memory and output explicitly and 2) an efficient learning algorithm is developed by employing the technique of error flow truncation. The proposed AMU-RNN model, together with the developed learning algorithm, can learn and maintain stable memory over a long time range. This method overcomes both the learning conflict problem and the gradient vanishing problem. Unlike the traditional method, which mixes the memory and output with a single neuron in a recurrent unit, the AMU provides an auxiliary memory neuron to maintain memory in particular. By separating the memory and output in a recurrent unit, the problem of learning conflicts can be eliminated easily. Moreover, by using the technique of error flow truncation, each auxiliary memory neuron ensures constant error flow during the learning process. The experiments demonstrate good performance of the proposed AMU-RNNs and the developed learning algorithm. The method exhibits quite efficient learning performance with stable convergence in the AMU-RNN learning and outperforms the state-of-the-art RNN models in sequence generation and sequence classification tasks.
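
    The separation of memory from output can be sketched with a toy numpy cell: a memory vector m is updated additively (giving the constant error flow mentioned above, as in an LSTM's error carousel) while the output h is read from it by a separate transformation. The abstract does not give the AMU-RNN equations, so the update rules, weight names, and sizes below are a plausible illustration, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

class AMUCell:
    """Toy recurrent cell with an auxiliary memory unit kept separate
    from the output neuron (illustrative only; not the published
    AMU-RNN equations)."""
    def __init__(self, n_in, n_hidden):
        s = 1.0 / np.sqrt(n_in + n_hidden)
        self.Wm = rng.uniform(-s, s, (n_hidden, n_in))      # input -> memory
        self.Um = rng.uniform(-s, s, (n_hidden, n_hidden))  # output -> memory
        self.Wh = rng.uniform(-s, s, (n_hidden, n_hidden))  # memory -> output

    def step(self, x, m, h):
        m_new = m + np.tanh(self.Wm @ x + self.Um @ h)  # additive memory update
        h_new = np.tanh(self.Wh @ m_new)                # output read from memory
        return m_new, h_new

    def run(self, xs):
        n = self.Wh.shape[0]
        m, h = np.zeros(n), np.zeros(n)
        for x in xs:
            m, h = self.step(x, m, h)
        return m, h

cell = AMUCell(3, 4)
m, h = cell.run([np.ones(3)] * 5)
```

    Because the memory path is additive, gradients through m do not shrink multiplicatively at each step, which is the intuition behind "constant error flow".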

  20. Development of a high capacity bubble domain memory element and related epitaxial garnet materials for application in spacecraft data recorders. Item 2: The optimization of material-device parameters for application in bubble domain memory elements for spacecraft data recorders

    NASA Technical Reports Server (NTRS)

    Besser, P. J.

    1976-01-01

    Bubble domain materials and devices are discussed. One of the materials development goals was a materials system suitable for operation of 16 micrometer period bubble domain devices at 150 kHz over the temperature range -10 C to +60 C. Several material compositions and hard bubble suppression techniques were characterized and the most promising candidates were evaluated in device structures. The technique of pulsed laser stroboscopic microscopy was used to characterize bubble dynamic properties and device performance at 150 kHz. Techniques for large area LPE film growth were developed as a separate task. Device studies included detector optimization, passive replicator design and test and on-chip bridge evaluation. As a technology demonstration an 8 chip memory cell was designed, tested and delivered. The memory elements used in the cell were 10 kilobit serial registers.

  1. Experimental evaluation of shape memory alloy actuation technique in adaptive antenna design concepts

    NASA Astrophysics Data System (ADS)

    Kefauver, W. Neill; Carpenter, Bernie F.

    1994-09-01

    Creation of an antenna system that could autonomously adapt the contours of reflecting surfaces to compensate for structural loads induced by a variable environment would maximize the performance of space-based communication systems. Design of such a system requires the comprehensive development and integration of advanced actuator, sensor, and control technologies. As an initial step in this process, a test has been performed to assess the use of a shape memory alloy as a potential actuation technique. For this test, an existing offset Cassegrain antenna system was retrofitted with a subreflector equipped with shape memory alloy actuators for surface contour control. The impacts that the actuators had on both the subreflector contour and the antenna system patterns were measured. The results of this study indicate the potential for using shape memory alloy actuation techniques to adaptively control antenna performance; both variations in gain and beam steering capabilities were demonstrated. Future development effort is required to evolve this potential into a useful technology for satellite applications.

  2. Experimental evaluation of shape memory alloy actuation technique in adaptive antenna design concepts

    NASA Technical Reports Server (NTRS)

    Kefauver, W. Neill; Carpenter, Bernie F.

    1994-01-01

    Creation of an antenna system that could autonomously adapt the contours of reflecting surfaces to compensate for structural loads induced by a variable environment would maximize the performance of space-based communication systems. Design of such a system requires the comprehensive development and integration of advanced actuator, sensor, and control technologies. As an initial step in this process, a test has been performed to assess the use of a shape memory alloy as a potential actuation technique. For this test, an existing offset Cassegrain antenna system was retrofitted with a subreflector equipped with shape memory alloy actuators for surface contour control. The impacts that the actuators had on both the subreflector contour and the antenna system patterns were measured. The results of this study indicate the potential for using shape memory alloy actuation techniques to adaptively control antenna performance; both variations in gain and beam steering capabilities were demonstrated. Future development effort is required to evolve this potential into a useful technology for satellite applications.

  3. A grey NGM(1,1, k) self-memory coupling prediction model for energy consumption prediction.

    PubMed

    Guo, Xiaojun; Liu, Sifeng; Wu, Lifeng; Tang, Lingling

    2014-01-01

    Energy consumption prediction is an important issue for governments, energy-sector investors, and other related corporations. Although several prediction techniques exist, selecting the most appropriate one is of vital importance. For the approximately nonhomogeneous exponential data sequences that often emerge in energy systems, a novel grey NGM(1,1,k) self-memory coupling prediction model is put forward to improve predictive performance. It organically integrates the self-memory principle of dynamic systems with the grey NGM(1,1,k) model; the traditional grey model's sensitivity to its initial value is overcome by the self-memory principle. In this study, the total energy, coal, and electricity consumption of China are used for demonstration with the proposed coupling prediction technique. The results show the superiority of the NGM(1,1,k) self-memory coupling model when compared with results from the literature. Its excellent prediction performance lies in the coupling model's ability to take full advantage of systematic multi-time historical data and to capture stochastic fluctuation tendencies. This work also contributes to the enrichment of grey prediction theory and the extension of its application span.
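    The basic GM(1,1) grey model that the paper's NGM(1,1,k) self-memory variant builds on can be sketched as follows (a textbook GM(1,1) only, under the standard accumulation and background-value construction; the NGM time term k and the self-memory coupling are omitted):

```python
import math

def gm11_fit_predict(x0, steps=1):
    """Fit the basic GM(1,1) grey model to series x0 and forecast
    `steps` future values (simplified sketch of the family of models
    the paper extends)."""
    n = len(x0)
    # 1-AGO: first-order accumulated generating operation
    x1, s = [], 0.0
    for v in x0:
        s += v
        x1.append(s)
    # background values z1(k): mean of consecutive accumulated points
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]
    y = x0[1:]
    # least squares for x0(k) = -a*z1(k) + b via 2x2 normal equations
    szz = sum(v * v for v in z)
    sz = sum(z)
    szy = sum(zi * yi for zi, yi in zip(z, y))
    sy = sum(y)
    m = n - 1
    det = szz * m - sz * sz
    a = (sz * sy - m * szy) / det
    b = (szz * sy - sz * szy) / det
    # whitened-equation response, then 1-IAGO to recover the series
    def x1_hat(k):  # k is a 0-based index into the accumulated series
        return (x0[0] - b / a) * math.exp(-a * k) + b / a
    preds = [x1_hat(n + i) - x1_hat(n + i - 1) for i in range(steps)]
    return a, b, preds
```

    On an exactly geometric series such as 10, 12, 14.4, 17.28 the fitted development coefficient a comes out negative (growth) and the one-step-ahead forecast lands close to the true continuation of 20.736.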

  4. FORENSIC ANALYSIS OF WINDOWS® VIRTUAL MEMORY INCORPORATING THE SYSTEM’S PAGEFILE / COUNTERINTELLIGENCE THROUGH MALICIOUS CODE ANALYSIS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jared Stimson

    FORENSIC ANALYSIS OF WINDOWS® VIRTUAL MEMORY INCORPORATING THE SYSTEM’S PAGEFILE: Computer forensics is concerned with the use of computer investigation and analysis techniques to collect evidence suitable for presentation in court. The examination of volatile memory is a relatively new but important area in computer forensics. Criminals are becoming more forensically aware and are now able to compromise computers without accessing the hard disk of the target computer, which means that the traditional incident-response practice of pulling the plug will destroy the only evidence of the crime. While some techniques are available for acquiring the contents of main memory, few exist that can analyze these data in a meaningful way. One reason for this is how memory is managed by the operating system: data belonging to one process can be distributed arbitrarily across physical memory or the hard disk, making it very difficult to recover useful information. This report focuses on how these disparate sources of information can be combined to give a single, contiguous address space for each process. Using address translation, a tool is developed to reconstruct the virtual address space of a process by combining a physical memory dump with the pagefile on the hard disk. COUNTERINTELLIGENCE THROUGH MALICIOUS CODE ANALYSIS: As computer network technology continues to grow, so does the reliance on this technology for everyday business functionality. To appeal to customers and employees alike, businesses are seeking an increased online presence, and to increase productivity the same businesses are computerizing their day-to-day operations. The combination of a publicly accessible interface to the business's network and the increase in the amount of intellectual property present on these networks presents serious risks.
All of this intellectual property now faces constant attacks from a wide variety of malicious software intended to uncover company and government secrets. Every year billions of dollars are invested in preventing and recovering from the introduction of malicious code into a system. However, little research is being done on leveraging these attacks for counterintelligence opportunities. With the ever-increasing number of vulnerable computers on the Internet, the task of attributing these attacks to an organization or a single person is a daunting one. This thesis demonstrates the idea of intentionally running a piece of malicious code in a secure environment in order to gain counterintelligence on an attacker.
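    The address-translation idea in the report (combining a physical memory dump with the pagefile to rebuild a contiguous virtual address space) can be sketched with a toy page table; the function, page size, and data layout here are hypothetical simplifications, not the report's tool:

```python
PAGE_SIZE = 4096  # assume 4 KiB pages, the common x86 page size

def reconstruct_virtual_byte(vaddr, page_table, ram, pagefile):
    """Toy illustration of rebuilding one process's contiguous virtual
    address space from a physical memory dump plus the pagefile.

    `page_table` maps a virtual page number to ('ram', frame) or
    ('pagefile', slot); `ram` and `pagefile` are raw byte dumps.
    All names here are hypothetical."""
    vpn, offset = divmod(vaddr, PAGE_SIZE)   # split address into page + offset
    location, index = page_table[vpn]        # where is this page backed?
    backing = ram if location == 'ram' else pagefile
    return backing[index * PAGE_SIZE + offset]
```

    The point of the sketch is that a single virtual address resolves transparently to either dump, which is exactly what makes the reconstructed address space contiguous from the analyst's point of view.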

  5. UNIX-based operating systems robustness evaluation

    NASA Technical Reports Server (NTRS)

    Chang, Yu-Ming

    1996-01-01

    Robust operating systems are required for reliable computing. Techniques for robustness evaluation of operating systems not only enhance the understanding of the reliability of computer systems, but also provide valuable feedback to system designers. This thesis presents results from robustness evaluation experiments on five UNIX-based operating systems, which include Digital Equipment's OSF/1, Hewlett Packard's HP-UX, Sun Microsystems' Solaris and SunOS, and Silicon Graphics' IRIX. Three sets of experiments were performed. The methodology for evaluation tested (1) the exception handling mechanism, (2) system resource management, and (3) system capacity under high workload stress. An exception generator was used to evaluate the exception handling mechanism of the operating systems. Results included the exit status of the exception generator and the system state. Resource management techniques used by individual operating systems were tested using programs designed to usurp system resources such as physical memory and process slots. Finally, the workload stress testing evaluated the effect of the workload on system performance by running a synthetic workload and recording the response time of local and remote user requests. Moderate to severe performance degradations were observed on the systems under stress.

  6. Young in Mind.

    ERIC Educational Resources Information Center

    Miller, Mary K.

    1998-01-01

    According to experts, relatively minor memory lapses such as searching for misplaced car keys are perfectly normal but there are things that people can do to keep their minds agile and their memories alive. Discusses lifestyles and techniques used by seniors dedicated to exercising their minds. Also explores the science of memory and aging. (PVD)

  7. A Teacher's Guide to Memory Techniques.

    ERIC Educational Resources Information Center

    Hodges, Daniel L.

    1982-01-01

    To aid instructors in teaching their students to use effective methods of memorization, this article outlines major memory methods, provides examples of their use, evaluates the methods, and discusses ways students can be taught to apply them. First, common, but less effective, memory methods are presented, including reading and re-reading…

  8. Displays, memories, and signal processing: A compilation

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Articles on electronics systems and techniques are presented. The first section covers displays and other electro-optical systems; the second section is devoted to signal processing. The third section presents several new memory devices for digital equipment, including articles on holographic memories. The latest available patent information is also given.

  9. Improving family medicine resident training in dementia care: an experiential learning opportunity in Primary Care Collaborative Memory Clinics.

    PubMed

    Lee, Linda; Weston, W Wayne; Hillier, Loretta; Archibald, Douglas; Lee, Joseph

    2018-06-21

    Family physicians often find themselves inadequately prepared to manage dementia. This article describes the curriculum for a resident training intervention in Primary Care Collaborative Memory Clinics (PCCMC), outlines its underlying educational principles, and examines its impact on residents' ability to provide dementia care. PCCMCs are family physician-led interprofessional clinic teams that provide evidence-informed comprehensive assessment and management of memory concerns. Within PCCMCs residents learn to apply a structured approach to assessment, diagnosis, and management; training consists of a tutorial covering various topics related to dementia followed by work-based learning within the clinic. Significantly more residents who trained in PCCMCs (sample = 98), as compared to those in usual training programs (sample = 35), reported positive changes in knowledge, ability, and confidence in ability to assess and manage memory problems. The PCCMC training intervention for family medicine residents provides a significant opportunity for residents to learn about best clinical practices and interprofessional care needed for optimal dementia care integrated within primary care practice.

  10. An Investigation of Unified Memory Access Performance in CUDA

    PubMed Central

    Landaverde, Raphael; Zhang, Tiansheng; Coskun, Ayse K.; Herbordt, Martin

    2015-01-01

    Managing memory between the CPU and GPU is a major challenge in GPU computing. A programming model, Unified Memory Access (UMA), has recently been introduced by Nvidia to simplify the complexities of memory management while claiming good overall performance. In this paper, we investigate this programming model and evaluate its performance and the programming simplifications it offers, based on our experimental results. We find that beyond on-demand data transfers to the CPU, the GPU is also able to request subsets of the data it requires on demand. This feature allows UMA to outperform full data transfer methods for certain parallel applications and small data sizes. We also find, however, that for the majority of applications and memory access patterns the performance overheads associated with UMA are significant, while the simplifications to the programming model restrict flexibility for adding future optimizations. PMID:26594668

  11. Locality Aware Concurrent Start for Stencil Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shrestha, Sunil; Gao, Guang R.; Manzano Franco, Joseph B.

    Stencil computations are at the heart of many physical simulations used in scientific codes; thus, there exists a plethora of optimization efforts for this family of computations. Among these techniques, tiling techniques that allow concurrent start have proven very efficient in providing better performance for these critical kernels. Nevertheless, with many-core designs being the norm, these optimization techniques might not be able to fully exploit locality (both spatial and temporal) on multiple levels of the memory hierarchy without compromising parallelism. It is no longer true that the machine can be seen as a homogeneous collection of nodes with caches, main memory and an interconnect network. New architectural designs exhibit complex groupings of nodes, cores, threads, caches and memory connected by an ever-evolving network-on-chip design. These new designs may benefit greatly from carefully crafted schedules and groupings that encourage parallel actors (i.e., threads, cores or nodes) to be aware of the computational history of other actors in close proximity. In this paper, we provide an efficient tiling technique that allows hierarchical concurrent start for memory hierarchy aware tile groups. Each execution schedule and tile shape exploits the available parallelism, load balance and locality present in the given applications. We demonstrate our technique on the Intel Xeon Phi architecture with selected and representative stencil kernels. We show improvements ranging from 5.58% to 31.17% over existing state-of-the-art techniques.
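    The locality idea behind tiling can be illustrated with a much-simplified sketch: a single 3-point Jacobi-style sweep processed in spatial tiles. The paper's technique additionally tiles across time steps with hierarchical concurrent start, which this sketch does not attempt; the tile size and stencil are hypothetical:

```python
def stencil_step_tiled(a, tile=256):
    """One 3-point averaging stencil sweep over a 1-D array, processed
    tile by tile so each tile's working set can stay resident in cache
    (spatial-blocking illustration only)."""
    n = len(a)
    out = a[:]  # boundary elements are copied through unchanged
    for start in range(1, n - 1, tile):
        end = min(start + tile, n - 1)
        for i in range(start, end):
            out[i] = (a[i - 1] + a[i] + a[i + 1]) / 3.0
    return out
```

    Because the tiled and untiled loops perform identical arithmetic per element, the results match exactly; the benefit of tiling lies purely in the memory access order, which is the dimension the paper's hierarchy-aware tile groups optimize.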

  12. Shape memory alloy/shape memory polymer tools

    DOEpatents

    Seward, Kirk P.; Krulevitch, Peter A.

    2005-03-29

    Micro-electromechanical tools for minimally invasive techniques, including microsurgery. These tools utilize composite shape memory alloy (SMA), shape memory polymer (SMP), and combinations of SMA and SMP to produce catheter distal tips, actuators, etc., which are bistable. Applications for these structures include: 1) a method for reversible fine positioning of a catheter tip, 2) a method for reversible fine positioning of tools or therapeutic catheters by a guide catheter, 3) a method for bending articulation through the body's vasculature, 4) methods for controlled stent delivery, deployment, and repositioning, and 5) catheters with variable modulus, with vibration mode, with inchworm capability, and with articulated tips. These actuators and catheter tips are bistable and are opportune for in vivo usage because the materials are biocompatible and convenient for intravascular use as well as other minimally invasive techniques.

  13. The synaptic plasticity and memory hypothesis: encoding, storage and persistence

    PubMed Central

    Takeuchi, Tomonori; Duszkiewicz, Adrian J.; Morris, Richard G. M.

    2014-01-01

    The synaptic plasticity and memory hypothesis asserts that activity-dependent synaptic plasticity is induced at appropriate synapses during memory formation and is both necessary and sufficient for the encoding and trace storage of the type of memory mediated by the brain area in which it is observed. Criteria for establishing the necessity and sufficiency of such plasticity in mediating trace storage have been identified and are here reviewed in relation to new work using some of the diverse techniques of contemporary neuroscience. Evidence derived using optical imaging, molecular-genetic and optogenetic techniques in conjunction with appropriate behavioural analyses continues to offer support for the idea that changing the strength of connections between neurons is one of the major mechanisms by which engrams are stored in the brain. PMID:24298167

  14. Electronic implementation of associative memory based on neural network models

    NASA Technical Reports Server (NTRS)

    Moopenn, A.; Lambe, John; Thakoor, A. P.

    1987-01-01

    An electronic embodiment of a neural network based associative memory in the form of a binary connection matrix is described. The nature of false memory errors, their effect on the information storage capacity of binary connection matrix memories, and a novel technique to eliminate such errors with the help of asymmetrical extra connections are discussed. The stability of the matrix memory system incorporating a unique local inhibition scheme is analyzed in terms of local minimization of an energy function. The memory's stability, dynamic behavior, and recall capability are investigated using a 32-'neuron' electronic neural network memory with a 1024-programmable binary connection matrix.
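    A binary connection matrix associative memory of the general kind described here can be sketched with textbook Willshaw-style store/recall rules (an illustrative model only, not the paper's electronic circuit, its asymmetrical extra connections, or its specific local inhibition scheme):

```python
def store(patterns, n):
    """Build an n-by-n binary connection matrix: a connection is set
    wherever two neurons are co-active in any stored pattern."""
    T = [[0] * n for _ in range(n)]
    for p in patterns:
        active = [i for i, b in enumerate(p) if b]
        for i in active:
            for j in active:
                T[i][j] = 1
    return T

def recall(T, cue):
    """Recall by summing matrix columns over the cue's active neurons
    and thresholding at the number of active bits in the cue."""
    n = len(cue)
    k = sum(cue)  # threshold: active-bit count of the cue
    sums = [sum(T[i][j] * cue[i] for i in range(n)) for j in range(n)]
    return [1 if s >= k else 0 for s in sums]
```

    Recalling with a partial cue completes the stored pattern, because the threshold tracks the cue's own active-bit count; false memory errors of the kind the paper analyzes appear in this model when stored patterns overlap heavily and spurious connections accumulate.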

  15. Noise reduction in optically controlled quantum memory

    NASA Astrophysics Data System (ADS)

    Ma, Lijun; Slattery, Oliver; Tang, Xiao

    2018-05-01

    Quantum memory is an essential tool for quantum communications systems and quantum computers. An important category of quantum memory, called optically controlled quantum memory, uses a strong classical beam to control the storage and re-emission of a single-photon signal through an atomic ensemble. In this type of memory, the residual light from the strong classical control beam can cause severe noise and degrade the system performance significantly. Efficiently suppressing this noise is a requirement for the successful implementation of optically controlled quantum memories. In this paper, we briefly introduce the latest and most common approaches to quantum memory and review the various noise-reduction techniques used in implementing them.

  16. Declarative and nondeclarative memory: multiple brain systems supporting learning and memory.

    PubMed

    Squire, L R

    1992-01-01

    The topic of multiple forms of memory is considered from a biological point of view. Fact-and-event (declarative, explicit) memory is contrasted with a collection of nonconscious (nondeclarative, implicit) memory abilities, including skills and habits, priming, and simple conditioning. Recent evidence is reviewed indicating that declarative and nondeclarative forms of memory have different operating characteristics and depend on separate brain systems. A brain-systems framework for understanding memory phenomena is developed in light of lesion studies involving rats, monkeys, and humans, as well as recent studies with normal humans using the divided visual field technique, event-related potentials, and positron emission tomography (PET).

  17. Community-based memorials to September 11, 2001: environmental stewardship as memory work

    Treesearch

    Erika S. Svendsen; Lindsay K. Campbell

    2014-01-01

    This chapter investigates how people use trees, parks, gardens, and other natural resources as raw materials in and settings for memorials to September 11, 2001. In particular, we focus on 'found space living memorials', which we define as sites that are community-managed, re-appropriated from their prior use, often carved out of the public right-of-way, and...

  18. Memory-assisted quantum key distribution resilient against multiple-excitation effects

    NASA Astrophysics Data System (ADS)

    Lo Piparo, Nicolò; Sinclair, Neil; Razavi, Mohsen

    2018-01-01

    Memory-assisted measurement-device-independent quantum key distribution (MA-MDI-QKD) has recently been proposed as a technique to improve the rate-versus-distance behavior of QKD systems by using existing, or nearly-achievable, quantum technologies. The promise is that MA-MDI-QKD would require less demanding quantum memories than the ones needed for probabilistic quantum repeaters. Nevertheless, early investigations suggest that, in order to beat the conventional memory-less QKD schemes, the quantum memories used in the MA-MDI-QKD protocols must have high bandwidth-storage products and short interaction times. Among different types of quantum memories, ensemble-based memories offer some of the required specifications, but they typically suffer from multiple excitation effects. To avoid the latter issue, in this paper, we propose two new variants of MA-MDI-QKD both relying on single-photon sources for entangling purposes. One is based on known techniques for entanglement distribution in quantum repeaters. This scheme turns out to offer no advantage even if one uses ideal single-photon sources. By finding the root cause of the problem, we then propose another setup, which can outperform single memory-less setups even if we allow for some imperfections in our single-photon sources. For such a scheme, we compare the key rate for different types of ensemble-based memories and show that certain classes of atomic ensembles can improve the rate-versus-distance behavior.

  19. Fast maximum intensity projections of large medical data sets by exploiting hierarchical memory architectures.

    PubMed

    Kiefer, Gundolf; Lehmann, Helko; Weese, Jürgen

    2006-04-01

    Maximum intensity projections (MIPs) are an important visualization technique for angiographic data sets. Efficient data inspection requires frame rates of at least five frames per second at preserved image quality. Despite the advances in computer technology, this task remains a challenge. On the one hand, the sizes of computed tomography and magnetic resonance images are increasing rapidly. On the other hand, rendering algorithms do not automatically benefit from the advances in processor technology, especially for large data sets. This is due to the faster evolving processing power and the slower evolving memory access speed, which is bridged by hierarchical cache memory architectures. In this paper, we investigate memory access optimization methods and use them for generating MIPs on general-purpose central processing units (CPUs) and graphics processing units (GPUs), respectively. These methods can work on any level of the memory hierarchy, and we show that properly combined methods can optimize memory access on multiple levels of the hierarchy at the same time. We present performance measurements to compare different algorithm variants and illustrate the influence of the respective techniques. On current hardware, the efficient handling of the memory hierarchy for CPUs improves the rendering performance by a factor of 3 to 4. On GPUs, we observed that the effect is even larger, especially for large data sets. The methods can easily be adjusted to different hardware specifics, although their impact can vary considerably. They can also be used for other rendering techniques than MIPs, and their use for more general image processing task could be investigated in the future.
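    A maximum intensity projection itself is simple; the paper's contribution lies in ordering memory accesses to suit the cache hierarchy. This sketch streams slices in storage order so the inner loops touch memory sequentially (an illustration of the access-order idea only, not the paper's optimized CPU/GPU renderer; the list-of-lists volume layout is a hypothetical stand-in for a real image buffer):

```python
def mip_axial(volume):
    """Maximum intensity projection along the first (slice) axis of a
    volume stored as slices of rows of values. Iterating slice-by-slice
    and row-by-row visits memory in storage order, so each cache line
    is used fully before being evicted."""
    rows, cols = len(volume[0]), len(volume[0][0])
    proj = [[float('-inf')] * cols for _ in range(rows)]
    for sl in volume:            # stream slices in storage order
        for r in range(rows):
            row, prow = sl[r], proj[r]
            for c in range(cols):
                if row[c] > prow[c]:
                    prow[c] = row[c]
    return proj
```

    The alternative traversal (fixing an output pixel and scanning through all slices for it) computes the same projection but strides across slices on every step, which is the kind of cache-hostile access pattern the paper's methods are designed to avoid.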

  20. Individual Differences in the Effects of Retrieval from Long-Term Memory

    ERIC Educational Resources Information Center

    Brewer, Gene A.; Unsworth, Nash

    2012-01-01

    The current study examined individual differences in the effects of retrieval from long-term memory (i.e., the testing effect). The effects of retrieving from memory make tested information more accessible for future retrieval attempts. Despite the broad applied ramifications of such a potent memorization technique there is a paucity of research…

  1. Implementation of a finite element analysis procedure for structural analysis of shape memory behaviour of fibre reinforced shape memory polymer composites

    NASA Astrophysics Data System (ADS)

    Azzawi, Wessam Al; Epaarachchi, J. A.; Islam, Mainul; Leng, Jinsong

    2017-12-01

    Shape memory polymers (SMPs) offer a unique ability to undergo substantial shape deformation and subsequently recover the original shape when exposed to a particular external stimulus. Comparatively low mechanical properties are the major drawback limiting the extended use of SMPs in engineering applications; however, the inclusion of reinforcing fibres into SMPs improves mechanical properties significantly while retaining the intrinsic shape memory effects. Implementing shape memory polymer composites (SMPCs) in any engineering application is a demanding task that requires thorough material and design optimization. Currently available analytical tools have critical limitations for accurate analysis and simulation of SMPC structures, which slows the transformation of breakthrough research outcomes into real-life applications. Many finite element (FE) models have been presented, but the majority of them require complicated user subroutines to integrate with standard FE software packages; furthermore, those subroutines are problem-specific and difficult to use across a wider range of SMPC materials and related structures. This paper presents an FE simulation technique to model the thermomechanical behaviour of SMPCs using the commercial FE software ABAQUS. The proposed technique incorporates the material's time-dependent viscoelastic behaviour. Its ability to predict shape fixity and shape recovery was evaluated against experimental data acquired by bending an SMPC cantilever beam. The excellent correlation between the experimental and FE simulation results confirms the robustness of the proposed technique.

  2. Remotely Monitored Sealing Array Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2012-09-12

    The Remotely Monitored Sealing Array (RMSA) utilizes the Secure Sensor Platform (SSP) framework to establish the fundamental operating capabilities for communication, security, power management, and cryptography. In addition to the SSP framework, the RMSA software has unique capabilities to support monitoring a fiber optic seal. Fiber monitoring includes open and closed states as well as parametric monitoring to detect tampering attacks. The fiber monitoring techniques, using the SSP power management processes, allow the seals to last for years while maintaining the security requirements of the monitoring application. The seal is enclosed in a tamper-resistant housing with software to support active tamper monitoring. New features include LED notification of fiber closure, the ability to retrieve the entire fiber optic history via translator command, separate memory storage for fiber optic events, and a more robust method for tracking and resending failed messages.

  3. Resilient and Robust High Performance Computing Platforms for Scientific Computing Integrity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Yier

    As technology advances, computer systems are subject to increasingly sophisticated cyber-attacks that compromise both their security and integrity. High performance computing platforms used in commercial and scientific applications involving sensitive, or even classified, data are frequently targeted by powerful adversaries. This situation is made worse by a lack of fundamental security solutions that both perform efficiently and are effective at preventing threats. Current security solutions fail to address the threat landscape and ensure the integrity of sensitive data. As challenges rise, both the private and public sectors will require robust technologies to protect their computing infrastructure. The research outcomes from this project try to address all these challenges. For example, we present LAZARUS, a novel technique to harden kernel Address Space Layout Randomization (KASLR) against paging-based side-channel attacks. In particular, our scheme allows for fine-grained protection of the virtual memory mappings that implement the randomization. We demonstrate the effectiveness of our approach by hardening a recent Linux kernel with LAZARUS, mitigating all of the previously presented side-channel attacks on KASLR. Our extensive evaluation shows that LAZARUS incurs only 0.943% overhead for standard benchmarks, and is therefore highly practical. We also introduced HA2lloc, a hardware-assisted allocator that is capable of leveraging an extended memory management unit to detect memory errors in the heap. We also performed testing using HA2lloc in a simulation environment and found that the approach is capable of preventing common memory vulnerabilities.

  4. Interference due to shared features between action plans is influenced by working memory span.

    PubMed

    Fournier, Lisa R; Behmer, Lawrence P; Stubblefield, Alexandra M

    2014-12-01

    In this study, we examined the interactions between the action plans that we hold in memory and the actions that we carry out, asking whether the interference due to shared features between action plans is due to selection demands imposed on working memory. Individuals with low and high working memory spans learned arbitrary motor actions in response to two different visual events (A and B), presented in a serial order. They planned a response to the first event (A) and while maintaining this action plan in memory they then executed a speeded response to the second event (B). Afterward, they executed the action plan for the first event (A) maintained in memory. Speeded responses to the second event (B) were delayed when it shared an action feature (feature overlap) with the first event (A), relative to when it did not (no feature overlap). The size of the feature-overlap delay was greater for low-span than for high-span participants. This indicates that interference due to overlapping action plans is greater when fewer working memory resources are available, suggesting that this interference is due to selection demands imposed on working memory. Thus, working memory plays an important role in managing current and upcoming action plans, at least for newly learned tasks. Also, managing multiple action plans is compromised in individuals who have low versus high working memory spans.

  5. Dynamic Photorefractive Memory and its Application for Opto-Electronic Neural Networks.

    NASA Astrophysics Data System (ADS)

    Sasaki, Hironori

    This dissertation describes the analysis of photorefractive crystal dynamics and its application to opto-electronic neural network systems. The realization of a dynamic photorefractive memory is investigated in terms of the following aspects: fast memory update, uniform grating multiplexing schedules, and the prevention of partial erasure of existing gratings. Fast memory update is realized by a selective erasure process that superimposes a new grating on the original one with an appropriate phase shift. The dynamics of the selective erasure process are analyzed using the first-order photorefractive material equations and confirmed experimentally. The effects of beam coupling and fringe bending on the selective erasure dynamics are also analyzed by numerically solving a combination of coupled wave equations and the photorefractive material equation. An incremental recording technique is proposed as a uniform grating multiplexing schedule and compared with the conventional scheduled recording technique in terms of phase distribution in the presence of an external dc electric field, as well as image gray-scale dependence. The theoretical analysis and experimental results prove the superiority of incremental recording over scheduled recording. A novel recirculating information memory architecture is proposed and experimentally demonstrated to prevent partial degradation of existing gratings when the memory is accessed. Gratings are circulated through a memory feedback loop based on the incremental recording dynamics, demonstrating robust read/write/erase capabilities. The dynamic photorefractive memory is then applied to opto-electronic neural network systems. A module architecture based on page-oriented dynamic photorefractive memory is proposed; it can implement two complementary interconnection organizations, fan-in and fan-out.
The module system's scalability and learning capabilities are theoretically investigated using the photorefractive dynamics described in earlier chapters of the dissertation. The implementation of a feed-forward image compression network with 900 input and 9 output neurons at 6-bit interconnection accuracy is experimentally demonstrated. Learning of a Perceptron network that determines sex from input face images of 900 pixels is also successfully demonstrated.

  6. Robust dynamical decoupling for quantum computing and quantum memory.

    PubMed

    Souza, Alexandre M; Alvarez, Gonzalo A; Suter, Dieter

    2011-06-17

    Dynamical decoupling (DD) is a popular technique for protecting qubits from the environment. However, unless special care is taken, experimental errors in the control pulses used in this technique can destroy the quantum information instead of preserving it. Here, we investigate techniques for making DD sequences robust against different types of experimental errors while retaining good decoupling efficiency in a fluctuating environment. We present experimental data from solid-state nuclear spin qubits and introduce a new DD sequence that is suitable for quantum computing and quantum memory.

  7. Parallel performance investigations of an unstructured mesh Navier-Stokes solver

    NASA Technical Reports Server (NTRS)

    Mavriplis, Dimitri J.

    2000-01-01

    A Reynolds-averaged Navier-Stokes solver based on unstructured mesh techniques for analysis of high-lift configurations is described. The method makes use of an agglomeration multigrid solver for convergence acceleration. Implicit line-smoothing is employed to relieve the stiffness associated with highly stretched meshes. A GMRES technique is also implemented to speed convergence at the expense of additional memory usage. The solver is cache efficient and fully vectorizable, and is parallelized using a two-level hybrid MPI-OpenMP implementation suitable for shared and/or distributed memory architectures, as well as clusters of shared memory machines. Convergence and scalability results are illustrated for various high-lift cases.

  8. A Grey NGM(1,1, k) Self-Memory Coupling Prediction Model for Energy Consumption Prediction

    PubMed Central

    Guo, Xiaojun; Liu, Sifeng; Wu, Lifeng; Tang, Lingling

    2014-01-01

    Energy consumption prediction is an important issue for governments, energy sector investors, and other related corporations. Although several prediction techniques exist, selection of the most appropriate technique is of vital importance. For the approximately nonhomogeneous exponential data sequences that often arise in energy systems, a novel grey NGM(1,1, k) self-memory coupling prediction model is put forward to improve predictive performance. It integrates the self-memory principle of dynamic systems with the grey NGM(1,1, k) model. The traditional grey model's weakness of sensitivity to initial values is overcome by the self-memory principle. In this study, the total energy, coal, and electricity consumption of China are used to demonstrate the proposed coupling prediction technique. The results show the superiority of the NGM(1,1, k) self-memory coupling prediction model when compared with results from the literature. Its excellent prediction performance lies in the fact that the coupling model can take full advantage of systematic multi-time historical data and capture the stochastic fluctuation tendency. This work also makes a significant contribution to the enrichment of grey prediction theory and the extension of its application span. PMID:25054174

  9. Teuchos C++ memory management classes, idioms, and related topics, the complete reference : a comprehensive strategy for safe and efficient memory management in C++ for high performance computing.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bartlett, Roscoe Ainsworth

    2010-05-01

    The ubiquitous use of raw pointers in higher-level code is the primary cause of memory usage problems and memory leaks in C++ programs. This paper describes what might be considered a radical approach to the problem: encapsulate the use of all raw pointers and all raw calls to new and delete in higher-level C++ code. Instead, a set of cooperating template classes developed in the Trilinos package Teuchos is used to encapsulate every use of raw C++ pointers in every use case where they appear in high-level code. Included in the set of memory management classes is the typical reference-counted smart pointer class similar to boost::shared_ptr (and therefore C++0x std::shared_ptr). However, what is missing in boost and the new standard library are non-reference-counted classes for the remaining use cases where raw C++ pointers would otherwise be used. These classes have a debug build mode where nearly all programmer errors are caught and gracefully reported at runtime. The default optimized build mode strips all runtime checks and allows the code to perform as efficiently as raw C++ pointers with reasonable usage. Also included is a novel approach for dealing with the circular-references problem that imposes little extra overhead and is almost completely invisible to most of the code (unlike the boost and therefore C++0x approach). Rather than being a radical approach, encapsulating all raw C++ pointers is simply the logical progression of a trend in the C++ development and standards community that started with std::auto_ptr and is continued (but not finished) with std::shared_ptr in C++0x. Using the Teuchos reference-counted memory management classes allows one to remove unnecessary constraints in the use of objects by removing arbitrary lifetime ordering constraints, which are a type of unnecessary coupling [23]. 
    The code one writes with these classes will be more likely to be correct on first writing, will be less likely to contain silent (but deadly) memory usage errors, and will be much more robust to later refactoring and maintenance. The level of debug-mode runtime checking provided by the Teuchos memory management classes is stronger in many respects than what is provided by memory checking tools like Valgrind and Purify, while being much less expensive. However, tools like Valgrind and Purify perform several types of checks (such as detecting the use of uninitialized memory) that make them very valuable and therefore complementary to the Teuchos debug-mode runtime checking. The Teuchos memory management classes and idioms largely address the technical issues in fixing the fragile built-in C++ memory management model (with the exception of circular references, which have no easy solution but can be managed as discussed). All that remains is to teach these classes and idioms and expand their usage in C++ codes. The long-term viability of C++ as a usable and productive language depends on it. Otherwise, if C++ is no safer than C, is the greater complexity of C++ worth its extra features? Given that C is smaller and easier to learn than C++, and since most programmers do not know object orientation (or templates, or other advanced features of C++) all that well anyway, what are most programmers really getting out of C++ that would outweigh its extra complexity over C? C++ zealots will argue this point, but C++'s popularity appears to have peaked while the popularity of C has remained fairly stable over the last decade. Idioms like those advocated in this paper can help avert this trend, but wide community buy-in and a change in the way C++ is taught will be required to have the greatest impact. 
    To make these programs more secure, compiler vendors or static analysis tools (e.g. Klocwork) could implement a preprocessor-like annotation language, similar in spirit to OpenMP, that would allow the programmer to declare (in comments) that certain blocks of code should be ''pointer-free'' or that smaller blocks are 'pointers allowed'. This would significantly improve the robustness of code that uses the memory management classes described here.

  10. Jordan recurrent neural network versus IHACRES in modelling daily streamflows

    NASA Astrophysics Data System (ADS)

    Carcano, Elena Carla; Bartolini, Paolo; Muselli, Marco; Piroddi, Luigi

    2008-12-01

    A study of possible scenarios for modelling streamflow data from daily time series, using artificial neural networks (ANNs), is presented. Particular emphasis is devoted to the reconstruction of drought periods, where water resource management and control are most critical. This paper considers two connectionist models: a feedforward multilayer perceptron (MLP) and a Jordan recurrent neural network (JNN), comparing network performance on real-world data from two small catchments (192 and 69 km² in size) with irregular and torrential regimes. Several network configurations are tested to ensure a good combination of input features (rainfall and previous streamflow data) that capture the variability of the physical processes at work. Tapped delay line (TDL) and memory effect techniques are introduced to recognize and reproduce temporal dependence. Results show poor agreement when using TDL only, but a remarkable improvement can be obtained with JNN and its memory effect procedures, which reproduce the system memory over a catchment more effectively. Furthermore, the IHACRES conceptual model, which relies on both rainfall and temperature input data, is introduced for comparative study. The results suggest that when good input data are unavailable, metric models perform better than conceptual ones and, in general, it is difficult to justify substantial conceptualization of complex processes.

  11. Weather prediction using a genetic memory

    NASA Technical Reports Server (NTRS)

    Rogers, David

    1990-01-01

    Kanerva's sparse distributed memory (SDM) is an associative memory model based on the mathematical properties of high-dimensional binary address spaces. Holland's genetic algorithms are a search technique for high-dimensional spaces inspired by the evolutionary processes of DNA. Genetic Memory is a hybrid of these two systems, in which the memory uses a genetic algorithm to dynamically reconfigure its physical storage locations to reflect correlations between the stored addresses and data. This architecture is designed to maximize the ability of the system to scale up to handle real-world problems.

  12. Static power reduction for midpoint-terminated busses

    DOEpatents

    Coteus, Paul W [Yorktown Heights, NY; Takken, Todd [Brewster, NY

    2011-01-18

    A memory system is disclosed which is comprised of a memory controller and addressable memory devices such as DRAMs. The invention provides a programmable register to control the high vs. low drive state of each bit of a memory system address and control bus during periods of bus inactivity. In this way, termination voltage supply current can be minimized, while permitting selected bus bits to be driven to a required state. This minimizes termination power dissipation while not affecting memory system performance. The technique can be extended to work for other high-speed busses as well.

  13. Distributed Memory Parallel Computing with SEAWAT

    NASA Astrophysics Data System (ADS)

    Verkaik, J.; Huizer, S.; van Engelen, J.; Oude Essink, G.; Ram, R.; Vuik, K.

    2017-12-01

    Fresh groundwater reserves in coastal aquifers are threatened by sea-level rise, extreme weather conditions, increasing urbanization and associated groundwater extraction rates. To counteract these threats, accurate high-resolution numerical models are required to optimize the management of these precious reserves. The major model drawbacks are long run times and large memory requirements, limiting the predictive power of these models. Distributed memory parallel computing, in which the problem is divided over multiple processor cores, is an efficient technique for reducing run times and memory requirements. A new Parallel Krylov Solver (PKS) for SEAWAT is presented. PKS has recently been applied to MODFLOW and includes Conjugate Gradient (CG) and Biconjugate Gradient Stabilized (BiCGSTAB) linear accelerators. Both accelerators are preconditioned by an overlapping additive Schwarz preconditioner such that: a) subdomains are partitioned using Recursive Coordinate Bisection (RCB) load balancing, b) each subdomain uses local memory only and communicates with other subdomains by Message Passing Interface (MPI) within the linear accelerator, and c) the solver is fully integrated in SEAWAT. Within SEAWAT, the PKS-CG solver replaces the Preconditioned Conjugate Gradient (PCG) solver for solving the variable-density groundwater flow equation, and the PKS-BiCGSTAB solver replaces the Generalized Conjugate Gradient (GCG) solver for solving the advection-diffusion equation. PKS supports the third-order Total Variation Diminishing (TVD) scheme for computing advection. Benchmarks were performed on the Dutch national supercomputer (https://userinfo.surfsara.nl/systems/cartesius) using up to 128 cores, for a synthetic 3D Henry model (100 million cells) and the real-life Sand Engine model (about 10 million cells). 
    The Sand Engine model was used to investigate the potential effect of the long-term morphological evolution of a large sand replenishment and of climate change on fresh groundwater resources. Speed-ups of up to 40 were obtained with the new PKS solver.

  14. Fabrication of overlaid nanopattern arrays for plasmon memory

    NASA Astrophysics Data System (ADS)

    Okabe, Takao; Wadayama, Hisahiro; Taniguchi, Jun

    2018-01-01

    Stacking techniques for nanopattern arrays are attracting attention for fabricating next-generation data storage such as plasmon memory. The technique produces multiple overlaid nanopatterns made by nanoimprint lithography. In this structure, several metal nanopatterned layers and resin spacer layers are overlaid alternately. The horizontal position of nanopatterns relative to the underlying nanopatterns and the thickness of the resin spacer layer must be controlled accurately, because these parameters affect the reading performance and capacity of the plasmon memory. In this study, we developed a new alignment mark for fabricating multiple overlaid nanopatterns. Alignment accuracy on the order of 300 nm was demonstrated for Ag nanopatterns in two layers. The alignment mark can also measure the spacer thickness; the relationship between spacer thickness and the position of the scale bar on the alignment mark was measured. The usefulness of the alignment mark for high-density plasmon memory is shown.

  15. 76 FR 24409 - Proposed Amendment of Class E Airspace; Ava, MO

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-02

    ...) at Bill Martin Memorial Airport, Ava, MO, has made this action necessary for the safety and management of Instrument Flight Rules (IFR) operations at Bill Martin Memorial Airport. DATES: Comments must... from 700 feet above the surface for standard instrument approach procedures at Bill Martin Memorial...

  16. Rocket Engine Health Management: Early Definition of Critical Flight Measurements

    NASA Technical Reports Server (NTRS)

    Christenson, Rick L.; Nelson, Michael A.; Butas, John P.

    2003-01-01

    The NASA-led Space Launch Initiative (SLI) program has established key requirements related to safety, reliability, launch availability and operations cost to be met by the next generation of reusable launch vehicles. Key to meeting these requirements will be an integrated vehicle health management (IVHM) system that includes sensors, harnesses, software, memory, and processors. Such a system must be integrated across all the vehicle subsystems and meet component, subsystem, and system requirements relative to fault detection, fault isolation, and false alarm rate. The purpose of this activity is to evolve techniques for defining critical flight engine system measurements early within the definition of an engine health management system (EHMS). Two approaches, performance-based and failure mode-based, are integrated to provide a proposed set of measurements to be collected. This integrated approach is applied to MSFC's MC-1 engine. Early identification of measurements supports early identification of candidate sensor systems whose design and impacts to the engine components must be considered in engine design.

  17. When Kids Act Out: A Comparison of Embodied Methods to Improve Children's Memory for a Story

    ERIC Educational Resources Information Center

    Berenhaus, Molly; Oakhill, Jane; Rusted, Jennifer

    2015-01-01

    Over the last decade, embodied cognition, the idea that sensorimotor processes facilitate higher cognitive processes, has proven useful for improving children's memory for a story. In order to compare the benefits of two embodiment techniques, active experiencing (AE) and indexing, for children's memory for a story, we compared the immediate…

  18. Review of the Literature Regarding Early Memories and Their Emerging Use in Projective Spiritual Assessment.

    ERIC Educational Resources Information Center

    Bustrum, Joy M.

    This doctoral research seeks to demonstrate the clinical utility of early memories by reviewing the current literature and providing a rationale for extending this research into the spiritual arena by highlighting the lack of available projective spiritual measures. Specific areas covered include an overview of early memory theory, technique and…

  19. Digital and optical shape representation and pattern recognition; Proceedings of the Meeting, Orlando, FL, Apr. 4-6, 1988

    NASA Technical Reports Server (NTRS)

    Juday, Richard D. (Editor)

    1988-01-01

    The present conference discusses topics in pattern-recognition correlator architectures, digital stereo systems, geometric image transformations and their applications, topics in pattern recognition, filter algorithms, object detection and classification, shape representation techniques, and model-based object recognition methods. Attention is given to edge-enhancement preprocessing using liquid crystal TVs, massively-parallel optical data base management, three-dimensional sensing with polar exponential sensor arrays, the optical processing of imaging spectrometer data, hybrid associative memories and metric data models, the representation of shape primitives in neural networks, and the Monte Carlo estimation of moment invariants for pattern recognition.

  20. Rambrain - a library for virtually extending physical memory

    NASA Astrophysics Data System (ADS)

    Imgrund, Maximilian; Arth, Alexander

    2017-08-01

    We introduce Rambrain, a user space library that manages memory consumption of your code. Using Rambrain you can overcommit memory over the size of physical memory present in the system. Rambrain takes care of temporarily swapping out data to disk and can handle multiples of the physical memory size present. Rambrain is thread-safe, OpenMP and MPI compatible and supports Asynchronous IO. The library was designed to require minimal changes to existing programs and to be easy to use.

  1. Formal verification of a set of memory management units

    NASA Technical Reports Server (NTRS)

    Schubert, E. Thomas; Levitt, K.; Cohen, Gerald C.

    1992-01-01

    This document describes the verification of a set of memory management units (MMU). The verification effort demonstrates the use of hierarchical decomposition and abstract theories. The MMUs can be organized into a complexity hierarchy. Each new level in the hierarchy adds a few significant features or modifications to the lower level MMU. The units described include: (1) a page check translation look-aside module (TLM); (2) a page check TLM with supervisor line; (3) a base bounds MMU; (4) a virtual address translation MMU; and (5) a virtual address translation MMU with memory resident segment table.

  2. Optical mass memory system (AMM-13). AMM/DBMS interface control document

    NASA Technical Reports Server (NTRS)

    Bailey, G. A.

    1980-01-01

    The baseline for the external interfaces of a 10^13-bit optical archival mass memory system (AMM-13) is established. The types of interfaces addressed include data transfer; the AMM-13, Data Base Management System, and NASA End-to-End Data System computer interconnect; data/control input and output interfaces; test input data source; file management; and the facilities interface.

  3. Architecture of security management unit for safe hosting of multiple agents

    NASA Astrophysics Data System (ADS)

    Gilmont, Tanguy; Legat, Jean-Didier; Quisquater, Jean-Jacques

    1999-04-01

    In such growing areas as remote applications in large public networks, electronic commerce, digital signatures, intellectual property and copyright protection, and even operating system extensibility, the hardware security level offered by existing processors is insufficient. They lack protection mechanisms that prevent the user from tampering with critical data owned by those applications. Some devices, such as smart cards, are exceptions, but they lack the processing power and memory that such applications demand. This paper proposes an architecture for a secure processor in which the classical memory management unit is extended into a new security management unit. It allows ciphered code execution and ciphered data processing. An internal permanent memory can store cipher keys and critical data for several client agents simultaneously. The ordinary supervisor privilege scheme is replaced by a privilege inheritance mechanism that is better suited to operating system extensibility. The result is a secure processor that has hardware support for extensible multitask operating systems and can be used both for general applications and for critical applications needing strong protection. The security management unit and the internal permanent memory can be added to an existing CPU core without loss of performance, and they do not require the core to be modified.

  4. Tracking the fear engram: the lateral amygdala is an essential locus of fear memory storage.

    PubMed

    Schafe, Glenn E; Doyère, Valérie; LeDoux, Joseph E

    2005-10-26

    Although it is believed that different types of memories are localized in discrete regions of the brain, concrete experimental evidence of the existence of such engrams is often elusive. Despite fear memory being one of the best-characterized memory systems of the brain, the question of where fear memories are localized in the brain remains a hotly debated issue. Here, we combine site-specific behavioral pharmacology with multisite electrophysiological recording techniques to show that the lateral nucleus of the amygdala, long thought to be critical for the acquisition of fear memories, is also an essential locus of fear memory storage.

  5. Command and Control Software Development Memory Management

    NASA Technical Reports Server (NTRS)

    Joseph, Austin Pope

    2017-01-01

    This internship was initially meant to cover the implementation of unit test automation for a NASA ground control project. As is often the case with large development projects, the scope and breadth of the internship changed. Instead, the internship focused on finding and correcting memory leaks and errors as reported by a COTS software product meant to track such issues. Memory leaks come in many different flavors and some of them are more benign than others. On the extreme end a program might be dynamically allocating memory and not correctly deallocating it when it is no longer in use. This is called a direct memory leak and in the worst case can use all the available memory and crash the program. If the leaks are small they may simply slow the program down which, in a safety critical system (a system for which a failure or design error can cause a risk to human life), is still unacceptable. The ground control system is managed in smaller sub-teams, referred to as CSCIs. The CSCI that this internship focused on is responsible for monitoring the health and status of the system. This team's software had several methods/modules that were leaking significant amounts of memory. Since most of the code in this system is safety-critical, correcting memory leaks is a necessity.

  6. SEU hardened memory cells for a CCSDS Reed Solomon encoder

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whitaker, S.; Canaris, J.; Liu, K.

    This paper reports on a design technique to harden CMOS memory circuits against Single Event Upset (SEU) in the space environment. The design technique provides a recovery mechanism that is independent of the shape of the upsetting event. A RAM cell and a flip-flop design are presented to demonstrate the method. The flip-flop was used in the control circuitry of a Reed Solomon encoder designed for the Space Station and Explorer platforms.

  7. Military Nutrition Research: Eight Tasks to Address Medical Factors Limiting Soldier Effectiveness

    DTIC Science & Technology

    2005-10-01

    a procedure and staging area; b) a new transgenic laboratory with seven adjacent animal rooms; c) seven additional animal rooms for other breeding ...Effect of Dietary Amino Acids on Spatial Memory in Rats Sleep Deprived by the Flower-Pot Technique - Bradley Youngblood, David Elkins, Gennady Smagin...1997. 132. Youngblood BD, Zhou J, Smagin GN, Ryan DH, Harris RBS. Sleep deprivation by the "flower pot" technique and spatial reference memory

  8. Quantum memory with a controlled homogeneous splitting

    NASA Astrophysics Data System (ADS)

    Hétet, G.; Wilkowski, D.; Chanelière, T.

    2013-04-01

    We propose a quantum memory protocol in which an input light field can be stored in and released from a single ground-state atomic ensemble by dynamically controlling the strength of an external static and homogeneous field. The technique relies on the adiabatic following of a polaritonic excitation onto a state for which forward collective radiative emission is forbidden. The resemblance to the archetypal electromagnetically induced transparency is only formal, because no ground-state coherence-based slow-light propagation is considered here. Compared to the other grand category of protocols derived from the photon-echo technique, our approach involves only a homogeneous static field. We discuss two physical situations where the effect can be observed, and show that in the limit where the excited-state lifetime is longer than the storage time, the protocols are perfectly efficient and noise-free. We compare the technique with other quantum memories and propose atomic systems in which the experiment can be realized.

  9. Thermorheological characteristics and comparison of shape memory polymers fabricated by novel 3D printing technique

    NASA Astrophysics Data System (ADS)

    Hassan, Rizwan Ul; Jo, Soohwan; Seok, Jongwon

    The feasibility of fabricating shape memory polymers (SMPs) was investigated using a customized 3-dimensional (3D) printing technique with an excellent resolution of better than 100 microns. The thermorheological effects of the SMPs were adjusted by contact and non-contact triggering, which led to excellent shape recoveries of 100% and 99.89%, respectively. Thermogravimetric analyses of the SMPs showed only minor weight loss, revealing good thermal stability at higher temperatures. The viscoelastic properties of the SMPs were measured using dynamic mechanical analyses, exhibiting increased viscous and elastic characteristics. The mechanical strength, thermal stability, and viscoelastic properties of the two SMPs [di(ethylene glycol) dimethacrylate (DEGDMA) and poly(ethylene glycol) dimethacrylate (PEGDMA)] were compared to investigate their shape memory behavior. This novel 3D printing technique can serve as a promising method for fabricating smart materials with increased accuracy in a cost-effective manner.

  10. Application-Controlled Demand Paging for Out-of-Core Visualization

    NASA Technical Reports Server (NTRS)

    Cox, Michael; Ellsworth, David; Kutler, Paul (Technical Monitor)

    1997-01-01

    In the area of scientific visualization, input data sets are often very large. In visualization of Computational Fluid Dynamics (CFD) in particular, input data sets today can surpass 100 Gbytes, and are expected to scale with the ability of supercomputers to generate them. Some visualization tools already partition large data sets into segments, and load appropriate segments as they are needed. However, this does not remove the problem for two reasons: 1) there are data sets for which even the individual segments are too large for the largest graphics workstations, and 2) many practitioners do not have access to workstations with the memory capacity required to load even a segment, especially since state-of-the-art visualization tools tend to be developed by researchers with much more powerful machines. When the size of the data that must be accessed is larger than the size of memory, some form of virtual memory is simply required. This may be by segmentation, by paging, or by paged segments. In this paper we demonstrate that complete reliance on operating system virtual memory for out-of-core visualization leads to poor performance. We then describe a paged segment system that we have implemented, and explore the principles of memory management that can be employed by the application for out-of-core visualization. We show that application control over some of these can significantly improve performance. We show that sparse traversal can be exploited by loading only those data actually required. We show also that application control over data loading can be exploited by 1) loading data from alternative storage formats (in particular, 3-dimensional data stored in sub-cubes), and 2) controlling the page size. Both of these techniques effectively reduce the total memory required by visualization at run-time. We also describe experiments we have done on remote out-of-core visualization (when pages are read by demand from remote disk) whose results are promising.

  11. Analysis of memory use for improved design and compile-time allocation of local memory

    NASA Technical Reports Server (NTRS)

    Mcniven, Geoffrey D.; Davidson, Edward S.

    1986-01-01

    Trace analysis techniques are used to study memory referencing behavior for the purpose of designing local memories and determining how to allocate them for data and instructions. In an attempt to assess the inherent behavior of the source code, the trace analysis system described here reduced the effects of the compiler and host architecture on the trace by using a technique called flattening. The variables in the trace, their associated single-assignment values, and references are histogrammed on the basis of various parameters describing memory referencing behavior. Bounds are developed specifying the amount of memory space required to store all live values in a particular histogram class. The reduction achieved in main memory traffic by allocating local memory is specified for each class.

  12. Single-pass memory system evaluation for multiprogramming workloads

    NASA Technical Reports Server (NTRS)

    Conte, Thomas M.; Hwu, Wen-Mei W.

    1990-01-01

    Modern memory systems are composed of levels of cache memories, a virtual memory system, and a backing store. Varying more than a few design parameters and measuring the performance of such systems has traditionally been constrained by the high cost of simulation. Recently introduced models of cache performance reduce the cost of simulation, but at the expense of accuracy of performance prediction. Stack-based methods predict performance accurately using one pass over the trace for all cache sizes, but these techniques have been limited to fully associative organizations. This paper presents a stack-based method of evaluating the performance of cache memories using a recurrence/conflict model for the miss ratio. Unlike previous work, the performance of realistic cache designs, such as direct-mapped caches, is predicted by the method. The method also includes a new approach to the problem of the effects of multiprogramming. This new technique separates the characteristics of the individual program from those of the workload. The recurrence/conflict method is shown to be practical, general, and powerful by comparing its performance to that of a popular traditional cache simulator. The authors expect that the availability of such a tool will have a large impact on future architectural studies of memory systems.

  13. No evidence that 'fast-mapping' benefits novel learning in healthy older adults.

    PubMed

    Greve, Andrea; Cooper, Elisa; Henson, Richard N

    2014-07-01

    Much evidence suggests that the hippocampus is necessary for learning novel associations. Contrary to this, Sharon, Moscovitch, and Gilboa (2011) reported four amnesic patients with hippocampal damage who maintained the capacity to learn novel object-name associations when trained with a 'fast-mapping' (FM) technique. This technique therefore potentially offers an alternative route for learning novel information in populations experiencing memory problems. We examined this potential in healthy ageing, by comparing 24 Older and 24 Young participants who completed a FM procedure very similar to Sharon et al. (2011). As expected, the Older group showed worse memory than the Young group under standard explicit encoding (EE) instructions. However, the Older group continued to show worse performance under the FM procedure, with no evidence that FM alleviated their memory deficit. Indeed, performance was worse for the FM than the EE condition in both groups. Structural MRI scans confirmed reduced hippocampal grey-matter volume in the Older group, which correlated with memory performance across both groups and both EE/FM conditions. We conclude that FM does not help memory problems that occur with normal ageing, and discuss theoretical implications for memory theories. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.

  14. Boosting the FM-Index on the GPU: Effective Techniques to Mitigate Random Memory Access.

    PubMed

    Chacón, Alejandro; Marco-Sola, Santiago; Espinosa, Antonio; Ribeca, Paolo; Moure, Juan Carlos

    2015-01-01

    The recent advent of high-throughput sequencing machines producing large volumes of short reads has boosted the interest in efficient string searching techniques. As of today, many mainstream sequence alignment software tools rely on a special data structure, called the FM-index, which allows for fast exact searches in large genomic references. However, such searches translate into a pseudo-random memory access pattern, thus making memory access the limiting factor of all computation-efficient implementations, both on CPUs and GPUs. Here, we show that several strategies can be put in place to remove the memory bottleneck on the GPU: more compact indexes can be implemented by having more threads work cooperatively on larger memory blocks, and a k-step FM-index can be used to further reduce the number of memory accesses. The combination of those and other optimisations yields an implementation that is able to process about two Gbases of queries per second on our test platform, being about 8× faster than a comparable multi-core CPU version, and about 3× to 5× faster than the FM-index implementation on the GPU provided by the recently announced Nvidia NVBIO bioinformatics library.
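    A minimal FM-index with backward search can be sketched in a few lines. This naive version (rotation-based BWT construction and uncompressed occurrence tables, nothing like the paper's compact GPU layout) illustrates why each pattern character costs a pair of rank lookups into the index rather than a scan of the reference, and hence why the access pattern is pseudo-random.

```python
def bwt(text):
    """Burrows-Wheeler transform via sorted rotations (fine for small texts)."""
    text += '$'                      # unique sentinel, smaller than all chars
    rots = sorted(text[i:] + text[:i] for i in range(len(text)))
    return ''.join(r[-1] for r in rots)

def fm_index(text):
    L = bwt(text)
    # C[c] = number of characters in the text strictly smaller than c
    chars = sorted(set(L))
    C, total = {}, 0
    for c in chars:
        C[c] = total
        total += L.count(c)
    # occ[c][i] = occurrences of c in L[:i] (a rank table)
    occ = {c: [0] * (len(L) + 1) for c in chars}
    for i, ch in enumerate(L):
        for c in chars:
            occ[c][i + 1] = occ[c][i] + (1 if ch == c else 0)
    return C, occ, len(L)

def count_occurrences(pattern, C, occ, n):
    """Classic backward search: O(|pattern|) rank queries, two per character."""
    lo, hi = 0, n
    for c in reversed(pattern):
        if c not in C:
            return 0
        lo = C[c] + occ[c][lo]
        hi = C[c] + occ[c][hi]
        if lo >= hi:
            return 0
    return hi - lo
```

    Each iteration touches `occ` at two essentially unpredictable offsets, which is exactly the memory behavior the paper's GPU techniques set out to mitigate.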

  15. Final Project Report: Data Locality Enhancement of Dynamic Simulations for Exascale Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen, Xipeng

    The goal of this project is to develop a set of techniques and software tools to enhance the matching between memory accesses in dynamic simulations and the prominent features of modern and future manycore systems, alleviating the memory performance issues for exascale computing. In the first three years, the PI and his group have achieved significant progress towards the goal, producing a set of novel techniques for improving the memory performance and data locality in manycore systems, yielding 18 conference and workshop papers and 4 journal papers, and graduating 6 Ph.D. students. This report summarizes the research results of this project through that period.

  16. Application of holographic optical techniques to bulk memory.

    NASA Technical Reports Server (NTRS)

    Anderson, L. K.

    1971-01-01

    Current efforts to exploit the spatial redundancy and built-in imaging of holographic optical techniques to provide high information densities without critical alignment and tight mechanical tolerances are reviewed. Read-write-erase in situ operation is possible but is presently impractical because of limitations in available recording media. As these are overcome, it should prove feasible to build holographic bulk memories with mechanically replaceable hologram plates featuring very fast (less than 2 microsec) random access to large (greater than 100 million bit) data blocks and very high throughput (greater than 500 Mbit/sec). Using volume holographic storage it may eventually be possible to realize random-access mass memories which require no mechanical motion and yet provide very high capacity.

  17. Molecular dynamics simulations through GPU video games technologies

    PubMed Central

    Loukatou, Styliani; Papageorgiou, Louis; Fakourelis, Paraskevas; Filntisi, Arianna; Polychronidou, Eleftheria; Bassis, Ioannis; Megalooikonomou, Vasileios; Makałowski, Wojciech; Vlachakis, Dimitrios; Kossida, Sophia

    2016-01-01

    Bioinformatics is the scientific field that focuses on the application of computer technology to the management of biological information. Over the years, bioinformatics applications have been used to store, process and integrate biological and genetic information, using a wide range of methodologies. One of the principal techniques used to understand the physical movements of atoms and molecules is molecular dynamics (MD). MD is an in silico method to simulate the physical motions of atoms and molecules under certain conditions. It has become a key strategic technique and now plays an important role in many areas of the exact sciences, such as chemistry, biology, physics and medicine. Due to their complexity, MD calculations can require enormous amounts of computer memory and time, and their execution has therefore been a big problem. Despite the huge computational cost, molecular dynamics has traditionally been implemented on computers with a central processing unit (CPU). Graphics processing unit (GPU) computing technology was first designed with the goal of improving video games, by rapidly creating and displaying images in a frame buffer such as a screen. The hybrid GPU-CPU implementation, combined with parallel computing, is a novel technology to perform a wide range of calculations. GPUs have been proposed and used to accelerate many scientific computations including MD simulations. Herein, we describe the new methodologies developed initially for video games and how they are now applied in MD simulations. PMID:27525251

  18. A pilot study examining functional brain activity 6 months after memory retraining in MS: the MEMREHAB trial.

    PubMed

    Dobryakova, Ekaterina; Wylie, Glenn R; DeLuca, John; Chiaravalloti, Nancy D

    2014-09-01

    Cognitive impairment in individuals with multiple sclerosis (MS) is now well recognized. One of the most common cognitive deficits is found in memory functioning, largely due to impaired acquisition. We examined functional brain activity 6 months after memory retraining in individuals with MS. The current report presents long term follow-up results from a randomized clinical trial on a memory rehabilitation protocol known as the modified Story Memory Technique. Behavioral memory performance and brain activity of all participants were evaluated at baseline, immediately after treatment, and 6 months after treatment. Results revealed that previously observed increases in patterns of cerebral activation during learning immediately after memory training were maintained 6 months post training.

  19. Improved memory loading techniques for the TSRV display system

    NASA Technical Reports Server (NTRS)

    Easley, W. C.; Lynn, W. A.; Mcluer, D. G.

    1986-01-01

    A recent upgrade of the TSRV research flight system at NASA Langley Research Center retained the original monochrome display system. However, the display memory loading equipment was replaced, requiring the design and development of new methods of performing this task. This paper describes the new techniques developed to load memory in the display system. An outdated paper tape method for loading the BOOTSTRAP control program was replaced by EPROM storage of the characters contained on the tape. Rather than move a tape past an optical reader, a counter was implemented which steps sequentially through EPROM addresses and presents the same data to the loader circuitry. A cumbersome cassette tape method for loading the applications software was replaced with a floppy disk method using a microprocessor terminal installed as part of the upgrade. The cassette memory image was transferred to disk and a specific software loader was written for the terminal which duplicates the function of the cassette loader.
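    The counter-for-tape substitution can be sketched as a small simulation: an address counter steps through an EPROM image and presents each byte to the loader, exactly as the tape once moved past the optical reader. The image bytes below are hypothetical, not from the TSRV system.

```python
def eprom_boot_loader(eprom_image):
    """Mimic the paper-tape reader: a counter steps sequentially through
    EPROM addresses and presents each byte to the loader circuitry."""
    address = 0                       # the counter that replaces tape motion
    while address < len(eprom_image):
        yield eprom_image[address]    # data lines presented to the loader
        address += 1                  # step the counter

# hypothetical 3-byte bootstrap image
bootstrap = bytes([0x3E, 0x01, 0x76])
loaded = list(eprom_boot_loader(bootstrap))
```

    Because the loader sees an identical byte stream, the downstream circuitry needs no changes when the tape is retired.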

  20. Reed Solomon codes for error control in byte organized computer memory systems

    NASA Technical Reports Server (NTRS)

    Lin, S.; Costello, D. J., Jr.

    1984-01-01

    A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. In LSI and VLSI technology, memories are often organized on a multiple bit (or byte) per chip basis. For example, some 256K-bit DRAM's are organized in 32Kx8 bit-bytes. Byte oriented codes such as Reed Solomon (RS) codes can provide efficient low overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. Some special decoding techniques for extended single- and double-error-correcting RS codes which are capable of high speed operation are presented. These techniques are designed to find the error locations and the error values directly from the syndrome without having to use the iterative algorithm to find the error locator polynomial.
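    Direct-from-syndrome decoding of a single error can be sketched over GF(16). This toy example (the field, the primitive polynomial x⁴ + x + 1, and the zero-codeword test case are illustrative choices, not taken from the paper) recovers the error location as log(S₂/S₁) and the error value as S₁²/S₂, with no iterative error-locator computation.

```python
# GF(16) arithmetic, primitive polynomial x^4 + x + 1 (alpha = 2)
EXP, LOG = [0] * 30, [0] * 16
x = 1
for i in range(15):
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & 0x10:
        x ^= 0x13                 # reduce modulo x^4 + x + 1
for i in range(15, 30):
    EXP[i] = EXP[i - 15]          # doubled table avoids modular indexing

def gf_mul(a, b):
    return 0 if 0 in (a, b) else EXP[LOG[a] + LOG[b]]

def syndromes(received):
    """S_k = r(alpha^k) for k = 1, 2, evaluated by Horner's rule."""
    result = []
    for k in (1, 2):
        s = 0
        for coeff in reversed(received):
            s = gf_mul(s, EXP[k]) ^ coeff
        result.append(s)
    return result

def correct_single_error(received):
    """Single-error decoding straight from the syndrome: position is
    log(S2/S1), value is S1^2/S2 -- no error-locator polynomial."""
    s1, s2 = syndromes(received)
    if s1 == 0 and s2 == 0:
        return list(received)                     # no error detected
    pos = (LOG[s2] - LOG[s1]) % 15                # alpha^pos = S2 / S1
    val = EXP[(2 * LOG[s1] - LOG[s2]) % 15]       # S1^2 / S2
    fixed = list(received)
    fixed[pos] ^= val
    return fixed
```

    With only table lookups and XORs on the critical path, this style of decoder maps naturally onto the high-speed hardware the paper targets.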

  1. CMOS imager for pointing and tracking applications

    NASA Technical Reports Server (NTRS)

    Sun, Chao (Inventor); Pain, Bedabrata (Inventor); Yang, Guang (Inventor); Heynssens, Julie B. (Inventor)

    2006-01-01

    Systems and techniques to realize pointing and tracking applications with CMOS imaging devices. In general, in one implementation, the technique includes: sampling multiple rows and multiple columns of an active pixel sensor array into a memory array (e.g., an on-chip memory array), and reading out the multiple rows and multiple columns sampled in the memory array to provide image data with reduced motion artifact. Various operation modes may be provided, including TDS, CDS, CQS, a tracking mode to read out multiple windows, and/or a mode employing a sample-first-read-later readout scheme. The tracking mode can take advantage of a diagonal switch array. The diagonal switch array, the active pixel sensor array and the memory array can be integrated onto a single imager chip with a controller. This imager device can be part of a larger imaging system for both space-based applications and terrestrial applications.

  2. Coherent Spin Control at the Quantum Level in an Ensemble-Based Optical Memory.

    PubMed

    Jobez, Pierre; Laplane, Cyril; Timoney, Nuala; Gisin, Nicolas; Ferrier, Alban; Goldner, Philippe; Afzelius, Mikael

    2015-06-12

    Long-lived quantum memories are essential components of a long-standing goal of remote distribution of entanglement in quantum networks. These can be realized by storing the quantum states of light as single-spin excitations in atomic ensembles. However, spin states are often subjected to different dephasing processes that limit the storage time, which in principle could be overcome using spin-echo techniques. Theoretical studies suggest this to be challenging due to unavoidable spontaneous emission noise in ensemble-based quantum memories. Here, we demonstrate spin-echo manipulation of a mean spin excitation of 1 in a large solid-state ensemble, generated through storage of a weak optical pulse. After a storage time of about 1 ms we optically read out the spin excitation with a high signal-to-noise ratio. Our results pave the way for long-duration optical quantum storage using spin-echo techniques for any ensemble-based memory.

  3. Automatic Generation of Directive-Based Parallel Programs for Shared Memory Parallel Systems

    NASA Technical Reports Server (NTRS)

    Jin, Hao-Qiang; Yan, Jerry; Frumkin, Michael

    2000-01-01

    The shared-memory programming model is a very effective way to achieve parallelism on shared memory parallel computers. As great progress has been made in hardware and software technologies, the performance of parallel programs with compiler directives has demonstrated large improvements. The introduction of OpenMP directives, the industrial standard for shared-memory programming, has minimized the issue of portability. Due to its ease of programming and its good performance, the technique has become very popular. In this study, we have extended CAPTools, a computer-aided parallelization toolkit, to automatically generate directive-based, OpenMP, parallel programs. We outline techniques used in the implementation of the tool and present test results on the NAS parallel benchmarks and ARC3D, a CFD application. This work demonstrates the great potential of using computer-aided tools to quickly port parallel programs and also achieve good performance.

  4. High-performance Raman memory with spatio-temporal reversal

    NASA Astrophysics Data System (ADS)

    Vernaz-Gris, Pierre; Tranter, Aaron D.; Everett, Jesse L.; Leung, Anthony C.; Paul, Karun V.; Campbell, Geoff T.; Lam, Ping Koy; Buchler, Ben C.

    2018-05-01

    A number of techniques exist to use an ensemble of atoms as a quantum memory for light. Many of these propose to use backward retrieval as a way to improve the storage and recall efficiency. We report on a demonstration of an off-resonant Raman memory that uses backward retrieval to achieve an efficiency of 65 ± 6% at a storage time of one pulse duration. The memory has a characteristic decay time of 60 μs, corresponding to a delay-bandwidth product of 160.

  5. A Survey of Techniques for Modeling and Improving Reliability of Computing Systems

    DOE PAGES

    Mittal, Sparsh; Vetter, Jeffrey S.

    2015-04-24

    Recent trends of aggressive technology scaling have greatly exacerbated the occurrences and impact of faults in computing systems. This has made `reliability' a first-order design constraint. To address the challenges of reliability, several techniques have been proposed. In this study, we provide a survey of architectural techniques for improving resilience of computing systems. We especially focus on techniques proposed for microarchitectural components, such as processor registers, functional units, cache and main memory etc. In addition, we discuss techniques proposed for non-volatile memory, GPUs and 3D-stacked processors. To underscore the similarities and differences of the techniques, we classify them based on their key characteristics. We also review the metrics proposed to quantify vulnerability of processor structures. Finally, we believe that this survey will help researchers, system-architects and processor designers in gaining insights into the techniques for improving reliability of computing systems.

  7. Energy reduction through voltage scaling and lightweight checking

    NASA Astrophysics Data System (ADS)

    Kadric, Edin

    As the semiconductor roadmap reaches smaller feature sizes and the end of Dennard Scaling, design goals change, and managing the power envelope often dominates delay minimization. Voltage scaling remains a powerful tool to reduce energy. We find that it results in about 60% geomean energy reduction on top of other common low-energy optimizations with 22nm CMOS technology. However, when voltage is reduced, it becomes easier for noise and particle strikes to upset a node, potentially causing Silent Data Corruption (SDC). The 60% energy reduction, therefore, comes with a significant drop in reliability. Duplication with checking and triple-modular redundancy are traditional approaches used to combat transient errors, but spending 2--3x the energy for redundant computation can diminish or reverse the benefits of voltage scaling. As an alternative, we explore the opportunity to use checking operations that are cheaper than the base computation they are guarding. We devise a classification system for applications and their lightweight checking characteristics. In particular, we identify and evaluate the effectiveness of lightweight checks in a broad set of common tasks in scientific computing and signal processing. We find that the lightweight checks cost only a fraction of the base computation (0-25%) and allow us to recover the reliability losses from voltage scaling. Overall, we show about 50% net energy reduction without compromising reliability compared to operation at the nominal voltage. We use FPGAs (Field-Programmable Gate Arrays) in our work, although the same ideas can be applied to different systems. On top of voltage scaling, we explore other common low-energy techniques for FPGAs: transmission gates, gate boosting, power gating, low-leakage (high-Vth) processes, and dual-Vdd architectures. We do not scale voltage for memories, so lower voltages help us reduce logic and interconnect energy, but not memory energy. At lower voltages, memories become dominant, and we get diminishing returns from continuing to scale voltage. To ensure that memories do not become a bottleneck, we also design an energy-robust FPGA memory architecture, which attempts to minimize communication energy due to mismatches between application and architecture. We do this alongside application parallelism tuning. We show our techniques on a wide range of applications, including a large real-time system used for Wide-Area Motion Imaging (WAMI).
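    A classic instance of a check that is cheaper than the computation it guards is Freivalds' probabilistic verification of a matrix product: each round costs O(n²) (three matrix-vector products), versus O(n³) to recompute the product. This generic sketch is an illustration of the lightweight-checking idea, not code from the dissertation.

```python
import random

def freivalds_check(A, B, C, rounds=10):
    """Probabilistically verify A @ B == C. Each failed round proves
    corruption; passing all rounds leaves at most a 2^-rounds chance
    of a missed error."""
    n = len(A)
    for _ in range(rounds):
        r = [random.randint(0, 1) for _ in range(n)]       # random 0/1 vector
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False            # certainly corrupted
    return True                     # correct with high probability
```

    In a voltage-scaled setting, a failed check would trigger recomputation at a safe voltage, so the redundant work is paid only when an upset actually occurs.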

  8. Neural systems and time course of proactive interference in working memory.

    PubMed

    Du, Yingchun; Zhang, John X; Xiao, Zhuangwei; Wu, Renhua

    2007-01-01

    The storage of information in working memory suffers as a function of proactive interference. Much work using neuroimaging techniques has been done to reveal the brain mechanisms of interference resolution. However, less is known about the time course of this process. The event-related potential (ERP) method and standardized low-resolution brain electromagnetic tomography (sLORETA) were used in this study to uncover the time course of interference resolution in working memory. The anterior P2 was thought to reflect interference resolution; if so, this process occurs earlier in working memory than in long-term memory.

  9. Enhance, delete, incept: Manipulating hippocampus-dependent memories☆

    PubMed Central

    Spiers, Hugo J.; Bendor, Daniel

    2014-01-01

    Here we provide a brief overview of recent research on memory manipulation. We focus primarily on memories for which the hippocampus is thought to be required due to its central importance in the study of memory. The repertoire of methods employed is expanding and includes optogenetics, transcranial stimulation, deep brain stimulation, cued reactivation during sleep and the use of pharmacological agents. In addition, the possible mechanisms underlying these memory changes have been investigated using techniques such as single unit recording and functional magnetic resonance imaging (fMRI). This article is part of a Special Issue entitled ‘Memory enhancement’. PMID:24397964

  10. How Managers' everyday decisions create or destroy your company's strategy.

    PubMed

    Bower, Joseph L; Gilbert, Clark G

    2007-02-01

    Senior executives have long been frustrated by the disconnection between the plans and strategies they devise and the actual behavior of the managers throughout the company. This article approaches the problem from the ground up, recognizing that every time a manager allocates resources, that decision moves the company either into or out of alignment with its announced strategy. A well-known story--Intel's exit from the memory business--illustrates this point. When discussing what businesses Intel should be in, Andy Grove asked Gordon Moore what they would do if Intel were a company that they had just acquired. When Moore answered, "Get out of memory," they decided to do just that. It turned out, though, that Intel's revenues from memory were by this time only 4% of total sales. Intel's lower-level managers had already exited the business. What Intel hadn't done was to shut down the flow of research funding into memory (which was still eating up one-third of all research expenditures); nor had the company announced its exit to the outside world. Because divisional and operating managers, as well as customers and capital markets, have such a powerful impact on the realized strategy of the firm, senior management might consider focusing less on the company's formal strategy and more on the processes by which the company allocates resources. Top managers must know the track record of the people who are making resource allocation proposals; recognize the strategic issues at stake; reach down to operational managers to work across division lines; frame resource questions to reflect the corporate perspective, especially when large sums of money are involved and conditions are highly uncertain; and create a new context that allows top executives to circumvent the regular resource allocation process when necessary.

  11. Practical applications of remote sensing technology

    NASA Technical Reports Server (NTRS)

    Whitmore, Roy A., Jr.

    1990-01-01

    Land managers increasingly are becoming dependent upon remote sensing and automated analysis techniques for information gathering and synthesis. Remote sensing and geographic information system (GIS) techniques provide quick and economical information gathering for large areas. The outputs of remote sensing classification and analysis are most effective when combined with a total natural resources data base within the capabilities of a computerized GIS. Some examples are presented of the successes, as well as the problems, in integrating remote sensing and geographic information systems. The need to exploit remotely sensed data and the potential that geographic information systems offer for managing and analyzing such data continues to grow. New microcomputers with vastly enlarged memory, multi-fold increases in operating speed, and storage capacity previously available only on mainframe computers are now a reality. Improved raster GIS software systems have been developed for these high-performance microcomputers. Vector GIS systems previously reserved for mini and mainframe systems are available to operate on these enhanced microcomputers. One of the more exciting areas that is beginning to emerge is the integration of both raster and vector formats on a single computer screen. This technology will allow satellite imagery or digital aerial photography to be presented as a background to a vector display.

  12. Nano-Localized Thermal Analysis and Mapping of Surface and Sub-Surface Thermal Properties Using Scanning Thermal Microscopy (SThM).

    PubMed

    Pereira, Maria J; Amaral, Joao S; Silva, Nuno J O; Amaral, Vitor S

    2016-12-01

    Determining and acting on thermo-physical properties at the nanoscale is essential for understanding/managing heat distribution in micro/nanostructured materials and miniaturized devices. Adequate thermal nano-characterization techniques are required to address thermal issues compromising device performance. Scanning thermal microscopy (SThM) is a probing and acting technique based on atomic force microscopy using a nano-probe designed to act as a thermometer and resistive heater, achieving high spatial resolution. Enabling direct observation and mapping of thermal properties such as thermal conductivity, SThM is becoming a powerful tool with a critical role in several fields, from material science to device thermal management. We present an overview of the different thermal probes, followed by the contribution of SThM in three currently significant research topics. First, in thermal conductivity contrast studies of graphene monolayers deposited on different substrates, SThM proves itself a reliable technique to clarify the intriguing thermal properties of graphene, which is considered an important contributor to improve the performance of downscaled devices and materials. Second, SThM's ability to perform sub-surface imaging is highlighted by thermal conductivity contrast analysis of polymeric composites. Finally, an approach to induce and study local structural transitions in ferromagnetic shape memory alloy Ni-Mn-Ga thin films using localized nano-thermal analysis is presented.

  13. Clique-Based Neural Associative Memories with Local Coding and Precoding.

    PubMed

    Mofrad, Asieh Abolpour; Parker, Matthew G; Ferdosi, Zahra; Tadayon, Mohammad H

    2016-08-01

    Techniques from coding theory are able to improve the efficiency of neuroinspired and neural associative memories by forcing some construction and constraints on the network. In this letter, the approach is to embed coding techniques into neural associative memory in order to increase their performance in the presence of partial erasures. The motivation comes from recent work by Gripon, Berrou, and coauthors, which revisited Willshaw networks and presented a neural network with interacting neurons that partitioned into clusters. The model introduced stores patterns as small-size cliques that can be retrieved in spite of partial error. We focus on improving the success of retrieval by applying two techniques: doing a local coding in each cluster and then applying a precoding step. We use a slightly different decoding scheme, which is appropriate for partial erasures and converges faster. Although the ideas of local coding and precoding are not new, the way we apply them is different. Simulations show an increase in the pattern retrieval capacity for both techniques. Moreover, we use self-dual additive codes over field [Formula: see text], which have very interesting properties and a simple-graph representation.
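    The cluster-and-clique storage scheme revisited in this letter can be sketched in miniature. The cluster counts, patterns, and the simple erasure-filling rule below are illustrative simplifications of the Gripon-Berrou model, without the local coding and precoding steps the letter adds.

```python
from itertools import combinations

def store(patterns, clusters):
    """Store each pattern (one active unit per cluster) as a clique of
    binary edges between its (cluster, unit) nodes."""
    W = set()
    for p in patterns:
        nodes = [(c, p[c]) for c in range(clusters)]
        for u, v in combinations(nodes, 2):
            W.add((u, v))
            W.add((v, u))
    return W

def retrieve(partial, W, clusters, fanals):
    """Fill each erased cluster (marked None) with the unit having the
    most stored edges to the known part of the clique."""
    known = [(c, f) for c, f in enumerate(partial) if f is not None]
    out = list(partial)
    for c in range(clusters):
        if out[c] is None:
            scores = [sum(((c, f), k) in W for k in known)
                      for f in range(fanals)]
            out[c] = max(range(fanals), key=lambda f: scores[f])
    return out
```

    Because a stored pattern is a full clique, even a partially erased cue still touches every surviving edge of that clique, which is what makes retrieval under partial erasure possible.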

  14. Practical Verification & Safeguard Tools for C/C++

    DTIC Science & Technology

    2007-11-01

    735; RDDC Valcartier; November 2007. This document is the final report of a research project conducted in 2005-2006. The goal of this project... [table-of-contents fragments: 2.8 On Defects; 2.9 Memory Management Problems; 2.9.1 Use of Freed Memory; 2.9.2 Underallocated Memory for a...]

  15. Research on memory management in embedded systems

    NASA Astrophysics Data System (ADS)

    Huang, Xian-ying; Yang, Wu

    2005-12-01

    Memory is a scarce resource in embedded systems due to cost and size constraints. Applications in embedded systems therefore cannot use memory freely, as desktop applications do, yet data and code must still be stored in memory to run. The purpose of this paper is to save memory when developing embedded applications and to guarantee operation under limited-memory conditions. Embedded systems often have small memories and are required to run for a long time. Thus, one aim of this study is to construct an allocator that can allocate memory effectively, withstand long-running operation, and reduce memory fragmentation and exhaustion. Fragmentation and exhaustion are determined by the memory allocation algorithm; static memory allocation cannot produce fragmentation. This paper therefore attempts to find an effective dynamic allocation algorithm that reduces memory fragmentation. Data is the critical part that ensures an application runs correctly, and it takes up a large amount of memory. The amount of data that can be stored in a given amount of memory depends on the selected data structure. Techniques for designing application data in mobile phones are also explained and discussed.
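    One standard embedded answer to the fragmentation problem discussed above is a fixed-size block (pool) allocator. This sketch (block sizes and counts are arbitrary, and Python stands in for the C an embedded system would use) trades flexibility for O(1) allocation and freedom from external fragmentation, which suits long-running systems.

```python
class FixedBlockPool:
    """Minimal fixed-size-block allocator: every allocation hands out one
    block from a statically sized pool, so blocks can never splinter into
    unusable fragments and alloc/free are both O(1)."""

    def __init__(self, block_size, block_count):
        self.block_size = block_size
        self.pool = bytearray(block_size * block_count)   # backing storage
        self.free_list = list(range(block_count))         # free block indices

    def alloc(self):
        if not self.free_list:
            return None               # pool exhausted: caller must handle it
        return self.free_list.pop()

    def free(self, index):
        self.free_list.append(index)  # block becomes reusable immediately

    def view(self, index):
        """Writable view of one block's bytes."""
        off = index * self.block_size
        return memoryview(self.pool)[off:off + self.block_size]
```

    The cost is internal fragmentation (small objects waste the rest of their block), which is why embedded designs often keep several pools with different block sizes.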

  16. Preventing messaging queue deadlocks in a DMA environment

    DOEpatents

    Blocksome, Michael A; Chen, Dong; Gooding, Thomas; Heidelberger, Philip; Parker, Jeff

    2014-01-14

    Embodiments of the invention may be used to manage message queues in a parallel computing environment to prevent message queue deadlock. A direct memory access (DMA) controller of a compute node may determine when a messaging queue is full. In response, the DMA may generate an interrupt. An interrupt handler may stop the DMA and swap all descriptors from the full messaging queue into a larger queue (or enlarge the original queue). The interrupt handler then restarts the DMA. Alternatively, the interrupt handler stops the DMA, allocates a memory block to hold queue data, and then moves descriptors from the full messaging queue into the allocated memory block. The interrupt handler then restarts the DMA. During a normal messaging advance cycle, a messaging manager attempts to inject the descriptors in the memory block into other messaging queues until the descriptors have all been processed.
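    The queue-swap strategy can be sketched as a simulation. The class and function names below are invented for illustration; the real mechanism operates on hardware DMA descriptor rings, and the stop/restart of the DMA engine is elided.

```python
from collections import deque

class MessagingQueue:
    def __init__(self, capacity):
        self.capacity = capacity
        self.descriptors = deque()

    def full(self):
        return len(self.descriptors) >= self.capacity

def swap_to_larger(queue):
    """Interrupt handler (sketch): stop the DMA, move every descriptor
    into a queue twice the size, then restart the DMA."""
    bigger = MessagingQueue(queue.capacity * 2)
    bigger.descriptors.extend(queue.descriptors)
    return bigger

def inject(queue, descriptor, on_full=swap_to_larger):
    """Mimic the patented flow: a full queue raises an 'interrupt' whose
    handler swaps descriptors to a larger queue before injection proceeds,
    so the producer never deadlocks against a full queue."""
    if queue.full():
        queue = on_full(queue)        # interrupt handler runs here
    queue.descriptors.append(descriptor)
    return queue
```

    The key invariant is that injection always finds room, so a producer can never block forever waiting on a consumer that is itself blocked.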

  17. Autobiographical memory, interpersonal problem solving, and suicidal behavior in adolescent inpatients.

    PubMed

    Arie, Miri; Apter, Alan; Orbach, Israel; Yefet, Yael; Zalsman, Gil; Zalzman, Gil

    2008-01-01

    The aim of the study was to test Williams' (Williams JMG. Depression and the specificity of autobiographical memory. In: Rubin D, ed. Remembering Our Past: Studies in Autobiographical Memory. London: Cambridge University Press; 1996:244-267.) theory of suicidal behavior in adolescents and young adults by examining the relationship among suicidal behaviors, defective ability to retrieve specific autobiographical memories, impaired interpersonal problem solving, negative life events, repression, and hopelessness. Twenty-five suicidal adolescent and young adult inpatients (16.5 y +/- 2.5) were compared with 25 nonsuicidal adolescent and young adult inpatients (16.5 y +/- 2.5) and 25 healthy controls. Autobiographical memory was tested by a word association test; problem solving by the means-ends problem solving technique; negative life events by the Coddington scale; repression by the Life Style Index; hopelessness by the Beck scale; suicidal risk by the Plutchik scale, and suicide attempt by clinical history. Impairment in the ability to produce specific autobiographical memories, difficulties with interpersonal problem solving, negative life events, and repression were all associated with hopelessness and suicidal behavior. There were significant correlations among all the variables except for repression and negative life events. These findings support Williams' notion that generalized autobiographical memory is associated with deficits in interpersonal problem solving, negative life events, hopelessness, and suicidal behavior. The finding that defects in autobiographical memory are associated with suicidal behavior in adolescents and young adults may lead to improvements in the techniques of cognitive behavioral therapy in this age group.

  18. CoNNeCT Baseband Processor Module

    NASA Technical Reports Server (NTRS)

    Yamamoto, Clifford K.; Jedrey, Thomas C.; Gutrich, Daniel G.; Goodpasture, Richard L.

    2011-01-01

    A document describes the CoNNeCT Baseband Processor Module (BPM) based on an updated processor, memory technology, and field-programmable gate arrays (FPGAs). The BPM was developed from a requirement to provide sufficient computing power and memory storage to conduct experiments for a Software Defined Radio (SDR). The flight SDR uses the AT697 SPARC processor with on-chip data and instruction cache. The non-volatile memory has been increased from a 20-Mbit EEPROM (electrically erasable programmable read only memory) to a 4-Gbit Flash, managed by the RTAX2000 Housekeeper, allowing more programs and FPGA bit-files to be stored. The volatile memory has been increased from a 20-Mbit SRAM (static random access memory) to a 1.25-Gbit SDRAM (synchronous dynamic random access memory), providing additional memory space for more complex operating systems and programs to be executed on the SPARC. All memory is EDAC (error detection and correction) protected, while the SPARC processor implements fault protection via TMR (triple modular redundancy) architecture. Further capability over prior BPM designs includes the addition of a second FPGA to implement features beyond the resources of a single FPGA. Both FPGAs are implemented with Xilinx Virtex-II and are interconnected by a 96-bit bus to facilitate data exchange. Dedicated 1.25-Gbit SDRAMs are wired to each Xilinx FPGA to accommodate high rate data buffering for SDR applications as well as independent SpaceWire interfaces. The RTAX2000 manages scrubbing and configuration of each Xilinx FPGA.

  19. Detailed Design and Implementation of a Multiprogramming Operating System for Sixteen-Bit Microprocessors.

    DTIC Science & Technology

    1983-12-01

    4 Multiuser Support ... 11-5 User Interface ... 11-7 Inter-user Communications ... 11-7 Memory ... user will greatly help facilitate the learning process. Inter-User Communication. The inter-user communications of the operating system can be done using ... inter-user communications would be met by using one or both of them. Memory and File Management. Memory and file management is concerned with four basic

  20. Advanced Development of Certified OS Kernels

    DTIC Science & Technology

    2015-06-01

    It provides an infrastructure to map a physical page into multiple processes’ page maps in different address spaces. Their ownership mechanism ensures...of their shared memory infrastructure . Trap module The trap module specifies the behaviors of exception handlers and mCertiKOS system calls. In...layers), 1 pm for the shared memory infrastructure (3 layers), 3.5 pm for the thread management (10 layers), 1 pm for the process management (4 layers

  1. High-Performance, Radiation-Hardened Electronics for Space Environments

    NASA Technical Reports Server (NTRS)

    Keys, Andrew S.; Watson, Michael D.; Frazier, Donald O.; Adams, James H.; Johnson, Michael A.; Kolawa, Elizabeth A.

    2007-01-01

    The Radiation Hardened Electronics for Space Environments (RHESE) project endeavors to advance the current state-of-the-art in high-performance, radiation-hardened electronics and processors, ensuring successful performance of space systems required to operate within extreme radiation and temperature environments. Because RHESE is a project within the Exploration Technology Development Program (ETDP), RHESE's primary customers will be the human and robotic missions being developed by NASA's Exploration Systems Mission Directorate (ESMD) in partial fulfillment of the Vision for Space Exploration. Benefits are also anticipated for NASA's science missions to planetary and deep-space destinations. As a technology development effort, RHESE provides a broad-scoped, full spectrum of approaches to environmentally harden space electronics, including new materials, advanced design processes, reconfigurable hardware techniques, and software modeling of the radiation environment. The RHESE sub-project tasks are: Self-Reconfigurable Electronics for Extreme Environments, Radiation Effects Predictive Modeling, Radiation Hardened Memory, Single Event Effects (SEE) Immune Reconfigurable Field Programmable Gate Array (FPGA) (SIRF), Radiation Hardening by Software, Radiation Hardened High Performance Processors (HPP), Reconfigurable Computing, Low Temperature Tolerant MEMS by Design, and Silicon-Germanium (SiGe) Integrated Electronics for Extreme Environments. These nine sub-project tasks are managed by technical leads located across five NASA field centers: Ames Research Center, Goddard Space Flight Center, the Jet Propulsion Laboratory, Langley Research Center, and Marshall Space Flight Center. The overall RHESE integrated project management responsibility resides with NASA's Marshall Space Flight Center (MSFC).
Initial technology development emphasis within RHESE focuses on the hardening of Field Programmable Gate Arrays (FPGAs) and Field Programmable Analog Arrays (FPAAs) for use in reconfigurable architectures. As these component/chip level technologies mature, the RHESE project emphasis shifts to focus on efforts encompassing total processor hardening techniques and board-level electronic reconfiguration techniques featuring spare and interface modularity. This phased approach to distributing emphasis between technology developments provides hardened FPGAs/FPAAs for early mission infusion, then migrates to hardened, board-level, high-speed processors with associated memory elements and high-density storage for the longer-duration missions encountered for Lunar Outpost and Mars Exploration occurring later in the Constellation schedule.

  2. Exploring the use of memory colors for image enhancement

    NASA Astrophysics Data System (ADS)

    Xue, Su; Tan, Minghui; McNamara, Ann; Dorsey, Julie; Rushmeier, Holly

    2014-02-01

    Memory colors refer to those colors recalled in association with familiar objects. While some previous work has introduced this concept to assist digital image enhancement, its basis, i.e., on-screen memory colors, has not been appropriately investigated. In addition, the resulting adjustment methods have not been evaluated from a perceptual point of view. In this paper, we first perform a context-free perceptual experiment to establish the overall distributions of screen memory colors for three pervasive objects. Then, we use a context-based experiment to locate the most representative memory colors; at the same time, we investigate the interactions of memory colors between different objects. Finally, we show a simple yet effective application using representative memory colors to enhance digital images. A user study is performed to evaluate the performance of our technique.
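
    The kind of enhancement the abstract describes can be illustrated with a minimal sketch: pixels that lie near a prototype memory color are pulled partway toward it, while unrelated pixels are untouched. The prototype values, radius, and strength below are invented for illustration; the paper's representative memory colors come from its perceptual experiments:

```python
import numpy as np

# Hypothetical representative "memory colors" (RGB in [0, 1]); the real
# values would come from the paper's context-based experiment.
MEMORY_COLORS = {
    "sky":   np.array([0.45, 0.65, 0.95]),
    "grass": np.array([0.30, 0.55, 0.25]),
    "skin":  np.array([0.90, 0.70, 0.60]),
}

def enhance(image, target, radius=0.25, strength=0.5):
    """Shift pixels that lie near `target` partway toward it.

    image: (H, W, 3) float array in [0, 1]. Pixels farther than `radius`
    (Euclidean RGB distance) from the target are left unchanged.
    """
    dist = np.linalg.norm(image - target, axis=-1, keepdims=True)
    # Weight falls off linearly from `strength` at the target to 0 at `radius`.
    weight = strength * np.clip(1.0 - dist / radius, 0.0, 1.0)
    return image + weight * (target - image)

# A 1x2 "image": one pixel near the sky prototype, one far from it.
img = np.array([[[0.50, 0.60, 0.90], [0.90, 0.10, 0.10]]])
out = enhance(img, MEMORY_COLORS["sky"])
```

    Only the near-sky pixel moves toward the prototype; the red pixel is outside the radius and passes through unchanged.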

  3. Implementation of relational data base management systems on micro-computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, C.L.

    1982-01-01

    This dissertation describes an implementation of a Relational Data Base Management System on a microcomputer. A specific floppy-disk-based hardware platform called TERAK is used, and a high-level query interface similar to a subset of the SEQUEL language is provided. The system contains sub-systems such as I/O, file management, virtual memory management, query system, B-tree management, scanner, command interpreter, expression compiler, garbage collection, linked list manipulation, disk space management, etc. The software has been implemented to fulfill the following goals: (1) It is highly modularized. (2) The system is physically segmented into 16 logically independent, overlayable segments, in a way such that a minimal amount of memory is needed at execution time. (3) A virtual memory system is simulated that provides the system with seemingly unlimited memory space. (4) A language translator is applied to recognize user requests in the query language; its code generator produces compact code for the execution of UPDATE, DELETE, and QUERY commands. (5) A complete set of basic functions needed for on-line data base manipulation is provided through a friendly query interface. (6) Dependency on the environment (both software and hardware) is eliminated as much as possible, so that it would be easy to transplant the system to other computers. (7) Each relation is simulated as a sequential file. It is intended to be a highly efficient, single-user system suited for use by small or medium-sized organizations for, say, administrative purposes. Experiments show that quite satisfying results have indeed been achieved.

  4. Genetic Dissection of Learning and Memory in Mice

    PubMed Central

    Mineur, Yann S.; Crusio, Wim E.; Sluyter, Frans

    2004-01-01

    In this minireview, we discuss different strategies to dissect genetically the keystones of learning and memory. First, we broadly sketch the neurogenetic analysis of complex traits in mice. We then discuss two general strategies to find genes affecting learning and memory: candidate gene studies and whole genome searches. Next, we briefly review more recently developed techniques, such as microarrays and RNA interference. In addition, we focus on gene-environment interactions and endophenotypes. All sections are illustrated with examples from the learning and memory field, including a table summarizing the latest information about genes that have been shown to have effects on learning and memory. PMID:15656270

  5. Memory as behavior: The importance of acquisition and remembering strategies

    PubMed Central

    Delaney, Peter F.; Austin, John

    1998-01-01

    The study of memory has traditionally been the province of cognitive psychology, which has postulated different memory systems that store memory traces to explain remembering. Behavioral psychologists have been unsuccessful at empirically identifying the behavior that occurs during remembering because so much of it occurs rapidly and covertly. In addition, behavior analysts have generally been uninterested in studying transient phenomena such as memory. As a result, the cognitive interpretation has been the only one that has made and tested useful predictions. Recent experimental evidence acquired while having participants “think aloud” suggests that a behavioral approach to memory may provide a superior account of memory performance and allow applied scientists to observe and modify memory-related behavior with well-known applied behavior-analytic techniques. We review evidence supporting and extending the interpretation of memory provided by Palmer (1991), who described memory in terms of precurrent behavior that occurs at the time of acquisition in preparation for problem solving that occurs at the time of remembering. PMID:22477129

  6. Discrete-time Quantum Walks via Interchange Framework and Memory in Quantum Evolution

    NASA Astrophysics Data System (ADS)

    Dimcovic, Zlatko

    One of the newer and rapidly developing approaches in quantum computing is based on "quantum walks," which are quantum processes on discrete space that evolve in either discrete or continuous time and are characterized by mixing of components at each step. The idea emerged in analogy with the classical random walks and stochastic techniques, but these unitary processes are very different even as they have intriguing similarities. This thesis is concerned with the study of discrete-time quantum walks. The original motivation from classical Markov chains required that discrete-time quantum walks add an auxiliary Hilbert space, unrelated to the one in which the system evolves, in order to mix components in that space and then take the evolution steps accordingly (based on the state in that space). This additional, "coin," space is very often an internal degree of freedom like spin. We have introduced a general framework for construction of discrete-time quantum walks in a close analogy with the classical random walks with memory that is rather different from the standard "coin" approach. In this method there is no need to bring in a different degree of freedom, while the full state of the system is still described in the direct product of spaces (of states). The state can be thought of as an arrow pointing from the previous to the current site in the evolution, representing the one-step memory. The next step is then controlled by a single local operator assigned to each site in the space, acting quite like a scattering operator. This allows us to probe and solve some problems of interest that have not had successful approaches with "coined" walks. We construct and solve a walk on the binary tree, a structure of great interest but one that, until our result, lacked an explicit discrete-time quantum walk, due to difficulties in managing the coin spaces necessary in the standard approach.
Beyond algorithmic interests, the model based on memory allows one to explore effects of history on the quantum evolution and the subtle emergence of classical features as "memory" is explicitly kept for additional steps. We construct and solve a walk with an additional correlation step, finding interesting new features. On the other hand, the fact that the evolution is driven entirely by a local operator, not involving additional spaces, enables us to choose the Fourier transform as an operator completely controlling the evolution. This in turn allows us to combine the quantum walk approach with Fourier transform based techniques, something decidedly not possible in classical computational physics. We are developing a formalism for building networks manageable by walks constructed with this framework, based on the surprising efficiency of our framework in discovering internals of a simple network that we so far solved. Finally, in line with our expectation that the field of quantum walks can take cues from the rich history of development of the classical stochastic techniques, we establish starting points for the work on non-Abelian quantum walks, with a particular quantum-walk analog of the classical "card shuffling," the walk on the permutation group. In summary, this thesis presents a new framework for construction of discrete time quantum walks, employing and exploring memoried nature of unitary evolution. It is applied to fully solving the problems of: A walk on the binary tree and exploration of the quantum-to-classical transition with increased correlation length (history). It is then used for simple network discovery, and to lay the groundwork for analysis of complex networks, based on combined power of efficient exploration of the Hilbert space (as a walk mixing components) and Fourier transformation (since we can choose this for the evolution operator). 
We hope to establish this as a general technique, as its power would be unmatched by any approach available in classical computing. We also looked at the promising and challenging prospect of walks on non-Abelian structures by setting up the problem of "quantum card shuffling," a quantum walk on the permutation group. Relation to other work is thoroughly discussed throughout, along with examination of the context of our work and overviews of our current and future work.
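
    For orientation, the standard "coin" construction that this thesis contrasts itself with can be sketched in a few lines of numpy: a Hadamard walk on a line, where the state carries a two-dimensional coin space and each step applies a coin flip followed by a coin-conditioned shift. This is the textbook coined walk, not the thesis's memory-based framework:

```python
import numpy as np

def hadamard_walk(steps):
    """Standard coined discrete-time quantum walk on a line.

    State psi[site, coin], coin 0 = step left, coin 1 = step right.
    Each step: Hadamard on the coin space, then a coin-conditioned shift.
    Returns the probability of finding the walker at each site.
    """
    n_sites = 2 * steps + 1                    # walker cannot leave this range
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    psi = np.zeros((n_sites, 2), dtype=complex)
    psi[n_sites // 2, 0] = 1.0                 # start at the origin, coin |0>
    for _ in range(steps):
        psi = psi @ H.T                        # coin flip on the internal space
        shifted = np.zeros_like(psi)
        shifted[:-1, 0] = psi[1:, 0]           # coin-0 amplitude moves left
        shifted[1:, 1] = psi[:-1, 1]           # coin-1 amplitude moves right
        psi = shifted
    return (np.abs(psi) ** 2).sum(axis=1)

prob = hadamard_walk(50)
```

    Unlike a classical random walk, the resulting distribution is two-peaked and, for a basis-state initial coin, asymmetric; the thesis's framework replaces the coin space with a one-step "arrow" memory in the position space itself.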

  7. Memory for light as a quantum process.

    PubMed

    Lobino, M; Kupchak, C; Figueroa, E; Lvovsky, A I

    2009-05-22

    We report complete characterization of an optical memory based on electromagnetically induced transparency. We recover the superoperator associated with the memory, under two different working conditions, by means of a quantum process tomography technique that involves storage of coherent states and their characterization upon retrieval. In this way, we can predict the quantum state retrieved from the memory for any input, for example, the squeezed vacuum or the Fock state. We employ the acquired superoperator to verify the nonclassicality benchmark for the storage of a Gaussian distributed set of coherent states.
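
    The core idea, predicting the retrieved state for an arbitrary input once the superoperator is known, can be sketched with a toy process. Here a textbook amplitude-damping channel stands in for the recovered EIT-memory superoperator (an assumption for illustration; the paper's superoperator is measured, not assumed), using row-major vectorization of density matrices:

```python
import numpy as np

def superoperator(kraus_ops):
    """Process matrix S with rho_out.flatten() = S @ rho_in.flatten()
    (row-major vectorization): S = sum_k K ⊗ K*."""
    return sum(np.kron(K, K.conj()) for K in kraus_ops)

# Stand-in process: qubit amplitude damping with decay probability gamma.
gamma = 0.3
K0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]])
K1 = np.array([[0, np.sqrt(gamma)], [0, 0]])
S = superoperator([K0, K1])

def predict(S, rho):
    """Predict the retrieved state for an arbitrary input density matrix."""
    return (S @ rho.flatten()).reshape(rho.shape)

rho_in = np.array([[0, 0], [0, 1]], dtype=complex)   # excited state |1><1|
rho_out = predict(S, rho_in)
```

    Once `S` has been characterized (in the paper, by storing and retrieving coherent states), `predict` gives the memory's output for any input state, which is exactly how the nonclassicality benchmark is checked.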

  8. Transfer Function Bounds for Partial-unit-memory Convolutional Codes Based on Reduced State Diagram

    NASA Technical Reports Server (NTRS)

    Lee, P. J.

    1984-01-01

    The performance of a coding system consisting of a convolutional encoder and a Viterbi decoder is analytically found by the well-known transfer function bounding technique. For the partial-unit-memory byte-oriented convolutional encoder with m_0 binary memory cells and k_0 (> m_0) inputs, a state diagram of 2^(k_0) states was needed for the transfer function bound. A reduced state diagram of (2^(m_0) + 1) states is used for easy evaluation of transfer function bounds for partial-unit-memory codes.

  9. Digital item for digital human memory--television commerce application: family tree albuming system

    NASA Astrophysics Data System (ADS)

    Song, Jaeil; Lee, Hyejoo; Hong, JinWoo

    2004-01-01

    Technical advances in creating and storing digital media in daily life enable computers to capture human life and remember it as people do. A critical point in digitizing human life is how to recall bits of experience that are associated by semantic information. This paper proposes a technique for structuring dynamic digital objects based on MPEG-21 Digital Items (DIs) in order to recall human memory, and presents an interactive TV service, a family tree albuming system, as one of its applications. A DI is a dynamically reconfigurable, uniquely identified logical unit, described by a descriptor language, for structuring the relationships among multiple media resources. Digital Item Processing (DIP) provides the means to interact with DIs to remind the user of context, with active properties where objects have executable properties. Each user can adapt DIs' active properties to tailor the behavior of DIs to match his/her own specific needs. DI technologies in Intellectual Property Management and Protection (IPMP) can be used for privacy protection. In the interaction between the social space and the technological space, the internal dynamics of family life fit well with sharing a family albuming service via the family television. A family albuming service can act as a virtual community builder for family members. As memory is shared between family members, multiple annotations (including active properties on contextual information) will be made with snowballing value.

  10. PCM-Based Durable Write Cache for Fast Disk I/O

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Zhuo; Wang, Bin; Carpenter, Patrick

    2012-01-01

    Flash based solid-state devices (FSSDs) have been adopted within the memory hierarchy to improve the performance of hard disk drive (HDD) based storage system. However, with the fast development of storage-class memories, new storage technologies with better performance and higher write endurance than FSSDs are emerging, e.g., phase-change memory (PCM). Understanding how to leverage these state-of-the-art storage technologies for modern computing systems is important to solve challenging data intensive computing problems. In this paper, we propose to leverage PCM for a hybrid PCM-HDD storage architecture. We identify the limitations of traditional LRU caching algorithms for PCM-based caches, and develop a novel hash-based write caching scheme called HALO to improve random write performance of hard disks. To address the limited durability of PCM devices and solve the degraded spatial locality in traditional wear-leveling techniques, we further propose novel PCM management algorithms that provide effective wear-leveling while maximizing access parallelism. We have evaluated this PCM-based hybrid storage architecture using applications with a diverse set of I/O access patterns. Our experimental results demonstrate that the HALO caching scheme leads to an average reduction of 36.8% in execution time compared to the LRU caching scheme, and that the SFC wear leveling extends the lifetime of PCM by a factor of 21.6.
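
    The general idea behind a hash-based write cache, buffering random writes in buckets keyed by disk region so each bucket can later be flushed as one mostly sequential pass, can be sketched as follows. This is a minimal sketch under stated assumptions, not the actual HALO scheme (whose hashing, destaging, and wear-leveling are more involved); the class name, region size, and capacity are invented:

```python
from collections import defaultdict

class HashWriteCache:
    """Sketch of a hash-based write cache: buffer random writes in buckets
    keyed by disk region, then flush bucket by bucket in sorted block order
    so the disk sees mostly sequential I/O."""

    def __init__(self, region_blocks=64, capacity=256):
        self.region_blocks = region_blocks
        self.capacity = capacity
        self.buckets = defaultdict(dict)   # region -> {block: data}
        self.size = 0
        self.flushed = []                  # (block, data) in flush order

    def write(self, block, data):
        region = block // self.region_blocks
        if block not in self.buckets[region]:
            self.size += 1
        self.buckets[region][block] = data   # overwrite coalesces in cache
        if self.size >= self.capacity:
            self.flush()

    def flush(self):
        for region in sorted(self.buckets):
            for block in sorted(self.buckets[region]):
                self.flushed.append((block, self.buckets[region][block]))
        self.buckets.clear()
        self.size = 0

cache = HashWriteCache()
for block in [130, 5, 70, 131, 6]:         # random-looking write pattern
    cache.write(block, b"data")
cache.flush()
```

    After the flush, blocks reach the "disk" in ascending order (5, 6, 70, 130, 131) even though they arrived scattered, which is the locality benefit the abstract attributes to hash-based caching.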

  11. Improved importance sampling technique for efficient simulation of digital communication systems

    NASA Technical Reports Server (NTRS)

    Lu, Dingqing; Yao, Kung

    1988-01-01

    A new, improved importance sampling (IIS) approach to simulation is considered. Some basic concepts of IS are introduced, and detailed evaluations of simulation estimation variances for Monte Carlo (MC) and IS simulations are given. The general results obtained from these evaluations are applied to the specific previously known conventional importance sampling (CIS) technique and the new IIS technique. The derivation for a linear system with no memory and a random signal is considered in some detail. For the CIS technique, the optimum input scaling parameter is found, while for the IIS technique, the optimum translation parameter is found. The results are generalized to a linear system with memory and signals. Specific numerical and simulation results are given which show the advantages of CIS over MC and IIS over CIS for simulations of digital communication systems.
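
    The translation idea behind the IIS technique can be illustrated on the simplest rare-event problem: estimating P(X > t) for a standard Gaussian. Plain MC almost never lands in the rare region, while sampling from a density translated to the threshold and re-weighting by the likelihood ratio concentrates samples where they matter. This is a generic illustration of translated importance sampling, not the paper's communication-system derivation:

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_estimate(t, n):
    """Plain Monte Carlo estimate of P(X > t), X ~ N(0, 1)."""
    x = rng.standard_normal(n)
    return np.mean(x > t)

def is_estimate(t, n):
    """Importance sampling with the density translated to the rare region:
    sample X ~ N(t, 1) and weight each hit by the likelihood ratio
    f(x)/g(x) = exp(-t*x + t^2/2)."""
    x = rng.standard_normal(n) + t       # biased (translated) density
    w = np.exp(-t * x + t * t / 2.0)     # N(0,1) pdf / N(t,1) pdf
    return np.mean((x > t) * w)

t, n = 4.0, 100_000
p_true = 3.167e-5                        # Phi(-4), for reference
p_mc = mc_estimate(t, n)                 # only a handful of raw hits expected
p_is = is_estimate(t, n)                 # every sample contributes
```

    With the optimum translation parameter (here simply t), the IS estimator's relative error at this sample size is well under a percent, whereas the MC estimate is dominated by the shot noise of its few hits.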

  12. Reimagining Reading: Creating a Classroom Culture That Embraces Independent Choice Reading

    ERIC Educational Resources Information Center

    Dickerson, Katie

    2015-01-01

    Many of us are plagued by negative memories of sustained silent reading. In some of these memories, we are the students, attempting to read a book that didn't hold our interest or trying to read over the din of our disengaged classmates. In other memories, we are the teachers, suffering through a ten-minute classroom management nightmare, deciding…

  13. Formal verification of an MMU and MMU cache

    NASA Technical Reports Server (NTRS)

    Schubert, E. T.

    1991-01-01

    We describe the formal verification of a hardware subsystem consisting of a memory management unit and a cache. These devices are verified independently and then shown to interact correctly when composed. The MMU authorizes memory requests and translates virtual addresses to real addresses. The cache improves performance by maintaining an LRU (least recently used) list from the memory-resident segment table.
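
    The behavior being verified, translation through a segment table with an LRU cache of recent entries, can be sketched abstractly. The names, table layout, and sizes below are illustrative, not the verified hardware's:

```python
from collections import OrderedDict

class MMU:
    """Toy MMU: translates a virtual (segment, offset) pair to a real address
    via a memory-resident segment table, keeping a small LRU cache of
    recently used segment entries."""

    def __init__(self, segment_table, cache_size=2):
        self.segment_table = segment_table      # segment -> (base, limit)
        self.cache = OrderedDict()              # LRU order: oldest first
        self.cache_size = cache_size
        self.hits = self.misses = 0

    def translate(self, segment, offset):
        if segment in self.cache:
            self.hits += 1
            self.cache.move_to_end(segment)     # refresh LRU position
        else:
            self.misses += 1
            self.cache[segment] = self.segment_table[segment]
            if len(self.cache) > self.cache_size:
                self.cache.popitem(last=False)  # evict least recently used
        base, limit = self.cache[segment]
        if offset >= limit:                     # authorization check
            raise MemoryError("offset outside segment")
        return base + offset

mmu = MMU({0: (0x1000, 0x800), 1: (0x4000, 0x400)})
addr = mmu.translate(0, 0x10)    # first access misses, then is cached
```

    The composition property in the abstract amounts to showing that translating through the cache always agrees with translating through the segment table directly.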

  14. An adaptive replacement algorithm for paged-memory computer systems.

    NASA Technical Reports Server (NTRS)

    Thorington, J. M., Jr.; Irwin, J. D.

    1972-01-01

    A general class of adaptive replacement schemes for use in paged memories is developed. One such algorithm, called SIM, is simulated using a probability model that generates memory traces, and the results of the simulation of this adaptive scheme are compared with those obtained using the best nonlookahead algorithms. A technique for implementing this type of adaptive replacement algorithm with state of the art digital hardware is also presented.
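
    The kind of trace-driven comparison the abstract describes, running replacement policies over a memory reference string and counting faults, can be sketched with two standard non-lookahead policies. The adaptive SIM algorithm itself is not reproduced here; LRU and FIFO simply illustrate the evaluation setup:

```python
from collections import OrderedDict, deque

def lru_faults(trace, frames):
    """Count page faults for LRU replacement on a memory reference trace."""
    resident = OrderedDict()                  # page -> True, oldest first
    faults = 0
    for page in trace:
        if page in resident:
            resident.move_to_end(page)        # most recently used
        else:
            faults += 1
            if len(resident) >= frames:
                resident.popitem(last=False)  # evict least recently used
            resident[page] = True
    return faults

def fifo_faults(trace, frames):
    """Count page faults for FIFO replacement on the same trace."""
    resident, order, faults = set(), deque(), 0
    for page in trace:
        if page not in resident:
            faults += 1
            if len(resident) >= frames:
                resident.discard(order.popleft())
            resident.add(page)
            order.append(page)
    return faults

trace = [1, 2, 3, 1, 4, 1, 2, 5, 1, 2]       # toy reference string
```

    On this trace with 3 frames, LRU incurs 6 faults and FIFO 7; an adaptive scheme like SIM would be scored the same way against probabilistically generated traces.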

  15. Reprogrammable field programmable gate array with integrated system for mitigating effects of single event upsets

    NASA Technical Reports Server (NTRS)

    Ng, Tak-kwong (Inventor); Herath, Jeffrey A. (Inventor)

    2010-01-01

    An integrated system mitigates the effects of a single event upset (SEU) on a reprogrammable field programmable gate array (RFPGA). The system includes (i) a RFPGA having an internal configuration memory, and (ii) a memory for storing a configuration associated with the RFPGA. Logic circuitry programmed into the RFPGA and coupled to the memory reloads a portion of the configuration from the memory into the RFPGA's internal configuration memory at predetermined times. Additional SEU mitigation can be provided by logic circuitry on the RFPGA that monitors and maintains synchronized operation of the RFPGA's digital clock managers.

  16. 76 FR 17110 - Proposed Collection; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-28

    ... automated collection techniques or other forms of information technology. DATES: Consideration will be given... Expansion Memorial Park, Ballpark Village Field at Busch Stadium, and the Soldiers' Military Memorial Museum... Branch announces a proposed new public information collection and seeks public comment on the provisions...

  17. Flexible Kernel Memory

    PubMed Central

    Nowicki, Dimitri; Siegelmann, Hava

    2010-01-01

    This paper introduces a new model of associative memory, capable of both binary and continuous-valued inputs. Based on kernel theory, the memory model is, on one hand, a generalization of Radial Basis Function networks and, on the other, analogous to a Hopfield network in feature space. Attractors can be added, deleted, and updated on-line simply, without harming existing memories, and the number of attractors is independent of input dimension. Input vectors do not have to adhere to a fixed or bounded dimensionality; they can increase and decrease it without relearning previous memories. A memory consolidation process enables the network to generalize concepts and form clusters of input data, which outperforms many unsupervised clustering techniques; this process is demonstrated on handwritten digits from MNIST. Another process, reminiscent of memory reconsolidation, is introduced, in which existing memories are refreshed and tuned with new inputs; this process is demonstrated on a series of morphed faces. PMID:20552013
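
    The attractor behavior of a kernel associative memory can be sketched with a much simpler stand-in than the paper's model: iterating a Gaussian-kernel-weighted average pulls a corrupted query toward the nearest stored pattern. The kernel width and iteration count are arbitrary choices for the sketch, not parameters from the paper:

```python
import numpy as np

def rbf(x, memories, gamma):
    """Gaussian kernel similarities between a query and the stored memories."""
    d2 = ((memories - x) ** 2).sum(axis=1)
    return np.exp(-gamma * d2)

def recall(x, memories, gamma=4.0, steps=20):
    """Iterated kernel-weighted average: the query is pulled toward the
    nearest stored memory, which acts as an attractor."""
    for _ in range(steps):
        w = rbf(x, memories, gamma)
        x = (w[:, None] * memories).sum(axis=0) / w.sum()
    return x

memories = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])
noisy = np.array([0.9, 0.2])            # corrupted version of the first memory
restored = recall(noisy, memories)
```

    Adding a new row to `memories` adds an attractor without retraining anything, which mirrors the on-line add/delete/update property the abstract emphasizes.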

  18. Does overgeneral autobiographical memory result from poor memory for task instructions?

    PubMed

    Yanes, Paula K; Roberts, John E; Carlos, Erica L

    2008-10-01

    Considerable previous research has shown that retrieval of overgeneral autobiographical memories (OGM) is elevated among individuals suffering from various emotional disorders and those with a history of trauma. Although previous theories suggest that OGM serves the function of regulating acute negative affect, it is also possible that OGM results from difficulties in keeping the instruction set for the Autobiographical Memory Test (AMT) in working memory, or what has been coined "secondary goal neglect" (Dalgleish, 2004). The present study tested whether OGM is associated with poor memory for the task's instruction set, and whether an instruction set reminder would improve memory specificity over repeated trials. Multilevel modelling data-analytic techniques demonstrated a significant relationship between poor recall of instruction set and probability of retrieving OGMs. Providing an instruction set reminder for the AMT relative to a control task's instruction set improved memory specificity immediately afterward.

  19. Tonic Inhibitory Control of Dentate Gyrus Granule Cells by α5-Containing GABAA Receptors Reduces Memory Interference.

    PubMed

    Engin, Elif; Zarnowska, Ewa D; Benke, Dietmar; Tsvetkov, Evgeny; Sigal, Maksim; Keist, Ruth; Bolshakov, Vadim Y; Pearce, Robert A; Rudolph, Uwe

    2015-10-07

    Interference between similar or overlapping memories formed at different times poses an important challenge to the hippocampal declarative memory system. Difficulties in managing interference are at the core of disabling cognitive deficits in neuropsychiatric disorders. Computational models have suggested that, in the normal brain, the sparse activation of the dentate gyrus granule cells maintained by tonic inhibitory control enables pattern separation, an orthogonalization process that allows distinct representations of memories despite interference. To test this mechanistic hypothesis, we generated mice with significantly reduced expression of the α5-containing GABAA (α5-GABAARs) receptors selectively in the granule cells of the dentate gyrus (α5DGKO mice). α5DGKO mice had reduced tonic inhibition of the granule cells without any change in fast phasic inhibition and showed increased activation in the dentate gyrus when presented with novel stimuli. α5DGKO mice showed impairments in cognitive tasks characterized by high interference, without any deficiencies in low-interference tasks, suggesting specific impairment of pattern separation. Reduction of fast phasic inhibition in the dentate gyrus through granule cell-selective knock-out of α2-GABAARs or the knock-out of the α5-GABAARs in the downstream CA3 area did not detract from pattern separation abilities, which confirms the anatomical and molecular specificity of the findings. In addition to lending empirical support to computational hypotheses, our findings have implications for the treatment of interference-related cognitive symptoms in neuropsychiatric disorders, particularly considering the availability of pharmacological agents selectively targeting α5-GABAARs. Interference between similar memories poses a significant limitation on the hippocampal declarative memory system, and impaired interference management is a cognitive symptom in many disorders.
Thus, understanding mechanisms of successful interference management or processes that can lead to interference-related memory problems has high theoretical and translational importance. This study provides empirical evidence that tonic inhibition in the dentate gyrus (DG), which maintains sparseness of neuronal activation in the DG, is essential for management of interference. The specificity of findings to tonic, but not faster, more transient types of neuronal inhibition and to the DG, but not the neighboring brain areas, is presented through control experiments. Thus, the findings link interference management to a specific mechanism, proposed previously by computational models. Copyright © 2015 the authors 0270-6474/15/3513699-15$15.00/0.

  20. [Early onset scoliosis. What are the options?].

    PubMed

    Farrington, D M; Tatay-Díaz, A

    2013-01-01

    The prognosis of children with progressive early onset scoliosis has improved considerably due to recent advances in surgical and non-surgical techniques and the understanding of the importance of preserving the thoracic space. Improvements in existing techniques and development of new methods have considerably improved the management of this condition. Derotational casting can be considered in children with documented progression of a <60° curve without previous surgical treatment. Both single and dual growing rods are effective, but the latter seem to offer better results. Hybrid constructs may be a better option in children who require a low-profile proximal anchor. The vertical expandable prosthetic titanium rib (VEPTR(®)) appears to be beneficial for patients with congenital scoliosis and fused ribs, and Thoracic Insufficiency Syndrome. Children with medical comorbidities who may not tolerate repeated lengthenings should be considered for the Shilla or Luque Trolley technique. Growth modulation using shape memory alloy staples or other tethers seems promising for mild curves, although more research is required to define their precise indications. Copyright © 2013 SECOT. Published by Elsevier Espana. All rights reserved.

  1. Primary Care-Based Memory Clinics: Expanding Capacity for Dementia Care.

    PubMed

    Lee, Linda; Hillier, Loretta M; Heckman, George; Gagnon, Micheline; Borrie, Michael J; Stolee, Paul; Harvey, David

    2014-09-01

    The implementation in Ontario of 15 primary-care-based interprofessional memory clinics represented a unique model of team-based case management aimed at increasing capacity for dementia care at the primary-care level. Each clinic tracked referrals; in a subset of clinics, charts were audited by geriatricians, clinic members were interviewed, and patients, caregivers, and referring physicians completed satisfaction surveys. Across all clinics, 582 patients were assessed, and 8.9 per cent were referred to a specialist. Patients and caregivers were very satisfied with the care received, as were referring family physicians, who reported increased capacity to manage dementia. Geriatricians' chart audits revealed a high level of agreement with diagnosis and management. This study demonstrated acceptability, feasibility, and preliminary effectiveness of the primary-care memory clinic model. Led by specially trained family physicians, it provided timely access to high-quality collaborative dementia care, impacting health service utilization by more-efficient use of scarce geriatric specialist resources.

  2. Siemens, Philips megaproject to yield superchip in 5 years

    NASA Astrophysics Data System (ADS)

    1985-02-01

    The development of computer chips using complementary metal oxide semiconductor (CMOS) memory technology is described. The management planning and marketing strategy of the Philips and Siemens corporations with regard to the memory chip are discussed.

  3. Managing internode data communications for an uninitialized process in a parallel computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Archer, Charles J; Blocksome, Michael A; Miller, Douglas R

    2014-05-20

    A parallel computer includes nodes, each having main memory and a messaging unit (MU). Each MU includes computer memory, which in turn includes MU message buffers. Each MU message buffer is associated with an uninitialized process on the compute node. In the parallel computer, managing internode data communications for an uninitialized process includes: receiving, by an MU of a compute node, one or more data communications messages in an MU message buffer associated with an uninitialized process on the compute node; determining, by an application agent, that the MU message buffer associated with the uninitialized process is full prior to initialization of the uninitialized process; establishing, by the application agent, a temporary message buffer for the uninitialized process in main computer memory; and moving, by the application agent, data communications messages from the MU message buffer associated with the uninitialized process to the temporary message buffer in main computer memory.

  4. Increasing available FIFO space to prevent messaging queue deadlocks in a DMA environment

    DOEpatents

    Blocksome, Michael A [Rochester, MN; Chen, Dong [Croton On Hudson, NY; Gooding, Thomas [Rochester, MN; Heidelberger, Philip [Cortlandt Manor, NY; Parker, Jeff [Rochester, MN

    2012-02-07

    Embodiments of the invention may be used to manage message queues in a parallel computing environment to prevent message queue deadlock. A direct memory access (DMA) controller of a compute node may determine when a messaging queue is full. In response, the DMA controller may generate an interrupt. An interrupt handler may stop the DMA, swap all descriptors from the full messaging queue into a larger queue (or enlarge the original queue), and then restart the DMA. Alternatively, the interrupt handler stops the DMA, allocates a memory block to hold queue data, and then moves descriptors from the full messaging queue into the allocated memory block. The interrupt handler then restarts the DMA. During a normal messaging advance cycle, a messaging manager attempts to inject the descriptors in the memory block into other messaging queues until the descriptors have all been processed.
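
    The overflow-handling strategy in this abstract can be sketched in miniature. The following Python toy model (all class and method names are hypothetical; the real mechanism lives in DMA hardware and interrupt handlers, not application code) drains a full injection queue into a spill block and re-injects the spilled descriptors during the advance cycle:

```python
from collections import deque

class InjectionFifo:
    """Fixed-capacity messaging queue, as might sit in a DMA engine."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.slots = deque()

    def full(self):
        return len(self.slots) >= self.capacity

    def inject(self, descriptor):
        if self.full():
            raise OverflowError("queue full")  # stands in for the interrupt
        self.slots.append(descriptor)

class DmaManager:
    """Toy model of the handler strategy described above: on overflow,
    drain the full queue into an allocated spill block, then re-inject
    spilled descriptors during the normal messaging advance cycle."""
    def __init__(self, fifo):
        self.fifo = fifo
        self.spill = []          # memory block allocated by the handler

    def send(self, descriptor):
        try:
            self.fifo.inject(descriptor)
        except OverflowError:
            # "stop the DMA", move everything to the spill block, restart
            self.spill.extend(self.fifo.slots)
            self.fifo.slots.clear()
            self.fifo.inject(descriptor)

    def advance(self):
        # the messaging manager tries to re-inject spilled descriptors
        while self.spill and not self.fifo.full():
            self.fifo.inject(self.spill.pop(0))
```

    A queue of capacity 2 receiving three descriptors spills the first two, keeps accepting new work, and drains the backlog on subsequent advance cycles.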

  5. Managing internode data communications for an uninitialized process in a parallel computer

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Miller, Douglas R; Parker, Jeffrey J; Ratterman, Joseph D; Smith, Brian E

    2014-05-20

    A parallel computer includes nodes, each having main memory and a messaging unit (MU). Each MU includes computer memory, which in turn includes MU message buffers. Each MU message buffer is associated with an uninitialized process on the compute node. In the parallel computer, managing internode data communications for an uninitialized process includes: receiving, by an MU of a compute node, one or more data communications messages in an MU message buffer associated with an uninitialized process on the compute node; determining, by an application agent, that the MU message buffer associated with the uninitialized process is full prior to initialization of the uninitialized process; establishing, by the application agent, a temporary message buffer for the uninitialized process in main computer memory; and moving, by the application agent, data communications messages from the MU message buffer associated with the uninitialized process to the temporary message buffer in main computer memory.
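
    As a rough illustration of the buffering scheme described above, here is a minimal Python sketch; the MU, agent, and buffer names are invented for illustration, and the real mechanism operates on hardware message buffers rather than Python lists:

```python
class MessagingUnit:
    """Toy MU: one fixed-size message buffer per not-yet-started process."""
    def __init__(self, buffer_capacity):
        self.capacity = buffer_capacity
        self.buffers = {}        # pid -> pending messages in MU memory

    def receive(self, pid, message):
        buf = self.buffers.setdefault(pid, [])
        buf.append(message)
        return len(buf) >= self.capacity   # True signals "buffer full"

class ApplicationAgent:
    """Drains a full MU buffer into main memory so the MU can keep
    receiving messages for a process that has not initialized yet."""
    def __init__(self, mu):
        self.mu = mu
        self.temporary = {}      # pid -> overflow buffer in main memory

    def deliver(self, pid, message):
        if self.mu.receive(pid, message):
            # MU buffer full prior to initialization: move its messages
            # into a temporary message buffer in main computer memory
            tmp = self.temporary.setdefault(pid, [])
            tmp.extend(self.mu.buffers[pid])
            self.mu.buffers[pid] = []

    def drain_on_init(self, pid):
        # once the process initializes, hand it everything in order
        return self.temporary.pop(pid, []) + self.mu.buffers.pop(pid, [])
```

    Messages that arrive before the process starts are preserved in arrival order across both buffers.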

  6. Mechanical design of a shape memory alloy actuated prosthetic hand.

    PubMed

    De Laurentis, Kathryn J; Mavroidis, Constantinos

    2002-01-01

    This paper presents the mechanical design for a new five-fingered, twenty-degree-of-freedom dexterous hand patterned after human anatomy and actuated by Shape Memory Alloy artificial muscles. Two experimental prototypes of a finger, one fabricated by traditional means and another fabricated by rapid prototyping techniques, are described and used to evaluate the design. An important aspect of the rapid prototyping technique used here is that this multi-articulated hand can be fabricated in one step, without requiring assembly, while maintaining its desired mobility. The use of Shape Memory Alloy actuators, combined with the rapid fabrication of the non-assembly hand, considerably reduces its weight and fabrication time. Therefore, the focus of this paper is the mechanical design of a dexterous hand that combines rapid prototyping techniques and smart actuators. The type of robotic hand described in this paper can be utilized for applications requiring low weight, compactness, and dexterity, such as prosthetic devices and space and planetary exploration.

  7. A class of temporal boundaries derived by quantifying the sense of separation.

    PubMed

    Paine, Llewyn Elise; Gilden, David L

    2013-12-01

    The perception of moment-to-moment environmental flux as being composed of meaningful events requires that memory processes coordinate with cues that signify beginnings and endings. We have constructed a technique that allows this coordination to be monitored indirectly. This technique works by embedding a sequential priming task into the event under study. Memory and perception must be coordinated to resolve temporal flux into scenes. The implicit memory processes inherent in sequential priming are able to effectively shadow then mirror scene-forming processes. Certain temporal boundaries are found to weaken the strength of irrelevant feature priming, a signal which can then be used in more ambiguous cases to infer how people segment time. Over the course of 13 independent studies, we were able to calibrate the technique and then use it to measure the strength of event segmentation in several instructive contexts that involved both visual and auditory modalities. The signal generated by sequential priming may permit the sense of separation between events to be measured as an extensive psychophysical quantity.

  8. Non-volatile memory based on the ferroelectric photovoltaic effect

    PubMed Central

    Guo, Rui; You, Lu; Zhou, Yang; Shiuh Lim, Zhi; Zou, Xi; Chen, Lang; Ramesh, R.; Wang, Junling

    2013-01-01

    The quest for a solid state universal memory with high-storage density, high read/write speed, random access and non-volatility has triggered intense research into new materials and novel device architectures. Though the non-volatile memory market is dominated by flash memory now, it has very low operation speed with ~10 μs programming and ~10 ms erasing time. Furthermore, it can only withstand ~10⁵ rewriting cycles, which prevents it from becoming the universal memory. Here we demonstrate that the significant photovoltaic effect of a ferroelectric material, such as BiFeO₃ with a band gap in the visible range, can be used to sense the polarization direction non-destructively in a ferroelectric memory. A prototype 16-cell memory based on the cross-bar architecture has been prepared and tested, demonstrating the feasibility of this technique. PMID:23756366

  9. Evolutionary Metal Oxide Clusters for Novel Applications: Toward High-Density Data Storage in Nonvolatile Memories.

    PubMed

    Chen, Xiaoli; Zhou, Ye; Roy, Vellaisamy A L; Han, Su-Ting

    2018-01-01

    Because of current fabrication limitations, miniaturizing nonvolatile memory devices for managing the explosive increase in big data is challenging. Molecular memories constitute a promising candidate for next-generation memories because their properties can be readily modulated through chemical synthesis. Moreover, these memories can be fabricated through mild solution processing, which can be easily scaled up. Among the various materials, polyoxometalate (POM) molecules have attracted considerable attention for use as novel data-storage nodes for nonvolatile memories. Here, an overview of recent advances in the development of POMs for nonvolatile memories is presented. The general background knowledge of the structure and property diversity of POMs is also summarized. Finally, the challenges and perspectives in the application of POMs in memories are discussed. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Video multiple watermarking technique based on image interlacing using DWT.

    PubMed

    Ibrahim, Mohamed M; Abdel Kader, Neamat S; Zorkany, M

    2014-01-01

    Digital watermarking is one of the important techniques for securing digital media files in the domains of data authentication and copyright protection. In non-blind watermarking systems, the need for the original host file in the watermark recovery operation imposes an overhead on system resources, doubling memory capacity and communications bandwidth. In this paper, a robust video multiple-watermarking technique based on image interlacing is proposed to solve this problem. In this technique, a three-level discrete wavelet transform (DWT) is used as the watermark embedding/extracting domain, the Arnold transform is used as the watermark encryption/decryption method, and different types of media (gray image, color image, and video) are used as watermarks. The robustness of this technique is tested by applying different types of attacks, such as geometric, noising, format-compression, and image-processing attacks. The simulation results show the effectiveness and good performance of the proposed technique in saving system resources, memory capacity, and communications bandwidth.
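
    The paper's scheme uses a three-level 2-D DWT with Arnold-transform encryption; as a much-simplified illustration of hiding a watermark bit in wavelet coefficients, the following sketch uses a one-level 1-D Haar transform and quantization-index-modulation style embedding. The step size and coefficient choice are arbitrary, and none of this reproduces the paper's actual method:

```python
def haar_dwt(signal):
    """One-level Haar DWT of an even-length sequence:
    returns (approximation, detail) coefficient lists."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse of haar_dwt: reconstruct the original sequence."""
    out = []
    for a, d in zip(approx, detail):
        out += [a + d, a - d]
    return out

def embed_bit(signal, bit, step=8.0):
    """Snap the first approximation coefficient to an even (bit 0)
    or odd (bit 1) multiple of step/2, then invert the transform."""
    approx, detail = haar_dwt(signal)
    q = round(approx[0] / step) * step
    approx[0] = q + (step / 2 if bit else 0.0)
    return haar_idwt(approx, detail)

def extract_bit(signal, step=8.0):
    """Blind extraction: recover the bit from coefficient parity,
    with no need for the original host signal."""
    approx, _ = haar_dwt(signal)
    return int(round(approx[0] / (step / 2))) % 2
```

    Because extraction needs only the watermarked signal, this toy scheme is blind, which is exactly the property motivating the paper.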

  11. Pattern optimizing verification of self-align quadruple patterning

    NASA Astrophysics Data System (ADS)

    Yamato, Masatoshi; Yamada, Kazuki; Oyama, Kenichi; Hara, Arisa; Natori, Sakurako; Yamauchi, Shouhei; Koike, Kyohei; Yaegashi, Hidetami

    2017-03-01

    Lithographic scaling continues to advance by extending the life of 193nm immersion technology, and spacer-type multi-patterning is undeniably the driving force behind this trend. Multi-patterning techniques such as self-aligned double patterning (SADP) and self-aligned quadruple patterning (SAQP) have come to be used in memory devices, and they have also been adopted in logic devices to create constituent patterns in the formation of 1D layout designs. Multi-patterning has consequently become an indispensable technology in the fabrication of all advanced devices. In general, items that must be managed when using multi-patterning include critical dimension uniformity (CDU), line edge roughness (LER), and line width roughness (LWR). Recently, moreover, there has been increasing focus on judging and managing pattern resolution performance from a more detailed perspective and on making a right/wrong judgment from the perspective of edge placement error (EPE). To begin with, pattern resolution performance in spacer-type multi-patterning is affected by the process accuracy of the core (mandrel) pattern. Improving the controllability of CD and LER of the mandrel is most important, and to reduce LER, an appropriate smoothing technique should be carefully selected. In addition, the atomic layer deposition (ALD) technique is generally used to meet the need for high accuracy in forming the spacer film. Advances in scaling are accompanied by stricter requirements in the controllability of fine processing. In this paper, we first describe our efforts in improving controllability by selecting the most appropriate materials for the mandrel pattern and spacer film. Then, based on the materials selected, we present experimental results on a technique for improving etching selectivity.

  12. Functional Neuroanatomy of "Drosophila" Olfactory Memory Formation

    ERIC Educational Resources Information Center

    Guven-Ozkan, Tugba; Davis, Ronald L.

    2014-01-01

    New approaches, techniques and tools invented over the last decade and a half have revolutionized the functional dissection of neural circuitry underlying "Drosophila" learning. The new methodologies have been used aggressively by researchers attempting to answer three critical questions about olfactory memories formed with appetitive…

  13. Acute effects of triazolam on false recognition.

    PubMed

    Mintzer, M Z; Griffiths, R R

    2000-12-01

    Neuropsychological, neuroimaging, and electrophysiological techniques have been applied to the study of false recognition; however, psychopharmacological techniques have not been applied. Benzodiazepine sedative/anxiolytic drugs produce memory deficits similar to those observed in organic amnesia and may be useful tools for studying normal and abnormal memory mechanisms. The present double-blind, placebo-controlled repeated measures study examined the acute effects of orally administered triazolam (Halcion; 0.125 and 0.25 mg/70 kg), a benzodiazepine hypnotic, on performance in the Deese (1959)/Roediger-McDermott (1995) false recognition paradigm in 24 healthy volunteers. Paralleling previous demonstrations in amnesic patients, triazolam produced significant dose-related reductions in false recognition rates to nonstudied words associatively related to studied words, suggesting that false recognition relies on normal memory mechanisms impaired in benzodiazepine-induced amnesia. The results also suggested that relative to placebo, triazolam reduced participants' reliance on memory for item-specific versus list-common semantic information and reduced participants' use of remember versus know responses.

  14. Revisited Fisher's equation in a new outlook: A fractional derivative approach

    NASA Astrophysics Data System (ADS)

    Alquran, Marwan; Al-Khaled, Kamel; Sardar, Tridip; Chattopadhyay, Joydev

    2015-11-01

    The well-known Fisher equation with fractional derivative is considered to provide some characteristics of memory embedded into the system. The modified model is analyzed both analytically and numerically. A comparatively new technique, the residual power series method, is used for finding approximate solutions of the modified Fisher model. A new technique combining Sinc-collocation and finite difference methods is used for the numerical study. The abundance of the bird species Phalacrocorax carbo is considered as a test bed to validate the model outcome using estimated parameters. We conjecture that the non-diffusive and diffusive fractional Fisher equations represent the same dynamics in the interval of the memory index α ∈ (0.8384, 0.9986). We also observe that when the value of the memory index is close to zero, the solutions bifurcate and produce a wave-like pattern. We conclude that the survivability of the species increases for a long-range memory index. These findings are similar to Fisher's observation and act in a similar fashion to the way advantageous genes do.

  15. The search for a hippocampal engram.

    PubMed

    Mayford, Mark

    2014-01-05

    Understanding the molecular and cellular changes that underlie memory, the engram, requires the identification, isolation and manipulation of the neurons involved. This presents a major difficulty for complex forms of memory, for example hippocampus-dependent declarative memory, where the participating neurons are likely to be sparse, anatomically distributed and unique to each individual brain and learning event. In this paper, I discuss several new approaches to this problem. In vivo calcium imaging techniques provide a means of assessing the activity patterns of large numbers of neurons over long periods of time with precise anatomical identification. This provides important insight into how the brain represents complex information and how this is altered with learning. The development of techniques for the genetic modification of neural ensembles based on their natural, sensory-evoked, activity along with optogenetics allows direct tests of the coding function of these ensembles. These approaches provide a new methodological framework in which to examine the mechanisms of complex forms of learning at the level of the neurons involved in a specific memory.

  16. The search for a hippocampal engram

    PubMed Central

    Mayford, Mark

    2014-01-01

    Understanding the molecular and cellular changes that underlie memory, the engram, requires the identification, isolation and manipulation of the neurons involved. This presents a major difficulty for complex forms of memory, for example hippocampus-dependent declarative memory, where the participating neurons are likely to be sparse, anatomically distributed and unique to each individual brain and learning event. In this paper, I discuss several new approaches to this problem. In vivo calcium imaging techniques provide a means of assessing the activity patterns of large numbers of neurons over long periods of time with precise anatomical identification. This provides important insight into how the brain represents complex information and how this is altered with learning. The development of techniques for the genetic modification of neural ensembles based on their natural, sensory-evoked, activity along with optogenetics allows direct tests of the coding function of these ensembles. These approaches provide a new methodological framework in which to examine the mechanisms of complex forms of learning at the level of the neurons involved in a specific memory. PMID:24298162

  17. Distinct roles of basal forebrain cholinergic neurons in spatial and object recognition memory.

    PubMed

    Okada, Kana; Nishizawa, Kayo; Kobayashi, Tomoko; Sakata, Shogo; Kobayashi, Kazuto

    2015-08-06

    Recognition memory requires processing of various types of information such as objects and locations. Impairment in recognition memory is a prominent feature of amnesia and a symptom of Alzheimer's disease (AD). Basal forebrain cholinergic neurons contain two major groups, one localized in the medial septum (MS)/vertical diagonal band of Broca (vDB), and the other in the nucleus basalis magnocellularis (NBM). The roles of these cell groups in recognition memory have been debated, and it remains unclear how they contribute to it. We use a genetic cell targeting technique to selectively eliminate cholinergic cell groups and then test spatial and object recognition memory through different behavioural tasks. Eliminating MS/vDB neurons impairs spatial but not object recognition memory in the reference and working memory tasks, whereas NBM elimination undermines only object recognition memory in the working memory task. These impairments are restored by treatment with acetylcholinesterase inhibitors, anti-dementia drugs for AD. Our results highlight that MS/vDB and NBM cholinergic neurons are not only implicated in recognition memory but also have essential roles in different types of recognition memory.

  18. Neural correlates of true and false memory in mild cognitive impairment.

    PubMed

    Sweeney-Reed, Catherine M; Riddell, Patricia M; Ellis, Judi A; Freeman, Jayne E; Nasuto, Slawomir J

    2012-01-01

    The goal of this research was to investigate the changes in neural processing in mild cognitive impairment. We measured phase synchrony, amplitudes, and event-related potentials in veridical and false memory to determine whether these differed in participants with mild cognitive impairment compared with typical, age-matched controls. Empirical mode decomposition phase locking analysis was used to assess synchrony, which is the first time this analysis technique has been applied in a complex cognitive task such as memory processing. The technique allowed assessment of changes in frontal and parietal cortex connectivity over time during a memory task, without a priori selection of frequency ranges, which has been shown previously to influence synchrony detection. Phase synchrony differed significantly in its timing and degree between participant groups in the theta and alpha frequency ranges. Timing differences suggested greater dependence on gist memory in the presence of mild cognitive impairment. The group with mild cognitive impairment had significantly more frontal theta phase locking than the controls in the absence of a significant behavioural difference in the task, providing new evidence for compensatory processing in the former group. Both groups showed greater frontal phase locking during false than true memory, suggesting increased searching when no actual memory trace was found. Significant inter-group differences in frontal alpha phase locking provided support for a role for lower and upper alpha oscillations in memory processing. Finally, fronto-parietal interaction was significantly reduced in the group with mild cognitive impairment, supporting the notion that mild cognitive impairment could represent an early stage in Alzheimer's disease, which has been described as a 'disconnection syndrome'.

  19. Physicians' perceptions of capacity building for managing chronic disease in seniors using integrated interprofessional care models.

    PubMed

    Lee, Linda; Heckman, George; McKelvie, Robert; Jong, Philip; D'Elia, Teresa; Hillier, Loretta M

    2015-03-01

    To explore the barriers to and facilitators of adapting and expanding a primary care memory clinic model to integrate care of additional complex chronic geriatric conditions (heart failure, falls, chronic obstructive pulmonary disease, and frailty) into care processes with the goal of improving outcomes for seniors. Mixed-methods study using quantitative (questionnaires) and qualitative (interviews) methods. Ontario. Family physicians currently working in primary care memory clinic teams and supporting geriatric specialists. Family physicians currently working in memory clinic teams (n = 29) and supporting geriatric specialists (n = 9) were recruited as survey participants. Interviews were conducted with memory clinic lead physicians (n = 16). Statistical analysis was done to assess differences between family physician ratings and geriatric specialist ratings related to the capacity for managing complex chronic geriatric conditions, the role of interprofessional collaboration within primary care, and funding and staffing to support geriatric care. Results from both study methods were compared to identify common findings. Results indicate overall support for expanding the memory clinic model to integrate care for other complex conditions. However, the current primary care structure is challenged to support optimal management of patients with multiple comorbidities, particularly as related to limited funding and staffing resources. Structured training, interprofessional teams, and an active role of geriatric specialists within primary care were identified as important facilitators. The memory clinic model, as applied to other complex chronic geriatric conditions, has the potential to build capacity for high-quality primary care, improve health outcomes, promote efficient use of health care resources, and reduce health care costs.

  20. Impairing existing declarative memory in humans by disrupting reconsolidation

    PubMed Central

    Chan, Jason C. K.; LaPaglia, Jessica A.

    2013-01-01

    During the past decade, a large body of research has shown that memory traces can become labile upon retrieval and must be restabilized. Critically, interrupting this reconsolidation process can abolish a previously stable memory. Although a large number of studies have demonstrated this reconsolidation associated amnesia in nonhuman animals, the evidence for its occurrence in humans is far less compelling, especially with regard to declarative memory. In fact, reactivating a declarative memory often makes it more robust and less susceptible to subsequent disruptions. Here we show that existing declarative memories can be selectively impaired by using a noninvasive retrieval–relearning technique. In six experiments, we show that this reconsolidation-associated amnesia can be achieved 48 h after formation of the original memory, but only if relearning occurred soon after retrieval. Furthermore, the amnesic effect persists for at least 24 h, cannot be attributed solely to source confusion and is attainable only when relearning targets specific existing memories for impairment. These results demonstrate that human declarative memory can be selectively rewritten during reconsolidation. PMID:23690586

  1. Implementation and evaluation of shared-memory communication and synchronization operations in MPICH2 using the Nemesis communication subsystem.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buntinas, D.; Mercier, G.; Gropp, W.

    2007-09-01

    This paper presents the implementation of MPICH2 over the Nemesis communication subsystem and the evaluation of its shared-memory performance. We describe design issues as well as some of the optimization techniques we employed. We conducted a performance evaluation over shared memory using microbenchmarks. The evaluation shows that MPICH2 Nemesis has very low communication overhead, making it suitable for smaller-grained applications.

  2. Studies and applications of NiTi shape memory alloys in the medical field in China.

    PubMed

    Dai, K; Chu, Y

    1996-01-01

    The biomedical study of NiTi shape memory alloys has been undertaken in China since 1978. A series of simulated corrosion tests, histological observations, toxicity tests, carcinogenicity tests, trace nickel element analyses and a number of clinical trials have been conducted. The results showed that NiTi shape memory alloy is a good biomaterial with good biocompatibility; no obvious local tissue reaction, carcinogenesis or erosion of implants was found experimentally or clinically. In 1981, on the basis of these fundamental studies, a shape memory staple was used for the first time inside the human body. Subsequently, various shape memory devices were designed and applied clinically for internal fixation of fractures, spine surgery, endoprostheses, and gynaecological and craniofacial surgery. Since 1990, a series of internal stents have been developed for the management of biliary, tracheal and esophageal strictures and urethrostenosis, as well as vascular obturators for tumour management. Several thousand cases have been treated with 1-10 years of follow-up, and good clinical results with a rather low complication rate were obtained.

  3. Log-less metadata management on metadata server for parallel file systems.

    PubMed

    Liao, Jianwei; Xiao, Guoqiang; Peng, Xiaoning

    2014-01-01

    This paper presents a novel metadata management mechanism on the metadata server (MDS) for parallel and distributed file systems. In this technique, the client file system backs up the metadata requests it has sent and the MDS has handled, so that the MDS does not need to log metadata changes to nonvolatile storage to achieve a highly available metadata service, while also improving metadata-processing performance. Because the client file system backs up certain sent metadata requests in its memory, the overhead of handling these backup requests is much smaller than that incurred by a metadata server that adopts logging or journaling to provide a highly available metadata service. The experimental results show that this newly proposed mechanism can significantly improve the speed of metadata processing and deliver better I/O data throughput than conventional metadata management schemes, that is, logging or journaling on the MDS. Moreover, a complete metadata recovery can be achieved by replaying the backup logs cached by all involved clients when the metadata server has crashed or otherwise entered a nonoperational state.
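
    The log-less idea, in which clients keep replayable backups of handled requests so the MDS itself never journals, can be sketched as follows. The request format and class names are invented for illustration, and a real system must additionally establish a global replay order across clients:

```python
class MetadataServer:
    """In-memory MDS that never logs; durability comes from clients."""
    def __init__(self):
        self.table = {}          # path -> metadata record

    def handle(self, request):
        op, path, value = request
        if op == "set":
            self.table[path] = value
        elif op == "unlink":
            self.table.pop(path, None)

class Client:
    """Backs up each request it sends so MDS state can be rebuilt."""
    def __init__(self, mds):
        self.mds = mds
        self.backup = []         # replay log kept in client memory

    def send(self, request):
        self.backup.append(request)
        self.mds.handle(request)

def recover(fresh_mds, clients):
    """After an MDS crash, replay every client's backed-up requests
    against a fresh server instead of reading a journal."""
    for client in clients:
        for request in client.backup:
            fresh_mds.handle(request)
    return fresh_mds
```

    Replaying the client-side backups reproduces the pre-crash metadata table without the MDS ever having written to nonvolatile storage.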

  4. Log-Less Metadata Management on Metadata Server for Parallel File Systems

    PubMed Central

    Xiao, Guoqiang; Peng, Xiaoning

    2014-01-01

    This paper presents a novel metadata management mechanism on the metadata server (MDS) for parallel and distributed file systems. In this technique, the client file system backs up the metadata requests it has sent and the MDS has handled, so that the MDS does not need to log metadata changes to nonvolatile storage to achieve a highly available metadata service, while also improving metadata-processing performance. Because the client file system backs up certain sent metadata requests in its memory, the overhead of handling these backup requests is much smaller than that incurred by a metadata server that adopts logging or journaling to provide a highly available metadata service. The experimental results show that this newly proposed mechanism can significantly improve the speed of metadata processing and deliver better I/O data throughput than conventional metadata management schemes, that is, logging or journaling on the MDS. Moreover, a complete metadata recovery can be achieved by replaying the backup logs cached by all involved clients when the metadata server has crashed or otherwise entered a nonoperational state. PMID:24892093

  5. Things to come: postmodern digital knowledge management and medical informatics.

    PubMed

    Matheson, N W

    1995-01-01

    The overarching informatics grand challenge facing society is the creation of knowledge management systems that can acquire, conserve, organize, retrieve, display, and distribute what is known today in a manner that informs and educates, facilitates the discovery and creation of new knowledge, and contributes to the health and welfare of the planet. At one time the private, national, and university libraries of the world collectively constituted the memory of society's intellectual history. In the future, these new digital knowledge management systems will constitute human memory in its entirety. The current model of multiple local collections of duplicated resources will give way to specialized sole-source servers. In this new environment all scholarly scientific knowledge should be public domain knowledge: managed by scientists, organized for the advancement of knowledge, and readily available to all. Over the next decade, the challenge for the field of medical informatics and for the libraries that serve as the continuous memory for the biomedical sciences will be to come together to form a new organization that will lead to the development of postmodern digital knowledge management systems for medicine. These systems will form a portion of the evolving world brain of the 21st century.

  6. Time and Memory Efficient Online Piecewise Linear Approximation of Sensor Signals.

    PubMed

    Grützmacher, Florian; Beichler, Benjamin; Hein, Albert; Kirste, Thomas; Haubelt, Christian

    2018-05-23

    Piecewise linear approximation of sensor signals is a well-known technique in the fields of data mining and activity recognition. In this context, several algorithms have been developed, some of them intended to run on the resource-constrained microcontroller architectures of wireless sensor nodes. While microcontrollers are usually constrained in computational power and memory resources, all state-of-the-art piecewise linear approximation techniques either need to buffer sensor data or have an execution time that depends on the segment's length. In the paper at hand, we propose a novel piecewise linear approximation algorithm with constant computational complexity as well as constant memory complexity. Our proposed algorithm's worst-case execution time is one to three orders of magnitude smaller, and its average execution time is three to seventy times smaller, compared to the state-of-the-art Piecewise Linear Approximation (PLA) algorithms in our experiments. In our evaluations, we show that our algorithm is time and memory efficient without sacrificing approximation quality compared to other state-of-the-art piecewise linear approximation techniques, while providing a maximum error guarantee per segment, a small parameter space of only one parameter, and a maximum latency of one sample period plus its worst-case execution time.
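
    The abstract does not give the algorithm itself; as an illustration of what a constant-time-per-sample, constant-memory online PLA with a per-segment maximum-error bound looks like, here is the classic swinging-door idea (not the authors' algorithm) in Python:

```python
class SwingingDoor:
    """Online PLA: keep only an anchor point, the previous sample, and
    two slope bounds; per-sample work and memory are both O(1)."""
    def __init__(self, eps):
        self.eps = eps                    # per-segment max-error bound
        self.anchor = None                # start of the current segment
        self.prev = None                  # most recent sample
        self.lo = float("-inf")           # tightest lower slope so far
        self.hi = float("inf")            # tightest upper slope so far
        self.segments = []

    def _tighten(self, t, v):
        dt = t - self.anchor[0]
        self.lo = max(self.lo, (v - self.eps - self.anchor[1]) / dt)
        self.hi = min(self.hi, (v + self.eps - self.anchor[1]) / dt)

    def push(self, t, v):
        if self.anchor is None:
            self.anchor = (t, v)
        else:
            self._tighten(t, v)
            if self.lo > self.hi:
                # no single line fits within eps: close the segment at
                # the previous sample and start a new one there
                self.segments.append((self.anchor, self.prev))
                self.anchor = self.prev
                self.lo, self.hi = float("-inf"), float("inf")
                self._tighten(t, v)
        self.prev = (t, v)

    def finish(self):
        if self.anchor is not None and self.prev != self.anchor:
            self.segments.append((self.anchor, self.prev))
        return self.segments
```

    A noiseless line collapses to a single segment, while an abrupt slope change forces a breakpoint, which matches the error-bounded behaviour the abstract describes.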

  7. Analysis of power gating in different hierarchical levels of 2MB cache, considering variation

    NASA Astrophysics Data System (ADS)

    Jafari, Mohsen; Imani, Mohsen; Fathipour, Morteza

    2015-09-01

This article investigates the power gating technique at different hierarchical levels of static random-access memory (SRAM) design, including cell, row, bank and entire cache memory, in a 16 nm fin field-effect transistor (FinFET) technology. Different SRAM cell structures, such as 6T, 8T, 9T and 10T, are used in the design of a 2MB cache memory. The power reduction of the entire cache memory employing cell-level optimisation is 99.7%, at the expense of area and stability overheads. The power saving of cell-level optimisation is 3× (1.2×) higher than power gating at the cache (bank) level owing to its superior selectivity. The access delay times are allowed to increase by 4% at the same energy-delay product to achieve the best power reduction for each supply voltage and optimisation level. The results show that row-level power gating is best for optimising the power of the entire cache with the fewest drawbacks. Comparison of cells shows that cells whose bodies have higher power consumption are the best candidates for the power gating technique in row-level optimisation. The technique yields its lowest percentage saving at the minimum energy point (MEP) of the design. Power gating also improves the power variation of all structures by at least 70%.

  8. Massively parallel support for a case-based planning system

    NASA Technical Reports Server (NTRS)

    Kettler, Brian P.; Hendler, James A.; Anderson, William A.

    1993-01-01

    Case-based planning (CBP), a kind of case-based reasoning, is a technique in which previously generated plans (cases) are stored in memory and can be reused to solve similar planning problems in the future. CBP can save considerable time over generative planning, in which a new plan is produced from scratch. CBP thus offers a potential (heuristic) mechanism for handling intractable problems. One drawback of CBP systems has been the need for a highly structured memory to reduce retrieval times. This approach requires significant domain engineering and complex memory indexing schemes to make these planners efficient. In contrast, our CBP system, CaPER, uses a massively parallel frame-based AI language (PARKA) and can do extremely fast retrieval of complex cases from a large, unindexed memory. The ability to do fast, frequent retrievals has many advantages: indexing is unnecessary; very large case bases can be used; memory can be probed in numerous alternate ways; and queries can be made at several levels, allowing more specific retrieval of stored plans that better fit the target problem with less adaptation. In this paper we describe CaPER's case retrieval techniques and some experimental results showing its good performance, even on large case bases.
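The flat, unindexed retrieval idea can be sketched as an exhaustive similarity scan over the case base. In CaPER this scan is executed massively in parallel by PARKA; the sequential Python stand-in below (all names ours) only illustrates the logic:

```python
def retrieve(case_base, target, k=1):
    """Unindexed case retrieval: score every stored case against the target
    problem and return the k best matches. No indexing scheme is required,
    so memory can be probed in arbitrary alternate ways just by changing
    the scoring function. Cases are dicts of feature -> value."""
    def score(case):
        # count feature-value pairs shared with the target problem
        return sum(1 for f, v in target.items() if case.get(f) == v)
    return sorted(case_base, key=score, reverse=True)[:k]
```

Because nothing is precomputed, very large case bases can be used and queries can be posed at any level of specificity; the cost is a full scan per query, which is exactly what the massively parallel hardware absorbs.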

  9. Long memory and multifractality: A joint test

    NASA Astrophysics Data System (ADS)

    Goddard, John; Onali, Enrico

    2016-06-01

    The properties of statistical tests for hypotheses concerning the parameters of the multifractal model of asset returns (MMAR) are investigated, using Monte Carlo techniques. We show that, in the presence of multifractality, conventional tests of long memory tend to over-reject the null hypothesis of no long memory. Our test addresses this issue by jointly estimating long memory and multifractality. The estimation and test procedures are applied to exchange rate data for 12 currencies. Among the nested model specifications that are investigated, in 11 out of 12 cases, daily returns are most appropriately characterized by a variant of the MMAR that applies a multifractal time-deformation process to NIID returns. There is no evidence of long memory.
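As a point of reference, a common first diagnostic for long memory is the classical rescaled-range (R/S) estimate of the Hurst exponent. This is not the authors' joint MMAR test, only a standard companion technique (implementation details ours):

```python
import numpy as np

def hurst_rs(x, min_chunk=8):
    """Classical rescaled-range (R/S) estimate of the Hurst exponent H.
    H near 0.5 suggests no long memory; H well above 0.5 suggests persistence."""
    x = np.asarray(x, dtype=float)
    sizes, rs = [], []
    n = min_chunk
    while n <= len(x) // 2:
        vals = []
        for i in range(0, len(x) - n + 1, n):
            c = x[i:i + n]
            z = np.cumsum(c - c.mean())     # cumulative deviations from the mean
            s = c.std()
            if s > 0:
                vals.append((z.max() - z.min()) / s)
        sizes.append(n)
        rs.append(np.mean(vals))
        n *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(rs), 1)  # H = log-log slope
    return slope
```

The abstract's caution applies directly here: multifractality can inflate such estimates, which is why a naive R/S (or similar) test tends to over-reject the null of no long memory.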

  10. Does Reconsolidation Occur in Humans?

    PubMed Central

    Schiller, Daniela; Phelps, Elizabeth A.

    2011-01-01

Evidence for reconsolidation in non-human animals has accumulated rapidly in the last decade, providing compelling demonstrations of this phenomenon across species and memory paradigms. In stark contrast, scant evidence exists for human reconsolidation to date. A major reason for this discrepancy is the invasive nature of current techniques used to investigate reconsolidation, which are difficult to apply in humans. Pharmacological blockade of reconsolidation, for example, has typically been used in animals as a proof of concept. However, most compounds used in these studies are toxic for humans, and those compounds that are safe target related, but not direct, mechanisms of reconsolidation. Thus, although human reconsolidation has been hypothesized, there is limited evidence it actually exists. The best evidence for human reconsolidation emerges from non-invasive techniques that “update” memory during reconsolidation rather than block it, a technique only rarely used in animal research. Here we discuss the current state of human reconsolidation and the challenges ahead. We review findings on reconsolidation of emotional associative, episodic, and procedural memories, using invasive and non-invasive techniques. We discuss the possible interpretations of these results, attempt to reconcile some inconsistencies, and suggest a conceptual framework for future research. PMID:21629821

  11. Fault Tolerant Cache Schemes

    NASA Astrophysics Data System (ADS)

    Tu, H.-Yu.; Tasneem, Sarah

Most modern microprocessors employ on-chip cache memories to meet the memory bandwidth demand. These caches now occupy a greater share of chip real estate. Moreover, the continuous down-scaling of transistors increases the possibility of defects in the cache area, which already occupies more than 50% of the chip area. For this reason, various techniques have been proposed to tolerate defects in cache blocks. These techniques can be classified into three categories: cache line disabling, replacement with spare blocks, and decoder reconfiguration without spare blocks. This chapter examines each of these fault-tolerant techniques for a fixed, typical size and organization of L1 cache, through extended simulation of the individual techniques using the SPEC2000 benchmark. The design and characteristics of each technique are summarized with a view to evaluating the scheme. We then present our simulation results and a comparative study of the three methods.
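The first of the three categories, cache line disabling, can be illustrated with a toy direct-mapped cache in which a defective line is marked disabled and every access mapping to it is served as a miss by the next memory level (a hypothetical sketch, not the chapter's simulator):

```python
class DirectMappedCache:
    """Toy direct-mapped cache illustrating cache line disabling: a line
    diagnosed as faulty is marked disabled, and every access that maps to it
    is simply treated as a miss served by the next memory level."""
    def __init__(self, num_lines, faulty=()):
        self.num_lines = num_lines
        self.disabled = set(faulty)   # indices of defective lines
        self.tags = {}                # line index -> stored tag

    def access(self, addr):
        line = addr % self.num_lines
        tag = addr // self.num_lines
        if line in self.disabled:
            return "miss (line disabled)"
        if self.tags.get(line) == tag:
            return "hit"
        self.tags[line] = tag         # fill the line on a miss
        return "miss"
```

The appeal of the scheme is its simplicity: no spare blocks or decoder changes are needed, at the cost of a permanently lower hit rate for addresses mapping to disabled lines.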

  12. Low Power LDPC Code Decoder Architecture Based on Intermediate Message Compression Technique

    NASA Astrophysics Data System (ADS)

    Shimizu, Kazunori; Togawa, Nozomu; Ikenaga, Takeshi; Goto, Satoshi

Reducing power dissipation in LDPC code decoders is a major challenge in applying them to practical digital communication systems. In this paper, we propose a low-power LDPC code decoder architecture based on an intermediate message-compression technique with the following features: (i) an intermediate message compression technique enables the decoder to reduce the required memory capacity and write power dissipation; (ii) a clock-gated, shift-register-based intermediate message memory architecture enables the decoder to decompress the compressed messages in a single clock cycle while reducing the read power dissipation. The combination of these two techniques reduces the decoder's power dissipation while maintaining decoding throughput. Simulation results show that the proposed architecture improves power efficiency by up to 52% and 18% compared to decoders based on the overlapped schedule and the rapid-convergence schedule without the proposed techniques, respectively.

  13. Interpixel crosstalk cancellation on holographic memory

    NASA Astrophysics Data System (ADS)

    Ishii, Toshiki; Fujimura, Ryushi

    2017-09-01

    In holographic memory systems, there have been no practical techniques to minimize interpixel crosstalk thus far. We developed an interpixel crosstalk cancellation technique using a checkerboard phase pattern with a phase difference of π/2, which can decrease the size of the spatial filter along the Fourier plane with the signal-to-noise ratio (SNR) kept high. This interpixel crosstalk cancellation technique is simple because it requires only one phase plate in the signal beam path. We verified the effect of such a cancellation technique by simulation. The improvement of SNR is maximized to 6.5 dB when the filter size specified in the Nyquist areal ratio is approximately 1.05 in ideal optical systems with no other fixed noise. The proposed technique can improve SNR by 0.85 in an assumed monocular architecture at an actual noise intensity. This improvement of SNR is very useful for realizing high-density recording or enhancing system robustness.

  14. Meliponiculture in Quilombola communities of Ipiranga and Gurugi, Paraíba state, Brazil: an ethnoecological approach

    PubMed Central

    2014-01-01

Background The Quilombola communities of Ipiranga and Gurugi, located in the Atlantic Rainforest in the south of Paraíba state, have histories that are interwoven throughout time. The practice of meliponiculture has been carried out for generations in these social groups and provides an elaborate body of ecological knowledge concerning native stingless bees, the melliferous flora and the management techniques used. The traditional knowledge that the Quilombola have of stingless bees is of utmost importance for the establishment of conservation strategies for many species. Methods To deepen the study of the beekeepers' ecological knowledge, the method of participant observation together with structured and semi-structured interviews was used, as well as the collection of the entomological and botanical categories of bees and plants mentioned. With the aim of recording the knowledge related to meliponiculture previously exercised by the residents, the oral history method was employed. Results and discussion Results show that the informants sampled possess knowledge of twelve categories of stingless bees (Apidae: Meliponini), classified according to morphological, behavioral and ecological characteristics. Their management techniques are represented by the making of the traditional cortiço, and the melliferous flora is composed of many species predominant in the Atlantic Rainforest. From recording the memories and recollections of the individuals, it was observed that an intricate system of beliefs has permeated the keeping of uruçu bees (Melipona scutellaris) for generations. Conclusion According to the management techniques used by the beekeepers, the keeping of stingless bees in the communities is considered a traditional activity that is embedded within a network of ecological knowledge and beliefs accumulated by generations over time, and is undergoing a process of transformation that provides new meanings to such knowledge, as can be observed in the practices of young people. PMID:24410767

  15. Antenatal memories and psychopathology

    PubMed Central

    Neighbour, Roger

    1981-01-01

    A case is described of suicidal impulses apparently stemming from the patient's experience before and during his birth. By using a technique of `rebirthing', antenatal memories were relived and their traumatic effects resolved. Theoretical and practical accounts of rebirthing are given, and its significance for general practitioners is discussed. PMID:7338871

  16. FFTs in external or hierarchical memory

    NASA Technical Reports Server (NTRS)

    Bailey, David H.

    1989-01-01

    A description is given of advanced techniques for computing an ordered FFT on a computer with external or hierarchical memory. These algorithms (1) require as few as two passes through the external data set, (2) use strictly unit stride, long vector transfers between main memory and external storage, (3) require only a modest amount of scratch space in main memory, and (4) are well suited for vector and parallel computation. Performance figures are included for implementations of some of these algorithms on Cray supercomputers. Of interest is the fact that a main memory version outperforms the current Cray library FFT routines on the Cray-2, the Cray X-MP, and the Cray Y-MP systems. Using all eight processors on the Cray Y-MP, this main memory routine runs at nearly 2 Gflops.
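The algorithms referred to above belong to the "four-step" FFT family, whose factorization can be sketched in a few lines. The in-memory illustration below (names ours) shows only the data flow; the external-memory versions stage the column FFTs, twiddle multiplication, and transpose through disk or slow memory in long unit-stride transfers:

```python
import numpy as np

def four_step_fft(x, n1, n2):
    """Ordered FFT of length n1*n2 via the four-step factorization:
    column FFTs, twiddle multiplication, row FFTs, then a transpose.
    The transpose is the only step needing non-unit-stride data movement,
    which is why this family suits external or hierarchical memory."""
    n = n1 * n2
    a = np.asarray(x, dtype=complex).reshape(n1, n2)  # A[j1, j2] = x[j1*n2 + j2]
    b = np.fft.fft(a, axis=0)                         # n2 FFTs of length n1
    k1 = np.arange(n1).reshape(n1, 1)
    j2 = np.arange(n2).reshape(1, n2)
    b *= np.exp(-2j * np.pi * k1 * j2 / n)            # twiddle factors
    c = np.fft.fft(b, axis=1)                         # n1 FFTs of length n2
    return c.T.reshape(n)                             # X[n1*k2 + k1] = C[k1, k2]
```

Choosing n1 and n2 near sqrt(n) keeps each pass small enough to fit in main memory while the full data set lives in external storage.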

  17. Towards memory-aware services and browsing through lifelogging sensing.

    PubMed

    Arcega, Lorena; Font, Jaime; Cetina, Carlos

    2013-11-05

Every day we receive lots of information through our senses that is lost forever because it lacks the strength or the repetition needed to generate a lasting memory. Combining the emerging Internet of Things and lifelogging sensors, we believe it is possible to build up a Digital Memory (Dig-Mem) to complement the fallible memory of people. This work shows how to realize the Dig-Mem in terms of interactions, affinities, activities, goals and protocols. We also complement this Dig-Mem with memory-aware services and a Dig-Mem browser. Furthermore, we propose an RFID Tag-Sharing technique to speed up the adoption of Dig-Mem. Experimentation reveals an improvement in users' understanding of Dig-Mem as time passes, compared to natural memories, where the level of detail decreases over time.

  18. Multilevel radiative thermal memory realized by the hysteretic metal-insulator transition of vanadium dioxide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ito, Kota, E-mail: kotaito@mosk.tytlabs.co.jp; Nishikawa, Kazutaka; Iizuka, Hideo

    Thermal information processing is attracting much interest as an analog of electronic computing. We experimentally demonstrated a radiative thermal memory utilizing a phase change material. The hysteretic metal-insulator transition of vanadium dioxide (VO{sub 2}) allows us to obtain a multilevel memory. We developed a Preisach model to explain the hysteretic radiative heat transfer between a VO{sub 2} film and a fused quartz substrate. The transient response of our memory predicted by the Preisach model agrees well with the measured response. Our multilevel thermal memory paves the way for thermal information processing as well as contactless thermal management.
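The Preisach description of such a hysteretic multilevel memory can be sketched as a weighted sum of elementary relay hysterons. The thresholds and weights below are hypothetical, chosen only to show how one input value can map to different stable output levels depending on history:

```python
class Relay:
    """Elementary Preisach hysteron: switches on at alpha, off at beta (beta < alpha)."""
    def __init__(self, alpha, beta):
        self.alpha, self.beta, self.state = alpha, beta, 0

    def update(self, u):
        if u >= self.alpha:
            self.state = 1
        elif u <= self.beta:
            self.state = 0
        # between beta and alpha the relay keeps its previous state (hysteresis)
        return self.state

class PreisachModel:
    """Scalar Preisach model: output is a weighted sum of relay states."""
    def __init__(self, relays):
        self.relays = relays   # list of (Relay, weight) pairs

    def drive(self, u):
        return sum(w * r.update(u) for r, w in self.relays)
```

Driving the input up and back down leaves different subsets of relays latched, which is exactly the mechanism that lets the hysteretic transition store more than one level.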

  19. Fabrication of a helical coil shape memory alloy actuator

    NASA Astrophysics Data System (ADS)

    Odonnell, R. E.

    1992-02-01

    A fabrication process was developed to form, heat treat, and join NiTi shape memory alloy helical coils for use as mechanical actuators. Tooling and procedures were developed to wind both extension and compression-type coils on a manual lathe. Heat treating fixtures and techniques were used to set the 'memory' of the NiTi alloy to the desired configuration. A swaging process was devised to fasten shape memory alloy extension coils to end fittings for use in actuator testing and for potential attachment to mechanical devices. The strength of this mechanical joint was evaluated.

  20. A cognitive task analysis of information management strategies in a computerized provider order entry environment.

    PubMed

    Weir, Charlene R; Nebeker, Jonathan J R; Hicken, Bret L; Campo, Rebecca; Drews, Frank; Lebar, Beth

    2007-01-01

Computerized Provider Order Entry (CPOE) with electronic documentation and computerized decision support dramatically changes the information environment of the practicing clinician. Prior work patterns based on paper, verbal exchange, and manual methods are replaced with automated, computerized, and potentially less flexible systems. The objective of this study is to explore the information management strategies that clinicians use in the process of adapting to a CPOE system, using cognitive task analysis techniques. Observation and semi-structured interviews were conducted with 88 primary-care clinicians at 10 Veterans Administration Medical Centers. Interviews were taped, transcribed, and extensively analyzed to identify key information management goals, strategies, and tasks. Tasks were aggregated into groups, common components across tasks were clarified, and underlying goals and strategies were identified. Nearly half of the identified tasks were not fully supported by the available technology. Six core components of tasks were identified. Four meta-cognitive information management goals emerged: 1) relevance screening; 2) ensuring accuracy; 3) minimizing memory load; and 4) negotiating responsibility. Strategies used to support these goals are presented. Users develop a wide array of information management strategies that allow them to successfully adapt to new technology. Supporting the ability of users to develop adaptive strategies in service of meta-cognitive goals is a key component of a successful system.

  1. Lowering data retention voltage in static random access memory array by post fabrication self-improvement of cell stability by multiple stress application

    NASA Astrophysics Data System (ADS)

    Mizutani, Tomoko; Takeuchi, Kiyoshi; Saraya, Takuya; Kobayashi, Masaharu; Hiramoto, Toshiro

    2018-04-01

    We propose a new version of the post fabrication static random access memory (SRAM) self-improvement technique, which utilizes multiple stress application. It is demonstrated that, using a device matrix array (DMA) test element group (TEG) with intrinsic channel fully depleted (FD) silicon-on-thin-buried-oxide (SOTB) six-transistor (6T) SRAM cells fabricated by the 65 nm technology, the lowering of data retention voltage (DRV) is more effectively achieved than using the previously proposed single stress technique.

  2. IoT security with one-time pad secure algorithm based on the double memory technique

    NASA Astrophysics Data System (ADS)

    Wiśniewski, Remigiusz; Grobelny, Michał; Grobelna, Iwona; Bazydło, Grzegorz

    2017-11-01

Secure encryption of data in the Internet of Things is especially important, as a great deal of information is exchanged every day and the number of attack vectors on IoT elements keeps increasing. In this paper a novel symmetric encryption method is proposed. The idea is based on the one-time pad technique. The proposed solution applies a double-memory concept to secure transmitted data. The presented algorithm is considered as part of a communication protocol, and it has been initially validated against known security issues.
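The cryptographic core the paper builds on, the classic one-time pad, is a simple XOR with a single-use random key as long as the message. The sketch below shows only that core, not the paper's double-memory key handling:

```python
import secrets

def otp_encrypt(plaintext: bytes):
    """One-time pad: XOR with a truly random key of message length.
    Security requires the key to be random, kept secret, and never reused."""
    key = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return ciphertext, key

def otp_decrypt(ciphertext: bytes, key: bytes) -> bytes:
    # XOR is its own inverse, so decryption repeats the same operation
    return bytes(c ^ k for c, k in zip(ciphertext, key))
```

The practical difficulty, and the motivation for schemes like the one proposed, is key management: both endpoints must hold identical pads and discard each portion after a single use.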

  3. Cache-based error recovery for shared memory multiprocessor systems

    NASA Technical Reports Server (NTRS)

    Wu, Kun-Lung; Fuchs, W. Kent; Patel, Janak H.

    1989-01-01

A multiprocessor cache-based checkpointing and recovery scheme for recovering from transient processor errors in a shared-memory multiprocessor with private caches is presented. New implementation techniques that use checkpoint identifiers and recovery stacks to reduce performance degradation in processor utilization during normal execution are examined. This cache-based checkpointing technique prevents rollback propagation, provides for rapid recovery, and can be integrated into standard cache coherence protocols. An analytical model is used to estimate the relative performance of the scheme during normal execution. Extensions that take error latency into account are presented.
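The role of checkpoint identifiers and a recovery stack can be illustrated with a toy software analogue. The paper realizes this in cache hardware; the class below is a hypothetical illustration of the rollback idea only:

```python
import copy

class Checkpointed:
    """Toy rollback recovery: state snapshots are pushed with an identifier;
    on a detected transient error, state is restored from the matching
    snapshot and later checkpoints are discarded."""
    def __init__(self, state):
        self.state = state
        self.recovery_stack = []     # entries of (checkpoint id, saved state)
        self.next_id = 0

    def checkpoint(self):
        self.recovery_stack.append((self.next_id, copy.deepcopy(self.state)))
        self.next_id += 1
        return self.next_id - 1

    def rollback(self, cp_id):
        while self.recovery_stack:   # pop newer checkpoints until the id matches
            cid, saved = self.recovery_stack.pop()
            if cid == cp_id:
                self.state = saved
                return
        raise KeyError(cp_id)
```

In the hardware scheme, the "snapshot" is not a deep copy: unmodified cache lines tagged with the checkpoint identifier serve as the saved state, which is what keeps normal-execution overhead low.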

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pin, F.G.; Bender, S.R.

Most fuzzy logic-based reasoning schemes developed for robot control are fully reactive, i.e., the reasoning modules consist of fuzzy rule bases that represent direct mappings from the stimuli provided by the perception systems to the responses implemented by the motion controllers. Due to their totally reactive nature, such reasoning systems can encounter problems such as infinite loops and limit cycles. In this paper, we propose an approach to remedy these problems by adding a memory and memory-related behaviors to basic reactive systems. Three major types of memory behaviors are addressed: memory creation, memory management, and memory utilization. These are first presented, and examples of their implementation for the recognition of limit cycles during the navigation of an autonomous robot in a priori unknown environments are then discussed.

  5. Reconfigurable photonic crystals enabled by pressure-responsive shape-memory polymers

    PubMed Central

    Fang, Yin; Ni, Yongliang; Leo, Sin-Yen; Taylor, Curtis; Basile, Vito; Jiang, Peng

    2015-01-01

Smart shape-memory polymers can memorize and recover their permanent shape in response to an external stimulus (for example, heat). They have been extensively exploited for a wide spectrum of applications ranging from biomedical devices to aerospace morphing structures. However, most existing shape-memory polymers are thermoresponsive, and their performance is hindered by heat-demanding programming and recovery steps. Although pressure, like temperature, is an easily adjustable process variable, pressure-responsive shape-memory polymers are largely unexplored. Here we report a series of shape-memory polymers that enable unusual 'cold' programming and instantaneous shape recovery triggered by applying a contact pressure at ambient conditions. Moreover, the interdisciplinary integration of scientific principles drawn from two disparate fields, the fast-growing photonic crystal and shape-memory polymer technologies, enables fabrication of reconfigurable photonic crystals and simultaneously provides a simple and sensitive optical technique for investigating the intriguing shape-memory effects at the nanoscale. PMID:26074349

  6. Neural mechanism underlying autobiographical memory modulated by remoteness and emotion

    NASA Astrophysics Data System (ADS)

    Ge, Ruiyang; Fu, Yan; Wang, DaHua; Yao, Li; Long, Zhiying

    2012-03-01

Autobiographical memory is the ability to recollect past events from one's own life. Both emotional tone and memory remoteness can influence autobiographical memory retrieval along the time axis of one's life. Although numerous studies have been performed to investigate brain regions involved in the retrieval of autobiographical memory, the effect of emotional tone and memory age on autobiographical memory retrieval remains to be clarified. Moreover, whether the involvement of the hippocampus in consolidation of autobiographical events is time dependent or independent has been controversial. In this study, we investigated the effect of memory remoteness (factor 1: recent vs. remote) and emotional valence (factor 2: positive vs. negative) on the neural correlates underlying autobiographical memory by using the functional magnetic resonance imaging (fMRI) technique. Although all four conditions activated some common regions known as "core" regions in autobiographical memory retrieval, some other regions showed significantly different activation for recent versus remote and positive versus negative memories. In particular, we found that bilateral hippocampal regions were activated in all four conditions regardless of memory remoteness and emotional valence. Thus, our study confirmed some findings of previous studies and provided further evidence to support the multi-trace theory, which holds that the role of the hippocampus in autobiographical memory retrieval is time-independent and permanent in memory consolidation.

  7. An enhanced Ada run-time system for real-time embedded processors

    NASA Technical Reports Server (NTRS)

    Sims, J. T.

    1991-01-01

    An enhanced Ada run-time system has been developed to support real-time embedded processor applications. The primary focus of this development effort has been on the tasking system and the memory management facilities of the run-time system. The tasking system has been extended to support efficient and precise periodic task execution as required for control applications. Event-driven task execution providing a means of task-asynchronous control and communication among Ada tasks is supported in this system. Inter-task control is even provided among tasks distributed on separate physical processors. The memory management system has been enhanced to provide object allocation and protected access support for memory shared between disjoint processors, each of which is executing a distinct Ada program.

  8. Documenting the Intangible and the Use of "collective Memory" as a Tool for Risk Mitigation

    NASA Astrophysics Data System (ADS)

    Ekim, Z.; Güney, E. E.; Vatan, M.

    2017-08-01

Increasing migration driven by globalized economies, political conflicts, wars and the disasters of recent years has had a serious impact not only on the tangible heritage fabric, but also on the intangible values of heritage sites. With the challenges the field of heritage faces in managing such drastic change in mind, this paper proposes a documentation strategy that utilizes "collective memory" as a tool for risk mitigation at culturally diverse sites. The intangible and tangible values of two case studies, from Turkey and Canada, are examined comparatively to create a methodology for the use of data collected on "collective memory and identity" in risk mitigation and in managing change as a living value of the site.

  9. Digital Equipment Corporation VAX/VMS Version 4.3

    DTIC Science & Technology

    1986-07-30

The operating system performs process-oriented paging that allows execution of programs that may be larger than the physical memory allocated to them... to higher privileged modes. (For an explanation of how the four access modes provide memory access protection, see page 9, "Memory Management".) ... to optimize program performance for real-time applications or interactive environments.

  10. A Layered Solution for Supercomputing Storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grider, Gary

    To solve the supercomputing challenge of memory keeping up with processing speed, a team at Los Alamos National Laboratory developed two innovative memory management and storage technologies. Burst buffers peel off data onto flash memory to support the checkpoint/restart paradigm of large simulations. MarFS adds a thin software layer enabling a new tier for campaign storage—based on inexpensive, failure-prone disk drives—between disk drives and tape archives.

  11. AFTOMS Technology Issues and Alternatives Report

    DTIC Science & Technology

    1989-12-01

color, resolution; memory, processor speed; LAN interfaces, etc.; power requirements; physical and weather ruggedness. display... Telephone and Telegraph; CD-I: Compact Disk-Interactive; CD-ROM: Compact Disk-Read Only Memory; CGM: Computer Graphics Metafile; CNWDI: Critical Nuclear... Database Management System; RFP: Request For Proposal; RFS: Remote File System; ROM: Read Only Memory; SA-ALC: San Antonio Air Logistics Center; SAC...

  12. Cultural resource applications for a GIS: Stone conservation at Jefferson and Lincoln Memorials

    USGS Publications Warehouse

    Joly, Kyle; Donald, Tony; Comer, Douglas

    1998-01-01

Geographical information systems are rapidly becoming essential tools for land management. They provide a way to link landscape features to the wide variety of information that managers must consider when formulating plans for a site, designing site improvement and restoration projects, determining maintenance projects and protocols, and even interpreting the site. At the same time, they can be valuable research tools. Standing structures offer a different sort of geography, even though a humanly contrived one. Therefore, the capability of a geographical information system (GIS) to link geographical units to the information pertinent to site and resource management can be employed in the management of standing structures. This was the idea that inspired the use of GIS software, ArcView, to link computer-aided design (CAD) drawings of the Jefferson and Lincoln Memorials with inventories of the stones in the memorials. Both the CAD drawings and the inventory were in existence; what remained to be done was to modify the CAD files, place the inventory in an appropriately designed computerized database, and then link the two in a GIS project. This work was carried out at the NPS Denver Service Center, Resource Planning Group, Applied Archaeology Center (DSC-RPG-AAC), in Silver Spring, Maryland, with the assistance of US/ICOMOS summer interns Katja Marasovic (Croatia) and Rastislav Gromnica (Slovakia), under the supervision of AAC office manager Douglas Comer. Project guidance was provided by Tony Donald, the Denver Service Center (DSC) project architect for the restoration of the Jefferson and Lincoln Memorials, and GIS consultation services by Kyle Joly.

  13. Techniques for Improving Spelling Performance.

    ERIC Educational Resources Information Center

    Saylor, Paul

    Improving spelling performance of college students is a question of insuring that the correct information is in long-term memory and readily retrievable. Any system of spelling instruction should recognize the capacity limits of the sensory register and short-term memory; provide for identification of and concentration on the distinctive features…

  14. Interpreter composition issues in the formal verification of a processor-memory module

    NASA Technical Reports Server (NTRS)

    Fura, David A.; Cohen, Gerald C.

    1994-01-01

    This report describes interpreter composition techniques suitable for the formal specification and verification of a processor-memory module using the HOL theorem proving system. The processor-memory module is a multichip subsystem within a fault-tolerant embedded system under development within the Boeing Defense and Space Group. Modeling and verification methods were developed that permit provably secure composition at the transaction-level of specification, significantly reducing the complexity of the hierarchical verification of the system.

  15. Distributed Sensor Networks

    DTIC Science & Technology

    1979-09-30

University, Pittsburgh, Pennsylvania (1976). 14. R. L. Kirby, "ULISP for PDP-11s with Memory Management," Report MCS-76-23763, University of Maryland... teletype or graphics output. The record... must also monitor its own command queue and acknowledge commands sent to it by the user interface... kernel. By a network kernel we mean a multicomputer distributed operating system kernel that includes processor schedulers, "core" memory managers, and

  16. Living Design Memory: Framework, Implementation, Lessons Learned.

    ERIC Educational Resources Information Center

    Terveen, Loren G.; And Others

    1995-01-01

    Discusses large-scale software development and describes the development of the Designer Assistant to improve software development effectiveness. Highlights include the knowledge management problem; related work, including artificial intelligence and expert systems, software process modeling research, and other approaches to organizational memory;…

  17. Quantification of Load Dependent Brain Activity in Parametric N-Back Working Memory Tasks using Pseudo-continuous Arterial Spin Labeling (pCASL) Perfusion Imaging.

    PubMed

    Zou, Qihong; Gu, Hong; Wang, Danny J J; Gao, Jia-Hong; Yang, Yihong

    2011-04-01

    Brain activation and deactivation induced by N-back working memory tasks and their load effects have been extensively investigated using positron emission tomography (PET) and blood-oxygenation level dependent (BOLD) functional magnetic resonance imaging (fMRI). However, the underlying mechanisms of BOLD fMRI are still not completely understood and PET imaging requires injection of radioactive tracers. In this study, a pseudo-continuous arterial spin labeling (pCASL) perfusion imaging technique was used to quantify cerebral blood flow (CBF), a well understood physiological index reflective of cerebral metabolism, in N-back working memory tasks. Using pCASL, we systematically investigated brain activation and deactivation induced by the N-back working memory tasks and further studied the load effects on brain activity based on quantitative CBF. Our data show increased CBF in the fronto-parietal cortices, thalamus, caudate, and cerebellar regions, and decreased CBF in the posterior cingulate cortex and medial prefrontal cortex, during the working memory tasks. Most of the activated/deactivated brain regions show an approximately linear relationship between CBF and task loads (0, 1, 2 and 3 back), although several regions show non-linear relationships (quadratic and cubic). The CBF-based spatial patterns of brain activation/deactivation and load effects from this study agree well with those obtained from BOLD fMRI and PET techniques. These results demonstrate the feasibility of ASL techniques to quantify human brain activity during high cognitive tasks, suggesting its potential application to assessing the mechanisms of cognitive deficits in neuropsychiatric and neurological disorders.

  18. Hippocampal 5-HT Input Regulates Memory Formation and Schaffer Collateral Excitation.

    PubMed

    Teixeira, Catia M; Rosen, Zev B; Suri, Deepika; Sun, Qian; Hersh, Marc; Sargin, Derya; Dincheva, Iva; Morgan, Ashlea A; Spivack, Stephen; Krok, Anne C; Hirschfeld-Stoler, Tessa; Lambe, Evelyn K; Siegelbaum, Steven A; Ansorge, Mark S

    2018-06-06

    The efficacy and duration of memory storage is regulated by neuromodulatory transmitter actions. While the modulatory transmitter serotonin (5-HT) plays an important role in implicit forms of memory in the invertebrate Aplysia, its function in explicit memory mediated by the mammalian hippocampus is less clear. Specifically, the consequences elicited by the spatio-temporal gradient of endogenous 5-HT release are not known. Here we applied optogenetic techniques in mice to gain insight into this fundamental biological process. We find that activation of serotonergic terminals in the hippocampal CA1 region both potentiates excitatory transmission at CA3-to-CA1 synapses and enhances spatial memory. Conversely, optogenetic silencing of CA1 5-HT terminals inhibits spatial memory. We furthermore find that synaptic potentiation is mediated by 5-HT4 receptors and that systemic modulation of 5-HT4 receptor function can bidirectionally impact memory formation. Collectively, these data reveal powerful modulatory influence of serotonergic synaptic input on hippocampal function and memory formation. Copyright © 2018 Elsevier Inc. All rights reserved.

  19. Memory consolidation in humans: new evidence and opportunities

    PubMed Central

    Maguire, Eleanor A

    2014-01-01

    We are endlessly fascinated by memory; we desire to improve it and fear its loss. While it has long been recognized that brain regions such as the hippocampus are vital for supporting memories of our past experiences (autobiographical memories), we still lack fundamental knowledge about the mechanisms involved. This is because the study of specific neural signatures of autobiographical memories in vivo in humans presents a significant challenge. However, recent developments in high-resolution structural and functional magnetic resonance imaging coupled with advanced analytical methods now permit access to the neural substrates of memory representations that has hitherto been precluded in humans. Here, I describe how the application of ‘decoding’ techniques to brain-imaging data is beginning to disclose how individual autobiographical memory representations evolve over time, deepening our understanding of systems-level consolidation. In particular, this prompts new questions about the roles of the hippocampus and ventromedial prefrontal cortex and offers new opportunities to interrogate the elusive memory trace that has for so long confounded neuroscientists. PMID:24414174

  20. The Role of Prospective Memory in Medication Adherence: A Review of an Emerging Literature

    PubMed Central

    Zogg, Jennifer B.; Woods, Steven Paul; Sauceda, John A.; Wiebe, John S.; Simoni, Jane M.

    2013-01-01

    Although neurocognitive impairment is an established risk factor for medication non-adherence, standard neurocognitive tests developed for clinical purposes may not fully capture the complexities of non-adherence behavior or effectively inform theory-driven interventions. Prospective memory, an innovative cognitive construct describing one’s ability to remember to do something at a later time, is an understudied factor in the detection and remediation of medication non-adherence. This review orients researchers to the construct of prospective memory, summarizes empirical evidence for prospective memory as a risk factor for non-adherence, discusses the relative merits of current measurement techniques, and highlights potential prospective memory-focused intervention strategies. A comprehensive literature review was conducted of published empirical studies investigating prospective memory and medication adherence. Overall, reviewed studies suggest that prospective memory is an important component of medication adherence, providing incremental ecological validity over established predictors. Findings indicate that prospective memory-based interventions might be an effective means of improving adherence. PMID:21487722

  1. An Energy-Aware Runtime Management of Multi-Core Sensory Swarms.

    PubMed

    Kim, Sungchan; Yang, Hoeseok

    2017-08-24

    In sensory swarms, minimizing energy consumption under performance constraint is one of the key objectives. One possible approach to this problem is to monitor application workload that is subject to change at runtime, and to adjust system configuration adaptively to satisfy the performance goal. As today's sensory swarms are usually implemented using multi-core processors with adjustable clock frequency, we propose to monitor the CPU workload periodically and adjust the task-to-core allocation or clock frequency in an energy-efficient way in response to the workload variations. In doing so, we present an online heuristic that determines the most energy-efficient adjustment that satisfies the performance requirement. The proposed method is based on a simple yet effective energy model that is built upon performance prediction using IPC (instructions per cycle) measured online and a power equation derived empirically. The use of IPC accounts for memory intensities of a given workload, enabling the accurate prediction of execution time. Hence, the model allows us to rapidly and accurately estimate the effect of the two control knobs, clock frequency adjustment and core allocation. The experiments show that the proposed technique delivers considerable energy saving of up to 45% compared to the state-of-the-art multi-core energy management technique.

  2. An Energy-Aware Runtime Management of Multi-Core Sensory Swarms

    PubMed Central

    Kim, Sungchan

    2017-01-01

    In sensory swarms, minimizing energy consumption under performance constraint is one of the key objectives. One possible approach to this problem is to monitor application workload that is subject to change at runtime, and to adjust system configuration adaptively to satisfy the performance goal. As today’s sensory swarms are usually implemented using multi-core processors with adjustable clock frequency, we propose to monitor the CPU workload periodically and adjust the task-to-core allocation or clock frequency in an energy-efficient way in response to the workload variations. In doing so, we present an online heuristic that determines the most energy-efficient adjustment that satisfies the performance requirement. The proposed method is based on a simple yet effective energy model that is built upon performance prediction using IPC (instructions per cycle) measured online and a power equation derived empirically. The use of IPC accounts for memory intensities of a given workload, enabling the accurate prediction of execution time. Hence, the model allows us to rapidly and accurately estimate the effect of the two control knobs, clock frequency adjustment and core allocation. The experiments show that the proposed technique delivers considerable energy saving of up to 45% compared to the state-of-the-art multi-core energy management technique. PMID:28837094
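
    The search described in the abstract above can be sketched as a small exhaustive heuristic. This is a minimal sketch, assuming a toy cubic dynamic-power model and perfect linear scaling across cores; both are illustrative placeholders, not the paper's empirically derived equations. The shared idea is the IPC-based execution-time prediction.

```python
# Minimal sketch of an energy-aware configuration heuristic. The power
# model (P proportional to cores * f^3) and the linear-speedup assumption
# are placeholders, not the paper's empirical equations.

def predict_time(instructions, ipc, freq_hz, cores):
    """Predicted execution time; measured IPC captures memory intensity."""
    return instructions / (ipc * freq_hz * cores)

def power(freq_ghz, cores, coeff=1.0):
    """Toy dynamic-power model: linear in cores, cubic in frequency."""
    return coeff * cores * freq_ghz ** 3

def best_config(instructions, ipc, deadline_s, freqs_ghz, max_cores):
    """Pick the (frequency, cores) pair meeting the deadline at least energy."""
    best = None
    for f in freqs_ghz:
        for c in range(1, max_cores + 1):
            t = predict_time(instructions, ipc, f * 1e9, c)
            if t > deadline_s:
                continue  # misses the performance requirement
            energy = power(f, c) * t
            if best is None or energy < best[0]:
                best = (energy, f, c)
    return None if best is None else (best[1], best[2])

# 2e8 instructions at IPC 1.5 under a 50 ms deadline: the cubic power term
# favors the lowest frequency that still meets the deadline, on all cores.
cfg = best_config(2e8, 1.5, 0.050, [0.8, 1.2, 1.6, 2.0], 4)
```

    With these toy parameters the heuristic chooses 0.8 GHz on 4 cores, illustrating why frequency reduction plus core allocation can beat running fast on fewer cores.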

  3. What are the differences between long-term, short-term, and working memory?

    PubMed

    Cowan, Nelson

    2008-01-01

    In the recent literature there has been considerable confusion about the three types of memory: long-term, short-term, and working memory. This chapter strives to reduce that confusion and makes up-to-date assessments of these types of memory. Long- and short-term memory could differ in two fundamental ways, with only short-term memory demonstrating (1) temporal decay and (2) chunk capacity limits. Both properties of short-term memory are still controversial but the current literature is rather encouraging regarding the existence of both decay and capacity limits. Working memory has been conceived and defined in three different, slightly discrepant ways: as short-term memory applied to cognitive tasks, as a multi-component system that holds and manipulates information in short-term memory, and as the use of attention to manage short-term memory. Regardless of the definition, there are some measures of memory in the short term that seem routine and do not correlate well with cognitive aptitudes and other measures (those usually identified with the term "working memory") that seem more attention demanding and do correlate well with these aptitudes. The evidence is evaluated and placed within a theoretical framework depicted in Fig. 1.

  4. Limited capacity of working memory in unihemispheric random walks implies conceivable slow dispersal.

    PubMed

    Wei, Kun; Zhong, Suchuan

    2017-08-01

    Phenomenologically inspired by dolphins' unihemispheric sleep, we introduce a minimal model for random walks with physiological memory. The physiological memory consists of long-term memory, which includes unconscious implicit memory and conscious explicit memory, and working memory, which serves as a multi-component system for integrating, manipulating and managing short-term storage. The model assumes that the sleeping state allows retrievals of episodic objects merely from the episodic buffer, where these memory objects are invoked corresponding to the ambient objects and are thus object-oriented, together with intermittent but increasing use of implicit memory, in which decisions are unconsciously picked up from historical time series. The process of memory decay and forgetting is constructed in the episodic buffer. The walker's risk attitude, as a product of physiological heuristics according to the performance of object-oriented decisions, is imposed on implicit memory. The analytical results of unihemispheric random walks with the mixture of object-oriented and time-oriented memory, as well as the long-time behavior, which tends toward the use of implicit memory, are provided, consistent with the common-sense expectation that a conservative risk attitude leads to slow movement.

  5. Neural Correlates of True and False Memory in Mild Cognitive Impairment

    PubMed Central

    Sweeney-Reed, Catherine M.; Riddell, Patricia M.; Ellis, Judi A.; Freeman, Jayne E.; Nasuto, Slawomir J.

    2012-01-01

    The goal of this research was to investigate the changes in neural processing in mild cognitive impairment. We measured phase synchrony, amplitudes, and event-related potentials in veridical and false memory to determine whether these differed in participants with mild cognitive impairment compared with typical, age-matched controls. Empirical mode decomposition phase locking analysis was used to assess synchrony, which is the first time this analysis technique has been applied in a complex cognitive task such as memory processing. The technique allowed assessment of changes in frontal and parietal cortex connectivity over time during a memory task, without a priori selection of frequency ranges, which has been shown previously to influence synchrony detection. Phase synchrony differed significantly in its timing and degree between participant groups in the theta and alpha frequency ranges. Timing differences suggested greater dependence on gist memory in the presence of mild cognitive impairment. The group with mild cognitive impairment had significantly more frontal theta phase locking than the controls in the absence of a significant behavioural difference in the task, providing new evidence for compensatory processing in the former group. Both groups showed greater frontal phase locking during false than true memory, suggesting increased searching when no actual memory trace was found. Significant inter-group differences in frontal alpha phase locking provided support for a role for lower and upper alpha oscillations in memory processing. Finally, fronto-parietal interaction was significantly reduced in the group with mild cognitive impairment, supporting the notion that mild cognitive impairment could represent an early stage in Alzheimer’s disease, which has been described as a ‘disconnection syndrome’. PMID:23118992

  6. Common Problems of Documentary Information Transfer, Storage and Retrieval in Industrial Organizations.

    ERIC Educational Resources Information Center

    Vickers, P. H.

    1983-01-01

    Examination of management information systems of three manufacturing firms highlights principal characteristics, document types and functions, main information flows, storage and retrieval systems, and common problems (corporate memory failure, records management, management information systems, general management). A literature review and…

  7. Physicians’ perceptions of capacity building for managing chronic disease in seniors using integrated interprofessional care models

    PubMed Central

    Lee, Linda; Heckman, George; McKelvie, Robert; Jong, Philip; D’Elia, Teresa; Hillier, Loretta M.

    2015-01-01

    Objective: To explore the barriers to and facilitators of adapting and expanding a primary care memory clinic model to integrate care of additional complex chronic geriatric conditions (heart failure, falls, chronic obstructive pulmonary disease, and frailty) into care processes, with the goal of improving outcomes for seniors. Design: Mixed-methods study using quantitative (questionnaires) and qualitative (interviews) methods. Setting: Ontario. Participants: Family physicians currently working in primary care memory clinic teams and supporting geriatric specialists. Methods: Family physicians currently working in memory clinic teams (n = 29) and supporting geriatric specialists (n = 9) were recruited as survey participants. Interviews were conducted with memory clinic lead physicians (n = 16). Statistical analysis was done to assess differences between family physician ratings and geriatric specialist ratings related to the capacity for managing complex chronic geriatric conditions, the role of interprofessional collaboration within primary care, and funding and staffing to support geriatric care. Results from both study methods were compared to identify common findings. Main findings: Results indicate overall support for expanding the memory clinic model to integrate care for other complex conditions. However, the current primary care structure is challenged to support optimal management of patients with multiple comorbidities, particularly as related to limited funding and staffing resources. Structured training, interprofessional teams, and an active role of geriatric specialists within primary care were identified as important facilitators. Conclusion: The memory clinic model, as applied to other complex chronic geriatric conditions, has the potential to build capacity for high-quality primary care, improve health outcomes, promote efficient use of health care resources, and reduce health care costs. PMID:25932482

  8. ARRA TAS::89 0227::TAS Recovery Act 100G FTP: An Ultra-High Speed Data Transfer Service over Next Generation 100 Gigabit per Second Network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    YU, DANTONG; Jin, Shudong

    2014-03-01

    Data-intensive applications, including high energy and nuclear physics, astrophysics, climate modeling, nano-scale materials science, genomics, and finance, are expected to generate exabytes of data over the coming years, which must be transferred, visualized, and analyzed by geographically distributed teams of users. High-performance network capabilities must be available to these users at the application level in a transparent, virtualized manner. Moreover, the application users must have the capability to move large datasets from local and remote locations across network environments to their home institutions. To solve these challenges, the main goal of our project is to design and evaluate high-performance data transfer software to support various data-intensive applications. First, we have designed a middleware software that provides access to Remote Direct Memory Access (RDMA) functionalities. This middleware integrates network access, memory management and multitasking in its core design. We address a number of issues related to its efficient implementation, for instance, explicit buffer management and memory registration, and parallelization of RDMA operations, which are vital to delivering the benefit of RDMA to the applications. Built on top of this middleware, the RDMA-based FTP software, RFTP, is described and experimentally evaluated. This application has been implemented by our team to exploit the full capabilities of advanced RDMA mechanisms for ultra-high speed bulk data transfer applications on the Energy Sciences Network (ESnet). Second, we designed our data transfer software to optimize TCP/IP-based data transfer performance so that RFTP can be fully compatible with today's Internet. Our kernel optimization techniques, using the Linux system calls sendfile and splice, reduce data-copy cost. In this report, we summarize the technical challenges of our project, the primary software design methods, the major project milestones achieved, and the testbed evaluation work and demonstrations during the project's lifetime.
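
    The zero-copy idea behind the sendfile/splice optimizations mentioned above can be illustrated briefly: sendfile moves bytes between file descriptors inside the kernel, avoiding the round trip through a user-space buffer that a read()/write() loop incurs. RFTP uses these calls from C; this is only a sketch of the pattern in Python (regular-file targets require Linux).

```python
# Sketch of kernel-side zero-copy with os.sendfile: no user-space buffer
# is involved, unlike a conventional read()/write() copy loop.
import os
import tempfile

def zero_copy(src_path, dst_path, chunk=1 << 20):
    """Copy src to dst, one chunk per sendfile syscall."""
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        offset = 0
        while True:
            sent = os.sendfile(dst.fileno(), src.fileno(), offset, chunk)
            if sent == 0:   # end of source file
                break
            offset += sent

with tempfile.TemporaryDirectory() as d:
    src = os.path.join(d, "payload.bin")
    dst = os.path.join(d, "copy.bin")
    with open(src, "wb") as f:
        f.write(b"\x00" * 3_000_000)  # ~3 MB test payload
    zero_copy(src, dst)
    copied_size = os.path.getsize(dst)
```

    For bulk transfers the saving is the eliminated copy between kernel and user space, which is exactly the cost the report's kernel optimizations target.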

  9. Using C to build a satellite scheduling expert system: Examples from the Explorer Platform planning system

    NASA Technical Reports Server (NTRS)

    Mclean, David R.; Tuchman, Alan; Potter, William J.

    1991-01-01

    A C-based artificial intelligence (AI) development effort based on a software-tools approach is discussed, with emphasis on reusability and maintainability of code. The discussion starts with simple examples of how list processing can easily be implemented in C and then proceeds to implementations of frames and objects that use dynamic memory allocation. The implementation of procedures that use depth-first search, constraint propagation, context switching, and a blackboard-like simulation environment is described. Techniques for managing the complexity of C-based AI software are noted, especially the object-oriented techniques of data encapsulation and incremental development. Finally, all these concepts are put together in a description of the components of planning software called the Planning And Resource Reasoning (PARR) Shell. This shell has been used successfully to schedule services of the Tracking and Data Relay Satellite System for the Earth Radiation Budget Satellite since May 1987 and will be used for operations scheduling of the Explorer Platform in November 1991.
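
    The frame idea the paper implements in C with dynamic memory allocation can be sketched in a few lines; this is a language-neutral Python sketch of the concept, not PARR's actual code, and the example frames and slot names are hypothetical. A frame is a set of named slots plus an optional parent frame, and slot lookup falls back to the parent chain, giving inheritance of default values.

```python
# Minimal frame system: named slots with lookup falling back to a parent
# frame (the paper builds the same structure in C with malloc'd nodes).
class Frame:
    def __init__(self, name, parent=None, **slots):
        self.name = name
        self.parent = parent
        self.slots = dict(slots)

    def get(self, slot):
        """Return a slot value, climbing parent frames when absent locally."""
        if slot in self.slots:
            return self.slots[slot]
        if self.parent is not None:
            return self.parent.get(slot)
        raise KeyError(slot)

# Hypothetical frames for illustration only (not PARR's actual schema).
satellite = Frame("satellite", power_source="solar")
erbs = Frame("ERBS", parent=satellite, orbit="LEO")

orbit = erbs.get("orbit")          # local slot
power = erbs.get("power_source")   # inherited from the parent frame
```
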

  10. Automatic aeroponic irrigation system based on Arduino’s platform

    NASA Astrophysics Data System (ADS)

    Montoya, A. P.; Obando, F. A.; Morales, J. G.; Vargas, G.

    2017-06-01

    Recirculating hydroponic culture techniques, such as aeroponics, have several advantages over traditional agriculture, improving its efficiency and environmental impact. These techniques require continuous monitoring and automation for proper operation. In this work, an automatically monitored aeroponic irrigation system was developed based on the Arduino free software platform. Analog and digital sensors were implemented to measure the temperature, flow, and level of the nutrient solution in a real greenhouse. In addition, the pH and electrical conductivity of the nutrient solution are monitored using the Arduino’s differential configuration. The sensor network and the acquisition and automation system are managed by two Arduino modules in a master-slave configuration, which communicate with each other wirelessly via Wi-Fi. Further, data are stored on micro SD cards and the information is loaded onto a web page in real time. The developed device provided useful agronomic information when tested with an arugula crop (Eruca sativa Mill.). The system could also be employed as an early-warning system to prevent irrigation malfunctions.

  11. Memory skills mediating superior memory in a world-class memorist.

    PubMed

    Ericsson, K Anders; Cheng, Xiaojun; Pan, Yafeng; Ku, Yixuan; Ge, Yi; Hu, Yi

    2017-10-01

    Laboratory studies have investigated how individuals with normal memory spans attained digit spans over 80 digits after hundreds of hours of practice. Experimental analyses of their memory skills suggested that their attained memory spans were constrained by the encoding time, for the time needed will increase if the length of digit sequences to be memorised becomes longer. These constraints seemed to be violated by a world-class memorist, Feng Wang (FW), who won the World Memory Championship by recalling 300 digits presented at 1 digit/s. In several studies we examined FW's memory skills underlying his exceptional performance. First FW reproduced his superior memory span of 200 digits under laboratory condition, and we obtained his retrospective reports describing his encoding/retrieval processes (Experiment 1). Further experiments used self-paced memorisation to identify temporal characteristics of encoding of digits in 4-digit clusters (Experiment 2), and explored memory encoding at presentation speeds much faster than 1 digit/s (Experiment 3). FW's superiority over previous digit span experts is explained by his acquisition of well-known mnemonic techniques and his training that focused on rapid memorisation. His memory performance supports the feasibility of acquiring memory skills for improved working memory based on storage in long-term memory.
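
    The clustering step behind such mnemonic techniques is mechanical and easy to sketch: the digit stream is segmented into fixed-size clusters (FW encoded digits in 4-digit groups), and each cluster is translated into a pre-learned image. The image table below is a made-up stand-in for a memorist's trained encoding system, not FW's actual mappings.

```python
# Segment a digit stream into fixed-size clusters and map each cluster to
# a pre-learned image (toy mappings; a real memorist trains thousands).
def cluster(digits, size=4):
    """Split a digit string into fixed-size clusters; the last may be short."""
    return [digits[i:i + size] for i in range(0, len(digits), size)]

images = {"1492": "ships", "2718": "bee", "3141": "pie"}  # toy mappings

clusters = cluster("149227183141")
encoded = [images.get(c, "?") for c in clusters]
# 12 digits become 3 clusters, i.e. 3 images to place along a memory route
```

    The point of the grouping is compression: retrieval structures in long-term memory hold a handful of rich images instead of dozens of unrelated digits, which is how trained spans can far exceed ordinary working-memory capacity.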

  12. [The experimental evaluation with flow cytofluorimetry technique of the level of cellular immunologic memory in persons vaccinated against plague and anthrax].

    PubMed

    Bogacheva, N V; Kriuchkov, A V; Darmov, I V; Vorob'ev, K A; Pechenkin, D V; Elagin, G D; Kolesnikiov, D P

    2013-11-01

    The article presents an experimental evaluation, using the flow cytofluorimetry technique, of the level of cellular immunologic memory in persons vaccinated with live dry plague and anthrax vaccines. It is established that introduction of the live dry plague and anthrax vaccines into the organism of vaccinated persons induces an immunologic rearrangement manifested by a reliable increase in the blood concentration of Th1 lymphocytes (immunologic memory cells) following vaccination. The highest correlation coefficient was detected between the leukocyte lysis coefficient and the stimulation coefficient based on the blood concentration of T lymphocytes, predominantly Th1 lymphocytes. Values of the stimulation coefficient were calculated for the corresponding blood cells of vaccinated persons. These data attest to the effectiveness of vaccination against plague and anthrax.

  13. Neural and Cellular Mechanisms of Fear and Extinction Memory Formation

    PubMed Central

    Orsini, Caitlin A.; Maren, Stephen

    2012-01-01

    Over the course of natural history, countless animal species have evolved adaptive behavioral systems to cope with dangerous situations and promote survival. Emotional memories are central to these defense systems because they are rapidly acquired and prepare organisms for future threat. Unfortunately, the persistence and intrusion of memories of fearful experiences are quite common and can lead to pathogenic conditions, such as anxiety and phobias. Over the course of the last thirty years, neuroscientists and psychologists alike have attempted to understand the mechanisms by which the brain encodes and maintains these aversive memories. Of equal interest, though, is the neurobiology of extinction memory formation as this may shape current therapeutic techniques. Here we review the extant literature on the neurobiology of fear and extinction memory formation, with a strong focus on the cellular and molecular mechanisms underlying these processes. PMID:22230704

  14. Towards Memory-Aware Services and Browsing through Lifelogging Sensing

    PubMed Central

    Arcega, Lorena; Font, Jaime; Cetina, Carlos

    2013-01-01

    Every day we receive lots of information through our senses that is lost forever, because it lacked the strength or the repetition needed to generate a lasting memory. Combining the emerging Internet of Things and lifelogging sensors, we believe it is possible to build up a Digital Memory (Dig-Mem) in order to complement the fallible memory of people. This work shows how to realize the Dig-Mem in terms of interactions, affinities, activities, goals and protocols. We also complement this Dig-Mem with memory-aware services and a Dig-Mem browser. Furthermore, we propose a RFID Tag-Sharing technique to speed up the adoption of Dig-Mem. Experimentation reveals an improvement of the user understanding of Dig-Mem as time passes, compared to natural memories where the level of detail decreases over time. PMID:24196436

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    An, Ho-Myoung; Kim, Hee-Dong; Kim, Tae Geun, E-mail: tgkim1@korea.ac.kr

    Graphical abstract: The degradation tendency extracted by the CP technique was almost the same in both the bulk-type and TFT-type cells. - Highlights: • D_it is directly investigated from bulk-type and TFT-type CTF memory. • The charge pumping technique was employed to analyze the D_it information. • To apply the CP technique to monitor the reliability of 3D NAND flash. - Abstract: The energy distribution and density of interface traps (D_it) are directly investigated from bulk-type and thin-film transistor (TFT)-type charge trap flash memory cells with tunnel oxide degradation under program/erase (P/E) cycling, using a charge pumping (CP) technique, in view of application in 3-dimensionally stackable NAND flash memory cells. After P/E cycling in bulk-type devices, the interface trap density gradually increased from 1.55 × 10^12 cm^-2 eV^-1 to 3.66 × 10^13 cm^-2 eV^-1 due to tunnel oxide damage, which was consistent with the subthreshold swing and transconductance degradation after P/E cycling. Its distribution moved toward shallow energy levels with increasing cycling numbers, which coincided with the decay-rate degradation at short retention times. The tendency extracted with the CP technique for D_it of the TFT-type cells was similar to that of the bulk-type cells.

  16. Using Rollback Avoidance to Mitigate Failures in Next-Generation Extreme-Scale Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Levy, Scott N.

    2016-05-01

    High-performance computing (HPC) systems enable scientists to numerically model complex phenomena in many important physical systems. The next major milestone in the development of HPC systems is the construction of the first supercomputer capable of executing more than an exaflop, 10^18 floating-point operations per second. On systems of this scale, failures will occur much more frequently than on current systems. As a result, resilience is a key obstacle to building next-generation extreme-scale systems. Coordinated checkpointing is currently the most widely-used mechanism for handling failures on HPC systems. Although coordinated checkpointing remains effective on current systems, increasing the scale of today's systems to build next-generation systems will increase the cost of fault tolerance as more and more time is taken away from the application to protect against or recover from failure. Rollback avoidance techniques seek to mitigate the cost of checkpoint/restart by allowing an application to continue its execution rather than rolling back to an earlier checkpoint when failures occur. These techniques include failure prediction and preventive migration, replicated computation, fault-tolerant algorithms, and software-based memory fault correction. In this thesis, we examine how rollback avoidance techniques can be used to address failures on extreme-scale systems. Using a combination of analytic modeling and simulation, we evaluate the potential impact of rollback avoidance on these systems. We then present a novel rollback avoidance technique that exploits similarities in application memory. Finally, we examine the feasibility of using this technique to protect against memory faults in kernel memory.
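
    The checkpoint-cost trade-off analyzed above has a classic reference point: Young's approximation for the optimal checkpoint interval, sqrt(2dM), where d is the time to write one checkpoint and M is the system's mean time between failures (MTBF). This formula is standard background in checkpoint/restart analysis, not a result of the thesis; the numbers below are illustrative.

```python
# Young's approximation: optimal checkpoint interval = sqrt(2 * d * M),
# with d the checkpoint write cost and M the system MTBF (both in seconds).
import math

def young_interval(checkpoint_cost_s, mtbf_s):
    """Approximate optimal time between checkpoints (Young's formula)."""
    return math.sqrt(2.0 * checkpoint_cost_s * mtbf_s)

# Shrinking MTBF at larger scale forces more frequent checkpoints, raising
# the overhead that rollback avoidance techniques try to sidestep.
tau_current = young_interval(600, 24 * 3600)  # 10-min checkpoint, 1-day MTBF
tau_extreme = young_interval(600, 3600)       # same cost, 1-hour MTBF
```

    With a one-day MTBF the interval is roughly 2.8 hours; cutting the MTBF to one hour pushes it to about 35 minutes, so a far larger fraction of machine time goes to fault tolerance, which is the motivation for avoiding rollback altogether.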

  17. The Need for Psychoanalysis is Alive and Well in Community Psychiatry

    PubMed Central

    Bell, Carl C.

    1979-01-01

    While the author recognizes the positive impact community psychiatry has had on postpsychotic patients by the uses of medical management and environmental manipulation, he demonstrates that there is a deficiency in the treatment of lower socioeconomic patients with neurotic illnesses. Specifically, neurotic patients tend to be given supportive therapy and psychopharmacotherapy when a form of psychoanalytic psychotherapy would be more appropriate. The author supports these contentions by presenting three cases which have a diagnosis of hysterical neurosis and which clearly demonstrate the economic, topographical, structural, dynamic, and genetic components of the psychoanalytic theory. Finally, as psychoanalytic psychotherapy is too time-consuming, the author suggests that Freud's early psychoanalytic technique of symptom removal by memory recovery be used when appropriate. PMID:439170

  18. Design and feasibility of a memory intervention with focus on self-management for cognitive impairment in epilepsy.

    PubMed

    Caller, Tracie A; Secore, Karen L; Ferguson, Robert J; Roth, Robert M; Alexandre, Faith P; Henegan, Patricia L; Harrington, Jessica J; Jobst, Barbara C

    2015-03-01

    The aim of this study was to assess the feasibility of a self-management intervention targeting cognitive dysfunction to improve quality of life and reduce memory-related disability in adults with epilepsy. The intervention incorporates (1) education on cognitive function in epilepsy, (2) self-awareness training, (3) compensatory strategies, and (4) application of these strategies in day-to-day life using problem-solving therapy. In addition to the behavioral modification, formal working memory training was conducted by utilizing a commercially available program in a subgroup of patients. Our findings suggest that a self-management intervention targeting cognitive dysfunction was feasible for delivery to a rural population with epilepsy, with 13 of 16 enrolled participants completing the 8-session program. Qualitative data indicate high satisfaction and subjective improvement in cognitive functioning in day-to-day life. These findings provide support for further evaluation of the efficacy of this intervention through a randomized controlled trial. Copyright © 2015 Elsevier Inc. All rights reserved.

  19. Method and structure for an improved data reformatting procedure

    DOEpatents

    Chatterjee, Siddhartha [Yorktown Heights, NY; Gunnels, John A [Brewster, NY; Gustavson, Fred Gehrung [Briarcliff Manor, NY

    2009-06-30

    A method (and structure) of managing memory in which a low-level mechanism is executed to signal, in a sequence of instructions generated at a higher level, that at least a portion of a contiguous area of memory is permitted to be overwritten.

  20. Factors that influence effective perioperative temperature management by anesthesiologists: a qualitative study using the Theoretical Domains Framework.

    PubMed

    Boet, Sylvain; Patey, Andrea M; Baron, Justine S; Mohamed, Karim; Pigford, Ashlee-Ann E; Bryson, Gregory L; Brehaut, Jamie C; Grimshaw, Jeremy M

    2017-06-01

    Inadvertent perioperative hypothermia (IPH) is associated with a range of adverse outcomes. Safe and effective warming techniques exist to prevent IPH; however, IPH remains common. This study aimed to identify factors that anesthesiologists perceive may influence temperature management during the perioperative period. After Research Ethics Board approval, semi-structured interviews were conducted with staff anesthesiologists at a Canadian academic hospital. An interview guide based on the Theoretical Domains Framework (TDF) was used to capture 14 theoretical domains that may influence temperature management. The interview transcripts were coded using direct content analysis to generate specific beliefs and to identify relevant TDF domains perceived to influence temperature management behaviour. Data saturation was achieved after 15 interviews. The following nine theoretical domains were identified as relevant to designing an intervention for practices in perioperative temperature management: knowledge, beliefs about capabilities, beliefs about consequences, reinforcement, memory/attention/decision-making, environmental context and resources, social/professional role/identity, social influences, and behavioural regulation. Potential target areas to improve temperature management practices include interventions that address information needs about individual temperature management behaviour as well as patient outcome (feedback), increasing awareness of possible temperature management strategies and guidelines, and a range of equipment and surgical team dynamics that influence temperature management. This study identified several potential target areas for future interventions from nine of the TDF behavioural domains that anesthesiologists perceive to drive their temperature management practices. Future interventions that aim to close the evidence-practice gap in perioperative temperature management may include these targets.

  1. Beware of being captured by an analogy: dreams are like many things.

    PubMed

    Erdelyi, Matthew Hugh

    2013-12-01

    Classic traditions have linked dreams to memory (e.g., "dreaming is another kind of remembering" [Freud 1918/1955]) and modern notions like implicit memory subsume dreaming by definition. Llewellyn develops the more specific thesis that rapid eye movement (REM) dreams, because of their similarities to mnemonic techniques, have the function of elaboratively encoding episodic memories. This proposal is premature, requiring exigent testing. Other analogs of dreams, for example, jokes, do not invoke function but do contribute to dream science.

  2. Improving Memory for Optimization and Learning in Dynamic Environments

    DTIC Science & Technology

    2011-07-01

    algorithm uses simple, incremental clustering to separate solutions into memory entries. The cluster centers are used as the models in the memory. This is...entire days of traffic with realistic traffic demands and turning ratios on a 32 intersection network modeled on downtown Pittsburgh, Pennsylvania...early/tardy problem. Management Science, 35(2):177–191, 1989. [78] Daniel Parrott and Xiaodong Li. A particle swarm model for tracking multiple peaks in
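The clipped snippet describes keeping a memory of solutions via simple single-pass clustering, with the cluster centers serving as the memory entries. A minimal leader-style sketch of that idea (the fixed radius, Euclidean distance, and running-mean center update are illustrative assumptions, not details from the report):

```python
import math

def cluster_memory(solutions, radius=1.0):
    """Single-pass (leader) clustering: each solution either joins the
    nearest memory entry within `radius` (updating that center as a
    running mean) or opens a new entry. The centers are the memory."""
    centres, counts = [], []
    for s in solutions:
        best, best_d = None, math.inf
        for i, c in enumerate(centres):       # nearest existing entry
            d = math.dist(s, c)
            if d < best_d:
                best, best_d = i, d
        if best is not None and best_d <= radius:
            counts[best] += 1                 # fold solution into entry:
            n = counts[best]                  # running-mean center update
            centres[best] = tuple(c + (x - c) / n
                                  for c, x in zip(centres[best], s))
        else:
            centres.append(tuple(s))          # open a new memory entry
            counts.append(1)
    return centres

mem = cluster_memory([(0.0, 0.0), (0.2, 0.0), (5.0, 5.0), (0.1, 0.1)])
print(len(mem))  # -> 2: one entry near the origin, one near (5, 5)
```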

  3. A Layered Solution for Supercomputing Storage

    ScienceCinema

    Grider, Gary

    2018-06-13

    To solve the supercomputing challenge of memory keeping up with processing speed, a team at Los Alamos National Laboratory developed two innovative memory management and storage technologies. Burst buffers peel off data onto flash memory to support the checkpoint/restart paradigm of large simulations. MarFS adds a thin software layer enabling a new tier for campaign storage—based on inexpensive, failure-prone disk drives—between disk drives and tape archives.

  4. Computing ordinary least-squares parameter estimates for the National Descriptive Model of Mercury in Fish

    USGS Publications Warehouse

    Donato, David I.

    2013-01-01

    A specialized technique is used to compute weighted ordinary least-squares (OLS) estimates of the parameters of the National Descriptive Model of Mercury in Fish (NDMMF) in less time and with less computer memory than general methods. The characteristics of the NDMMF allow the two products X'X and X'y in the normal equations to be filled out in a second or two of computer time during a single pass through the N data observations. As a result, the matrix X does not have to be stored in computer memory and the computationally expensive matrix multiplications generally required to produce X'X and X'y do not have to be carried out. The normal equations may then be solved to determine the best-fit parameters in the OLS sense. The computational solution based on this specialized technique requires O(8p² + 16p) bytes of computer memory for p parameters on a machine with 8-byte double-precision numbers. This publication includes a reference implementation of this technique and a Gaussian-elimination solver in preliminary custom software.
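The single-pass technique described here can be sketched in a few lines: X'X and X'y are accumulated one observation at a time, so X itself is never stored, and only a p-by-p matrix (the 8p² bytes) plus p-vectors (the 16p term) stay resident. A hedged Python sketch with an invented streaming interface (the publication's reference implementation is not reproduced here):

```python
import numpy as np

def ols_single_pass(observations, p):
    """Weighted OLS via the normal equations, filled out in a single
    pass: X'WX and X'Wy are accumulated per observation, so the full
    design matrix X is never held in memory. Resident storage is one
    p-by-p double matrix plus p-vectors."""
    xtx = np.zeros((p, p))
    xty = np.zeros(p)
    for x, y, w in observations:        # stream: regressors, response, weight
        xtx += w * np.outer(x, x)       # rank-1 update of X'WX
        xty += w * x * y                # one term of X'Wy
    return np.linalg.solve(xtx, xty)    # solve the normal equations

# Usage: observations would normally stream from disk; small check here.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
beta_true = np.array([2.0, -1.0, 0.5])
y = X @ beta_true                       # noise-free, so OLS recovers beta
beta = ols_single_pass(((X[i], y[i], 1.0) for i in range(1000)), p=3)
print(np.allclose(beta, beta_true))     # True
```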

  5. [Sustainability analysis of an evaluation policy: the case of primary health care in Brazil].

    PubMed

    Felisberto, Eronildo; Freese, Eduardo; Bezerra, Luciana Caroline Albuquerque; Alves, Cinthia Kalyne de Almeida; Samico, Isabella

    2010-06-01

    This study analyzes the sustainability of Brazil's National Policy for the Evaluation of Primary Health Care, based on the identification and categorization of representative critical events in the institutionalization process. This was an evaluative study of two analytical units, the Federal Management of Primary Health Care and the State Health Secretariats, using multiple case studies with data collected through interviews and institutional documents via the critical incidents technique. Events that were temporally classified as specific to implementation, sustainability, and mixed were categorized analytically as pertaining to memory, adaptation, values, and rules. Federal management and one of the State Health Secretariats showed medium-level sustainability, while the other State Secretariat showed strong sustainability. The results indicate that the events were concurrent and suggest a weighting process, since the adaptation of activities, adequacy, and stabilization of resources displayed a strong influence on the others. Innovations and the development of technical capability are considered the most important results for sustainability.

  6. The importance of ecological memory for trophic rewilding as an ecosystem restoration approach.

    PubMed

    Schweiger, Andreas H; Boulangeat, Isabelle; Conradi, Timo; Davis, Matt; Svenning, Jens-Christian

    2018-06-06

    Increasing human pressure on strongly defaunated ecosystems is characteristic of the Anthropocene and calls for proactive restoration approaches that promote self-sustaining, functioning ecosystems. However, the suitability of novel restoration concepts such as trophic rewilding is still under discussion given fragmentary empirical data and limited theory development. Here, we develop a theoretical framework that integrates the concept of 'ecological memory' into trophic rewilding. The ecological memory of an ecosystem is defined as an ecosystem's accumulated abiotic and biotic material and information legacies from past dynamics. By summarising existing knowledge about the ecological effects of megafauna extinction and rewilding across a large range of spatial and temporal scales, we identify two key drivers of ecosystem responses to trophic rewilding: (i) impact potential of (re)introduced megafauna, and (ii) ecological memory characterising the focal ecosystem. The impact potential of (re)introduced megafauna species can be estimated from species properties such as lifetime per capita engineering capacity, population density, home range size and niche overlap with resident species. The importance of ecological memory characterising the focal ecosystem depends on (i) the absolute time since megafauna loss, (ii) the speed of abiotic and biotic turnover, (iii) the strength of species interactions characterising the focal ecosystem, and (iv) the compensatory capacity of surrounding source ecosystems. These properties related to the focal and surrounding ecosystems mediate material and information legacies (its ecological memory) and modulate the net ecosystem impact of (re)introduced megafauna species. We provide practical advice about how to quantify all these properties while highlighting the strong link between ecological memory and historically contingent ecosystem trajectories. 
With this newly established ecological memory-rewilding framework, we hope to guide future empirical studies that investigate the ecological effects of trophic rewilding and other ecosystem-restoration approaches. The proposed integrated conceptual framework should also assist managers and decision makers to anticipate the possible trajectories of ecosystem dynamics after restoration actions and to weigh plausible alternatives. This will help practitioners to develop adaptive management strategies for trophic rewilding that could facilitate sustainable management of functioning ecosystems in an increasingly human-dominated world. © 2018 Cambridge Philosophical Society.

  7. What are the differences between long-term, short-term, and working memory?

    PubMed Central

    Cowan, Nelson

    2008-01-01

    In the recent literature there has been considerable confusion about the three types of memory: long-term, short-term, and working memory. This chapter strives to reduce that confusion and makes up-to-date assessments of these types of memory. Long- and short-term memory could differ in two fundamental ways, with only short-term memory demonstrating (1) temporal decay and (2) chunk capacity limits. Both properties of short-term memory are still controversial but the current literature is rather encouraging regarding the existence of both decay and capacity limits. Working memory has been conceived and defined in three different, slightly discrepant ways: as short-term memory applied to cognitive tasks, as a multi-component system that holds and manipulates information in short-term memory, and as the use of attention to manage short-term memory. Regardless of the definition, there are some measures of memory in the short term that seem routine and do not correlate well with cognitive aptitudes and other measures (those usually identified with the term “working memory”) that seem more attention demanding and do correlate well with these aptitudes. The evidence is evaluated and placed within a theoretical framework depicted in Fig. 1. PMID:18394484

  8. An Adaptive Flow Solver for Air-Borne Vehicles Undergoing Time-Dependent Motions/Deformations

    NASA Technical Reports Server (NTRS)

    Singh, Jatinder; Taylor, Stephen

    1997-01-01

    This report describes a concurrent Euler flow solver for flows around complex 3-D bodies. The solver is based on a cell-centered finite volume methodology on 3-D unstructured tetrahedral grids. In this algorithm, spatial discretization for the inviscid convective term is accomplished using an upwind scheme. A localized, second-order-accurate reconstruction is performed for the flow variables. Evolution in time is accomplished using an explicit three-stage Runge-Kutta method which has second-order temporal accuracy. This is adapted for concurrent execution using another proven methodology based on concurrent graph abstraction. This solver operates on heterogeneous network architectures. These architectures may include a broad variety of UNIX workstations and PCs running Windows NT, symmetric multiprocessors, and distributed-memory multi-computers. The unstructured grid is generated using commercial grid generation tools. The grid is automatically partitioned using a concurrent algorithm based on heat diffusion. This results in memory requirements that are inversely proportional to the number of processors. The solver uses automatic granularity control and resource management techniques both to balance load and communication requirements, and to deal with differing memory constraints. These ideas are again based on heat diffusion. Results are subsequently combined for visualization and analysis using commercial CFD tools. Flow simulation results are demonstrated for a constant-section wing for subsonic, transonic, and supersonic cases. These results are compared with experimental data and numerical results of other researchers. Performance studies are under way for a variety of network topologies.
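The abstract attributes both partitioning and load balancing to heat diffusion. A minimal sketch of diffusion-style load balancing (the ring topology, diffusion coefficient, and sweep count are illustrative assumptions; the report's actual concurrent algorithm is not given here):

```python
def diffuse_load(load, neighbours, alpha=0.25, sweeps=200):
    """Discrete heat equation on the processor graph (Jacobi sweeps):
    each processor trades work with its neighbours in proportion to
    the load difference, l_i <- l_i + alpha * sum_j (l_j - l_i).
    Total load is conserved; the distribution flattens out."""
    load = list(load)
    for _ in range(sweeps):
        load = [load[i] + alpha * sum(load[j] - load[i] for j in nbrs)
                for i, nbrs in enumerate(neighbours)]
    return load

# Usage: 4 processors on a ring, all the work initially on processor 0.
ring = [[1, 3], [0, 2], [1, 3], [2, 0]]
balanced = diffuse_load([100.0, 0.0, 0.0, 0.0], ring)
print([round(x, 2) for x in balanced])  # -> [25.0, 25.0, 25.0, 25.0]
```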

  9. Preclinical Magnetic Resonance Imaging and Spectroscopy Studies of Memory, Aging, and Cognitive Decline

    PubMed Central

    Febo, Marcelo; Foster, Thomas C.

    2016-01-01

    Neuroimaging provides for non-invasive evaluation of brain structure and activity and has been employed to suggest possible mechanisms for cognitive aging in humans. However, these imaging procedures have limits in terms of defining cellular and molecular mechanisms. In contrast, investigations of cognitive aging in animal models have mostly utilized techniques that have offered insight on synaptic, cellular, genetic, and epigenetic mechanisms affecting memory. Studies employing magnetic resonance imaging and spectroscopy (MRI and MRS, respectively) in animal models have emerged as an integrative set of techniques bridging localized cellular/molecular phenomenon and broader in vivo neural network alterations. MRI methods are remarkably suited to longitudinal tracking of cognitive function over extended periods permitting examination of the trajectory of structural or activity related changes. Combined with molecular and electrophysiological tools to selectively drive activity within specific brain regions, recent studies have begun to unlock the meaning of fMRI signals in terms of the role of neural plasticity and types of neural activity that generate the signals. The techniques provide a unique opportunity to causally determine how memory-relevant synaptic activity is processed and how memories may be distributed or reconsolidated over time. The present review summarizes research employing animal MRI and MRS in the study of brain function, structure, and biochemistry, with a particular focus on age-related cognitive decline. PMID:27468264

  10. Solution-processed flexible NiO resistive random access memory device

    NASA Astrophysics Data System (ADS)

    Kim, Soo-Jung; Lee, Heon; Hong, Sung-Hoon

    2018-04-01

    Non-volatile memories (NVMs) using nanocrystals (NCs) as active materials can be applied to soft electronic devices requiring a low-temperature process because NCs do not require a heat-treatment step for crystallization. In addition, memory devices can be implemented simply with a patterning technique based on a solution process. In this study, a flexible NiO ReRAM device was fabricated using a simple NC patterning method that controls the capillary force and dewetting of a NiO NC solution at low temperature. The switching behavior of the NiO NC based memory was clearly observed by conductive atomic force microscopy (c-AFM).

  11. Teaching Direct Practice Techniques for Work with Elders with Alzheimer's Disease: A Simulated Group Experience.

    ERIC Educational Resources Information Center

    Kane, Michael N.

    2003-01-01

    A role-play exercise about Alzheimer's disease was designed to teach group work with memory-impaired elders. Written comments from 26 social work students revealed four outcomes: demystifying practical knowledge, respect for diversity among memory-impaired individuals, increased awareness of elders' internal states, and awareness of the challenges…

  12. Students' Autobiographical Memory of Participation in Multiple Sport Education Seasons

    ERIC Educational Resources Information Center

    Sinelnikov, Oleg A.; Hastie, Peter A.

    2010-01-01

    This study examines the recollections of the Sport Education experiences of a cohort of students (15 boys and 19 girls) who had participated in seasons of basketball, soccer and badminton across grades six through eight (average age at data collection = 15.6 years). Using autobiographic memory theory techniques, the students completed surveys and…

  13. Argumentative Reasoning in Online Discussion.

    ERIC Educational Resources Information Center

    Steinkuehler, Constance A.; Derry, Sharon J.; Levin, Joel R.; Kim, Jong-Baeg

    This study compared the effects of three forms of online instruction on memory, belief change, and argumentation skill. Reading of a pro/con text was followed by: (1) online discussion in pairs compared to reading of the same text followed by two forms of individualized study techniques derived from the cognitive memory literature; (2)…

  14. An Exploratory Study of Listening Practice Relative to Memory Testing and Lecture in Business Administration Courses

    ERIC Educational Resources Information Center

    Peterson, Robin T.

    2007-01-01

    This study investigates the combined impact of a memory test and subsequent listening practice in enhancing student listening abilities in collegiate business administration courses. The article reviews relevant literature and describes an exploratory study that was undertaken to compare the effectiveness of this technique with traditional…

  15. Measures of Working Memory, Sequence Learning, and Speech Recognition in the Elderly.

    ERIC Educational Resources Information Center

    Humes, Larry E.; Floyd, Shari S.

    2005-01-01

    This study describes the measurement of 2 cognitive functions, working-memory capacity and sequence learning, in 2 groups of listeners: young adults with normal hearing and elderly adults with impaired hearing. The measurement of these 2 cognitive abilities with a unique, nonverbal technique capable of auditory, visual, and auditory-visual…

  16. Load Modulation of BOLD Response and Connectivity Predicts Working Memory Performance in Younger and Older Adults

    ERIC Educational Resources Information Center

    Nagel, Irene E.; Preuschhof, Claudia; Li, Shu-Chen; Nyberg, Lars; Backman, Lars; Lindenberger, Ulman; Heekeren, Hauke R.

    2011-01-01

    Individual differences in working memory (WM) performance have rarely been related to individual differences in the functional responsivity of the WM brain network. By neglecting person-to-person variation, comparisons of network activity between younger and older adults using functional imaging techniques often confound differences in activity…

  17. "Digit Anatomy": A New Technique for Learning Anatomy Using Motor Memory

    ERIC Educational Resources Information Center

    Oh, Chang-Seok; Won, Hyung-Sun; Kim, Kyong-Jee; Jang, Dong-Su

    2011-01-01

    Gestural motions of the hands and fingers are powerful tools for expressing meanings and concepts, and the nervous system has the capacity to retain multiple long-term motor memories, especially including movements of the hands. We developed many sets of successive movements of both hands, referred to as "digit anatomy," and made…

  18. Computer-Presented Organizational/Memory Aids as Instruction for Solving Pico-Fomi Problems.

    ERIC Educational Resources Information Center

    Steinberg, Esther R.; And Others

    1985-01-01

    Describes investigation of effectiveness of computer-presented organizational/memory aids (matrix and verbal charts controlled by computer or learner) as instructional technique for solving Pico-Fomi problems, and the acquisition of deductive inference rules when such aids are present. Results indicate chart use control should be adapted to…

  19. Proactive Control Processes in Event-Based Prospective Memory: Evidence from Intraindividual Variability and Ex-Gaussian Analyses

    ERIC Educational Resources Information Center

    Ball, B. Hunter; Brewer, Gene A.

    2018-01-01

    The present study implemented an individual differences approach in conjunction with response time (RT) variability and distribution modeling techniques to better characterize the cognitive control dynamics underlying ongoing task cost (i.e., slowing) and cue detection in event-based prospective memory (PM). Three experiments assessed the relation…

  20. Using the Change Manager Model for the Hippocampal System to Predict Connectivity and Neurophysiological Parameters in the Perirhinal Cortex

    PubMed Central

    Coward, L. Andrew; Gedeon, Tamas D.

    2016-01-01

    Theoretical arguments demonstrate that practical considerations, including the needs to limit physiological resources and to learn without interference with prior learning, severely constrain the anatomical architecture of the brain. These arguments identify the hippocampal system as the change manager for the cortex, with the role of selecting the most appropriate locations for cortical receptive field changes at each point in time and driving those changes. This role results in the hippocampal system recording the identities of groups of cortical receptive fields that changed at the same time. These types of records can also be used to reactivate the receptive fields active during individual unique past events, providing mechanisms for episodic memory retrieval. Our theoretical arguments identify the perirhinal cortex as one important focal point both for driving changes and for recording and retrieving episodic memories. The retrieval of episodic memories must not drive unnecessary receptive field changes, and this consideration places strong constraints on neuron properties and connectivity within and between the perirhinal cortex and regular cortex. Hence the model predicts a number of such properties and connectivity. Experimental test of these falsifiable predictions would clarify how change is managed in the cortex and how episodic memories are retrieved. PMID:26819594

  1. A mega-analysis of memory reports from eight peer-reviewed false memory implantation studies.

    PubMed

    Scoboria, Alan; Wade, Kimberley A; Lindsay, D Stephen; Azad, Tanjeem; Strange, Deryn; Ost, James; Hyman, Ira E

    2017-02-01

    Understanding that suggestive practices can promote false beliefs and false memories for childhood events is important in many settings (e.g., psychotherapeutic, medical, and legal). The generalisability of findings from memory implantation studies has been questioned due to variability in estimates across studies. Such variability is partly due to false memories having been operationalised differently across studies and to differences in memory induction techniques. We explored ways of defining false memory based on memory science and developed a reliable coding system that we applied to reports from eight published implantation studies (N = 423). Independent raters coded transcripts using seven criteria: accepting the suggestion, elaboration beyond the suggestion, imagery, coherence, emotion, memory statements, and not rejecting the suggestion. Using this scheme, 30.4% of cases were classified as false memories and another 23% were classified as having accepted the event to some degree. When the suggestion included self-relevant information, an imagination procedure, and was not accompanied by a photo depicting the event, the memory formation rate was 46.1%. Our research demonstrates a useful procedure for systematically combining data that are not amenable to meta-analysis, and provides the most valid estimate of false memory formation and associated moderating factors within the implantation literature to date.

  2. Insensitivity of visual short-term memory to irrelevant visual information.

    PubMed

    Andrade, Jackie; Kemps, Eva; Werniers, Yves; May, Jon; Szmalec, Arnaud

    2002-07-01

    Several authors have hypothesized that visuo-spatial working memory is functionally analogous to verbal working memory. Irrelevant background speech impairs verbal short-term memory. We investigated whether irrelevant visual information has an analogous effect on visual short-term memory, using a dynamic visual noise (DVN) technique known to disrupt visual imagery (Quinn & McConnell, 1996b). Experiment 1 replicated the effect of DVN on pegword imagery. Experiments 2 and 3 showed no effect of DVN on recall of static matrix patterns, despite a significant effect of a concurrent spatial tapping task. Experiment 4 showed no effect of DVN on encoding or maintenance of arrays of matrix patterns, despite testing memory by a recognition procedure to encourage visual rather than spatial processing. Serial position curves showed a one-item recency effect typical of visual short-term memory. Experiment 5 showed no effect of DVN on short-term recognition of Chinese characters, despite effects of visual similarity and a concurrent colour memory task that confirmed visual processing of the characters. We conclude that irrelevant visual noise does not impair visual short-term memory. Visual working memory may not be functionally analogous to verbal working memory, and different cognitive processes may underlie visual short-term memory and visual imagery.

  3. Fast and memory efficient text image compression with JBIG2.

    PubMed

    Ye, Yan; Cosman, Pamela

    2003-01-01

    In this paper, we investigate ways to reduce encoding time, memory consumption and substitution errors for text image compression with JBIG2. We first look at page striping where the encoder splits the input image into horizontal stripes and processes one stripe at a time. We propose dynamic dictionary updating procedures for page striping to reduce the bit rate penalty it incurs. Experiments show that splitting the image into two stripes can save 30% of encoding time and 40% of physical memory with a small coding loss of about 1.5%. Using more stripes brings further savings in time and memory but the return diminishes. We also propose an adaptive way to update the dictionary only when it has become out-of-date. The adaptive updating scheme can resolve the time versus bit rate tradeoff and the memory versus bit rate tradeoff well simultaneously. We then propose three speedup techniques for pattern matching, the most time-consuming encoding activity in JBIG2. When combined together, these speedup techniques can save up to 75% of the total encoding time with at most 1.7% of bit rate penalty. Finally, we look at improving reconstructed image quality for lossy compression. We propose enhanced prescreening and feature monitored shape unifying to significantly reduce substitution errors in the reconstructed images.
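The two encoder-side ideas, page striping and adaptive dictionary updating, can be illustrated with a deliberately toy sketch: process the page one horizontal stripe at a time (so only a stripe's worth of pixels is resident) and refresh the symbol dictionary only when its hit rate suggests it has become out of date. Real JBIG2 symbols are connected components rather than whole rows, and the staleness threshold is an invented knob:

```python
def encode_striped(rows, stripe_height, stale_threshold=0.5):
    """Process the page stripe by stripe; 'symbols' here are just
    distinct row patterns. The dictionary is refreshed only when the
    fraction of symbols it already covers falls below the threshold."""
    dictionary = set()
    stats = []
    for top in range(0, len(rows), stripe_height):
        stripe = rows[top:top + stripe_height]
        symbols = set(stripe)
        hit_rate = len(symbols & dictionary) / len(symbols) if symbols else 1.0
        if hit_rate < stale_threshold:   # dictionary out of date:
            dictionary |= symbols        # pay the update cost once
        stats.append((top, round(hit_rate, 2), len(dictionary)))
    return stats

page = ["aa", "aa", "bb", "bb", "aa", "cc", "cc", "aa"]
stats = encode_striped(page, stripe_height=2)
print(stats)  # -> [(0, 0.0, 1), (2, 0.0, 2), (4, 0.5, 2), (6, 0.5, 2)]
```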

  4. Examining the Efficacy of the Modified Story Memory Technique (mSMT) in Persons With TBI Using Functional Magnetic Resonance Imaging (fMRI): The TBI-MEM Trial.

    PubMed

    Chiaravalloti, Nancy D; Dobryakova, Ekaterina; Wylie, Glenn R; DeLuca, John

    2015-01-01

    New learning and memory deficits are common following traumatic brain injury (TBI). Yet few studies have examined the efficacy of memory retraining in TBI through the most methodologically rigorous randomized clinical trial. Our previous research has demonstrated that the modified Story Memory Technique (mSMT) significantly improves new learning and memory in multiple sclerosis. The present double-blind, placebo-controlled, randomized clinical trial examined changes in cerebral activation on functional magnetic resonance imaging following mSMT treatment in persons with TBI. Eighteen individuals with TBI were randomly assigned to treatment (n = 9) or placebo (n = 9) groups. Baseline and follow-up functional magnetic resonance imaging was collected during a list-learning task. Significant differences in cerebral activation from before to after treatment were noted in regions belonging to the default mode network and executive control network in the treatment group only. Results are interpreted in light of these networks. Activation differences between the groups likely reflect increased use of strategies taught during treatment. This study demonstrates a significant change in cerebral activation resulting from the mSMT in a TBI sample. Findings are consistent with previous work in multiple sclerosis. Behavioral interventions can show significant changes in the brain, validating clinical utility.

  5. SUMC fault tolerant computer system

    NASA Technical Reports Server (NTRS)

    1980-01-01

    The results of the trade studies are presented. These trades cover: establishing the basic configuration, establishing the CPU/memory configuration, establishing an approach to crosstrapping interfaces, defining the requirements of the redundancy management unit (RMU), establishing a spare plane switching strategy for the fault-tolerant memory (FTM), and identifying the most cost effective way of extending the memory addressing capability beyond the 64 K-bytes (K=1024) of SUMC-II B. The results of the design are compiled in the Contract End Item (CEI) Specification for the NASA Standard Spacecraft Computer II (NSSC-II), IBM 7934507. The implementation of the FTM and the memory address expansion is also described.

  6. EqualWrites: Reducing Intra-set Write Variations for Enhancing Lifetime of Non-volatile Caches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mittal, Sparsh; Vetter, Jeffrey S.

    Driven by the trends of increasing core-count and the bandwidth-wall problem, the size of last level caches (LLCs) has greatly increased; hence, researchers have explored non-volatile memories (NVMs), which provide high density and consume low leakage power. Since NVMs have low write-endurance and the existing cache management policies are write variation-unaware, effective wear-leveling techniques are required for achieving reasonable cache lifetimes using NVMs. We present EqualWrites, a technique for mitigating intra-set write variation. Our technique works by recording the number of writes on a block and changing the cache-block location of a hot data item to redirect future writes to a cold block to achieve wear-leveling. Simulation experiments have been performed using an x86-64 simulator and benchmarks from SPEC06 and the HPC (high-performance computing) field. The results show that for single, dual and quad-core system configurations, EqualWrites improves cache lifetime by 6.31X, 8.74X and 10.54X, respectively. In addition, its implementation overhead is very small and it provides larger improvement in lifetime than three other intra-set wear-leveling techniques and a cache replacement policy.

  7. EqualWrites: Reducing Intra-set Write Variations for Enhancing Lifetime of Non-volatile Caches

    DOE PAGES

    Mittal, Sparsh; Vetter, Jeffrey S.

    2015-01-29

    Driven by the trends of increasing core-count and the bandwidth-wall problem, the size of last level caches (LLCs) has greatly increased; hence, researchers have explored non-volatile memories (NVMs), which provide high density and consume low leakage power. Since NVMs have low write-endurance and the existing cache management policies are write variation-unaware, effective wear-leveling techniques are required for achieving reasonable cache lifetimes using NVMs. We present EqualWrites, a technique for mitigating intra-set write variation. Our technique works by recording the number of writes on a block and changing the cache-block location of a hot data item to redirect future writes to a cold block to achieve wear-leveling. Simulation experiments have been performed using an x86-64 simulator and benchmarks from SPEC06 and the HPC (high-performance computing) field. The results show that for single, dual and quad-core system configurations, EqualWrites improves cache lifetime by 6.31X, 8.74X and 10.54X, respectively. In addition, its implementation overhead is very small and it provides larger improvement in lifetime than three other intra-set wear-leveling techniques and a cache replacement policy.
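The EqualWrites idea as described, counting writes per block and relocating a hot data item to a cold block, can be sketched as follows (the 4-way set, the write-gap threshold, and the toy allocation policy are illustrative assumptions, not parameters from the paper):

```python
class WearLevelledSet:
    """One cache set with intra-set wear-leveling in the spirit of
    EqualWrites: per-way write counters, and the hot data item swapped
    to the least-written way once the write gap crosses a threshold."""

    def __init__(self, ways=4, gap_threshold=8):
        self.tags = [None] * ways      # data item held in each way
        self.writes = [0] * ways       # lifetime writes absorbed per way
        self.gap_threshold = gap_threshold

    def write(self, tag):
        if tag in self.tags:
            way = self.tags.index(tag)
        else:                          # toy allocation: assumes a free way
            way = self.tags.index(None)
            self.tags[way] = tag
        self.writes[way] += 1
        self._level(way)

    def _level(self, hot_way):
        cold_way = min(range(len(self.writes)), key=self.writes.__getitem__)
        if self.writes[hot_way] - self.writes[cold_way] >= self.gap_threshold:
            # redirect future writes of the hot item to the cold way
            self.tags[hot_way], self.tags[cold_way] = (
                self.tags[cold_way], self.tags[hot_way])

# Usage: one write-hot item, then a few cold ones.
s = WearLevelledSet()
for _ in range(40):
    s.write("hot")
for tag in ("a", "b", "c"):
    s.write(tag)
print(max(s.writes) - min(s.writes))   # stays near the threshold, not ~40
```

Without the `_level` step, all 40 hot writes would wear a single way; with it, the imbalance across ways stays bounded by roughly the threshold.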

  8. 77 FR 29875 - Establishment of Class E Airspace; Houston, MO

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-21

    ... Instrument Approach Procedures at Houston Memorial Airport. The FAA is taking this action to enhance the safety and management of Instrument Flight Rule (IFR) operations at the airport. DATES: Effective date... accommodate new standard instrument approach procedures at Houston Memorial Airport, Houston, MO. This action...

  9. Mad-X a worthy successor for MAD8?

    NASA Astrophysics Data System (ADS)

    Schmidt, F.

    2006-03-01

    MAD-X is the successor at CERN to MAD8, a program for accelerator design and simulation with a long history. We had to give up on MAD8 since the code had evolved in such a way that maintenance and upgrades had become increasingly difficult; in particular, the memory management with the Zebra banks seemed outdated. MAD-X was first released in June 2002. It offers most of the MAD8 functionality, with some additions, corrections, and extensions. The most important of these extensions is the interface to PTC, the Polymorphic Tracking Code by E. Forest. The most relevant new features of MAD-X are: languages (C, Fortran77, and Fortran90); dynamic memory allocation in the core program written in C; a strictly modular organization; a modified and extended input language; symplectic and arbitrarily exact description of all elements via PTC; and Taylor maps and Normal Form techniques using PTC. It is also important to note that we have adopted a new style for program development and maintenance that relies heavily on active maintenance of modules by the users themselves. Proposals for collaboration, such as those with KEK (Japan) and GSI (Germany), are therefore very welcome.

  10. Frequency set on systems

    NASA Astrophysics Data System (ADS)

    Wilby, W. A.; Brett, A. R. H.

    Frequency set-on techniques used in ECM applications include repeater jammers, frequency memory loops (RF and optical), coherent digital RF memories, and closed-loop VCO set-on systems. Closed-loop frequency set-on systems using analog phase and frequency locking are considered to have a number of cost and performance advantages. Their performance is discussed in terms of frequency accuracy, bandwidth, locking time, stability, and simultaneous signals. Some experimental results are presented which show typical locking performance. Future ECM systems might require a response to very short pulses. Acousto-optic and fiber-optic pulse stretching techniques can be used to meet such requirements.

  11. Modified stretched exponential model of computer system resources management limitations - The case of cache memory

    NASA Astrophysics Data System (ADS)

    Strzałka, Dominik; Dymora, Paweł; Mazurek, Mirosław

    2018-02-01

    In this paper we present some preliminary results in the field of computer systems management in relation to Tsallis thermostatistics and the ubiquitous problem of limited hardware resources. In the case of systems with non-deterministic behaviour, management of their resources is a key point that guarantees their acceptable performance and proper working. This is a very broad problem that poses many challenges in areas such as finance, transport, water and food supply, and health. We focus on computer systems, with attention paid to cache memory, and propose an analytical model that connects the non-extensive entropy formalism, long-range dependencies, management of system resources, and queuing theory. The analytical results obtained are related to a practical experiment, showing interesting and valuable results.
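
    As a rough illustration of the family of models the paper builds on, a plain stretched exponential exp(-(t/τ)^β) can be sketched in a few lines of Python; β < 1 produces the heavy tails associated with long-range dependence. The function name and parameter values below are purely illustrative, not the authors' actual modified model:

```python
import math

def stretched_exponential(t, tau, beta):
    """Stretched exponential decay exp(-(t/tau)**beta); beta < 1 gives
    the slow, heavy-tailed decay linked with long-range dependence,
    while beta = 1 recovers the ordinary exponential."""
    return math.exp(-((t / tau) ** beta))

# Compare ordinary exponential (beta = 1) with a stretched tail (beta = 0.5):
# at t = 4*tau the stretched curve has decayed far less.
print(stretched_exponential(4.0, 1.0, 1.0))  # fast decay
print(stretched_exponential(4.0, 1.0, 0.5))  # slower, heavier tail
```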

  12. Design of a QoS-controlled ATM-based communications system in chorus

    NASA Astrophysics Data System (ADS)

    Coulson, Geoff; Campbell, Andrew; Robin, Philippe; Blair, Gordon; Papathomas, Michael; Shepherd, Doug

    1995-05-01

    We describe the design of an application platform able to run distributed real-time and multimedia applications alongside conventional UNIX programs. The platform is embedded in a microkernel/PC environment and supported by an ATM-based, QoS-driven communications stack. In particular, we focus on resource-management aspects of the design and deal with CPU scheduling, network resource-management and memory-management issues. An architecture is presented that guarantees QoS levels of both communications and processing with varying degrees of commitment as specified by user-level QoS parameters. The architecture uses admission tests to determine whether or not new activities can be accepted and includes modules to translate user-level QoS parameters into representations usable by the scheduling, network, and memory-management subsystems.
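
    The admission tests mentioned above can be caricatured as a simple capacity check: a new activity is accepted only if the resources it reserves, added to those of already-admitted activities, stay within the budget. The single-resource model and all names below are illustrative assumptions, not the platform's actual interface:

```python
def admit(new_load: float, current_loads: list, capacity: float) -> bool:
    """Admission test: accept a new activity only if the summed
    resource demand of admitted activities plus the new request
    stays within the available capacity."""
    return sum(current_loads) + new_load <= capacity

loads = [0.3, 0.25]            # CPU fractions reserved by admitted activities
print(admit(0.2, loads, 0.9))  # True: total 0.75 fits within 0.9
print(admit(0.5, loads, 0.9))  # False: total 1.05 exceeds 0.9
```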

  13. Greater Huachuca Mountains Fire Management Group

    Treesearch

    Brooke S. Gebow; Carol Lambert

    2005-01-01

    The Greater Huachuca Mountains Fire Management Group is developing a fire management plan for 500,000 acres in southeast Arizona. Partner land managers include Arizona State Parks, Arizona State Lands, Audubon Research Ranch, Coronado National Forest, Coronado National Memorial, Fort Huachuca, The Nature Conservancy, San Pedro Riparian National Conservation Area, and...

  14. Computerized scoring algorithms for the Autobiographical Memory Test.

    PubMed

    Takano, Keisuke; Gutenbrunner, Charlotte; Martens, Kris; Salmon, Karen; Raes, Filip

    2018-02-01

    Reduced specificity of autobiographical memories is a hallmark of depressive cognition. Autobiographical memory (AM) specificity is typically measured by the Autobiographical Memory Test (AMT), in which respondents are asked to describe personal memories in response to emotional cue words. Due to this free descriptive responding format, the AMT relies on experts' hand scoring for subsequent statistical analyses. This manual coding potentially impedes research activities in big data analytics such as large epidemiological studies. Here, we propose computerized algorithms to automatically score AM specificity for the Dutch (adult participants) and English (youth participants) versions of the AMT by using natural language processing and machine learning techniques. The algorithms showed reliable performance in discriminating specific and nonspecific (e.g., overgeneralized) autobiographical memories in independent testing data sets (area under the receiver operating characteristic curve > .90). Furthermore, outcome values of the algorithms (i.e., decision values of support vector machines) showed a gradient across similar (e.g., specific and extended memories) and different (e.g., specific memory and semantic associates) categories of AMT responses, suggesting that, for both adults and youth, the algorithms capture well the extent to which a memory has features of specific memories. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
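
    A toy stand-in for such a classifier illustrates how a scalar decision value can grade responses from specific to nonspecific. Here hand-picked keyword weights replace the trained support vector machine; the word list and weights are entirely hypothetical and not the authors' model:

```python
# Hypothetical keyword weights: positive tokens cue specific memories
# (single occasions, places), negative tokens cue overgeneral memories.
WEIGHTS = {"yesterday": 1.0, "when": 0.5, "at": 0.3,
           "always": -1.0, "every": -0.8, "usually": -0.9}

def decision_value(response: str) -> float:
    """Sum of keyword weights over the tokens of a response; the sign
    plays the role of an SVM decision value separating specific from
    nonspecific autobiographical memories."""
    return sum(WEIGHTS.get(tok, 0.0) for tok in response.lower().split())

def is_specific(response: str) -> bool:
    return decision_value(response) > 0

print(is_specific("the day when I graduated at city hall yesterday"))  # True
print(is_specific("I always argue with my brother"))                   # False
```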

  15. Fast tracking the design of theory-based KT interventions through a consensus process.

    PubMed

    Bussières, André E; Al Zoubi, Fadi; Quon, Jeffrey A; Ahmed, Sara; Thomas, Aliki; Stuber, Kent; Sajko, Sandy; French, Simon

    2015-02-11

    Despite available evidence for optimal management of spinal pain, poor adherence to guidelines and wide variations in healthcare services persist. One of the objectives of the Canadian Chiropractic Guideline Initiative is to develop and evaluate targeted theory- and evidence-informed interventions to improve the management of non-specific neck pain by chiropractors. In order to systematically develop a knowledge translation (KT) intervention underpinned by the Theoretical Domains Framework (TDF), we explored the factors perceived to influence the use of multimodal care to manage non-specific neck pain, and mapped behaviour change techniques to key theoretical domains. Individual telephone interviews exploring beliefs about managing neck pain were conducted with a purposive sample of 13 chiropractors. The interview guide was based upon the TDF. Interviews were digitally recorded, transcribed verbatim and analysed by two independent assessors using thematic content analysis. A 15-member expert panel formally met to design a KT intervention. Nine TDF domains were identified as likely relevant. Key beliefs (and relevant domains of the TDF) included the following: influence of formal training, colleagues and patients on clinicians (Social Influences); availability of educational material (Environmental Context and Resources); and better clinical outcomes reinforcing the use of multimodal care (Reinforcement). Facilitating factors considered important included better communication (Skills); audits of patients' treatment-related outcomes (Behavioural Regulation); awareness and agreement with guidelines (Knowledge); and tailoring of multimodal care (Memory, Attention and Decision Processes). Clinicians conveyed conflicting beliefs about perceived threats to professional autonomy (Social/Professional Role and Identity) and speed of recovery from either applying or ignoring the practice recommendations (Beliefs about Consequences). 
The expert panel mapped behaviour change techniques to key theoretical domains and identified relevant KT strategies and modes of delivery to increase the use of multimodal care among chiropractors. A multifaceted KT educational intervention targeting chiropractors' management of neck pain was developed. The KT intervention consisted of an online education webinar series, clinical vignettes and a video underpinned by the Brief Action Planning model. The intervention was designed to reflect key theoretical domains, behaviour change techniques and intervention components. The effectiveness of the proposed intervention remains to be tested.

  16. Resistive switching characteristics of polymer non-volatile memory devices in a scalable via-hole structure.

    PubMed

    Kim, Tae-Wook; Choi, Hyejung; Oh, Seung-Hwan; Jo, Minseok; Wang, Gunuk; Cho, Byungjin; Kim, Dong-Yu; Hwang, Hyunsang; Lee, Takhee

    2009-01-14

    The resistive switching characteristics of a polyfluorene-derivative polymer material in a sub-micron-scale via-hole device structure were investigated. The scalable via-hole sub-microstructure was fabricated using an e-beam lithographic technique. The polymer non-volatile memory devices varied in size from 40 x 40 μm² to 200 x 200 nm². From the scaling of junction size, the memory mechanism can be attributed to space-charge-limited current with filamentary conduction. Sub-micron-scale polymer memory devices showed excellent resistive switching behaviours, such as a large ON/OFF ratio (I_ON/I_OFF ≈ 10⁴), excellent device-to-device switching uniformity, good sweep endurance, and good retention times (more than 10,000 s). The successful operation of sub-micron-scale memory devices based on our polyfluorene-derivative polymer shows promise for fabricating high-density polymer memory devices.

  17. Studies of short and long memory in mining-induced seismic processes

    NASA Astrophysics Data System (ADS)

    Węglarczyk, Stanisław; Lasocki, Stanisław

    2009-09-01

    Memory of a stochastic process implies its predictability, understood as the possibility to gain information about the future above the random-guess level. Here we search for memory in the mining-induced seismic (MIS) process, that is, a process induced or triggered by mining operations. Long memory is investigated by means of the Hurst rescaled range analysis, and the autocorrelation function estimate is used to test for short memory. Both methods are complemented with result uncertainty analyses based on different resampling techniques. The analyzed data comprise event series from the Rudna copper mine in Poland. The studies show that the interevent time and interevent distance processes have both long and short memory. MIS occurrences and locations are internally interrelated. Internal relations among the sizes of MIS events are apparently weaker than those of the other two studied parameterizations and are limited to long-term interactions.
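
    The core of the Hurst rescaled range analysis is the R/S statistic of a window: the range of the cumulative mean-adjusted sum divided by the standard deviation. A minimal single-window sketch (assuming a non-constant series; the authors' full procedure averages over many window sizes and adds resampling-based uncertainty analysis) might look like:

```python
import math

def rescaled_range(series):
    """R/S statistic of one window: range of the cumulative
    mean-adjusted partial sums divided by the standard deviation.
    Assumes the series is not constant (s > 0)."""
    n = len(series)
    mean = sum(series) / n
    cum, cums = 0.0, []
    for x in series:
        cum += x - mean
        cums.append(cum)
    r = max(cums) - min(cums)
    s = math.sqrt(sum((x - mean) ** 2 for x in series) / n)
    return r / s

# For a long-memory process E[R/S] grows like n**H with H > 0.5; the
# Hurst exponent is estimated from the slope of log(R/S) vs log(n).
print(rescaled_range([1.0, 2.0, 1.0, 3.0, 2.0, 4.0]))
```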

  18. Radiation Tolerant Intelligent Memory Stack (RTIMS)

    NASA Technical Reports Server (NTRS)

    Ng, Tak-kwong; Herath, Jeffrey A.

    2006-01-01

    The Radiation Tolerant Intelligent Memory Stack (RTIMS), suitable for both geostationary and low earth orbit missions, has been developed. The memory module is fully functional and undergoing environmental and radiation characterization. A self-contained flight-like module is expected to be completed in 2006. RTIMS provides reconfigurable circuitry and 2 gigabits of error-corrected or 1 gigabit of triple-redundant digital memory in a small package. RTIMS utilizes circuit stacking of heterogeneous components and radiation shielding technologies. A reprogrammable field programmable gate array (FPGA), six synchronous dynamic random access memories, a linear regulator, and the radiation mitigation circuitry are stacked into a module of 42.7 mm x 42.7 mm x 13.0 mm. Triple module redundancy, current limiting, configuration scrubbing, and single event functional interrupt detection are employed to mitigate radiation effects. The mitigation techniques significantly simplify system design. RTIMS is well suited for deployment in real-time data processing, reconfigurable computing, and memory-intensive applications.
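
    Triple module redundancy, one of the mitigation techniques listed above, can be illustrated by a bitwise majority voter over three stored copies of a word: each output bit is 1 if at least two of the three input bits are 1, so any single-copy upset is voted out. This is a generic sketch of the principle, not RTIMS's actual circuitry:

```python
def tmr_vote(a: int, b: int, c: int) -> int:
    """Bitwise majority vote over three redundant copies of a word:
    each output bit is 1 iff at least two of the input bits are 1."""
    return (a & b) | (a & c) | (b & c)

stored = 0b1011_0010
copies = [stored, stored ^ 0b0000_0100, stored]  # one copy hit by a bit upset
print(bin(tmr_vote(*copies)))  # the single-bit error is voted out
```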

  19. Large-Scale Fluorescence Calcium-Imaging Methods for Studies of Long-Term Memory in Behaving Mammals

    PubMed Central

    Jercog, Pablo; Rogerson, Thomas; Schnitzer, Mark J.

    2016-01-01

    During long-term memory formation, cellular and molecular processes reshape how individual neurons respond to specific patterns of synaptic input. It remains poorly understood how such changes impact information processing across networks of mammalian neurons. To observe how networks encode, store, and retrieve information, neuroscientists must track the dynamics of large ensembles of individual cells in behaving animals, over timescales commensurate with long-term memory. Fluorescence Ca2+-imaging techniques can monitor hundreds of neurons in behaving mice, opening exciting avenues for studies of learning and memory at the network level. Genetically encoded Ca2+ indicators allow neurons to be targeted by genetic type or connectivity. Chronic animal preparations permit repeated imaging of neural Ca2+ dynamics over multiple weeks. Together, these capabilities should enable unprecedented analyses of how ensemble neural codes evolve throughout memory processing and provide new insights into how memories are organized in the brain. PMID:27048190

  20. Interregional synaptic maps among engram cells underlie memory formation.

    PubMed

    Choi, Jun-Hyeok; Sim, Su-Eon; Kim, Ji-Il; Choi, Dong Il; Oh, Jihae; Ye, Sanghyun; Lee, Jaehyun; Kim, TaeHyun; Ko, Hyoung-Gon; Lim, Chae-Seok; Kaang, Bong-Kiun

    2018-04-27

    Memory resides in engram cells distributed across the brain. However, the site-specific substrate within these engram cells remains theoretical, even though it is generally accepted that synaptic plasticity encodes memories. We developed the dual-eGRASP (green fluorescent protein reconstitution across synaptic partners) technique to examine synapses between engram cells to identify the specific neuronal site for memory storage. We found an increased number and size of spines on CA1 engram cells receiving input from CA3 engram cells. In contextual fear conditioning, this enhanced connectivity between engram cells encoded memory strength. CA3 engram to CA1 engram projections strongly occluded long-term potentiation. These results indicate that enhanced structural and functional connectivity between engram cells across two directly connected brain regions forms the synaptic correlate for memory formation. Copyright © 2018 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.

  1. The Identification and Assessment of Late-life ADHD in Memory Clinics

    PubMed Central

    Fischer, Barbara L.; Gunter-Hunt, Gail; Steinhafel, Courtney Holm; Howell, Timothy

    2013-01-01

    INTRODUCTION Little data exist about attention deficit hyperactivity disorder (ADHD) in late life. While evaluating patients' memory problems, our Memory Clinic staff has periodically identified ADHD in previously undiagnosed adults. We conducted a survey to assess the extent to which other memory clinics view ADHD as a relevant clinical issue. METHOD We developed and sent a questionnaire to memory clinics in the United States to ascertain how ADHD was identified and addressed. The measurements for this study were the percentages of responding memory clinics using various means of assessing and managing late-life ADHD. RESULTS Approximately one-half of responding memory clinics reported seeing ADHD patients. Of these, one-half reported identifying previously diagnosed cases, and almost one-half reported diagnosing ADHD themselves. One-fifth of clinics reported screening regularly for ADHD, and few clinics described treatment methods. CONCLUSION Our results suggest that U.S. memory clinics may not adequately identify and address ADHD in late life. PMID:22173147

  2. Drosophila SLC22A transporter is a memory suppressor gene that influences cholinergic neurotransmission to the mushroom bodies

    PubMed Central

    Gai, Yunchao; Liu, Ze; Cervantes-Sandoval, Isaac; Davis, Ronald L.

    2016-01-01

    SUMMARY The mechanisms that constrain memory formation are of special interest because they provide insights into the brain’s memory management systems and potential avenues for correcting cognitive disorders. RNAi knockdown in the Drosophila mushroom body neurons (MBn) of a newly discovered memory suppressor gene, Solute Carrier DmSLC22A, a member of the organic cation transporter family, enhances olfactory memory expression, while overexpression inhibits it. The protein localizes to the dendrites of the MBn, surrounding the presynaptic terminals of cholinergic afferent fibers from projection neurons (Pn). Cell-based expression assays show that this plasma membrane protein transports cholinergic compounds with the highest affinity among several in vitro substrates. Feeding flies choline or inhibiting acetylcholinesterase in Pn enhances memory; an effect blocked by overexpression of the transporter in the MBn. The data argue that DmSLC22A is a memory suppressor protein that limits memory formation by helping to terminate cholinergic neurotransmission at the Pn:MBn synapse. PMID:27146270

  3. Theories of Memory and Aging: A Look at the Past and a Glimpse of the Future

    PubMed Central

    Festini, Sara B.

    2017-01-01

    The present article reviews theories of memory and aging over the past 50 years. Particularly notable is a progression from early single-mechanism perspectives to complex multifactorial models proposed to account for commonly observed age deficits in memory function. The seminal mechanistic theories of processing speed, limited resources, and inhibitory deficits are discussed and viewed as especially important theories for understanding age-related memory decline. Additionally, advances in multivariate techniques including structural equation modeling provided new tools that led to the development of more complex multifactorial theories than existed earlier. The important role of neuroimaging is considered, along with the current prevalence of intervention studies. We close with predictions about new directions that future research on memory and aging will take. PMID:27257229

  4. Rethinking Extinction

    PubMed Central

    Dunsmoor, Joseph E.; Niv, Yael; Daw, Nathaniel; Phelps, Elizabeth A.

    2015-01-01

    Extinction serves as the leading theoretical framework and experimental model to describe how learned behaviors diminish through absence of anticipated reinforcement. In the past decade, extinction has moved beyond the realm of associative learning theory and behavioral experimentation in animals and has become a topic of considerable interest in the neuroscience of learning, memory, and emotion. Here, we review research and theories of extinction, both as a learning process and as a behavioral technique, and consider whether traditional understandings warrant a re-examination. We discuss the neurobiology, cognitive factors, and major computational theories, and revisit the predominant view that extinction results in new learning that interferes with expression of the original memory. Additionally, we reconsider the limitations of extinction as a technique to prevent the relapse of maladaptive behavior, and discuss novel approaches, informed by contemporary theoretical advances, that augment traditional extinction methods to target and potentially alter maladaptive memories. PMID:26447572

  5. DIFMOS - A floating-gate electrically erasable nonvolatile semiconductor memory technology. [Dual Injector Floating-gate MOS

    NASA Technical Reports Server (NTRS)

    Gosney, W. M.

    1977-01-01

    Electrically alterable read-only memories (EAROM's) or reprogrammable read-only memories (RPROM's) can be fabricated using a single-level metal-gate p-channel MOS technology with all conventional processing steps. Given the acronym DIFMOS for dual-injector floating-gate MOS, this technology utilizes the floating-gate technique for nonvolatile storage of data. Avalanche injection of hot electrons through gate oxide from a special injector diode in each bit is used to charge the floating gates. A second injector structure included in each bit permits discharge of the floating gate by avalanche injection of holes through gate oxide. The overall design of the DIFMOS bit is dictated by the physical considerations required for each of the avalanche injector types. The end result is a circuit technology which can provide fully decoded bit-erasable EAROM-type circuits using conventional manufacturing techniques.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hou, Huilong; Hamilton, Reginald F., E-mail: rfhamilton@psu.edu; Horn, Mark W.

    NiTi shape memory alloy (SMA) thin films were fabricated using biased target ion beam deposition (BTIBD), which is a new technique for fabricating submicrometer-thick SMA thin films, and the capacity to exhibit shape memory behavior was investigated. The thermally induced shape memory effect (SME) was studied using the wafer curvature method to report the stress-temperature response. The films exhibited the SME in a temperature range above room temperature and a narrow thermal hysteresis with respect to previous reports. To confirm the underlying phase transformation, in situ x-ray diffraction was carried out in the corresponding phase transformation temperature range. The B2 to R-phase martensitic transformation occurs, and the R-phase transformation is stable with respect to the expected conversion to the B19′ martensite phase. The narrow hysteresis and stable R-phase are rationalized in terms of the unique properties of the BTIBD technique.

  7. Decoding of DBEC-TBED Reed-Solomon codes. [Double-Byte-Error-Correcting, Triple-Byte-Error-Detecting

    NASA Technical Reports Server (NTRS)

    Deng, Robert H.; Costello, Daniel J., Jr.

    1987-01-01

    A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. In LSI and VLSI technology, memories are often organized on a multiple-bit (or byte) per chip basis. For example, some 256K-bit DRAMs are organized as 32K x 8-bit bytes. Byte-oriented codes such as Reed-Solomon (RS) codes can provide efficient low-overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. The paper presents a special decoding technique for double-byte-error-correcting, triple-byte-error-detecting RS codes which is capable of high-speed operation. This technique is designed to find the error locations and the error values directly from the syndrome without having to use the iterative algorithm to find the error locator polynomial.
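
    The idea of reading an error's location and value directly from the syndromes can be sketched for the much simpler single-error case over GF(2⁴): with syndromes S0 = Σ c_i and S1 = Σ c_i·α^i, a single error of value v at position e gives S0 = v and S1 = v·α^e, so e = log(S1) - log(S0). The paper's DBEC-TBED code works over byte-wide fields with more syndromes, so this is only an illustration of the principle, not the paper's decoder:

```python
# GF(2^4) arithmetic with primitive polynomial x^4 + x + 1.
EXP, LOG = [0] * 30, [0] * 16
x = 1
for i in range(15):
    EXP[i] = EXP[i + 15] = x
    LOG[x] = i
    x <<= 1
    if x & 0x10:
        x ^= 0x13

def gf_mul(a, b):
    return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

def correct_single_error(received):
    """Fix one symbol error (word length <= 15) directly from the
    syndromes S0 = sum(c_i) and S1 = sum(c_i * alpha**i): the error
    value is S0 and its position is log(S1) - log(S0) mod 15."""
    s0 = s1 = 0
    for i, c in enumerate(received):
        s0 ^= c
        s1 ^= gf_mul(c, EXP[i % 15])
    if s0 == 0:
        return received            # no detectable error
    pos = (LOG[s1] - LOG[s0]) % 15
    fixed = list(received)
    fixed[pos] ^= s0               # add the error value back out
    return fixed

code = [0] * 8                     # the all-zero word is trivially a codeword
recv = list(code)
recv[3] ^= 5                       # inject a single symbol error
print(correct_single_error(recv) == code)  # True
```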

  8. Effects of aging and divided attention on memory for items and their contexts.

    PubMed

    Craik, Fergus I M; Luo, Lin; Sakuta, Yuiko

    2010-12-01

    It is commonly found that memory for context declines disproportionately with aging, arguably due to a general age-related deficit in associative memory processes. One possible mechanism for such deficits is an age-related reduction in available processing resources. In two experiments we compared the effects of aging to the effects of division of attention in younger adults on memory for items and context. Using a technique proposed by Craik (1989), linear functions relating memory performance for items and their contexts were derived for a Young Full Attention group, a Young Divided Attention group, and an Older Adult group. Results suggested that the Old group showed an additional deficit in associative memory that was not mimicked by divided attention. It is speculated that both divided attention and aging are associated with a loss of available processing resources that may reflect inefficient frontal lobe functioning, whereas the additional age-related decrement in associative memory may reflect inefficient processing in medial-temporal regions. (c) 2010 APA, all rights reserved.

  9. Biodegradable Shape Memory Polymers in Medicine.

    PubMed

    Peterson, Gregory I; Dobrynin, Andrey V; Becker, Matthew L

    2017-11-01

    Shape memory materials have emerged as an important class of materials in medicine due to their ability to change shape in response to a specific stimulus, enabling the simplification of medical procedures, use of minimally invasive techniques, and access to new treatment modalities. Shape memory polymers, in particular, are well suited for such applications given their excellent shape memory performance, tunable materials properties, minimal toxicity, and potential for biodegradation and resorption. This review provides an overview of biodegradable shape memory polymers that have been used in medical applications. The majority of biodegradable shape memory polymers are based on thermally responsive polyesters or polymers that contain hydrolyzable ester linkages. These materials have been targeted for use in applications pertaining to embolization, drug delivery, stents, tissue engineering, and wound closure. The development of biodegradable shape memory polymers with unique properties or responsiveness to novel stimuli has the potential to facilitate the optimization and development of new medical applications. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. 75 FR 55846 - Draft Re-Evaluation for Environmental Impact Statement: Sikorsky Memorial Airport, Stratford, CT

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-14

    ..., Environmental Program Manager, Federal Aviation Administration New England, 12 New England Executive Park... Memorial Airport in Stratford, Connecticut. The document will assist the FAA in determining the suitability... following locations: FAA New England Region, 16 New England Executive Park, Burlington, MA, 781-238-7613...

  11. Increased functional connectivity within memory networks following memory rehabilitation in multiple sclerosis.

    PubMed

    Leavitt, Victoria M; Wylie, Glenn R; Girgis, Peter A; DeLuca, John; Chiaravalloti, Nancy D

    2014-09-01

    Identifying effective behavioral treatments to improve memory in persons with learning and memory impairment is a primary goal for neurorehabilitation researchers. Memory deficits are the most common cognitive symptom in multiple sclerosis (MS), and hold negative professional and personal consequences for people who are often in the prime of their lives when diagnosed. A 10-session behavioral treatment, the modified Story Memory Technique (mSMT), was studied in a randomized, placebo-controlled clinical trial. Behavioral improvements and increased fMRI activation were shown after treatment. Here, connectivity within the neural networks underlying memory function was examined with resting-state functional connectivity (RSFC) in a subset of participants from the clinical trial. We hypothesized that the treatment would result in increased integrity of connections within two primary memory networks of the brain, the hippocampal memory network, and the default network (DN). Seeds were placed in left and right hippocampus, and the posterior cingulate cortex. Increased connectivity was found between left hippocampus and cortical regions specifically involved in memory for visual imagery, as well as among critical hubs of the DN. These results represent the first evidence for efficacy of a behavioral intervention to impact the integrity of neural networks subserving memory functions in persons with MS.

  12. Successful life outcome and management of real-world memory demands despite profound anterograde amnesia

    PubMed Central

    Duff, Melissa C.; Wszalek, Tracey; Tranel, Daniel; Cohen, Neal J.

    2010-01-01

    We describe the case of Angie, a 50 year-old woman with profound amnesia (General Memory Index = 49, Full Scale IQ = 126) following a closed head injury in 1985. This case is unique in comparison to other cases reported in the literature in that, despite the severity of her amnesia, she has developed remarkable real-world life abilities, shows impressive self awareness and insight into the impairment and sparing of various functional memory abilities, and exhibits ongoing maturation of her identity and sense of self following amnesia. The case provides insights into the interaction of different memory and cognitive systems in handling real-world memory demands, and has implications for rehabilitation and for successful life outcome after amnesia. PMID:18608659

  13. Fuzzy associative memories

    NASA Technical Reports Server (NTRS)

    Kosko, Bart

    1991-01-01

    Mappings between fuzzy cubes are discussed. This level of abstraction provides a surprising and fruitful alternative to the propositional and predicate-calculus reasoning techniques used in expert systems. It allows one to reason with sets instead of propositions. Discussed here are fuzzy and neural function estimators, neural vs. fuzzy representation of structured knowledge, fuzzy vector-matrix multiplication, and fuzzy associative memory (FAM) system architecture.
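
    Fuzzy vector-matrix multiplication, the max-min composition at the heart of FAM recall, is easy to sketch: b_j = max_i min(a_i, M[i][j]). Below, a single rule (A, B) is stored by correlation-minimum encoding, M[i][j] = min(A[i], B[j]); the membership values are made up for illustration:

```python
def fam_recall(a, M):
    """Max-min composition b_j = max_i min(a_i, M[i][j]):
    recall of a fuzzy associative memory from fit vector a."""
    cols = len(M[0])
    return [max(min(a[i], M[i][j]) for i in range(len(a)))
            for j in range(cols)]

# Correlation-minimum encoding of one rule (A, B): M[i][j] = min(A[i], B[j]).
A = [0.2, 0.8, 1.0]
B = [0.6, 1.0]
M = [[min(ai, bj) for bj in B] for ai in A]

# Presenting A recalls B exactly, since A reaches full membership (height 1).
print(fam_recall(A, M))  # [0.6, 1.0] == B
```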

  14. Traces of Memory for a Lost Childhood Language: The Savings Paradigm Expanded

    ERIC Educational Resources Information Center

    Isurin, Ludmila; Seidel, Christy

    2015-01-01

    The loss of a childhood language, especially in adoptees, has attracted scholars' attention in the past, but a search for any memory traces has yielded conflicting results. In a psycholinguistic tradition known as the savings paradigm, a learn-and-relearn technique is employed to examine whether the relearning of lexical items once known, often in…

  15. Extensive Lesions of Cholinergic Basal Forebrain Neurons Do Not Impair Spatial Working Memory

    ERIC Educational Resources Information Center

    Vuckovich, Joseph A.; Semel, Mara E.; Baxter, Mark G.

    2004-01-01

    A recent study suggests that lesions to all major areas of the cholinergic basal forebrain in the rat (medial septum, horizontal limb of the diagonal band of Broca, and nucleus basalis magnocellularis) impair a spatial working memory task. However, this experiment used a surgical technique that may have damaged cerebellar Purkinje cells. The…

  16. The Visual Memory-Based Memorization Techniques in Piano Education

    ERIC Educational Resources Information Center

    Yucetoker, Izzet

    2016-01-01

    Problem Statement: Johann Sebastian Bach is one of the leading composers of the baroque period. In addition to his huge contributions in the artistic dimension, he also served greatly in the field of education. This study has been done for determining the impact of visual memory-based memorization practices in the piano education on the visual…

  17. Organizational/Memory Tools: A Technique for Improving Problem Solving Skills.

    ERIC Educational Resources Information Center

    Steinberg, Esther R.; And Others

    1986-01-01

    This study was conducted to determine whether students would use a computer-presented organizational/memory tool as an aid in problem solving, and whether and how locus of control would affect tool use and problem-solving performance. Learners did use the tools, which were most effective in the learner control with feedback condition. (MBR)

  18. A bio-inspired memory model for structural health monitoring

    NASA Astrophysics Data System (ADS)

    Zheng, Wei; Zhu, Yong

    2009-04-01

    Long-term structural health monitoring (SHM) systems need intelligent management of the monitoring data. By analogy with the way the human brain processes memories, we present a bio-inspired memory model (BIMM) that does not require prior knowledge of the structure parameters. The model contains three time-domain areas: a sensory memory area, a short-term memory area and a long-term memory area. First, the initial parameters of the structural state are specified to establish safety criteria. Then the large amount of monitoring data that falls within the safety limits is filtered while the data outside the safety limits are captured instantly in the sensory memory area. Second, disturbance signals are distinguished from danger signals in the short-term memory area. Finally, the stable data of the structural balance state are preserved in the long-term memory area. A strategy for priority scheduling via fuzzy c-means for the proposed model is then introduced. An experiment on bridge tower deformation demonstrates that the proposed model can be applied for real-time acquisition, limited-space storage and intelligent mining of the monitoring data in a long-term SHM system.
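
    The three-area filtering idea can be caricatured in a few lines of Python: data inside the safety limits are discarded at the sensory stage, and one-off spikes are separated from recurring danger signals at the short-term stage. The thresholds and the persistence rule below are invented for illustration and stand in for the paper's fuzzy c-means priority scheduling:

```python
from collections import Counter

# Hypothetical safety band for a monitored quantity (e.g. a strain reading).
SAFE_LOW, SAFE_HIGH = -1.0, 1.0

def sensory_filter(samples):
    """Sensory memory area: keep only readings outside the safety limits."""
    return [s for s in samples if not (SAFE_LOW <= s <= SAFE_HIGH)]

def short_term(alarms, persistence=2):
    """Short-term memory area: keep alarms that recur, treating
    one-off spikes as disturbance signals rather than danger signals."""
    counts = Counter(round(a, 1) for a in alarms)
    return [a for a in alarms if counts[round(a, 1)] >= persistence]

readings = [0.1, 1.5, -0.3, 1.5, 0.2, 3.0]
alarms = sensory_filter(readings)  # [1.5, 1.5, 3.0]
print(short_term(alarms))          # [1.5, 1.5]: 3.0 was a one-off spike
```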

  19. Kingfisher: a system for remote sensing image database management

    NASA Astrophysics Data System (ADS)

    Bruzzo, Michele; Giordano, Ferdinando; Dellepiane, Silvana G.

    2003-04-01

    At present, retrieval methods for remote sensing image databases are mainly based on spatial-temporal information. The increasing number of images to be collected by the ground stations of earth observing systems emphasizes the need for database management with intelligent data retrieval capabilities. The purpose of the proposed method is to realize a new content-based retrieval system for remote sensing image databases, with an innovative search tool based on image similarity. This methodology is quite innovative for this application: many systems exist for photographic images, for example QBIC and IKONA, but they are not able to properly extract and describe remote sensing image content. The target database is an archive of images originated from an X-SAR sensor (spaceborne mission, 1994). The best content descriptors, mainly texture parameters, guarantee high retrieval performance and can be extracted without loss independently of image resolution. The latter property allows the DBMS (Database Management System) to process a low amount of information, as in the case of quick-look images, improving time performance and memory access without reducing retrieval accuracy. The matching technique has been designed to enable image management (database population and retrieval) independently of image dimensions (width and height). Local and global content descriptors are compared with the query image during the retrieval phase, and the results seem very encouraging.
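
    Once each image is summarized by a content-descriptor vector, similarity retrieval reduces to nearest-neighbour search over those vectors. A minimal sketch, with invented descriptor values and a plain Euclidean distance standing in for whatever matching measure the system actually uses:

```python
import math

def euclidean(u, v):
    """Euclidean distance between two descriptor vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def retrieve(query, database, k=2):
    """Rank database entries by distance between texture-descriptor
    vectors and return the k most similar; names and values are
    purely illustrative."""
    return sorted(database, key=lambda item: euclidean(query, item[1]))[:k]

db = [("scene_a", [0.9, 0.1, 0.4]),
      ("scene_b", [0.2, 0.8, 0.5]),
      ("scene_c", [0.85, 0.15, 0.35])]
print([name for name, _ in retrieve([0.9, 0.1, 0.4], db)])  # ['scene_a', 'scene_c']
```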

  20. Fabrication of cross-shaped Cu-nanowire resistive memory devices using a rapid, scalable, and designable inorganic-nanowire-digital-alignment technique (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Xu, Wentao; Lee, Yeongjun; Min, Sung-Yong; Park, Cheolmin; Lee, Tae-Woo

    2016-09-01

    Resistive random-access memory (RRAM) is a candidate next-generation nonvolatile memory due to its high access speed, high density and ease of fabrication. In particular, cross-point access allows crossbar arrays that lead to high-density cells in a two-dimensional planar structure. Such designs could be compatible with the aggressive scaling down of memory devices, but existing methods such as optical or e-beam lithographic approaches are too complicated. One-dimensional inorganic nanowires (i-NWs) are regarded as ideal components of nanoelectronics to circumvent the limitations of conventional lithographic approaches. However, post-growth alignment of these i-NWs precisely on a large area with individual control is still a difficult challenge. Here, we report a simple, inexpensive, and rapid method to fabricate two-dimensional arrays of perpendicularly-aligned, individually-conductive Cu-NWs with a nanometer-scale CuxO layer sandwiched at each cross point, by using an inorganic-nanowire-digital-alignment technique (INDAT) and a one-step reduction process. In this approach, the oxide layer is self-formed and patterned, so conventional deposition and lithography are not necessary. INDAT eliminates the difficulties of alignment and scalable fabrication that are encountered when using currently-available techniques that use inorganic nanowires. This simple process facilitates fabrication of cross-point nonvolatile memristor arrays. Fabricated arrays had reproducible resistive switching behavior, a high on/off current ratio (Ion/Ioff) of 10^6 and extensive cycling endurance. This is the first report of memristors with the resistive switching oxide layer self-formed, self-patterned and self-positioned; we envision that the new features of the technique will provide great opportunities for future nano-electronic circuits.

  1. Execution models for mapping programs onto distributed memory parallel computers

    NASA Technical Reports Server (NTRS)

    Sussman, Alan

    1992-01-01

    The problem of exploiting the parallelism available in a program to efficiently employ the resources of the target machine is addressed. The problem is discussed in the context of building a mapping compiler for a distributed memory parallel machine. The paper describes using execution models to drive the process of mapping a program in the most efficient way onto a particular machine. Through analysis of the execution models for several mapping techniques for one class of programs, we show that the selection of the best technique for a particular program instance can make a significant difference in performance. Moreover, the results of benchmarks from an implementation of a mapping compiler show that our execution models are accurate enough to select the best mapping technique for a given program.

  2. Conceptualisation of self-management intervention for people with early stage dementia.

    PubMed

    Martin, Faith; Turner, Andrew; Wallace, Louise M; Bradbury, Nicola

    2013-06-01

    Dementia is a major challenge for health and social care services. People living with dementia in the earlier stages experience a "care-gap". Although self-management interventions may address this gap in care, they have not been provided to people with dementia. It is unclear how to conceptualise self-management for this group, and few published papers address intervention design. Initial focusing work used a logic mapping approach and interviews with key stakeholders, including people with dementia and their family members. An initial set of self-management targets was identified for potential intervention. Self-management for people living with dementia was conceptualised as covering five targets: (1) relationship with family, (2) maintaining an active lifestyle, (3) psychological wellbeing, (4) techniques to cope with memory changes, and (5) information about dementia. These targets were used to focus literature reviewing to explore an evidence base for the conceptualisation. We discuss the utility of the Corbin and Strauss (Unending work and care: managing chronic illness at home. Jossey-Bass, Oxford, 1988) model of self-management, specifically that self-management for people living with dementia should be conceptualised as emphasising the importance of "everyday life work" (targets 1 and 2) and "biographical work" (target 3), with inclusion of but less emphasis on specific "illness work" (targets 4, 5). We argue that self-management is possible for people with dementia, with a strengths focus and emphasis on quality of life, which can be achieved despite cognitive impairments. Further development and testing of such interventions is required to provide much needed support for people in early stages of dementia.

  3. Online assessment of risk factors for dementia and cognitive function in healthy adults.

    PubMed

    Huntley, J; Corbett, A; Wesnes, K; Brooker, H; Stenton, R; Hampshire, A; Ballard, C

    2018-02-01

    Several potentially modifiable risk factors for cognitive decline and dementia have been identified, including low educational attainment, smoking, diabetes, physical inactivity, hypertension, midlife obesity, depression, and perceived social isolation. Managing these risk factors in late midlife and older age may help reduce the risk of dementia; however, it is unclear whether these factors also relate to cognitive performance in older individuals without dementia. Data from 14 201 non-demented individuals aged >50 years who enrolled in the online PROTECT study were used to examine the relationship between cognitive function and known modifiable risk factors for dementia. Multivariate regression analyses were conducted on 4 cognitive outcomes assessing verbal and spatial working memory, visual episodic memory, and verbal reasoning. Increasing age was associated with reduced performance across all tasks. Higher educational achievement, the presence of a close confiding relationship, and moderate alcohol intake were associated with benefits across all 4 cognitive tasks, and exercise was associated with better performance on verbal reasoning and verbal working memory tasks. A diagnosis of depression was negatively associated with performance on visual episodic memory and working memory tasks, whereas being underweight negatively affected performance on all tasks apart from verbal working memory. A history of stroke was negatively associated with verbal reasoning and working memory performance. Known modifiable risk factors for dementia are associated with cognitive performance in non-demented individuals in late midlife and older age. This provides further support for public health interventions that seek to manage these risk factors across the lifespan. Copyright © 2017 John Wiley & Sons, Ltd.

  4. A fault-tolerant information processing concept for space vehicles.

    NASA Technical Reports Server (NTRS)

    Hopkins, A. L., Jr.

    1971-01-01

    A distributed fault-tolerant information processing system is proposed, comprising a central multiprocessor, dedicated local processors, and multiplexed input-output buses connecting them together. The processors in the multiprocessor are duplicated for error detection, which is felt to be less expensive than using coded redundancy of comparable effectiveness. Error recovery is made possible by a triplicated scratchpad memory in each processor. The main multiprocessor memory uses replicated memory for error detection and correction. Local processors use any of three conventional redundancy techniques: voting, duplex pairs with backup, and duplex pairs in independent subsystems.
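    The voting redundancy used by the local processors can be illustrated with a simple majority voter. This is a generic sketch of triple modular redundancy, not the paper's specific hardware (which also uses duplex pairs for error detection):

```python
from collections import Counter

def tmr_vote(a, b, c):
    """Triple-modular-redundancy voter: the majority value wins,
    masking a fault in any single module."""
    value, count = Counter([a, b, c]).most_common(1)[0]
    if count < 2:
        # All three disagree: more than one module is faulty,
        # so the error can be detected but not masked.
        raise RuntimeError("no majority: multiple module faults")
    return value
```

    The design trade-off noted in the abstract is visible here: voting masks a single fault outright, whereas a duplex pair can only detect disagreement and must fall back on a backup or a recovery mechanism such as the triplicated scratchpad.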

  5. Current trends in cognitive rehabilitation for memory disorders.

    PubMed

    Kashima, H; Kato, M; Yoshimasu, H; Muramatsu, T

    1999-06-01

    Progress in the neuropsychology of memory disorders has provided a foundation for development of cognitive rehabilitation for amnesic patients. Accumulating evidence in the past two decades suggested that certain training techniques could be beneficial to many amnesic patients, such as teaching and acquisition of domain-specific knowledge, motor coding, reality orientation, and meta-cognition improvement. In this article we review and discuss the current trends in cognitive rehabilitation of memory disorders and provide a future direction in this emerging field. In addition, our experience in the successful rehabilitation of Korsakoff syndrome patients is also introduced.

  6. Data-driven Techniques to Estimate Parameters in the Homogenized Energy Model for Shape Memory Alloys

    DTIC Science & Technology

    2011-11-01

    sensor. volume 79781K. Proceedings of the SPIE 7978, 2011. [9] D.J. Hartl, D.C. Lagoudas, F.T. Calkins, and J.H. Mabe. Use of a Ni60Ti shape memory...alloy for active jet engine chevron application: I. thermomechanical characterization. Smart Materials and Structures, 19:1–14, 2010. [10] D.J. Hartl...D.C. Lagoudas, F.T. Calkins, and J.H. Mabe. Use of a Ni60Ti shape memory alloy for active jet engine chevron application: II. experimentally validated

  7. Electrical studies of Ge4Sb1Te5 devices for memory applications

    NASA Astrophysics Data System (ADS)

    Sangeetha, B. G.; Shylashree, N.

    2018-05-01

    In this paper, Ge4Sb1Te5 thin-film devices were prepared and their electrical properties studied for memory applications. The devices were deposited using a vapor-evaporation technique. Current-voltage characterization demonstrated switching between the RESET and SET states and showed that the transition requires low energy. Switching between the amorphous and crystalline phases was studied using resistance-voltage characteristics. Endurance measurements confirmed the suitability of this composition for memory devices.

  8. Investigation of the development of optically controlled memory elements on the basis of multilayer semiconductor-dielectric structures

    NASA Astrophysics Data System (ADS)

    Plotnikov, A. F.; Seleznev, V. N.

    The possibility of reverse optical recording in MNOS structures of Me-Si3N4-SiO2-Si type is investigated. Charge-transfer processes in traps under the effect of electric pulses are examined, and attention is given to the application of laser switching and photoelectric reading techniques to such structures. The principal energetic and temporal characteristics of such optical memories are examined, and the organization of a high-capacity (greater than 100-million bits) optical memory is discussed.

  9. Direct evidence of detwinning in polycrystalline Ni-Mn-Ga ferromagnetic shape memory alloys during deformation.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nie, Z. H.; Lin Peng, R.; Johansson, S.

    2008-01-01

    In situ time-of-flight neutron diffraction and high-energy x-ray diffraction techniques were used to reveal the preferred reselection of martensite variants through a detwinning process in polycrystalline Ni-Mn-Ga ferromagnetic shape memory alloys under uniaxial compressive stress. The variant reorientation via detwinning during loading can be explained by considering the influence of external stress on the grain/variant orientation-dependent distortion energy. These direct observations of detwinning provide a good understanding of the deformation mechanisms in shape memory alloys.

  10. Radiative bistability and thermal memory.

    PubMed

    Kubytskyi, Viacheslav; Biehs, Svend-Age; Ben-Abdallah, Philippe

    2014-08-15

    We predict the existence of a thermal bistability in many-body systems out of thermal equilibrium which exchange heat by thermal radiation using insulator-metal transition materials. We propose a writing-reading procedure and demonstrate the possibility to exploit the thermal bistability to make a volatile thermal memory. We show that this thermal memory can be used to store heat and thermal information (via an encoding temperature) for arbitrarily long times. The radiative thermal bistability could find broad applications in the domains of thermal management, information processing, and energy storage.

  11. Set-Membership Identification for Robust Control Design

    DTIC Science & Technology

    1993-04-28

    system G can be regarded as having no memory in (18) in terms of G and 0, we get of events prior to t = 1, the initial time. Roughly, this means all...algorithm in [1]. Also in our application, the size of the matrices involved is quite large and special attention should be paid to the memory ...management and algorithmic implementation; otherwise huge amounts of memory will be required to perform the optimization even for modest values of M and N

  12. Design and Implementation of a Basic Cross-Compiler and Virtual Memory Management System for the TI-59 Programmable Calculator.

    DTIC Science & Technology

    1983-06-01

    previously stated requirements to construct the framework for a software solution. It is during this phase of design that many of the most critical...the linker would have to be deferred until the compiler was formalized and in the implementation phase of design. The second problem involved...memory limit was encountered. At this point a segmentation occurred. The memory limits were reset and the combining process continued until another

  13. A new approach for implementation of associative memory using volume holographic materials

    NASA Astrophysics Data System (ADS)

    Habibi, Mohammad; Pashaie, Ramin

    2012-02-01

    Associative memory, also known as fault-tolerant or content-addressable memory, has gained considerable attention in the last few decades. This memory possesses important advantages over the more common random access memories, since it provides the capability to correct faults and/or partially missing information in a given input pattern. There is general consensus that optical implementations of connectionist models and parallel processors, including associative memory, have a better record of success than their electronic counterparts. In this article, we describe a novel optical implementation of associative memory which not only offers all-optical learning and recall capabilities but can also be realized easily. We present a new approach, inspired by tomographic imaging techniques, for holographic implementation of associative memories. In this approach, a volume holographic material is sandwiched between a matrix of inputs (optical point sources) and outputs (photodetectors). The memory capacity is realized by spatial modulation of the refractive index of the holographic material. Constructing the spatial distribution of the refractive index from an array of known inputs and outputs is formulated as an inverse problem consisting of a set of linear integral equations.
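    The inverse-problem formulation can be illustrated by discretizing the integral equations into a linear system and solving it by least squares. The sizes, random propagation weights, and voxel model below are toy assumptions for the example, not the paper's actual optical geometry:

```python
import numpy as np

# Discretize the refractive-index modulation into K voxels x; each
# (source, detector) pair contributes one linear equation A_i . x = y_i.
rng = np.random.default_rng(0)
K, M = 8, 20                      # voxels, source-detector measurements
x_true = rng.random(K)            # "stored memory" pattern to recover
A = rng.random((M, K))            # propagation weights per measurement
y = A @ x_true                    # detector readings for known inputs

# Recovering x from (A, y) is the inverse problem; with more
# measurements than voxels, least squares gives a stable solution.
x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
```

    In the physical system the rows of `A` are fixed by the source-detector geometry, so the design question is choosing enough input-output pairs to make the system well-conditioned.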

  14. Cognitive remediation therapy (CRT) benefits more to patients with schizophrenia with low initial memory performances.

    PubMed

    Pillet, Benoit; Morvan, Yannick; Todd, Aurelia; Franck, Nicolas; Duboc, Chloé; Grosz, Aimé; Launay, Corinne; Demily, Caroline; Gaillard, Raphaël; Krebs, Marie-Odile; Amado, Isabelle

    2015-01-01

    Cognitive deficits in schizophrenia mainly affect memory, attention and executive functions. Cognitive remediation is a technique derived from neuropsychology, which aims to improve or compensate for these deficits. Working memory, verbal learning, and executive functions are crucial factors for functional outcome. Our purpose was to assess the impact of the cognitive remediation therapy (CRT) program on cognitive difficulties in patients with schizophrenia, especially on working memory, verbal memory, and cognitive flexibility. We collected data from clinical and neuropsychological assessments in 24 patients suffering from schizophrenia (Diagnostic and Statistical Manual of mental Disorders-Fourth Edition, DSM-IV) who followed a 3-month (CRT) program. Verbal and visuo-spatial working memory, verbal memory, and cognitive flexibility were assessed before and after CRT. The Wilcoxon test showed significant improvements on the backward digit span, on the visual working memory span, on verbal memory and on flexibility. Cognitive improvement was substantial when baseline performance was low, independently from clinical benefit. CRT is effective on crucial cognitive domains and provides a huge benefit for patients having low baseline performance. Such cognitive amelioration appears highly promising for improving the outcome in cognitively impaired patients.

  15. The use of virtual reality in memory rehabilitation: current findings and future directions.

    PubMed

    Brooks, B M; Rose, F D

    2003-01-01

    There is considerable potential for using virtual reality (VR) in memory rehabilitation which is only just beginning to be realized. PC-based virtual environments are probably better suited for this purpose than more immersive virtual environments because they are relatively inexpensive and portable, and less frightening to patients. Those exploratory studies that have so far been performed indicate that VR involvement would be usefully directed towards improving assessments of memory impairments and in memory remediation using reorganization techniques. In memory assessment, the use of VR could provide more comprehensive, ecologically-valid, and controlled evaluations of prospective, incidental, and spatial memory in a rehabilitation setting than is possible using standardized assessment tests. The additional knowledge gained from these assessments could more effectively direct rehabilitation towards specific impairments of individual patients. In memory remediation, VR training has been found to promote procedural learning in people with memory impairments, and this learning has been found to transfer to improved real-world performance. Future research should investigate ways in which the procedural knowledge gained during VR interaction can be adapted to offset the many disabilities which result from different forms of memory impairment.

  16. Comparing memory-efficient genome assemblers on stand-alone and cloud infrastructures.

    PubMed

    Kleftogiannis, Dimitrios; Kalnis, Panos; Bajic, Vladimir B

    2013-01-01

    A fundamental problem in bioinformatics is genome assembly. Next-generation sequencing (NGS) technologies produce large volumes of fragmented genome reads, which require large amounts of memory to assemble the complete genome efficiently. With recent improvements in DNA sequencing technologies, it is expected that the memory footprint required for the assembly process will increase dramatically and will emerge as a limiting factor in processing widely available NGS-generated reads. In this report, we compare current memory-efficient techniques for genome assembly with respect to quality, memory consumption and execution time. Our experiments prove that it is possible to generate draft assemblies of reasonable quality on conventional multi-purpose computers with very limited available memory by choosing suitable assembly methods. Our study reveals the minimum memory requirements for different assembly programs even when data volume exceeds memory capacity by orders of magnitude. By combining existing methodologies, we propose two general assembly strategies that can improve short-read assembly approaches and result in reduction of the memory footprint. Finally, we discuss the possibility of utilizing cloud infrastructures for genome assembly and we comment on some findings regarding suitable computational resources for assembly.
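    Memory-footprint comparisons of the kind described can be approximated in Python with `tracemalloc`. The k-mer table below is a toy stand-in for an assembler's indexing stage, and real benchmarks would measure process-level memory (RSS) rather than Python-level allocations:

```python
import tracemalloc

def peak_memory_mb(fn, *args):
    # Measure the Python-level peak allocation of one processing step
    # (a rough proxy for the per-assembler footprints compared above).
    tracemalloc.start()
    fn(*args)
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return peak / 2**20

def build_kmer_table(reads, k=4):
    # Toy stand-in for an assembler's k-mer indexing stage.
    table = {}
    for r in reads:
        for i in range(len(r) - k + 1):
            kmer = r[i:i + k]
            table[kmer] = table.get(kmer, 0) + 1
    return table
```

    Profiling each candidate stage this way is one practical route to the "minimum memory requirements" question the study addresses: the peak, not the final size, determines whether a dataset fits on a given machine.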

  17. A Cognitive Task Analysis of Information Management Strategies in a Computerized Provider Order Entry Environment

    PubMed Central

    Weir, Charlene R.; Nebeker, Jonathan J.R.; Hicken, Bret L.; Campo, Rebecca; Drews, Frank; LeBar, Beth

    2007-01-01

    Objective Computerized Provider Order Entry (CPOE) with electronic documentation, and computerized decision support dramatically changes the information environment of the practicing clinician. Prior work patterns based on paper, verbal exchange, and manual methods are replaced with automated, computerized, and potentially less flexible systems. The objective of this study is to explore the information management strategies that clinicians use in the process of adapting to a CPOE system using cognitive task analysis techniques. Design Observation and semi-structured interviews were conducted with 88 primary-care clinicians at 10 Veterans Administration Medical Centers. Measurements Interviews were taped, transcribed, and extensively analyzed to identify key information management goals, strategies, and tasks. Tasks were aggregated into groups, common components across tasks were clarified, and underlying goals and strategies identified. Results Nearly half of the identified tasks were not fully supported by the available technology. Six core components of tasks were identified. Four meta-cognitive information management goals emerged: 1) Relevance Screening; 2) Ensuring Accuracy; 3) Minimizing memory load; and 4) Negotiating Responsibility. Strategies used to support these goals are presented. Conclusion Users develop a wide array of information management strategies that allow them to successfully adapt to new technology. Supporting the ability of users to develop adaptive strategies to support meta-cognitive goals is a key component of a successful system. PMID:17068345

  18. Managing Learning for Performance.

    ERIC Educational Resources Information Center

    Kuchinke, K. Peter

    1995-01-01

    Presents findings of organizational learning literature that could substantiate claims of learning organization proponents. Examines four learning processes and their contribution to performance-based learning management: knowledge acquisition, information distribution, information interpretation, and organizational memory. (SK)

  19. Development of a proto-typology of opiate overdose onset.

    PubMed

    Neale, Joanne; Bradford, Julia; Strang, John

    2017-01-01

    The time available to act is a crucial factor affecting the probable success of interventions to manage opiate overdose. We analyse opiate users' accounts of non-fatal overdose incidents to (i) construct a proto-typology of non-fatal opiate overdose onset and (ii) assess the implications for overdose management and prevention of fatalities. Re-analysis of a subset of data from a large qualitative study of non-fatal opiate overdose conducted from 1997 to 1999. Data were generated from semi-structured interviews undertaken with opiate users who had experienced a non-fatal overdose in the previous 24 hours. Forty-four participants (30 men; 14 women; aged 16-47 years) provided sufficient information for in-depth analysis. Data relating to 'memory of the moment of overdose', 'time to loss of consciousness' and 'subjective description of the overdose experience' were scrutinised using iterative categorization. Four types of overdose onset were identified: type A 'amnesic' (n = 8), characterized by no memory, rapid loss of consciousness and no description of the overdose experience; type B 'conscious' (n = 17), characterized by some memory, sustained consciousness and a description of the overdose in terms of feeling unwell and symptomatic; type C 'instant' (n = 14), characterized by some memory, immediate loss of consciousness and no description of the overdose experience; and type D 'enjoyable' (n = 5), characterized by some memory, rapid loss of consciousness and a description of the overdose experience as pleasant or positive. The identification of different types of overdose onset highlights the complexity of overdose events, the need for a range of interventions and the challenges faced in managing incidents and preventing fatalities. Opiate overdose victims who retain consciousness for a sustained period and recognize the negative signs and symptoms of overdosing could summon help or self-administer naloxone, thus indicating that opiate overdose training should incorporate self-management strategies. © 2016 Society for the Study of Addiction.

  20. LSST camera readout chip ASPIC: test tools

    NASA Astrophysics Data System (ADS)

    Antilogus, P.; Bailly, Ph; Jeglot, J.; Juramy, C.; Lebbolo, H.; Martin, D.; Moniez, M.; Tocut, V.; Wicek, F.

    2012-02-01

    The LSST camera will have more than 3000 video-processing channels. The readout of this large focal plane requires a very compact readout chain. The correlated double sampling technique, which is generally used for the signal readout of CCDs, is also adopted for this application and implemented with the so-called ''dual slope integrator'' method. We have designed and implemented an ASIC for LSST: the Analog Signal Processing asIC (ASPIC). The goal is to amplify the signal close to the output, in order to maximize the signal-to-noise ratio, and to send differential outputs to the digitization stage. Other requirements are that each chip should process the output of half a CCD, that is 8 channels, and should operate at 173 K. A specific back-end board has been designed especially for lab test purposes. It manages the clock signals, digitizes the differential analog outputs of the ASPIC and stores the data in memory. It contains 8 ADCs (18 bits), 512 kwords of memory and a USB interface. An FPGA manages all signals from/to all components on the board and generates the timing sequence for the ASPIC. Its firmware is written in the Verilog and VHDL languages. Internal registers permit defining various test parameters of the ASPIC. A LabVIEW GUI allows these registers to be loaded or updated and proper operation to be checked. Several series of tests, including linearity, noise and crosstalk, have been performed over the past year to characterize the ASPIC at room and cold temperatures. At present, the ASPIC, back-end board and CCD detectors are being integrated to perform a characterization of the whole readout chain.

  1. Schedule-controlled learning and memory in a regulatory context

    EPA Science Inventory

    Control of behavior by the manipulation of contingencies provides powerful techniques for assessing the hazard of chemical toxicants on the nervous system. When applied to evaluate the consequences of developmental exposure, these techniques are well suited for characterizing per...

  2. View of the STS 51-L Memorial service on JSC's main mall

    NASA Technical Reports Server (NTRS)

    1986-01-01

    This high-angle photo of thousands of JSC employees and family and friends of the 51-L crewmembers was taken from the top of JSC's project management building prior to the memorial service. Note the bleachers that were erected overnight to accommodate the hundreds of news media here to cover the event.

  3. A Cognitive Model for Exposition of Human Deception and Counterdeception

    DTIC Science & Technology

    1987-10-01

    for understanding deception and counterdeception, for developing related tactics, and for stimulating research in cognitive processes. Further...Processing Resources; Attention) BUFFER MEMORY MANAGER (Local) (Problem Solving; Learning; Procedures) BUFFER MEMORY SENSORS Visual, Auditory...Perception and Misperception in International Politics, Princeton University Press, Princeton, NJ, 1976. Key, W.B., Subliminal Seduction. New

  4. APINetworks Java. A Java approach to the efficient treatment of large-scale complex networks

    NASA Astrophysics Data System (ADS)

    Muñoz-Caro, Camelia; Niño, Alfonso; Reyes, Sebastián; Castillo, Miriam

    2016-10-01

    We present a new version of the core structural package of our Application Programming Interface, APINetworks, for the treatment of complex networks in arbitrary computational environments. The new version is written in Java and presents several advantages over the previous C++ version: the portability of the Java code, the ease of object-oriented design implementation, and the simplicity of memory management. In addition, new data structures are introduced for storing the sets of nodes and edges. Also, by resorting to the different garbage collectors currently available in the JVM, the Java version is much more efficient than the C++ one with respect to memory management. In particular, the G1 collector is the most efficient one because G1 executes in parallel with the Java application. Using G1, APINetworks Java outperforms the C++ version and the well-known NetworkX and JGraphT packages in the building and BFS traversal of linear and complete networks. The better memory management of the present version allows for the modeling of much larger networks.
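    The BFS benchmark operation mentioned above can be sketched in a few lines. This is plain Python for illustration, not the APINetworks Java API; `linear_network` is a hypothetical helper building the chain topology used in the benchmarks:

```python
from collections import deque

def bfs_order(adj, start):
    """Breadth-first traversal over an adjacency-list network;
    returns nodes in the order they are first visited."""
    seen, order, queue = {start}, [], deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for nbr in adj[node]:
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return order

def linear_network(n):
    # A "linear network" as in the benchmark: the chain 0-1-2-...-(n-1).
    return {i: [j for j in (i - 1, i + 1) if 0 <= j < n] for i in range(n)}
```

    For benchmarks of this kind, the dominant memory cost is the adjacency storage plus the `seen` set, which is why the choice of node/edge data structures and of garbage collector shows up so directly in the results reported above.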

  5. Evidence for the efficacy of the MORI technique: viewers do not notice or implicitly remember details from the alternate movie version.

    PubMed

    French, Lauren; Gerrie, Matthew P; Garry, Maryanne; Mori, Kazuo

    2009-11-01

    The MORI technique provides a unique way to research social influences on memory. The technique allows people to watch different movies on the same screen at the same time without realizing that each of them sees something different. As a result, researchers can create a situation in which people feel as though they share an experience, but systematic differences are introduced into their memories, and the effect of those differences can be tracked through a discussion. Despite its methodological advances, the MORI technique has been met with criticism, mostly because reviewers are worried that the MORI technique might not completely block the alternate movie version from view, leading people in these studies to see their partner's version of the movie as well as their own. We addressed these concerns in two experiments. We found no evidence that subjects noticed the alternate movie version while watching a movie via the MORI technique (Experiment 1) and no evidence that subjects remembered details from the alternate movie version (Experiment 2). Taken together, the results provide support for the MORI technique as a valuable research tool.

  6. Shift and rotation invariant photorefractive crystal-based associative memory

    NASA Astrophysics Data System (ADS)

    Uang, Chii-Maw; Lin, Wei-Feng; Lu, Ming-Huei; Lu, Guowen; Lu, Mingzhe

    1995-08-01

    A shift- and rotation-invariant photorefractive (PR) crystal-based associative memory is addressed. The proposed associative memory has three layers: the feature extraction, inner-product, and output mapping layers. The feature extraction is performed by expanding an input object into a set of circular harmonic expansions (CHE) in the Fourier domain to acquire both the shift- and rotation-invariant properties. The inner-product operation is performed by taking advantage of Bragg diffraction in the bulk PR crystal. The output mapping is achieved by using the massive storage capacity of the PR crystal. In the training process, memories are stored in another PR crystal by using the wavelength-multiplexing technique. During the recall process, the output from the winner-take-all processor decides which wavelength should be used to read out the memory from the PR crystal.

  7. Topological order and memory time in marginally-self-correcting quantum memory

    NASA Astrophysics Data System (ADS)

    Siva, Karthik; Yoshida, Beni

    2017-03-01

    We examine two proposals for marginally-self-correcting quantum memory: the cubic code by Haah and the welded code by Michnicki. In particular, we prove explicitly that both lack topological order above zero temperature, as their Gibbs ensembles can be prepared via a short-depth quantum circuit from classical ensembles. Our proof technique naturally gives rise to the notion of free energy associated with excitations. Further, we develop a framework for an ergodic decomposition of Davies generators in CSS codes which enables formal reduction to simpler classical memory problems. We then show that memory time in the welded code is doubly exponential in inverse temperature via the Peierls argument. These results introduce further connections between thermal topological order and self-correction from the viewpoint of free energy and quantum circuit depth.

  8. Source accuracy data reveal the thresholded nature of human episodic memory.

    PubMed

    Harlow, Iain M; Donaldson, David I

    2013-04-01

    Episodic recollection supports conscious retrieval of past events. It is unknown why recollected memories are often vivid, but at other times we struggle to remember. Such experiences might reflect a recollection threshold: Either the threshold is exceeded and information is retrieved, or recollection fails completely. Alternatively, retrieval failure could reflect weak memory: Recollection could behave as a continuous signal, always yielding some variable degree of information. Here we reconcile these views, using a novel source memory task that measures retrieval accuracy directly. We show that recollection is thresholded, such that retrieval sometimes simply fails. Our technique clarifies a fundamental property of memory and allows responses to be accurately measured, without recourse to subjective introspection. These findings raise new questions about how successful retrieval is determined and why it declines with age and disease.

  9. A survey of visual preprocessing and shape representation techniques

    NASA Technical Reports Server (NTRS)

    Olshausen, Bruno A.

    1988-01-01

    Many recent theories and methods proposed for visual preprocessing and shape representation are summarized. The survey brings together research from the fields of biology, psychology, computer science, electrical engineering, and most recently, neural networks. It was motivated by the need to preprocess images for a sparse distributed memory (SDM), but the techniques presented may also prove useful for applying other associative memories to visual pattern recognition. The material of this survey is divided into three sections: an overview of biological visual processing; methods of preprocessing (extracting parts of shape, texture, motion, and depth); and shape representation and recognition (form invariance, primitives and structural descriptions, and theories of attention).

  10. On the Performance of an Algebraic Multigrid Solver on Multicore Clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, A H; Schulz, M; Yang, U M

    2010-04-29

    Algebraic multigrid (AMG) solvers have proven to be extremely efficient on distributed-memory architectures. However, when executed on modern multicore cluster architectures, we face new challenges that can significantly harm AMG's performance. We discuss our experiences on such an architecture and present a set of techniques that help users to overcome the associated problems, including thread and process pinning and correct memory associations. We have implemented most of the techniques in a MultiCore SUPport library (MCSup), which helps to map OpenMP applications to multicore machines. We present results using both an MPI-only and a hybrid MPI/OpenMP model.
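
    Process and thread pinning itself is OS-level; as a minimal sketch of the idea (assuming a Linux host, where Python exposes `os.sched_setaffinity`; MCSup itself works on OpenMP threads), a process can bind itself to a single core so that its memory allocations stay associated with that core's NUMA domain:

```python
import os

pinned = None
if hasattr(os, "sched_setaffinity"):      # Linux-only API
    available = os.sched_getaffinity(0)   # cores this process may run on
    target = {min(available)}             # pin to the lowest-numbered core
    os.sched_setaffinity(0, target)
    pinned = os.sched_getaffinity(0) == target
```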

  11. A Bandwidth-Optimized Multi-Core Architecture for Irregular Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Secchi, Simone; Tumeo, Antonino; Villa, Oreste

    This paper presents an architecture template for next-generation high performance computing systems specifically targeted at irregular applications. We start our work by considering that future-generation interconnection and memory bandwidth full-system numbers are expected to grow by a factor of 10. In order to keep up with such a communication capacity, while still resorting to fine-grained multithreading as the main way to tolerate the unpredictable memory access latencies of irregular applications, we show how overall performance scaling can benefit from the multi-core paradigm. At the same time, we also show how such an architecture template must be coupled with specific techniques in order to optimize bandwidth utilization and achieve maximum scalability. We propose a technique based on memory reference aggregation, together with the related hardware implementation, as one such optimization technique. We explore the proposed architecture template by focusing on the Cray XMT architecture and, using a dedicated simulation infrastructure, validate the performance of our template with two typical irregular applications. Our experimental results prove the benefits provided by both the multi-core approach and the bandwidth-optimizing reference aggregation technique.
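
    The reference-aggregation idea can be modeled in software (a toy model under assumed parameters: 64-byte lines and word-granularity references; the paper's hardware implementation operates quite differently):

```python
LINE = 64  # assumed memory-line size in bytes

def aggregate(addresses):
    """Coalesce individual word references into one request per line."""
    lines = sorted({addr // LINE for addr in addresses})
    return [(line * LINE, LINE) for line in lines]

# Six fine-grained references collapse into three line-sized requests,
# so fewer (wider) transactions cross the interconnect.
reqs = aggregate([0, 8, 24, 64, 72, 200])
```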

  12. Shift-phase code multiplexing technique for holographic memories and optical interconnection

    NASA Astrophysics Data System (ADS)

    Honma, Satoshi; Muto, Shinzo; Okamoto, Atsushi

    2008-03-01

    Holographic technologies for optical memories and interconnection devices have been studied actively because of their high storage capacity, large number of wiring patterns, and high transmission rate. Among multiplexing techniques such as angular, phase-code and wavelength multiplexing, the speckle multiplexing technique has attracted attention due to its simple optical setup, with an adjustable random phase filter in only one direction. To keep the construction simple and to suppress crosstalk among adjacent page data or wiring patterns for efficient holographic memories and interconnection, the optimum randomness of the phase filter must be considered. High randomness expands the illumination area of the reference beam on the holographic medium; low randomness, on the other hand, causes crosstalk between adjacent hologram data. We have proposed a method of holographic multiplexing, shift-phase code multiplexing, with a two-dimensional orthogonal-matrix phase filter. Many orthogonal phase codes can be produced by shifting the phase filter in one direction, allowing individual holograms to be recorded and read out with low crosstalk. We give basic experimental results on holographic data multiplexing and consider the phase pattern of the filter that sufficiently suppresses crosstalk between adjacent holograms.
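
    A rough numerical analogue of the orthogonal-code idea (an illustration, not the authors' optics): model the two-dimensional orthogonal-matrix filter as a Hadamard matrix whose ±1 entries stand for 0/π phase shifts, so that each shift of the filter exposes a different, mutually orthogonal code.

```python
import numpy as np

def hadamard(n):
    """Sylvester construction; n must be a power of two."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

H = hadamard(8)
codes = [H[k] for k in range(8)]   # one phase code per filter shift
crosstalk = max(abs(int(codes[i] @ codes[j]))
                for i in range(8) for j in range(i + 1, 8))
# crosstalk == 0: distinct shifts give perfectly orthogonal codes,
# which is what keeps adjacent holograms from interfering.
```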

  13. Preventing the return of fear memories with postretrieval extinction: A human study using a burst of white noise as an aversive stimulus.

    PubMed

    Fernandez-Rey, Jose; Gonzalez-Gonzalez, Daniel; Redondo, Jaime

    2018-06-07

    Standard extinction procedures seem to imply an inhibition of the fear response, but not a modification of the original fear-memory trace, which remains intact (Bouton, 2002, 2004). Typically, the behavioral procedure used to modify this trace is the so-called postretrieval extinction, consisting of fear-memory reactivation followed by extinction applied within the reconsolidation window. However, the application of this technique yields mixed results, probably due to a series of boundary conditions that limit the effectiveness of postretrieval-extinction effects. In this study a number of potential, and hitherto unexplored, moderators of such effects are considered. Using an interval of 48 hr between extinction and re-extinction, the findings show a spontaneous recovery similar to that found in studies that use a 24-hr interval. Also, the use of intervals of 10 and 20 min between reactivation and extinction led to a similar fear return. Finally, the burst of white noise used as an unconditioned stimulus (US) here was shown to be as effective as the electric shock normally used in the study of fear-memory reconsolidation. These findings suggest that postretrieval extinction is an effective behavioral technique for modifying the original fear memory and for the elimination of the fear return. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  14. Suicide Management Plan--Post Suicide Response.

    ERIC Educational Resources Information Center

    Imhoff, Robert; Royster, Sharon

    This document contains a suicide management plan developed specifically for colleges. The suicide management plan described includes pre-planning, immediate response to the event, press releases, college staff jobs, college responses (such as memorials or scholarships), interaction with the family, and staff counseling. The plan is presented as a…

  15. Managing coherence via put/get windows

    DOEpatents

    Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton on Hudson, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Hoenicke, Dirk [Ossining, NY; Ohmacht, Martin [Yorktown Heights, NY

    2011-01-11

    A method and apparatus for managing coherence between two processors of a two-processor node of a multi-processor computer system. Generally, the present invention relates to a software algorithm that simplifies and significantly speeds the management of cache coherence in a message-passing parallel computer, and to hardware apparatus that assists this cache coherence algorithm. The software algorithm uses the opening and closing of put/get windows to coordinate the activity required to achieve cache coherence. The hardware apparatus may be an extension to the hardware address decode that creates, in the physical memory address space of the node, an area of virtual memory that (a) does not actually exist, and (b) is therefore able to respond instantly to read and write requests from the processing elements.

  16. Managing coherence via put/get windows

    DOEpatents

    Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton on Hudson, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Hoenicke, Dirk [Ossining, NY; Ohmacht, Martin [Yorktown Heights, NY

    2012-02-21

    A method and apparatus for managing coherence between two processors of a two-processor node of a multi-processor computer system. Generally, the present invention relates to a software algorithm that simplifies and significantly speeds the management of cache coherence in a message-passing parallel computer, and to hardware apparatus that assists this cache coherence algorithm. The software algorithm uses the opening and closing of put/get windows to coordinate the activity required to achieve cache coherence. The hardware apparatus may be an extension to the hardware address decode that creates, in the physical memory address space of the node, an area of virtual memory that (a) does not actually exist, and (b) is therefore able to respond instantly to read and write requests from the processing elements.
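
    The window discipline can be caricatured in software (a schematic model, not the patented mechanism; the class name and the placement of the flush/invalidate steps are assumptions): coherence work happens only when a window opens or closes, and remote puts are legal only while it is open.

```python
class PutGetWindow:
    """Toy model: coherence actions occur only at window open/close."""

    def __init__(self):
        self.is_open = False
        self.log = []

    def open(self):
        self.log.append("invalidate-cache")   # stand-in for coherence work
        self.is_open = True

    def put(self, addr, value, memory):
        assert self.is_open, "puts are only legal inside an open window"
        memory[addr] = value

    def close(self):
        self.log.append("flush-cache")        # make puts visible to the peer
        self.is_open = False

mem = {}
w = PutGetWindow()
w.open()
w.put(0x10, 42, mem)
w.close()
```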

  17. Semantic and episodic memory in children with temporal lobe epilepsy: do they relate to literacy skills?

    PubMed

    Lah, Suncica; Smith, Mary Lou

    2014-01-01

    Children with temporal lobe epilepsy are at risk for deficits in new learning (episodic memory) and literacy skills. Semantic memory deficits and double dissociations between episodic and semantic memory have recently been found in this patient population. In the current study we investigate whether impairments of these 2 distinct memory systems relate to literacy skills. 57 children with unilateral temporal lobe epilepsy completed tests of verbal memory (episodic and semantic) and literacy skills (reading and spelling accuracy, and reading comprehension). For the entire group, semantic memory explained over 30% of variance in each of the literacy domains. Episodic memory explained a significant, but rather small proportion (< 10%) of variance in reading and spelling accuracy, but not in reading comprehension. Moreover, when children with opposite patterns of specific memory impairments (intact semantic/impaired episodic, intact episodic/impaired semantic) were compared, significant reductions in literacy skills were evident only in children with semantic memory impairments, but not in children with episodic memory impairments relative to the norms and to children with temporal lobe epilepsy who had intact memory. Our study provides the first evidence for differential relations between episodic and semantic memory impairments and literacy skills in children with temporal lobe epilepsy. As such, it highlights the urgent need to consider semantic memory deficits in management of children with temporal lobe epilepsy and undertake further research into the nature of reading difficulties of children with semantic memory impairments.

  18. Changes in brain activation during working memory and facial recognition tasks in patients with bipolar disorder with Lamotrigine monotherapy.

    PubMed

    Haldane, Morgan; Jogia, Jigar; Cobb, Annabel; Kozuch, Eliza; Kumari, Veena; Frangou, Sophia

    2008-01-01

    Verbal working memory and emotional self-regulation are impaired in Bipolar Disorder (BD). Our aim was to investigate the effect of Lamotrigine (LTG), which is effective in the clinical management of BD, on the neural circuits subserving working memory and emotional processing. Functional Magnetic Resonance Imaging data from 12 stable BD patients was used to detect LTG-induced changes as the differences in brain activity between drug-free and post-LTG monotherapy conditions during a verbal working memory (N-back sequential letter task) and an angry facial affect recognition task. For both tasks, LGT monotherapy compared to baseline was associated with increased activation mostly within the prefrontal cortex and cingulate gyrus, in regions normally engaged in verbal working memory and emotional processing. Therefore, LTG monotherapy in BD patients may enhance cortical function within neural circuits involved in memory and emotional self-regulation.

  19. Intracranial cavernous angioma: a practical review of clinical and biological aspects.

    PubMed

    Raychaudhuri, Ratul; Batjer, H Huntington; Awad, Issam A

    2005-04-01

    Cavernomas are an uncommon lesion seen in neurosurgical practice that can occasionally rupture. Recent developments in neurosurgical technique and microbiology have brought greater insight into the treatment and molecular pathogenesis of cavernoma. In this review, a historical overview of cavernous angioma, a current paradigm for treatment, promising new molecular biological developments, and suggestions for future directions in neurosurgical research are presented, with emphasis on practical clinical applications. A survey of the literature on cavernous angioma and consultation with the Department of Neurosurgery at Northwestern Memorial Hospital was conducted by the authors to gain greater insight regarding this lesion. Papers and consultation revealed the importance of careful evaluation of this lesion, new techniques such as functional magnetic resonance imaging and frameless stereotaxy that simplify clinical management of cavernomas, and potential mechanisms by which to tackle this lesion in the future. New basic knowledge on disease biology is summarized with practical applications in the clinical arena. There appear to be a number of controversies regarding management of this lesion. These include risk factors faced by the patient, controversy over the importance of resection, and modality through which the treatment should occur. An algorithm is presented to aid the neurosurgeon in management of these lesions. Exciting developments in neurosurgery and molecular biology will continue to have a major impact on clinical treatment of this disease. Unresolved issues regarding the importance of certain risk factors, the role for radiotherapy in treatments, and the underlying molecular abnormalities must be tackled to gain greater clarity in treatment of this lesion.

  20. Regulators of Long-Term Memory Revealed by Mushroom Body-Specific Gene Expression Profiling in Drosophila melanogaster.

    PubMed

    Widmer, Yves F; Bilican, Adem; Bruggmann, Rémy; Sprecher, Simon G

    2018-06-20

    Memory formation is achieved by genetically tightly controlled molecular pathways that result in a change of synaptic strength and synapse organization. While for short-term memory traces rapidly acting biochemical pathways are in place, the formation of long-lasting memories requires changes in the transcriptional program of a cell. Although many genes involved in learning and memory formation have been identified, little is known about the genetic mechanisms required for changing the transcriptional program during different phases of long-term memory formation. With Drosophila melanogaster as a model system we profiled transcriptomic changes in the mushroom body, a memory center in the fly brain, at distinct time intervals during appetitive olfactory long-term memory formation using the targeted DamID technique. We describe the gene expression profiles during these phases and tested 33 selected candidate genes for deficits in long-term memory formation using RNAi knockdown. We identified 10 genes that enhance or decrease memory when knocked-down in the mushroom body. For vajk-1 and hacd1 , the two strongest hits, we gained further support for their crucial role in appetitive learning and forgetting. These findings show that profiling gene expression changes in specific cell-types harboring memory traces provides a powerful entry point to identify new genes involved in learning and memory. The presented transcriptomic data may further be used as resource to study genes acting at different memory phases. Copyright © 2018, Genetics.

  1. Memory handling in the ATLAS submission system from job definition to sites limits

    NASA Astrophysics Data System (ADS)

    Forti, A. C.; Walker, R.; Maeno, T.; Love, P.; Rauschmayr, N.; Filipcic, A.; Di Girolamo, A.

    2017-10-01

    In the past few years the increased luminosity of the LHC, changes in the Linux kernel and a move to a 64-bit architecture have affected the memory usage of ATLAS jobs, and the ATLAS workload management system had to be adapted to be more flexible and to pass memory parameters to the batch systems, which was not previously a necessity. This paper describes the steps required to add the capability to better handle memory requirements, including a review of how each component's definition and parametrization of memory maps onto the other components, and of what changes had to be applied to make the submission chain work. These changes range from the definition of tasks and the way task memory requirements are set using scout jobs, through the new memory tool developed for that purpose, to how these values are used by the submission component of the system and how the jobs are treated by the sites through the CEs, batch systems and ultimately the kernel.
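
    The mapping from measured task memory to a batch-system request might look roughly like the following sketch (the function name, the per-core convention and the 20% safety factor are illustrative assumptions, not ATLAS's actual parameters):

```python
def batch_memory_request_mb(scout_rss_mb, cores, safety=1.2):
    """Translate memory measured by scout jobs into a per-core request."""
    padded = scout_rss_mb * safety   # headroom over the scouts' measurement
    per_core = padded / cores        # many batch systems enforce per-core limits
    return int(per_core + 0.5)      # round to whole MB

request = batch_memory_request_mb(scout_rss_mb=14000, cores=8)  # 2100 MB/core
```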

  2. Driving working memory with frequency-tuned noninvasive brain stimulation.

    PubMed

    Albouy, Philippe; Baillet, Sylvain; Zatorre, Robert J

    2018-04-29

    Frequency-tuned noninvasive brain stimulation is a recent approach in cognitive neuroscience that involves matching the frequency of transcranially applied electromagnetic fields to that of specific oscillatory components of the underlying neurophysiology. The objective of this method is to modulate ongoing/intrinsic brain oscillations, which correspond to rhythmic fluctuations of neural excitability, to causally change behavior. We review the impact of frequency-tuned noninvasive brain stimulation on the research field of human working memory. We argue that this is a powerful method to probe and understand the mechanisms of memory functions, targeting specifically task-related oscillatory dynamics, neuronal representations, and brain networks. We report the main behavioral and neurophysiological outcomes published to date, in particular, how functionally relevant oscillatory signatures in signal power and interregional connectivity yield causal changes of working memory abilities. We also present recent developments of the technique that aim to modulate cross-frequency coupling in polyrhythmic neural activity. Overall, the method has led to significant advances in our understanding of the mechanisms of systems neuroscience, and the role of brain oscillations in cognition and behavior. We also emphasize the translational impact of noninvasive brain stimulation techniques in the development of therapeutic approaches. © 2018 New York Academy of Sciences.

  3. Ultranarrow Optical Inhomogeneous Linewidth in a Stoichiometric Rare-Earth Crystal.

    PubMed

    Ahlefeldt, R L; Hush, M R; Sellars, M J

    2016-12-16

    We obtain a low optical inhomogeneous linewidth of 25 MHz in the stoichiometric rare-earth crystal EuCl_{3}·6H_{2}O by isotopically purifying the crystal in ^{35}Cl. With this linewidth, an important limit for stoichiometric rare-earth crystals is surpassed: the hyperfine structure of ^{153}Eu is spectrally resolved, allowing the whole population of ^{153}Eu^{3+} ions to be prepared in the same hyperfine state using hole-burning techniques. This material also has a very high optical density, and can have long coherence times when deuterated. This combination of properties offers new prospects for quantum information applications. We consider two of these: quantum memories and quantum many-body studies. We detail the improvements in the performance of current memory protocols possible in these high optical depth crystals, and describe how certain memory protocols, such as off-resonant Raman memories, can be implemented for the first time in a solid-state system. We explain how the strong excitation-induced interactions observed in this material resemble those seen in Rydberg systems, and describe how these interactions can lead to quantum many-body states that could be observed using standard optical spectroscopy techniques.

  4. Self-healing bolted joint employing a shape memory actuator

    NASA Astrophysics Data System (ADS)

    Muntges, Daniel E.; Park, Gyuhae; Inman, Daniel J.

    2001-08-01

    This paper reports an initial investigation into active control of preload in a bolted joint using a shape memory actuator around the axis of the bolt shaft. Specifically, the actuator is a cylindrical Nitinol washer that expands axially when heated, according to the shape memory effect. The washer is actuated in response to an artificial decrease in torque. Upon actuation, the stress generated by its axial strain compresses the bolted members and creates a frictional force that has the effect of generating a preload and restoring lost torque. In addition to torque wrenches, the system was monitored in all stages of testing using piezoelectric impedance analysis. Impedance analysis drew upon research techniques developed at the Center for Intelligent Material Systems and Structures, in which phase changes in the impedance of a self-sensing piezoceramic actuator correspond to changes in joint stiffness. Through experimentation, we have documented a successful actuation of the shape memory element. Due to the complexity of constitutive modeling, qualitative analysis by the impedance method is used to illustrate this success. Additional observations from this initial investigation are noted to guide the further research required for successful commercial application of this promising technique.

  5. Translational Approaches Targeting Reconsolidation

    PubMed Central

    Kroes, Marijn C.W.; LeDoux, Joseph E.; Phelps, Elizabeth A.

    2017-01-01

    Maladaptive learned responses and memories contribute to psychiatric disorders that constitute a significant socio-economic burden. Primary treatment methods teach patients to inhibit maladaptive responses, but do not get rid of the memory itself, which explains why many patients experience a return of symptoms even after initially successful treatment. This highlights the need to discover more persistent and robust techniques to diminish maladaptive learned behaviours. One potentially promising approach is to alter the original memory, as opposed to inhibiting it, by targeting memory reconsolidation. Recent research shows that reactivating an old memory results in a period of memory flexibility and requires restorage, or reconsolidation, for the memory to persist. This reconsolidation period allows a window for modification of a specific old memory. Renewal of memory flexibility following reactivation holds great clinical potential as it enables targeting reconsolidation and changing of specific learned responses and memories that contribute to maladaptive mental states and behaviours. Here, we will review translational research on non-human animals, healthy human subjects, and clinical populations aimed at altering memories by targeting reconsolidation using biological treatments (electrical stimulation, noradrenergic antagonists) or behavioural interference (reactivation–extinction paradigm). Both approaches have been used successfully to modify aversive and appetitive memories, yet effectiveness in treating clinical populations has been limited. We will discuss that memory flexibility depends on the type of memory tested and the brain regions that underlie specific types of memory. Further, when and how we can most effectively reactivate a memory and induce flexibility is largely unclear. Finally, the development of drugs that can target reconsolidation and are safe for use in humans would optimize cross-species translations. 
Increasing the understanding of the mechanism and limitations of memory flexibility upon reactivation should help optimize efficacy of treatments for psychiatric patients. PMID:27240676

  6. Evidence from a partial report task for forgetting in dynamic spatial memory.

    PubMed

    Gugerty, L

    1998-09-01

    G. Sperling (1960) and others have investigated memory for briefly presented stimuli by using a partial versus whole report technique in which participants sometimes reported part of a stimulus array and sometimes reported all of it. For simple, static stimulus displays, the partial report technique showed that participants could recall most of the information in the stimulus array but that this information faded quickly when participants engaged in whole report recall. An experiment was conducted that applied the partial report method to a task involving complex displays of moving objects. In the experiment, 26 participants viewed cars in a low-fidelity driving simulator and then reported the locations of some or all of the cars in each scene. A statistically significant advantage was found for the partial report trials. This finding suggests that detailed spatial location information was forgotten from dynamic spatial memory over the 14 s that it took participants to recall whole report trials. The experiment results suggest better ways of measuring situation awareness. Partial report recall techniques may give a more accurate measure of people's momentary situation awareness than whole report techniques. Potential applications of this research include simulator-based measures of situation awareness ability that can be part of inexpensive test batteries to select people for real-time tasks (e.g., in a driver licensing battery) and to identify people who need additional training.

  7. Analysis of memory consolidation and evocation in rats by proton induced X-ray emission

    NASA Astrophysics Data System (ADS)

    Jobim, P. F. C.; dos Santos, C. E. I.; Maurmann, N.; Reolon, G. K.; Debastiani, R.; Pedroso, T. R.; Carvalho, L. M.; Dias, J. F.

    2014-08-01

    It is well known that trace elements such as Mg, Ca, Fe, Cu and Zn have a key role in synapse plasticity and learning. Learning process is conventionally divided in three distinct and complementary stages: memory acquisition, consolidation and evocation. Consolidation is the stabilization of the synaptic trace formed by acquisition, while evocation is the recall of this trace. Ion-based techniques capable of providing information concerning the elemental composition of organic tissues may be helpful to improve our understanding on memory consolidation and evocation processes. In particular, the Particle-Induced X-ray Emission (PIXE) technique can be used to analyze different biological tissues with good accuracy. In this work we explore the versatility of PIXE to measure the elemental concentrations in rat brain tissues in order to establish any possible correlation between them and the memory consolidation and evocation processes. To this end, six groups of middle-age male Wistar rats were trained and tested in a step-down Inhibitory Avoidance conditioning. After the behavior tests, the animals were decapitated in accordance with the legal procedures and their brains were removed and dissected for the PIXE analyses. The results demonstrated that there are differences in the elemental concentration among the groups and such variations may be associated with their availability to the learning processes (by memory consolidation and evocation). Moreover, the control groups circumvent the possibility that a non-specific event involved in learning tasks cause such variations. Our results suggest that PIXE may be a useful tool to investigate memory consolidation and evocation in animal models.

  8. Visual short-term memory: activity supporting encoding and maintenance in retinotopic visual cortex.

    PubMed

    Sneve, Markus H; Alnæs, Dag; Endestad, Tor; Greenlee, Mark W; Magnussen, Svein

    2012-10-15

    Recent studies have demonstrated that retinotopic cortex maintains information about visual stimuli during retention intervals. However, the process by which transient stimulus-evoked sensory responses are transformed into enduring memory representations is unknown. Here, using fMRI and short-term visual memory tasks optimized for univariate and multivariate analysis approaches, we report differential involvement of human retinotopic areas during memory encoding of the low-level visual feature orientation. All visual areas show weaker responses when memory encoding processes are interrupted, possibly due to effects in orientation-sensitive primary visual cortex (V1) propagating across extrastriate areas. Furthermore, intermediate areas in both dorsal (V3a/b) and ventral (LO1/2) streams are significantly more active during memory encoding compared with non-memory (active and passive) processing of the same stimulus material. These effects in intermediate visual cortex are also observed during memory encoding of a different stimulus feature (spatial frequency), suggesting that these areas are involved in encoding processes on a higher level of representation. Using pattern-classification techniques to probe the representational content in visual cortex during delay periods, we further demonstrate that simply initiating memory encoding is not sufficient to produce long-lasting memory traces. Rather, active maintenance appears to underlie the observed memory-specific patterns of information in retinotopic cortex. Copyright © 2012 Elsevier Inc. All rights reserved.

  9. Technology for organization of the onboard system for processing and storage of ERS data for ultrasmall spacecraft

    NASA Astrophysics Data System (ADS)

    Strotov, Valery V.; Taganov, Alexander I.; Konkin, Yuriy V.; Kolesenkov, Aleksandr N.

    2017-10-01

    Processing and analysis of Earth remote sensing data on board an ultra-small spacecraft is a topical task, given the significant energy cost of transferring data and the low performance of onboard computers. This raises the issue of effective and reliable storage, in a specialized database, of the general information flow obtained from the onboard data-collection systems, including Earth remote sensing data. The paper considers the peculiarities of operating a database management system with a multilevel memory structure. A format has been developed that describes the physical structure of the database and contains the parameters required for loading information. This structure reduces the memory occupied by the database, because key values need not be stored separately. The paper presents the architecture of a relational database management system intended for embedding in the onboard software of an ultra-small spacecraft. Databases for storing diverse information, including Earth remote sensing data, can be built with this system for subsequent processing. The suggested architecture places low demands on the computing power and memory resources available on board an ultra-small spacecraft. Data integrity is ensured during input and modification of the structured information.
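
    The idea that the physical format itself carries the parameters needed to load records, so that keys need not be stored separately, can be sketched with a fixed-layout file (the field names, magic value and layout are invented for illustration):

```python
import struct

HEADER = struct.Struct("<4sHH")   # magic, record size, field count
RECORD = struct.Struct("<If8s")   # id, value, 8-byte tag (fixed layout)

def write_db(records):
    blob = HEADER.pack(b"USDB", RECORD.size, 3)
    for rec in records:
        blob += RECORD.pack(*rec)
    return blob

def read_record(blob, n):
    """Record position acts as the key: no separate key storage needed."""
    magic, rec_size, _ = HEADER.unpack_from(blob, 0)
    assert magic == b"USDB"
    return RECORD.unpack_from(blob, HEADER.size + n * rec_size)

blob = write_db([(1, 0.5, b"cloud\x00\x00\x00"),
                 (2, 0.9, b"ice\x00\x00\x00\x00\x00")])
rec = read_record(blob, 1)        # fetch the second record by position
```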

  10. High dynamic range adaptive real-time smart camera: an overview of the HDR-ARTiST project

    NASA Astrophysics Data System (ADS)

    Lapray, Pierre-Jean; Heyrman, Barthélémy; Ginhac, Dominique

    2015-04-01

    Standard cameras capture only a fraction of the information that is visible to the human visual system. This is specifically true for natural scenes including areas of low and high illumination due to transitions between sunlit and shaded areas. When capturing such a scene, many cameras are unable to store the full Dynamic Range (DR), resulting in low-quality video where details are concealed in shadows or washed out by sunlight. The imaging technique that can overcome this problem is called HDR (High Dynamic Range) imaging. This paper describes a complete smart camera built around a standard off-the-shelf LDR (Low Dynamic Range) sensor and a Virtex-6 FPGA board. This smart camera, called HDR-ARtiSt (High Dynamic Range Adaptive Real-time Smart camera), is able to produce a real-time HDR live color video stream by recording and combining multiple acquisitions of the same scene while varying the exposure time. This technique appears to be one of the most appropriate and cheapest solutions for enhancing the dynamic range of real-life environments. HDR-ARtiSt embeds real-time multiple capture, HDR processing, data display and transfer of an HDR color video at full sensor resolution (1280 × 1024 pixels) at 60 frames per second. The main contributions of this work are: (1) a Multiple Exposure Control (MEC) dedicated to smart image capture with three alternating exposure times that are dynamically evaluated from frame to frame, (2) a Multi-streaming Memory Management Unit (MMMU) dedicated to the memory read/write operations of the three parallel video streams corresponding to the different exposure times, (3) HDR creation by combining the video streams using a dedicated hardware version of Debevec's technique, and (4) Global Tone Mapping (GTM) of the HDR scene for display on a standard LCD monitor.
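
As an illustration of the exposure-bracketing principle, the following Python sketch merges LDR frames into a radiance map and tone-maps it for display. It assumes a linear sensor response and a simple Reinhard-style global operator; the actual HDR-ARtiSt pipeline implements Debevec's method in FPGA hardware, so this is only a software analogue:

```python
import numpy as np

def merge_hdr(images, exposure_times):
    """Estimate a radiance map from bracketed LDR frames.

    Assumes a linear sensor response; a triangle ("hat") weighting
    discounts under- and over-exposed pixels in each frame.
    """
    eps = 1e-6
    num = np.zeros(images[0].shape, dtype=np.float64)
    den = np.zeros(images[0].shape, dtype=np.float64)
    for img, t in zip(images, exposure_times):
        z = img.astype(np.float64) / 255.0
        w = 1.0 - np.abs(2.0 * z - 1.0)            # hat weighting
        num += w * (np.log(z + eps) - np.log(t))   # per-frame log-radiance
        den += w
    log_E = num / np.maximum(den, eps)
    return np.exp(log_E)

def global_tone_map(E):
    """Simple global operator (Reinhard-style) for display on an LDR monitor."""
    L = E / (1.0 + E)
    return (255.0 * L).astype(np.uint8)
```

Saturated pixels receive zero weight, so each scene region is reconstructed mainly from the exposure that captured it best, which is the essence of the multi-exposure technique described above.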

  11. Communications and information research: Improved space link performance via concatenated forward error correction coding

    NASA Technical Reports Server (NTRS)

    Rao, T. R. N.; Seetharaman, G.; Feng, G. L.

    1996-01-01

    With the development of new advanced instruments for remote sensing applications, sensor data will be generated at a rate that not only requires increased onboard processing and storage capability, but also imposes demands on the space-to-ground communication link and the ground data management-communication system. Data compression and error control codes provide viable means to alleviate these demands. Two types of data compression have been studied by many researchers in the area of information theory: lossless techniques, which guarantee full reconstruction of the data, and lossy techniques, which generally give a higher data compaction ratio but incur some distortion in the reconstructed data. To satisfy the many science disciplines which NASA supports, lossless data compression becomes a primary focus of the technology development. When transmitting data produced by lossless compression, it is very important to use an error-control code. For a long time, convolutional codes have been widely used in satellite telecommunications. To transmit the output of the Rice algorithm more efficiently, the a posteriori probability (APP) of each decoded bit is required. A relevant algorithm has been proposed for this purpose which minimizes the bit error probability in decoding linear block and convolutional codes and provides the APP for each decoded bit. However, recent results on iterative decoding of 'Turbo codes' turn conventional wisdom on its head and suggest fundamentally new techniques. 
During the past several months of this research, the following approaches have been developed: (1) a new lossless data compression algorithm, which is much better than the extended Rice algorithm for various types of sensor data, (2) a new approach to determine the generalized Hamming weights of the algebraic-geometric codes defined by a large class of curves in high-dimensional spaces, (3) some efficient improved geometric Goppa codes for disk memory systems and high-speed mass memory systems, and (4) a tree based approach for data compression using dynamic programming.
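
The Rice algorithm referenced above is at heart a Golomb code with a power-of-two parameter k: each sample is split into a unary-coded quotient and a k-bit remainder. The Python below is a minimal, illustrative sketch of that bitstream, not the CCSDS-standard implementation (which adds predictive preprocessing and adaptive selection of k):

```python
def rice_encode(values, k):
    """Rice (Golomb power-of-two) code: unary quotient + k-bit remainder."""
    bits = []
    for v in values:
        q, r = v >> k, v & ((1 << k) - 1)
        bits += [1] * q + [0]                          # unary quotient, 0 terminator
        bits += [(r >> i) & 1 for i in reversed(range(k))]  # remainder, MSB first
    return bits

def rice_decode(bits, k, count):
    """Recover `count` samples from a Rice-coded bit list."""
    out, i = [], 0
    for _ in range(count):
        q = 0
        while bits[i] == 1:                            # read unary quotient
            q, i = q + 1, i + 1
        i += 1                                         # skip the 0 terminator
        r = 0
        for _ in range(k):
            r = (r << 1) | bits[i]
            i += 1
        out.append((q << k) | r)
    return out
```

Small samples cost few bits while large ones remain representable, which is why the code suits sensor residuals with geometrically decaying magnitude distributions.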

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Batista, Antonio J. N.; Santos, Bruno; Fernandes, Ana

    The data acquisition and control instrumentation cubicles room of the ITER tokamak will be irradiated with neutrons during fusion reactor operation. A Virtex-6 FPGA from Xilinx (XC6VLX365T-1FFG1156C) is used on the ATCA-IO-PROCESSOR board, included in the ITER Catalog of I and C products - Fast Controllers. The Virtex-6 is a re-programmable logic device whose configuration is stored in Static RAM (SRAM), functional data in dedicated Block RAM (BRAM), and functional state logic in flip-flops. Single Event Upsets (SEUs) due to ionizing neutron radiation cause soft errors: unintended changes (bit-flips) to the values stored in state elements of the FPGA. This work explores SEU monitoring and, where possible, the repair of soft errors. An FPGA built-in Soft Error Mitigation (SEM) controller detects and corrects soft errors in the FPGA configuration memory. Novel SEU sensors with an Error Correction Code (ECC) detect and repair the BRAM memories. Proper management of SEUs can increase the reliability and availability of control instrumentation hardware for nuclear applications. The results of tests performed using the SEM controller and the BRAM SEU sensors are presented for a Virtex-6 FPGA (XC6VLX240T-1FFG1156C) irradiated with neutrons from the Portuguese Research Reactor (RPI), a 1 MW nuclear fission reactor operated by IST near Lisbon. Results show that the proposed SEU mitigation technique is able to repair the majority of the detected SEU errors in the configuration and BRAM memories. (authors)
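
The principle behind such ECC-protected memories, a code whose syndrome both detects and locates a flipped bit so it can be repaired in place, can be sketched with the classic Hamming(7,4) code. This Python is illustrative only; the Virtex-6 BRAM ECC actually uses a wider 64/72-bit SECDED code:

```python
def hamming74_encode(d):
    """Encode 4 data bits into a 7-bit Hamming codeword (positions 1..7)."""
    p1 = d[0] ^ d[1] ^ d[3]          # covers positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]          # covers positions 2,3,6,7
    p3 = d[1] ^ d[2] ^ d[3]          # covers positions 4,5,6,7
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def hamming74_correct(c):
    """Return (data, syndrome); a nonzero syndrome is the 1-based
    position of a single flipped bit, which is repaired in place.
    (Real SECDED adds an overall parity bit to flag double errors.)"""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 | (s2 << 1) | (s3 << 2)
    if syndrome:
        c[syndrome - 1] ^= 1         # scrub the upset bit
    return [c[2], c[4], c[5], c[6]], syndrome
```

Periodically reading, correcting, and writing back codewords ("scrubbing") is how single upsets are repaired before a second upset in the same word can accumulate.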

  13. Extracting climate memory using Fractional Integrated Statistical Model: A new perspective on climate prediction

    PubMed Central

    Yuan, Naiming; Fu, Zuntao; Liu, Shida

    2014-01-01

    Long-term memory (LTM) in climate variability is studied by means of fractional integral techniques. Using a recently developed model, the Fractional Integral Statistical Model (FISM), we propose in this report a new method with which one can quantitatively estimate the long-lasting influence of historical climate states on the present and further extract this influence as climate memory signals. To show the usability of this method, two examples, the Northern Hemisphere monthly Temperature Anomalies (NHTA) and the Pacific Decadal Oscillation index (PDO), are analyzed in this study. We find that climate memory signals can indeed be extracted and that the whole variation can be decomposed into two parts: the cumulative climate memory (CCM) and the weather-scale excitation (WSE). The stronger the LTM, the larger the proportion of the whole variation that the climate memory signals account for. With the climate memory signals extracted, one can at least determine the baseline on which the considered time series will continue to vary. This report therefore provides a new perspective on climate prediction. PMID:25300777
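
One simplified, hypothetical reading of this decomposition uses the binomial weights of the fractional-differencing operator (1 - B)^d: the cumulative memory at time t is a weighted sum of all past values, and the weather-scale excitation is whatever remains. The published FISM differs in detail, so the Python below is only a sketch:

```python
def frac_diff_weights(d, n):
    """Binomial expansion weights of (1 - B)^d, with w[0] = 1."""
    w = [1.0]
    for j in range(1, n):
        w.append(w[-1] * (j - 1 - d) / j)
    return w

def decompose(x, d):
    """Split a series into memory M and excitation eps, x[t] = M[t] + eps[t].

    Since (1-B)^d x_t = eps_t, the memory part is M[t] = -sum_{j>=1} w[j] x[t-j].
    """
    w = frac_diff_weights(d, len(x))
    M, eps = [], []
    for t in range(len(x)):
        m = -sum(w[j] * x[t - j] for j in range(1, t + 1))
        M.append(m)
        eps.append(x[t] - m)
    return M, eps
```

For 0 < d < 0.5 the weights decay only as a power law, so distant past values still contribute, which is exactly what makes the memory "long-term" in this framework.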

  14. Mechanical properties and shape memory effect of thermal-responsive polymer based on PVA

    NASA Astrophysics Data System (ADS)

    Lin, Liulan; Zhang, Lingfeng; Guo, Yanwei

    2018-01-01

    In this study, the effect of the content of glutaraldehyde (GA) on the shape memory behavior of a shape memory polymer based on polyvinyl alcohol (PVA) chemically cross-linked with GA was investigated. Thermal-responsive shape memory composites with three different GA levels, GA-PVA (3 wt%, 5 wt%, 7 wt%), were prepared by particle melting, mold forming and freeze-drying techniques. The mechanical properties, thermal properties and shape memory behavior were measured by differential scanning calorimetry, a physical bending test and a cyclic thermo-mechanical test. The addition of GA to PVA led to a stable shape memory transition temperature and improved mechanical compressive strength. The composite with 5 wt% GA exhibited the best shape recoverability. A further increase in the content of the crosslinking agent GA reduced the recovery force and prolonged the recovery time due to restriction of the movement of the soft PVA chain segments. These results provide important information for the study of materials in 4D printing.

  15. Learning, Memory, and Transcranial Direct Current Stimulation

    PubMed Central

    Brasil-Neto, Joaquim P.

    2012-01-01

    Transcranial direct current stimulation (tDCS) has been the subject of many studies concerning its possible cognitive effects. One of the proposed mechanisms of action for neuromodulatory techniques, such as transcranial magnetic stimulation and tDCS is induction of long-term potentiation (LTP) and long-term depression (LTD)-like phenomena. LTP and LTD are also among the most important neurobiological processes involved in memory and learning. This fact has led to an immediate interest in the study of possible effects of tDCS on memory consolidation, retrieval, or learning of various tasks. This review analyses published articles describing beneficial or disruptive effects of tDCS on memory and learning in normal subjects. The most likely mechanisms underlying these effects are discussed. PMID:22969734

  16. MemAxes: Visualization and Analytics for Characterizing Complex Memory Performance Behaviors.

    PubMed

    Gimenez, Alfredo; Gamblin, Todd; Jusufi, Ilir; Bhatele, Abhinav; Schulz, Martin; Bremer, Peer-Timo; Hamann, Bernd

    2018-07-01

    Memory performance is often a major bottleneck for high-performance computing (HPC) applications. Deepening memory hierarchies, complex memory management, and non-uniform access times have made memory performance behavior difficult to characterize, and users require novel, sophisticated tools to analyze and optimize this aspect of their codes. Existing tools target only specific factors of memory performance, such as hardware layout, allocations, or access instructions. However, today's tools do not suffice to characterize the complex relationships between these factors. Further, they require advanced expertise to be used effectively. We present MemAxes, a tool based on a novel approach for analytic-driven visualization of memory performance data. MemAxes uniquely allows users to analyze the different aspects related to memory performance by providing multiple visual contexts for a centralized dataset. We define mappings of sampled memory access data to new and existing visual metaphors, each of which enables a user to perform different analysis tasks. We present methods to guide user interaction by scoring subsets of the data based on known performance problems. This scoring is used to provide visual cues and automatically extract clusters of interest. We designed MemAxes in collaboration with experts in HPC and demonstrate its effectiveness in case studies.

  17. Efficient Machine Learning Approach for Optimizing Scientific Computing Applications on Emerging HPC Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arumugam, Kamesh

    Efficient parallel implementation of scientific applications on multi-core CPUs with accelerators such as GPUs and Xeon Phis is challenging. It requires exploiting the data-parallel architecture of the accelerator along with the vector pipelines of modern x86 CPU architectures, load balancing, and efficient memory transfer between different devices. It is relatively easy to meet these requirements for highly structured scientific applications. In contrast, a number of scientific and engineering applications are unstructured. Getting performance on accelerators for these applications is extremely challenging because many of them employ irregular algorithms which exhibit data-dependent control-flow and irregular memory accesses. Furthermore, these applications are often iterative with dependencies between steps, making them hard to parallelize across steps. As a result, parallelism in these applications is often limited to a single step. Numerical simulation of charged particle beam dynamics is one such application, where the distribution of work and the memory access pattern at each time step are irregular. Applications with these properties tend to present significant branch and memory divergence, load imbalance between processor cores, and poor compute and memory utilization. Prior research on parallelizing such irregular applications has focused on optimizing the irregular, data-dependent memory accesses and control-flow during a single step of the application, independent of the other steps, under the assumption that these patterns are completely unpredictable. We observed that the structure of computation leading to control-flow divergence and irregular memory accesses in one step is similar to that in the next step. It is possible to predict this structure in the current step by observing the computation structure of previous steps. 
In this dissertation, we present novel machine-learning-based optimization techniques to address the parallel implementation challenges of such irregular applications on different HPC architectures. In particular, we use supervised learning to predict the computation structure and use it to address the control-flow and memory access irregularities in the parallel implementation of such applications on GPUs, Xeon Phis, and heterogeneous architectures composed of multi-core CPUs with GPUs or Xeon Phis. We use numerical simulation of charged particle beam dynamics as a motivating example throughout the dissertation, though the techniques should be equally applicable to a wide range of irregular applications. The machine learning approach presented here uses predictive analytics and forecasting techniques to adaptively model and track the irregular memory access pattern at each time step of the simulation and anticipate future memory access patterns. Access pattern forecasts can then be used to formulate optimization decisions during application execution which improve the performance of the application at a future time step based on observations from earlier time steps. In heterogeneous architectures, forecasts can also be used to improve the memory performance and resource utilization of all the processing units to deliver good aggregate performance. We used these optimization techniques and this anticipation strategy to design a cache-aware, memory-efficient parallel algorithm to address the irregularities in the parallel implementation of charged particle beam dynamics simulation on different HPC architectures. Experimental results using a diverse mix of HPC architectures show that our anticipation strategy is effective in maximizing data reuse, ensuring workload balance, minimizing branch and memory divergence, and improving resource utilization.

  18. SenseCam: A new tool for memory rehabilitation?

    PubMed

    Dubourg, L; Silva, A R; Fitamen, C; Moulin, C J A; Souchay, C

    2016-12-01

    The emergence of life-logging technologies has led neuropsychologists to focus on understanding how this new technology could help patients with memory disorders. Despite the growing number of studies using life-logging technologies, a theoretical framework supporting their effectiveness is lacking. This review focuses on the use of life-logging in the context of memory rehabilitation, particularly the use of SenseCam, a wearable camera allowing passive image capture. In our opinion, reviewing SenseCam images can be effective for memory rehabilitation only if it provides more than an assessment of prior occurrence, in ways that reinstate previous thoughts, feelings and sensory information, thus stimulating recollection. Considering that self-initiated processes are impaired in memory impairment, we propose that the environmental support hypothesis can explain the value of SenseCam for memory retrieval. Twenty-five research studies were selected for this review, and despite the general acceptance of the value of SenseCam as a memory technique, only a small number of studies focused on recollection. We discuss the usability of this tool to improve episodic memory and, in particular, recollection. Copyright © 2016 Elsevier Masson SAS. All rights reserved.

  19. Memory Flexibility training (MemFlex) to reduce depressive symptomatology in individuals with major depressive disorder: study protocol for a randomised controlled trial.

    PubMed

    Hitchcock, Caitlin; Hammond, Emily; Rees, Catrin; Panesar, Inderpal; Watson, Peter; Werner-Seidler, Aliza; Dalgleish, Tim

    2015-11-03

    Major depressive disorder (MDD) is associated with chronic biases in the allocation of attention and recollection of personal memories. Impaired flexibility in attention and autobiographical memory retrieval is seen to both maintain current symptoms and predict future depression. Development of innovative interventions to reduce maladaptive cognitive patterns and improve cognitive flexibility in the domain of memory may therefore advance current treatment approaches for depression. Memory specificity training and cognitive bias modification techniques have both shown some promise in improving cognitive flexibility. Here we outline plans for a trial of an innovative memory flexibility training programme, MemFlex, which advances current training techniques with the aim of improving flexibility of autobiographical memory retrieval. This trial seeks to estimate the efficacy of MemFlex, provide data on feasibility, and begin to explore mechanisms of change. We plan a single-blind, randomised, controlled, patient-level trial in which 50 individuals with MDD will complete either psychoeducation (n = 25) or MemFlex (n = 25). After completing pre-treatment measures and an orientation session, participants complete eight workbook-based sessions at home. Participants will then be assessed at post-treatment and at 3 month follow-up. The co-primary outcomes are depressive symptoms and diagnostic status at 3 month follow-up. The secondary outcomes are memory flexibility at post-treatment and number of depression free days at 3 month follow-up. Other process outcomes and mediators of any treatment effects will also be explored. This trial will establish the efficacy of MemFlex in improving memory flexibility, and reducing depressive symptoms. Any effects on process measures related to relapse may also indicate whether MemFlex may be helpful in reducing vulnerability to future depressive episodes. 
The low-intensity and workbook-based format of the programme may improve access to psychological therapies, and, if encouraging, the results of this study will provide a platform for later-phase trials. NCT02371291 (ClinicalTrials.gov), registered 9 February 2015.

  20. Acceptability and Feasibility Results of a Strength-Based Skills Training Program for Dementia Caregiving Dyads

    ERIC Educational Resources Information Center

    Judge, Katherine S.; Yarry, Sarah J.; Orsulic-Jeras, Silvia

    2010-01-01

    Purpose: The current article provides an in-depth description of a dyadic intervention for individuals with dementia and their family caregivers. Using a strength-based approach, caregiving dyads received skills training across 5 key areas: (a) education regarding dementia and memory loss, (b) effective communication, (c) managing memory loss, (d)…

  1. The Effects of Split-Attention and Redundancy on Cognitive Load When Learning Cognitive and Psychomotor Tasks

    ERIC Educational Resources Information Center

    Pociask, Fredrick D.; Morrison, Gary

    2004-01-01

    Human working memory can be defined as a component system responsible for the temporary storage and manipulation of information related to higher level cognitive behaviors, such as understanding and reasoning (Baddeley, 1992; Becker & Morris, 1999). Working memory, while able to manage a complex array of cognitive activities, presents with an…

  2. About the Library - Betty Petersen Memorial Library

    Science.gov Websites

    Maryland. History and Mission: Betty Petersen Memorial Library began as a reading room in the NOAA Science Center. The Reading Room was founded as a labor of love by Betty Petersen, librarian and wife of Dr. Petersen. She managed the NOAA Science Center Reading Room from 1989 to 1994, when she transferred to the NOAA

  3. Articulatory Suppression in Language Interpretation: Working Memory Capacity, Dual Tasking and Word Knowledge

    ERIC Educational Resources Information Center

    Padilla, Francisca; Bajo, Maria Teresa; Macizo, Pedro

    2005-01-01

    How do interpreters manage to cope with the adverse effects of concurrent articulation while trying to comprehend the message in the source language? In Experiments 1-3, we explored three possible working memory (WM) functions that may underlie the ability to simultaneously comprehend and produce in the interpreters: WM storage capacity,…

  4. Efficient accesses of data structures using processing near memory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jayasena, Nuwan S.; Zhang, Dong Ping; Diez, Paula Aguilera

    Systems, apparatuses, and methods for implementing efficient queues and other data structures. A queue may be shared among multiple processors and/or threads without using explicit software atomic instructions to coordinate access to the queue. System software may allocate an atomic queue and corresponding queue metadata in system memory and return, to the requesting thread, a handle referencing the queue metadata. Any number of threads may utilize the handle for accessing the atomic queue. The logic for ensuring the atomicity of accesses to the atomic queue may reside in a management unit in the memory controller coupled to the memory where the atomic queue is allocated.
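
A software analogue may clarify the division of labor. In the hedged Python sketch below, a manager object stands in for the hardware management unit in the memory controller: it owns all queue storage, serializes access internally, and hands callers only an opaque handle, so callers never issue atomic instructions themselves. Names and API are hypothetical, not taken from the patent:

```python
import threading
from collections import deque

class QueueManager:
    """Software stand-in for the near-memory management unit."""

    def __init__(self):
        self._queues = {}
        self._lock = threading.Lock()   # models the unit's internal serialization
        self._next = 0

    def allocate(self):
        """Allocate a queue and return an opaque handle to it."""
        with self._lock:
            handle = self._next
            self._next += 1
            self._queues[handle] = deque()
            return handle

    def enqueue(self, handle, item):
        with self._lock:
            self._queues[handle].append(item)

    def dequeue(self, handle):
        """Pop the oldest item, or None if the queue is empty."""
        with self._lock:
            q = self._queues[handle]
            return q.popleft() if q else None
```

Any number of threads can share the handle; because all mutation happens inside the manager, the callers' code contains no atomic or locking instructions of its own, mirroring the abstract's claim.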

  5. APPLICATION OF A FINITE-DIFFERENCE TECHNIQUE TO THE HUMAN RADIOFREQUENCY DOSIMETRY PROBLEM

    EPA Science Inventory

    A powerful finite difference numerical technique has been applied to the human radiofrequency dosimetry problem. The method possesses inherent advantages over the method of moments approach in that its implementation requires much less computer memory. Consequently, it has the ca...

  6. An expanded role for neuroimaging in the evaluation of memory impairment

    PubMed Central

    Desikan, Rahul S.; Rafii, Michael S.; Brewer, James B.; Hess, Christopher P.

    2014-01-01

    Alzheimer’s disease (AD) affects millions of people worldwide. The neuropathologic process underlying AD begins years, if not decades, before the onset of memory decline. Recent advances in neuroimaging suggest that it is now possible to detect AD-associated neuropathological changes well before dementia onset. Here, we evaluate the role of recently developed in vivo biomarkers in the clinical evaluation of AD. We discuss how assessment strategies might incorporate neuroimaging markers to better inform patients, families and clinicians when memory impairment prompts a search for diagnosis and management options. PMID:23764728

  7. Whorfian effects on colour memory are not reliable.

    PubMed

    Wright, Oliver; Davies, Ian R L; Franklin, Anna

    2015-01-01

    The Whorfian hypothesis suggests that differences between languages cause differences in cognitive processes. Support for this idea comes from studies that find that patterns of colour memory errors made by speakers of different languages align with differences in colour lexicons. The current study provides a large-scale investigation of the relationship between colour language and colour memory, adopting a cross-linguistic and developmental approach. Colour memory on a delayed matching-to-sample (XAB) task was investigated in 2 language groups with differing colour lexicons, for 3 developmental stages and 2 regions of colour space. Analyses used a Bayesian technique to provide simultaneous assessment of two competing hypotheses (H1-Whorfian effect present, H0-Whorfian effect absent). Results of the analyses consistently favoured H0. The findings suggest that Whorfian effects on colour memory are not reliable and that the importance of such effects should not be overestimated.
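
The abstract does not specify which Bayesian technique was used, but simultaneous weighing of H1 and H0 is commonly expressed as a Bayes factor. The Python sketch below uses the BIC approximation BF01 ≈ exp((BIC1 − BIC0) / 2) for a simple two-group mean comparison, purely as an illustration of the idea:

```python
import math

def bic(rss, n, k):
    """Bayesian Information Criterion from a residual sum of squares."""
    return n * math.log(rss / n) + k * math.log(n)

def bayes_factor_01(group_a, group_b):
    """Approximate BF in favor of H0 (no group difference)."""
    x = group_a + group_b
    n = len(x)
    gm = sum(x) / n
    rss0 = sum((v - gm) ** 2 for v in x)             # H0: one common mean
    ma = sum(group_a) / len(group_a)
    mb = sum(group_b) / len(group_b)
    rss1 = sum((v - ma) ** 2 for v in group_a) + \
           sum((v - mb) ** 2 for v in group_b)       # H1: separate means
    # k counts mean parameter(s) plus the error variance.
    return math.exp((bic(rss1, n, 3) - bic(rss0, n, 2)) / 2)
```

A BF01 above 1 favours the null (no effect), which is how analyses can "consistently favour H0" rather than merely fail to reject it, the key advantage over classical significance testing here.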

  8. A wearable multiplexed silicon nonvolatile memory array using nanocrystal charge confinement

    PubMed Central

    Kim, Jaemin; Son, Donghee; Lee, Mincheol; Song, Changyeong; Song, Jun-Kyul; Koo, Ja Hoon; Lee, Dong Jun; Shim, Hyung Joon; Kim, Ji Hoon; Lee, Minbaek; Hyeon, Taeghwan; Kim, Dae-Hyeong

    2016-01-01

    Strategies for efficient charge confinement in nanocrystal floating gates to realize high-performance memory devices have been investigated intensively. However, few studies have reported nanoscale experimental validations of charge confinement in closely packed uniform nanocrystals and related device performance characterization. Furthermore, the system-level integration of the resulting devices with wearable silicon electronics has not yet been realized. We introduce a wearable, fully multiplexed silicon nonvolatile memory array with nanocrystal floating gates. The nanocrystal monolayer is assembled over a large area using the Langmuir-Blodgett method. Efficient particle-level charge confinement is verified with the modified atomic force microscopy technique. Uniform nanocrystal charge traps evidently improve the memory window margin and retention performance. Furthermore, the multiplexing of memory devices in conjunction with the amplification of sensor signals based on ultrathin silicon nanomembrane circuits in stretchable layouts enables wearable healthcare applications such as long-term data storage of monitored heart rates. PMID:26763827

  9. A wearable multiplexed silicon nonvolatile memory array using nanocrystal charge confinement.

    PubMed

    Kim, Jaemin; Son, Donghee; Lee, Mincheol; Song, Changyeong; Song, Jun-Kyul; Koo, Ja Hoon; Lee, Dong Jun; Shim, Hyung Joon; Kim, Ji Hoon; Lee, Minbaek; Hyeon, Taeghwan; Kim, Dae-Hyeong

    2016-01-01

    Strategies for efficient charge confinement in nanocrystal floating gates to realize high-performance memory devices have been investigated intensively. However, few studies have reported nanoscale experimental validations of charge confinement in closely packed uniform nanocrystals and related device performance characterization. Furthermore, the system-level integration of the resulting devices with wearable silicon electronics has not yet been realized. We introduce a wearable, fully multiplexed silicon nonvolatile memory array with nanocrystal floating gates. The nanocrystal monolayer is assembled over a large area using the Langmuir-Blodgett method. Efficient particle-level charge confinement is verified with the modified atomic force microscopy technique. Uniform nanocrystal charge traps evidently improve the memory window margin and retention performance. Furthermore, the multiplexing of memory devices in conjunction with the amplification of sensor signals based on ultrathin silicon nanomembrane circuits in stretchable layouts enables wearable healthcare applications such as long-term data storage of monitored heart rates.

  10. Experimentally modeling stochastic processes with less memory by the use of a quantum processor

    PubMed Central

    Palsson, Matthew S.; Gu, Mile; Ho, Joseph; Wiseman, Howard M.; Pryde, Geoff J.

    2017-01-01

    Computer simulation of observable phenomena is an indispensable tool for engineering new technology, understanding the natural world, and studying human society. However, the most interesting systems are often so complex that simulating their future behavior demands storing immense amounts of information regarding how they have behaved in the past. For increasingly complex systems, simulation becomes increasingly difficult and is ultimately constrained by resources such as computer memory. Recent theoretical work shows that quantum theory can reduce this memory requirement beyond ultimate classical limits, as measured by a process’ statistical complexity, C. We experimentally demonstrate this quantum advantage in simulating stochastic processes. Our quantum implementation observes a memory requirement of Cq = 0.05 ± 0.01, far below the ultimate classical limit of C = 1. Scaling up this technique would substantially reduce the memory required in simulations of more complex systems. PMID:28168218
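
Classically, a process's statistical complexity C is the Shannon entropy of the stationary distribution over its causal states. The small Python sketch below computes C for a two-state Markov model and reproduces the classical limit C = 1 quoted in the abstract for a symmetric process; the quantum reduction to Cq comes from storing the states as non-orthogonal quantum superpositions, which is not modeled here:

```python
import math

def stationary(P):
    """Stationary distribution of a row-stochastic matrix by power iteration."""
    pi = [1.0 / len(P)] * len(P)
    for _ in range(1000):
        pi = [sum(pi[i] * P[i][j] for i in range(len(P)))
              for j in range(len(P))]
    return pi

def statistical_complexity(P):
    """C = Shannon entropy (in bits) of the causal-state stationary distribution."""
    pi = stationary(P)
    return -sum(p * math.log2(p) for p in pi if p > 0)
```

Two equally likely causal states give C = 1 bit regardless of how weakly they differ in predicted future behavior, which is why a quantum encoding can dip far below this classical floor.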

  11. Visual-spatial abilities relate to mathematics achievement in children with heavy prenatal alcohol exposure

    PubMed Central

    Crocker, N.; Riley, E.P.; Mattson, S.N.

    2014-01-01

    Objective: The current study examined the relationship between mathematics and attention, working memory, and visual memory in children with heavy prenatal alcohol exposure and controls. Method: Fifty-six children (29 AE, 27 CON) were administered measures of global mathematics achievement (WRAT-3 Arithmetic & WISC-III Written Arithmetic), attention (WISC-III Digit Span forward and Spatial Span forward), working memory (WISC-III Digit Span backward and Spatial Span backward), and visual memory (CANTAB Spatial Recognition Memory and Pattern Recognition Memory). The contribution of cognitive domains to mathematics achievement was analyzed using linear regression techniques. Attention, working memory and visual memory data were entered together on step 1, followed by group on step 2, and the interaction terms on step 3. Results: Model 1 accounted for a significant amount of variance in both mathematics achievement measures; however, model fit improved with the addition of group on step 2. Significant predictors of mathematics achievement were Spatial Span forward and backward and Spatial Recognition Memory. Conclusions: These findings suggest that deficits in spatial processing may be related to math impairments seen in FASD. In addition, prenatal alcohol exposure was associated with deficits in mathematics achievement, above and beyond the contribution of general cognitive abilities. PMID:25000323

  12. Visual-spatial abilities relate to mathematics achievement in children with heavy prenatal alcohol exposure.

    PubMed

    Crocker, Nicole; Riley, Edward P; Mattson, Sarah N

    2015-01-01

    The current study examined the relationship between mathematics and attention, working memory, and visual memory in children with heavy prenatal alcohol exposure and controls. Subjects were 56 children (29 AE, 27 CON) who were administered measures of global mathematics achievement (WRAT-3 Arithmetic & WISC-III Written Arithmetic), attention, (WISC-III Digit Span forward and Spatial Span forward), working memory (WISC-III Digit Span backward and Spatial Span backward), and visual memory (CANTAB Spatial Recognition Memory and Pattern Recognition Memory). The contribution of cognitive domains to mathematics achievement was analyzed using linear regression techniques. Attention, working memory, and visual memory data were entered together on Step 1 followed by group on Step 2, and the interaction terms on Step 3. Model 1 accounted for a significant amount of variance in both mathematics achievement measures; however, model fit improved with the addition of group on Step 2. Significant predictors of mathematics achievement were Spatial Span forward and backward and Spatial Recognition Memory. These findings suggest that deficits in spatial processing may be related to math impairments seen in FASD. In addition, prenatal alcohol exposure was associated with deficits in mathematics achievement, above and beyond the contribution of general cognitive abilities. PsycINFO Database Record (c) 2015 APA, all rights reserved.
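
The stepwise procedure described in both versions of this record, entering predictor blocks cumulatively and examining model fit at each step, can be sketched with ordinary least squares. The Python below is illustrative only: variable names are hypothetical, and the original analysis added interaction terms on a third step:

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an OLS fit with an intercept column."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

def hierarchical_r2(y, blocks):
    """R^2 after each step as predictor blocks are entered cumulatively."""
    steps, cols = [], []
    for block in blocks:
        cols.append(block)
        steps.append(r_squared(np.column_stack(cols), y))
    return steps
```

The increase in R^2 from one step to the next is the quantity of interest: here it measures how much a group indicator explains over and above the cognitive predictors entered on Step 1.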

  13. Execute-Only Attacks against Execute-Only Defenses

    DTIC Science & Technology

    2015-11-13

    attacks that have been widely used to bypass randomization-based memory corruption defenses. A recent technique, Readactor, provides one of the... corruption defenses with various impacts. We analyze the prevalence of opportunities for such attacks in popular code bases and build two proof-of-concept...our countermeasures introduce only a modest additional overhead. I. INTRODUCTION Memory corruption has been a primary vector of attacks against

  14. Memory reconsolidation, repeating, and working through: Science and culture in psychotherapeutic research and practice.

    PubMed

    Levin, Charles

    2015-01-01

    Hypothesizing that an effective common feature in divergent forms of psychotherapy is a process of memory reconsolidation integrating new emotional experiences, Lane et al. usefully shift the focus away from established and/or specialized techniques to deeper questions about the underlying principles of psychotherapeutic change. More research attention to cultural factors influencing the definition and treatment of psychopathology is also needed.

  15. Scalability Issues for Remote Sensing Infrastructure: A Case Study.

    PubMed

    Liu, Yang; Picard, Sean; Williamson, Carey

    2017-04-29

    For the past decade, a team of University of Calgary researchers has operated a large "sensor Web" to collect, analyze, and share scientific data from remote measurement instruments across northern Canada. This sensor Web receives real-time data streams from over a thousand Internet-connected sensors, with a particular emphasis on environmental data (e.g., space weather, auroral phenomena, atmospheric imaging). Through research collaborations, we had the opportunity to evaluate the performance and scalability of their remote sensing infrastructure. This article reports the lessons learned from our study, which considered both data collection and data dissemination aspects of their system. On the data collection front, we used benchmarking techniques to identify and fix a performance bottleneck in the system's memory management for TCP data streams, while also improving system efficiency on multi-core architectures. On the data dissemination front, we used passive and active network traffic measurements to identify and reduce excessive network traffic from the Web robots and JavaScript techniques used for data sharing. While our results are from one specific sensor Web system, the lessons learned may apply to other scientific Web sites with remote sensing infrastructure.
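
    A common remedy for the kind of per-stream memory-management bottleneck mentioned above is to recycle fixed-size receive buffers from a pool rather than allocating a fresh buffer for every incoming message. The sketch below is a hypothetical illustration of that pattern, not the system's actual code; the `BufferPool` API and buffer sizes are invented for the example.

```python
from collections import deque

class BufferPool:
    """Reuse fixed-size receive buffers instead of allocating one per
    message, reducing allocator pressure under many concurrent streams."""

    def __init__(self, buf_size=65536, prealloc=8):
        self.buf_size = buf_size
        self._free = deque(bytearray(buf_size) for _ in range(prealloc))
        self.allocations = prealloc  # total buffers ever created

    def acquire(self):
        if self._free:
            return self._free.popleft()
        self.allocations += 1        # pool exhausted: grow on demand
        return bytearray(self.buf_size)

    def release(self, buf):
        self._free.append(buf)       # return the buffer for reuse

pool = BufferPool(buf_size=4096, prealloc=2)
# Simulate 100 sequential stream reads: buffers are recycled, not reallocated
for _ in range(100):
    buf = pool.acquire()
    buf[:4] = b"data"               # stand-in for sock.recv_into(buf)
    pool.release(buf)
print(pool.allocations)  # → 2
```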

  16. Matrix Algebra for GPU and Multicore Architectures (MAGMA) for Large Petascale Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dongarra, Jack J.; Tomov, Stanimire

    2014-03-24

    The goal of the MAGMA project is to create a new generation of linear algebra libraries that achieve the fastest possible time to an accurate solution on hybrid Multicore+GPU-based systems, using all the processing power that future high-end systems can make available within given energy constraints. Our efforts at the University of Tennessee achieved the goals set in all of the five areas identified in the proposal: 1. Communication optimal algorithms; 2. Autotuning for GPU and hybrid processors; 3. Scheduling and memory management techniques for heterogeneity and scale; 4. Fault tolerance and robustness for large scale systems; 5. Building energy efficiency into software foundations. The University of Tennessee’s main contributions, as proposed, were the research and software development of new algorithms for hybrid multi/many-core CPUs and GPUs, as related to two-sided factorizations and complete eigenproblem solvers, hybrid BLAS, and energy efficiency for dense, as well as sparse, operations. Furthermore, as proposed, we investigated and experimented with various techniques targeting the five main areas outlined.

  17. The properties of realized volatility and realized correlation: Evidence from the Indian stock market

    NASA Astrophysics Data System (ADS)

    Gkillas (Gillas), Konstantinos; Vortelinos, Dimitrios I.; Saha, Shrabani

    2018-02-01

    This paper investigates the properties of realized volatility and correlation series in the Indian stock market, employing daily data converted to monthly frequency for five stock indices from January 2, 2006 to November 30, 2014. Using non-parametric estimation techniques, the properties examined include normality, long memory, asymmetries, jumps, and heterogeneity. Realized volatility is a useful technique that provides a relatively accurate measure of volatility based on the actual variance, which is beneficial for asset management, in particular for non-speculative funds. The results show that the realized volatility and correlation series are not normally distributed, with some evidence of persistence. Asymmetries are also evident in both volatilities and correlations. Both jump and heterogeneity properties are significant, with the former more significant than the latter. The findings show that the properties of volatilities and correlations in the Indian stock market are similar to those observed in developed markets such as the United States, where speculative trading is more prevalent.
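
    In the standard realized-measures framework, realized volatility is the square root of the sum of squared intraperiod returns, and realized correlation is the realized covariance normalized by the two realized volatilities. A minimal sketch with simulated returns (the index series and parameters are illustrative, not the paper's data):

```python
import numpy as np

def realized_volatility(returns):
    """Realized volatility: sqrt of the sum of squared intraperiod returns."""
    r = np.asarray(returns, dtype=float)
    return np.sqrt(np.sum(r ** 2))

def realized_correlation(r1, r2):
    """Realized correlation: realized covariance over the two volatilities."""
    r1, r2 = np.asarray(r1, float), np.asarray(r2, float)
    cov = np.sum(r1 * r2)
    return cov / (realized_volatility(r1) * realized_volatility(r2))

rng = np.random.default_rng(1)
common = rng.normal(scale=0.01, size=21)           # ~one month of daily returns
idx_a = common + rng.normal(scale=0.005, size=21)  # two hypothetical indices
idx_b = common + rng.normal(scale=0.005, size=21)  # sharing a market factor

print(round(realized_volatility(idx_a), 4))
print(round(realized_correlation(idx_a, idx_b), 3))
```

    Aggregating daily squared returns to a monthly realized measure, as in the paper, is the same computation applied per calendar month.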

  18. Inductive Game Theory and the Dynamics of Animal Conflict

    PubMed Central

    DeDeo, Simon; Krakauer, David C.; Flack, Jessica C.

    2010-01-01

    Conflict destabilizes social interactions and impedes cooperation at multiple scales of biological organization. Of fundamental interest are the causes of turbulent periods of conflict. We analyze conflict dynamics in a monkey society model system. We develop a technique, Inductive Game Theory, to extract directly from time-series data the decision-making strategies used by individuals and groups. This technique uses Monte Carlo simulation to test alternative causal models of conflict dynamics. We find individuals base their decision to fight on memory of social factors, not on short timescale ecological resource competition. Furthermore, the social assessments on which these decisions are based are triadic (self in relation to another pair of individuals), not pairwise. We show that this triadic decision making causes long conflict cascades and that there is a high population cost of the large fights associated with these cascades. These results suggest that individual agency has been over-emphasized in the social evolution of complex aggregates, and that pairwise formalisms are inadequate. An appreciation of the empirical foundations of the collective dynamics of conflict is a crucial step towards its effective management. PMID:20485557

  19. Inductive game theory and the dynamics of animal conflict.

    PubMed

    DeDeo, Simon; Krakauer, David C; Flack, Jessica C

    2010-05-13

    Conflict destabilizes social interactions and impedes cooperation at multiple scales of biological organization. Of fundamental interest are the causes of turbulent periods of conflict. We analyze conflict dynamics in a monkey society model system. We develop a technique, Inductive Game Theory, to extract directly from time-series data the decision-making strategies used by individuals and groups. This technique uses Monte Carlo simulation to test alternative causal models of conflict dynamics. We find individuals base their decision to fight on memory of social factors, not on short timescale ecological resource competition. Furthermore, the social assessments on which these decisions are based are triadic (self in relation to another pair of individuals), not pairwise. We show that this triadic decision making causes long conflict cascades and that there is a high population cost of the large fights associated with these cascades. These results suggest that individual agency has been over-emphasized in the social evolution of complex aggregates, and that pairwise formalisms are inadequate. An appreciation of the empirical foundations of the collective dynamics of conflict is a crucial step towards its effective management.

  20. Processing and memory for emotional and neutral material in amyotrophic lateral sclerosis

    PubMed Central

    Cuddy, Marion; Papps, Benjamin J.; Thambisetty, Madhav; Leigh, P. Nigel; Goldstein, Laura H.

    2018-01-01

    Several studies have reported changes in emotional memory and processing in people with ALS (pwALS). In this study, we sought to analyse differences in emotional processing and memory between pwALS and healthy controls and to investigate the relationship between emotional memory and self-reported depression. Nineteen pwALS and 19 healthy controls were assessed on measures of emotional processing, emotional memory, verbal memory and depression. Although pwALS and controls did not differ significantly on measures of emotional memory, a subgroup of patients performed poorly on an emotional recognition task. With regard to emotional processing, pwALS gave significantly stronger ratings of emotional valence to positive words than to negative words. Higher ratings of emotional words were associated with better recall in controls but not pwALS. Self-reported depression and emotional processing or memory variables were not associated in either group. In conclusion, the results from this small study suggest that a subgroup of pwALS may show weakened ‘emotional enhancement’, although in the current sample this may reflect general memory impairment rather than specific changes in emotional memory. Nonetheless, different patterns of processing of emotionally-salient material by pwALS may have care and management-related implications. PMID:22873560

  1. Recall of briefly presented chess positions and its relation to chess skill.

    PubMed

    Gong, Yanfei; Ericsson, K Anders; Moxley, Jerad H

    2015-01-01

    Individual differences in memory performance in a domain of expertise have traditionally been accounted for by previously acquired chunks of knowledge and patterns. These accounts have been examined experimentally mainly in chess. The role of chunks (clusters of chess pieces recalled in rapid succession during recall of chess positions) and their relations to chess skill are, however, under debate. By introducing an independent chunk-identification technique, namely the repeated-recall technique, this study identified individual chunks for particular chess players. The study not only tested chess players with increasing chess expertise, but also tested non-chess players who should not have previously acquired any chess-related chunks in memory. For recall of game positions, significant differences between players and non-players were found in virtually all the characteristics of chunks recalled. Size of the largest chunks also correlates with chess skill within the group of rated chess players. Further research will help us understand how these memory encodings can explain large differences in chess skill.
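
    Chunks in such studies are often operationalized as runs of pieces recalled in rapid succession, segmented wherever the inter-item pause exceeds a threshold. The sketch below implements that latency-based segmentation; note that the repeated-recall technique itself additionally compares chunk boundaries across repeated recall trials, which this toy example does not do. The piece labels and the 2-second threshold are illustrative.

```python
def identify_chunks(items, latencies, pause=2.0):
    """Segment a recall sequence into chunks: items recalled in rapid
    succession belong to one chunk; an inter-item pause longer than the
    threshold (seconds) starts a new chunk."""
    chunks, current = [], [items[0]]
    for item, gap in zip(items[1:], latencies):
        if gap > pause:
            chunks.append(current)   # long pause: close the current chunk
            current = [item]
        else:
            current.append(item)     # rapid succession: same chunk
    chunks.append(current)
    return chunks

# Pieces recalled, with the inter-item latencies (s) between them
recall = ["Ke1", "Rf1", "Pg2", "Qd8", "Nc6"]
gaps = [0.4, 0.6, 3.1, 0.5]          # long pause before "Qd8"
chunks = identify_chunks(recall, gaps)
print(chunks)                        # [['Ke1', 'Rf1', 'Pg2'], ['Qd8', 'Nc6']]
print(max(len(c) for c in chunks))   # size of the largest chunk → 3
```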

  2. Mood-congruent memory in daily life: evidence from interactive ambulatory monitoring.

    PubMed

    Loeffler, Simone N; Myrtek, Michael; Peper, Martin

    2013-05-01

    Evidence from the psychological laboratory indicates that emotional states tend to facilitate the encoding and retrieval of stimuli of the same emotional valence. To explore mood-congruent memory and the role of arousal in daily life, we applied a new interactive ambulatory technique. Psychophysiological arousal as indexed by non-metabolic heart rate, self-reported emotions and situational information were assessed during 24-h recordings in 70 healthy participants. The emotional state was used to trigger word list presentations on a minicomputer. Our results show that psychophysiological arousal at the time of encoding enhanced the recall of negative words in negative emotional conditions, whereas low psychophysiological arousal facilitated recall of positive words. In positive contexts, mood congruency was more prominent when arousal was low. These results demonstrate how automated experimentation with an ambulatory technique may help to assess emotional memory in real-world contexts, thus providing new methods for diverse fields of application. Copyright © 2013 Elsevier B.V. All rights reserved.

  3. Recall of Briefly Presented Chess Positions and Its Relation to Chess Skill

    PubMed Central

    Moxley, Jerad H.

    2015-01-01

    Individual differences in memory performance in a domain of expertise have traditionally been accounted for by previously acquired chunks of knowledge and patterns. These accounts have been examined experimentally mainly in chess. The role of chunks (clusters of chess pieces recalled in rapid succession during recall of chess positions) and their relations to chess skill are, however, under debate. By introducing an independent chunk-identification technique, namely the repeated-recall technique, this study identified individual chunks for particular chess players. The study not only tested chess players with increasing chess expertise, but also tested non-chess players who should not have previously acquired any chess-related chunks in memory. For recall of game positions, significant differences between players and non-players were found in virtually all the characteristics of chunks recalled. Size of the largest chunks also correlates with chess skill within the group of rated chess players. Further research will help us understand how these memory encodings can explain large differences in chess skill. PMID:25774693

  4. Using Self-Generated Cues to Facilitate Recall: A Narrative Review

    PubMed Central

    Wheeler, Rebecca L.; Gabbert, Fiona

    2017-01-01

    We draw upon the Associative Network model of memory, as well as the principles of encoding-retrieval specificity, and cue distinctiveness, to argue that self-generated cue mnemonics offer an intuitive means of facilitating reliable recall of personally experienced events. The use of a self-generated cue mnemonic allows for the spreading activation nature of memory, whilst also presenting an opportunity to capitalize upon cue distinctiveness. Here, we present the theoretical rationale behind the use of this technique, and highlight the distinction between a self-generated cue and a self-referent cue in autobiographical memory research. We contrast this mnemonic with a similar retrieval technique, Mental Reinstatement of Context, which is recognized as the most effective mnemonic component of the Cognitive Interview. Mental Reinstatement of Context is based upon the principle of encoding-retrieval specificity, whereby the overlap between encoded information and retrieval cue predicts the likelihood of accurate recall. However, it does not incorporate the potential additional benefit of self-generated retrieval cues. PMID:29163254

  5. Individual memory change after anterior temporal lobectomy: a base rate analysis using regression-based outcome methodology.

    PubMed

    Martin, R C; Sawrie, S M; Roth, D L; Gilliam, F G; Faught, E; Morawetz, R B; Kuzniecky, R

    1998-10-01

    To characterize patterns of base rate change on measures of verbal and visual memory after anterior temporal lobectomy (ATL) using a newly developed regression-based outcome methodology that accounts for effects of practice and regression towards the mean, and to comment on the predictive utility of baseline memory measures on postoperative memory outcome. Memory change was operationalized using regression-based change norms in a group of left (n = 53) and right (n = 48) ATL patients. All patients were administered tests of episodic verbal (prose recall, list learning) and visual (figure reproduction) memory, and semantic memory before and after ATL. ATL patients displayed a wide range of memory outcome across verbal and visual memory domains. Significant performance declines were noted for 25-50% of left ATL patients on verbal semantic and episodic memory tasks, while one-third of right ATL patients displayed significant declines in immediate and delayed episodic prose recall. Significant performance improvement was noted in an additional one-third of right ATL patients on delayed prose recall. Base rate change was similar between the two ATL groups across immediate and delayed visual memory. Approximately one-fourth of all patients displayed clinically meaningful losses on the visual memory task following surgery. Robust relationships between preoperative memory measures and nonstandardized change scores were attenuated or reversed using standardized memory outcome techniques. Our results demonstrated substantial group variability in memory outcome for ATL patients. These results extend previous research by incorporating known effects of practice and regression to the mean when addressing meaningful neuropsychological change following epilepsy surgery. Our findings also suggest that future neuropsychological outcome studies should take steps towards controlling for regression-to-the-mean before drawing predictive conclusions.
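
    A regression-based change norm of the kind used here is typically built by regressing retest scores on baseline scores in a normative sample; a patient's change is then expressed as a z-score of observed minus predicted retest performance, which absorbs practice effects and regression to the mean. A minimal sketch with made-up scores (the normative sample and the patient's values are invented for illustration):

```python
import math

def fit_change_norms(pre, post):
    """Fit post = a + b*pre on a normative sample; return (a, b, se), where
    se is the standard error of estimate of the regression."""
    n = len(pre)
    mx, my = sum(pre) / n, sum(post) / n
    sxx = sum((x - mx) ** 2 for x in pre)
    sxy = sum((x - mx) * (y - my) for x, y in zip(pre, post))
    b = sxy / sxx
    a = my - b * mx
    sse = sum((y - (a + b * x)) ** 2 for x, y in zip(pre, post))
    se = math.sqrt(sse / (n - 2))
    return a, b, se

def standardized_change(pre_score, post_score, a, b, se):
    """z-score of observed minus predicted retest performance."""
    return (post_score - (a + b * pre_score)) / se

# Hypothetical normative pre/post memory scores (retest shows practice gains)
pre  = [10, 12, 14, 16, 18, 20, 22, 24]
post = [12, 13, 16, 17, 20, 21, 24, 26]
a, b, se = fit_change_norms(pre, post)
z = standardized_change(15, 13, a, b, se)  # patient declined vs. prediction
print(round(z, 2))                         # negative z: meaningful loss
```

    A raw difference score would miss that most normative subjects improve on retest; the regression norm makes a flat retest score register as the decline it actually is.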

  6. The optimal timing of stimulation to induce long-lasting positive effects on episodic memory in physiological aging.

    PubMed

    Manenti, Rosa; Sandrini, Marco; Brambilla, Michela; Cotelli, Maria

    2016-09-15

    Episodic memory displays the largest degree of age-related decline. A noninvasive brain stimulation technique that can be used to modulate memory in physiological aging is transcranial Direct Current Stimulation (tDCS). However, an aspect that has not been adequately investigated in previous studies is the optimal timing of stimulation to induce long-lasting positive effects on episodic memory function. Our previous studies showed episodic memory enhancement in older adults when anodal tDCS was applied over the left lateral prefrontal cortex during encoding or after memory consolidation with or without a contextual reminder. Here we directly compared the two studies to explore which of the tDCS protocols would induce longer-lasting positive effects on episodic memory function in older adults. In addition, we aimed to determine whether subjective memory complaints would be related to the changes in memory performance (forgetting) induced by tDCS, a relevant issue in aging research since individuals with subjective memory complaints seem to be at higher risk of later memory decline. The results showed that anodal tDCS applied after consolidation with a contextual reminder induced longer-lasting positive effects on episodic memory, conceivably through reconsolidation, than anodal tDCS during encoding. Furthermore, we reported, providing new data, a moderate negative correlation between subjective memory complaints and forgetting when anodal tDCS was applied after consolidation with a contextual reminder. This study sheds light on the best-suited timing of stimulation to induce long-lasting positive effects on memory function and might help the clinicians to select the most effective tDCS protocol to prevent memory decline. Copyright © 2016 Elsevier B.V. All rights reserved.

  7. Role of indigenous herbs in the management of Alzheimer's disease

    PubMed Central

    Nishteswar, K.; Joshi, Hemang; Karra, Rahul Dutt

    2014-01-01

    Ageing is a natural phenomenon, and the physiological and structural decline of advancing years is incurable. When such degenerative changes occur in the brain, they may lead to dementia and other memory-related conditions. The Ayurvedic classics identified the importance of higher faculties dealing with memory and introduced a separate group of drugs, namely Medhya Rasayanas. Regular intake of such drugs will help to prevent the onset of degenerative changes in the brain prematurely. Ayurveda can play a useful role in the management of such geriatric conditions. The current review has been done with a view to update documented Ayurvedic therapeutic modalities for certain geriatric conditions suggested by Ayurvedic classics in the management of diseases called Vātavyādhi (nervous system disorders), which also include conditions related to memory functions. Recent studies have started validating the claims recorded in Ayurvedic texts. The pathogenesis and remedies for Vātavyādhi documented in Ayurvedic classics have been reviewed with special emphasis on disorders related to dementia. A review of recent research on the herbs mentioned in the management of vāta disorders, including dementia, has been done to understand their role in the management of Alzheimer's disease (AD). There are many herbs of ethno-medicinal source studied experimentally for their potential in treatment of AD. A judicious combination of modern research methodology and Ayurvedic principles could go a long way in the management and care of AD, which is going to be a heavy burden on society in the future. PMID:25737604

  8. Simplifying and speeding the management of intra-node cache coherence

    DOEpatents

    Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton on Hudson, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Phillip [Cortlandt Manor, NY; Hoenicke, Dirk [Ossining, NY; Ohmacht, Martin [Yorktown Heights, NY

    2012-04-17

    A method and apparatus for managing coherence between two processors of a two processor node of a multi-processor computer system. Generally the present invention relates to a software algorithm that simplifies and significantly speeds the management of cache coherence in a message passing parallel computer, and to hardware apparatus that assists this cache coherence algorithm. The software algorithm uses the opening and closing of put/get windows to coordinate the activities required to achieve cache coherence. The hardware apparatus may be an extension to the hardware address decode, that creates, in the physical memory address space of the node, an area of virtual memory that (a) does not actually exist, and (b) is therefore able to respond instantly to read and write requests from the processing elements.
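
    The put/get window idea can be caricatured in a few lines: opening a window writes back locally cached data so remote gets see it, and closing the window invalidates the cache so subsequent local reads see remote puts. This toy model uses Python dictionaries in place of hardware caches; it illustrates only the coordination pattern, not the patented hardware assist.

```python
class CachedMemory:
    """Toy model of a processor with a write-back cache over shared memory."""
    def __init__(self, shared):
        self.shared = shared
        self.cache = {}

    def write(self, addr, val):
        self.cache[addr] = val          # dirty line, not yet in shared memory

    def read(self, addr):
        return self.cache.get(addr, self.shared.get(addr))

    def flush(self):
        self.shared.update(self.cache)  # write back dirty lines

    def invalidate(self):
        self.cache.clear()              # drop possibly stale lines

class PutGetWindow:
    """Opening the window flushes local writes so remote gets see them;
    closing it invalidates the cache so local reads see remote puts."""
    def __init__(self, cpu):
        self.cpu = cpu
    def __enter__(self):
        self.cpu.flush()
        return self
    def __exit__(self, *exc):
        self.cpu.invalidate()
        return False

shared = {}
cpu0, cpu1 = CachedMemory(shared), CachedMemory(shared)
cpu0.write(0x10, 42)                    # sits in cpu0's cache only
with PutGetWindow(cpu0):                # flush: shared memory now holds 42
    with PutGetWindow(cpu1):
        shared[0x20] = cpu1.read(0x10)  # remote "get" sees cpu0's write
print(shared[0x10], shared[0x20])       # → 42 42
```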

  9. The NEEDS Data Base Management and Archival Mass Memory System

    NASA Technical Reports Server (NTRS)

    Bailey, G. A.; Bryant, S. B.; Thomas, D. T.; Wagnon, F. W.

    1980-01-01

    A Data Base Management System and an Archival Mass Memory System are being developed that will have a 10^12-bit on-line and a 10^13-bit off-line storage capacity. The integrated system will accept packetized data from the data staging area at 50 Mbps, create a comprehensive directory, provide for file management, record the data, perform error detection and correction, accept user requests, retrieve the requested data files, and provide the data to multiple users at a combined rate of 50 Mbps. Stored and replicated data files will have a bit error rate of less than 10^-9 even after ten years of storage. The integrated system will be demonstrated to prove the technology late in 1981.

  10. Managing coherence via put/get windows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blumrich, Matthias A; Chen, Dong; Coteus, Paul W

    A method and apparatus for managing coherence between two processors of a two processor node of a multi-processor computer system. Generally the present invention relates to a software algorithm that simplifies and significantly speeds the management of cache coherence in a message passing parallel computer, and to hardware apparatus that assists this cache coherence algorithm. The software algorithm uses the opening and closing of put/get windows to coordinate the activities required to achieve cache coherence. The hardware apparatus may be an extension to the hardware address decode, that creates, in the physical memory address space of the node, an area of virtual memory that (a) does not actually exist, and (b) is therefore able to respond instantly to read and write requests from the processing elements.

  11. Dopamine D1 signaling organizes network dynamics underlying working memory.

    PubMed

    Roffman, Joshua L; Tanner, Alexandra S; Eryilmaz, Hamdi; Rodriguez-Thompson, Anais; Silverstein, Noah J; Ho, New Fei; Nitenson, Adam Z; Chonde, Daniel B; Greve, Douglas N; Abi-Dargham, Anissa; Buckner, Randy L; Manoach, Dara S; Rosen, Bruce R; Hooker, Jacob M; Catana, Ciprian

    2016-06-01

    Local prefrontal dopamine signaling supports working memory by tuning pyramidal neurons to task-relevant stimuli. Enabled by simultaneous positron emission tomography-magnetic resonance imaging (PET-MRI), we determined whether neuromodulatory effects of dopamine scale to the level of cortical networks and coordinate their interplay during working memory. Among network territories, mean cortical D1 receptor densities differed substantially but were strongly interrelated, suggesting cross-network regulation. Indeed, mean cortical D1 density predicted working memory-emergent decoupling of the frontoparietal and default networks, which respectively manage task-related and internal stimuli. In contrast, striatal D1 predicted opposing effects within these two networks but no between-network effects. These findings specifically link cortical dopamine signaling to network crosstalk that redirects cognitive resources to working memory, echoing neuromodulatory effects of D1 signaling on the level of cortical microcircuits.

  12. Effects of Transcranial Direct Current Stimulation (tDCS) on Human Memory.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matzen, Laura E.; Trumbo, Michael Christopher Stefan

    Training a person in a new knowledge base or skill set is extremely time consuming and costly, particularly in highly specialized domains such as the military and the intelligence community. Recent research in cognitive neuroscience has suggested that a technique called transcranial direct current stimulation (tDCS) has the potential to revolutionize training by enabling learners to acquire new skills faster, more efficiently, and more robustly (Bullard et al., 2011). In this project, we tested the effects of tDCS on two types of memory performance that are critical for learning new skills: associative memory and working memory. Associative memory is memory for the relationship between two items or events. It forms the foundation of all episodic memories, so enhancing associative memory could provide substantial benefits to the speed and robustness of learning new information. We tested the effects of tDCS on associative memory, using a real-world associative memory task: remembering the links between faces and names. Working memory refers to the amount of information that can be held in mind and processed at one time, and it forms the basis for all higher-level cognitive processing. We investigated the degree of transfer between various working memory tasks (the N-back task as a measure of verbal working memory, the rotation-span task as a measure of visuospatial working memory, and Raven's progressive matrices as a measure of fluid intelligence) in order to determine if tDCS-induced facilitation of performance is task-specific or general.

  13. A critical role of the human hippocampus in an electrophysiological measure of implicit memory

    PubMed Central

    Addante, Richard James

    2015-01-01

    The hippocampus has traditionally been thought to be critical for conscious explicit memory but not necessary for unconscious implicit memory processing. In a recent study of a group of mild amnesia patients with evidence of MTL damage limited to the hippocampus, subjects were tested on a direct test of item recognition confidence while electroencephalogram (EEG) was acquired, and revealed intact measures of explicit memory from 400–600 ms (mid-frontal old-new effect, FN400). The current investigation re-analyzed these data to study event-related potentials (ERPs) of implicit memory, using a recently developed procedure that eliminated declarative memory differences. Prior ERP findings from this technique were first replicated in two independent matched control groups, which exhibited reliable implicit memory effects in posterior scalp regions from 400–600 ms that were topographically dissociated from the explicit memory effects of familiarity. However, patients were found to be dramatically impaired in implicit memory effects relative to control subjects, as quantified by a reliable condition × group interaction. Several control analyses were conducted to consider alternative factors that could account for the results, including outliers, sample size, age, or contamination by explicit memory, and each of these factors was systematically ruled out. Results suggest that the hippocampus plays a fundamental role in aspects of memory processing that are beyond conscious awareness. The current findings therefore indicate that implicit and explicit memory may rely upon the same neural structures but function in different physiological ways. PMID:25562828

  14. Investigation of a high speed data handling system for use with multispectral aircraft scanners

    NASA Technical Reports Server (NTRS)

    Kelly, W. L.; Meredith, B. D.

    1978-01-01

    A buffer memory data handling technique for use with multispectral aircraft scanners is presented which allows digital data generated at high data rates to be recorded on magnetic tape. A digital memory is used to temporarily store the data for subsequent recording at slower rates during the passive time of the scan line, thereby increasing the maximum data rate recording capability over real-time recording. Three possible implementations are described and the maximum data rate capability is defined in terms of the speed capability of the key hardware components. The maximum data rates can be used to define the maximum ground resolution achievable by a multispectral aircraft scanner using conventional data handling techniques.
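
    The buffer-memory technique amounts to double buffering in time: samples burst into digital memory at the high active-scan rate and are drained to tape at the slower recording rate during the passive (fly-back) part of the scan line. A toy sketch of that producer/consumer pattern; the sample counts and the `ScanLineBuffer` API are illustrative, not the paper's design.

```python
from collections import deque

class ScanLineBuffer:
    """Samples arrive at a high rate during the active portion of a scan
    line and are drained to the recorder during the passive time."""
    def __init__(self):
        self.buffer = deque()   # temporary digital memory
        self.tape = []          # stand-in for the magnetic tape recorder

    def active_phase(self, samples):
        self.buffer.extend(samples)   # fast burst into digital memory

    def passive_phase(self):
        while self.buffer:            # slower, steady write to tape
            self.tape.append(self.buffer.popleft())

scanner = ScanLineBuffer()
for line in range(3):                 # three scan lines of 5 samples each
    scanner.active_phase(range(line * 5, line * 5 + 5))
    scanner.passive_phase()
print(len(scanner.tape))   # → 15
print(scanner.tape[:5])    # → [0, 1, 2, 3, 4]
```

    The achievable burst rate is bounded by the memory write speed rather than the tape speed, which is the point of the technique.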

  15. Error recovery in shared memory multiprocessors using private caches

    NASA Technical Reports Server (NTRS)

    Wu, Kun-Lung; Fuchs, W. Kent; Patel, Janak H.

    1990-01-01

    The problem of recovering from processor transient faults in shared memory multiprocessor systems is examined. A user-transparent checkpointing and recovery scheme using private caches is presented. Processes can recover from errors due to faulty processors by restarting from the checkpointed computation state. Implementation techniques using checkpoint identifiers and recovery stacks are examined as a means of reducing performance degradation in processor utilization during normal execution. This cache-based checkpointing technique prevents rollback propagation, provides rapid recovery, and can be integrated into standard cache coherence protocols. An analytical model is used to estimate the relative performance of the scheme during normal execution. Extensions to take error latency into account are presented.
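
    The essence of the cache-based scheme is that uncommitted writes live only in a private cache: a checkpoint writes them back to the recoverable shared state, while recovery from a transient fault simply discards them. The sketch below models this with dictionaries; it illustrates the rollback-to-checkpoint idea only, not the paper's actual coherence-protocol integration, and the class and method names are invented.

```python
class CheckpointedMemory:
    """Uncommitted writes stay in a private cache; a checkpoint commits
    them to shared memory, and recovery from a transient fault discards
    them, restarting from the last checkpointed state."""
    def __init__(self):
        self.shared = {}   # checkpointed (recoverable) state
        self.cache = {}    # uncommitted writes since the last checkpoint

    def write(self, addr, val):
        self.cache[addr] = val

    def read(self, addr):
        return self.cache.get(addr, self.shared.get(addr))

    def checkpoint(self):
        self.shared.update(self.cache)   # commit cached writes
        self.cache.clear()

    def rollback(self):
        self.cache.clear()               # error detected: drop dirty state

mem = CheckpointedMemory()
mem.write("x", 1)
mem.checkpoint()      # x = 1 is now recoverable state
mem.write("x", 99)    # speculative work after the checkpoint
mem.rollback()        # transient fault: restart from the checkpoint
print(mem.read("x"))  # → 1
```

    Because the dirty state never reaches shared memory before a checkpoint, an error cannot propagate a rollback to other processors.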

  16. Knowledge Management: A Skeptic's Guide

    NASA Technical Reports Server (NTRS)

    Linde, Charlotte

    2006-01-01

    A viewgraph presentation discussing knowledge management is shown. The topics include: 1) What is Knowledge Management? 2) Why Manage Knowledge? The Presenting Problems; 3) What Gets Called Knowledge Management? 4) Attempts to Rethink Assumptions about Knowledge; 5) What is Knowledge? 6) Knowledge Management and Institutional Memory; 7) Knowledge Management and Culture; 8) To solve a social problem, it's easier to call for cultural rather than organizational change; 9) Will the Knowledge Management Effort Succeed? and 10) Backup: Metrics for Valuing Intellectual Capital, i.e., Knowledge.

  17. Design, Manufacturing, and In Vitro Testing of a Patient-Specific Shape-Memory Expander for Nose Reconstruction With Forehead Flap Technique.

    PubMed

    Borghi, Alessandro; Rodgers, Will; Schievano, Silvia; Ponniah, Allan; O'Hara, Justine; Jeelani, Owase; Dunaway, David

    2016-01-01

    Forehead skin is widely acknowledged as a good donor site for total nasal reconstruction, thanks to its matching color, texture, and abundant vascularity. The forehead flap technique uses an axial pattern flap of forehead skin to replace missing nasal tissue. To increase the amount of available tissue and reduce the size of the tissue defect after flap mobilization, tissue expanders may be used. Although this is a relatively established technique, limitations include reduced moldability of the forehead skin (which is thicker than the nasal skin) and the need for multiple sessions of expansion to achieve a sufficient yield to close the forehead. Shape-memory metals, such as nitinol, can be programmed to "remember" complex shapes. In this work, the methodology for producing a prototype nitinol tissue expander able to mold the skin into a predetermined patient-specific shape is described. A realistic nose mold was manufactured using metal rapid prototyping; nitinol sheet and mesh were molded into nose-shaped constructs with hyperelastic as well as shape-memory capability. Computed tomography scanning was performed to assess the ability of the structure to regain its shape, within 2% of initial dimensions, after phase transformation upon cooling. The prototypes were implanted in a pig forehead to test their ability to impose a nose shape on the forehead skin. The shape-memory properties of nitinol offer the possibility of producing bespoke tissue expanders able to deliver complex, precisely designed skin envelopes. The hyperelastic properties of nitinol allow constant preprogrammed expansion forces to be generated throughout the expansion process.

  18. Vertical Object Layout and Compression for Fixed Heaps

    NASA Astrophysics Data System (ADS)

    Titzer, Ben L.; Palsberg, Jens

    Research into embedded sensor networks has placed increased focus on the problem of developing reliable and flexible software for microcontroller-class devices. Languages such as nesC [10] and Virgil [20] have brought higher-level programming idioms to this lowest layer of software, thereby adding expressiveness. Both languages are marked by the absence of dynamic memory allocation, which removes the need for a runtime system to manage memory. While nesC offers code modules with statically allocated fields, arrays and structs, Virgil allows the application to allocate and initialize arbitrary objects during compilation, producing a fixed object heap for runtime. This paper explores techniques for compressing fixed object heaps with the goal of reducing the RAM footprint of a program. We explore table-based compression and introduce a novel form of object layout called vertical object layout. We provide experimental results that measure the impact on RAM size, code size, and execution time for a set of Virgil programs. Our results show that compressed vertical layout has better execution time and code size than table-based compression while achieving more than 20% heap reduction on 6 of 12 benchmark programs and 2-17% heap reduction on the remaining 6. We also present a formalization of vertical object layout and prove tight relationships between three styles of object layout.
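The core of vertical object layout is the transposition from "one record per object" to "one array per field", so an object reference becomes an index into every field array. A minimal sketch of that transformation (field names here are illustrative, not from the Virgil benchmarks):

```python
# Sketch of vertical object layout: instead of storing each object's fields
# contiguously (the usual horizontal layout), every field of the class gets
# its own array indexed by object number. Field names are hypothetical.

# horizontal layout: each object is a contiguous record of its fields
horizontal = [{"x": 1, "y": 2}, {"x": 3, "y": 4}, {"x": 1, "y": 6}]

# vertical layout: one array per field; an object reference is just an index
vertical = {
    "x": [obj["x"] for obj in horizontal],
    "y": [obj["y"] for obj in horizontal],
}

def get_field(heap, ref, field):
    # a field access becomes an indexed load into that field's array
    return heap[field][ref]

# Because the heap is fixed at compile time, each per-field array can be
# compressed independently, e.g. stored in fewer bits when its value range
# is narrow, or deduplicated through a lookup table.
```

Laying out each field as its own column is what makes per-field compression practical: the compiler sees the full value set of every field before the program runs.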

  19. A principled approach to the measurement of situation awareness in commercial aviation

    NASA Technical Reports Server (NTRS)

    Tenney, Yvette J.; Adams, Marilyn Jager; Pew, Richard W.; Huggins, A. W. F.; Rogers, William H.

    1992-01-01

    The issue of how to support situation awareness among crews of modern commercial aircraft is becoming especially important with the introduction of automation in the form of sophisticated flight management computers and expert systems designed to assist the crew. In this paper, cognitive theories are discussed that have relevance for the definition and measurement of situation awareness. These theories suggest that comprehension of the flow of events is an active process that is limited by the modularity of attention and memory constraints, but can be enhanced by expert knowledge and strategies. Three implications of this perspective for assessing and improving situation awareness are considered: (1) Scenario variations are proposed that tax awareness by placing demands on attention; (2) Experimental tasks and probes are described for assessing the cognitive processes that underlie situation awareness; and (3) The use of computer-based human performance models to augment the measures of situation awareness derived from performance data is explored. Finally, two potential example applications of the proposed assessment techniques are described, one concerning spatial awareness using wide field of view displays and the other emphasizing fault management in aircraft systems.

  20. A Brain System for Auditory Working Memory.

    PubMed

    Kumar, Sukhbinder; Joseph, Sabine; Gander, Phillip E; Barascud, Nicolas; Halpern, Andrea R; Griffiths, Timothy D

    2016-04-20

    The brain basis for auditory working memory, the process of actively maintaining sounds in memory over short periods of time, is controversial. Using functional magnetic resonance imaging in human participants, we demonstrate that the maintenance of single tones in memory is associated with activation in auditory cortex. In addition, sustained activation was observed in hippocampus and inferior frontal gyrus. Multivoxel pattern analysis showed that patterns of activity in auditory cortex and left inferior frontal gyrus distinguished the tone that was maintained in memory. Functional connectivity during maintenance was demonstrated between auditory cortex and both the hippocampus and inferior frontal cortex. The data support a system for auditory working memory based on the maintenance of sound-specific representations in auditory cortex by projections from higher-order areas, including the hippocampus and frontal cortex. In this work, we demonstrate a system for maintaining sound in working memory based on activity in auditory cortex, hippocampus, and frontal cortex, and functional connectivity among them. Specifically, our work makes three advances from the previous work. First, we robustly demonstrate hippocampal involvement in all phases of auditory working memory (encoding, maintenance, and retrieval): the role of hippocampus in working memory is controversial. Second, using a pattern classification technique, we show that activity in the auditory cortex and inferior frontal gyrus is specific to the maintained tones in working memory. Third, we show long-range connectivity of auditory cortex to hippocampus and frontal cortex, which may be responsible for keeping such representations active during working memory maintenance. Copyright © 2016 Kumar et al.
