Sample records for cache memory means

  1. Optoelectronic-cache memory system architecture.

    PubMed

    Chiarulli, D M; Levitan, S P

    1996-05-10

    We present an investigation of the architecture of an optoelectronic cache that can integrate terabit optical memories with the electronic caches associated with high-performance uniprocessors and multiprocessors. The use of optoelectronic-cache memories enables these terabit technologies to provide transparently low-latency secondary memory with frame sizes comparable with disk pages but with latencies that approach those of electronic secondary-cache memories. This enables the implementation of terabit memories with effective access times comparable with the cycle times of current microprocessors. The cache design is based on the use of a smart-pixel array and combines parallel free-space optical input-output to-and-from optical memory with conventional electronic communication to the processor caches. This cache and the optical memory system to which it will interface provide a large random-access memory space that has a lower overall latency than that of magnetic disks and disk arrays. In addition, as a consequence of the high-bandwidth parallel input-output capabilities of optical memories, fault service times for the optoelectronic cache are substantially less than those currently achievable with any rotational media.

  2. Cost aware cache replacement policy in shared last-level cache for hybrid memory based fog computing

    NASA Astrophysics Data System (ADS)

    Jia, Gangyong; Han, Guangjie; Wang, Hao; Wang, Feng

    2018-04-01

    Fog computing requires a large main memory capacity to decrease latency and increase the Quality of Service (QoS). However, dynamic random access memory (DRAM), the commonly used random access memory, cannot be included in a fog computing system because of its high power consumption. In recent years, non-volatile memories (NVM) such as Phase-Change Memory (PCM) and Spin-Transfer Torque RAM (STT-RAM), with their low power consumption, have emerged to replace DRAM. Moreover, the currently proposed hybrid main memory, consisting of both DRAM and NVM, has shown promising advantages in terms of scalability and power consumption. However, the drawbacks of NVM, such as long read/write latencies, give rise to asymmetric cache miss costs in the hybrid main memory. Current last-level cache (LLC) policies assume a uniform miss cost, and so perform poorly in the LLC and add to the cost of using NVM. In order to minimize the cache miss cost in the hybrid main memory, we propose a cost aware cache replacement policy (CACRP) that reduces the number of cache misses from NVM and improves the cache performance for a hybrid memory system. Experimental results show that our CACRP delivers better LLC performance, improving performance by up to 43.6% (15.5% on average) compared to LRU.
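
    As an illustration of the kind of cost-aware eviction the abstract argues for (this is not the paper's CACRP algorithm; the miss-cost constants and the victim score below are assumptions), a policy can weigh a line's staleness against the penalty of refetching it from DRAM versus NVM:

    ```python
    # Illustrative cost-aware victim selection for a set-associative LLC backed
    # by hybrid DRAM/NVM main memory. Hypothetical sketch, not the paper's CACRP.
    DRAM_MISS_COST = 1.0   # assumed relative cost of refilling a line from DRAM
    NVM_MISS_COST = 4.0    # assumed relative cost of refilling a line from NVM

    class Line:
        def __init__(self, tag, backing):
            self.tag = tag
            self.backing = backing      # 'dram' or 'nvm'
            self.last_use = 0           # recency timestamp

    def miss_cost(line):
        return NVM_MISS_COST if line.backing == 'nvm' else DRAM_MISS_COST

    def choose_victim(cache_set, now):
        # Plain LRU evicts the stalest line regardless of cost; a cost-aware
        # policy keeps a stale NVM-backed line longer because refetching it
        # later is more expensive.
        def score(line):
            return (now - line.last_use) / miss_cost(line)
        return max(cache_set, key=score)   # highest score = cheapest to lose
    ```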

  3. Cache write generate for parallel image processing on shared memory architectures.

    PubMed

    Wittenbrink, C M; Somani, A K; Chen, C H

    1996-01-01

    We investigate cache write generate, our cache mode invention. We demonstrate that for parallel image processing applications, the new mode improves main memory bandwidth, CPU efficiency, cache hits, and cache latency. We use register level simulations validated by the UW-Proteus system. Many memory, cache, and processor configurations are evaluated.

  4. Error recovery in shared memory multiprocessors using private caches

    NASA Technical Reports Server (NTRS)

    Wu, Kun-Lung; Fuchs, W. Kent; Patel, Janak H.

    1990-01-01

    The problem of recovering from processor transient faults in shared memory multiprocessor systems is examined. A user-transparent checkpointing and recovery scheme using private caches is presented. Processes can recover from errors due to faulty processors by restarting from the checkpointed computation state. Implementation techniques using checkpoint identifiers and recovery stacks are examined as a means of reducing performance degradation in processor utilization during normal execution. This cache-based checkpointing technique prevents rollback propagation, provides rapid recovery, and can be integrated into standard cache coherence protocols. An analytical model is used to estimate the relative performance of the scheme during normal execution. Extensions to take error latency into account are presented.

  5. Episodic-like memory during cache recovery by scrub jays.

    PubMed

    Clayton, N S; Dickinson, A

    1998-09-17

    The recollection of past experiences allows us to recall what a particular event was, and where and when it occurred, a form of memory that is thought to be unique to humans. It is known, however, that food-storing birds remember the spatial location and contents of their caches. Furthermore, food-storing animals adapt their caching and recovery strategies to the perishability of food stores, which suggests that they are sensitive to temporal factors. Here we show that scrub jays (Aphelocoma coerulescens) remember 'when' food items are stored by allowing them to recover perishable 'wax worms' (wax-moth larvae) and non-perishable peanuts which they had previously cached in visuospatially distinct sites. Jays searched preferentially for fresh wax worms, their favoured food, when allowed to recover them shortly after caching. However, they rapidly learned to avoid searching for worms after a longer interval during which the worms had decayed. The recovery preference of jays demonstrates memory of where and when particular food items were cached, thereby fulfilling the behavioural criteria for episodic-like memory in non-human animals.

  6. Performance of defect-tolerant set-associative cache memories

    NASA Technical Reports Server (NTRS)

    Frenzel, J. F.

    1991-01-01

    The increased use of on-chip cache memories has led researchers to investigate their performance in the presence of manufacturing defects. Several techniques for yield improvement are discussed and results are presented which indicate that set-associativity may be used to provide defect tolerance as well as improve the cache performance. Tradeoffs between several cache organizations and replacement strategies are investigated and it is shown that token-based replacement may be a suitable alternative to the widely-used LRU strategy.

  7. Cache-based error recovery for shared memory multiprocessor systems

    NASA Technical Reports Server (NTRS)

    Wu, Kun-Lung; Fuchs, W. Kent; Patel, Janak H.

    1989-01-01

    A multiprocessor cache-based checkpointing and recovery scheme for recovering from transient processor errors in a shared-memory multiprocessor with private caches is presented. New implementation techniques that use checkpoint identifiers and recovery stacks to reduce performance degradation in processor utilization during normal execution are examined. This cache-based checkpointing technique prevents rollback propagation, provides for rapid recovery, and can be integrated into standard cache coherence protocols. An analytical model is used to estimate the relative performance of the scheme during normal execution. Extensions that take error latency into account are presented.

  8. SEX, ESTRADIOL, AND SPATIAL MEMORY IN A FOOD-CACHING CORVID

    PubMed Central

    Rensel, Michelle A.; Ellis, Jesse M.S.; Harvey, Brigit; Schlinger, Barney A.

    2015-01-01

    Estrogens significantly impact spatial memory function in mammalian species. Songbirds express the estrogen synthetic enzyme aromatase at relatively high levels in the hippocampus and there is evidence from zebra finches that estrogens facilitate performance on spatial learning and/or memory tasks. It is unknown, however, whether estrogens influence hippocampal function in songbirds that naturally exhibit memory-intensive behaviors, such as cache recovery observed in many corvid species. To address this question, we examined the impact of estradiol on spatial memory in non-breeding Western scrub-jays, a species that routinely participates in food caching and retrieval in nature and in captivity. We also asked if there were sex differences in performance or responses to estradiol. Utilizing a combination of an aromatase inhibitor, fadrozole, with estradiol implants, we found that while overall cache recovery rates were unaffected by estradiol, several other indices of spatial memory, including searching efficiency and efficiency to retrieve the first item, were impaired in the presence of estradiol. In addition, males and females differed in some performance measures, although these differences appeared to be a consequence of the nature of the task as neither sex consistently out-performed the other. Overall, our data suggest that a sustained estradiol elevation in a food-caching bird impairs some, but not all, aspects of spatial memory on an innate behavioral task, at times in a sex-specific manner. PMID:26232613

  9. Do Clark's nutcrackers demonstrate what-where-when memory on a cache-recovery task?

    PubMed

    Gould, Kristy L; Ort, Amy J; Kamil, Alan C

    2012-01-01

    What-where-when (WWW) memory during cache recovery was investigated in six Clark's nutcrackers. During caching, both red- and blue-colored pine seeds were cached by the birds in holes filled with sand. Either a short (3 day) retention interval (RI) or a long (9 day) RI was followed by a recovery session during which caches were replaced with either a single seed or wooden bead depending upon the color of the cache and length of the retention interval. Knowledge of what was in the cache (seed or bead), where it was located, and when the cache had been made (3 or 9 days ago) were the three WWW memory components under investigation. Birds recovered items (bead or seed) at above chance levels, demonstrating accurate spatial memory. They also recovered seeds more than beads after the long RI, but not after the short RI, when they recovered seeds and beads equally often. The differential recovery after the long RI demonstrates that nutcrackers may have the capacity for WWW memory during this task, but it is not clear why it was influenced by RI duration.

  10. Sex, estradiol, and spatial memory in a food-caching corvid.

    PubMed

    Rensel, Michelle A; Ellis, Jesse M S; Harvey, Brigit; Schlinger, Barney A

    2015-09-01

    Estrogens significantly impact spatial memory function in mammalian species. Songbirds express the estrogen synthetic enzyme aromatase at relatively high levels in the hippocampus and there is evidence from zebra finches that estrogens facilitate performance on spatial learning and/or memory tasks. It is unknown, however, whether estrogens influence hippocampal function in songbirds that naturally exhibit memory-intensive behaviors, such as cache recovery observed in many corvid species. To address this question, we examined the impact of estradiol on spatial memory in non-breeding Western scrub-jays, a species that routinely participates in food caching and retrieval in nature and in captivity. We also asked if there were sex differences in performance or responses to estradiol. Utilizing a combination of an aromatase inhibitor, fadrozole, with estradiol implants, we found that while overall cache recovery rates were unaffected by estradiol, several other indices of spatial memory, including searching efficiency and efficiency to retrieve the first item, were impaired in the presence of estradiol. In addition, males and females differed in some performance measures, although these differences appeared to be a consequence of the nature of the task as neither sex consistently out-performed the other. Overall, our data suggest that a sustained estradiol elevation in a food-caching bird impairs some, but not all, aspects of spatial memory on an innate behavioral task, at times in a sex-specific manner.

  11. Long-term moderate elevation of corticosterone facilitates avian food-caching behaviour and enhances spatial memory.

    PubMed

    Pravosudov, Vladimir V

    2003-12-22

    It is widely assumed that chronic stress and corresponding chronic elevations of glucocorticoid levels have deleterious effects on animals' brain functions such as learning and memory. Some animals, however, appear to maintain moderately elevated levels of glucocorticoids over long periods of time under natural energetically demanding conditions, and it is not clear whether such chronic but moderate elevations may be adaptive. I implanted wild-caught food-caching mountain chickadees (Poecile gambeli), which rely at least in part on spatial memory to find their caches, with 90-day continuous time-release corticosterone pellets designed to approximately double the baseline corticosterone levels. Corticosterone-implanted birds cached and consumed significantly more food and showed more efficient cache recovery and superior spatial memory performance compared with placebo-implanted birds. Thus, contrary to prevailing assumptions, long-term moderate elevations of corticosterone appear to enhance spatial memory in food-caching mountain chickadees. These results suggest that moderate chronic elevation of corticosterone may serve as an adaptation to unpredictable environments by facilitating feeding and food-caching behaviour and by improving cache-retrieval efficiency in food-caching birds.

  12. Efficacy of Code Optimization on Cache-based Processors

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob F.; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    The current common wisdom in the U.S. is that the powerful, cost-effective supercomputers of tomorrow will be based on commodity (RISC) micro-processors with cache memories. Already, most distributed systems in the world use such hardware as building blocks. This shift away from vector supercomputers and towards cache-based systems has brought about a change in programming paradigm, even when ignoring issues of parallelism. Vector machines require inner-loop independence and regular, non-pathological memory strides (usually this means non-power-of-two strides) to allow efficient vectorization of array operations. Cache-based systems require spatial and temporal locality of data, so that data once read from main memory and stored in high-speed cache memory is used optimally before being written back to main memory. This means that the most cache-friendly array operations are those that feature zero or unit stride, so that each unit of data read from main memory (a cache line) contains information for the next iteration in the loop. Moreover, loops ought to be 'fat', meaning that as many operations as possible are performed on cached data, provided instruction caches do not overflow and enough registers are available. If unit stride is not possible, for example because of some data dependency, then care must be taken to avoid pathological strides, just as on vector computers. For cache-based systems the issues are more complex, due to the effects of associativity and of non-unit block (cache line) size. But there is more to the story. Most modern micro-processors are superscalar, which means that they can issue several (arithmetic) instructions per clock cycle, provided that there are enough independent instructions in the loop body. This is another argument for providing fat loop bodies. With these restrictions, it appears fairly straightforward to produce code that will run efficiently on any cache-based system. It can be argued that although some of the important
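
    The stride advice above is easy to see in a tiny example. The sketch below (array size and loop bodies are made up for illustration) sums a row-major NumPy array once along unit-stride rows and once down the columns; the first loop uses every cache line it fetches, while the second touches a different line on almost every access:

    ```python
    # Unit-stride vs. strided traversal of a row-major array. NumPy arrays are
    # C-ordered (row-major) by default, so iterating along the last axis walks
    # consecutive memory locations and therefore consecutive cache lines.
    # Pure-Python loops are used only to make the access order explicit.
    import numpy as np

    data = np.zeros((1024, 1024))

    def row_major_sum(a):
        # unit stride: each cache line fetched from memory is fully consumed
        total = 0.0
        for i in range(a.shape[0]):
            for j in range(a.shape[1]):
                total += a[i, j]
        return total

    def column_major_sum(a):
        # stride of one full row between consecutive accesses: most of every
        # fetched cache line is wasted before it is evicted
        total = 0.0
        for j in range(a.shape[1]):
            for i in range(a.shape[0]):
                total += a[i, j]
        return total
    ```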

  13. Cache directory look-up re-use as conflict check mechanism for speculative memory requests

    DOEpatents

    Ohmacht, Martin

    2013-09-10

    In a cache memory, energy and other efficiencies can be realized by saving a result of a cache directory lookup for sequential accesses to a same memory address. Where the cache is a point of coherence for speculative execution in a multiprocessor system, with directory lookups serving as the point of conflict detection, such saving becomes particularly advantageous.
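
    As an informal software model of the mechanism described (not the patented hardware design; the structure and names below are illustrative), the saved lookup result might be reused like this:

    ```python
    # Reuse of a cache-directory lookup result for sequential accesses to the
    # same memory address. Hypothetical sketch of the energy-saving idea.
    class Directory:
        def __init__(self):
            self.entries = {}              # address -> directory state
            self._last = None              # (address, state) of the last lookup

        def lookup(self, address):
            if self._last and self._last[0] == address:
                return self._last[1]       # sequential access: reuse saved result
            state = self.entries.get(address)   # otherwise perform the full lookup
            self._last = (address, state)
            return state
    ```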

  14. The Optimization of In-Memory Space Partitioning Trees for Cache Utilization

    NASA Astrophysics Data System (ADS)

    Yeo, Myung Ho; Min, Young Soo; Bok, Kyoung Soo; Yoo, Jae Soo

    In this paper, a novel cache-conscious indexing technique based on space partitioning trees is proposed. Recently, many researchers have investigated efficient cache-conscious indexing techniques that improve the retrieval performance of in-memory database management systems. However, most studies considered data partitioning and targeted fast information retrieval. Existing data partitioning-based index structures significantly degrade performance due to redundant accesses of overlapped spaces. In particular, R-tree-based index structures suffer from the propagation of MBR (Minimum Bounding Rectangle) information when data are updated frequently. In this paper, we propose an in-memory space partitioning index structure for optimal cache utilization. The proposed index structure is compared with existing index structures in terms of update performance, insertion performance and cache-utilization rate in a variety of environments. The results demonstrate that the proposed index structure offers better performance than existing index structures.
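
    A minimal sketch of the general cache-conscious principle at work here (not the paper's proposed index structure; the 64-byte line size and the node layout are assumptions): pack several split keys into a node sized to one cache line, so a single memory fetch resolves several comparisons during tree descent.

    ```python
    # Illustrative cache-line-sized tree node for an in-memory index.
    # Hypothetical sketch; sizes and layout are assumptions, not the paper's design.
    import bisect

    CACHE_LINE_BYTES = 64
    KEY_BYTES = 8
    KEYS_PER_NODE = CACHE_LINE_BYTES // KEY_BYTES   # 8 split keys per node

    class FlatNode:
        __slots__ = ("keys", "children")
        def __init__(self, keys, children):
            self.keys = keys            # at most KEYS_PER_NODE sorted split keys
            self.children = children    # len(keys) + 1 child nodes or leaf buckets

    def descend(node, point_key):
        # One node visit costs roughly one cache line instead of one line per key.
        return node.children[bisect.bisect_right(node.keys, point_key)]
    ```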

  15. Optical RAM-enabled cache memory and optical routing for chip multiprocessors: technologies and architectures

    NASA Astrophysics Data System (ADS)

    Pleros, Nikos; Maniotis, Pavlos; Alexoudi, Theonitsa; Fitsios, Dimitris; Vagionas, Christos; Papaioannou, Sotiris; Vyrsokinos, K.; Kanellos, George T.

    2014-03-01

    The processor-memory performance gap, commonly referred to as the "Memory Wall" problem, owes to the speed mismatch between processor and electronic RAM clock frequencies, forcing current Chip Multiprocessor (CMP) configurations to consume more than 50% of the chip real estate for caching purposes. In this article, we present our recent work spanning from Si-based integrated optical RAM cell architectures up to complete optical cache memory architectures for Chip Multiprocessor configurations. Moreover, we discuss e/o router subsystems with up to Tb/s routing capacity for cache interconnection purposes within CMP configurations, currently pursued within the FP7 PhoxTrot project.

  16. The relationship between dominance, corticosterone, memory, and food caching in mountain chickadees (Poecile gambeli).

    PubMed

    Pravosudov, Vladimir V; Mendoza, Sally P; Clayton, Nicola S

    2003-08-01

    It has been hypothesized that in avian social groups subordinate individuals should maintain more energy reserves than dominants, as an insurance against increased perceived risk of starvation. Subordinates might also have elevated baseline corticosterone levels because corticosterone is known to facilitate fattening in birds. Recent experiments showed that moderately elevated corticosterone levels resulting from unpredictable food supply are correlated with enhanced cache retrieval efficiency and more accurate performance on a spatial memory task. Given the correlation between corticosterone and memory, a further prediction is that subordinates might be more efficient at cache retrieval and show more accurate performance on spatial memory tasks. We tested these predictions in dominant-subordinate pairs of mountain chickadees (Poecile gambeli). Each pair was housed in the same cage but caching behavior was tested individually in an adjacent aviary to avoid the confounding effects of small spaces in which birds could unnaturally and directly influence each other's behavior. In sharp contrast to our hypothesis, we found that subordinate chickadees cached less food, showed less efficient cache retrieval, and performed significantly worse on the spatial memory task than dominants. Although the behavioral differences could have resulted from social stress of subordination, and dominant birds reached significantly higher levels of corticosterone during their response to acute stress compared to subordinates, there were no significant differences between dominants and subordinates in baseline levels or in the pattern of adrenocortical stress response. We find no evidence, therefore, to support the hypothesis that subordinate mountain chickadees maintain elevated baseline corticosterone levels whereas lower caching rates and inferior cache retrieval efficiency might contribute to reduced survival of subordinates commonly found in food-caching parids.

  17. Don't make cache too complex: A simple probability-based cache management scheme for SSDs.

    PubMed

    Baek, Seungjae; Cho, Sangyeun; Choi, Jongmoo

    2017-01-01

    Solid-state drives (SSDs) have recently become a common storage component in computer systems, and they are fueled by continued bit cost reductions achieved with smaller feature sizes and multiple-level cell technologies. However, as the flash memory stores more bits per cell, the performance and reliability of the flash memory degrade substantially. To solve this problem, a fast non-volatile memory (NVM)-based cache has been employed within SSDs to reduce the long latency required to write data. Absorbing small writes in a fast NVM cache can also reduce the number of flash memory erase operations. To maximize the benefits of an NVM cache, it is important to increase the NVM cache utilization. In this paper, we propose and study ProCache, a simple NVM cache management scheme that makes cache-entrance decisions based on random probability testing. Our scheme is motivated by the observation that frequently written hot data will eventually enter the cache with a high probability, and that infrequently accessed cold data will not enter the cache easily. Owing to its simplicity, ProCache is easy to implement at a substantially smaller cost than similar previously studied techniques. We evaluate ProCache and conclude that it achieves performance comparable to a more complex reference-counter-based cache-management scheme.
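
    A minimal sketch of probability-based admission as the abstract describes it (the admission probability, the LRU backing structure, and the class name are assumptions, not ProCache's actual implementation):

    ```python
    # Probability-based cache admission: a write enters the NVM cache only if a
    # coin flip with probability p succeeds. Hypothetical sketch.
    import random
    from collections import OrderedDict

    class ProbabilisticCache:
        def __init__(self, capacity, admit_prob=0.1):
            self.capacity = capacity
            self.admit_prob = admit_prob
            self.entries = OrderedDict()          # LRU order for eviction

        def write(self, key, value):
            if key in self.entries:               # already cached: just update
                self.entries[key] = value
                self.entries.move_to_end(key)
            elif random.random() < self.admit_prob:
                if len(self.entries) >= self.capacity:
                    self.entries.popitem(last=False)   # evict least recently used
                self.entries[key] = value
            # otherwise the write bypasses the cache and goes straight to flash
    ```

    Ignoring evictions, a block written k times has entered the cache with probability 1 - (1 - p)^k, so frequently written hot data is admitted almost surely while cold data usually bypasses the cache.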

  18. Memory-Intensive Benchmarks: IRAM vs. Cache-Based Machines

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Gaeke, Brian R.; Husbands, Parry; Li, Xiaoye S.; Oliker, Leonid; Yelick, Katherine A.; Biegel, Bryan (Technical Monitor)

    2002-01-01

    The increasing gap between processor and memory performance has led to new architectural models for memory-intensive applications. In this paper, we explore the performance of a set of memory-intensive benchmarks and use them to compare the performance of conventional cache-based microprocessors to a mixed logic and DRAM processor called VIRAM. The benchmarks are based on problem statements, rather than specific implementations, and in each case we explore the fundamental hardware requirements of the problem, as well as alternative algorithms and data structures that can help expose fine-grained parallelism or simplify memory access patterns. The benchmarks are characterized by their memory access patterns, their basic control structures, and the ratio of computation to memory operation.

  19. Don’t make cache too complex: A simple probability-based cache management scheme for SSDs

    PubMed Central

    Cho, Sangyeun; Choi, Jongmoo

    2017-01-01

    Solid-state drives (SSDs) have recently become a common storage component in computer systems, and they are fueled by continued bit cost reductions achieved with smaller feature sizes and multiple-level cell technologies. However, as the flash memory stores more bits per cell, the performance and reliability of the flash memory degrade substantially. To solve this problem, a fast non-volatile memory (NVM)-based cache has been employed within SSDs to reduce the long latency required to write data. Absorbing small writes in a fast NVM cache can also reduce the number of flash memory erase operations. To maximize the benefits of an NVM cache, it is important to increase the NVM cache utilization. In this paper, we propose and study ProCache, a simple NVM cache management scheme that makes cache-entrance decisions based on random probability testing. Our scheme is motivated by the observation that frequently written hot data will eventually enter the cache with a high probability, and that infrequently accessed cold data will not enter the cache easily. Owing to its simplicity, ProCache is easy to implement at a substantially smaller cost than similar previously studied techniques. We evaluate ProCache and conclude that it achieves performance comparable to a more complex reference-counter-based cache-management scheme. PMID:28358897

  20. Reader set encoding for directory of shared cache memory in multiprocessor system

    DOEpatents

    Ahn, Daniel; Ceze, Luis H.; Gara, Alan; Ohmacht, Martin; Zhuang, Xiaotong

    2014-06-10

    In a parallel processing system with speculative execution, conflict checking occurs in a directory lookup of a cache memory that is shared by all processors. In each case, the same physical memory address will map to the same set of that cache, no matter which processor originated that access. The directory includes a dynamic reader set encoding, indicating what speculative threads have read a particular line. This reader set encoding is used in conflict checking. A bitset encoding is used to specify particular threads that have read the line.
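
    An illustrative software model of a reader-set bitset (the field names and encoding below are assumptions, not the patent's hardware): one bit per speculative thread records that the thread has read the line, and a write by another thread conflicts with every recorded reader.

    ```python
    # Reader-set bitset for conflict checking in a shared-cache directory.
    # Hypothetical sketch of the general encoding idea.
    class DirectoryEntry:
        def __init__(self):
            self.readers = 0                  # bit i set => thread i read the line

        def record_read(self, thread_id):
            self.readers |= (1 << thread_id)

        def conflicting_readers(self, writer_id):
            # every recorded reader other than the writer conflicts with the write
            others = self.readers & ~(1 << writer_id)
            return [t for t in range(others.bit_length()) if (others >> t) & 1]
    ```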

  1. Tuning the cache memory usage in tomographic reconstruction on standard computers with Advanced Vector eXtensions (AVX)

    PubMed Central

    Agulleiro, Jose-Ignacio; Fernandez, Jose-Jesus

    2015-01-01

    Cache blocking is a technique widely used in scientific computing to minimize the exchange of information with main memory by reusing the data kept in cache memory. In tomographic reconstruction on standard computers using vector instructions, cache blocking turns out to be central to optimize performance. To this end, sinograms of the tilt-series and slices of the volumes to be reconstructed have to be divided into small blocks that fit into the different levels of cache memory. The code is then reorganized so as to operate with a block as much as possible before proceeding with another one. This data article is related to the research article titled Tomo3D 2.0 – Exploitation of Advanced Vector eXtensions (AVX) for 3D reconstruction (Agulleiro and Fernandez, 2015) [1]. Here we present data of a thorough study of the performance of tomographic reconstruction by varying cache block sizes, which allows derivation of expressions for their automatic quasi-optimal tuning. PMID:26217710
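
    The blocking idea is independent of tomography; the generic sketch below (the block size and the matrix-product example are illustrative, not the Tomo3D code) shows how a computation is reorganized to work on one cache-resident tile at a time before moving on:

    ```python
    # Minimal cache-blocking sketch: a matrix product computed tile by tile so
    # each tile of A, B and C is reused from cache many times before eviction.
    # Assumes square matrices of matching size; the block size is illustrative.
    import numpy as np

    def blocked_matmul(A, B, block=64):
        n = A.shape[0]
        C = np.zeros((n, n))
        for i0 in range(0, n, block):
            for j0 in range(0, n, block):
                for k0 in range(0, n, block):
                    # this tile-sized update reuses the three blocks while cached
                    C[i0:i0+block, j0:j0+block] += (
                        A[i0:i0+block, k0:k0+block] @ B[k0:k0+block, j0:j0+block]
                    )
        return C
    ```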

  2. Tuning the cache memory usage in tomographic reconstruction on standard computers with Advanced Vector eXtensions (AVX).

    PubMed

    Agulleiro, Jose-Ignacio; Fernandez, Jose-Jesus

    2015-06-01

    Cache blocking is a technique widely used in scientific computing to minimize the exchange of information with main memory by reusing the data kept in cache memory. In tomographic reconstruction on standard computers using vector instructions, cache blocking turns out to be central to optimize performance. To this end, sinograms of the tilt-series and slices of the volumes to be reconstructed have to be divided into small blocks that fit into the different levels of cache memory. The code is then reorganized so as to operate with a block as much as possible before proceeding with another one. This data article is related to the research article titled Tomo3D 2.0 - Exploitation of Advanced Vector eXtensions (AVX) for 3D reconstruction (Agulleiro and Fernandez, 2015) [1]. Here we present data of a thorough study of the performance of tomographic reconstruction by varying cache block sizes, which allows derivation of expressions for their automatic quasi-optimal tuning.

  3. A cache-aided multiprocessor rollback recovery scheme

    NASA Technical Reports Server (NTRS)

    Wu, Kun-Lung; Fuchs, W. Kent

    1989-01-01

    This paper demonstrates how previous uniprocessor cache-aided recovery schemes can be applied to multiprocessor architectures, for recovering from transient processor failures, utilizing private caches and a global shared memory. As with cache-aided uniprocessor recovery, the multiprocessor cache-aided recovery scheme of this paper can be easily integrated into standard bus-based snoopy cache coherence protocols. A consistent shared memory state is maintained without the necessity of global check-pointing.

  4. Authenticating cache

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Tyler Barratt; Urrea, Jorge Mario

    2012-06-01

    The aim of the Authenticating Cache architecture is to ensure that machine instructions in a Read Only Memory (ROM) are legitimate from the time the ROM image is signed (immediately after compilation) to the time they are placed in the cache for the processor to consume. The proposed architecture allows the detection of ROM image modifications during distribution or when it is loaded into memory. It also ensures that modified instructions will not execute in the processor, as the cache will not be loaded with a page that fails an integrity check. The authenticity of the instruction stream can also be verified in this architecture. The combination of integrity and authenticity assurance greatly improves the security profile of a system.

  5. Fault Tolerant Cache Schemes

    NASA Astrophysics Data System (ADS)

    Tu, H.-Yu.; Tasneem, Sarah

    Most modern microprocessors employ on-chip cache memories to meet the memory bandwidth demand. These caches now occupy a greater share of the chip real estate. Also, continuous down-scaling of transistors increases the possibility of defects in the cache area, which already occupies more than 50% of the chip area. For this reason, various techniques have been proposed to tolerate defects in cache blocks. These techniques can be classified into three different categories, namely, cache line disabling, replacement with spare blocks, and decoder reconfiguration without spare blocks. This chapter examines each of those fault-tolerant techniques with a fixed typical size and organization of L1 cache, through extended simulation using the SPEC2000 benchmark on individual techniques. The design and characteristics of each technique are summarized with a view to evaluating the scheme. We then present our simulation results and a comparative study of the three different methods.
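
    As a sketch of the simplest of the three categories, cache line disabling (the data structures are illustrative assumptions, not the chapter's simulator): blocks that fail a manufacturing or runtime test are flagged, and lookups simply skip them, trading a little capacity for a working cache.

    ```python
    # Cache line disabling: defective blocks are marked unusable and ignored.
    # Hypothetical sketch of the scheme's behaviour.
    class CacheBlock:
        def __init__(self):
            self.valid = False
            self.disabled = False   # set when a defect is detected in this block
            self.tag = None

    class CacheSet:
        def __init__(self, associativity):
            self.blocks = [CacheBlock() for _ in range(associativity)]

        def lookup(self, tag):
            return any(b.valid and not b.disabled and b.tag == tag
                       for b in self.blocks)

        def usable_ways(self):
            # a set whose blocks are all disabled forces every access to miss
            return [b for b in self.blocks if not b.disabled]
    ```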

  6. Effects of demanding foraging conditions on cache retrieval accuracy in food-caching mountain chickadees (Poecile gambeli).

    PubMed

    Pravosudov, V V; Clayton, N S

    2001-02-22

    Birds rely, at least in part, on spatial memory for recovering previously hidden caches but accurate cache recovery may be more critical for birds that forage in harsh conditions where the food supply is limited and unpredictable. Failure to find caches in these conditions may potentially result in death from starvation. In order to test this hypothesis we compared the cache recovery behaviour of 24 wild-caught mountain chickadees (Poecile gambeli), half of which were maintained on a limited and unpredictable food supply while the rest were maintained on an ad libitum food supply for 60 days. We then tested their cache retrieval accuracy by allowing birds from both groups to cache seeds in the experimental room and recover them 5 hours later. Our results showed that birds maintained on a limited and unpredictable food supply made significantly fewer visits to non-cache sites when recovering their caches compared to birds maintained on ad libitum food. We found the same difference in performance in two versions of a one-trial associative learning task in which the birds had to rely on memory to find previously encountered hidden food. In a non-spatial memory version of the task, in which the baited feeder was clearly marked, there were no significant differences between the two groups. We therefore concluded that the two groups differed in their efficiency at cache retrieval. We suggest that this difference is more likely to be attributable to a difference in memory (encoding or recall) than to a difference in their motivation to search for hidden food, although the possibility of some motivational differences still exists. Overall, our results suggest that demanding foraging conditions favour more accurate cache retrieval in food-caching birds.

  7. Cache as point of coherence in multiprocessor system

    DOEpatents

    Blumrich, Matthias A.; Ceze, Luis H.; Chen, Dong; Gara, Alan; Heidelberger, Phlip; Ohmacht, Martin; Steinmacher-Burow, Burkhard; Zhuang, Xiaotong

    2016-11-29

    In a multiprocessor system, a conflict checking mechanism is implemented in the L2 cache memory. Different versions of speculative writes are maintained in different ways of the cache. A record of speculative writes is maintained in the cache directory. Conflict checking occurs as part of directory lookup. Speculative versions that do not conflict are aggregated into an aggregated version in a different way of the cache. Speculative memory access requests do not go to main memory.

  8. Experimental evaluation of multiprocessor cache-based error recovery

    NASA Technical Reports Server (NTRS)

    Janssens, Bob; Fuchs, W. K.

    1991-01-01

    Several variations of cache-based checkpointing for rollback error recovery in shared-memory multiprocessors have recently been developed. By modifying the cache replacement policy, these techniques use the inherent redundancy in the memory hierarchy to periodically checkpoint the computation state. Three schemes, differing in the manner in which they avoid rollback propagation, are evaluated. By simulation with address traces from parallel applications running on an Encore Multimax shared-memory multiprocessor, the performance effects of integrating the recovery schemes into the cache coherence protocol are evaluated. The results indicate that the cache-based schemes can provide checkpointing capability with low performance overhead but uncontrollably high variability in the checkpoint interval.

  9. Binary mesh partitioning for cache-efficient visualization.

    PubMed

    Tchiboukdjian, Marc; Danjean, Vincent; Raffin, Bruno

    2010-01-01

    One important bottleneck when visualizing large data sets is the data transfer between processor and memory. Cache-aware (CA) and cache-oblivious (CO) algorithms take the memory hierarchy into consideration to design cache-efficient algorithms. CO approaches have the advantage of adapting to unknown and varying memory hierarchies. Recent CA and CO algorithms developed for 3D mesh layouts significantly improve performance over previous approaches, but they lack theoretical performance guarantees. We present in this paper an O(N log N) algorithm to compute a CO layout for unstructured but well-shaped meshes. We prove that a coherent traversal of an N-size mesh in dimension d induces fewer than N/B + O(N/M^(1/d)) cache misses, where B and M are the block size and the cache size, respectively. Experiments show that our layout computation is faster and significantly less memory-consuming than the best known CO algorithm. Performance is comparable to this algorithm for classical visualization algorithm access patterns, or better when the BSP tree produced while computing the layout is used as an acceleration data structure adjusted to the layout. We also show that cache-oblivious approaches lead to significant performance increases on recent GPU architectures.

  10. Accurate low-cost methods for performance evaluation of cache memory systems

    NASA Technical Reports Server (NTRS)

    Laha, Subhasis; Patel, Janak H.; Iyer, Ravishankar K.

    1988-01-01

    Methods of simulation based on statistical techniques are proposed to decrease the need for large trace measurements and for predicting true program behavior. Sampling techniques are applied while the address trace is collected from a workload. This drastically reduces the space and time needed to collect the trace. Simulation techniques are developed to use the sampled data not only to predict the mean miss rate of the cache, but also to provide an empirical estimate of its actual distribution. Finally, a concept of primed cache is introduced to simulate large caches by the sampling-based method.
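
    A sketch of the sampling approach described (the window and gap lengths, and the interface of the per-window simulator, are assumptions): only periodic windows of the address trace are simulated, and the per-window miss rates yield both a mean estimate and an empirical distribution.

    ```python
    # Sampled trace-driven cache simulation. Hypothetical sketch of the idea.
    def sample_windows(trace, window=10_000, gap=90_000):
        # yield periodic windows of consecutive references from the full trace
        i = 0
        while i < len(trace):
            yield trace[i:i + window]
            i += window + gap

    def estimate_miss_rate(trace, simulate_window):
        # simulate_window(refs) -> (misses, refs_seen) for one sampled window
        per_window = []
        for refs in sample_windows(trace):
            misses, seen = simulate_window(refs)
            if seen:
                per_window.append(misses / seen)
        mean = sum(per_window) / len(per_window)
        return mean, per_window   # mean miss rate plus its empirical distribution
    ```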

  11. A test of the adaptive specialization hypothesis: population differences in caching, memory, and the hippocampus in black-capped chickadees (Poecile atricapilla).

    PubMed

    Pravosudov, Vladimir V; Clayton, Nicola S

    2002-08-01

    To test the hypothesis that accurate cache recovery is more critical for birds that live in harsh conditions where the food supply is limited and unpredictable, the authors compared food caching, memory, and the hippocampus of black-capped chickadees (Poecile atricapilla) from Alaska and Colorado. Under identical laboratory conditions, Alaska chickadees (a) cached significantly more food; (b) were more efficient at cache recovery; (c) performed more accurately on one-trial associative learning tasks in which birds had to rely on spatial memory, but did not differ when tested on a nonspatial version of this task; and (d) had significantly larger hippocampal volumes containing more neurons compared with Colorado chickadees. The results support the hypothesis that these population differences may reflect adaptations to a harsh environment.

  12. Formal verification of an MMU and MMU cache

    NASA Technical Reports Server (NTRS)

    Schubert, E. T.

    1991-01-01

    We describe the formal verification of a hardware subsystem consisting of a memory management unit and a cache. These devices are verified independently and then shown to interact correctly when composed. The MMU authorizes memory requests and translates virtual addresses to real addresses. The cache improves performance by maintaining a LRU (least recently used) list from the memory resident segment table.
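
    A small illustrative model of an LRU-managed cache of segment-table entries, the behaviour attributed to the MMU cache above (this is a software sketch, not the verified hardware design; the names are assumptions):

    ```python
    # LRU cache of segment-table entries for address translation. Hypothetical sketch.
    from collections import OrderedDict

    class SegmentCache:
        def __init__(self, capacity):
            self.capacity = capacity
            self.entries = OrderedDict()     # virtual segment -> real base address

        def translate(self, segment, segment_table):
            if segment in self.entries:
                self.entries.move_to_end(segment)      # mark as most recently used
            else:
                if len(self.entries) >= self.capacity:
                    self.entries.popitem(last=False)   # evict least recently used
                self.entries[segment] = segment_table[segment]
            return self.entries[segment]
    ```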

  13. Dynamically programmable cache

    NASA Astrophysics Data System (ADS)

    Nakkar, Mouna; Harding, John A.; Schwartz, David A.; Franzon, Paul D.; Conte, Thomas

    1998-10-01

    Reconfigurable machines have recently been used as co-processors to accelerate the execution of certain algorithms or program subroutines. The problems with this approach include high reconfiguration time and limited partial reconfiguration. By far the most critical problems are: (1) the small on-chip memory, which results in slower execution time, and (2) small FPGA areas that cannot implement large subroutines. Dynamically Programmable Cache (DPC) is a novel architecture for embedded processors which offers solutions to the above problems. To solve memory access problems, DPC processors merge reconfigurable arrays with the data cache at various cache levels to create multi-level reconfigurable machines. As a result, DPC machines have both higher data accessibility and higher FPGA memory bandwidth. To solve the limited FPGA resource problem, DPC processors implement the multi-context switching (virtualization) concept. Virtualization allows implementation of large subroutines with fewer FPGA cells. Additionally, DPC processors can parallelize the execution of several operations, resulting in faster execution time. In this paper, the speedup for DPC machines is shown to be 5X over an Altera FLEX10K FPGA chip and 2X over a Sun Ultra 1 SPARCstation for two different algorithms (convolution and motion estimation).

  14. Temperature and leakage aware techniques to improve cache reliability

    NASA Astrophysics Data System (ADS)

    Akaaboune, Adil

    Decreasing power consumption in small devices such as handhelds, cell phones and high-performance processors is now one of the most critical design concerns. On-chip cache memories dominate the chip area in microprocessors, and thus arises the need for power-efficient cache memories. Cache is the simplest cost-effective method of attaining a high-speed memory hierarchy, and its performance is extremely critical for high-speed computers. Cache is used by the microprocessor to bridge the performance gap between the processor and main memory (RAM); hence memory bandwidth is frequently a bottleneck that can affect peak throughput significantly. In the design of any cache system, the tradeoffs of area/cost, performance, power consumption, and thermal management must be taken into consideration. Previous work has mainly concentrated on performance and area/cost constraints. More recent work has focused on low-power design, especially for portable devices and media-processing systems; however, less research has been done on the relationship between heat management, leakage power and cost per die. Lately, the focus of power dissipation in the new generations of microprocessors has shifted from dynamic power to idle power, a previously underestimated form of power loss that causes battery charge to drain and devices to shut down too early due to the waste of energy. The problem has been aggravated by the aggressive scaling of process and device-level methods originally used by designers to enhance performance, conserve dissipation and reduce the sizes of digital circuits that are increasingly condensed. This dissertation studies the impact of hotspots in the cache memory on leakage consumption and on microprocessor reliability and durability. The work will first prove that by eliminating hotspots in the cache memory, leakage power will be reduced and therefore reliability will be improved. The second technique studied is data quality management that improves the quality of the data

  15. Effects of cacheing on multitasking efficiency and programming strategy on an ELXSI 6400

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Montry, G.R.; Benner, R.E.

    1985-12-01

    The impact of a cache/shared memory architecture, and, in particular, the cache coherency problem, upon concurrent algorithm and program development is discussed. In this context, a simple set of programming strategies is proposed which streamlines code development and improves code performance when multitasking in a cache/shared memory or distributed memory environment.

  16. Josephson 4 K-bit cache memory design for a prototype signal processor. I - General overview

    NASA Astrophysics Data System (ADS)

    Henkels, W. H.; Geppert, L. M.; Kadlec, J.; Epperlein, P. W.; Beha, H.

    1985-09-01

    In the early stages of the Josephson computer project conducted at an American computer company, it was recognized that a very fast cache memory was needed to complement Josephson logic. A subnanosecond access time memory was implemented experimentally on the basis of a 2.5-micron Pb-alloy technology. It was then decided to switch over to a Nb-base-electrode technology with the objective of alleviating problems with the long-term reliability and aging of Pb-based junctions. The present paper provides a general overview of the status of a 4 x 1 K-bit Josephson cache design employing a 2.5-micron Nb-edge-junction technology. Attention is given to the fabrication process and its implications, aspects of circuit design methodology, an overview of system environment and chip components, design changes and status, and various difficulties and uncertainties.

  17. Modifying dementia risk and trajectories of cognitive decline in aging: the Cache County Memory Study.

    PubMed

    Welsh-Bohmer, Kathleen A; Breitner, John C S; Hayden, Kathleen M; Lyketsos, Constantine; Zandi, Peter P; Tschanz, Joann T; Norton, Maria C; Munger, Ron

    2006-07-01

    The Cache County Study of Memory, Health, and Aging, more commonly referred to as the "Cache County Memory Study (CCMS)", is a longitudinal investigation of aging and Alzheimer's disease (AD) based in an exceptionally long-lived population residing in northern Utah. The study, begun in 1994, has followed an initial cohort of 5,092 older individuals (many over age 84) and has examined the development of cognitive impairment and dementia in relation to genetic and environmental antecedents. This article summarizes the major contributions of the CCMS towards the understanding of mild cognitive disorders and AD across the lifespan, underscoring the role of common health exposures in modifying dementia risk and trajectories of cognitive change. The study, now in its fourth wave of ascertainment, illustrates the role of population-based approaches in informing testable models of cognitive aging and Alzheimer's disease.

  18. An Adaptive Insertion and Promotion Policy for Partitioned Shared Caches

    NASA Astrophysics Data System (ADS)

    Mahrom, Norfadila; Liebelt, Michael; Raof, Rafikha Aliana A.; Daud, Shuhaizar; Hafizah Ghazali, Nur

    2018-03-01

    Cache replacement policies in chip multiprocessors (CMP) have been investigated extensively and proven able to enhance shared cache management. However, competition among multiple processors executing different threads that require simultaneous access to a shared memory may cause cache contention and memory coherence problems on the chip. These issues also arise from drawbacks of the commonly used Least Recently Used (LRU) policy employed in multiprocessor systems, which stem from cache lines residing in the cache longer than required. In image processing analysis, for example of extrapulmonary tuberculosis (TB), an accurate diagnosis of tissue specimens is required. Therefore, a fast and reliable shared memory management system to execute algorithms for processing vast amounts of specimen images is needed. In this paper, the effects of the cache replacement policy in a partitioned shared cache are investigated. The goal is to quantify whether better performance can be achieved by using less complex replacement strategies. This paper proposes a Middle Insertion 2 Positions Promotion (MI2PP) policy to eliminate cache misses that could adversely affect the access patterns and the throughput of the processors in the system. The policy employs a static predefined insertion point, near distance promotion, and the concept of ownership in the eviction policy to effectively reduce cache thrashing and to avoid resource stealing among the processors.
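
    A rough sketch of the insertion/promotion idea the abstract names (middle insertion, promotion by two positions). The recency-stack representation and the eviction details below are assumptions, not the paper's exact MI2PP mechanism:

    ```python
    # Middle insertion with two-position promotion on a per-set recency stack.
    # Hypothetical sketch of the policy's general behaviour.
    class RecencyStack:
        def __init__(self, associativity, promote_by=2):
            self.assoc = associativity
            self.promote_by = promote_by
            self.stack = []              # index 0 = MRU, last index = eviction victim

        def insert(self, tag):
            if len(self.stack) >= self.assoc:
                self.stack.pop()                           # evict from the LRU end
            self.stack.insert(len(self.stack) // 2, tag)   # insert at the middle

        def hit(self, tag):
            i = self.stack.index(tag)
            j = max(0, i - self.promote_by)   # promote two positions toward MRU
            self.stack.insert(j, self.stack.pop(i))
    ```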

  19. Store operations to maintain cache coherence

    DOEpatents

    Evangelinos, Constantinos; Nair, Ravi; Ohmacht, Martin

    2017-08-01

    In one embodiment, a computer-implemented method includes encountering a store operation during a compile-time of a program, where the store operation is applicable to a memory line. It is determined, by a computer processor, that no cache coherence action is necessary for the store operation. A store-without-coherence-action instruction is generated for the store operation, responsive to determining that no cache coherence action is necessary. The store-without-coherence-action instruction specifies that the store operation is to be performed without a cache coherence action, and cache coherence is maintained upon execution of the store-without-coherence-action instruction.

  20. Population genetic structure and its implications for adaptive variation in memory and the hippocampus on a continental scale in food-caching black-capped chickadees.

    PubMed

    Pravosudov, V V; Roth, T C; Forister, M L; Ladage, L D; Burg, T M; Braun, M J; Davidson, B S

    2012-09-01

    Food-caching birds rely on stored food to survive the winter, and spatial memory has been shown to be critical in successful cache recovery. Both spatial memory and the hippocampus, an area of the brain involved in spatial memory, exhibit significant geographic variation linked to climate-based environmental harshness and the potential reliance on food caches for survival. Such geographic variation has been suggested to have a heritable basis associated with differential selection. Here, we ask whether population genetic differentiation and potential isolation among multiple populations of food-caching black-capped chickadees are associated with differences in memory and hippocampal morphology, by exploring population genetic structure within and among groups of populations that are divergent to different degrees in hippocampal morphology. Using mitochondrial DNA and 583 AFLP loci, we found that population divergence in hippocampal morphology is not significantly associated with neutral genetic divergence or geographic distance, but instead is significantly associated with differences in winter climate. These results are consistent with variation in a history of natural selection on memory and hippocampal morphology that creates and maintains differences in these traits regardless of population genetic structure and likely associated gene flow.

  1. Corvid caching: Insights from a cognitive model.

    PubMed

    van der Vaart, Elske; Verbrugge, Rineke; Hemelrijk, Charlotte K

    2011-07-01

    Caching and recovery of food by corvids is well-studied, but some ambiguous results remain. To help clarify these, we built a computational cognitive model. It is inspired by similar models built for humans, and it assumes that memory strength depends on frequency and recency of use. We compared our model's behavior to that of real birds in previously published experiments. Our model successfully replicated the outcomes of two experiments on recovery behavior and two experiments on cache site choice. Our "virtual birds" reproduced declines in recovery accuracy across sessions, revisits to previously emptied cache sites, a lack of correlation between caching and recovery order, and a preference for caching in safe locations. The model also produced two new explanations. First, that Clark's nutcrackers may become less accurate as recovery progresses not because of differential memory for different cache sites, as was once assumed, but because of chance effects. And second, that Western scrub jays may choose their cache sites not on the basis of negative recovery experiences only, as was previously thought, but on the basis of positive recovery experiences instead. Alternatively, both "punishment" and "reward" may be playing a role. We conclude with a set of new insights, a testable prediction, and directions for future work.
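
    One standard way to formalise "memory strength depends on frequency and recency of use" is a base-level activation of the kind used in human memory models: strength is the log of a sum over past uses, each discounted by a power of the time since that use. The decay exponent and the comparison below are illustrative assumptions, not the authors' fitted model.

    ```python
    # Frequency- and recency-dependent memory strength. Hypothetical sketch.
    import math

    def memory_strength(use_times, now, decay=0.5):
        # each past use contributes (time since use) ** -decay; log-sum gives strength
        return math.log(sum((now - t) ** -decay for t in use_times if now > t))

    # a cache site used often and recently is remembered better than one used
    # once, long ago
    recent_site = memory_strength([1.0, 5.0, 9.0], now=10.0)
    old_site = memory_strength([1.0], now=10.0)
    assert recent_site > old_site
    ```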

  2. Rapid effects of corticosterone on cache recovery in mountain chickadees (Parus gambeli).

    PubMed

    Saldanha, C J; Schlinger, B A; Clayton, N S

    2000-03-01

    Environmental perturbations increase adrenal activity in several vertebrates. Increases in corticosterone may serve as a proximate trigger whereby organisms can rapidly adapt their behavior to survive environmental fluctuations. In food-caching songbirds, inclement weather may present the need to alter caching and/or retrieval behaviors to ensure food supplies. We hypothesized that corticosterone may increase the rate of caching and/or retrieval behaviors in the mountain chickadee, a food-storing songbird, and tested if these potential effects were mediated by alterations in appetite, activity, or memory for cache sites. Corticosterone or vehicle was administered to subjects 5 min prior to either caching or recovery in a naturalistic laboratory paradigm during which we recorded the number of caching events, sites visited, and seeds eaten (caching) or caches recovered, total sites visited, cache-related visits, and non-cache-related visits (recovery). Data were analyzed using nested ANOVA for treatment within sequential trial. There was no effect on any caching behaviors following treatment. However, birds treated with corticosterone during retrieval recovered more seeds and tended to visit more cache-related sites than did controls. Since groups did not differ in the number of seeds eaten or the total number of sites visited, it seems unlikely that corticosterone affected appetite or activity. Rapid surges in corticosterone may increase the efficacy of an underlying memory process for cache sites which is reflected in higher cache recovery in corticosterone-treated birds than in controls. Thus, rapid alterations in plasma corticosterone following environmental change may alter memory-reliant behaviors which promote survival in the food-caching mountain chickadee.

  3. A population study of Alzheimer's disease: findings from the Cache County Study on Memory, Health, and Aging.

    PubMed

    Tschanz, Joann T; Treiber, Katherine; Norton, Maria C; Welsh-Bohmer, Kathleen A; Toone, Leslie; Zandi, Peter P; Szekely, Christine A; Lyketsos, Constantine; Breitner, John C S

    2005-01-01

    There are several population-based studies of aging, memory, and dementia being conducted worldwide. Of these, the Cache County Study on Memory, Health and Aging is noteworthy for its large number of "oldest-old" members. This study, which has been following an initial cohort of 5,092 seniors since 1995, has reported among its major findings the role of the Apolipoprotein E gene on modifying the risk for Alzheimer's disease (AD) in males and females and identifying pharmacologic compounds that may act to reduce AD risk. This article summarizes the major findings of the Cache County study to date, describes ongoing investigations, and reports preliminary analyses on the outcome of the oldest-old in this population, the subgroup of participants who were over age 84 at the study's inception.

  4. Pilfering Eurasian jays use visual and acoustic information to locate caches.

    PubMed

    Shaw, Rachael C; Clayton, Nicola S

    2014-11-01

    Pilfering corvids use observational spatial memory to accurately locate caches that they have seen another individual make. Accordingly, many corvid cache-protection strategies limit the transfer of visual information to potential thieves. Eurasian jays (Garrulus glandarius) employ strategies that reduce the amount of visual and auditory information that is available to competitors. Here, we test whether or not the jays recall and use both visual and auditory information when pilfering other birds' caches. When jays had no visual or acoustic information about cache locations, the proportion of available caches that they found did not differ from the proportion expected if jays were searching at random. By contrast, after observing and listening to a conspecific caching in gravel or sand, jays located a greater proportion of caches, searched more frequently in the correct substrate type and searched in fewer empty locations to find the first cache than expected. After only listening to caching in gravel and sand, jays also found a larger proportion of caches and searched in the substrate type where they had heard caching take place more frequently than expected. These experiments demonstrate that Eurasian jays possess observational spatial memory and indicate that pilfering jays may gain information about cache location merely by listening to caching. This is the first evidence that a corvid may use recalled acoustic information to locate and pilfer caches.

  5. a Cache Design Method for Spatial Information Visualization in 3d Real-Time Rendering Engine

    NASA Astrophysics Data System (ADS)

    Dai, X.; Xiong, H.; Zheng, X.

    2012-07-01

    A well-designed cache system has positive impacts on a 3D real-time rendering engine. As the amount of visualization data gets larger, the effects become more obvious. Caches are the basis on which the 3D real-time rendering engine smoothly browses through data that is out of core memory or comes from the internet. In this article, a new kind of cache based on multiple threads and large files is introduced. The memory cache consists of three parts: the rendering cache, the pre-rendering cache and the elimination cache. The rendering cache stores the data that is being rendered in the engine; the data that is dispatched according to the position of the viewpoint in the horizontal and vertical directions is stored in the pre-rendering cache; the data that is eliminated from the previous caches is stored in the elimination cache and is then written to the disk cache. Multiple large files are used in the disk cache. When a disk cache file reaches the length limit (128 MB is the cap in the experiment), no item is eliminated from the file; instead, a new large cache file is created. If the number of large files exceeds the preset maximum, the earliest file is deleted from the disk. In this way, only one file is open for writing and reading, and the rest are read-only, so the disk cache can be used in a highly asynchronous way. The size of each large file is limited so that it can be mapped into core memory to save loading time. Multiple threads are used to update the cache data: threads load data into the rendering cache as soon as possible for rendering, load data into the pre-rendering cache for rendering the next few frames, and load data into the elimination cache which is not needed for the moment. In our experiment, two threads are designed. The first thread organizes the memory cache according to the viewpoint and maintains two lists, the adding list and the deleting list; the adding list indexes the data that should be
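
    An illustrative software skeleton of the three memory-cache pools and the viewpoint-driven movement of tiles between them (only the 128 MB file cap comes from the abstract; the class and method names are assumptions, not the engine's implementation):

    ```python
    # Three-pool memory cache for a tiled 3D scene. Hypothetical sketch.
    MAX_DISK_FILE_BYTES = 128 * 1024 * 1024   # disk cache file cap from the abstract

    class TileCache:
        def __init__(self):
            self.rendering = {}       # tiles needed for the current frame
            self.pre_rendering = {}   # tiles prefetched around the viewpoint
            self.elimination = {}     # evicted tiles waiting to be written to disk

        def on_viewpoint_change(self, needed, prefetch):
            # tiles no longer visible move to the elimination pool
            for key in list(self.rendering):
                if key not in needed:
                    self.elimination[key] = self.rendering.pop(key)
            # prefetched tiles that became visible move to the rendering pool
            for key in list(self.pre_rendering):
                if key in needed:
                    self.rendering[key] = self.pre_rendering.pop(key)
            # remaining prefetch candidates are loaded by background threads
            self.pre_rendering.update({k: None for k in prefetch
                                       if k not in self.rendering})
    ```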

  6. Integrating Cache Performance Modeling and Tuning Support in Parallelization Tools

    NASA Technical Reports Server (NTRS)

    Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)

    1998-01-01

    With the resurgence of distributed shared memory (DSM) systems based on cache-coherent Non-Uniform Memory Access (ccNUMA) architectures and the increasing disparity between memory and processor speeds, data locality overheads are becoming the greatest bottleneck in the way of realizing the potential high performance of these systems. While parallelization tools and compilers help users port their sequential applications to a DSM system, considerable time and effort are still needed to tune the memory performance of these applications to achieve reasonable speedup. In this paper, we show that integrating cache performance modeling and tuning support within a parallelization environment can alleviate this problem. The Cache Performance Modeling and Prediction Tool (CPMP) employs trace-driven simulation techniques without the overhead of generating and managing detailed address traces. CPMP predicts the cache performance impact of source-code-level "what-if" modifications in a program to assist the user in the tuning process. CPMP is built on top of a customized version of the Computer Aided Parallelization Tools (CAPTools) environment. Finally, we demonstrate how CPMP can be applied to tune a real Computational Fluid Dynamics (CFD) application.

  7. Corvid re-caching without 'theory of mind': a model.

    PubMed

    van der Vaart, Elske; Verbrugge, Rineke; Hemelrijk, Charlotte K

    2012-01-01

    Scrub jays are thought to use many tactics to protect their caches. For instance, they predominantly bury food far away from conspecifics, and if they must cache while being watched, they often re-cache their worms later, once they are in private. Two explanations have been offered for such observations, and they are intensely debated. First, the birds may reason about their competitors' mental states, with a 'theory of mind'; alternatively, they may apply behavioral rules learned in daily life. Although this second hypothesis is cognitively simpler, it does seem to require a different, ad-hoc behavioral rule for every caching and re-caching pattern exhibited by the birds. Our new theory avoids this drawback by explaining a large variety of patterns as side-effects of stress and the resulting memory errors. Inspired by experimental data, we assume that re-caching is not motivated by a deliberate effort to safeguard specific caches from theft, but by a general desire to cache more. This desire is brought on by stress, which is determined by the presence and dominance of onlookers, and by unsuccessful recovery attempts. We study this theory in two experiments similar to those done with real birds with a kind of 'virtual bird', whose behavior depends on a set of basic assumptions about corvid cognition, and a well-established model of human memory. Our results show that the 'virtual bird' acts as the real birds did; its re-caching reflects whether it has been watched, how dominant its onlooker was, and how close to that onlooker it has cached. This happens even though it cannot attribute mental states, and it has only a single behavioral rule assumed to be previously learned. Thus, our simulations indicate that corvid re-caching can be explained without sophisticated social cognition. Given our specific predictions, our theory can easily be tested empirically.

  8. Cache-Aware Asymptotically-Optimal Sampling-Based Motion Planning

    PubMed Central

    Ichnowski, Jeffrey; Prins, Jan F.; Alterovitz, Ron

    2014-01-01

    We present CARRT* (Cache-Aware Rapidly Exploring Random Tree*), an asymptotically optimal sampling-based motion planner that significantly reduces motion planning computation time by effectively utilizing the cache memory hierarchy of modern central processing units (CPUs). CARRT* can account for the CPU’s cache size in a manner that keeps its working dataset in the cache. The motion planner progressively subdivides the robot’s configuration space into smaller regions as the number of configuration samples rises. By focusing configuration exploration in a region for periods of time, nearest neighbor searching is accelerated since the working dataset is small enough to fit in the cache. CARRT* also rewires the motion planning graph in a manner that complements the cache-aware subdivision strategy to more quickly refine the motion planning graph toward optimality. We demonstrate the performance benefit of our cache-aware motion planning approach for scenarios involving a point robot as well as the Rethink Robotics Baxter robot. PMID:25419474

  9. Cache-Aware Asymptotically-Optimal Sampling-Based Motion Planning.

    PubMed

    Ichnowski, Jeffrey; Prins, Jan F; Alterovitz, Ron

    2014-05-01

    We present CARRT* (Cache-Aware Rapidly Exploring Random Tree*), an asymptotically optimal sampling-based motion planner that significantly reduces motion planning computation time by effectively utilizing the cache memory hierarchy of modern central processing units (CPUs). CARRT* can account for the CPU's cache size in a manner that keeps its working dataset in the cache. The motion planner progressively subdivides the robot's configuration space into smaller regions as the number of configuration samples rises. By focusing configuration exploration in a region for periods of time, nearest neighbor searching is accelerated since the working dataset is small enough to fit in the cache. CARRT* also rewires the motion planning graph in a manner that complements the cache-aware subdivision strategy to more quickly refine the motion planning graph toward optimality. We demonstrate the performance benefit of our cache-aware motion planning approach for scenarios involving a point robot as well as the Rethink Robotics Baxter robot.

  10. Improving the performance of heterogeneous multi-core processors by modifying the cache coherence protocol

    NASA Astrophysics Data System (ADS)

    Fang, Juan; Hao, Xiaoting; Fan, Qingwen; Chang, Zeqing; Song, Shuying

    2017-05-01

    In a heterogeneous multi-core architecture, CPU and GPU processors are integrated on the same chip, which poses a new challenge for last-level cache management. In this architecture, CPU applications and GPU applications execute concurrently and share the last-level cache (LLC). CPU and GPU have different memory access characteristics and therefore differ in their sensitivity to LLC capacity. For many CPU applications, a reduced share of the LLC can lead to significant performance degradation; GPU applications, by contrast, can tolerate increased memory access latency when there is sufficient thread-level parallelism. Taking advantage of the memory-latency tolerance of GPU programs, this paper presents a method that lets GPU applications access memory directly, bypassing the LLC and leaving more LLC space for CPU applications, which improves the performance of CPU applications without affecting the performance of GPU applications. When the CPU application is cache-sensitive and the GPU application is cache-insensitive, the overall performance of the system improves significantly.

  11. PCM-Based Durable Write Cache for Fast Disk I/O

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Zhuo; Wang, Bin; Carpenter, Patrick

    2012-01-01

    Flash-based solid-state devices (FSSDs) have been adopted within the memory hierarchy to improve the performance of hard disk drive (HDD) based storage systems. However, with the fast development of storage-class memories, new storage technologies with better performance and higher write endurance than FSSDs are emerging, e.g., phase-change memory (PCM). Understanding how to leverage these state-of-the-art storage technologies for modern computing systems is important for solving challenging data-intensive computing problems. In this paper, we propose to leverage PCM in a hybrid PCM-HDD storage architecture. We identify the limitations of traditional LRU caching algorithms for PCM-based caches, and develop a novel hash-based write caching scheme called HALO to improve the random write performance of hard disks. To address the limited durability of PCM devices and the degraded spatial locality in traditional wear-leveling techniques, we further propose novel PCM management algorithms that provide effective wear-leveling while maximizing access parallelism. We have evaluated this PCM-based hybrid storage architecture using applications with a diverse set of I/O access patterns. Our experimental results demonstrate that the HALO caching scheme leads to an average reduction of 36.8% in execution time compared to the LRU caching scheme, and that the SFC wear-leveling scheme extends the lifetime of PCM by a factor of 21.6.
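
    The abstract does not spell out HALO's internals. Purely as an illustration of the general idea behind a hash-based write cache, the Python sketch below (with assumed names and an assumed bucketing policy, not the paper's code) groups dirty blocks by a hash of their disk region so that a full bucket can be flushed as one near-sequential batch.

```python
# Hypothetical sketch of a hash-bucketed write cache (not the paper's HALO code).
# Dirty blocks are grouped by a hash of their disk region; a full bucket is
# flushed in address order, turning scattered random writes into sequential runs.
from collections import defaultdict

class HashWriteCache:
    def __init__(self, bucket_count=8, bucket_capacity=128):
        self.bucket_count = bucket_count
        self.bucket_capacity = bucket_capacity
        self.buckets = defaultdict(dict)   # bucket id -> {block address: data}

    def _bucket(self, block_addr, region_size=1024):
        # Blocks from the same disk region share a bucket (assumed policy).
        return (block_addr // region_size) % self.bucket_count

    def write(self, block_addr, data, flush_fn):
        b = self._bucket(block_addr)
        self.buckets[b][block_addr] = data
        if len(self.buckets[b]) >= self.bucket_capacity:
            self.flush_bucket(b, flush_fn)

    def flush_bucket(self, b, flush_fn):
        # Flush in ascending address order so the disk sees a near-sequential pass.
        for addr in sorted(self.buckets[b]):
            flush_fn(addr, self.buckets[b][addr])
        self.buckets[b].clear()

if __name__ == "__main__":
    cache = HashWriteCache(bucket_count=8, bucket_capacity=3)
    log = []
    for addr in (10, 5000, 12, 5103, 11):
        cache.write(addr, b"x", flush_fn=lambda a, d: log.append(a))
    print(log)   # [10, 11, 12]: nearby blocks flushed together once their bucket fills
```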

  12. A Survey Of Architectural Approaches for Managing Embedded DRAM and Non-volatile On-chip Caches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mittal, Sparsh; Vetter, Jeffrey S; Li, Dong

    Recent trends of CMOS scaling and the increasing number of on-chip cores have led to a large increase in the size of on-chip caches. Since SRAM has low density and consumes a large amount of leakage power, its use in designing on-chip caches has become more challenging. To address this issue, researchers are exploring the use of several emerging memory technologies, such as embedded DRAM, spin transfer torque RAM, resistive RAM, phase change RAM and domain wall memory. In this paper, we survey the architectural approaches proposed for designing memory systems and, specifically, caches with these emerging memory technologies. To highlight their similarities and differences, we present a classification of these technologies and architectural approaches based on their key characteristics. We also briefly summarize the challenges in using these technologies for architecting caches. We believe that this survey will help readers gain insights into the emerging memory device technologies and their potential use in designing future computing systems.

  13. The Cache County Study on Memory in Aging: Factors Affecting Risk of Alzheimer's disease and its Progression after Onset

    PubMed Central

    Tschanz, JoAnn T.; Norton, Maria C.; Zandi, Peter P.; Lyketsos, Constantine G.

    2014-01-01

    The Cache County Study on Memory in Aging is a longitudinal, population-based study of Alzheimer's disease (AD) and other dementias. Initiated in 1995 and extending to 2013, the study has followed over 5,000 elderly residents of Cache County, Utah (USA) for over twelve years. Achieving a 90% participation rate at enrollment, and spawning two ancillary projects, the study has contributed to the literature on genetic, psychosocial and environmental risk factors for AD, late life cognitive decline, and the clinical progression of dementia after its onset. This paper describes the major study contributions to the literature on AD and dementia. PMID:24423221

  14. The Cache County Study on Memory in Aging: factors affecting risk of Alzheimer's disease and its progression after onset.

    PubMed

    Tschanz, Joann T; Norton, Maria C; Zandi, Peter P; Lyketsos, Constantine G

    2013-12-01

    The Cache County Study on Memory in Aging is a longitudinal, population-based study of Alzheimer's disease (AD) and other dementias. Initiated in 1995 and extending to 2013, the study has followed over 5,000 elderly residents of Cache County, Utah (USA) for over twelve years. Achieving a 90% participation rate at enrolment, and spawning two ancillary projects, the study has contributed to the literature on genetic, psychosocial and environmental risk factors for AD, late-life cognitive decline, and the clinical progression of dementia after its onset. This paper describes the major study contributions to the literature on AD and dementia.

  15. Addressing Inter-set Write-Variation for Improving Lifetime of Non-Volatile Caches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mittal, Sparsh; Vetter, Jeffrey S

    We propose a technique that minimizes inter-set write variation in NVM caches in order to improve their lifetime. Our technique uses a cache coloring scheme to add a software-controlled mapping layer between groups of physical pages (called memory regions) and cache sets. Periodically, the number of writes to the different colors of the cache is computed and, based on this, the mapping of a few colors is changed to channel the write traffic to the least-utilized cache colors. This change helps achieve wear-leveling.
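
    A minimal sketch of the periodic remapping step described above, assuming a simple hottest-to-coldest color swap; the data structures and policy details are illustrative and are not taken from the paper.

```python
# Hypothetical sketch of periodic color remapping for inter-set wear-leveling
# (illustrative only; not the paper's implementation).

def rebalance_colors(write_counts, color_map, swaps=1):
    """write_counts: writes observed per cache color in this interval.
    color_map: memory region id -> cache color (software-controlled mapping layer).
    Redirect regions mapped to the hottest colors toward the coldest colors."""
    hot = sorted(write_counts, key=write_counts.get, reverse=True)[:swaps]
    cold = sorted(write_counts, key=write_counts.get)[:swaps]
    for h, c in zip(hot, cold):
        for region, color in color_map.items():
            if color == h:
                color_map[region] = c      # channel future writes to the cold color
            elif color == c:
                color_map[region] = h
    return color_map

if __name__ == "__main__":
    counts = {0: 900, 1: 120, 2: 80, 3: 400}      # writes per color this interval
    mapping = {r: r % 4 for r in range(8)}        # 8 memory regions, 4 colors
    print(rebalance_colors(counts, mapping))      # regions on hot color 0 now map to cold color 2
```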

  16. Problems faced by food-caching corvids and the evolution of cognitive solutions

    PubMed Central

    Grodzinski, Uri; Clayton, Nicola S.

    2010-01-01

    The scatter hoarding of food, or caching, is a widespread and well-studied behaviour. Recent experiments with caching corvids have provided evidence for episodic-like memory, future planning and possibly mental attribution, all cognitive abilities that were thought to be unique to humans. In addition to the complexity of making flexible, informed decisions about caching and recovering, this behaviour is underpinned by a motivationally controlled compulsion to cache. In this review, we shall first discuss the compulsive side of caching both during ontogeny and in the caching behaviour of adult corvids. We then consider some of the problems that these birds face and review the evidence for the cognitive abilities they use to solve them. Thus, the emergence of episodic-like memory is viewed as a solution for coping with food perishability, while the various cache-protection and pilfering strategies may be sophisticated tools to deprive competitors of information, either by reducing the quality of information they can gather, or invalidating the information they already have. Finally, we shall examine whether such future-oriented behaviour involves future planning and ask why this and other cognitive abilities might have evolved in corvids. PMID:20156820

  17. Problems faced by food-caching corvids and the evolution of cognitive solutions.

    PubMed

    Grodzinski, Uri; Clayton, Nicola S

    2010-03-27

    The scatter hoarding of food, or caching, is a widespread and well-studied behaviour. Recent experiments with caching corvids have provided evidence for episodic-like memory, future planning and possibly mental attribution, all cognitive abilities that were thought to be unique to humans. In addition to the complexity of making flexible, informed decisions about caching and recovering, this behaviour is underpinned by a motivationally controlled compulsion to cache. In this review, we shall first discuss the compulsive side of caching both during ontogeny and in the caching behaviour of adult corvids. We then consider some of the problems that these birds face and review the evidence for the cognitive abilities they use to solve them. Thus, the emergence of episodic-like memory is viewed as a solution for coping with food perishability, while the various cache-protection and pilfering strategies may be sophisticated tools to deprive competitors of information, either by reducing the quality of information they can gather, or invalidating the information they already have. Finally, we shall examine whether such future-oriented behaviour involves future planning and ask why this and other cognitive abilities might have evolved in corvids.

  18. High Performance Analytics with the R3-Cache

    NASA Astrophysics Data System (ADS)

    Eavis, Todd; Sayeed, Ruhan

    Contemporary data warehouses now represent some of the world’s largest databases. As these systems grow in size and complexity, however, it becomes increasingly difficult for brute force query processing approaches to meet the performance demands of end users. Certainly, improved indexing and more selective view materialization are helpful in this regard. Nevertheless, with warehouses moving into the multi-terabyte range, it is clear that the minimization of external memory accesses must be a primary performance objective. In this paper, we describe the R3-cache, a natively multi-dimensional caching framework designed specifically to support sophisticated warehouse/OLAP environments. R3-cache is based upon an in-memory version of the R-tree that has been extended to support buffer pages rather than disk blocks. A key strength of the R3-cache is that it is able to utilize multi-dimensional fragments of previous query results so as to significantly minimize the frequency and scale of disk accesses. Moreover, the new caching model directly accommodates the standard relational storage model and provides mechanisms for pro-active updates that exploit the existence of query “hot spots”. The current prototype has been evaluated as a component of the Sidera DBMS, a “shared nothing” parallel OLAP server designed for multi-terabyte analytics. Experimental results demonstrate significant performance improvements relative to simpler alternatives.

  19. Enabling MPEG-2 video playback in embedded systems through improved data cache efficiency

    NASA Astrophysics Data System (ADS)

    Soderquist, Peter; Leeser, Miriam E.

    1999-01-01

    Digital video decoding, enabled by the MPEG-2 Video standard, is an important future application for embedded systems, particularly PDAs and other information appliances. Many such systems require portability and wireless communication capabilities, and thus face severe limitations in size and power consumption. This places a premium on integration and efficiency, and favors software solutions for video functionality over specialized hardware. The processors in most embedded systems currently lack the computational power needed to perform video decoding, but a related and equally important problem is the required data bandwidth, and the need to cost-effectively ensure adequate data supply. MPEG data sets are very large, and generate significant amounts of excess memory traffic for standard data caches, up to 100 times the amount required for decoding. Meanwhile, cost and power limitations restrict cache sizes in embedded systems. Some systems, including many media processors, eliminate caches in favor of memories under direct, painstaking software control in the manner of digital signal processors. Yet MPEG data has locality which caches can exploit if properly optimized, providing fast, flexible, and automatic data supply. We propose a set of enhancements which target the specific needs of the heterogeneous data types within the MPEG decoder working set. These optimizations significantly improve the efficiency of small caches, reducing cache-memory traffic by almost 70 percent, and can make an enhanced 4 KB cache perform better than a standard 1 MB cache. This performance improvement can enable high-resolution, full frame rate video playback in cheaper, smaller systems than would otherwise be possible.

  20. Cache directory lookup reader set encoding for partial cache line speculation support

    DOEpatents

    Gara, Alan; Ohmacht, Martin

    2014-10-21

    In a multiprocessor system, with conflict checking implemented in a directory lookup of a shared cache memory, a reader set encoding permits dynamic recordation of read accesses. The reader set encoding includes an indication of a portion of a line read, for instance by indicating boundaries of read accesses. Different encodings may apply to different types of speculative execution.

  1. MELOC - Memory and Location Optimized Caching for Mobile Ad Hoc Networks

    DTIC Science & Technology

    2011-01-01

    required for such environments. Moreover, nodes located at the centre have to be chosen as cache locations, since this reduces the chance of being attacked... Example 3: Sharing of music and videos is popular among mobile users. Instead of downloading... The two-tier caching scheme discussed in this paper is acoustic. The characteristics of two-tier caching are as follows: the content of data to be

  2. Corvid Re-Caching without ‘Theory of Mind’: A Model

    PubMed Central

    van der Vaart, Elske; Verbrugge, Rineke; Hemelrijk, Charlotte K.

    2012-01-01

    Scrub jays are thought to use many tactics to protect their caches. For instance, they predominantly bury food far away from conspecifics, and if they must cache while being watched, they often re-cache their worms later, once they are in private. Two explanations have been offered for such observations, and they are intensely debated. First, the birds may reason about their competitors' mental states, with a ‘theory of mind’; alternatively, they may apply behavioral rules learned in daily life. Although this second hypothesis is cognitively simpler, it does seem to require a different, ad-hoc behavioral rule for every caching and re-caching pattern exhibited by the birds. Our new theory avoids this drawback by explaining a large variety of patterns as side-effects of stress and the resulting memory errors. Inspired by experimental data, we assume that re-caching is not motivated by a deliberate effort to safeguard specific caches from theft, but by a general desire to cache more. This desire is brought on by stress, which is determined by the presence and dominance of onlookers, and by unsuccessful recovery attempts. We study this theory in two experiments similar to those done with real birds with a kind of ‘virtual bird’, whose behavior depends on a set of basic assumptions about corvid cognition, and a well-established model of human memory. Our results show that the ‘virtual bird’ acts as the real birds did; its re-caching reflects whether it has been watched, how dominant its onlooker was, and how close to that onlooker it has cached. This happens even though it cannot attribute mental states, and it has only a single behavioral rule assumed to be previously learned. Thus, our simulations indicate that corvid re-caching can be explained without sophisticated social cognition. Given our specific predictions, our theory can easily be tested empirically. PMID:22396799

  3. Cache-Cache Comparison for Supporting Meaningful Learning

    ERIC Educational Resources Information Center

    Wang, Jingyun; Fujino, Seiji

    2015-01-01

    The paper presents a meaningful discovery learning environment called "cache-cache comparison" for a personalized learning support system. The process of seeking hidden relations or concepts in "cache-cache comparison" is intended to encourage learners to actively locate new knowledge in their knowledge framework and check…

  4. DSP code optimization based on cache

    NASA Astrophysics Data System (ADS)

    Xu, Chengfa; Li, Chengcheng; Tang, Bin

    2013-03-01

    A DSP program often runs less efficiently on the target board than it does in software simulation during development, mainly because of the user's improper use and incomplete understanding of the cache-based memory. This paper takes the TI TMS320C6455 DSP as an example, analyzes its two-level internal cache, and summarizes methods of code optimization. The processor can achieve its best performance when these code optimization methods are used. Finally, a specific application to radar signal processing is presented. Experimental results show that these optimizations are effective.

  5. Simplifying and speeding the management of intra-node cache coherence

    DOEpatents

    Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton on Hudson, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Phillip [Cortlandt Manor, NY; Hoenicke, Dirk [Ossining, NY; Ohmacht, Martin [Yorktown Heights, NY

    2012-04-17

    A method and apparatus for managing coherence between two processors of a two-processor node of a multi-processor computer system. Generally, the present invention relates to a software algorithm that simplifies and significantly speeds the management of cache coherence in a message-passing parallel computer, and to hardware apparatus that assists this cache coherence algorithm. The software algorithm uses the opening and closing of put/get windows to coordinate the activities required to achieve cache coherence. The hardware apparatus may be an extension to the hardware address decode that creates, in the physical memory address space of the node, an area of virtual memory that (a) does not actually exist, and (b) is therefore able to respond instantly to read and write requests from the processing elements.

  6. Interacting Cache memories: evidence for flexible memory use by Western Scrub-Jays (Aphelocoma californica).

    PubMed

    Clayton, Nicola S; Yu, Kara Shirley; Dickinson, Anthony

    2003-01-01

    When Western Scrub-Jays (Aphelocoma californica) cached and recovered perishable crickets, N. S. Clayton, K. S. Yu, and A. Dickinson (2001) reported that the jays rapidly learned to search for fresh crickets after a 1-day retention interval (RI) between caching and recovery but to avoid searching for perished crickets after a 4-day RI. In the present experiments, the jays generalized their search preference for crickets to intermediate RIs and used novel information about the rate of decay of crickets presented during the RI to reverse these search preferences at recovery. The authors interpret this reversal as evidence that the birds can integrate information about the caching episode with new information presented during the RI.

  7. Software-Controlled Caches in the VMP Multiprocessor

    DTIC Science & Technology

    1986-03-01

    programming system level that is tuned for the VMP design. In this vein, we are interested in exploring how far the software support can go to ...handled in software, analogously to the handling of virtual memory page faults. Hardware support for...ensure good behavior. Each cache miss results in bus traffic. Table 2 provides the bus cost for the "average" cache miss.

  8. Distributed Name Servers: Naming and Caching in Large Distributed Computing Environments

    DTIC Science & Technology

    1985-12-01

    transmission rate of the communication medium, transmission over a 56K bps line costs approximately 54r, and similarly, communication over a 9.6K...memories for modern computer systems attempt to maximize the hit ratio for a fixed-size cache by utilizing intelligent cache replacement algorithms

  9. Cache and energy efficient algorithms for Nussinov's RNA Folding.

    PubMed

    Zhao, Chunchun; Sahni, Sartaj

    2017-12-06

    An RNA folding/RNA secondary structure prediction algorithm determines the non-nested/pseudoknot-free structure by maximizing the number of complementary base pairs and minimizing the energy. Several implementations of Nussinov's classical RNA folding algorithm have been proposed. Our focus is to obtain run time and energy efficiency by reducing the number of cache misses. Three cache-efficient algorithms, ByRow, ByRowSegment and ByBox, for Nussinov's RNA folding are developed. Using a simple LRU cache model, we show that the Classical algorithm of Nussinov has the highest number of cache misses, followed by the algorithms Transpose (Li et al.), ByRow, ByRowSegment, and ByBox (in this order). Extensive experiments conducted on four computational platforms (Xeon E5, AMD Athlon 64 X2, Intel I7 and PowerPC A2) using two programming languages (C and Java) show that our cache-efficient algorithms are also efficient in terms of run time and energy. Our benchmarking shows that, depending on the computational platform and programming language, either ByRow or ByBox gives the best run time and energy performance. The C versions of these algorithms reduce run time by as much as 97.2% and energy consumption by as much as 88.8% relative to Classical, and by as much as 56.3% and 57.8% relative to Transpose. The Java versions reduce run time by as much as 98.3% relative to Classical and by as much as 75.2% relative to Transpose. Transpose achieves run time and energy efficiency at the expense of memory, as it takes twice the memory required by Classical. The memory required by ByRow, ByRowSegment, and ByBox is the same as that of Classical. As a result, using the same amount of memory, the algorithms proposed by us can solve problems up to 40% larger than those solvable by Transpose.
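
    For reference, the classical Nussinov recurrence that the ByRow/ByRowSegment/ByBox variants reorder can be written directly as the textbook dynamic program below; this is the unoptimized baseline, not the paper's cache-efficient code, and the pairing rules and scoring (one point per complementary pair) are the standard textbook ones.

```python
# Classical (non-cache-optimized) Nussinov dynamic program: maximize the number
# of complementary, non-crossing base pairs in an RNA sequence.

PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def nussinov(seq, min_loop=1):
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):            # fill by increasing subsequence length
        for i in range(n - span):
            j = i + span
            best = max(dp[i + 1][j], dp[i][j - 1])
            if (seq[i], seq[j]) in PAIRS:
                best = max(best, dp[i + 1][j - 1] + 1)
            for k in range(i + 1, j):              # bifurcation into two subproblems
                best = max(best, dp[i][k] + dp[k + 1][j])
            dp[i][j] = best
    return dp[0][n - 1]

if __name__ == "__main__":
    print(nussinov("GGGAAAUCC"))   # prints the maximum pair count for a small example
```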

  10. Elements of episodic-like memory in animals.

    PubMed

    Clayton, N S; Griffiths, D P; Emery, N J; Dickinson, A

    2001-09-29

    A number of psychologists have suggested that episodic memory is a uniquely human phenomenon and, until recently, there was little evidence that animals could recall a unique past experience and respond appropriately. Experiments on food-caching memory in scrub jays question this assumption. On the basis of a single caching episode, scrub jays can remember when and where they cached a variety of foods that differ in the rate at which they degrade, in a way that is inexplicable by relative familiarity. They can update their memory of the contents of a cache depending on whether or not they have emptied the cache site, and can also remember where another bird has hidden caches, suggesting that they encode rich representations of the caching event. They make temporal generalizations about when perishable items should degrade and also remember the relative time since caching when the same food is cached in distinct sites at different times. These results show that jays form integrated memories for the location, content and time of caching. This memory capability fulfils Tulving's behavioural criteria for episodic memory and is thus termed 'episodic-like'. We suggest that several features of episodic memory may not be unique to humans.

  11. Analysis of power gating in different hierarchical levels of 2MB cache, considering variation

    NASA Astrophysics Data System (ADS)

    Jafari, Mohsen; Imani, Mohsen; Fathipour, Morteza

    2015-09-01

    This article reintroduces the power gating technique at different hierarchical levels of static random-access memory (SRAM) design, including the cell, row, bank and entire cache memory, in a 16 nm FinFET technology. Different SRAM cell structures, such as 6T, 8T, 9T and 10T, are used in the design of a 2MB cache memory. The power reduction of the entire cache memory employing cell-level optimisation is 99.7%, at the expense of area and other stability overheads. The power saving of the cell-level optimisation is 3× (1.2×) higher than power gating at the cache (bank) level due to its superior selectivity. The access delay times are allowed to increase by 4% at the same energy-delay product to achieve the best power reduction for each supply voltage and optimisation level. The results show that row-level power gating is best for optimising the power of the entire cache with the fewest drawbacks. Comparisons of cells show that cells whose bodies have higher power consumption are the best candidates for the power gating technique in row-level optimisation. The technique has the lowest percentage of saving at the minimum energy point (MEP) of the design. Power gating also improves the power variation of all structures by at least 70%.

  12. Solutions and debugging for data consistency in multiprocessors with noncoherent caches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernstein, D.; Mendelson, B.; Breternitz, M. Jr.

    1995-02-01

    We analyze two important problems that arise in shared-memory multiprocessor systems. The stale data problem involves ensuring that data items in the local memory of individual processors are current, independent of writes done by other processors. False sharing occurs when two processors have copies of the same shared data block but update different portions of the block. The false sharing problem involves guaranteeing that subsequent writes are properly combined. In modern architectures these problems are usually solved in hardware, by exploiting mechanisms for hardware controlled cache consistency. This leads to more expensive and nonscalable designs. Therefore, we are concentrating on software methods for ensuring cache consistency that would allow for affordable and scalable multiprocessing systems. Unfortunately, providing software control is nontrivial, both for the compiler writer and for the application programmer. For this reason we are developing a debugging environment that will facilitate the development of compiler-based techniques and will help the programmer to tune his or her application using explicit cache management mechanisms. We extend the notion of a race condition for the IBM Shared Memory System POWER/4, taking into consideration its noncoherent caches, and propose techniques for detection of false sharing problems. Identification of the stale data problem is discussed as well, and solutions are suggested.

  13. Improving energy efficiency of Embedded DRAM Caches for High-end Computing Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mittal, Sparsh; Vetter, Jeffrey S; Li, Dong

    2014-01-01

    With increasing system core count, the size of the last-level cache (LLC) has increased, and since SRAM consumes high leakage power, the power consumption of LLCs is becoming a significant fraction of processor power consumption. To address this, researchers have used embedded DRAM (eDRAM) LLCs, which consume little leakage power. However, eDRAM caches consume a significant amount of energy in the form of refresh energy. In this paper, we propose ESTEEM, an energy saving technique for embedded DRAM caches. ESTEEM uses dynamic cache reconfiguration to turn off a portion of the cache to save both leakage and refresh energy. It logically divides the cache sets into multiple modules and turns off a possibly different number of ways in each module. Microarchitectural simulations confirm that ESTEEM is effective in improving performance and energy efficiency and provides better results than a recently-proposed eDRAM cache energy saving technique, namely Refrint. For single- and dual-core simulations, the average energy saving in the memory subsystem (LLC+main memory) on using ESTEEM is 25.8% and 32.6%, respectively, and the average weighted speedups are 1.09X and 1.22X, respectively. Additional experiments confirm that ESTEEM works well for a wide range of system parameters.

  14. A performance study of the time-varying cache behavior: a study on APEX, Mantevo, NAS, and PARSEC

    DOE PAGES

    Siddique, Nafiul A.; Grubel, Patricia A.; Badawy, Abdel-Hameed A.; ...

    2017-09-20

    Cache has long been used to minimize the latency of main memory accesses by storing frequently used data near the processor. Processor performance depends on the underlying cache performance; therefore, significant research has been done to identify the most crucial metrics of cache performance. Although the majority of research focuses on measuring cache hit rates and data movement as the primary cache performance metrics, cache utilization is also significantly important. We investigate application locality using cache utilization metrics. In addition, we present cache utilization and traditional cache performance metrics as the program progresses, providing detailed insights into the dynamic behavior of parallel applications from four benchmark suites running on multiple cores. We explore cache utilization for APEX, Mantevo, NAS, and PARSEC, mostly scientific benchmark suites. Our results indicate that 40% of the data bytes in a cache line are accessed at least once before line eviction, and that, on average, a byte is accessed two times before the cache line is evicted for these applications. Moreover, we present runtime cache utilization as well as conventional performance metrics that together give a holistic understanding of cache behavior. To facilitate this research, we build a memory simulator incorporated into the Structural Simulation Toolkit (Rodrigues et al. in SIGMETRICS Perform Eval Rev 38(4):37–42, 2011). Finally, our results suggest that a variable cache line size can result in better performance and can also conserve power.
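
    The line-utilization metric itself is easy to prototype. The toy fully-associative LRU model below is an assumption-laden sketch, not the SST-based simulator used in the study: it records which bytes of each resident line are touched and reports the average fraction touched before eviction.

```python
# Toy fully-associative LRU model that measures cache-line utilization:
# the fraction of bytes in a line touched before the line is evicted.
from collections import OrderedDict

def line_utilization(addresses, line_size=64, num_lines=4):
    resident = OrderedDict()           # line tag -> set of byte offsets touched
    touched_fractions = []
    for addr in addresses:
        tag, offset = addr // line_size, addr % line_size
        if tag in resident:
            resident.move_to_end(tag)  # LRU update on a hit
        else:
            if len(resident) == num_lines:
                _, bytes_used = resident.popitem(last=False)   # evict the LRU line
                touched_fractions.append(len(bytes_used) / line_size)
            resident[tag] = set()
        resident[tag].add(offset)
    # account for lines still resident at the end of the trace
    touched_fractions += [len(b) / line_size for b in resident.values()]
    return sum(touched_fractions) / len(touched_fractions)

if __name__ == "__main__":
    trace = [0, 8, 64, 72, 640, 1280, 1920, 2560]   # byte addresses (illustrative)
    print(f"average line utilization: {line_utilization(trace):.2%}")
```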

  15. A performance study of the time-varying cache behavior: a study on APEX, Mantevo, NAS, and PARSEC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Siddique, Nafiul A.; Grubel, Patricia A.; Badawy, Abdel-Hameed A.

    Cache has long been used to minimize the latency of main memory accesses by storing frequently used data near the processor. Processor performance depends on the underlying cache performance; therefore, significant research has been done to identify the most crucial metrics of cache performance. Although the majority of research focuses on measuring cache hit rates and data movement as the primary cache performance metrics, cache utilization is also significantly important. We investigate application locality using cache utilization metrics. In addition, we present cache utilization and traditional cache performance metrics as the program progresses, providing detailed insights into the dynamic behavior of parallel applications from four benchmark suites running on multiple cores. We explore cache utilization for APEX, Mantevo, NAS, and PARSEC, mostly scientific benchmark suites. Our results indicate that 40% of the data bytes in a cache line are accessed at least once before line eviction, and that, on average, a byte is accessed two times before the cache line is evicted for these applications. Moreover, we present runtime cache utilization as well as conventional performance metrics that together give a holistic understanding of cache behavior. To facilitate this research, we build a memory simulator incorporated into the Structural Simulation Toolkit (Rodrigues et al. in SIGMETRICS Perform Eval Rev 38(4):37–42, 2011). Finally, our results suggest that a variable cache line size can result in better performance and can also conserve power.

  16. Conditional load and store in a shared memory

    DOEpatents

    Blumrich, Matthias A; Ohmacht, Martin

    2015-02-03

    A method, system and computer program product for implementing load-reserve and store-conditional instructions in a multi-processor computing system. The computing system includes a multitude of processor units and a shared memory cache, and each of the processor units has access to the memory cache. In one embodiment, the method comprises providing the memory cache with a series of reservation registers, and storing in these registers addresses reserved in the memory cache for the processor units as a result of issuing load-reserve requests. In this embodiment, when one of the processor units makes a request to store data in the memory cache using a store-conditional request, the reservation registers are checked to determine if an address in the memory cache is reserved for that processor unit. If an address in the memory cache is reserved for that processor, the data are stored at this address.
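
    A minimal functional sketch of the reservation-register bookkeeping described in the abstract, assuming one reservation register per processor and single-threaded execution for clarity; it illustrates the idea only and is not the patented hardware design.

```python
# Minimal single-threaded sketch of load-reserve / store-conditional bookkeeping
# with per-processor reservation registers in a shared cache (illustrative only).

class SharedCache:
    def __init__(self, num_processors):
        self.data = {}                                   # address -> value
        self.reservation = [None] * num_processors       # one register per processor

    def load_reserve(self, proc, addr):
        self.reservation[proc] = addr                    # record the reserved address
        return self.data.get(addr, 0)

    def store_conditional(self, proc, addr, value):
        if self.reservation[proc] != addr:
            return False                                 # reservation lost: store fails
        self.data[addr] = value
        # a successful store invalidates every other reservation on this address
        for p, r in enumerate(self.reservation):
            if r == addr:
                self.reservation[p] = None
        return True

if __name__ == "__main__":
    cache = SharedCache(num_processors=2)
    old = cache.load_reserve(proc=0, addr=0x100)
    cache.load_reserve(proc=1, addr=0x100)
    print(cache.store_conditional(proc=1, addr=0x100, value=old + 1))  # True
    print(cache.store_conditional(proc=0, addr=0x100, value=old + 5))  # False: reservation was cleared
```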

  17. Retrospective Cognition by Food-Caching Western Scrub-Jays

    ERIC Educational Resources Information Center

    de Kort, S.R.; Dickinson, A.; Clayton, N.S.

    2005-01-01

    Episodic-like memory, the retrospective component of cognitive time travel in animals, needs to fulfil three criteria to meet the behavioral properties of episodic memory as defined for humans. Here, we review results obtained with the cache-recovery paradigm with western scrub-jays and conclude that they fulfil these three criteria. The jays…

  18. Consolidation and reconsolidation of memory in black-capped chickadees (Poecile atricapillus).

    PubMed

    Barrett, Matthew C; Sherry, David F

    2012-12-01

    Multiple phases of protein synthesis are necessary for the synaptic modifications that consolidate long-term memory. The reconsolidation hypothesis supposes that information in long-term memory becomes labile and subject to change when retrieved and must be reconsolidated into long-term memory. The current study used the protein synthesis inhibitor anisomycin to examine memory consolidation in birds and to test the reconsolidation hypothesis. Black-capped chickadees store food and usually remember which of their caches they have emptied and which they have left full. In Experiment 1, anisomycin was injected either immediately and 2 hr after food caching, or 4 and 6 hr after food caching. Inhibition of protein synthesis impaired memory for cache sites 24 and 48 hr later. In Experiment 2, it was hypothesized that long-term memory for food caches becomes labile as predicted by the reconsolidation hypothesis when birds search for caches. Anisomycin was administered immediately after chickadees had searched for their caches. Inhibition of protein synthesis should disrupt memory for caches left full if these sites are retrieved from long-term memory and require reconsolidation. Control birds were later more likely to revisit full caches than caches they had emptied. Birds given anisomycin revisited both kinds of caches and did not distinguish between them. This result shows that reconsolidation of full caches into long-term memory is not necessary following search for cache sites, but also shows that protein synthesis-dependent consolidation is required for updating the status of emptied caches.

  19. AYUSH: A Technique for Extending Lifetime of SRAM-NVM Hybrid Caches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mittal, Sparsh; Vetter, Jeffrey S

    2014-01-01

    Recently, researchers have explored way-based hybrid SRAM-NVM (non-volatile memory) last level caches (LLCs) to bring the best of SRAM and NVM together. However, the limited write endurance of NVMs restricts the lifetime of these hybrid caches. We present AYUSH, a technique to enhance the lifetime of hybrid caches, which works by using data migration to preferentially use SRAM for storing frequently-reused data. Microarchitectural simulations confirm that AYUSH achieves a larger improvement in lifetime than a previous technique and also maintains performance and energy efficiency. For single, dual and quad-core workloads, the average increase in cache lifetime with AYUSH is 6.90X, 24.06X and 47.62X, respectively.

  20. Re-caching by Western scrub-jays (Aphelocoma californica) cannot be attributed to stress.

    PubMed

    Thom, James M; Clayton, Nicola S

    2013-01-01

    Western scrub-jays (Aphelocoma californica) live double lives, storing food for the future while raiding the stores of other birds. One tactic scrub-jays employ to protect stores is "re-caching"-relocating caches out of sight of would-be thieves. Recent computational modelling work suggests that re-caching might be mediated not by complex cognition, but by a combination of memory failure and stress. The "Stress Model" asserts that re-caching is a manifestation of a general drive to cache, rather than a desire to protect existing stores. Here, we present evidence strongly contradicting the central assumption of these models: that stress drives caching, irrespective of social context. In Experiment (i), we replicate the finding that scrub-jays preferentially relocate food they were watched hiding. In Experiment (ii) we find no evidence that stress increases caching. In light of our results, we argue that the Stress Model cannot account for scrub-jay re-caching.

  1. Combining instruction prefetching with partial cache locking to improve WCET in real-time systems.

    PubMed

    Ni, Fan; Long, Xiang; Wan, Han; Gao, Xiaopeng

    2013-01-01

    Caches play an important role in embedded systems in bridging the performance gap between a fast processor and slow memory, and prefetching mechanisms have been proposed to further improve cache performance. In real-time systems, however, the use of caches complicates Worst-Case Execution Time (WCET) analysis because of their unpredictable behavior. Modern embedded processors are often equipped with a locking mechanism to improve the timing predictability of the instruction cache; however, locking the whole cache may degrade cache performance and increase the WCET of the real-time application. In this paper, we propose an instruction-prefetching combined partial cache locking mechanism, which combines an instruction prefetching mechanism (termed BBIP) with partial cache locking to improve the WCET estimates of real-time applications. BBIP is an instruction prefetching mechanism we have previously proposed to improve the worst-case cache performance and, in turn, the worst-case execution time. Estimates on typical real-time applications show that the partial cache locking mechanism delivers remarkable WCET improvement over static analysis and full cache locking.

  2. Combining Instruction Prefetching with Partial Cache Locking to Improve WCET in Real-Time Systems

    PubMed Central

    Ni, Fan; Long, Xiang; Wan, Han; Gao, Xiaopeng

    2013-01-01

    Caches play an important role in embedded systems in bridging the performance gap between a fast processor and slow memory, and prefetching mechanisms have been proposed to further improve cache performance. In real-time systems, however, the use of caches complicates Worst-Case Execution Time (WCET) analysis because of their unpredictable behavior. Modern embedded processors are often equipped with a locking mechanism to improve the timing predictability of the instruction cache; however, locking the whole cache may degrade cache performance and increase the WCET of the real-time application. In this paper, we propose an instruction-prefetching combined partial cache locking mechanism, which combines an instruction prefetching mechanism (termed BBIP) with partial cache locking to improve the WCET estimates of real-time applications. BBIP is an instruction prefetching mechanism we have previously proposed to improve the worst-case cache performance and, in turn, the worst-case execution time. Estimates on typical real-time applications show that the partial cache locking mechanism delivers remarkable WCET improvement over static analysis and full cache locking. PMID:24386133

  3. A set-associative, fault-tolerant cache design

    NASA Technical Reports Server (NTRS)

    Lamet, Dan; Frenzel, James F.

    1992-01-01

    The design of a defect-tolerant control circuit for a set-associative cache memory is presented. The circuit maintains the stack ordering necessary for implementing the Least Recently Used (LRU) replacement algorithm. A discussion of programming techniques for bypassing defective blocks is included.
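
    A software analogue of the described behavior might look like the sketch below, which keeps LRU stack ordering for one set while masking defective ways out of allocation and replacement; the structure is an assumption for illustration and is not taken from the paper.

```python
# Software analogue of LRU stack ordering in one set of a set-associative cache,
# with defective ways masked out of allocation and replacement (illustrative only).

class CacheSet:
    def __init__(self, ways, defective=()):
        self.good_ways = [w for w in range(ways) if w not in set(defective)]
        self.tags = {}            # way -> tag currently stored there
        self.lru_stack = []       # least recently used way first, most recent last

    def access(self, tag):
        for way, t in self.tags.items():
            if t == tag:                                  # hit: move the way to the MRU position
                self.lru_stack.remove(way)
                self.lru_stack.append(way)
                return True
        # miss: allocate an unused good way, else replace the LRU good way
        free = [w for w in self.good_ways if w not in self.tags]
        victim = free[0] if free else self.lru_stack.pop(0)
        self.tags[victim] = tag
        if victim in self.lru_stack:
            self.lru_stack.remove(victim)
        self.lru_stack.append(victim)
        return False

if __name__ == "__main__":
    s = CacheSet(ways=4, defective=(2,))      # way 2 is bypassed
    for t in ["A", "B", "C", "A", "D"]:       # "D" evicts the LRU good way, which holds "B"
        s.access(t)
    print(s.tags)                              # way 2 is never allocated
```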

  4. Optimal and Scalable Caching for 5G Using Reinforcement Learning of Space-Time Popularities

    NASA Astrophysics Data System (ADS)

    Sadeghi, Alireza; Sheikholeslami, Fatemeh; Giannakis, Georgios B.

    2018-02-01

    Small basestations (SBs) equipped with caching units have the potential to handle the unprecedented demand growth in heterogeneous networks. Through low-rate backhaul connections with the backbone, SBs can prefetch popular files during off-peak traffic hours and serve them to the edge at peak periods. To prefetch intelligently, each SB must learn what and when to cache, while taking into account SB memory limitations, the massive number of available contents, the unknown popularity profiles, and the space-time popularity dynamics of user file requests. In this work, local and global Markov processes model user requests, and a reinforcement learning (RL) framework is put forth for finding the optimal caching policy when the transition probabilities involved are unknown. Joint consideration of global and local popularity demands along with cache-refreshing costs allows for a simple yet practical asynchronous caching approach. The novel RL-based caching relies on a Q-learning algorithm to implement the optimal policy in an online fashion, enabling the cache control unit at the SB to learn, track, and possibly adapt to the underlying dynamics. To endow the algorithm with scalability, a linear function approximation of the proposed Q-learning scheme is introduced, offering faster convergence as well as reduced complexity and memory requirements. Numerical tests corroborate the merits of the proposed approach in various realistic settings.
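
    Only to show the shape of the Q-learning update involved, here is a toy single-slot caching agent; the popularity model, refresh cost and hyper-parameters are all assumptions, and the real scheme additionally models global and local space-time popularity dynamics rather than a fixed request distribution.

```python
# Toy tabular Q-learning for a one-slot cache: after serving a request, decide
# which file to hold for the next request. All numbers are illustrative assumptions.
import random

FILES, ALPHA, GAMMA, EPS, REFRESH_COST = 2, 0.1, 0.9, 0.1, 0.2
Q = {(cached, a): 0.0 for cached in range(FILES) for a in range(FILES)}  # action = file to cache next

def request():
    return 0 if random.random() < 0.8 else 1      # file 0 is the popular one

random.seed(0)
cached = 0
for _ in range(5000):
    req = request()
    reward = 1.0 if req == cached else 0.0        # request served from the cache
    # epsilon-greedy choice of what to hold for the next request
    if random.random() < EPS:
        action = random.randrange(FILES)
    else:
        action = max(range(FILES), key=lambda a: Q[(cached, a)])
    if action != cached:
        reward -= REFRESH_COST                    # cost of refreshing the cache over backhaul
    next_best = max(Q[(action, a)] for a in range(FILES))
    Q[(cached, action)] += ALPHA * (reward + GAMMA * next_best - Q[(cached, action)])
    cached = action

print({s: round(v, 2) for s, v in Q.items()})     # keeping the popular file cached scores highest
```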

  5. Evict on write, a management strategy for a prefetch unit and/or first level cache in a multiprocessor system with speculative execution

    DOEpatents

    Gara, Alan; Ohmacht, Martin

    2014-09-16

    In a multiprocessor system with at least two levels of cache, a speculative thread may run on a core processor in parallel with other threads. When the thread seeks to do a write to main memory, this access is to be written through the first level cache to the second level cache. After the write-through, the corresponding line is deleted from the first level cache and/or prefetch unit, so that any further accesses to the same location in main memory have to be retrieved from the second level cache. The second level cache keeps track of multiple versions of data, where more than one speculative thread is running in parallel, while the first level cache does not have any of the versions during speculation. A switch allows choosing between modes of operation of a speculation blind first level cache.

  6. Is random access memory random?

    NASA Technical Reports Server (NTRS)

    Denning, P. J.

    1986-01-01

    Most software is constructed on the assumption that the programs and data are stored in random access memory (RAM). Physical limitations on the relative speeds of processor and memory elements lead to a variety of memory organizations that match the processor addressing rate with the memory service rate. These include interleaved and cached memory. A very high fraction of a processor's address requests can be satisfied from the cache without reference to the main memory. The cache requests information from main memory in blocks that can be transferred at the full memory speed. Programmers who organize algorithms for locality can realize the highest performance from these computers.
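
    The closing point about locality can be made concrete with a simple traversal-order example; note that in CPython the effect is muted because nested lists are not contiguous arrays, so the gap between the two orders is far larger in compiled languages, and the timings below are machine-dependent.

```python
# Row-major vs column-major traversal of the same 2-D structure: the work is
# identical, but row order visits memory roughly in the order it is laid out,
# so fetched cache lines are reused before eviction.
import time

N = 2000
matrix = [[1] * N for _ in range(N)]

def sum_rows(m):
    total = 0
    for i in range(N):
        for j in range(N):
            total += m[i][j]          # consecutive elements of one row: good locality
    return total

def sum_cols(m):
    total = 0
    for j in range(N):
        for i in range(N):
            total += m[i][j]          # jumps to a different row on every access: poor locality
    return total

for fn in (sum_rows, sum_cols):
    t0 = time.perf_counter()
    fn(matrix)
    print(fn.__name__, f"{time.perf_counter() - t0:.3f} s")
```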

  7. Automated Cache Performance Analysis And Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohror, Kathryn

    While there is no lack of performance counter tools for coarse-grained measurement of cache activity, there is a critical lack of tools for relating data layout to cache behavior to application performance. Generally, any nontrivial optimizations are either not done at all, or are done "by hand", requiring significant time and expertise. To the best of our knowledge, no tool available to users measures the latency of memory reference instructions for particular addresses and makes this information available to users in an easy-to-use and intuitive way. In this project, we worked to enable the Open|SpeedShop performance analysis tool to gather memory reference latency information for specific instructions and memory addresses, and to gather and display this information in an easy-to-use and intuitive way to aid performance analysts in identifying problematic data structures in their codes. This tool was primarily designed for use in the supercomputer domain as well as grid, cluster, cloud-based parallel e-commerce, and engineering systems and middleware. Ultimately, we envision a tool to automate optimization of application cache layout and utilization in the Open|SpeedShop performance analysis tool. To commercialize this software, we worked to develop core capabilities for gathering enhanced memory usage performance data from applications and to create and apply novel methods for automatic data structure layout optimizations, tailoring the overall approach to support existing supercomputer and cluster programming models and constraints. In this Phase I project, we focused on the infrastructure necessary to gather performance data and present it in an intuitive way to users. With the advent of enhanced Precise Event-Based Sampling (PEBS) counters on recent Intel processor architectures and equivalent technology on AMD processors, we are now in a position to access memory reference information for particular addresses. Prior to the introduction of PEBS

  8. Massively parallel algorithms for trace-driven cache simulations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Greenberg, Albert G.; Lubachevsky, Boris D.

    1991-01-01

    Trace-driven cache simulation is central to computer design. A trace is a very long sequence of reference lines from main memory. At the t-th instant, reference x_t is hashed into a set of cache locations, the contents of which are then compared with x_t. If at the t-th instant x_t is not present in the cache, then it is said to be a miss, and is loaded into the cache set, possibly forcing the replacement of some other memory line, and making x_t present for the (t+1)-st instant. The problem of parallel simulation of a subtrace of N references directed to a C-line cache set is considered, with the aim of determining which references are misses and related statistics. A simulation method is presented for the Least Recently Used (LRU) policy, which, regardless of the set size C, runs in time O(log N) using N processors on the exclusive-read, exclusive-write (EREW) parallel model. A simpler LRU simulation algorithm is given that runs in O(C log N) time using N/log N processors. Timings are presented of the second algorithm's implementation on the MasPar MP-1, a machine with 16384 processors. A broad class of reference-based line replacement policies is considered, which includes LRU as well as the Least Frequently Used and Random replacement policies. A simulation method is presented for any such policy that, on any trace of length N directed to a C-line set, runs in O(C log N) time with high probability using N processors on the EREW model. The algorithms are simple, have very little space overhead, and are well suited for SIMD implementation.
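
    The sequential baseline that such parallel algorithms accelerate exploits the LRU inclusion property: one pass over the trace computing stack distances yields the miss count for every cache size C at once. A small sketch, with an assumed toy trace:

```python
# Sequential LRU stack simulation: because LRU caches satisfy the inclusion
# property, one pass computing stack distances gives the miss count for every
# cache size C simultaneously.
from collections import Counter

def stack_distances(trace):
    stack, dist = [], Counter()
    for line in trace:
        if line in stack:
            d = stack.index(line)      # 0 = most recently used
            stack.pop(d)
            dist[d] += 1
        else:
            dist["inf"] += 1           # cold miss: never referenced before
        stack.insert(0, line)
    return dist

def misses_for_size(dist, C):
    # a reference hits in a C-line LRU cache iff its stack distance is < C
    return sum(count for d, count in dist.items() if d == "inf" or d >= C)

if __name__ == "__main__":
    trace = ["a", "b", "c", "a", "b", "d", "a", "c"]
    d = stack_distances(trace)
    for C in (1, 2, 3, 4):
        print(f"cache size {C}: {misses_for_size(d, C)} misses")
```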

  9. A Measurement and Simulation Based Methodology for Cache Performance Modeling and Tuning

    NASA Technical Reports Server (NTRS)

    Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)

    1998-01-01

    We present a cache performance modeling methodology that facilitates the tuning of uniprocessor cache performance for applications executing on shared memory multiprocessors by accurately predicting the effects of source code level modifications. Measurements on a single processor are initially used for identifying parts of code where cache utilization improvements may significantly impact the overall performance. Cache simulation based on trace-driven techniques can be carried out without gathering detailed address traces. Minimal runtime information for modeling cache performance of a selected code block includes: base virtual addresses of arrays, virtual addresses of variables, and loop bounds for that code block. The rest of the information is obtained from the source code. We show that the cache performance predictions are as reliable as those obtained through trace-driven simulations. This technique is particularly helpful for the exploration of various "what-if" scenarios regarding the cache performance impact of alternative code structures. We explain and validate this methodology using a simple matrix-matrix multiplication program. We then apply this methodology to predict and tune the cache performance of two realistic scientific applications taken from the Computational Fluid Dynamics (CFD) domain.
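
    One of the source-level "what-if" transformations such a tool would evaluate is loop tiling, illustrated below on the matrix-matrix multiplication case the authors use for validation. The block size is an arbitrary assumption, and the Python version only shows the loop structure; the cache benefit appears when the same structure is used in compiled code.

```python
# Naive vs loop-tiled matrix multiplication: the tiled version revisits a small
# block of the operands many times while it still fits in cache. Both variants
# compute the same product; the block size is a tunable assumption.
import numpy as np

def matmul_naive(A, B):
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i, j] += A[i, k] * B[k, j]
    return C

def matmul_tiled(A, B, bs=32):
    n = A.shape[0]
    C = np.zeros((n, n))
    for ii in range(0, n, bs):
        for kk in range(0, n, bs):
            for jj in range(0, n, bs):
                # multiply one bs x bs block; its operands stay cache-resident
                for i in range(ii, min(ii + bs, n)):
                    for k in range(kk, min(kk + bs, n)):
                        a = A[i, k]
                        for j in range(jj, min(jj + bs, n)):
                            C[i, j] += a * B[k, j]
    return C

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A, B = rng.random((64, 64)), rng.random((64, 64))
    print(np.allclose(matmul_naive(A, B), matmul_tiled(A, B)))   # True
```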

  10. The ontogeny of food-caching behaviour in New Zealand robins (Petroica longipes).

    PubMed

    Clark, Lisabertha L; Shaw, Rachael C

    2018-06-01

    Hoarding or caching behaviour is a widely used paradigm for examining a range of cognitive processes in birds, such as social cognition and spatial memory. However, much is still unknown about how caching develops in young birds, especially in the wild. Studying the ontogeny of caching in the wild will help researchers to identify the mechanisms that shape this advantageous foraging strategy. We examined the ontogeny of food-caching behaviour in a wild New Zealand passerine, the North Island robin (Petroica longipes). For 12 weeks following fledging, we observed 34 juveniles to examine the development of caching and cache retrieval. Additionally, we compared the caching behaviour of juveniles at 12 weeks post-fledging to 35 adult robins to determine whether juveniles had developed adult-like caching behaviour by this age. Juveniles began caching mealworms shortly after achieving foraging independence. Multivariate analyses revealed that caching rate increased and handling time decreased with increasing age. Juveniles spontaneously began retrieving caches as soon as they had begun to cache, and their retrieval rates then remained constant throughout their ensuing development. Likewise, the number of sites used by juveniles did not change with age. Juvenile sex, caregiver sex and the duration of post-fledging parental care did not influence the development of caching, cache retrieval, the number of cache sites used or the time juveniles spent handling mealworms. At 12 weeks post-fledging, juveniles demonstrated levels of caching, cache retrieval and cache site usage that were comparable to adults, although juvenile prey handling time was still longer than that of adults. The spontaneous emergence of cache retrieval and the consistency in the number of cache sites used throughout development suggest that these aspects of caching in North Island robins are likely to be innate, but that age and experience have an important role in the development of adult caching behaviours.

  11. Efficacy of Code Optimization on Cache-Based Processors

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob F.; Saphir, William C.; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    In this paper a number of techniques for improving the cache performance of a representative piece of numerical software are presented. Target machines are popular processors from several vendors: MIPS R5000 (SGI Indy), MIPS R8000 (SGI PowerChallenge), MIPS R10000 (SGI Origin), DEC Alpha EV4 + EV5 (Cray T3D & T3E), IBM RS6000 (SP Wide-node), Intel PentiumPro (Ames' Whitney), Sun UltraSparc (NERSC's NOW). The optimizations all attempt to increase the locality of memory accesses, but they meet with rather varied and often counterintuitive success on the different computing platforms. We conclude that it may be genuinely impossible to obtain portable performance on the current generation of cache-based machines. At the least, it appears that the performance of modern commodity processors cannot be described with parameters defining the cache alone.
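
    The optimizations the abstract alludes to are locality-increasing source transformations. One canonical example of this kind of transformation (illustrative only, not a kernel from the paper) is loop blocking of matrix multiplication; the block size BS is an assumption that would normally be tuned to the target cache.

    ```c
    /* Illustrative only: loop blocking (tiling), one of the locality-improving
     * transformations of the kind the abstract discusses. The block size BS is
     * an assumption and would be tuned per cache in practice. */
    #define N  512
    #define BS 32

    void matmul_blocked(const double A[N][N], const double B[N][N], double C[N][N])
    {
        for (int ii = 0; ii < N; ii += BS)
            for (int kk = 0; kk < N; kk += BS)
                for (int jj = 0; jj < N; jj += BS)
                    /* Work on BS x BS tiles so the touched parts of A, B and C
                     * stay resident in cache across the inner loops. */
                    for (int i = ii; i < ii + BS; i++)
                        for (int k = kk; k < kk + BS; k++) {
                            double a = A[i][k];
                            for (int j = jj; j < jj + BS; j++)
                                C[i][j] += a * B[k][j];
                        }
    }
    ```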

  12. Dynamic Allocation of SPM Based on Time-Slotted Cache Conflict Graph for System Optimization

    NASA Astrophysics Data System (ADS)

    Wu, Jianping; Ling, Ming; Zhang, Yang; Mei, Chen; Wang, Huan

    This paper proposes a novel dynamic Scratch-pad Memory allocation strategy to optimize the energy consumption of the memory sub-system. Firstly, the whole program execution process is sliced into several time slots along the temporal dimension; thereafter, a Time-Slotted Cache Conflict Graph (TSCCG) is introduced to model the behavior of Data Cache (D-Cache) conflicts within each time slot. Then, Integer Nonlinear Programming (INP), which avoids a time-consuming linearization process, is applied to select the most profitable data pages. A Virtual Memory System (VMS) is adopted to remap those data pages that would cause severe Cache conflicts within a time slot to SPM. In order to minimize the swapping overhead of dynamic SPM allocation, a novel SPM controller with a tightly coupled DMA is introduced to issue the swapping operations without the CPU's intervention. Last but not least, this paper quantitatively discusses the fluctuation of the system energy profit for different MMU page sizes and Time Slot durations. According to our design space exploration, the proposed method can optimize all of the data segments, including global data, heap and stack data in general, and reduce the total energy consumption by 27.28% on average, up to 55.22%, with a marginal performance improvement. Compared to the conventional static CCG (Cache Conflict Graph), our approach obtains a 24.7% energy profit on average, up to 30.5%, with a slight boost in performance.
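
    The page-selection step above is formulated as an INP problem in the paper. As a simplified, hedged stand-in for that step (not the paper's formulation), the sketch below greedily picks, for one time slot, the pages whose estimated conflict cost most exceeds their DMA swapping cost until the SPM is full; all structures and cost values are assumptions.

    ```c
    /* Simplified sketch, not the paper's INP formulation: per time slot, greedily
     * move the pages with the highest estimated cache-conflict cost into SPM
     * until its capacity is exhausted. Structures and cost values are assumptions. */
    #include <stdlib.h>

    struct page {
        int  id;
        long conflict_cost;   /* estimated energy lost to D-Cache conflicts */
        long swap_cost;       /* DMA cost of moving the page into SPM       */
    };

    static int by_profit_desc(const void *a, const void *b)
    {
        const struct page *pa = a, *pb = b;
        long fa = pa->conflict_cost - pa->swap_cost;
        long fb = pb->conflict_cost - pb->swap_cost;
        return (fb > fa) - (fb < fa);
    }

    /* Returns how many pages were selected; selected ids are written to out[]. */
    int select_pages_for_slot(struct page *pages, int n, int spm_capacity_pages, int *out)
    {
        qsort(pages, n, sizeof pages[0], by_profit_desc);
        int chosen = 0;
        for (int i = 0; i < n && chosen < spm_capacity_pages; i++) {
            long profit = pages[i].conflict_cost - pages[i].swap_cost;
            if (profit <= 0)
                break;              /* remapping this page would not pay off */
            out[chosen++] = pages[i].id;
        }
        return chosen;
    }
    ```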

  13. Multicast for savings in cache-based video distribution

    NASA Astrophysics Data System (ADS)

    Griwodz, Carsten; Zink, Michael; Liepert, Michael; On, Giwon; Steinmetz, Ralf

    1999-12-01

    Internet video-on-demand (VoD) today streams videos directly from server to clients, because re-distribution infrastructure is not yet established. Intranet solutions exist but are typically managed centrally. Caching may remove the need for this management; however, existing web caching strategies are not applicable because they were designed for different conditions. We propose movie distribution by means of caching and study its feasibility from the service providers' point of view. We introduce the combination of our reliable multicast protocol LCRTP for caching hierarchies with our enhancement of the patching technique for bandwidth-friendly True VoD, which does not depend on network resource guarantees.

  14. Evidence against observational spatial memory for cache locations of conspecifics in marsh tits Poecile palustris.

    PubMed

    Urhan, A Utku; Emilsson, Ellen; Brodin, Anders

    2017-01-01

    Many species in the family Paridae, such as marsh tits Poecile palustris, are large-scale scatter hoarders of food that make cryptic caches and disperse these in large year-round territories. The perhaps most well-known species in the family, the great tit Parus major, does not store food itself but is skilled in stealing caches from the other species. We have previously demonstrated that great tits are able to memorise positions of caches they have observed marsh tits make and later return and steal the food. As great tits are explorative in nature and unusually good learners, it is possible that such "memorisation of caches from a distance" is a unique ability of theirs. The other possibility is that this ability is general in the parid family. Here, we tested marsh tits in the same experimental set-up in which we previously tested great tits. We allowed caged marsh tits to observe a caching conspecific in a specially designed indoor arena. After a retention interval of 1 or 24 h, we allowed the observer to enter the arena and search for the caches. The marsh tits showed no evidence of such observational memorisation ability, and we believe that such ability is more useful for a non-hoarding species. Why should a marsh tit that memorises hundreds of its own caches in the field bother with the difficult task of memorising other individuals' caches? We argue that the close-up memorisation procedure that marsh tits use at their own caches may be a different type of observational learning than memorisation of caches made by others. For example, the latter must be done from a distance and hence may require the ability to adopt an allocentric perspective, i.e. the ability to visualise the cache from the hoarder's perspective. Members of the Paridae family are known to possess foraging techniques that are cognitively advanced. Previously, we have demonstrated that a non-hoarding parid species, the great tit P. major, is able to memorise positions of caches

  15. Magpies can use local cues to retrieve their food caches.

    PubMed

    Feenders, Gesa; Smulders, Tom V

    2011-03-01

    Much importance has been placed on the use of spatial cues by food-hoarding birds in the retrieval of their caches. In this study, we investigate whether food-hoarding birds can be trained to use local cues ("beacons") in their cache retrieval. We test magpies (Pica pica) in an active hoarding-retrieval paradigm, where local cues are always reliable, while spatial cues are not. Our results show that the birds use the local cues to retrieve their caches, even when occasionally contradicting spatial information is available. The design of our study does not allow us to test rigorously whether the birds prefer using local over spatial cues, nor to investigate the process through which they learn to use local cues. We furthermore provide evidence that magpies develop landmark preferences, which improve their retrieval accuracy. Our findings support the hypothesis that birds are flexible in their use of memory information, using a combination of the most reliable or salient information to retrieve their caches. © Springer-Verlag 2010

  16. On the Efficacy of Source Code Optimizations for Cache-Based Systems

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob F.; Saphir, William C.

    1998-01-01

    Obtaining high performance without machine-specific tuning is an important goal of scientific application programmers. Since most scientific processing is done on commodity microprocessors with hierarchical memory systems, this goal of "portable performance" can be achieved if a common set of optimization principles is effective for all such systems. It is widely believed, or at least hoped, that portable performance can be realized. The rule of thumb for optimization on hierarchical memory systems is to maximize temporal and spatial locality of memory references by reusing data and minimizing memory access stride. We investigate the effects of a number of optimizations on the performance of three related kernels taken from a computational fluid dynamics application. Timing the kernels on a range of processors, we observe an inconsistent and often counterintuitive impact of the optimizations on performance. In particular, code variations that have a positive impact on one architecture can have a negative impact on another, and variations expected to be unimportant can produce large effects. Moreover, we find that cache miss rates - as reported by a cache simulation tool, and confirmed by hardware counters - only partially explain the results. By contrast, the compiler-generated assembly code provides more insight by revealing the importance of processor-specific instructions and of compiler maturity, both of which strongly, and sometimes unexpectedly, influence performance. We conclude that it is difficult to obtain performance portability on modern cache-based computers, and comment on the implications of this result.
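
    One of the simplest variations of the kind studied in this line of work concerns memory access stride. The sketch below (illustrative only, not one of the paper's CFD kernels) contrasts a unit-stride traversal of a two-dimensional C array with a large-stride traversal of the same data.

    ```c
    /* Illustrative variation, not a kernel from the paper: traversing a C array
     * in row-major order (unit stride) versus column-major order (stride N).
     * Both compute the same sum; on most cache-based systems the unit-stride
     * version touches each cache line once instead of N times. */
    #define N 1024

    double sum_unit_stride(const double a[N][N])
    {
        double s = 0.0;
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                s += a[i][j];          /* consecutive addresses: good spatial locality */
        return s;
    }

    double sum_large_stride(const double a[N][N])
    {
        double s = 0.0;
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                s += a[i][j];          /* stride of N doubles: poor spatial locality */
        return s;
    }
    ```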

  17. On the Efficacy of Source Code Optimizations for Cache-Based Systems

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob F.; Saphir, William C.; Saini, Subhash (Technical Monitor)

    1998-01-01

    Obtaining high performance without machine-specific tuning is an important goal of scientific application programmers. Since most scientific processing is done on commodity microprocessors with hierarchical memory systems, this goal of "portable performance" can be achieved if a common set of optimization principles is effective for all such systems. It is widely believed, or at least hoped, that portable performance can be realized. The rule of thumb for optimization on hierarchical memory systems is to maximize temporal and spatial locality of memory references by reusing data and minimizing memory access stride. We investigate the effects of a number of optimizations on the performance of three related kernels taken from a computational fluid dynamics application. Timing the kernels on a range of processors, we observe an inconsistent and often counterintuitive impact of the optimizations on performance. In particular, code variations that have a positive impact on one architecture can have a negative impact on another, and variations expected to be unimportant can produce large effects. Moreover, we find that cache miss rates, as reported by a cache simulation tool and confirmed by hardware counters, only partially explain the results. By contrast, the compiler-generated assembly code provides more insight by revealing the importance of processor-specific instructions and of compiler maturity, both of which strongly, and sometimes unexpectedly, influence performance. We conclude that it is difficult to obtain performance portability on modern cache-based computers, and comment on the implications of this result.

  18. Architectural Techniques For Managing Non-volatile Caches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mittal, Sparsh

    As chip power dissipation becomes a critical challenge in scaling processor performance, computer architects are forced to fundamentally rethink the design of modern processors and hence, the chip-design industry is now at a major inflection point in its hardware roadmap. The high leakage power and low density of SRAM poses serious obstacles in its use for designing large on-chip caches and for this reason, researchers are exploring non-volatile memory (NVM) devices, such as spin torque transfer RAM, phase change RAM and resistive RAM. However, since NVMs are not strictly superior to SRAM, effective architectural techniques are required for making them a universal memory solution. This book discusses techniques for designing processor caches using NVM devices. It presents algorithms and architectures for improving their energy efficiency, performance and lifetime. It also provides both qualitative and quantitative evaluation to help the reader gain insights and motivate them to explore further. This book will be highly useful for beginners as well as veterans in computer architecture, chip designers, product managers and technical marketing professionals.

  19. EqualChance: Addressing Intra-set Write Variation to Increase Lifetime of Non-volatile Caches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mittal, Sparsh; Vetter, Jeffrey S

    To address the limitations of SRAM such as high-leakage and low-density, researchers have explored use of non-volatile memory (NVM) devices, such as ReRAM (resistive RAM) and STT-RAM (spin transfer torque RAM) for designing on-chip caches. A crucial limitation of NVMs, however, is that their write endurance is low and the large intra-set write variation introduced by existing cache management policies may further exacerbate this problem, thereby reducing the cache lifetime significantly. We present EqualChance, a technique to increase cache lifetime by reducing intra-set write variation. EqualChance works by periodically changing the physical cache-block location of a write-intensive data item within a set to achieve wear-leveling. Simulations using workloads from SPEC CPU2006 suite and HPC (high-performance computing) field show that EqualChance improves the cache lifetime by 4.29X. Also, its implementation overhead is small, and it incurs very small performance and energy loss.
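
    As described above, EqualChance periodically relocates a write-intensive block within its set. The C sketch below is a much-simplified illustration of that idea (not the paper's algorithm): per-way write counters are kept for each set, and after a fixed number of writes the hottest and coldest ways exchange their data so that further writes to the hot line wear a different physical location. All structures, sizes and the swap interval are assumptions.

    ```c
    /* Simplified illustration of intra-set wear-leveling in the spirit the
     * abstract describes (not the paper's EqualChance algorithm): after every
     * SWAP_INTERVAL writes to a set, move the most-written way's data to the
     * least-written way so future writes to that hot line land on a cold cell. */
    #include <string.h>

    #define WAYS          8
    #define LINE_BYTES    64
    #define SWAP_INTERVAL 256

    struct cache_set {
        unsigned long way_writes[WAYS];          /* per-way write counters */
        unsigned char data[WAYS][LINE_BYTES];    /* cached data per way    */
        unsigned long writes_since_swap;
    };

    static void swap_ways(struct cache_set *s, int a, int b)
    {
        unsigned char tmp[LINE_BYTES];
        memcpy(tmp, s->data[a], LINE_BYTES);
        memcpy(s->data[a], s->data[b], LINE_BYTES);
        memcpy(s->data[b], tmp, LINE_BYTES);
        /* Tag/metadata swap omitted in this sketch. */
    }

    void on_write(struct cache_set *s, int way)
    {
        s->way_writes[way]++;
        if (++s->writes_since_swap < SWAP_INTERVAL)
            return;
        s->writes_since_swap = 0;

        int hot = 0, cold = 0;
        for (int w = 1; w < WAYS; w++) {
            if (s->way_writes[w] > s->way_writes[hot])  hot  = w;
            if (s->way_writes[w] < s->way_writes[cold]) cold = w;
        }
        if (hot != cold)
            swap_ways(s, hot, cold);
    }
    ```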

  20. Caching Servers for ATLAS

    NASA Astrophysics Data System (ADS)

    Gardner, R. W.; Hanushevsky, A.; Vukotic, I.; Yang, W.

    2017-10-01

    As many LHC Tier-3 and some Tier-2 centers look toward streamlining operations, they are considering autonomously managed storage elements as part of the solution. These storage elements are essentially file caching servers. They can operate as whole file or data block level caches. Several implementations exist. In this paper we explore using XRootD caching servers that can operate in either mode. They can also operate autonomously (i.e. demand driven), be centrally managed (i.e. a Rucio managed cache), or operate in both modes. We explore the pros and cons of various configurations as well as practical requirements for caching to be effective. While we focus on XRootD caches, the analysis should apply to other kinds of caches as well.

  1. The Effects of Cache Modification on Food Caching and Retrieval Behavior by Rats

    ERIC Educational Resources Information Center

    McKenzie, T.L.B.; Bird, L.R.; Roberts, W.A.

    2005-01-01

    Rats cached pieces of cheese on four different arms of an eight-arm radial maze. On a retrieval test given 45 min later, rats learned to return to arms where food was cached before arms where food had not been cached. Tests were then performed in which cache sites on one side of the maze were always modified (pilfered or degraded), but cache sites…

  2. Single-pass memory system evaluation for multiprogramming workloads

    NASA Technical Reports Server (NTRS)

    Conte, Thomas M.; Hwu, Wen-Mei W.

    1990-01-01

    Modern memory systems are composed of levels of cache memories, a virtual memory system, and a backing store. Varying more than a few design parameters and measuring the performance of such systems has traditionally been constrained by the high cost of simulation. Recently introduced models of cache performance reduce the cost of simulation, but at the expense of accuracy of performance prediction. Stack-based methods predict performance accurately using one pass over the trace for all cache sizes, but these techniques have been limited to fully associative organizations. This paper presents a stack-based method of evaluating the performance of cache memories using a recurrence/conflict model for the miss ratio. Unlike previous work, the performance of realistic cache designs, such as direct-mapped caches, is predicted by the method. The method also includes a new approach to the problem of the effects of multiprogramming. This new technique separates the characteristics of the individual program from those of the workload. The recurrence/conflict method is shown to be practical, general, and powerful by comparing its performance to that of a popular traditional cache simulator. The authors expect that the availability of such a tool will have a large impact on future architectural studies of memory systems.
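
    The classic building block behind stack-based, single-pass evaluation is the LRU stack distance: the depth at which a reference finds its line in an LRU-ordered stack gives the smallest fully associative LRU cache that would hit, so one pass over a trace yields hit counts for every cache size at once. The C sketch below illustrates that building block only; it is not the recurrence/conflict model of the paper.

    ```c
    /* Minimal sketch of the classic LRU stack-distance computation that
     * stack-based, single-pass cache evaluation builds on; this is not the
     * paper's recurrence/conflict model. */
    #include <stdio.h>

    #define MAX_LINES 4096

    static unsigned long stack[MAX_LINES];   /* most recent line at index 0 */
    static int depth;

    /* Returns the stack distance of `line` (-1 on first reference) and
     * moves it to the top of the stack. */
    int reference(unsigned long line)
    {
        int pos = -1;
        for (int i = 0; i < depth; i++)
            if (stack[i] == line) { pos = i; break; }

        int shift_from = (pos >= 0) ? pos : (depth < MAX_LINES ? depth++ : depth - 1);
        for (int i = shift_from; i > 0; i--)
            stack[i] = stack[i - 1];
        stack[0] = line;
        return pos;
    }

    int main(void)
    {
        unsigned long trace[] = {1, 2, 3, 1, 2, 4, 1};  /* toy address trace */
        long hits_at_size[8] = {0};

        for (unsigned i = 0; i < sizeof trace / sizeof trace[0]; i++) {
            int d = reference(trace[i]);
            if (d >= 0)
                for (int c = d + 1; c < 8; c++)
                    hits_at_size[c]++;      /* any LRU cache larger than d hits */
        }
        for (int c = 1; c < 8; c++)
            printf("cache of %d lines: %ld hits\n", c, hits_at_size[c]);
        return 0;
    }
    ```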

  3. Power reduction by power gating in differential pair type spin-transfer-torque magnetic random access memories for low-power nonvolatile cache memories

    NASA Astrophysics Data System (ADS)

    Ohsawa, Takashi; Ikeda, Shoji; Hanyu, Takahiro; Ohno, Hideo; Endoh, Tetsuo

    2014-01-01

    Array operation currents in spin-transfer-torque magnetic random access memories (STT-MRAMs) that use four differential pair type magnetic tunnel junction (MTJ)-based memory cells (4T2MTJ, two 6T2MTJs and 8T2MTJ) are simulated and compared with that in SRAM. With L3 cache applications in mind, it is assumed that the memories have a 32 Mbyte capacity accessed 64 bytes at a time in parallel. All the STT-MRAMs except for the 8T2MTJ one are designed with a 32 bit fine-grained power gating scheme applied to eliminate static currents in the memory cells that are not accessed. The 8T2MTJ STT-MRAM, whose cell design concept is not suited to fine-grained power gating, loads and saves 32 Mbyte of data in 64 Mbyte units per 1 Mbit sub-array in 2 × 10³ cycles. It is shown that the array operation current of the 4T2MTJ STT-MRAM is 70 mA averaged over 15 ns write cycles at Vdd = 0.9 V. This is the smallest among the STT-MRAMs, about half that of the low standby power (LSTP) SRAM, whose array operation current is totally dominated by the cells' subthreshold leakage.

  4. The Effects of Block Size on the Performance of Coherent Caches in Shared-Memory Multiprocessors

    DTIC Science & Technology

    1993-05-01

    increase with the bandwidth and latency. For those applications with poor spatial locality, the best choice of cache line size is determined by the...observation was used in the design of two schemes: LimitLESS directories and Tag caches. LimitLESS directories [15] were designed for the ALEWIFE...small packets may be used to avoid network congestion. The most important factor influencing the choice of cache line size for a multiprocessor is the

  5. gpuSPHASE-A shared memory caching implementation for 2D SPH using CUDA

    NASA Astrophysics Data System (ADS)

    Winkler, Daniel; Meister, Michael; Rezavand, Massoud; Rauch, Wolfgang

    2017-04-01

    Smoothed particle hydrodynamics (SPH) is a meshless Lagrangian method that has been successfully applied to computational fluid dynamics (CFD), solid mechanics and many other multi-physics problems. Using the method to solve transport phenomena in process engineering requires the simulation of several days to weeks of physical time. Given the high computational demand of CFD, such simulations in 3D would need a computation time of years, so that a reduction to a 2D domain is inevitable. In this paper gpuSPHASE, a new open-source 2D SPH solver implementation for graphics devices, is developed. It is optimized for simulations that must be executed with thousands of frames per second to be computed in reasonable time. A novel caching algorithm for Compute Unified Device Architecture (CUDA) shared memory is proposed and implemented. The software is validated and the performance is evaluated for the well-established dam break test case.

  6. Checkpointing in speculative versioning caches

    DOEpatents

    Eichenberger, Alexandre E; Gara, Alan; Gschwind, Michael K; Ohmacht, Martin

    2013-08-27

    Mechanisms for generating checkpoints in a speculative versioning cache of a data processing system are provided. The mechanisms execute code within the data processing system, wherein the code accesses cache lines in the speculative versioning cache. The mechanisms further determine whether a first condition occurs indicating a need to generate a checkpoint in the speculative versioning cache. The checkpoint is a speculative cache line which is made non-speculative in response to a second condition occurring that requires a roll-back of changes to a cache line corresponding to the speculative cache line. The mechanisms also generate the checkpoint in the speculative versioning cache in response to a determination that the first condition has occurred.

  7. Modified stretched exponential model of computer system resources management limitations-The case of cache memory

    NASA Astrophysics Data System (ADS)

    Strzałka, Dominik; Dymora, Paweł; Mazurek, Mirosław

    2018-02-01

    In this paper we present some preliminary results in the field of computer systems management in relation to Tsallis thermostatistics and the ubiquitous problem of hardware-limited resources. In the case of systems with non-deterministic behaviour, management of their resources is a key point that guarantees their acceptable performance and proper operation. This is a very broad problem that poses many challenges in areas such as finance, transport, water and food, and health. We focus on computer systems, with attention paid to cache memory, and propose an analytical model that is able to connect the non-extensive entropy formalism, long-range dependencies, management of system resources and queuing theory. The analytical results obtained are related to a practical experiment, showing interesting and valuable results.

  8. Turtle: identifying frequent k-mers with cache-efficient algorithms.

    PubMed

    Roy, Rajat Shuvro; Bhattacharya, Debashish; Schliep, Alexander

    2014-07-15

    Counting the frequencies of k-mers in read libraries is often a first step in the analysis of high-throughput sequencing data. Infrequent k-mers are assumed to be a result of sequencing errors. The frequent k-mers constitute a reduced but error-free representation of the experiment, which can inform read error correction or serve as the input to de novo assembly methods. Ideally, the memory requirement for counting should be linear in the number of frequent k-mers and not in the, typically much larger, total number of k-mers in the read library. We present a novel method that balances time, space and accuracy requirements to efficiently extract frequent k-mers even for high-coverage libraries and large genomes such as human. Our method is designed to minimize cache misses, using a pattern-blocked Bloom filter to remove infrequent k-mers from consideration, in combination with a novel sort-and-compact scheme, instead of a hash, for the actual counting. Although this increases theoretical complexity, the savings in cache misses reduce the empirical running times. A variant of the method can resort to a counting Bloom filter for even larger savings in memory, at the expense of false-negative rates in addition to the false-positive rates common to all Bloom filter-based approaches. A comparison with the state of the art shows reduced memory requirements and running times. The tools are freely available for download at http://bioinformatics.rutgers.edu/Software/Turtle and http://figshare.com/articles/Turtle/791582. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
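
    The cache-friendliness of a pattern-blocked (blocked) Bloom filter comes from hashing each key to a single cache-line-sized block of bits, so that all probes for that key touch one cache line. The C sketch below illustrates that idea only; it is not Turtle's implementation, and the hash function, block count and probe count are assumptions.

    ```c
    /* Minimal sketch of a blocked Bloom filter of the kind the abstract
     * mentions (not Turtle's implementation): each key maps to one 512-bit
     * block, so all probes for that key touch a single cache line. Hash
     * functions and sizes here are illustrative assumptions. */
    #include <stdint.h>

    #define NBLOCKS  (1u << 16)         /* number of 64-byte blocks (4 MB filter) */
    #define NPROBES  4                  /* bits set/tested per key                */

    static uint64_t filter[NBLOCKS][8]; /* 512 bits per block                     */

    static uint64_t mix(uint64_t x)     /* simple 64-bit mixer (illustrative)     */
    {
        x ^= x >> 33; x *= 0xff51afd7ed558ccdULL;
        x ^= x >> 33; x *= 0xc4ceb9fe1a85ec53ULL;
        return x ^ (x >> 33);
    }

    void bloom_insert(uint64_t kmer)
    {
        uint64_t h = mix(kmer);
        uint64_t *block = filter[h % NBLOCKS];      /* one cache line per key */
        for (int i = 0; i < NPROBES; i++) {
            uint64_t bit = mix(h + i) % 512;
            block[bit / 64] |= 1ULL << (bit % 64);
        }
    }

    int bloom_maybe_contains(uint64_t kmer)
    {
        uint64_t h = mix(kmer);
        uint64_t *block = filter[h % NBLOCKS];
        for (int i = 0; i < NPROBES; i++) {
            uint64_t bit = mix(h + i) % 512;
            if (!(block[bit / 64] & (1ULL << (bit % 64))))
                return 0;                           /* definitely not seen before */
        }
        return 1;                                   /* possibly seen before       */
    }
    ```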

  9. What makes specialized food-caching mountain chickadees successful city slickers?

    PubMed

    Kozlovsky, Dovid Y; Weissgerber, Emily A; Pravosudov, Vladimir V

    2017-05-31

    Anthropogenic environments are a dominant feature of the modern world; therefore, understanding which traits allow animals to succeed in these urban environments is especially important. Overall, generalist species are thought to be most successful in urban environments, with better general cognition and less neophobia as suggested critical traits. It is less clear, however, which traits would be favoured in urban environments in highly specialized species. Here, we compared highly specialized food-caching mountain chickadees living in an urban environment (Reno, NV, USA) with those living in their natural environment to investigate what makes this species successful in the city. Using a 'common garden' paradigm, we found that urban mountain chickadees tended to explore a novel environment faster and moved more frequently, were better at novel problem-solving, had better long-term spatial memory retention and had a larger telencephalon volume compared with forest chickadees. There were no significant differences between urban and forest chickadees in neophobia, food-caching rates, spatial memory acquisition, hippocampus volume, or the total number of hippocampal neurons. Our results partially support the idea that some traits associated with behavioural flexibility and innovation are associated with successful establishment in urban environments, but differences in long-term spatial memory retention suggest that even this trait specialized for food-caching may be advantageous. Our results highlight the importance of environmental context, species biology, and temporal aspects of invasion in understanding how urban environments are associated with behavioural and cognitive phenotypes and suggest that there is likely no one suite of traits that makes urban animals successful. © 2017 The Author(s).

  10. The Science of Computing: Virtual Memory

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.

    1986-01-01

    In the March-April issue, I described how a computer's storage system is organized as a hierarchy consisting of cache, main memory, and secondary memory (e.g., disk). The cache and main memory form a subsystem that functions like main memory but attains speeds approaching cache. What happens if a program and its data are too large for the main memory? This is not a frivolous question. Every generation of computer users has been frustrated by insufficient memory. A new line of computers may have sufficient storage for the computations of its predecessor, but new programs will soon exhaust its capacity. In 1960, a long-range planning committee at MIT dared to dream of a computer with 1 million words of main memory. In 1985, the Cray-2 was delivered with 256 million words. Computational physicists dream of computers with 1 billion words. Computer architects have done an outstanding job of enlarging main memories yet they have never kept up with demand. Only the shortsighted believe they can.

  11. Memory for Multiple Cache Locations and Prey Quantities in a Food-Hoarding Songbird

    PubMed Central

    Armstrong, Nicola; Garland, Alexis; Burns, K. C.

    2012-01-01

    Most animals can discriminate between pairs of numbers that are each less than four without training. However, North Island robins (Petroica longipes), a food-hoarding songbird endemic to New Zealand, can discriminate between quantities of items as high as eight without training. Here we investigate whether robins are capable of other complex quantity discrimination tasks. We test whether their ability to discriminate between small quantities declines with (1) the number of cache sites containing prey rewards and (2) the length of time separating cache creation and retrieval (retention interval). Results showed that subjects generally performed above-chance expectations. They were equally able to discriminate between different combinations of prey quantities that were hidden from view in 2, 3, and 4 cache sites from between 1, 10, and 60 s. Overall results indicate that North Island robins can process complex quantity information involving more than two discrete quantities of items for up to 1 min long retention intervals without training. PMID:23293622

  12. Memory for multiple cache locations and prey quantities in a food-hoarding songbird.

    PubMed

    Armstrong, Nicola; Garland, Alexis; Burns, K C

    2012-01-01

    Most animals can discriminate between pairs of numbers that are each less than four without training. However, North Island robins (Petroica longipes), a food-hoarding songbird endemic to New Zealand, can discriminate between quantities of items as high as eight without training. Here we investigate whether robins are capable of other complex quantity discrimination tasks. We test whether their ability to discriminate between small quantities declines with (1) the number of cache sites containing prey rewards and (2) the length of time separating cache creation and retrieval (retention interval). Results showed that subjects generally performed above-chance expectations. They were equally able to discriminate between different combinations of prey quantities that were hidden from view in 2, 3, and 4 cache sites from between 1, 10, and 60 s. Overall results indicate that North Island robins can process complex quantity information involving more than two discrete quantities of items for up to 1 min long retention intervals without training.

  13. Joshua tree (Yucca brevifolia) seeds are dispersed by seed-caching rodents

    USGS Publications Warehouse

    Vander Wall, S.B.; Esque, T.; Haines, D.; Garnett, M.; Waitman, B.A.

    2006-01-01

    Joshua tree (Yucca brevifolia) is a distinctive and charismatic plant of the Mojave Desert. Although floral biology and seed production of Joshua tree and other yuccas are well understood, the fate of Joshua tree seeds has never been studied. We tested the hypothesis that Joshua tree seeds are dispersed by seed-caching rodents. We radioactively labelled Joshua tree seeds and followed their fates at five source plants in Potosi Wash, Clark County, Nevada, USA. Rodents made a mean of 30.6 caches, usually within 30 m of the base of source plants. Caches contained a mean of 5.2 seeds buried 3-30 mm deep. A variety of rodent species appears to have prepared the caches. Three of the 836 Joshua tree seeds (0.4%) cached germinated the following spring. Seed germination using rodent exclosures was nearly 15%. More than 82% of seeds in open plots were removed by granivores, and neither microsite nor supplemental water significantly affected germination. Joshua tree produces seeds in indehiscent pods or capsules, which rodents dismantle to harvest seeds. Because there is no other known means of seed dispersal, it is possible that the Joshua tree-rodent seed dispersal interaction is an obligate mutualism for the plant.

  14. Compiler-directed cache management in multiprocessors

    NASA Technical Reports Server (NTRS)

    Cheong, Hoichi; Veidenbaum, Alexander V.

    1990-01-01

    The necessity of finding alternatives to hardware-based cache coherence strategies for large-scale multiprocessor systems is discussed. Three different software-based strategies sharing the same goals and general approach are presented. They consist of a simple invalidation approach, a fast selective invalidation scheme, and a version control scheme. The strategies are suitable for shared-memory multiprocessor systems with interconnection networks and a large number of processors. Results of trace-driven simulations conducted on numerical benchmark routines to compare the performance of the three schemes are presented.

  15. EqualWrites: Reducing Intra-set Write Variations for Enhancing Lifetime of Non-volatile Caches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mittal, Sparsh; Vetter, Jeffrey S.

    Driven by the trends of increasing core-count and bandwidth-wall problem, the size of last level caches (LLCs) has greatly increased and hence, the researchers have explored non-volatile memories (NVMs) which provide high density and consume low-leakage power. Since NVMs have low write-endurance and the existing cache management policies are write variation-unaware, effective wear-leveling techniques are required for achieving reasonable cache lifetimes using NVMs. We present EqualWrites, a technique for mitigating intra-set write variation. In this paper, our technique works by recording the number of writes on a block and changing the cache-block location of a hot data-item to redirect the future writes to a cold block to achieve wear-leveling. Simulation experiments have been performed using an x86-64 simulator and benchmarks from SPEC06 and HPC (high-performance computing) field. The results show that for single, dual and quad-core system configurations, EqualWrites improves cache lifetime by 6.31X, 8.74X and 10.54X, respectively. In addition, its implementation overhead is very small and it provides larger improvement in lifetime than three other intra-set wear-leveling techniques and a cache replacement policy.

  16. EqualWrites: Reducing Intra-set Write Variations for Enhancing Lifetime of Non-volatile Caches

    DOE PAGES

    Mittal, Sparsh; Vetter, Jeffrey S.

    2015-01-29

    Driven by the trends of increasing core-count and bandwidth-wall problem, the size of last level caches (LLCs) has greatly increased and hence, the researchers have explored non-volatile memories (NVMs) which provide high density and consume low-leakage power. Since NVMs have low write-endurance and the existing cache management policies are write variation-unaware, effective wear-leveling techniques are required for achieving reasonable cache lifetimes using NVMs. We present EqualWrites, a technique for mitigating intra-set write variation. In this paper, our technique works by recording the number of writes on a block and changing the cache-block location of a hot data-item to redirect the future writes to a cold block to achieve wear-leveling. Simulation experiments have been performed using an x86-64 simulator and benchmarks from SPEC06 and HPC (high-performance computing) field. The results show that for single, dual and quad-core system configurations, EqualWrites improves cache lifetime by 6.31X, 8.74X and 10.54X, respectively. In addition, its implementation overhead is very small and it provides larger improvement in lifetime than three other intra-set wear-leveling techniques and a cache replacement policy.

  17. Performance of hashed cache data migration schemes on multicomputers

    NASA Technical Reports Server (NTRS)

    Hiranandani, Seema; Saltz, Joel; Mehrotra, Piyush; Berryman, Harry

    1991-01-01

    After conducting an examination of several data-migration mechanisms which permit an explicit and controlled mapping of data to memory, a set of schemes for storage and retrieval of off-processor array elements is experimentally evaluated and modeled. All schemes considered have their basis in the use of hash tables for efficient access of nonlocal data. The techniques in question are those of hashed cache, partial enumeration, and full enumeration; in these, nonlocal data are stored in hash tables, so that the operative difference lies in the amount of memory used by each scheme and in the retrieval mechanism used for nonlocal data.
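
    The schemes the record compares all keep nonlocal (off-processor) array elements in a hash table so that each element is fetched once and then reused locally. The C sketch below shows such a hashed cache in minimal form, assuming a hypothetical fetch_remote() communication routine; it is not the implementation evaluated in the paper.

    ```c
    /* Minimal sketch of a hashed cache for off-processor array elements, in the
     * spirit of the schemes the record describes. fetch_remote() stands in for
     * whatever communication layer supplies nonlocal values; it and the table
     * size are assumptions, not the paper's implementation. */
    #include <stdlib.h>

    #define TABLE_SIZE 4096

    struct entry {
        long          global_index;   /* global array index of the element */
        double        value;          /* cached copy of the remote element */
        int           occupied;
        struct entry *next;           /* chaining for collisions           */
    };

    static struct entry table[TABLE_SIZE];

    extern double fetch_remote(long global_index);  /* hypothetical comm routine */

    double get_element(long global_index)
    {
        struct entry *e = &table[(unsigned long)global_index % TABLE_SIZE];

        /* Walk the chain; reuse the cached value if this element was fetched before. */
        for (struct entry *p = e; p != NULL; p = p->next)
            if (p->occupied && p->global_index == global_index)
                return p->value;

        /* Not cached: fetch once from the owning processor and remember it. */
        double v = fetch_remote(global_index);
        if (!e->occupied) {
            e->global_index = global_index; e->value = v; e->occupied = 1;
        } else {
            struct entry *n = malloc(sizeof *n);
            n->global_index = global_index; n->value = v; n->occupied = 1;
            n->next = e->next;
            e->next = n;
        }
        return v;
    }
    ```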

  18. Optimizing Maintenance of Constraint-Based Database Caches

    NASA Astrophysics Data System (ADS)

    Klein, Joachim; Braun, Susanne

    Caching data reduces user-perceived latency and often enhances availability in case of server crashes or network failures. DB caching aims at local processing of declarative queries in a DBMS-managed cache close to the application. Query evaluation must produce the same results as if done at the remote database backend, which implies that all data records needed to process such a query must be present and controlled by the cache, i.e., "predicate-specific" loading and unloading of such record sets must be achieved. Hence, cache maintenance must be based on cache constraints such that "predicate completeness" of the caching units currently present can be guaranteed at any point in time. We explore how cache groups can be maintained to provide the data currently needed. Moreover, we design and optimize loading and unloading algorithms for sets of records keeping the caching units complete, before we empirically identify the costs involved in cache maintenance.

  19. Memory and the hippocampus in food-storing birds: a comparative approach.

    PubMed

    Clayton, N S

    1998-01-01

    Comparative studies provide a unique source of evidence for the role of the hippocampus in learning and memory. Within birds and mammals, the hippocampal volume of scatter-hoarding species that cache food in many different locations is enlarged, relative to the remainder of the telencephalon, when compared with that of species which cache food in one larder, or do not cache at all. Do food-storing species show enhanced memory function in association with the volumetric enlargement of the hippocampus? Comparative studies within the parids (titmice and chickadees) and corvids (jays, nutcrackers and magpies), two families of birds which show natural variation in food-storing behavior, suggest that there may be two kinds of memory specialization associated with scatter-hoarding. First, in terms of spatial memory, several scatter-hoarding species have a more accurate and enduring spatial memory, and a preference to rely more heavily upon spatial cues, than that of closely related species which store less food, or none at all. Second, some scatter-hoarding parids and corvids are also more resistant to memory interference. While the most critical component about a cache site may be its spatial location, there is mounting evidence that food-storing birds remember additional information about the contents and status of cache sites. What is the underlying neural mechanism by which the hippocampus learns and remembers cache sites? The current mammalian dogma is that the neural mechanisms of learning and memory are achieved primarily by variations in synaptic number and efficacy. Recent work on the concomitant development of food-storing, memory and the avian hippocampus illustrates that the avian hippocampus may swell or shrivel by as much as 30% in response to presence or absence of food-storing experience. Memory for food caches triggers a dramatic increase in the total number of neurons within the avian hippocampus by altering the rate at which these cells

  20. A novel cache mechanism

    NASA Technical Reports Server (NTRS)

    Gunawardena, J. A.

    1992-01-01

    This cache mechanism is transparent but does not contain associative circuits. It does not rely on locality of reference of instructions or data. No redundant instructions or data are encached. Items in the cache are accessed without address arithmetic. A cache miss is detected by the simplest test; compare two bits. These features would result in faster access, higher hit rate, reduced chip area, and less power dissipation in comparison with associative systems of similar size.

  1. Mapping virtual addresses to different physical addresses for value disambiguation for thread memory access requests

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gala, Alan; Ohmacht, Martin

    A multiprocessor system includes nodes. Each node includes a data path that includes a core, a TLB, and a first level cache implementing disambiguation. The system also includes at least one second level cache and a main memory. For thread memory access requests, the core uses an address associated with an instruction format of the core. The first level cache uses an address format related to the size of the main memory plus an offset corresponding to hardware thread meta data. The second level cache uses a physical main memory address plus software thread meta data to store the memory access request. The second level cache accesses the main memory using the physical address with neither the offset nor the thread meta data after resolving speculation. In short, this system includes mapping of a virtual address to different physical addresses for value disambiguation for different threads.

  2. Population substructure in Cache County, Utah: the Cache County study

    PubMed Central

    2014-01-01

    Background Population stratification is a key concern for genetic association analyses. In addition, extreme homogeneity of ethnic origins of a population can make it difficult to interpret how genetic associations in that population may translate into other populations. Here we have evaluated the genetic substructure of samples from the Cache County study relative to the HapMap Reference populations and data from the Alzheimer's Disease Neuroimaging Initiative (ADNI). Results Our findings show that the Cache County study is similar in ethnic diversity to the self-reported "Whites" in the ADNI sample and less homogenous than the HapMap CEU population. Conclusions We conclude that the Cache County study is genetically representative of the general European American population in the USA and is an appropriate population for conducting broadly applicable genetic studies. PMID:25078123

  3. Dietary folate, vitamin B-12, vitamin B-6 and incident Alzheimer's disease: the cache county memory, health and aging study.

    PubMed

    Nelson, C; Wengreen, H J; Munger, R G; Corcoran, C D

    2009-12-01

    To examine associations between dietary and supplemental folate, vitamin B-12 and vitamin B-6 and incident Alzheimer's disease (AD) among elderly men and women. Data were collected from participants of the Cache County Memory, Health and Aging Study, a longitudinal study of 5092 men and women 65 years and older who were residents of Cache County, Utah in 1995. Multistage clinical assessment procedures were used to identify incident cases of AD. Dietary data were collected using a 142-item food frequency questionnaire. Cox Proportional Hazards (CPH) modeling was used to determine hazard ratios across quintiles of micronutrient intake. 202 participants were diagnosed with incident AD during follow-up (1995-2004). In multivariable CPH models that controlled for the effects of gender, age, education, and other covariates, there were no observed differences in risk of AD or dementia by increasing quintiles of total intake of folate, vitamin B-12, or vitamin B-6. Similarly, there were no observed differences in risk of AD by regular use of either folate or B6 supplements. Dietary intake of B-vitamins from food and supplemental sources appears unrelated to incidence of dementia and AD. Further studies examining associations between dietary intakes of B-vitamins, biomarkers of B-vitamin status and cognitive endpoints are warranted.

  4. Cache coherency without line exclusivity in MP systems having store-in caches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pomerene, J.H.; Puzak, T.R.; Rechtschaffen, R.N.

    1983-11-01

    By modifying the function of the storage control unit, a multiprocessor (MP) system having store-in caches is enabled to operate with the same versatility as an MP system having store-through caches, thereby eliminating the requirement for line exclusivity and greatly reducing the occurrence of cross-interrogates.

  5. Identification of VaD and AD prodromes: the Cache County Study.

    PubMed

    Hayden, K M; Warren, L H; Pieper, C F; Østbye, T; Tschanz, J T; Norton, M C; Breitner, J C S; Welsh-Bohmer, K A

    2005-07-01

    It is unclear whether vascular dementia (VaD) has a cognitive prodrome, akin to the mild cognitive impairment (MCI) prodrome to Alzheimer's dementia (AD). To evaluate whether VaD has a cognitive prodrome, and if it can be differentiated from prodromal AD, we examined neuropsychological test performance of participants in a nested case-control study within a population-based cohort aged 65 or older. Participants (n = 485) were identified from the Cache County Study, a large population-based study of aging and dementia. After an average of 3 years of follow-up, a total of 62 incident dementia cases were identified (14 VaD, 48 AD). We identified a number of neuropsychological tests (executive and memory) that discriminated between diagnosed VaD and AD cases. Multivariate analyses sought to differentiate between these same groups 3 years before clinical diagnosis. The Consortium to Establish a Registry for Alzheimer's Disease Word List Recognition Test correct recognition of foils (mean difference, 1.25; 95% confidence interval [CI], 0.42 to 2.07; p < 0.01), Logical Memory I (mean difference, 7.16; 95% CI, 0.78 to 13.55, p < 0.05), Logical Memory II delayed recall (mean difference, 8.67; 95% CI, 1.59 to 15.74, p < 0.05), and percent savings (mean difference, 51.07; 95% CI, 32.58 to 69.56, p < 0.0001) differentiated VaD from AD cases after adjustment for age, sex, education, and dementia severity. Three years before dementia diagnosis, word list recognition ("no" responses mean difference, 1.40; 95% CI, 0.64 to 2.17; p < 0.001, and "yes" responses mean difference, -1.14; 95% CI, -2.14 to -0.13; p < 0.03) discriminated between prodromal VaD and AD. These results suggest that VaD has a prodromal syndrome, the cognitive features of which are distinguishable from the cognitive prodrome of AD.

  6. Cache placement, pilfering, and a recovery advantage in a seed-dispersing rodent: Could predation of scatter hoarders contribute to seedling establishment?

    NASA Astrophysics Data System (ADS)

    Steele, Michael A.; Bugdal, Melissa; Yuan, Amy; Bartlow, Andrew; Buzalewski, Jarrod; Lichti, Nathan; Swihart, Robert

    2011-11-01

    Scatter-hoarding mammals are thought to rely on spatial memory to relocate food caches. Yet, we know little about how long these granivores (primarily rodents) recall specific cache locations or whether individual hoarders have an advantage when recovering their own caches. Indeed, a few recent studies suggest that high rates of pilferage are common and that individual hoarders may not have a retriever's advantage. We tested this hypothesis in a high-density (>7 animals/ha) population of eastern gray squirrels (Sciurus carolinensis) by presenting individually marked animals (>20) with tagged acorns, mapping cache sites, and following the fate of seed caches. PIT tags allowed us to monitor individual seeds without disturbing cache sites. Acorns only remained in the caches for 12-119 h (0.5-5 d). However, when we live-trapped and removed some animals from the site immediately after they stored seeds (thus simulating predation), their seed caches remained intact for significantly longer periods (16-27 d). Cache duration corresponded roughly to the time at which squirrels were returned to the study area. These results suggest that squirrels have a retriever's advantage and may remember specific cache sites longer than previously thought. We further suggest that predation of scatter hoarders who store seeds for long periods and also possess a recovery advantage may be one important mechanism by which seed establishment is achieved.

  7. Changes in spatial memory mediated by experimental variation in food supply do not affect hippocampal anatomy in mountain chickadees (Poecile gambeli).

    PubMed

    Pravosudov, V V; Lavenex, P; Clayton, N S

    2002-05-01

    Earlier reports suggested that seasonal variation in food-caching behavior (caching intensity and cache retrieval accuracy) might correlate with morphological changes in the hippocampal formation, a brain structure thought to play a role in remembering cache locations. We demonstrated that changes in cache retrieval accuracy can also be triggered by experimental variation in food supply: captive mountain chickadees (Poecile gambeli) maintained on limited and unpredictable food supply were more accurate at recovering their caches and performed better on spatial memory tests than birds maintained on ad libitum food. In this study, we investigated whether these two treatment groups also differed in the volume and neuron number of the hippocampal formation. If variation in memory for food caches correlates with hippocampal size, then our birds with enhanced cache recovery and spatial memory performance should have larger hippocampal volumes and total neuron numbers. Contrary to this prediction we found no significant differences in volume or total neuron number of the hippocampal formation between the two treatment groups. Our results therefore indicate that changes in food-caching behavior and spatial memory performance, as mediated by experimental variations in food supply, are not necessarily accompanied by morphological changes in volume or neuron number of the hippocampal formation in fully developed, experienced food-caching birds. Copyright 2002 Wiley Periodicals, Inc.

  8. A performance model for GPUs with caches

    DOE PAGES

    Dao, Thanh Tuan; Kim, Jungwon; Seo, Sangmin; ...

    2014-06-24

    To exploit the abundant computational power of the world's fastest supercomputers, an even workload distribution to the typically heterogeneous compute devices is necessary. While relatively accurate performance models exist for conventional CPUs, accurate performance estimation models for modern GPUs do not exist. This paper presents two accurate models for modern GPUs: a sampling-based linear model, and a model based on machine-learning (ML) techniques which improves the accuracy of the linear model and is applicable to modern GPUs with and without caches. We first construct the sampling-based linear model to predict the runtime of an arbitrary OpenCL kernel. Based on an analysis of NVIDIA GPUs' scheduling policies we determine the earliest sampling points that allow an accurate estimation. The linear model cannot capture well the significant effects that memory coalescing or caching as implemented in modern GPUs have on performance. We therefore propose a model based on ML techniques that takes several compiler-generated statistics about the kernel as well as the GPU's hardware performance counters as additional inputs to obtain a more accurate runtime performance estimation for modern GPUs. We demonstrate the effectiveness and broad applicability of the model by applying it to three different NVIDIA GPU architectures and one AMD GPU architecture. On an extensive set of OpenCL benchmarks, on average, the proposed model estimates the runtime performance with less than 7 percent error for a second-generation GTX 280 with no on-chip caches and less than 5 percent for the Fermi-based GTX 580 with hardware caches. On the Kepler-based GTX 680, the linear model has an error of less than 10 percent. On an AMD GPU architecture, Radeon HD 6970, the model estimates with error rates of 8 percent. As a result, the proposed technique outperforms existing models by a factor of 5 to 6 in terms of accuracy.
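
    The sampling-based linear model described above predicts a kernel's full runtime from a handful of short sample executions. The C sketch below shows that general idea in its simplest form: fit runtime = a + b·n to a few sampled sizes by ordinary least squares and extrapolate. The sample sizes and timings are made-up placeholders, and this is not the paper's model (which also determines the earliest valid sampling points and adds an ML-based refinement).

    ```c
    /* Minimal sketch of the sampling-based linear-model idea: time a kernel at a
     * few small sample sizes, fit runtime = a + b*n by ordinary least squares,
     * then extrapolate to the full problem size. The sample sizes and measured
     * times below are illustrative placeholders, not the paper's data. */
    #include <stdio.h>

    static void fit_line(const double *x, const double *y, int n, double *a, double *b)
    {
        double sx = 0, sy = 0, sxx = 0, sxy = 0;
        for (int i = 0; i < n; i++) {
            sx += x[i]; sy += y[i]; sxx += x[i] * x[i]; sxy += x[i] * y[i];
        }
        *b = (n * sxy - sx * sy) / (n * sxx - sx * sx);
        *a = (sy - *b * sx) / n;
    }

    int main(void)
    {
        /* Work-group counts sampled and their measured runtimes in ms (assumed). */
        double samples[] = {64, 128, 256, 512};
        double times[]   = {1.9, 3.1, 5.6, 10.4};
        double a, b;

        fit_line(samples, times, 4, &a, &b);
        double full = 16384;                       /* full problem size */
        printf("predicted runtime for %.0f work-groups: %.1f ms\n",
               full, a + b * full);
        return 0;
    }
    ```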

  9. Use of diuretics is associated with reduced risk of Alzheimer's disease: the Cache County Study.

    PubMed

    Chuang, Yi-Fang; Breitner, John C S; Chiu, Yen-Ling; Khachaturian, Ara; Hayden, Kathleen; Corcoran, Chris; Tschanz, JoAnn; Norton, Maria; Munger, Ron; Welsh-Bohmer, Kathleen; Zandi, Peter P

    2014-11-01

    Although the use of antihypertensive medications has been associated with reduced risk of Alzheimer's disease (AD), it remains unclear which class provides the most benefit. The Cache County Study of Memory Health and Aging is a prospective longitudinal cohort study of dementing illnesses among the elderly population of Cache County, Utah. Using waves I to IV data of the Cache County Study, 3417 participants had a mean of 7.1 years of follow-up. Time-varying use of antihypertensive medications including different class of diuretics, angiotensin converting enzyme inhibitors, β-blockers, and calcium channel blockers was used to predict the incidence of AD using Cox proportional hazards analyses. During follow-up, 325 AD cases were ascertained with a total of 23,590 person-years. Use of any antihypertensive medication was associated with lower incidence of AD (adjusted hazard ratio [aHR], 0.77; 95% confidence interval [CI], 0.61-0.97). Among different classes of antihypertensive medications, thiazide (aHR, 0.7; 95% CI, 0.53-0.93), and potassium-sparing diuretics (aHR, 0.69; 95% CI, 0.48-0.99) were associated with the greatest reduction of AD risk. Thiazide and potassium-sparing diuretics were associated with decreased risk of AD. The inverse association of potassium-sparing diuretics confirms an earlier finding in this cohort, now with longer follow-up, and merits further investigation. Copyright © 2014 Elsevier Inc. All rights reserved.

  10. An area model for on-chip memories and its application

    NASA Technical Reports Server (NTRS)

    Mulder, Johannes M.; Quach, Nhon T.; Flynn, Michael J.

    1991-01-01

    An area model suitable for comparing data buffers of different organizations and arbitrary sizes is described. The area model considers the supplied bandwidth of a memory cell and includes such buffer overhead as control logic, driver logic, and tag storage. The model gave less than 10 percent error when verified against real caches and register files. It is shown that, comparing caches and register files in terms of area for the same storage capacity, caches generally occupy more area per bit than register files for small caches because the overhead dominates the cache area at these sizes. For larger caches, the smaller storage cells in the cache provide a smaller total cache area per bit than the register set. Studying cache performance (traffic ratio) as a function of area, it is shown that, for small caches, direct-mapped caches perform significantly better than four-way set-associative caches and, for caches of medium areas, both direct-mapped and set-associative caches perform better than fully associative caches.
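
    The kind of accounting such an area model performs can be sketched as: total buffer area = (data bits + tag bits) × per-cell area (which grows with port count) + fixed driver and control overhead. The C sketch below illustrates this accounting with made-up unit areas in register-bit equivalents (rbe), and shows how the fixed overhead inflates the per-bit area of a small cache relative to a large one; it is not the calibrated model of the paper.

    ```c
    /* Illustrative area accounting of the kind the record describes: total buffer
     * area = (data bits + tag bits) * per-cell area + fixed driver/control
     * overhead. All unit areas are made-up placeholders (in register-bit
     * equivalents, rbe), not the paper's calibrated model. */
    #include <stdio.h>

    struct buffer_cfg {
        long data_bits;   /* storage capacity                           */
        long tag_bits;    /* tag/valid overhead (0 for a register file) */
        int  ports;       /* read/write ports                           */
    };

    static double area_rbe(struct buffer_cfg c)
    {
        const double cell_rbe    = 0.6;    /* assumed area of one storage cell      */
        const double port_rbe    = 0.25;   /* assumed extra area per cell per port  */
        const double control_rbe = 2000.0; /* assumed fixed driver/control overhead */

        double per_cell = cell_rbe + port_rbe * c.ports;
        return (c.data_bits + c.tag_bits) * per_cell + control_rbe;
    }

    int main(void)
    {
        struct buffer_cfg small_cache = {  4096,   320, 1};  /* 512-byte cache */
        struct buffer_cfg large_cache = {524288, 20480, 1};  /* 64 KB cache    */

        /* For the small cache the fixed overhead dominates, so its area per data
         * bit comes out much higher than that of the large cache. */
        printf("512-byte cache: %.2f rbe/bit\n",
               area_rbe(small_cache) / small_cache.data_bits);
        printf("64 KB cache:    %.2f rbe/bit\n",
               area_rbe(large_cache) / large_cache.data_bits);
        return 0;
    }
    ```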

  11. Cache Energy Optimization Techniques For Modern Processors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mittal, Sparsh

    2013-01-01

    Modern multicore processors are employing large last-level caches, for example Intel's E7-8800 processor uses 24MB L3 cache. Further, with each CMOS technology generation, leakage energy has been dramatically increasing and hence, leakage energy is expected to become a major source of energy dissipation, especially in last-level caches (LLCs). The conventional schemes of cache energy saving either aim at saving dynamic energy or are based on properties specific to first-level caches, and thus these schemes have limited utility for last-level caches. Further, several other techniques require offline profiling or per-application tuning and hence are not suitable for product systems. In this book, we present novel cache leakage energy saving schemes for single-core and multicore systems; desktop, QoS, real-time and server systems. Also, we present cache energy saving techniques for caches designed with both conventional SRAM devices and emerging non-volatile devices such as STT-RAM (spin-torque transfer RAM). We present software-controlled, hardware-assisted techniques which use dynamic cache reconfiguration to configure the cache to the most energy efficient configuration while keeping the performance loss bounded. To profile and test a large number of potential configurations, we utilize low-overhead, micro-architecture components, which can be easily integrated into modern processor chips. We adopt a system-wide approach to save energy to ensure that cache reconfiguration does not increase energy consumption of other components of the processor. We have compared our techniques with state-of-the-art techniques and have found that our techniques outperform them in terms of energy efficiency and other relevant metrics. The techniques presented in this book have important applications in improving energy-efficiency of higher-end embedded, desktop, QoS, real-time, server processors and multitasking systems. This book is intended to be a valuable guide for both

  12. Multiple core computer processor with globally-accessible local memories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shalf, John; Donofrio, David; Oliker, Leonid

    A multi-core computer processor including a plurality of processor cores interconnected in a Network-on-Chip (NoC) architecture, a plurality of caches, each of the plurality of caches being associated with one and only one of the plurality of processor cores, and a plurality of memories, each of the plurality of memories being associated with a different set of at least one of the plurality of processor cores and each of the plurality of memories being configured to be visible in a global memory address space such that the plurality of memories are visible to two or more of the plurality of processor cores.

  13. Adaptive Caching Concept

    NASA Image and Video Library

    2015-06-10

    This diagram, superimposed on a photo of Martian landscape, illustrates a concept called "adaptive caching," which is in development for NASA's 2020 Mars rover mission. In addition to the investigations that the Mars 2020 rover will conduct on Mars, the rover will collect carefully selected samples of Mars rock and soil and cache them to be available for possible return to Earth if a Mars sample-return mission is scheduled and flown. Each sample will be stored in a sealed tube. Adaptive caching would result in a set of samples, up to the maximum number of tubes carried on the rover, being placed on the surface at the discretion of the mission operators. The tubes holding the collected samples would not go into a surrounding container. In this illustration, green dots indicate "regions of interest," where samples might be collected. The green diamond indicates one region of interest serving as the depot for the cache. The green X at upper right represents the landing site. The solid black line indicates the rover's route during its prime mission, and the dashed black line indicates its route during an extension of the mission. The base image is a portion of the "Everest Panorama" taken by the panoramic camera on NASA's Mars Exploration Rover Spirit at the top of Husband Hill in 2005. http://photojournal.jpl.nasa.gov/catalog/PIA19150

  14. Use of diuretics is associated with reduced risk of Alzheimer’s disease: the Cache County Study

    PubMed Central

    Chuang, Yi-Fang; Breitner, John C.S.; Chiu, Yen-Ling; Khachaturian, Ara; Hayden, Kathleen; Corcoran, Chris; Tschanz, JoAnn; Norton, Maria; Munger, Ron; Welsh-Bohmer, Kathleen; Zandi, Peter P.

    2015-01-01

    Although the use of antihypertensive medications has been associated with reduced risk of Alzheimer’s disease (AD), it remains unclear which class provides the most benefit. The Cache County Study of Memory Health and Aging is a prospective longitudinal cohort study of dementing illnesses among the elderly population of Cache County, Utah. Using waves I to IV data of the Cache County Study, 3417 participants had a mean of 7.1 years of follow-up. Time-varying use of antihypertensive medications including different class of diuretics, angiotensin converting enzyme inhibitors, β-blockers, and calcium channel blockers was used to predict the incidence of AD using Cox proportional hazards analyses. During follow-up, 325 AD cases were ascertained with a total of 23,590 person-years. Use of any anti-hypertensive medication was associated with lower incidence of AD (adjusted hazard ratio [aHR], 0.77; 95% confidence interval [CI], 0.61–0.97). Among different classes of antihypertensive medications, thiazide (aHR, 0.7; 95% CI, 0.53–0.93), and potassium-sparing diuretics (aHR, 0.69; 95% CI, 0.48–0.99) were associated with the greatest reduction of AD risk. Thiazide and potassium-sparing diuretics were associated with decreased risk of AD. The inverse association of potassium-sparing diuretics confirms an earlier finding in this cohort, now with longer follow-up, and merits further investigation. PMID:24910391

  15. A Distributed Cache Update Deployment Strategy in CDN

    NASA Astrophysics Data System (ADS)

    E, Xinhua; Zhu, Binjie

    2018-04-01

    The CDN management system distributes content objects to the edge of the internet so that users can access them nearby. The cache strategy is an important problem in network content distribution. We designed a cache strategy in which content diffuses effectively within the cache group, so that more content is stored across the caches and the group hit rate is improved.

  16. Cache-enabled small cell networks: modeling and tradeoffs.

    PubMed

    Baştuǧ, Ejder; Bennis, Mehdi; Kountouris, Marios; Debbah, Mérouane

    We consider a network model where small base stations (SBSs) have caching capabilities as a means to alleviate the backhaul load and satisfy users' demand. The SBSs are stochastically distributed over the plane according to a Poisson point process (PPP) and serve their users either (i) by bringing the content from the Internet through a finite rate backhaul or (ii) by serving them from the local caches. We derive closed-form expressions for the outage probability and the average delivery rate as a function of the signal-to-interference-plus-noise ratio (SINR), SBS density, target file bitrate, storage size, file length, and file popularity. We then analyze the impact of key operating parameters on the system performance. It is shown that a certain outage probability can be achieved either by increasing the number of base stations or the total storage size. Our results and analysis provide key insights into the deployment of cache-enabled small cell networks (SCNs), which are seen as a promising solution for future heterogeneous cellular networks.

  17. An Effective Cache Algorithm for Heterogeneous Storage Systems

    PubMed Central

    Li, Yong; Feng, Dan

    2013-01-01

    Modern storage environments are commonly composed of heterogeneous storage devices. However, traditional cache algorithms exhibit performance degradation in heterogeneous storage systems because they were not designed to work with diverse performance characteristics. In this paper, we present a new cache algorithm called HCM for heterogeneous storage systems. The HCM algorithm partitions the cache among the disks and adopts an effective scheme to balance the work across the disks. Furthermore, it applies benefit-cost analysis to choose the best allocation of cache blocks to improve performance. Conducting simulations with a variety of traces and a wide range of cache sizes, our experiments show that HCM significantly outperforms the existing state-of-the-art storage-aware cache algorithms. PMID:24453890
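
    The benefit-cost idea above can be illustrated with a simple greedy sketch (not the paper's HCM algorithm itself): repeatedly give the next cache block to whichever disk gains the most saved access time from it, approximated as the expected extra hits times that disk's miss penalty. The hit-rate curves and penalties below are illustrative assumptions.

    ```python
    # Greedy benefit-cost cache partitioning sketch for heterogeneous disks.
    # The marginal benefit of one more block for a disk is approximated as
    # (expected extra hits) * (that disk's miss penalty).

    def partition_cache(total_blocks, disks):
        """disks: dict name -> (miss_penalty_ms, hit_curve), where hit_curve(n)
        estimates the hit count with n cache blocks."""
        alloc = {name: 0 for name in disks}
        for _ in range(total_blocks):
            def marginal(name):
                penalty, curve = disks[name]
                n = alloc[name]
                return (curve(n + 1) - curve(n)) * penalty
            best = max(disks, key=marginal)   # disk that saves the most time
            alloc[best] += 1
        return alloc

    # Illustrative: a slow disk (10 ms miss penalty) and a fast SSD (0.1 ms),
    # both with diminishing-returns hit curves.
    disks = {
        "hdd": (10.0, lambda n: 100 * (1 - 0.95 ** n)),
        "ssd": (0.1,  lambda n: 500 * (1 - 0.90 ** n)),
    }
    print(partition_cache(64, disks))
    ```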

  18. DIETARY FOLATE, VITAMIN B-12, VITAMIN B-6 AND INCIDENT ALZHEIMER’S DISEASE: THE CACHE COUNTY MEMORY, HEALTH, AND AGING STUDY

    PubMed Central

    NELSON, C.; WENGREEN, H.J.; MUNGER, R.G.; CORCORAN, C.D.

    2013-01-01

    Objective To examine associations between dietary and supplemental folate, vitamin B-12 and vitamin B-6 and incident Alzheimer’s disease (AD) among elderly men and women. Design, Setting and Participants Data collected were from participants of the Cache County Memory, Health and Aging Study, a longitudinal study of 5092 men and women 65 years and older who were residents of Cache County, Utah in 1995. Measurements Multistage clinical assessment procedures were used to identify incident cases of AD. Dietary data were collected using a 142-item food frequency questionnaire. Cox Proportional Hazards (CPH) modeling was used to determine hazard ratios across quintiles of micronutrient intake. Results 202 participants were diagnosed with incident AD during follow-up (1995–2004). In multivariable CPH models that controlled for the effects of gender, age, education, and other covariates there were no observed differences in risk of AD or dementia by increasing quintiles of total intake of folate, vitamin B-12, or vitamin B-6. Similarly, there were no observed differences in risk of AD by regular use of either folate or B6 supplements. Conclusion Dietary intake of B-vitamins from food and supplemental sources appears unrelated to incidence of dementia and AD. Further studies examining associations between dietary intakes of B-vitamins, biomarkers of B-vitamin status and cognitive endpoints are warranted. PMID:19924351

  19. WATCHMAN: A Data Warehouse Intelligent Cache Manager

    NASA Technical Reports Server (NTRS)

    Scheuermann, Peter; Shim, Junho; Vingralek, Radek

    1996-01-01

    Data warehouses store large volumes of data which are used frequently by decision support applications. Such applications involve complex queries. Query performance in such an environment is critical because decision support applications often require interactive query response time. Because data warehouses are updated infrequently, it becomes possible to improve query performance by caching sets retrieved by queries in addition to query execution plans. In this paper we report on the design of WATCHMAN, an intelligent cache manager for sets retrieved by queries, which is particularly well suited for data warehousing environments. Our cache manager employs two novel, complementary algorithms for cache replacement and for cache admission. WATCHMAN aims at minimizing query response time, and its cache replacement policy swaps out entire retrieved sets of queries instead of individual pages. The cache replacement and admission algorithms make use of a profit metric, which considers for each retrieved set its average rate of reference, its size, and the execution cost of the associated query. We report on a performance evaluation based on the TPC-D and Set Query benchmarks. These experiments show that WATCHMAN achieves a substantial performance improvement in a decision support environment when compared to a traditional LRU replacement algorithm.
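
    The profit metric above lends itself to a compact illustration: admit or evict whole retrieved sets based on a score combining reference rate, query execution cost, and set size. The sketch below is a hedged approximation, not WATCHMAN itself; in particular, the exact combination profit = rate x cost / size and the tiny capacities in the usage lines are assumptions.

    ```python
    # Profit-based admission/replacement sketch in the spirit of WATCHMAN.
    # profit = reference_rate * execution_cost / size is an assumed combination.

    class ProfitCache:
        def __init__(self, capacity_bytes):
            self.capacity = capacity_bytes
            self.used = 0
            self.sets = {}   # query_id -> (size, cost, ref_rate)

        def profit(self, size, cost, ref_rate):
            return ref_rate * cost / size

        def admit(self, query_id, size, cost, ref_rate):
            if size > self.capacity:
                return False
            # Evict lowest-profit sets while the new set does not fit.
            while self.used + size > self.capacity:
                victim = min(self.sets, key=lambda q: self.profit(*self.sets[q]))
                # Admission control: refuse the new set if it is less profitable
                # than the cheapest victim it would displace.
                if self.profit(size, cost, ref_rate) <= self.profit(*self.sets[victim]):
                    return False
                self.used -= self.sets.pop(victim)[0]
            self.sets[query_id] = (size, cost, ref_rate)
            self.used += size
            return True

    c = ProfitCache(100)
    c.admit("q1", 60, cost=50, ref_rate=0.2)
    c.admit("q2", 60, cost=500, ref_rate=0.5)   # evicts q1, which is less profitable
    ```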

  20. Analysis of DNS Cache Effects on Query Distribution

    PubMed Central

    2013-01-01

    This paper studies the DNS cache effects that occur on query distribution at the CN top-level domain (TLD) server. We first filter out malformed DNS queries, classified into six categories, to remove log data pollution. A model for DNS resolution, more specifically DNS caching, is presented. We demonstrate the presence and magnitude of DNS cache effects and cache sharing effects on the request distribution through an analytic model and simulation. CN TLD log data results are provided and analyzed based on the cache model. The approximate TTL distribution for domain names is inferred quantitatively. PMID:24396313

  1. Analysis of DNS cache effects on query distribution.

    PubMed

    Wang, Zheng

    2013-01-01

    This paper studies the DNS cache effects that occur on query distribution at the CN top-level domain (TLD) server. We first filter out malformed DNS queries, classified into six categories, to remove log data pollution. A model for DNS resolution, more specifically DNS caching, is presented. We demonstrate the presence and magnitude of DNS cache effects and cache sharing effects on the request distribution through an analytic model and simulation. CN TLD log data results are provided and analyzed based on the cache model. The approximate TTL distribution for domain names is inferred quantitatively.
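
    The caching effect described in both records above can be illustrated with a minimal TTL-based resolver cache simulation: queries answered from a still-valid cached record never reach the TLD server, which reshapes the query distribution the server observes. The trace, timestamps, and TTL below are illustrative assumptions, not data from the CN TLD logs.

    ```python
    # TTL-based DNS cache simulation: count how many queries still reach the
    # upstream (TLD) server when a caching resolver sits in front of it.

    def simulate(trace, ttl):
        """trace: list of (timestamp_seconds, domain_name)."""
        expires = {}                 # domain -> time the cached record expires
        upstream_queries = 0
        for t, name in trace:
            if expires.get(name, -1.0) > t:
                continue             # answered from cache, never reaches the TLD
            upstream_queries += 1
            expires[name] = t + ttl
        return upstream_queries

    trace = [(0, "a.cn"), (5, "a.cn"), (40, "a.cn"), (41, "b.cn"), (42, "b.cn")]
    print(simulate(trace, ttl=30))   # -> 3 of 5 queries reach the server
    ```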

  2. Way-Scaling to Reduce Power of Cache with Delay Variation

    NASA Astrophysics Data System (ADS)

    Goudarzi, Maziar; Matsumura, Tadayuki; Ishihara, Tohru

    The share of leakage in cache power consumption increases with technology scaling. Choosing a higher threshold voltage (Vth) and/or gate-oxide thickness (Tox) for cache transistors improves leakage but impacts cell delay. We show that, due to uncorrelated random within-die delay variation, only some (not all) of the cells actually violate the cache delay after the above change. We propose to add a spare cache way to replace delay-violating cache lines separately in each cache set. By SPICE and gate-level simulations in a commercial 90nm process, we show that choosing higher Vth and Tox and adding one spare way to a 4-way 16KB cache reduces leakage power by 42%, which, depending on the share of leakage in total cache power, gives up to 22.59% and 41.37% reductions of total energy in the L1 instruction cache and L2 unified cache respectively, with a negligible delay penalty and without sacrificing cache capacity or timing yield.

  3. Clark's nutcracker spatial memory: the importance of large, structural cues.

    PubMed

    Bednekoff, Peter A; Balda, Russell P

    2014-02-01

    Clark's nutcrackers, Nucifraga columbiana, cache and recover stored seeds in high alpine areas including areas where snowfall, wind, and rockslides may frequently obscure or alter cues near the cache site. Previous work in the laboratory has established that Clark's nutcrackers use spatial memory to relocate cached food. Following from aspects of this work, we performed experiments to test the importance of large, structural cues for Clark's nutcracker spatial memory. Birds were no more accurate in recovering caches when more objects were on the floor of a large experimental room nor when this room was subdivided with a set of panels. However, nutcrackers were consistently less accurate in this large room than in a small experimental room. Clark's nutcrackers probably use structural features of experimental rooms as important landmarks during recovery of cached food. This use of large, extremely stable cues may reflect the imperfect reliability of smaller, closer cues in the natural habitat of Clark's nutcrackers. This article is part of a Special Issue entitled: CO3 2013. Copyright © 2013 Elsevier B.V. All rights reserved.

  4. Short-term observational spatial memory in Jackdaws (Corvus monedula) and Ravens (Corvus corax).

    PubMed

    Scheid, Christelle; Bugnyar, Thomas

    2008-10-01

    Observational spatial memory (OSM) refers to the ability to remember food caches made by other individuals, enabling observers to find and pilfer the others' caches. Within birds, OSM has only been demonstrated in corvids, with more social species such as Mexican jays (Aphelocoma ultramarina) showing a higher accuracy in finding conspecifics' caches than less social species such as Clark's nutcrackers (Nucifraga columbiana). However, socially dynamic corvids such as ravens (Corvus corax) are capable of sophisticated pilfering manoeuvres based on OSM. We here compared the performance of ravens and jackdaws (Corvus monedula) in a short-term OSM task. In contrast to ravens, jackdaws are socially cohesive but hardly cache and compete over food caches. Birds had to recover food pieces after watching a human experimenter hiding them in 2, 4 or 6 out of 10 possible locations. Results showed that for tests with two, four and six caches, ravens performed more accurately than expected by chance whereas jackdaws did not. Moreover, ravens made fewer re-visits to already inspected cache sites than jackdaws. These findings suggest that the development of observational spatial memory skills is linked with the species' reliance on food caches rather than with a social life style per se.

  5. Neuropsychological Performance in Advanced Age- Influences of Demographic Factors and Apolipoprotein E: Findings from the Cache County Memory Study

    PubMed Central

    Welsh-Bohmer, Katheen A.; Østbye, Truls; Sanders, Linda; Pieper, Carl F.; Hayden, Kathleen M.; Tschanz, JoAnn T.; Norton, Maria C.

    2009-01-01

    The Cache County Study of Memory in Aging (CCMS) is an epidemiological study of Alzheimer’s disease (AD), mild cognitive disorders, and aging in a population of exceptionally long-lived individuals (7th to 11th decade). Observation of population members without dementia provides an opportunity for establishing the range of normal neurocognitive performance in a representative sample of the very old. We examined neurocognitive performance of the normal participants undergoing full clinical evaluations (n=507) and we tested the potential modifying effects of APOE genotype, a known genetic risk factor for the later development of AD. The results indicate that advanced age and low education are related to lower test scores across nearly all of the neurocognitive measures. Gender and APOE ε4 both had negligible and inconsistent influences, affecting only isolated measures of memory and expressive speech (in case of gender). The gender and APOE effects disappeared once age and education were controlled. The study of this exceptionally long-lived population provides useful normative information regarding the broad range of “normal” cognition seen in advanced age. Among elderly without dementia or other cognitive impairment, APOE does not appear to exert any major effects on cognition once other demographic influences are controlled. PMID:18609337

  6. Neuropsychological performance in advanced age: influences of demographic factors and Apolipoprotein E: findings from the Cache County Memory Study.

    PubMed

    Welsh-Bohmer, Katheen A; Ostbye, Truls; Sanders, Linda; Pieper, Carl F; Hayden, Kathleen M; Tschanz, JoAnn T; Norton, Maria C

    2009-01-01

    The Cache County Study of Memory in Aging (CCMS) is an epidemiological study of Alzheimer's disease (AD), mild cognitive disorders, and aging in a population of exceptionally long-lived individuals (7th to 11th decade). Observation of population members without dementia provides an opportunity for establishing the range of normal neurocognitive performance in a representative sample of the very old. We examined neurocognitive performance of the normal participants undergoing full clinical evaluations (n = 507) and we tested the potential modifying effects of apolipoprotein E (APOE) genotype, a known genetic risk factor for the later development of AD. The results indicate that advanced age and low education are related to lower test scores across nearly all of the neurocognitive measures. Gender and APOE epsilon4 both had negligible and inconsistent influences, affecting only isolated measures of memory and expressive speech (in case of gender). The gender and APOE effects disappeared once age and education were controlled. The study of this exceptionally long-lived population provides useful normative information regarding the broad range of "normal" cognition seen in advanced age. Among elderly without dementia or other cognitive impairment, APOE does not appear to exert any major effects on cognition once other demographic influences are controlled.

  7. Turbidity and Total Suspended Solids on the Lower Cache River Watershed, AR.

    PubMed

    Rosado-Berrios, Carlos A; Bouldin, Jennifer L

    2016-06-01

    The Cache River Watershed (CRW) in Arkansas is part of one of the largest remaining bottomland hardwood forests in the US. Although wetlands are known to improve water quality, the Cache River is listed as impaired due to sedimentation and turbidity. This study measured turbidity and total suspended solids (TSS) in seven sites of the lower CRW; six sites were located on the Bayou DeView tributary of the Cache River. Turbidity and TSS levels ranged from 1.21 to 896 NTU, and 0.17 to 386.33 mg/L respectively and had an increasing trend over the 3-year study. However, a decreasing trend from upstream to downstream in the Bayou DeView tributary was noted. Sediment loading calculated from high precipitation events and mean TSS values indicate that contributions from the Cache River main channel was approximately 6.6 times greater than contributions from Bayou DeView. Land use surrounding this river channel affects water quality as wetlands provide a filter for sediments in the Bayou DeView channel.

  8. Cache Scheme Based on Pre-Fetch Operation in ICN

    PubMed Central

    Duan, Jie; Wang, Xiong; Xu, Shizhong; Liu, Yuanni; Xu, Chuan; Zhao, Guofeng

    2016-01-01

    Much recent research focuses on ICN (Information-Centric Networking), in which named content, rather than the end-host, becomes the first-class citizen. In ICN, named content can be further divided into many small chunks, and chunk-based communication has merits over content-based communication. The universal in-network cache is one of the fundamental infrastructures for ICN. In this work, a chunk-level cache mechanism based on a pre-fetch operation is proposed. The main idea is that routers with cache storage should pre-fetch and cache the chunks that are likely to be accessed in the near future, according to the received requests and the cache policy, in order to reduce users' perceived latency. Two pre-fetch driven modes are presented to answer when and how to pre-fetch. LRU (Least Recently Used) is employed for cache replacement. Simulation results show that the average user-perceived latency and hop count can be decreased by employing this pre-fetch-based cache mechanism. Furthermore, we also demonstrate that the results are influenced by many factors, such as the cache capacity, Zipf parameters and pre-fetch window size. PMID:27362478
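
    A minimal sketch of the mechanism described above: on each chunk request, the router caches the missed chunk and also pre-fetches the next few chunks of the same content, under LRU replacement. The fixed pre-fetch window used here is an assumption for illustration and does not reproduce the paper's two pre-fetch driven modes.

    ```python
    # Chunk-level cache with pre-fetch at an ICN router: cache the requested
    # chunk and the next `window` chunks, with LRU eviction.

    from collections import OrderedDict

    class PrefetchChunkCache:
        def __init__(self, capacity, window=2):
            self.capacity = capacity
            self.window = window
            self.cache = OrderedDict()        # (content, chunk) -> True, in LRU order

        def _insert(self, key):
            self.cache[key] = True
            self.cache.move_to_end(key)
            while len(self.cache) > self.capacity:
                self.cache.popitem(last=False)   # evict least recently used

        def request(self, content, chunk):
            key = (content, chunk)
            hit = key in self.cache
            if hit:
                self.cache.move_to_end(key)
            else:
                self._insert(key)                # fetch the missed chunk
            for i in range(1, self.window + 1):  # pre-fetch the following chunks
                self._insert((content, chunk + i))
            return hit

    cache = PrefetchChunkCache(capacity=8, window=2)
    print([cache.request("video1", c) for c in range(4)])  # -> [False, True, True, True]
    ```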

  9. An Analysis of Instruction-Cached SIMD Computer Architecture

    DTIC Science & Technology

    1993-12-01

    [The abstract field of this record is OCR residue from the report's front matter; the recoverable fragments are table-of-contents entries for an assemble/simulate/schedule/verify workflow, including "Step 3: ... Cache to Place Blocks", "Step 4: Schedule Cache Blocks", "Step 5: Store Cache Blocks", "Scheduler", and "Basic Block Definition".]

  10. The effect of code expanding optimizations on instruction cache design

    NASA Technical Reports Server (NTRS)

    Chen, William Y.; Chang, Pohua P.; Conte, Thomas M.; Hwu, Wen-Mei W.

    1991-01-01

    It is shown that code expanding optimizations have strong and non-intuitive implications for instruction cache design. Three types of code expanding optimizations are studied: instruction placement, function inline expansion, and superscalar optimizations. Overall, instruction placement reduces the miss ratio of small caches. Function inline expansion improves the performance for small cache sizes, but degrades the performance of medium caches. Superscalar optimizations increase the cache size required for a given miss ratio. On the other hand, they also increase the sequentiality of instruction access so that a simple load-forward scheme effectively cancels the negative effects. Overall, it is shown that with load forwarding, the three types of code expanding optimizations jointly improve the performance of small caches and have little effect on large caches.

  11. A Refreshable, On-line Cache for HST Data Retrieval

    NASA Astrophysics Data System (ADS)

    Fraquelli, Dorothy A.; Ellis, Tracy A.; Ridgaway, Michael; DPAS Team

    2016-01-01

    We discuss upgrades to the HST Data Processing System, with an emphasis on the changes Hubble Space Telescope (HST) Archive users will experience. In particular, data are now held on-line (in a cache), removing the need to reprocess the data every time they are requested from the Archive. OTFR (on-the-fly reprocessing) has been replaced by a reprocessing system, which runs in the background. Data in the cache are automatically placed in the reprocessing queue when updated calibration reference files are received or when an improved calibration algorithm is installed. Data in the on-line cache are expected to be the most up-to-date version. These changes were phased in throughout 2015 for all active instruments. The on-line cache was populated instrument by instrument over the course of 2015. As data were placed in the cache, the flag that triggers OTFR was reset so that OTFR no longer runs on these data. "Hybrid" requests to the Archive are handled transparently, with data not yet in the cache provided via OTFR and the remaining data provided from the cache. Users do not need to make separate requests. Users of the MAST Portal will be able to download data from the cache immediately. For data not in the cache, the Portal will send the user to the standard "Retrieval Options Page," allowing the user to direct the Archive to process and deliver the data. The classic MAST Search and Retrieval interface has the same look and feel as previously. Minor changes, unrelated to the cache, have been made to the format of the Retrieval Options Page.

  12. A two-level cache for distributed information retrieval in search engines.

    PubMed

    Zhang, Weizhe; He, Hui; Ye, Jianwei

    2013-01-01

    To improve the performance of distributed information retrieval in search engines, we propose a two-level cache structure based on the queries in users' logs. We extract the highest-ranked queries of users into the static cache, in which the queries are the most popular. We adopt the dynamic cache as an auxiliary to optimize the distribution of the cache data, and we propose a distribution strategy for the cache data. The experiments show that the hit rate, efficiency, and time consumption of the two-level cache compare favorably with other cache structures.

  13. A Two-Level Cache for Distributed Information Retrieval in Search Engines

    PubMed Central

    Zhang, Weizhe; He, Hui; Ye, Jianwei

    2013-01-01

    To improve the performance of distributed information retrieval in search engines, we propose a two-level cache structure based on the queries in users' logs. We extract the highest-ranked queries of users into the static cache, in which the queries are the most popular. We adopt the dynamic cache as an auxiliary to optimize the distribution of the cache data, and we propose a distribution strategy for the cache data. The experiments show that the hit rate, efficiency, and time consumption of the two-level cache compare favorably with other cache structures. PMID:24363621
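
    The structure described in both records above can be sketched directly: a static level seeded with the most popular queries from historical logs, backed by a dynamic LRU level for the remaining traffic. The sizes, the example log, and the query trace below are illustrative assumptions.

    ```python
    # Two-level query-result cache sketch: static cache of historically popular
    # queries plus a dynamic LRU cache for everything else.

    from collections import Counter, OrderedDict

    class TwoLevelCache:
        def __init__(self, log_queries, static_size, dynamic_size):
            # Level 1: frozen set of the top queries from the historical log.
            self.static = {q for q, _ in Counter(log_queries).most_common(static_size)}
            # Level 2: LRU over recent queries.
            self.dynamic = OrderedDict()
            self.dynamic_size = dynamic_size

        def lookup(self, query):
            if query in self.static:
                return True
            if query in self.dynamic:
                self.dynamic.move_to_end(query)
                return True
            self.dynamic[query] = True          # miss: cache it dynamically
            if len(self.dynamic) > self.dynamic_size:
                self.dynamic.popitem(last=False)
            return False

    log = ["weather", "news", "weather", "maps", "weather", "news"]
    cache = TwoLevelCache(log, static_size=2, dynamic_size=2)
    print([cache.lookup(q) for q in ["weather", "maps", "maps", "news"]])
    # -> [True, False, True, True]
    ```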

  14. PEM public key certificate cache server

    NASA Astrophysics Data System (ADS)

    Cheung, T.

    1993-12-01

    Privacy Enhanced Mail (PEM) provides privacy enhancement services to users of Internet electronic mail. Confidentiality, authentication, message integrity, and non-repudiation of origin are provided by applying cryptographic measures to messages transferred between end systems by the Message Transfer System. PEM supports both symmetric and asymmetric key distribution. However, the prevalent implementation uses a public key certificate-based strategy, modeled after the X.509 directory authentication framework. This scheme provides an infrastructure compatible with X.509. According to RFC 1422, public key certificates can be stored in directory servers, transmitted via non-secure message exchanges, or distributed via other means. Directory services provide a specialized distributed database for OSI applications. The directory contains information about objects and then provides structured mechanisms for accessing that information. Since directory services are not widely available now, a good approach is to manage certificates in a centralized certificate server. This document describes the detailed design of a centralized certificate cache server. This server manages a cache of certificates and a cache of Certificate Revocation Lists (CRL's) for PEM applications. PEM applications contact the server to obtain/store certificates and CRL's. The server software is programmed in C and ELROS. To use this server, ISODE has to be configured and installed properly. The ISODE library 'libisode.a' has to be linked together with this library because ELROS uses the transport layer functions provided by 'libisode.a.' The X.500 DAP library that is included with the ELROS distribution has to be linked in also, since the server uses the DAP library functions to communicate with directory servers.

  15. Forest resources of the Wasatch-Cache National Forest

    Treesearch

    Renee A. O' Brien; Jesse Pope

    1997-01-01

    The 1,215,219 acres in the Wasatch-Cache National Forest encompass 863,906 acres of forest land, made up of 90 percent (776,239 acres) "timberland" and 10 percent (87,667 acres) "woodland." The other 351,313 acres of the Wasatch-Cache are nonforest or water (fig. 1). This report discusses forest land only. In the Wasatch-Cache, 26 percent...

  16. Predictive Caching Using the TDAG Algorithm

    NASA Technical Reports Server (NTRS)

    Laird, Philip; Saul, Ronald

    1992-01-01

    We describe how the TDAG algorithm for learning to predict symbol sequences can be used to design a predictive cache store. A model of a two-level mass storage system is developed and used to calculate the performance of the cache under various conditions. Experimental simulations provide good confirmation of the model.
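
    As a rough illustration of prediction-driven prefetching, the sketch below uses only a first-order (last-access) predictor: it learns the most frequent successor of each block and nominates it as the prefetch candidate. This is a deliberate simplification; the actual TDAG algorithm learns a tree of variable-length contexts and is not reproduced here.

    ```python
    # Simplified prediction-driven prefetching: learn the most common successor
    # of the block just accessed and propose it as the next block to prefetch.

    from collections import defaultdict, Counter

    class PredictivePrefetcher:
        def __init__(self):
            self.successors = defaultdict(Counter)   # block -> Counter of next blocks
            self.last = None

        def access(self, block):
            if self.last is not None:
                self.successors[self.last][block] += 1
            self.last = block

        def predict_next(self):
            counts = self.successors.get(self.last)
            if not counts:
                return None
            return counts.most_common(1)[0][0]       # prefetch candidate

    p = PredictivePrefetcher()
    for b in ["A", "B", "C", "A", "B", "C", "A"]:
        p.access(b)
    print(p.predict_next())   # -> "B", the learned successor of "A"
    ```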

  17. Version pressure feedback mechanisms for speculative versioning caches

    DOEpatents

    Eichenberger, Alexandre E.; Gara, Alan; O'Brien, Kathryn M.; Ohmacht, Martin; Zhuang, Xiaotong

    2013-03-12

    Mechanisms are provided for controlling version pressure on a speculative versioning cache. Raw version pressure data is collected based on one or more threads accessing cache lines of the speculative versioning cache. One or more statistical measures of version pressure are generated based on the collected raw version pressure data. A determination is made as to whether one or more modifications to an operation of a data processing system are to be performed based on the one or more statistical measures of version pressure, the one or more modifications affecting version pressure exerted on the speculative versioning cache. An operation of the data processing system is modified based on the one or more determined modifications, in response to a determination that one or more modifications to the operation of the data processing system are to be performed, to affect the version pressure exerted on the speculative versioning cache.

  18. A Survey of Architectural Techniques For Improving Cache Power Efficiency

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mittal, Sparsh

    Modern processors are using increasingly larger sized on-chip caches. Also, with each CMOS technology generation, there has been a significant increase in their leakage energy consumption. For this reason, cache power management has become a crucial research issue in modern processor design. To address this challenge and also meet the goals of sustainable computing, researchers have proposed several techniques for improving energy efficiency of cache architectures. This paper surveys recent architectural techniques for improving cache power efficiency and also presents a classification of these techniques based on their characteristics. For providing an application perspective, this paper also reviews several real-world processor chips that employ cache energy saving techniques. The aim of this survey is to enable engineers and researchers to get insights into the techniques for improving cache power efficiency and motivate them to invent novel solutions for enabling low-power operation of caches.

  19. Clark’s Nutcrackers (Nucifraga columbiana) Flexibly Adapt Caching Behavior to a Cooperative Context

    PubMed Central

    Clary, Dawson; Kelly, Debbie M.

    2016-01-01

    Corvids recognize when their caches are at risk of being stolen by others and have developed strategies to protect these caches from pilferage. For instance, Clark’s nutcrackers will suppress the number of caches they make if being observed by a potential thief. However, cache protection has most often been studied using competitive contexts, so it is unclear whether corvids can adjust their caching in beneficial ways to accommodate non-competitive situations. Therefore, we examined whether Clark’s nutcrackers, a non-social corvid, would flexibly adapt their caching behaviors to a cooperative context. To do so, birds were given a caching task during which caches made by one individual were reciprocally exchanged for the caches of a partner bird over repeated trials. In this scenario, if caching behaviors can be flexibly deployed, then the birds should recognize the cooperative nature of the task and maintain or increase caching levels over time. However, if cache protection strategies are applied independent of social context and simply in response to cache theft, then cache suppression should occur. In the current experiment, we found that the birds maintained caching throughout the experiment. We report that males increased caching in response to a manipulation in which caches were artificially added, suggesting the birds could adapt to the cooperative nature of the task. Additionally, we show that caching decisions were not solely due to motivational factors, instead showing an additional influence attributed to the behavior of the partner bird. PMID:27826273

  20. dCache on Steroids - Delegated Storage Solutions

    DOE PAGES

    Mkrtchyan, Tigran; Adeyemi, F.; Ashish, A.; ...

    2017-11-23

    For over a decade, dCache.org has delivered robust software used at more than 80 Universities and research institutes around the world, allowing these sites to provide reliable storage services for the WLCG experiments as well as many other scientific communities. The flexible architecture of dCache allows running it in a wide variety of configurations and platforms - from a SoC based all-in-one Raspberry-Pi up to hundreds of nodes in a multipetabyte installation. Due to lack of managed storage at the time, dCache implemented data placement, replication and data integrity directly. Today, many alternatives are available: S3, GlusterFS, CEPH and others. While such solutions position themselves as scalable storage systems, they cannot be used by many scientific communities out of the box. The absence of community-accepted authentication and authorization mechanisms, the use of product specific protocols and the lack of namespace are some of the reasons that prevent wide-scale adoption of these alternatives. Most of these limitations are already solved by dCache. By delegating low-level storage management functionality to the above-mentioned new systems and providing the missing layer through dCache, we provide a solution which combines the benefits of both worlds - industry standard storage building blocks with the access protocols and authentication required by scientific communities. In this paper, we focus on CEPH, a popular software for clustered storage that supports file, block and object interfaces. CEPH is often used in modern computing centers, for example as a backend to OpenStack services. We will show prototypes of dCache running with a CEPH backend and discuss the benefits and limitations of such an approach. As a result, we will also outline the roadmap for supporting ‘delegated storage’ within the dCache releases.

  1. dCache on Steroids - Delegated Storage Solutions

    NASA Astrophysics Data System (ADS)

    Mkrtchyan, T.; Adeyemi, F.; Ashish, A.; Behrmann, G.; Fuhrmann, P.; Litvintsev, D.; Millar, P.; Rossi, A.; Sahakyan, M.; Starek, J.

    2017-10-01

    For over a decade, dCache.org has delivered robust software used at more than 80 Universities and research institutes around the world, allowing these sites to provide reliable storage services for the WLCG experiments as well as many other scientific communities. The flexible architecture of dCache allows running it in a wide variety of configurations and platforms - from a SoC based all-in-one Raspberry-Pi up to hundreds of nodes in a multipetabyte installation. Due to lack of managed storage at the time, dCache implemented data placement, replication and data integrity directly. Today, many alternatives are available: S3, GlusterFS, CEPH and others. While such solutions position themselves as scalable storage systems, they cannot be used by many scientific communities out of the box. The absence of community-accepted authentication and authorization mechanisms, the use of product specific protocols and the lack of namespace are some of the reasons that prevent wide-scale adoption of these alternatives. Most of these limitations are already solved by dCache. By delegating low-level storage management functionality to the above-mentioned new systems and providing the missing layer through dCache, we provide a solution which combines the benefits of both worlds - industry standard storage building blocks with the access protocols and authentication required by scientific communities. In this paper, we focus on CEPH, a popular software for clustered storage that supports file, block and object interfaces. CEPH is often used in modern computing centers, for example as a backend to OpenStack services. We will show prototypes of dCache running with a CEPH backend and discuss the benefits and limitations of such an approach. We will also outline the roadmap for supporting ‘delegated storage’ within the dCache releases.

  2. dCache on Steroids - Delegated Storage Solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mkrtchyan, Tigran; Adeyemi, F.; Ashish, A.

    For over a decade, dCache.org has delivered robust software used at more than 80 Universities and research institutes around the world, allowing these sites to provide reliable storage services for the WLCG experiments as well as many other scientific communities. The flexible architecture of dCache allows running it in a wide variety of configurations and platforms - from a SoC based all-in-one Raspberry-Pi up to hundreds of nodes in a multipetabyte installation. Due to lack of managed storage at the time, dCache implemented data placement, replication and data integrity directly. Today, many alternatives are available: S3, GlusterFS, CEPH and others. While such solutions position themselves as scalable storage systems, they cannot be used by many scientific communities out of the box. The absence of community-accepted authentication and authorization mechanisms, the use of product specific protocols and the lack of namespace are some of the reasons that prevent wide-scale adoption of these alternatives. Most of these limitations are already solved by dCache. By delegating low-level storage management functionality to the above-mentioned new systems and providing the missing layer through dCache, we provide a solution which combines the benefits of both worlds - industry standard storage building blocks with the access protocols and authentication required by scientific communities. In this paper, we focus on CEPH, a popular software for clustered storage that supports file, block and object interfaces. CEPH is often used in modern computing centers, for example as a backend to OpenStack services. We will show prototypes of dCache running with a CEPH backend and discuss the benefits and limitations of such an approach. As a result, we will also outline the roadmap for supporting ‘delegated storage’ within the dCache releases.

  3. No evidence for memory interference across sessions in food hoarding marsh tits Poecile palustris under laboratory conditions.

    PubMed

    Urhan, A Utku; Brodin, Anders

    2015-05-01

    Scatter hoarding birds are known for their accurate spatial memory. In a previous experiment, we tested the retrieval accuracy in marsh tits in a typical laboratory set-up for this species. We also tested the performance of humans in this experimental set-up. Somewhat unexpectedly, humans performed much better than marsh tits. In the first five attempts, humans relocated almost 90 % of the caches they had hidden 5 h earlier. Marsh tits only relocated 25 % in the first five attempts and just above 40 % in the first ten attempts. Typically, in this type of experiment, the birds will be caching and retrieving many times in the same sites in the same experimental room. This is very different from the conditions in nature where hoarding parids only cache once in a caching site. Hence, it is possible that memories from previous sessions will disturb the formation of new memories. If there is such proactive interference, the prediction is that success should decay over sessions. Here, we have designed an experiment to investigate whether there is such memory interference in this type of experiment. We allowed marsh tits and humans to cache and retrieve in three repeated sessions without prior experience of the arena. The performance did not change over sessions, and on average, marsh tits correctly visited around 25 % of the caches in the first five attempts. The corresponding success in humans was constant across sessions, and it was around 90 % on average. We conclude that the somewhat poor performance of the marsh tits did not depend on proactive memory interference. We also discuss other possible reasons for why marsh tits in general do not perform better in laboratory experiments.

  4. Xrootd in dCache - design and experiences

    NASA Astrophysics Data System (ADS)

    Behrmann, Gerd; Ozerov, Dmitry; Zangerl, Thomas

    2011-12-01

    dCache is a well established distributed storage solution used in both high energy physics computing and other disciplines. An overview of the implementation of the xrootd data access protocol within dCache is presented. The performance of various access mechanisms is studied and compared, and it is concluded that our implementation is as performant as other protocols. This makes dCache a compelling alternative to the Scalla software suite implementation of xrootd, with added value from broad protocol support, including the IETF-approved NFS 4.1 protocol.

  5. Assessment of TREM2 rs75932628 association with Alzheimer's disease in a population-based sample: the Cache County Study.

    PubMed

    Gonzalez Murcia, Josue D; Schmutz, Cameron; Munger, Caitlin; Perkes, Ammon; Gustin, Aaron; Peterson, Michael; Ebbert, Mark T W; Norton, Maria C; Tschanz, Joann T; Munger, Ronald G; Corcoran, Christopher D; Kauwe, John S K

    2013-12-01

    Recent studies have identified the rs75932628 (R47H) variant in TREM2 as an Alzheimer's disease risk factor with estimated odds ratio ranging from 2.9 to 5.1. The Cache County Memory Study is a large, population-based sample designed for the study of memory and aging. We genotyped R47H in 2974 samples (427 cases and 2540 control subjects) from the Cache County study using a custom TaqMan assay. We observed 7 heterozygous cases and 12 heterozygous control subjects with an odds ratio of 3.5 (95% confidence interval, 1.3-8.8; p = 0.0076). The minor allele frequency and population attributable fraction for R47H were 0.0029 and 0.004, respectively. This study replicates the association between R47H and Alzheimer's disease risk in a large, population-based sample, and estimates the population frequency and attributable risk of this rare variant. Copyright © 2013 Elsevier Inc. All rights reserved.

  6. Quantifying animal movement for caching foragers: the path identification index (PII) and cougars, Puma concolor.

    PubMed

    Ironside, Kirsten E; Mattson, David J; Theimer, Tad; Jansen, Brian; Holton, Brandon; Arundel, Terence; Peters, Michael; Sexton, Joseph O; Edwards, Thomas C

    2017-01-01

    Many studies of animal movement have focused on directed versus area-restricted movement, which rely on correlations between step-length and turn-angles and on stationarity through time to define behavioral states. Although these approaches might apply well to grazing in patchy landscapes, species that either feed for short periods on large, concentrated food sources or cache food exhibit movements that are difficult to model using the traditional metrics of turn-angle and step-length alone. We used GPS telemetry collected from a prey-caching predator, the cougar (Puma concolor, Linnaeus), to test whether combining metrics of site recursion, spatiotemporal clustering, speed, and turning into an index of movement using partial sums improves the ability to identify caching behavior. The index was used to identify changes in movement characteristics over time and segment paths into behavioral classes. The identification of behaviors from the Path Identification Index (PII) was evaluated using field investigations of cougar activities at GPS locations. We tested for statistical stationarity across behaviors for use of topographic view-sheds. Changes in the frequency and duration of PII were useful for identifying seasonal activities such as migration, gestation, and denning. The comparison of field investigations of cougar activities to behavioral PII classes resulted in an overall classification accuracy of 81%. Changes in behaviors were reflected in cougars' use of topographic view-sheds, resulting in statistical nonstationarity over time, and revealed important aspects of hunting behavior. Incorporating metrics of site recursion and spatiotemporal clustering revealed the temporal structure in movements of a caching forager. The movement index PII shows promise for identifying behaviors in species that frequently return to specific locations such as food caches, watering holes, or dens, and highlights the potential role memory and cognitive abilities play in

  7. Minimizing Cache Misses Using Minimum-Surface Bodies

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; VanderWijngaart, Rob; Biegel, Bryan (Technical Monitor)

    2002-01-01

    A number of known techniques for improving cache performance in scientific computations involve reordering of the iteration space. Some of these reorderings can be considered as coverings of the iteration space with sets having a good surface-to-volume ratio. Use of such sets reduces the number of cache misses in computations of local operators whose domain is the iteration space. First, we derive lower bounds that any algorithm must suffer while computing a local operator on a grid. Then we explore coverings of iteration spaces represented by structured and unstructured grids which allow us to approach these lower bounds. For structured grids we introduce a covering by successive-minima tiles of the interference lattice of the grid. We show that the covering has a low surface-to-volume ratio and present a computer experiment showing the actual reduction in cache misses achieved by using these tiles. For planar unstructured grids we show the existence of a covering which reduces the number of cache misses to the level of structured grids. On the other hand, we present a triangulation of a 3-dimensional cube such that any local operator on the corresponding grid has a significantly larger number of cache misses than a similar operator on a structured grid.
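
    The general idea of covering the iteration space with sets of good surface-to-volume ratio is illustrated below with ordinary square tiling of a 2-D 5-point stencil, so that each tile's working set stays cache-resident while its interior points are reused. This is a standard tiling sketch, not the successive-minima tiles of the interference lattice described in the paper; the tile size B is an assumption to be tuned to the target cache.

    ```python
    # Square tiling of a 5-point stencil: traverse the grid in BxB blocks so
    # neighbouring points are reused while still cache-resident.

    import numpy as np

    def stencil_tiled(a, B=64):
        n, m = a.shape
        out = np.zeros_like(a)
        for ii in range(1, n - 1, B):            # iterate tile by tile
            for jj in range(1, m - 1, B):
                for i in range(ii, min(ii + B, n - 1)):
                    for j in range(jj, min(jj + B, m - 1)):
                        out[i, j] = 0.25 * (a[i - 1, j] + a[i + 1, j] +
                                            a[i, j - 1] + a[i, j + 1])
        return out

    a = np.random.rand(256, 256)
    print(stencil_tiled(a).shape)   # -> (256, 256)
    ```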

  8. PROSPECTIVE STUDY OF READY-TO-EAT BREAKFAST CEREAL CONSUMPTION AND COGNITIVE DECLINE AMONG ELDERLY MEN AND WOMEN IN CACHE COUNTY, UTAH, STUDY ON MEMORY, HEALTH, AND AGING

    PubMed Central

    WENGREEN, H.; NELSON, C.; MUNGER, R.G.; CORCORAN, C.

    2013-01-01

    Objective To examine associations between frequency of ready-to-eat cereal (RTEC) consumption and cognitive function among elderly men and women of the Cache County Study on Memory and Aging in Utah. Design A population-based prospective cohort study established in Cache County, Utah in 1995. Setting and Participants 3831 men and women > 65 years of age who were living in Cache County, Utah in 1995. Measurement Diet was assessed using a 142-item food frequency questionnaire at baseline. Cognitive function was assessed using an adapted version of the Modified Mini-Mental State examination (3MS) at baseline and three subsequent interviews over 11 years. RTEC consumption was defined as daily, weekly, or infrequent use. Results In multivariable models, more frequent RTEC consumption was not associated with a cognitive benefit. Those consuming RTEC weekly but less than daily scored higher on their baseline 3MS than did those consuming RTEC more or less frequently (91.7, 90.6, 90.6, respectively; p-value <0.001). This association was maintained across 11 years of observation such that those consuming RTEC weekly but less than daily declined on average 3.96 points compared to an average 5.13 and 4.57 point decline for those consuming cereal more or less frequently (p-value = 0.0009). Conclusion Those consuming RTEC at least daily had poorer cognitive performance at baseline and over 11 years of follow-up compared to those who consumed cereal more or less frequently. RTEC is a nutrient-dense food, but should not replace the consumption of other healthy foods in the diets of elderly people. Associations between RTEC consumption, dietary patterns, and cognitive function deserve further study. PMID:21369668

  9. Smart caching based on mobile agent of power WebGIS platform.

    PubMed

    Wang, Xiaohui; Wu, Kehe; Chen, Fei

    2013-01-01

    Power information construction is developing in an intensive, platform-based, distributed direction with the expansion of the power grid and the improvement of information technology. In order to meet this trend, a power WebGIS was designed and developed. In this paper, we first discuss the architecture and functionality of the power WebGIS, and then we study its caching technology in detail, which comprises a dynamic display cache model, a caching structure based on mobile agents, and a cache data model. We have designed experiments with different data capacities to contrast the performance of WebGIS with the proposed caching model against traditional WebGIS. The experimental results showed that, with the same hardware environment, the response time of WebGIS both with and without the caching model increased as the data capacity grew, and the larger the data, the greater the performance improvement of WebGIS with the proposed caching model.

  10. Smart Caching Based on Mobile Agent of Power WebGIS Platform

    PubMed Central

    Wang, Xiaohui; Wu, Kehe; Chen, Fei

    2013-01-01

    Power information construction is developing in an intensive, platform-based, distributed direction with the expansion of the power grid and the improvement of information technology. In order to meet this trend, a power WebGIS was designed and developed. In this paper, we first discuss the architecture and functionality of the power WebGIS, and then we study its caching technology in detail, which comprises a dynamic display cache model, a caching structure based on mobile agents, and a cache data model. We have designed experiments with different data capacities to contrast the performance of WebGIS with the proposed caching model against traditional WebGIS. The experimental results showed that, with the same hardware environment, the response time of WebGIS both with and without the caching model increased as the data capacity grew, and the larger the data, the greater the performance improvement of WebGIS with the proposed caching model. PMID:24288504

  11. Behavior-aware cache hierarchy optimization for low-power multi-core embedded systems

    NASA Astrophysics Data System (ADS)

    Zhao, Huatao; Luo, Xiao; Zhu, Chen; Watanabe, Takahiro; Zhu, Tianbo

    2017-07-01

    In modern embedded systems, the increasing number of cores requires efficient cache hierarchies to ensure data throughput, but such cache hierarchies are restricted by their bloated size and interference accesses, which lead to both performance degradation and wasted energy. In this paper, we propose a behavior-aware cache hierarchy (BACH) which optimally allocates the multi-level cache resources to many cores and greatly improves the efficiency of the cache hierarchy, resulting in low energy consumption. BACH takes full advantage of the explored application behaviors and runtime cache resource demands as the basis for cache allocation, so that we can optimally configure the cache hierarchy to meet the runtime demand. BACH was implemented on the GEM5 simulator. The experimental results show that the energy consumption of a three-level cache hierarchy can be reduced by between 5.29% and 27.94% compared with other key approaches, while the performance of the multi-core system shows a slight improvement even counting hardware overhead.

  12. Study of cache performance in distributed environment for data processing

    NASA Astrophysics Data System (ADS)

    Makatun, Dzmitry; Lauret, Jérôme; Šumbera, Michal

    2014-06-01

    Processing data in a distributed environment has found application in many fields of science (nuclear and particle physics (NPP), astronomy, and biology, to name only a few). Efficiently transferring data between sites is an essential part of such processing. The implementation of caching strategies in data transfer software and tools, such as the Reasoner for Intelligent File Transfer (RIFT) being developed in the STAR collaboration, can significantly decrease network load and waiting time by reusing the knowledge of data provenance as well as data placed in the transfer cache to further expand the availability of sources for files and data-sets. Although a great variety of caching algorithms is known, a study is needed to evaluate which one can deliver the best performance in data access considering realistic demand patterns. Records of access to the complete data-sets of NPP experiments were analyzed and used as input for computer simulations. Series of simulations were done in order to estimate the possible cache hits and cache hits per byte for known caching algorithms. The simulations were done for caches of different sizes within the interval 0.001-90% of the complete data-set and a low-watermark within 0-90%. Records of data access were taken from several experiments and within different time intervals in order to validate the results. In this paper, we discuss the different data caching strategies, from canonical algorithms to hybrid cache strategies, present the results of our simulations for the diverse algorithms, and identify the choice for the best algorithm in the context of physics data analysis in NPP. While the results of those studies have been implemented in RIFT, they can also be used when setting up a cache in any other computational work-flow (Cloud processing, for example) or managing data storage with partial replicas of the entire data-set.
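
    The kind of trace-driven evaluation described above can be sketched in a few lines: replay a file-access trace through an LRU cache of varying capacity and report the hit rate. The trace and cache sizes below are illustrative assumptions, not records from the NPP experiments.

    ```python
    # Trace-driven LRU cache hit-rate simulation for a file-access trace.

    from collections import OrderedDict

    def lru_hit_rate(trace, capacity):
        cache, hits = OrderedDict(), 0
        for f in trace:
            if f in cache:
                hits += 1
                cache.move_to_end(f)          # refresh recency
            else:
                cache[f] = True
                if len(cache) > capacity:
                    cache.popitem(last=False) # evict least recently used
        return hits / len(trace)

    trace = ["f1", "f2", "f1", "f3", "f1", "f2", "f4", "f1"]
    for size in (1, 2, 4):
        print(size, round(lru_hit_rate(trace, size), 2))
    ```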

  13. Value-Based Caching in Information-Centric Wireless Body Area Networks

    PubMed Central

    Al-Turjman, Fadi M.; Imran, Muhammad; Vasilakos, Athanasios V.

    2017-01-01

    We propose a resilient cache replacement approach based on a Value of sensed Information (VoI) policy. To resolve and fetch content when the origin is not available due to isolated in-network nodes (fragmentation) and harsh operational conditions, we exploit a content caching approach. Our approach depends on four functional parameters in sensory Wireless Body Area Networks (WBANs). These four parameters are: age of data based on periodic request, popularity of on-demand requests, communication interference cost, and the duration for which the sensor node is required to operate in active mode to capture the sensed readings. These parameters are considered together to assign a value to the cached data to retain the most valuable information in the cache for prolonged time periods. The higher the value, the longer the duration for which the data will be retained in the cache. This caching strategy provides significant availability for most valuable and difficult to retrieve data in the WBANs. Extensive simulations are performed to compare the proposed scheme against other significant caching schemes in the literature while varying critical aspects in WBANs (e.g., data popularity, cache size, publisher load, connectivity-degree, and severe probabilities of node failures). These simulation results indicate that the proposed VoI-based approach is a valid tool for the retrieval of cached content in disruptive and challenging scenarios, such as the one experienced in WBANs, since it allows the retrieval of content for a long period even while experiencing severe in-network node failures. PMID:28106817
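
    A hedged sketch of value-based replacement along the lines described above: each cached item receives a score combining the four named factors (data age, request popularity, re-fetch/interference cost, and sensing duration), and the lowest-value item is evicted first. The weighted-sum combination, the weights, and the example items are assumptions for illustration, not the paper's exact VoI function.

    ```python
    # Value-based cache replacement sketch: score each item on four factors and
    # evict the least valuable entry.

    def value(item, w_age=0.25, w_pop=0.25, w_cost=0.25, w_dur=0.25):
        freshness = 1.0 / (1.0 + item["age_s"])        # newer data is worth more
        return (w_age * freshness + w_pop * item["popularity"]
                + w_cost * item["refetch_cost"] + w_dur * item["sense_duration"])

    def evict_one(cache):
        """cache: dict item_id -> attribute dict; drop the least valuable item."""
        victim = min(cache, key=lambda k: value(cache[k]))
        del cache[victim]
        return victim

    cache = {
        "hr_1":  {"age_s": 5,  "popularity": 0.9, "refetch_cost": 0.8, "sense_duration": 0.6},
        "tmp_7": {"age_s": 60, "popularity": 0.1, "refetch_cost": 0.2, "sense_duration": 0.1},
    }
    print(evict_one(cache))   # -> "tmp_7", the lowest-value entry
    ```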

  14. Effects of experience and social context on prospective caching strategies by scrub jays.

    PubMed

    Emery, N J; Clayton, N S

    2001-11-22

    Social life has costs associated with competition for resources such as food. Food storing may reduce this competition as the food can be collected quickly and hidden elsewhere; however, it is a risky strategy because caches can be pilfered by others. Scrub jays (Aphelocoma coerulescens) remember 'what', 'where' and 'when' they cached. Like other corvids, they remember where conspecifics have cached, pilfering them when given the opportunity, but may also adjust their own caching strategies to minimize potential pilfering. To test this, jays were allowed to cache either in private (when the other bird's view was obscured) or while a conspecific was watching, and then recover their caches in private. Here we show that jays with prior experience of pilfering another bird's caches subsequently re-cached food in new cache sites during recovery trials, but only when they had been observed caching. Jays without pilfering experience did not, even though they had observed other jays caching. Our results suggest that jays relate information about their previous experience as a pilferer to the possibility of future stealing by another bird, and modify their caching strategy accordingly.

  15. Novel dynamic caching for hierarchically distributed video-on-demand systems

    NASA Astrophysics Data System (ADS)

    Ogo, Kenta; Matsuda, Chikashi; Nishimura, Kazutoshi

    1998-02-01

    It is difficult to simultaneously serve the millions of video streams that will be needed in the age of 'Mega-Media' networks by using only one high-performance server. To distribute the service load, caching servers should be located near users. However, in previously proposed caching mechanisms, the grade of service depends on whether the data is already cached at a caching server. To make the caching servers transparent to the users, the ability to randomly access the large volume of data stored in the central server should be supported, and the operational functions of the provided service should not be narrowly restricted. We propose a mechanism for constructing a video-stream-caching server that is transparent to the users and that will always support all special playback functions for all available programs and all contents with a latency of only 1 or 2 seconds. This mechanism uses a variable-sized quantum-segment caching technique derived from an analysis of the historical usage log data generated by a line-on-demand-type service experiment and based on the basic techniques used by a time-slot-based multiple-stream video-on-demand server.

  16. Distributed shared memory for roaming large volumes.

    PubMed

    Castanié, Laurent; Mion, Christophe; Cavin, Xavier; Lévy, Bruno

    2006-01-01

    We present a cluster-based volume rendering system for roaming very large volumes. This system allows to move a gigabyte-sized probe inside a total volume of several tens or hundreds of gigabytes in real-time. While the size of the probe is limited by the total amount of texture memory on the cluster, the size of the total data set has no theoretical limit. The cluster is used as a distributed graphics processing unit that both aggregates graphics power and graphics memory. A hardware-accelerated volume renderer runs in parallel on the cluster nodes and the final image compositing is implemented using a pipelined sort-last rendering algorithm. Meanwhile, volume bricking and volume paging allow efficient data caching. On each rendering node, a distributed hierarchical cache system implements a global software-based distributed shared memory on the cluster. In case of a cache miss, this system first checks page residency on the other cluster nodes instead of directly accessing local disks. Using two Gigabit Ethernet network interfaces per node, we accelerate data fetching by a factor of 4 compared to directly accessing local disks. The system also implements asynchronous disk access and texture loading, which makes it possible to overlap data loading, volume slicing and rendering for optimal volume roaming.
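
    The hierarchical lookup described above (local brick cache, then peer nodes over the network, then local disk as a last resort) can be sketched as follows. Peer and disk accesses are mocked, and all class and method names are illustrative assumptions rather than the system's actual interfaces.

    ```python
    # Hierarchical page lookup sketch: local cache -> peer nodes -> local disk.

    class DistributedPageCache:
        def __init__(self, node_id, peers, load_from_disk):
            self.node_id = node_id
            self.peers = peers                 # other DistributedPageCache nodes
            self.load_from_disk = load_from_disk
            self.local = {}                    # page_id -> data

        def lookup_local(self, page_id):
            return self.local.get(page_id)

        def get(self, page_id):
            data = self.lookup_local(page_id)
            if data is not None:
                return data, "local"
            for peer in self.peers:            # cheaper than disk over the network
                data = peer.lookup_local(page_id)
                if data is not None:
                    self.local[page_id] = data
                    return data, f"peer:{peer.node_id}"
            data = self.load_from_disk(page_id)
            self.local[page_id] = data
            return data, "disk"

    n1 = DistributedPageCache("n1", [], load_from_disk=lambda p: f"<{p}>")
    n2 = DistributedPageCache("n2", [n1], load_from_disk=lambda p: f"<{p}>")
    n1.local["brick42"] = "<brick42>"
    print(n2.get("brick42"))                   # -> ('<brick42>', 'peer:n1')
    ```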

  17. Improved cache performance in Monte Carlo transport calculations using energy banding

    NASA Astrophysics Data System (ADS)

    Siegel, A.; Smith, K.; Felker, K.; Romano, P.; Forget, B.; Beckman, P.

    2014-04-01

    We present an energy banding algorithm for Monte Carlo (MC) neutral particle transport simulations which depend on large cross section lookup tables. In MC codes, read-only cross section data tables are accessed frequently, exhibit poor locality, and are typically too large to fit in fast memory. Thus, performance is often limited by long latencies to RAM, or by off-node communication latencies when the data footprint is very large and must be decomposed on a distributed memory machine. The proposed energy banding algorithm allows maximal temporal reuse of data in band sizes that can flexibly accommodate different architectural features. The energy banding algorithm is general and has a number of benefits compared to the traditional approach. In the present analysis we explore its potential to achieve improvements in time-to-solution on modern cache-based architectures.
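
    A minimal sketch of the banding idea, assuming a trivial per-particle update and made-up table names: particles are bucketed by energy band and processed band by band, so only one band's slice of the cross-section table is "hot" at a time.

```python
# Illustrative energy banding: group particles by band, then process each band
# in turn so only that band's cross sections need to stay in fast memory.
import bisect

def band_index(energy, band_edges):
    # band_edges is a sorted list of band boundaries
    return bisect.bisect_right(band_edges, energy) - 1

def transport_banded(particles, band_edges, xs_table):
    # bucket particles by the energy band they currently occupy
    bands = {}
    for p in particles:
        bands.setdefault(band_index(p["energy"], band_edges), []).append(p)

    for b, group in sorted(bands.items()):
        xs_slice = xs_table[b]          # only this band's cross sections are "hot"
        for p in group:
            sigma = xs_slice.get(p["nuclide"], 1.0)
            p["distance"] = p.get("distance", 0.0) + 1.0 / sigma   # toy update
    return particles

xs_table = {0: {"U235": 2.0}, 1: {"U235": 0.5}}   # per-band cross sections (made up)
particles = [{"energy": 0.3, "nuclide": "U235"}, {"energy": 5.0, "nuclide": "U235"}]
print(transport_banded(particles, band_edges=[0.0, 1.0, 10.0], xs_table=xs_table))
```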

  18. Epidemiology of cognitive aging and Alzheimer's disease: contributions of the cache county utah study of memory, health and aging.

    PubMed

    Hayden, Kathleen M; Welsh-Bohmer, Kathleen A

    2012-01-01

    Epidemiological studies of Alzheimer's disease (AD) provide insights into changing public health trends and their contribution to disease incidence. The current chapter considers how the population-based approach has contributed to our understanding of lifetime exposures that contribute to later disease risk and may act to modify onset of symptoms. We focus on the findings from a recent survey of an exceptionally long-lived population, the Cache County Utah Study of Memory, Health, and Aging. This study, confined to a single geographic population, has allowed estimation of the genetic and environmental influences on AD expression across the expected human lifespan of 95+ years. Given the emphasis of this text on the behavioral neurosciences of aging, we highlight within the current chapter the particular contributions of this population-based study to the neuropsychology of aging and AD. We also discuss hypotheses generated from this survey with respect to factors that may either accelerate or delay symptom onset in AD and the conditions that appear to be associated with successful cognitive aging.

  19. Improving Internet Archive Service through Proxy Cache.

    ERIC Educational Resources Information Center

    Yu, Hsiang-Fu; Chen, Yi-Ming; Wang, Shih-Yong; Tseng, Li-Ming

    2003-01-01

    Discusses file transfer protocol (FTP) servers for downloading archives (files with particular file extensions), and the change to HTTP (Hypertext transfer protocol) with increased Web use. Topics include the Archie server; proxy cache servers; and how to improve the hit rate of archives by a combination of caching and better searching mechanisms.…

  20. A trace-driven analysis of name and attribute caching in a distributed system

    NASA Technical Reports Server (NTRS)

    Shirriff, Ken W.; Ousterhout, John K.

    1992-01-01

    This paper presents the results of simulating file name and attribute caching on client machines in a distributed file system. The simulation used trace data gathered on a network of about 40 workstations. Caching was found to be advantageous: a cache on each client containing just 10 directories had a 91 percent hit rate on name lookups. Entry-based name caches (holding individual directory entries) had poorer performance for several reasons, resulting in a maximum hit rate of about 83 percent. File attribute caching obtained a 90 percent hit rate with a cache on each machine of the attributes for 30 files. The simulations show that maintaining cache consistency between machines is not a significant problem; only 1 in 400 name component lookups required invalidation of a remotely cached entry. Process migration to remote machines had little effect on caching. Caching was less successful in heavily shared and modified directories such as /tmp, but there were not enough references to /tmp overall to affect the results significantly. We estimate that adding name and attribute caching to the Sprite operating system could reduce server load by 36 percent and the number of network packets by 30 percent.
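
    The directory-granularity name cache described here can be illustrated with a toy trace-driven simulation. The trace format, cache size, and LRU policy below are assumptions for illustration, not the paper's simulator.

```python
# Replay a trace of (client, directory, name) lookups through a per-client LRU
# cache of whole directories and report the hit rate.
from collections import OrderedDict

def simulate_name_cache(trace, dirs_per_client=10):
    caches = {}               # client -> OrderedDict of cached directory names
    hits = lookups = 0
    for client, directory, _name in trace:
        cache = caches.setdefault(client, OrderedDict())
        lookups += 1
        if directory in cache:
            hits += 1
            cache.move_to_end(directory)      # LRU update
        else:
            cache[directory] = True           # fetch directory, then cache it
            if len(cache) > dirs_per_client:
                cache.popitem(last=False)     # evict least recently used
    return hits / lookups if lookups else 0.0

# Example: repeated lookups in a small working set of directories hit often.
trace = [("clientA", "/usr/bin", "ls"), ("clientA", "/usr/bin", "cat"),
         ("clientA", "/tmp", "x"), ("clientA", "/usr/bin", "grep")]
print(simulate_name_cache(trace))   # 0.5 for this tiny trace
```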

  1. Shared Memory Parallelization of an Implicit ADI-type CFD Code

    NASA Technical Reports Server (NTRS)

    Hauser, Th.; Huang, P. G.

    1999-01-01

    A parallelization study designed for ADI-type algorithms is presented using the OpenMP specification for shared-memory multiprocessor programming. Details of optimizations specifically addressed to cache-based computer architectures are described and performance measurements for the single and multiprocessor implementation are summarized. The paper demonstrates that optimization of memory access on a cache-based computer architecture controls the performance of the computational algorithm. A hybrid MPI/OpenMP approach is proposed for clusters of shared memory machines to further enhance the parallel performance. The method is applied to develop a new LES/DNS code, named LESTool. A preliminary DNS calculation of a fully developed channel flow at a Reynolds number of 180, Re(sub tau) = 180, has shown good agreement with existing data.

  2. Mobility-Aware Caching and Computation Offloading in 5G Ultra-Dense Cellular Networks.

    PubMed

    Chen, Min; Hao, Yixue; Qiu, Meikang; Song, Jeungeun; Wu, Di; Humar, Iztok

    2016-06-25

    Recent trends show that Internet traffic is increasingly dominated by content, which is accompanied by the exponential growth of traffic. To cope with this phenomenon, network caching is introduced to utilize the storage capacity of diverse network devices. In this paper, we first summarize four basic caching placement strategies, i.e., local caching, Device-to-Device (D2D) caching, Small cell Base Station (SBS) caching and Macrocell Base Station (MBS) caching. However, studies show that so far, much of the research has ignored the impact of user mobility. Therefore, taking the effect of user mobility into consideration, we propose a joint mobility-aware caching and SBS density placement scheme (MS caching). In addition, differences and relationships between caching and computation offloading are discussed. We present the design of a hybrid computation offloading scheme and support it with experimental results, which demonstrate improved performance in terms of energy cost. Finally, we discuss the design of an incentive mechanism by considering network dynamics, differentiated user quality of experience (QoE) and the heterogeneity of mobile terminals in terms of caching and computing capabilities.

  3. Spin-transfer torque magnetoresistive random-access memory technologies for normally off computing (invited)

    NASA Astrophysics Data System (ADS)

    Ando, K.; Fujita, S.; Ito, J.; Yuasa, S.; Suzuki, Y.; Nakatani, Y.; Miyazaki, T.; Yoda, H.

    2014-05-01

    Most parts of present computer systems are made of volatile devices, and the power supplied to them to avoid information loss causes huge energy losses. We can eliminate this meaningless energy loss by utilizing the non-volatile function of advanced spin-transfer torque magnetoresistive random-access memory (STT-MRAM) technology and create a new type of computer, i.e., normally off computers. Critical tasks to achieve normally off computers are implementations of STT-MRAM technologies in the main memory and low-level cache memories. STT-MRAM technology for applications to the main memory has been successfully developed by using perpendicular STT-MRAMs, and faster STT-MRAM technologies for applications to the cache memory are now being developed. The present status of STT-MRAMs and challenges that remain for normally off computers are discussed.

  4. Seedling Establishment of Coast Live Oak in Relation to Seed Caching by Jays

    Treesearch

    Joe R. McBride; Ed Norberg; Sheauchi Cheng; Ahmad Mossadegh

    1991-01-01

    The purpose of this study was to simulate the caching of acorns by jays and rodents to see if less costly procedures could be developed for the establishment of coast live oak (Quercus agrifolia). Four treatments [(1) random - single acorn cache, (2) regular - single acorn cache, (3) regular - 5 acorn cache, (4) regular - 10 acorn cache] were planted...

  5. Memory hierarchy using row-based compression

    DOEpatents

    Loh, Gabriel H.; O'Connor, James M.

    2016-10-25

    A system includes a first memory and a device coupleable to the first memory. The device includes a second memory to cache data from the first memory. The second memory includes a plurality of rows, each row including a corresponding set of compressed data blocks of non-uniform sizes and a corresponding set of tag blocks. Each tag block represents a corresponding compressed data block of the row. The device further includes decompression logic to decompress data blocks accessed from the second memory. The device further includes compression logic to compress data blocks to be stored in the second memory.
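
    A schematic software model of the row organization, assuming zlib as a stand-in for whatever compressor the hardware would use; the class and field names are illustrative only.

```python
# Each cache row stores variable-size compressed blocks plus one tag per block,
# echoing the patent's "compressed data blocks of non-uniform sizes" and tag blocks.
import zlib

class CompressedRow:
    def __init__(self):
        self.tags = []      # (block_address, offset, compressed_length)
        self.data = b""     # concatenated compressed blocks of non-uniform size

    def store(self, block_address, raw_bytes):
        compressed = zlib.compress(raw_bytes)
        self.tags.append((block_address, len(self.data), len(compressed)))
        self.data += compressed

    def load(self, block_address):
        # find the tag for this block, then decompress just that slice
        for addr, offset, length in self.tags:
            if addr == block_address:
                return zlib.decompress(self.data[offset:offset + length])
        return None   # miss: the block is not cached in this row

row = CompressedRow()
row.store(0x1000, b"A" * 64)            # highly compressible 64-byte block
print(row.load(0x1000) == b"A" * 64)    # True
```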

  6. Mobility-Aware Caching and Computation Offloading in 5G Ultra-Dense Cellular Networks

    PubMed Central

    Chen, Min; Hao, Yixue; Qiu, Meikang; Song, Jeungeun; Wu, Di; Humar, Iztok

    2016-01-01

    Recent trends show that Internet traffic is increasingly dominated by content, which is accompanied by the exponential growth of traffic. To cope with this phenomenon, network caching is introduced to utilize the storage capacity of diverse network devices. In this paper, we first summarize four basic caching placement strategies, i.e., local caching, Device-to-Device (D2D) caching, Small cell Base Station (SBS) caching and Macrocell Base Station (MBS) caching. However, studies show that so far, much of the research has ignored the impact of user mobility. Therefore, taking the effect of user mobility into consideration, we propose a joint mobility-aware caching and SBS density placement scheme (MS caching). In addition, differences and relationships between caching and computation offloading are discussed. We present the design of a hybrid computation offloading scheme and support it with experimental results, which demonstrate improved performance in terms of energy cost. Finally, we discuss the design of an incentive mechanism by considering network dynamics, differentiated user quality of experience (QoE) and the heterogeneity of mobile terminals in terms of caching and computing capabilities. PMID:27347975

  7. Load Balancing in Distributed Web Caching: A Novel Clustering Approach

    NASA Astrophysics Data System (ADS)

    Tiwari, R.; Kumar, K.; Khan, G.

    2010-11-01

    The World Wide Web suffers from scaling and reliability problems due to overloaded and congested proxy servers. Caching at local proxy servers helps, but cannot satisfy more than a third to half of requests; the remaining requests are still sent to the remote origin servers. In this paper we develop an algorithm for a Distributed Web Cache that incorporates cooperation among the proxy servers of one cluster. The algorithm combines Distributed Web Cache concepts with static, geographically based clusters of level-one proxy servers and a dynamic proxy-server mechanism that is invoked when one cluster becomes congested. Congestion and scalability problems are addressed by the clustering concept used in our approach. This results in a higher cache hit ratio and lower latency for requested pages. The algorithm also guarantees data consistency between the original server objects and the proxy cache objects.

  8. Effects of simulated mountain lion caching on decomposition of ungulate carcasses

    USGS Publications Warehouse

    Bischoff-Mattson, Z.; Mattson, D.

    2009-01-01

    Caching of animal remains is common among carnivorous species of all sizes, yet the effects of caching on larger prey are unstudied. We conducted a summer field experiment designed to test the effects of simulated mountain lion (Puma concolor) caching on mass loss, relative temperature, and odor dissemination of 9 prey-like carcasses. We deployed all but one of the carcasses in pairs, with one of each pair exposed and the other shaded and shallowly buried (cached). Caching substantially reduced wastage during dry and hot (drought) but not wet and cool (monsoon) periods, and it also reduced temperature and discernable odor to some degree during both seasons. These results are consistent with the hypotheses that caching serves to both reduce competition from arthropods and microbes and reduce odds of detection by larger vertebrates such as bears (Ursus spp.), wolves (Canis lupus), or other lions.

  9. A Survey Of Techniques for Managing and Leveraging Caches in GPUs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mittal, Sparsh

    2014-09-01

    Initially introduced as special-purpose accelerators for graphics applications, graphics processing units (GPUs) have now emerged as general purpose computing platforms for a wide range of applications. To address the requirements of these applications, modern GPUs include sizable hardware-managed caches. However, several factors, such as the unique architecture of GPUs, the rise of CPU–GPU heterogeneous computing, etc., demand effective management of caches to achieve high performance and energy efficiency. Recently, several techniques have been proposed for this purpose. In this paper, we survey several architectural and system-level techniques proposed for managing and leveraging GPU caches. We also discuss the importance and challenges of cache management in GPUs. The aim of this paper is to provide the readers insights into cache management techniques for GPUs and motivate them to propose even better techniques for leveraging the full potential of caches in the GPUs of tomorrow.

  10. Explicit Content Caching at Mobile Edge Networks with Cross-Layer Sensing

    PubMed Central

    Chen, Lingyu; Su, Youxing; Luo, Wenbin; Hong, Xuemin; Shi, Jianghong

    2018-01-01

    The deployment density and computational power of small base stations (BSs) are expected to increase significantly in the next generation mobile communication networks. These BSs form the mobile edge network, which is a pervasive and distributed infrastructure that can empower a variety of edge/fog computing applications. This paper proposes a novel edge-computing application called explicit caching, which stores selective contents at BSs and exposes such contents to local users for interactive browsing and download. We formulate the explicit caching problem as a joint content recommendation, caching, and delivery problem, which aims to maximize the expected user quality-of-experience (QoE) with varying degrees of cross-layer sensing capability. Optimal and effective heuristic algorithms are presented to solve the problem. The theoretical performance bounds of the explicit caching system are derived in simplified scenarios. The impacts of cache storage space, BS backhaul capacity, cross-layer information, and user mobility on the system performance are simulated and discussed in realistic scenarios. Results suggest that, compared with conventional implicit caching schemes, explicit caching can better exploit the mobile edge network infrastructure for personalized content dissemination. PMID:29565313

  11. Explicit Content Caching at Mobile Edge Networks with Cross-Layer Sensing.

    PubMed

    Chen, Lingyu; Su, Youxing; Luo, Wenbin; Hong, Xuemin; Shi, Jianghong

    2018-03-22

    The deployment density and computational power of small base stations (BSs) are expected to increase significantly in the next generation mobile communication networks. These BSs form the mobile edge network, which is a pervasive and distributed infrastructure that can empower a variety of edge/fog computing applications. This paper proposes a novel edge-computing application called explicit caching, which stores selective contents at BSs and exposes such contents to local users for interactive browsing and download. We formulate the explicit caching problem as a joint content recommendation, caching, and delivery problem, which aims to maximize the expected user quality-of-experience (QoE) with varying degrees of cross-layer sensing capability. Optimal and effective heuristic algorithms are presented to solve the problem. The theoretical performance bounds of the explicit caching system are derived in simplified scenarios. The impacts of cache storage space, BS backhaul capacity, cross-layer information, and user mobility on the system performance are simulated and discussed in realistic scenarios. Results suggest that, compared with conventional implicit caching schemes, explicit caching can better exploit the mobile edge network infrastructure for personalized content dissemination.

  12. Quantifying animal movement for caching foragers: the path identification index (PII) and cougars, Puma concolor

    USGS Publications Warehouse

    Ironside, Kirsten E.; Mattson, David J.; Theimer, Tad; Jansen, Brian; Holton, Brandon; Arundel, Terry; Peters, Michael; Sexton, Joseph O.; Edwards, Thomas C.

    2017-01-01

    memory and cognitive abilities may play in determining animal movements. With the addition of measures capturing site recursion, the temporal structure in the movements of a caching forager was revealed.

  13. A Morphometric Assessment of the Intended Function of Cached Clovis Points

    PubMed Central

    Buchanan, Briggs; Kilby, J. David; Huckell, Bruce B.; O'Brien, Michael J.; Collard, Mark

    2012-01-01

    A number of functions have been proposed for cached Clovis points. The least complicated hypothesis is that they were intended to arm hunting weapons. It has also been argued that they were produced for use in rituals or in connection with costly signaling displays. Lastly, it has been suggested that some cached Clovis points may have been used as saws. Here we report a study in which we morphometrically compared Clovis points from caches with Clovis points recovered from kill and camp sites to test two predictions of the hypothesis that cached Clovis points were intended to arm hunting weapons: 1) cached points should be the same shape as, but generally larger than, points from kill/camp sites, and 2) cached points and points from kill/camp sites should follow the same allometric trajectory. The results of the analyses are consistent with both predictions and therefore support the hypothesis. A follow-up review of the fit between the results of the analyses and the predictions of the other hypotheses indicates that the analyses support only the hunting equipment hypothesis. We conclude from this that cached Clovis points were likely produced with the intention of using them to arm hunting weapons. PMID:22348012

  14. Errors made by animals in memory paradigms are not always due to failure of memory.

    PubMed

    Wilkie, D M; Willson, R J; Carr, J A

    1999-01-01

    It is commonly assumed that errors in animal memory paradigms such as delayed matching to sample, radial mazes, and food-cache recovery are due to failures in memory for information necessary to perform the task successfully. A body of research, reviewed here, suggests that this is not always the case: animals sometimes make errors despite apparently being able to remember the appropriate information. In this paper a case study of this phenomenon is described, along with a demonstration of a simple procedural modification that successfully reduced these non-memory errors, thereby producing a better measure of memory.

  15. Effective Padding of Multi-Dimensional Arrays to Avoid Cache Conflict Misses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, Changwan; Bao, Wenlei; Cohen, Albert

    Caches are used to significantly improve performance. Even with high degrees of set-associativity, the number of accessed data elements mapping to the same set in a cache can easily exceed the degree of associativity, causing conflict misses and lowered performance, even if the working set is much smaller than cache capacity. Array padding (increasing the size of array dimensions) is a well known optimization technique that can reduce conflict misses. In this paper, we develop the first algorithms for optimal padding of arrays for a set-associative cache for arbitrary tile sizes. In addition, we develop the first solution to padding for nested tiles and multi-level caches. The techniques are implemented in the PAdvisor tool. Experimental results with multiple benchmarks demonstrate significant performance improvement from the use of PAdvisor for padding.
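
    A back-of-the-envelope illustration (not the paper's optimal-padding algorithm) of why padding helps: count how many distinct cache sets a column walk touches for a given leading dimension, and choose a small pad that spreads the accesses over more sets. The cache parameters are assumptions.

```python
# Assumed cache: 64-byte lines, 64 sets, 8-byte (double) elements.
def sets_touched(leading_dim, rows, elem_size=8, line=64, n_sets=64):
    stride_bytes = leading_dim * elem_size
    return len({(i * stride_bytes // line) % n_sets for i in range(rows)})

def best_pad(leading_dim, rows, max_pad=8):
    # choose the pad (extra elements per row) that maximizes set coverage
    return max(range(max_pad + 1), key=lambda p: sets_touched(leading_dim + p, rows))

# A 512-wide double array strides by 4096 bytes, hitting the same set every time:
print(sets_touched(512, 32))               # 1  -> every access maps to one set
pad = best_pad(512, 32)
print(pad, sets_touched(512 + pad, 32))    # 8 32 -> padding by one cache line spreads accesses
```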

  16. A Scalable proxy cache for Grid Data Access

    NASA Astrophysics Data System (ADS)

    Cristian Cirstea, Traian; Just Keijser, Jan; Koeroo, Oscar Arthur; Starink, Ronald; Templon, Jeffrey Alan

    2012-12-01

    We describe a prototype grid proxy cache system developed at Nikhef, motivated by a desire to construct the first building block of a future https-based Content Delivery Network for grid infrastructures. Two goals drove the project: firstly to provide a “native view” of the grid for desktop-type users, and secondly to improve performance for physics-analysis type use cases, where multiple passes are made over the same set of data (residing on the grid). We further constrained the design by requiring that the system should be made of standard components wherever possible. The prototype that emerged from this exercise is a horizontally-scalable, cooperating system of web server / cache nodes, fronted by a customized webDAV server. The webDAV server is custom only in the sense that it supports http redirects (providing horizontal scaling) and that the authentication module has, as back end, a proxy delegation chain that can be used by the cache nodes to retrieve files from the grid. The prototype was deployed at Nikhef and tested at a scale of several terabytes of data and approximately one hundred fast cores of computing. Both small and large files were tested, in a number of scenarios, and with various numbers of cache nodes, in order to understand the scaling properties of the system. For properly-dimensioned cache-node hardware, the system showed speedup of several integer factors for the analysis-type use cases. These results and others are presented and discussed.

  17. Analysis of cache for streaming tape drive

    NASA Technical Reports Server (NTRS)

    Chinnaswamy, V.

    1993-01-01

    A tape subsystem consists of a controller and a tape drive. Tapes are used for backup, data interchange, and software distribution. The backup operation is addressed. During a backup operation, data is read from disk, processed in CPU, and then sent to tape. The processing speeds of a disk subsystem, CPU, and a tape subsystem are likely to be different. A powerful CPU can read data from a fast disk, process it, and supply the data to the tape subsystem at a faster rate than the tape subsystem can handle. On the other hand, a slow disk drive and a slow CPU may not be able to supply data fast enough to keep a tape drive busy all the time. The backup process may supply data to the tape drive in bursts. Each burst may be followed by an idle period. Depending on the nature of the file distribution on the disk, the input stream to the tape subsystem may vary significantly during backup. To compensate for these differences and optimize the utilization of a tape subsystem, a cache or buffer is introduced in the tape controller. Most of the tape drives today are streaming tape drives. A streaming tape drive goes into reposition when there is no data from the controller. Once the drive goes into reposition, the controller can receive data, but it cannot supply data to the tape drive until the drive completes its reposition. A controller can also receive data from the host and send data to the tape drive at the same time. The relationships among cache size, host transfer rate, drive transfer rate, reposition time, and ramp-up time for optimal performance of the tape subsystem are investigated. The formulas developed also show the advantages of cache watermarks in increasing the streaming time of the tape drive, the maximum loss due to insufficient cache, the tradeoffs between cache and reposition times, and the effectiveness of cache on a streaming tape drive under idle times or interruptions in host transfers. Several mathematical formulas are developed to predict the performance of the tape
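
    A rough illustrative model, not the paper's formulas, of how cache size and a resume watermark interact with bursty host input: the host fills the cache in bursts, the drive drains it at its native rate, and an empty cache forces a reposition. All rates and times below are made up.

```python
# Toy discrete-time model of a streaming tape drive fed through a controller cache.
def simulate_tape(cache_kb, host_rate, drive_rate, watermark_kb,
                  burst_s=1.0, idle_s=0.5, reposition_s=2.0, total_s=60.0):
    t, cache, streaming, repositions = 0.0, 0.0, False, 0
    dt = 0.01
    while t < total_s:
        host_active = (t % (burst_s + idle_s)) < burst_s   # bursty host input
        if host_active and cache < cache_kb:
            cache = min(cache_kb, cache + host_rate * dt)
        if not streaming and cache >= watermark_kb:
            streaming = True                                # resume at the watermark
        if streaming:
            if cache > 0:
                cache = max(0.0, cache - drive_rate * dt)
            else:
                streaming = False                           # ran dry: reposition
                repositions += 1
                t += reposition_s                           # host input during the jump is ignored
        t += dt
    return repositions

# A larger cache (with a proportionally higher watermark) should reduce the
# number of repositions for the same bursty host behaviour.
print(simulate_tape(cache_kb=512, host_rate=400, drive_rate=500, watermark_kb=256))
print(simulate_tape(cache_kb=4096, host_rate=400, drive_rate=500, watermark_kb=2048))
```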

  18. Current desires of conspecific observers affect cache-protection strategies in California scrub-jays and Eurasian jays.

    PubMed

    Ostojić, Ljerka; Legg, Edward W; Brecht, Katharina F; Lange, Florian; Deininger, Chantal; Mendl, Michael; Clayton, Nicola S

    2017-01-23

    Many corvid species accurately remember the locations where they have seen others cache food, allowing them to pilfer these caches efficiently once the cachers have left the scene [1]. To protect their caches, corvids employ a suite of different cache-protection strategies that limit the observers' visual or acoustic access to the cache site [2,3]. In cases where an observer's sensory access cannot be reduced, it has been suggested that cachers might be able to minimise the risk of pilfering if they avoid caching food the observer is most motivated to pilfer [4]. In the wild, corvids have been reported to pilfer others' caches as soon as possible after the caching event [5], such that the cacher might benefit from adjusting its caching behaviour according to the observer's current desire. In the current study, observers pilfered according to their current desire: they preferentially pilfered food that they were not sated on. Cachers adjusted their caching behaviour accordingly: they protected their caches by selectively caching food that observers were not motivated to pilfer. The same cache-protection behaviour was found when cachers could not see on which food the observers were sated. Thus, the cachers' ability to respond to the observer's desire might have been driven by the observer's behaviour at the time of caching. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.

  19. Nature as a treasure map! Teaching geoscience with the help of earth caches?!

    NASA Astrophysics Data System (ADS)

    Zecha, Stefanie; Schiller, Thomas

    2015-04-01

    This presentation looks at how earth caches influence the learning process in the field of geoscience in non-formal education. The development of mobile technologies using Global Positioning System (GPS) data to pinpoint geographical location, together with the evolving Web 2.0 supporting the creation and consumption of content, suggests a potential for collaborative informal learning linked to location. With the help of GIS on smartphones you can go directly into nature, search for information on your smartphone, and learn something about nature. Earth caches, which are organized and supervised geocaches with special information about physical geography highlights, are a very good opportunity for this. Interested people can inform themselves about aspects of geoscience through earth caches. The main question of this presentation is how these caches are created in relation to learning processes. As it is not possible to analyze all existing earth caches, we focused on Bavaria and a certain feature of earth caches. At the end the authors show limits and potentials for the use of earth caches and give some remarks for the future.

  20. Efficient Cache use for Stencil Operations on Structured Discretization Grids

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; VanderWijngaart, Rob F.

    2001-01-01

    We derive tight bounds on the cache misses for evaluation of explicit stencil operators on structured grids. Our lower bound is based on the isoperimetrical property of the discrete octahedron. Our upper bound is based on a good surface-to-volume ratio of a parallelepiped spanned by a reduced basis of the interference lattice of a grid. Measurements show that our algorithm typically reduces the number of cache misses by a factor of three, relative to compiler-optimized code. We show that stencil calculations on grids whose interference lattices have a short vector feature abnormally high numbers of cache misses. We call such grids unfavorable and suggest avoiding them in computations by appropriate padding. By direct measurements on a MIPS R10000 processor we show a good correlation between abnormally high numbers of cache misses and unfavorable three-dimensional grids.

  1. Organizing the pantry: cache management improves quality of overwinter food stores in a montane mammal

    USGS Publications Warehouse

    Jakopak, Rhiannon P.; Hall, L. Embere; Chalfoun, Anna D.

    2017-01-01

    Many mammals create food stores to enhance overwinter survival in seasonal environments. Strategic arrangement of food within caches may facilitate the physical integrity of the cache or improve access to high-quality food to ensure that cached resources meet future nutritional demands. We used the American pika (Ochotona princeps), a food-caching lagomorph, to evaluate variation in haypile (cache) structure (i.e., horizontal layering by plant functional group) in Wyoming, United States. Fifty-five percent of 62 haypiles contained at least 2 discrete layers of vegetation. Adults and juveniles layered haypiles in similar proportions. The probability of layering increased with haypile volume, but not haypile number per individual or nearby forage diversity. Vegetation cached in layered haypiles was also higher in nitrogen compared to vegetation in unlayered piles. We found that American pikas frequently structured their food caches, structured caches were larger, and the cached vegetation in structured piles was of higher nutritional quality. Improving access to stable, high-quality vegetation in haypiles, a critical overwinter food resource, may allow individuals to better persist amidst harsh conditions.

  2. Winter prey caching by northern hawk owls in Minnesota

    Treesearch

    Richard R. Schaefer; D. Craig Rudolph; Jesse F. Fagan

    2007-01-01

    Northern Hawk Owls (Surnia ulula) have been reported to cache prey during the breeding season for later consumption, but detailed reports of prey caching during the non-breeding season are comparatively rare. We provided prey to four individual Northern Hawk Owls in wintering areas in northeastern Minnesota during 2001 and 2005 and observed their...

  3. Tier 3 batch system data locality via managed caches

    NASA Astrophysics Data System (ADS)

    Fischer, Max; Giffels, Manuel; Jung, Christopher; Kühn, Eileen; Quast, Günter

    2015-05-01

    Modern data processing increasingly relies on data locality for performance and scalability, whereas the common HEP approaches aim for uniform resource pools with minimal locality, recently even across site boundaries. To combine advantages of both, the High-Performance Data Analysis (HPDA) Tier 3 concept opportunistically establishes data locality via coordinated caches. In accordance with HEP Tier 3 activities, the design incorporates two major assumptions: First, only a fraction of data is accessed regularly and thus the deciding factor for overall throughput. Second, data access may fall back to non-local, making permanent local data availability an inefficient resource usage strategy. Based on this, the HPDA design generically extends available storage hierarchies into the batch system. Using the batch system itself for scheduling file locality, an array of independent caches on the worker nodes is dynamically populated with high-profile data. Cache state information is exposed to the batch system both for managing caches and scheduling jobs. As a result, users directly work with a regular, adequately sized storage system. However, their automated batch processes are presented with local replications of data whenever possible.
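
    The idea of exposing cache state to the batch scheduler can be sketched as a node-ranking step: prefer worker nodes that already cache the largest fraction of a job's input files. The function and node names are illustrative, not the HPDA implementation.

```python
# Rank worker nodes by how much of a job's input data is already cached locally.
def rank_nodes(job_files, node_caches):
    # node_caches: {node: set of file names currently cached on that node}
    def cached_fraction(node):
        cached = node_caches[node] & set(job_files)
        return len(cached) / len(job_files)
    return sorted(node_caches, key=cached_fraction, reverse=True)

node_caches = {
    "wn01": {"/data/run1.root", "/data/run2.root"},
    "wn02": {"/data/run9.root"},
    "wn03": set(),
}
job = ["/data/run1.root", "/data/run2.root", "/data/run3.root"]
print(rank_nodes(job, node_caches))   # ['wn01', 'wn02', 'wn03']
```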

  4. Locality in Search Engine Queries and Its Implications for Caching

    DTIC Science & Technology

    2001-05-01

    in the question of whether caching might be effective for search engines as well. They study two real search engine traces by examining query...locality and its implications for caching. The two search engines studied are Vivisimo and Excite. Their trace analysis results show that queries have

  5. Constant time worker thread allocation via configuration caching

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eichenberger, Alexandre E; O'Brien, John K. P.

    Mechanisms are provided for allocating threads for execution of a parallel region of code. A request for allocation of worker threads to execute the parallel region of code is received from a master thread. Cached thread allocation information identifying prior thread allocations that have been performed for the master thread is accessed. Worker threads are allocated to the master thread based on the cached thread allocation information. The parallel region of code is executed using the allocated worker threads.
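
    A conceptual software sketch of the mechanism (the patent describes runtime/hardware mechanisms; the names and the naive placement here are illustrative only): the allocation decided the first time a master thread enters a given parallel region is cached and simply reused on later entries.

```python
# Cache the worker-thread allocation per (master thread, parallel region) so
# repeat entries skip the placement computation.
allocation_cache = {}            # (master_id, region_id) -> tuple of worker ids
free_workers = list(range(16))   # toy pool of worker thread ids

def allocate_workers(master_id, region_id, count):
    key = (master_id, region_id)
    if key in allocation_cache:             # repeat allocation: constant-time lookup
        return allocation_cache[key]
    workers = tuple(free_workers[:count])   # naive placement (workers may be reused across regions here)
    allocation_cache[key] = workers
    return workers

print(allocate_workers(0, "loop_A", 4))   # computed: (0, 1, 2, 3)
print(allocate_workers(0, "loop_A", 4))   # served from the cache
```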

  6. Cliff swallows Petrochelidon pyrrhonota as bioindicators of environmental mercury, Cache Creek Watershed, California

    USGS Publications Warehouse

    Hothem, Roger L.; Trejo, Bonnie S.; Bauer, Marissa L.; Crayon, John J.

    2008-01-01

    To evaluate mercury (Hg) and other element exposure in cliff swallows (Petrochelidon pyrrhonota), eggs were collected from 16 sites within the mining-impacted Cache Creek watershed, Colusa, Lake, and Yolo counties, California, USA, in 1997-1998. Nestlings were collected from seven sites in 1998. Geometric mean total Hg (THg) concentrations ranged from 0.013 to 0.208 μg/g wet weight (ww) in cliff swallow eggs and from 0.047 to 0.347 μg/g ww in nestlings. Mercury detected in eggs generally followed the spatial distribution of Hg in the watershed based on proximity to both anthropogenic and natural sources. Mean Hg concentrations in samples of eggs and nestlings collected from sites near Hg sources were up to five and seven times higher, respectively, than in samples from reference sites within the watershed. Concentrations of other detected elements, including aluminum, beryllium, boron, calcium, manganese, strontium, and vanadium, were more frequently elevated at sites near Hg sources. Overall, Hg concentrations in eggs from Cache Creek were lower than those reported in eggs of tree swallows (Tachycineta bicolor) from highly contaminated locations in North America. Total Hg concentrations were lower in all Cache Creek egg samples than adverse effects levels established for other species. Total Hg concentrations in bullfrogs (Rana catesbeiana) and foothill yellow-legged frogs (Rana boylii) collected from 10 of the study sites were both positively correlated with THg concentrations in cliff swallow eggs. Our data suggest that cliff swallows are reliable bioindicators of environmental Hg. © Springer Science+Business Media, LLC 2007.

  7. California scrub-jays reduce visual cues available to potential pilferers by matching food colour to caching substrate.

    PubMed

    Kelley, Laura A; Clayton, Nicola S

    2017-07-01

    Some animals hide food to consume later; however, these caches are susceptible to theft by conspecifics and heterospecifics. Caching animals can use protective strategies to minimize sensory cues available to potential pilferers, such as caching in shaded areas and in quiet substrate. Background matching (where object patterning matches the visual background) is commonly seen in prey animals to reduce conspicuousness, and caching animals may also use this tactic to hide caches, for example, by hiding coloured food in a similar coloured substrate. We tested whether California scrub-jays ( Aphelocoma californica ) camouflage their food in this way by offering them caching substrates that either matched or did not match the colour of food available for caching. We also determined whether this caching behaviour was sensitive to social context by allowing the birds to cache when a conspecific potential pilferer could be both heard and seen (acoustic and visual cues present), or unseen (acoustic cues only). When caching events could be both heard and seen by a potential pilferer, birds cached randomly in matching and non-matching substrates. However, they preferentially hid food in the substrate that matched the food colour when only acoustic cues were present. This is a novel cache protection strategy that also appears to be sensitive to social context. We conclude that studies of cache protection strategies should consider the perceptual capabilities of the cacher and potential pilferers. © 2017 The Author(s).

  8. Replication Strategy for Spatiotemporal Data Based on Distributed Caching System

    PubMed Central

    Xiong, Lian; Tao, Yang; Xu, Juan; Zhao, Lun

    2018-01-01

    The replica strategy in a distributed cache can effectively reduce user access delay and improve system performance. However, developing a replica strategy suitable for varied application scenarios is still quite challenging, owing to differences in user access behavior and preferences. In this paper, a replication strategy for spatiotemporal data (RSSD) based on a distributed caching system is proposed. By taking advantage of the spatiotemporal locality and correlation of user access, RSSD mines high-popularity and associated files from historical user access information, and then generates replicas and selects appropriate cache nodes for their placement. Experimental results show that the RSSD algorithm is simple and efficient, and succeeds in significantly reducing user access delay. PMID:29342897
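
    A simplified sketch of popularity-driven replication: count accesses per file from a request history and give the most popular files more replicas. The thresholds below, and the spatiotemporal correlation mining of RSSD itself, are not taken from the paper.

```python
# Assign replica counts from access popularity: "hot" files get extra replicas.
from collections import Counter

def replica_counts(access_log, base=1, hot_threshold=3, hot_replicas=3):
    popularity = Counter(access_log)
    return {f: (hot_replicas if n >= hot_threshold else base)
            for f, n in popularity.items()}

log = ["tile_12", "tile_12", "tile_12", "tile_40", "tile_12", "tile_7"]
print(replica_counts(log))   # {'tile_12': 3, 'tile_40': 1, 'tile_7': 1}
```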

  9. Avoiding and tolerating latency in large-scale next-generation shared-memory multiprocessors

    NASA Technical Reports Server (NTRS)

    Probst, David K.

    1993-01-01

    A scalable solution to the memory-latency problem is necessary to prevent the large latencies of synchronization and memory operations inherent in large-scale shared-memory multiprocessors from reducing high performance. We distinguish latency avoidance and latency tolerance. Latency is avoided when data is brought to nearby locales for future reference. Latency is tolerated when references are overlapped with other computation. Latency-avoiding locales include: processor registers, data caches used temporally, and nearby memory modules. Tolerating communication latency requires parallelism, allowing the overlap of communication and computation. Latency-tolerating techniques include: vector pipelining, data caches used spatially, prefetching in various forms, and multithreading in various forms. Relaxing the consistency model permits increased use of avoidance and tolerance techniques. Each model is a mapping from the program text to sets of partial orders on program operations; it is a convention about which temporal precedences among program operations are necessary. Information about temporal locality and parallelism constrains the use of avoidance and tolerance techniques. Suitable architectural primitives and compiler technology are required to exploit the increased freedom to reorder and overlap operations in relaxed models.

  10. On the Feasibility of Prefetching and Caching for Online TV Services: A Measurement Study on Hulu

    NASA Astrophysics Data System (ADS)

    Krishnappa, Dilip Kumar; Khemmarat, Samamon; Gao, Lixin; Zink, Michael

    Lately, researchers have been looking at ways to reduce the delay on video playback through mechanisms like prefetching and caching for Video-on-Demand (VoD) services. The usage of prefetching and caching also has the potential to reduce the amount of network bandwidth usage, as most popular requests are served from a local cache rather than the server containing the original content. In this paper, we investigate the advantages of having such a prefetching and caching scheme for a free hosting service of professionally created video (movies and TV shows) named "hulu". We look into the advantages of using a prefetching scheme where the most popular videos of the week, as provided by the hulu website, are prefetched and compare this approach with a conventional LRU caching scheme with limited storage space and a combined scheme of prefetching and caching. Results from our measurement and analysis show that employing a basic caching scheme at the proxy yields a hit ratio of up to 77.69%, but requires storage of about 236GB. Further analysis shows that a prefetching scheme where the top-100 popular videos of the week are downloaded to the proxy yields a hit ratio of 44% with a storage requirement of 10GB. An LRU caching scheme with a storage limitation of 20GB can achieve a hit ratio of 55% but downloads 4713 videos to achieve such a high hit ratio, compared to 100 videos in the prefetching scheme, whereas a scheme with both prefetching and caching with the same storage yields a hit ratio of 59% with a download requirement of 4439 videos. We find that employing prefetching along with caching, trading off storage, yields a better hit ratio and bandwidth savings than individual caching or prefetching schemes.
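
    An LRU simulation of the kind used in this comparison can be written in a few lines: replay a request trace through a size-limited cache and report the hit ratio. The video sizes and trace below are made-up inputs, not the hulu data.

```python
# Size-limited LRU cache replaying a trace of video requests.
from collections import OrderedDict

def lru_hit_ratio(trace, capacity_gb, size_gb):
    cache, used, hits = OrderedDict(), 0.0, 0
    for video in trace:
        if video in cache:
            hits += 1
            cache.move_to_end(video)
        else:
            cache[video] = size_gb[video]
            used += size_gb[video]
            while used > capacity_gb:            # evict least recently used
                _, evicted = cache.popitem(last=False)
                used -= evicted
    return hits / len(trace)

sizes = {"showA": 1.0, "showB": 2.0, "showC": 1.5}
trace = ["showA", "showB", "showA", "showC", "showA", "showB"]
print(lru_hit_ratio(trace, capacity_gb=3.0, size_gb=sizes))
```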

  11. 44 CFR 208.24 - Purchase and maintenance of items not listed on Equipment Cache List.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... items not listed on Equipment Cache List. 208.24 Section 208.24 Emergency Management and Assistance... of items not listed on Equipment Cache List. (a) Requests for purchase or maintenance of equipment and supplies not appearing on the Equipment Cache List, or that exceed the number specified in the...

  12. Some pitfalls in measuring memory in animals.

    PubMed

    Thorpe, Christina M; Jacova, Claudia; Wilkie, Donald M

    2004-11-01

    Because the presence or absence of memories in the brain cannot be directly observed, scientists must rely on indirect measures and use inferential reasoning to make statements about the status of memories. In humans, memories are often accessed through spoken or written language. In animals, memory is accessed through overt behaviours such as running down an arm in a maze, pressing a lever, or visiting a food cache site. Because memory is measured by these indirect methods, errors in the veracity of statements about memory can occur. In this brief paper, we identify three areas that may serve as pitfalls in reasoning about memory in animals: (1) the presence of 'silent associations', (2) intrusions of species-typical behaviours on memory tasks, and (3) improper mapping between human and animal memory tasks. There are undoubtedly other areas in which scientists should act cautiously when reasoning about the status of memory.

  13. Random Fill Cache Architecture (Preprint)

    DTIC Science & Technology

    2014-10-01

    a concrete example, we show how the cache collision attack works to extract the AES encryption keys (e.g., in the OpenSSL implementation of AES). AES...each round are implemented as table lookups for performance reasons. OpenSSL uses ten 1-KB lookup tables, five for encryption and five for decryption

  14. dCache, Sync-and-Share for Big Data

    NASA Astrophysics Data System (ADS)

    Millar, AP; Fuhrmann, P.; Mkrtchyan, T.; Behrmann, G.; Bernardt, C.; Buchholz, Q.; Guelzow, V.; Litvintsev, D.; Schwank, K.; Rossi, A.; van der Reest, P.

    2015-12-01

    The availability of cheap, easy-to-use sync-and-share cloud services has split the scientific storage world into the traditional big data management systems and the very attractive sync-and-share services. With the former, the location of data is well understood while the latter is mostly operated in the Cloud, resulting in a rather complex legal situation. Besides legal issues, those two worlds have little overlap in user authentication and access protocols. While traditional storage technologies, popular in HEP, are based on X.509, cloud services and sync-and-share software technologies are generally based on username/password authentication or mechanisms like SAML or Open ID Connect. Similarly, data access models offered by both are somewhat different, with sync-and-share services often using proprietary protocols. As both approaches are very attractive, dCache.org developed a hybrid system, providing the best of both worlds. To avoid reinventing the wheel, dCache.org decided to embed another Open Source project: OwnCloud. This offers the required modern access capabilities but does not support the managed data functionality needed for large capacity data storage. With this hybrid system, scientists can share files and synchronize their data with laptops or mobile devices as easily as with any other cloud storage service. On top of this, the same data can be accessed via established mechanisms, like GridFTP to serve the Globus Transfer Service or the WLCG FTS3 tool, or the data can be made available to worker nodes or HPC applications via a mounted filesystem. As dCache provides a flexible authentication module, the same user can access its storage via different authentication mechanisms; e.g., X.509 and SAML. Additionally, users can specify the desired quality of service or trigger media transitions as necessary, thus tuning data access latency to the planned access profile. Such features are a natural consequence of using dCache. We will describe the design of

  15. Cache Locality Optimization for Recursive Programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lifflander, Jonathan; Krishnamoorthy, Sriram

    We present an approach to optimize the cache locality for recursive programs by dynamically splicing--recursively interleaving--the execution of distinct function invocations. By utilizing data effect annotations, we identify concurrency and data reuse opportunities across function invocations and interleave them to reduce reuse distance. We present algorithms that efficiently track effects in recursive programs, detect interference and dependencies, and interleave execution of function invocations using user-level (non-kernel) lightweight threads. To enable multi-core execution, a program is parallelized using a nested fork/join programming model. Our cache optimization strategy is designed to work in the context of a random work stealing scheduler. We present an implementation using the MIT Cilk framework that demonstrates significant improvements in sequential and parallel performance, competitive with a state-of-the-art compile-time optimizer for loop programs and a domain-specific optimizer for stencil programs.

  16. Library API for Z-Order Memory Layout

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bethel, E. Wes

    This library provides a simple-to-use API for implementing an alternative to the traditional row-major in-memory layout, one based on a Morton-order space filling curve (SFC), specifically a Z-order variant of the Morton order curve. The library enables programmers, after a simple initialization step, to convert a multidimensional array from row-major to Z-order layout, then use a single, generic API call to access data at any arbitrary (i,j,k) location within the array, whether it be stored in row-major or Z-order format. The motivation for using an SFC in-memory layout is improved spatial locality, which results in increased use of local high-speed cache memory. The basic idea is that with row-major order layouts, a data access to some location that is nearby in index space is likely far away in physical memory, resulting in poor spatial locality and slow runtime. On the other hand, with an SFC-based layout, accesses that are nearby in index space are much more likely to also be nearby in physical memory, resulting in much better spatial locality, and better runtime performance. Numerous studies over the years have shown significant runtime performance gains are realized by using an SFC-based memory layout compared to a row-major layout, sometimes by as much as 50%, which results from the better use of the memory and cache hierarchy attendant with an SFC-based layout (see, for example, [Beth2012]). This library implementation is intended for use with codes that work with structured, array-based data in 2 or 3 dimensions. It is not appropriate for use with unstructured or point-based data.
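
    The core index calculation behind a Z-order (Morton) layout can be sketched as bit interleaving of the (i, j) indices; this is not the library's actual API, just the underlying idea shown for a 2-D array.

```python
# Interleave the bits of the (i, j) indices to form the 1-D Z-order offset, so
# neighbours in index space tend to stay close in memory.
def part_bits(n):
    # spread the bits of n so there is a gap between consecutive bits (16-bit input)
    result = 0
    for b in range(16):
        result |= ((n >> b) & 1) << (2 * b)
    return result

def morton_index(i, j):
    return (part_bits(i) << 1) | part_bits(j)

# Elements (2,2), (2,3), (3,2), (3,3) occupy 4 consecutive slots (12..15),
# whereas row-major order would scatter them across two distant rows.
print(sorted(morton_index(i, j) for i in (2, 3) for j in (2, 3)))
```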

  17. Communism and the meaning of social memory: towards a critical-interpretive approach.

    PubMed

    Tileagă, Cristian

    2012-12-01

    Using a case study of representations of communism in Romania, the paper offers a sketch of a critical-interpretive approach for exploring and engaging with the social memory of communism. When one considers the various contemporary appraisals, responses to and positions towards the communist period, one identifies and is obliged to deal with a series of personal and collective moral/political quandaries. In their attempt to bring about historical justice, political elites create a world that conforms more to their needs and desires than to the diversity of meanings of communism, experiences and dilemmas of lay people. This paper argues that one needs to study formal aspects of social memory as well as "lived", often conflicting, attitudinal and mnemonic stances and interpretive frameworks. One needs to strive to find the meaning of the social memory of communism in the sometimes contradictory, paradoxical attitudes and meanings that members of society communicate, endorse and debate. Many of the ethical quandaries and dilemmas of collective memory and recent history can be better understood by describing the discursive and sociocultural processes of meaning-making and meaning-interpretation carried out by members of a polity.

  18. Reducing the stochasticity of crystal nucleation to enable subnanosecond memory writing

    NASA Astrophysics Data System (ADS)

    Rao, Feng; Ding, Keyuan; Zhou, Yuxing; Zheng, Yonghui; Xia, Mengjiao; Lv, Shilong; Song, Zhitang; Feng, Songlin; Ronneberger, Ider; Mazzarello, Riccardo; Zhang, Wei; Ma, Evan

    2017-12-01

    Operation speed is a key challenge in phase-change random-access memory (PCRAM) technology, especially for achieving subnanosecond high-speed cache memory. Commercialized PCRAM products are limited by the tens of nanoseconds writing speed, originating from the stochastic crystal nucleation during the crystallization of amorphous germanium antimony telluride (Ge2Sb2Te5). Here, we demonstrate an alloying strategy to speed up the crystallization kinetics. The scandium antimony telluride (Sc0.2Sb2Te3) compound that we designed allows a writing speed of only 700 picoseconds without preprogramming in a large conventional PCRAM device. This ultrafast crystallization stems from the reduced stochasticity of nucleation through geometrically matched and robust scandium telluride (ScTe) chemical bonds that stabilize crystal precursors in the amorphous state. Controlling nucleation through alloy design paves the way for the development of cache-type PCRAM technology to boost the working efficiency of computing systems.

  19. Cache Sharing and Isolation Tradeoffs in Multicore Mixed-Criticality Systems

    DTIC Science & Technology

    2015-05-01

    of lockdown registers, to provide way-based partitioning. These alternatives are illustrated in Fig. 1 with respect to a quad-core ARM Cortex A9...presented a cache-partitioning scheme that allows multiple tasks to share the same cache partition on a single processor (as we do for Level-A and...sets and determined the fraction that were schedulable on our target hardware platform, the quad-core ARM Cortex A9 machine mentioned earlier, the LLC

  20. dCache: Big Data storage for HEP communities and beyond

    NASA Astrophysics Data System (ADS)

    Millar, A. P.; Behrmann, G.; Bernardt, C.; Fuhrmann, P.; Litvintsev, D.; Mkrtchyan, T.; Petersen, A.; Rossi, A.; Schwank, K.

    2014-06-01

    With over ten years in production use, the dCache data storage system has evolved to match the ever-changing landscape of storage technologies with new solutions to both existing problems and new challenges. In this paper, we present three areas of innovation in dCache: providing efficient access to data with NFS v4.1 pNFS, adoption of CDMI and WebDAV as an alternative to SRM for managing data, and integration with alternative authentication mechanisms.

  1. Using shadow page cache to improve isolated drivers performance.

    PubMed

    Zheng, Hao; Dong, Xiaoshe; Wang, Endong; Chen, Baoke; Zhu, Zhengdong; Liu, Chengzhe

    2015-01-01

    With the advantage of the reusability property of virtualization technology, users can reuse various types and versions of existing operating systems and drivers in a virtual machine, so as to customize their application environment. In order to prevent users' virtualization environments from being impacted by driver faults in a virtual machine, Chariot examines the correctness of a driver's write operations by combining capture of the driver's write operations with a driver-private access control table. However, this method needs to keep the write permission of the shadow page table as read-only so as to capture the isolated driver's write operations through page faults, which adversely affects the performance of the driver. Based on delaying the setting of frequently used shadow pages' write permissions to read-only, this paper proposes an algorithm that uses a shadow page cache to improve the performance of isolated drivers, and carefully studies the relationship between driver performance and the size of the shadow page cache. Experimental results show that, through the shadow page cache, the performance of isolated drivers can be greatly improved without impacting Chariot's reliability too much.

  2. Using Shadow Page Cache to Improve Isolated Drivers Performance

    PubMed Central

    Dong, Xiaoshe; Wang, Endong; Chen, Baoke; Zhu, Zhengdong; Liu, Chengzhe

    2015-01-01

    With the advantage of the reusability property of virtualization technology, users can reuse various types and versions of existing operating systems and drivers in a virtual machine, so as to customize their application environment. In order to prevent users' virtualization environments from being impacted by driver faults in a virtual machine, Chariot examines the correctness of a driver's write operations by combining capture of the driver's write operations with a driver-private access control table. However, this method needs to keep the write permission of the shadow page table as read-only so as to capture the isolated driver's write operations through page faults, which adversely affects the performance of the driver. Based on delaying the setting of frequently used shadow pages' write permissions to read-only, this paper proposes an algorithm that uses a shadow page cache to improve the performance of isolated drivers, and carefully studies the relationship between driver performance and the size of the shadow page cache. Experimental results show that, through the shadow page cache, the performance of isolated drivers can be greatly improved without impacting Chariot's reliability too much. PMID:25815373

  3. Predictive Cache Modeling and Analysis

    DTIC Science & Technology

    2011-11-01

    metaheuristic/bin-packing algorithm to optimize task placement based on task communication characterization. Our previous work on task allocation showed...Cache Miss Minimization Technology To efficiently explore combinations and discover nearly-optimal task-assignment algorithms, we extended to our...it was possible to use our algorithmic techniques to decrease network bandwidth consumption by ~25%. In this effort, we adapted these existing

  4. CUDA Optimization Strategies for Compute- and Memory-Bound Neuroimaging Algorithms

    PubMed Central

    Lee, Daren; Dinov, Ivo; Dong, Bin; Gutman, Boris; Yanovsky, Igor; Toga, Arthur W.

    2011-01-01

    As neuroimaging algorithms and technology continue to grow faster than CPU performance in complexity and image resolution, data-parallel computing methods will be increasingly important. The high performance, data-parallel architecture of modern graphical processing units (GPUs) can reduce computational times by orders of magnitude. However, its massively threaded architecture introduces challenges when GPU resources are exceeded. This paper presents optimization strategies for compute- and memory-bound algorithms for the CUDA architecture. For compute-bound algorithms, the registers are reduced through variable reuse via shared memory and the data throughput is increased through heavier thread workloads and maximizing the thread configuration for a single thread block per multiprocessor. For memory-bound algorithms, fitting the data into the fast but limited GPU resources is achieved through reorganizing the data into self-contained structures and employing a multi-pass approach. Memory latencies are reduced by selecting memory resources whose cache performance is optimized for the algorithm's access patterns. We demonstrate the strategies on two computationally expensive algorithms and achieve optimized GPU implementations that perform up to 6× faster than unoptimized ones. Compared to CPU implementations, we achieve peak GPU speedups of 129× for the 3D unbiased nonlinear image registration technique and 93× for the non-local means surface denoising algorithm. PMID:21159404

  5. Data Resilience in the dCache Storage System

    DOE PAGES

    Rossi, A. L.; Adeyemi, F.; Ashish, A.; ...

    2017-11-23

    In this study we discuss design, implementation considerations, and performance of a new Resilience Service in the dCache storage system responsible for file availability and durability functionality.

  6. Parallel k-means++ for Multiple Shared-Memory Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mackey, Patrick S.; Lewis, Robert R.

    2016-09-22

    In recent years, k-means++ has become a popular initialization technique for improved k-means clustering. To date, most of the work done to improve its performance has involved parallelizing algorithms that are only approximations of k-means++. In this paper we present a parallelization of the exact k-means++ algorithm, with a proof of its correctness. We develop implementations for three distinct shared-memory architectures: multicore CPU, high performance GPU, and the massively multithreaded Cray XMT platform. We demonstrate the scalability of the algorithm on each platform. In addition we present a visual approach for showing which platform performed k-means++ the fastest for varying data sizes.
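
    For reference, a compact sketch of exact k-means++ seeding, not the paper's parallel code: each round computes every point's squared distance to its nearest chosen center (the data-parallel step that the paper maps onto multicore CPUs, GPUs, and the Cray XMT) and then samples the next center with probability proportional to that distance. The data and k are illustrative.

```python
# Exact k-means++ initialization (sequential sketch; the distance update is the parallelizable step).
import numpy as np

def kmeanspp_init(points, k, rng=np.random.default_rng(0)):
    n = len(points)
    centers = [points[rng.integers(n)]]          # first center: uniform choice
    d2 = np.full(n, np.inf)
    for _ in range(k - 1):
        # distance update is embarrassingly parallel over points
        d2 = np.minimum(d2, ((points - centers[-1]) ** 2).sum(axis=1))
        probs = d2 / d2.sum()                    # D^2 weighting of k-means++
        centers.append(points[rng.choice(n, p=probs)])
    return np.array(centers)

pts = np.random.default_rng(1).normal(size=(1000, 2))
print(kmeanspp_init(pts, k=3))
```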

  7. Basic perceptual changes that alter meaning and neural correlates of recognition memory

    PubMed Central

    Gao, Chuanji; Hermiller, Molly S.; Voss, Joel L.; Guo, Chunyan

    2015-01-01

    It is difficult to pinpoint the border between perceptual and conceptual processing, despite their treatment as distinct entities in many studies of recognition memory. For instance, alteration of simple perceptual characteristics of a stimulus can radically change meaning, such as the color of bread changing from white to green. We sought to better understand the role of perceptual and conceptual processing in memory by identifying the effects of changing a basic perceptual feature (color) on behavioral and neural correlates of memory in circumstances when this change would be expected to either change the meaning of a stimulus or to have no effect on meaning (i.e., to influence conceptual processing or not). Abstract visual shapes (“squiggles”) were colorized during study and presented during test in either the same color or a different color. Those squiggles that subjects found to resemble meaningful objects supported behavioral measures of conceptual priming, whereas meaningless squiggles did not. Further, changing color from study to test had a selective effect on behavioral correlates of priming for meaningful squiggles, indicating that color change altered conceptual processing. During a recognition memory test, color change altered event-related brain potential (ERP) correlates of memory for meaningful squiggles but not for meaningless squiggles. Specifically, color change reduced the amplitude of frontally distributed N400 potentials (FN400), implying that these potentials indicated conceptual processing during recognition memory that was sensitive to color change. In contrast, color change had no effect on FN400 correlates of recognition for meaningless squiggles, which were overall smaller in amplitude than for meaningful squiggles (further indicating that these potentials signal conceptual processing during recognition). Thus, merely changing the color of abstract visual shapes can alter their meaning, changing behavioral and neural correlates of

  8. Mitochondrial genomic variation associated with higher mitochondrial copy number: the Cache County Study on Memory Health and Aging.

    PubMed

    Ridge, Perry G; Maxwell, Taylor J; Foutz, Spencer J; Bailey, Matthew H; Corcoran, Christopher D; Tschanz, JoAnn T; Norton, Maria C; Munger, Ronald G; O'Brien, Elizabeth; Kerber, Richard A; Cawthon, Richard M; Kauwe, John S K

    2014-01-01

    The mitochondria are essential organelles and are the location of cellular respiration, which is responsible for the majority of ATP production. Each cell contains multiple mitochondria, and each mitochondrion contains multiple copies of its own circular genome. The ratio of mitochondrial genomes to nuclear genomes is referred to as mitochondrial copy number. Decreases in mitochondrial copy number are known to occur in many tissues as people age, and in certain diseases. The regulation of mitochondrial copy number by nuclear genes has been studied extensively. While mitochondrial variation has been associated with longevity and some of the diseases known to have reduced mitochondrial copy number, the role that the mitochondrial genome itself has in regulating mitochondrial copy number remains poorly understood. We analyzed the complete mitochondrial genomes from 1007 individuals randomly selected from the Cache County Study on Memory Health and Aging utilizing the inferred evolutionary history of the mitochondrial haplotypes present in our dataset to identify sequence variation and mitochondrial haplotypes associated with changes in mitochondrial copy number. Three variants belonging to mitochondrial haplogroups U5A1 and T2 were significantly associated with higher mitochondrial copy number in our dataset. We identified three variants associated with higher mitochondrial copy number and suggest several hypotheses for how these variants influence mitochondrial copy number by interacting with known regulators of mitochondrial copy number. Our results are the first to report sequence variation in the mitochondrial genome that causes changes in mitochondrial copy number. The identification of these variants that increase mtDNA copy number has important implications in understanding the pathological processes that underlie these phenotypes.

  9. In-memory interconnect protocol configuration registers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, Kevin Y.; Roberts, David A.

    Systems, apparatuses, and methods for moving the interconnect protocol configuration registers into the main memory space of a node. The region of memory used for storing the interconnect protocol configuration registers may also be made cacheable to reduce the latency of accesses to the interconnect protocol configuration registers. Interconnect protocol configuration registers which are used during a startup routine may be prefetched into the host's cache to make the startup routine more efficient. The interconnect protocol configuration registers for various interconnect protocols may include one or more of device capability tables, memory-side statistics (e.g., to support two-level memory data mapping decisions), advanced memory and interconnect features such as repair resources and routing tables, prefetching hints, error correcting code (ECC) bits, lists of device capabilities, set and store base address, capability, device ID, status, configuration, capabilities, and other settings.

  10. Prolonging the arctic pulse: long-term exploitation of cached eggs by arctic foxes when lemmings are scarce.

    PubMed

    Samelius, Gustaf; Alisauskas, Ray T; Hobson, Keith A; Larivière, Serge

    2007-09-01

    1. Many ecosystems are characterized by pulses of dramatically higher than normal levels of foods (pulsed resources) to which animals often respond by caching foods for future use. However, the extent to which animals use cached foods and how this varies in relation to fluctuations in other foods is poorly understood in most animals. 2. Arctic foxes Alopex lagopus (L.) cache thousands of eggs annually at large goose colonies where eggs are often superabundant during the goose nesting period. We estimated the contribution of cached eggs to arctic fox diets in spring and autumn, when geese were not present in the study area, by comparing stable isotope ratios (delta(13)C and delta(15)N) of fox tissues with those of their foods using a multisource mixing model in Program IsoSource. 3. The contribution of cached eggs to arctic fox diets was inversely related to collared lemming Dicrostonyx groenlandicus (Traill) abundance; the contribution of cached eggs to overall fox diets increased from < 28% in years when collared lemmings were abundant to 30-74% in years when collared lemmings were scarce. 4. Further, arctic foxes used cached eggs well into the following spring (almost 1 year after eggs were acquired) - a pattern that differs from that of carnivores generally storing foods for only a few days before consumption. 5. This study showed that long-term use of eggs that were cached when geese were superabundant at the colony in summer varied with fluctuations in collared lemming abundance (a key component in arctic fox diets throughout most of their range) and suggests that cached eggs functioned as a buffer when collared lemmings were scarce.

  11. The Development of Caching and Object Permanence in Western Scrub-Jays (Aphelocoma californica): Which Emerges First?

    PubMed Central

    Salwiczek, Lucie H.; Schlinger, Barney; Emery, Nathan J.; Clayton, Nicola S.

    2010-01-01

    Recent studies on the food-caching behavior of corvids have revealed complex physical and social skills, yet little is known about the ontogeny of food caching in relation to the development of cognitive capacities. Piagetian object permanence is the understanding that objects continue to exist even when they are no longer visible. Here, the authors focus on Piagetian Stages 3 and 4, because they are hallmarks in the cognitive development of both young children and animals. Our aim is to determine in a food-caching corvid, the Western scrub-jay, whether (1) Piagetian Stage 4 competence and tentative caching (i.e., hiding an item invisibly and retrieving it without delay), emerge concomitantly or consecutively; (2) whether experiencing the reappearance of hidden objects enhances the timing of the appearance of object permanence; and (3) discuss how the development of object permanence is related to behavioral development and sensorimotor intelligence. Our findings suggest that object permanence Stage 4 emerges before tentative caching, and independent of environmental influences, but that once the birds have developed simple object-permanence, then social learning might advance the interval after which tentative caching commences. PMID:19685971

  12. The development of caching and object permanence in Western scrub-jays (Aphelocoma californica): which emerges first?

    PubMed

    Salwiczek, Lucie H; Emery, Nathan J; Schlinger, Barney; Clayton, Nicola S

    2009-08-01

    Recent studies on the food-caching behavior of corvids have revealed complex physical and social skills, yet little is known about the ontogeny of food caching in relation to the development of cognitive capacities. Piagetian object permanence is the understanding that objects continue to exist even when they are no longer visible. Here, the authors focus on Piagetian Stages 3 and 4, because they are hallmarks in the cognitive development of both young children and animals. Our aim is to determine in a food-caching corvid, the Western scrub-jay, whether (1) Piagetian Stage 4 competence and tentative caching (i.e., hiding an item invisibly and retrieving it without delay), emerge concomitantly or consecutively; (2) whether experiencing the reappearance of hidden objects enhances the timing of the appearance of object permanence; and (3) discuss how the development of object permanence is related to behavioral development and sensorimotor intelligence. Our findings suggest that object permanence Stage 4 emerges before tentative caching, and independent of environmental influences, but that once the birds have developed simple object-permanence, then social learning might advance the interval after which tentative caching commences. Copyright 2009 APA, all rights reserved.

  13. Caching Joint Shortcut Routing to Improve Quality of Service for Information-Centric Networking.

    PubMed

    Huang, Baixiang; Liu, Anfeng; Zhang, Chengyuan; Xiong, Naixue; Zeng, Zhiwen; Cai, Zhiping

    2018-05-29

    Hundreds of thousands of ubiquitous sensing (US) devices have provided an enormous amount of data for Information-Centric Networking (ICN), an emerging network architecture with the potential to solve a great variety of issues faced by the traditional network. A Caching Joint Shortcut Routing (CJSR) scheme is proposed in this paper to improve the Quality of Service (QoS) for ICN. The CJSR scheme has two main innovations which differ from other in-network caching schemes: (1) Two routing shortcuts are set up to reduce the length of routing paths. Because of some inconvenient transmission processes, the routing paths of previous schemes are prolonged, and users can only request data from Data Centers (DCs) after the data have been uploaded from Data Producers (DPs) to DCs. Hence, the first kind of shortcut is built directly from DPs to users. This shortcut relieves the burden on the whole network and reduces delay. Moreover, in the second shortcut routing method, a Content Router (CR) that yields a shorter uploading path from DPs to DCs is chosen, and data packets are uploaded through this chosen CR. In this method, the uploading path shares some segments with the pre-caching path, so the overall length of routing paths is reduced. (2) The second innovation of the CJSR scheme is a cooperative pre-caching mechanism that further increases QoS. Besides being used in downloading routing, the pre-caching mechanism can also be used when data packets are uploaded towards DCs. Combining uploading and downloading pre-caching, the cooperative pre-caching mechanism exhibits high performance in different situations. Furthermore, to address the scarcity of storage, an algorithm that makes use of storage from idle CRs is proposed. After comparing the proposed scheme with five existing schemes via simulations, experimental results reveal that the CJSR scheme could reduce the total number of
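
    A simplified illustration of the CR-selection idea in the second shortcut, not the CJSR implementation: among candidate content routers, pick the one that minimizes the hop count DP -> CR -> DC, so the upload path stays short and can share segments with the pre-caching path toward users. The topology, node names, and hop-count metric are assumptions for the example.

```python
# Choosing an upload content router by shortest combined hop count (illustrative sketch).
from collections import deque

def hops(graph, src):
    """BFS hop counts from src to every reachable node."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def pick_upload_router(graph, dp, dc, candidate_crs):
    from_dp, from_dc = hops(graph, dp), hops(graph, dc)
    return min(candidate_crs, key=lambda cr: from_dp[cr] + from_dc[cr])

topology = {"DP": ["CR1", "CR2"], "CR1": ["DP", "CR3"], "CR2": ["DP", "DC"],
            "CR3": ["CR1", "DC"], "DC": ["CR2", "CR3"]}
print(pick_upload_router(topology, "DP", "DC", ["CR1", "CR2", "CR3"]))  # -> CR2
```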

  14. Planetary Sample Caching System Design Options

    NASA Technical Reports Server (NTRS)

    Collins, Curtis; Younse, Paulo; Backes, Paul

    2009-01-01

    Potential Mars Sample Return missions would aspire to collect small core and regolith samples using a rover with a sample acquisition tool and sample caching system. Samples would need to be stored in individual sealed tubes in a canister that could be transferred to a Mars ascent vehicle and returned to Earth. A sample handling, encapsulation and containerization system (SHEC) has been developed as part of an integrated system for acquiring and storing core samples for application to future potential MSR and other potential sample return missions. Requirements and design options for the SHEC system were studied and a recommended design concept developed. Two families of solutions were explored: 1) transfer of a raw sample from the tool to the SHEC subsystem and 2) transfer of a tube containing the sample to the SHEC subsystem. The recommended design utilizes sample tool bit changeout as the mechanism for transferring tubes to, and samples in tubes from, the tool. The SHEC subsystem design, called the Bit Changeout Caching (BiCC) design, is intended for operations on a MER-class rover.

  15. Untangling elevation-related differences in the hippocampus in food-caching mountain chickadees: the effect of a uniform captive environment.

    PubMed

    Freas, C A; Bingman, K; Ladage, L D; Pravosudov, V V

    2013-01-01

    Variation in environmental conditions associated with differential selection on spatial memory has been hypothesized to result in evolutionary changes in the morphology of the hippocampus, a brain region involved in spatial memory. At the same time, it is well known that the morphology of the hippocampus might also be directly affected by environmental conditions. Understanding the role of environment-based plasticity is therefore critical when investigating potential adaptive evolutionary changes in the hippocampus associated with environmental variation. We previously demonstrated large elevation-related variation in hippocampus morphology in mountain chickadees over an extremely small spatial scale. We hypothesized that this variation is related to differential selection pressures associated with differences in winter climate severity along an elevation gradient, which make different demands on spatial memory used for food cache retrieval. Here, we tested whether such variation is experience based, generated by potential differences in the environment, by comparing the hippocampus morphology of chickadees from different elevations maintained in a uniform captive environment in a laboratory with those sampled directly from the wild. In addition, we compared hippocampal neuron soma size in chickadees sampled directly from the wild with those maintained in laboratory conditions with restricted and unrestricted spatial memory use via manipulation of food-caching experiences to test whether memory use can affect neuron soma size. There were significant elevation-related differences in hippocampus volume and the total number of hippocampal neurons, but not in neuron soma size, in captive birds. Captive environmental conditions were associated with a large reduction in hippocampus volume and neuron soma size, but not in the total number of neurons or in neuron soma size in other telencephalic regions. Restriction of memory use while in laboratory conditions produced no

  16. Mammal caching of oak acorns in a red pine and a mixed oak stand

    Treesearch

    E.R. Thorn; W.M. Tzilkowski

    1991-01-01

    Small mammal caching of oak (Quercus spp.) acorns in adjacent red pine (Pinus resinosa) and mixed-oak stands was investigated at The Penn State Experimental Forest, Huntingdon Co., Pennsylvania. Gray squirrels (Sciurus carolinensis) and mice (Peromyscus spp.) were the most common acorn-caching...

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Lingda; Hayes, Ari; Song, Shuaiwen

    Modern GPUs employ cache to improve memory system efficiency. However, a large amount of cache space is underutilized due to irregular memory accesses and poor spatial locality, which are common in GPU applications. Our experiments show that using smaller cache lines could improve cache space utilization, but doing so frequently suffers significant performance loss by introducing a large number of extra cache requests. In this work, we propose a novel cache design named tag-split cache (TSC) that enables fine-grained cache storage to address the problem of cache space underutilization while keeping the number of memory requests unchanged. TSC divides the tag into two parts to reduce storage overhead, and it supports replacement of multiple cache lines in one cycle.
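
    A toy sketch of the underlying idea of fine-grained storage with shared tags, not the actual TSC hardware: a coarse tag identifies a memory block once, and a small per-sector bitmap records which fine-grained pieces of that block are actually cached, so storing small pieces does not multiply tag storage. The sector count and sizes are assumptions for illustration.

```python
# Sector-presence sketch of the tag-split idea (assumed simplification of TSC).
class TagSplitEntry:
    SECTORS = 4                       # fine-grained pieces per block (assumed)

    def __init__(self, coarse_tag):
        self.coarse_tag = coarse_tag  # stored once for the whole block
        self.present = [False] * self.SECTORS

class TinyTagSplitCache:
    def __init__(self):
        self.entries = {}             # coarse tag -> TagSplitEntry

    def access(self, addr, sector_bytes=16):
        block = addr // (sector_bytes * TagSplitEntry.SECTORS)
        sector = (addr // sector_bytes) % TagSplitEntry.SECTORS
        entry = self.entries.setdefault(block, TagSplitEntry(block))
        hit = entry.present[sector]
        entry.present[sector] = True  # fill only the requested sector
        return "hit" if hit else "miss"

c = TinyTagSplitCache()
print(c.access(0), c.access(16), c.access(0))   # miss, miss, hit
```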

  18. Dementia diagnoses from clinical and neuropsychological data compared: the Cache County study.

    PubMed

    Tschanz, J T; Welsh-Bohmer, K A; Skoog, I; West, N; Norton, M C; Wyse, B W; Nickles, R; Breitner, J C

    2000-03-28

    To validate a neuropsychological algorithm for dementia diagnosis. We developed a neuropsychological algorithm in a sample of 1,023 elderly residents of Cache County, UT. We compared algorithmic and clinical dementia diagnoses both based on DSM-III-R criteria. The algorithm diagnosed dementia when there was impairment in memory and at least one other cognitive domain. We also tested a variant of the algorithm that incorporated functional measures that were based on structured informant reports. Of 1,023 participants, 87% could be classified by the basic algorithm, 94% when functional measures were considered. There was good concordance between basic psychometric and clinical diagnoses (79% agreement, kappa = 0.57). This improved after incorporating functional measures (90% agreement, kappa = 0.76). Neuropsychological algorithms may reasonably classify individuals on dementia status across a range of severity levels and ages and may provide a useful adjunct to clinical diagnoses in population studies.
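
    A toy restatement of the diagnostic rule quoted above, not the study's scoring code: dementia is flagged when the memory domain is impaired and at least one other cognitive domain is impaired; how the functional-measure variant combines informant reports is an assumption here, as are the domain names and impairment flags.

```python
# Illustrative rule-based classifier matching the abstract's stated criterion (assumed details).
def algorithmic_dementia(domains_impaired, functional_impaired=False, use_functional=False):
    memory = domains_impaired.get("memory", False)
    other = [d for d, imp in domains_impaired.items() if d != "memory" and imp]
    basic = memory and len(other) >= 1                     # memory + one other domain
    if use_functional:                                     # variant: accept functional impairment (assumed combination)
        return basic or (memory and functional_impaired)
    return basic

print(algorithmic_dementia({"memory": True, "language": True, "executive": False}))  # True
```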

  19. A new intuitionism: Meaning, memory, and development in Fuzzy-Trace Theory.

    PubMed

    Reyna, Valerie F

    2012-05-01

    Combining meaning, memory, and development, the perennially popular topic of intuition can be approached in a new way. Fuzzy-trace theory integrates these topics by distinguishing between meaning-based gist representations, which support fuzzy (yet advanced) intuition, and superficial verbatim representations of information, which support precise analysis. Here, I review the counterintuitive findings that led to the development of the theory and its most recent extensions to the neuroscience of risky decision making. These findings include memory interference (worse verbatim memory is associated with better reasoning); nonnumerical framing (framing effects increase when numbers are deleted from decision problems); developmental decreases in gray matter and increases in brain connectivity; developmental reversals in memory, judgment, and decision making (heuristics and biases based on gist increase from childhood to adulthood, challenging conceptions of rationality); and selective attention effects that provide critical tests comparing fuzzy-trace theory, expected utility theory, and its variants (e.g., prospect theory). Surprising implications for judgment and decision making in real life are also discussed, notably, that adaptive decision making relies mainly on gist-based intuition in law, medicine, and public health.

  20. Security Enhancement Using Cache Based Reauthentication in WiMAX Based E-Learning System

    PubMed Central

    Rajagopal, Chithra; Bhuvaneshwaran, Kalaavathi

    2015-01-01

    WiMAX networks are the most suitable for E-Learning through their Broadcast and Multicast Services in rural areas. Authentication of users is carried out by the AAA server in WiMAX. In E-Learning systems, users must be forced to perform reauthentication to overcome the session hijacking problem. The reauthentication of users introduces frequent delays in data access, which is crucial for delay-sensitive applications such as E-Learning. In order to perform fast reauthentication, a caching mechanism known as the Key Caching Based Authentication scheme is introduced in this paper. Even though the cache mechanism requires extra storage to keep the user credentials, it reduces the delay occurring during reauthentication by 50%. PMID:26351658

  1. Security Enhancement Using Cache Based Reauthentication in WiMAX Based E-Learning System.

    PubMed

    Rajagopal, Chithra; Bhuvaneshwaran, Kalaavathi

    2015-01-01

    WiMAX networks are the most suitable for E-Learning through their Broadcast and Multicast Services in rural areas. Authentication of users is carried out by the AAA server in WiMAX. In E-Learning systems, users must be forced to perform reauthentication to overcome the session hijacking problem. The reauthentication of users introduces frequent delays in data access, which is crucial for delay-sensitive applications such as E-Learning. In order to perform fast reauthentication, a caching mechanism known as the Key Caching Based Authentication scheme is introduced in this paper. Even though the cache mechanism requires extra storage to keep the user credentials, it reduces the delay occurring during reauthentication by 50%.

  2. Design issues and caching strategies for CD-ROM-based multimedia storage

    NASA Astrophysics Data System (ADS)

    Shastri, Vijnan; Rajaraman, V.; Jamadagni, H. S.; Venkat-Rangan, P.; Sampath-Kumar, Srihari

    1996-03-01

    CD-ROMs have proliferated as a distribution media for desktop machines for a large variety of multimedia applications (targeted for a single-user environment) like encyclopedias, magazines and games. With CD-ROM capacities up to 3 GB being available in the near future, they will form an integral part of Video on Demand (VoD) servers to store full-length movies and multimedia. In the first section of this paper we look at issues related to the single-user desktop environment. Since these multimedia applications are highly interactive in nature, we take a pragmatic approach, and have made a detailed study of the multimedia application behavior in terms of the I/O request patterns generated to the CD-ROM subsystem by tracing these patterns. We discuss prefetch buffer design and seek time characteristics in the context of the analysis of these traces. We also propose an adaptive main-memory hosted cache that receives caching hints from the application to reduce the latency when the user moves from one node of the hypergraph to another. In the second section we look at the use of CD-ROM in a VoD server and discuss the problem of scheduling multiple request streams and buffer management in this scenario. We adapt the C-SCAN (Circular SCAN) algorithm to suit the CD-ROM drive characteristics and prove that it is optimal in terms of buffer size management. We provide computationally inexpensive relations by which this algorithm can be implemented. We then propose an admission control algorithm which admits new request streams without disrupting the continuity of playback of the previous request streams. The algorithm also supports operations such as fast forward and replay. Finally, we discuss the problem of optimal placement of MPEG streams on CD-ROMs in the third section.
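
    For readers unfamiliar with the scheduling policy named above, a textbook C-SCAN (circular SCAN) sketch, not the paper's drive-specific adaptation: pending requests at or ahead of the head position are served in increasing order, after which the head wraps around to the lowest pending position. The head position and request list are made-up values.

```python
# Generic C-SCAN ordering of pending requests (illustrative, not the adapted CD-ROM variant).
def c_scan_order(head, requests):
    ahead = sorted(r for r in requests if r >= head)   # serve these first, moving forward
    behind = sorted(r for r in requests if r < head)   # then wrap to the lowest position
    return ahead + behind

print(c_scan_order(50, [82, 17, 43, 90, 11, 60]))  # [60, 82, 90, 11, 17, 43]
```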

  3. Design of a memory-access controller with 3.71-times-enhanced energy efficiency for Internet-of-Things-oriented nonvolatile microcontroller unit

    NASA Astrophysics Data System (ADS)

    Natsui, Masanori; Hanyu, Takahiro

    2018-04-01

    In realizing a nonvolatile microcontroller unit (MCU) for sensor nodes in Internet-of-Things (IoT) applications, it is important to solve the data-transfer bottleneck between the central processing unit (CPU) and the nonvolatile memory constituting the MCU. As one circuit-oriented approach to solving this problem, we propose a memory access minimization technique for magnetoresistive-random-access-memory (MRAM)-embedded nonvolatile MCUs. In addition to multiplexing and prefetching of memory access, the proposed technique realizes efficient instruction fetch by eliminating redundant memory access while considering the code length of the instruction to be fetched and the transition of the memory address to be accessed. As a result, the performance of the MCU can be improved while relaxing the performance requirement for the embedded MRAM, and a compact, low-power implementation can be achieved compared with a conventional cache-based one. Through an evaluation using a system consisting of a general-purpose 32-bit CPU and embedded MRAM, it is demonstrated that the proposed technique increases the peak efficiency of the system by up to 3.71 times, while achieving a 2.29-fold area reduction compared with the cache-based design.
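
    A rough sketch of the redundant-fetch-elimination idea, not the authors' circuit: if MRAM is fetched in fixed-width words, a new fetch is issued only when the requested instruction, given its code length and address, is not already contained in the most recently fetched word. The fetch width and the conservative refetch on a word boundary are assumptions.

```python
# Redundant instruction-fetch elimination sketch (assumed fetch width and behavior).
FETCH_WIDTH = 8  # bytes per MRAM fetch (assumed)

class FetchUnit:
    def __init__(self):
        self.last_word_base = None          # base address of the buffered fetch word

    def fetch(self, pc, insn_len):
        base = pc - (pc % FETCH_WIDTH)
        fits = (self.last_word_base == base and pc + insn_len <= base + FETCH_WIDTH)
        if fits:
            return "reuse buffered word"    # redundant MRAM access eliminated
        self.last_word_base = base          # otherwise (re)fetch the containing word
        return "MRAM fetch"

fu = FetchUnit()
for pc, ln in [(0, 2), (2, 4), (6, 4), (10, 2)]:
    print(pc, fu.fetch(pc, ln))
```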

  4. Evaluating the effect of online data compression on the disk cache of a mass storage system

    NASA Technical Reports Server (NTRS)

    Pentakalos, Odysseas I.; Yesha, Yelena

    1994-01-01

    A trace-driven simulation of the disk cache of a mass storage system was used to evaluate the effect of an online compression algorithm on various performance measures. Traces from the system at NASA's Center for Computational Sciences were used to run the simulation, and disk cache hit ratios and the number of files and bytes migrating to tertiary storage were measured. The measurements were performed for both an LRU and a size-based migration algorithm. In addition to showing the effect of online data compression on the disk cache performance measures, the simulation provided insight into the characteristics of the interactive references, suggesting that hint-based prefetching algorithms are the only alternative for any future improvements to the disk cache hit ratio.
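
    A compact trace-driven sketch of the experiment's idea, not the NASA simulator: replay file references through an LRU disk cache whose effective capacity grows when files are stored compressed, and compare hit ratios with and without online compression. The trace, capacity, and compression ratio are made-up values.

```python
# Trace-driven LRU disk-cache simulation with an optional online compression factor (illustrative).
from collections import OrderedDict

def simulate(trace, capacity_bytes, compression_ratio=1.0):
    cache, used, hits = OrderedDict(), 0, 0               # file -> stored size, LRU order
    for name, size in trace:
        stored = size * compression_ratio
        if name in cache:
            hits += 1
            cache.move_to_end(name)
            continue
        while used + stored > capacity_bytes and cache:    # evict least recently used files
            _, evicted = cache.popitem(last=False)
            used -= evicted
        cache[name] = stored
        used += stored
    return hits / len(trace)

trace = [("a", 40), ("b", 40), ("a", 40), ("c", 40), ("a", 40), ("b", 40)]
print(simulate(trace, 100), simulate(trace, 100, compression_ratio=0.5))  # hit ratio rises with compression
```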

  5. CUDA optimization strategies for compute- and memory-bound neuroimaging algorithms.

    PubMed

    Lee, Daren; Dinov, Ivo; Dong, Bin; Gutman, Boris; Yanovsky, Igor; Toga, Arthur W

    2012-06-01

    As neuroimaging algorithms and technology continue to grow faster than CPU performance in complexity and image resolution, data-parallel computing methods will be increasingly important. The high performance, data-parallel architecture of modern graphical processing units (GPUs) can reduce computational times by orders of magnitude. However, its massively threaded architecture introduces challenges when GPU resources are exceeded. This paper presents optimization strategies for compute- and memory-bound algorithms for the CUDA architecture. For compute-bound algorithms, the registers are reduced through variable reuse via shared memory and the data throughput is increased through heavier thread workloads and maximizing the thread configuration for a single thread block per multiprocessor. For memory-bound algorithms, fitting the data into the fast but limited GPU resources is achieved through reorganizing the data into self-contained structures and employing a multi-pass approach. Memory latencies are reduced by selecting memory resources whose cache performance is optimized for the algorithm's access patterns. We demonstrate the strategies on two computationally expensive algorithms and achieve optimized GPU implementations that perform up to 6× faster than unoptimized ones. Compared to CPU implementations, we achieve peak GPU speedups of 129× for the 3D unbiased nonlinear image registration technique and 93× for the non-local means surface denoising algorithm. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  6. A new intuitionism: Meaning, memory, and development in Fuzzy-Trace Theory

    PubMed Central

    Reyna, Valerie F.

    2014-01-01

    Combining meaning, memory, and development, the perennially popular topic of intuition can be approached in a new way. Fuzzy-trace theory integrates these topics by distinguishing between meaning-based gist representations, which support fuzzy (yet advanced) intuition, and superficial verbatim representations of information, which support precise analysis. Here, I review the counterintuitive findings that led to the development of the theory and its most recent extensions to the neuroscience of risky decision making. These findings include memory interference (worse verbatim memory is associated with better reasoning); nonnumerical framing (framing effects increase when numbers are deleted from decision problems); developmental decreases in gray matter and increases in brain connectivity; developmental reversals in memory, judgment, and decision making (heuristics and biases based on gist increase from childhood to adulthood, challenging conceptions of rationality); and selective attention effects that provide critical tests comparing fuzzy-trace theory, expected utility theory, and its variants (e.g., prospect theory). Surprising implications for judgment and decision making in real life are also discussed, notably, that adaptive decision making relies mainly on gist-based intuition in law, medicine, and public health. PMID:25530822

  7. The future of memory

    NASA Astrophysics Data System (ADS)

    Marinella, M.

    In the not too distant future, the traditional memory and storage hierarchy may be replaced by a single Storage Class Memory (SCM) device integrated on or near the logic processor. Traditional magnetic hard drives, NAND flash, DRAM, and higher-level caches (L2 and up) will be replaced with a single high-performance memory device. The Storage Class Memory paradigm will require high speed (< 100 ns read/write), excellent endurance (> 10^12 cycles), nonvolatility (retention > 10 years), and low switching energies (< 10 pJ per switch). The International Technology Roadmap for Semiconductors (ITRS) has recently evaluated several potential candidate SCM technologies, including Resistive (or Redox) RAM, Spin Torque Transfer RAM (STT-MRAM), and phase change memory (PCM). All of these devices show potential well beyond that of current flash technologies, and research efforts are underway to improve their endurance, write speeds, and scalability to be on par with DRAM. This progress has interesting implications for space electronics: each of these emerging device technologies shows excellent resistance to the types of radiation typically found in space applications. Commercially developed, high density storage class memory-based systems may include a memory that is physically radiation hard, and suitable for space applications without major shielding efforts. This paper reviews the Storage Class Memory concept, emerging memory devices, and possible applicability to radiation hardened electronics for space.

  8. Respiratory hospital admissions associated with PM10 pollution in Utah, Salt Lake, and Cache Valleys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pope, C. A., III

    This study assessed the association between respiratory hospital admissions and PM10 pollution in Utah, Salt Lake, and Cache valleys during April 1985 through March 1989. Utah and Salt Lake valleys had high levels of PM10 pollution that violated both the annual and 24-h standards issued by the Environmental Protection Agency (EPA). Much lower PM10 levels occurred in the Cache Valley. Utah Valley experienced the intermittent operation of its primary source of PM10 pollution: an integrated steel mill. Bronchitis and asthma admissions for preschool-age children were approximately twice as frequent in Utah Valley when the steel mill was operating versus when it was not. Similar differences were not observed in Salt Lake or Cache valleys. Even though Cache Valley had higher smoking rates and lower temperatures in winter than did Utah Valley, per capita bronchitis and asthma admissions for all ages were approximately twice as high in Utah Valley. During the period when the steel mill was closed, differences in per capita admissions between Utah and Cache valleys narrowed considerably. Regression analysis also demonstrated a statistical association between respiratory hospital admissions and PM10 pollution. The results suggest that PM10 pollution plays a role in the incidence and severity of respiratory disease.

  9. Reducing the stochasticity of crystal nucleation to enable subnanosecond memory writing.

    PubMed

    Rao, Feng; Ding, Keyuan; Zhou, Yuxing; Zheng, Yonghui; Xia, Mengjiao; Lv, Shilong; Song, Zhitang; Feng, Songlin; Ronneberger, Ider; Mazzarello, Riccardo; Zhang, Wei; Ma, Evan

    2017-12-15

    Operation speed is a key challenge in phase-change random-access memory (PCRAM) technology, especially for achieving subnanosecond high-speed cache memory. Commercialized PCRAM products are limited by the tens of nanoseconds writing speed, originating from the stochastic crystal nucleation during the crystallization of amorphous germanium antimony telluride (Ge2Sb2Te5). Here, we demonstrate an alloying strategy to speed up the crystallization kinetics. The scandium antimony telluride (Sc0.2Sb2Te3) compound that we designed allows a writing speed of only 700 picoseconds without preprogramming in a large conventional PCRAM device. This ultrafast crystallization stems from the reduced stochasticity of nucleation through geometrically matched and robust scandium telluride (ScTe) chemical bonds that stabilize crystal precursors in the amorphous state. Controlling nucleation through alloy design paves the way for the development of cache-type PCRAM technology to boost the working efficiency of computing systems. Copyright © 2017 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.

  10. Visits, Hits, Caching and Counting on the World Wide Web: Old Wine in New Bottles?

    ERIC Educational Resources Information Center

    Berthon, Pierre; Pitt, Leyland; Prendergast, Gerard

    1997-01-01

    Although web browser caching speeds up retrieval, reduces network traffic, and decreases the load on servers and browsers' computers, an unintended consequence for marketing research is that Web servers undercount hits. This article explores counting problems, caching, proxy servers, and trawler software, and presents a series of correction factors…

  11. Exploitation of pocket gophers and their food caches by grizzly bears

    USGS Publications Warehouse

    Mattson, D.J.

    2004-01-01

    I investigated the exploitation of pocket gophers (Thomomys talpoides) by grizzly bears (Ursus arctos horribilis) in the Yellowstone region of the United States with the use of data collected during a study of radiomarked bears in 1977-1992. My analysis focused on the importance of pocket gophers as a source of energy and nutrients, effects of weather and site features, and importance of pocket gophers to grizzly bears in the western contiguous United States prior to historical extirpations. Pocket gophers and their food caches were infrequent in grizzly bear feces, although foraging for pocket gophers accounted for about 20-25% of all grizzly bear feeding activity during April and May. Compared with roots individually excavated by bears, pocket gopher food caches were less digestible but more easily dug out. Exploitation of gopher food caches by grizzly bears was highly sensitive to site and weather conditions and peaked during and shortly after snowmelt. This peak coincided with maximum success by bears in finding pocket gopher food caches. Exploitation was most frequent and extensive on gently sloping nonforested sites with abundant spring beauty (Claytonia lanceolata) and yampah (Perideridia gairdneri). Pocket gophers are rare in forests, and spring beauty and yampah roots are known to be important foods of both grizzly bears and burrowing rodents. Although grizzly bears commonly exploit pocket gophers only in the Yellowstone region, this behavior was probably widespread in mountainous areas of the western contiguous United States prior to extirpations of grizzly bears within the last 150 years.

  12. Efficient image data distribution and management with application to web caching architectures

    NASA Astrophysics Data System (ADS)

    Han, Keesook J.; Suter, Bruce W.

    2003-03-01

    We present compact image data structures and associated packet delivery techniques for effective Web caching architectures. Presently, images on a web page are inefficiently stored, using a single image per file. Our approach is to use clustering to merge similar images into a single file in order to exploit the redundancy between images. Our studies indicate that a 30-50% image data size reduction can be achieved by eliminating the redundancies of color indexes. Attached to this file is new metadata to permit an easy extraction of images. This approach will permit a more efficient use of the cache, since a shorter list of cache references will be required. Packet and transmission delays can be reduced by 50% by eliminating redundant TCP/IP headers and connection time. Thus, this innovative paradigm for the elimination of redundancy may provide valuable benefits for optimizing packet delivery in IP networks by reducing latency and minimizing the bandwidth requirements.

  13. OS friendly microprocessor architecture: Hardware level computer security

    NASA Astrophysics Data System (ADS)

    Jungwirth, Patrick; La Fratta, Patrick

    2016-05-01

    We present an introduction to the patented OS Friendly Microprocessor Architecture (OSFA) and hardware level computer security. Conventional microprocessors have not tried to balance hardware performance and OS performance at the same time. Conventional microprocessors have depended on the Operating System for computer security and information assurance. The goal of the OS Friendly Architecture is to provide a high performance and secure microprocessor and OS system. We are interested in cyber security, information technology (IT), and SCADA control professionals reviewing the hardware level security features. The OS Friendly Architecture is a switched set of cache memory banks in a pipeline configuration. For light-weight threads, the memory pipeline configuration provides near instantaneous context switching times. The pipelining and parallelism provided by the cache memory pipeline provides for background cache read and write operations while the microprocessor's execution pipeline is running instructions. The cache bank selection controllers provide arbitration to prevent the memory pipeline and microprocessor's execution pipeline from accessing the same cache bank at the same time. This separation allows the cache memory pages to transfer to and from level 1 (L1) caching while the microprocessor pipeline is executing instructions. Computer security operations are implemented in hardware. By extending Unix file permissions bits to each cache memory bank and memory address, the OSFA provides hardware level computer security.
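
    A toy model of the permission idea described above, not the patented OSFA hardware: Unix-style read/write/execute bits are attached to each cache bank, and every access is checked against the bits granted to the requesting context before the bank is touched. The contexts, permission assignments, and addresses are invented for the example.

```python
# Per-cache-bank permission check in the spirit of Unix rwx bits (illustrative model).
R, W, X = 4, 2, 1

class CacheBank:
    def __init__(self, perms):
        self.perms = perms                  # context -> permission bits for this bank
        self.lines = {}

    def access(self, context, addr, op):
        need = {"read": R, "write": W, "exec": X}[op]
        if not self.perms.get(context, 0) & need:
            raise PermissionError(f"{context} denied {op} on this bank")
        if op == "write":
            self.lines[addr] = "dirty"
        return self.lines.get(addr)

bank = CacheBank({"kernel": R | W | X, "user": R})
bank.access("kernel", 0x40, "write")
print(bank.access("user", 0x40, "read"))    # allowed: user holds the read bit
# bank.access("user", 0x40, "write")        # would raise PermissionError
```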

  14. NIC atomic operation unit with caching and bandwidth mitigation

    DOEpatents

    Hemmert, Karl Scott; Underwood, Keith D.; Levenhagen, Michael J.

    2016-03-01

    A network interface controller atomic operation unit and a network interface control method comprising, in an atomic operation unit of a network interface controller, using a write-through cache and employing a rate-limiting functional unit.
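
    A schematic sketch of the write-through caching behavior named in the claim, not the patented unit itself: the atomic operation is applied to the cached copy of the target word and the result is immediately written through to host memory, so memory stays current while repeated atomics to hot addresses avoid read traffic. The fetch-and-add operation and the dictionary standing in for host memory are assumptions.

```python
# NIC-side atomic unit with a write-through cache (illustrative model, not the patented design).
class AtomicUnit:
    def __init__(self, host_memory):
        self.host = host_memory    # dict standing in for host DRAM
        self.cache = {}            # write-through cache of recently used words

    def fetch_add(self, addr, value):
        old = self.cache.get(addr, self.host.get(addr, 0))  # a miss falls back to host memory
        new = old + value
        self.cache[addr] = new     # update the cached copy...
        self.host[addr] = new      # ...and write through to host memory immediately
        return old

mem = {0x100: 5}
unit = AtomicUnit(mem)
print(unit.fetch_add(0x100, 3), mem[0x100])   # 5 8: host memory is always up to date
```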

  15. Cache-site selection in Clark's Nutcracker (Nucifraga columbiana)

    Treesearch

    Teresa J. Lorenz; Kimberly A. Sullivan; Amanda V. Bakian; Carol A. Aubry

    2011-01-01

    Clark's Nutcracker (Nucifraga Columbiana) is one of the most specialized scatter-hoarding birds, considered a seed disperser for four species of pines (Pinus spp.), as well as an obligate coevolved mutualist of White bark Pine (P. albicaulis). Cache-site selection has not been formally studied in Clark...

  16. Caching strategies for improving performance of web-based Geographic applications

    NASA Astrophysics Data System (ADS)

    Liu, M.; Brodzik, M.; Collins, J. A.; Lewis, S.; Oldenburg, J.

    2012-12-01

    The NASA Operation IceBridge mission collects airborne remote sensing measurements to bridge the gap between NASA's Ice, Cloud and Land Elevation Satellite (ICESat) mission and the upcoming ICESat-2 mission. The IceBridge Data Portal from the National Snow and Ice Data Center provides an intuitive web interface for accessing IceBridge mission observations and measurements. Scientists and users usually do not have knowledge about the individual campaigns but are interested in data collected in a specific place. We have developed a high-performance map interface to allow users to quickly zoom to an area of interest and see any Operation IceBridge overflights. The map interface consists of two layers: the user can pan and zoom on the base map layer; the flight line layer that overlays the base layer provides all the campaign missions that intersect with the current map view. The user can click on the flight campaigns and download the data as needed. The OpenGIS® Web Map Service Interface Standard (WMS) provides a simple HTTP interface for requesting geo-registered map images from one or more distributed geospatial databases. Web Feature Service (WFS) provides an interface allowing requests for geographical features across the web using platform-independent calls. OpenLayers provides vector support (points, polylines and polygons) to build a WMS/WFS client for displaying both layers on the screen. Map Server, an open source development environment for building spatially enabled internet applications, is serving the WMS and WFS spatial data to OpenLayers. Early releases of the portal displayed unacceptably poor load time performance for flight lines and the base map tiles. This issue was caused by long response times from the map server in generating all map tiles and flight line vectors. We resolved the issue by implementing various caching strategies on top of the WMS and WFS services, including the use of Squid (www.squid-cache.org) to cache frequently-used content

  17. Advantages of masting in European beech: timing of granivore satiation and benefits of seed caching support the predator dispersal hypothesis.

    PubMed

    Zwolak, Rafał; Bogdziewicz, Michał; Wróbel, Aleksandra; Crone, Elizabeth E

    2016-03-01

    The predator satiation and predator dispersal hypotheses provide alternative explanations for masting. Both assume satiation of seed-eating vertebrates. They differ in whether satiation occurs before or after seed removal and caching by granivores (predator satiation and predator dispersal, respectively). This difference is largely unrecognized, but it is demographically important because cached seeds are dispersed and often have a microsite advantage over nondispersed seeds. We conducted rodent exclosure experiments in two mast and two nonmast years to test predictions of the predator dispersal hypothesis in our study system of yellow-necked mice (Apodemus flavicollis) and European beech (Fagus sylvatica). Specifically, we tested whether the fraction of seeds removed from the forest floor is similar during mast and nonmast years (i.e., lack of satiation before seed caching), whether masting decreases the removal of cached seeds (i.e., satiation after seed storage), and whether seed caching increases the probability of seedling emergence. We found that masting did not result in satiation at the seed removal stage. However, masting decreased the removal of cached seeds, and seed caching dramatically increased the probability of seedling emergence relative to noncached seeds. European beech thus benefits from masting through the satiation of scatterhoarders that occurs only after seeds are removed and cached. Although these findings do not exclude other evolutionary advantages of beech masting, they indicate that fitness benefits of masting extend beyond the most commonly considered advantages of predator satiation and increased pollination efficiency.

  18. Memory development in the second year: for events or locations?

    PubMed

    Russell, James; Thompson, Doreen

    2003-04-01

    We employed an object-placement/object-removal design, inspired by recent work on 'episodic-like' memory in scrub jays (Clayton, N. S., & Dickinson, A. (1998). Episodic-like memory during cache recovery by scrub jays. Nature, 395, 272-274), to examine the possibility that children in the second year of life have event-based memories. In one task, a successful search could have been due to the recall of an object-removal event. In the second task, a successful search could only have been caused by recall of where objects were located. Success was general in the oldest group of children (21-25 months), while performance was broadly similar on the two tasks. The parsimonious interpretation of this outcome is that the first task was performed by location memory, not by event memory. We place these data in the context of object permanence development.

  19. Neuropsychiatric symptoms as risk factors for progression from CIND to dementia: the Cache County Study.

    PubMed

    Peters, M E; Rosenberg, P B; Steinberg, M; Norton, M C; Welsh-Bohmer, K A; Hayden, K M; Breitner, J; Tschanz, J T; Lyketsos, C G

    2013-11-01

    To examine the association of neuropsychiatric symptom (NPS) severity with risk of transition to all-cause dementia, Alzheimer disease (AD), and vascular dementia (VaD). Survival analysis of time to dementia, AD, or VaD onset. Population-based study. 230 participants diagnosed with cognitive impairment, no dementia (CIND) from the Cache County Study of Memory Health and Aging were followed for a mean of 3.3 years. The Neuropsychiatric Inventory (NPI) was used to quantify the presence, frequency, and severity of NPS. Chi-squared statistics, t-tests, and Cox proportional hazard ratios were used to assess associations. The conversion rate from CIND to all-cause dementia was 12% per year, with risk factors including an APOE ε4 allele, lower Mini-Mental State Examination, lower 3MS, and higher CDR sum-of-boxes. The presence of at least one NPS was a risk factor for all-cause dementia, as was the presence of NPS with mild severity. Nighttime behaviors were a risk factor for all-cause dementia and of AD, whereas hallucinations were a risk factor for VaD. These data confirm that NPS are risk factors for conversion from CIND to dementia. Of special interest is that even NPS of mild severity are a risk for all-cause dementia or AD. Copyright © 2013 American Association for Geriatric Psychiatry. Published by Elsevier Inc. All rights reserved.

  20. Neuropsychiatric symptoms as risk factors for progression from CIND to dementia: The Cache County Study

    PubMed Central

    Peters, ME; Rosenberg, PB; Steinberg, M; Norton, MC; Welsh-Bohmer, KA; Hayden, KM; Breitner, J; Tschanz, JT; CG, Lyketsos

    2012-01-01

    Objectives To examine the association of neuropsychiatric symptom (NPS) severity with risk of transition to all-cause dementia, Alzheimer's disease (AD), and vascular dementia (VaD). Design Survival analysis of time to dementia, AD, or VaD onset. Setting Population-based study. Participants 230 participants diagnosed with cognitive impairment, no dementia (CIND) from the Cache County Study of Memory Health and Aging were followed for a mean of 3.3 years. Measurements The Neuropsychiatric Inventory (NPI) was used to quantify the presence, frequency, and severity of NPS. Chi-square statistics, t-tests, and Cox proportional hazard ratios were used to assess associations. Results The conversion rate from CIND to all-cause dementia was 12% per year, with risk factors including an APOE ε4 allele, lower MMSE, lower 3MS, and higher CDR sum-of-boxes. The presence of at least one NPS was a risk factor for all-cause dementia, as was the presence of NPS with mild severity. Nighttime behaviors were a risk factor for all-cause dementia and of AD, while hallucinations were a risk factor for VaD. Conclusions These data confirm that NPS are risk factors for conversion from CIND to dementia. Of special interest is that even NPS of mild severity are a risk for all-cause dementia or AD. PMID:23567370

  1. Using Solid State Disk Array as a Cache for LHC ATLAS Data Analysis

    NASA Astrophysics Data System (ADS)

    Yang, W.; Hanushevsky, A. B.; Mount, R. P.; Atlas Collaboration

    2014-06-01

    User data analysis in high energy physics presents a challenge to spinning-disk based storage systems. The analysis is data-intensive, yet reads are small, sparse and cover a large volume of data files. It is also unpredictable due to users' response to storage performance. We describe here a system with an array of Solid State Disk as a non-conventional, standalone file level cache in front of the spinning disk storage to help improve the performance of LHC ATLAS user analysis at SLAC. The system uses several days of data access records to make caching decisions. It can also use information from other sources such as a work-flow management system. We evaluate the performance of the system both in terms of caching and its impact on user analysis jobs. The system currently uses Xrootd technology, but the technique can be applied to any storage system.
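
    A small sketch of making caching decisions from recent access records, not SLAC's Xrootd-based system: a file is admitted to the SSD cache once it has been read at least a minimum number of times within a sliding window of days, and the least recently used file is evicted when space runs out. The capacity, thresholds, and trace are illustrative.

```python
# Record-driven SSD file-cache admission with LRU eviction (illustrative policy).
from collections import OrderedDict, defaultdict

class SSDFileCache:
    def __init__(self, capacity_files, min_reads=2, window_days=3):
        self.capacity, self.min_reads, self.window = capacity_files, min_reads, window_days
        self.history = defaultdict(list)     # file -> days on which it was accessed
        self.cached = OrderedDict()          # files currently on SSD, LRU order

    def access(self, fname, day):
        self.history[fname] = [d for d in self.history[fname] if day - d < self.window] + [day]
        if fname in self.cached:
            self.cached.move_to_end(fname)
            return "ssd hit"
        if len(self.history[fname]) >= self.min_reads:       # admission decision from recent records
            if len(self.cached) >= self.capacity:
                self.cached.popitem(last=False)
            self.cached[fname] = day
            return "admitted to ssd"
        return "served from disk"

c = SSDFileCache(capacity_files=2)
for f, d in [("a", 1), ("a", 1), ("a", 2), ("b", 2)]:
    print(f, d, c.access(f, d))
```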

  2. MEMORY FOR POETRY: MORE THAN MEANING?

    PubMed

    Atchley, Rachel M; Hare, Mary L

    The assumption has become that memory for words' sound patterns, or form, is rapidly lost in comparison to content. Memory for form is also assumed to be verbatim rather than schematic. Oral story-telling traditions suggest otherwise. The present experiment investigated if form can be remembered schematically in spoken poetry, a context in which form is important. We also explored if sleep could help preserve memory for form. We tested whether alliterative sound patterns could cue memory for poetry lines both immediately and after a delay of 12 hours that did or did not include sleep. Twelve alliterative poetry lines were modified into same alliteration, different alliteration, and no alliteration paraphrases. We predicted that memory for original poetry lines would be less accurate after 12 hours, same alliteration paraphrases would be falsely recognized as originals more often after 12 hours, and that the no-sleep group would make more errors. Different alliteration and no alliteration paraphrases were not expected to share this effect due to schematically different sound patterns. Our data support these hypotheses and provide evidence that memory for form is schematic in nature, retained in contexts in which form matters, and that sleep may help preserve memory for sound patterns.

  3. MEMORY FOR POETRY: MORE THAN MEANING?

    PubMed Central

    Atchley, Rachel M.; Hare, Mary L.

    2015-01-01

    The assumption has become that memory for words’ sound patterns, or form, is rapidly lost in comparison to content. Memory for form is also assumed to be verbatim rather than schematic. Oral story-telling traditions suggest otherwise. The present experiment investigated if form can be remembered schematically in spoken poetry, a context in which form is important. We also explored if sleep could help preserve memory for form. We tested whether alliterative sound patterns could cue memory for poetry lines both immediately and after a delay of 12 hours that did or did not include sleep. Twelve alliterative poetry lines were modified into same alliteration, different alliteration, and no alliteration paraphrases. We predicted that memory for original poetry lines would be less accurate after 12 hours, same alliteration paraphrases would be falsely recognized as originals more often after 12 hours, and that the no-sleep group would make more errors. Different alliteration and no alliteration paraphrases were not expected to share this effect due to schematically different sound patterns. Our data support these hypotheses and provide evidence that memory for form is schematic in nature, retained in contexts in which form matters, and that sleep may help preserve memory for sound patterns. PMID:26401226

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mittal, Sparsh; Zhang, Zhao

    With each CMOS technology generation, leakage energy consumption has been increasing dramatically; hence, managing the leakage power consumption of large last-level caches (LLCs) has become a critical issue in modern processor design. In this paper, we present EnCache, a novel software-based technique which uses dynamic profiling-based cache reconfiguration for saving cache leakage energy. EnCache uses a simple hardware component called a profiling cache, which dynamically predicts the energy efficiency of an application for 32 possible cache configurations. Using these estimates, system software reconfigures the cache to the most energy-efficient configuration. EnCache uses dynamic cache reconfiguration and hence does not require offline profiling or per-application parameter tuning. Furthermore, EnCache optimizes directly for the overall memory subsystem (LLC and main memory) energy efficiency instead of the LLC energy efficiency alone. Experiments performed with an x86-64 simulator and workloads from the SPEC2006 suite confirm that EnCache provides larger energy savings than a conventional energy saving scheme. For single-core and dual-core system configurations, the average savings in memory subsystem energy over a shared baseline configuration are 30.0% and 27.3%, respectively.
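
    A minimal sketch of the reconfiguration decision described above, not the EnCache implementation: given the profiling cache's energy estimates for candidate configurations, software simply switches the LLC to the configuration with the lowest predicted memory-subsystem energy. The configuration encoding and the numbers are hypothetical.

```python
# Selecting the most energy-efficient cache configuration from profiled estimates (illustrative).
def pick_config(energy_estimates):
    """energy_estimates: {(ways, active_set_fraction): predicted memory-subsystem energy}"""
    return min(energy_estimates, key=energy_estimates.get)

estimates = {  # hypothetical numbers for 4 of the 32 profiled configurations
    (8, 1.0): 1.00, (4, 1.0): 0.84, (8, 0.5): 0.91, (4, 0.5): 0.88,
}
print(pick_config(estimates))   # -> (4, 1.0): the lowest predicted energy
```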

  5. [Memory assessment by means of virtual reality: its present and future].

    PubMed

    Diaz-Orueta, Unai; Climent, Gema; Cardas-Ibanez, Jaione; Alonso, Laura; Olmo-Osa, Juan; Tirapu-Ustarroz, Javier

    2016-01-16

    The human memory is a complex cognitive system whose close relationship with executive functions implies that, on many occasions, a mnemonic deficit involves difficulties operating with correctly stored contents. Traditional memory tests, more focused on the storage of information than on its processing, may be poorly sensitive both to subjects' daily life functioning and to changes brought about by rehabilitation programs. In memory assessment, there is plenty of evidence regarding the need to improve it by means of tests that offer higher ecological validity, with information that may be presented in various sensory modalities and produced simultaneously. Virtual reality reproduces three-dimensional environments with which the patient interacts in a dynamic way, with a sense of immersion similar to presence in and exposure to a real environment, and in which the presentation of stimuli, distractors and other variables may be systematically controlled. The current review aims to examine the trajectory of neuropsychological assessment of memory based on virtual reality environments, touring existing tests designed for assessing learning and prospective, episodic and spatial memory, as well as the most recent attempts to perform a comprehensive evaluation of all memory components.

  6. An evaluation of memory accuracy in food hoarding marsh tits Poecile palustris--how accurate are they compared to humans?

    PubMed

    Brodin, Anders; Urhan, A Utku

    2013-07-01

    Laboratory studies of scatter-hoarding birds have become a model system for spatial memory studies. Considering that such birds are known to have good spatial memory, recovery success in lab studies seems low: in parids (titmice and chickadees) it typically ranges between 25 and 60% when five seeds are cached in 50-128 available caching sites. Since these birds store many thousands of food items in nature in a single autumn, one might expect them to easily retrieve five seeds in a laboratory where they know the environment and its caching sites in detail. We designed a laboratory set-up to be as similar as possible to previous studies and trained wild-caught marsh tits Poecile palustris to store and retrieve in this set-up. Our results agree closely with earlier studies: around 40% of the first ten looks were correct when the birds had stored five seeds in 100 available sites, both 5 and 24 h after storing. The cumulative success curve suggests high success during the first 15 looks, after which it declines. Humans performed much better; in the first five looks most subjects were 100% correct. We discuss possible reasons why the birds did not do better. Copyright © 2013 Elsevier B.V. All rights reserved.

  7. Memory management and compiler support for rapid recovery from failures in computer systems

    NASA Technical Reports Server (NTRS)

    Fuchs, W. K.

    1991-01-01

    This paper describes recent developments in the use of memory management and compiler technology to support rapid recovery from failures in computer systems. The techniques described include cache coherence protocols for user transparent checkpointing in multiprocessor systems, compiler-based checkpoint placement, compiler-based code modification for multiple instruction retry, and forward recovery in distributed systems utilizing optimistic execution.

  8. Hippocampal-prefrontal input supports spatial encoding in working memory.

    PubMed

    Spellman, Timothy; Rigotti, Mattia; Ahmari, Susanne E; Fusi, Stefano; Gogos, Joseph A; Gordon, Joshua A

    2015-06-18

    Spatial working memory, the caching of behaviourally relevant spatial cues on a timescale of seconds, is a fundamental constituent of cognition. Although the prefrontal cortex and hippocampus are known to contribute jointly to successful spatial working memory, the anatomical pathway and temporal window for the interaction of these structures critical to spatial working memory has not yet been established. Here we find that direct hippocampal-prefrontal afferents are critical for encoding, but not for maintenance or retrieval, of spatial cues in mice. These cues are represented by the activity of individual prefrontal units in a manner that is dependent on hippocampal input only during the cue-encoding phase of a spatial working memory task. Successful encoding of these cues appears to be mediated by gamma-frequency synchrony between the two structures. These findings indicate a critical role for the direct hippocampal-prefrontal afferent pathway in the continuous updating of task-related spatial information during spatial working memory.

  9. Ordering of guarded and unguarded stores for no-sync I/O

    DOEpatents

    Gara, Alan; Ohmacht, Martin

    2013-06-25

    A parallel computing system processes at least one store instruction. A first processor core issues a store instruction. A first queue, associated with the first processor core, stores the store instruction. A second queue, associated with a first local cache memory device of the first processor core, stores the store instruction. The first processor core updates first data in the first local cache memory device according to the store instruction. A third queue, associated with at least one shared cache memory device, stores the store instruction. The first processor core invalidates second data, associated with the store instruction, in the at least one shared cache memory. The first processor core invalidates third data, associated with the store instruction, in other local cache memory devices of other processor cores. The first processor core flushes only the first queue.

  10. Statistical Inference-Based Cache Management for Mobile Learning

    ERIC Educational Resources Information Center

    Li, Qing; Zhao, Jianmin; Zhu, Xinzhong

    2009-01-01

    Supporting efficient data access in the mobile learning environment has become a hot research problem in recent years, and the problem becomes tougher when clients use light-weight mobile devices such as cell phones, whose limited storage space prevents them from holding a large cache. A practical solution is to store the cache…

  11. EMR Database Upgrade from MUMPS to CACHE: Lessons Learned.

    PubMed

    Alotaibi, Abduallah; Emshary, Mshary; Househ, Mowafa

    2014-01-01

    Over the past few years, Saudi hospitals have been implementing and upgrading Electronic Medical Record Systems (EMRs) to ensure secure data transfer and exchange between EMRs. This paper focuses on the process and lessons learned in upgrading the MUMPS database to the newer Caché database to ensure the integrity of electronic data transfer within a local Saudi hospital. It examines the steps taken by the departments concerned, their action plans and how the change process was managed. Results show that user satisfaction was achieved after the upgrade was completed. The system was stable and offered better healthcare quality to patients as a result of the data exchange. Hardware infrastructure upgrades improved scalability, and software upgrades to Caché improved stability. Overall performance was enhanced and new functions (CPOE) were added during the upgrades. The lessons learned were: 1) involve higher management; 2) research the multiple solutions available in the market; 3) plan for a variety of implementation scenarios.

  12. Collaborative video caching scheme over OFDM-based long-reach passive optical networks

    NASA Astrophysics Data System (ADS)

    Li, Yan; Dai, Shifang; Chang, Xiangmao

    2018-07-01

    Long-reach passive optical networks (LR-PONs) are now considered a desirable access solution for cost-efficiently delivering broadband services by integrating the metro network with the access network, among which orthogonal frequency division multiplexing (OFDM)-based LR-PONs have attracted greater research interest due to their good robustness and high spectrum efficiency. In such attractive OFDM-based LR-PONs, however, it is still challenging to efficiently provide video service, one of the most popular and profitable broadband services, to end users. Because OFDM-based LR-PONs serve more video requesters (i.e., end users) located far from the optical line terminal (OLT), the traditional video delivery model, which relies on the OLT to transmit videos to requesters, is inefficient: it incurs both longer video playback delay and higher downstream bandwidth consumption. In this paper, we propose a novel video caching scheme that collaboratively caches videos on distributed optical network units (ONUs), which are closer to end users, so that ONUs can provide videos to requesters promptly and cost-efficiently over OFDM-based LR-PONs. We first construct an OFDM-based LR-PON architecture that enables cooperation among ONUs while caching videos. Given each ONU's limited storage capacity, we then propose collaborative approaches for caching videos on ONUs that aim to maximize the local video hit ratio (LVHR), i.e., the proportion of video requests that can be satisfied directly by ONUs, under diverse resource requirements and request distributions. Simulations are finally conducted to evaluate the efficiency of our proposed scheme.
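
    As an illustrative sketch of the placement problem described above (not the paper's algorithm), the snippet below greedily caches the videos with the highest expected requests per gigabyte on an ONU until its capacity is exhausted; the video names, sizes, request counts and capacity are hypothetical.

```python
# Illustrative greedy placement for one ONU: favor videos with the best
# expected-requests-per-gigabyte density, approximating a high local video
# hit ratio (LVHR). All inputs below are made up for the example.

def place_videos(videos, capacity_gb):
    """videos: list of (name, size_gb, expected_requests); returns cached names."""
    cached, used = [], 0.0
    for name, size, reqs in sorted(videos, key=lambda v: v[2] / v[1], reverse=True):
        if used + size <= capacity_gb:
            cached.append(name)
            used += size
    return cached

catalog = [("news", 2.0, 120), ("film_a", 8.0, 300), ("film_b", 6.0, 90),
           ("series", 20.0, 450), ("clip", 0.5, 40)]
print(place_videos(catalog, capacity_gb=16))   # ['clip', 'news', 'film_a']
```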

  13. Achieving cost/performance balance ratio using tiered storage caching techniques: A case study with CephFS

    NASA Astrophysics Data System (ADS)

    Poat, M. D.; Lauret, J.

    2017-10-01

    As demand for widely accessible storage capacity increases and usage rises, steady IO performance is desired but tends to suffer in multi-user environments. Typical deployments use standard hard drives because the cost per GB is quite low. However, HDD-based storage solutions are not known to scale well with process concurrency, and a high rate of IOPS soon creates a “random access” pattern that kills performance. Though not all SSDs are alike, SSDs are an established technology often used to address exactly this “random access” problem. In this contribution, we first discuss the IO performance of many different SSD drives (tested in a comparable and standalone manner). We then discuss the performance and integrity of at least three low-level disk caching techniques (Flashcache, dm-cache, and bcache), including individual policies, procedures, and IO performance. Furthermore, the STAR online computing infrastructure currently hosts a POSIX-compliant Ceph distributed storage cluster; while caching is not a native feature of CephFS (it exists only in the Ceph Object store), we show how one can implement a caching mechanism by profiting from an implementation at a lower level. As our illustration, we present our CephFS setup, IO performance tests, and overall experience with such a configuration. We hope this work will serve the community’s interest in using disk-caching mechanisms for applicable uses such as distributed storage systems and for seeking an overall IO performance gain.

  14. Storageless and caching Tier-2 models in the UK context

    NASA Astrophysics Data System (ADS)

    Cadellin Skipsey, Samuel; Dewhurst, Alastair; Crooks, David; MacMahon, Ewan; Roy, Gareth; Smith, Oliver; Mohammed, Kashif; Brew, Chris; Britton, David

    2017-10-01

    Operational and other pressures have led WLCG experiments to move increasingly to a stratified model for Tier-2 resources, where “fat” Tier-2s (“T2Ds”) and “thin” Tier-2s (“T2Cs”) provide different levels of service. In the UK, this distinction is also encouraged by the terms of the current GridPP5 funding model. In anticipation of this, testing has been performed on the implications, and potential implementation, of such a distinction in our resources. In particular, this contribution presents the results of testing storage at T2Cs, where the “thin” nature is expressed by the site having either no local data storage or only a thin caching layer; data is streamed or copied from a “nearby” T2D when needed by jobs. In OSG, this model has been adopted successfully for CMS AAA sites, but the network topology and capacity in the USA are significantly different from those in the UK (and much of Europe). We present the results of several operational tests: the in-production University College London (UCL) site, which runs ATLAS workloads using storage at the Queen Mary University of London (QMUL) site; the Oxford site, which has had scaling tests performed against T2Ds in various locations in the UK (to test network effects); and the Durham site, which has been testing the specific ATLAS caching solution of “Rucio Cache” integration with ARC’s caching layer.

  15. Memory, Meaning, & Method: A View of Language Teaching. Second Edition.

    ERIC Educational Resources Information Center

    Stevick, Earl W.

    The revised second edition of a 1976 book explores the literature of research on memory, creation of meaning in language learning, and second language teaching methodology, incorporating results of recent work in those areas. Each of the 12 chapters begins with a series of questions to be addressed and ends with further questions. Chapter topics…

  16. The Caregiver Contribution to Heart Failure Self-Care (CACHS): Further Psychometric Testing of a Novel Instrument.

    PubMed

    Buck, Harleah G; Harkness, Karen; Ali, Muhammad Usman; Carroll, Sandra L; Kryworuchko, Jennifer; McGillion, Michael

    2017-04-01

    Caregivers (CGs) contribute important assistance with heart failure (HF) self-care, including daily maintenance, symptom monitoring, and management. Until CGs' contributions to self-care can be quantified, it is impossible to characterize them, account for their impact on patient outcomes, or perform meaningful cost analyses. The purpose of this study was to conduct psychometric testing and item reduction on the recently developed 34-item Caregiver Contribution to Heart Failure Self-care (CACHS) instrument using classical and item response theory methods. Fifty CGs (mean age 63 years ±12.84; 70% female) recruited from a HF clinic completed the CACHS in 2014, and the results were evaluated using classical test theory and item response theory. Items were to be deleted for low (<.05) or high (>.95) endorsement, low (<.3) or high (>.7) corrected item-total correlations, significant pairwise correlation coefficients, floor or ceiling effects, relatively low latent trait and item information function levels (<1.5 and p > .5), and differential item functioning. After analysis, 14 items were excluded, resulting in a 20-item instrument (self-care maintenance, eight items; monitoring, seven items; and management, five items). Most items demonstrated moderate to high discrimination (median 2.13, minimum .77, maximum 5.05) and appropriate item difficulty (-2.7 to 1.4). Internal consistency reliability was excellent (Cronbach α = .94, average inter-item correlation = .41) with no ceiling effects. The newly developed 20-item version of the CACHS is supported by rigorous instrument development and represents a novel instrument to measure CGs' contribution to HF self-care. © 2016 Wiley Periodicals, Inc.
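
    The item-analysis criteria above mix classical test theory and item response theory statistics. As a rough illustration of two of the classical quantities mentioned, the sketch below computes corrected item-total correlations and Cronbach's alpha with numpy; the 50 × 20 response matrix is fabricated for the example and is not CACHS data.

```python
# Sketch of two classical-test-theory quantities used in item analysis:
# corrected item-total correlations and Cronbach's alpha. The response matrix
# (respondents x items) is synthetic, purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(50, 20)).astype(float)  # 50 CGs, 20 items

def corrected_item_total(responses):
    """Correlation of each item with the sum of the remaining items."""
    total = responses.sum(axis=1)
    return np.array([np.corrcoef(responses[:, i], total - responses[:, i])[0, 1]
                     for i in range(responses.shape[1])])

def cronbach_alpha(responses):
    k = responses.shape[1]
    item_var = responses.var(axis=0, ddof=1).sum()
    total_var = responses.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

print(corrected_item_total(responses).round(2))
print(round(cronbach_alpha(responses), 2))
```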

  17. Ecosystem services from keystone species: diversionary seeding and seed-caching desert rodents can enhance Indian ricegrass seedling establishment

    USGS Publications Warehouse

    Longland, William; Ostoja, Steven M.

    2013-01-01

    Seeds of Indian ricegrass (Achnatherum hymenoides), a native bunchgrass common to sandy soils on arid western rangelands, are naturally dispersed by seed-caching rodent species, particularly Dipodomys spp. (kangaroo rats). These animals cache large quantities of seeds when mature seeds are available on or beneath plants and recover most of their caches for consumption during the remainder of the year. Unrecovered seeds in caches account for the vast majority of Indian ricegrass seedling recruitment. We applied three different densities of white millet (Panicum miliaceum) seeds as “diversionary foods” to plots at three Great Basin study sites in an attempt to reduce rodents' over-winter cache recovery so that more Indian ricegrass seeds would remain in soil seedbanks and potentially establish new seedlings. One year after diversionary seed application, a moderate level of Indian ricegrass seedling recruitment occurred at two of our study sites in western Nevada, although there was no recruitment at the third site in eastern California. At both Nevada sites, the number of Indian ricegrass seedlings sampled along transects was significantly greater on all plots treated with diversionary seeds than on non-seeded control plots. However, the density of diversionary seeds applied to plots had a marginally non-significant effect on seedling recruitment, and it was not correlated with recruitment patterns among plots. Results suggest that application of a diversionary seed type that is preferred by seed-caching rodents provides a promising passive restoration strategy for target plant species that are dispersed by these rodents.

  18. Store-operate-coherence-on-value

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Dong; Heidelberger, Philip; Kumar, Sameer

    A system, method and computer program product for performing various store-operate instructions in a parallel computing environment that includes a plurality of processors and at least one cache memory device. A queue in the system receives, from a processor, a store-operate instruction that specifies under which condition a cache coherence operation is to be invoked. A hardware unit in the system runs the received store-operate instruction. The hardware unit evaluates whether a result of running the received store-operate instruction satisfies the condition. The hardware unit invokes a cache coherence operation on a cache memory address associated with the received store-operate instruction if the result satisfies the condition. Otherwise, the hardware unit does not invoke the cache coherence operation on the cache memory device.
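
    The following is a software model, not the patented hardware, of the conditional-coherence flow described: a store-operate instruction updates a local copy and triggers an invalidation of other caches' copies only when its result satisfies the instruction's condition. The names and the decrement-to-zero example are illustrative.

```python
# Software model of a store-operate that invokes a coherence (invalidation)
# action only when its result satisfies a condition. Caches are dicts here.

def store_operate(local_cache, other_caches, addr, operand, op, condition):
    result = op(local_cache.get(addr, 0), operand)   # e.g. fetch-and-add
    local_cache[addr] = result
    if condition(result):                            # e.g. counter reached zero
        for cache in other_caches:                   # invalidate stale copies
            cache.pop(addr, None)
    return result

local = {0x40: 3}
peers = [{0x40: 3}, {0x40: 3}]
# Decrement a shared counter three times; only the store that drives it to
# zero invalidates the peers' copies.
for _ in range(3):
    store_operate(local, peers, 0x40, -1, lambda a, b: a + b, lambda r: r == 0)
print(local, peers)   # peers lose address 0x40 only after the counter hits zero
```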

  19. Cache-Oblivious parallel SIMD Viterbi decoding for sequence search in HMMER.

    PubMed

    Ferreira, Miguel; Roma, Nuno; Russo, Luis M S

    2014-05-30

    HMMER is a commonly used bioinformatics tool based on Hidden Markov Models (HMMs) to analyze and process biological sequences. One of its main homology engines is based on the Viterbi decoding algorithm, which was already highly parallelized and optimized using Farrar's striped processing pattern with Intel SSE2 instruction set extension. A new SIMD vectorization of the Viterbi decoding algorithm is proposed, based on an SSE2 inter-task parallelization approach similar to the DNA alignment algorithm proposed by Rognes. Besides this alternative vectorization scheme, the proposed implementation also introduces a new partitioning of the Markov model that allows a significantly more efficient exploitation of the cache locality. Such optimization, together with an improved loading of the emission scores, allows the achievement of a constant processing throughput, regardless of the innermost-cache size and of the dimension of the considered model. The proposed optimized vectorization of the Viterbi decoding algorithm was extensively evaluated and compared with the HMMER3 decoder to process DNA and protein datasets, proving to be a rather competitive alternative implementation. Being always faster than the already highly optimized ViterbiFilter implementation of HMMER3, the proposed Cache-Oblivious Parallel SIMD Viterbi (COPS) implementation provides a constant throughput and offers a processing speedup as high as two times faster, depending on the model's size.

  20. Cache-Oblivious parallel SIMD Viterbi decoding for sequence search in HMMER

    PubMed Central

    2014-01-01

    Background HMMER is a commonly used bioinformatics tool based on Hidden Markov Models (HMMs) to analyze and process biological sequences. One of its main homology engines is based on the Viterbi decoding algorithm, which was already highly parallelized and optimized using Farrar’s striped processing pattern with Intel SSE2 instruction set extension. Results A new SIMD vectorization of the Viterbi decoding algorithm is proposed, based on an SSE2 inter-task parallelization approach similar to the DNA alignment algorithm proposed by Rognes. Besides this alternative vectorization scheme, the proposed implementation also introduces a new partitioning of the Markov model that allows a significantly more efficient exploitation of the cache locality. Such optimization, together with an improved loading of the emission scores, allows the achievement of a constant processing throughput, regardless of the innermost-cache size and of the dimension of the considered model. Conclusions The proposed optimized vectorization of the Viterbi decoding algorithm was extensively evaluated and compared with the HMMER3 decoder to process DNA and protein datasets, proving to be a rather competitive alternative implementation. Being always faster than the already highly optimized ViterbiFilter implementation of HMMER3, the proposed Cache-Oblivious Parallel SIMD Viterbi (COPS) implementation provides a constant throughput and offers a processing speedup as high as two times faster, depending on the model’s size. PMID:24884826
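
    The COPS contribution above is a SIMD, cache-aware reorganization of Viterbi decoding; the sketch below is only a plain scalar reference implementation of the underlying recurrence, in log space over a toy two-state model whose probabilities are made up. It is not the HMMER3 or COPS code.

```python
# Plain scalar Viterbi decoding in log space over a toy HMM, showing the
# recurrence that COPS vectorizes and re-partitions for cache locality.
import math

def viterbi(obs, states, log_start, log_trans, log_emit):
    """Return the most likely state path and its log probability."""
    V = [{s: log_start[s] + log_emit[s][obs[0]] for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            best_prev, best_score = max(
                ((p, V[t - 1][p] + log_trans[p][s]) for p in states),
                key=lambda x: x[1])
            V[t][s] = best_score + log_emit[s][obs[t]]
            back[t][s] = best_prev
    last = max(V[-1], key=V[-1].get)
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return list(reversed(path)), V[-1][last]

# Toy two-state model over a DNA-like alphabet (values are illustrative only).
lg = math.log
states = ("match", "background")
start = {"match": lg(0.5), "background": lg(0.5)}
trans = {"match": {"match": lg(0.9), "background": lg(0.1)},
         "background": {"match": lg(0.1), "background": lg(0.9)}}
emit = {"match": {"A": lg(0.4), "C": lg(0.1), "G": lg(0.1), "T": lg(0.4)},
        "background": {b: lg(0.25) for b in "ACGT"}}
print(viterbi("ATGCTA", states, start, trans, emit))
```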

  1. Memory, mental time travel and The Moustachio Quartet

    PubMed Central

    Wilkins, Clive

    2017-01-01

    Mental time travel allows us to revisit our memories and imagine future scenarios, which is why memories are not only about the past but are also prospective. These episodic memories are not a fixed store of what happened; rather, they are reassessed each time they are revisited and depend on the sequence in which events unfold. In this paper, we shall explore the complex relationships between memory and human experience, including through a series of novels, ‘The Moustachio Quartet’, that can be read in any order. To do so, we shall integrate evidence from science and the arts to explore the subjective nature of memory and mental time travel, and argue that it has evolved primarily for prospection as opposed to retrospection. Furthermore, we shall question the notion that mental time travel is a uniquely human construct, and argue that some of the best evidence for the evolution of mental time travel comes from our distantly related cousins, the corvids, which cache food for the future and rely on long-lasting and highly accurate memories of what, where and when they stored their stashes of food. PMID:28479980

  2. Memory, mental time travel and The Moustachio Quartet.

    PubMed

    Clayton, Nicola; Wilkins, Clive

    2017-06-06

    Mental time travel allows us to revisit our memories and imagine future scenarios, which is why memories are not only about the past but are also prospective. These episodic memories are not a fixed store of what happened; rather, they are reassessed each time they are revisited and depend on the sequence in which events unfold. In this paper, we shall explore the complex relationships between memory and human experience, including through a series of novels, 'The Moustachio Quartet', that can be read in any order. To do so, we shall integrate evidence from science and the arts to explore the subjective nature of memory and mental time travel, and argue that it has evolved primarily for prospection as opposed to retrospection. Furthermore, we shall question the notion that mental time travel is a uniquely human construct, and argue that some of the best evidence for the evolution of mental time travel comes from our distantly related cousins, the corvids, which cache food for the future and rely on long-lasting and highly accurate memories of what, where and when they stored their stashes of food.

  3. Study on data acquisition system based on reconfigurable cache technology

    NASA Astrophysics Data System (ADS)

    Zhang, Qinchuan; Li, Min; Jiang, Jun

    2018-03-01

    Waveform capture rate is one of the key features of digital acquisition systems; it represents the system's waveform processing capability per unit time. The higher the waveform capture rate, the greater the chance of capturing elusive events and the more reliable the test result. This paper first analyzes the impact of several factors on the waveform capture rate of the system; a novel technology based on a reconfigurable cache is then proposed to optimize the system architecture, and the simulation results show that the signal-to-noise ratio of the signal and the capacity and structure of the cache have significant effects on the waveform capture rate. Finally, the technology is demonstrated in engineering practice, and the results show that the waveform capture rate of the system is improved substantially without a significant increase in system cost, and that the proposed technology has broad application prospects.

  4. Enhancement web proxy cache performance using Wrapper Feature Selection methods with NB and J48

    NASA Astrophysics Data System (ADS)

    Mahmoud Al-Qudah, Dua'a.; Funke Olanrewaju, Rashidah; Wong Azman, Amelia

    2017-11-01

    The web proxy cache technique reduces response time by storing copies of pages between the client and server sides. If requested pages are cached in the proxy, there is no need to access the server. Because a cache is limited in size and costly compared to other storage, a cache replacement algorithm is used to determine which page to evict when the cache is full. However, conventional replacement algorithms such as Least Recently Used (LRU), First In First Out (FIFO), Least Frequently Used (LFU), and randomized policies may discard important pages just before they are used. Conventional algorithms also cannot be well optimized, since intelligently evicting a page requires an informed decision before replacement. Hence, most researchers propose integrating intelligent classifiers with the replacement algorithm to improve its performance. This research proposes using automated wrapper feature selection methods to choose the best subset of features that are relevant and influence the classifiers' prediction accuracy. The results show that the wrapper feature selection methods examined, namely Best First (BFS), Incremental Wrapper Subset Selection (IWSS) embedded with NB, and particle swarm optimization (PSO), reduce the number of features and have a good impact on reducing computation time. Using PSO enhances NB classifier accuracy by 1.1%, 0.43%, and 0.22% over NB with all features, BFS, and IWSS embedded with NB, respectively. PSO raises J48 accuracy by 0.03%, 1.91%, and 0.04% over the J48 classifier with all features, IWSS-embedded NB, and BFS, respectively. IWSS embedded with NB speeds up the NB and J48 classifiers much more than BFS and PSO do, reducing the computation time of NB by 0.1383 and that of J48 by 2.998.
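
    As a generic sketch of the wrapper idea (not the BFS, IWSS or PSO wrappers evaluated in the paper), the snippet below wraps forward sequential feature selection around a Gaussian Naive Bayes classifier with scikit-learn; the synthetic dataset stands in for real proxy-log features such as recency or frequency.

```python
# Wrapper feature selection sketch: a forward sequential search scored by the
# classifier it wraps (Gaussian NB). Synthetic data replaces real proxy traces.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=2000, n_features=10, n_informative=4,
                           random_state=0)          # stand-in for proxy-log data
nb = GaussianNB()

selector = SequentialFeatureSelector(nb, n_features_to_select=4,
                                     direction="forward", cv=5)
selector.fit(X, y)
mask = selector.get_support()

full_acc = cross_val_score(nb, X, y, cv=5).mean()
sel_acc = cross_val_score(nb, X[:, mask], y, cv=5).mean()
print("selected feature indices:", mask.nonzero()[0])
print(f"accuracy with all features: {full_acc:.3f}, with selected subset: {sel_acc:.3f}")
```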

  5. Fox Squirrels Match Food Assessment and Cache Effort to Value and Scarcity

    PubMed Central

    Delgado, Mikel M.; Nicholas, Molly; Petrie, Daniel J.; Jacobs, Lucia F.

    2014-01-01

    Scatter hoarders must allocate time to assess items for caching, and to carry and bury each cache. Such decisions should be driven by economic variables, such as the value of the individual food items, the scarcity of these items, competition for food items and risk of pilferage by conspecifics. The fox squirrel, an obligate scatter-hoarder, assesses cacheable food items using two overt movements, head flicks and paw manipulations. These behaviors allow an examination of squirrel decision processes when storing food for winter survival. We measured wild squirrels' time allocations and frequencies of assessment and investment behaviors during periods of food scarcity (summer) and abundance (fall), giving the squirrels a series of 15 items (alternating five hazelnuts and five peanuts). Assessment and investment per cache increased when resource value was higher (hazelnuts) or resources were scarcer (summer), but decreased as scarcity declined (end of sessions). This is the first study to show that assessment behaviors change in response to factors that indicate daily and seasonal resource abundance, and that these factors may interact in complex ways to affect food storing decisions. Food-storing tree squirrels may be a useful and important model species to understand the complex economic decisions made under natural conditions. PMID:24671221

  6. Killing and caching of an adult White-tailed deer, Odocoileus virginianus, by a single Gray Wolf, Canis lupus

    USGS Publications Warehouse

    Nelson, Michael E.

    2011-01-01

    A single Gray Wolf (Canis lupus) killed an adult male White-tailed Deer (Odocoileus virginianus) and cached the intact carcass in 76 cm of snow. The carcass was revisited and entirely consumed between four and seven days later. This is the first recorded observation of a Gray Wolf caching an entire adult deer.

  7. A Novel Two-Tier Cooperative Caching Mechanism for the Optimization of Multi-Attribute Periodic Queries in Wireless Sensor Networks

    PubMed Central

    Zhou, ZhangBing; Zhao, Deng; Shu, Lei; Tsang, Kim-Fung

    2015-01-01

    Wireless sensor networks, serving as an important interface between physical environments and computational systems, have been used extensively to support domain applications in which multi-attribute sensory data are queried from the network continuously and periodically. For some applications, sensory data may not vary significantly within a certain time duration. In this setting, sensory data gathered in a given time slot can be used to answer concurrent queries and may be reused to answer forthcoming queries when the variation of these data is within a certain threshold. To exploit this opportunity, a popularity-based cooperative caching mechanism is proposed in this article, where the popularity of sensory data is calculated from the queries issued in recent time slots. This popularity reflects how likely the sensory data are to be of interest to forthcoming queries. Generally, sensory data with the highest popularity are cached at the sink node, while sensory data less likely to be of interest to forthcoming queries are cached at the head nodes of the divided grid cells. Leveraging these cooperatively cached sensory data, queries are answered by composing data from the two cache tiers. Experimental evaluation shows that this approach can reduce network communication cost significantly and increase network capability. PMID:26131665
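
    As an illustrative sketch of the two-tier policy described above (not the paper's protocol), the snippet below derives attribute popularity from the queries issued in a recent window, caches the most popular attributes at the sink, and leaves the remainder to the grid-cell head nodes; the attribute names and sink capacity are hypothetical.

```python
# Popularity from recent queries, then a two-tier split: hottest attributes at
# the sink, the rest at the head nodes of the grid cells.
from collections import Counter

def popularity(recent_queries):
    """recent_queries: list of attribute lists, one per query in the window."""
    counts = Counter(a for q in recent_queries for a in q)
    total = sum(counts.values()) or 1
    return {attr: n / total for attr, n in counts.items()}

def place(scores, sink_capacity):
    ranked = sorted(scores, key=scores.get, reverse=True)
    sink = set(ranked[:sink_capacity])          # hottest attributes at the sink
    cell_heads = set(ranked[sink_capacity:])    # the rest cached in grid cells
    return sink, cell_heads

window = [["temperature", "humidity"], ["temperature"],
          ["light", "temperature"], ["humidity"]]
print(place(popularity(window), sink_capacity=1))   # temperature goes to the sink
```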

  8. Use of the sun as a heading indicator when caching and recovering in a wild rodent

    PubMed Central

    Samson, Jamie; Manser, Marta B.

    2016-01-01

    A number of diurnal species have been shown to use directional information from the sun to orientate. The use of the sun in this way has been suggested to occur in either a time-dependent (relying on specific positional information) or a time-compensated manner (a compass that adjusts itself over time with the shifts in the sun’s position). However, some interplay may occur between the two where a species could also use the sun in a time-limited way, whereby animals acquire certain information about the change of position, but do not show full compensational abilities. We tested whether Cape ground squirrels (Xerus inauris) use the sun as an orientation marker to provide information for caching and recovery. This species is a social sciurid that inhabits arid, sparsely vegetated habitats in Southern Africa, where the sun is nearly always visible during the diurnal period. Due to the lack of obvious landmarks, we predicted that they might use positional cues from the sun in the sky as a reference point when caching and recovering food items. We provide evidence that Cape ground squirrels use information from the sun’s position while caching and reuse this information in a time-limited way when recovering these caches. PMID:27580797

  9. Two-level main memory co-design: Multi-threaded algorithmic primitives, analysis, and simulation

    DOE PAGES

    Bender, Michael A.; Berry, Jonathan W.; Hammond, Simon D.; ...

    2017-01-03

    A challenge in computer architecture is that processors often cannot be fed data from DRAM as fast as CPUs can consume it. Therefore, many applications are memory-bandwidth bound. With this motivation, and the realization that traditional architectures (with all DRAM reachable only via bus) are insufficient to feed groups of modern processing units, vendors have introduced a variety of non-DDR 3D memory technologies (Hybrid Memory Cube (HMC), Wide I/O 2, High Bandwidth Memory (HBM)). These offer higher bandwidth and lower power by stacking DRAM chips on the processor or nearby on a silicon interposer. We will call these solutions “near-memory,” and, if user-addressable, “scratchpad.” High-performance systems on the market now offer two levels of main memory: near-memory on package and traditional DRAM further away. In the near term we expect the latencies of near-memory and DRAM to be similar. Here, it is natural to think of near-memory as another module on the DRAM level of the memory hierarchy. Vendors are expected to offer modes in which the near-memory is used as a cache, but we believe that this will be inefficient.

  10. EarthCache as a Tool to Promote Earth-Science in Public School Classrooms

    NASA Astrophysics Data System (ADS)

    Gochis, E. E.; Rose, W. I.; Klawiter, M.; Vye, E. C.; Engelmann, C. A.

    2011-12-01

    Geoscientists often find it difficult to bridge the gap in communication between university research and what is learned in the public schools. Today's schools operate in a high-stakes environment that only allows instruction based on State and National Earth Science curriculum standards. These standards are often unknown to academics or are written in a style that obfuscates the transfer of emerging scientific research to students in the classroom. Earth Science teachers are in an ideal position to make this link because they have a background in science as well as a solid understanding of the required curriculum standards for their grade and the pedagogical expertise to pass on new information to their students. As part of the Michigan Teacher Excellence Program (MiTEP), teachers from the Grand Rapids, Kalamazoo, and Jackson school districts participate in 2-week field courses with Michigan Tech University to learn from earth science experts about how the earth works. This course connects the Earth Science Literacy Principles' Big Ideas and common student misconceptions with standards-based education. During the 2011 field course, we developed and began to implement a three-phase EarthCache model that provides a geospatial interactive medium for teachers to translate the material they learn in the field to the students in their standards-based classrooms. MiTEP participants use GPS and Google Earth to navigate to Michigan sites of geo-significance. At each location, academic experts aid participants in making scientific observations about the locations' geologic features and in using a "reading the rocks" methodology to interpret the area's geologic history. The participants are then expected to develop their own EarthCache site to be used as a pedagogical tool bridging the gap between standards-based classroom learning, contemporary research and unique outdoor field experiences. The final phase supports teachers in integrating inquiry-based, higher-level learning student

  11. Security in the CernVM File System and the Frontier Distributed Database Caching System

    NASA Astrophysics Data System (ADS)

    Dykstra, D.; Blomer, J.

    2014-06-01

    Both the CernVM File System (CVMFS) and the Frontier Distributed Database Caching System (Frontier) distribute centrally updated data worldwide for LHC experiments using http proxy caches. Neither system provides privacy or access control on reading the data, but both control access to updates of the data and can guarantee the authenticity and integrity of the data transferred to clients over the internet. CVMFS has since its early days required digital signatures and secure hashes on all distributed data, and recently Frontier has added X.509-based authenticity and integrity checking. In this paper we detail and compare the security models of CVMFS and Frontier.

  12. Hydrologic data for the Cache Creek-Bear Thrust environmental impact statement near Jackson, Wyoming

    USGS Publications Warehouse

    Craig, G.S.; Ringen, B.H.; Cox, E.R.

    1981-01-01

    Information on the quantity and quality of surface and ground water in an area of concern for the Cache Creek-Bear Thrust Environmental Impact Statement in northwestern Wyoming is presented without interpretation. The environmental impact statement is being prepared jointly by the U.S. Geological Survey and the U.S. Forest Service and concerns proposed exploration and development of oil and gas on leased Federal land near Jackson, Wyoming. Information includes data from a gaging station on Cache Creek and from wells, springs, and miscellaneous sites on streams. Data include streamflow, chemical and suspended-sediment quality of streams, and the occurrence and chemical quality of ground water. (USGS)

  13. Education for sustainability and environmental education in National Geoparks. EarthCaching - a new method?

    NASA Astrophysics Data System (ADS)

    Zecha, Stefanie; Regelous, Anette

    2017-04-01

    National Geoparks are restricted areas incorporating educational resources of great importance in promoting education for sustainable development, mobilizing knowledge inherent to the Earth Sciences. Different methods can be used to implement education for sustainability. Here we present possibilities for National Geoparks to support sustainability, focusing on new media and EarthCaches, based on the data set of the "EarthCachers International EarthCaching" conference in Goslar in October 2015. Using an empirical study of our own design, we collected up-to-date information about the environmental consciousness of EarthCachers. The data set was analyzed using SPSS and statistical methods. Here we present the results and their consequences for National Geoparks.

  14. Federated or cached searches: Providing expected performance from multiple invasive species databases

    NASA Astrophysics Data System (ADS)

    Graham, Jim; Jarnevich, Catherine S.; Simpson, Annie; Newman, Gregory J.; Stohlgren, Thomas J.

    2011-06-01

    Invasive species are a universal global problem, but the information to identify them, manage them, and prevent invasions is stored around the globe in a variety of formats. The Global Invasive Species Information Network is a consortium of organizations working toward providing seamless access to these disparate databases via the Internet. A distributed network of databases can be created using the Internet and a standard web service protocol. There are two options to provide this integration. First, federated searches are being proposed to allow users to search "deep" web documents such as databases for invasive species. A second method is to create a cache of data from the databases for searching. We compare these two methods and show that federated searches will not provide the performance and flexibility required by users, and that a central cache of the data is required to improve performance.

  15. Experimental evidence for a novel mechanism driving variation in habitat quality in a food-caching bird.

    PubMed

    Strickland, Dan; Kielstra, Brian; Ryan Norris, D

    2011-12-01

    Variation in habitat quality can have important consequences for fitness and population dynamics. For food-caching species, a critical determinant of habitat quality is normally the density of storable food, but it is also possible that quality is driven by the ability of habitats to preserve food items. The food-caching gray jay (Perisoreus canadensis) occupies year-round territories in the coniferous boreal and subalpine forests of North America, but does not use conifer seed crops as a source of food. Over the last 33 years, we found that the occupancy rate of territories in Algonquin Park (ON, Canada) has declined at a higher rate in territories with a lower proportion of conifers compared to those with a higher proportion. Individuals occupying territories with a low proportion of conifers were also less likely to successfully fledge young. Using chambers to simulate food caches, we conducted an experiment to examine the hypothesis that coniferous trees are better able to preserve the perishable food items stored in summer and fall than deciduous trees due to their antibacterial and antifungal properties. Over a 1-4 month exposure period, we found that mealworms, blueberries, and raisins all lost less weight when stored on spruce and pine trees compared to deciduous and other coniferous trees. Our results indicate a novel mechanism to explain how habitat quality may influence the fitness and population dynamics of food-caching animals, and has important implications for understanding range limits for boreal breeding animals.

  16. Wolves, Canis lupus, carry and cache the collars of radio-collared White-tailed Deer, Odocoileus virginianus, they killed

    USGS Publications Warehouse

    Nelson, Michael E.; Mech, L. David

    2011-01-01

    Wolves (Canis lupus) in northeastern Minnesota cached six radio-collars (four in winter, two in spring-summer) of 202 radio-collared White-tailed Deer (Odocoileus virginianus) they killed or consumed from 1975 to 2010. A Wolf bedded on top of one collar cached in snow. We found one collar each at a Wolf den and Wolf rendezvous site, 2.5 km and 0.5 km respectively, from each deer's previous locations.

  17. 76 FR 14372 - Uinta-Wasatch-Cache National Forest Resource Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-16

    ... Street, Salt Lake City, Utah. Written comments should be sent to Loyal Clark, Uinta-Wasatch-Cache... open to the public. The following business will be conducted: (1) Review Forest Service project approval letter, (2) discuss travel budget, and (3) review new proposals. Persons who wish to bring related...

  18. Living Memorials: Understanding the Social Meanings of Community-Based Memorials to September 11, 2001

    Treesearch

    Erika S. Svendsen; Lindsay K. Campbell

    2010-01-01

    Living memorials are landscaped spaces created by people to memorialize individuals, places, and events. Hundreds of stewardship groups across the United States of America created living memorials in response to the September 11, 2001 terrorist attacks. This study sought to understand how stewards value, use, and talk about their living, community-based memorials....

  19. Acorn Caching in Tree Squirrels: Teaching Hypothesis Testing in the Park

    ERIC Educational Resources Information Center

    McEuen, Amy B.; Steele, Michael A.

    2012-01-01

    We developed an exercise for a university-level ecology class that teaches hypothesis testing by examining acorn preferences and caching behavior of tree squirrels (Sciurus spp.). This exercise is easily modified to teach concepts of behavioral ecology for earlier grades, particularly high school, and provides students with a theoretical basis for…

  20. Federated or cached searches: providing expected performance from multiple invasive species databases

    USGS Publications Warehouse

    Graham, Jim; Jarnevich, Catherine S.; Simpson, Annie; Newman, Gregory J.; Stohlgren, Thomas J.

    2011-01-01

    Invasive species are a universal global problem, but the information to identify them, manage them, and prevent invasions is stored around the globe in a variety of formats. The Global Invasive Species Information Network is a consortium of organizations working toward providing seamless access to these disparate databases via the Internet. A distributed network of databases can be created using the Internet and a standard web service protocol. There are two options to provide this integration. First, federated searches are being proposed to allow users to search “deep” web documents such as databases for invasive species. A second method is to create a cache of data from the databases for searching. We compare these two methods and show that federated searches will not provide the performance and flexibility required by users, and that a central cache of the data is required to improve performance.

  1. Toward Millions of File System IOPS on Low-Cost, Commodity Hardware.

    PubMed

    Zheng, Da; Burns, Randal; Szalay, Alexander S

    2013-01-01

    We describe a storage system that removes I/O bottlenecks to achieve more than one million IOPS based on a user-space file abstraction for arrays of commodity SSDs. The file abstraction refactors I/O scheduling and placement for extreme parallelism and non-uniform memory and I/O. The system includes a set-associative, parallel page cache in the user space. We redesign page caching to eliminate CPU overhead and lock-contention in non-uniform memory architecture machines. We evaluate our design on a 32 core NUMA machine with four, eight-core processors. Experiments show that our design delivers 1.23 million 512-byte read IOPS. The page cache realizes the scalable IOPS of Linux asynchronous I/O (AIO) and increases user-perceived I/O performance linearly with cache hit rates. The parallel, set-associative cache matches the cache hit rates of the global Linux page cache under real workloads.
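
    As a rough illustration of the set-associative page cache idea described above, the toy model below hashes page numbers to sets and applies LRU eviction within each set; the real system is a C implementation with per-set locking and a user-space I/O path, none of which is modeled here.

```python
# Toy model of a set-associative page cache with LRU eviction inside each set.
# Partitioning the cache into many small sets limits the scope of each
# eviction decision (and, in the real system, of each lock).
from collections import OrderedDict

class SetAssociativePageCache:
    def __init__(self, num_sets=1024, ways=8):
        self.num_sets, self.ways = num_sets, ways
        self.sets = [OrderedDict() for _ in range(num_sets)]  # page_no -> data

    def _set_for(self, page_no):
        return self.sets[hash(page_no) % self.num_sets]

    def get(self, page_no):
        s = self._set_for(page_no)
        if page_no in s:
            s.move_to_end(page_no)        # mark as most recently used
            return s[page_no]
        return None                       # miss: caller reads from SSD

    def put(self, page_no, data):
        s = self._set_for(page_no)
        if page_no not in s and len(s) >= self.ways:
            s.popitem(last=False)         # evict the LRU page in this set only
        s[page_no] = data
        s.move_to_end(page_no)

cache = SetAssociativePageCache(num_sets=4, ways=2)
for p in range(10):
    cache.put(p, b"\x00" * 512)
print(cache.get(9) is not None, cache.get(0) is None)
```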

  2. Presenilin E318G variant and Alzheimer's disease risk: the Cache County study.

    PubMed

    Hippen, Ariel A; Ebbert, Mark T W; Norton, Maria C; Tschanz, JoAnn T; Munger, Ronald G; Corcoran, Christopher D; Kauwe, John S K

    2016-06-29

    Alzheimer's disease is the leading cause of dementia in the elderly and the third most common cause of death in the United States. A vast number of genes regulate Alzheimer's disease, including Presenilin 1 (PSEN1). Multiple studies have attempted to locate novel variants in the PSEN1 gene that affect Alzheimer's disease status. A recent study suggested that one of these variants, PSEN1 E318G (rs17125721), significantly affects Alzheimer's disease status in a large case-control dataset, particularly in connection with the APOEε4 allele. Our study looks at the same variant in the Cache County Study on Memory and Aging, a large population-based dataset. We tested for association between E318G genotype and Alzheimer's disease status by running a series of Fisher's exact tests. We also performed logistic regression to test for an additive effect of E318G genotype on Alzheimer's disease status and for the existence of an interaction between E318G and APOEε4. In our Fisher's exact test, it appeared that APOEε4 carriers with an E318G allele have slightly higher risk for AD than those without the allele (3.3 vs. 3.8); however, the 95 % confidence intervals of those estimates overlapped completely, indicating non-significance. Our logistic regression model found a positive but non-significant main effect for E318G (p = 0.895). The interaction term between E318G and APOEε4 was also non-significant (p = 0.689). Our findings do not provide significant support for E318G as a risk factor for AD in APOEε4 carriers. Our calculations indicated that the overall sample used in the logistic regression models was adequately powered to detect the sort of effect sizes observed previously. However, the power analyses of our Fisher's exact tests indicate that our partitioned data was underpowered, particularly in regards to the low number of E318G carriers, both AD cases and controls, in the Cache county dataset. Thus, the differences in types of datasets used may help to

  3. Testing episodic memory in animals: a new approach.

    PubMed

    Griffiths, D P; Clayton, N S

    2001-08-01

    Episodic memory involves the encoding and storage of memories concerned with unique personal experiences and their subsequent recall, and it has long been the subject of intensive investigation in humans. According to Tulving's classical definition, episodic memory "receives and stores information about temporally dated episodes or events and temporal-spatial relations among these events." Thus, episodic memory provides information about the 'what' and 'when' of events ('temporally dated experiences') and about 'where' they happened ('temporal-spatial relations'). The storage and subsequent recall of this episodic information were long thought to be beyond the memory capabilities of nonhuman animals. Although there are many laboratory procedures for investigating memory for discrete past episodes, until recently no studies had fully satisfied the criteria of Tulving's definition: they could all be explained in much simpler terms than episodic memory. However, current studies of memory for cache sites in food-storing jays provide an ethologically valid model for testing episodic-like memory in animals, thereby bridging the gap between human and animal studies of memory. There is now a pressing need to adapt these experimental tests of episodic memory for other animals. Given the potential power of transgenic and knock-out procedures for investigating the genetic and molecular bases of learning and memory in laboratory rodents, not to mention the wealth of knowledge about the neuroanatomy and neurophysiology of the rodent hippocampus (a brain area heavily implicated in episodic memory), an obvious next step is to develop a rodent model of episodic-like memory based on the food-storing bird paradigm. The development of a rodent model system could make an important contribution to our understanding of the neural, molecular, and behavioral mechanisms of mammalian episodic memory.

  4. The Use of Proxy Caches for File Access in a Multi-Tier Grid Environment

    NASA Astrophysics Data System (ADS)

    Brun, R.; Duellmann, D.; Ganis, G.; Hanushevsky, A.; Janyst, L.; Peters, A. J.; Rademakers, F.; Sindrilaru, E.

    2011-12-01

    The use of proxy caches has been extensively studied in the HEP environment for efficient access of database data and showed significant performance benefits with only very moderate operational effort at higher grid tiers (T2, T3). In this contribution we propose to apply the same concept to the area of file access and analyse the possible performance gains, operational impact on site services and applicability to different HEP use cases. Based on proof-of-concept studies with a modified XROOT proxy server we review the cache efficiency and overheads for access patterns of typical ROOT based analysis programs. We conclude with a discussion of the potential role of this new component at the different tiers of a distributed computing grid.

  5. The Use of Proxy Caches for File Access in a Multi-Tier Grid Environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brun, R.; Dullmann, D.; Ganis, G.

    2012-04-19

    The use of proxy caches has been extensively studied in the HEP environment for efficient access of database data and showed significant performance benefits with only very moderate operational effort at higher grid tiers (T2, T3). In this contribution we propose to apply the same concept to the area of file access and analyze the possible performance gains, operational impact on site services and applicability to different HEP use cases. Based on proof-of-concept studies with a modified XROOT proxy server we review the cache efficiency and overheads for access patterns of typical ROOT based analysis programs. We conclude with a discussion of the potential role of this new component at the different tiers of a distributed computing grid.

  6. Sample Acquisition and Caching architecture for the Mars Sample Return mission

    NASA Astrophysics Data System (ADS)

    Zacny, K.; Chu, P.; Cohen, J.; Paulsen, G.; Craft, J.; Szwarc, T.

    This paper presents a Mars Sample Return (MSR) Sample Acquisition and Caching (SAC) study developed for the three rover platforms: MER, MER+, and MSL. The study took into account 26 SAC requirements provided by the NASA Mars Exploration Program Office. For this SAC architecture, the reduction of mission risk was chosen by us as having greater priority than mass or volume. For this reason, we selected a “One Bit per Core” approach. The enabling technology for this architecture is Honeybee Robotics' “eccentric tubes” core breakoff approach. The breakoff approach allows the drill bits to be relatively small in diameter and in turn lightweight. Hence, the bits could be returned to Earth with the cores inside them with only a modest increase to the total returned mass, but a significant decrease in complexity. Having dedicated bits allows a reduction in the number of core transfer steps and actuators. It also alleviates the bit life problem, eliminates cross contamination, and aids in hermetic sealing. An added advantage is faster drilling time, lower power, lower energy, and lower Weight on Bit (which reduces Arm preload requirements). Drill bits are based on the BigTooth bit concept, which allows re-use of the same bit multiple times, if necessary. The proposed SAC consists of a 1) Rotary-Percussive Core Drill, 2) Bit Storage Carousel, 3) Cache, 4) Robotic Arm, and 5) Rock Abrasion and Brushing Bit (RABBit), which is deployed using the Drill. The system also includes PreView bits (for viewing of cores prior to caching) and Powder bits for acquisition of regolith or cuttings. The SAC total system mass is less than 22 kg for MER and MER+ size rovers and less than 32 kg for the MSL-size rover.

  7. Geo-Caching: Place-Based Discovery of Virginia State Parks and Museums

    ERIC Educational Resources Information Center

    Gray, Howard Richard

    2007-01-01

    The use of Global Positioning Systems (GPS) units has exploded in recent years along with the computer technology to access this data-based information. Geo-caching is an exciting game using GPS that provides place-based information regarding the public lands, facilities and cultural heritage programs within the Virginia Parks and Museum system.…

  8. Toward Millions of File System IOPS on Low-Cost, Commodity Hardware

    PubMed Central

    Zheng, Da; Burns, Randal; Szalay, Alexander S.

    2013-01-01

    We describe a storage system that removes I/O bottlenecks to achieve more than one million IOPS based on a user-space file abstraction for arrays of commodity SSDs. The file abstraction refactors I/O scheduling and placement for extreme parallelism and non-uniform memory and I/O. The system includes a set-associative, parallel page cache in the user space. We redesign page caching to eliminate CPU overhead and lock-contention in non-uniform memory architecture machines. We evaluate our design on a 32 core NUMA machine with four, eight-core processors. Experiments show that our design delivers 1.23 million 512-byte read IOPS. The page cache realizes the scalable IOPS of Linux asynchronous I/O (AIO) and increases user-perceived I/O performance linearly with cache hit rates. The parallel, set-associative cache matches the cache hit rates of the global Linux page cache under real workloads. PMID:24402052

  9. Software Coherence in Multiprocessor Memory Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Bolosky, William Joseph

    1993-01-01

    Processors are becoming faster and multiprocessor memory interconnection systems are not keeping up. Therefore, it is necessary to have threads and the memory they access as near one another as possible. Typically, this involves putting memory or caches with the processors, which gives rise to the problem of coherence: if one processor writes an address, any other processor reading that address must see the new value. This coherence can be maintained by the hardware or with software intervention. Systems of both types have been built in the past; the hardware-based systems tended to outperform the software ones. However, the ratio of processor to interconnect speed is now so high that the extra overhead of the software systems may no longer be significant. This issue is explored both by implementing a software maintained system and by introducing and using the technique of offline optimal analysis of memory reference traces. It finds that in properly built systems, software maintained coherence can perform comparably to or even better than hardware maintained coherence. The architectural features necessary for efficient software coherence to be profitable include a small page size, a fast trap mechanism, and the ability to execute instructions while remote memory references are outstanding.

  10. Blackcomb: Hardware-Software Co-design for Non-Volatile Memory in Exascale Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schreiber, Robert

    crosspoint ReRAM array [Niu 2012b]. We have conducted an in-depth analysis of the circuit- and system-level design implications of multi-layer cross-point Resistive RAM (MLC ReRAM) from performance, power and reliability perspectives [Xu 2013]. The objective of this study is to understand the design trade-offs of this technology with respect to MLC Phase Change Memory (MLC PCM). Our MLC ReRAM design at the circuit and system levels indicates that different resistance allocation schemes, programming strategies, peripheral designs, and material selections profoundly affect the area, latency, power, and reliability of MLC ReRAM. Based on this analysis, we conduct two case studies: first we compare MLC ReRAM design against MLC phase-change memory (PCM) and multi-layer cross-point ReRAM design, and point out why multi-level ReRAM is appealing; second we further explore the design space for MLC ReRAM. Architecture and application: we explored hybrid checkpointing using phase-change memory for future exascale systems [Dong 2011] and showed that the use of nonvolatile memory for local checkpointing significantly increases the number of faults covered by local checkpoints and reduces the probability of a global failure in the middle of a global checkpoint to less than 1%. We also proposed a technique called i2WAP to mitigate the write variations in NVM-based last-level caches and improve NVM lifetime [Wang 2013]. Our wear leveling technique attempts to work around the limitations of write endurance by arranging data access so that write operations can be distributed evenly across all the storage cells. During our intensive research on fault-tolerant NVM design, we found that ECC cannot effectively tolerate hard errors from limited write endurance and process imperfection. Therefore, we devised a novel Point and Discard (PAD) architecture [2012] as a hard-error-tolerant architecture for ReRAM-based Last Level Caches. PAD improves the lifetime of ReRAM caches by 1

  11. Memory Benchmarks for SMP-Based High Performance Parallel Computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoo, A B; de Supinski, B; Mueller, F

    2001-11-20

    As the speed gap between CPU and main memory continues to grow, memory accesses increasingly dominate the performance of many applications. The problem is particularly acute for symmetric multiprocessor (SMP) systems, where the shared memory may be accessed concurrently by a group of threads running on separate CPUs. Unfortunately, several key issues governing memory system performance in current systems are not well understood. Complex interactions between the levels of the memory hierarchy, buses or switches, DRAM back-ends, system software, and application access patterns can make it difficult to pinpoint bottlenecks and determine appropriate optimizations, and the situation is even more complex for SMP systems. To partially address this problem, we formulated a set of multi-threaded microbenchmarks for characterizing and measuring the performance of the underlying memory system in SMP-based high-performance computers. We report our use of these microbenchmarks on two important SMP-based machines. This paper has four primary contributions. First, we introduce a microbenchmark suite to systematically assess and compare the performance of different levels in SMP memory hierarchies. Second, we present a new tool based on hardware performance monitors to determine a wide array of memory system characteristics, such as cache sizes, quickly and easily; by using this tool, memory performance studies can be targeted to the full spectrum of performance regimes with many fewer data points than is otherwise required. Third, we present experimental results indicating that the performance of applications with large memory footprints remains largely constrained by memory. Fourth, we demonstrate that thread-level parallelism further degrades memory performance, even for the latest SMPs with hardware prefetching and switch-based memory interconnects.
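
    The basic shape of such a memory microbenchmark can be sketched even from a high-level language: timing repeated traversals of arrays of increasing size exposes the transition from cache-resident to memory-resident working sets. The sizes and the use of numpy below are assumptions for illustration; the suite described in the abstract is multi-threaded and far more detailed, and interpreter overhead makes absolute numbers from this sketch rough at best.

    ```python
    # Rough single-threaded working-set-size sweep: as the array outgrows each
    # cache level, the measured bandwidth of a full traversal drops. The traversal
    # is kept inside numpy so the timed loop runs in compiled code.
    import time
    import numpy as np

    def sweep(max_mb=64, repeats=20):
        kb = 16
        while kb <= max_mb * 1024:
            a = np.ones(kb * 1024 // 8, dtype=np.float64)   # kb kilobytes of data
            a.sum()                                          # warm-up touch
            t0 = time.perf_counter()
            for _ in range(repeats):
                a.sum()                                      # stream through the array
            dt = (time.perf_counter() - t0) / repeats
            print(f"{kb:>8} KiB  {a.nbytes / dt / 1e9:6.1f} GB/s")
            kb *= 2

    if __name__ == "__main__":
        sweep()
    ```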

  12. Historical Channel Changes in Cache Creek, Capay Valley, California

    NASA Astrophysics Data System (ADS)

    Higgins, S. A.; Kamman, G. R.

    2009-12-01

    Historical channel changes were assessed for the 21-mile segment of Cache Creek through Capay Valley in order to evaluate temporal changes in stream channel morphology. The Capay Valley segment of Cache Creek is primarily a low-gradient channel with a gravel/cobble substrate. Hydrologic conditions have been affected by dam operations that store runoff during the wet season and deliver water during the dry season for downstream irrigation uses. Widespread distribution of invasive plant species has altered the condition of the riparian corridor. The assessment evaluated a hypothesis that historical changes in hydrology and vegetation cover have triggered changes in geomorphic conditions. Historic channel alignments were digitized to assess planform channel adjustments. Results illustrate a dynamic system with frequent channel movements throughout the historic period. Evaluation of longitudinal channel adjustments revealed a relatively stable bed surface elevation since the 1930s. Cross-sectional channel geometry from topographic profiles surveyed in 1984 was compared to equivalent features in a LiDAR survey from 2008. The comparisons show a relatively consistent channel geometry that has maintained a similar form despite rather large planform adjustments, with areas of bank retreat in excess of 500 feet. Results suggest that the study reach has maintained a relatively stable morphology through a series of dynamic planform adjustments during the historic period.

  13. High Performance Programming Using Explicit Shared Memory Model on Cray T3D1

    NASA Technical Reports Server (NTRS)

    Simon, Horst D.; Saini, Subhash; Grassi, Charles

    1994-01-01

    The Cray T3D system is the first-phase system in Cray Research, Inc.'s (CRI) three-phase massively parallel processing (MPP) program. This system features a heterogeneous architecture that closely couples DEC's Alpha microprocessors and CRI's parallel-vector technology, i.e., the Cray Y-MP and Cray C90. An overview of the Cray T3D hardware and available programming models is presented. Under the Cray Research adaptive Fortran (CRAFT) model, four programming methods (data parallel, work sharing, message passing using PVM, and the explicit shared memory model) are available to users. However, at this time the data parallel and work sharing programming models are not available to the user community. The differences between standard PVM and CRI's PVM are highlighted with performance measurements such as latencies and communication bandwidths. We have found that neither standard PVM nor CRI's PVM exploits the hardware capabilities of the T3D. The reasons for the poor performance of PVM as a native message-passing library are presented. This is illustrated by the performance of the NAS Parallel Benchmarks (NPB) programmed in the explicit shared memory model on the Cray T3D. In general, the performance of standard PVM is about 4 to 5 times lower than that obtained using the explicit shared memory model. A similar degradation is seen on the CM-5, where the performance of applications using the native message-passing library CMMD is also about 4 to 5 times lower than with data-parallel methods. The issues involved in programming in the explicit shared memory model (such as barriers, synchronization, invalidating the data cache, aligning the data cache, etc.) are discussed. Comparative performance of the NPB using the explicit shared memory programming model on the Cray T3D and other highly parallel systems such as the TMC CM-5, Intel Paragon, Cray C90, IBM-SP1, etc. is presented.

  14. False memory for idiomatic expressions in younger and older adults: evidence for indirect activation of figurative meanings

    PubMed Central

    Coane, Jennifer H.; Sánchez-Gutiérrez, Claudia; Stillman, Chelsea M.; Corriveau, Jennifer A.

    2014-01-01

    Idiomatic expressions can be interpreted literally or figuratively. These two meanings are often processed in parallel or very rapidly, as evidenced by online measures of idiomatic processing. Because in many cases the figurative meaning cannot be derived from the component lexical elements and because of the speed with which this meaning is accessed, it is assumed such meanings are stored in semantic memory. In the present study, we examined how literal equivalents and intact idiomatic expressions are stored in memory and whether episodic memory traces interact or interfere with semantic-level representations and vice versa. To examine age-invariance, younger and older adults studied lists of idioms and literal equivalents. On a recognition test, some studied items were presented in the alternative form (e.g., if the idiom was studied, its literal equivalent was tested). False alarms to these critical items suggested that studying literal equivalents activates the idiom from which they are derived, presumably due to spreading activation in lexical/semantic networks, and results in high rates of errors. Importantly, however, rates for the converse (false alarms to literal equivalents after studying the idiom) were significantly lower, suggesting an advantage in storage for idioms. The results are consistent with idiom processing models that suggest obligatory access to figurative meanings and that this access can also occur indirectly, through literal equivalents. PMID:25101030

  15. False memory for idiomatic expressions in younger and older adults: evidence for indirect activation of figurative meanings.

    PubMed

    Coane, Jennifer H; Sánchez-Gutiérrez, Claudia; Stillman, Chelsea M; Corriveau, Jennifer A

    2014-01-01

    Idiomatic expressions can be interpreted literally or figuratively. These two meanings are often processed in parallel or very rapidly, as evidenced by online measures of idiomatic processing. Because in many cases the figurative meaning cannot be derived from the component lexical elements and because of the speed with which this meaning is accessed, it is assumed such meanings are stored in semantic memory. In the present study, we examined how literal equivalents and intact idiomatic expressions are stored in memory and whether episodic memory traces interact or interfere with semantic-level representations and vice versa. To examine age-invariance, younger and older adults studied lists of idioms and literal equivalents. On a recognition test, some studied items were presented in the alternative form (e.g., if the idiom was studied, its literal equivalent was tested). False alarms to these critical items suggested that studying literal equivalents activates the idiom from which they are derived, presumably due to spreading activation in lexical/semantic networks, and results in high rates of errors. Importantly, however, rates for the converse (false alarms to literal equivalents after studying the idiom) were significantly lower, suggesting an advantage in storage for idioms. The results are consistent with idiom processing models that suggest obligatory access to figurative meanings and that this access can also occur indirectly, through literal equivalents.

  16. 78 FR 2655 - Uinta-Wasatch-Cache National Forest; Utah; Ogden Travel Plan Project

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-14

    ...-Wasatch-Cache National Forest; Utah; Ogden Travel Plan Project AGENCY: Forest Service, USDA. ACTION... prepare a supplement to the Ogden Travel Plan Revision Final Supplemental Environmental Impact Statement (FSEIS). The Ogden Travel Plan Revision FSEIS evaluated six alternatives for possible travel management...

  17. Epidemiology of apathy in older adults: the Cache County Study.

    PubMed

    Onyike, Chiadi U; Sheppard, Jeannie-Marie E; Tschanz, JoAnn T; Norton, Maria C; Green, Robert C; Steinberg, Martin; Welsh-Bohmer, Kathleen A; Breitner, John C; Lyketsos, Constantine G

    2007-05-01

    The objectives of this study are to describe the distribution of apathy in community-based older adults and to investigate its relationships with cognition and day-to-day functioning. Data from the Cache County Study on Memory, Health and Aging were used to estimate the frequency of apathy in groups of elders defined by demographic, cognitive, and functional status and to examine the associations of apathy with impairments of cognition and day-to-day functioning. Apathy was measured with the Neuropsychiatric Inventory. Clinical apathy (Neuropsychiatric Inventory score > or = 4) was found in 1.4% of individuals classified as cognitively normal, 3.1% of those with a mild cognitive syndrome, and 17.3% of those with dementia. Apathy status was associated with cognitive and functional impairments and higher levels of stress experienced by caregivers. Among participants with normal cognition, apathy was associated with worse performance on the Mini-Mental State Examination, the Boston Naming and Animal Fluency tests, and the Trail Making Test-Part B. The association of apathy with cognitive impairment was independent of its association with Neuropsychiatric Inventory depression. In a cohort of community-based older adults, the frequency and severity of apathy is positively correlated with the severity of cognitive impairment. In addition, apathy is associated with cognitive and functional impairments in elders adjudged to have normal cognition. The results suggest that apathy is an early sign of cognitive decline and that delineating phenotypes in which apathy and a mild cognitive syndrome co-occur may facilitate earlier identification of individuals at risk for dementia.

  18. Memory persistency and nonlinearity in daily mean dew point across India

    NASA Astrophysics Data System (ADS)

    Ray, Rajdeep; Khondekar, Mofazzal Hossain; Ghosh, Koushik; Bhattacharjee, Anup Kumar

    2016-04-01

    This work estimates the persistence (memory) in the daily mean dew point time series obtained from seven weather stations, viz. Kolkata, Chennai (Madras), New Delhi, Mumbai (Bombay), Bhopal, Agartala and Ahmedabad, representing different geographical zones of India. Hurst exponent values reveal an anti-persistent behaviour of these dew point series. To corroborate the Hurst exponent values, five different scaling methods have been used and their results compared in order to reach a more reliable conclusion. The analysis also indicates that the variation in daily mean dew point is governed by a non-stationary process with stationary increments. The delay vector variance (DVV) method has been used to investigate nonlinearity, and the calculation confirms the presence of a deterministic nonlinear profile in the daily mean dew point time series of the seven stations.
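
    Rescaled-range (R/S) analysis is one of the standard scaling methods used to estimate a Hurst exponent of the kind compared in this record. The sketch below is a textbook R/S implementation and makes no claim about the authors' exact procedure or window choices; H < 0.5 indicates anti-persistence, H ≈ 0.5 a memoryless process, H > 0.5 persistence.

    ```python
    # Textbook rescaled-range (R/S) estimate of the Hurst exponent.
    import numpy as np

    def hurst_rs(x, min_window=8):
        x = np.asarray(x, dtype=float)
        sizes, rs_values = [], []
        n = min_window
        while n <= len(x) // 2:
            rs = []
            for start in range(0, len(x) - n + 1, n):      # non-overlapping windows
                w = x[start:start + n]
                dev = np.cumsum(w - w.mean())               # mean-adjusted cumulative sum
                r = dev.max() - dev.min()                   # range of the cumulative sum
                s = w.std()
                if s > 0:
                    rs.append(r / s)
            if rs:
                sizes.append(n)
                rs_values.append(np.mean(rs))
            n *= 2
        # Slope of log(R/S) against log(window size) estimates H.
        h, _ = np.polyfit(np.log(sizes), np.log(rs_values), 1)
        return h

    # White noise should give an estimate close to 0.5.
    print(hurst_rs(np.random.randn(4096)))
    ```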

  19. Looking for episodic memory in animals and young children: prospects for a new minimalism.

    PubMed

    Clayton, Nicola S; Russell, James

    2009-09-01

    Because animals and young children cannot be interrogated about their experiences it is difficult to conduct research into their episodic memories. The approach to this issue adopted by Clayton and Dickinson [Clayton, N. S., & Dickinson, A. (1998). Episodic-like memory during cache recovery by scrub jays. Nature, 395, 272-274] was to take a conceptually minimalist definition of episodic memory, in terms of integrating information about what was done where and when [Tulving, E. (1972). Episodic and semantic memory. In E. Tulving, & W. Donaldson (Eds.), Organisation of memory (pp. 381-403). New York: Academic Press], and to refer to such memories as 'episodic-like'. Some claim, however, that because animals supposedly lack the conceptual abilities necessary for episodic recall one should properly call these memories 'semantic'. We address this debate with a novel approach to episodic memory, which is minimalist insofar as it focuses on the non-conceptual content of a re-experienced situation. It rests on Kantian assumptions about the necessary 'perspectival' features of any objective experience or re-experience. We show how adopting this perspectival approach can render an episodic interpretation of the animal data more plausible and can also reveal patterns in the mosaic of developmental evidence for episodic memory in humans.

  20. Optimizing transformations of stencil operations for parallel cache-based architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bassetti, F.; Davis, K.

    This paper describes a new technique for optimizing serial and parallel stencil and stencil-like operations for cache-based architectures. This technique takes advantage of the semantic knowledge implicit in stencil-like computations. The technique is implemented as a source-to-source program transformation; because of its specificity it could not be expected of a conventional compiler. Empirical results demonstrate a uniform factor-of-two speedup. The experiments clearly show the benefits of this technique to be a consequence, as intended, of the reduction in cache misses. The test codes are based on a 5-point stencil obtained by the discretization of the Poisson equation and applied to a two-dimensional uniform grid using the Jacobi method as an iterative solver. Results are presented for a 1-D tiling for a single processor, and in parallel using a 1-D data partition. For the parallel case both blocking and non-blocking communication are tested. The same scheme of experiments has been performed for the 2-D tiling case. However, for the parallel case the 2-D partitioning is not discussed here, so the parallel case handled for 2-D is 2-D tiling with 1-D data partitioning.
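
    The kernel in question, a 5-point Jacobi sweep for the discretized Poisson equation, and the shape of a 1-D tiling of its row loop can be sketched as follows. This is only a plain-Python illustration of the stencil and of splitting the sweep into row strips; the paper's transformation is a source-to-source rewrite of compiled code (and fuses work across strips), which is where the cache-miss reduction actually pays off. The grid size and tile width are arbitrary assumptions.

    ```python
    # 5-point Jacobi sweep (for the Poisson equation, convention -h^2*f on the RHS),
    # written once in the straightforward order and once with the row loop split
    # into 1-D tiles (strips). Both produce identical results.
    import numpy as np

    def jacobi_sweep(u, f, h2):
        v = u.copy()
        v[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                u[1:-1, :-2] + u[1:-1, 2:] - h2 * f[1:-1, 1:-1])
        return v

    def jacobi_sweep_tiled(u, f, h2, tile=64):
        v = u.copy()
        n = u.shape[0]
        for i0 in range(1, n - 1, tile):                  # 1-D tiling of the row loop
            i1 = min(i0 + tile, n - 1)
            v[i0:i1, 1:-1] = 0.25 * (u[i0-1:i1-1, 1:-1] + u[i0+1:i1+1, 1:-1] +
                                     u[i0:i1, :-2] + u[i0:i1, 2:] - h2 * f[i0:i1, 1:-1])
        return v

    n, h = 256, 1.0 / 255
    u, f = np.zeros((n, n)), np.ones((n, n))
    assert np.allclose(jacobi_sweep(u, f, h * h), jacobi_sweep_tiled(u, f, h * h))
    ```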

  1. Performance Modeling and Measurement of Parallelized Code for Distributed Shared Memory Multiprocessors

    NASA Technical Reports Server (NTRS)

    Waheed, Abdul; Yan, Jerry

    1998-01-01

    This paper presents a model to evaluate the performance and overhead of parallelizing sequential code using compiler directives for multiprocessing on distributed shared memory (DSM) systems. With the increasing popularity of shared address space architectures, it is essential to understand their performance impact on programs that benefit from shared memory multiprocessing. We present a simple model to characterize the performance of programs that are parallelized using compiler directives for shared memory multiprocessing. We parallelized the sequential implementation of the NAS benchmarks using native Fortran77 compiler directives for an Origin2000, which is a DSM system based on a cache-coherent Non-Uniform Memory Access (ccNUMA) architecture. We report measurement-based performance of these parallelized benchmarks from four perspectives: efficacy of the parallelization process; scalability; parallelization overhead; and comparison with hand-parallelized and hand-optimized versions of the same benchmarks. Our results indicate that sequential programs can conveniently be parallelized for DSM systems using compiler directives, but realizing performance gains as predicted by the performance model depends primarily on minimizing architecture-specific data locality overhead.
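
    A minimal version of such a performance model can be written down directly: predicted parallel time is the serial fraction plus the parallelizable fraction divided by the processor count, plus an explicit overhead term. The form below is a generic Amdahl-style model with a per-processor overhead term, shown only to make the idea concrete; it is not the paper's actual model, and the numbers in the example are made up.

    ```python
    # Generic Amdahl-style model with an explicit parallelization-overhead term:
    # T(p) = serial part + parallel part / p + per-processor overhead.
    def predicted_time(t_serial, parallel_fraction, p, overhead_per_proc=0.0):
        return (t_serial * (1.0 - parallel_fraction)
                + t_serial * parallel_fraction / p
                + overhead_per_proc * p)

    def predicted_speedup(t_serial, parallel_fraction, p, overhead_per_proc=0.0):
        return t_serial / predicted_time(t_serial, parallel_fraction, p, overhead_per_proc)

    # Example: a 100 s run that is 95% parallelizable, with 0.1 s of overhead per processor.
    for p in (1, 4, 16, 64):
        print(p, round(predicted_speedup(100.0, 0.95, p, 0.1), 1))
    ```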

  2. Optimizing raid performance with cache

    NASA Technical Reports Server (NTRS)

    Bouzari, Alex

    1994-01-01

    We live in a world of increasingly complex applications and operating systems. Information is increasing at a mind-boggling rate, and the consolidation of text, voice, and imaging represents an even greater challenge for our information systems. This forces us to address three important questions: Where do we store all this information? How do we access it? And how do we protect it against the threat of loss or damage? Introduced in the 1980s, RAID (Redundant Arrays of Independent Disks) represents a cost-effective solution to the needs of the information age. While fulfilling expectations for high storage capacity and reliability, RAID is sometimes criticized in the area of performance. However, there are design elements that can significantly enhance performance. They fall into two areas: (1) RAID levels, or basic architecture, and (2) enhancement schemes such as intelligent caching, support for tagged command queuing, and use of SCSI-2 Fast and Wide features.

  3. XRootd, disk-based, caching proxy for optimization of data access, data placement and data replication

    NASA Astrophysics Data System (ADS)

    Bauerdick, L. A. T.; Bloom, K.; Bockelman, B.; Bradley, D. C.; Dasu, S.; Dost, J. M.; Sfiligoi, I.; Tadel, A.; Tadel, M.; Wuerthwein, F.; Yagil, A.; Cms Collaboration

    2014-06-01

    Following the success of the XRootd-based US CMS data federation, the AAA project investigated extensions of the federation architecture by developing two sample implementations of an XRootd, disk-based, caching proxy. The first one simply starts fetching a whole file as soon as a file open request is received and is suitable when completely random file access is expected or it is already known that a whole file will be read. The second implementation supports on-demand downloading of partial files. Extensions to the Hadoop Distributed File System have been developed to allow for an immediate fallback to network access when local HDFS storage fails to provide the requested block. Both cache implementations are in pre-production testing at UCSD.
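
    The on-demand partial-file behaviour of the second implementation amounts to a read-through block cache: a read is served from local disk if the containing block is already cached, otherwise the block is fetched from the remote source, stored, and then served. The sketch below illustrates only that generic logic; it is not XRootd code, and the `fetch_block` callback, block size, and file layout are placeholder assumptions.

    ```python
    # Generic read-through block cache in the spirit of an on-demand caching proxy:
    # each read pulls only the missing fixed-size blocks from the remote source.
    import os

    BLOCK = 1 << 20  # 1 MiB blocks (arbitrary choice)

    class BlockCachingProxy:
        def __init__(self, cache_dir, fetch_block):
            self.cache_dir = cache_dir
            self.fetch_block = fetch_block      # fetch_block(name, offset, length) -> bytes
            os.makedirs(cache_dir, exist_ok=True)

        def _block_path(self, name, idx):
            return os.path.join(self.cache_dir, f"{name}.{idx}.blk")

        def read(self, name, offset, length):
            out = bytearray()
            while length > 0:
                idx, off_in_block = divmod(offset, BLOCK)
                path = self._block_path(name, idx)
                if not os.path.exists(path):    # cache miss: fetch and store the block
                    with open(path, "wb") as f:
                        f.write(self.fetch_block(name, idx * BLOCK, BLOCK))
                with open(path, "rb") as f:     # cache hit (or freshly filled)
                    f.seek(off_in_block)
                    chunk = f.read(min(length, BLOCK - off_in_block))
                if not chunk:                   # past end of file
                    break
                out += chunk
                offset += len(chunk)
                length -= len(chunk)
            return bytes(out)
    ```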

  4. Technical Report Interchange Through Synchronized OAI Caches

    NASA Technical Reports Server (NTRS)

    Liu, Xiaming; Maly, Kurt; Zubair, Mohammad; Tang, Rong; Padshah, Mohammad Imran; Roncaglia, George; Rocker, JoAnne; Nelson, Michael; vonOfenheim, William; Luce, Richard

    2002-01-01

    The Technical Report Interchange project is a cooperative experimental effort between NASA Langley Research Center, Los Alamos National Laboratory, Air Force Research Laboratory, Sandia National Laboratory and Old Dominion University to allow for the integration of technical reports. This is accomplished by using the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) and having each site cache the metadata from the other participating sites. Each site also implements additional software to ingest the OAI-PMH harvested metadata into its native digital library (DL). This allows the users at each site to see an increased technical report collection through the familiar DL interfaces and take advantage of whatever value-added services are provided by the native DL.
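
    Harvesting with OAI-PMH, as used here, reduces to repeated ListRecords requests, following resumptionToken values until the repository reports no further pages. The sketch below uses only standard protocol verbs and namespaces; the base URL in the usage comment is a hypothetical placeholder, and ingestion into a native DL is left to the caller.

    ```python
    # Minimal OAI-PMH ListRecords harvester that follows resumptionToken paging.
    import urllib.parse
    import urllib.request
    import xml.etree.ElementTree as ET

    OAI_NS = {"oai": "http://www.openarchives.org/OAI/2.0/"}

    def harvest(base_url, metadata_prefix="oai_dc"):
        params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
        while True:
            url = base_url + "?" + urllib.parse.urlencode(params)
            with urllib.request.urlopen(url) as response:
                root = ET.fromstring(response.read())
            for record in root.iter("{http://www.openarchives.org/OAI/2.0/}record"):
                yield record                       # caller ingests into the native DL
            token = root.find(".//oai:resumptionToken", OAI_NS)
            if token is None or not (token.text or "").strip():
                break                              # no more pages
            params = {"verb": "ListRecords", "resumptionToken": token.text.strip()}

    # Example (hypothetical endpoint):
    # for rec in harvest("https://example.org/oai"):
    #     print(rec.find(".//{http://purl.org/dc/elements/1.1/}title").text)
    ```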

  5. What-where-when memory in magpies (Pica pica).

    PubMed

    Zinkivskay, Ann; Nazir, Farrah; Smulders, Tom V

    2009-01-01

    Some animals have been shown to be able to remember which type of food they hoarded or encountered in which location and how long ago (what-where-when memory). In this study, we test whether magpies (Pica pica) also show evidence of remembering these different aspects of a past episode. Magpies hid red- and blue-dyed pellets of scrambled eggs in a large tray containing wood shavings. They were allowed to make as many caches as they wanted. The birds were then returned either the same day or the next day to retrieve the pellets. If they returned the same day, one colour of pellets was replaced with wooden beads of similar size and colour, while if they returned the next day this would happen to the other colour. Over just a few trials, the birds learned to only search for the food pellets, and ignore the beads, of the appropriate colour for the given retention interval. A probe trial in which all items were removed showed that the birds persisted in searching for the pellets and not the beads. This shows that magpies can remember which food item they hoarded where, and when, even if the food items only differ from each other in their colour and are dispersed throughout a continuous caching substrate.

  6. Antioxidant intake and cognitive function of elderly men and women: the Cache County Study.

    PubMed

    Wengreen, H J; Munger, R G; Corcoran, C D; Zandi, P; Hayden, K M; Fotuhi, M; Skoog, I; Norton, M C; Tschanz, J; Breitner, J C S; Welsh-Bohmer, K A

    2007-01-01

    We prospectively examined associations between intakes of antioxidants (vitamin C, vitamin E, and carotene) and cognitive function and decline among elderly men and women of the Cache County Study on Memory and Aging in Utah. In 1995, 3831 residents 65 years of age or older completed a baseline survey that included a food frequency questionnaire and cognitive assessment. Cognitive function was assessed using an adapted version of the Modified Mini-Mental State examination (3MS) at baseline and at three subsequent follow-up interviews spanning approximately 7 years. Multivariable-mixed models were used to estimate antioxidant nutrient effects on average 3MS score over time. Increasing quartiles of vitamin C intake alone and combined with vitamin E were associated with higher baseline average 3MS scores (p-trend = 0.013 and 0.02 respectively); this association appeared stronger for food sources compared to supplement or food and supplement sources combined. Study participants with lower levels of intake of vitamin C, vitamin E and carotene had a greater acceleration of the rate of 3MS decline over time compared to those with higher levels of intake. High antioxidant intake from food and supplement sources of vitamin C, vitamin E, and carotene may delay cognitive decline in the elderly.

  7. Can seed-caching enhance seedling survival of Indian ricegrass (Achnatherum hymenoides) through intraspecific facilitation?

    USDA-ARS?s Scientific Manuscript database

    Positive interactions among individual plants (facilitation) may often enhance seedling survival in stressful environments. Many granivorous small mammal species cache groups of seeds for future consumption in shallowly buried scatterhoards, and seeds of many plant species germinate and establish ag...

  8. Cache domains that are homologous to, but different from PAS domains comprise the largest superfamily of extracellular sensors in prokaryotes

    DOE PAGES

    Upadhyay, Amit A.; Fleetwood, Aaron D.; Adebali, Ogun; ...

    2016-04-06

    Cellular receptors usually contain a designated sensory domain that recognizes the signal. Per/Arnt/Sim (PAS) domains are ubiquitous sensors in thousands of species ranging from bacteria to humans. Although PAS domains were described as intracellular sensors, recent structural studies revealed PAS-like domains in extracytoplasmic regions in several transmembrane receptors. However, these structurally defined extracellular PAS-like domains do not match sequence-derived PAS domain models, and thus their distribution across the genomic landscape remains largely unknown. Here we show that structurally defined extracellular PAS-like domains belong to the Cache superfamily, which is homologous to, but distinct from the PAS superfamily. Our newly built computational models enabled identification of Cache domains in tens of thousands of signal transduction proteins including those from important pathogens and model organisms. Moreover, we show that Cache domains comprise the dominant mode of extracellular sensing in prokaryotes.

  9. Cache domains that are homologous to, but different from PAS domains comprise the largest superfamily of extracellular sensors in prokaryotes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Upadhyay, Amit A.; Fleetwood, Aaron D.; Adebali, Ogun

    Cellular receptors usually contain a designated sensory domain that recognizes the signal. Per/Arnt/Sim (PAS) domains are ubiquitous sensors in thousands of species ranging from bacteria to humans. Although PAS domains were described as intracellular sensors, recent structural studies revealed PAS-like domains in extracytoplasmic regions in several transmembrane receptors. However, these structurally defined extracellular PAS-like domains do not match sequence-derived PAS domain models, and thus their distribution across the genomic landscape remains largely unknown. Here we show that structurally defined extracellular PAS-like domains belong to the Cache superfamily, which is homologous to, but distinct from the PAS superfamily. Our newly built computational models enabled identification of Cache domains in tens of thousands of signal transduction proteins including those from important pathogens and model organisms. Moreover, we show that Cache domains comprise the dominant mode of extracellular sensing in prokaryotes.

  10. Tracking Seed Fates of Tropical Tree Species: Evidence for Seed Caching in a Tropical Forest in North-East India

    PubMed Central

    Sidhu, Swati; Datta, Aparajita

    2015-01-01

    Rodents affect the post-dispersal fate of seeds by acting either as on-site seed predators or as secondary dispersers when they scatter-hoard seeds. The tropical forests of north-east India harbour a high diversity of little-studied terrestrial murid and hystricid rodents. We examined the role played by these rodents in determining the seed fates of tropical evergreen tree species in a forest site in north-east India. We selected ten tree species (3 mammal-dispersed and 7 bird-dispersed) that varied in seed size and followed the fates of 10,777 tagged seeds. We used camera traps to determine the identity of rodent visitors, visitation rates and their seed-handling behavior. Seeds of all tree species were handled by at least one rodent taxon. Overall rates of seed removal (44.5%) were much higher than direct on-site seed predation (9.9%), but seed-handling behavior differed between the terrestrial rodent groups: two species of murid rodents removed and cached seeds, and two species of porcupines were on-site seed predators. In addition, a true cricket, Brachytrupes sp., cached seeds of three species underground. We found 309 caches formed by the rodents and the cricket; most were single-seeded (79%) and seeds were moved up to 19 m. Over 40% of seeds were re-cached from primary cache locations, while about 12% germinated in the primary caches. Seed removal rates varied widely amongst tree species, from 3% in Beilschmiedia assamica to 97% in Actinodaphne obovata. Seed predation was observed in nine species. Chisocheton cumingianus (57%) and Prunus ceylanica (25%) had moderate levels of seed predation while the remaining species had less than 10% seed predation. We hypothesized that seed traits that provide information on resource quantity would influence rodent choice of a seed, while traits that determine resource accessibility would influence whether seeds are removed or eaten. Removal rates significantly decreased (p < 0.001) while predation rates increased (p = 0

  11. Generalized enhanced suffix array construction in external memory.

    PubMed

    Louza, Felipe A; Telles, Guilherme P; Hoffmann, Steve; Ciferri, Cristina D A

    2017-01-01

    Suffix arrays, augmented by additional data structures, allow solving efficiently many string processing problems. The external memory construction of the generalized suffix array for a string collection is a fundamental task when the size of the input collection or the data structure exceeds the available internal memory. In this article we present and analyze [Formula: see text] [introduced in CPM (External memory generalized suffix and [Formula: see text] arrays construction. In: Proceedings of CPM. pp 201-10, 2013)], the first external memory algorithm to construct generalized suffix arrays augmented with the longest common prefix array for a string collection. Our algorithm relies on a combination of buffers, induced sorting and a heap to avoid direct string comparisons. We performed experiments that covered different aspects of our algorithm, including running time, efficiency, external memory access, internal phases and the influence of different optimization strategies. On real datasets of size up to 24 GB and using 2 GB of internal memory, [Formula: see text] showed a competitive performance when compared to [Formula: see text] and [Formula: see text], which are efficient algorithms for a single string according to the related literature. We also show the effect of disk caching managed by the operating system on our algorithm. The proposed algorithm was validated through performance tests using real datasets from different domains, in various combinations, and showed a competitive performance. Our algorithm can also construct the generalized Burrows-Wheeler transform of a string collection with no additional cost except by the output time.
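
    For readers unfamiliar with the data structure, a generalized suffix array over a string collection can be defined naively in memory by sorting all suffixes of all strings and recording, for each, the string index and offset, together with the longest common prefix (LCP) of adjacent suffixes. The sketch below is only this in-memory reference definition (end-of-string sentinels omitted for brevity); the article's contribution is constructing the same structures in external memory with induced sorting, which this sketch does not attempt.

    ```python
    # Naive in-memory reference construction of a generalized suffix array (GSA)
    # and its LCP array for a string collection; illustrative only.
    def generalized_suffix_array(strings):
        suffixes = [(s[i:], k, i) for k, s in enumerate(strings) for i in range(len(s))]
        suffixes.sort()                                  # direct suffix comparison
        gsa = [(k, i) for _, k, i in suffixes]           # (string index, suffix offset) pairs
        lcp = [0]
        for (a, _, _), (b, _, _) in zip(suffixes, suffixes[1:]):
            n = 0
            while n < min(len(a), len(b)) and a[n] == b[n]:
                n += 1
            lcp.append(n)                                # common prefix with the predecessor
        return gsa, lcp

    print(generalized_suffix_array(["banana", "ananas"]))
    ```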

  12. Cognitive caching promotes flexibility in task switching: evidence from event-related potentials.

    PubMed

    Lange, Florian; Seer, Caroline; Müller, Dorothea; Kopp, Bruno

    2015-12-08

    Time-consuming processes of task-set reconfiguration have been shown to contribute to the costs of switching between cognitive tasks. We describe and probe a novel mechanism serving to reduce the costs of task-set reconfiguration. We propose that when individuals are uncertain about the currently valid task, one task set is activated for execution while other task sets are maintained at a pre-active state in cognitive cache. We tested this idea by assessing an event-related potential (ERP) index of task-set reconfiguration in a three-rule task-switching paradigm involving varying degrees of task uncertainty. In high-uncertainty conditions, two viable tasks were equally likely to be correct whereas in low-uncertainty conditions, one task was more likely than the other. ERP and performance measures indicated substantial costs of task-set reconfiguration when participants were required to switch away from a task that had been likely to be correct. In contrast, task-set-reconfiguration costs were markedly reduced when the previous task set was chosen under high task uncertainty. These results suggest that cognitive caching of alternative task sets adds to human cognitive flexibility under high task uncertainty.

  13. Cognitive caching promotes flexibility in task switching: evidence from event-related potentials

    PubMed Central

    Lange, Florian; Seer, Caroline; Müller, Dorothea; Kopp, Bruno

    2015-01-01

    Time-consuming processes of task-set reconfiguration have been shown to contribute to the costs of switching between cognitive tasks. We describe and probe a novel mechanism serving to reduce the costs of task-set reconfiguration. We propose that when individuals are uncertain about the currently valid task, one task set is activated for execution while other task sets are maintained at a pre-active state in cognitive cache. We tested this idea by assessing an event-related potential (ERP) index of task-set reconfiguration in a three-rule task-switching paradigm involving varying degrees of task uncertainty. In high-uncertainty conditions, two viable tasks were equally likely to be correct whereas in low-uncertainty conditions, one task was more likely than the other. ERP and performance measures indicated substantial costs of task-set reconfiguration when participants were required to switch away from a task that had been likely to be correct. In contrast, task-set-reconfiguration costs were markedly reduced when the previous task set was chosen under high task uncertainty. These results suggest that cognitive caching of alternative task sets adds to human cognitive flexibility under high task uncertainty. PMID:26643146

  14. Ten dimensions of health and their relationships with overall self-reported health and survival in a predominately religiously active elderly population: the cache county memory study.

    PubMed

    Østbye, Truls; Krause, Katrina M; Norton, Maria C; Tschanz, JoAnn; Sanders, Linda; Hayden, Kathleen; Pieper, Carl; Welsh-Bohmer, Kathleen A

    2006-02-01

    To document the extent of healthy aging along 10 different dimensions in a population known for its longevity. A cohort study with baseline measures of overall self-reported health and health along 10 specific dimensions; analyses investigated the 10 dimensions as predictors of self-reported health and 10-year mortality. Cache County, Utah, which is among the areas with the highest conditional life expectancy at age 65 in the United States. Inhabitants of Cache County aged 65 and older (January 1, 1995). Self-reported overall health and 10 specific dimensions of healthy aging: independent living, vision, hearing, activities of daily living, instrumental activities of daily living, absence of physical illness, cognition, healthy mood, social support and participation, and religious participation and spirituality. This elderly population was healthy overall. With few exceptions, 80% to 90% of persons aged 65 to 75 were healthy according to each measure used. Prevalence of excellent and good self-reported health decreased with age, to approximately 60% in those aged 85 and older. Even in the oldest old, the majority of respondents were independent in activities of daily living. Although vision, hearing, and mood were significant predictors of overall self-reported health in the final models, age, sex, and cognition were significant only in the final survival models. This population has a high prevalence of most factors representing healthy aging. The predictors of overall self-reported health are distinct from the predictors of survival in this age group and, being potentially modifiable, are amenable to clinical and public health efforts.

  15. Mercury and Methylmercury concentrations and loads in Cache Creek Basin, California, January 2000 through May 2001

    USGS Publications Warehouse

    Domagalski, Joseph L.; Alpers, Charles N.; Slotton, Darrell G.; Suchanek, Thomas H.; Ayers, Shaun M.

    2004-01-01

    Concentrations and mass loads of total mercury and methylmercury in streams draining abandoned mercury mines and near geothermal discharge in Cache Creek Basin, California, were measured during a 17-month period from January 2000 through May 2001. Rainfall and runoff averages during the study period were lower than long-term averages. Mass loads of mercury and methylmercury from upstream sources to downstream receiving waters, such as San Francisco Bay, were generally the highest during or after winter rainfall events. During the study period, mass loads of mercury and methylmercury from geothermal sources tended to be greater than those from abandoned mining areas because of a lack of large precipitation events capable of mobilizing significant amounts of either mercury-laden sediment or dissolved mercury and methylmercury from mine waste. Streambed sediments of Cache Creek are a source of mercury and methylmercury to downstream receiving bodies of water such as the Delta of the San Joaquin and Sacramento Rivers. Much of the mercury in these sediments was deposited over the last 150 years by erosion and stream discharge from abandoned mines or by continuous discharges from geothermal areas. Several geochemical constituents were useful as natural tracers for mining and geothermal areas. These constituents included aqueous concentrations of boron, chloride, lithium, and sulfate, and the stable isotopes of hydrogen and oxygen in water. Stable isotopes of water in areas draining geothermal discharges were enriched with more oxygen-18 relative to oxygen-16 than meteoric waters, whereas the enrichment by stable isotopes of water from much of the runoff from abandoned mines was similar to that of meteoric water. Geochemical signatures from stable isotopes and trace-element concentrations may be useful as tracers of total mercury or methylmercury from specific locations; however, mercury and methylmercury are not conservatively transported. A distinct mixing trend of

  16. dCache, towards Federated Identities & Anonymized Delegation

    NASA Astrophysics Data System (ADS)

    Ashish, A.; Millar, AP; Mkrtchyan, T.; Fuhrmann, P.; Behrmann, G.; Sahakyan, M.; Adeyemi, O. S.; Starek, J.; Litvintsev, D.; Rossi, A.

    2017-10-01

    For over a decade, dCache has relied on the authentication and authorization infrastructure (AAI) offered by VOMS, Kerberos, Xrootd etc. Although the established infrastructure has worked well and provided sufficient security, the implementation of procedures and the underlying software is often seen as a burden, especially by smaller communities trying to adopt existing HEP software stacks [1]. Moreover, scientists are increasingly dependent on service portals for data access [2]. In this paper, we describe how federated identity management systems can facilitate the transition from traditional AAI infrastructure to novel solutions like OpenID Connect. We investigate the advantages offered by OpenID Connect in regards to ‘delegation of authentication’ and ‘credential delegation for offline access’. Additionally, we demonstrate how macaroons can provide a more fine-granular authorization mechanism that supports anonymized delegation.

  17. Unraveling Network-induced Memory Contention: Deeper Insights with Machine Learning

    DOE PAGES

    Groves, Taylor Liles; Grant, Ryan; Gonzales, Aaron; ...

    2017-11-21

    Remote Direct Memory Access (RDMA) is expected to be an integral communication mechanism for future exascale systems, enabling asynchronous data transfers so that applications may fully utilize CPU resources while simultaneously sharing data amongst remote nodes. We examine Network-induced Memory Contention (NiMC) on Infiniband networks. We expose the interactions between RDMA, main memory and cache when applications and out-of-band services compete for memory resources. We then explore NiMC's resulting impact on application-level performance. For a range of hardware technologies and HPC workloads, we quantify NiMC and show that NiMC's impact grows with scale, resulting in up to 3X performance degradation at scales as small as 8K processes, even in applications that previously have been shown to be performance resilient in the presence of noise. In addition, this work examines the problem of predicting NiMC's impact on applications by leveraging machine learning and easily accessible performance counters. This approach provides additional insights about the root cause of NiMC and facilitates dynamic selection of potential solutions. Finally, we evaluated three potential techniques to reduce NiMC's impact, namely hardware offloading, core reservation and network throttling.

  18. Unraveling Network-induced Memory Contention: Deeper Insights with Machine Learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Groves, Taylor Liles; Grant, Ryan; Gonzales, Aaron

    Remote Direct Memory Access (RDMA) is expected to be an integral communication mechanism for future exascale systems, enabling asynchronous data transfers so that applications may fully utilize CPU resources while simultaneously sharing data amongst remote nodes. We examine Network-induced Memory Contention (NiMC) on Infiniband networks. We expose the interactions between RDMA, main memory and cache when applications and out-of-band services compete for memory resources. We then explore NiMC's resulting impact on application-level performance. For a range of hardware technologies and HPC workloads, we quantify NiMC and show that NiMC's impact grows with scale, resulting in up to 3X performance degradation at scales as small as 8K processes, even in applications that previously have been shown to be performance resilient in the presence of noise. In addition, this work examines the problem of predicting NiMC's impact on applications by leveraging machine learning and easily accessible performance counters. This approach provides additional insights about the root cause of NiMC and facilitates dynamic selection of potential solutions. Finally, we evaluated three potential techniques to reduce NiMC's impact, namely hardware offloading, core reservation and network throttling.

  19. Pattern recognition for cache management in distributed medical imaging environments.

    PubMed

    Viana-Ferreira, Carlos; Ribeiro, Luís; Matos, Sérgio; Costa, Carlos

    2016-02-01

    Traditionally, medical imaging repositories have been supported by indoor infrastructures with huge operational costs. This paradigm is changing thanks to cloud outsourcing, which not only brings technological advantages but also facilitates inter-institutional workflows. However, communication latency is one main problem in this kind of approach, since we are dealing with tremendous volumes of data. To minimize the impact of this issue, caching and prefetching are commonly used. The effectiveness of these mechanisms is highly dependent on their capability of accurately selecting the objects that will be needed soon. This paper describes a pattern recognition system based on artificial neural networks with incremental learning to evaluate, from a set of usage patterns, which one fits the user behavior at a given time. The accuracy of the pattern recognition model in distinct training conditions was also evaluated. The solution was tested with a real-world dataset and a synthesized dataset, showing that incremental learning is advantageous. Even with very immature initial models, trained with just 1 week of data samples, the overall accuracy was very similar to the value obtained when using 75% of the long-term data for training the models. Preliminary results demonstrate an effective reduction in communication latency when using the proposed solution to feed a prefetching mechanism. The proposed approach is very interesting for cache replacement and prefetching policies due to the good results obtained since the first deployment moments.
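
    Incremental (online) learning of the kind described here means the model is updated one observation at a time as new access patterns arrive, rather than being retrained from scratch. The plain perceptron below is only an illustration of that update style; the paper uses artificial neural networks and richer usage features, and the feature vectors and labels in the example are made up.

    ```python
    # Minimal online (incremental) classifier: updated one observation at a time,
    # so it can keep tracking access patterns as they drift.
    import numpy as np

    class OnlinePerceptron:
        def __init__(self, n_features, lr=0.1):
            self.w = np.zeros(n_features)
            self.b = 0.0
            self.lr = lr

        def predict(self, x):
            return 1 if np.dot(self.w, x) + self.b > 0 else 0

        def partial_fit(self, x, y):
            """Update the model with a single (features, label) observation."""
            error = y - self.predict(x)
            self.w += self.lr * error * np.asarray(x, dtype=float)
            self.b += self.lr * error

    # New observations can be fed in as they happen instead of retraining from scratch.
    model = OnlinePerceptron(n_features=3)
    for x, y in [([1, 0, 1], 1), ([0, 1, 0], 0), ([1, 1, 1], 1)]:
        model.partial_fit(x, y)
    print(model.predict([1, 0, 0]))
    ```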

  20. Mnemonic Effect of Iconic Gesture and Beat Gesture in Adults and Children: Is Meaning in Gesture Important for Memory Recall?

    ERIC Educational Resources Information Center

    So, Wing Chee; Chen-Hui, Colin Sim; Wei-Shan, Julie Low

    2012-01-01

    Abundant research has shown that encoding meaningful gesture, such as an iconic gesture, enhances memory. This paper asked whether gesture needs to carry meaning to improve memory recall by comparing the mnemonic effect of meaningful (i.e., iconic gestures) and nonmeaningful gestures (i.e., beat gestures). Beat gestures involve simple motoric…

  1. Software Exploit Prevention and Remediation via Software Memory Protection

    DTIC Science & Technology

    2009-05-01

    trampolines that are necessary. Trampolines are pieces of code emitted into the fragment cache to transfer control back to Strata. Most control-transfer instructions (CTIs) are initially linked to trampolines (unless the transfer target already exists in the fragment cache). Once a CTI's target instruction becomes available in the fragment cache, the CTI is linked directly to the destination, avoiding future uses of the trampoline. This

  2. A brief metacognition questionnaire for the elderly: comparison with cognitive performance and informant ratings the Cache County Study.

    PubMed

    Buckley, Trevor; Norton, Maria C; Deberard, M Scott; Welsh-Bohmer, Kathleen A; Tschanz, JoAnn T

    2010-07-01

    To examine the utility of a brief, metacognition questionnaire by examining its association with objective cognitive testing and informant ratings. We hypothesized that the association between self-ratings of change and both outcomes would be greater among individuals without dementia than among those with dementia. Participants were 535 persons without dementia and 152 with dementia from the Cache County Memory Study who had completed a metacognition questionnaire, two administrations of the Modified Mini-Mental State Exam (3MS) and who had data on the Informant Questionnaire of Cognitive Decline in the Elderly (IQCODE). Cronbach's alpha was calculated as a measure of internal consistency of the metacognition questionnaire. Multiple regression was used to examine the relationship between metacognition and 3MS change. Logistic regression was used to examine the relationship between metacognition and IQCODE ratings (no change vs. worse). Cronbach's alpha was 0.75. Among individuals without dementia, metacognition significantly predicted 3MS change (p = .027) and IQCODE ratings (OR = 4.0, 95% CI = 1.2-13.8, p = .029), suggesting consistency among measures. For those with dementia, there was a weak, inverse relationship between 3MS change and metacognition (r = -0.16, p = .056). IQCODE ratings were not significantly associated with metacognition (p = .729). Degree of dementia severity did not modify the relationship between metacognition and either outcome (p > .05). We demonstrated adequate internal consistency and evidence for validity of a brief metacognition questionnaire. The questionnaire may provide a useful adjunct to memory and functional assessments for assessing anosognosia in elderly populations. (c) 2009 John Wiley & Sons, Ltd.

  3. Vascular risk factors and neuropsychiatric symptoms in Alzheimer's disease: the Cache County Study.

    PubMed

    Steinberg, Martin; Hess, Kyle; Corcoran, Chris; Mielke, Michelle M; Norton, Maria; Breitner, John; Green, Robert; Leoutsakos, Jeannie; Welsh-Bohmer, Kathleen; Lyketsos, Constantine; Tschanz, Joann

    2014-02-01

    Knowledge of potentially modifiable risk factors for neuropsychiatric symptoms (NPS) in Alzheimer's disease (AD) is important. This study longitudinally explores modifiable vascular risk factors for NPS in AD. Participants enrolled in the Cache County Study on Memory in Aging with no dementia at baseline were subsequently assessed over three additional waves, and those with incident (new onset) dementia were invited to join the Dementia Progression Study for longitudinal follow-up. A total of 327 participants with incident AD were identified and assessed for the following vascular factors: atrial fibrillation, hypertension, diabetes mellitus, angina, coronary artery bypass surgery, myocardial infarction, cerebrovascular accident, and use of antihypertensive or diabetes medicines. A vascular index (VI) was also calculated. NPS were assessed over time using the Neuropsychiatric Inventory (NPI). Affective and Psychotic symptom clusters were assessed separately. The association between vascular factors and change in NPI total score was analyzed using linear mixed model and in symptom clusters using a random effects model. No individual vascular risk factors or the VI significantly predicted change in any individual NPS. The use of antihypertensive medications more than four times per week was associated with higher total NPI and Affective cluster scores. Use of antihypertensive medication was associated with higher total NPI and Affective cluster scores. The results of this study do not otherwise support vascular risk factors as modifiers of longitudinal change in NPS in AD. Copyright © 2013 John Wiley & Sons, Ltd.

  4. Mental and behavioral disturbances in dementia: findings from the Cache County Study on Memory in Aging.

    PubMed

    Lyketsos, C G; Steinberg, M; Tschanz, J T; Norton, M C; Steffens, D C; Breitner, J C

    2000-05-01

    The authors report findings from a study of 5,092 community residents who constituted 90% of the elderly resident population of Cache County, Utah. The 5,092 participants, who were 65 years old or older, were screened for dementia. Based on the results of this screen, 1,002 participants (329 with dementia and 673 without dementia) underwent comprehensive neuropsychiatric examinations and were rated on the Neuropsychiatric Inventory, a widely used method for ascertainment and classification of dementia-associated mental and behavioral disturbances. Of the 329 participants with dementia, 214 (65%) had Alzheimer's disease, 62 (19%) had vascular dementia, and 53 (16%) had another DSM-IV dementia diagnosis; 201 (61%) had exhibited one or more mental or behavioral disturbances in the past month. Apathy (27%), depression (24%), and agitation/aggression (24%) were the most common in participants with dementia. These disturbances were almost four times more common in participants with dementia than in those without. Only modest differences were observed in the prevalence of mental or behavioral disturbances in different types of dementia or at different stages of illness: participants with Alzheimer's disease were more likely to have delusions and less likely to have depression. Agitation/aggression and aberrant motor behavior were more common in participants with advanced dementia. On the basis of their findings in this large community population of elderly people, the authors conclude that a wide range of dementia-associated mental and behavioral disturbances afflict the majority of individuals with dementia. Because of their frequency and their adverse effects on patients and their caregivers, these disturbances should be ascertained and treated in all cases of dementia.

  5. Indel-tolerant read mapping with trinucleotide frequencies using cache-oblivious kd-trees.

    PubMed

    Mahmud, Md Pavel; Wiedenhoeft, John; Schliep, Alexander

    2012-09-15

    Mapping billions of reads from next generation sequencing experiments to reference genomes is a crucial task, which can require hundreds of hours of running time on a single CPU even for the fastest known implementations. Traditional approaches have difficulties dealing with matches of large edit distance, particularly in the presence of frequent or large insertions and deletions (indels). This is a serious obstacle both in determining the spectrum and abundance of genetic variations and in personal genomics. For the first time, we adopt the approximate string matching paradigm of geometric embedding to read mapping, thus rephrasing it to nearest neighbor queries in a q-gram frequency vector space. Using the L1 distance between frequency vectors has the benefit of providing lower bounds for an edit distance with affine gap costs. Using a cache-oblivious kd-tree, we realize running times, which match the state-of-the-art. Additionally, running time and memory requirements are about constant for read lengths between 100 and 1000 bp. We provide a first proof-of-concept that geometric embedding is a promising paradigm for read mapping and that L1 distance might serve to detect structural variations. TreQ, our initial implementation of that concept, performs more accurately than many popular read mappers over a wide range of structural variants. TreQ will be released under the GNU Public License (GPL), and precomputed genome indices will be provided for download at http://treq.sf.net. pavelm@cs.rutgers.edu Supplementary data are available at Bioinformatics online.
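
    The geometric embedding described maps each sequence to a vector of trinucleotide (3-gram) counts, and distances between reads and reference windows are then measured in that vector space; as the abstract notes, the L1 distance between such vectors can be used to lower-bound an affine-gap edit distance. A minimal version of the embedding and distance (without the kd-tree indexing) might look as follows; the example sequences are made up.

    ```python
    # Trinucleotide (3-gram) frequency embedding and the L1 distance between two sequences.
    from collections import Counter
    from itertools import product

    TRIMERS = ["".join(p) for p in product("ACGT", repeat=3)]   # the 64 dimensions

    def trimer_vector(seq):
        counts = Counter(seq[i:i + 3] for i in range(len(seq) - 2))
        return [counts[t] for t in TRIMERS]

    def l1_distance(u, v):
        return sum(abs(a - b) for a, b in zip(u, v))

    read = "ACGTACGTTGCA"
    window = "ACGTACGATGCA"
    print(l1_distance(trimer_vector(read), trimer_vector(window)))
    ```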

  6. Indel-tolerant read mapping with trinucleotide frequencies using cache-oblivious kd-trees

    PubMed Central

    Mahmud, Md Pavel; Wiedenhoeft, John; Schliep, Alexander

    2012-01-01

    Motivation: Mapping billions of reads from next generation sequencing experiments to reference genomes is a crucial task, which can require hundreds of hours of running time on a single CPU even for the fastest known implementations. Traditional approaches have difficulties dealing with matches of large edit distance, particularly in the presence of frequent or large insertions and deletions (indels). This is a serious obstacle both in determining the spectrum and abundance of genetic variations and in personal genomics. Results: For the first time, we adopt the approximate string matching paradigm of geometric embedding to read mapping, thus rephrasing it to nearest neighbor queries in a q-gram frequency vector space. Using the L1 distance between frequency vectors has the benefit of providing lower bounds for an edit distance with affine gap costs. Using a cache-oblivious kd-tree, we realize running times, which match the state-of-the-art. Additionally, running time and memory requirements are about constant for read lengths between 100 and 1000 bp. We provide a first proof-of-concept that geometric embedding is a promising paradigm for read mapping and that L1 distance might serve to detect structural variations. TreQ, our initial implementation of that concept, performs more accurately than many popular read mappers over a wide range of structural variants. Availability and implementation: TreQ will be released under the GNU Public License (GPL), and precomputed genome indices will be provided for download at http://treq.sf.net. Contact: pavelm@cs.rutgers.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:22962448

  7. Observations of territorial breeding common ravens caching eggs of greater sage-grouse

    USGS Publications Warehouse

    Howe, Kristy B.; Coates, Peter S.

    2015-01-01

    Previous investigations using continuous video monitoring of greater sage-grouse Centrocercus urophasianus nests have unambiguously identified common ravens Corvus corax as an important egg predator within the western United States. The quantity of greater sage-grouse eggs an individual common raven consumes during the nesting period and the extent to which common ravens actively hunt greater sage-grouse nests are largely unknown. However, some evidence suggests that territorial breeding common ravens, rather than nonbreeding transients, are most likely responsible for nest depredations. We describe greater sage-grouse egg depredation observations obtained opportunistically from three common raven nests located in Idaho and Nevada where depredated greater sage-grouse eggs were found at or in the immediate vicinity of the nest site, including the caching of eggs in nearby rock crevices. We opportunistically monitored these nests by counting and removing depredated eggs and shell fragments from the nest sites during each visit to determine the extent to which the common raven pairs preyed on greater sage-grouse eggs. To our knowledge, our observations represent the first evidence that breeding, territorial pairs of common ravens cache greater sage-grouse eggs and are capable of depredating multiple greater sage-grouse nests.

  8. Geochemistry of mercury and other constituents in subsurface sediment—Analyses from 2011 and 2012 coring campaigns, Cache Creek Settling Basin, Yolo County, California

    USGS Publications Warehouse

    Arias, Michelle R.; Alpers, Charles N.; Marvin-DiPasquale, Mark C.; Fuller, Christopher C.; Agee, Jennifer L.; Sneed, Michelle; Morita, Andrew Y.; Salas, Antonia

    2017-10-31

    Cache Creek Settling Basin was constructed in 1937 to trap sediment from Cache Creek before delivery to the Yolo Bypass, a flood conveyance for the Sacramento River system that is tributary to the Sacramento–San Joaquin Delta. Sediment management options being considered by stakeholders in the Cache Creek Settling Basin include sediment excavation; however, that could expose sediments containing elevated mercury concentrations from historical mercury mining in the watershed. In cooperation with the California Department of Water Resources, the U.S. Geological Survey undertook sediment coring campaigns in 2011–12 (1) to describe lateral and vertical distributions of mercury concentrations in deposits of sediment in the Cache Creek Settling Basin and (2) to improve constraint of estimates of the rate of sediment deposition in the basin.Sediment cores were collected in the Cache Creek Settling Basin, Yolo County, California, during October 2011 at 10 locations and during August 2012 at 5 other locations. Total core depths ranged from approximately 4.6 to 13.7 meters (15 to 45 feet), with penetration to about 9.1 meters (30 feet) at most locations. Unsplit cores were logged for two geophysical parameters (gamma bulk density and magnetic susceptibility); then, selected cores were split lengthwise. One half of each core was then photographed and archived, and the other half was subsampled. Initial subsamples from the cores (20-centimeter composite samples from five predetermined depths in each profile) were analyzed for total mercury, methylmercury, total reduced sulfur, iron speciation, organic content (as the percentage of weight loss on ignition), and grain-size distribution. Detailed follow-up subsampling (3-centimeter intervals) was done at six locations along an east-west transect in the southern part of the Cache Creek Settling Basin and at one location in the northern part of the basin for analyses of total mercury; organic content; and cesium-137, which was

  9. k(+)-buffer: An Efficient, Memory-Friendly and Dynamic k-buffer Framework.

    PubMed

    Vasilakis, Andreas-Alexandros; Papaioannou, Georgios; Fudos, Ioannis

    2015-06-01

    Depth-sorted fragment determination is fundamental for a host of image-based techniques which simulate complex rendering effects. It is also a challenging task in terms of time and space required when rasterizing scenes with high depth complexity. When low graphics memory requirements are of utmost importance, k-buffer can objectively be considered as the most preferred framework which advantageously ensures the correct depth order on a subset of all generated fragments. Although various alternatives have been introduced to partially or completely alleviate the noticeable quality artifacts produced by the initial k-buffer algorithm at the expense of increased memory or degraded performance, appropriate tools to automatically and dynamically compute the most suitable value of k are still missing. To this end, we introduce k(+)-buffer, a fast framework that accurately simulates the behavior of k-buffer in a single rendering pass. Two memory-bounded data structures, (i) the max-array and (ii) the max-heap, are developed on the GPU to concurrently maintain the k-foremost fragments per pixel by exploiting pixel synchronization and fragment culling. Memory-friendly strategies are further introduced to dynamically (a) lessen the wasteful memory allocation of individual pixels with low depth complexity frequencies, (b) minimize the allocated size of k-buffer according to different application goals and hardware limitations via a straightforward depth histogram analysis and (c) manage local GPU cache with a fixed-memory depth-sorting mechanism. Finally, an extensive experimental evaluation is provided demonstrating the advantages of our work over all prior k-buffer variants in terms of memory usage, performance cost and image quality.
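
    As a rough CPU-side illustration of the per-pixel bookkeeping described above (not the paper's GPU implementation), the sketch below keeps the k nearest fragments of one pixel in a max-heap keyed on depth: the farthest kept fragment sits at the root, so an incoming fragment either replaces it in O(log k) time or is culled immediately. The KBuffer class and Fragment fields are illustrative names, not the authors' API.

```cpp
// Minimal sketch of the k-foremost-fragments idea: per pixel, keep the k
// nearest fragments in a max-heap keyed on depth, so the farthest kept
// fragment is at the root and can be culled in O(log k) when a closer
// fragment arrives. (CPU-side illustration only.)
#include <algorithm>
#include <iostream>
#include <queue>
#include <vector>

struct Fragment {
    float depth;
    unsigned color;  // placeholder payload
};

struct FarthestFirst {
    bool operator()(const Fragment& a, const Fragment& b) const {
        return a.depth < b.depth;  // max-heap on depth
    }
};

class KBuffer {
public:
    explicit KBuffer(std::size_t k) : k_(k) {}

    void insert(const Fragment& f) {
        if (heap_.size() < k_) {
            heap_.push(f);
        } else if (f.depth < heap_.top().depth) {
            heap_.pop();      // cull the farthest kept fragment
            heap_.push(f);
        }                     // otherwise the new fragment is culled
    }

    // Return the kept fragments sorted front to back.
    std::vector<Fragment> sortedFrontToBack() {
        std::vector<Fragment> out;
        while (!heap_.empty()) { out.push_back(heap_.top()); heap_.pop(); }
        std::reverse(out.begin(), out.end());
        return out;
    }

private:
    std::size_t k_;
    std::priority_queue<Fragment, std::vector<Fragment>, FarthestFirst> heap_;
};

int main() {
    KBuffer pixel(3);
    for (float d : {0.9f, 0.2f, 0.5f, 0.7f, 0.1f, 0.4f})
        pixel.insert({d, 0xffffffffu});
    for (const Fragment& f : pixel.sortedFrontToBack())
        std::cout << f.depth << "\n";  // prints 0.1 0.2 0.4
}
```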

  10. Multiple channel data acquisition system

    DOEpatents

    Crawley, H. Bert; Rosenberg, Eli I.; Meyer, W. Thomas; Gorbics, Mark S.; Thomas, William D.; McKay, Roy L.; Homer, Jr., John F.

    1990-05-22

    A multiple channel data acquisition system for the transfer of large amounts of data from a multiplicity of data channels has a plurality of modules which operate in parallel to convert analog signals to digital data and transfer that data to a communications host via a FASTBUS. Each module has a plurality of submodules which include a front end buffer (FEB) connected to input circuitry having an analog to digital converter with cache memory for each of a plurality of channels. The submodules are interfaced with the FASTBUS via a FASTBUS coupler which controls a module bus and a module memory. The system is triggered to effect rapid parallel data samplings which are stored to the cache memories. The cache memories are uploaded to the FEBs during which zero suppression occurs. The data in the FEBs is reformatted and compressed by a local processor during transfer to the module memory. The FASTBUS coupler is used by the communications host to upload the compressed and formatted data from the module memory. The local processor executes programs which are downloaded to the module memory through the FASTBUS coupler.

  11. Multiple channel data acquisition system

    DOEpatents

    Crawley, H.B.; Rosenberg, E.I.; Meyer, W.T.; Gorbics, M.S.; Thomas, W.D.; McKay, R.L.; Homer, J.F. Jr.

    1990-05-22

    A multiple channel data acquisition system for the transfer of large amounts of data from a multiplicity of data channels has a plurality of modules which operate in parallel to convert analog signals to digital data and transfer that data to a communications host via a FASTBUS. Each module has a plurality of submodules which include a front end buffer (FEB) connected to input circuitry having an analog to digital converter with cache memory for each of a plurality of channels. The submodules are interfaced with the FASTBUS via a FASTBUS coupler which controls a module bus and a module memory. The system is triggered to effect rapid parallel data samplings which are stored to the cache memories. The cache memories are uploaded to the FEBs during which zero suppression occurs. The data in the FEBs is reformatted and compressed by a local processor during transfer to the module memory. The FASTBUS coupler is used by the communications host to upload the compressed and formatted data from the module memory. The local processor executes programs which are downloaded to the module memory through the FASTBUS coupler. 25 figs.
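
    A toy software illustration of the zero-suppression step mentioned in the two patent abstracts above: when a channel's cache is uploaded to the front end buffer, samples at or below a pedestal are dropped, and the survivors keep their channel and sample index so the record can be reconstructed. This is a hedged sketch of the general technique, not the patented hardware; the zeroSuppress helper, the threshold, and the data layout are hypothetical.

```cpp
// Toy illustration (not the patented hardware) of zero suppression during a
// cache-to-FEB upload: samples at or below a pedestal threshold are dropped,
// and surviving samples are stored with their channel and index so the
// original record can be reconstructed later.
#include <cstdint>
#include <iostream>
#include <vector>

struct SuppressedSample {
    uint16_t channel;
    uint16_t index;   // position of the sample in the original cache
    uint16_t value;
};

// Hypothetical helper: upload one channel's cache, keeping only samples above
// the pedestal.
std::vector<SuppressedSample> zeroSuppress(uint16_t channel,
                                           const std::vector<uint16_t>& cache,
                                           uint16_t pedestal) {
    std::vector<SuppressedSample> out;
    for (uint16_t i = 0; i < cache.size(); ++i)
        if (cache[i] > pedestal)
            out.push_back({channel, i, cache[i]});
    return out;
}

int main() {
    std::vector<uint16_t> cache = {0, 0, 3, 57, 212, 64, 2, 0, 0, 1};
    for (const auto& s : zeroSuppress(/*channel=*/5, cache, /*pedestal=*/2))
        std::cout << "ch " << s.channel << " idx " << s.index
                  << " val " << s.value << "\n";
}
```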

  12. Macroinvertebrate Responses to Constructed Riffles in the Cache River, Illinois, USA

    NASA Astrophysics Data System (ADS)

    Walther, Denise A.; Whiles, Matt R.

    2008-04-01

    Stream restoration practices are becoming increasingly common, but biological assessments of these improvements are still limited. Rock weirs, a type of constructed riffle, were implemented in the upper Cache River in southern Illinois, USA, in 2001 and 2003-2004 to control channel incision and protect high quality riparian wetlands as part of an extensive watershed-level restoration. Construction of the rock weirs provided an opportunity to examine biological responses to a common in-stream restoration technique. We compared macroinvertebrate assemblages on previously constructed rock weirs and newly constructed weirs to those on snags and scoured clay streambed, the two dominant substrates in the unrestored reaches of the river. We quantitatively sampled macroinvertebrates on these substrates on seven occasions during 2003 and 2004. Ephemeroptera, Plecoptera, and Trichoptera (EPT) biomass and aquatic insect biomass were significantly higher on rock weirs than the streambed for most sample periods. Snags supported intermediate EPT and aquatic insect biomass compared to rock weirs and the streambed. Nonmetric multidimensional scaling (NMDS) ordinations for 2003 and 2004 revealed distinct assemblage groups for rock weirs, snags, and the streambed. Analysis of similarity supported visual interpretation of NMDS plots. All pair-wise substrate comparisons differed significantly, except recently constructed weirs versus older weirs. Results indicate positive responses by macroinvertebrate assemblages to in-stream restoration in the Cache River. Moreover, these responses were not evident with more common measures of total density, biomass, and diversity.

  13. Killing of a muskox, Ovibos moschatus, by two wolves, Canis lupus, and subsequent caching

    USGS Publications Warehouse

    Mech, L. David; Adams, Layne G.

    1999-01-01

    The killing of a cow Muskox (Ovibos moschatus) by two Wolves (Canis lupus) in 5 minutes during summer on Ellesmere Island is described. After two of the four feedings observed, one Wolf cached a leg and regurgitated food as far as 2.3 km away and probably farther. The implications of this behavior for deriving food-consumption estimates are discussed.

  14. The construction of meaning.

    PubMed

    Kintsch, Walter; Mangalath, Praful

    2011-04-01

    We argue that word meanings are not stored in a mental lexicon but are generated in the context of working memory from long-term memory traces that record our experience with words. Current statistical models of semantics, such as latent semantic analysis and the Topic model, describe what is stored in long-term memory. The CI-2 model describes how this information is used to construct sentence meanings. This model is a dual-memory model, in that it distinguishes between a gist level and an explicit level. It also incorporates syntactic information about how words are used, derived from dependency grammar. The construction of meaning is conceptualized as feature sampling from the explicit memory traces, with the constraint that the sampling must be contextually relevant both semantically and syntactically. Semantic relevance is achieved by sampling topically relevant features; local syntactic constraints as expressed by dependency relations ensure syntactic relevance. Copyright © 2010 Cognitive Science Society, Inc.

  15. Initial Performance Results on IBM POWER6

    NASA Technical Reports Server (NTRS)

    Saini, Subbash; Talcott, Dale; Jespersen, Dennis; Djomehri, Jahed; Jin, Haoqiang; Mehrotra, Piysuh

    2008-01-01

    The POWER5+ processor has a faster memory bus than that of the previous generation POWER5 processor (533 MHz vs. 400 MHz), but the measured per-core memory bandwidth of the latter is better than that of the former (5.7 GB/s vs. 4.3 GB/s). The reason for this is that in the POWER5+, the two cores on the chip share the L2 cache, L3 cache and memory bus. The memory controller is also on the chip and is shared by the two cores. This serializes the path to memory. For consistently good performance on a wide range of applications, the performance of the processor, the memory subsystem, and the interconnects (both latency and bandwidth) should be balanced. Recognizing this, IBM has designed the POWER6 processor so as to avoid the bottlenecks due to the L2 cache, memory controller and buffer chips of the POWER5+. Unlike the POWER5+, each core in the POWER6 has its own L2 cache (4 MB, double that of the POWER5+), memory controller and buffer chips. Each core in the POWER6 runs at 4.7 GHz instead of 1.9 GHz in the POWER5+. In this paper, we evaluate the performance of a dual-core POWER6-based IBM p6-570 system, and we compare its performance with that of a dual-core POWER5+-based IBM p575+ system. In this evaluation, we have used the High-Performance Computing Challenge (HPCC) benchmarks, NAS Parallel Benchmarks (NPB), and four real-world applications: three from computational fluid dynamics and one from climate modeling.

  16. Side Channel Attacks on STTRAM and Low Overhead Countermeasures

    DTIC Science & Technology

    2017-03-20

    introduce security vulnerabilities and expose the cache memory to side channel attacks. In this paper, we propose a side channel attack (SCA) model...where the adversary can monitor the supply current of the memory array to partially identify the sensitive cache data that is being read or written. We...propose solutions such as short retention STTRAM, obfuscation of SCA using 1-bit parity, multi-bit random write, and neutralizing the SCA using

  17. The history of scatter hoarding studies.

    PubMed

    Brodin, Anders

    2010-03-27

    In this review, I will present an overview of the development of the field of scatter hoarding studies. Scatter hoarding is a conspicuous behaviour and it has been observed by humans for a long time. Apart from an exceptional experimental study already published in 1720, it started with observational field studies of scatter hoarding birds in the 1940s. Driven by a general interest in birds, several ornithologists made large-scale studies of hoarding behaviour in species such as nutcrackers and boreal titmice. Scatter hoarding birds seem to remember caching locations accurately, and it was shown in the 1960s that successful retrieval is dependent on a specific part of the brain, the hippocampus. The study of scatter hoarding, spatial memory and the hippocampus has since then developed into a study system for evolutionary studies of spatial memory. In 1978, a game theoretical paper started the era of modern studies by establishing that a recovery advantage is necessary for individual hoarders for the evolution of a hoarding strategy. The same year, a combined theoretical and empirical study on scatter hoarding squirrels investigated how caches should be spaced out in order to minimize cache loss, a phenomenon sometimes called optimal cache density theory. Since then, the scatter hoarding paradigm has branched into a number of different fields: (i) theoretical and empirical studies of the evolution of hoarding, (ii) field studies with modern sampling methods, (iii) studies of the precise nature of the caching memory, (iv) a variety of studies of caching memory and its relationship to the hippocampus. Scatter hoarding has also been the subject of studies of (v) coevolution between scatter hoarding animals and the plants that are dispersed by these.

  18. Mitochondrial genomic analysis of late onset Alzheimer's disease reveals protective haplogroups H6A1A/H6A1B: the Cache County Study on Memory in Aging.

    PubMed

    Ridge, Perry G; Maxwell, Taylor J; Corcoran, Christopher D; Norton, Maria C; Tschanz, Joann T; O'Brien, Elizabeth; Kerber, Richard A; Cawthon, Richard M; Munger, Ronald G; Kauwe, John S K

    2012-01-01

    Alzheimer's disease (AD) is the most common cause of dementia and AD risk clusters within families. Part of the familial aggregation of AD is accounted for by excess maternal vs. paternal inheritance, a pattern consistent with mitochondrial inheritance. The role of specific mitochondrial DNA (mtDNA) variants and haplogroups in AD risk is uncertain. We determined the complete mitochondrial genome sequence of 1007 participants in the Cache County Study on Memory in Aging, a population-based prospective cohort study of dementia in northern Utah. AD diagnoses were made with a multi-stage protocol that included clinical examination and review by a panel of clinical experts. We used TreeScanning, a statistically robust approach based on haplotype networks, to analyze the mtDNA sequence data. Participants with major mitochondrial haplotypes H6A1A and H6A1B showed a reduced risk of AD (p=0.017, corrected for multiple comparisons). The protective haplotypes were defined by three variants: m.3915G>A, m.4727A>G, and m.9380G>A. These three variants characterize two different major haplogroups. Together m.4727A>G and m.9380G>A define H6A1, and it has been suggested m.3915G>A defines H6A. Additional variants differentiate H6A1A and H6A1B; however, none of these variants had a significant relationship with AD case-control status. Our findings provide evidence of a reduced risk of AD for individuals with mtDNA haplotypes H6A1A and H6A1B. These findings are the results of the largest study to date with complete mtDNA genome sequence data, yet the functional significance of the associated haplotypes remains unknown and replication in other studies is necessary.

  19. Feasibility Report and Environmental Statement for Water Resources Development, Cache Creek Basin, California

    DTIC Science & Technology

    1979-02-01

    classified as Pomo, Lake Miwok, and Patwin. Recent surveys within the Clear Lake-Cache Creek Basin have located 28 archeological sites, some of which...additional 8,400 acre-feet annually to the Lakeport area. Pomo Reservoir on Kelsey Creek, being studied by Lake County, also would supplement M&I water...project on Scotts Creek could provide 9,100 acre-feet annually of irrigation water. Also, as previously discussed, Pomo Reservoir would furnish

  20. Memory-efficient dynamic programming backtrace and pairwise local sequence alignment.

    PubMed

    Newberg, Lee A

    2008-08-15

    A backtrace through a dynamic programming algorithm's intermediate results in search of an optimal path, or to sample paths according to an implied probability distribution, or as the second stage of a forward-backward algorithm, is a task of fundamental importance in computational biology. When there is insufficient space to store all intermediate results in high-speed memory (e.g. cache), existing approaches store selected stages of the computation, and recompute missing values from these checkpoints on an as-needed basis. Here we present an optimal checkpointing strategy, and demonstrate its utility with pairwise local sequence alignment of sequences of length 10,000. Sample C++ code for optimal backtrace is available in the Supplementary Materials. Supplementary data is available at Bioinformatics online.
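
    The checkpoint-and-recompute idea can be made concrete with a small sketch. The code below uses a simple every-k-rows schedule for a unit-cost edit-distance DP, not the paper's optimal checkpointing strategy: the forward pass stores only every k-th row, and the backtrace recomputes each block of rows from the nearest checkpoint as it walks back, trading recomputation for memory. All names and the choice of k are illustrative.

```cpp
// Checkpointed backtrace sketch for a unit-cost edit-distance DP: keep only
// every k-th row during the forward pass, then recompute one block of rows
// at a time from its nearest checkpoint while tracing back.
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

using Row = std::vector<int>;

// Recompute DP rows first..last (inclusive), given the row just above `first`.
static std::vector<Row> computeBlock(const std::string& a, const std::string& b,
                                     int first, int last, Row above) {
    std::vector<Row> rows;
    for (int i = first; i <= last; ++i) {
        Row cur(b.size() + 1);
        cur[0] = i;
        for (std::size_t j = 1; j <= b.size(); ++j) {
            int sub = above[j - 1] + (a[i - 1] == b[j - 1] ? 0 : 1);
            cur[j] = std::min({sub, above[j] + 1, cur[j - 1] + 1});
        }
        rows.push_back(cur);
        above = rows.back();
    }
    return rows;
}

int main() {
    const std::string a = "ACGTTGCA", b = "AGTTCA";
    const int n = (int)a.size(), k = 3;  // keep every k-th row only

    // Forward pass: store checkpoints for rows 0, k, 2k, ...
    std::vector<Row> cp;
    Row row(b.size() + 1);
    for (std::size_t j = 0; j <= b.size(); ++j) row[j] = (int)j;
    cp.push_back(row);
    for (int i = 1; i <= n; ++i) {
        row = computeBlock(a, b, i, i, row).back();
        if (i % k == 0) cp.push_back(row);
    }

    // Backtrace: recompute one block at a time from the checkpoint below it.
    int i = n, j = (int)b.size();
    std::string ops;
    while (i > 0) {
        int base = ((i - 1) / k) * k;                  // checkpointed row index
        std::vector<Row> blk = computeBlock(a, b, base + 1, i, cp[base / k]);
        while (i > base) {
            const Row& cur = blk[i - base - 1];
            const Row& up = (i - base >= 2) ? blk[i - base - 2] : cp[base / k];
            if (j > 0 && cur[j] == up[j - 1] + (a[i - 1] == b[j - 1] ? 0 : 1)) {
                ops.push_back(a[i - 1] == b[j - 1] ? 'M' : 'S'); --i; --j;
            } else if (cur[j] == up[j] + 1) {
                ops.push_back('D'); --i;
            } else {
                ops.push_back('I'); --j;
            }
        }
    }
    ops.append(j, 'I');
    std::reverse(ops.begin(), ops.end());
    std::cout << "edit ops (front to back): " << ops << "\n";
}
```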

  1. Minimizing End-to-End Interference in I/O Stacks Spanning Shared Multi-Level Buffer Caches

    ERIC Educational Resources Information Center

    Patrick, Christina M.

    2011-01-01

    This thesis presents a uniquely designed, high-performance I/O stack that minimizes end-to-end interference across multi-level shared buffer cache hierarchies accessing shared I/O servers. In this thesis, I show that I can build a superior I/O stack which minimizes the inter-application interference…

  2. Improve Performance of Data Warehouse by Query Cache

    NASA Astrophysics Data System (ADS)

    Gour, Vishal; Sarangdevot, S. S.; Sharma, Anand; Choudhary, Vinod

    2010-11-01

    The primary goal of a data warehouse is to free the information locked up in the operational database so that decision makers and business analysts can perform queries, analysis and planning regardless of the data changes in the operational database. Because the number of queries is large, in certain cases there is a reasonable probability that the same query is submitted by one or more users at different times. Each time a query is executed, all the data in the warehouse is analyzed to generate the result of that query. In this paper we study how a query cache improves the performance of a data warehouse and examine the common problems faced by data warehouse administrators, namely minimizing response time and improving overall query efficiency, particularly when the data warehouse is updated at regular intervals.
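
    A minimal sketch of the query-cache idea, assuming results are keyed by the query text and the whole cache is invalidated whenever the warehouse is refreshed (the update-interval concern raised above). The QueryCache class and its executor callback are hypothetical names, not from the paper.

```cpp
// Minimal query result cache: results are keyed by the query text and the
// whole cache is invalidated whenever the warehouse is refreshed.
// `exec_` stands in for the real (expensive) query engine.
#include <functional>
#include <iostream>
#include <string>
#include <unordered_map>
#include <vector>

using ResultSet = std::vector<std::string>;

class QueryCache {
public:
    using Executor = std::function<ResultSet(const std::string&)>;

    explicit QueryCache(Executor exec) : exec_(std::move(exec)) {}

    const ResultSet& query(const std::string& sql) {
        auto it = cache_.find(sql);
        if (it != cache_.end()) { ++hits_; return it->second; }
        ++misses_;
        return cache_.emplace(sql, exec_(sql)).first->second;
    }

    // Called after each periodic warehouse load: cached answers may be stale.
    void invalidateAll() { cache_.clear(); }

    std::size_t hits() const { return hits_; }
    std::size_t misses() const { return misses_; }

private:
    Executor exec_;
    std::unordered_map<std::string, ResultSet> cache_;
    std::size_t hits_ = 0, misses_ = 0;
};

int main() {
    QueryCache qc([](const std::string& sql) {          // pretend warehouse
        return ResultSet{"row for: " + sql};
    });
    qc.query("SELECT region, SUM(sales) FROM fact GROUP BY region");
    qc.query("SELECT region, SUM(sales) FROM fact GROUP BY region");  // hit
    qc.invalidateAll();                                  // warehouse refreshed
    qc.query("SELECT region, SUM(sales) FROM fact GROUP BY region");  // miss
    std::cout << "hits=" << qc.hits() << " misses=" << qc.misses() << "\n";
}
```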

  3. Depositional environments of the Cache, Lower Lake, and Kelseyville Formations, Lake County, California

    USGS Publications Warehouse

    Rymer, Michael J.; Roth, Barry; Bradbury, J. Platt; Forester, Richard M.

    1988-01-01

    We describe the depositional environments of the Cache, Lower Lake, and Kelseyville Formations in light of habitat preferences of recovered mollusks, ostracodes, and diatoms. Our reconstruction of paleoenvironments for these late Cenozoic deposits provides a framework for an understanding of basin evolution and deposition in the Clear Lake region. The Pliocene and Pleistocene Cache Formation was deposited primarily in stream and debris flow environments; fossils from fine-grained deposits indicate shallow, fresh-water environments with locally abundant aquatic vegetation. The fine-grained sediments (mudstone and siltstone) were probably deposited in ponds in abandoned channels or shallow basins behind natural levees. The abandoned channels and shallow basins were associated with the fluvial systems responsible for deposition of the bulk of the tectonically controlled Cache Formation. The Pleistocene Lower Lake Formation was deposited in a water mass large enough to contain a variety of local environments and current regimes. The recovered fossils imply a lake with water depths of 1 to 5 m. However, there is strong support from habitat preferences of the recovered fossils for inferring a wide range of water depths during deposition of the Lower Lake Formation; they indicate a progressively shallowing system and the culmination of a desiccating lacustrine system. The Pleistocene Kelseyville Formation represents primarily lacustrine deposition with only minor fluvial deposits around the margins of the basin. Local conglomerate beds and fossil tree stumps in growth position within the basin indicate occasional widespread fluvial incursions and depositional hiatuses. The Kelseyville strata represent a large water mass with a muddy and especially fluid substrate having permanent or sporadic periods of anoxia. Central-lake anoxia, whether permanent or at irregular intervals, is the simplest way to account for the low numbers of benthic organisms recovered from the

  4. Using dCache in Archiving Systems oriented to Earth Observation

    NASA Astrophysics Data System (ADS)

    Garcia Gil, I.; Perez Moreno, R.; Perez Navarro, O.; Platania, V.; Ozerov, D.; Leone, R.

    2012-04-01

    The objective of the LAST activity (Long term data Archive Study on new Technologies) is to perform an independent study on best practices and an assessment of different archiving technologies that are mature for operation in the short and mid-term time frame, or available in the long term, with emphasis on technologies better suited to satisfy the requirements of ESA, LTDP and other European and Canadian EO partners in terms of digital information preservation and data accessibility and exploitation. During the last phase of the project, several archiving solutions were tested in order to evaluate their suitability. In particular, dCache aims to provide a file-system tree view of the data repository, exchanging data with backend (tertiary) storage systems, and provides space management, pool attraction, dataset replication, hot-spot determination and recovery from disk or node failures. Connected to a tertiary storage system, dCache simulates unlimited direct-access storage space; data exchanges to and from the underlying HSM are performed automatically and invisibly to the user. dCache was created to meet the requirements of large computer centers and universities with large amounts of data, which joined their efforts and founded EMI (European Middleware Initiative). At present, dCache is mature enough to be deployed, and it is used by several research centers of relevance (e.g., the LHC, storing up to 50 TB/day). This solution has not been used so far in Earth Observation, and the results of the study are summarized in this article, focusing on its capabilities over a simulated environment to meet the ESA requirements for geographically distributed storage. The challenge of a geographically distributed storage system can be summarized as providing maximum quality for storage and dissemination services at minimum cost.

  5. Towards Transparent Throughput Elasticity for IaaS Cloud Storage: Exploring the Benefits of Adaptive Block-Level Caching

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nicolae, Bogdan; Riteau, Pierre; Keahey, Kate

    Storage elasticity on IaaS clouds is a crucial feature in the age of data-intensive computing, especially when considering fluctuations of I/O throughput. This paper provides a transparent solution that automatically boosts I/O bandwidth during peaks for underlying virtual disks, effectively avoiding over-provisioning without performance loss. The authors' proposal relies on the idea of leveraging short-lived virtual disks of better performance characteristics (and thus more expensive) to act during peaks as a caching layer for the persistent virtual disks where the application data is stored. Furthermore, they introduce a performance and cost prediction methodology that can be used both independently to estimate in advance what trade-off between performance and cost is possible, as well as an optimization technique that enables better cache size selection to meet the desired performance level with minimal cost. The authors demonstrate the benefits of their proposal both for microbenchmarks and for two real-life applications using large-scale experiments.
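
    A much-simplified sketch of the caching idea (not the authors' system, which additionally sizes the cache from a cost/performance model and attaches the fast disk only during peaks): reads check a small, fast block cache first and fall back to the slower persistent store on a miss, with LRU eviction. The class and function names below are illustrative assumptions.

```cpp
// Simplified block-level read cache placed in front of a slower persistent
// store (illustrative only). LRU eviction, fixed capacity in blocks.
#include <cstdint>
#include <iostream>
#include <list>
#include <unordered_map>
#include <utility>
#include <vector>

using Block = std::vector<uint8_t>;

class BlockCache {
public:
    explicit BlockCache(std::size_t capacityBlocks) : cap_(capacityBlocks) {}

    // Hypothetical read path: fast cache first, slow store on a miss.
    Block read(uint64_t blockId) {
        auto it = index_.find(blockId);
        if (it != index_.end()) {                 // hit: refresh LRU position
            lru_.splice(lru_.begin(), lru_, it->second.second);
            return it->second.first;
        }
        Block b = readFromSlowStore(blockId);     // miss: go to persistent disk
        if (index_.size() == cap_) {              // evict least recently used
            index_.erase(lru_.back());
            lru_.pop_back();
        }
        lru_.push_front(blockId);
        index_[blockId] = {b, lru_.begin()};
        return b;
    }

private:
    Block readFromSlowStore(uint64_t blockId) {   // stand-in for the virtual disk
        return Block(4096, static_cast<uint8_t>(blockId & 0xff));
    }

    std::size_t cap_;
    std::list<uint64_t> lru_;                     // front = most recently used
    std::unordered_map<uint64_t, std::pair<Block, std::list<uint64_t>::iterator>> index_;
};

int main() {
    BlockCache cache(2);
    cache.read(1); cache.read(2); cache.read(1); cache.read(3);  // evicts 2
    std::cout << "done\n";
}
```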

  6. Simulation of flow and habitat conditions under ice, Cache la Poudre River - January 2006

    USGS Publications Warehouse

    Waddle, Terry

    2007-01-01

    The objectives of this study are (1) to describe the extent and thickness of ice cover, (2) to simulate depth and velocity under ice at the study site for observed and reduced flows, and (3) to quantify fish habitat in this portion of the mainstem Cache la Poudre River for the current winter release schedule as well as for similar conditions without the 0.283 m3/s winter release.

  7. Mitochondrial Genomic Analysis of Late Onset Alzheimer’s Disease Reveals Protective Haplogroups H6A1A/H6A1B: The Cache County Study on Memory in Aging

    PubMed Central

    Ridge, Perry G.; Maxwell, Taylor J.; Corcoran, Christopher D.; Norton, Maria C.; Tschanz, JoAnn T.; O’Brien, Elizabeth; Kerber, Richard A.; Cawthon, Richard M.; Munger, Ronald G.; Kauwe, John S. K.

    2012-01-01

    Background: Alzheimer’s disease (AD) is the most common cause of dementia and AD risk clusters within families. Part of the familial aggregation of AD is accounted for by excess maternal vs. paternal inheritance, a pattern consistent with mitochondrial inheritance. The role of specific mitochondrial DNA (mtDNA) variants and haplogroups in AD risk is uncertain. Methodology/Principal Findings: We determined the complete mitochondrial genome sequence of 1007 participants in the Cache County Study on Memory in Aging, a population-based prospective cohort study of dementia in northern Utah. AD diagnoses were made with a multi-stage protocol that included clinical examination and review by a panel of clinical experts. We used TreeScanning, a statistically robust approach based on haplotype networks, to analyze the mtDNA sequence data. Participants with major mitochondrial haplotypes H6A1A and H6A1B showed a reduced risk of AD (p = 0.017, corrected for multiple comparisons). The protective haplotypes were defined by three variants: m.3915G>A, m.4727A>G, and m.9380G>A. These three variants characterize two different major haplogroups. Together m.4727A>G and m.9380G>A define H6A1, and it has been suggested m.3915G>A defines H6A. Additional variants differentiate H6A1A and H6A1B; however, none of these variants had a significant relationship with AD case-control status. Conclusions/Significance: Our findings provide evidence of a reduced risk of AD for individuals with mtDNA haplotypes H6A1A and H6A1B. These findings are the results of the largest study to date with complete mtDNA genome sequence data, yet the functional significance of the associated haplotypes remains unknown and replication in other studies is necessary. PMID:23028804

  8. The effect of patterning options on embedded memory cells in logic technologies at iN10 and iN7

    NASA Astrophysics Data System (ADS)

    Appeltans, Raf; Weckx, Pieter; Raghavan, Praveen; Kim, Ryoung-Han; Kar, Gouri Sankar; Furnémont, Arnaud; Van der Perre, Liesbet; Dehaene, Wim

    2017-03-01

    Static Random Access Memory (SRAM) cells are used together with logic standard cells as the benchmark to develop the process flow for new logic technologies. In order to achieve successful integration of Spin-Transfer Torque Magnetic Random Access Memory (STT-MRAM) as an area-efficient higher-level embedded cache, it also needs to be included as a benchmark. The simple cell structure of STT-MRAM brings extra patterning challenges to achieve high density. The two memory types are compared in terms of minimum area and critical design rules in both the iN10 and iN7 nodes, with an extra focus on patterning options in iN7. Both the use of Self-Aligned Quadruple Patterning (SAQP) mandrel and spacer engineering and the use of multi-level vias are explored. These patterning options result in large area gains for the STT-MRAM cell and, moreover, determine which cell variant is the smallest.

  9. Externalising the autobiographical self: sharing personal memories online facilitated memory retention.

    PubMed

    Wang, Qi; Lee, Dasom; Hou, Yubo

    2017-07-01

    Internet technology provides a new means of recalling and sharing personal memories in the digital age. What is the mnemonic consequence of posting personal memories online? Theories of transactive memory and autobiographical memory would make contrasting predictions. In the present study, college students completed a daily diary for a week, listing at the end of each day all the events that happened to them on that day. They also reported whether they posted any of the events online. Participants received a surprise memory test after the completion of the diary recording and then another test a week later. At both tests, events posted online were significantly more likely than those not posted online to be recalled. It appears that sharing memories online may provide unique opportunities for rehearsal and meaning-making that facilitate memory retention.

  10. Pushing Memory Bandwidth Limitations Through Efficient Implementations of Block-Krylov Space Solvers on GPUs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clark, M. A.; Strelchenko, Alexei; Vaquero, Alejandro

    Lattice quantum chromodynamics simulations in nuclear physics have benefited from a tremendous number of algorithmic advances such as multigrid and eigenvector deflation. These improve the time to solution but do not alleviate the intrinsic memory-bandwidth constraints of the matrix-vector operation dominating iterative solvers. Batching this operation for multiple vectors and exploiting cache and register blocking can yield a super-linear speed up. Block-Krylov solvers can naturally take advantage of such batched matrix-vector operations, further reducing the iterations to solution by sharing the Krylov space between solves. However, practical implementations typically suffer from the quadratic scaling in the number of vector-vector operations. Using the QUDA library, we present an implementation of a block-CG solver on NVIDIA GPUs which reduces the memory-bandwidth complexity of vector-vector operations from quadratic to linear. We present results for the HISQ discretization, showing a 5x speedup compared to highly-optimized independent Krylov solves on NVIDIA's SaturnV cluster.
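
    The bandwidth argument can be illustrated with the Gram matrix of a block of k vectors: computing the k × k inner products as separate dot products streams each vector from memory k times, whereas a single fused sweep reads each vector once and accumulates all entries together. The sketch below is a generic CPU illustration of that trade-off, not the QUDA implementation.

```cpp
// Why fusing block reductions cuts memory traffic: the naive loop streams each
// pair of vectors from memory (O(k^2 * N) element reads), while the fused loop
// sweeps the data once and accumulates the whole k x k Gram matrix
// (O(k * N) element reads, same O(k^2 * N) flops).
#include <iostream>
#include <vector>

using Matrix = std::vector<std::vector<double>>;  // k vectors of length N

Matrix gramNaive(const Matrix& V) {
    const std::size_t k = V.size(), N = V[0].size();
    Matrix G(k, std::vector<double>(k, 0.0));
    for (std::size_t a = 0; a < k; ++a)
        for (std::size_t b = 0; b < k; ++b)
            for (std::size_t i = 0; i < N; ++i)   // V[a], V[b] re-read k times
                G[a][b] += V[a][i] * V[b][i];
    return G;
}

Matrix gramFused(const Matrix& V) {
    const std::size_t k = V.size(), N = V[0].size();
    Matrix G(k, std::vector<double>(k, 0.0));
    for (std::size_t i = 0; i < N; ++i)           // single sweep over the data
        for (std::size_t a = 0; a < k; ++a)
            for (std::size_t b = 0; b < k; ++b)
                G[a][b] += V[a][i] * V[b][i];
    return G;
}

int main() {
    Matrix V = {{1, 2, 3}, {0, 1, 0}, {2, 0, 1}};
    Matrix G1 = gramNaive(V), G2 = gramFused(V);
    std::cout << "G[0][2] = " << G1[0][2] << " == " << G2[0][2] << "\n";  // 5
}
```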

  11. Fecal corticosterone, body mass, and caching rates of Carolina chickadees (Poecile carolinensis) from disturbed and undisturbed sites

    PubMed Central

    Lucas, Jeffrey R.; Freeberg, Todd M.; Egbert, Jeremy; Schwabl, Hubert

    2006-01-01

    We tested for hormonal and behavioral differences between Carolina chickadees (Poecile carolinensis) taken from a disturbed (recently logged) forest, an undisturbed forest, or a residential site. We measured fecal corticosterone and body mass levels in the field, and fecal corticosterone, body mass, and caching behavior in an aviary experiment. In the field, birds from the disturbed forest exhibited significantly higher fecal corticosterone levels than birds from either the undisturbed forest or from the residential site. Birds from the disturbed forest also exhibited lower body mass than those from the undisturbed forest but higher body mass than those from the residential site. Our aviary results suggest that these physiological differences between field sites are the result of short-term responses to ecological factors: Neither body mass nor fecal corticosterone levels varied between birds captured at different sites. Aviary sample sizes were sufficient to detect seasonal variation in fecal corticosterone (lowest in summer), body mass (highest in spring), and rate of gain in body mass (highest in winter). Under “closed-economy” aviary conditions (all food available from a feeder in the aviary), there were no site differences in the percent of seeds taken from the feeder that were cached. However, under “open-economy” conditions (food occasionally available ad libitum), significantly fewer seeds were cached by birds from the disturbed forest compared to the undisturbed or residential sites. On average, there was only a two-fold difference in population-levels of fecal corticosterone. This difference is about the same as an increase in fecal corticosterone induced by a two-hour increase in food deprivation, and can not be considered to be an acute stress response to disturbance. PMID:16458312

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evangelinos, Constantinos; Nair, Ravi; Ohmacht, Martin

    In one embodiment, a computer-implemented method includes encountering a store operation during a compile-time of a program, where the store operation is applicable to a memory line. It is determined, by a computer processor, that no cache coherence action is necessary for the store operation. A store-without-coherence-action instruction is generated for the store operation, responsive to determining that no cache coherence action is necessary. The store-without-coherence-action instruction specifies that the store operation is to be performed without a cache coherence action, and cache coherence is maintained upon execution of the store-without-coherence-action instruction.

  13. Distinctive Features Hold a Privileged Status in the Computation of Word Meaning: Implications for Theories of Semantic Memory

    ERIC Educational Resources Information Center

    Cree, George S.; McNorgan, Chris; McRae, Ken

    2006-01-01

    The authors present data from 2 feature verification experiments designed to determine whether distinctive features have a privileged status in the computation of word meaning. They use an attractor-based connectionist model of semantic memory to derive predictions for the experiments. Contrary to central predictions of the conceptual structure…

  14. The effect of environmental harshness on neurogenesis: a large-scale comparison.

    PubMed

    Chancellor, Leia V; Roth, Timothy C; LaDage, Lara D; Pravosudov, Vladimir V

    2011-03-01

    Harsh environmental conditions may produce strong selection pressure on traits, such as memory, that may enhance fitness. Enhanced memory may be crucial for survival in animals that use memory to find food and, thus, particularly important in environments where food sources may be unpredictable. For example, animals that cache and later retrieve their food may exhibit enhanced spatial memory in harsh environments compared with those in mild environments. One way that selection may enhance memory is via the hippocampus, a brain region involved in spatial memory. In a previous study, we established a positive relationship between environmental severity and hippocampal morphology in food-caching black-capped chickadees (Poecile atricapillus). Here, we expanded upon this previous work to investigate the relationship between environmental harshness and neurogenesis, a process that may support hippocampal cytoarchitecture. We report a significant and positive relationship between the degree of environmental harshness across several populations over a large geographic area and (1) the total number of immature hippocampal neurons, (2) the number of immature neurons relative to the hippocampal volume, and (3) the number of immature neurons relative to the total number of hippocampal neurons. Our results suggest that hippocampal neurogenesis may play an important role in environments where increased reliance on memory for cache recovery is critical. Copyright © 2010 Wiley Periodicals, Inc.

  15. Assessment of watershed vulnerability to climate change for the Uinta-Wasatch-Cache and Ashley National Forests, Utah

    Treesearch

    Janine Rice; Tim Bardsley; Pete Gomben; Dustin Bambrough; Stacey Weems; Sarah Leahy; Christopher Plunkett; Charles Condrat; Linda A. Joyce

    2017-01-01

    Watersheds on the Uinta-Wasatch-Cache and Ashley National Forests provide many ecosystem services, and climate change poses a risk to these services. We developed a watershed vulnerability assessment to provide scientific information for land managers facing the challenge of managing these watersheds. Literature-based information and expert elicitation is used to...

  16. Mercury and methylmercury concentrations and loads in the Cache Creek watershed, California

    USGS Publications Warehouse

    Domagalski, Joseph L.; Alpers, Charles N.; Slotton, D.G.; Suchanek, T.H.; Ayers, S.M.

    2004-01-01

    Concentrations and loads of total mercury and methylmercury were measured in streams draining abandoned mercury mines and in the proximity of geothermal discharge in the Cache Creek watershed of California during a 17-month period from January 2000 through May 2001. Rainfall and runoff were lower than long-term averages during the study period. The greatest loading of mercury and methylmercury from upstream sources to downstream receiving waters, such as San Francisco Bay, generally occurred during or after winter rainfall events. During the study period, loads of mercury and methylmercury from geothermal sources tended to be greater than those from abandoned mining areas, a pattern attributable to the lack of large precipitation events capable of mobilizing significant amounts of either mercury-laden sediment or dissolved mercury and methylmercury from mine waste. Streambed sediments of Cache Creek are a significant source of mercury and methylmercury to downstream receiving bodies of water. Much of the mercury in these sediments is the result of deposition over the last 100-150 years by either storm-water runoff from abandoned mines or continuous discharges from geothermal areas. Several geochemical constituents were useful as natural tracers for mining and geothermal areas, including the aqueous concentrations of boron, chloride, lithium and sulfate, and the stable isotopes of hydrogen and oxygen in water. Stable isotopes of water in areas draining geothermal discharges showed a distinct trend toward enrichment of 18O compared with meteoric waters, whereas much of the runoff from abandoned mines indicated a stable isotopic pattern more consistent with local meteoric water. © 2004 Elsevier B.V. All rights reserved.

  17. Petrochemistry of Mafic Rocks Within the Northern Cache Creek Terrane, NW British Columbia, Canada

    NASA Astrophysics Data System (ADS)

    English, J. M.; Johnston, S. T.; Mihalynuk, M. G.

    2002-12-01

    The Cache Creek terrane is a belt of oceanic rocks that extend the length of the Cordillera in British Columbia. Fossil fauna in this belt are exotic with respect to the remainder of the Canadian Cordillera, as they are of equatorial Tethyan affinity, contrasting with coeval faunas in adjacent terranes that show closer linkages with ancestral North America. Preliminary results reported here from geochemical studies of mafic rocks within the Nakina area of NW British Columbia further constrain the origin of this enigmatic terrane. The terrane is typified by tectonically imbricated slices of chert, argillite, limestone, wacke and volcaniclastic rocks, as well as mafic and ultramafic rocks. These lithologies are believed to represent two separate lithotectonic elements: Upper Triassic to Lower Jurassic, subduction-related accretionary complexes, and dismembered basement assemblages emplaced during the closure of the Cache Creek ocean in the Middle Jurassic. Petrochemical analysis revealed four distinct mafic igneous assemblages that include: magmatic 'knockers' of the Nimbus serpentinite mélange, metabasalts of 'Blackcaps' Mountain, augite-phyric breccias of 'Laughing Moose' Creek, and volcanic pediments to the reef-forming carbonates of the Horsefeed Formation. Major and trace element analysis classifies the 'Laughing Moose' breccias and the carbonate-associated volcanics as alkaline in nature, whereas the rest are subalkaline. Tectonic discrimination diagrams show that the alkaline rocks are of within-plate affinity, while the 'Blackcaps' basalts and 'knockers' from within the mélange typically straddle the island-arc tholeiite and the mid-ocean ridge boundaries. However, primitive mantle normalized multi-element plots indicate that these subalkaline rocks have pronounced negative Nb anomalies, a characteristic arc signature. The spatial association of alkaline volcanic rocks with extensive carbonate domains points to the existence of seamounts within the Cache

  18. A High-Precision Counter Using the DSP Technique

    DTIC Science & Technology

    2004-09-01

    DSP is not good enough to process all the 1-second samples. The cache memory is also not sufficient to store all the sampling data. So we cut the...sampling number in a cycle is not good enough to achieve an accuracy less than 2×10^-11. For this reason, a correlation operation is performed for... not good enough to process all the 1-second samples. The cache memory is also not sufficient to store all the sampling data. We will solve this

  19. The Universality of Self-Generated Verbal Mediators as a Means of Enhancing Memory Processes. Research Report No. 58.

    ERIC Educational Resources Information Center

    Buium, Nissan; Turnure, James E.

    In a replication of a similar study with American children, 56 normal native Israeli children (5-years-old) were studied to determine the universality of self-generated verbal mediators as a means of enhancing memory processes. Eight Ss, randomly selected, were assigned in each of the following conditions: labeling, sentence generation, listening…

  20. Mobile Thread Task Manager

    NASA Technical Reports Server (NTRS)

    Clement, Bradley J.; Estlin, Tara A.; Bornstein, Benjamin J.

    2013-01-01

    The Mobile Thread Task Manager (MTTM) is being applied to parallelizing existing flight software to understand the benefits and to develop new techniques and architectural concepts for adapting software to multicore architectures. It allocates and load-balances tasks for a group of threads that migrate across processors to improve cache performance. In order to balance load across threads, the MTTM augments a basic map-reduce strategy to draw jobs from a global queue. In a multicore processor, memory may be “homed” to the cache of a specific processor and must be accessed from that processor. The MTTM architecture wraps access to data with thread management to move threads to the home processor for that data so that the computation follows the data in an attempt to avoid L2 cache misses. Cache homing is also handled by a memory manager that translates identifiers to processor IDs where the data will be homed (according to rules defined by the user). The user can also specify the number of threads and processors separately, which is important for tuning performance for different patterns of computation and memory access. MTTM efficiently processes tasks in parallel on a multiprocessor computer. It also provides an interface to make it easier to adapt existing software to a multiprocessor environment.
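
    A toy sketch of the "computation follows the data" dispatch rule described above, assuming a user-supplied homing rule that maps a data identifier to a processor: each task is queued to the worker associated with the home of its data. The HomeDispatcher class and its interface are hypothetical, not the MTTM API.

```cpp
// Toy "computation follows the data" dispatcher: a user-supplied homing rule
// maps a data identifier to a worker, and each task is pushed onto the queue
// of the worker associated with that home, so the work runs where the data is
// assumed to live. (Illustrative only.)
#include <functional>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class HomeDispatcher {
public:
    using HomeRule = std::function<std::size_t(std::size_t /*dataId*/)>;

    HomeDispatcher(std::size_t workers, HomeRule rule)
        : queues_(workers), locks_(workers), rule_(std::move(rule)) {}

    void submit(std::size_t dataId, std::function<void()> task) {
        std::size_t w = rule_(dataId) % queues_.size();   // home worker
        std::lock_guard<std::mutex> g(locks_[w]);
        queues_[w].push(std::move(task));
    }

    void runAll() {                                       // one worker per queue
        std::vector<std::thread> ts;
        for (std::size_t w = 0; w < queues_.size(); ++w)
            ts.emplace_back([this, w] {
                while (true) {
                    std::function<void()> task;
                    {
                        std::lock_guard<std::mutex> g(locks_[w]);
                        if (queues_[w].empty()) return;
                        task = std::move(queues_[w].front());
                        queues_[w].pop();
                    }
                    task();
                }
            });
        for (auto& t : ts) t.join();
    }

private:
    std::vector<std::queue<std::function<void()>>> queues_;
    std::vector<std::mutex> locks_;
    HomeRule rule_;
};

int main() {
    // Hypothetical homing rule: data id modulo the number of workers.
    HomeDispatcher d(2, [](std::size_t id) { return id; });
    for (std::size_t id = 0; id < 4; ++id)
        d.submit(id, [id] { std::cout << "task on data " << id << "\n"; });
    d.runAll();
}
```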

  1. Agricultural Influences on Cache Valley, Utah Air Quality During a Wintertime Inversion Episode

    NASA Astrophysics Data System (ADS)

    Silva, P. J.

    2017-12-01

    Several of northern Utah's intermountain valleys are classified as non-attainment for fine particulate matter. Past data indicate that ammonium nitrate is the major contributor to fine particles and that the gas phase ammonia concentrations are among the highest in the United States. During the 2017 Utah Winter Fine Particulate Study, USDA brought a suite of online and real-time measurement methods to sample particulate matter and potential gaseous precursors from agricultural emissions in the Cache Valley. Instruments were co-located at the State of Utah monitoring site in Smithfield, Utah from January 21st through February 12th, 2017. A Scanning mobility particle sizer (SMPS) and aerodynamic particle sizer (APS) acquired size distributions of particles from 10 nm - 10 μm in 5-min intervals. A URG ambient ion monitor (AIM) gave hourly concentrations for gas and particulate ions and a Chromatotec Trsmedor gas chromatograph obtained 10 minute measurements of gaseous sulfur species. High ammonia concentrations were detected at the Smithfield site with concentrations above 100 ppb at times, indicating a significant influence from agriculture at the sampling site. Ammonia is not the only agricultural emission elevated in Cache Valley during winter, as reduced sulfur gas concentrations of up to 20 ppb were also detected. Dimethylsulfide was the major sulfur-containing gaseous species. Analysis indicates that particle growth and particle nucleation events were both observed by the SMPS. Relationships between gas and particulate concentrations and correlations between the two will be discussed.

  2. Fast maximum intensity projections of large medical data sets by exploiting hierarchical memory architectures.

    PubMed

    Kiefer, Gundolf; Lehmann, Helko; Weese, Jürgen

    2006-04-01

    Maximum intensity projections (MIPs) are an important visualization technique for angiographic data sets. Efficient data inspection requires frame rates of at least five frames per second at preserved image quality. Despite the advances in computer technology, this task remains a challenge. On the one hand, the sizes of computed tomography and magnetic resonance images are increasing rapidly. On the other hand, rendering algorithms do not automatically benefit from the advances in processor technology, especially for large data sets. This is due to the faster evolving processing power and the slower evolving memory access speed, which is bridged by hierarchical cache memory architectures. In this paper, we investigate memory access optimization methods and use them for generating MIPs on general-purpose central processing units (CPUs) and graphics processing units (GPUs), respectively. These methods can work on any level of the memory hierarchy, and we show that properly combined methods can optimize memory access on multiple levels of the hierarchy at the same time. We present performance measurements to compare different algorithm variants and illustrate the influence of the respective techniques. On current hardware, the efficient handling of the memory hierarchy for CPUs improves the rendering performance by a factor of 3 to 4. On GPUs, we observed that the effect is even larger, especially for large data sets. The methods can easily be adjusted to different hardware specifics, although their impact can vary considerably. They can also be used for other rendering techniques than MIPs, and their use for more general image processing task could be investigated in the future.
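
    One of the memory-access optimizations alluded to above, loop ordering, can be shown with a tiny example: instead of walking down the depth column for every output pixel (large strides), the projection below streams the volume slice by slice in storage order and updates a running maximum image. This is a generic illustration, not the authors' renderer; the volume layout and sizes are assumptions.

```cpp
// Cache-friendly maximum intensity projection along z: the outer loop visits
// slices sequentially, so every voxel is read exactly once in the order it is
// laid out in memory and hardware prefetching works well.
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <vector>

int main() {
    const int nx = 256, ny = 256, nz = 128;
    std::vector<uint8_t> volume(static_cast<std::size_t>(nx) * ny * nz, 0);
    volume[(std::size_t)(64 * ny + 100) * nx + 37] = 255;   // one bright voxel

    std::vector<uint8_t> mip(static_cast<std::size_t>(nx) * ny, 0);

    for (int z = 0; z < nz; ++z) {
        const uint8_t* slice = &volume[(std::size_t)z * ny * nx];
        for (int y = 0; y < ny; ++y)
            for (int x = 0; x < nx; ++x) {
                std::size_t p = (std::size_t)y * nx + x;
                mip[p] = std::max(mip[p], slice[p]);
            }
    }

    std::cout << "max projected value: "
              << (int)*std::max_element(mip.begin(), mip.end()) << "\n";  // 255
}
```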

  3. Managing coherence via put/get windows

    DOEpatents

    Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton on Hudson, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Hoenicke, Dirk [Ossining, NY; Ohmacht, Martin [Yorktown Heights, NY

    2011-01-11

    A method and apparatus for managing coherence between two processors of a two processor node of a multi-processor computer system. Generally the present invention relates to a software algorithm that simplifies and significantly speeds the management of cache coherence in a message passing parallel computer, and to hardware apparatus that assists this cache coherence algorithm. The software algorithm uses the opening and closing of put/get windows to coordinate the activities required to achieve cache coherence. The hardware apparatus may be an extension to the hardware address decode, that creates, in the physical memory address space of the node, an area of virtual memory that (a) does not actually exist, and (b) is therefore able to respond instantly to read and write requests from the processing elements.

  4. Managing coherence via put/get windows

    DOEpatents

    Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton on Hudson, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Hoenicke, Dirk [Ossining, NY; Ohmacht, Martin [Yorktown Heights, NY

    2012-02-21

    A method and apparatus for managing coherence between two processors of a two processor node of a multi-processor computer system. Generally the present invention relates to a software algorithm that simplifies and significantly speeds the management of cache coherence in a message passing parallel computer, and to hardware apparatus that assists this cache coherence algorithm. The software algorithm uses the opening and closing of put/get windows to coordinate the activities required to achieve cache coherence. The hardware apparatus may be an extension to the hardware address decode, that creates, in the physical memory address space of the node, an area of virtual memory that (a) does not actually exist, and (b) is therefore able to respond instantly to read and write requests from the processing elements.

  5. Managing coherence via put/get windows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blumrich, Matthias A; Chen, Dong; Coteus, Paul W

    A method and apparatus for managing coherence between two processors of a two processor node of a multi-processor computer system. Generally the present invention relates to a software algorithm that simplifies and significantly speeds the management of cache coherence in a message passing parallel computer, and to hardware apparatus that assists this cache coherence algorithm. The software algorithm uses the opening and closing of put/get windows to coordinate the activities required to achieve cache coherence. The hardware apparatus may be an extension to the hardware address decode, that creates, in the physical memory address space of the node, an area of virtual memory that (a) does not actually exist, and (b) is therefore able to respond instantly to read and write requests from the processing elements.

  6. Recall from Semantic and Episodic Memory.

    ERIC Educational Resources Information Center

    Gillund, Gary; Perlmutter, Marion

    Although research in episodic recall memory, comparing younger and older adults, favors the younger adults, findings in semantic memory research are less consistent. To examine age differences in semantic and episodic memory recall, 72 young adults (mean age, 20.8) and 72 older adults (mean age 71) completed three memory tests under varied…

  7. Effects of General Medical Health on Alzheimer Progression: the Cache County Dementia Progression Study

    PubMed Central

    Leoutsakos, Jeannie-Marie S.; Han, Dingfen; Mielke, Michelle M.; Forrester, Sarah N.; Tschanz, JoAnn T.; Corcoran, Chris D.; Green, Robert C.; Norton, Maria C.; Welsh-Bohmer, Kathleen A.; Lyketsos, Constantine G.

    2012-01-01

    Background: Several observational studies suggested a link between health status and rate of decline among individuals with Alzheimer's disease (AD). We sought to quantify the relationship in a population-based study of incident AD, and to compare global comorbidity ratings to counts of comorbid conditions and medications as predictors of AD progression. Methods: Design: Case-only cohort study arising from a population-based longitudinal study of memory and aging. Setting: Cache County, Utah. Participants: 335 individuals with incident AD followed for up to 11 years. Measurements: Patient descriptors included sex, age, education, dementia duration at baseline, and APOE genotype. Measures of health status made at each visit included the GMHR (General Medical Health Rating), number of comorbid medical conditions, and number of non-psychiatric medications. Dementia outcomes included the Mini-Mental State Exam (MMSE), Clinical Dementia Rating – sum of boxes (CDR-sb), and the Neuropsychiatric Inventory (NPI). Results: Health status tended to fluctuate over time within individuals. None of the baseline medical variables (GMHR, comorbidities, non-psychiatric medications) were associated with differences in rates of decline in longitudinal linear mixed effects models. Over time, low GMHR ratings, but not comorbidities or medications, were associated with poorer outcomes (MMSE: β=−1.07, p=0.01; CDR-sb: β=1.79, p<0.001; NPI: β=4.57, p=0.01). Conclusions: Given that time-varying GMHR, but not baseline GMHR, was associated with the outcomes, there is likely a dynamic relationship between medical and cognitive health. GMHR is a more sensitive measure of health than simple counts of comorbidities or medications. Since health status is a potentially modifiable risk factor, further study is warranted. PMID:22687143

  8. Effects of general medical health on Alzheimer's progression: the Cache County Dementia Progression Study.

    PubMed

    Leoutsakos, Jeannie-Marie S; Han, Dingfen; Mielke, Michelle M; Forrester, Sarah N; Tschanz, JoAnn T; Corcoran, Chris D; Green, Robert C; Norton, Maria C; Welsh-Bohmer, Kathleen A; Lyketsos, Constantine G

    2012-10-01

    Several observational studies have suggested a link between health status and rate of decline among individuals with Alzheimer's disease (AD). We sought to quantify the relationship in a population-based study of incident AD, and to compare global comorbidity ratings to counts of comorbid conditions and medications as predictors of AD progression. This was a case-only cohort study arising from a population-based longitudinal study of memory and aging, in Cache County, Utah. Participants comprised 335 individuals with incident AD followed for up to 11 years. Patient descriptors included sex, age, education, dementia duration at baseline, and APOE genotype. Measures of health status made at each visit included the General Medical Health Rating (GMHR), number of comorbid medical conditions, and number of non-psychiatric medications. Dementia outcomes included the Mini-Mental State Examination (MMSE), Clinical Dementia Rating - sum of boxes (CDR-sb), and the Neuropsychiatric Inventory (NPI). Health status tended to fluctuate over time within individuals. None of the baseline medical variables (GMHR, comorbidities, and non-psychiatric medications) was associated with differences in rates of decline in longitudinal linear mixed effects models. Over time, low GMHR ratings, but not comorbidities or medications, were associated with poorer outcomes (MMSE: β = -1.07 p = 0.01; CDR-sb: β = 1.79 p < 0.001; NPI: β = 4.57 p = 0.01). Given that time-varying GMHR, but not baseline GMHR, was associated with the outcomes, it seems likely that there is a dynamic relationship between medical and cognitive health. GMHR is a more sensitive measure of health than simple counts of comorbidities or medications. Since health status is a potentially modifiable risk factor, further study is warranted.

  9. System for simultaneously loading program to master computer memory devices and corresponding slave computer memory devices

    NASA Technical Reports Server (NTRS)

    Hall, William A. (Inventor)

    1993-01-01

    A bus programmable slave module card for use in a computer control system is disclosed which comprises a master computer and one or more slave computer modules interfacing by means of a bus. Each slave module includes its own microprocessor, memory, and control program for acting as a single loop controller. The slave card includes a plurality of memory means (S1, S2...) corresponding to a like plurality of memory devices (C1, C2...) in the master computer, for each slave memory means its own communication lines connectable through the bus with memory communication lines of an associated memory device in the master computer, and a one-way electronic door which is switchable to either a closed condition or a one-way open condition. With the door closed, communication lines between master computer memory (C1, C2...) and slave memory (S1, S2...) are blocked. In the one-way open condition, the memory communication lines of each slave memory means (S1, S2...) connect with the memory communication lines of its associated memory device (C1, C2...) in the master computer, and the memory devices (C1, C2...) of the master computer and slave card are electrically parallel such that information seen by the master's memory is also seen by the slave's memory. The slave card is also connectable to a switch for electronically removing the slave microprocessor from the system. With the master computer and the slave card in programming mode relationship, and the slave microprocessor electronically removed from the system, loading a program in the memory devices (C1, C2...) of the master accomplishes a parallel loading into the memory devices (S1, S2...) of the slave.

  10. Programmable stream prefetch with resource optimization

    DOEpatents

    Boyle, Peter; Christ, Norman; Gara, Alan; Mawhinney, Robert; Ohmacht, Martin; Sugavanam, Krishnan

    2013-01-08

    A stream prefetch engine performs data retrieval in a parallel computing system. The engine receives a load request from at least one processor. The engine evaluates whether a first memory address requested in the load request is present and valid in a table. The engine checks whether there exists valid data corresponding to the first memory address in an array if the first memory address is present and valid in the table. The engine increments a prefetching depth of a first stream that the first memory address belongs to and fetches a cache line associated with the first memory address from the at least one cache memory device if there is not yet valid data corresponding to the first memory address in the array. The engine determines whether prefetching of additional data is needed for the first stream within its prefetching depth. The engine prefetches the additional data if the prefetching is needed.
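
    A much-simplified software analogue of the engine's bookkeeping may help (the patent describes a hardware engine): a small table tracks active streams by the next cache line they are expected to touch; a load that matches a stream deepens its prefetch depth and issues prefetches for the following lines, here via the GCC/Clang __builtin_prefetch intrinsic. Table size, depth limit, and replacement policy are arbitrary assumptions.

```cpp
// Toy software stream prefetcher: a fixed-size table of streams keyed by the
// next expected cache line; a matching load deepens the stream and prefetches
// ahead, a non-matching load installs a new stream entry.
#include <array>
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <vector>

constexpr std::size_t kLine = 64;        // assumed cache line size in bytes
constexpr int kMaxDepth = 8;             // assumed maximum prefetch depth

struct Stream {
    std::uintptr_t nextLine = 0;         // next line this stream should touch
    int depth = 1;                       // how many lines ahead to prefetch
    bool valid = false;
};

class StreamPrefetcher {
public:
    void onLoad(const void* addr) {
        std::uintptr_t line =
            reinterpret_cast<std::uintptr_t>(addr) & ~(std::uintptr_t)(kLine - 1);
        for (Stream& s : table_) {
            if (s.valid && s.nextLine == line) {          // stream hit
                if (s.depth < kMaxDepth) ++s.depth;       // deepen the stream
                for (int d = 1; d <= s.depth; ++d)        // prefetch ahead
                    __builtin_prefetch(reinterpret_cast<const void*>(line + d * kLine));
                s.nextLine = line + kLine;
                return;
            }
        }
        // No match: install a new stream entry (round-robin replacement).
        Stream& s = table_[next_++ % table_.size()];
        s.nextLine = line + kLine;
        s.depth = 1;
        s.valid = true;
    }

private:
    std::array<Stream, 4> table_{};      // small, fixed-size stream table
    std::size_t next_ = 0;
};

int main() {
    std::vector<char> data(1 << 20, 1);
    StreamPrefetcher pf;
    long sum = 0;
    for (std::size_t i = 0; i < data.size(); i += kLine) {
        pf.onLoad(&data[i]);             // tell the "engine" about the access
        sum += data[i];
    }
    std::cout << "sum = " << sum << "\n";
}
```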

  11. Controlled replication: reduce the capacity occupied by redundant replicas in tiled chip multiprocessors

    NASA Astrophysics Data System (ADS)

    Li, Hao; Xie, Lunguo

    2013-03-01

    The design of the cache system for a Chip Multiprocessor (CMP) faces many challenges because future CMPs will have more cores and greater on-chip cache capacity. There are two basic design schemes for the L2 cache: a private scheme, in which each L2 slice is treated as a private L2 cache, and a shared scheme, in which all L2 slices are treated as a large L2 cache shared by all cores. Private caches provide the lowest hit latency but reduce the total effective cache capacity. A shared L2 cache increases the effective cache capacity but has long hit latencies when data is on a remote tile. This paper presents a new Controlled Replication (CR) policy to reduce the capacity occupied by redundant shared replicas. The new CR policy provides greater effective capacity than the victim replication scheme and lower hit latency than the shared scheme. We evaluate the various schemes using full-system simulation of parallel applications. Results show that CR reduces the average memory access latency of the shared scheme by an average of 13%, providing better overall performance than the victim replication and shared schemes.
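
    As a rough, hedged illustration of the general idea (the paper's exact policy and thresholds are not reproduced here), the sketch below replicates a remote shared line into the local L2 slice only after the line has shown reuse, so rarely reused data never consumes local capacity. The class name, reuse counter, and threshold of two accesses are assumptions made for this example.

      // One possible "controlled replication" decision, written as an assumption:
      // replicate a remote shared line into the local slice only after reuse.
      #include <cstdint>
      #include <iostream>
      #include <unordered_map>
      #include <unordered_set>

      class LocalSlice {
      public:
          // Returns true if the access hits a local replica.
          bool access(uint64_t line) {
              if (replicas_.count(line)) return true;        // local replica hit
              if (++reuse_[line] >= kReplicateAfter) {        // remote hit; count reuse
                  replicas_.insert(line);                     // controlled replication
                  reuse_.erase(line);
              }
              return false;                                   // serviced by the home slice
          }

      private:
          static constexpr int kReplicateAfter = 2;           // assumed reuse threshold
          std::unordered_set<uint64_t> replicas_;             // replicas held locally
          std::unordered_map<uint64_t, int> reuse_;           // per-line reuse counters
      };

      int main() {
          LocalSlice slice;
          std::cout << slice.access(42) << " "    // 0: remote access, reuse counted
                    << slice.access(42) << " "    // 0: remote access, replica created
                    << slice.access(42) << "\n";  // 1: local replica hit
      }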

  12. Norms for CERAD Constructional Praxis Recall

    PubMed Central

    Fillenbaum, Gerda G.; Burchett, Bruce M.; Unverzagt, Frederick W.; Rexroth, Daniel F.; Welsh-Bohmer, Kathleen

    2012-01-01

    Recall of the 4-item constructional praxis measure was a later addition to the Consortium to Establish a Registry for Alzheimer’s Disease (CERAD) neuropsychological battery. Norms for this measure, based on cognitively intact African Americans age ≥70 (Indianapolis-Ibadan Dementia Project, N=372), European American participants age ≥66 (Cache County Study of Memory, Health and Aging, N=507), and European American CERAD clinic controls age ≥50 (N=182), are presented here. Performance varied by site: by sex, education, and age (African Americans in Indianapolis); by education and age (Cache County European Americans); and by age only (CERAD European American controls). Performance declined with increased age, within age with less education, and was poorer for women. Means, standard deviations, and percentiles are presented separately for each sample. PMID:21992077

  13. Efficient Maintenance and Update of Nonbonded Lists in Macromolecular Simulations.

    PubMed

    Chowdhury, Rezaul; Beglov, Dmitri; Moghadasi, Mohammad; Paschalidis, Ioannis Ch; Vakili, Pirooz; Vajda, Sandor; Bajaj, Chandrajit; Kozakov, Dima

    2014-10-14

    Molecular mechanics and dynamics simulations use distance-based cutoff approximations for faster computation of pairwise van der Waals and electrostatic energy terms. These approximations traditionally use a precalculated and periodically updated list of interacting atom pairs, known as the "nonbonded neighborhood lists" or nblists, in order to reduce the overhead of finding atom pairs that are within the distance cutoff. The size of nblists grows linearly with the number of atoms in the system and superlinearly with the distance cutoff, and as a result, they require a significant amount of memory for large molecular systems. The high space usage leads to poor cache performance, which slows computation for large distance cutoffs. Also, the high cost of updates means that one cannot afford to keep the data structure always synchronized with the configuration of the molecules when efficiency is at stake. We propose a dynamic octree data structure for implicit maintenance of nblists using space linear in the number of atoms but independent of the distance cutoff. The list can be updated very efficiently as the coordinates of atoms change during the simulation. Unlike explicit nblists, a single octree works for all distance cutoffs. In addition, the octree is a cache-friendly data structure, and hence, it is less prone to cache miss slowdowns on modern memory hierarchies than nblists. Octrees use almost 2 orders of magnitude less memory, which is crucial for simulation of large systems, and while they are comparable in performance to nblists when the distance cutoff is small, they outperform nblists for larger systems and large cutoffs. Our tests show that the octree implementation is approximately 1.5 times faster in practical use case scenarios as compared to nblists.
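
    A minimal sketch of the kind of cutoff-independent neighbour query an octree enables is shown below. It is our own illustration, not the authors' implementation: the leaf capacity, the pruning rule, and the absence of periodic boundaries or incremental updates are simplifying assumptions. Storage stays linear in the number of atoms and the same tree serves any cutoff.

      // Illustrative octree over atom positions; storage is linear in the number
      // of atoms and the same tree answers queries for any cutoff distance.
      #include <array>
      #include <cstddef>
      #include <iostream>
      #include <memory>
      #include <vector>

      struct Vec3 { double x, y, z; };

      struct Node {
          Vec3 lo, hi;                                   // axis-aligned cell bounds
          std::vector<int> atoms;                        // atom indices (leaves only)
          std::array<std::unique_ptr<Node>, 8> child;    // eight octants (null in a leaf)
      };

      double dist2(const Vec3& a, const Vec3& b) {
          double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
          return dx * dx + dy * dy + dz * dz;
      }

      // Squared distance from a point to a box (0 if inside): used to prune octants.
      double box_dist2(const Vec3& p, const Vec3& lo, const Vec3& hi) {
          auto clamp = [](double v, double a, double b) { return v < a ? a : (v > b ? b : v); };
          Vec3 q{clamp(p.x, lo.x, hi.x), clamp(p.y, lo.y, hi.y), clamp(p.z, lo.z, hi.z)};
          return dist2(p, q);
      }

      std::unique_ptr<Node> build(const std::vector<Vec3>& pos, std::vector<int> idx,
                                  Vec3 lo, Vec3 hi, std::size_t leaf_cap = 2) {
          auto n = std::make_unique<Node>();
          n->lo = lo; n->hi = hi;
          if (idx.size() <= leaf_cap) { n->atoms = std::move(idx); return n; }
          Vec3 mid{(lo.x + hi.x) / 2, (lo.y + hi.y) / 2, (lo.z + hi.z) / 2};
          std::array<std::vector<int>, 8> parts;
          for (int i : idx) {
              int o = (pos[i].x > mid.x) | ((pos[i].y > mid.y) << 1) | ((pos[i].z > mid.z) << 2);
              parts[o].push_back(i);
          }
          for (int o = 0; o < 8; ++o) {
              if (parts[o].empty()) continue;
              Vec3 clo{(o & 1) ? mid.x : lo.x, (o & 2) ? mid.y : lo.y, (o & 4) ? mid.z : lo.z};
              Vec3 chi{(o & 1) ? hi.x : mid.x, (o & 2) ? hi.y : mid.y, (o & 4) ? hi.z : mid.z};
              n->child[o] = build(pos, std::move(parts[o]), clo, chi, leaf_cap);
          }
          return n;
      }

      // Collect all atoms within 'cutoff' of point p (the query atom itself included).
      void query(const Node* n, const std::vector<Vec3>& pos, const Vec3& p,
                 double cutoff, std::vector<int>& out) {
          if (!n || box_dist2(p, n->lo, n->hi) > cutoff * cutoff) return;   // prune subtree
          for (int i : n->atoms)
              if (dist2(pos[i], p) <= cutoff * cutoff) out.push_back(i);
          for (const auto& c : n->child) query(c.get(), pos, p, cutoff, out);
      }

      int main() {
          std::vector<Vec3> pos = {{0.1, 0.1, 0.1}, {0.2, 0.15, 0.1}, {0.9, 0.9, 0.9}, {0.5, 0.5, 0.5}};
          auto root = build(pos, {0, 1, 2, 3}, {0, 0, 0}, {1, 1, 1});
          std::vector<int> nbrs;
          query(root.get(), pos, pos[0], 0.3, nbrs);
          std::cout << nbrs.size() << " atoms within the cutoff of atom 0\n";   // prints 2
      }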

  14. Methods for compressible fluid simulation on GPUs using high-order finite differences

    NASA Astrophysics Data System (ADS)

    Pekkilä, Johannes; Väisälä, Miikka S.; Käpylä, Maarit J.; Käpylä, Petri J.; Anjum, Omer

    2017-08-01

    We focus on implementing and optimizing a sixth-order finite-difference solver for simulating compressible fluids on a GPU using third-order Runge-Kutta integration. Since graphics processing units perform well in data-parallel tasks, they are an attractive platform for fluid simulation. However, high-order stencil computation is memory-intensive with respect to both main memory and the caches of the GPU. We present two approaches for simulating compressible fluids using 55-point and 19-point stencils. We seek to reduce the requirements for memory bandwidth and cache size in our methods by using cache blocking and decomposing a latency-bound kernel into several bandwidth-bound kernels. Our fastest implementation is bandwidth-bound and integrates 343 million grid points per second on a Tesla K40t GPU, achieving a 3.6× speedup over a comparable hydrodynamics solver benchmarked on two Intel Xeon E5-2690v3 processors. Our alternative GPU implementation is latency-bound and achieves the rate of 168 million updates per second.
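
    The kernel-decomposition idea mentioned above can be sketched in plain C++ (the authors' code is GPU code with much wider stencils): one fused update is split into two streaming passes, the first writing an intermediate derivative array and the second consuming it, so each pass is bandwidth-bound rather than holding all state at once. The 3-point stencil and forward-Euler step below are simplifications for illustration only.

      // Plain C++ sketch of kernel decomposition: one fused update split into two
      // streaming passes. The 3-point stencil and Euler step are simplifications;
      // the paper uses 6th-order stencils and 3rd-order Runge-Kutta on a GPU.
      #include <cstddef>
      #include <vector>

      // Pass 1: central difference along x, written to an intermediate array.
      void ddx(const std::vector<double>& f, std::vector<double>& dfdx,
               std::size_t nx, double inv_2dx) {
          for (std::size_t i = 1; i + 1 < nx; ++i)
              dfdx[i] = (f[i + 1] - f[i - 1]) * inv_2dx;
      }

      // Pass 2: advance the field one step using the precomputed derivative.
      void advance(std::vector<double>& f, const std::vector<double>& dfdx,
                   double vel, double dt, std::size_t nx) {
          for (std::size_t i = 1; i + 1 < nx; ++i)
              f[i] -= vel * dfdx[i] * dt;
      }

      int main() {
          const std::size_t nx = 1 << 20;
          std::vector<double> f(nx, 0.0), dfdx(nx, 0.0);
          f[nx / 2] = 1.0;                        // a single bump to advect
          const double dx = 1.0, dt = 0.1, vel = 1.0;
          for (int step = 0; step < 10; ++step) {
              ddx(f, dfdx, nx, 1.0 / (2.0 * dx)); // bandwidth-bound pass 1
              advance(f, dfdx, vel, dt, nx);      // bandwidth-bound pass 2
          }
      }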

  15. Tough times call for bigger brains

    PubMed Central

    Pravosudov, Vladimir V

    2009-01-01

    Memory is crucial for survival in many animals. Spatial memory in particular is important for food-caching species and may be influenced by selective pressures such as climate. The influence of climate on memory may be facilitated through the hippocampus (Hp), the part of the brain responsible in part for spatial memory. In a recent paper, we conducted the first large-scale test of the relationship between memory, the climate and the brain in a single food-caching species, the black-capped chickadee (Poecile atricapillus). We found that birds from more harsh northern climates had significantly larger hippocampal volumes and more neurons than those from more mild southern latitudes. This work suggests that environmental pressures are capable of influencing specific brain regions, which may result in enhanced memory, and hence survival, in harsh climates. This work gives us a better understanding of how the brain responds to different environments and how animals can adapt to their environment in general. PMID:19641741

  16. An effective write policy for software coherence schemes

    NASA Technical Reports Server (NTRS)

    Chen, Yung-Chin; Veidenbaum, Alexander V.

    1992-01-01

    The authors study the write behavior and evaluate the performance of various write strategies and buffering techniques for a MIN-based multiprocessor system using the simple software coherence scheme. Hit ratios, memory latencies, total execution time, and total write traffic are used as the performance indices. The write-through write-allocate no-fetch cache using a write-back write buffer is shown to perform better than both conventional write-through and write-back caches. This type of write buffer is effective in reducing both the volume and the burstiness of write traffic. On average, the use of a write-back cache reduces the total write traffic by 60 percent relative to a write-through cache.
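
    A toy traffic model, written for this summary and not taken from the paper, makes the mechanism behind the reported reduction concrete: under write-through every store costs a word of memory traffic, while write-back-style buffering coalesces repeated stores to the same line into a single line-sized write-back. The line size and store stream below are assumptions.

      // Toy write-traffic model (an assumption, not the paper's simulator):
      // write-through sends every stored word to memory; write-back-style buffering
      // coalesces stores so each dirty line is written back once.
      #include <cstddef>
      #include <cstdint>
      #include <iostream>
      #include <unordered_set>
      #include <vector>

      constexpr uint64_t kWordsPerLine = 8;        // assumed line size in words

      int main() {
          // A store stream with locality: four passes over a 64-word array.
          std::vector<uint64_t> stores;
          for (int pass = 0; pass < 4; ++pass)
              for (uint64_t w = 0; w < 64; ++w) stores.push_back(w);

          // (a) Write-through: one word of memory traffic per store.
          std::size_t wt_words = stores.size();

          // (b) Write-back style: each dirty line is written back once (kWordsPerLine
          // words), assuming no intermediate evictions: the best case for coalescing.
          std::unordered_set<uint64_t> dirty_lines;
          for (uint64_t w : stores) dirty_lines.insert(w / kWordsPerLine);
          std::size_t wb_words = dirty_lines.size() * kWordsPerLine;

          std::cout << "write-through traffic: " << wt_words << " words\n"
                    << "write-back traffic:    " << wb_words << " words\n";
      }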

  17. Megafloods and Clovis cache at Wenatchee, Washington

    NASA Astrophysics Data System (ADS)

    Waitt, Richard B.

    2016-05-01

    Immense late Wisconsin floods from glacial Lake Missoula drowned the Wenatchee reach of Washington's Columbia valley by different routes. The earliest debacles, nearly 19,000 cal yr BP, raged 335 m deep down the Columbia and built high Pangborn bar at Wenatchee. As advancing ice blocked the northwest of Columbia valley, several giant floods descended Moses Coulee and backflooded up the Columbia past Wenatchee. Ice then blocked Moses Coulee, and Grand Coulee to Quincy basin became the westmost floodway. From Quincy basin many Missoula floods backflowed 50 km upvalley to Wenatchee 18,000 to 15,500 years ago. Receding ice dammed glacial Lake Columbia centuries more-till it burst about 15,000 years ago. After Glacier Peak ashfall about 13,600 years ago, smaller great flood(s) swept down the Columbia from glacial Lake Kootenay in British Columbia. The East Wenatchee cache of huge fluted Clovis points had been laid atop Pangborn bar after the Glacier Peak ashfall, then buried by loess. Clovis people came five and a half millennia after the early gigantic Missoula floods, two and a half millennia after the last small Missoula flood, and two millennia after the glacial Lake Columbia flood. People likely saw outburst flood(s) from glacial Lake Kootenay.

  18. Megafloods and Clovis cache at Wenatchee, Washington

    USGS Publications Warehouse

    Waitt, Richard B.

    2016-01-01

    Immense late Wisconsin floods from glacial Lake Missoula drowned the Wenatchee reach of Washington's Columbia valley by different routes. The earliest debacles, nearly 19,000 cal yr BP, raged 335 m deep down the Columbia and built high Pangborn bar at Wenatchee. As advancing ice blocked the northwest of Columbia valley, several giant floods descended Moses Coulee and backflooded up the Columbia past Wenatchee. Ice then blocked Moses Coulee, and Grand Coulee to Quincy basin became the westmost floodway. From Quincy basin many Missoula floods backflowed 50 km upvalley to Wenatchee 18,000 to 15,500 years ago. Receding ice dammed glacial Lake Columbia centuries more—till it burst about 15,000 years ago. After Glacier Peak ashfall about 13,600 years ago, smaller great flood(s) swept down the Columbia from glacial Lake Kootenay in British Columbia. The East Wenatchee cache of huge fluted Clovis points had been laid atop Pangborn bar after the Glacier Peak ashfall, then buried by loess. Clovis people came five and a half millennia after the early gigantic Missoula floods, two and a half millennia after the last small Missoula flood, and two millennia after the glacial Lake Columbia flood. People likely saw outburst flood(s) from glacial Lake Kootenay.

  19. Expert Systems on Multiprocessor Architectures. Volume 2. Technical Reports

    DTIC Science & Technology

    1991-06-01

    Report RC 12936 (#58037). IBM T. J. Watson Research Center. July 1987. Alan Jay Smith. Cache memories. Computing Surveys, 14(3):473-530...basic-shared is an instrument for a shared memory design. The component panels are processor-qload-scrolling-bar-panel, memory-qload-scrolling-bar-panel

  20. Algorithms for Data Intensive Applications on Intelligent and Smart Memories

    DTIC Science & Technology

    2003-03-01

    editors). Parallel Algorithms and Architectures. North Holland, 1986. [8] P. Diniz. USC ISI, Personal Communication, March, 2001. [9] M. Frigo, C. E. ...hierarchy as well as the Translation Lookaside Buffer (TLB) affect the effectiveness of cache-friendly optimizations. These penalties vary among...processors and cause large variations in the effectiveness of cache performance optimizations. The area of graph problems is fundamental in a wide variety of

  1. Presettlement Forests of the Black Swamp Area, Cache River,Woodruff County, Arkansas, from Notes of the First Land Survey

    Treesearch

    Thomas L. Foti

    2001-01-01

    Relationships between forest vegetation and soil were reconstructed from field notes of the 1846 Public Land Survey (PLS) along a portion of the Cache River including Black Swamp. Locations of corners were digitized along with species, diameter, and distance from section or quarter-section corners. Trees were grouped for analysis according to occurrence on groups of...

  2. Prospective study of Dietary Approaches to Stop Hypertension- and Mediterranean-style dietary patterns and age-related cognitive change: the Cache County Study on Memory, Health and Aging.

    PubMed

    Wengreen, Heidi; Munger, Ronald G; Cutler, Adele; Quach, Anna; Bowles, Austin; Corcoran, Christopher; Tschanz, Joann T; Norton, Maria C; Welsh-Bohmer, Kathleen A

    2013-11-01

    Healthy dietary patterns may protect against age-related cognitive decline, but results of studies have been inconsistent. We examined associations between Dietary Approaches to Stop Hypertension (DASH)- and Mediterranean-style dietary patterns and age-related cognitive change in a prospective, population-based study. Participants included 3831 men and women ≥65 y of age who were residents of Cache County, UT, in 1995. Cognitive function was assessed by using the Modified Mini-Mental State Examination (3MS) ≤4 times over 11 y. Diet-adherence scores were computed by summing across the energy-adjusted rank-order of individual food and nutrient components and categorizing participants into quintiles of the distribution of the diet accordance score. Mixed-effects repeated-measures models were used to examine 3MS scores over time across increasing quintiles of dietary accordance scores and individual food components that comprised each score. The range of rank-order DASH and Mediterranean diet scores was 1661-25,596 and 2407-26,947, respectively. Higher DASH and Mediterranean diet scores were associated with higher average 3MS scores. People in quintile 5 of DASH averaged 0.97 points higher than those in quintile 1 (P = 0.001). The corresponding difference for Mediterranean quintiles was 0.94 (P = 0.001). These differences were consistent over 11 y. Higher intakes of whole grains and nuts and legumes were also associated with higher average 3MS scores [mean quintile 5 compared with 1 differences: 1.19 (P < 0.001), 1.22 (P < 0.001), respectively]. Higher levels of accordance with both the DASH and Mediterranean dietary patterns were associated with consistently higher levels of cognitive function in elderly men and women over an 11-y period. Whole grains and nuts and legumes were positively associated with higher cognitive functions and may be core neuroprotective foods common to various healthy plant-centered diets around the globe.

  3. Improvement of autobiographic memory recovery by means of sad music in Alzheimer's Disease type dementia.

    PubMed

    Meilán García, Juan José; Iodice, Rosario; Carro, Juan; Sánchez, José Antonio; Palmero, Francisco; Mateos, Ana María

    2012-06-01

    Autobiographic memory undergoes progressive deterioration during the evolution of Alzheimer's disease (AD). The aim of this study was to analyze mechanisms which facilitate recovery of autobiographic memories. We used a repeatedly employed mechanism, music, with the addition of an emotional factor. Autobiographic memory provoked by a variety of sounds (music which was happy, sad, lacking emotion, ambient noise in a coffee bar and no sound) was analyzed in a sample of 25 patients with AD. Emotional music, especially sad music for remote memories, was found to be the most effective kind for recall of autobiographic experiences. The factor evoking the memory is not the music itself, but rather the emotion associated with it, and is useful for semantic rather than episodic memory.

  4. GPU-Accelerated Forward and Back-Projections with Spatially Varying Kernels for 3D DIRECT TOF PET Reconstruction.

    PubMed

    Ha, S; Matej, S; Ispiryan, M; Mueller, K

    2013-02-01

    We describe a GPU-accelerated framework that efficiently models spatially (shift) variant system response kernels and performs forward- and back-projection operations with these kernels for the DIRECT (Direct Image Reconstruction for TOF) iterative reconstruction approach. Inherent challenges arise from the poor memory cache performance at non-axis aligned TOF directions. Focusing on the GPU memory access patterns, we utilize different kinds of GPU memory according to these patterns in order to maximize the memory cache performance. We also exploit the GPU instruction-level parallelism to efficiently hide long latencies from the memory operations. Our experiments indicate that our GPU implementation of the projection operators has slightly faster or approximately comparable time performance than FFT-based approaches using state-of-the-art FFTW routines. However, most importantly, our GPU framework can also efficiently handle any generic system response kernels, such as spatially symmetric and shift-variant as well as spatially asymmetric and shift-variant, both of which an FFT-based approach cannot cope with.

  5. GPU-Accelerated Forward and Back-Projections With Spatially Varying Kernels for 3D DIRECT TOF PET Reconstruction

    NASA Astrophysics Data System (ADS)

    Ha, S.; Matej, S.; Ispiryan, M.; Mueller, K.

    2013-02-01

    We describe a GPU-accelerated framework that efficiently models spatially (shift) variant system response kernels and performs forward- and back-projection operations with these kernels for the DIRECT (Direct Image Reconstruction for TOF) iterative reconstruction approach. Inherent challenges arise from the poor memory cache performance at non-axis aligned TOF directions. Focusing on the GPU memory access patterns, we utilize different kinds of GPU memory according to these patterns in order to maximize the memory cache performance. We also exploit the GPU instruction-level parallelism to efficiently hide long latencies from the memory operations. Our experiments indicate that our GPU implementation of the projection operators has slightly faster or approximately comparable time performance than FFT-based approaches using state-of-the-art FFTW routines. However, most importantly, our GPU framework can also efficiently handle any generic system response kernels, such as spatially symmetric and shift-variant as well as spatially asymmetric and shift-variant, both of which an FFT-based approach cannot cope with.

  6. Domain Wall Fermion Inverter on Pentium 4

    NASA Astrophysics Data System (ADS)

    Pochinsky, Andrew

    2005-03-01

    A highly optimized domain wall fermion inverter has been developed as part of the SciDAC lattice initiative. By designing the code to minimize memory bus traffic, it achieves high cache reuse and performance in excess of 2 GFlops for out-of-L2-cache problem sizes on a GigE cluster with 2.66 GHz Xeon processors. The code uses the SciDAC QMP communication library.

  7. Using ecology to guide the study of cognitive and neural mechanisms of different aspects of spatial memory in food-hoarding animals.

    PubMed

    Smulders, Tom V; Gould, Kristy L; Leaver, Lisa A

    2010-03-27

    Understanding the survival value of behaviour does not tell us how the mechanisms that control this behaviour work. Nevertheless, understanding survival value can guide the study of these mechanisms. In this paper, we apply this principle to understanding the cognitive mechanisms that support cache retrieval in scatter-hoarding animals. We believe it is too simplistic to predict that all scatter-hoarding animals will outperform non-hoarding animals on all tests of spatial memory. Instead, we argue that we should look at the detailed ecology and natural history of each species. This understanding of natural history then allows us to make predictions about which aspects of spatial memory should be better in which species. We use the natural hoarding behaviour of the three best-studied groups of scatter-hoarding animals to make predictions about three aspects of their spatial memory: duration, capacity and spatial resolution, and we test these predictions against the existing literature. Having laid out how ecology and natural history can be used to predict detailed cognitive abilities, we then suggest using this approach to guide the study of the neural basis of these abilities. We believe that this complementary approach will reveal aspects of memory processing that would otherwise be difficult to discover.

  8. Comparison of the Frontier Distributed Database Caching System to NoSQL Databases

    NASA Astrophysics Data System (ADS)

    Dykstra, Dave

    2012-12-01

    One of the main attractions of non-relational “NoSQL” databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also adds high scalability and the ability to be distributed over a wide-area for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.

  9. Meteorological and environmental aspects of one of the worst national air pollution episodes (January, 2004) in Logan, Cache Valley, Utah, USA

    NASA Astrophysics Data System (ADS)

    Malek, Esmaiel; Davis, Tess; Martin, Randal S.; Silva, Philip J.

    2006-02-01

    Logan, Utah, USA, had the nation's worst air pollution on 15 January 2004. The high concentration of PM2.5 (particulates smaller than 2.5 μm in diameter) in the air resulted from geographical, meteorological, and environmental aspects of Cache Valley. A strong inversion (increase of temperature with height) and light precipitation and/or wind were the major causes of trapping pollutants in the air. Other meteorological factors enhancing the inversion were the prolonged high atmospheric surface pressure, a snow-covered surface which plunged temperatures to as low as -23.6 °C on January 23rd, and high reflection of solar radiation (up to about 80%), which reduced solar radiation absorption during the day throughout most of January 2004. Non-meteorological factors include Cache Valley's small-basin geographical structure, which traps air and lacks a large body of water to aid air circulation (through differential heating and cooling rates for land and water), motor vehicle emissions, and excess ammonia gas as a byproduct of livestock manure and urine. The concentration of PM2.5 was monitored in downtown Logan. On January 15, 2004, the 24-h, filter-based concentration reached about 132.5 μg per cubic meter of air, an astonishingly high value; concentrations of 65 μg m⁻³ and over already indicate a health alert for everyone. These tiny particles have an enormous impact on health, aggravating heart and lung disease, triggering asthma, and even causing death. The causes of this inversion and some suggestions to alleviate wintertime particle concentrations in Cache Valley are addressed in this article.

  10. The Development of Young Children's Memory Strategies: First Findings from the Wurzburg Longitudinal Memory Study

    ERIC Educational Resources Information Center

    Schneider, Wolfgang; Kron, Veronika; Hunnerkopf, Michael; Krajewski, Kristin

    2004-01-01

    This article reports the first findings of the Wurzburg Longitudinal Memory Study, which focuses on children's verbal memory development, particularly the acquisition of memory strategies. At the beginning of the study, 100 kindergarten children (mean age 6-and-1/2 years) were tested on various memory measures, including sort-recall, text recall,…

  11. A study of the relationship between the performance and dependability of a fault-tolerant computer

    NASA Technical Reports Server (NTRS)

    Goswami, Kumar K.

    1994-01-01

    This thesis studies the relationship by creating a tool (FTAPE) that integrates a high-stress workload generator with fault injection and by using the tool to evaluate system performance under error conditions. The workloads are composed of processes which are formed from atomic components that represent CPU, memory, and I/O activity. The fault injector is software-implemented and is capable of injecting faults into any memory-addressable location, including special registers and caches. This tool has been used to study a Tandem Integrity S2 Computer. Workloads with varying numbers of processes and varying compositions of CPU, memory, and I/O activity are first characterized in terms of performance. Then faults are injected into these workloads. The results show that as the number of concurrent processes increases, the mean fault latency initially increases due to increased contention for the CPU. However, for even higher numbers of processes (more than 3 processes), the mean latency decreases because long-latency faults are paged out before they can be activated.

  12. Accelerating a Particle-in-Cell Simulation Using a Hybrid Counting Sort

    NASA Astrophysics Data System (ADS)

    Bowers, K. J.

    2001-11-01

    In this article, performance limitations of the particle advance in a particle-in-cell (PIC) simulation are discussed. It is shown that the memory subsystem and cache-thrashing severely limit the speed of such simulations. Methods to implement a PIC simulation under such conditions are explored. An algorithm based on a counting sort is developed which effectively eliminates PIC simulation cache thrashing. Sustained performance gains of 40 to 70 percent are measured on commodity workstations for a minimal 2d2v electrostatic PIC simulation. More complete simulations are expected to have even better results as larger simulations are usually even more memory subsystem limited.
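
    A minimal sketch of the counting-sort idea (ours, with an assumed 1D cell decomposition and a tiny particle list) is given below: particles are grouped by cell index so that the deposit/gather phases touch the grid in a cache-friendly order, which is the effect the article credits with eliminating cache thrashing.

      // Counting sort of particles by cell index (our sketch with an assumed 1D
      // cell decomposition), so particles touching the same cell become contiguous.
      #include <cstddef>
      #include <iostream>
      #include <vector>

      struct Particle { double x, vx; };

      int main() {
          const std::size_t ncell = 8;
          const double cell_w = 1.0;
          std::vector<Particle> p = {{5.2, 0}, {0.4, 0}, {5.9, 0}, {2.1, 0}, {0.7, 0}};

          auto cell_of = [&](const Particle& q) {
              return static_cast<std::size_t>(q.x / cell_w);
          };

          // 1. Count particles per cell (offset by one for the prefix sum below).
          std::vector<std::size_t> count(ncell + 1, 0);
          for (const auto& q : p) ++count[cell_of(q) + 1];

          // 2. Exclusive prefix sum gives each cell's start offset.
          for (std::size_t c = 1; c <= ncell; ++c) count[c] += count[c - 1];

          // 3. Stable scatter into a new array in cell order.
          std::vector<Particle> sorted(p.size());
          std::vector<std::size_t> next = count;
          for (const auto& q : p) sorted[next[cell_of(q)]++] = q;

          for (const auto& q : sorted) std::cout << q.x << " ";
          std::cout << "\n";   // particles now grouped by cell: 0.4 0.7 2.1 5.2 5.9
      }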

  13. DESTINY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2015-03-10

    DESTINY is a comprehensive tool for modeling 3D and 2D cache designs using SRAM, embedded DRAM (eDRAM), spin-transfer torque RAM (STT-RAM), resistive RAM (ReRAM), and phase-change RAM (PCM). In its purpose, it is similar to CACTI, CACTI-3DD, or NVSim. DESTINY is very useful for performing design-space exploration across several dimensions, such as optimizing for a target (e.g., latency, area, or energy-delay product) for a given memory technology, choosing the suitable memory technology or fabrication method (i.e., 2D vs. 3D) for a given optimization target, etc. DESTINY has been validated against several cache prototypes. DESTINY is expected to boost studies of next-generation memory architectures used in systems ranging from mobile devices to extreme-scale supercomputers.

  14. Parallelization Issues and Particle-In Codes.

    NASA Astrophysics Data System (ADS)

    Elster, Anne Cathrine

    1994-01-01

    "Everything should be made as simple as possible, but not simpler." Albert Einstein. The field of parallel scientific computing has concentrated on parallelization of individual modules such as matrix solvers and factorizers. However, many applications involve several interacting modules. Our analyses of a particle-in-cell code modeling charged particles in an electric field, show that these accompanying dependencies affect data partitioning and lead to new parallelization strategies concerning processor, memory and cache utilization. Our test-bed, a KSR1, is a distributed memory machine with a globally shared addressing space. However, most of the new methods presented hold generally for hierarchical and/or distributed memory systems. We introduce a novel approach that uses dual pointers on the local particle arrays to keep the particle locations automatically partially sorted. Complexity and performance analyses with accompanying KSR benchmarks, have been included for both this scheme and for the traditional replicated grids approach. The latter approach maintains load-balance with respect to particles. However, our results demonstrate it fails to scale properly for problems with large grids (say, greater than 128-by-128) running on as few as 15 KSR nodes, since the extra storage and computation time associated with adding the grid copies, becomes significant. Our grid partitioning scheme, although harder to implement, does not need to replicate the whole grid. Consequently, it scales well for large problems on highly parallel systems. It may, however, require load balancing schemes for non-uniform particle distributions. Our dual pointer approach may facilitate this through dynamically partitioned grids. We also introduce hierarchical data structures that store neighboring grid-points within the same cache -line by reordering the grid indexing. This alignment produces a 25% savings in cache-hits for a 4-by-4 cache. A consideration of the input data's effect on

  15. Life memories and the ability to act: the meaning of autonomy and participation for older people when living with chronic illness.

    PubMed

    Hedman, Maria; Pöder, Ulrika; Mamhidir, Anna-Greta; Nilsson, Annika; Kristofferzon, Marja-Leena; Häggström, Elisabeth

    2015-12-01

    There is a lack of knowledge about how older people living with chronic illness describe the meaning of autonomy and participation, indicating a risk for reduced autonomy and participation in their everyday life. The purpose of this study was to describe the meaning of autonomy and participation among older people living with chronic illness in accordance with their lived experience. The design was descriptive with a phenomenological approach guided by Giorgi's descriptive phenomenological psychological method. Purposive sampling was used, and 16 older people living with chronic illness who lived in an ordinary home participated in individual interviews. The findings showed that the meaning of autonomy and participation among the older people emerged when it was challenged and evoked emotional considerations of the lived experience of having a chronic illness. It involved living a life apart, yet still being someone who is able, trustworthy and given responsibility--still being seen and acknowledged. The meaning of autonomy and participation was derived through life memories and used by the older people in everyday life for adjustment or adaption to the present life and the future. Our conclusion is that autonomy and participation were considered in relation to older people's life memories in the past, in their present situation and also their future wishes. Ability or disability is of less importance than the meaning of everyday life among older people. We suggest using fewer labels for limitations in everyday life when caring for older people and more use of the phrase 'ability to act' in different ways, based on older people's descriptions of the meaning of autonomy and participation. © 2015 Nordic College of Caring Science.

  16. Stressful life events and cognitive decline in late life: moderation by education and age. The Cache County Study.

    PubMed

    Tschanz, Joann T; Pfister, Roxane; Wanzek, Joseph; Corcoran, Chris; Smith, Ken; Tschanz, Brian T; Steffens, David C; Østbye, Truls; Welsh-Bohmer, Kathleen A; Norton, Maria C

    2013-08-01

    Stressful life events (SLE) have been associated with increased dementia risk, but their association with cognitive decline has been inconsistent. In a longitudinal population-based study of older individuals, we examined the association between SLE and cognitive decline, and the role of potential effect modifiers. A total of 2665 non-demented participants of the Cache County Memory Study completed an SLE questionnaire at Wave 2 and were revisited 4 and 7 years later. The events were represented via several scores: total number, subjective rating (negative, positive, and unexpected), and a weighted summary based on their impact. Cognition was assessed at each visit with the modified Mini-Mental State Exam. General linear models were used to examine the association between SLE scores and cognition. Effect modification by age, education, and APOE genotype was tested. Years of formal education (p = 0.006) modified the effect of number of SLE, and age (p = 0.009) modified the effect of negative SLE on the rate of cognitive decline. Faster decline was observed among those with fewer years of education experiencing more SLE and also among younger participants experiencing more negative SLE. There was no association between other indicators of SLE and cognitive decline. APOE genotype did not modify any of the aforementioned associations. The effects of SLE on cognition in late life are complex and vary by individual factors such as age and education. These results may explain some of the contradictory findings in the literature. Copyright © 2012 John Wiley & Sons, Ltd.

  17. NAS Applications and Advanced Algorithms

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Biswas, Rupak; VanDerWijngaart, Rob; Kutler, Paul (Technical Monitor)

    1997-01-01

    This paper examines the applications most commonly run on the supercomputers at the Numerical Aerospace Simulation (NAS) facility. It analyzes the extent to which such applications are fundamentally oriented to vector computers, and whether or not they can be efficiently implemented on hierarchical memory machines, such as systems with cache memories and highly parallel, distributed memory systems.

  18. Scalability Analysis of Gleipnir: A Memory Tracing and Profiling Tool, on Titan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Janjusic, Tommy; Kartsaklis, Christos; Wang, Dali

    2013-01-01

    Application performance is hindered by a variety of factors but most notably driven by the well-known CPU-memory speed gap (also known as the memory wall). Understanding an application's memory behavior is key if we are trying to optimize performance. Understanding application performance properties is facilitated with various performance profiling tools. The scope of profiling tools varies in complexity, ease of deployment, profiling performance, and the detail of profiled information. Specifically, using profiling tools for performance analysis is a common task when optimizing and understanding scientific applications on complex and large-scale systems such as Cray's XK7. This paper describes the performance characteristics of using Gleipnir, a memory tracing tool, on the Titan Cray XK7 system when instrumenting large applications such as the Community Earth System Model. Gleipnir is a memory tracing tool built as a plug-in tool for the Valgrind instrumentation framework. The goal of Gleipnir is to provide fine-grained trace information. The generated traces are a stream of executed memory transactions mapped to internal structures per process, thread, function, and finally the data structure or variable. Our focus was to expose tool performance characteristics when using Gleipnir in combination with external tools such as a cache simulator, Gl CSim, to characterize the tool's overall performance. In this paper we describe our experience with deploying Gleipnir on the Titan Cray XK7 system, report on the tool's ease of use, and analyze run-time performance characteristics under various workloads. While all performance aspects are important, we mainly focus on I/O characteristics analysis due to the emphasis on the tool's output, which consists of trace files. Moreover, the tool is dependent on the run-time system to provide the necessary infrastructure to expose low-level system detail; therefore, we also discuss any theoretical benefits that can be achieved if

  19. Security in the Cache and Forward Architecture for the Next Generation Internet

    NASA Astrophysics Data System (ADS)

    Hadjichristofi, G. C.; Hadjicostis, C. N.; Raychaudhuri, D.

    The future Internet architecture will be composed predominantly of wireless devices. It is evident at this stage that the TCP/IP protocol that was developed decades ago will not properly support the required network functionalities since contemporary communication profiles tend to be data-driven rather than host-based. To address this paradigm shift in data propagation, a next-generation architecture has been proposed, the Cache and Forward (CNF) architecture. This research investigates security aspects of this new Internet architecture. More specifically, we discuss content privacy, secure routing, key management and trust management. We identify security weaknesses of this architecture that need to be addressed and we derive security requirements that should guide future research directions. Aspects of the research can be adopted as a stepping-stone as we build the future Internet.

  20. Comparison of the Frontier Distributed Database Caching System to NoSQL Databases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dykstra, Dave

    One of the main attractions of non-relational NoSQL databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also adds high scalability and the ability to be distributed over a wide-area for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.

  1. A Cache Design to Exploit Structural Locality

    DTIC Science & Technology

    1991-12-01

    memory and secondary storage. Main memory was used to store the instructions and data of an executing program, while secondary storage held programs ...efficiency of the CPU and faster turnaround of executing programs. In addition to the well-known spatial and temporal aspects of locality, Hobart has...identified a third aspect, which he has called structural locality (9). This type of locality is defined as the tendency of an executing program to

  2. Vectorization, threading, and cache-blocking considerations for hydrocodes on emerging architectures

    DOE PAGES

    Fung, J.; Aulwes, R. T.; Bement, M. T.; ...

    2015-07-14

    This work reports on considerations for improving computational performance in preparation for current and expected changes to computer architecture. The algorithms studied include increasingly complex prototypes for radiation hydrodynamics codes, such as gradient routines and diffusion matrix assembly (e.g., in [1-6]). The meshes considered for the algorithms are structured or unstructured meshes. The considerations applied for performance improvements are meant to be general in terms of architecture (not specific to graphics processing units (GPUs) or multi-core machines, for example) and include techniques for vectorization, threading, tiling, and cache blocking. Out of a survey of optimization techniques on applications such as diffusion and hydrodynamics, we make general recommendations with a view toward making these techniques conceptually accessible to the applications code developer. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
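
    Of the techniques listed, cache blocking is the easiest to show compactly. The sketch below (ours, with assumed tile sizes and a simple five-point diffusion update standing in for the paper's more complex kernels) tiles the loop nest so each tile of the grid is reused from cache before the sweep moves on.

      // Cache-blocked (tiled) five-point diffusion sweep; tile sizes are assumptions.
      #include <algorithm>
      #include <cstddef>
      #include <vector>

      constexpr std::size_t TI = 64, TJ = 64;   // assumed tile sizes

      void diffuse_tiled(const std::vector<double>& u, std::vector<double>& v,
                         std::size_t n, double alpha) {
          for (std::size_t jj = 1; jj + 1 < n; jj += TJ)
              for (std::size_t ii = 1; ii + 1 < n; ii += TI)
                  // Inner loops stay inside one tile so its lines remain in cache.
                  for (std::size_t j = jj; j < std::min(jj + TJ, n - 1); ++j)
                      for (std::size_t i = ii; i < std::min(ii + TI, n - 1); ++i)
                          v[j * n + i] = u[j * n + i] + alpha *
                              (u[j * n + i - 1] + u[j * n + i + 1] +
                               u[(j - 1) * n + i] + u[(j + 1) * n + i] - 4.0 * u[j * n + i]);
      }

      int main() {
          const std::size_t n = 512;
          std::vector<double> u(n * n, 0.0), v(n * n, 0.0);
          u[(n / 2) * n + n / 2] = 1.0;         // point source
          diffuse_tiled(u, v, n, 0.1);
      }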

  3. Nutritional deficits during early development affect hippocampal structure and spatial memory later in life.

    PubMed

    Pravosudov, Vladimir V; Lavenex, Pierre; Omanska, Alicja

    2005-10-01

    Development rates vary among individuals, often as a result of direct competition for food. Survival of young might depend on their learning abilities, but it remains unclear whether learning abilities are affected by nutrition during development. The authors demonstrated that compared with controls, 1-year-old Western scrub jays (Aphelocoma californica) that experienced nutritional deficits during early posthatching development had smaller hippocampi with fewer neurons and performed worse in a cache recovery task and in a spatial version of an associative learning task. In contrast, performance of nutritionally deprived birds was similar to that of controls in 2 color versions of an associative learning task. These findings suggest that nutritional deficits during early development have long-term consequences for hippocampal structure and spatial memory, which, in turn, are likely to have a strong impact on animals' future fitness.

  4. Examination of long-term visual memorization capacity in the Clark's nutcracker (Nucifraga columbiana).

    PubMed

    Qadri, Muhammad A J; Leonard, Kevin; Cook, Robert G; Kelly, Debbie M

    2018-02-15

    Clark's nutcrackers exhibit remarkable cache recovery behavior, remembering thousands of seed locations over the winter. No direct laboratory test of their visual memory capacity, however, has yet been performed. Here, two nutcrackers were tested in an operant procedure used to measure different species' visual memory capacities. The nutcrackers were incrementally tested with an ever-expanding pool of pictorial stimuli in a two-alternative discrimination task. Each picture was randomly assigned to either a right or a left choice response, forcing the nutcrackers to memorize each picture-response association. The nutcrackers' visual memorization capacity was estimated at a little over 500 pictures, and the testing suggested effects of primacy, recency, and memory decay over time. The size of this long-term visual memory was less than the approximately 800-picture capacity established for pigeons. These results support the hypothesis that nutcrackers' spatial memory is a specialized adaptation tied to their natural history of food-caching and recovery, and not to a larger long-term, general memory capacity. Furthermore, despite millennia of separate and divergent evolution, the mechanisms of visual information retention seem to reflect common memory systems of differing capacities across the different species tested in this design.

  5. Working memory, long-term memory, and medial temporal lobe function

    PubMed Central

    Jeneson, Annette; Squire, Larry R.

    2012-01-01

    Early studies of memory-impaired patients with medial temporal lobe (MTL) damage led to the view that the hippocampus and related MTL structures are involved in the formation of long-term memory and that immediate memory and working memory are independent of these structures. This traditional idea has recently been revisited. Impaired performance in patients with MTL lesions on tasks with short retention intervals, or no retention interval, and neuroimaging findings with similar tasks have been interpreted to mean that the MTL is sometimes needed for working memory and possibly even for visual perception itself. We present a reappraisal of this interpretation. Our main conclusion is that, if the material to be learned exceeds working memory capacity, if the material is difficult to rehearse, or if attention is diverted, performance depends on long-term memory even when the retention interval is brief. This fundamental notion is better captured by the terms subspan memory and supraspan memory than by the terms short-term memory and long-term memory. We propose methods for determining when performance on short-delay tasks must depend on long-term (supraspan) memory and suggest that MTL lesions impair performance only when immediate memory and working memory are insufficient to support performance. In neuroimaging studies, MTL activity during encoding is influenced by the memory load and correlates positively with long-term retention of the material that was presented. The most parsimonious and consistent interpretation of all the data is that subspan memoranda are supported by immediate memory and working memory and are independent of the MTL. PMID:22180053

  6. Autobiographical memory functioning among abused, neglected, and nonmaltreated children: the overgeneral memory effect.

    PubMed

    Valentino, Kristin; Toth, Sheree L; Cicchetti, Dante

    2009-08-01

    This investigation addresses whether there are differences in the form and content of autobiographical memory recall as a function of maltreatment, and examines the roles of self-system functioning and psychopathology in autobiographical memory processes. Autobiographical memory for positive and negative nontraumatic events was evaluated among abused, neglected, and nonmaltreated school-aged children. Abused children's memories were more overgeneral and contained more negative self-representations than did those of the nonmaltreated children. Negative self-representations and depression were significantly related to overgeneral memory, but did not mediate the relation between abuse and overgeneral memory. The meaning of these findings for models of memory and for the development of overgenerality is emphasized. Moreover, the clinical implications of the current research are discussed.

  7. Distributed Saturation

    NASA Technical Reports Server (NTRS)

    Chung, Ming-Ying; Ciardo, Gianfranco; Siminiceanu, Radu I.

    2007-01-01

    The Saturation algorithm for symbolic state-space generation has been a recent breakthrough in the exhaustive verification of complex systems, in particular globally-asynchronous/locally-synchronous systems. The algorithm uses a very compact Multiway Decision Diagram (MDD) encoding for states and the fastest symbolic exploration algorithm to date. The distributed version of Saturation uses the overall memory available on a network of workstations (NOW) to efficiently spread the memory load during the highly irregular exploration. A crucial factor in limiting the memory consumption during symbolic state-space generation is the ability to perform garbage collection to free up the memory occupied by dead nodes. However, garbage collection over a NOW requires a nontrivial communication overhead. In addition, operation cache policies become critical while analyzing large-scale systems using the symbolic approach. In this technical report, we develop a garbage collection scheme and several operation cache policies to help solve extremely complex systems. Experiments show that our schemes improve the performance of the original distributed implementation, SmArTNow, in terms of time and memory efficiency.
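
    The notion of an operation cache can be made concrete with a small memoization table keyed by the operands of a symbolic operation; the key layout, hash, and flush-on-garbage-collection hook below are our own assumptions and not SmArTNow's implementation.

      // Assumed shape of an operation cache: memoized results keyed by operand
      // node ids, with a flush hook a garbage collector could call.
      #include <cstddef>
      #include <cstdint>
      #include <functional>
      #include <iostream>
      #include <unordered_map>

      struct Key {
          uint32_t op, a, b;
          bool operator==(const Key& o) const { return op == o.op && a == o.a && b == o.b; }
      };

      struct KeyHash {
          std::size_t operator()(const Key& k) const {
              return std::hash<uint64_t>{}((uint64_t(k.op) << 48) ^ (uint64_t(k.a) << 24) ^ k.b);
          }
      };

      class OperationCache {
      public:
          bool lookup(const Key& k, uint32_t& result) const {
              auto it = table_.find(k);
              if (it == table_.end()) return false;
              result = it->second;
              return true;
          }
          void insert(const Key& k, uint32_t result) { table_[k] = result; }
          void flush() { table_.clear(); }   // e.g. after collecting dead MDD nodes

      private:
          std::unordered_map<Key, uint32_t, KeyHash> table_;
      };

      int main() {
          OperationCache cache;
          cache.insert(Key{1, 10, 20}, 30);         // op 1 on nodes 10, 20 yields node 30
          uint32_t r = 0;
          std::cout << (cache.lookup(Key{1, 10, 20}, r) ? "hit\n" : "miss\n");   // hit
          cache.flush();
          std::cout << (cache.lookup(Key{1, 10, 20}, r) ? "hit\n" : "miss\n");   // miss
      }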

  8. The Glass Computer

    ERIC Educational Resources Information Center

    Paesler, M. A.

    2009-01-01

    Digital computers use different kinds of memory, each of which is either volatile or nonvolatile. On most computers only the hard drive memory is nonvolatile, i.e., it retains all information stored on it when the power is off. When a computer is turned on, an operating system stored on the hard drive is loaded into the computer's memory cache and…

  9. Summary and Synthesis of Mercury Studies in the Cache Creek Watershed, California, 2000-01

    USGS Publications Warehouse

    Domagalski, Joseph L.; Slotton, Darell G.; Alpers, Charles N.; Suchanek, Thomas H.; Churchill, Ronald; Bloom, Nicolas; Ayers, Shaun M.; Clinkenbeard, John

    2004-01-01

    This report summarizes the principal findings of the Cache Creek, California, components of a project funded by the CALFED Bay?Delta Program entitled 'An Assessment of Ecological and Human Health Impacts of Mercury in the Bay?Delta Watershed.' A companion report summarizes the key findings of other components of the project based in the San Francisco Bay and the Delta of the Sacramento and San Joaquin Rivers. These summary documents present the more important findings of the various studies in a format intended for a wide audience. For more in-depth, scientific presentation and discussion of the research, a series of detailed technical reports of the integrated mercury studies is available at the following website: .

  10. From the Island of the Blue Dolphins: A unique 19th century cache feature from San Nicolas Island, California

    USGS Publications Warehouse

    Erlandson, Jon M.; Thomas-Barnett, Lisa; Vellanoweth, René L.; Schwartz, Steven J.; Muhs, Daniel R.

    2013-01-01

    A cache feature salvaged from an eroding sea cliff on San Nicolas Island produced two redwood boxes containing more than 200 artifacts of Nicoleño, Native Alaskan, and Euro-American origin. Outside the boxes were four asphaltum-coated baskets, abalone shells, a sandstone dish, and a hafted stone knife. The boxes, made from split redwood planks, contained a variety of artifacts and numerous unmodified bones and teeth from marine mammals, fish, birds, and large land mammals. Nicoleño-style artifacts include 11 knives with redwood handles and stone blades, stone projectile points, steatite ornaments and effigies, a carved stone pipe, abraders and burnishing stones, bird bone whistles, bone and shell pendants, abalone shell dishes, and two unusual barbed shell fishhooks. Artifacts of Native Alaskan style include four bone toggling harpoons, two unilaterally barbed bone harpoon heads, bone harpoon fore-shafts, a ground slate blade, and an adze blade. Objects of Euro-American origin or materials include a brass button, metal harpoon blades, and ten flaked glass bifaces. The contents of the cache feature, dating to the early-to-mid nineteenth century, provide an extraordinary window on a time of European expansion and global economic development that created unique cultural interactions and social transformations.

  11. Neural Correlates of Conceptual Implicit Memory and Their Contamination of Putative Neural Correlates of Explicit Memory

    ERIC Educational Resources Information Center

    Voss, Joel L.; Paller, Ken A.

    2007-01-01

    During episodic recognition tests, meaningful stimuli such as words can engender both conscious retrieval (explicit memory) and facilitated access to meaning that is distinct from the awareness of remembering (conceptual implicit memory). Neuroimaging investigations of one type of memory are frequently subject to the confounding influence of the…

  12. Autobiographical memory functioning among abused, neglected, and nonmaltreated children: The overgeneral memory effect

    PubMed Central

    Valentino, Kristin; Toth, Sheree L.; Cicchetti, Dante

    2012-01-01

    Background This investigation addresses whether there are differences in the form and content of autobiographical memory recall as a function of maltreatment, and examines the roles of self-system functioning and psychopathology in autobiographical memory processes. Methods Autobiographical memory for positive and negative nontraumatic events was evaluated among abused, neglected, and nonmaltreated school-aged children. Results Abused children’s memories were more overgeneral and contained more negative self-representations than did those of the nonmaltreated children. Negative self-representations and depression were significantly related to overgeneral memory, but did not mediate the relation between abuse and overgeneral memory. Conclusions The meaning of these findings for models of memory and for the development of overgenerality is emphasized. Moreover, the clinical implications of the current research are discussed. PMID:19490313

  13. Working memory, short-term memory and reading proficiency in school-age children with cochlear implants.

    PubMed

    Bharadwaj, Sneha V; Maricle, Denise; Green, Laura; Allman, Tamby

    2015-10-01

    The objective of the study was to examine short-term memory and working memory through both visual and auditory tasks in school-age children with cochlear implants. The relationship between the performance on these cognitive skills and reading as well as language outcomes were examined in these children. Ten children between the ages of 7 and 11 years with early-onset bilateral severe-profound hearing loss participated in the study. Auditory and visual short-term memory, auditory and visual working memory subtests and verbal knowledge measures were assessed using the Woodcock Johnson III Tests of Cognitive Abilities, the Wechsler Intelligence Scale for Children-IV Integrated and the Kaufman Assessment Battery for Children II. Reading outcomes were assessed using the Woodcock Reading Mastery Test III. Performance on visual short-term memory and visual working memory measures in children with cochlear implants was within the average range when compared to the normative mean. However, auditory short-term memory and auditory working memory measures were below average when compared to the normative mean. Performance was also below average on all verbal knowledge measures. Regarding reading outcomes, children with cochlear implants scored below average for listening and passage comprehension tasks and these measures were positively correlated to visual short-term memory, visual working memory and auditory short-term memory. Performance on auditory working memory subtests was not related to reading or language outcomes. The children with cochlear implants in this study demonstrated better performance in visual (spatial) working memory and short-term memory skills than in auditory working memory and auditory short-term memory skills. Significant positive relationships were found between visual working memory and reading outcomes. The results of the study provide support for the idea that WM capacity is modality specific in children with hearing loss. Based on these

  14. A hybrid magnetic/complementary metal oxide semiconductor three-context memory bit cell for non-volatile circuit design

    NASA Astrophysics Data System (ADS)

    Jovanović, B.; Brum, R. M.; Torres, L.

    2014-04-01

    After decades of continued scaling to the beat of Moore's law, it now appears that conventional silicon-based devices are approaching their physical limits. In today's deep-submicron nodes, a number of short-channel and quantum effects are emerging that affect the manufacturing process, as well as the functionality of microelectronic systems-on-chip. Spintronics devices that exploit both the intrinsic spin of the electron and its associated magnetic moment, in addition to its fundamental electronic charge, are promising solutions to circumvent these scaling threats. Being compatible with CMOS technology, such devices offer a promising synergy of radiation immunity, infinite endurance, non-volatility, increased density, etc. In this paper, we present a hybrid (magnetic/CMOS) cell that is able to store and process data both electrically and magnetically. The cell is based on perpendicular spin-transfer torque magnetic tunnel junctions (STT-MTJs) and is suitable for use in magnetic random access memories and reprogrammable computing (non-volatile registers, processor cache memories, magnetic field-programmable gate arrays, etc.). To demonstrate the potential of our hybrid cell, we physically implemented a small hybrid memory block using 45 nm × 45 nm round MTJs for the magnetic part and 28 nm fully depleted silicon-on-insulator (FD-SOI) technology for the CMOS part. We also report the cell's measured performance in terms of area, robustness, read/write speed and energy consumption.

  15. Latent class-derived subgroups of depressive symptoms in a community sample of older adults: the Cache County Study.

    PubMed

    Lee, Chien-Ti; Leoutsakos, Jeannie-Marie; Lyketsos, Constantine G; Steffens, David C; Breitner, John C S; Norton, Maria C

    2012-10-01

    We sought to identify possible subgroups of elders that varied in depressive symptomatology and to examine symptom patterns and health status differences between subgroups. The Cache County memory study is a population-based epidemiological study of dementia with 5092 participants. Depressive symptoms were measured with a modified version of the diagnostic interview schedule-depression. There were 400 nondemented participants who endorsed currently (i.e., in the past 2 weeks) experiencing at least one of the three "gateway" depressive symptoms and then completed a full depression interview. Responses to all nine current depressive symptoms were modeled using latent class analysis. Three depression subgroups were identified: a significantly depressed subgroup (62%), with the remainder split evenly between a subgroup with low probability of all symptoms (21%) and a subgroup with primarily psychomotor changes, sleep symptoms, and fatigue (17%). Latent class analysis-derived subgroups of depressive symptoms and Diagnostic and statistical manual of mental disorders, fourth edition depression diagnostic group were nonredundant. Age, gender, education, marital status, early or late onset, number of episodes, current episode duration, and functional status were not significant predictors of depression subgroup. The first subgroup was more likely to be recently bereaved and had fewer physical health problems, whereas the third subgroup was less likely to be using antidepressants compared with the second subgroup. There are distinct subgroups of depressed elders, which are not redundant with the Diagnostic and statistical manual of mental disorders, fourth edition classification scheme, offering an alternative diagnostic approach to clinicians and researchers. Future work will examine whether these depressive symptom profiles are predictive of incident dementia and earlier mortality. Copyright © 2011 John Wiley & Sons, Ltd.

  16. DARPA Status Report - November 1988

    DTIC Science & Technology

    1988-11-01

    [Abstract text garbled in the source record. Recoverable fragments refer to the MACH multiprocessor operating system, special treatment of certain references in memory management, and the effect of sophisticated cache management schemes on cache-consistency performance.]

  17. Hydrology of Cache Valley, Cache County, Utah, and adjacent part of Idaho, with emphasis on simulation of ground-water flow

    USGS Publications Warehouse

    Kariya, Kim A.; Roark, D. Michael; Hanson, Karen M.

    1994-01-01

    A hydrologic investigation of Cache Valley was done to better understand the ground-water system in unconsolidated basin-fill deposits and the interaction between ground water and surface water. Ground-water recharge occurs by infiltration of precipitation and unconsumed irrigation water, seepage from canals and streams, and subsurface inflow from adjacent consolidated rock and adjacent unconsolidated basin-fill deposit ground-water systems. Ground-water discharge occurs as seepage to streams and reservoirs, spring discharge, evapotranspiration, and withdrawal from wells. Water levels declined during 1984-90. Less-than-average precipitation during 1987-90 and increased pumping from irrigation and public-supply wells contributed to the declines. A ground-water-flow model was used to simulate flow in the unconsolidated basin-fill deposits. Data primarily from 1969 were used to calibrate the model to steady-state conditions. Transient-state calibration was done by simulating ground-water conditions on a yearly basis for 1982-90. A hypothetical simulation in which the dry conditions of 1990 were continued for 5 years projected an average 10-foot water-level decline between Richmond and Hyrum. When increased pumpage was simulated by adding three well fields, each pumping 10 cubic feet per second, in the Logan, Smithfield, and College Ward areas, water-level declines greater than 10 feet were projected in most of the southeastern part of the valley and discharge from springs and seepage to streams and reservoirs decreased.

  18. Working memory binding and episodic memory formation in aging, mild cognitive impairment, and Alzheimer's dementia.

    PubMed

    van Geldorp, Bonnie; Heringa, Sophie M; van den Berg, Esther; Olde Rikkert, Marcel G M; Biessels, Geert Jan; Kessels, Roy P C

    2015-01-01

    Recent studies indicate that in both normal and pathological aging working memory (WM) performance deteriorates, especially when associations have to be maintained. However, most studies typically do not assess the relationship between WM and episodic memory formation. In the present study, we examined WM and episodic memory formation in normal aging and in patients with early Alzheimer's disease (mild cognitive impairment, MCI; and Alzheimer's dementia, AD). In the first study, 26 young adults (mean age 29.6 years) were compared to 18 middle-aged adults (mean age 52.2 years) and 25 older adults (mean age 72.8 years). We used an associative delayed-match-to-sample WM task, which requires participants to maintain two pairs of faces and houses presented on a computer screen for short (3 s) or long (6 s) maintenance intervals. After the WM task, an unexpected subsequent associative memory task was administered (two-alternative forced choice). In the second study, 27 patients with AD and 19 patients with MCI were compared to 25 older controls, using the same paradigm as that in Experiment 1. Older adults performed worse than both middle-aged and young adults. No effect of delay was observed in the healthy adults, and pairs that were processed during long maintenance intervals were not better remembered in the subsequent memory task. In the MCI and AD patients, longer maintenance intervals hampered the task performance. Also, both patient groups performed significantly worse than controls on the episodic memory task as well as the associative WM task. Aging and AD present with a decline in WM binding, a finding that extends similar results in episodic memory. Longer delays in the WM task did not affect episodic memory formation. We conclude that WM deficits are found when WM capacity is exceeded, which may occur during associative processing.

  19. Addiction memory as a specific, individually learned memory imprint.

    PubMed

    Böning, J

    2009-05-01

    The construct of "addiction memory" (AM) and its importance for relapse occurrence has been the subject of discussion for the past 30 years. Neurobiological findings from "social neuroscience" and biopsychological learning theory, in conjunction with construct-valid behavioral pharmacological animal models, can now also provide general confirmation of addiction memory as a pathomorphological correlate of addiction disorders. Under multifactorial influences, experience-driven neuronal learning and memory processes of emotional and cognitive processing patterns in the specific individual "set" and "setting" play an especially pivotal role in this connection. From a neuropsychological perspective, the episodic (biographical) memory, located at the highest hierarchical level, is of central importance for the formation of the AM in certain structural and functional areas of the brain and neuronal networks. Within this context, neuronal learning and conditioning processes take place more or less unconsciously and automatically in the preceding long-term-memory systems (in particular priming and perceptual memory). They then regulate the individually programmed addiction behavior implicitly and thus subsequently stand for facilitated recollection of corresponding, previously stored cues or context situations. This explains why it is so difficult to treat an addiction memory, which is embedded above all in the episodic memory, from the molecular carrier level via the neuronal pattern level through to the psychological meaning level, and has thus meanwhile become a component of personality.

  20. Visual landmark-directed scatter-hoarding of Siberian chipmunks Tamias sibiricus.

    PubMed

    Zhang, Dongyuan; Li, Jia; Wang, Zhenyu; Yi, Xianfeng

    2016-05-01

    Spatial memory of cached food items plays an important role in cache recovery by scatter-hoarding animals. However, whether scatter-hoarding animals intentionally select cache sites with respect to visual landmarks in the environment and then rely on them to recover their cached seeds for later use has not been extensively explored. Furthermore, there is a lack of evidence on whether there are sex differences in visual landmark-based food-hoarding behaviors in small rodents even though male and female animals exhibit different spatial abilities. In the present study, we used a scatter-hoarding animal, the Siberian chipmunk, Tamias sibiricus, to explore these questions in semi-natural enclosures. Our results showed that T. sibiricus preferred to establish caches in the shallow pits labeled with visual landmarks (branches of Pinus sylvestris, leaves of Athyrium brevifrons and PVC tubes). In addition, visual landmarks of P. sylvestris facilitated cache recovery by T. sibiricus. We also found significant sex differences in visual landmark-based food-hoarding strategies in Siberian chipmunks. Male, rather than female, chipmunks tended to establish their caches with respect to the visual landmarks. Our studies show that T. sibiricus rely on visual landmarks to establish and recover their caches, and that sex differences exist in visual landmark-based food hoarding in Siberian chipmunks. © 2015 International Society of Zoological Sciences, Institute of Zoology/Chinese Academy of Sciences and John Wiley & Sons Australia, Ltd.

  1. Analysis of the Intel 386 and i486 microprocessors for the Space Station Freedom Data Management System

    NASA Technical Reports Server (NTRS)

    Liu, Yuan-Kwei

    1991-01-01

    The feasibility is analyzed of upgrading the Intel 386 microprocessor, which has been proposed as the baseline processor for the Space Station Freedom (SSF) Data Management System (DMS), to the more advanced i486 microprocessors. The items compared between the two processors include the instruction set architecture, power consumption, the MIL-STD-883C Class S (Space) qualification schedule, and performance. The advantages of the i486 over the 386 are (1) lower power consumption; and (2) higher floating point performance. The i486 on-chip cache does not have parity check or error detection and correction circuitry. The i486 with on-chip cache disabled, however, has lower integer performance than the 386 without cache, which is the current DMS design choice. Adding cache to the 386/387 DX memory hierarchy appears to be the most beneficial change to the current DMS design at this time.

  2. Analysis of the Intel 386 and i486 microprocessors for the Space Station Freedom Data Management System

    NASA Technical Reports Server (NTRS)

    Liu, Yuan-Kwei

    1991-01-01

    The feasibility is analyzed of upgrading the Intel 386 microprocessor, which has been proposed as the baseline processor for the Space Station Freedom (SSF) Data Management System (DMS), to the more advanced i486 microprocessors. The items compared between the two processors include the instruction set architecture, power consumption, the MIL-STD-883C Class S (Space) qualification schedule, and performance. The advantages of the i486 over the 386 are (1) lower power consumption; and (2) higher floating point performance. The i486 on-chip cache does not have parity check or error detection and correction circuitry. The i486 with on-chip cache disabled, however, has lower integer performance than the 386 without cache, which is the current DMS design choice. Adding cache to the 386/387 DX memory hierarchy appears to be the most beneficial change to the current DMS design at this time.

  3. Short-term memory to long-term memory transition in a nanoscale memristor.

    PubMed

    Chang, Ting; Jo, Sung-Hyun; Lu, Wei

    2011-09-27

    "Memory" is an essential building block in learning and decision-making in biological systems. Unlike modern semiconductor memory devices, needless to say, human memory is by no means eternal. Yet, forgetfulness is not always a disadvantage since it releases memory storage for more important or more frequently accessed pieces of information and is thought to be necessary for individuals to adapt to new environments. Eventually, only memories that are of significance are transformed from short-term memory into long-term memory through repeated stimulation. In this study, we show experimentally that the retention loss in a nanoscale memristor device bears striking resemblance to memory loss in biological systems. By stimulating the memristor with repeated voltage pulses, we observe an effect analogous to memory transition in biological systems with much improved retention time accompanied by additional structural changes in the memristor. We verify that not only the shape or the total number of stimuli is influential, but also the time interval between stimulation pulses (i.e., the stimulation rate) plays a crucial role in determining the effectiveness of the transition. The memory enhancement and transition of the memristor device was explained from the microscopic picture of impurity redistribution and can be qualitatively described by the same equations governing biological memories. © 2011 American Chemical Society

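    The short-term to long-term transition described above can be pictured with a toy state-variable model: a single variable is strengthened by each stimulation pulse and decays between pulses, so closely spaced pulses (a higher stimulation rate) leave the device in a state that takes longer to fall back below threshold. The sketch below is only a qualitative illustration under assumed parameters (the gain, time constant, and threshold are invented); it is not the memristor model or the biological memory equations discussed in the paper.

      # Toy strengthen-and-decay model (illustrative only; the parameters and the
      # exponential decay are assumptions, not the paper's device equations).
      import math

      def stimulate(n_pulses, interval_s, gain=0.2, tau_s=5.0):
          """State w after n_pulses separated by interval_s seconds."""
          w = 0.0
          for _ in range(n_pulses):
              w += gain * (1.0 - w)               # each pulse strengthens the state
              w *= math.exp(-interval_s / tau_s)  # state decays until the next pulse
          return w

      def retention_time(w0, threshold=0.1, tau_s=5.0):
          """Time for w to decay from w0 below the detection threshold."""
          return 0.0 if w0 <= threshold else tau_s * math.log(w0 / threshold)

      if __name__ == "__main__":
          for interval in (1.0, 2.0, 5.0):  # shorter interval = higher stimulation rate
              w = stimulate(n_pulses=10, interval_s=interval)
              print(f"pulse interval {interval:.0f} s -> w = {w:.2f}, "
                    f"retention ~ {retention_time(w):.1f} s")

    Running the toy model shows the same qualitative trend as the experiments: the same number of pulses delivered at a higher rate yields a stronger state and a longer retention time.
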
  4. A VLSI VAX chip set

    NASA Astrophysics Data System (ADS)

    Johnson, W. N.; Herrick, W. V.; Grundmann, W. J.

    1984-10-01

    For the first time, VLSI technology is used to compress the full functionality and comparable performance of the VAX 11/780 super-minicomputer into a 1.2 M transistor microprocessor chip set. There was no subsetting of the 304 instruction set and the 17 data types, nor reduction in hardware support for the 4 Gbyte virtual memory management architecture. The chip set supports an integral 8 kbyte memory cache, a 13.3 Mbyte/s system bus, and sophisticated multiprocessing. High performance is achieved through microcode optimizations afforded by the large control store, tightly coupled address and data caches, the use of internal and external 32 bit datapaths, the extensive application of both microlevel and macrolevel pipelining, and the use of specialized hardware assists.

  5. Prospective study of Dietary Approaches to Stop Hypertension– and Mediterranean-style dietary patterns and age-related cognitive change: the Cache County Study on Memory, Health and Aging123

    PubMed Central

    Munger, Ronald G; Cutler, Adele; Quach, Anna; Bowles, Austin; Corcoran, Christopher; Tschanz, JoAnn T; Norton, Maria C; Welsh-Bohmer, Kathleen A

    2013-01-01

    Background: Healthy dietary patterns may protect against age-related cognitive decline, but results of studies have been inconsistent. Objective: We examined associations between Dietary Approaches to Stop Hypertension (DASH)– and Mediterranean-style dietary patterns and age-related cognitive change in a prospective, population-based study. Design: Participants included 3831 men and women ≥65 y of age who were residents of Cache County, UT, in 1995. Cognitive function was assessed by using the Modified Mini-Mental State Examination (3MS) ≤4 times over 11 y. Diet-adherence scores were computed by summing across the energy-adjusted rank-order of individual food and nutrient components and categorizing participants into quintiles of the distribution of the diet accordance score. Mixed-effects repeated-measures models were used to examine 3MS scores over time across increasing quintiles of dietary accordance scores and individual food components that comprised each score. Results: The range of rank-order DASH and Mediterranean diet scores was 1661–25,596 and 2407–26,947, respectively. Higher DASH and Mediterranean diet scores were associated with higher average 3MS scores. People in quintile 5 of DASH averaged 0.97 points higher than those in quintile 1 (P = 0.001). The corresponding difference for Mediterranean quintiles was 0.94 (P = 0.001). These differences were consistent over 11 y. Higher intakes of whole grains and nuts and legumes were also associated with higher average 3MS scores [mean quintile 5 compared with 1 differences: 1.19 (P < 0.001), 1.22 (P < 0.001), respectively]. Conclusions: Higher levels of accordance with both the DASH and Mediterranean dietary patterns were associated with consistently higher levels of cognitive function in elderly men and women over an 11-y period. Whole grains and nuts and legumes were positively associated with higher cognitive functions and may be core neuroprotective foods common to various healthy plant

  6. [Memory and brain--neurobiological correlates of memory disturbances].

    PubMed

    Calabrese, P; Markowitsch, H J

    2003-04-01

    A differentiation of memory is possible on the basis of chronological and contents-related aspects. Furthermore, it is possible to make process-specific subdivisions (encoding, transfer, consolidation, retrieval). The time-related division on the one hand refers to the general differentiation into short-term and long-term memory, and, on the other, to that between anterograde and retrograde memory ("new" and "old" memory; measured from a given time point, usually that when brain damage occurred). Anterograde memory means the successful encoding and storing of new information; retrograde, the ability to retrieve successfully acquired and/or stored information. On the contents-based level, memory can be divided into five basic long-term systems--episodic memory, the knowledge system, perceptual, procedural and the priming form of memory. Neural correlates for these divisions are discussed with special emphasis on the episodic and the knowledge systems, based on both normal individuals and brain-damaged subjects. It is argued that structures of the limbic system are important for the encoding of information and for its transfer into long-term memory. For this, two independent, but interacting memory circuits are proposed--one of them controlling and integrating primarily the emotional, and the other primarily the cognitive components of newly incoming information. For information storage, principally neocortical structures are regarded as important, and for the recall of information from the episodic and semantic memory systems the combined action of portions of prefrontal and anterior temporal regions is regarded as essential. Within this fronto-temporal agglomerate, a moderate hemispheric specificity is assumed to exist, with the right-hemispheric combination being mainly engaged in episodic memory retrieval and the left-hemispheric in that of semantic information. Evidence for this specialization comes from the results from focally brain-damaged patients as well as from

  7. Gender differences in navigational memory: pilots vs. nonpilots.

    PubMed

    Verde, Paola; Piccardi, Laura; Bianchini, Filippo; Guariglia, Cecilia; Carrozzo, Paolo; Morgagni, Fabio; Boccia, Maddalena; Di Fiore, Giacomo; Tomao, Enrico

    2015-02-01

    The coding of space as near and far is not only determined by arm-reaching distance, but is also dependent on how the brain represents the extension of the body space. Recent reports suggest that the dissociation between reaching and navigational space is not limited to perception and action but also involves memory systems. It has been reported that gender differences emerged only in adverse learning conditions that required strong spatial ability. In this study we investigated navigational versus reaching memory in air force pilots and a control group without flight experience. We took into account temporal duration (working memory and long-term memory) and focused on working memory, which is considered critical in the gender differences literature. We found no gender effects or flight hour effects in pilots but observed gender effects in working memory (but not in learning and delayed recall) in the nonpilot population (women's mean = 5.33, SD = 0.90; men's mean = 5.54, SD = 0.90). We also observed a difference between pilots and nonpilots in the maintenance of on-line reaching information: pilots (mean = 5.85, SD = 0.76) were more efficient than nonpilots (mean = 5.21, SD = 0.83) and managed this type of information similarly to that concerning navigational space. In the navigational learning phase they also showed better navigational memory (mean = 137.83, SD = 5.81) than nonpilots (mean = 126.96, SD = 15.81) and were significantly more proficient than the latter group. There is no gender difference in a population of pilots in terms of navigational abilities, while it emerges in a control group without flight experience. We also found that pilots performed better than nonpilots. This study suggests that once selected, male and female pilots do not differ from each other in visuo-spatial abilities and spatial navigation.

  8. A balanced memory network.

    PubMed

    Roudi, Yasser; Latham, Peter E

    2007-09-01

    A fundamental problem in neuroscience is understanding how working memory--the ability to store information at intermediate timescales, like tens of seconds--is implemented in realistic neuronal networks. The most likely candidate mechanism is the attractor network, and a great deal of effort has gone toward investigating it theoretically. Yet, despite almost a quarter century of intense work, attractor networks are not fully understood. In particular, there are still two unanswered questions. First, how is it that attractor networks exhibit irregular firing, as is observed experimentally during working memory tasks? And second, how many memories can be stored under biologically realistic conditions? Here we answer both questions by studying an attractor neural network in which inhibition and excitation balance each other. Using mean-field analysis, we derive a three-variable description of attractor networks. From this description it follows that irregular firing can exist only if the number of neurons involved in a memory is large. The same mean-field analysis also shows that the number of memories that can be stored in a network scales with the number of excitatory connections, a result that has been suggested for simple models but never shown for realistic ones. Both of these predictions are verified using simulations with large networks of spiking neurons.
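    As a concrete, if greatly simplified, illustration of the attractor idea discussed above, the sketch below stores a few binary patterns in a classical Hopfield-style network with Hebbian weights and recovers one of them from a corrupted cue. This is not the balanced spiking network analyzed in the paper; the network size, number of patterns, and update rule are arbitrary choices made only to show pattern completion by an attractor.

      # Minimal Hopfield-style attractor network (illustrative only; the paper
      # analyzes balanced spiking networks, not this simplified binary model).
      import random

      def train(patterns, n):
          """Hebbian weights for +/-1 patterns of length n (zero diagonal)."""
          w = [[0.0] * n for _ in range(n)]
          for p in patterns:
              for i in range(n):
                  for j in range(n):
                      if i != j:
                          w[i][j] += p[i] * p[j] / n
          return w

      def recall(w, cue, sweeps=5):
          """Asynchronous sign updates; the state settles into a stored attractor."""
          n = len(cue)
          s = list(cue)
          for _ in range(sweeps):
              for i in random.sample(range(n), n):
                  h = sum(w[i][j] * s[j] for j in range(n))
                  s[i] = 1 if h >= 0 else -1
          return s

      if __name__ == "__main__":
          random.seed(0)
          n, n_patterns = 100, 3
          patterns = [[random.choice((-1, 1)) for _ in range(n)] for _ in range(n_patterns)]
          cue = list(patterns[0])
          for i in random.sample(range(n), 20):      # corrupt 20% of the first pattern
              cue[i] = -cue[i]
          recovered = recall(train(patterns, n), cue)
          overlap = sum(a * b for a, b in zip(recovered, patterns[0])) / n
          print(f"overlap with the stored pattern: {overlap:.2f}")   # close to 1.0
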

  9. Memory, Literacy, and Invention: Reimagining the Canon of Memory for the Writing Classroom

    ERIC Educational Resources Information Center

    Ryan, Kathleen J.

    2004-01-01

    This article challenges the assumption that the canon of memory means memorization and transcription, and, as a result, has little relevance for the writing classroom. An examination of the canon's historical legacy and its relationships to literacy and invention open a space for redefining the canon of memory as "rememoried knowing." In brief,…

  10. Determinants of seed removal distance by scatter-hoarding rodents in deciduous forests.

    PubMed

    Moore, Jeffrey E; McEuen, Amy B; Swihart, Robert K; Contreras, Thomas A; Steele, Michael A

    2007-10-01

    Scatter-hoarding rodents should space food caches to maximize cache recovery rate (to minimize loss to pilferers) relative to the energetic cost of carrying food items greater distances. Optimization models of cache spacing make two predictions. First, spacing of caches should be greater for food items with greater energy content. Second, the mean distance between caches should increase with food abundance. However, the latter prediction fails to account for the effect of food abundance on the behavior of potential pilferers or on the ability of caching individuals to acquire food by means other than recovering their own caches. When considering these factors, shorter cache distances may be predicted in conditions of higher food abundance. We predicted that seed caching distances would be greater for food items of higher energy content and during lower ambient food abundance and that the effect of seed type on cache distance variation would be lower during higher food abundance. We recorded distances moved for 8636 seeds of five seed types at 15 locations in three forested sites in Pennsylvania, USA, and 29 forest fragments in Indiana, USA, across five different years. Seed production was poor in three years and high in two years. Consistent with previous studies, seeds with greater energy content were moved farther than less profitable food items. Seeds were dispersed shorter distances in seed-rich years than in seed-poor years, contrary to predictions of conventional models. Interactions were important, with seed type effects more evident in seed-poor years. These results suggest that, when food is superabundant, optimal cache distances are more strongly determined by minimizing the energy cost of caching than by minimizing pilfering rates and that cache loss rates may be more strongly density-dependent in times of low seed abundance.

  11. An integrated GIS/remote sensing data base in North Cache soil conservation district, Utah: A pilot project for the Utah Department of Agriculture's RIMS (Resource Inventory and Monitoring System)

    NASA Technical Reports Server (NTRS)

    Wheeler, D. J.; Ridd, M. K.; Merola, J. A.

    1984-01-01

    A basic geographic information system (GIS) for the North Cache Soil Conservation District (SCD) was sought for selected resource problems. Since the resource management issues in the North Cache SCD are very complex, it is not feasible in the initial phase to generate all the physical, socioeconomic, and political baseline data needed for resolving all management issues. A selection of critical variables becomes essential. Thus, there are four specific objectives: (1) assess resource management needs and determine which resource factors are most fundamental for building a beginning data base; (2) evaluate the variety of data gathering and analysis techniques for the resource factors selected; (3) incorporate the resulting data into a useful and efficient digital data base; and (4) demonstrate the application of the data base to selected real world resource management issues.

  12. What do you mean "drunk"? Convergent validation of multiple methods of mapping alcohol expectancy memory networks.

    PubMed

    Reich, Richard R; Ariel, Idan; Darkes, Jack; Goldman, Mark S

    2012-09-01

    The configuration and activation of memory networks have been theorized as mechanisms that underlie the often observed link between alcohol expectancies and drinking. A key component of this network is the expectancy "drunk." The memory network configuration of "drunk" was mapped by using cluster analysis of data gathered from the paired-similarities task (PST) and the Alcohol Expectancy Multi-Axial Assessment (AEMAX). A third task, the free associates task (FA), assessed participants' strongest alcohol expectancy associates and was used as a validity check for the cluster analyses. Six hundred forty-seven 18-19-year-olds completed these measures and a measure of alcohol consumption at baseline assessment for a 5-year longitudinal study. For both the PST and AEMAX, "drunk" clustered with mainly negative and sedating effects (e.g., "sick," "dizzy," "sleepy") in lighter drinkers and with more positive and arousing effects (e.g., "happy," "horny," "outgoing") in heavier drinkers, showing that the cognitive organization of expectancies reflected drinker type (and might influence the choice to drink). Consistent with the cluster analyses, in participants who gave "drunk" as an FA response, heavier drinkers rated the word as more positive and arousing than lighter drinkers. Additionally, gender did not account for the observed drinker-type differences. These results support the notion that for some emerging adults, drinking may be linked to what they mean by the word "drunk." PsycINFO Database Record (c) 2012 APA, all rights reserved.

  13. Aho-Corasick String Matching on Shared and Distributed Memory Parallel Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tumeo, Antonino; Villa, Oreste; Chavarría-Miranda, Daniel

    String matching is at the core of many critical applications, including network intrusion detection systems, search engines, virus scanners, spam filters, DNA and protein sequencing, and data mining. For all of these applications string matching requires a combination of (sometimes all) the following characteristics: high and/or predictable performance, support for large data sets and flexibility of integration and customization. Many software based implementations targeting conventional cache-based microprocessors fail to achieve high and predictable performance requirements, while Field-Programmable Gate Array (FPGA) implementations and dedicated hardware solutions fail to support large data sets (dictionary sizes) and are difficult to integrate and customize. The advent of multicore, multithreaded, and GPU-based systems is opening the possibility for software based solutions to reach very high performance at a sustained rate. This paper compares several software-based implementations of the Aho-Corasick string searching algorithm for high performance systems. We discuss the implementation of the algorithm on several types of shared-memory high-performance architectures (Niagara 2, large x86 SMPs and Cray XMT), distributed memory with homogeneous processing elements (InfiniBand cluster of x86 multicores) and heterogeneous processing elements (InfiniBand cluster of x86 multicores with NVIDIA Tesla C10 GPUs). We describe in detail how each solution achieves the objectives of supporting large dictionaries, sustaining high performance, and enabling customization and flexibility using various data sets.
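
    For reference, the sketch below is a minimal single-threaded Aho-Corasick automaton in Python: a trie over the patterns plus BFS-computed failure links, so the text is scanned once regardless of dictionary size. The memory layouts, parallel decompositions, and architecture-specific tuning that the paper compares are not reflected here.

      # Minimal Aho-Corasick sketch (single-threaded; the paper's implementations
      # add architecture-specific memory layouts and parallelism).
      from collections import deque

      def build(patterns):
          goto, fail, out = [{}], [0], [set()]
          for p in patterns:                        # 1) build the trie
              s = 0
              for ch in p:
                  if ch not in goto[s]:
                      goto.append({}); fail.append(0); out.append(set())
                      goto[s][ch] = len(goto) - 1
                  s = goto[s][ch]
              out[s].add(p)
          queue = deque(goto[0].values())           # 2) BFS to set failure links
          while queue:
              s = queue.popleft()
              for ch, t in goto[s].items():
                  queue.append(t)
                  f = fail[s]
                  while f and ch not in goto[f]:
                      f = fail[f]
                  fail[t] = goto[f].get(ch, 0)
                  out[t] |= out[fail[t]]
          return goto, fail, out

      def search(text, automaton):
          goto, fail, out = automaton
          s, hits = 0, []
          for i, ch in enumerate(text):             # 3) scan the text once
              while s and ch not in goto[s]:
                  s = fail[s]
              s = goto[s].get(ch, 0)
              for p in sorted(out[s]):
                  hits.append((i - len(p) + 1, p))
          return hits

      if __name__ == "__main__":
          ac = build(["he", "she", "his", "hers"])
          print(search("ushers", ac))               # [(2, 'he'), (1, 'she'), (2, 'hers')]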

  14. Heap/stack guard pages using a wakeup unit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gooding, Thomas M; Satterfield, David L; Steinmacher-Burow, Burkhard

    A method and system for providing a memory access check on a processor including the steps of detecting accesses to a memory device including level-1 cache using a wakeup unit. The method includes invalidating level-1 cache ranges corresponding to a guard page, and configuring a plurality of wakeup address compare (WAC) registers to allow access to selected WAC registers. The method selects one of the plurality of WAC registers, and sets up a WAC register related to the guard page. The method configures the wakeup unit to interrupt on access of the selected WAC register. The method detects access of the memory device using the wakeup unit when a guard page is violated. The method generates an interrupt to the core using the wakeup unit, and determines the source of the interrupt. The method detects the activated WAC registers assigned to the violated guard page, and initiates a response.

  15. ASIC-based architecture for the real-time computation of 2D convolution with large kernel size

    NASA Astrophysics Data System (ADS)

    Shao, Rui; Zhong, Sheng; Yan, Luxin

    2015-12-01

    Bidimensional convolution is a low-level processing algorithm of interest in many areas, but its high computational cost constrains the size of the kernels, especially in real-time embedded systems. This paper presents a hardware architecture for the ASIC-based implementation of 2-D convolution with medium-to-large kernels. To improve the efficiency of on-chip storage resources and reduce the required off-chip bandwidth, the design caches and reuses input data: images are buffered across multiple SPRAM blocks, and an on-chip ping-pong operation takes full advantage of data reuse in the convolution calculation, supported by a new data scheduling scheme and overall ASIC architecture. Experimental results show that the structure can perform real-time convolution with templates up to 40 × 32 in size, improves the utilization of on-chip memory bandwidth and on-chip memory resources, maximizes data throughput, and reduces the need for off-chip memory bandwidth.
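
    The data-reuse idea can be sketched in software: keep only the K rows of the image that a K x K kernel currently needs in a small line buffer, so each input row is fetched from "off-chip" exactly once. The sketch below is just this software analogue under an assumed square kernel and "valid" output size; it does not model the SPRAM banking or ping-pong buffering of the actual ASIC.

      # Software sketch of row-buffered 2D convolution: only K rows of the input
      # are resident at a time, so each input row is read once (the data-reuse
      # idea the ASIC implements in on-chip SPRAM; not the paper's architecture).
      from collections import deque

      def conv2d_row_buffered(read_row, height, width, kernel):
          """read_row(r) returns input row r as a list; 'valid' convolution output."""
          k = len(kernel)                     # square K x K kernel assumed
          rows = deque(maxlen=k)              # line buffer holding the last K rows
          out = []
          for r in range(height):
              rows.append(read_row(r))        # one off-chip fetch per input row
              if len(rows) < k:
                  continue
              out_row = []
              for c in range(width - k + 1):
                  acc = 0
                  for i in range(k):
                      for j in range(k):
                          acc += kernel[i][j] * rows[i][c + j]
                  out_row.append(acc)
              out.append(out_row)
          return out

      if __name__ == "__main__":
          image = [[r * 10 + c for c in range(6)] for r in range(5)]
          kernel = [[1, 0, -1]] * 3           # simple horizontal-gradient kernel
          result = conv2d_row_buffered(lambda r: image[r], 5, 6, kernel)
          print(result[0])                    # [-6, -6, -6, -6]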

  16. Magnetic vortex racetrack memory

    NASA Astrophysics Data System (ADS)

    Geng, Liwei D.; Jin, Yongmei M.

    2017-02-01

    We report a new type of racetrack memory based on current-controlled movement of magnetic vortices in magnetic nanowires with rectangular cross-section and weak perpendicular anisotropy. Data are stored through the core polarity of vortices and each vortex carries a data bit. Besides high density, non-volatility, fast data access, and low power as offered by domain wall racetrack memory, magnetic vortex racetrack memory has additional advantages of no need for constrictions to define data bits, changeable information density, adjustable current magnitude for data propagation, and versatile means of ultrafast vortex core switching. By using micromagnetic simulations, current-controlled motion of magnetic vortices in cobalt nanowire is demonstrated for racetrack memory applications.

  17. Memory for syntax despite amnesia.

    PubMed

    Ferreira, Victor S; Bock, Kathryn; Wilson, Michael P; Cohen, Neal J

    2008-09-01

    Syntactic persistence is a tendency for speakers to reproduce sentence structures independently of accompanying meanings, words, or sounds. The memory mechanisms behind syntactic persistence are not fully understood. Although some properties of syntactic persistence suggest a role for procedural memory, current evidence suggests that procedural memory (unlike declarative memory) does not maintain the abstract, relational features that are inherent to syntactic structures. In a study evaluating the contribution of procedural memory to syntactic persistence, patients with anterograde amnesia and matched control speakers reproduced prime sentences with different syntactic structures; reproduced 0, 1, 6, or 10 neutral sentences; then spontaneously described pictures that elicited the primed structures; and finally made recognition judgments for the prime sentences. Amnesic and control speakers showed significant and equivalent syntactic persistence, despite the amnesic speakers' profoundly impaired recognition memory for the primes. Thus, syntax is maintained by procedural-memory mechanisms. This result reveals that procedural memory is capable of supporting abstract, relational knowledge.

  18. Programmable control means for providing safe and controlled medication infusion

    NASA Technical Reports Server (NTRS)

    Fischell, Robert E. (Inventor)

    1988-01-01

    An implantable programmable infusion pump (IPIP) is disclosed and generally includes: a fluid reservoir filled with selected medication; a pump for causing a precise volumetric dosage of medication to be withdrawn from the reservoir and delivered to the appropriate site within the body; and, a control means for actuating the pump in a safe and programmable manner. The control means includes a microprocessor, a permanent memory containing a series of fixed software instructions, and a memory for storing prescription schedules, dosage limits and other data. The microprocessor actuates the pump in accordance with programmable prescription parameters and dosage limits stored in the memory. A communication link allows the control means to be remotely programmed. The control means incorporates a running integral dosage limit and other safety features which prevent an inadvertent or intentional medication overdose. The control means also monitors the pump and fluid handling system and provides an alert if any improper or potentially unsafe operation is detected.
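
    The running-integral dosage limit mentioned above can be pictured as a sliding-window sum over recent deliveries: a requested dose is refused if it would push the total delivered inside the window over the limit. The sketch below is a minimal software illustration with invented parameter names (limit_units, window_s); it is not the IPIP's actual control logic or firmware.

      # Sketch of a running-integral dose limit (illustrative; not the IPIP firmware).
      # Doses delivered within the last `window_s` seconds are summed; a request
      # that would push the total over `limit_units` is rejected.
      from collections import deque

      class DoseLimiter:
          def __init__(self, limit_units, window_s):
              self.limit = limit_units
              self.window = window_s
              self.log = deque()              # (time_s, units) of past deliveries

          def request(self, now_s, units):
              while self.log and now_s - self.log[0][0] > self.window:
                  self.log.popleft()          # drop deliveries outside the window
              delivered = sum(u for _, u in self.log)
              if delivered + units > self.limit:
                  return False                # would exceed the running-integral limit
              self.log.append((now_s, units))
              return True

      if __name__ == "__main__":
          limiter = DoseLimiter(limit_units=10.0, window_s=3600.0)
          print(limiter.request(0.0, 6.0))      # True
          print(limiter.request(600.0, 6.0))    # False: 12 units within one hour
          print(limiter.request(4200.0, 6.0))   # True: first dose has left the window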

  19. Data Movement Dominates: Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jacob, Bruce L.

    follow-on, the Cray Y-MP, in the 1990s) that one could afford to build ever-larger main memories out of SRAM—the reasoning for moving to DRAM was that an appropriately designed memory hierarchy, built of DRAM as main memory and SRAM as a cache, would approach the performance of SRAM, at the price-per-bit of DRAM [Mashey 1999]. Today it is quite clear that, were one to build an entire multi-gigabyte main memory out of SRAM instead of DRAM, one could improve the performance of almost any computer system by up to an order of magnitude—but this option is not even considered, because to build that system would be prohibitively expensive. It is now time to revisit the same design choice in the context of modern technologies and modern systems. For reasons both technical and economic, we can no longer afford to build ever-larger main memory systems out of DRAM. Flash memory, on the other hand, is significantly cheaper and denser than DRAM and therefore should take its place. While it is true that flash is significantly slower than DRAM, one can afford to build much larger main memories out of flash than out of DRAM, and we show that an appropriately designed memory hierarchy, built of flash as main memory and DRAM as a cache, will approach the performance of DRAM, at the price-per-bit of flash. In our studies as part of this project, we have investigated Non-Volatile Main Memory (NVMM), a new main-memory architecture for large-scale computing systems, one that is specifically designed to address the weaknesses described previously. In particular, it provides the following features: non-volatility: The bulk of the storage is comprised of NAND flash, and in this organization DRAM is used only as a cache, not as main memory. Furthermore, the flash is journaled, which means that operations such as checkpoint/restore are already built into the system. 1+ terabytes of storage per socket: SSDs and DRAM DIMMs have roughly the same form factor (several square inches of PCB surface
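
    The hierarchy argument can be made concrete with a back-of-the-envelope calculation: average access time under a DRAM-cache hit rate, and the blended cost per gigabyte of a small DRAM cache over a large flash main memory. All numbers in the sketch below (latencies, capacities, prices, hit rates) are assumptions chosen for illustration, not figures from the report.

      # Back-of-the-envelope model of a DRAM-cached flash main memory (numbers
      # below are assumptions for illustration, not figures from the report).
      def avg_access_ns(hit_rate, dram_ns, flash_ns):
          """Average memory access time for a DRAM cache in front of flash."""
          return hit_rate * dram_ns + (1.0 - hit_rate) * flash_ns

      def blended_cost(dram_gb, flash_gb, dram_per_gb, flash_per_gb):
          """Cost per GB of the combined DRAM + flash capacity."""
          total = dram_gb * dram_per_gb + flash_gb * flash_per_gb
          return total / (dram_gb + flash_gb)

      if __name__ == "__main__":
          # Assumed: 100 ns DRAM, 50 us flash read, 64 GB DRAM cache over 1 TB flash.
          for hit in (0.90, 0.99, 0.999):
              print(f"hit rate {hit:.3f}: avg access "
                    f"{avg_access_ns(hit, 100, 50_000):,.0f} ns")
          print(f"blended cost: ${blended_cost(64, 1024, 8.0, 0.5):.2f}/GB "
                f"(vs $8.00/GB for all-DRAM)")

    Under these assumed numbers, a hit rate above roughly 99.9% brings the average access time within a small factor of DRAM while the blended cost stays close to the flash price per gigabyte, which is the shape of the trade-off the report argues for.
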

  20. WriteShield: A Pseudo Thin Client for Prevention of Information Leakage

    NASA Astrophysics Data System (ADS)

    Kirihata, Yasuhiro; Sameshima, Yoshiki; Onoyama, Takashi; Komoda, Norihisa

    While thin-client systems are spreading as an effective security measure in enterprises and organizations, there is a newer approach called the pseudo thin-client system. In this approach, the local disks of clients are write-protected and user data is forced onto the central file server, achieving the same security effect as conventional thin-client systems. Since it takes a purely software-based approach, it does not require network or server hardware upgrades, which reduces the installation cost. However, several problems remain, such as the lack of write control for external media, the possibility of memory depletion, and weaker security due to the exceptional write permission granted to system processes. In this paper, we propose WriteShield, a pseudo thin-client system which solves these issues. In this system, the local disks are write-protected with a volume filter driver, and a virtual cache mechanism extends the memory cache size available for the write protection. This paper presents the design and implementation details of WriteShield. We also describe a security analysis, a simulation evaluation of paging algorithms for the virtual cache mechanism, and disk I/O performance measurements that verify its feasibility in an actual environment.
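
    The core write-protection idea can be sketched as a copy-on-write overlay: reads fall through to a read-only base image, writes are captured in memory (the "virtual cache"), and everything local is discarded on reboot. The toy class below illustrates only this redirection; WriteShield's volume filter driver, server-side persistence, and paging of the virtual cache are not modeled.

      # Toy copy-on-write overlay illustrating the pseudo thin-client idea: the
      # base "disk" is read-only, all writes land in a RAM overlay, and the
      # overlay is discarded on reboot (WriteShield's filter driver and virtual
      # cache paging are considerably more elaborate).
      class WriteProtectedVolume:
          def __init__(self, base_blocks):
              self.base = base_blocks          # read-only local disk image
              self.overlay = {}                # block number -> written data (RAM)

          def read(self, block_no):
              return self.overlay.get(block_no, self.base[block_no])

          def write(self, block_no, data):
              self.overlay[block_no] = data    # never touches the base image

          def reboot(self):
              self.overlay.clear()             # all local changes vanish

      if __name__ == "__main__":
          vol = WriteProtectedVolume(["os-data"] * 4)
          vol.write(2, "user-data")
          print(vol.read(2))   # user-data (served from the overlay)
          vol.reboot()
          print(vol.read(2))   # os-data (base image untouched)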

  1. Autopsy-confirmed Alzheimer's disease versus clinically diagnosed Alzheimer's disease in the Cache County Study on Memory and Aging: a comparison of quantitative MRI and neuropsychological findings.

    PubMed

    Fearing, Michael A; Bigler, Erin D; Norton, Maria; Tschanz, Jo Ann; Hulette, Christine; Leslie, Carol; Welsh-Bohmer, Kathleen

    2007-07-01

    Atrophy of specific, regional, and generalized brain structures occurs as a result of the Alzheimer's disease (AD) process. Comparing AD patients with histopathological confirmation of the disease at autopsy to those without autopsy but who were clinically diagnosed using the same antemortem criteria will provide further evidence of the utility and accuracy of neuropsychological assessments at the time of diagnosis, as well as the efficacy of quantitative magnetic resonance imaging (qMRI) in demonstrating gross neuropathological changes associated with the disease. The Cache County Study of Aging provides a unique opportunity to determine how closely AD subjects with only the clinical diagnosis match similarly diagnosed AD subjects but with postmortem confirmation of the disease. qMRI volumes of various brain structures, as well as neuropsychological outcome measures from an expanded battery, were obtained in 31 autopsy-confirmed AD subjects and 45 clinically diagnosed AD subjects. Of the various qMRI variables examined, only total temporal lobe volume was different, where those with postmortem confirmation had reduced volume. No significant differences between the two groups were found with any of the neuropsychological outcome measures. These findings confirm the similarity in neuroimaging and neuropsychological assessment findings between those with just the clinical diagnosis of AD and those with an autopsy-confirmed diagnosis in the moderate-to-severe stage of the disease at the time of diagnosis.

  2. Professional Memory and English Teaching

    ERIC Educational Resources Information Center

    Tarpey, Paul

    2009-01-01

    This article concerns the way that research into Professional Memory (PM) in English teaching might re-connect the school subject with constituencies--the individuals, communities and social values--it once served. By PM I mean the collective memories of a generation of English teachers which, when brought into conjunction with existing histories,…

  3. A "theory of relativity" for cognitive elasticity of time and modality dimensions supporting constant working memory capacity: involvement of harmonics among ultradian clocks?

    PubMed

    Glassman, R B

    2000-02-01

    1. The capacity of working memory (WM) for about 7+/-2 ("the magical number") serially organized simple verbal items may represent a fundamental constant of cognition. Indeed, there is the same capacity for sense of familiarity of a number of recently encountered places, observed in radial maze performance both of lab rats and of humans. 2. Moreover, both species show a peculiar capacity for retaining WM of place over delays. The literature also describes paradoxes of extended time duration in certain human verbal recall tasks. Certain bird species have comparable capacity for delayed recall of about 4 to 8 food caches in a laboratory room. 3. In addition to these paradoxes of the time dimension with WM (still sometimes called "short-term" memory) there is another set of paradoxes of dimensionality for human judgment of magnitudes, noted by Miller in his classic 1956 paper on "the magical number." We are able to reliably refer magnitudes to a rating scale of up to about seven divisions. Remarkably, that finding is largely independent of perceptual modality or even of the extent of a linear interval selected within any given modality. 4. These paradoxes suggest that "the magical number 7+/-2" depends on fundamental properties of mammalian brains. 5. This paper theorizes that WM numerosity is conserved as a fundamental constant, by means of elasticity of cognitive dimensionality, including the temporal pace of arrival of significant items of cognitive information. 6. A conjectural neural code for WM item-capacity is proposed here, which extends the hypothetical principle of binding-by-synchrony. The hypothesis is that several coactive frequencies of brain electrical rhythms each mark a WM item. 7. If, indeed, WM does involve a brain wave frequency code (perhaps within the gamma frequency range that has often been suggested with the binding hypothesis) mathematical considerations suggest additional relevance of harmonic relationships. That is, if copresent sinusoids

  4. Deciphering Neural Codes of Memory during Sleep

    PubMed Central

    Chen, Zhe; Wilson, Matthew A.

    2017-01-01

    Memories of experiences are stored in the cerebral cortex. Sleep is critical for consolidating hippocampal memory of wake experiences into the neocortex. Understanding representations of neural codes of hippocampal-neocortical networks during sleep would reveal important circuit mechanisms on memory consolidation, and provide novel insights into memory and dreams. Although sleep-associated ensemble spike activity has been investigated, identifying the content of memory in sleep remains challenging. Here, we revisit important experimental findings on sleep-associated memory (i.e., neural activity patterns in sleep that reflect memory processing) and review computational approaches for analyzing sleep-associated neural codes (SANC). We focus on two analysis paradigms for sleep-associated memory, and propose a new unsupervised learning framework (“memory first, meaning later”) for unbiased assessment of SANC. PMID:28390699

  5. Research on Contextual Memorizing of Meaning in Foreign Language Vocabulary

    ERIC Educational Resources Information Center

    Xu, Linjing; Xiong, Qingxia; Qin, Yufang

    2018-01-01

    The purpose of this study was to examine the role of contexts in the memory of meaning in foreign vocabularies. The study was based on the cognitive processing hierarchy theory of Craik and Lockhart (1972), the memory trace theory of McClelland and Rumelhart (1986) and the memory trace theory of cognitive psychology. The subjects were non-English…

  6. Listening comprehension in preschoolers: the role of memory.

    PubMed

    Florit, Elena; Roch, Maja; Altoè, Gianmarco; Levorato, Maria Chiara

    2009-11-01

    The current study analyzed the relationship between text comprehension and memory skills in preschoolers. We were interested in verifying the hypothesis that memory is a specific contributor to listening comprehension in preschool children after controlling for verbal abilities. We were also interested in analyzing the developmental path of the relationship between memory skills and listening comprehension in the age range considered. Forty-four 4-year-olds (mean age = 4 years and 6 months, SD = 4 months) and 40 5-year-olds (mean age = 5 years and 4 months, SD = 5 months) participated in the study. The children were administered measures to evaluate listening comprehension ability (story comprehension), short-term and working memory skills (forward and backward word span), verbal intelligence and receptive vocabulary. Results showed that both short-term and working memory predicted unique and independent variance in listening comprehension after controlling for verbal abilities, with working memory explaining additional variance over and above short-term memory. The predictive power of memory skills was stable in the age range considered. Results also confirm a strong relation between verbal abilities and listening comprehension in 4- and 5-year-old children.

  7. Efficient and flexible memory architecture to alleviate data and context bandwidth bottlenecks of coarse-grained reconfigurable arrays

    NASA Astrophysics Data System (ADS)

    Yang, Chen; Liu, LeiBo; Yin, ShouYi; Wei, ShaoJun

    2014-12-01

    The computational capability of a coarse-grained reconfigurable array (CGRA) can be significantly restrained due to data and context memory bandwidth bottlenecks. Traditionally, two methods have been used to resolve this problem. One method loads the context into the CGRA at run time. This method occupies very small on-chip memory but induces very large latency, which leads to low computational efficiency. The other method adopts a multi-context structure. This method loads the context into the on-chip context memory at the boot phase. Broadcasting the pointer of a set of contexts changes the hardware configuration on a cycle-by-cycle basis. The size of the context memory induces a large area overhead in multi-context structures, which results in major restrictions on application complexity. This paper proposes a Predictable Context Cache (PCC) architecture to address the above context issues by buffering the context inside a CGRA. In this architecture, context is dynamically transferred into the CGRA. Utilizing a PCC significantly reduces the on-chip context memory and the complexity of the applications running on the CGRA is no longer restricted by the size of the on-chip context memory. Data preloading is the most frequently used approach to hide input data latency and speed up the data transmission process for the data bandwidth issue. Rather than fundamentally reducing the amount of input data, the transferred data and computations are processed in parallel. However, the data preloading method cannot work efficiently because data transmission becomes the critical path as the reconfigurable array scale increases. This paper also presents a Hierarchical Data Memory (HDM) architecture as a solution to the efficiency problem. In this architecture, high internal bandwidth is provided to buffer both reused input data and intermediate data. The HDM architecture relieves the external memory from the data transfer burden so that the performance is significantly
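
    One way to picture an on-chip context cache is as a small LRU-managed buffer of configuration contexts fetched from off-chip memory only on a miss, so the configuration pointer can still change cycle by cycle without storing every context on chip. The sketch below is generic (the capacity, fetch callback, and LRU policy are assumptions), not the Predictable Context Cache design itself.

      # Sketch of an on-chip context cache: configuration contexts are fetched
      # from off-chip memory on demand and kept in a small LRU-managed buffer
      # (names and policy here are illustrative, not the paper's PCC design).
      from collections import OrderedDict

      class ContextCache:
          def __init__(self, capacity, fetch_from_offchip):
              self.capacity = capacity
              self.fetch = fetch_from_offchip   # callback: context_id -> context bits
              self.cache = OrderedDict()        # context_id -> context, LRU order
              self.misses = 0

          def get(self, context_id):
              if context_id in self.cache:
                  self.cache.move_to_end(context_id)      # hit: refresh LRU position
              else:
                  self.misses += 1                        # miss: fetch from off-chip
                  if len(self.cache) >= self.capacity:
                      self.cache.popitem(last=False)      # evict least recently used
                  self.cache[context_id] = self.fetch(context_id)
              return self.cache[context_id]

      if __name__ == "__main__":
          pcc = ContextCache(capacity=4, fetch_from_offchip=lambda cid: f"ctx<{cid}>")
          schedule = [0, 1, 2, 0, 1, 3, 4, 0]   # per-cycle configuration pointers
          for cid in schedule:
              pcc.get(cid)
          print(f"{pcc.misses} misses for {len(schedule)} cycles")   # 5 misses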

  8. Deciphering Neural Codes of Memory during Sleep.

    PubMed

    Chen, Zhe; Wilson, Matthew A

    2017-05-01

    Memories of experiences are stored in the cerebral cortex. Sleep is critical for the consolidation of hippocampal memory of wake experiences into the neocortex. Understanding representations of neural codes of hippocampal-neocortical networks during sleep would reveal important circuit mechanisms in memory consolidation and provide novel insights into memory and dreams. Although sleep-associated ensemble spike activity has been investigated, identifying the content of memory in sleep remains challenging. Here we revisit important experimental findings on sleep-associated memory (i.e., neural activity patterns in sleep that reflect memory processing) and review computational approaches to the analysis of sleep-associated neural codes (SANCs). We focus on two analysis paradigms for sleep-associated memory and propose a new unsupervised learning framework ('memory first, meaning later') for unbiased assessment of SANCs. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Interactive Volume Exploration of Petascale Microscopy Data Streams Using a Visualization-Driven Virtual Memory Approach.

    PubMed

    Hadwiger, M; Beyer, J; Jeong, Won-Ki; Pfister, H

    2012-12-01

    This paper presents the first volume visualization system that scales to petascale volumes imaged as a continuous stream of high-resolution electron microscopy images. Our architecture scales to dense, anisotropic petascale volumes because it: (1) decouples construction of the 3D multi-resolution representation required for visualization from data acquisition, and (2) decouples sample access time during ray-casting from the size of the multi-resolution hierarchy. Our system is designed around a scalable multi-resolution virtual memory architecture that handles missing data naturally, does not pre-compute any 3D multi-resolution representation such as an octree, and can accept a constant stream of 2D image tiles from the microscopes. A novelty of our system design is that it is visualization-driven: we restrict most computations to the visible volume data. Leveraging the virtual memory architecture, missing data are detected during volume ray-casting as cache misses, which are propagated backwards for on-demand out-of-core processing. 3D blocks of volume data are only constructed from 2D microscope image tiles when they have actually been accessed during ray-casting. We extensively evaluate our system design choices with respect to scalability and performance, compare to previous best-of-breed systems, and illustrate the effectiveness of our system for real microscopy data from neuroscience.
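
    The visualization-driven, on-demand construction described above can be sketched as a block cache whose misses trigger assembly of a 3D block from its 2D image tiles, so only blocks actually touched by rays are ever built. The names, tile callback, and block shape below are illustrative assumptions rather than the system's API, and eviction and multi-resolution handling are omitted.

      # Sketch of visualization-driven, on-demand block construction: a 3D block
      # is assembled from its 2D tiles only when ray-casting first touches it
      # (a cache miss). Names and shapes are illustrative, not the actual system.
      class BlockCache:
          def __init__(self, load_tile, tiles_per_block):
              self.load_tile = load_tile            # callback: (block_id, z) -> 2D tile
              self.tiles_per_block = tiles_per_block
              self.blocks = {}                      # block_id -> constructed 3D block

          def sample(self, block_id):
              block = self.blocks.get(block_id)
              if block is None:                     # cache miss during ray-casting:
                  block = [self.load_tile(block_id, z)        # build out-of-core,
                           for z in range(self.tiles_per_block)]  # only when visible
                  self.blocks[block_id] = block
              return block

      if __name__ == "__main__":
          cache = BlockCache(load_tile=lambda b, z: f"tile({b},{z})", tiles_per_block=4)
          visible_blocks = [7, 7, 3]                # blocks actually hit by rays
          for b in visible_blocks:
              cache.sample(b)
          print(sorted(cache.blocks))               # [3, 7]: only visible blocks built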

  10. Measuring memory with the order of fractional derivative

    NASA Astrophysics Data System (ADS)

    Du, Maolin; Wang, Zaihua; Hu, Haiyan

    2013-12-01

    Fractional derivative has a history as long as that of classical calculus, but it is much less popular than it should be. What is the physical meaning of fractional derivative? This is still an open problem. In modeling various memory phenomena, we observe that a memory process usually consists of two stages. One is short with permanent retention, and the other is governed by a simple model of fractional derivative. With the numerical least square method, we show that the fractional model perfectly fits the test data of memory phenomena in different disciplines, not only in mechanics, but also in biology and psychology. Based on this model, we find that a physical meaning of the fractional order is an index of memory.
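
    As a rough stand-in for the fitting idea, the sketch below estimates a "memory index" by least-squares fitting a power-law tail R(t) ~ c * t^(-alpha) in log-log coordinates to synthetic retention data. The paper fits a genuine fractional-derivative model; the power law used here is only its long-time caricature, and the data points are made up.

      # Rough illustration of estimating a "memory index" from retention data:
      # fit a power-law tail R(t) ~ c * t**(-alpha) by least squares in log-log
      # coordinates. (Not the paper's fractional model; the data are synthetic.)
      import math

      def fit_power_law(times, retention):
          xs = [math.log(t) for t in times]
          ys = [math.log(r) for r in retention]
          n = len(xs)
          mx, my = sum(xs) / n, sum(ys) / n
          sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          sxx = sum((x - mx) ** 2 for x in xs)
          slope = sxy / sxx
          intercept = my - slope * mx
          return -slope, math.exp(intercept)        # alpha, c

      if __name__ == "__main__":
          # Synthetic retention data roughly following t**-0.4.
          times = [1, 2, 4, 8, 16, 32]
          retention = [1.00, 0.77, 0.58, 0.43, 0.33, 0.25]
          alpha, c = fit_power_law(times, retention)
          print(f"estimated memory index alpha = {alpha:.2f}")   # about 0.40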

  11. A Balanced Memory Network

    PubMed Central

    Roudi, Yasser; Latham, Peter E

    2007-01-01

    A fundamental problem in neuroscience is understanding how working memory—the ability to store information at intermediate timescales, like tens of seconds—is implemented in realistic neuronal networks. The most likely candidate mechanism is the attractor network, and a great deal of effort has gone toward investigating it theoretically. Yet, despite almost a quarter century of intense work, attractor networks are not fully understood. In particular, there are still two unanswered questions. First, how is it that attractor networks exhibit irregular firing, as is observed experimentally during working memory tasks? And second, how many memories can be stored under biologically realistic conditions? Here we answer both questions by studying an attractor neural network in which inhibition and excitation balance each other. Using mean-field analysis, we derive a three-variable description of attractor networks. From this description it follows that irregular firing can exist only if the number of neurons involved in a memory is large. The same mean-field analysis also shows that the number of memories that can be stored in a network scales with the number of excitatory connections, a result that has been suggested for simple models but never shown for realistic ones. Both of these predictions are verified using simulations with large networks of spiking neurons. PMID:17845070

  12. Drilling and Caching Architecture for the Mars2020 Mission

    NASA Astrophysics Data System (ADS)

    Zacny, K.

    2013-12-01

    We present a Sample Acquisition and Caching (SAC) architecture for the Mars2020 mission and detail how the architecture meets the sampling requirements described in the Mars2020 Science Definition Team (SDT) report. The architecture uses a 'One Bit per Core' approach. Having a dedicated bit for each rock core allows a reduction in the number of core transfer steps and actuators, and this reduces overall mission risk. It also alleviates the bit life problem, eliminates cross contamination, and aids in hermetic sealing. Added advantages are faster drilling time, lower power, lower energy, and lower Weight on Bit (which reduces Arm preload requirements). To enable replacing of core samples, the drill bits are based on the BigTooth bit design. The BigTooth bit cuts a core diameter slightly smaller than the imaginary hole inscribed by the inner surfaces of the bits. Hence the rock core can be ejected much more easily along the gravity vector. The architecture also has three additional types of bits that allow analysis of rocks. The Rock Abrasion and Brushing Bit (RABBit) allows brushing and grinding of rocks in the same way as the Rock Abrasion Tool does on MER. The PreView bit allows viewing and analysis of rock core surfaces. The Powder and Regolith Acquisition Bit (PRABit) captures regolith and rock powder either for in situ analysis or sample return. PRABit also provides sieving capabilities. The architecture can be viewed here: http://www.youtube.com/watch?v=_-hOO4-zDtE

  13. Differences between Presentation Methods in Working Memory Procedures: A Matter of Working Memory Consolidation

    PubMed Central

    Ricker, Timothy J.; Cowan, Nelson

    2014-01-01

    Understanding forgetting from working memory, the memory used in ongoing cognitive processing, is critical to understanding human cognition. In the last decade a number of conflicting findings have been reported regarding the role of time in forgetting from working memory. This has led to a debate concerning whether longer retention intervals necessarily result in more forgetting. An obstacle to directly comparing conflicting reports is a divergence in methodology across studies. Studies which find no forgetting as a function of retention-interval duration tend to use sequential presentation of memory items, while studies which find forgetting as a function of retention-interval duration tend to use simultaneous presentation of memory items. Here, we manipulate the duration of retention and the presentation method of memory items, presenting items either sequentially or simultaneously. We find that these differing presentation methods can lead to different rates of forgetting because they tend to differ in the time available for consolidation into working memory. The experiments detailed here show that equating the time available for working memory consolidation equates the rates of forgetting across presentation methods. We discuss the meaning of this finding in the interpretation of previous forgetting studies and in the construction of working memory models. PMID:24059859

  14. A population-based study of the association between coronary artery bypass graft surgery (CABG) and cognitive decline: the Cache County study.

    PubMed

    Lyketsos, Constantine G; Toone, Leslie; Tschanz, Joann; Corcoran, Christopher; Norton, Maria; Zandi, Peter; Munger, Ron; Breitner, John C S; Welsh-Bohmer, Kathleen

    2006-06-01

    The relationship between coronary artery bypass graft (CABG) surgery and cognitive decline remains uncertain, in particular with regard to whether there is delayed cognitive decline associated with this procedure. This was a population-based cohort study involving participants in the Cache County Study of Memory Health and Aging. At baseline the study enrolled 5,092 persons age 65 and older and followed them up three years later and again four years after that. Individuals who reported having undergone CABG surgery at study baseline or had this surgery in between follow-up waves were compared to individuals who never reported having the surgery. The main outcome measure was the Modified Mini Mental State (3MS). Multilevel models were used to examine the relationship between CABG surgery and cognitive decline over time. Study participants who had CABG surgery evidenced 0.95 points of greater decline relative to baseline on the 3MS at the first follow-up interview after CABG, and an average of 1.9 points of greater decline at the second follow-up interview, than those without CABG (t = -2.51, df = 2,316, p = 0.0121), after adjusting for several covariates, including number of vascular conditions. This decline was restricted to individuals who were more than five years past the procedure and was not evident in the early years after the surgery. CABG surgery is associated with accelerated cognitive decline more than five years after the procedure in a long-lived population. This decline is small and its clinical significance is uncertain. We could not find an association between CABG and decline in the first five post-operative years.

  15. Application of new WAIS-III/WMS-III discrepancy scores for evaluating memory functioning: relationship between intellectual and memory ability.

    PubMed

    Lange, Rael T; Chelune, Gordon J

    2006-05-01

    Analysis of the discrepancy between memory and intellectual ability has received some support as a means for evaluating memory impairment. Recently, comprehensive base rate tables for General Ability Index (GAI) minus memory discrepancy scores (i.e., GAI-memory) were developed using the WAIS-III/WMS-III standardization sample (Lange, Chelune, & Tulsky, in press). The purpose of this study was to evaluate the clinical utility of GAI-memory discrepancy scores to identify memory impairment in 34 patients with Alzheimer's type dementia (DAT) versus a sample of 34 demographically matched healthy participants. On average, patients with DAT obtained significantly lower scores on all WAIS-III and WMS-III indexes and had larger GAI-memory discrepancy scores. Clinical outcome analyses revealed that GAI-memory scores were useful at identifying memory impairment in patients with DAT versus matched healthy participants. However, GAI-memory discrepancy scores failed to provide unique interpretive information beyond that which is gained from the memory indexes alone. Implications and future research directions are discussed.

  16. Checkpoint repair for high-performance out-of-order execution machines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hwu, W.M.W.; Patt, Y.N.

    Out-of-order execution and branch prediction are two mechanisms that can be used profitably in the design of supercomputers to increase performance. Proper exception handling and branch prediction miss handling in an out-of-order execution machine require some kind of repair mechanism which can restore the machine to a known previous state. In this paper the authors present a class of repair mechanisms using the concept of checkpointing. The authors derive several properties of checkpoint repair mechanisms. In addition, they provide algorithms for performing checkpoint repair that incur little overhead in time and modest cost in hardware, and which require no more complexity or time for use with write-back cache memory systems than with write-through cache memory systems, contrary to statements made by previous researchers.
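
    As a rough illustration of the checkpointing idea summarized above (not the paper's hardware design), the following sketch models a register file that snapshots its state at a speculative branch and rolls back on a misprediction; the class and method names are hypothetical.

      # Minimal sketch of checkpoint-based state repair: save a snapshot of the
      # register file before speculation, restore it if the speculation was wrong.
      class CheckpointedRegisterFile:
          def __init__(self, num_regs=8):
              self.regs = {f"r{i}": 0 for i in range(num_regs)}
              self._checkpoints = []               # stack of saved snapshots

          def take_checkpoint(self):
              # Save a snapshot before entering a speculative region (e.g., a predicted branch).
              self._checkpoints.append(dict(self.regs))
              return len(self._checkpoints) - 1    # checkpoint id

          def write(self, reg, value):
              self.regs[reg] = value               # speculative update

          def commit(self, checkpoint_id):
              # Speculation was correct: discard the checkpoint, keep current state.
              del self._checkpoints[checkpoint_id:]

          def repair(self, checkpoint_id):
              # Speculation was wrong: restore the machine to the checkpointed state.
              self.regs = self._checkpoints[checkpoint_id]
              del self._checkpoints[checkpoint_id:]

      rf = CheckpointedRegisterFile()
      rf.write("r1", 41)
      cp = rf.take_checkpoint()    # predicted-taken branch
      rf.write("r1", 99)           # speculative work
      rf.repair(cp)                # misprediction detected: roll back
      assert rf.regs["r1"] == 41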

  17. Locality-Conscious Lock-Free Linked Lists

    NASA Astrophysics Data System (ADS)

    Braginsky, Anastasia; Petrank, Erez

    We extend state-of-the-art lock-free linked lists by building linked lists with special care for locality of traversals. These linked lists are built of sequences of entries that reside on consecutive chunks of memory. When traversing such lists, subsequent entries typically reside on the same chunk and are thus close to each other, e.g., in the same cache line or on the same virtual memory page. Such cache-conscious implementations of linked lists are frequently used in practice, but making them lock-free requires care. The basic component of this construction is a chunk of entries in the list that maintains a minimum and a maximum number of entries. This basic chunk component is an interesting tool on its own and may be used to build other lock-free data structures as well.
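
    The sketch below illustrates only the chunked layout described above, in which entries live in fixed-capacity chunks so a traversal touches consecutive slots before following a pointer; the lock-free machinery (freezing and splitting chunks with compare-and-swap) is deliberately omitted, and names such as Chunk and MAX_ENTRIES are illustrative.

      MAX_ENTRIES = 8      # entries per chunk (e.g., sized to a cache line)
      MIN_ENTRIES = 2      # below this a real implementation would merge chunks

      class Chunk:
          def __init__(self):
              self.entries = []      # sorted keys stored contiguously
              self.next = None       # pointer to the next chunk

      class ChunkedList:
          def __init__(self):
              self.head = Chunk()

          def insert(self, key):
              chunk = self.head
              # Find the chunk whose key range should hold `key`.
              while chunk.next and chunk.next.entries and chunk.next.entries[0] <= key:
                  chunk = chunk.next
              if len(chunk.entries) < MAX_ENTRIES:
                  chunk.entries.append(key)
                  chunk.entries.sort()
              else:                  # split a full chunk in half, then retry
                  new_chunk = Chunk()
                  half = MAX_ENTRIES // 2
                  new_chunk.entries = chunk.entries[half:]
                  chunk.entries = chunk.entries[:half]
                  new_chunk.next, chunk.next = chunk.next, new_chunk
                  self.insert(key)

          def __iter__(self):
              chunk = self.head
              while chunk:
                  yield from chunk.entries   # consecutive entries: good locality
                  chunk = chunk.next

      lst = ChunkedList()
      for k in [5, 1, 9, 3, 7, 2, 8, 6, 4]:
          lst.insert(k)
      assert list(lst) == sorted([5, 1, 9, 3, 7, 2, 8, 6, 4])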

  18. Contrasting patterns of survival and dispersal in multiple habitats reveal an ecological trap in a food-caching bird.

    PubMed

    Norris, D Ryan; Flockhart, D T Tyler; Strickland, Dan

    2013-11-01

    A comprehensive understanding of how natural and anthropogenic variation in habitat influences populations requires long-term information on how such variation affects survival and dispersal throughout the annual cycle. Gray jays Perisoreus canadensis are widespread boreal resident passerines that use cached food to survive over the winter and to begin breeding during the late winter. Using multistate capture-recapture analysis, we examined apparent survival and dispersal in relation to habitat quality in a gray jay population over 34 years (1977-2010). Prior evidence suggests that natural variation in habitat quality is driven by the proportion of conifers on territories because of their superior ability to preserve cached food. Although neither adults (>1 year) nor juveniles (<1 year) had higher survival rates on high-conifer territories, both age classes were less likely to leave high-conifer territories and, when they did move, were more likely to disperse to high-conifer territories. In contrast, survival rates were lower on territories that were adjacent to a major highway compared to territories that did not border the highway but there was no evidence for directional dispersal towards or away from highway territories. Our results support the notion that natural variation in habitat quality is driven by the proportion of coniferous trees on territories and provide the first evidence that high-mortality highway habitats can act as an equal-preference ecological trap for birds. Reproductive success, as shown in a previous study, but not survival, is sensitive to natural variation in habitat quality, suggesting that gray jays, despite living in harsh winter conditions, likely favor the allocation of limited resources towards self-maintenance over reproduction.

  19. Synesthesia and memory: color congruency, von Restorff, and false memory effects.

    PubMed

    Radvansky, Gabriel A; Gibson, Bradley S; McNerney, M Windy

    2011-01-01

    In the current study, we explored the influence of synesthesia on memory for word lists. We tested 10 grapheme-color synesthetes who reported an experience of color when reading letters or words. We replicated a previous finding that memory is compromised when synesthetic color is incongruent with perceptual color. Beyond this, we found that, although their memory for word lists was superior overall, synesthetes did not exhibit typical color- or semantic-defined von Restorff isolation effects (von Restorff, 1933) compared with control participants. Moreover, our synesthetes exhibited a reduced Deese-Roediger-McDermott false memory effect (Deese, 1959; Roediger & McDermott, 1995). Taken as a whole, these findings are consistent with the idea that color-grapheme synesthesia can lead people to place a greater emphasis on item-specific processing and surface form characteristics of words in a list (e.g., the letters that make them up) relative to relational processing and more meaning-based processes. (PsycINFO Database Record (c) 2010 APA, all rights reserved).

  20. Working Memory, Short-Term Memory, and Reading Disabilities: A Selective Meta-Analysis of the Literature

    ERIC Educational Resources Information Center

    Swanson, H. Lee; Zheng, Xinhua; Jerman, Olga

    2009-01-01

    The purpose of the present study was to synthesize research that compares children with and without reading disabilities (RD) on measures of short-term memory (STM) and working memory (WM). Across a broad age, reading, and IQ range, 578 effect sizes (ESs) were computed, yielding a mean ES across studies of -0.89 (SD = 1.03). A total of 257 ESs…

  1. Implementation Of The Configurable Fault Tolerant System Experiment On NPSAT 1

    DTIC Science & Technology

    2016-03-01

    Master’s thesis. Implementation of the Configurable Fault Tolerant System Experiment on NPSAT...open-source microprocessor without interlocked pipeline stages (MIPS) based processor softcore, a cached memory structure capable of accessing double...data rate type three and secure digital card memories, an interface to the main satellite bus, and XILINX’s soft error mitigation softcore. The

  2. Transfers and Enhancements of the Teleconferencing System and Support of the Special Operations Planning Aids

    DTIC Science & Technology

    1984-10-31

    five colors, page forward, page back, erase, clear the page, store previously annotated material, and later retrieve it. From this developed a four...system to secure sites. These enhancements are discussed below. ...Enhancements to the...and large cache memory of the Winchester drive allows the SGWS software to run much faster when doing file access or direct memory access (DMA) than

  3. Orienting, emotion, and memory: phasic and tonic variation in heart rate predicts memory for emotional pictures in men.

    PubMed

    Abercrombie, Heather C; Chambers, Andrea S; Greischar, Lawrence; Monticelli, Roxanne M

    2008-11-01

    Arousal-related processes associated with heightened heart rate (HR) predict memory enhancement, especially for emotionally arousing stimuli. In addition, phasic HR deceleration reflects "orienting" and sensory receptivity during perception of stimuli. We hypothesized that both tonic elevations in HR as well as phasic HR deceleration during viewing of pictures would be associated with deeper encoding and better subsequent memory for stimuli. Emotional pictures are more memorable and cause greater HR deceleration than neutral pictures. Thus, we predicted that the relations between cardiac activity and memory enhancement would be most pronounced for emotionally-laden compared to neutral pictures. We measured HR in 53 males during viewing of unpleasant, neutral, and pleasant pictures, and tested memory for the pictures two days later. Phasic HR deceleration during viewing of individual pictures was greater for subsequently remembered than forgotten pictures across all three emotion categories. Elevated mean HR across the entire encoding epoch also predicted better memory performance, but only for emotionally arousing pictures. Elevated mean HR and phasic HR deceleration were associated, such that individuals with greater tonic HR also showed greater HR decelerations during picture viewing, but only for emotionally arousing pictures. Results suggest that tonic elevations in HR are associated both with greater orienting and heightened memory for emotionally arousing stimuli.

  4. Orienting, emotion, and memory: Phasic and tonic variation in heart rate predicts memory for emotional pictures in men

    PubMed Central

    Abercrombie, Heather C.; Chambers, Andrea S.; Greischar, Lawrence; Monticelli, Roxanne M.

    2008-01-01

    Arousal-related processes associated with heightened heart rate (HR) predict memory enhancement, especially for emotionally arousing stimuli. In addition, phasic HR deceleration reflects “orienting” and sensory receptivity during perception of stimuli. We hypothesized that both tonic elevations in HR as well as phasic HR deceleration during viewing of pictures would be associated with deeper encoding and better subsequent memory for stimuli. Emotional pictures are more memorable and cause greater HR deceleration than neutral pictures. Thus, we predicted that the relations between cardiac activity and memory enhancement would be most pronounced for emotionally-laden compared to neutral pictures. We measured HR in 53 males during viewing of unpleasant, neutral, and pleasant pictures, and tested memory for the pictures two days later. Phasic HR deceleration during viewing of individual pictures was greater for subsequently remembered than forgotten pictures across all three emotion categories. Elevated mean HR across the entire encoding epoch also predicted better memory performance, but only for emotionally arousing pictures. Elevated mean HR and phasic HR deceleration were associated, such that individuals with greater tonic HR also showed greater HR decelerations during picture viewing, but only for emotionally arousing pictures. Results suggest that tonic elevations in HR are associated both with greater orienting and heightened memory for emotionally arousing stimuli. PMID:18755284

  5. LightAssembler: fast and memory-efficient assembly algorithm for high-throughput sequencing reads.

    PubMed

    El-Metwally, Sara; Zakaria, Magdi; Hamza, Taher

    2016-11-01

    The deluge of current sequenced data has exceeded Moore's Law, more than doubling every 2 years since the next-generation sequencing (NGS) technologies were invented. Accordingly, we will be able to generate more and more data with high speed at fixed cost, but lack the computational resources to store, process and analyze it. With error-prone high-throughput NGS reads and genomic repeats, the assembly graph contains a massive amount of redundant nodes and branching edges. Most assembly pipelines require this large graph to reside in memory to start their workflows, which is intractable for mammalian genomes. Resource-efficient genome assemblers combine both the power of advanced computing techniques and innovative data structures to encode the assembly graph efficiently in computer memory. LightAssembler is a lightweight assembly algorithm designed to be executed on a desktop machine. It uses a pair of cache-oblivious Bloom filters, one holding a uniform sample of [Formula: see text]-spaced sequenced [Formula: see text]-mers and the other holding [Formula: see text]-mers classified as likely correct, using a simple statistical test. LightAssembler contains a light implementation of the graph traversal and simplification modules that achieves comparable assembly accuracy and contiguity to other competing tools. Our method reduces the memory usage by [Formula: see text] compared to the resource-efficient assemblers using benchmark datasets from the GAGE and Assemblathon projects. While LightAssembler can be considered a gap-based sequence assembler, different gap sizes result in an almost constant assembly size and genome coverage. https://github.com/SaraEl-Metwally/LightAssembler CONTACT: sarah_almetwally4@mans.edu.eg Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
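
    The following sketch illustrates the general two-Bloom-filter idea described above, with one filter holding a sample of observed k-mers and another holding k-mers accepted as likely correct; the hash scheme, sampling rule, and trivial acceptance test are illustrative stand-ins, not LightAssembler's actual statistical test.

      import hashlib

      class BloomFilter:
          def __init__(self, size_bits=1 << 20, num_hashes=3):
              self.size = size_bits
              self.num_hashes = num_hashes
              self.bits = bytearray(size_bits // 8)

          def _positions(self, item):
              # Derive several bit positions from salted hashes of the item.
              for i in range(self.num_hashes):
                  digest = hashlib.blake2b(item.encode(), salt=bytes([i])).digest()
                  yield int.from_bytes(digest[:8], "big") % self.size

          def add(self, item):
              for pos in self._positions(item):
                  self.bits[pos // 8] |= 1 << (pos % 8)

          def __contains__(self, item):
              return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(item))

      def kmers(read, k=5):
          return (read[i:i + k] for i in range(len(read) - k + 1))

      sampled = BloomFilter()   # uniform sample of observed k-mers
      trusted = BloomFilter()   # k-mers accepted as likely correct

      reads = ["ACGTACGTAC", "ACGTACGTAG"]
      for read in reads:
          for i, kmer in enumerate(kmers(read)):
              if i % 2 == 0:             # toy "uniform sampling" rule
                  sampled.add(kmer)
              if kmer in sampled:        # toy acceptance test: already seen in the sample
                  trusted.add(kmer)

      print("ACGTA" in trusted)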

  6. Eavesdropping on Memory.

    PubMed

    Loftus, Elizabeth F

    2017-01-03

    For more than four decades, I have been studying human memory. My research concerns the malleable nature of memory. Information suggested to an individual about an event can be integrated with the memory of the event itself, so that what actually occurred, and what was discussed later about what may have occurred, become inextricably interwoven, allowing distortion, elaboration, and even total fabrication. In my writings, classes, and public speeches, I've tried to convey one important take-home message: Just because someone tells you something in great detail, with much confidence, and with emotion, it doesn't mean that it is true. Here I describe my professional life as an experimental psychologist, in which I've eavesdropped on this process, as well as many personal experiences that may have influenced my thinking and choices.

  7. The role of working memory and declarative memory in trace conditioning

    PubMed Central

    Connor, David A.; Gould, Thomas J.

    2017-01-01

    Translational assays of cognition that are similarly implemented in both lower and higher-order species, such as rodents and primates, provide a means to reconcile preclinical modeling of psychiatric neuropathology and clinical research. To this end, Pavlovian conditioning has provided a useful tool for investigating cognitive processes in both lab animal models and humans. This review focuses on trace conditioning, a form of Pavlovian conditioning typified by the insertion of a temporal gap (i.e., trace interval) between presentations of a conditioned stimulus (CS) and an unconditioned stimulus (US). This review aims to discuss pre-clinical and clinical work investigating the mnemonic processes recruited for trace conditioning. Much work suggests that trace conditioning involves unique neurocognitive mechanisms to facilitate formation of trace memories in contrast to standard Pavlovian conditioning. For example, the hippocampus and prefrontal cortex (PFC) appear to play critical roles in trace conditioning. Moreover, cognitive mechanistic accounts in human studies suggest that working memory and declarative memory processes are engaged to facilitate formation of trace memories. The aim of this review is to integrate cognitive and neurobiological accounts of trace conditioning from preclinical and clinical studies to examine involvement of working and declarative memory. PMID:27422017

  8. Nobiletin improves emotional and novelty recognition memory but not spatial referential memory.

    PubMed

    Kang, Jiyun; Shin, Jung-Won; Kim, Yoo-Rim; Swanberg, Kelley M; Kim, Yooseung; Bae, Jae Ryong; Kim, Young Ki; Lee, Jinwon; Kim, Soo-Yeon; Sohn, Nak-Won; Maeng, Sungho

    2017-01-01

    How to maintain and enhance cognitive function in both aged and young populations is a highly interesting subject, but candidate memory-enhancing reagents are tested almost exclusively on lesioned or aged animals, and there is insufficient information on the type of memory these reagents can improve. Working memory, located in the prefrontal cortex, manages short-term sensory information; when this information gains significant relevance it is converted to long-term memory by the hippocampal formation and/or amygdala, followed by tagging with space-time or emotional cues, respectively. Nobiletin is a product of citrus peel known for cognitive-enhancing effects in various pharmacological and neurodegenerative disease models, yet it is not well studied in non-lesioned animals, and the type of memory that nobiletin can improve remains unclear. In this study, 8-week-old male mice were tested with behavioral measurements of working, spatial referential, emotional and visual recognition memory after daily administration of nobiletin. While nobiletin did not induce any change in spontaneous activity in the open field test, freezing in fear conditioning and novel object recognition increased. However, the effectiveness of spatial navigation in the Y-maze and Morris water maze was not improved. These results mean that nobiletin can specifically improve memories of emotionally salient information associated with fear and novelty, but not of spatial information without emotional saliency. Accordingly, the use of nobiletin in normal subjects as a memory enhancer would be more effective for emotional types of memory but may have limited value for the improvement of episodic memories.

  9. Sparse distributed memory: Principles and operation

    NASA Technical Reports Server (NTRS)

    Flynn, M. J.; Kanerva, P.; Bhadkamkar, N.

    1989-01-01

    Sparse distributed memory is a generalized random access memory (RAM) for long (1000 bit) binary words. Such words can be written into and read from the memory, and they can also be used to address the memory. The main attribute of the memory is sensitivity to similarity, meaning that a word can be read back not only by giving the original write address but also by giving one close to it as measured by the Hamming distance between addresses. Large memories of this kind are expected to have wide use in speech recognition and scene analysis, in signal detection and verification, in adaptive control of automated equipment, and, in general, in dealing with real-world information in real time. The memory can be realized as a simple, massively parallel computer. Digital technology has reached a point where building large memories is becoming practical. Major design issues faced in building such memories were resolved. The design of a prototype memory with 256-bit addresses and from 8 to 128 K locations for 256-bit words is described. A key aspect of the design is extensive use of dynamic RAM and other standard components.
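
    A small numerical sketch of the read and write operations is given below, assuming the standard construction: random hard locations, a write that updates bit counters at every location within a Hamming radius of the write address, and a read that sums those counters and thresholds. The dimensions and radius are illustrative rather than the prototype's exact parameters.

      import numpy as np

      rng = np.random.default_rng(0)
      N_BITS = 256           # word/address length
      N_LOCATIONS = 2000     # number of hard locations
      RADIUS = 112           # Hamming-distance activation radius

      hard_addresses = rng.integers(0, 2, size=(N_LOCATIONS, N_BITS))
      counters = np.zeros((N_LOCATIONS, N_BITS), dtype=int)

      def activated(address):
          distances = np.sum(hard_addresses != address, axis=1)   # Hamming distances
          return distances <= RADIUS

      def write(address, word):
          sel = activated(address)
          counters[sel] += np.where(word == 1, 1, -1)             # +1 for 1-bits, -1 for 0-bits

      def read(address):
          sel = activated(address)
          return (counters[sel].sum(axis=0) > 0).astype(int)      # majority vote per bit

      word = rng.integers(0, 2, size=N_BITS)
      write(word, word)                  # store the word autoassociatively

      noisy = word.copy()
      flip = rng.choice(N_BITS, size=20, replace=False)
      noisy[flip] ^= 1                   # read back from a nearby (noisy) address
      recovered = read(noisy)
      print("bits recovered correctly:", int(np.sum(recovered == word)), "/", N_BITS)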

  10. Complex-valued Multidirectional Associative Memory

    NASA Astrophysics Data System (ADS)

    Kobayashi, Masaki; Yamazaki, Haruaki

    The Hopfield model is a representative associative memory. It was extended to the Bidirectional Associative Memory (BAM) by Kosko and to the Multidirectional Associative Memory (MAM) by Hagiwara. These networks have two or more layers, and since the connections between layers are symmetric they are guaranteed to converge. MAM can store tuples of patterns, such as (x1, x2,…), where xm is the pattern on layer m. Noest, Hirose, and Nemoto proposed the complex-valued Hopfield model, and Lee proposed the complex-valued Bidirectional Associative Memory. Zemel proved the rotation invariance of the complex-valued Hopfield model, meaning that rotated versions of a stored pattern are also stored. In this paper, the complex-valued Multidirectional Associative Memory is proposed and its rotation invariance is proved. Moreover, computer simulations show that differences between the angles of the given patterns are automatically reduced. We first define the complex-valued Multidirectional Associative Memory, then define the energy function of the network and use it to prove that the network is guaranteed to converge. Next, we define the learning rule and show a characteristic of the recall process: the differences between the angles of the given patterns are automatically reduced. In particular, we prove the following theorem: in the case that only one tuple of patterns is stored, if patterns with different angles are given to each layer, the differences are automatically reduced. Finally, we investigate how these angle differences influence noise robustness. They reduce noise robustness because the input to each layer becomes small. We demonstrate this by computer simulations.
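
    The sketch below illustrates one plausible phasor-style construction of such a network (unit-magnitude complex states, Hebbian outer-product weights between layers, and phase quantization at recall); the layer sizes, phase resolution K, and stored patterns are illustrative, and the code is not the paper's exact formulation.

      import numpy as np

      K = 8                            # number of phase states per neuron
      sizes = [12, 10, 8]              # neurons in layers 0, 1, 2
      rng = np.random.default_rng(1)

      def random_pattern(n):
          return np.exp(2j * np.pi * rng.integers(0, K, size=n) / K)

      def quantize(z):
          # Snap each complex value to the nearest K-th root of unity.
          phase = np.round(np.angle(z) / (2 * np.pi / K)) % K
          return np.exp(2j * np.pi * phase / K)

      # Store a single tuple of patterns, one pattern per layer.
      patterns = [random_pattern(n) for n in sizes]

      # Hebbian weights W[(m, n)] connecting layer n to layer m (m != n).
      W = {}
      for m in range(3):
          for n in range(3):
              if m != n:
                  W[(m, n)] = np.outer(patterns[m], np.conj(patterns[n]))

      def recall(states, steps=5):
          states = [s.copy() for s in states]
          for _ in range(steps):
              for m in range(3):
                  total = sum(W[(m, n)] @ states[n] for n in range(3) if n != m)
                  states[m] = quantize(total)
          return states

      # Start from slightly phase-perturbed patterns and recall the stored tuple.
      noisy = [p * np.exp(2j * np.pi * rng.normal(0, 0.05, size=p.shape)) for p in patterns]
      result = recall(noisy)
      for m in range(3):
          print("layer", m, "recovered:", np.allclose(result[m], quantize(patterns[m])))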

  11. Memory conformity affects inaccurate memories more than accurate memories.

    PubMed

    Wright, Daniel B; Villalba, Daniella K

    2012-01-01

    After controlling for initial confidence, inaccurate memories were shown to be more easily distorted than accurate memories. In two experiments groups of participants viewed 50 stimuli and were then presented with these stimuli plus 50 fillers. During this test phase participants reported their confidence that each stimulus was originally shown. This was followed by computer-generated responses from a bogus participant. After being exposed to this response participants again rated the confidence of their memory. The computer-generated responses systematically distorted participants' responses. Memory distortion depended on initial memory confidence, with uncertain memories being more malleable than confident memories. This effect was moderated by whether the participant's memory was initially accurate or inaccurate. Inaccurate memories were more malleable than accurate memories. The data were consistent with a model describing two types of memory (i.e., recollective and non-recollective memories), which differ in how susceptible these memories are to memory distortion.

  12. The plasticity of early memory reports: social pressure, hypnotizability, compliance, and interrogative suggestibility.

    PubMed

    Malinoski, P T; Lynn, S J

    1999-10-01

    Early autobiographical memory reports by adults were very sensitive to social influence in a leading interview. The mean age of initial earliest memory report was 3.7 years. When participants were instructed to close their eyes, visualize, and focus on their 2nd birthday, 59% reported a birthday memory. After repeated probes for earlier memories, 78% of subjects reported memories at or prior to 24 months of age, and 33% reported memories within the first 12 months of age. The mean age of the final earliest memory reported was 1.6 years. Participants rated their memory reports as accurate and did not recant them when given an opportunity. The age of earliest memory reports in the suggestive interview correlated negatively with measures of compliance, hypnotizability, and interrogative suggestibility.

  13. Evolution of magnetic disk subsystems

    NASA Astrophysics Data System (ADS)

    Kaneko, Satoru

    1994-06-01

    The higher recording density of magnetic disks realized today has brought larger storage capacity per unit and smaller form factors. If the required access performance per MB is to remain constant, the performance of large subsystems has to be several times better. This article mainly describes technology for improving the performance of magnetic disk subsystems and the prospects for their future evolution. Also considered are 'crosscall pathing', which makes the data transfer channel more effective; 'disk cache', which improves performance by coupling with solid-state memory technology; and 'RAID', which improves the availability and integrity of disk subsystems by organizing multiple disk drives into a subsystem. It is concluded that, since the performance of the subsystem is dominated by that of the disk cache, maximizing the performance of the disk cache is very important.
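
    As a back-of-the-envelope illustration of why the disk cache dominates, the effective access time is the hit-rate-weighted average of cache and mechanical-disk latencies; the latency figures below are assumed values.

      # Effective access time = hit_rate * cache_latency + (1 - hit_rate) * disk_latency
      cache_latency_ms = 0.1    # solid-state cache access (assumed)
      disk_latency_ms = 10.0    # mechanical seek + rotation + transfer (assumed)

      def effective_latency(hit_rate):
          return hit_rate * cache_latency_ms + (1 - hit_rate) * disk_latency_ms

      for hit_rate in (0.0, 0.5, 0.9, 0.99):
          print(f"hit rate {hit_rate:4.0%}: {effective_latency(hit_rate):6.3f} ms")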

  14. Programmable partitioning for high-performance coherence domains in a multiprocessor system

    DOEpatents

    Blumrich, Matthias A [Ridgefield, CT; Salapura, Valentina [Chappaqua, NY

    2011-01-25

    A multiprocessor computing system and a method of logically partitioning a multiprocessor computing system are disclosed. The multiprocessor computing system comprises a multitude of processing units, and a multitude of snoop units. Each of the processing units includes a local cache, and the snoop units are provided for supporting cache coherency in the multiprocessor system. Each of the snoop units is connected to a respective one of the processing units and to all of the other snoop units. The multiprocessor computing system further includes a partitioning system for using the snoop units to partition the multitude of processing units into a plurality of independent, memory-consistent, adjustable-size processing groups. Preferably, when the processor units are partitioned into these processing groups, the partitioning system also configures the snoop units to maintain cache coherency within each of said groups.
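
    The toy model below illustrates the partitioning idea in software terms only: coherence (snoop) traffic from a processing unit is forwarded just to units in the same group, so each group behaves as an independent coherence domain. The classes and the simple invalidation protocol are hypothetical, not the patented hardware design.

      class ProcessingUnit:
          def __init__(self, uid):
              self.uid = uid
              self.cache = {}                    # address -> value (local cache)

          def snoop_invalidate(self, address):
              self.cache.pop(address, None)      # drop a stale line on a remote write

      class PartitionedSnoopFabric:
          def __init__(self, units, groups):
              self.units = units
              self.group_of = {u.uid: g for g, members in enumerate(groups) for u in members}

          def write(self, writer, address, value):
              writer.cache[address] = value
              for u in self.units:               # snoop only within the writer's group
                  if u is not writer and self.group_of[u.uid] == self.group_of[writer.uid]:
                      u.snoop_invalidate(address)

      units = [ProcessingUnit(i) for i in range(4)]
      fabric = PartitionedSnoopFabric(units, groups=[units[:2], units[2:]])

      units[1].cache[0x100] = "old"
      units[3].cache[0x100] = "old"
      fabric.write(units[0], 0x100, "new")
      print(0x100 in units[1].cache)   # False: same group, invalidated
      print(0x100 in units[3].cache)   # True: other group untouched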

  15. Creating a false memory in the hippocampus.

    PubMed

    Ramirez, Steve; Liu, Xu; Lin, Pei-Ann; Suh, Junghyup; Pignatelli, Michele; Redondo, Roger L; Ryan, Tomás J; Tonegawa, Susumu

    2013-07-26

    Memories can be unreliable. We created a false memory in mice by optogenetically manipulating memory engram-bearing cells in the hippocampus. Dentate gyrus (DG) or CA1 neurons activated by exposure to a particular context were labeled with channelrhodopsin-2. These neurons were later optically reactivated during fear conditioning in a different context. The DG experimental group showed increased freezing in the original context, in which a foot shock was never delivered. The recall of this false memory was context-specific, activated similar downstream regions engaged during natural fear memory recall, and was also capable of driving an active fear response. Our data demonstrate that it is possible to generate an internally represented and behaviorally expressed fear memory via artificial means.

  16. Efficient Aho-Corasick String Matching on Emerging Multicore Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tumeo, Antonino; Villa, Oreste; Secchi, Simone

    String matching algorithms are critical to several scientific fields. Besides text processing and databases, emerging applications such as DNA and protein sequence analysis, data mining, information security software, antivirus, and machine learning all exploit string matching algorithms [3]. All these applications usually process large quantities of textual data and require high performance and/or predictable execution times. Among all the string matching algorithms, one of the most studied, especially for text processing and security applications, is the Aho-Corasick algorithm. Aho-Corasick is an exact, multi-pattern string matching algorithm which performs the search in a time linearly proportional to the length of the input text, independently of pattern set size. However, depending on the implementation, when the number of patterns increases, the memory occupation may rise drastically. In turn, this can lead to significant variability in performance, due to memory access times and caching effects. This is a significant concern for many mission-critical applications and modern high-performance architectures. For example, security applications such as Network Intrusion Detection Systems (NIDS) must be able to scan network traffic against very large dictionaries in real time. Modern Ethernet links reach up to 10 Gbps, and malicious threats are already well over 1 million and growing exponentially [28]. When performing the search, a NIDS should not slow down the network or let network packets pass unchecked. Nevertheless, on current state-of-the-art cache-based processors, there may be large performance variability when dealing with big dictionaries and inputs that have different frequencies of matching patterns. In particular, when few patterns are matched and they are all in the cache, the procedure is fast. Instead, when they are not in the cache, often because many patterns are matched and the caches are
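
    For reference, a compact sketch of the classic Aho-Corasick construction follows: build a trie of the patterns, add failure links with a breadth-first pass, then scan the text once, reporting every match in time linear in the text length. The dictionary and text are toy data.

      from collections import deque

      def build_automaton(patterns):
          goto = [dict()]      # goto[state][char] -> next state
          fail = [0]           # failure link for each state
          output = [[]]        # patterns that end at each state
          for pat in patterns:             # 1. build the trie
              state = 0
              for ch in pat:
                  if ch not in goto[state]:
                      goto.append(dict())
                      fail.append(0)
                      output.append([])
                      goto[state][ch] = len(goto) - 1
                  state = goto[state][ch]
              output[state].append(pat)
          queue = deque(goto[0].values())
          while queue:                     # 2. breadth-first pass to set failure links
              state = queue.popleft()
              for ch, nxt in goto[state].items():
                  queue.append(nxt)
                  f = fail[state]
                  while f and ch not in goto[f]:
                      f = fail[f]
                  fail[nxt] = goto[f].get(ch, 0)
                  output[nxt] += output[fail[nxt]]   # inherit matches ending here
          return goto, fail, output

      def search(text, patterns):
          goto, fail, output = build_automaton(patterns)
          state, matches = 0, []
          for i, ch in enumerate(text):
              while state and ch not in goto[state]:
                  state = fail[state]
              state = goto[state].get(ch, 0)
              for pat in output[state]:
                  matches.append((i - len(pat) + 1, pat))
          return matches

      print(search("ushers", ["he", "she", "his", "hers"]))
      # [(1, 'she'), (2, 'he'), (2, 'hers')]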

  17. Point and 5-year period prevalence of neuropsychiatric symptoms in dementia: the Cache County Study.

    PubMed

    Steinberg, Martin; Shao, Huibo; Zandi, Peter; Lyketsos, Constantine G; Welsh-Bohmer, Kathleen A; Norton, Maria C; Breitner, John C S; Steffens, David C; Tschanz, Joann T

    2008-02-01

    Neuropsychiatric symptoms are nearly universal in dementia, yet little is known about their longitudinal course in the community. To estimate point and 5-year period prevalence of neuropsychiatric symptoms in an incident sample of 408 dementia participants from the Cache County Study. The Neuropsychiatric Inventory assessed symptoms at baseline and at 1.5 years, 3.0 years, 4.1 years, and 5.3 years. Point prevalence, period prevalence and mean symptom severity at each time point were estimated. Point prevalence for delusions was 18% at baseline and 34-38% during the last three visits; hallucinations, 10% at baseline and 19-24% subsequently; agitation/aggression fluctuated between 13% and 24%; depression 29% at baseline and 41-47% subsequently; apathy increased from 20% at baseline to 51% at 5.3 years; elation never rose above 1%; anxiety 14% at baseline and 24-32% subsequently; disinhibition fluctuated between 2% and 15%; irritability between 17% and 27%; aberrant motor behavior gradually increased from 7% at baseline to 29% at 5.3 years. Point prevalence for any symptom was 56% at baseline and 76-87% subsequently. Five-year period prevalence was greatest for depression (77%), apathy (71%), and anxiety (62%); lowest for elation (6%), and disinhibition (31%). Ninety-seven percent experienced at least one symptom. Symptom severity was consistently highest for apathy. Participants were most likely to develop depression, apathy, or anxiety, and least likely to develop elation or disinhibition. Given converging evidence that syndromal definitions may more accurately capture neuropsychiatric co-morbidity in dementia, future efforts to validate such syndromes are warranted. Copyright (c) 2007 John Wiley & Sons, Ltd.

  18. Institutional Memory and the US Air Force

    DTIC Science & Technology

    2016-01-01

    Institutional Memory and the US Air Force. Lt Col Daniel J. Brown, USAF. Disclaimer: The views and opinions expressed...national defense. After each advance is tested in combat, a new round of intellectual sparring commences regarding ...the service’s institutional memory of how it fights and what it fights with—the ways and means of war fighting. Critical to maintaining its

  19. Smart Collaborative Caching for Information-Centric IoT in Fog Computing.

    PubMed

    Song, Fei; Ai, Zheng-Yang; Li, Jun-Jie; Pau, Giovanni; Collotta, Mario; You, Ilsun; Zhang, Hong-Ke

    2017-11-01

    The significant changes enabled by fog computing have demonstrated that the Internet of Things (IoT) urgently needs more evolutionary reforms. Limited by an inflexible design philosophy, the traditional network structure struggles to meet the latest demands. However, Information-Centric Networking (ICN) is a promising option to bridge and cover these enormous gaps. In this paper, a Smart Collaborative Caching (SCC) scheme is established by leveraging high-level ICN principles for IoT within the fog computing paradigm. The proposed solution is intended to be used in resource pooling, content storing, node locating and other related situations. By investigating the available characteristics of ICN, some challenges of such a combination are reviewed in depth. The details of building SCC, including the basic model and advanced algorithms, are presented based on theoretical analysis and simplified examples. The validation focuses on two typical scenarios: simple status inquiry and complex content sharing. The number of clusters, packet loss probability and other parameters are also considered. The analytical results demonstrate that the performance of our scheme, in terms of total packet number and average transmission latency, can outperform that of the original ones. We expect that SCC will contribute an efficient solution to related studies.

  20. Smart Collaborative Caching for Information-Centric IoT in Fog Computing

    PubMed Central

    Song, Fei; Ai, Zheng-Yang; Li, Jun-Jie; Zhang, Hong-Ke

    2017-01-01

    The significant changes enabled by fog computing have demonstrated that the Internet of Things (IoT) urgently needs more evolutionary reforms. Limited by an inflexible design philosophy, the traditional network structure struggles to meet the latest demands. However, Information-Centric Networking (ICN) is a promising option to bridge and cover these enormous gaps. In this paper, a Smart Collaborative Caching (SCC) scheme is established by leveraging high-level ICN principles for IoT within the fog computing paradigm. The proposed solution is intended to be used in resource pooling, content storing, node locating and other related situations. By investigating the available characteristics of ICN, some challenges of such a combination are reviewed in depth. The details of building SCC, including the basic model and advanced algorithms, are presented based on theoretical analysis and simplified examples. The validation focuses on two typical scenarios: simple status inquiry and complex content sharing. The number of clusters, packet loss probability and other parameters are also considered. The analytical results demonstrate that the performance of our scheme, in terms of total packet number and average transmission latency, can outperform that of the original ones. We expect that SCC will contribute an efficient solution to related studies. PMID:29104219
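
    As a generic illustration of collaborative caching among fog nodes (not the SCC algorithms themselves), the sketch below has a node check its own cache, then its cluster peers, and only then fetch from the origin, caching the result locally; all names are hypothetical.

      class FogNode:
          def __init__(self, name, cluster):
              self.name = name
              self.cache = {}
              self.cluster = cluster
              cluster.append(self)

          def get(self, key, origin):
              if key in self.cache:                        # 1. local hit
                  return self.cache[key], f"local hit on {self.name}"
              for peer in self.cluster:                    # 2. ask cluster peers
                  if peer is not self and key in peer.cache:
                      self.cache[key] = peer.cache[key]    # copy closer to the requester
                      return self.cache[key], f"cluster hit on {peer.name}"
              value = origin[key]                          # 3. fall back to the origin
              self.cache[key] = value
              return value, "origin fetch"

      origin_store = {"sensor/42": "21.5 C"}
      cluster = []
      a, b = FogNode("fog-a", cluster), FogNode("fog-b", cluster)

      print(a.get("sensor/42", origin_store))   # origin fetch
      print(b.get("sensor/42", origin_store))   # cluster hit on fog-a
      print(b.get("sensor/42", origin_store))   # local hit on fog-b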