Ecstasy (MDMA) and memory function: a meta-analytic update.
Laws, Keith R; Kokkalis, Joy
2007-08-01
A meta-analysis was conducted to examine the impact of recreational ecstasy use on short-term memory (STM), long-term memory (LTM), verbal and visual memory. We located 26 studies containing memory data for ecstasy and non-ecstasy users from which effect sizes could be derived. The analyses provided measures of STM and LTM in 610 and 439 ecstasy users and revealed moderate-to-large effect sizes (Cohen's d) of d = -0.63 and d = -0.87, respectively. The difference between STM versus LTM was non-significant. The effect size for verbal memory was large (d = -1.00) and significantly larger than the small effect size for visual memory (d = -0.27). Indeed, our analyses indicate that visual memory may be affected more by concurrent cannabis use. Finally, we found that the total lifetime number of ecstasy tablets consumed did not significantly predict memory performance. Copyright 2007 John Wiley & Sons, Ltd.
Eyewitness Recall: Regulation of Grain Size and the Role of Confidence
ERIC Educational Resources Information Center
Weber, Nathan; Brewer, Neil
2008-01-01
Eyewitness testimony plays a critical role in Western legal systems. Three experiments extended M. Goldsmith, A. Koriat, and A. Weinberg-Eliezer's (2002) framework of the regulation of grain size (precision vs. coarseness) of memory reports to eyewitness memory. In 2 experiments, the grain size of responses had a large impact on memory accuracy.…
Illusory expectations can affect retrieval-monitoring accuracy.
McDonough, Ian M; Gallo, David A
2012-03-01
The present study investigated how expectations, even when illusory, can affect the accuracy of memory decisions. Participants studied words presented in large or small font for subsequent memory tests. Replicating prior work, judgments of learning indicated that participants expected to remember large words better than small words, even though memory for these words was equivalent on a standard test of recognition memory and subjective judgments. Critically, we also included tests that instructed participants to selectively search memory for either large or small words, thereby allowing different memorial expectations to contribute to performance. On these tests we found reduced false recognition when searching memory for large words relative to small words, such that the size illusion paradoxically affected accuracy measures (d' scores) in the absence of actual memory differences. Additional evidence for the role of illusory expectations was that (a) the accuracy effect was obtained only when participants searched memory for the aspect of the stimuli corresponding to illusory expectations (size instead of color) and (b) the accuracy effect was eliminated on a forced-choice test that prevented the influence of memorial expectations. These findings demonstrate the critical role of memorial expectations in the retrieval-monitoring process. 2012 APA, all rights reserved
NASA Astrophysics Data System (ADS)
Shu, Feng; Liu, Xingwen; Li, Min
2018-05-01
Memory is an important factor in the evolution of cooperation in spatial structures. For evolutionary biologists, the problem is often how cooperative acts can emerge in an evolving system. In the snowdrift game, it has been found that memory boosts the cooperation level for a large cost-to-benefit ratio r, while inhibiting cooperation for small r. How to enlarge the range of r over which cooperation is enhanced has therefore recently become a topical issue. This paper proposes a new memory-based approach whose core lies in the following: each agent applies a given rule to compare its own historical payoffs within a certain memory size and takes the maximum as its virtual payoff. To obtain the optimal strategy, each agent randomly selects one of its neighbours and compares their virtual payoffs. Both constant-size and size-varying memory are investigated by means of an asynchronous updating algorithm on regular lattices of different sizes. Simulation results show that this approach effectively enhances the cooperation level in spatial structures and makes high cooperation emerge for both small and large r. Moreover, population size is found to have a significant influence on the effects of cooperation.
Temporal variability and memory in sediment transport in an experimental step-pool channel
NASA Astrophysics Data System (ADS)
Saletti, Matteo; Molnar, Peter; Zimmermann, André; Hassan, Marwan A.; Church, Michael
2015-11-01
We study the temporal dynamics of sediment transport in steep channels using two experiments performed in a steep flume (8% slope) with natural sediment composed of 12 grain sizes. High-resolution (1 s) time series of sediment transport were measured for individual grain-size classes at the outlet of the flume for different combinations of sediment input rates and flow discharges. Our aim in this paper is to quantify (a) the relation between discharge and sediment transport and (b) the nature and strength of memory in grain-size-dependent transport. None of the simple statistical descriptors of sediment transport (mean, extreme values, and quantiles) displays a clear relation with water discharge; instead, large variability between discharge and sediment transport is observed. Instantaneous transport rates have probability density functions with heavy tails. Bed load bursts have a coarser grain-size distribution than that of the entire experiment. We quantify the strength and nature of memory in sediment transport rates by estimating the Hurst exponent and the autocorrelation coefficient of the time series for different grain sizes. Our results show the presence of the Hurst phenomenon in transport rates, indicating long-term memory that is grain-size dependent. The short-term memory in coarse-grain transport increases with temporal aggregation, which reveals the importance of the sampling duration of bed load transport rates in natural streams, especially for large fractions.
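The Hurst-exponent analysis mentioned in this abstract can be illustrated with a small sketch (ours, not the authors' code; the function name and scale choices are illustrative). The aggregated-variance estimator fits how the variance of block means decays with block size m, Var ~ m^(2H-2):

```python
import numpy as np

def hurst_aggvar(x, scales=(1, 2, 4, 8, 16, 32)):
    """Aggregated-variance estimate of the Hurst exponent H.

    For a stationary series, Var(mean over blocks of size m) scales
    like m^(2H - 2); H > 0.5 indicates persistent long-term memory,
    H = 0.5 an uncorrelated series.
    """
    x = np.asarray(x, dtype=float)
    variances = []
    for m in scales:
        n = len(x) // m
        block_means = x[:n * m].reshape(n, m).mean(axis=1)
        variances.append(block_means.var())
    # Slope of log Var versus log m equals 2H - 2
    slope = np.polyfit(np.log(scales), np.log(variances), 1)[0]
    return 1.0 + slope / 2.0

# Sanity check: white noise has no memory, so H should sit near 0.5
rng = np.random.default_rng(42)
H_noise = hurst_aggvar(rng.standard_normal(20000))
```

Applied to a transport-rate series with long memory, the same estimator would return H clearly above 0.5.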
Ensemble coding remains accurate under object and spatial visual working memory load.
Epstein, Michael L; Emmanouil, Tatiana A
2017-10-01
A number of studies have provided evidence that the visual system statistically summarizes large amounts of information that would exceed the limitations of attention and working memory (ensemble coding). However the necessity of working memory resources for ensemble coding has not yet been tested directly. In the current study, we used a dual task design to test the effect of object and spatial visual working memory load on size averaging accuracy. In Experiment 1, we tested participants' accuracy in comparing the mean size of two sets under various levels of object visual working memory load. Although the accuracy of average size judgments depended on the difference in mean size between the two sets, we found no effect of working memory load. In Experiment 2, we tested the same average size judgment while participants were under spatial visual working memory load, again finding no effect of load on averaging accuracy. Overall our results reveal that ensemble coding can proceed unimpeded and highly accurately under both object and spatial visual working memory load, providing further evidence that ensemble coding reflects a basic perceptual process distinct from that of individual object processing.
Paula, Jonas Jardim de; Miranda, Débora Marques; Nicolato, Rodrigo; Moraes, Edgar Nunes de; Bicalho, Maria Aparecida Camargos; Malloy-Diniz, Leandro Fernandes
2013-09-01
Depressive pseudodementia (DPD) is a clinical condition characterized by depressive symptoms accompanied by the cognitive and functional impairments characteristic of dementia. Memory complaints are among the most frequently reported cognitive symptoms in DPD. The present study aims to assess the verbal learning profile of elderly patients with DPD. Ninety-six older adults (34 DPD and 62 controls) were assessed with neuropsychological tests including the Rey auditory-verbal learning test (RAVLT). A multivariate general linear model was used to assess group differences, controlling for demographic factors. Moderate or large effects were found on all RAVLT components except short-term and recognition memory. DPD impairs verbal memory, with a large effect size on free recall and a moderate effect size on learning. Short-term storage and recognition memory are useful in clinical contexts when a differential diagnosis is required.
Visual working memory capacity and the medial temporal lobe.
Jeneson, Annette; Wixted, John T; Hopkins, Ramona O; Squire, Larry R
2012-03-07
Patients with medial temporal lobe (MTL) damage are sometimes impaired at remembering visual information across delays as short as a few seconds. Such impairments could reflect either impaired visual working memory capacity or impaired long-term memory (because attention has been diverted or because working memory capacity has been exceeded). Using a standard change-detection task, we asked whether visual working memory capacity is intact or impaired after MTL damage. Five patients with hippocampal lesions and one patient with large MTL lesions saw an array of 1, 2, 3, 4, or 6 colored squares, followed after 3, 4, or 8 s by a second array where one of the colored squares was cued. The task was to decide whether the cued square had the same color as the corresponding square in the first array or a different color. At the 1 s delay typically used to assess working memory capacity, patients performed as well as controls at all array sizes. At the longer delays, patients performed as well as controls at small array sizes, thought to be within the capacity limit, and worse than controls at large array sizes, thought to exceed the capacity limit. The findings suggest that visual working memory capacity in humans is intact after damage to the MTL structures and that damage to these structures impairs performance only when visual working memory is insufficient to support performance.
NASA Technical Reports Server (NTRS)
Denning, P. J.
1986-01-01
Virtual memory was conceived as a way to automate overlaying of program segments. Modern computers have very large main memories, but need automatic solutions to the relocation and protection problems. Virtual memory serves this need as well and is thus useful in computers of all sizes. The history of the idea is traced, showing how it has become a widespread, little noticed feature of computers today.
Local wavelet transform: a cost-efficient custom processor for space image compression
NASA Astrophysics Data System (ADS)
Masschelein, Bart; Bormans, Jan G.; Lafruit, Gauthier
2002-11-01
Thanks to its intrinsic scalability features, the wavelet transform has become increasingly popular as a decorrelator in image compression applications. Throughput, memory requirements and complexity are important parameters when developing hardware image compression modules. An implementation of the classical, global wavelet transform requires large memory sizes and implies a large latency between the availability of the input image and the production of minimal data entities for entropy coding. Image tiling methods, as proposed by JPEG2000, reduce the memory sizes and the latency, but inevitably introduce image artefacts. The Local Wavelet Transform (LWT), presented in this paper, is a low-complexity wavelet transform architecture using block-based processing that produces the same transformed images as the global wavelet transform. The architecture minimizes the processing latency with a limited amount of memory. Moreover, as the LWT is an instruction-based custom processor, it can be programmed for specific tasks, such as push-broom processing of infinite-length satellite images. These features make the LWT appropriate for space image compression, where high throughput, low memory sizes, low complexity, low power and push-broom processing are important requirements.
Research on memory management in embedded systems
NASA Astrophysics Data System (ADS)
Huang, Xian-ying; Yang, Wu
2005-12-01
Memory is a scarce resource in embedded systems due to cost and size constraints, so embedded applications cannot use memory as freely as desktop applications; yet data and code must still be stored in memory to run. The purpose of this paper is to save memory when developing embedded applications and to guarantee operation under limited-memory conditions. Embedded systems often have small memories and are required to run for a long time, so a further goal of this study is to construct an allocator that allocates memory effectively, withstands long-running operation, and reduces memory fragmentation and exhaustion. Fragmentation and exhaustion depend on the memory allocation algorithm; static allocation produces no fragmentation. This paper seeks an effective dynamic allocation algorithm that reduces memory fragmentation. Data, which occupies a large share of memory, is the critical part that keeps an application running correctly, and the amount of data that fits in a given memory depends on the chosen data structure. Techniques for designing application data in mobile phones are also explained and discussed.
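The fragmentation-free static strategy this abstract contrasts with general dynamic allocation can be sketched in a few lines (illustrative code of ours, not from the paper): a fixed-block pool hands out equally sized blocks from a preallocated arena, with O(1) alloc/free and no external fragmentation.

```python
class PoolAllocator:
    """Fixed-size block pool: O(1) alloc/free, zero external
    fragmentation, bounded memory -- the kind of static scheme
    embedded systems favor over a general-purpose malloc()."""

    def __init__(self, block_size, n_blocks):
        self.block_size = block_size
        self.memory = bytearray(block_size * n_blocks)  # preallocated arena
        self.free_list = list(range(n_blocks))          # indices of free blocks

    def alloc(self):
        if not self.free_list:
            raise MemoryError("pool exhausted")
        i = self.free_list.pop()
        return i  # handle; the block's bytes live at memory[i*bs:(i+1)*bs]

    def free(self, handle):
        self.free_list.append(handle)

pool = PoolAllocator(block_size=32, n_blocks=4)
h = [pool.alloc() for _ in range(4)]   # pool is now full
pool.free(h[0])
assert pool.alloc() == h[0]            # freed block is reused immediately
```

A real embedded allocator would manage raw addresses rather than Python handles, but the free-list mechanics are the same.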
Theory for long memory in supply and demand
NASA Astrophysics Data System (ADS)
Lillo, Fabrizio; Mike, Szabolcs; Farmer, J. Doyne
2005-06-01
Recent empirical studies have demonstrated long-memory in the signs of orders to buy or sell in financial markets [J.-P. Bouchaud, Y. Gefen, M. Potters, and M. Wyart, Quant. Finance 4, 176 (2004); F. Lillo and J. D. Farmer Dyn. Syst. Appl. 8, 3 (2004)]. We show how this can be caused by delays in market clearing. Under the common practice of order splitting, large orders are broken up into pieces and executed incrementally. If the size of such large orders is power-law distributed, this gives rise to power-law decaying autocorrelations in the signs of executed orders. More specifically, we show that if the cumulative distribution of large orders of volume v is proportional to v-α and the size of executed orders is constant, the autocorrelation of order signs as a function of the lag τ is asymptotically proportional to τ-(α-1) . This is a long-memory process when α<2 . With a few caveats, this gives a good match to the data. A version of the model also shows long-memory fluctuations in order execution rates, which may be relevant for explaining the long memory of price diffusion rates.
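The mechanism in this abstract can be illustrated with a toy simulation (our sketch, with illustrative parameters): parent orders whose sizes have a power-law tail with exponent alpha < 2 are executed as runs of unit trades of a single sign, producing sign autocorrelation that decays slowly, asymptotically like tau^-(alpha-1).

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 1.5          # tail exponent of parent-order sizes (alpha < 2: long memory)
n_orders = 2000

# Parent-order sizes with a power-law tail, each executed as unit trades
sizes = 1 + np.floor(rng.pareto(alpha, n_orders)).astype(int)
signs = rng.choice([-1, 1], size=n_orders)   # buy/sell sign of each parent order
series = np.repeat(signs, sizes)             # sign sequence of executed trades

def sign_autocorr(x, lag):
    """Sample autocorrelation of the order-sign series at a given lag."""
    x = x - x.mean()
    return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x))

ac1 = sign_autocorr(series, 1)
ac100 = sign_autocorr(series, 100)
```

With alpha = 1.5 the theory predicts autocorrelation decaying like tau^-0.5, so correlations remain visible even at lag 100, far beyond what an uncorrelated sign sequence would show.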
Hi-Corrector: a fast, scalable and memory-efficient package for normalizing large-scale Hi-C data.
Li, Wenyuan; Gong, Ke; Li, Qingjiao; Alber, Frank; Zhou, Xianghong Jasmine
2015-03-15
Genome-wide proximity ligation assays, e.g. Hi-C and its variant TCC, have recently become important tools to study spatial genome organization. Removing biases from chromatin contact matrices generated by such techniques is a critical preprocessing step of subsequent analyses. The continuing decline of sequencing costs has led to an ever-improving resolution of Hi-C data, resulting in very large matrices of chromatin contacts. Such large matrices, however, pose a great challenge to the memory usage and speed of normalization. Therefore, there is an urgent need for fast and memory-efficient methods for normalization of Hi-C data. We developed Hi-Corrector, an easy-to-use, open-source implementation of the Hi-C data normalization algorithm. Its salient features are (i) scalability - the software is capable of normalizing Hi-C data of any size in reasonable times; (ii) memory efficiency - the sequential version can run on any single computer with very limited memory, no matter how little; (iii) fast speed - the parallel version can run very fast on multiple computing nodes with limited local memory. The sequential version is implemented in ANSI C and can be easily compiled on any system; the parallel version is implemented in ANSI C with the MPI library (a standardized and portable parallel environment designed for solving large-scale scientific problems). The package is freely available at http://zhoulab.usc.edu/Hi-Corrector/. © The Author 2014. Published by Oxford University Press.
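The normalization Hi-Corrector implements is a matrix-balancing (iterative correction) scheme; the following is an in-memory sketch under our own assumptions (dense toy matrix, fixed iteration count), not the package's out-of-core, parallel code:

```python
import numpy as np

def iterative_correction(m, n_iter=50):
    """Minimal dense sketch of matrix-balancing normalization:
    repeatedly divide each row and column by its relative coverage
    so that all row sums converge toward a common value."""
    m = m.astype(float).copy()
    bias = np.ones(m.shape[0])
    for _ in range(n_iter):
        s = m.sum(axis=1)
        s /= s.mean()            # relative coverage of each bin
        s[s == 0] = 1.0          # leave empty bins untouched
        m /= s[:, None]          # correct rows...
        m /= s[None, :]          # ...and columns (symmetric contact map)
        bias *= s
    return m, bias

# Toy symmetric "contact matrix" with Poisson counts
rng = np.random.default_rng(1)
raw = rng.poisson(10, (40, 40)).astype(float)
raw = (raw + raw.T) / 2
norm, bias = iterative_correction(raw)
```

Hi-Corrector's contribution is doing this at scale: the matrix is partitioned so that each balancing pass touches only a slice that fits in local memory, which is why it can normalize matrices far larger than RAM.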
Temporal pattern and memory in sediment transport in an experimental step-pool channel
NASA Astrophysics Data System (ADS)
Saletti, Matteo; Molnar, Peter; Zimmermann, André; Hassan, Marwan A.; Church, Michael; Burlando, Paolo
2015-04-01
In this work we study the complex dynamics of sediment transport and bed morphology in steep streams, using a dataset of experiments performed in a steep flume with natural sediment. High-resolution (1 sec) time series of sediment transport were measured for individual size classes at the outlet of the flume for different combinations of sediment input rates, discharges, and flume slopes. The data show that the relation between instantaneous discharge and sediment transport exhibits large variability on different levels. After dividing the time series into segments of constant water discharge, we quantify the statistical properties of transport rates by fitting the data with a Generalized Extreme Value distribution, whose 3 parameters are related to the average sediment flux. We analyze separately extreme events of transport rate in terms of their fractional composition; if only events of high magnitude are considered, coarse grains become the predominant component of the total sediment yield. We quantify the memory in grain size dependent sediment transport with variance scaling and autocorrelation analyses; more specifically, we study how the variance changes with different aggregation scales and how the autocorrelation coefficient changes with different time lags. Our results show that there is a tendency to an infinite memory regime in transport rate signals, which is limited by the intermittency of the largest fractions. Moreover, the structure of memory is both grain size-dependent and magnitude-dependent: temporal autocorrelation is stronger for small grain size fractions and when the average sediment transport rate is large. The short-term memory in coarse grain transport increases with temporal aggregation and this reveals the importance of the sampling frequency of bedload transport rates in natural streams, especially for large fractions.
SIproc: an open-source biomedical data processing platform for large hyperspectral images.
Berisha, Sebastian; Chang, Shengyuan; Saki, Sam; Daeinejad, Davar; He, Ziqi; Mankar, Rupali; Mayerich, David
2017-04-10
There has recently been significant interest within the vibrational spectroscopy community to apply quantitative spectroscopic imaging techniques to histology and clinical diagnosis. However, many of the proposed methods require collecting spectroscopic images that have a similar region size and resolution to the corresponding histological images. Since spectroscopic images contain significantly more spectral samples than traditional histology, the resulting data sets can approach hundreds of gigabytes to terabytes in size. This makes them difficult to store and process, and the tools available to researchers for handling large spectroscopic data sets are limited. Fundamental mathematical tools, such as MATLAB, Octave, and SciPy, are extremely powerful but require that the data be stored in fast memory. This memory limitation becomes impractical for even modestly sized histological images, which can be hundreds of gigabytes in size. In this paper, we propose an open-source toolkit designed to perform out-of-core processing of hyperspectral images. By taking advantage of graphical processing unit (GPU) computing combined with adaptive data streaming, our software alleviates common workstation memory limitations while achieving better performance than existing applications.
ASIC-based architecture for the real-time computation of 2D convolution with large kernel size
NASA Astrophysics Data System (ADS)
Shao, Rui; Zhong, Sheng; Yan, Luxin
2015-12-01
Bidimensional convolution is a low-level processing algorithm of interest in many areas, but its high computational cost constrains kernel size, especially in real-time embedded systems. This paper presents a hardware architecture for the ASIC-based implementation of 2-D convolution with medium-to-large kernels. To improve the efficiency of on-chip storage resources and to reduce off-chip bandwidth, a data-cache reuse scheme is constructed: multi-block SPRAM cross-caches the image, and an on-chip ping-pong operation takes full advantage of data reuse in the convolution calculation; a new ASIC data-scheduling scheme and overall architecture are designed around it. Experimental results show that the structure achieves real-time convolution with kernels up to 40 × 32, improves the utilization of on-chip memory bandwidth and on-chip memory resources, maximizes output data throughput, and reduces the need for off-chip memory bandwidth.
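For reference, the operation the ASIC accelerates is plain 2-D convolution; a direct software sketch (ours, not the paper's) makes the O(H·W·kh·kw) cost that motivates the hardware explicit:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Direct 2-D convolution over the 'valid' region.

    Cost is O(H * W * kh * kw), which is what makes large kernels
    (e.g. 40 x 32) expensive in real time without hardware support."""
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    flipped = kernel[::-1, ::-1]  # true convolution flips the kernel
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * flipped)
    return out
```

At 40 × 32 taps, every output pixel costs 1280 multiply-accumulates, which is why on-chip caching and reuse of image rows dominates the architecture.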
Ono, Miyuki; Devilly, Grant J; Shum, David H K
2016-03-01
A number of studies suggest that a history of trauma, depression, and posttraumatic stress disorder (PTSD) are associated with autobiographical memory deficits, notably overgeneral memory (OGM). However, whether there are any group differences in the nature and magnitude of OGM has not been evaluated. Thus, a meta-analysis was conducted to quantify group differences in OGM. The effect sizes were pooled from studies examining the effect on OGM of a history of trauma (e.g., childhood sexual abuse) and the presence of PTSD or current depression (e.g., major depressive disorder). Using multiple search engines, 13 trauma studies and 12 depression studies were included in this review. A depression effect was observed on OGM with a large effect size, and was most evident in the lack of specific memories, especially to positive cues. An effect of trauma history on OGM was observed with a medium effect size, and this was most evident in the presence of overgeneral responses to negative cues. The results also suggested an amplified memory deficit in the presence of PTSD. That is, the effect sizes of OGM among individuals with PTSD were very large and relatively equal across different types of OGM. Future studies that directly compare the differences of OGM among 4 samples (i.e., controls, current depression without trauma history, trauma history without depression, and trauma history and depression) would be warranted to verify the current findings. (c) 2016 APA, all rights reserved.
Kinetic Inductance Memory Cell and Architecture for Superconducting Computers
NASA Astrophysics Data System (ADS)
Chen, George J.
Josephson memory devices typically use a superconducting loop containing one or more Josephson junctions to store information. The magnetic inductance of the loop in conjunction with the Josephson junctions provides multiple states to store data. This thesis shows that replacing the magnetic inductor in a memory cell with a kinetic inductor can lead to a smaller cell size. However, magnetic control of the cells is lost. Thus, a current-injection based architecture for a memory array has been designed to work around this problem. The isolation between memory cells that magnetic control provides is provided through resistors in this new architecture. However, these resistors allow leakage current to flow which ultimately limits the size of the array due to power considerations. A kinetic inductance memory array will be limited to 4K bits with a read access time of 320 ps for a 1 um linewidth technology. If a power decoder could be developed, the memory architecture could serve as the blueprint for a fast (<1 ns), large scale (>1 Mbit) superconducting memory array.
Kim, Tae-Wook; Choi, Hyejung; Oh, Seung-Hwan; Jo, Minseok; Wang, Gunuk; Cho, Byungjin; Kim, Dong-Yu; Hwang, Hyunsang; Lee, Takhee
2009-01-14
The resistive switching characteristics of a polyfluorene-derivative polymer in a sub-micron-scale via-hole device structure were investigated. The scalable via-hole sub-microstructure was fabricated using an e-beam lithographic technique. The polymer non-volatile memory devices varied in size from 40 × 40 μm^2 to 200 × 200 nm^2. From the scaling of junction size, the memory mechanism can be attributed to space-charge-limited current with filamentary conduction. Sub-micron-scale polymer memory devices showed excellent resistive switching behaviours, such as a large ON/OFF ratio (I(ON)/I(OFF) ≈ 10^4), excellent device-to-device switching uniformity, good sweep endurance, and good retention times (more than 10,000 s). The successful operation of sub-micron-scale memory devices based on our polyfluorene-derivative polymer shows promise for fabricating high-density polymer memory devices.
NASA Astrophysics Data System (ADS)
Hyun, Seung; Kwon, Owoong; Lee, Bom-Yi; Seol, Daehee; Park, Beomjin; Lee, Jae Yong; Lee, Ju Hyun; Kim, Yunseok; Kim, Jin Kon
2016-01-01
Multiple data writing-based multi-level non-volatile memory has gained strong attention for next-generation memory devices to quickly accommodate an extremely large number of data bits because it is capable of storing multiple data bits in a single memory cell at once. However, all previously reported devices have failed to store a large number of data bits due to the macroscale cell size and have not allowed fast access to the stored data due to slow single data writing. Here, we introduce a novel three-dimensional multi-floor cascading polymeric ferroelectric nanostructure, successfully operating as an individual cell. In one cell, each floor has its own piezoresponse and the piezoresponse of one floor can be modulated by the bias voltage applied to the other floor, which means simultaneously written data bits in both floors can be identified. This could achieve multi-level memory through a multiple data writing process.
Electronic shift register memory based on molecular electron-transfer reactions
NASA Technical Reports Server (NTRS)
Hopfield, J. J.; Onuchic, Jose Nelson; Beratan, David N.
1989-01-01
The design of a shift register memory at the molecular level is described in detail. The memory elements are based on a chain of electron-transfer molecules incorporated on a very large scale integrated (VLSI) substrate, and the information is shifted by photoinduced electron-transfer reactions. The design requirements for such a system are discussed, and several realistic strategies for synthesizing these systems are presented. The immediate advantage of such a hybrid molecular/VLSI device would arise from the possible information storage density. The prospect of considerable savings of energy per bit processed also exists. This molecular shift register memory element design solves the conceptual problems associated with integrating molecular size components with larger (micron) size features on a chip.
NASA Astrophysics Data System (ADS)
Yang, Chen; Liu, LeiBo; Yin, ShouYi; Wei, ShaoJun
2014-12-01
The computational capability of a coarse-grained reconfigurable array (CGRA) can be significantly restrained by data and context memory bandwidth bottlenecks. Traditionally, two methods have been used to resolve this problem. One method loads the context into the CGRA at run time; it occupies very little on-chip memory but induces very large latency, which leads to low computational efficiency. The other method adopts a multi-context structure: the context is loaded into on-chip context memory at the boot phase, and broadcasting the pointer of a set of contexts changes the hardware configuration on a cycle-by-cycle basis. The size of the context memory induces a large area overhead in multi-context structures, which severely restricts application complexity. This paper proposes a Predictable Context Cache (PCC) architecture to address these context issues by buffering the context inside the CGRA. In this architecture, context is dynamically transferred into the CGRA. Utilizing a PCC significantly reduces the on-chip context memory, and the complexity of the applications running on the CGRA is no longer restricted by its size. For the data-bandwidth issue, data preloading is the most frequently used approach to hide input-data latency and speed up data transmission: rather than fundamentally reducing the amount of input data, data transfer and computation are processed in parallel. However, data preloading cannot work efficiently because data transmission becomes the critical path as the reconfigurable array scales up. This paper therefore also presents a Hierarchical Data Memory (HDM) architecture as a solution to this efficiency problem, providing high internal bandwidth to buffer both reused input data and intermediate data. The HDM architecture relieves the external memory of the data transfer burden, so performance is significantly improved. Using PCC and HDM, experiments running mainstream video decoding programs achieved performance improvements of 13.57%-19.48% with a reasonable memory size; 1080p@35.7fps H.264 high-profile video decoding can be achieved on the PCC and HDM architecture at a 200 MHz working frequency. Furthermore, the size of the on-chip context memory no longer restricts complex applications, which execute efficiently on the PCC and HDM architecture.
Spatial short-term memory is impaired in dependent betel quid chewers.
Chiu, Meng-Chun; Shen, Bin; Li, Shuo-Heng; Ho, Ming-Chou
2016-08-01
Betel quid is regarded as a human carcinogen by the World Health Organization. It remains unknown whether chewing betel quid has a chronic effect on healthy chewers' memory. The present study investigates whether chewing betel quid can affect short-term memory (STM). Three groups of participants (24 dependent chewers, 24 non-dependent chewers, and 24 non-chewers) performed the matrix span task, the object span task, and the digit span task, which assess spatial STM, visual STM, and verbal STM, respectively. In addition, each span task included three set sizes (small, medium, and large). For the matrix span task, the dependent chewers performed worse than the non-dependent chewers and the non-chewers at medium and large set sizes. For the object span task and the digit span task, there were no differences between groups. In each group, recognition performance worsened with increasing set size, showing successful manipulation of memory load. The current study provides the first evidence that dependent betel quid chewing selectively impairs spatial STM rather than visual STM or verbal STM. Theoretical and practical implications of this result are discussed.
Distributed shared memory for roaming large volumes.
Castanié, Laurent; Mion, Christophe; Cavin, Xavier; Lévy, Bruno
2006-01-01
We present a cluster-based volume rendering system for roaming very large volumes. This system allows the user to move a gigabyte-sized probe inside a total volume of several tens or hundreds of gigabytes in real time. While the size of the probe is limited by the total amount of texture memory on the cluster, the size of the total data set has no theoretical limit. The cluster is used as a distributed graphics processing unit that aggregates both graphics power and graphics memory. A hardware-accelerated volume renderer runs in parallel on the cluster nodes, and the final image compositing is implemented using a pipelined sort-last rendering algorithm. Meanwhile, volume bricking and volume paging allow efficient data caching. On each rendering node, a distributed hierarchical cache system implements a global software-based distributed shared memory on the cluster. In case of a cache miss, this system first checks page residency on the other cluster nodes instead of directly accessing local disks. Using two Gigabit Ethernet network interfaces per node, we accelerate data fetching by a factor of 4 compared with directly accessing local disks. The system also implements asynchronous disk access and texture loading, which makes it possible to overlap data loading, volume slicing and rendering for optimal volume roaming.
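The remote-first miss policy described above can be sketched as follows (function and variable names are hypothetical, and the real system works on volume bricks over Gigabit Ethernet, not Python dictionaries): on a local cache miss, peers are queried before paying the much larger cost of a disk read.

```python
def fetch_brick(brick_id, local_cache, peer_caches, read_from_disk):
    """Sketch of a remote-first miss policy for a software distributed
    shared memory: on a local miss, check whether any peer node already
    holds the brick before falling back to a local disk read."""
    if brick_id in local_cache:                 # local hit: fastest path
        return local_cache[brick_id], "local"
    for peer in peer_caches:                    # network fetch beats disk
        if brick_id in peer:
            local_cache[brick_id] = peer[brick_id]
            return local_cache[brick_id], "peer"
    data = read_from_disk(brick_id)             # last resort: local disk
    local_cache[brick_id] = data
    return data, "disk"

# One peer already holds brick 7, so the disk callback is never invoked.
local = {}
peers = [{7: b"brick-7"}, {}]
data, source = fetch_brick(7, local, peers, lambda i: b"disk")
```

After the first fetch the brick is resident locally, so subsequent accesses resolve without any network or disk traffic, mirroring the caching hierarchy the abstract describes.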
Dependence of Grain Size on the Performance of a Polysilicon Channel TFT for 3D NAND Flash Memory.
Kim, Seung-Yoon; Park, Jong Kyung; Hwang, Wan Sik; Lee, Seung-Jun; Lee, Ki-Hong; Pyi, Seung Ho; Cho, Byung Jin
2016-05-01
We investigated how the grain size of a polycrystalline silicon (poly-Si) channel affects the performance of a TFT for application to 3D NAND Flash memory devices. The device performance and memory characteristics were found to be strongly affected by the grain size of the poly-Si channel. Higher on-state current, faster program speed, and poorer endurance/reliability are observed when the poly-Si grain size is large. These effects are mainly attributed to the different local electric field induced by an oxide valley at the interface between the poly-Si channel and the gate oxide. In addition, the trap density at the gate oxide interface was successfully measured using a charge pumping method, by separating the gate oxide interface traps from the traps at the grain boundaries in the poly-Si channel. The poly-Si channel with larger grain size has lower interface trap density.
The effect of path length and display size on memory for spatial information.
Guérard, Katherine; Tremblay, Sébastien
2012-01-01
In serial memory for spatial information, some studies showed that recall performance suffers when the distance between successive locations increases relative to the size of the display in which they are presented (the path length effect; e.g., Parmentier et al., 2005), but not when distance is increased by enlarging the size of the display (e.g., Smyth & Scholey, 1994). In the present study, we examined the effect of varying the absolute and relative distance between to-be-remembered items on memory for spatial information. We manipulated path length using small (15″) and large (64″) screens within the same design. In two experiments, we showed that distance was disruptive mainly when it was varied relative to a fixed reference frame, though increasing the size of the display also had a small deleterious effect on recall. The insertion of a retention interval did not influence these effects, suggesting that rehearsal plays a minor role in mediating the effects of distance on serial spatial memory. We discuss the potential role of perceptual organization in light of the pattern of results.
The reliability and stability of visual working memory capacity.
Xu, Z; Adam, K C S; Fang, X; Vogel, E K
2018-04-01
Because of the central role of working memory capacity in cognition, many studies have used short measures of working memory capacity to examine its relationship to other domains. Here, we measured the reliability and stability of visual working memory capacity, measured using a single-probe change detection task. In Experiment 1, the participants (N = 135) completed a large number of trials of a change detection task (540 in total, 180 each of set sizes 4, 6, and 8). With large numbers of both trials and participants, reliability estimates were high (α > .9). We then used an iterative down-sampling procedure to create a look-up table for expected reliability in experiments with small sample sizes. In Experiment 2, the participants (N = 79) completed 31 sessions of single-probe change detection. The first 30 sessions took place over 30 consecutive days, and the last session took place 30 days later. This unprecedented number of sessions allowed us to examine the effects of practice on stability and internal reliability. Even after much practice, individual differences were stable over time (average between-session r = .76).
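The internal-reliability estimate reported above (α > .9) is Cronbach's alpha over the trial-level scores. A minimal sketch of the computation, on hypothetical toy data rather than the study's change-detection scores:

```python
def cronbach_alpha(scores):
    """Cronbach's alpha for a subjects x items score matrix (list of rows):
    alpha = k/(k-1) * (1 - sum(item variances) / variance of totals)."""
    k = len(scores[0])                          # number of items/conditions
    n = len(scores)                             # number of subjects

    def var(xs):                                # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = [var([row[j] for row in scores]) for j in range(k)]
    total_var = var([sum(row) for row in scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Toy data: four subjects scored on three highly consistent "items"
# (e.g., capacity estimates at set sizes 4, 6 and 8).
data = [
    [2, 3, 2],
    [4, 5, 4],
    [6, 6, 5],
    [8, 9, 8],
]
alpha = cronbach_alpha(data)
```

Because the three columns rank the subjects almost identically, alpha comes out close to 1 here, the kind of value the study reports when both trial and sample counts are large.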
Perturbation schedule does not alter retention of a locomotor adaptation across days.
Hussain, Sara J; Morton, Susanne M
2014-06-15
Motor adaptation in response to gradual vs. abrupt perturbation schedules may involve different neural mechanisms, potentially leading to different levels of motor memory. However, no study has investigated whether perturbation schedules alter memory of a locomotor adaptation across days. We measured adaptation and retention (memory) of altered interlimb symmetry during walking in two groups of participants over 2 days. On day 1, participants adapted to either a single, large perturbation (abrupt schedule) or a series of small perturbations that increased in size over time (gradual schedule). Retention was examined on day 2. On day 1, initial swing time and foot placement symmetry error sizes differed between groups but overall adaptation magnitudes were similar. On day 2, participants in both groups showed similar retention, readaptation, and aftereffect sizes, although there were some trends for improved memory in the abrupt group. These results conflict with previous data but are consistent with newer studies reporting no behavioral differences following adaptation using abrupt vs. gradual schedules. Although memory levels were very similar between groups, we cannot rule out the possibility that the neural mechanisms underlying this memory storage differ. Overall, it appears that adaptation of locomotor patterns via abrupt and gradual perturbation schedules produces similar expression of locomotor memories across days. Copyright © 2014 the American Physiological Society.
Quantum Computation of Fluid Dynamics
1998-02-16
state of the quantum computer's "memory". With N qubits, the quantum state |Ψ⟩ resides in an exponentially large Hilbert space with 2^N dimensions. ... size of the Hilbert space in which the entanglement occurs. And to make matters worse, even if a quantum computer was constructed with a large number of ... N is the number of qubits; 2^N is the size of the full Hilbert space; 2^B is the size of the on-site submanifold; B is the size of the
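The 2^N scaling quoted in the fragment can be made concrete: a classical simulator must store one complex amplitude per basis state of the N-qubit Hilbert space, so memory grows exponentially with N (the helper below is illustrative, not from the report):

```python
# One complex128 amplitude occupies 16 bytes, so a dense N-qubit state
# vector needs 2^N * 16 bytes of classical memory.
def state_vector_bytes(n_qubits, bytes_per_amplitude=16):
    """Memory needed to store a dense N-qubit state vector classically."""
    return (2 ** n_qubits) * bytes_per_amplitude

# 10 qubits fit in 16 KiB, 30 qubits already need 16 GiB,
# and 50 qubits exceed any machine's RAM by orders of magnitude.
for n in (10, 30, 50):
    print(n, state_vector_bytes(n))
```

This is exactly why the fragment calls the Hilbert space "exponentially large": each added qubit doubles the classical memory requirement.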
Application-Controlled Demand Paging for Out-of-Core Visualization
NASA Technical Reports Server (NTRS)
Cox, Michael; Ellsworth, David; Kutler, Paul (Technical Monitor)
1997-01-01
In the area of scientific visualization, input data sets are often very large. In visualization of Computational Fluid Dynamics (CFD) in particular, input data sets today can surpass 100 Gbytes, and are expected to scale with the ability of supercomputers to generate them. Some visualization tools already partition large data sets into segments, and load appropriate segments as they are needed. However, this does not remove the problem for two reasons: 1) there are data sets for which even the individual segments are too large for the largest graphics workstations, 2) many practitioners do not have access to workstations with the memory capacity required to load even a segment, especially since the state-of-the-art visualization tools tend to be developed by researchers with much more powerful machines. When the size of the data that must be accessed is larger than the size of memory, some form of virtual memory is simply required. This may be by segmentation, paging, or by paged segments. In this paper we demonstrate that complete reliance on operating system virtual memory for out-of-core visualization leads to poor performance. We then describe a paged segment system that we have implemented, and explore the principles of memory management that can be employed by the application for out-of-core visualization. We show that application control over some of these can significantly improve performance. We show that sparse traversal can be exploited by loading only those data actually required. We show also that application control over data loading can be exploited by 1) loading data from alternative storage format (in particular 3-dimensional data stored in sub-cubes), 2) controlling the page size. Both of these techniques effectively reduce the total memory required by visualization at run-time. We also describe experiments we have done on remote out-of-core visualization (when pages are read by demand from remote disk) whose results are promising.
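The paper's key idea, that the application, not the OS, should decide what gets paged in, can be sketched as a demand loader for sub-cube storage (class and callback names are hypothetical; the real system manages disk pages and remote reads). A sparse traversal then loads only the sub-cubes it actually touches:

```python
class SubCubeLoader:
    """Sketch of application-controlled demand paging: a 3-D volume is
    stored as sub-cubes, and a sub-cube is read in only when a traversal
    actually samples inside it, so sparse traversals load a small
    fraction of the total data."""

    def __init__(self, read_subcube, cube_size):
        self.read_subcube = read_subcube    # callback: (i, j, k) -> data
        self.cube_size = cube_size          # application-chosen "page" size
        self.resident = {}                  # sub-cubes currently in memory

    def sample(self, x, y, z):
        key = (x // self.cube_size, y // self.cube_size, z // self.cube_size)
        if key not in self.resident:        # demand-load the touched sub-cube
            self.resident[key] = self.read_subcube(*key)
        return self.resident[key]

# Record which sub-cubes get loaded during a sparse line traversal.
loads = []
loader = SubCubeLoader(lambda i, j, k: loads.append((i, j, k)) or (i, j, k),
                       cube_size=16)
for x in range(0, 32, 4):       # one ray along the x axis
    loader.sample(x, 0, 0)
```

Here eight samples trigger only two loads, illustrating how sub-cube storage plus application control reduces run-time memory compared with blindly mapping the whole data set.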
Transparent Meta-Analysis: Does Aging Spare Prospective Memory with Focal vs. Non-Focal Cues?
Uttl, Bob
2011-01-01
Background Prospective memory (ProM) is the ability to become aware of a previously formed plan at the right time and place. For over twenty years, researchers have been debating whether prospective memory declines with aging or is spared by aging and, most recently, whether aging spares prospective memory with focal vs. non-focal cues. Two recent meta-analyses examining these claims did not include all relevant studies, ignored prevalent ceiling effects and age confounds, and did not distinguish between prospective memory subdomains (e.g., ProM proper, vigilance, habitual ProM) (see Uttl, 2008, PLoS ONE). The present meta-analysis focuses on the following questions: Does prospective memory decline with aging? Does prospective memory with focal vs. non-focal cues decline with aging? Does the size of age-related declines with focal vs. non-focal cues vary across ProM subdomains? And are age-related declines in ProM smaller than age-related declines in retrospective memory? Methods and Findings A meta-analysis of event-cued ProM using data visualization and modeling, robust count methods, and conventional meta-analysis techniques revealed that, first, the age-related declines in ProM with both focal and non-focal cues are large. Second, age-related declines in ProM with focal cues are larger in ProM proper and smaller in vigilance. Third, age-related declines in ProM proper with focal cues are as large as age-related declines in recall measures of retrospective memory. Conclusions The results are consistent with Craik's (1983) proposal that age-related declines on ProM tasks are generally large, support the distinction between ProM proper vs. vigilance, and directly contradict widespread claims that ProM, with or without focal cues, is spared by aging. PMID:21304905
Bigger is better and worse: on the intricate relationship between hippocampal size and memory.
Molnár, Katalin; Kéri, Szabolcs
2014-04-01
The structure-function relationship between the hippocampal region and memory is a debated topic in the literature. It has been suggested that larger hippocampi are associated with less effective memory performance in healthy young adults because of a partial synaptic pruning. Here, we tested this hypothesis in individuals with Fragile X Syndrome (FXS) with known abnormal pruning and IQ- and age-matched individuals with hypoxic brain injury, preterm birth, and obstetric complications. Results revealed larger normalized hippocampal volume in FXS compared with neurotypical controls, whereas individuals with hypoxic injury had smaller hippocampi. In neurotypical controls and individuals with hypoxic injury, better general memory, as indexed by the Wechsler Memory Scale-Revised, was associated with larger hippocampus. In contrast, in FXS we observed the opposite relationship: larger hippocampus was associated with worse general memory. Caudate volume did not correlate with memory in either group. These results suggest that incomplete pruning in young healthy adults may not contribute to less efficient memory capacity, and hippocampal size is positively associated with memory performance. However, abnormally large and poorly pruned hippocampus may indeed be less effective in FXS. Copyright © 2014 Elsevier Ltd. All rights reserved.
Episodic and working memory function in Primary Progressive Aphasia: A meta-analysis.
Eikelboom, Willem S; Janssen, Nikki; Jiskoot, Lize C; van den Berg, Esther; Roelofs, Ardi; Kessels, Roy P C
2018-06-18
The distinction between Primary Progressive Aphasia (PPA) variants remains challenging for clinicians, especially for the non-fluent (nfv-PPA) and the logopenic variants (lv-PPA). Previous research suggests that memory tests might aid this differentiation. This meta-analysis compares memory function among PPA variants. Effect sizes were extracted from 41 studies (N = 849). Random-effects models were used to compare performance on episodic and working memory tests among PPA patients and healthy controls, and between the PPA variants. Memory deficits were frequently observed in PPA compared to controls, with large effect sizes for lv-PPA (Hedges' g = -2.04 [-2.58 to -1.49]), nfv-PPA (Hedges' g = -1.34 [-1.69 to -1.00]), and the semantic variant (sv-PPA; Hedges' g = -1.23 [-1.50 to -0.97]). Sv-PPA showed primarily verbal memory deficits, whereas lv-PPA showed worse performance than nfv-PPA on both verbal and non-verbal memory tests. Memory deficits were more pronounced in lv-PPA compared to nfv-PPA. This suggests that memory tests may be helpful to distinguish between these PPA variants. Copyright © 2018. Published by Elsevier Ltd.
Freas, C A; Bingman, K; Ladage, L D; Pravosudov, V V
2013-01-01
Variation in environmental conditions associated with differential selection on spatial memory has been hypothesized to result in evolutionary changes in the morphology of the hippocampus, a brain region involved in spatial memory. At the same time, it is well known that the morphology of the hippocampus might also be directly affected by environmental conditions. Understanding the role of environment-based plasticity is therefore critical when investigating potential adaptive evolutionary changes in the hippocampus associated with environmental variation. We previously demonstrated large elevation-related variation in hippocampus morphology in mountain chickadees over an extremely small spatial scale. We hypothesized that this variation is related to differential selection pressures associated with differences in winter climate severity along an elevation gradient, which make different demands on spatial memory used for food cache retrieval. Here, we tested whether such variation is experience based, generated by potential differences in the environment, by comparing the hippocampus morphology of chickadees from different elevations maintained in a uniform captive environment in a laboratory with those sampled directly from the wild. In addition, we compared hippocampal neuron soma size in chickadees sampled directly from the wild with those maintained in laboratory conditions with restricted and unrestricted spatial memory use via manipulation of food-caching experiences to test whether memory use can affect neuron soma size. There were significant elevation-related differences in hippocampus volume and the total number of hippocampal neurons, but not in neuron soma size, in captive birds. Captive environmental conditions were associated with a large reduction in hippocampus volume and neuron soma size, but not in the total number of neurons or in neuron soma size in other telencephalic regions. 
Restriction of memory use while in laboratory conditions produced no significant effects on hippocampal neuron soma size. Overall our results showed that captivity has a strong effect on hippocampus volume, which could be due, at least partly, to a reduction in neuron soma size specifically in the hippocampus, but it did not override elevation-related differences in hippocampus volume or in the total number of hippocampal neurons. These data are consistent with the idea of the adaptive nature of the elevation-related differences associated with selection on spatial memory, while at the same time demonstrating additional environment-based plasticity in hippocampus volume, but not in neuron numbers. Our results, however, cannot rule out that the differences between elevations might still be driven by some developmental or early posthatching conditions/experiences. © 2013 S. Karger AG, Basel.
Musicians have better memory than nonmusicians: A meta-analysis.
Talamini, Francesca; Altoè, Gianmarco; Carretti, Barbara; Grassi, Massimo
2017-01-01
Several studies have found that musicians perform better than nonmusicians in memory tasks, but this is not always the case, and the strength of this apparent advantage is unknown. Here, we conducted a meta-analysis with the aim of clarifying whether musicians perform better than nonmusicians in memory tasks. Education Source; PEP (WEB)-Psychoanalytic Electronic Publishing; Psychology and Behavioral Science (EBSCO); PsycINFO (Ovid); PubMed; ScienceDirect-AllBooks Content (Elsevier API); SCOPUS (Elsevier API); SocINDEX with Full Text (EBSCO) and Google Scholar were searched for eligible studies. The selected studies involved two groups of participants: young adult musicians and nonmusicians. All the studies included memory tasks (loading long-term, short-term or working memory) that contained tonal, verbal or visuospatial stimuli. Three meta-analyses were run separately for long-term memory, short-term memory and working memory. We collected 29 studies, including 53 memory tasks. The results showed that musicians performed better than nonmusicians in terms of long-term memory, g = .29, 95% CI (.08-.51), short-term memory, g = .57, 95% CI (.41-.73), and working memory, g = .56, 95% CI (.33-.80). To further explore the data, we included a moderator (the type of stimulus presented, i.e., tonal, verbal or visuospatial), which was found to influence the effect size for short-term and working memory, but not for long-term memory. In terms of short-term and working memory, the musicians' advantage was large with tonal stimuli, moderate with verbal stimuli, and small or null with visuospatial stimuli. The three meta-analyses revealed a small effect size for long-term memory, and a medium effect size for short-term and working memory, suggesting that musicians perform better than nonmusicians in memory tasks. Moreover, the effect of the moderator suggested that the type of stimulus influences this advantage.
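The g values reported here (and the Hedges' g values in the PPA meta-analysis above) are standardized mean differences with a small-sample correction. A minimal sketch of the computation on made-up group summaries (the numbers below are hypothetical, not from any study in the meta-analysis):

```python
import math

def hedges_g(mean1, sd1, n1, mean2, sd2, n2):
    """Hedges' g: Cohen's d computed with the pooled SD, multiplied by
    the small-sample correction factor J = 1 - 3/(4*df - 1)."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2)
                          / (n1 + n2 - 2))
    d = (mean1 - mean2) / pooled_sd
    df = n1 + n2 - 2
    return d * (1 - 3 / (4 * df - 1))

# Hypothetical group summaries: musicians vs. nonmusicians on a span task.
g = hedges_g(7.2, 1.5, 30, 6.4, 1.5, 30)
```

With equal SDs of 1.5 and a raw difference of 0.8, d is about 0.53, and the correction shrinks it slightly; by the conventional labels used in the abstract this is a "medium" effect.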
Hyun, Seung; Kwon, Owoong; Lee, Bom-Yi; Seol, Daehee; Park, Beomjin; Lee, Jae Yong; Lee, Ju Hyun; Kim, Yunseok; Kim, Jin Kon
2016-01-21
Multi-level non-volatile memory based on multiple data writing has gained strong attention for next-generation memory devices because it can store multiple data bits in a single memory cell at once, quickly accommodating an extremely large number of data bits. However, all previously reported devices have failed to store a large number of data bits due to their macroscale cell size, and slow single data writing has prevented fast access to the stored data. Here, we introduce a novel three-dimensional multi-floor cascading polymeric ferroelectric nanostructure that successfully operates as an individual cell. In one cell, each floor has its own piezoresponse, and the piezoresponse of one floor can be modulated by the bias voltage applied to the other floor, which means that data bits written simultaneously to both floors can be identified. This could achieve multi-level memory through a multiple data writing process.
Fabry-Perot confocal resonator optical associative memory
NASA Astrophysics Data System (ADS)
Burns, Thomas J.; Rogers, Steven K.; Vogel, George A.
1993-03-01
A unique optical associative memory architecture is presented that combines the optical processing environment of a Fabry-Perot confocal resonator with the dynamic storage and recall properties of volume holograms. The confocal resonator reduces the size and complexity of previous associative memory architectures by folding a large number of discrete optical components into an integrated, compact optical processing environment. Experimental results demonstrate the system is capable of recalling a complete object from memory when presented with partial information about the object. A Fourier optics model of the system's operation shows it implements a spatially continuous version of a discrete, binary Hopfield neural network associative memory.
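The discrete, binary Hopfield associative memory that the optical system generalizes can be sketched in a few lines: Hebbian weights store a pattern, and iterated thresholded updates recover it from a corrupted probe, the "complete object from partial information" behaviour described above (this is the textbook discrete model, not the paper's continuous optical implementation):

```python
def train_hopfield(patterns):
    """Hebbian weights for a discrete Hopfield network:
    w[i][j] = sum over stored +/-1 patterns of p[i]*p[j]/n, zero diagonal."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / n
    return w

def recall(w, probe, steps=10):
    """Synchronous sign updates until the state stops changing."""
    s = list(probe)
    for _ in range(steps):
        new = [1 if sum(w[i][j] * s[j] for j in range(len(s))) >= 0 else -1
               for i in range(len(s))]
        if new == s:
            break
        s = new
    return s

# Store one 8-bit pattern, then recall it from a probe with two bits flipped.
stored = [1, -1, 1, -1, 1, 1, -1, -1]
w = train_hopfield([stored])
probe = list(stored)
probe[0] *= -1
probe[1] *= -1                      # corrupt the probe ("partial information")
result = recall(w, probe)
```

Because the corrupted probe still correlates strongly with the stored pattern, a single update already restores every bit; the resonator performs the analogous relaxation optically.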
Memory-based snowdrift game on a square lattice
NASA Astrophysics Data System (ADS)
Shu, Feng; Liu, Xingwen; Fang, Kai; Chen, Hao
2018-04-01
Spatial reciprocity is a widely accepted, effective way to facilitate cooperation. In the case of the snowdrift game, some studies showed that spatial reciprocity inhibits cooperation over a very wide range of the cost-to-benefit ratio r, while others found that, based on spatial reciprocity, a high cooperation level can be achieved over a wider range of r. Thus, how to enlarge the range of r over which cooperation is promoted has recently become a hot topic. This paper proposes a new memory-based method, in which each individual compares its own previous payoffs to find the maximal one as a virtual payoff, and then randomly compares itself with one of its neighbours to obtain the optimal strategy according to the given updating rules. This shows the positive effect of spatial reciprocity in the context of memory. Specifically, a high cooperation level can emerge not only for small r but also for large r; that is, an expected cooperation level can be achieved simultaneously for small and large r. Furthermore, the scenarios of both constant-size and size-varying memory are investigated. Interestingly, the cooperation level gradually drops as the memory size increases.
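The memory ingredient of the method can be sketched in isolation (a toy illustration with hypothetical names and a simplified deterministic rule, not the paper's full lattice dynamics): each agent keeps a bounded window of its own past payoffs, takes the maximum as its virtual payoff, and imitates a neighbour only if the neighbour's virtual payoff beats it.

```python
from collections import deque

class Agent:
    """Toy sketch of the memory-based comparison: each agent remembers a
    bounded window of its own past payoffs and uses the maximum as a
    'virtual payoff' when deciding whether to imitate a neighbour."""

    def __init__(self, strategy, memory_size):
        self.strategy = strategy                  # "C" (cooperate) or "D" (defect)
        self.payoffs = deque(maxlen=memory_size)  # old payoffs fall out

    def record(self, payoff):
        self.payoffs.append(payoff)

    def virtual_payoff(self):
        return max(self.payoffs)

    def maybe_imitate(self, neighbour):
        # Adopt the neighbour's strategy only if its remembered best
        # beats this agent's remembered best.
        if neighbour.virtual_payoff() > self.virtual_payoff():
            self.strategy = neighbour.strategy

a = Agent("C", memory_size=3)
b = Agent("D", memory_size=3)
for p in (1.0, 0.6, 0.2):   # a's payoffs decline, but it remembers the 1.0
    a.record(p)
b.record(0.8)
a.maybe_imitate(b)           # 0.8 < max(a's memory) = 1.0, so a stays "C"
```

Note how the bounded `deque` also captures the size-varying-memory scenario: once the remembered 1.0 falls out of the window, the same comparison flips and the agent switches strategy, which hints at why cooperation levels can depend on memory size.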
How many pixels make a memory? Picture memory for small pictures.
Wolfe, Jeremy M; Kuzmova, Yoana I
2011-06-01
Torralba (Visual Neuroscience, 26, 123-131, 2009) showed that, if the resolution of images of scenes was reduced to the information present in very small "thumbnail images," those scenes could still be recognized. The objects in those degraded scenes could be identified, even though it would be impossible to identify them if they were removed from the scene context. Can tiny and/or degraded scenes be remembered, or are they like brief presentations: identified but not remembered? We report that memory for tiny and degraded scenes parallels the recognizability of those scenes. You can remember a scene approximately to the degree to which you can classify it. Interestingly, there is a striking asymmetry in memory when scenes are not the same size at their initial appearance and subsequent test. Memory for a large, full-resolution stimulus can be tested with a small, degraded stimulus. However, memory for a small stimulus is not retrieved when it is tested with a large stimulus.
Set-Membership Identification for Robust Control Design
1993-04-28
system G can be regarded as having no memory of events prior to t = 1, the initial time. Roughly, this means all ... in (18) in terms of G and Θ, we get ... algorithm in [1]. Also, in our application, the size of the matrices involved is quite large and special attention should be paid to memory management and algorithmic implementation; otherwise huge amounts of memory will be required to perform the optimization even for modest values of M and N
Effects of Picture Size and Placement on Memory for Written Words.
ERIC Educational Resources Information Center
Blischak, Doreen M.; McDaniel, Mark A.
1995-01-01
Normally developing kindergarten children (n=45) were shown written words under 4 conditions representing various size and position relationships between line drawings and orthography. Results showed superior performance for word-only and enhanced-word conditions, over those conditions pairing small or large drawings with written words. Results…
Memory bias for threatening information in anxiety and anxiety disorders: a meta-analytic review.
Mitte, Kristin
2008-11-01
Although some theories suggest that anxious individuals selectively remember threatening stimuli, findings remain contradictory despite a considerable amount of research. A quantitative integration of 165 studies with 9,046 participants (clinical and nonclinical samples) examined whether a memory bias exists and which moderator variables influence its magnitude. Implicit memory bias was investigated in lexical decision/stimulus identification and word-stem completion paradigms; explicit memory bias was investigated in recognition and recall paradigms. Overall, effect sizes showed no significant impact of anxiety on implicit memory and recognition. Analyses indicated a memory bias for recall, whose magnitude depended on experimental study procedures like the encoding procedure or retention interval. Anxiety influenced recollection of previous experiences; anxious individuals favored threat-related information. Across all paradigms, clinical status was not significantly linked to effect sizes, indicating no qualitative difference in information processing between anxiety patients and high-anxious persons. The large discrepancy between study effects in recall and recognition indicates that future research is needed to identify moderator variables for avoidant and preferred remembering.
A 16K-bit static IIL RAM with 25-ns access time
NASA Astrophysics Data System (ADS)
Inabe, Y.; Hayashi, T.; Kawarada, K.; Miwa, H.; Ogiue, K.
1982-04-01
A 16,384 x 1-bit RAM with 25-ns access time, 600-mW power dissipation, and 33 sq mm chip size has been developed. Excellent speed-power performance with high packing density has been achieved by an oxide isolation technology in conjunction with novel ECL circuit techniques and IIL flip-flop memory cells, 980 sq microns (35 x 28 microns) in cell size. These development results show that the IIL flip-flop memory cell is a strong candidate for achieving high-performance, large-capacity bipolar RAMs at densities of 16K bits/chip and above.
The human hippocampal formation mediates short-term memory of colour-location associations.
Finke, Carsten; Braun, Mischa; Ostendorf, Florian; Lehmann, Thomas-Nicolas; Hoffmann, Karl-Titus; Kopp, Ute; Ploner, Christoph J
2008-01-31
The medial temporal lobe (MTL) has long been considered essential for declarative long-term memory, whereas the fronto-parietal cortex is generally seen as the anatomical substrate of short-term memory. This traditional dichotomy is questioned by recent studies suggesting a possible role of the MTL for short-term memory. In addition, there is no consensus on a possible specialization of MTL sub-regions for memory of associative information. Here, we investigated short-term memory for single features and feature associations in three humans with post-surgical lesions affecting the right hippocampal formation and in 10 healthy controls. We used three delayed-match-to-sample tasks with two delays (900/5000 ms) and three set sizes (2/4/6 items). Subjects were instructed to remember either colours, locations or colour-location associations. In colour-only and location-only conditions, performance of patients did not differ from controls. By contrast, a significant group difference was found in the association condition at 5000 ms delay. This difference was largely independent of set size, thus suggesting that it cannot be explained by the increased complexity of the association condition. These findings show that the hippocampal formation plays a significant role for short-term memory of simple visuo-spatial associations, and suggest a specialization of MTL sub-regions for associative memory.
Testing effects in visual short-term memory: The case of an object's size.
Makovski, Tal
2018-05-29
In many daily activities, we need to form and retain temporary representations of an object's size. Typically, such visual short-term memory (VSTM) representations follow perception and are considered reliable. Here, participants were asked to hold in mind a single simple object for a short duration and to reproduce its size by adjusting the length and width of a test probe. Experiment 1 revealed two powerful findings: First, similar to a recently reported perceptual illusion, participants greatly overestimated the size of open objects - ones with missing boundaries - relative to the same-size fully closed objects. This finding confirms that object boundaries are critical for size perception and memory. Second, and in contrast to perception, even the size of the closed objects was largely overestimated. Both inflation effects were substantial and were replicated and extended in Experiments 2-5. Experiments 6-8 used a different testing procedure to examine whether the overestimation effects are due to inflation of size in VSTM representations or to biases introduced during the reproduction phase. These data showed that while the overestimation of the open objects was repeated, the overestimation of the closed objects was not. Taken together, these findings suggest that similar to perception, only the size representation of open objects is inflated in VSTM. Importantly, they demonstrate the considerable impact of the testing procedure on VSTM tasks and further question the use of reproduction procedures for measuring VSTM.
A Survey Of Architectural Approaches for Managing Embedded DRAM and Non-volatile On-chip Caches
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mittal, Sparsh; Vetter, Jeffrey S; Li, Dong
Recent trends of CMOS scaling and the increasing number of on-chip cores have led to a large increase in the size of on-chip caches. Since SRAM has low density and consumes a large amount of leakage power, its use in designing on-chip caches has become more challenging. To address this issue, researchers are exploring the use of several emerging memory technologies, such as embedded DRAM, spin-transfer torque RAM, resistive RAM, phase-change RAM, and domain-wall memory. In this paper, we survey the architectural approaches proposed for designing memory systems and, specifically, caches with these emerging memory technologies. To highlight their similarities and differences, we present a classification of these technologies and architectural approaches based on their key characteristics. We also briefly summarize the challenges in using these technologies for architecting caches. We believe that this survey will help readers gain insights into the emerging memory device technologies and their potential use in designing future computing systems.
Ni-Mn-Ga shape memory nanoactuation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kohl, M., E-mail: manfred.kohl@kit.edu; Schmitt, M.; Krevet, B.
2014-01-27
To probe finite size effects in ferromagnetic shape memory nanoactuators, double-beam structures with minimum dimensions down to 100 nm are designed, fabricated, and characterized in-situ in a scanning electron microscope with respect to their coupled thermo-elastic and electro-thermal properties. Electrical resistance and mechanical beam bending tests demonstrate a reversible thermal shape memory effect down to 100 nm. Electro-thermal actuation involves large temperature gradients along the nanobeam in the order of 100 K/μm. We discuss the influence of surface and twin boundary energies and explain why free-standing nanoactuators behave differently compared to constrained geometries like films and nanocrystalline shape memory alloys.
Ni-Mn-Ga shape memory nanoactuation
NASA Astrophysics Data System (ADS)
Kohl, M.; Schmitt, M.; Backen, A.; Schultz, L.; Krevet, B.; Fähler, S.
2014-01-01
To probe finite size effects in ferromagnetic shape memory nanoactuators, double-beam structures with minimum dimensions down to 100 nm are designed, fabricated, and characterized in-situ in a scanning electron microscope with respect to their coupled thermo-elastic and electro-thermal properties. Electrical resistance and mechanical beam bending tests demonstrate a reversible thermal shape memory effect down to 100 nm. Electro-thermal actuation involves large temperature gradients along the nanobeam in the order of 100 K/μm. We discuss the influence of surface and twin boundary energies and explain why free-standing nanoactuators behave differently compared to constrained geometries like films and nanocrystalline shape memory alloys.
Benchmarking Memory Performance with the Data Cube Operator
NASA Technical Reports Server (NTRS)
Frumkin, Michael A.; Shabanov, Leonid V.
2004-01-01
Data movement across a computer memory hierarchy and across computational grids is known to be a limiting factor for applications processing large data sets. We use the Data Cube Operator on an Arithmetic Data Set, called ADC, to benchmark the capabilities of computers and of computational grids to handle large distributed data sets. We present a prototype implementation of a parallel algorithm for computation of the operator. The algorithm follows a known approach for computing views from the smallest parent. The ADC stresses all levels of grid memory and storage by producing some of the 2^d views of an Arithmetic Data Set of d-tuples described by a small number of integers. We control the data intensity of the ADC by selecting the tuple parameters, the sizes of the views, and the number of realized views. Benchmarking results of the memory performance of a number of computer architectures and of a small computational grid are presented.
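As an illustration of the kind of computation the ADC exercises (a minimal sketch, not the benchmark's actual parallel implementation; `data_cube_views` is a hypothetical name), the 2^d group-by views of a set of d-tuples can be enumerated directly:

```python
from itertools import combinations

def data_cube_views(tuples, d):
    """Compute all 2^d group-by views of a collection of d-tuples.

    Each view projects the tuples onto one subset of the d attributes
    and counts the resulting groups, mimicking the Data Cube operator.
    """
    views = {}
    for r in range(d + 1):
        for dims in combinations(range(d), r):
            # Project every tuple onto the chosen dimensions and group.
            groups = {}
            for t in tuples:
                key = tuple(t[i] for i in dims)
                groups[key] = groups.get(key, 0) + 1
            views[dims] = groups
    return views

# A tiny Arithmetic-Data-Set-like input: 3-tuples of small integers.
data = [(1, 2, 3), (1, 2, 4), (2, 2, 3)]
views = data_cube_views(data, 3)
print(len(views))  # 8 views: one per subset of the 3 dimensions
```

The smallest-parent optimization mentioned in the abstract would instead derive each view from an already-computed view over a superset of its dimensions, rather than rescanning the raw tuples.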
A fast low-power optical memory based on coupled micro-ring lasers
NASA Astrophysics Data System (ADS)
Hill, Martin T.; Dorren, Harmen J. S.; de Vries, Tjibbe; Leijtens, Xaveer J. M.; den Besten, Jan Hendrik; Smalbrugge, Barry; Oei, Yok-Siang; Binsma, Hans; Khoe, Giok-Djan; Smit, Meint K.
2004-11-01
The increasing speed of fibre-optic-based telecommunications has focused attention on high-speed optical processing of digital information. Complex optical processing requires a high-density, high-speed, low-power optical memory that can be integrated with planar semiconductor technology for buffering of decisions and telecommunication data. Recently, ring lasers with extremely small size and low operating power have been made, and we demonstrate here a memory element constructed by interconnecting these microscopic lasers. Our device occupies an area of 18 × 40 µm² on an InP/InGaAsP photonic integrated circuit, and switches within 20 ps with 5.5 fJ optical switching energy. Simulations show that the element has the potential for much smaller dimensions and switching times. Large numbers of such memory elements can be densely integrated and interconnected on a photonic integrated circuit: fast digital optical information processing systems employing large-scale integration should now be viable.
Microstructure, crystallization and shape memory behavior of titania and yttria co-doped zirconia
Zeng, Xiao Mei; Du, Zehui; Schuh, Christopher A.; ...
2015-12-17
Small volume zirconia ceramics with few or no grain boundaries have recently been demonstrated to exhibit the shape memory effect. To explore the shape memory properties of yttria-doped zirconia (YDZ), it is desirable to develop large, microscale grains instead of the submicron grains that result from typical processing of YDZ. In this paper, we have successfully produced single-crystal micro-pillars from microscale grains encouraged by the addition of titania during processing. Titania has been doped into YDZ ceramics, and its effect on the grain growth, crystallization, and microscale elemental distribution of the ceramics has been systematically studied. With 5 mol% titania doping, the grain size can be increased up to ~4 μm, while retaining a large quantity of the desired tetragonal phase of zirconia. Finally, micro-pillars machined from tetragonal grains exhibit the expected shape memory effects, whereas pillars made from titania-free YDZ would not.
Threshold-voltage modulated phase change heterojunction for application of high density memory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yan, Baihan; Tong, Hao, E-mail: tonghao@hust.edu.cn; Qian, Hang
2015-09-28
Phase change random access memory is one of the most important candidates for the next-generation non-volatile memory technology. However, the ability to reduce its memory size is compromised by the fundamental limitations inherent in CMOS technology. While the 0T1R configuration without any additional access transistor shows great advantages in improving storage density, the leakage current and small operation window limit its application in large-scale arrays. In this work, a phase change heterojunction based on GeTe and n-Si is fabricated to address those problems. The relationship between threshold voltage and doping concentration is investigated, and energy band diagrams and X-ray photoelectron spectroscopy measurements are provided to explain the results. The threshold voltage is modulated to provide a large operational window based on this relationship. The switching performance of the heterojunction is also tested, showing a good reverse characteristic, which could effectively decrease the leakage current. Furthermore, a reliable read-write-erase function is achieved during the tests. The phase change heterojunction is thus proposed for high-density memory, offering notable advantages such as a modulated threshold voltage, a large operational window, and low leakage current.
Reflections on CD-ROM: Bridging the Gap between Technology and Purpose.
ERIC Educational Resources Information Center
Saviers, Shannon Smith
1987-01-01
Provides a technological overview of CD-ROM (Compact Disc-Read Only Memory), an optically-based medium for data storage offering large storage capacity, computer-based delivery system, read-only medium, and economic mass production. CD-ROM database attributes appropriate for information delivery are also reviewed, including large database size,…
Hawkins, Keith A; Tulsky, David S
2004-06-01
Within discrepancy analysis differences between scores are examined for abnormality. Although larger differences are generally associated with rising impairment probabilities, the relationship between discrepancy size and abnormality varies across score pairs in relation to the correlation between the contrasted scores in normal subjects. Examinee ability level also affects the size of discrepancies observed normally. Wechsler Memory Scale-Third Edition (WMS-III) visual index scores correlate only modestly with other Wechsler Adult Intelligence Scale-Third Edition (WAIS-III) and WMS-III index scores; consequently, differences between these scores and others have to be very large before they become unusual, especially for subjects of higher intelligence. The substitution of the Faces subtest by Visual Reproductions within visual memory indexes formed by the combination of WMS-III visual subtests (creating immediate recall, delayed recall, and combined immediate and delayed index scores) results in higher correlation coefficients, and a decline in the discrepancy size required to surpass base rate thresholds for probable impairment. This gain appears not to occur at the cost of a diminished sensitivity to diverse pathologies. New WMS-III discrepancy base rate data are supplied to complement those currently available to clinicians.
Binary mesh partitioning for cache-efficient visualization.
Tchiboukdjian, Marc; Danjean, Vincent; Raffin, Bruno
2010-01-01
One important bottleneck when visualizing large data sets is the data transfer between processor and memory. Cache-aware (CA) and cache-oblivious (CO) algorithms take the memory hierarchy into consideration to design cache-efficient algorithms. CO approaches have the advantage of adapting to unknown and varying memory hierarchies. Recent CA and CO algorithms developed for 3D mesh layouts significantly improve the performance of previous approaches, but they lack theoretical performance guarantees. We present in this paper an O(N log N) algorithm to compute a CO layout for unstructured but well-shaped meshes. We prove that a coherent traversal of an N-size mesh in dimension d induces fewer than N/B + O(N/M^{1/d}) cache misses, where B and M are the block size and the cache size, respectively. Experiments show that our layout computation is faster and significantly less memory-consuming than the best known CO algorithm. Performance is comparable to this algorithm for classical visualization-algorithm access patterns, or better when the BSP tree produced while computing the layout is used as an acceleration data structure adjusted to the layout. We also show that cache-oblivious approaches lead to significant performance increases on recent GPU architectures.
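To convey the layout idea only (a hedged sketch, not the authors' algorithm; `bsp_order` is a hypothetical name), a cache-oblivious-style ordering can be produced by recursive bisection, whose recursion tree is exactly a BSP tree over the vertices:

```python
def bsp_order(vertices):
    """Order vertices by recursive bisection along the widest axis.

    At every level the set is split in half along its axis of largest
    extent, so spatially close vertices end up contiguous in memory at
    all scales -- the property cache-oblivious layouts rely on.
    """
    if len(vertices) <= 1:
        return list(vertices)
    dims = len(vertices[0])
    # Pick the axis with the largest spread of coordinates.
    axis = max(range(dims),
               key=lambda a: max(v[a] for v in vertices)
                             - min(v[a] for v in vertices))
    ordered = sorted(vertices, key=lambda v: v[axis])
    mid = len(ordered) // 2
    return bsp_order(ordered[:mid]) + bsp_order(ordered[mid:])

pts = [(0, 0), (10, 0), (1, 0), (11, 0)]
print(bsp_order(pts))  # [(0, 0), (1, 0), (10, 0), (11, 0)]
```

Because the recursion keeps nearby vertices contiguous at every scale, a coherent traversal touches memory in nested blocks, which is what lets a single layout perform well for any block size B and cache size M.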
Brooks, Samantha; Prince, Alexis; Stahl, Daniel; Campbell, Iain C; Treasure, Janet
2011-02-01
Maladaptive cognitions about food, weight and shape bias attention, memory and judgment and may be linked to disordered eating behaviour. This paper reviews information processing of food stimuli (words, pictures) in people with eating disorders (ED). PubMed, Ovid, ScienceDirect, PsychInfo, Web of Science, the Cochrane Library and Google Scholar were searched to December 2009. 63 studies measured attention, memory and judgment bias towards food stimuli in women with ED. Stroop tasks had sufficient sample sizes for a meta-analysis, and effects ranged from small to medium. Other studies of attention bias had variable effects (e.g. the Dot-Probe task, distracter tasks and Startle Eyeblink Modulation). A meta-analysis of memory bias studies in ED and RE yielded a non-significant effect. Effect sizes for judgment bias ranged from negligible to large. People with ED have greater attentional bias to food stimuli than healthy controls (HC). Evidence for a memory and judgment bias in ED is limited. Copyright © 2010 Elsevier Ltd. All rights reserved.
VOP memory management in MPEG-4
NASA Astrophysics Data System (ADS)
Vaithianathan, Karthikeyan; Panchanathan, Sethuraman
2001-03-01
MPEG-4 is a multimedia standard that requires Video Object Planes (VOPs). Generation of VOPs for arbitrary video sequences is still a challenging problem that largely remains unsolved. Nevertheless, if this problem is treated by imposing certain constraints, solutions for specific application domains can be found. MPEG-4 applications in mobile devices are one such domain, where the opposing goals of low power and high throughput must both be met. Efficient memory management plays a major role in reducing power consumption. Memory management for VOPs is particularly difficult because the lifetimes of these objects vary and may overlap. Varying object lifetimes require dynamic memory management, where memory fragmentation is a key problem that needs to be addressed. In general, memory management systems address this problem by following a combination of strategy, policy and mechanism. For MPEG-4-based mobile devices that lack instruction processors, a hardware-based memory management solution is necessary. In MPEG-4-based mobile devices that have a RISC processor, using a real-time operating system (RTOS) for this memory management task is not expected to be efficient, because the strategies and policies used by the RTOS are often tuned for handling memory segments of smaller sizes than typical object sizes. Hence, a memory management scheme specifically tuned for VOPs is important. In this paper, different strategies, policies and mechanisms for memory management are considered, and an efficient combination is proposed for VOP memory management, along with a hardware architecture that can handle the proposed combination.
Wide field fluorescence epi-microscopy behind a scattering medium enabled by speckle correlations
NASA Astrophysics Data System (ADS)
Hofer, Matthias; Soeller, Christian; Brasselet, Sophie; Bertolotti, Jacopo
2018-04-01
Fluorescence microscopy is widely used in biological imaging; however, scattering from tissues strongly limits its applicability to shallow depths. In this work we adapt a methodology inspired by stellar speckle interferometry and exploit the optical memory effect to enable fluorescence microscopy through a turbid layer. We demonstrate efficient reconstruction of micrometer-size fluorescent objects behind a scattering medium in epi-microscopy, and study the specificities of this imaging modality (magnification, field of view, resolution) as compared to traditional microscopy. Using a modified phase retrieval algorithm to reconstruct fluorescent objects from speckle images, we demonstrate robust reconstructions even in relatively low signal-to-noise conditions. This modality is particularly appropriate for imaging in biological media, which are known to exhibit relatively large optical memory ranges, compatible with fields of view of tens of micrometers, and large spectral bandwidths, compatible with fluorescence emission spectra tens of nanometers wide.
Efficient Checkpointing of Virtual Machines using Virtual Machine Introspection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aderholdt, Ferrol; Han, Fang; Scott, Stephen L
Cloud computing environments rely heavily on system-level virtualization. This is due to the inherent benefits of virtualization, including fault tolerance through checkpoint/restart (C/R) mechanisms. Because clouds are the abstraction of large data centers, and large data centers have a higher potential for failure, it is imperative that a C/R mechanism for such an environment provide minimal latency as well as a small checkpoint file size. Recently, there has been much research into C/R for virtual machines (VMs), providing excellent solutions to reduce either checkpoint latency or checkpoint file size. However, these approaches do not provide both. This paper presents a method of checkpointing VMs by utilizing virtual machine introspection (VMI). Through the use of VMI, we are able to determine which pages of memory within the guest are used or free, and are thus better able to reduce the number of pages written to disk during a checkpoint. We have validated this work by using various benchmarks to measure the latency along with the checkpoint size. With respect to checkpoint file size, our approach results in file sizes within 24% or less of the actual used memory within the guest. Additionally, the checkpoint latency of our approach is up to 52% faster than KVM's default method.
Arithmetic Data Cube as a Data Intensive Benchmark
NASA Technical Reports Server (NTRS)
Frumkin, Michael A.; Shabano, Leonid
2003-01-01
Data movement across computational grids and across the memory hierarchy of individual grid machines is known to be a limiting factor for applications involving large data sets. In this paper we introduce the Data Cube Operator on an Arithmetic Data Set, which we call the Arithmetic Data Cube (ADC). We propose to use the ADC to benchmark grid capabilities to handle large distributed data sets. The ADC stresses all levels of grid memory by producing 2^d views of an Arithmetic Data Set of d-tuples described by a small number of parameters. We control the data intensity of the ADC by controlling the sizes of the views through the choice of the tuple parameters.
Optimized Laplacian image sharpening algorithm based on graphic processing unit
NASA Astrophysics Data System (ADS)
Ma, Tinghuai; Li, Lu; Ji, Sai; Wang, Xin; Tian, Yuan; Al-Dhelaan, Abdullah; Al-Rodhaan, Mznah
2014-12-01
In classical Laplacian image sharpening, all pixels are processed one by one, which leads to a large amount of computation. Traditional Laplacian sharpening performed on a CPU is considerably time-consuming, especially for large pictures. In this paper, we propose a parallel implementation of Laplacian sharpening based on the Compute Unified Device Architecture (CUDA), a computing platform for Graphics Processing Units (GPUs), and analyze the impact of picture size on performance as well as the relationship between data transfer time and parallel computing time. Further, according to the different characteristics of the different memory types, an improved scheme of our method is developed, which exploits shared memory on the GPU instead of global memory and further increases efficiency. Experimental results show that both novel algorithms outperform the traditional sequential method based on OpenCV in computing speed.
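A minimal sequential reference for the operation being parallelized (a sketch assuming the classical 4-neighbour Laplacian kernel; the paper's CUDA kernels are not reproduced here):

```python
import numpy as np

def laplacian_sharpen(img, strength=1.0):
    """Sharpen a grayscale image by subtracting its Laplacian.

    Uses the 4-neighbour Laplacian kernel; every output pixel depends
    only on its own neighbourhood, which is what makes the operation
    easy to parallelize on a GPU (one thread per pixel).
    """
    padded = np.pad(img.astype(float), 1, mode="edge")
    # 4-neighbour Laplacian: sum of neighbours minus 4x the centre.
    lap = (padded[:-2, 1:-1] + padded[2:, 1:-1]
           + padded[1:-1, :-2] + padded[1:-1, 2:]
           - 4.0 * padded[1:-1, 1:-1])
    # Subtracting the Laplacian boosts intensity differences at edges.
    return np.clip(img - strength * lap, 0, 255)

step = np.array([[100.0, 100.0, 150.0, 150.0]] * 4)
print(laplacian_sharpen(step)[0])  # [ 50.  50. 200. 200.]
```

In the CUDA version each output pixel maps to one thread; the shared-memory variant additionally stages each tile plus a one-pixel halo in on-chip shared memory so neighbouring threads do not re-read global memory.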
Size Reduction of Hamiltonian Matrix for Large-Scale Energy Band Calculations Using Plane Wave Bases
NASA Astrophysics Data System (ADS)
Morifuji, Masato
2018-01-01
We present a method of reducing the size of the Hamiltonian matrix used in calculations of electronic states. In electronic structure calculations using plane wave basis functions, a large number of plane waves is often required to obtain precise results. Even using state-of-the-art techniques, the Hamiltonian matrix often becomes very large. The large computational time and memory necessary for diagonalization limit the widespread use of band calculations. We show a procedure for deriving a reduced Hamiltonian constructed from a small number of low-energy bases by renormalizing the high-energy bases. We demonstrate numerically that a significant speedup of eigenstate evaluation is achieved without losing accuracy.
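One standard formulation of such a renormalization is Löwdin partitioning (an assumed form for illustration; the abstract does not give the authors' equations). Splitting the plane-wave basis into low-energy (L) and high-energy (H) blocks:

```latex
H = \begin{pmatrix} H_{LL} & H_{LH} \\ H_{HL} & H_{HH} \end{pmatrix},
\qquad
H_{LL}^{\mathrm{eff}}(E) = H_{LL} + H_{LH}\,\bigl(E - H_{HH}\bigr)^{-1} H_{HL}
```

Diagonalizing the small block $H_{LL}^{\mathrm{eff}}$, whose dimension is only the number of low-energy bases, reproduces the low-energy eigenvalues of the full matrix.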
The transitional behaviour of avalanches in cohesive granular materials
NASA Astrophysics Data System (ADS)
Quintanilla, M. A. S.; Valverde, J. M.; Castellanos, A.
2006-07-01
We present a statistical analysis of avalanches of granular materials that partially fill a slowly rotated horizontal drum. For large, noncohesive grains the classical coherent oscillation is reproduced, consisting of a quasi-periodic succession of regularly sized avalanches. As the powder cohesiveness is increased by decreasing the particle size, we observe a gradual crossover to a complex dynamics that resembles the transitional behaviour observed in fusion plasmas. For particle sizes below ~50 µm, avalanches lose a characteristic size, retain a short-term memory and become gradually decorrelated in the long term, as described by a Markov process. In contrast, large grains made cohesive by coating them with adhesive microparticles display a distinct phenomenology, characterized by a quasi-regular succession of well-defined small precursors and large relaxation events. The transition from a one-peaked distribution (noncohesive large beads) to a flattened distribution (fine cohesive beads), passing through the two-peaked distribution of cohesive large beads, had already been predicted using a coupled-map lattice model, as the relaxation mechanism of grain reorganization becomes dominant to the detriment of inertia.
Discrete memory impairments in largely pure chronic users of MDMA.
Wunderli, Michael D; Vonmoos, Matthias; Fürst, Marina; Schädelin, Katrin; Kraemer, Thomas; Baumgartner, Markus R; Seifritz, Erich; Quednow, Boris B
2017-10-01
Chronic use of 3,4-methylenedioxymethamphetamine (MDMA, "ecstasy") has repeatedly been associated with deficits in working memory, declarative memory, and executive functions. However, previous findings regarding working memory and executive function remain inconclusive, as most studies did not adequately control for concomitant stimulant use, which is known to affect these functions. We therefore compared the cognitive performance of 26 stimulant-free and largely pure (primary) MDMA users, 25 stimulant-using polydrug MDMA users, and 56 MDMA/stimulant-naïve controls by applying a comprehensive neuropsychological test battery. Neuropsychological tests were grouped into four cognitive domains. Recent drug use was objectively quantified by 6-month hair analyses for 17 substances and metabolites. Considerably lower mean hair concentrations of stimulants (amphetamine, methamphetamine, methylphenidate, cocaine), opioids (morphine, methadone, codeine), and hallucinogens (ketamine, 2C-B) were detected in primary compared to polydrug users, while both user groups did not differ in their MDMA hair concentration. Cohen's d effect sizes for both comparisons, i.e., primary MDMA users vs. controls and polydrug MDMA users vs. controls, were highest for declarative memory (d_primary = 0.90, d_polydrug = 1.21), followed by working memory (d_primary = 0.52, d_polydrug = 0.96), executive functions (d_primary = 0.46, d_polydrug = 0.86), and attention (d_primary = 0.23, d_polydrug = 0.70). Thus, primary MDMA users showed strong and relatively discrete declarative memory impairments, whereas MDMA polydrug users displayed broad and unspecific cognitive impairments. Consequently, even largely pure chronic MDMA use is associated with decreased performance in declarative memory, while the additional deficits in working memory and executive functions displayed by polydrug MDMA users are likely driven by stimulant co-use. Copyright © 2017 Elsevier B.V. and ECNP. All rights reserved.
Greenwood, Pamela M; Schmidt, Kevin; Lin, Ming-Kuan; Lipsky, Robert; Parasuraman, Raja; Jankord, Ryan
2018-06-21
The central role of working memory in IQ and the high heritability of working memory performance have motivated interest in identifying the specific genes underlying this heritability. The FTCD (formimidoyltransferase cyclodeaminase) gene was identified as a candidate gene for allelic association with working memory, in part from genetic mapping studies of mouse Morris water maze performance. The present study tested variants of this gene for effects on a delayed match-to-sample task in a large sample of younger and older participants. The rs914246 variant, but not the rs914245 variant, of the FTCD gene modulated accuracy in the task for younger, but not older, people under high working memory load. The interaction of haplotype × distance × load had a partial eta-squared effect size of 0.015. Analysis of simple main effects yielded partial eta-squared effect sizes ranging from 0.012 to 0.040. A reporter gene assay revealed that the C allele of the rs914246 genotype is functional and a main factor regulating FTCD gene expression. This study extends previous work on the genetics of working memory by revealing that a gene in the glutamatergic pathway modulates working memory in young people but not in older people. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Memory reduction through higher level language hardware
NASA Technical Reports Server (NTRS)
Kerner, H.; Gellman, L.
1972-01-01
Application of large scale integration in computers to reduce size and manufacturing costs and to produce improvements in logic function is discussed. Use of FORTRAN 4 as computer language for this purpose is described. Effectiveness of method in storing information is illustrated.
Extreme Quantum Memory Advantage for Rare-Event Sampling
NASA Astrophysics Data System (ADS)
Aghamohammadi, Cina; Loomis, Samuel P.; Mahoney, John R.; Crutchfield, James P.
2018-02-01
We introduce a quantum algorithm for memory-efficient biased sampling of rare events generated by classical memoryful stochastic processes. Two efficiency metrics are used to compare quantum and classical resources for rare-event sampling. For a fixed stochastic process, the first is the classical-to-quantum ratio of required memory. We show for two example processes that there exists an infinite number of rare-event classes for which the memory ratio for sampling is larger than r, for any large real number r. Then, for a sequence of processes each labeled by an integer size N, we compare how the classical and quantum required memories scale with N. In this setting, since both memories can diverge as N → ∞, the efficiency metric tracks how fast they diverge. An extreme quantum memory advantage exists when the classical memory diverges in the limit N → ∞, but the quantum memory has a finite bound. We then show that finite-state Markov processes and spin chains exhibit a memory advantage for sampling of almost all of their rare-event classes.
Optoelectronic-cache memory system architecture.
Chiarulli, D M; Levitan, S P
1996-05-10
We present an investigation of the architecture of an optoelectronic cache that can integrate terabit optical memories with the electronic caches associated with high-performance uniprocessors and multiprocessors. The use of optoelectronic cache memories enables these terabit technologies to transparently provide low-latency secondary memory with frame sizes comparable to disk pages but with latencies that approach those of electronic secondary-cache memories. This enables the implementation of terabit memories with effective access times comparable to the cycle times of current microprocessors. The cache design is based on the use of a smart-pixel array and combines parallel free-space optical input-output to and from optical memory with conventional electronic communication to the processor caches. This cache and the optical memory system to which it will interface provide a large random-access memory space that has a lower overall latency than that of magnetic disks and disk arrays. In addition, as a consequence of the high-bandwidth parallel input-output capabilities of optical memories, fault service times for the optoelectronic cache are substantially less than those currently achievable with any rotational media.
Ackermann, Sandra; Hartmann, Francina; Papassotiropoulos, Andreas; de Quervain, Dominique J-F; Rasch, Björn
2015-06-01
Sleep and memory are stable and heritable traits that strongly differ between individuals. Sleep benefits memory consolidation, and the amounts of slow wave sleep, sleep spindles, and rapid eye movement sleep have repeatedly been identified as reliable predictors of the amount of declarative and/or emotional memories retrieved after a consolidation period filled with sleep. These studies typically encompass small sample sizes, increasing the probability of overestimating the real association strength. In a large sample, we tested whether individual differences in sleep are predictive of individual differences in memory for emotional and neutral pictures. The study used a between-subject design; cognitive testing took place at the University of Basel, Switzerland, and sleep was recorded at participants' homes using portable electroencephalograph recording devices. Participants were 929 healthy young adults (mean age 22.48 ± 3.60 y), and there were no interventions. In striking contrast to our expectations as well as numerous previous findings, we did not find any significant correlations between sleep and memory consolidation for pictorial stimuli. Our results indicate that individual differences in sleep are much less predictive of pictorial memory processes than previously assumed, and suggest that previous studies using small sample sizes might have overestimated the association strength between sleep stage duration and pictorial memory performance. Future studies need to determine whether intraindividual differences, rather than interindividual differences, in sleep stage duration might be more predictive of the consolidation of emotional and neutral pictures during sleep. © 2015 Associated Professional Sleep Societies, LLC.
Memory Benchmarks for SMP-Based High Performance Parallel Computers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoo, A B; de Supinski, B; Mueller, F
2001-11-20
As the speed gap between CPU and main memory continues to grow, memory accesses increasingly dominate the performance of many applications. The problem is particularly acute for symmetric multiprocessor (SMP) systems, where the shared memory may be accessed concurrently by a group of threads running on separate CPUs. Unfortunately, several key issues governing memory system performance in current systems are not well understood. Complex interactions between the levels of the memory hierarchy, buses or switches, DRAM back-ends, system software, and application access patterns can make it difficult to pinpoint bottlenecks and determine appropriate optimizations, and the situation is even more complex for SMP systems. To partially address this problem, we formulated a set of multi-threaded microbenchmarks for characterizing and measuring the performance of the underlying memory system in SMP-based high-performance computers. We report our use of these microbenchmarks on two important SMP-based machines. This paper has four primary contributions. First, we introduce a microbenchmark suite to systematically assess and compare the performance of different levels in SMP memory hierarchies. Second, we present a new tool based on hardware performance monitors to determine a wide array of memory system characteristics, such as cache sizes, quickly and easily; by using this tool, memory performance studies can be targeted to the full spectrum of performance regimes with many fewer data points than is otherwise required. Third, we present experimental results indicating that the performance of applications with large memory footprints remains largely constrained by memory. Fourth, we demonstrate that thread-level parallelism further degrades memory performance, even for the latest SMPs with hardware prefetching and switch-based memory interconnects.
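A common pattern in such microbenchmark suites is pointer chasing, which measures dependent-load latency by following a random cycle that defeats hardware prefetching. The sketch below is illustrative only (interpreter overhead dominates in Python, so results are qualitative; a real suite would write this loop in C):

```python
import random
import time

def pointer_chase_latency(n, iters=200_000):
    """Estimate per-access cost of dependent loads by chasing a
    random cycle: each slot stores the index of the next slot, so
    every load depends on the previous one and cannot be prefetched.
    """
    perm = list(range(n))
    random.shuffle(perm)
    # Link the shuffled slots into one cycle covering all n of them.
    chain = [0] * n
    idx = perm[0]
    for nxt in perm[1:]:
        chain[idx] = nxt
        idx = nxt
    chain[idx] = perm[0]

    i = 0
    start = time.perf_counter()
    for _ in range(iters):
        i = chain[i]  # the dependent load being timed
    elapsed = time.perf_counter() - start
    return elapsed / iters  # seconds per dependent access
```

Sweeping n from cache-resident to DRAM-sized working sets and plotting the per-access cost reveals the capacity of each level of the hierarchy, which is how such suites infer cache sizes.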
Size effects on magnetic actuation in Ni-Mn-Ga shape-memory alloys.
Dunand, David C; Müllner, Peter
2011-01-11
The off-stoichiometric Ni(2)MnGa Heusler alloy is a magnetic shape-memory alloy capable of reversible magnetic-field-induced strains (MFIS). These are generated by twin boundaries moving under the influence of an internal stress produced by a magnetic field through the magnetocrystalline anisotropy. While MFIS are very large (up to 10%) for monocrystalline Ni-Mn-Ga, they are near zero (<0.01%) in fine-grained polycrystals due to incompatibilities during twinning of neighboring grains and the resulting internal geometrical constraints. By growing the grains and/or shrinking the sample, the grain size becomes comparable to one or more characteristic sample sizes (film thickness, wire or strut diameter, ribbon width, particle diameter, etc), and the grains become surrounded by free space. This reduces the incompatibilities between neighboring grains and can favor twinning and thus increase the MFIS. This approach was validated recently with very large MFIS (0.2-8%) measured in Ni-Mn-Ga fibers and foams with bamboo grains with dimensions similar to the fiber or strut diameters and in thin plates where grain diameters are comparable to plate thickness. Here, we review processing, micro- and macrostructure, and magneto-mechanical properties of (i) Ni-Mn-Ga powders, fibers, ribbons and films with one or more small dimension, which are amenable to the growth of bamboo grains leading to large MFIS, and (ii) "constructs" from these structural elements (e.g., mats, laminates, textiles, foams and composites). Various strategies are proposed to accentuate this geometric effect which enables large MFIS in polycrystalline Ni-Mn-Ga by matching grain and sample sizes.
New library buildings: the Health Sciences Library, Memorial University of Newfoundland, St. John's.
Fredericksen, R B
1979-01-01
The new Health Sciences Library of Memorial University of Newfoundland is described and illustrated. A library facility that forms part of a larger health sciences center, this is a medium-sized academic health sciences library built on a single level. Along with a physical description of the library and its features, the concepts of single-level libraries, phased occupancy, and the project management approach to building a large health center library are discussed in detail. PMID:476319
Phonological skills and their role in learning to read: a meta-analytic review.
Melby-Lervåg, Monica; Lyster, Solveig-Alma Halaas; Hulme, Charles
2012-03-01
The authors report a systematic meta-analytic review of the relationships among 3 of the most widely studied measures of children's phonological skills (phonemic awareness, rime awareness, and verbal short-term memory) and children's word reading skills. The review included both extreme group studies and correlational studies with unselected samples (235 studies were included, and 995 effect sizes were calculated). Results from extreme group comparisons indicated that children with dyslexia show a large deficit on phonemic awareness in relation to typically developing children of the same age (pooled effect size estimate: -1.37) and children matched on reading level (pooled effect size estimate: -0.57). There were significantly smaller group deficits on both rime awareness and verbal short-term memory (pooled effect size estimates: rime skills in relation to age-matched controls, -0.93, and reading-level controls, -0.37; verbal short-term memory skills in relation to age-matched controls, -0.71, and reading-level controls, -0.09). Analyses of studies of unselected samples showed that phonemic awareness was the strongest correlate of individual differences in word reading ability and that this effect remained reliable after controlling for variations in both verbal short-term memory and rime awareness. These findings support the pivotal role of phonemic awareness as a predictor of individual differences in reading development. We discuss whether such a relationship is a causal one and the implications of research in this area for current approaches to the teaching of reading and interventions for children with reading difficulties.
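The pooled effect size estimates reported in this abstract are standardised mean differences. As a generic illustration (not the authors' code), Cohen's d and a fixed-effect inverse-variance pooling across studies can be computed as:

```python
import math

def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
    """Standardised mean difference using the pooled standard deviation."""
    pooled_var = ((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2)
    return (mean1 - mean2) / math.sqrt(pooled_var)

def d_variance(d, n1, n2):
    """Approximate sampling variance of d for an independent-groups design."""
    return (n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2))

def pooled_effect(studies):
    """Fixed-effect (inverse-variance weighted) pooled estimate.

    `studies` is a list of (d, n1, n2) tuples, one per study.
    """
    weights = [1.0 / d_variance(d, n1, n2) for d, n1, n2 in studies]
    return sum(w * s[0] for w, s in zip(weights, studies)) / sum(weights)

# Two made-up studies comparing a deficit group with controls:
demo = [(cohens_d(85.0, 100.0, 15.0, 15.0, 30, 30), 30, 30),
        (cohens_d(90.0, 100.0, 12.0, 13.0, 50, 50), 50, 50)]
print(pooled_effect(demo))
```

The random-effects models typically used in published meta-analyses add a between-study variance component on top of this fixed-effect weighting.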
A generalized memory test algorithm
NASA Technical Reports Server (NTRS)
Milner, E. J.
1982-01-01
A general algorithm for testing digital computer memory is presented. The test checks that (1) every bit can be cleared and set in each memory word, and (2) bits are not erroneously cleared and/or set elsewhere in memory at the same time. The algorithm can be applied to any size memory block and any size memory word. It is concise and efficient, requiring very few cycles through memory. For example, a test of 16-bit-word-size memory requires only 384 cycles through memory. Approximately 15 seconds were required to test a 32K block of such memory, using a microcomputer having a cycle time of 133 nanoseconds.
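A simplified reconstruction of this kind of walking-bit check, applied to a simulated memory block rather than real hardware (the exact published algorithm differs, and the fault-injection hook is our addition purely for demonstration):

```python
def walking_test(n_words, word_bits, fault_at=None):
    """Walking-ones check over a simulated memory block.

    Verifies that every bit of every word can be set and cleared, and
    that no write disturbs any other word. `fault_at` optionally injects
    a coupling fault (writes to that address also flip a neighbour's
    bit) to demonstrate detection.
    """
    memory = [0] * n_words

    def write(addr, value):
        memory[addr] = value
        if fault_at is not None and addr == fault_at:
            memory[(addr + 1) % n_words] |= 1   # injected coupling fault

    for addr in range(n_words):
        for bit in range(word_bits):
            write(addr, 1 << bit)                # set exactly one bit
            if memory[addr] != 1 << bit:
                return False                     # bit failed to set
            if any(memory[a] != 0 for a in range(n_words) if a != addr):
                return False                     # another word was disturbed
            write(addr, 0)                       # clear it again
            if memory[addr] != 0:
                return False                     # bit failed to clear
    return True

print(walking_test(32, 16))                      # healthy block -> True
```

Note this naive version walks every word for every bit, so it is far less cycle-efficient than the 384-cycle figure the paper reports; it illustrates only what is being checked, not how cheaply.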
Fast distributed large-pixel-count hologram computation using a GPU cluster.
Pan, Yuechao; Xu, Xuewu; Liang, Xinan
2013-09-10
Large-pixel-count holograms are an essential component of large-size holographic three-dimensional (3D) display, but the generation of such holograms is computationally demanding. In order to address this issue, we have built a graphics processing unit (GPU) cluster with 32.5 Tflop/s computing power and implemented distributed hologram computation on it with speed improvement techniques, such as shared memory on GPU, GPU-level adaptive load balancing, and node-level load distribution. Using these speed improvement techniques on the GPU cluster, we have achieved a 71.4-fold computation speed increase for 186M-pixel holograms. Furthermore, we have used the approaches of diffraction limits and subdivision of holograms to overcome the GPU memory limit in computing large-pixel-count holograms. 745M-pixel and 1.80G-pixel holograms were computed in 343 and 3326 s, respectively, for more than 2 million object points with RGB colors. Color 3D objects with 1.02M points were successfully reconstructed from a 186M-pixel hologram computed in 8.82 s with all the above three speed improvement techniques. It is shown that distributed hologram computation using a GPU cluster is a promising approach to increasing the computation speed of large-pixel-count holograms for large-size holographic display.
Kiefer, Gundolf; Lehmann, Helko; Weese, Jürgen
2006-04-01
Maximum intensity projections (MIPs) are an important visualization technique for angiographic data sets. Efficient data inspection requires frame rates of at least five frames per second at preserved image quality. Despite the advances in computer technology, this task remains a challenge. On the one hand, the sizes of computed tomography and magnetic resonance images are increasing rapidly. On the other hand, rendering algorithms do not automatically benefit from the advances in processor technology, especially for large data sets. This is because processing power evolves faster than memory access speed, a gap bridged by hierarchical cache memory architectures. In this paper, we investigate memory access optimization methods and use them for generating MIPs on general-purpose central processing units (CPUs) and graphics processing units (GPUs), respectively. These methods can work on any level of the memory hierarchy, and we show that properly combined methods can optimize memory access on multiple levels of the hierarchy at the same time. We present performance measurements to compare different algorithm variants and illustrate the influence of the respective techniques. On current hardware, efficient handling of the memory hierarchy improves CPU rendering performance by a factor of 3 to 4. On GPUs, we observed that the effect is even larger, especially for large data sets. The methods can easily be adjusted to different hardware specifics, although their impact can vary considerably. They can also be used for rendering techniques other than MIPs, and their use for more general image processing tasks could be investigated in the future.
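The projection itself is just a maximum reduction along one axis of the volume; the paper's contribution lies in how memory is traversed while computing it. A minimal sketch (the layout conversion is the only cache consideration hinted at here, and is an illustrative simplification of the paper's techniques):

```python
import numpy as np

def mip(volume, axis=0):
    """Maximum intensity projection: collapse one axis by taking the max."""
    return volume.max(axis=axis)

# For a C-ordered array the last axis is contiguous in memory, so a
# reduction that streams along it touches cache lines sequentially;
# matching traversal order to layout is the simplest memory-access
# optimization, shown here only as an explicit layout conversion.
vol = np.arange(2 * 3 * 4, dtype=np.float32).reshape(2, 3, 4)
vol = np.ascontiguousarray(vol)
front = mip(vol, axis=0)    # project along the first axis -> shape (3, 4)
```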
TeraStitcher - A tool for fast automatic 3D-stitching of teravoxel-sized microscopy images
2012-01-01
Background Further advances in modern microscopy are leading to teravoxel-sized tiled 3D images at high resolution, thus increasing the dimension of the stitching problem by at least two orders of magnitude. The existing software solutions do not seem adequate to address the additional requirements arising from these datasets, such as the minimization of memory usage and the need to process just a small portion of data. Results We propose a free and fully automated 3D stitching tool designed to meet the special requirements arising from teravoxel-sized tiled microscopy images and able to stitch them in a reasonable time even on workstations with limited resources. The tool was tested on teravoxel-sized whole mouse brain images with micrometer resolution and it was also compared with the state-of-the-art stitching tools on megavoxel-sized publicly available datasets. This comparison confirmed that the solutions we adopted are suited for stitching very large images and also perform well on datasets with different characteristics. Indeed, some of the algorithms embedded in other stitching tools could be easily integrated in our framework if they turned out to be more effective on other classes of images. To this purpose, we designed a software architecture which separates the strategies that use memory resources efficiently from the algorithms which may depend on the characteristics of the acquired images. Conclusions TeraStitcher is a free tool that enables the stitching of teravoxel-sized tiled microscopy images even on workstations with relatively limited resources of memory (<8 GB) and processing power. It exploits the knowledge of approximate tile positions and uses ad-hoc strategies and algorithms designed for such very large datasets. The produced images can be saved into a multiresolution representation to be efficiently retrieved and processed. We provide TeraStitcher both as a standalone application and as a plugin of the free software Vaa3D. PMID:23181553
NASA Technical Reports Server (NTRS)
Deavours, Daniel D.; Qureshi, M. Akber; Sanders, William H.
1997-01-01
Modeling tools and technologies are important for aerospace development. At the University of Illinois, we have worked on advancing the state of the art in modeling with Markov reward models in two important areas: reducing the memory necessary to numerically solve systems represented as stochastic activity networks and other stochastic Petri net extensions while still obtaining solutions in a reasonable amount of time, and finding numerically stable and memory-efficient methods to solve for the reward accumulated during a finite mission time. A long-standing problem when modeling with high-level formalisms such as stochastic activity networks is the so-called state-space explosion, where the number of states increases exponentially with the size of the high-level model. Thus, the corresponding Markov model becomes prohibitively large and solution is constrained by the size of primary memory. To reduce the memory necessary to numerically solve complex systems, we propose new methods that can tolerate such large state spaces and do not require any special structure in the model (as many other techniques do). First, we develop methods that generate rows and columns of the state transition-rate matrix on-the-fly, eliminating the need to explicitly store the matrix at all. Next, we introduce a new iterative solution method, called modified adaptive Gauss-Seidel, that exhibits locality in its use of data from the state transition-rate matrix, permitting us to cache portions of the matrix and hence reduce the solution time. Finally, we develop a new memory- and computationally efficient technique for Gauss-Seidel based solvers that avoids the need for generating rows of A in order to solve Ax = b. This is a significant performance improvement for on-the-fly methods as well as other recent solution techniques based on Kronecker operators. Taken together, these new results show that one can solve very large models without any special structure.
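The on-the-fly idea can be sketched as a Gauss-Seidel solver that obtains rows of A from a generator function instead of a stored matrix (an illustrative reconstruction, not the tool's implementation; it assumes every generated row contains a nonzero diagonal entry, and the example system is a stand-in, not a transition-rate matrix):

```python
def gauss_seidel(row_gen, b, n, sweeps=100):
    """Matrix-free Gauss-Seidel: rows of A come from `row_gen(i)`, which
    yields (column, value) pairs, so A is never stored explicitly.
    """
    x = [0.0] * n
    for _ in range(sweeps):
        for i in range(n):
            diag = 0.0
            s = b[i]
            for j, a in row_gen(i):
                if j == i:
                    diag = a          # remember the diagonal entry
                else:
                    s -= a * x[j]     # use the freshest available x[j]
            x[i] = s / diag
    return x

def laplacian_row(i, n):
    """Generate row i of tridiag(-1, 2, -1) on demand."""
    row = [(i, 2.0)]
    if i > 0:
        row.append((i - 1, -1.0))
    if i < n - 1:
        row.append((i + 1, -1.0))
    return row

# Solve a small system whose rows are generated on the fly.
x = gauss_seidel(lambda i: laplacian_row(i, 5), [1.0] * 5, 5)
```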
Szenczi-Cseh, J; Horváth, Zs; Ambrus, Á
2017-12-01
We tested the applicability of the EPIC-SOFT food picture series in the context of a Hungarian food consumption survey gathering data for exposure assessment, and investigated errors in food portion estimation resulting from visual perception and from conceptualisation-memory. Sixty-two participants in three age groups (10 to <74 years) were presented with three different portion sizes of five foods. The results were considered acceptable if the relative difference between the average estimated and actual weight obtained through the perception method was ≤25%, and the relative standard deviation of the individual weight estimates was <30% after compensating for the effect of potential outliers with winsorisation. Picture series for all five food items were rated acceptable. Small portion sizes tended to be overestimated and large ones underestimated. All portions of boiled potato were overestimated and all portions of creamed spinach were underestimated. Recalling the portion sizes resulted in overestimation, with larger differences (up to 60.7%).
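The acceptance rule stated in the abstract can be sketched as follows (an illustration of the stated thresholds; the winsorisation depth k=1 and applying it before the mean check are our assumptions, not details taken from the paper):

```python
def winsorise(values, k=1):
    """Replace the k smallest and k largest values by their nearest
    remaining neighbours (a simple symmetric winsorisation)."""
    v = sorted(values)
    for i in range(k):
        v[i] = v[k]
        v[-(i + 1)] = v[-(k + 1)]
    return v

def portion_ok(estimates, actual, max_rel_diff=0.25, max_rsd=0.30):
    """Acceptance rule sketched from the abstract: mean estimate within
    25% of the true weight and relative standard deviation below 30%,
    computed after winsorising to damp outliers."""
    w = winsorise(estimates)
    mean = sum(w) / len(w)
    rel_diff = abs(mean - actual) / actual
    var = sum((x - mean) ** 2 for x in w) / (len(w) - 1)
    rsd = var ** 0.5 / mean
    return rel_diff <= max_rel_diff and rsd < max_rsd

print(portion_ok([95, 100, 105, 100, 250], 100))   # one outlier: acceptable
```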
Inexpensive Animal Learning Exercises for Huge Introductory Laboratory Classes
ERIC Educational Resources Information Center
Katz, Albert N.
1978-01-01
Suggests use of the planarian D. Dorotocephala, an animal 20 mm in size, in order to provide inexpensive lab experiences for students in large introductory psychology courses. The animal can be used to study perception, memory, behavior modification, and group processes. (Author/AV)
NASA Astrophysics Data System (ADS)
Wen, Xixing; Zeng, Xiangbin; Zheng, Wenjun; Liao, Wugang; Feng, Feng
2015-01-01
The charging/discharging behavior of Si quantum dots (QDs) embedded in amorphous silicon carbide (a-SiCx) was investigated based on the Al/insulating layer/Si QDs embedded in a-SiCx/SiO2/p-Si (metal-insulator-quantum dots-oxide-silicon) multilayer structure by capacitance-voltage (C-V) and conductance-voltage (G-V) measurements. Transmission electron microscopy and Raman scattering spectroscopy measurements reveal the microstructure and distribution of Si QDs. The occurrence and shift of conductance peaks indicate the carrier transfer and the charging/discharging behavior of Si QDs. The multilayer structure shows a large memory window of 5.2 V at ±8 V sweeping voltage. Analysis of the C-V and G-V results allows a quantification of the Coulomb charging energy and the trapped charge density associated with the charging/discharging behavior. It is found that the memory window is related to the size effect, and Si QDs with large size or low Coulomb charging energy can trap two or more electrons by changing the charging voltage. Meanwhile, the estimated lower potential barrier height between Si QD and a-SiCx, and the lower Coulomb charging energy of Si QDs could enhance the charging and discharging effect of Si QDs and lead to an enlarged memory window. Further studies of the charging/discharging mechanism of Si QDs embedded in a-SiCx can promote the application of Si QDs in low-power consumption semiconductor memory devices.
Other drug use does not impact cognitive impairments in chronic ketamine users.
Zhang, Chenxi; Tang, Wai Kwong; Liang, Hua Jun; Ungvari, Gabor Sandor; Lin, Shih-Ku
2018-05-01
Ketamine abuse causes cognitive impairments, which negatively impact users' abstinence, prognosis, and quality of life. Findings of cognitive impairments in chronic ketamine users have been inconsistent across studies, possibly due to small sample sizes and the confounding effects of concomitant use of other illicit drugs. This study investigated cognitive impairment and its related factors in chronic ketamine users with a large sample size and explored the impact of other drug use on cognitive functions. Cognitive functions, including working, verbal and visual memory and executive functions, were assessed in ketamine users (286 non-heavy other drug users and 279 heavy other drug users) and 261 healthy controls. Correlations between cognitive impairment and patterns of ketamine use were analysed. Verbal and visual memory were impaired, but working memory and executive functions were intact, for all ketamine users. No significant cognitive differences were found between the two ketamine groups. A greater number of days of ketamine use in the past month was associated with worse visual memory performance in non-heavy other drug users. A higher dose of ketamine use was associated with worse short-term verbal memory in heavy other drug users. Verbal and visual memory are impaired in chronic ketamine users. Other drug use appears to have no impact on ketamine users' cognitive performance. Copyright © 2018. Published by Elsevier B.V.
The role of central attention in retrieval from visual short-term memory.
Magen, Hagit
2017-04-01
The role of central attention in visual short-term memory (VSTM) encoding and maintenance is well established, yet its role in retrieval has been largely unexplored. This study examined the involvement of central attention in retrieval from VSTM using a dual-task paradigm. Participants performed a color change-detection task. Set size varied between 1 and 3 items, and the memory sample was maintained for either a short or a long delay period. A secondary tone discrimination task was introduced at the end of the delay period, shortly before the appearance of a central probe, and occupied central attention while participants were searching within VSTM representations. As in numerous previous studies, reaction time increased as a function of set size, reflecting the occurrence of a capacity-limited memory search. When the color targets were maintained over a short delay, memory was searched for the most part without the involvement of central attention. However, with a longer delay period, the search relied entirely on the operation of central attention. Taken together, this study demonstrates that central attention is involved in retrieval from VSTM, but the extent of its involvement depends on the duration of the delay period. Future studies will determine whether the type of memory search (parallel or serial) carried out during retrieval depends on the nature of the attentional mechanism involved in the task.
Cognitive and memory training in adults at risk of dementia: A Systematic Review
2011-01-01
Background Effective non-pharmacological cognitive interventions to prevent Alzheimer's dementia or slow its progression are an urgent international priority. The aim of this review was to evaluate cognitive training trials in individuals with mild cognitive impairment (MCI), and evaluate the efficacy of training in memory strategies or cognitive exercises to determine if cognitive training could benefit individuals at risk of developing dementia. Methods A systematic review of eligible trials was undertaken, followed by effect size analysis. Cognitive training was differentiated from other cognitive interventions not meeting generally accepted definitions, and included both cognitive exercises and memory strategies. Results Ten studies enrolling a total of 305 subjects met criteria for cognitive training in MCI. Only five of the studies were randomized controlled trials. Meta-analysis was not considered appropriate due to the heterogeneity of interventions. Moderate effects on memory outcomes were identified in seven trials. Cognitive exercises (relative effect sizes ranged from .10 to 1.21) may lead to greater benefits than memory strategies (.88 to -1.18) on memory. Conclusions Previous conclusions of a lack of efficacy for cognitive training in MCI may have been influenced by not clearly defining the intervention. Our systematic review found that cognitive exercises can produce moderate-to-large beneficial effects on memory-related outcomes. However, the number of high quality RCTs remains low, and so further trials must be a priority. Several suggestions for the better design of cognitive training trials are provided. PMID:21942932
Coordination of size-control, reproduction and generational memory in freshwater planarians
NASA Astrophysics Data System (ADS)
Yang, Xingbo; Kaj, Kelson; Schwab, David; Collins, Eva-Maria
Uncovering the mechanisms that control size, growth, and division rates of systems reproducing through binary division means understanding basic principles of their life cycle. Recent work has focused on how division rates are regulated in bacteria and yeast, but this question has not yet been addressed in more complex, multicellular organisms. We have acquired a unique large-scale data set on the growth and asexual reproduction of two freshwater planarian species, Dugesia japonica and Dugesia tigrina, which reproduce by transverse fission and subsequent regeneration of head and tail pieces into new worms. We developed a new additive theoretical model that mixes multiple size control strategies based on worm size, growth, and waiting time. Our model quantifies the proportions of each strategy in the mixed dynamics, revealing the ability of the two planarian species to utilize different strategies in a coordinated manner for size control. Additionally, we found that head and tail offspring of both species employ different mechanisms to monitor and trigger their reproduction cycles. Finally, we show that generation-dependent memory effects in planarians need to be taken into account to accurately capture the experimental data.
Low rank approximation methods for MR fingerprinting with large scale dictionaries.
Yang, Mingrui; Ma, Dan; Jiang, Yun; Hamilton, Jesse; Seiberlich, Nicole; Griswold, Mark A; McGivney, Debra
2018-04-01
This work proposes new low rank approximation approaches with significant memory savings for large scale MR fingerprinting (MRF) problems. We introduce a compressed MRF with randomized singular value decomposition method to significantly reduce the memory requirement for calculating a low rank approximation of large sized MRF dictionaries. We further relax this requirement by exploiting the structures of MRF dictionaries in the randomized singular value decomposition space and fitting them to low-degree polynomials to generate high resolution MRF parameter maps. In vivo 1.5T and 3T brain scan data are used to validate the approaches. T1, T2, and off-resonance maps are in good agreement with those of the standard MRF approach. Moreover, the memory savings are up to 1000 times for the MRF-fast imaging with steady-state precession sequence and more than 15 times for the MRF-balanced, steady-state free precession sequence. The proposed compressed MRF with randomized singular value decomposition and dictionary fitting methods are memory efficient low rank approximation methods, which can benefit the usage of MRF in clinical settings. They also have great potential in large scale MRF problems, such as problems considering multi-component MRF parameters or high resolution in the parameter space. Magn Reson Med 79:2392-2400, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
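A randomized SVD of the kind used here projects the dictionary onto a small random subspace before factorising, so the expensive SVD runs on a small matrix and memory scales with the target rank rather than the dictionary size. A generic Halko-style sketch with NumPy (not the authors' implementation; the synthetic "dictionary" is a placeholder):

```python
import numpy as np

def randomized_svd(A, rank, oversample=10, seed=0):
    """Randomized range-finder + SVD.

    A is multiplied by a random (rank + oversample)-column Gaussian
    matrix to find an orthonormal basis for its range; the SVD is then
    computed on the small projected matrix Q.T @ A.
    """
    rng = np.random.default_rng(seed)
    omega = rng.standard_normal((A.shape[1], rank + oversample))
    Q, _ = np.linalg.qr(A @ omega)       # orthonormal basis for range(A)
    B = Q.T @ A                          # small projected matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :rank], s[:rank], Vt[:rank]

# Compress a synthetic exactly-rank-5 "dictionary" and reconstruct it.
rng = np.random.default_rng(1)
D = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 300))
U, s, Vt = randomized_svd(D, rank=5)
err = np.linalg.norm(D - (U * s) @ Vt) / np.linalg.norm(D)
```

Because D is exactly rank 5, the reconstruction error is at the level of floating-point noise; for real MRF dictionaries the error depends on how fast the singular values decay.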
NiMnGa/Si Shape Memory Bimorph Nanoactuation
NASA Astrophysics Data System (ADS)
Lambrecht, Franziska; Lay, Christian; Aseguinolaza, Iván R.; Chernenko, Volodymyr; Kohl, Manfred
2016-12-01
The size dependences of the thermal bimorph and shape memory effects of nanoscale shape memory alloy (SMA)/Si bimorph actuators are investigated in situ in a scanning electron microscope and by finite element simulations. By combining silicon nanomachining and magnetron sputtering, freestanding NiMnGa/Si bimorph cantilever structures with film/substrate thickness of 200/250 nm and decreasing lateral dimensions are fabricated. Electrical resistance and mechanical beam bending tests upon direct Joule heating demonstrate martensitic phase transformation and a reversible thermal bimorph effect, respectively. The corresponding characteristics are strongly affected by the large temperature gradient, on the order of 50 K/µm, forming along the nano bimorph cantilever upon electro-thermal actuation, which, in addition, depends on the size-dependent heat conductivity of the Si nano layer. Furthermore, the martensitic transformation temperatures show a size-dependent decrease by about 40 K for decreasing lateral dimensions down to 200 nm. The effects of heating temperature and stress distribution on the nanoactuation performance are analyzed by finite element simulations, revealing that an SMA/Si thickness ratio of 90/250 nm achieves an optimum shape memory effect. Differential thermal expansion and thermo-elastic effects are discriminated by comparative measurements and simulations on Ni/Si bimorph reference actuators.
Effects of memory load on hemispheric asymmetries of colour memory.
Clapp, Wes; Kirk, Ian J; Hausmann, Markus
2007-03-01
Hemispheric asymmetries in colour perception have been a matter of debate for some time. Recent evidence suggests that lateralisation of colour processing may be largely task specific. Here we investigated hemispheric asymmetries during different types and phases of a delayed colour-matching (recognition) memory task. A total of 11 male and 12 female right-handed participants performed colour-memory tasks. The task involved presentation of a set of colour stimuli (encoding), and subsequent indication (forced choice) of which colours in a larger set had previously appeared at the retrieval or recognition phase. The effect of memory load (set size), and the effect of lateralisation at the encoding or retrieval phases were investigated. Overall, the results indicate a right hemisphere advantage in colour processing, which was particularly pronounced in high memory load conditions and was seen in male rather than female participants. The results suggest that verbal (mnemonic) strategies can significantly affect the magnitude of hemispheric asymmetries in a non-verbal task.
Improved cache performance in Monte Carlo transport calculations using energy banding
NASA Astrophysics Data System (ADS)
Siegel, A.; Smith, K.; Felker, K.; Romano, P.; Forget, B.; Beckman, P.
2014-04-01
We present an energy banding algorithm for Monte Carlo (MC) neutral particle transport simulations which depend on large cross section lookup tables. In MC codes, read-only cross section data tables are accessed frequently, exhibit poor locality, and are typically too large to fit in fast memory. Thus, performance is often limited by long latencies to RAM, or by off-node communication latencies when the data footprint is very large and must be decomposed on a distributed memory machine. The proposed energy banding algorithm allows maximal temporal reuse of data in band sizes that can flexibly accommodate different architectural features. The energy banding algorithm is general and has a number of benefits compared to the traditional approach. In the present analysis we explore its potential to achieve improvements in time-to-solution on modern cache-based architectures.
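Stripped of the transport physics, the banding idea amounts to grouping particles by energy band and doing the table lookups one band at a time, so only that band's slice of the cross-section table needs to stay cache-resident. A stub sketch (table contents, normalised energies, and band count are arbitrary placeholders, not values from the paper):

```python
import numpy as np

def lookup_banded(energies, xs, n_bands):
    """Cross-section lookup with energy banding.

    Particles are grouped by energy band and each band is processed in
    one pass, maximising temporal reuse of that band's table slice.
    Energies are assumed normalised to [0, 1) and mapped onto a uniform
    grid; real MC codes use far richer, non-uniform tables.
    """
    n_bins = len(xs)
    bins = np.minimum((energies * n_bins).astype(int), n_bins - 1)
    band_of = bins * n_bands // n_bins
    out = np.empty_like(energies)
    for b in range(n_bands):
        sel = band_of == b
        out[sel] = xs[bins[sel]]   # touches only band b's table slice
    return out

rng = np.random.default_rng(0)
energies = rng.random(1000)
xs = np.arange(64, dtype=float)    # stand-in cross-section table
banded = lookup_banded(energies, xs, n_bands=4)
```

The result is identical to an unbanded lookup; the payoff in a real code is cache behaviour, not arithmetic.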
Ackermann, Sandra; Hartmann, Francina; Papassotiropoulos, Andreas; de Quervain, Dominique J.F.; Rasch, Björn
2015-01-01
Study Objectives: Sleep and memory are stable and heritable traits that strongly differ between individuals. Sleep benefits memory consolidation, and the amount of slow wave sleep, sleep spindles, and rapid eye movement sleep have been repeatedly identified as reliable predictors for the amount of declarative and/or emotional memories retrieved after a consolidation period filled with sleep. These studies typically encompass small sample sizes, increasing the probability of overestimating the real association strength. In a large sample we tested whether individual differences in sleep are predictive for individual differences in memory for emotional and neutral pictures. Design: Between-subject design. Setting: Cognitive testing took place at the University of Basel, Switzerland. Sleep was recorded at participants' homes, using portable electroencephalograph-recording devices. Participants: Nine hundred-twenty-nine healthy young participants (mean age 22.48 ± 3.60 y standard deviation). Interventions: None. Measurements and results: In striking contrast to our expectations as well as numerous previous findings, we did not find any significant correlations between sleep and memory consolidation for pictorial stimuli. Conclusions: Our results indicate that individual differences in sleep are much less predictive for pictorial memory processes than previously assumed and suggest that previous studies using small sample sizes might have overestimated the association strength between sleep stage duration and pictorial memory performance. Future studies need to determine whether intraindividual differences rather than interindividual differences in sleep stage duration might be more predictive for the consolidation of emotional and neutral pictures during sleep. Citation: Ackermann S, Hartmann F, Papassotiropoulos A, de Quervain DJF, Rasch B. No associations between interindividual differences in sleep parameters and episodic memory consolidation. 
SLEEP 2015;38(6):951–959. PMID:25325488
Photonic content-addressable memory system that uses a parallel-readout optical disk
NASA Astrophysics Data System (ADS)
Krishnamoorthy, Ashok V.; Marchand, Philippe J.; Yayla, Gökçe; Esener, Sadik C.
1995-11-01
We describe a high-performance associative-memory system that can be implemented by means of an optical disk modified for parallel readout and a custom-designed silicon integrated circuit with parallel optical input. The system can achieve associative recall on 128 × 128 bit images and also on variable-size subimages. The system's behavior and performance are evaluated on the basis of experimental results on a motionless-head parallel-readout optical-disk system, logic simulations of the very-large-scale integrated chip, and a software emulation of the overall system.
Optical memory effect from polarized Laguerre-Gaussian light beam in light-scattering turbid media
NASA Astrophysics Data System (ADS)
Shumyatsky, Pavel; Milione, Giovanni; Alfano, Robert R.
2014-06-01
Propagation effects of polarized Laguerre-Gaussian light with different orbital angular momentum (L) in turbid media are described. The optical memory effect in scattering media consisting of small and large size (compared to the wavelength) scatterers is investigated for scattered polarized light. Imaging using polarized laser modes with a varying orbital strength L-parameter was performed. The backscattered image quality (contrast) was enhanced by more than an order of magnitude using circularly polarized light when the concentration of scatterers was close to invisibility of the object.
Yang, C L; Wei, H Y; Adler, A; Soleimani, M
2013-06-01
Electrical impedance tomography (EIT) is a fast and cost-effective technique to provide a tomographic conductivity image of a subject from boundary current-voltage data. This paper proposes a time- and memory-efficient method for solving a large scale 3D EIT inverse problem using a parallel conjugate gradient (CG) algorithm. A 3D EIT system with a large number of measurement data can produce a large Jacobian matrix; this can cause difficulties in computer storage and in the inversion process. One of the challenges in 3D EIT is to decrease the reconstruction time and memory usage while retaining image quality. Firstly, a sparse matrix reduction technique is proposed that uses thresholding to set very small values of the Jacobian matrix to zero. By converting the Jacobian matrix into a sparse format, the zero elements are eliminated, which results in a saving of memory. Secondly, a block-wise CG method for parallel reconstruction has been developed. The proposed method has been tested using simulated data as well as experimental test samples. A sparse Jacobian with a block-wise CG enables the large scale EIT problem to be solved efficiently. Image quality measures are presented to quantify the effect of sparse matrix reduction on reconstruction results.
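The two ingredients, thresholding the Jacobian into sparse storage and solving a regularised system with CG, can be sketched generically (sizes, threshold, and the Tikhonov regularisation weight are illustrative, not the paper's values, and the random Jacobian is a stand-in for a real EIT sensitivity matrix):

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import cg

def sparsify(J, threshold):
    """Thresholding step: zero out near-zero Jacobian entries and keep
    the rest in CSR format, trading a little accuracy for memory."""
    Jt = np.where(np.abs(J) >= threshold, J, 0.0)
    return sparse.csr_matrix(Jt)

# Illustrative linearised reconstruction: solve the regularised normal
# equations (J^T J + lam*I) x = J^T b with conjugate gradients.
rng = np.random.default_rng(0)
J = rng.standard_normal((120, 80))
J[np.abs(J) < 0.5] *= 1e-8           # make many entries negligible
b = rng.standard_normal(120)

Js = sparsify(J, threshold=1e-4)
lam = 1.0                            # Tikhonov regularisation weight
A = (Js.T @ Js + lam * sparse.identity(80)).tocsr()
rhs = Js.T @ b
x, info = cg(A, rhs, maxiter=1000)   # info == 0 signals convergence
```

The paper's block-wise parallel CG additionally partitions this matrix-vector work across processors; the numerics are the same.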
Memory Effects on Movement Behavior in Animal Foraging
Bracis, Chloe; Gurarie, Eliezer; Van Moorter, Bram; Goodwin, R. Andrew
2015-01-01
An individual’s choices are shaped by its experience, a fundamental property of behavior important to understanding complex processes. Learning and memory are observed across many taxa and can drive behaviors, including foraging behavior. To explore the conditions under which memory provides an advantage, we present a continuous-space, continuous-time model of animal movement that incorporates learning and memory. Using simulation models, we evaluate the benefit memory provides across several types of landscapes with variable-quality resources and compare the memory model within a nested hierarchy of simpler models (behavioral switching and random walk). We find that memory almost always leads to improved foraging success, but that this effect is most marked in landscapes containing sparse, contiguous patches of high-value resources that regenerate relatively fast and are located in an otherwise devoid landscape. In these cases, there is a large payoff for finding a resource patch, due to size, value, or locational difficulty. While memory-informed search is difficult to differentiate from other factors using solely movement data, our results suggest that disproportionate spatial use of higher value areas, higher consumption rates, and consumption variability all point to memory influencing the movement direction of animals in certain ecosystems. PMID:26288228
Different effects of executive and visuospatial working memory on visual consciousness.
De Loof, Esther; Poppe, Louise; Cleeremans, Axel; Gevers, Wim; Van Opstal, Filip
2015-11-01
Consciousness and working memory are two widely studied cognitive phenomena. Although they have been closely tied on a theoretical and neural level, empirical work that investigates their relation is largely lacking. In this study, the relationship between visual consciousness and different working memory components is investigated by using a dual-task paradigm. More specifically, while participants were performing a visual detection task to measure their visual awareness threshold, they had to concurrently perform either an executive or visuospatial working memory task. We hypothesized that visual consciousness would be hindered depending on the type and the size of the load in working memory. Results showed that maintaining visuospatial content in working memory hinders visual awareness, irrespective of the amount of information maintained. By contrast, the detection threshold was progressively affected under increasing executive load. Interestingly, increasing executive load had a generic effect on detection speed, calling into question whether its obstructing effect is specific to the visual awareness threshold. Together, these results indicate that visual consciousness depends differently on executive and visuospatial working memory.
Contact and other-race effects in configural and component processing of faces.
Rhodes, Gillian; Ewing, Louise; Hayward, William G; Maurer, Daphne; Mondloch, Catherine J; Tanaka, James W
2009-11-01
Other-race faces are generally recognized more poorly than own-race faces. There has been a long-standing interest in the extent to which differences in contact contribute to this other-race effect (ORE). Here, we examined the effect of contact on two distinct aspects of face memory, memory for configuration and for components, both of which are better for own-race than other-race faces. Configural and component memory were measured using recognition memory tests with intact study faces and blurred (isolates memory for configuration) and scrambled (isolates memory for components) test faces, respectively. Our participants were a large group of ethnically Chinese individuals who had resided in Australia for varying lengths of time, from a few weeks to 26 years. We found that time in a Western country significantly (negatively) predicted the size of the ORE for configural, but not component, memory. There was also a trend for earlier age of arrival to predict smaller OREs in configural, but not component, memory. These results suggest that memory for configural information in other-race faces improves with experience with such faces. However, as found for recognition memory generally, the contact effects were small, indicating that other factors must play a substantial role in cross-race differences in face memory.
Grain Size of Recall Practice for Lengthy Text Material: Fragile and Mysterious Effects on Memory
ERIC Educational Resources Information Center
Wissman, Kathryn T.; Rawson, Katherine A.
2015-01-01
The current research evaluated the extent to which the grain size of recall practice for lengthy text material affects recall during practice and subsequent memory. The "grain size hypothesis" states that a smaller versus larger grain size will increase retrieval success during practice, which in turn will enhance subsequent memory for…
Sequential associative memory with nonuniformity of the layer sizes.
Teramae, Jun-Nosuke; Fukai, Tomoki
2007-01-01
Sequence retrieval is of fundamental importance in information processing by the brain, and has been studied extensively in neural network models. Most previous sequential associative memory models embed sequences of memory patterns of nearly equal sizes. It was recently shown that local cortical networks display many diverse yet repeatable precise temporal sequences of neuronal activities, termed "neuronal avalanches." Interestingly, these avalanches displayed size and lifetime distributions that obey power laws. Inspired by these experimental findings, here we consider an associative memory model of binary neurons that stores sequences of memory patterns with highly variable sizes. Our analysis includes the case where the statistics of these size variations obey the above-mentioned power laws. We study the retrieval dynamics of such memory systems by analytically deriving the equations that govern the time evolution of macroscopic order parameters. We calculate the critical sequence length beyond which the network cannot retrieve memory sequences correctly. As an application of the analysis, we show how the present variability in sequential memory patterns degrades the power-law lifetime distribution of retrieved neural activities.
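The class of model this abstract builds on can be illustrated with a toy sequential associative memory: asymmetric Hebbian weights map each stored pattern onto its successor. This sketch uses equal-size dense patterns for simplicity; the paper's contribution (highly variable, power-law-distributed pattern sizes) is not modeled here, and all parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 300, 5  # neurons, sequence length (toy values)

# Random +/-1 memory patterns forming a sequence xi[0] -> xi[1] -> ...
xi = rng.choice([-1.0, 1.0], size=(P, N))

# Asymmetric Hebbian weights that map each pattern onto its successor.
W = sum(np.outer(xi[(mu + 1) % P], xi[mu]) for mu in range(P)) / N

s = xi[0].copy()
overlaps = []
for t in range(1, P):
    s = np.where(W @ s >= 0, 1.0, -1.0)        # synchronous binary update
    overlaps.append(float(s @ xi[t] / N))      # overlap with expected pattern
print(overlaps)  # near 1.0 when the sequence is retrieved correctly
```

With few patterns relative to the number of neurons, the crosstalk noise is small and each update lands on the next pattern in the sequence; the paper's analysis characterizes when this breaks down.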
Does neighborhood size really cause the word length effect?
Guitard, Dominic; Saint-Aubin, Jean; Tehan, Gerald; Tolan, Anne
2018-02-01
In short-term serial recall, it is well-known that short words are remembered better than long words. This word length effect has been the cornerstone of the working memory model and a benchmark effect that all models of immediate memory should account for. Currently, there is no consensus as to what determines the word length effect. Jalbert and colleagues (Jalbert, Neath, Bireta, & Surprenant, 2011a; Jalbert, Neath, & Surprenant, 2011b) suggested that neighborhood size is one causal factor. In six experiments we systematically examined their suggestion. In Experiment 1, with an immediate serial recall task, multiple word lengths, and a large pool of words controlled for neighborhood size, the typical word length effect was present. In Experiments 2 and 3, with an order reconstruction task and words with either many or few neighbors, we observed the typical word length effect. In Experiment 4 we tested the hypothesis that the previous abolition of the word length effect when neighborhood size was controlled was due to a confounded factor: frequency of orthographic structure. As predicted, we reversed the word length effect when using short words with less frequent orthographic structures than the long words, as was done in both of Jalbert et al.'s studies. In Experiments 5 and 6, we again observed the typical word length effect, even if we controlled for neighborhood size and frequency of orthographic structure. Overall, the results were not consistent with the predictions of Jalbert et al. and clearly showed a large and reliable word length effect after controlling for neighborhood size.
Liu, Dengtang; Ji, Chengfeng; Zhuo, Kaiming; Song, Zhenhua; Wang, Yingchan; Mei, Li; Zhu, Dianming; Xiang, Qiong; Chen, Tianyi; Yang, Zhilei; Zhu, Guang; Wang, Ya; Cheung, Eric Fc; Xiang, Yu-Tao; Fan, Xiaoduo; Chan, Raymond Ck; Xu, Yifeng; Jiang, Kaida
2017-03-01
Schizophrenia is associated with impairment in prospective memory, the ability to remember to carry out an intended action in the future. It has been established that cue identification (detection of the cue event signaling that an intended action should be performed) and intention retrieval (retrieval of an intention from long-term memory following the recognition of a prospective cue) are two important processes underlying prospective memory. The purpose of this study was to examine prospective memory deficit and underlying cognitive processes in patients with first-episode schizophrenia. This study examined cue identification and intention retrieval components of event-based prospective memory using a dual-task paradigm in 30 patients with first-episode schizophrenia and 30 healthy controls. All participants were also administered a set of tests assessing working memory and retrospective memory. Both cue identification and intention retrieval were impaired in patients with first-episode schizophrenia compared with healthy controls (ps < 0.05), with a large effect size for cue identification (Cohen's d = 0.98) and a medium effect size for intention retrieval (Cohen's d = 0.62). After controlling for working memory and retrospective memory, the difference in cue identification between patients and healthy controls remained significant. However, the difference in intention retrieval between the two groups was no longer significant. In addition, there was a significant inverse relationship between cue identification and negative symptoms (r = -0.446, p = 0.013) in the patient group. These findings suggest that both cue identification and intention retrieval in event-based prospective memory are impaired in patients with first-episode schizophrenia. Cue identification and intention retrieval could be potentially used as biomarkers for early detection and treatment prognosis of schizophrenia. In addition, addressing cue identification deficit through cognitive enhancement training may potentially improve negative symptoms as well.
Wang, Xiaoli; Logie, Robert H; Jarrold, Christopher
2016-08-01
Neuropsychological studies of verbal short-term memory have often focused on two signature effects - phonological similarity and word length - the absence of which has been taken to indicate problems in phonological storage and rehearsal respectively. In the present study we present a possible alternative reading of such data, namely that the absence of these effects can follow as a consequence of an individual's poor level of recall. Data from a large normative sample of 251 adult participants were re-analyzed under the assumption that the size of phonological similarity and word length effects are proportional to an individual's overall level of recall. For both manipulations, when proportionalized effects were plotted against memory span, the same function fit the data in both auditory and visual presentation conditions. Furthermore, two additional sets of single-case data were broadly comparable to those that would be expected for an individual's level of verbal short-term memory performance albeit with some variation across tasks. These findings indicate that the absolute magnitude of phonological similarity and word length effects depends on overall levels of recall, and that these effects are necessarily eliminated at low levels of verbal short-term memory performance. This has implications for how one interprets any variation in the size of these effects, and raises serious questions about the causal direction of any relationship between impaired verbal short-term memory and the absence of phonological similarity or word length effects.
A GPU-Accelerated Approach for Feature Tracking in Time-Varying Imagery Datasets.
Peng, Chao; Sahani, Sandip; Rushing, John
2017-10-01
We propose a novel parallel connected component labeling (CCL) algorithm along with efficient out-of-core data management to detect and track feature regions of large time-varying imagery datasets. Our approach contributes to the big data field with parallel algorithms tailored for GPU architectures. We remove the data dependency between frames and achieve pixel-level parallelism. Due to the large size, the entire dataset cannot fit into cached memory. Frames have to be streamed through the memory hierarchy (disk to CPU main memory and then to GPU memory), partitioned, and processed as batches, where each batch is small enough to fit into the GPU. To reconnect the feature regions that are separated due to data partitioning, we present a novel batch merging algorithm to extract the region connection information across multiple batches in a parallel fashion. The information is organized in a memory-efficient structure and supports fast indexing on the GPU. Our experiment uses a commodity workstation equipped with a single GPU. The results show that our approach can efficiently process a weather dataset composed of terabytes of time-varying radar images. The advantages of our approach are demonstrated by comparing to the performance of an efficient CPU cluster implementation which is being used by the weather scientists.
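The batch merging idea in this abstract, reconnecting feature regions that were split by data partitioning, can be sketched with a union-find structure over per-batch labels. This is an illustrative example, not the paper's GPU implementation: the label pairs and batch layout are invented, and the actual algorithm runs in parallel on the GPU.

```python
class UnionFind:
    """Minimal union-find over hashable labels, with path halving."""
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

# Each batch labels its pixels independently; a global label is the pair
# (batch_id, local_label). boundary_pairs records foreground pixels that
# touch across the seam between two adjacent batches.
boundary_pairs = [((0, 1), (1, 3)), ((0, 2), (1, 3))]

uf = UnionFind()
for a, b in boundary_pairs:
    uf.union(a, b)

# After merging, both batch-0 regions and batch-1 region 3 share one root,
# i.e. they form a single global feature region.
roots = {uf.find(lbl) for lbl in [(0, 1), (0, 2), (1, 3)]}
print(len(roots))
```

The same merge relation is what the paper stores in its memory-efficient, GPU-indexable structure; here a plain dictionary stands in for it.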
Parallel Clustering Algorithm for Large-Scale Biological Data Sets
Wang, Minchao; Zhang, Wu; Ding, Wang; Dai, Dongbo; Zhang, Huiran; Xie, Hao; Chen, Luonan; Guo, Yike; Xie, Jiang
2014-01-01
Background: The recent explosion of biological data brings a great challenge for traditional clustering algorithms. With the increasing scale of data sets, much larger memory and longer runtime are required for cluster identification problems. The affinity propagation algorithm outperforms many other classical clustering algorithms and is widely applied in biological research. However, its time and space complexity become a great bottleneck when handling large-scale data sets. Moreover, the similarity matrix, whose construction takes a long runtime, is required before running the affinity propagation algorithm, since the algorithm clusters data sets based on the similarities between data pairs. Methods: Two types of parallel architectures are proposed in this paper to accelerate the similarity matrix construction procedure and the affinity propagation algorithm. The shared-memory architecture is used to construct the similarity matrix, and the distributed system is used for the affinity propagation algorithm because of its large memory size and great computing capacity. An appropriate scheme of data partition and reduction is designed in our method in order to minimize the global communication cost among processes. Results: A speedup of 100 is gained with 128 cores. The runtime is reduced from several hours to a few seconds, which indicates that the parallel algorithm is capable of handling large-scale data sets effectively. The parallel affinity propagation also achieves good performance when clustering large-scale gene data (microarray) and detecting families in large protein superfamilies. PMID:24705246
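The similarity matrix that affinity propagation consumes is conventionally the negative squared Euclidean distance between data pairs. A vectorized single-machine sketch of that construction is below; it is illustrative only, with invented data, and stands in for the shared-memory parallel construction the paper describes.

```python
import numpy as np

def similarity_matrix(X):
    """Pairwise similarities s(i, k) = -||x_i - x_k||^2 for affinity
    propagation, computed without an explicit double loop. The paper
    parallelizes this step across shared-memory cores instead."""
    sq = np.sum(X**2, axis=1)
    return -(sq[:, None] + sq[None, :] - 2.0 * X @ X.T)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 16))   # toy data: 100 points, 16 features
S = similarity_matrix(X)

# Spot-check one entry against the naive definition.
i, k = 3, 42
print(np.isclose(S[i, k], -np.sum((X[i] - X[k])**2)))
```

Because every entry of S is independent, the row blocks can be computed by separate threads or processes with no communication, which is why this step maps naturally onto the shared-memory architecture.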
Ekkel, M R; van Lier, R; Steenbergen, B
2017-03-01
Echolocation can be beneficial for the orientation and mobility of visually impaired people. Research has shown considerable individual differences for acquiring this skill. However, individual characteristics that affect the learning of echolocation are largely unknown. In the present study, we examined individual factors that are likely to affect learning to echolocate: sustained and divided attention, working memory, and spatial abilities. To that aim, sighted participants with normal hearing performed an echolocation task that was adapted from a previously reported size-discrimination task. In line with existing studies, we found large individual differences in echolocation ability. We also found indications that participants were able to improve their echolocation ability. Furthermore, we found a significant positive correlation between improvement in echolocation and sustained and divided attention, as measured in the PASAT. No significant correlations were found with our tests regarding working memory and spatial abilities. These findings may have implications for the development of guidelines for training echolocation that are tailored to the individual with a visual impairment.
Efficient Graph Based Assembly of Short-Read Sequences on Hybrid Core Architecture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sczyrba, Alex; Pratap, Abhishek; Canon, Shane
2011-03-22
Advanced architectures can deliver dramatically increased throughput for genomics and proteomics applications, reducing time-to-completion in some cases from days to minutes. One such architecture, hybrid-core computing, marries a traditional x86 environment with a reconfigurable coprocessor based on field programmable gate array (FPGA) technology. In addition to higher throughput, increased performance can fundamentally improve research quality by allowing more accurate, previously impractical approaches. We will discuss the approach used by Convey's de Bruijn graph constructor for short-read, de-novo assembly. Bioinformatics applications that have random access patterns to large memory spaces, such as graph-based algorithms, experience memory performance limitations on cache-based x86 servers. Convey's highly parallel memory subsystem allows application-specific logic to simultaneously access 8192 individual words in memory, significantly increasing effective memory bandwidth over cache-based memory systems. Many algorithms, such as Velvet and other de Bruijn graph based, short-read, de-novo assemblers, can greatly benefit from this type of memory architecture. Furthermore, small data type operations (four nucleotides can be represented in two bits) make more efficient use of logic gates than the data types dictated by conventional programming models. JGI is comparing the performance of Convey's graph constructor and Velvet on both synthetic and real data. We will present preliminary results on memory usage and runtime metrics for various data sets of different sizes, from small microbial and fungal genomes to a very large cow rumen metagenome. For genomes with references, we will also present assembly quality comparisons between the two assemblers.
Elastocaloric cooling of additive manufactured shape memory alloys with large latent heat
NASA Astrophysics Data System (ADS)
Hou, Huilong; Simsek, Emrah; Stasak, Drew; Hasan, Naila Al; Qian, Suxin; Ott, Ryan; Cui, Jun; Takeuchi, Ichiro
2017-10-01
The stress-induced martensitic phase transformation of shape memory alloys (SMAs) is the basis for elastocaloric cooling. Here we employ additive manufacturing to fabricate TiNi SMAs, and demonstrate compressive elastocaloric cooling in the TiNi rods with transformation latent heat as large as 20 J g-1. Adiabatic compression on as-fabricated TiNi displays cooling ΔT as high as -7.5 °C with recoverable superelastic strain up to 5%. Unlike conventional SMAs, additive manufactured TiNi SMAs exhibit linear superelasticity with narrow hysteresis in stress-strain curves under both adiabatic and isothermal conditions. Microstructurally, we find that there are Ti2Ni precipitates typically one micron in size with a large aspect ratio enclosing the TiNi matrix. A stress transfer mechanism between reversible phase transformation in the TiNi matrix and mechanical deformation in Ti2Ni precipitates is believed to be the origin of the unique superelasticity behavior.
Missing from the "Minority Mainstream": Pahari-Speaking Diaspora in Britain
ERIC Educational Resources Information Center
Hussain, Serena
2015-01-01
Pahari speakers form one of the largest ethnic non-European diasporas in Britain. Despite their size and over 60 years of settlement on British shores, the diaspora is shrouded by confusion regarding official and unofficial categorisations, remaining largely misunderstood as a collective with a shared ethnolinguistic memory. This has had…
Holographic Memory Devices with Injection Lasers,
1981-02-09
0.1%. In addition, the large size of these lasers and the necessity of using high voltages for their charging make it more difficult to construct miniature devices. [Remainder of the abstract and its Russian-language references are illegible in the source scan.]
Distributed Name Servers: Naming and Caching in Large Distributed Computing Environments
1985-12-01
transmission rate of the communication medium, transmission over a 56K bps line costs approximately 54r, and similarly, communication over a 9.6K... memories for modern computer systems attempt to maximize the hit ratio for a fixed-size cache by utilizing intelligent cache replacement algorithms
Rey, Amandine Eve; Riou, Benoit; Versace, Rémy
2014-01-01
Based on recent behavioral and neuroimaging data suggesting that memory and perception are partially based on the same sensorimotor system, the theoretical aim of the present study was to show that it is difficult to dissociate memory mechanisms from perceptual mechanisms other than on the basis of the presence (perceptual processing) or absence (memory processing) of the characteristics of the objects involved in the processing. In line with this assumption, two experiments using an adaptation of the Ebbinghaus illusion paradigm revealed similar effects irrespective of whether the size difference between the inner circles and the surrounding circles was manipulated perceptually (the size difference was perceptually present, Experiment 1) or merely reactivated in memory (the difference was perceptually absent, Experiment 2).
Interface traps and quantum size effects on the retention time in nanoscale memory devices
2013-01-01
Based on an analysis of the Poisson equation, an analytical surface-potential model including interface charge density has been proposed for nanocrystalline (NC) germanium (Ge) memory devices with a p-type silicon substrate. The effects of Pb defects at the Si(110)/SiO2, Si(111)/SiO2, and Si(100)/SiO2 interfaces on the retention time have then been calculated with quantum size effects taken into account. The results show that the interface trap density has a large effect on the electric field across the tunneling oxide layer and on the leakage current. This letter demonstrates that the retention time first increases as the diameter of the NC Ge decreases and then drops rapidly once the diameter reaches a few nanometers. This implies that the interface defects, their energy distribution, and the NC size should be considered seriously in the aim of improving the retention time across different technological processes. Experimental data reported in the literature support the theoretical expectation. PMID:23984827
Mind over platter: pre-meal planning and the control of meal size in humans.
Brunstrom, J M
2014-07-01
It is widely accepted that meal size is governed by psychological and physiological processes that generate fullness towards the end of a meal. However, observations of natural eating behaviour suggest that this preoccupation with within-meal events may be misplaced and that the role of immediate post-ingestive feedback (for example, gastric stretch) has been overstated. This review considers the proposition that the locus of control is more likely to be expressed in decisions about portion size, before a meal begins. Consistent with this idea, we have discovered that people are extremely adept at estimating the 'expected satiety' and 'expected satiation' of different foods. These expectations are learned over time and they are highly correlated with the number of calories that end up on our plate. Indeed, across a range of foods, the large variation in expected satiety/satiation may be a more important determinant of meal size than relatively subtle differences in palatability. Building on related advances, it would also appear that memory for portion size has an important role in generating satiety after a meal has been consumed. Together, these findings expose the importance of planning and episodic memory in the control of appetite and food intake in humans.
NASA Astrophysics Data System (ADS)
Noé, Pierre; Vallée, Christophe; Hippert, Françoise; Fillot, Frédéric; Raty, Jean-Yves
2018-01-01
Chalcogenide phase-change materials (PCMs), such as Ge-Sb-Te alloys, have shown outstanding properties, which has led to their successful use for a long time in optical memories (DVDs) and, recently, in non-volatile resistive memories. The latter, known as PCM memories or phase-change random access memories (PCRAMs), are the most promising candidates among emerging non-volatile memory (NVM) technologies to replace the current FLASH memories at CMOS technology nodes under 28 nm. Chalcogenide PCMs exhibit fast and reversible phase transformations between crystalline and amorphous states with very different transport and optical properties leading to a unique set of features for PCRAMs, such as fast programming, good cyclability, high scalability, multi-level storage capability, and good data retention. Nevertheless, PCM memory technology has to overcome several challenges to definitively invade the NVM market. In this review paper, we examine the main technological challenges that PCM memory technology must face and we illustrate how new memory architecture, innovative deposition methods, and PCM composition optimization can contribute to further improvements of this technology. In particular, we examine how to lower the programming currents and increase data retention. Scaling down PCM memories for large-scale integration means the incorporation of the PCM into more and more confined structures and raises materials science issues in order to understand interface and size effects on crystallization. Other materials science issues are related to the stability and ageing of the amorphous state of PCMs. The stability of the amorphous phase, which determines data retention in memory devices, can be increased by doping the PCM. Ageing of the amorphous phase leads to a large increase of the resistivity with time (resistance drift), which has up to now hindered the development of ultra-high multi-level storage devices. A review of the current understanding of all these issues is provided from a materials science point of view.
Sex differences in a human analogue of the Radial Arm Maze: the "17-Box Maze Test".
Rahman, Qazi; Abrahams, Sharon; Jussab, Fardin
2005-08-01
This study investigated sex differences in spatial memory using a human analogue of the Radial Arm Maze: a revision of the Nine Box Maze, referred to herein as the 17-Box Maze Test. The task encourages allocentric spatial processing, dissociates object from spatial memory, and incorporates a within-participants design to provide measures of location and object, working and reference memory. Healthy adult males and females (26 per group) were administered the 17-Box Maze Test, as well as a mental rotation test and a verbal IQ test. Females made significantly fewer errors on this task than males. However, post hoc analysis revealed that the significant sex difference was specific to object, rather than location, memory measures. These were medium to large effect sizes. The findings raise the issue of task- and component-specific sexual dimorphism in cognitive mapping.
NASA Astrophysics Data System (ADS)
Bjorklund, E.
1994-12-01
In the 1970s, when computers were memory limited, operating system designers created the concept of "virtual memory", which gave users the ability to address more memory than physically existed. In the 1990s, many large control systems have the potential of becoming data limited. We propose that many of the principles behind virtual memory systems (working sets, locality, caching and clustering) can also be applied to data-limited systems, creating, in effect, "virtual data systems". At the Los Alamos National Laboratory's Clinton P. Anderson Meson Physics Facility (LAMPF), we have applied these principles to a moderately sized (10 000 data points) data acquisition and control system. To test the principles, we measured the system's performance during tune-up, production, and maintenance periods. In this paper, we present a general discussion of the principles of a virtual data system along with some discussion of our own implementation and the results of our performance measurements.
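The caching and working-set principles this abstract borrows from virtual memory can be sketched as a fixed-capacity LRU cache over control-system data points. This is an illustrative stand-in, not the LAMPF implementation; the channel names and the fetch function are hypothetical.

```python
from collections import OrderedDict

class DataPointCache:
    """Fixed-size LRU cache for channel readings: a minimal stand-in for
    the 'virtual data' idea of keeping only the working set in fast memory."""
    def __init__(self, capacity):
        self.capacity = capacity
        self._store = OrderedDict()
        self.hits = self.misses = 0

    def read(self, channel, fetch):
        if channel in self._store:
            self.hits += 1
            self._store.move_to_end(channel)   # mark most recently used
            return self._store[channel]
        self.misses += 1
        value = fetch(channel)                 # slow path: front-end I/O
        self._store[channel] = value
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)    # evict least recently used
        return value

cache = DataPointCache(capacity=2)
fetch = lambda ch: ch.upper()                  # hypothetical slow fetch
for ch in ["bpm01", "bpm02", "bpm01", "bpm03", "bpm02"]:
    cache.read(ch, fetch)
print(cache.hits, cache.misses)
```

As with virtual memory, the payoff depends on locality: access patterns during tune-up, production, and maintenance concentrate on different working sets, so a small cache can capture most reads in each mode.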
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hull, L.C.
The Prickett and Lonnquist two-dimensional groundwater model has been programmed for the Apple II microcomputer. Both leaky and nonleaky confined aquifers can be simulated. The model was adapted from the FORTRAN version of Prickett and Lonnquist. In the configuration presented here, the program requires 64 K of memory. Because of the large number of arrays used in the program, and the memory limitations of the Apple II, the maximum grid size that can be used is 20 rows by 20 columns. Input to the program is interactive, with prompting by the computer. Output consists of predicted head values at the row-column intersections (nodes).
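The kind of computation such a model performs can be sketched on the same 20x20 grid: steady-state head in a homogeneous confined aquifer with fixed-head boundaries, solved by simple relaxation. This is a minimal illustration under assumed boundary values, not the Prickett-Lonnquist code, which uses a more elaborate iterative scheme and handles transient and leaky cases.

```python
import numpy as np

n = 20                 # grid size mentioned in the abstract
h = np.zeros((n, n))   # hydraulic head at each node

# Hypothetical constant-head boundaries (metres).
h[0, :] = 100.0
h[-1, :] = 90.0
h[:, 0] = h[:, -1] = 95.0

# Gauss-Seidel relaxation of the Laplace equation for interior nodes:
# each head is the average of its four neighbours at steady state.
for _ in range(2000):
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            h[i, j] = 0.25 * (h[i-1, j] + h[i+1, j]
                              + h[i, j-1] + h[i, j+1])
print(h[10, 10])
```

Every interior head ends up between the boundary extremes, as the maximum principle for the Laplace equation requires; adding pumping wells or leakage would add source terms to the node equations.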
Chunks in expert memory: evidence for the magical number four ... or is it two?
Gobet, Fernand; Clarkson, Gary
2004-11-01
This study aims to test the divergent predictions of the chunking theory (Chase & Simon, 1973) and template theory (Gobet & Simon, 1996a, 2000) with respect to the number of chunks held in visual short-term memory and the size of chunks used by experts. We presented game and random chessboards in both a copy and a recall task. In a within-subject design, the stimuli were displayed using two presentation media: (a) physical board and pieces, as in Chase and Simon's (1973) study; and (b) a computer display, as in Gobet and Simon's (1998) study. Results show that, in most cases, no more than three chunks were replaced in the recall task, as predicted by template theory. In addition, with game positions in the computer condition, chess Masters replaced very large chunks (up to 15 pieces), again in line with template theory. Overall, the results suggest that the original chunking theory overestimated short-term memory capacity and underestimated the size of chunks used, in particular with Masters. They also suggest that Cowan's (2001) proposal that STM holds four chunks may be an overestimate.
Memory effects in nanoparticle dynamics and transport
NASA Astrophysics Data System (ADS)
Sanghi, Tarun; Bhadauria, Ravi; Aluru, N. R.
2016-10-01
In this work, we use the generalized Langevin equation (GLE) to characterize and understand memory effects in nanoparticle dynamics and transport. Using the GLE formulation, we compute the memory function and investigate its scaling with the mass, shape, and size of the nanoparticle. It is observed that changing the mass of the nanoparticle leads to a rescaling of the memory function with the reduced mass of the system. Further, we show that for different mass nanoparticles it is the initial value of the memory function and not its relaxation time that determines the "memory" or "memoryless" dynamics. The size and the shape of the nanoparticle are found to influence both the functional-form and the initial value of the memory function. For a fixed mass nanoparticle, increasing its size enhances the memory effects. Using GLE simulations we also investigate and highlight the role of memory in nanoparticle dynamics and transport.
Derraugh, Lesley S; Neath, Ian; Surprenant, Aimée M; Beaudry, Olivia; Saint-Aubin, Jean
2017-03-01
The word-length effect, the finding that lists of short words are better recalled than lists of long words, is 1 of the 4 benchmark phenomena that guided development of the phonological loop component of working memory. However, previous work has noted a confound in word-length studies: The short words used had more orthographic neighbors (valid words that can be made by changing a single letter in the target word) than long words. The confound is that words with more neighbors are better recalled than otherwise comparable words with fewer neighbors. Two experiments are reported that address criticisms of the neighborhood-size account of the word-length effect by (1) testing 2 new stimulus sets, (2) using open rather than closed pools of words, and (3) using stimuli from a language other than English. In both experiments, words from large neighborhoods were better recalled than words from small neighborhoods. The results add to the growing number of studies demonstrating the substantial contribution of long-term memory to what have traditionally been identified as working memory tasks. The data are more easily explained by models incorporating the concept of redintegration rather than by frameworks such as the phonological loop that posit decay offset by rehearsal. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Pesce, Lorenzo L.; Lee, Hyong C.; Hereld, Mark; ...
2013-01-01
Our limited understanding of the relationship between the behavior of individual neurons and large neuronal networks is an important limitation in current epilepsy research and may be one of the main causes of our inadequate ability to treat it. Addressing this problem directly via experiments is impossibly complex; thus, we have been developing and studying medium-large-scale simulations of detailed neuronal networks to guide us. Flexibility in the connection schemas and a complete description of the cortical tissue seem necessary for this purpose. In this paper we examine some of the basic issues encountered in these multiscale simulations. We have determined the detailed behavior of two such simulators on parallel computer systems. The observed memory and computation-time scaling behavior for a distributed memory implementation were very good over the range studied, both in terms of network sizes (2,000 to 400,000 neurons) and processor pool sizes (1 to 256 processors). Our simulations required between a few megabytes and about 150 gigabytes of RAM and lasted between a few minutes and about a week, well within the capability of most multinode clusters. Therefore, simulations of epileptic seizures on networks with millions of cells should be feasible on current supercomputers.
Limits of the memory coefficient in measuring correlated bursts
NASA Astrophysics Data System (ADS)
Jo, Hang-Hyun; Hiraoka, Takayuki
2018-03-01
Temporal inhomogeneities in event sequences of natural and social phenomena have been characterized in terms of interevent times and correlations between interevent times. The inhomogeneities of interevent times have been extensively studied, while the correlations between interevent times, often called correlated bursts, are far from being fully understood. For measuring the correlated bursts, two relevant approaches were suggested, i.e., memory coefficient and burst size distribution. Here a burst size denotes the number of events in a bursty train detected for a given time window. Empirical analyses have revealed that the larger memory coefficient tends to be associated with the heavier tail of the burst size distribution. In particular, empirical findings in human activities appear inconsistent, such that the memory coefficient is close to 0, while burst size distributions follow a power law. In order to comprehend these observations, by assuming the conditional independence between consecutive interevent times, we derive the analytical form of the memory coefficient as a function of parameters describing interevent time and burst size distributions. Our analytical result can explain the general tendency of the larger memory coefficient being associated with the heavier tail of burst size distribution. We also find that the apparently inconsistent observations in human activities are compatible with each other, indicating that the memory coefficient has limits to measure the correlated bursts.
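The memory coefficient referred to here is, in its standard formulation (Goh and Barabási), the Pearson correlation between consecutive interevent times. A minimal Python implementation:

```python
from statistics import mean, pstdev

def memory_coefficient(interevent_times):
    """Memory coefficient M: the Pearson correlation between consecutive
    interevent times (tau_i, tau_{i+1}). M > 0 means long gaps tend to
    follow long gaps; M near 0 means consecutive gaps look uncorrelated.
    Assumes at least two interevent times with nonzero variance."""
    t1 = interevent_times[:-1]   # tau_1 ... tau_{n-1}
    t2 = interevent_times[1:]    # tau_2 ... tau_n
    m1, m2 = mean(t1), mean(t2)
    s1, s2 = pstdev(t1), pstdev(t2)
    n = len(t1)
    return sum((a - m1) * (b - m2) for a, b in zip(t1, t2)) / (n * s1 * s2)
```

A monotonically growing sequence of gaps gives M = 1, a strictly alternating short/long sequence gives M = -1, and the human-activity data discussed in the paper combine M near 0 with heavy-tailed burst size distributions.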
Design of multiple sequence alignment algorithms on parallel, distributed memory supercomputers.
Church, Philip C; Goscinski, Andrzej; Holt, Kathryn; Inouye, Michael; Ghoting, Amol; Makarychev, Konstantin; Reumann, Matthias
2011-01-01
The challenge of comparing two or more genomes that have undergone recombination and substantial amounts of segmental loss and gain has recently been addressed for small numbers of genomes. However, datasets of hundreds of genomes are now common and their sizes will only increase in the future. Multiple sequence alignment of hundreds of genomes remains an intractable problem due to quadratic increases in compute time and memory footprint. To date, most alignment algorithms are designed for commodity clusters without parallelism. Hence, we propose the design of a multiple sequence alignment algorithm on massively parallel, distributed memory supercomputers to enable research into comparative genomics on large data sets. Following the methodology of the sequential progressiveMauve algorithm, we design data structures including sequences and sorted k-mer lists on the IBM Blue Gene/P supercomputer (BG/P). Preliminary results show that we can reduce the memory footprint so that we can potentially align over 250 bacterial genomes on a single BG/P compute node. We verify our results on a dataset of E.coli, Shigella and S.pneumoniae genomes. Our implementation returns results matching those of the original algorithm but in 1/2 the time and with 1/4 the memory footprint for scaffold building. In this study, we have laid the basis for multiple sequence alignment of large-scale datasets on a massively parallel, distributed memory supercomputer, thus enabling comparison of hundreds instead of a few genome sequences within reasonable time.
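The sorted k-mer lists used as a core data structure can be sketched in a few lines of Python. This is an in-memory, single-node illustration; how progressiveMauve seeds alignments from such lists, and how they are distributed across BG/P nodes, is not reproduced here.

```python
def sorted_kmers(seq, k):
    """Return a sorted list of (k-mer, position) pairs for one sequence."""
    return sorted((seq[i:i + k], i) for i in range(len(seq) - k + 1))

def shared_kmers(seq_a, seq_b, k):
    """Positions of k-mers common to both sequences: candidate anchor
    points from which an alignment could be extended."""
    index = {}
    for kmer, pos in sorted_kmers(seq_a, k):
        index.setdefault(kmer, []).append(pos)
    hits = []
    for kmer, pos_b in sorted_kmers(seq_b, k):
        for pos_a in index.get(kmer, []):
            hits.append((kmer, pos_a, pos_b))
    return hits

hits = shared_kmers("ACGTACGT", "TTACGTT", 4)
```

Sorting the lists is what makes a distributed implementation natural: each node can own a contiguous range of the k-mer space and receive only the entries falling in its range.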
Immediate Judgments of Learning are Insensitive to Implicit Interference Effects at Retrieval
Eakin, Deborah K.; Hertzog, Christopher
2013-01-01
We conducted three experiments to determine whether metamemory predictions made at encoding, immediate judgments of learning (IJOLs), are sensitive to implicit interference effects that will occur at retrieval. Implicit interference was manipulated by varying the association set size of the cue (Exps. 1 & 2) or the target (Exp. 3). The typical finding is that memory is worse for large-set-size cues and targets, but only when the target is studied alone and later prompted with a related cue (extralist). When the pairs are studied together (intralist), recall is the same regardless of set size; set-size effects are eliminated. Metamemory predictions at retrieval, such as delayed JOLs (DJOLs) and feeling-of-knowing (FOK) judgments, accurately reflect implicit interference effects (e.g., Eakin & Hertzog, 2006). In Experiment 1, we contrasted cue-set-size effects on IJOLs, DJOLs, and FOKs. After wrangling with an interesting methodological conundrum related to set-size effects (Exp. 2), we found that whereas DJOLs and FOKs accurately predicted set-size effects on retrieval, a comparison between IJOLs and no-cue IJOLs demonstrated that immediate judgments did not vary with set size. In Experiment 3, we confirmed this finding by manipulating target set size. Again, IJOLs did not vary with set size whereas DJOLs and FOKs did. The findings provide further evidence for the inferential view regarding the source of metamemory predictions, and indicate that inferences are based on different sources depending on when in the memory process predictions are made. PMID:21915761
NASA Technical Reports Server (NTRS)
Recksiedler, A. L.; Lutes, C. L.
1972-01-01
The oligatomic (mirror) thin film memory technology is a suitable candidate for general purpose spaceborne applications in the post-1975 time frame. Capacities of around 10^8 bits can be reliably implemented with systems designed around a 335 million bit module. The recommended module size was determined following an investigation of implementation sizes ranging from 8,000,000 to 100,000,000 bits per module. Cost, power, weight, volume, reliability, maintainability, and speed were investigated. The memory includes random access, NDRO, SEC-DED, nonvolatility, and dual-interface characteristics. The applications most suitable for the technology are those involving a large capacity with high speed (no latency), nonvolatility, and random accessing.
Lee, Lian N; Bolinger, Beatrice; Banki, Zoltan; de Lara, Catherine; Highton, Andrew J; Colston, Julia M; Hutchings, Claire; Klenerman, Paul
2017-12-01
The efficacies of many new T cell vaccines rely on generating large populations of long-lived pathogen-specific effector memory CD8 T cells. However, it is now increasingly recognized that prior infection history impacts on the host immune response. Additionally, the order in which these infections are acquired could have a major effect. Exploiting the ability to generate large sustained effector memory (i.e. inflationary) T cell populations with murine cytomegalovirus (MCMV) and a human adenovirus subtype 5 (AdHu5) beta-galactosidase (Ad-lacZ) vector, the impact of new infections on pre-existing memory and the capacity of the host's memory compartment to accommodate multiple inflationary populations from unrelated pathogens was investigated in a murine model. Simultaneous and sequential infections, first with MCMV followed by Ad-lacZ, generated inflationary populations towards both viruses with similar kinetics and magnitude to mono-infected groups. However, in Ad-lacZ immune mice, subsequent acute MCMV infection led to a rapid decline of the pre-existing Ad-lacZ-specific inflating population, associated with bystander activation of Fas-dependent apoptotic pathways. Nevertheless, responses were maintained long-term, and boosting with Ad-lacZ led to rapid re-expansion of the inflating population. These data indicate firstly that multiple specificities of inflating memory cells can be acquired at different times and stably co-exist. Some acute infections may also deplete pre-existing memory populations, revealing the importance of the order of infection acquisition. Importantly, immunization with an AdHu5 vector did not alter the size of the pre-existing memory. These phenomena are relevant to the development of adenoviral vectors as novel vaccination strategies for diverse infections and cancers.
Braun, Mischa; Weinrich, Christiane; Finke, Carsten; Ostendorf, Florian; Lehmann, Thomas-Nicolas; Ploner, Christoph J
2011-03-01
Converging evidence from behavioral and imaging studies suggests that within the human medial temporal lobe (MTL) the hippocampal formation may be particularly involved in recognition memory of associative information. However, it is unclear whether the hippocampal formation processes all types of associations or whether there is a specialization for processing of associations involving spatial information. Here, we investigated this issue in six patients with postsurgical lesions of the right MTL affecting the hippocampal formation and in ten healthy controls. Subjects performed a battery of delayed match-to-sample tasks with two delays (900/5,000 ms) and three set sizes. Subjects were requested to remember either single features (colors, locations, shapes, letters) or feature associations (color-location, color-shape, color-letter). In the single-feature conditions, performance of patients did not differ from controls. In the association conditions, a significant delay-dependent deficit in memory of color-location associations was found. This deficit was largely independent of set size. By contrast, performance in the color-shape and color-letter conditions was normal. These findings support the hypothesis that a region within the right MTL, presumably the hippocampal formation, does not equally support all kinds of visual memory but rather has a bias for processing of associations involving spatial information. Recruitment of this region during memory tasks appears to depend both on processing type (associative/nonassociative) and to-be-remembered material (spatial/nonspatial). Copyright © 2010 Wiley-Liss, Inc.
Working memory training in older adults: Bayesian evidence supporting the absence of transfer.
Guye, Sabrina; von Bastian, Claudia C
2017-12-01
The question of whether working memory training leads to generalized improvements in untrained cognitive abilities is a longstanding and heatedly debated one. Previous research provides mostly ambiguous evidence regarding the presence or absence of transfer effects in older adults. Thus, to draw decisive conclusions regarding the effectiveness of working memory training interventions, methodologically sound studies with larger sample sizes are needed. In this study, we investigated whether a computer-based working memory training intervention induced near and far transfer in a large sample of 142 healthy older adults (65 to 80 years). To this end, we randomly assigned participants to either the experimental group, which completed 25 sessions of adaptive, process-based working memory training, or to the active, adaptive visual search control group. Bayesian linear mixed-effects models were used to estimate performance improvements on the level of abilities, using multiple indicator tasks for near (working memory) and far transfer (fluid intelligence, shifting, and inhibition). Our data provided consistent evidence supporting the absence of near transfer to untrained working memory tasks and the absence of far transfer to all of the assessed abilities. Our results suggest that working memory training is not an effective way to improve general cognitive functioning in old age. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
NASA Astrophysics Data System (ADS)
Hanson-Heine, Magnus W. D.; George, Michael W.; Besley, Nicholas A.
2018-06-01
The restricted excitation subspace approximation is explored as a basis to reduce the memory storage required in linear response time-dependent density functional theory (TDDFT) calculations within the Tamm-Dancoff approximation. It is shown that excluding the core orbitals and up to 70% of the virtual orbitals in the construction of the excitation subspace does not result in significant changes in computed UV/vis spectra for large molecules. The reduced size of the excitation subspace greatly reduces the size of the subspace vectors that need to be stored when using the Davidson procedure to determine the eigenvalues of the TDDFT equations. Furthermore, additional screening of the two-electron integrals in combination with a reduction in the size of the numerical integration grid used in the TDDFT calculation leads to significant computational savings. The use of these approximations represents a simple approach to extend TDDFT to the study of large systems and make the calculations increasingly tractable using modest computing resources.
Gomar, Jesus J; Conejero-Goldberg, Concepcion; Huey, Edward D; Davies, Peter; Goldberg, Terry E
2016-03-01
Compromises in compensatory neurobiologic mechanisms due to aging and/or genetic factors (i.e., the APOE gene) may influence brain-derived neurotrophic factor (BDNF) val66met polymorphism effects on temporal lobe morphometry and memory performance. We studied 2 cohorts from the Alzheimer's Disease Neuroimaging Initiative: 175 healthy subjects and 222 with prodromal and established Alzheimer's disease. Yearly structural magnetic resonance imaging and cognitive performance assessments were carried out over 3 years of follow-up. Both cohorts had similar distributions of BDNF Val/Val and Met allele carriers (including both Val/Met and Met/Met individuals). In healthy subjects, a significant trend for thinner posterior cingulate and precuneus cortices was detected in Met carriers compared to Val homozygotes among APOE E4 carriers, with large and medium effect sizes, respectively. The mild cognitive impairment/Alzheimer's disease cohort showed a longitudinal decline in entorhinal thickness in BDNF Met carriers compared to Val/Val among APOE E4 carriers, with effect sizes ranging from medium to large. In addition, an effect of BDNF genotype was found in APOE E4 carriers for episodic memory (logical memory and ADAS-Cog) and semantic fluency measures, with Met carriers performing worse in all cases. These findings suggest a lack of compensatory mechanisms in BDNF Met carriers and APOE E4 carriers in healthy and pathological aging. Copyright © 2016 Elsevier Inc. All rights reserved.
A malicious pattern detection engine for embedded security systems in the Internet of Things.
Oh, Doohwan; Kim, Deokho; Ro, Won Woo
2014-12-16
With the emergence of the Internet of Things (IoT), a large number of physical objects in daily life have been aggressively connected to the Internet. As the number of objects connected to networks increases, the security systems face a critical challenge due to the global connectivity and accessibility of the IoT. However, it is difficult to adapt traditional security systems to the objects in the IoT, because of their limited computing power and memory size. In light of this, we present a lightweight security system that uses a novel malicious pattern-matching engine. We limit the memory usage of the proposed system in order to make it work on resource-constrained devices. To mitigate performance degradation due to limitations of computation power and memory, we propose two novel techniques, auxiliary shifting and early decision. Through both techniques, we can efficiently reduce the number of matching operations on resource-constrained systems. Experiments and performance analyses show that our proposed system achieves a maximum speedup of 2.14 with an IoT object and provides scalable performance for a large number of patterns.
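The core task the engine solves, matching many patterns against a data stream under tight memory limits, can be illustrated with a naive baseline in Python. The paper's auxiliary-shifting and early-decision techniques are not reproduced here; the first-byte grouping below is only a crude form of early rejection, and the data and patterns are made up.

```python
def find_patterns(data, patterns):
    """Naive multi-pattern search over a byte string: return a list of
    (offset, pattern) for every occurrence of every pattern."""
    # Group patterns by their first byte so that most positions are
    # rejected after a single read -- a crude early-decision filter.
    by_first = {}
    for p in patterns:
        if p:
            by_first.setdefault(p[0], []).append(p)
    matches = []
    for i, byte in enumerate(data):
        for p in by_first.get(byte, ()):
            if data[i:i + len(p)] == p:
                matches.append((i, p))
    return matches
```

The memory footprint here is just the pattern set itself; production engines trade extra precomputed tables (shift tables, automata) for fewer comparisons, which is exactly the trade-off the paper tunes for resource-constrained IoT devices.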
Fame emerges as a result of small memory
NASA Astrophysics Data System (ADS)
Bingol, Haluk
2008-03-01
A dynamic memory model is proposed in which an agent “learns” a new agent by means of recommendation. The agents can also “remember” and “forget.” The memory size is decreased while the population size is kept constant. “Fame” emerges as a few agents become very well known at the expense of the majority being completely forgotten. The minimum and the maximum of fame change linearly with the relative memory size. The network properties of the who-knows-who graph, which represents the state of the system, are investigated.
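Dynamics of this kind can be sketched with a toy agent-based simulation. The update rules below (who recommends, what gets forgotten) are assumptions for illustration, not the exact model of the paper.

```python
import random

def fame_simulation(n_agents=100, memory_size=5, steps=20000, seed=42):
    """Toy recommendation dynamics: each agent holds a bounded memory of
    known agents; on each step one agent 'recommends' an acquaintance to
    another, and a random entry is forgotten if the memory overflows."""
    rng = random.Random(seed)
    memory = [set(rng.sample(range(n_agents), memory_size))
              for _ in range(n_agents)]
    for _ in range(steps):
        recommender = rng.randrange(n_agents)
        listener = rng.randrange(n_agents)
        if recommender == listener:
            continue
        learned = rng.choice(sorted(memory[recommender]))
        memory[listener].add(learned)
        if len(memory[listener]) > memory_size:
            # forget a random entry to restore the memory bound
            memory[listener].remove(rng.choice(sorted(memory[listener])))
    # fame of an agent = number of memories it currently appears in
    return [sum(a in mem for mem in memory) for a in range(n_agents)]

fame = fame_simulation()
```

Because every memory stays at exactly `memory_size` entries, total fame is conserved; what the model studies is how unevenly that fixed budget ends up distributed as the relative memory size shrinks.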
Ehsan, Shoaib; Clark, Adrian F.; ur Rehman, Naveed; McDonald-Maier, Klaus D.
2015-01-01
The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of integral image presents several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of integral image in embedded vision systems, the paper presents two algorithms which allow substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems. PMID:26184211
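The recursive equation that the hardware algorithms decompose is the standard integral-image recurrence I(x,y) = i(x,y) + I(x-1,y) + I(x,y-1) - I(x-1,y-1), shown here as a serial Python sketch (the row-parallel decomposition from the paper is not shown):

```python
def integral_image(img):
    """Integral image I, where I[y][x] is the sum of all pixels img[y'][x']
    with y' <= y and x' <= x, built with the standard recurrence."""
    h, w = len(img), len(img[0])
    I = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            I[y][x] = (img[y][x]
                       + (I[y][x - 1] if x > 0 else 0)
                       + (I[y - 1][x] if y > 0 else 0)
                       - (I[y - 1][x - 1] if x > 0 and y > 0 else 0))
    return I

def box_sum(I, top, left, bottom, right):
    """Sum of pixels in the inclusive rectangle, in O(1): this constant-time
    lookup is what makes SURF-style rectangular features fast."""
    total = I[bottom][right]
    if left > 0:
        total -= I[bottom][left - 1]
    if top > 0:
        total -= I[top - 1][right]
    if top > 0 and left > 0:
        total += I[top - 1][left - 1]
    return total

I = integral_image([[1, 2], [3, 4]])
```

The serial data dependence visible in the recurrence (each entry needs its left and upper neighbors) is precisely what the paper's decomposition breaks to compute several values per row in parallel.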
Self-assembled phase-change nanowire for nonvolatile electronic memory
NASA Astrophysics Data System (ADS)
Jung, Yeonwoong
One of the most important subjects in nanoscience is to identify and exploit the relationship between size and the structural/physical properties of materials, and to explore novel material properties at small length scales. Scale-down of materials is not only advantageous for realizing miniaturized devices; nanometer-sized materials often exhibit intriguing physical/chemical properties that differ greatly from their bulk counterparts. This dissertation studies self-assembled phase-change nanowires for future nonvolatile electronic memories, mainly focusing on their size-dependent memory switching properties. Owing to their unique one-dimensional geometry coupled with small and tunable sizes, bottom-up designed nanowires offer great opportunities for both fundamental science and practical engineering, which would be difficult to realize in conventional top-down approaches. We synthesized chalcogenide phase-change nanowires of different compositions and sizes, and studied their electronic memory switching owing to the structural change between crystalline and amorphous phases. In particular, we investigated nanowire size-dependent memory switching parameters, including writing current, power consumption, and data retention times, as well as composition-dependent electronic properties. The observed size- and composition-dependent switching and recrystallization kinetics are explained based on a heat transport model and heterogeneous nucleation theories, which help to design phase-change materials with better properties. Moreover, we configured unconventional heterostructured phase-change nanowire memories and studied their multiple memory states in single nanowire devices.
Finally, by combining in-situ/ex-situ electron microscopy techniques and electrical measurements, we characterized the structural states involved in electrically-driven phase-change in order to understand the atomistic mechanism that governs the electronic memory switching through phase-change.
Neural activity in the hippocampus predicts individual visual short-term memory capacity.
von Allmen, David Yoh; Wurmitzer, Karoline; Martin, Ernst; Klaver, Peter
2013-07-01
Although the hippocampus had been traditionally thought to be exclusively involved in long-term memory, recent studies raised controversial explanations why hippocampal activity emerged during short-term memory tasks. For example, it has been argued that long-term memory processes might contribute to performance within a short-term memory paradigm when memory capacity has been exceeded. It is still unclear, though, whether neural activity in the hippocampus predicts visual short-term memory (VSTM) performance. To investigate this question, we measured BOLD activity in 21 healthy adults (age range 19-27 yr, nine males) while they performed a match-to-sample task requiring processing of object-location associations (delay period = 900 ms; set size conditions 1, 2, 4, and 6). Based on individual memory capacity (estimated by Cowan's K-formula), two performance groups were formed (high and low performers). Within whole brain analyses, we found a robust main effect of "set size" in the posterior parietal cortex (PPC). In line with a "set size × group" interaction in the hippocampus, a subsequent Finite Impulse Response (FIR) analysis revealed divergent hippocampal activation patterns between performance groups: Low performers (mean capacity = 3.63) elicited increased neural activity at set size two, followed by a drop in activity at set sizes four and six, whereas high performers (mean capacity = 5.19) showed an incremental activity increase with larger set size (maximal activation at set size six). Our data demonstrated that performance-related neural activity in the hippocampus emerged below capacity limit. In conclusion, we suggest that hippocampal activity reflected successful processing of object-location associations in VSTM. Neural activity in the PPC might have been involved in attentional updating. Copyright © 2013 Wiley Periodicals, Inc.
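Cowan's K, used here to estimate individual capacity, is computed for a single-probe change-detection task as K = N × (hit rate − false-alarm rate), where N is the set size. A small sketch; the hit and false-alarm rates below are hypothetical, not data from this study:

```python
def cowans_k(set_size, hit_rate, false_alarm_rate):
    """Cowan's K estimate of visual short-term memory capacity for a
    single-probe change-detection task: K = N * (hits - false alarms)."""
    return set_size * (hit_rate - false_alarm_rate)

# One common convention takes capacity as the maximum K across set size
# conditions (here: the study's set sizes 1, 2, 4, 6 with made-up rates).
trials = {1: (0.99, 0.01), 2: (0.95, 0.05), 4: (0.85, 0.10), 6: (0.70, 0.15)}
capacity = max(cowans_k(n, h, fa) for n, (h, fa) in trials.items())
```

Subtracting the false-alarm rate corrects the hit rate for guessing, so K is read directly in "number of items held" rather than in percent correct.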
Recognition-induced forgetting is not due to category-based set size.
Maxcey, Ashleigh M
2016-01-01
What are the consequences of accessing a visual long-term memory representation? Previous work has shown that accessing a long-term memory representation via retrieval improves memory for the targeted item and hurts memory for related items, a phenomenon called retrieval-induced forgetting. Recently we found a similar forgetting phenomenon with recognition of visual objects. Recognition-induced forgetting occurs when practice recognizing an object during a two-alternative forced-choice task, from a group of objects learned at the same time, leads to worse memory for objects from that group that were not practiced. An alternative explanation of this effect is that category-based set size is inducing forgetting, not recognition practice as claimed by some researchers. This alternative explanation is possible because during recognition practice subjects make old-new judgments in a two-alternative forced-choice task, and are thus exposed to more objects from practiced categories, potentially inducing forgetting due to set-size. Herein I pitted the category-based set size hypothesis against the recognition-induced forgetting hypothesis. To this end, I parametrically manipulated the amount of practice objects received in the recognition-induced forgetting paradigm. If forgetting is due to category-based set size, then the magnitude of forgetting of related objects will increase as the number of practice trials increases. If forgetting is recognition induced, the set size of exemplars from any given category should not be predictive of memory for practiced objects. Consistent with this latter hypothesis, additional practice systematically improved memory for practiced objects, but did not systematically affect forgetting of related objects. These results firmly establish that recognition practice induces forgetting of related memories. 
Future directions and important real-world applications of using recognition to access our visual memories of previously encountered objects are discussed.
Parvalbumin interneurons constrain the size of the lateral amygdala engram.
Morrison, Dano J; Rashid, Asim J; Yiu, Adelaide P; Yan, Chen; Frankland, Paul W; Josselyn, Sheena A
2016-11-01
Memories are thought to be represented by discrete physiological changes in the brain, collectively referred to as an engram, that allow patterns of activity present during learning to be reactivated in the future. During the formation of a conditioned fear memory, a subset of principal (excitatory) neurons in the lateral amygdala (LA) are allocated to a neuronal ensemble that encodes an association between an initially neutral stimulus and a threatening aversive stimulus. Previous experimental and computational work suggests that this subset consists of only a small proportion of all LA neurons, and that this proportion remains constant across different memories. Here we examine the mechanisms that contribute to the stability of the size of the LA component of an engram supporting a fear memory. Visualizing expression of the activity-dependent gene Arc following memory retrieval to identify neurons allocated to an engram, we first show that the overall size of the LA engram remains constant across conditions of different memory strength. That is, the strength of a memory was not correlated with the number of LA neurons allocated to the engram supporting that memory. We then examine potential mechanisms constraining the size of the LA engram by expressing inhibitory DREADDs (designer receptors exclusively activated by designer drugs) in parvalbumin-positive (PV+) interneurons of the amygdala. We find that silencing PV+ neurons during conditioning increases the size of the engram, especially in the dorsal subnucleus of the LA. These results confirm predictions from modeling studies regarding the role of inhibition in shaping the size of neuronal memory ensembles and provide additional support for the idea that neurons in the LA are sparsely allocated to the engram based on relative neuronal excitability. Copyright © 2016 Elsevier Inc. All rights reserved.
When does length cause the word length effect?
Jalbert, Annie; Neath, Ian; Bireta, Tamra J; Surprenant, Aimée M
2011-03-01
The word length effect, the finding that lists of short words are better recalled than lists of long words, has been termed one of the benchmark findings that any theory of immediate memory must account for. Indeed, the effect led directly to the development of working memory and the phonological loop, and it is viewed as the best remaining evidence for time-based decay. However, previous studies investigating this effect have confounded length with orthographic neighborhood size. In the present study, Experiments 1A and 1B revealed typical effects of length when short and long words were equated on all relevant dimensions previously identified in the literature except for neighborhood size. In Experiment 2, consonant-vowel-consonant (CVC) words with a large orthographic neighborhood were better recalled than were CVC words with a small orthographic neighborhood. In Experiments 3 and 4, using two different sets of stimuli, we showed that when short (1-syllable) and long (3-syllable) items were equated for neighborhood size, the word length effect disappeared. Experiment 5 replicated this with spoken recall. We suggest that the word length effect may be better explained by the differences in linguistic and lexical properties of short and long words rather than by length per se. These results add to the growing literature showing problems for theories of memory that include decay offset by rehearsal as a central feature. 2011 APA, all rights reserved
Kofler, Michael J; Alderson, R Matt; Raiker, Joseph S; Bolden, Jennifer; Sarver, Dustin E; Rapport, Mark D
2014-05-01
The current study examined competing predictions of the default mode, cognitive neuroenergetic, and functional working memory models of attention-deficit/hyperactivity disorder (ADHD) regarding the relation between neurocognitive impairments in working memory and intraindividual variability. Twenty-two children with ADHD and 15 typically developing children were assessed on multiple tasks measuring intraindividual reaction time (RT) variability (ex-Gaussian: tau, sigma) and central executive (CE) working memory. Latent factor scores based on multiple, counterbalanced tasks were created for each construct of interest (CE, tau, sigma) to reflect reliable variance associated with each construct and remove task-specific, test-retest, and random error. Bias-corrected, bootstrapped mediation analyses revealed that CE working memory accounted for 88% to 100% of ADHD-related RT variability across models, and between-group differences in RT variability were no longer detectable after accounting for the mediating role of CE working memory. In contrast, RT variability accounted for 10% to 29% of between-group differences in CE working memory, and large magnitude CE working memory deficits remained after accounting for this partial mediation. Statistical comparison of effect size estimates across models suggests directionality of effects, such that the mediation effects of CE working memory on RT variability were significantly greater than the mediation effects of RT variability on CE working memory. The current findings question the role of RT variability as a primary neurocognitive indicator in ADHD and suggest that ADHD-related RT variability may be secondary to underlying deficits in CE working memory.
Efficient bulk-loading of gridfiles
NASA Technical Reports Server (NTRS)
Leutenegger, Scott T.; Nicol, David M.
1994-01-01
This paper considers the problem of bulk-loading large data sets for the gridfile multiattribute indexing technique. We propose a rectilinear partitioning algorithm that heuristically seeks to minimize the size of the gridfile needed to ensure no bucket overflows. Empirical studies on both synthetic data sets and on data sets drawn from computational fluid dynamics applications demonstrate that our algorithm is very efficient, and is able to handle large data sets. In addition, we present an algorithm for bulk-loading data sets too large to fit in main memory. Utilizing a sort of the entire data set, it creates a gridfile without incurring any overflows.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blanford, M.
1997-12-31
Most commercially-available quasistatic finite element programs assemble element stiffnesses into a global stiffness matrix, then use a direct linear equation solver to obtain nodal displacements. However, for large problems (greater than a few hundred thousand degrees of freedom), the memory size and computation time required for this approach become prohibitive. Moreover, direct solution does not lend itself to the parallel processing needed for today's multiprocessor systems. This talk gives an overview of the iterative solution strategy of JAS3D, the nonlinear large-deformation quasistatic finite element program. Because its architecture is derived from an explicit transient-dynamics code, it does not ever assemble a global stiffness matrix. The author describes the approach he used to implement the solver on multiprocessor computers, and shows examples of problems run on hundreds of processors and more than a million degrees of freedom. Finally, he describes some of the work he is presently doing to address the challenges of iterative convergence for ill-conditioned problems.
Elastocaloric cooling of additive manufactured shape memory alloys with large latent heat
Hou, Huilong; Simsek, Emrah; Stasak, Drew; ...
2017-08-11
The stress-induced martensitic phase transformation of shape memory alloys (SMAs) is the basis for elastocaloric cooling. In this paper, we employ additive manufacturing to fabricate TiNi SMAs, and demonstrate compressive elastocaloric cooling in the TiNi rods with transformation latent heat as large as 20 J g⁻¹. Adiabatic compression on as-fabricated TiNi displays cooling ΔT as high as -7.5 °C with recoverable superelastic strain up to 5%. Unlike conventional SMAs, additive manufactured TiNi SMAs exhibit linear superelasticity with narrow hysteresis in stress–strain curves under both adiabatic and isothermal conditions. Microstructurally, we find that there are Ti2Ni precipitates typically one micron in size with a large aspect ratio enclosing the TiNi matrix. Finally, a stress transfer mechanism between reversible phase transformation in the TiNi matrix and mechanical deformation in Ti2Ni precipitates is believed to be the origin of the unique superelasticity behavior.
Gervais, Roger O; Ben-Porath, Yossef S; Wygant, Dustin B; Green, Paul
2008-12-01
The MMPI-2 Response Bias Scale (RBS) is designed to detect response bias in forensic neuropsychological and disability assessment settings. Validation studies have demonstrated that the scale is sensitive to cognitive response bias as determined by failure on the Word Memory Test (WMT) and other symptom validity tests. Exaggerated memory complaints are a common feature of cognitive response bias. The present study was undertaken to determine the extent to which the RBS is sensitive to memory complaints and how it compares in this regard to other MMPI-2 validity scales and indices. This archival study used MMPI-2 and Memory Complaints Inventory (MCI) data from 1550 consecutive non-head-injury disability-related referrals to the first author's private practice. ANOVA results indicated significant increases in memory complaints across increasing RBS score ranges with large effect sizes. Regression analyses indicated that the RBS was a better predictor of the mean memory complaints score than the F, F(B), and F(P) validity scales and the FBS. There was no correlation between the RBS and the CVLT, an objective measure of verbal memory. These findings suggest that elevated scores on the RBS are associated with over-reporting of memory problems, which provides further external validation of the RBS as a sensitive measure of cognitive response bias. Interpretive guidelines for the RBS are provided.
Parallel Implementation of MAFFT on CUDA-Enabled Graphics Hardware.
Zhu, Xiangyuan; Li, Kenli; Salah, Ahmad; Shi, Lin; Li, Keqin
2015-01-01
Multiple sequence alignment (MSA) constitutes an extremely powerful tool for many biological applications including phylogenetic tree estimation, secondary structure prediction, and critical residue identification. However, aligning large biological sequences with popular tools such as MAFFT requires long runtimes on sequential architectures. Due to the ever increasing sizes of sequence databases, there is increasing demand to accelerate this task. In this paper, we demonstrate how graphic processing units (GPUs), powered by the compute unified device architecture (CUDA), can be used as an efficient computational platform to accelerate the MAFFT algorithm. To fully exploit the GPU's capabilities for accelerating MAFFT, we have optimized the sequence data organization to eliminate the bandwidth bottleneck of memory access, designed a memory allocation and reuse strategy to make full use of the limited memory of GPUs, proposed a new modified-run-length encoding (MRLE) scheme to reduce memory consumption, and used high-performance shared memory to speed up I/O operations. Our implementation tested on three NVIDIA GPUs achieves a speedup of up to 11.28 on a Tesla K20m GPU compared with the sequential MAFFT 7.015.
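The abstract names a modified run-length encoding (MRLE) scheme but does not describe the modification, so no attempt is made to reproduce it here. The baseline idea it builds on — collapsing runs of identical symbols into (symbol, count) pairs, which is effective on the gap-heavy rows of large alignments — can be sketched as follows; the function names are illustrative, not part of MAFFT's API.

```python
def rle_encode(seq):
    """Plain run-length encoding: collapse runs of identical symbols
    into (symbol, count) pairs."""
    runs = []
    for ch in seq:
        if runs and runs[-1][0] == ch:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([ch, 1])      # start a new run
    return [(ch, n) for ch, n in runs]

def rle_decode(runs):
    """Inverse transform: expand (symbol, count) pairs back to a string."""
    return ''.join(ch * n for ch, n in runs)
```

A gap run of length n shrinks to a single pair regardless of n; the GPU scheme additionally has to support access by parallel threads, which plain RLE does not provide by itself.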
Lustig, Cindy; Flegal, Kristin E.
2009-01-01
Cognitive training programs for older adults often result in improvements at the group level. However, there are typically large age and individual differences in the size of training benefits. These differences may be related to the degree to which participants implement the processes targeted by the training program. To test this possibility, we tested older adults in a memory-training procedure either under specific strategy instructions designed to encourage semantic, integrative encoding, or in a condition that encouraged time and attention to encoding but allowed participants to choose their own strategy. Both conditions improved the performance of old-old adults relative to an earlier study (Bissig & Lustig, 2007) and reduced self-reports of everyday memory errors. Performance in the strategy-instruction group was related to pre-existing ability, performance in the strategy-choice group was not. The strategy-choice group performed better on a laboratory transfer test of recognition memory, and training performance was correlated with reduced everyday memory errors. Training programs that target latent but inefficiently-used abilities while allowing flexibility in bringing those abilities to bear may best promote effective training and transfer. PMID:19140647
Aho-Corasick String Matching on Shared and Distributed Memory Parallel Architectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tumeo, Antonino; Villa, Oreste; Chavarría-Miranda, Daniel
String matching is at the core of many critical applications, including network intrusion detection systems, search engines, virus scanners, spam filters, DNA and protein sequencing, and data mining. For all of these applications string matching requires a combination of (sometimes all) the following characteristics: high and/or predictable performance, support for large data sets and flexibility of integration and customization. Many software based implementations targeting conventional cache-based microprocessors fail to achieve high and predictable performance requirements, while Field-Programmable Gate Array (FPGA) implementations and dedicated hardware solutions fail to support large data sets (dictionary sizes) and are difficult to integrate and customize. The advent of multicore, multithreaded, and GPU-based systems is opening the possibility for software based solutions to reach very high performance at a sustained rate. This paper compares several software-based implementations of the Aho-Corasick string searching algorithm for high performance systems. We discuss the implementation of the algorithm on several types of shared-memory high-performance architectures (Niagara 2, large x86 SMPs and Cray XMT), distributed memory with homogeneous processing elements (InfiniBand cluster of x86 multicores) and heterogeneous processing elements (InfiniBand cluster of x86 multicores with NVIDIA Tesla C10 GPUs). We describe in detail how each solution achieves the objectives of supporting large dictionaries, sustaining high performance, and enabling customization and flexibility using various data sets.
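The Aho-Corasick automaton at the heart of these implementations is standard: insert all dictionary patterns into a trie, compute failure links by breadth-first search, then scan the text in a single pass whose cost is independent of dictionary size. A minimal sequential sketch in Python (the paper's parallel versions distribute the text and/or replicate the automaton across threads and nodes; this is only the baseline algorithm):

```python
from collections import deque

def build_automaton(patterns):
    """Build the Aho-Corasick goto/fail/output tables for a pattern set."""
    goto = [{}]          # state -> {symbol: next state}; state 0 is the root
    output = [set()]     # patterns recognized on entering each state
    for pat in patterns:
        state = 0
        for ch in pat:   # insert the pattern into the trie
            if ch not in goto[state]:
                goto.append({})
                output.append(set())
                goto[state][ch] = len(goto) - 1
            state = goto[state][ch]
        output[state].add(pat)
    fail = [0] * len(goto)
    queue = deque(goto[0].values())      # depth-1 states fail to the root
    while queue:                         # BFS: parents before children
        s = queue.popleft()
        for ch, t in goto[s].items():
            queue.append(t)
            f = fail[s]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[t] = goto[f].get(ch, 0)
            output[t] |= output[fail[t]]  # inherit matches ending here
    return goto, fail, output

def search(text, goto, fail, output):
    """Single pass over text; returns (start, pattern) for every match."""
    hits, state = [], 0
    for i, ch in enumerate(text):
        while state and ch not in goto[state]:
            state = fail[state]
        state = goto[state].get(ch, 0)
        for pat in output[state]:
            hits.append((i - len(pat) + 1, pat))
    return hits
```

For the classic dictionary {he, she, his, hers}, scanning "ushers" reports "she", "he", and "hers" with their start offsets in one left-to-right pass.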
Hammer, Rubi; Tennekoon, Michael; Cooke, Gillian E; Gayda, Jessica; Stein, Mark A; Booth, James R
2015-08-01
We tested the interactive effect of feedback and reward on visuospatial working memory in children with ADHD. Seventeen boys with ADHD and 17 Normal Control (NC) boys underwent functional magnetic resonance imaging (fMRI) while performing four visuospatial 2-back tasks that required monitoring the spatial location of letters presented on a display. Tasks varied in reward size (large; small) and feedback availability (no-feedback; feedback). While the performance of NC boys was high in all conditions, boys with ADHD exhibited higher performance (similar to those of NC boys) only when they received feedback associated with large-reward. Performance pattern in both groups was mirrored by neural activity in an executive function neural network comprised of few distinct frontal brain regions. Specifically, neural activity in the left and right middle frontal gyri of boys with ADHD became normal-like only when feedback was available, mainly when feedback was associated with large-reward. When feedback was associated with small-reward, or when large-reward was expected but feedback was not available, boys with ADHD exhibited altered neural activity in the medial orbitofrontal cortex and anterior insula. This suggests that contextual support normalizes activity in executive brain regions in children with ADHD, which results in improved working memory. Copyright © 2015 The Authors. Published by Elsevier Ltd.. All rights reserved.
Design of a Telescopic Linear Actuator Based on Hollow Shape Memory Springs
NASA Astrophysics Data System (ADS)
Spaggiari, Andrea; Spinella, Igor; Dragoni, Eugenio
2011-07-01
Shape memory alloys (SMAs) are smart materials exploited in many applications to build actuators with high power to mass ratio. Typical SMA drawbacks are: wires show poor stroke and excessive length, helical springs have limited mechanical bandwidth and high power consumption. This study is focused on the design of a large-scale linear SMA actuator conceived to maximize the stroke while limiting the overall size and the electric consumption. This result is achieved by adopting for the actuator a telescopic multi-stage architecture and using SMA helical springs with hollow cross section to power the stages. The hollow geometry leads to reduced axial size and mass of the actuator and to enhanced working frequency while the telescopic design confers to the actuator an indexable motion, with a number of different displacements being achieved through simple on-off control strategies. An analytical thermo-electro-mechanical model is developed to optimize the device. Output stroke and force are maximized while total size and power consumption are simultaneously minimized. Finally, the optimized actuator, showing good performance from all these points of view, is designed in detail.
Limited capacity for contour curvature in iconic memory.
Sakai, Koji
2006-06-01
We measured the difference threshold for contour curvature in iconic memory by using the cued discrimination method. The study stimulus consisting of 2 to 6 curved contours was briefly presented in the fovea, followed by two lines as cues. Subjects discriminated the curvature of two cued curves. The cue delays were 0 msec. and 300 msec. in Exps. 1 and 2, respectively, and 50 msec. before the study offset in Exp. 3. Analysis of data from Exps. 1 and 2 showed that the Weber fraction rose monotonically with the increase in set size. Clear set-size effects indicate that iconic memory has a limited capacity. Moreover, clear set-size effect in Exp. 3 indicates that perception itself has a limited capacity. Larger set-size effects in Exp. 1 than in Exp. 3 suggest that iconic memory after perceptual process has limited capacity. These properties of iconic memory at threshold level are contradictory to the traditional view that iconic memory has a high capacity both at suprathreshold and categorical levels.
Single-pass memory system evaluation for multiprogramming workloads
NASA Technical Reports Server (NTRS)
Conte, Thomas M.; Hwu, Wen-Mei W.
1990-01-01
Modern memory systems are composed of levels of cache memories, a virtual memory system, and a backing store. Varying more than a few design parameters and measuring the performance of such systems has traditionally been constrained by the high cost of simulation. Recently introduced models of cache performance reduce the cost of simulation, but at the expense of accuracy of performance prediction. Stack-based methods predict performance accurately using one pass over the trace for all cache sizes, but these techniques have been limited to fully-associative organizations. This paper presents a stack-based method of evaluating the performance of cache memories using a recurrence/conflict model for the miss ratio. Unlike previous work, the performance of realistic cache designs, such as direct-mapped caches, is predicted by the method. The method also includes a new approach to the problem of the effects of multiprogramming. This new technique separates the characteristics of the individual program from those of the workload. The recurrence/conflict method is shown to be practical, general, and powerful by comparing its performance to that of a popular traditional cache simulator. The authors expect that the availability of such a tool will have a large impact on future architectural studies of memory systems.
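The classical one-pass stack technique this work generalizes exploits LRU's inclusion property: a reference hits in a fully-associative LRU cache of C blocks exactly when its stack distance — the number of distinct blocks touched since the previous reference to the same block — is less than C, so a single pass over the trace yields miss ratios for every cache size at once. A toy version of that fully-associative baseline (the paper's recurrence/conflict model, which also covers direct-mapped designs, is not reproduced here):

```python
def stack_distances(trace):
    """One-pass LRU stack simulation: return each reference's stack
    distance (float('inf') for cold misses)."""
    stack = []                       # block addresses, most recent first
    dists = []
    for block in trace:
        if block in stack:
            d = stack.index(block)   # distinct blocks touched since last use
            stack.remove(block)
        else:
            d = float('inf')         # first touch: misses in any finite cache
        stack.insert(0, block)       # move block to the top of the LRU stack
        dists.append(d)
    return dists

def miss_ratio(dists, cache_size):
    """Misses in a fully-associative LRU cache of `cache_size` blocks are
    exactly the references whose stack distance is >= cache_size."""
    return sum(1 for d in dists if d >= cache_size) / len(dists)
```

For the trace a b a c b a the distances are inf, inf, 1, inf, 2, 2, so the same pass gives the miss ratio at every size: 6/6 with one block, 5/6 with two, 3/6 with three or more.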
Evidence against decay in verbal working memory.
Oberauer, Klaus; Lewandowsky, Stephan
2013-05-01
The article tests the assumption that forgetting in working memory for verbal materials is caused by time-based decay, using the complex-span paradigm. Participants encoded 6 letters for serial recall; each letter was preceded and followed by a processing period comprising 4 trials of difficult visual search. Processing duration, during which memory could decay, was manipulated via search set size. This manipulation increased retention interval by up to 100% without having any effect on recall accuracy. This result held with and without articulatory suppression. Two experiments using a dual-task paradigm showed that the visual search process required central attention. Thus, even when memory maintenance by central attention and by articulatory rehearsal was prevented, a large delay had no effect on memory performance, contrary to the decay notion. Most previous experiments that manipulated the retention interval and the opportunity for maintenance processes in complex span have confounded these variables with time pressure during processing periods. Three further experiments identified time pressure as the variable that affected recall. We conclude that time-based decay does not contribute to the capacity limit of verbal working memory. PsycINFO Database Record (c) 2013 APA, all rights reserved.
Cognitive Impairment in Euthymic Pediatric Bipolar Disorder: A Systematic Review and Meta-Analysis.
Elias, Liana R; Miskowiak, Kamilla W; Vale, Antônio M O; Köhler, Cristiano A; Kjærstad, Hanne L; Stubbs, Brendon; Kessing, Lars V; Vieta, Eduard; Maes, Michael; Goldstein, Benjamin I; Carvalho, André F
2017-04-01
To perform a systematic review and meta-analysis of studies investigating neurocognition in euthymic youths with bipolar disorder (BD) compared to healthy controls (HCs). A systematic literature search was conducted in the PubMed/MEDLINE, PsycINFO, and EMBASE databases from inception up until March 23, 2016, for original peer-reviewed articles that investigated neurocognition in euthymic youths with BD compared to HCs. Effect sizes (ES) for individual tests were extracted. In addition, results were grouped according to cognitive domain. This review complied with the PRISMA statement guidelines. A total of 24 studies met inclusion criteria (N = 1,146; 510 with BD). Overall, euthymic youths with BD were significantly impaired in verbal learning, verbal memory, working memory, visual learning, and visual memory, with moderate to large ESs (Hedges' g = 0.76-0.99); significant impairments were not observed for attention/vigilance, reasoning and problem solving, and/or processing speed. Heterogeneity was moderate to large (I² ≥ 50%) for most ES estimates. Differences in the definition of euthymia across studies explained the heterogeneity in the ES estimate for verbal learning and memory. We also found evidence for other potential sources of heterogeneity in several ES estimates, including co-occurring attention-deficit/hyperactivity disorder (ADHD) and anxiety disorders, and the use of medications. In addition, the use of different neuropsychological tests appeared to contribute to heterogeneity of some estimates (e.g., attention/vigilance domain). Euthymic youths with BD exhibit significant cognitive dysfunction encompassing verbal learning and memory, working memory, and/or visual learning and memory domains. These data indicate that for a subset of individuals with BD, neurodevelopmental factors may contribute to cognitive dysfunction. Copyright © 2017 American Academy of Child and Adolescent Psychiatry. Published by Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Dlugach, Jana M.; Mishchenko, Michael I.; Mackowski, Daniel W.
2012-01-01
Using the results of direct, numerically exact computer solutions of the Maxwell equations, we analyze scattering and absorption characteristics of polydisperse compound particles in the form of wavelength-sized spheres covered with a large number of much smaller spherical grains. The results pertain to the complex refractive indices 1.55 + i0.0003, 1.55 + i0.3, and 3 + i0.1. We show that the optical effects of dusting wavelength-sized hosts by microscopic grains can vary depending on the number and size of the grains as well as on the complex refractive index. Our computations also demonstrate the high efficiency of the new superposition T-matrix code developed for use on distributed memory computer clusters.
NASA Technical Reports Server (NTRS)
Chen, Y.; Nguyen, D.; Guertin, S.; Berstein, J.; White, M.; Menke, R.; Kayali, S.
2003-01-01
This paper presents a reliability evaluation methodology for obtaining statistical reliability information about memory chips for space applications when the test sample size must be kept small because of the high cost of radiation-hardened memories.
NASA Astrophysics Data System (ADS)
Varghani, Ali; Peiravi, Ali; Moradi, Farshad
2018-04-01
The perpendicular anisotropy Spin-Transfer Torque Random Access Memory (P-STT-RAM) is considered to be a promising candidate for high-density memories. Many distinct advantages of the Perpendicular Magnetic Tunnel Junction (P-MTJ) compared to the conventional in-plane MTJ (I-MTJ), such as lower switching current, a circular cell shape that facilitates manufacturability in smaller technology nodes, large thermal stability, smaller cell size, and lower dipole field interaction between adjacent cells, make it a promising candidate for a universal memory. However, for small MTJ cell sizes, the perpendicular technology requires new materials with high polarization and low damping factor, as well as a low resistance-area product of the P-MTJ, in order to avoid a high write voltage as technology is scaled down. A new graphene-based STT-RAM cell for the 8 nm technology node that uses a high perpendicular magnetic anisotropy cobalt/nickel (Co/Ni) multilayer as the magnetic layers is proposed in this paper. The proposed junction benefits from a sufficiently high Tunneling Magnetoresistance Ratio (TMR), low resistance-area product, low write voltage, and low power consumption, which make it suitable for the 8 nm technology node.
Azad, Ariful; Ouzounis, Christos A; Kyrpides, Nikos C; Buluç, Aydin
2018-01-01
Biological networks capture structural or functional properties of relevant entities such as molecules, proteins or genes. Characteristic examples are gene expression networks or protein–protein interaction networks, which hold information about functional affinities or structural similarities. Such networks have been expanding in size due to increasing scale and abundance of biological data. While various clustering algorithms have been proposed to find highly connected regions, Markov Clustering (MCL) has been one of the most successful approaches to cluster sequence similarity or expression networks. Despite its popularity, MCL’s scalability to cluster large datasets still remains a bottleneck due to high running times and memory demands. Here, we present High-performance MCL (HipMCL), a parallel implementation of the original MCL algorithm that can run on distributed-memory computers. We show that HipMCL can efficiently utilize 2000 compute nodes and cluster a network of ∼70 million nodes with ∼68 billion edges in ∼2.4 h. By exploiting distributed-memory environments, HipMCL clusters large-scale networks several orders of magnitude faster than MCL and enables clustering of even bigger networks. HipMCL is based on MPI and OpenMP and is freely available under a modified BSD license. PMID:29315405
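MCL itself, the algorithm HipMCL parallelizes, alternates two matrix operations on a column-stochastic flow matrix: expansion (a matrix power, which spreads random-walk flow through the network) and inflation (an entry-wise power followed by column renormalization, which sharpens flow inside clusters), iterating until the matrix stops changing. A dense NumPy toy version — HipMCL operates on distributed sparse matrices, and the attractor-based cluster extraction below is a simplified convention, not HipMCL's code:

```python
import numpy as np

def mcl(adj, expansion=2, inflation=2.0, iters=100, tol=1e-6):
    """Toy dense Markov Clustering on an undirected adjacency matrix."""
    M = adj.astype(float) + np.eye(adj.shape[0])  # self-loops for stability
    M /= M.sum(axis=0)                            # make columns stochastic
    for _ in range(iters):
        prev = M.copy()
        M = np.linalg.matrix_power(M, expansion)  # expansion: spread flow
        M = M ** inflation                        # inflation: sharpen flow
        M /= M.sum(axis=0)                        # renormalize columns
        if np.abs(M - prev).max() < tol:          # converged (idempotent)
            break
    # Nodes whose columns peak on the same attractor row form one cluster.
    clusters = {}
    for node in range(M.shape[0]):
        clusters.setdefault(int(M[:, node].argmax()), set()).add(node)
    return [sorted(c) for c in clusters.values()]
```

On two triangles joined by a single bridge edge, inflation starves the bridge of flow and the triangles come out as two separate clusters.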
A depth-first search algorithm to compute elementary flux modes by linear programming.
Quek, Lake-Ee; Nielsen, Lars K
2014-07-30
The decomposition of complex metabolic networks into elementary flux modes (EFMs) provides a useful framework for exploring reaction interactions systematically. Generating a complete set of EFMs for large-scale models, however, is near impossible. Even for moderately-sized models (<400 reactions), existing approaches based on the Double Description method must iterate through a large number of combinatorial candidates, thus imposing an immense processor and memory demand. Based on an alternative elementarity test, we developed a depth-first search algorithm using linear programming (LP) to enumerate EFMs in an exhaustive fashion. Constraints can be introduced to directly generate a subset of EFMs satisfying the set of constraints. The depth-first search algorithm has a constant memory overhead. Using flux constraints, a large LP problem can be massively divided and parallelized into independent sub-jobs for deployment into computing clusters. Since the sub-jobs do not overlap, the approach scales to utilize all available computing nodes with minimal coordination overhead or memory limitations. The speed of the algorithm was comparable to efmtool, a mainstream Double Description method, when enumerating all EFMs; the attrition power gained from performing flux feasibility tests offsets the increased computational demand of running an LP solver. Unlike the Double Description method, the algorithm enables accelerated enumeration of all EFMs satisfying a set of constraints.
Rambrain - a library for virtually extending physical memory
NASA Astrophysics Data System (ADS)
Imgrund, Maximilian; Arth, Alexander
2017-08-01
We introduce Rambrain, a user space library that manages the memory consumption of your code. Using Rambrain you can overcommit memory beyond the size of the physical memory present in the system. Rambrain takes care of temporarily swapping data out to disk and can handle multiples of the physical memory size present. Rambrain is thread-safe, OpenMP- and MPI-compatible, and supports asynchronous I/O. The library was designed to require minimal changes to existing programs and to be easy to use.
NASA Astrophysics Data System (ADS)
Siddiqui, Maheen; Wedemann, Roseli S.; Jensen, Henrik Jeldtoft
2018-01-01
We explore statistical characteristics of avalanches associated with the dynamics of a complex-network model, where two modules corresponding to sensorial and symbolic memories interact, representing unconscious and conscious mental processes. The model illustrates Freud's ideas regarding the neuroses and that consciousness is related with symbolic and linguistic memory activity in the brain. It incorporates the Stariolo-Tsallis generalization of the Boltzmann Machine in order to model memory retrieval and associativity. In the present work, we define and measure avalanche size distributions during memory retrieval, in order to gain insight regarding basic aspects of the functioning of these complex networks. The avalanche sizes defined for our model should be related to the time consumed and also to the size of the neuronal region which is activated during memory retrieval. This allows the qualitative comparison of the behaviour of the distribution of cluster sizes, obtained during fMRI measurements of the propagation of signals in the brain, with the distribution of avalanche sizes obtained in our simulation experiments. This comparison corroborates the indication that the Nonextensive Statistical Mechanics formalism may indeed be better suited to model the complex networks which constitute brain and mental structure.
Using a Cray Y-MP as an array processor for a RISC Workstation
NASA Technical Reports Server (NTRS)
Lamaster, Hugh; Rogallo, Sarah J.
1992-01-01
As microprocessors increase in power, the economics of centralized computing has changed dramatically. At the beginning of the 1980s, mainframes and supercomputers were often considered to be cost-effective machines for scalar computing. Today, microprocessor-based RISC (reduced-instruction-set computer) systems have displaced many uses of mainframes and supercomputers. Supercomputers are still cost-competitive when processing jobs that require both large memory size and high memory bandwidth. One such application is array processing. Certain numerical operations are appropriate to a Remote Procedure Call (RPC)-based environment. Matrix multiplication is an example of an operation with enough arithmetic operations to amortize the cost of an RPC call. We describe an experiment demonstrating that matrix multiplication can be executed remotely on a large system faster than it executes on a workstation.
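The amortization argument can be made concrete: an n-by-n matrix multiply performs about 2n³ arithmetic operations but moves only 3n² matrix elements, so compute grows faster than communication as n increases. A toy break-even model follows; all rates and latencies are invented for illustration, not measurements from the paper.

```python
def remote_multiply_wins(n, local_flops=1e8, remote_flops=1e9,
                         link_bytes_per_s=1e7, rpc_latency_s=0.05,
                         bytes_per_elem=8):
    """Return True if shipping an n x n matrix multiply to a faster
    remote machine beats computing it locally (toy cost model)."""
    flops = 2.0 * n**3                      # multiply-add count
    xfer = 3.0 * n**2 * bytes_per_elem      # two inputs + one result
    t_local = flops / local_flops
    t_remote = rpc_latency_s + xfer / link_bytes_per_s + flops / remote_flops
    return t_remote < t_local

# Small problems are dominated by RPC latency and transfer time;
# large ones amortize that overhead against the O(n^3) arithmetic.
print(remote_multiply_wins(50))    # -> False
print(remote_multiply_wins(2000))  # -> True
```

The crossover point shifts with link speed and machine ratio, but the linear growth of the compute-to-communication ratio guarantees one exists.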
NASA Technical Reports Server (NTRS)
Stehle, Roy H.; Ogier, Richard G.
1993-01-01
Alternatives for realizing a packet-based network switch for use on a frequency-division multiple access/time-division multiplexed (FDMA/TDM) geostationary communication satellite were investigated. Each of the eight downlink beams supports eight directed dwells. The design needed to accommodate multicast packets with very low probability of loss due to contention. Three switch architectures were designed and analyzed. An output-queued, shared-bus design yielded a functionally simple system, utilizing one first-in, first-out (FIFO) memory per downlink dwell, but at the expense of a large total memory requirement. A shared-memory architecture offered the most efficiency in memory requirements, requiring about half the memory of the shared-bus design. The processing requirement for the shared-memory system adds complexity that may offset the benefits of the smaller memory. An alternative design using a shared memory buffer per downlink beam decreases circuit complexity through a distributed design, and requires at most 1000 packets of memory more than the completely shared-memory design. Modifications to the basic packet-switch designs were proposed to accommodate circuit-switched traffic, which must be served on a periodic basis with minimal delay. Methods for dynamically controlling the downlink dwell lengths were developed and analyzed. These methods adapt quickly to changing traffic demands, and do not add significant complexity or cost to the satellite and ground-station designs. Methods for reducing the memory requirement by not requiring the satellite to store full packets were also proposed and analyzed. In addition, optimal packet and dwell lengths were computed as functions of memory size for the three switch architectures.
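Why a shared pool needs roughly half the memory of dedicated per-dwell FIFOs is a statistical-multiplexing effect: each dedicated queue must be provisioned for its own peak occupancy, while a shared buffer only needs the peak of the total, and bursts rarely coincide across queues. A toy simulation (the arrival pattern is invented for illustration, not taken from the paper):

```python
import random

def peak_memory(n_queues=8, steps=2000, seed=3):
    """Compare buffer provisioning: dedicated FIFOs must each cover
    their own peak depth; a shared pool only the peak of the total."""
    rng = random.Random(seed)
    depth = [0] * n_queues
    per_queue_peak = [0] * n_queues
    shared_peak = 0
    for _ in range(steps):
        for q in range(n_queues):
            depth[q] += rng.choice((0, 0, 0, 3))  # bursty arrivals
            depth[q] = max(0, depth[q] - 1)       # one departure per slot
            per_queue_peak[q] = max(per_queue_peak[q], depth[q])
        # Shared occupancy can never exceed the sum of individual peaks,
        # and is strictly smaller unless all peaks coincide in time.
        shared_peak = max(shared_peak, sum(depth))
    return sum(per_queue_peak), shared_peak

dedicated, shared = peak_memory()
print(dedicated, shared)
```

With independent bursty traffic, the shared figure typically comes out well below the dedicated one, mirroring the roughly 2x memory saving reported for the shared-memory architecture.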
Acquisition of Cognitive Skill.
1981-08-03
[Only fragmentary indexing excerpts of this abstract survive. They concern a production that still requires the phone number to be held in working memory; retrieval that applies in a time independent of memory set size, though some residual set-size effect may remain; and unpublished laboratory work confirming the effect of practice on memory retrieval.]
Ellenberg, Leah; Liu, Qi; Gioia, Gerard; Yasui, Yutaka; Packer, Roger J.; Mertens, Ann; Donaldson, Sarah S.; Stovall, Marilyn; Kadan-Lottick, Nina; Armstrong, Gregory; Robison, Leslie L.; Zeltzer, Lonnie K.
2009-01-01
Background Among survivors of childhood cancer, those with Central Nervous System (CNS) malignancies have been found to be at greatest risk for neuropsychological dysfunction in the first few years following diagnosis and treatment. This study follows survivors to adulthood to assess the long-term impact of childhood CNS malignancy and its treatment on neurocognitive functioning. Participants & Methods As part of the Childhood Cancer Survivor Study (CCSS), 802 survivors of childhood CNS malignancy, 5937 survivors of non-CNS malignancy and 382 siblings without cancer completed a 25-item Neurocognitive Questionnaire (CCSS-NCQ) at least 16 years post cancer diagnosis assessing task efficiency, emotional regulation, organizational skills and memory. Neurocognitive functioning in survivors of CNS malignancy was compared to that of non-CNS malignancy survivors and a sibling cohort. Within the group of CNS malignancy survivors, multiple linear regression was used to assess the contribution of demographic, illness and treatment variables to reported neurocognitive functioning and the relationship of reported neurocognitive functioning to educational, employment and income status. Results Survivors of CNS malignancy reported significantly greater neurocognitive impairment on all factors assessed by the CCSS-NCQ than non-CNS cancer survivors or siblings (p<.01), with mean T scores of CNS malignancy survivors substantially more impaired than those of the sibling cohort (p<.001), with a large effect size for Task Efficiency (1.16) and a medium effect size for Memory (.68). Within the CNS malignancy group, medical complications, including hearing deficits, paralysis and cerebrovascular incidents resulted in a greater likelihood of reported deficits on all of the CCSS-NCQ factors, with generally small effect sizes (.22-.50).
Total brain irradiation predicted greater impairment on Task Efficiency and Memory (Effect sizes: .65 and .63, respectively), as did partial brain irradiation, with smaller effect sizes (.49 and .43, respectively). Ventriculoperitoneal (VP) shunt placement was associated with small deficits on the same scales (Effect sizes: Task Efficiency .26, Memory .32). Female gender predicted a greater likelihood of impaired scores on 2 scales, with small effect sizes (Task Efficiency .38, Emotional Regulation .45), while diagnosis before age 2 years resulted in less likelihood of reported impairment on the Memory factor with a moderate effect size (.64). CNS malignancy survivors with more impaired CCSS-NCQ scores demonstrated significantly lower educational attainment (p<.01), less household income (p<.001) and less full time employment (p<.001). Conclusions Survivors of childhood CNS malignancy are at significant risk for impairment in neurocognitive functioning in adulthood, particularly if they have received cranial radiation, had a VP shunt placed, suffered a cerebrovascular incident or are left with hearing or motor impairments. Reported neurocognitive impairment adversely affected important adult outcomes, including education, employment, income and marital status. PMID:19899829
Effect size and statistical power in the rodent fear conditioning literature - A systematic review.
Carneiro, Clarissa F D; Moulin, Thiago C; Macleod, Malcolm R; Amaral, Olavo B
2018-01-01
Proposals to increase research reproducibility frequently call for focusing on effect sizes instead of p values, as well as for increasing the statistical power of experiments. However, it is unclear to what extent these two concepts are indeed taken into account in basic biomedical science. To study this in a real-case scenario, we performed a systematic review of effect sizes and statistical power in studies on learning of rodent fear conditioning, a widely used behavioral task to evaluate memory. Our search criteria yielded 410 experiments comparing control and treated groups in 122 articles. Interventions had a mean effect size of 29.5%, and amnesia caused by memory-impairing interventions was nearly always partial. Mean statistical power to detect the average effect size observed in well-powered experiments with significant differences (37.2%) was 65%, and was lower among studies with non-significant results. Only one article reported a sample size calculation, and our estimated sample size to achieve 80% power considering typical effect sizes and variances (15 animals per group) was reached in only 12.2% of experiments. Actual effect sizes correlated with effect size inferences made by readers on the basis of textual descriptions of results only when findings were non-significant, and neither effect size nor power correlated with study quality indicators, number of citations or impact factor of the publishing journal. In summary, effect sizes and statistical power have a wide distribution in the rodent fear conditioning literature, but do not seem to have a large influence on how results are described or cited. Failure to take these concepts into consideration might limit attempts to improve reproducibility in this field of science.
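The two quantities the review tracks can be sketched in a few lines: a pooled-SD Cohen's d, and a normal-approximation sample-size formula for a two-group comparison. This is a textbook approximation, not the authors' exact procedure.

```python
import math
from statistics import mean, stdev, NormalDist

def cohens_d(a, b):
    """Standardized mean difference (pooled-SD Cohen's d)."""
    na, nb = len(a), len(b)
    pooled = math.sqrt(((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
                       / (na + nb - 2))
    return (mean(a) - mean(b)) / pooled

def n_per_group(d, alpha=0.05, power=0.80):
    """Per-group n for a two-sample comparison (normal approximation)."""
    z = NormalDist().inv_cdf
    n = 2 * ((z(1 - alpha / 2) + z(power)) / d) ** 2
    return math.ceil(n)

print(round(cohens_d([1, 2, 3, 4, 5], [2, 3, 4, 5, 6]), 3))  # -> -0.632
print(n_per_group(0.8))  # -> 25 (t-test tables give ~26)
```

Note how quickly the required n grows as d shrinks: halving the effect size quadruples the sample size, which is why typically powered designs need the ~15 animals per group the review estimates.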
shinyheatmap: Ultra fast low memory heatmap web interface for big data genomics.
Khomtchouk, Bohdan B; Hennessy, James R; Wahlestedt, Claes
2017-01-01
Transcriptomics, metabolomics, metagenomics, and other next-generation sequencing (-omics) fields are known for their production of large datasets, especially across single-cell sequencing studies. Visualizing such big data has posed technical challenges in biology, both in terms of available computational resources and programming acumen. Since heatmaps are used to depict high-dimensional numerical data as a colored grid of cells, efficiency and speed have often proven to be critical considerations in the process of successfully converting data into graphics. For example, rendering interactive heatmaps from large input datasets (e.g., 100k+ rows) has been computationally infeasible on both desktop computers and web browsers. In addition to memory requirements, programming skills and knowledge have frequently been barriers to entry for creating highly customizable heatmaps. We propose shinyheatmap: an advanced user-friendly heatmap software suite capable of efficiently creating highly customizable static and interactive biological heatmaps in a web browser. shinyheatmap is a low-memory-footprint program, making it particularly well suited to the interactive visualization of extremely large datasets that cannot typically be computed in-memory due to size restrictions. Also, shinyheatmap features a built-in high-performance web plug-in, fastheatmap, for rapidly plotting interactive heatmaps of datasets as large as 10^5-10^7 rows within seconds, effectively shattering previous performance benchmarks of heatmap rendering speed. shinyheatmap is hosted online as a freely available web server with an intuitive graphical user interface: http://shinyheatmap.com. The methods are implemented in R, and are available as part of the shinyheatmap project at: https://github.com/Bohdan-Khomtchouk/shinyheatmap. Users can access fastheatmap directly from within the shinyheatmap web interface, and all source code has been made publicly available on GitHub: https://github.com/Bohdan-Khomtchouk/fastheatmap.
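Part of what makes 100k-row heatmaps infeasible is drawing far more rows than the display has pixels. One standard remedy, a plausible ingredient of fast renderers though not necessarily shinyheatmap's actual algorithm, is to aggregate blocks of rows before plotting:

```python
import numpy as np

def downsample_rows(matrix, max_rows=1000):
    """Aggregate consecutive row blocks by their mean so a huge matrix
    can be drawn as a heatmap with at most `max_rows` rows."""
    m = np.asarray(matrix, dtype=float)
    if m.shape[0] <= max_rows:
        return m
    block = -(-m.shape[0] // max_rows)     # rows per block (ceil division)
    n_out = -(-m.shape[0] // block)        # blocks actually needed
    pad = n_out * block - m.shape[0]       # pad < block: no all-NaN block
    padded = np.vstack([m, np.full((pad, m.shape[1]), np.nan)])
    return np.nanmean(padded.reshape(n_out, block, -1), axis=1)

big = np.random.rand(100_000, 20)
small = downsample_rows(big)
print(small.shape)  # -> (1000, 20)
```

The aggregated matrix can then be handed to any plotting backend; the cost of rendering no longer scales with the raw row count.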
Randomized Controlled Trial of Exercise for ADHD and Disruptive Behavior Disorders
Bustamante, Eduardo E.; Davis, Catherine L.; Frazier, Stacy L.; Rusch, Dana; Fogg, Louis F.; Atkins, Marc S.; Marquez, David X.
2016-01-01
Purpose To test feasibility and impact of a 10-week after-school exercise program for children with ADHD and/or disruptive behavior disorders (DBD) living in an urban poor community. Methods Children were randomized to exercise (n=19) or a comparable but sedentary attention control program (n=16). Cognitive and behavioral outcomes were collected pre-post. Intent-to-treat mixed models tested group × time and group × time × attendance interactions. Effect sizes were calculated within and between groups. Results Feasibility was evidenced by 86% retention, 60% attendance, and average 75% maximum heart rate. Group × time results were null on the primary outcome, parent-reported executive function. Among secondary outcomes, between-group effect sizes favored exercise on hyperactive symptoms (d=0.47) and verbal working memory (d=0.26), and controls on visuospatial working memory (d=-0.21) and oppositional defiant symptoms (d=-0.37). In each group, within-group effect sizes were moderate to large on most outcomes (d=0.67 to 1.60). A group × time × attendance interaction emerged on visuospatial working memory (F[1,33]=7.42, p<.05), such that attendance to the control program was related to greater improvements (r=.72, p<.01) while attendance to the exercise program was not (r=.25, p=.34). Conclusions While between-group findings on the primary outcome, parent-reported executive function, were null, between-group effect sizes on hyperactivity and visuospatial working memory may reflect adaptations to the specific challenges presented by distinct formats. Both groups demonstrated substantial within-group improvements on clinically relevant outcomes. Findings underscore the importance of programmatic features such as routines, engaging activities, behavior management strategies, and adult attention; and highlight the potential for after-school programs to benefit children with ADHD and DBD living in urban poverty, where health needs are high and service resources are few.
PMID:26829000
Interacting with Nature Improves Cognition and Affect for Individuals with Depression
Berman, Marc G.; Kross, Ethan; Krpan, Katherine M.; Askren, Mary K.; Burson, Aleah; Deldin, Patricia J.; Kaplan, Stephen; Sherdell, Lindsey; Gotlib, Ian H.; Jonides, John
2012-01-01
Background This study aimed to explore whether walking in nature may be beneficial for individuals with major depressive disorder (MDD). Healthy adults demonstrate significant cognitive gains after nature walks, but it was unclear whether those same benefits would be achieved in a depressed sample, as walking alone in nature might induce rumination, thereby worsening memory and mood. Methods Twenty individuals diagnosed with MDD participated in this study. At baseline, mood and short-term memory span were assessed using the PANAS and the backwards digit span (BDS) task, respectively. Participants were then asked to think about an unresolved negative autobiographical event to prime rumination, prior to taking a 50-minute walk in either a natural or urban setting. After the walk, mood and short-term memory span were reassessed. The following week, participants returned to the lab and repeated the entire procedure, but walked in the location not visited in the first session (i.e., a counterbalanced within-subjects design). Results Participants exhibited significant increases in memory span after the nature walk relative to the urban walk, p < .001, ηp² = .53 (a large effect size). Participants also showed increases in mood, but the mood effects did not correlate with the memory effects, suggesting separable mechanisms and replicating previous work. Limitations Sample size and participants' motivation. Conclusions These findings extend earlier work demonstrating the cognitive and affective benefits of interacting with nature to individuals with MDD. Therefore, interacting with nature may be useful clinically as a supplement to existing treatments for MDD. PMID:22464936
2007-01-01
Electro-optic properties of cholesteric liquid crystals with holographically patterned polymer stabilization were examined. It is hypothesized that... enhanced electro-optic properties of the final device. Prior to holographic patterning, polymer stabilization with large elastic memory was generated by way... electro-optic properties appear to stem from a single-dimension domain-size increase, which allows for a reduction in the LC/polymer interaction.
Performance analysis and kernel size study of the Lynx real-time operating system
NASA Technical Reports Server (NTRS)
Liu, Yuan-Kwei; Gibson, James S.; Fernquist, Alan R.
1993-01-01
This paper analyzes the Lynx real-time operating system (LynxOS), which has been selected as the operating system for the Space Station Freedom Data Management System (DMS). The features of LynxOS are compared to those of other Unix-based operating systems. The tools for measuring the performance of LynxOS, which include a high-speed digital timer/counter board, a device driver program, and an application program, are analyzed. The timings for interrupt response, process creation and deletion, threads, semaphores, shared memory, and signals are measured. The memory size of the DMS Embedded Data Processor (EDP) is limited. Moreover, virtual memory is not suitable for real-time applications because page-swap timing may not be deterministic. Therefore, the DMS software, including LynxOS, has to fit in the main memory of an EDP. To reduce the LynxOS kernel size, the following steps are taken: analyzing the factors that influence the kernel size; identifying the modules of LynxOS that may not be needed in an EDP; adjusting the system parameters of LynxOS; reconfiguring the device drivers used in the LynxOS; and analyzing the symbol table. The reductions in kernel disk size, kernel memory size, and total kernel size from each step are listed and analyzed.
Memory versus perception of body size in patients with anorexia nervosa and healthy controls.
Øverås, Maria; Kapstad, Hilde; Brunborg, Cathrine; Landrø, Nils Inge; Lask, Bryan
2014-03-01
The objective of this study was to compare body size estimation based on memory versus perception, in patients with anorexia nervosa (AN) and healthy controls, adjusting for possible confounders. Seventy-one women (AN: 37, controls: 35), aged 14-29 years, were assessed with a computerized body size estimation morphing program. Information was gathered on depression, anxiety, time since last meal, weight and height. Results showed that patients overestimated their body size significantly more than controls, both in the memory and perception condition. Further, patients overestimated their body size significantly more when estimation was based on perception than memory. When controlling for anxiety, the difference between patients and controls no longer reached significance. None of the other confounders contributed significantly to the model. The results suggest that anxiety plays a role in overestimation of body size in AN. This finding might inform treatment, suggesting that more focus should be aimed at the underlying anxiety. Copyright © 2014 John Wiley & Sons, Ltd and Eating Disorders Association.
Effects of long-term representations on free recall of unrelated words
Katkov, Mikhail; Romani, Sandro
2015-01-01
Human memory stores vast amounts of information. Yet recalling this information is often challenging when specific cues are lacking. Here we consider an associative model of retrieval where each recalled item triggers the recall of the next item based on the similarity between their long-term neuronal representations. The model predicts that different items stored in memory have different probabilities of being recalled, depending on the sizes of their representations. Moreover, items with high recall probability tend to be recalled earlier and suppress other items. We performed an analysis of a large data set on free recall and found a highly specific pattern of statistical dependencies predicted by the model, in particular negative correlations between the number of words recalled and their average recall probability. Taken together, the experimental and modeling results presented here reveal complex interactions between memory items during recall that severely constrain recall capacity. PMID:25593296
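A toy version of the described retrieval dynamics can be simulated directly: items are random sets of active units (of unequal size, playing the role of representation size), each recall hops to the item whose representation overlaps most with the current one, and recall stalls once the walk enters a cycle. Everything below is a simplified sketch, not the authors' actual model.

```python
import random

def free_recall(n_items=16, n_units=200, seed=1):
    """Toy associative recall: hop to the item with the largest
    representation overlap, stopping when a transition repeats."""
    rng = random.Random(seed)
    # Unequal representation sizes: some items activate more units.
    items = [frozenset(rng.sample(range(n_units), rng.randint(10, 40)))
             for _ in range(n_items)]
    current, recalled, seen = 0, [0], set()
    while True:
        overlaps = [(len(items[current] & items[j]), j)
                    for j in range(n_items) if j != current]
        nxt = max(overlaps)[1]          # most similar item wins
        if (current, nxt) in seen:
            break                       # entered a cycle: recall stalls
        seen.add((current, nxt))
        if nxt not in recalled:
            recalled.append(nxt)
        current = nxt
    return len(recalled), n_items

print(free_recall())
```

Because large-representation items win many overlap competitions, the walk tends to revisit them and cycle before touching every item, which is the qualitative capacity limit the abstract describes.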
Narrow log-periodic modulations in non-Markovian random walks
NASA Astrophysics Data System (ADS)
Diniz, R. M. B.; Cressoni, J. C.; da Silva, M. A. A.; Mariz, A. M.; de Araújo, J. M.
2017-12-01
What are the necessary ingredients for log-periodicity to appear in the dynamics of a random walk model? Can they be subtle enough to be overlooked? Previous studies suggest that long-range damaged memory and negative feedback together are necessary conditions for the emergence of log-periodic oscillations. The role of negative feedback would then be crucial, forcing the system to change direction. In this paper we show that small-amplitude log-periodic oscillations can emerge when the system is driven by positive feedback. Due to their very small amplitude, these oscillations can easily be mistaken for numerical finite-size effects. The models we use consist of discrete-time random walks with strong memory correlations where the decision process is taken from memory profiles based either on a binomial distribution or on a delta distribution. Anomalous superdiffusive behavior and log-periodic modulations are shown to arise in the large-time limit for convenient choices of the model's parameters.
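The class of models described, discrete-time walks whose next step is drawn from the walker's memory of its past steps, is easy to simulate. The sketch below uses a simple uniform, elephant-walk-style memory profile with positive feedback; the paper's binomial and delta memory profiles are not reproduced here.

```python
import random

def memory_walk(steps=5000, p=0.9, seed=7):
    """Random walk with full memory: with probability p the walker
    repeats a step drawn uniformly from its entire history (positive
    feedback); otherwise it steps at random."""
    rng = random.Random(seed)
    history = [1]                        # the first step
    x, trajectory = 1, [1]
    for _ in range(steps - 1):
        if rng.random() < p:
            step = rng.choice(history)   # recall a past decision
        else:
            step = rng.choice((-1, 1))
        history.append(step)
        x += step
        trajectory.append(x)
    return trajectory

traj = memory_walk()
print(len(traj))  # -> 5000
```

With strong positive feedback (p near 1), early fluctuations get reinforced and the displacement grows faster than the diffusive square root of time; detecting the tiny log-periodic modulation on top of this trend is exactly the subtle measurement problem the paper addresses.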
Activating representations in permanent memory: different benefits for pictures and words.
Seifert, L S
1997-09-01
Previous research has suggested that pictures have privileged access to semantic memory (W. R. Glaser, 1992), but J. Theios and P. C. Amrhein (1989b) argued that prior studies inappropriately used large pictures and small words. In Experiment 1, participants categorized pictures reliably faster than words, even when both types of items were of optimal perceptual size. In Experiment 2, a poststimulus flashmask and judgments about internal features did not eliminate picture superiority, indicating that it was not due to differences in early visual processing or analysis of visible features. In Experiment 3, when participants made judgments about whether items were related, latencies were reliably faster for categorically related pictures than for words, but there was no picture advantage for noncategorically associated items. Results indicate that pictures have privileged access to semantic memory for categories, but that neither pictures nor words seem to have privileged access to noncategorical associations.
The neural correlates of gist-based true and false recognition
Gutchess, Angela H.; Schacter, Daniel L.
2012-01-01
When information is thematically related to previously studied information, gist-based processes contribute to false recognition. Using functional MRI, we examined the neural correlates of gist-based recognition as a function of increasing numbers of studied exemplars. Sixteen participants incidentally encoded small, medium, and large sets of pictures, and we compared the neural response at recognition using parametric modulation analyses. For hits, regions in middle occipital, middle temporal, and posterior parietal cortex linearly modulated their activity according to the number of related encoded items. For false alarms, visual, parietal, and hippocampal regions were modulated as a function of the encoded set size. The present results are consistent with prior work in that the neural regions supporting veridical memory also contribute to false memory for related information. The results also reveal that these regions respond to the degree of relatedness among similar items, and implicate perceptual and constructive processes in gist-based false memory. PMID:22155331
[Method of file sorting for mini- and microcomputers].
Chau, N; Legras, B; Benamghar, L; Martin, J
1983-05-01
The authors describe a new file-sorting method belonging to the class of direct-addressing sorts. It makes use of a variant of the classical 'virtual memory' technique. It is particularly well suited to mini- and microcomputers that have a small core memory (32 K words, for example) and are fitted with a direct-access peripheral device, such as a disc unit. When the file to be sorted is medium-sized (a few thousand records), the program runs essentially inside core memory and the method consequently becomes very fast. This is important because most medical files handled in our laboratory are in this category. However, the method is also suitable for big computers and large files, and its implementation is easy. It does not require any magnetic tape unit, and it seems to us to be one of the fastest methods available.
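The exact method is not spelled out in the abstract, but a direct-addressing external sort of the kind described can be sketched as follows: stream records into disk-backed buckets chosen directly from the key range (the disk playing the 'virtual memory' role), then sort each small bucket in core. All names and parameters below are illustrative.

```python
import os, tempfile, pickle

def external_bucket_sort(records, key, n_buckets=16, lo=0, hi=1 << 16):
    """Sort a file larger than core memory: direct-address each record
    to one of n_buckets spill files by key range, then sort each
    (small) bucket in memory and concatenate."""
    tmp = tempfile.mkdtemp()
    files = [open(os.path.join(tmp, f"b{i}"), "ab+") for i in range(n_buckets)]
    width = (hi - lo + n_buckets - 1) // n_buckets
    for rec in records:                       # one sequential pass
        b = min((key(rec) - lo) // width, n_buckets - 1)
        pickle.dump(rec, files[b])
    out = []
    for f in files:                           # each bucket fits in core
        f.seek(0)
        bucket = []
        try:
            while True:
                bucket.append(pickle.load(f))
        except EOFError:
            pass
        f.close()
        out.extend(sorted(bucket, key=key))   # buckets are key-ordered
    return out

data = [{"id": i * 37 % 1000} for i in range(1000)]
result = external_bucket_sort(data, key=lambda r: r["id"], hi=1000)
print(result[0]["id"], result[-1]["id"])  # -> 0 999
```

Since buckets are disjoint key ranges in increasing order, concatenating the individually sorted buckets yields a fully sorted file with only one sequential read and write pass over the data.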
Electrically Variable Resistive Memory Devices
NASA Technical Reports Server (NTRS)
Liu, Shangqing; Wu, Nai-Juan; Ignatiev, Alex; Charlson, E. J.
2010-01-01
Nonvolatile electronic memory devices that store data in the form of electrical- resistance values, and memory circuits based on such devices, have been invented. These devices and circuits exploit an electrically-variable-resistance phenomenon that occurs in thin films of certain oxides that exhibit the colossal magnetoresistive (CMR) effect. It is worth emphasizing that, as stated in the immediately preceding article, these devices function at room temperature and do not depend on externally applied magnetic fields. A device of this type is basically a thin film resistor: it consists of a thin film of a CMR material located between, and in contact with, two electrical conductors. The application of a short-duration, low-voltage current pulse via the terminals changes the electrical resistance of the film. The amount of the change in resistance depends on the size of the pulse. The direction of change (increase or decrease of resistance) depends on the polarity of the pulse. Hence, a datum can be written (or a prior datum overwritten) in the memory device by applying a pulse of size and polarity tailored to set the resistance at a value that represents a specific numerical value. To read the datum, one applies a smaller pulse - one that is large enough to enable accurate measurement of resistance, but small enough so as not to change the resistance. In writing, the resistance can be set to any value within the dynamic range of the CMR film. Typically, the value would be one of several discrete resistance values that represent logic levels or digits. Because the number of levels can exceed 2, a memory device of this type is not limited to binary data. Like other memory devices, devices of this type can be incorporated into a memory integrated circuit by laying them out on a substrate in rows and columns, along with row and column conductors for electrically addressing them individually or collectively.
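The write/read protocol described maps naturally onto a small behavioral model: large pulses shift the resistance by an amount set by pulse size, with polarity giving direction and the value clamped to the film's dynamic range, while sub-threshold pulses only sense the state. All constants below are invented for illustration; they are not measured device parameters.

```python
class ResistiveCell:
    """Toy behavioral model of a variable-resistance memory cell."""
    R_MIN, R_MAX = 100.0, 1000.0      # dynamic range, ohms
    WRITE_THRESHOLD = 1.0             # volts; below this, reads are safe

    def __init__(self):
        self.r = self.R_MIN

    def pulse(self, volts):
        """A large pulse shifts the resistance (sign sets direction);
        a small pulse merely senses the current state."""
        if abs(volts) >= self.WRITE_THRESHOLD:
            self.r = min(self.R_MAX, max(self.R_MIN, self.r + 50.0 * volts))
        return self.r

cell = ResistiveCell()
cell.pulse(4.0)          # write: 100 + 200 -> 300 ohms
cell.pulse(-2.0)         # overwrite: 300 - 100 -> 200 ohms
print(cell.pulse(0.1))   # read pulse, state undisturbed -> 200.0
```

Because the resistance can rest at any of several distinct levels within the dynamic range, one cell can encode more than one bit, which is the multilevel property the abstract highlights.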
Streaming simplification of tetrahedral meshes.
Vo, Huy T; Callahan, Steven P; Lindstrom, Peter; Pascucci, Valerio; Silva, Cláudio T
2007-01-01
Unstructured tetrahedral meshes are commonly used in scientific computing to represent scalar, vector, and tensor fields in three dimensions. Visualization of these meshes can be difficult to perform interactively due to their size and complexity. By reducing the size of the data, we can accomplish real-time visualization necessary for scientific analysis. We propose a two-step approach for streaming simplification of large tetrahedral meshes. Our algorithm arranges the data on disk in a streaming, I/O-efficient format that allows coherent access to the tetrahedral cells. A quadric-based simplification is sequentially performed on small portions of the mesh in-core. Our output is a coherent streaming mesh which facilitates future processing. Our technique is fast, produces high quality approximations, and operates out-of-core to process meshes too large for main memory.
Attar, Nada; Schneps, Matthew H; Pomplun, Marc
2016-10-01
An observer's pupil dilates and constricts in response to variables such as ambient and focal luminance, cognitive effort, the emotional stimulus content, and working memory load. The pupil's memory load response is of particular interest, as it might be used for estimating observers' memory load while they are performing a complex task, without adding an interruptive and confounding memory test to the protocol. One important task in which working memory's involvement is still being debated is visual search, and indeed a previous experiment by Porter, Troscianko, and Gilchrist (Quarterly Journal of Experimental Psychology, 60, 211-229, 2007) analyzed observers' pupil sizes during search to study this issue. These authors found that pupil size increased over the course of the search, and they attributed this finding to accumulating working memory load. However, since the pupil response is slow and does not depend on memory load alone, this conclusion is rather speculative. In the present study, we estimated working memory load in visual search during the presentation of intermittent fixation screens, thought to induce a low, stable level of arousal and cognitive effort. Using standard visual search and control tasks, we showed that this paradigm reduces the influence of non-memory-related factors on pupil size. Furthermore, we found an early increase in working memory load to be associated with more efficient search, indicating a significant role of working memory in the search process.
Starc, Martina; Anticevic, Alan; Repovš, Grega
2017-05-01
Pupillometry provides an accessible option to track working memory processes with high temporal resolution. Several studies showed that pupil size increases with the number of items held in working memory; however, no study has explored whether pupil size also reflects the quality of working memory representations. To address this question, we used a spatial working memory task to investigate the relationship of pupil size with spatial precision of responses and indicators of reliance on generalized spatial categories. We asked 30 participants (15 female, aged 19-31) to remember the position of targets presented at various locations along a hidden radial grid. After a delay, participants indicated the remembered location with a high-precision joystick providing a parametric measure of trial-to-trial accuracy. We recorded participants' pupil dilations continuously during task performance. Results showed a significant relation between pupil dilation during preparation/early encoding and the precision of responses, possibly reflecting the attentional resources devoted to memory encoding. In contrast, pupil dilation at late maintenance and response predicted larger shifts of responses toward prototypical locations, possibly reflecting larger reliance on categorical representation. On an intraindividual level, smaller pupil dilations during encoding predicted larger dilations during late maintenance and response. On an interindividual level, participants relying more on categorical representation also produced larger precision errors. The results confirm the link between pupil size and the quality of spatial working memory representation. They suggest compensatory strategies of spatial working memory performance: loss of precise spatial representation likely increases reliance on generalized spatial categories. © 2017 Society for Psychophysiological Research.
Efficient frequent pattern mining algorithm based on node sets in cloud computing environment
NASA Astrophysics Data System (ADS)
Billa, V. N. Vinay Kumar; Lakshmanna, K.; Rajesh, K.; Reddy, M. Praveen Kumar; Nagaraja, G.; Sudheer, K.
2017-11-01
The ultimate goal of data mining is to extract hidden information useful for decision making from the large databases collected by an organization. Data mining involves many tasks performed during this process; mining frequent itemsets is one of the most important for transactional databases. These databases hold data at a very large scale, and mining them consumes physical memory and time in proportion to the size of the database. A frequent pattern mining algorithm is considered efficient only if it requires little memory and time to mine the frequent itemsets from a given large database. With these points in mind, in this thesis we propose a system that mines frequent itemsets in a way optimized for both memory and time, using cloud computing to parallelize the process and to provide the application as a service. The complete framework uses FIN, a proven, efficient algorithm that works on Nodesets and a POC (pre-order coding) tree. To evaluate the performance of the system, we conducted experiments comparing the efficiency of the same algorithm applied standalone and in a cloud computing environment on a real-world data set of traffic accidents. The results show that the memory consumption and execution time of the proposed system are much lower than those of the standalone system.
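FIN's Nodeset and POC-tree machinery is too involved for a short sketch, but the underlying task (finding all itemsets whose support meets a threshold) can be illustrated with a classic level-wise Apriori-style miner. This is a generic stand-in, not the FIN algorithm itself:

```python
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Level-wise frequent itemset mining: an itemset is frequent if it
    appears in at least min_support transactions (Apriori-style sketch)."""
    transactions = [frozenset(t) for t in transactions]
    items = {i for t in transactions for i in t}
    result = {}
    k, candidates = 1, [frozenset([i]) for i in sorted(items)]
    while candidates:
        frequent = []
        for c in candidates:
            support = sum(1 for t in transactions if c <= t)
            if support >= min_support:
                result[c] = support
                frequent.append(c)
        # join step: combine frequent k-itemsets into (k+1)-item candidates
        candidates = list({a | b for a, b in combinations(frequent, 2)
                           if len(a | b) == k + 1})
        k += 1
    return result

db = [("a", "b", "c"), ("a", "b"), ("a", "c"), ("b", "c"), ("a", "b", "c")]
freq = frequent_itemsets(db, min_support=3)
```

In a cloud deployment of the kind the abstract describes, the transaction database would be partitioned across workers and the support counting step parallelized.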
Loewenstein, Yaniv; Portugaly, Elon; Fromer, Menachem; Linial, Michal
2008-07-01
UPGMA (average linking) is probably the most popular algorithm for hierarchical data clustering, especially in computational biology. However, UPGMA requires the entire dissimilarity matrix in memory. Due to this prohibitive requirement, UPGMA is not scalable to very large datasets. We present a novel class of memory-constrained UPGMA (MC-UPGMA) algorithms. Given any practical memory size constraint, this framework guarantees the correct clustering solution without explicitly requiring all dissimilarities in memory. The algorithms are general and applicable to any dataset. We present a data-dependent characterization of hardness and clustering efficiency. The presented concepts are applicable to any agglomerative clustering formulation. We apply our algorithm to the entire collection of protein sequences, to automatically build a comprehensive evolutionary-driven hierarchy of proteins from sequence alone. The newly created tree captures protein families better than state-of-the-art large-scale methods such as CluSTr, ProtoNet4 or single-linkage clustering. We demonstrate that leveraging the entire mass embodied in all sequence similarities allows us to significantly improve on current protein family clusterings, which are unable to directly tackle the sheer mass of this data. Furthermore, we argue that non-metric constraints are an inherent complexity of the sequence space and should not be overlooked. The robustness of UPGMA allows significant improvement, especially for multidomain proteins, and for large or divergent families. A comprehensive tree built from all UniProt sequence similarities, together with navigation and classification tools, will be made available as part of the ProtoNet service. A C++ implementation of the algorithm is available on request.
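The memory-constrained variants are beyond a short sketch, but the plain in-memory UPGMA recurrence they preserve (repeatedly merge the two closest clusters, updating distances by the size-weighted average) can be illustrated. All names here are ours:

```python
def upgma(dist, labels):
    """Plain in-memory UPGMA (average linkage). Unlike MC-UPGMA,
    this holds the full dissimilarity matrix `dist` in memory."""
    clusters = {i: (lab, 1) for i, lab in enumerate(labels)}  # id -> (tree, size)
    d = {(i, j): dist[i][j] for i in clusters for j in clusters if i < j}
    nxt = len(labels)  # id for the next merged cluster
    while len(clusters) > 1:
        i, j = min(d, key=d.get)                 # closest pair
        (li, ni), (lj, nj) = clusters[i], clusters[j]
        for k in list(clusters):
            if k in (i, j):
                continue
            # average-linkage update: size-weighted mean of distances to i and j
            dik = d.pop((min(i, k), max(i, k)))
            djk = d.pop((min(j, k), max(j, k)))
            d[(min(nxt, k), max(nxt, k))] = (ni * dik + nj * djk) / (ni + nj)
        del clusters[i], clusters[j], d[(i, j)]
        clusters[nxt] = ((li, lj), ni + nj)
        nxt += 1
    ((tree, _),) = clusters.values()
    return tree

tree = upgma([[0, 2, 8], [2, 0, 8], [8, 8, 0]], ["A", "B", "C"])
```

The MC-UPGMA contribution is precisely to compute the same merges while streaming `d` from disk under a fixed memory budget.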
NASA Astrophysics Data System (ADS)
Cook, H. M.; Bilsley, N. A.
2015-12-01
As the demand for introductory earth science classes rises at educational institutions, large class sizes place strain on the educator's time and ability to offer extensive project-based assignments. As a result, exams covering a broad spectrum of material are more heavily weighted in students' grades. Students often struggle on the first exam, as they attempt to retain a large amount of information from several different topics, while having no exposure to the type of questions that will be asked. This frequently leads to a large dropout rate early in the academic term, or at least a sense of discouragement and stress among struggling students. To better prepare students for a broad scope exam, a review activity modelled after the traditional Milton Bradley "Memory" game was developed to remind students of what would be covered on the exam, prepare them for the style of questions that may be asked, as well as provide a fun, interactive, and educational activity. The Earth Science Memory Game was developed to have interchangeable sets to cover a broad range of topics and thus also be reusable for the duration of the course. Example game sets presented include, but are not limited to, the scientific method, minerals, rocks, topographic maps, tectonics, geologic structures, volcanoes, and weather. The Earth Science Memory Game not only provides an effective review tool to improve success rates on broad scope exams, but is also customizable by the instructor, reusable, and easily constructed from common office supplies.
Master-equation approach to the study of phase-change processes in data storage media
NASA Astrophysics Data System (ADS)
Blyuss, K. B.; Ashwin, P.; Bassom, A. P.; Wright, C. D.
2005-07-01
We study the dynamics of crystallization in phase-change materials using a master-equation approach in which the state of the crystallizing material is described by a cluster size distribution function. A model is developed using the thermodynamics of the processes involved and representing the clusters of size two and greater as a continuum but clusters of size one (monomers) as a separate equation. We present some partial analytical results for the isothermal case and for large cluster sizes, but principally we use numerical simulations to investigate the model. We obtain results that are in good agreement with experimental data and the model appears to be useful for the fast simulation of reading and writing processes in phase-change optical and electrical memories.
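Master-equation descriptions of cluster dynamics of this kind are typically written in a Becker-Döring form. The following is a generic sketch; the abstract does not give the paper's exact rate terms, so the attachment and detachment coefficients $a_k$, $b_k$ are placeholders:

```latex
% Rate of change of the number n_k of clusters of size k >= 2,
% driven by the net flux J_k from size k to size k+1:
\frac{\mathrm{d}n_k}{\mathrm{d}t} = J_{k-1} - J_k,
\qquad J_k = a_k\, n_1 n_k - b_{k+1}\, n_{k+1},
% with the monomer population n_1 obeying its own balance equation,
% as in the model's separate treatment of clusters of size one:
\frac{\mathrm{d}n_1}{\mathrm{d}t} = -J_1 - \sum_{k \ge 1} J_k .
```

Treating sizes $k \ge 2$ as a continuum, as the abstract describes, turns the discrete flux balance into an advection-diffusion equation in cluster-size space.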
Physical principles and current status of emerging non-volatile solid state memories
NASA Astrophysics Data System (ADS)
Wang, L.; Yang, C.-H.; Wen, J.
2015-07-01
Today the influence of non-volatile solid-state memories on people's lives has become more prominent because of their non-volatility, low data latency, and high robustness. As a pioneering technology that is representative of non-volatile solid-state memories, flash memory has recently seen widespread application in many areas ranging from electronic appliances, such as cell phones and digital cameras, to external storage devices such as universal serial bus (USB) memory. Moreover, owing to its large storage capacity, it is expected that in the near future, flash memory will replace hard-disk drives as a dominant technology in the mass storage market, especially because of recently emerging solid-state drives. However, the rapid growth of global digital data has led to the need for flash memories to have larger storage capacity, thus requiring a further downscaling of the cell size. Such a miniaturization is expected to be extremely difficult because of the well-known scaling limit of flash memories. It is therefore necessary to either explore innovative technologies that can extend the areal density of flash memories beyond the scaling limits, or to vigorously develop alternative non-volatile solid-state memories including ferroelectric random-access memory, magnetoresistive random-access memory, phase-change random-access memory, and resistive random-access memory. In this paper, we review the physical principles of flash memories and their technical challenges that affect our ability to enhance the storage capacity. We then present a detailed discussion of novel technologies that can extend the storage density of flash memories beyond the commonly accepted limits. In each case, we subsequently discuss the physical principles of these new types of non-volatile solid-state memories as well as their respective merits and weaknesses when utilized for data storage applications.
Finally, we predict the future prospects for the aforementioned solid-state memories for the next generation of data-storage devices based on a comparison of their performance.
A Malicious Pattern Detection Engine for Embedded Security Systems in the Internet of Things
Oh, Doohwan; Kim, Deokho; Ro, Won Woo
2014-01-01
With the emergence of the Internet of Things (IoT), a large number of physical objects in daily life have been aggressively connected to the Internet. As the number of objects connected to networks increases, the security systems face a critical challenge due to the global connectivity and accessibility of the IoT. However, it is difficult to adapt traditional security systems to the objects in the IoT, because of their limited computing power and memory size. In light of this, we present a lightweight security system that uses a novel malicious pattern-matching engine. We limit the memory usage of the proposed system in order to make it work on resource-constrained devices. To mitigate performance degradation due to limitations of computation power and memory, we propose two novel techniques, auxiliary shifting and early decision. Through both techniques, we can efficiently reduce the number of matching operations on resource-constrained systems. Experiments and performance analyses show that our proposed system achieves a maximum speedup of 2.14 with an IoT object and provides scalable performance for a large number of patterns. PMID:25521382
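The paper's auxiliary-shifting and early-decision techniques are not reproduced here, but the basic multi-pattern matching task the engine performs can be sketched with a first-byte index, which keeps per-window work small on memory-constrained devices. The function names are ours:

```python
def build_index(patterns):
    """Group byte patterns by their first byte, so that each scan position
    only checks the few patterns that could possibly start there."""
    index = {}
    for p in patterns:
        index.setdefault(p[0], []).append(p)
    return index

def scan(data, index):
    """Report every (offset, pattern) occurrence of an indexed pattern."""
    hits = []
    for i, byte in enumerate(data):          # iterating bytes yields ints
        for p in index.get(byte, ()):
            if data[i:i + len(p)] == p:
                hits.append((i, p))
    return hits

idx = build_index([b"attack", b"tack", b"evil"])
hits = scan(b"an attack", idx)
```

A production engine would add the shift tables and early-exit logic the paper describes to skip positions entirely rather than test each one.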
An abstraction layer for efficient memory management of tabulated chemistry and flamelet solutions
NASA Astrophysics Data System (ADS)
Weise, Steffen; Messig, Danny; Meyer, Bernd; Hasse, Christian
2013-06-01
A large number of methods for simulating reactive flows exist; some of them, for example, directly use detailed chemical kinetics or use precomputed and tabulated flame solutions. Both approaches couple the research fields of computational fluid dynamics and chemistry tightly together, using either an online or offline approach to solve the chemistry domain. The offline approach usually involves a method of generating databases or so-called Lookup-Tables (LUTs). As these LUTs are extended to not only contain material properties but interactions between chemistry and turbulent flow, the number of parameters and thus dimensions increases. Given a reasonable discretisation, file sizes can increase drastically. The main goal of this work is to provide methods that handle large database files efficiently. A Memory Abstraction Layer (MAL) has been developed that handles requested LUT entries efficiently by splitting the database file into several smaller blocks. It keeps the total memory usage at a minimum using thin allocation methods and compression to minimise filesystem operations. The MAL has been evaluated using three different test cases. The first, rather generic one is a sequential reading operation on an LUT to evaluate the runtime behaviour as well as the memory consumption of the MAL. The second test case is a simulation of a non-premixed turbulent flame, the so-called HM1 flame, which is a well-known test case in the turbulent combustion community. The third test case is a simulation of a non-premixed laminar flame as described by McEnally in 1996 and Bennett in 2000. Using the previously developed solver 'flameletFoam' in conjunction with the MAL, memory consumption and the performance penalty introduced were studied. The total memory used while running a parallel simulation was reduced significantly while the CPU time overhead associated with the MAL remained low.
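The core idea of serving LUT entries from a bounded set of smaller blocks can be sketched as an LRU block cache. This is a minimal, hypothetical illustration (class and parameter names are ours, and the real MAL adds thin allocation and compression):

```python
from collections import OrderedDict

class BlockCache:
    """Serve lookups from a large table while keeping only a bounded
    number of fixed-size blocks in memory, evicting least recently used."""

    def __init__(self, table, block_size, max_blocks):
        self.table, self.block_size, self.max_blocks = table, block_size, max_blocks
        self.blocks = OrderedDict()   # block id -> block contents
        self.misses = 0

    def lookup(self, i):
        b = i // self.block_size
        if b not in self.blocks:
            self.misses += 1
            start = b * self.block_size
            self.blocks[b] = self.table[start:start + self.block_size]
            if len(self.blocks) > self.max_blocks:
                self.blocks.popitem(last=False)   # evict the LRU block
        self.blocks.move_to_end(b)                # mark block as recently used
        return self.blocks[b][i % self.block_size]

lut = list(range(1000))   # stands in for a large on-disk lookup table
cache = BlockCache(lut, block_size=100, max_blocks=2)
```

Peak memory is capped at `max_blocks * block_size` entries regardless of table size, which is the property the MAL exploits for very large LUT files.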
Bragdon, Laura B; Gibb, Brandon E; Coles, Meredith E
2018-06-19
Investigations of neuropsychological functioning in obsessive-compulsive disorder (OCD) have produced mixed results for deficits in executive functioning (EF), attention, and memory. One potential explanation for varied findings may relate to the heterogeneity of symptom presentations, and different clinical or neurobiological characteristics may underlie these different symptoms. We investigated differences in neuropsychological functioning between two symptoms groups, obsessing/checking (O/C) and symmetry/ordering (S/O), based on data suggesting an association with different motivations: harm avoidance and incompleteness, respectively. Ten studies (with 628 patients) were included and each investigation assessed at least one of 14 neuropsychological domains. The S/O domain demonstrated small, negative correlations with overall neuropsychological functioning, performance in EF, memory, visuospatial ability, cognitive flexibility, and verbal working memory. O/C symptoms demonstrated small, negative correlations with memory and verbal memory performance. A comparison of functioning between symptom groups identified large effect sizes showing that the S/O dimension was more strongly related to poorer neuropsychological performance overall, and in the domains of attention, visuospatial ability, and the subdomain of verbal working memory. Findings support existing evidence suggesting that different OCD symptoms, and their associated core motivations, relate to unique patterns of neuropsychological functioning, and, potentially dysfunction in different neural circuits. © 2018 Wiley Periodicals, Inc.
Feasibility of self-correcting quantum memory and thermal stability of topological order
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoshida, Beni, E-mail: rouge@mit.edu
2011-10-15
Recently, it has become apparent that the thermal stability of topologically ordered systems at finite temperature, as discussed in condensed matter physics, can be studied by addressing the feasibility of self-correcting quantum memory, as discussed in quantum information science. Here, with this correspondence in mind, we propose a model of quantum codes that may cover a large class of physically realizable quantum memory. The model is supported by a certain class of gapped spin Hamiltonians, called stabilizer Hamiltonians, with translation symmetries and a small number of ground states that does not grow with the system size. We show that the model does not work as self-correcting quantum memory due to a certain topological constraint on geometric shapes of its logical operators. This quantum coding theoretical result implies that systems covered or approximated by the model cannot have thermally stable topological order, meaning that systems cannot be stable against both thermal fluctuations and local perturbations simultaneously in two and three spatial dimensions.
Highlights:
- We define a class of physically realizable quantum codes.
- We determine their coding and physical properties completely.
- We establish the connection between topological order and self-correcting memory.
- We find they do not work as self-correcting quantum memory.
- We find they do not have thermally stable topological order.
Xiang, Lanyi; Wang, Wei; Xie, Wenfa
2016-01-01
Poly(vinylidene fluoride–trifluoroethylene) has been widely used as the dielectric of ferroelectric organic field-effect transistor (FE-OFET) nonvolatile memories (NVMs). Some critical issues in these FE-OFET NVMs, including low mobility and high operation voltage, should be resolved before considering their commercial application. In this paper, we demonstrate low-voltage-operating FE-OFET NVMs based on the ferroelectric terpolymer poly(vinylidene-fluoride-trifluoroethylene-chlorotrifluoroethylene) [P(VDF-TrFE-CTFE)], owing to its low coercive field. By applying an ultraviolet-ozone (UVO) treatment to modify the surface of the P(VDF-TrFE-CTFE) films, the growth mode of the pentacene film was changed, which improved the pentacene grain size and the morphology of the pentacene/P(VDF-TrFE-CTFE) interface. Thus, the mobility of the FE-OFET was significantly improved. As a result, a high-performance FE-OFET NVM, with a high mobility of 0.8 cm² V⁻¹ s⁻¹, a large memory window of 15.4~19.2, a good memory on/off ratio of 10³, reliable memory endurance over 100 cycles, and stable memory retention, was achieved at a low operation voltage of ±15 V. PMID:27824101
Identifying Memory Allocation Patterns in HEP Software
NASA Astrophysics Data System (ADS)
Kama, S.; Rauschmayr, N.
2017-10-01
HEP applications perform an excessive number of allocations/deallocations within short time intervals, which results in memory churn, poor locality and performance degradation. These issues have been known for a decade, but due to the complexity of software frameworks and billions of allocations for a single job, until recently no efficient mechanism was available to correlate these issues with source code lines. However, with the advent of the Big Data era, many tools and platforms are now available for large-scale memory profiling. This paper presents a prototype program developed to track and identify each single (de-)allocation. The CERN IT Hadoop cluster is used to compute key memory metrics, such as locality, variation, lifetime and density of allocations. The prototype further provides a web-based visualization back-end that allows the user to explore the results generated on the Hadoop cluster. Plotting these metrics for every single allocation over time gives new insight into an application's memory handling. For instance, it shows which algorithms cause which kinds of memory allocation patterns, which function flows cause how many short-lived objects, what the most commonly allocated sizes are, etc. The paper gives an insight into the prototype and shows profiling examples for the LHC reconstruction, digitization and simulation jobs.
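The per-allocation metrics named above (lifetime, common sizes) can be derived from a simple event log of allocations and frees. The log format below is our assumption for illustration; the actual prototype computes these at much larger scale on Hadoop:

```python
def allocation_metrics(events):
    """From an (op, address, size, time) event log, compute the lifetime of
    each freed allocation and the most frequently allocated size."""
    live, lifetimes, size_counts = {}, [], {}
    for op, addr, size, t in events:
        if op == "alloc":
            live[addr] = (size, t)
            size_counts[size] = size_counts.get(size, 0) + 1
        elif op == "free" and addr in live:
            _, t0 = live.pop(addr)
            lifetimes.append(t - t0)   # how long the block stayed allocated
    top_size = max(size_counts, key=size_counts.get)
    return lifetimes, top_size

log = [("alloc", 0x10, 64, 0), ("alloc", 0x20, 64, 1),
       ("free", 0x10, 0, 3), ("alloc", 0x30, 128, 4), ("free", 0x20, 0, 9)]
lifetimes, top_size = allocation_metrics(log)
```

Many short lifetimes at the same size are exactly the churn signature the paper correlates back to source code lines.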
Low-power resistive random access memory by confining the formation of conducting filaments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Yi-Jen; Lee, Si-Chen, E-mail: sclee@ntu.edu.tw; Shen, Tzu-Hsien
2016-06-15
Owing to their small physical size and low power consumption, resistive random access memory (RRAM) devices are potential candidates for future memory and logic applications in microelectronics. In this study, a new resistive switching material structure, TiOx/silver nanoparticles/TiOx/AlTiOx, fabricated between the fluorine-doped tin oxide bottom electrode and the indium tin oxide top electrode, is demonstrated. The device exhibits excellent memory performance, such as low operation voltage (<±1 V), low operation power, small variation in resistance, reliable data retention, and a large memory window. The current-voltage measurement shows that the conducting mechanism in the device at the high resistance state is via electron hopping between oxygen vacancies in the resistive switching material. When the device is switched to the low resistance state, conducting filaments are formed in the resistive switching material as a result of accumulation of oxygen vacancies. The bottom AlTiOx layer in the device structure limits the formation of conducting filaments; therefore, the current and power consumption of device operation are significantly reduced.
Lustig, Cindy; Flegal, Kristin E
2008-12-01
Cognitive training programs for older adults often result in improvements at the group level. However, there are typically large age and individual differences in the size of training benefits. These differences may be related to the degree to which participants implement the processes targeted by the training program. To test this possibility, we tested older adults in a memory-training procedure either under specific strategy instructions designed to encourage semantic, integrative encoding, or in a condition that encouraged time and attention to encoding but allowed participants to choose their own strategy. Both conditions improved the performance of old-old adults relative to an earlier study (D. Bissig & C. Lustig, 2007) and reduced self-reports of everyday memory errors. Performance in the strategy-instruction group was related to preexisting ability; performance in the strategy-choice group was not. The strategy-choice group performed better on a laboratory transfer test of recognition memory, and training performance was correlated with reduced everyday memory errors. Training programs that target participants' latent but inefficiently used abilities while allowing flexibility in bringing those abilities to bear may best promote effective training and transfer. Copyright (c) 2009 APA, all rights reserved.
Keeping an eye on the truth? Pupil size changes associated with recognition memory.
Heaver, Becky; Hutton, Sam B
2011-05-01
During recognition memory tests participants' pupils dilate more when they view old items compared to novel items. We sought to replicate this "pupil old/new effect" and to determine its relationship to participants' responses. We compared changes in pupil size during recognition when participants were given standard recognition memory instructions, instructions to feign amnesia, and instructions to report all items as new. Participants' pupils dilated more to old items compared to new items under all three instruction conditions. This finding suggests that the increase in pupil size that occurs when participants encounter previously studied items is not under conscious control. Given that pupil size can be reliably and simply measured, the pupil old/new effect may have potential in clinical settings as a means for determining whether patients are feigning memory loss.
Expert system shell to reason on large amounts of data
NASA Technical Reports Server (NTRS)
Giuffrida, Gionanni
1994-01-01
Current database management systems (DBMSs) do not provide a sophisticated environment for developing rule-based expert system applications. Some of the newer DBMSs come with some sort of rule mechanism; these are active and deductive database systems. However, neither is full-featured enough to support complete rule-based implementations. On the other hand, current expert system shells do not provide any link to external databases: all the data are kept in the system's working memory, which is maintained in main memory. For some applications the limited size of the available working memory can constrain development; typically these are applications that require reasoning over huge amounts of data that do not fit into the computer's main memory. Moreover, in some cases these data are already available in database systems and are continuously updated while the expert system is running. This paper proposes an architecture that employs knowledge discovery techniques to reduce the amount of data to be stored in main memory; in this architecture a standard DBMS is coupled with a rule-based language. The data are stored in the DBMS, and an interface between the two systems is responsible for inducing knowledge from the set of relations. The induced knowledge is then transferred to the rule-based language's working memory.
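The coupling described above (facts resident in a DBMS, rules pulling only what they need into working memory) can be sketched with SQLite. The table, data, and rule are illustrative assumptions, not from the paper:

```python
import sqlite3

# Facts live in the database rather than in the shell's working memory.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE readings (sensor TEXT, value REAL)")
db.executemany("INSERT INTO readings VALUES (?, ?)",
               [("t1", 20.5), ("t2", 91.0), ("t3", 88.5)])

def rule_overheat(conn, threshold=85.0):
    """IF a sensor reading exceeds the threshold THEN flag that sensor.
    The rule's condition is pushed down to the DBMS as a query, so only
    matching rows ever enter working memory."""
    rows = conn.execute(
        "SELECT sensor, value FROM readings WHERE value > ?", (threshold,))
    return [f"overheat:{sensor}" for sensor, _ in rows]

alerts = rule_overheat(db)
```

Because the DBMS does the filtering, working memory holds only the induced facts (here, two alerts) instead of the whole relation.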
NASA Astrophysics Data System (ADS)
Yadav, Manoj; Velampati, Ravi Shankar R.; Mandal, D.; Sharma, Rohit
2018-03-01
Colloidal synthesis and size control of nickel (Ni) nanocrystals (NCs) below 10 nm are reported using a microwave synthesis method. The synthesised colloidal NCs have been characterized using x-ray diffraction (XRD), transmission electron microscopy (TEM) and dynamic light scattering (DLS). XRD analysis highlights the face centred cubic crystal structure of the synthesised NCs. The sizes of NCs observed using TEM and DLS have a distribution between 2.6 nm and 10 nm. Furthermore, atomic force microscopy analysis of spin-coated NCs over a silicon dioxide surface has been carried out to identify an optimum spin condition that can be used for the fabrication of a metal oxide semiconductor (MOS) non-volatile memory (NVM) capacitor. Subsequently, the fabrication of a MOS NVM capacitor is reported to demonstrate the potential application of colloidally synthesized Ni NCs in NVM devices. We also report the capacitance-voltage (C-V) and capacitance-time (C-t) response of the fabricated MOS NVM capacitor. The C-V and C-t characteristics depict a large flat-band voltage shift (V_FB) and high retention time, respectively, which indicate that colloidal Ni NCs are excellent candidates for applications in next-generation NVM devices.
Scalable PGAS Metadata Management on Extreme Scale Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chavarría-Miranda, Daniel; Agarwal, Khushbu; Straatsma, TP
Programming models intended to run on exascale systems have a number of challenges to overcome, especially the sheer size of the system as measured by the number of concurrent software entities created and managed by the underlying runtime. It is clear from the size of these systems that any state maintained by the programming model has to be strictly sub-linear in size, in order not to overwhelm memory usage with pure overhead. A principal feature of Partitioned Global Address Space (PGAS) models is providing easy access to global-view distributed data structures. In order to provide efficient access to these distributed data structures, PGAS models must keep track of metadata such as where array sections are located with respect to processes/threads running on the HPC system. As PGAS models and applications become ubiquitous on very large trans-petascale systems, a key component of their performance and scalability will be efficient and judicious use of memory for model overhead (metadata) compared to application data. We present an evaluation of several strategies to manage PGAS metadata that exhibit different space/time tradeoffs. We use two real-world PGAS applications to capture metadata usage patterns and gain insight into their communication behavior.
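One end of the space/time tradeoff mentioned above is to make metadata implicit: for a regular block distribution, the owning process of any global index can be computed arithmetically, so no per-element lookup table is needed at all. This generic formula is our illustration, not one of the paper's specific strategies:

```python
def owner_of(index, total, nprocs):
    """Owner of a global array index under a regular block distribution.
    Metadata cost is O(1) (just the formula), not O(total)."""
    block = -(-total // nprocs)   # ceiling division: elements per process
    return index // block

# A 100-element global array distributed over 4 processes (25 each).
first_owner = owner_of(0, 100, 4)
last_owner = owner_of(99, 100, 4)
```

Irregular or dynamically balanced distributions cannot be captured by a closed formula, which is where explicit (and memory-hungry) metadata tables and the strategies the paper evaluates come in.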
Pauls, Franz; Petermann, Franz; Lepach, Anja Christina
2015-01-01
At present, little is still known about the link between depression, memory and executive functioning. This study examined whether there are memory-related impairments in depressed patients and whether the size of such deficits depends on the age group and on specific types of cognitive measures. Memory performances of 215 clinically depressed patients were compared to the data of a matched control sample. Regression analyses were performed to determine the extent to which executive dysfunctions contributed to episodic memory impairments. When compared with healthy controls, significantly lower episodic memory and executive functioning performances were found for depressed patients of all age groups. Effect sizes appeared to vary across different memory and executive functioning measures. The extent to which executive dysfunctions could explain episodic memory impairments varied depending on the type of measure examined. These findings emphasise the need to consider memory-related functioning of depressed patients in the context of therapeutic treatments.
A depth-first search algorithm to compute elementary flux modes by linear programming
2014-01-01
Background The decomposition of complex metabolic networks into elementary flux modes (EFMs) provides a useful framework for exploring reaction interactions systematically. Generating a complete set of EFMs for large-scale models, however, is near impossible. Even for moderately-sized models (<400 reactions), existing approaches based on the Double Description method must iterate through a large number of combinatorial candidates, thus imposing an immense processor and memory demand. Results Based on an alternative elementarity test, we developed a depth-first search algorithm using linear programming (LP) to enumerate EFMs in an exhaustive fashion. Constraints can be introduced to directly generate a subset of EFMs satisfying the set of constraints. The depth-first search algorithm has a constant memory overhead. Using flux constraints, a large LP problem can be massively divided and parallelized into independent sub-jobs for deployment into computing clusters. Since the sub-jobs do not overlap, the approach scales to utilize all available computing nodes with minimal coordination overhead or memory limitations. Conclusions The speed of the algorithm was comparable to efmtool, a mainstream Double Description method, when enumerating all EFMs; the attrition power gained from performing flux feasibility tests offsets the increased computational demand of running an LP solver. Unlike the Double Description method, the algorithm enables accelerated enumeration of all EFMs satisfying a set of constraints. PMID:25074068
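The depth-first structure described above (branch on including or excluding each reaction, prune any partial assignment that fails a feasibility test, keep only the current path in memory) can be sketched generically. Here a toy predicate stands in for the paper's LP-based elementarity test:

```python
def dfs_enumerate(n, feasible, path=()):
    """Depth-first enumeration over include/exclude decisions for n items.
    Memory use is O(n) for the current path; infeasible branches are pruned
    as soon as the partial assignment fails the test."""
    if not feasible(path):
        return
    if len(path) == n:
        yield path
        return
    yield from dfs_enumerate(n, feasible, path + (1,))   # include item
    yield from dfs_enumerate(n, feasible, path + (0,))   # exclude item

# Toy constraint standing in for the LP test: at most two "reactions" active.
sols = list(dfs_enumerate(4, lambda p: sum(p) <= 2))
```

Because disjoint subtrees share no state, each can be handed to a separate compute node, which is the property the paper uses to parallelize across clusters.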
Rheology of granular materials composed of crushable particles.
Nguyen, Duc-Hanh; Azéma, Émilien; Sornay, Philippe; Radjaï, Farhang
2018-04-11
We investigate sheared granular materials composed of crushable particles by means of contact dynamics simulations and the bonded-cell model for particle breakage. Each particle is paved by irregular cells interacting via cohesive forces. In each simulation, the ratio of the internal cohesion of particles to the confining pressure, the relative cohesion, is kept constant and the packing is subjected to biaxial shearing. The particles can break into two or more fragments when the internal cohesive forces are overcome by the action of compressive force chains between particles. The particle size distribution evolves during shear as the particles continue to break. We find that the breakage process is highly inhomogeneous both in the fragment sizes and their locations inside the packing. In particular, a number of large particles never break whereas a large number of particles are fully shattered. As a result, the packing keeps the memory of its initial particle size distribution, whereas a power-law distribution is observed for particles of intermediate size due to consecutive fragmentation events whereby the memory of the initial state is lost. Due to growing polydispersity, dense shear bands are formed inside the packings and the usual dilatant behavior is reduced or cancelled. Hence, the stress-strain curve no longer passes through a peak stress, and a progressive monotonic evolution towards a pseudo-steady state is observed instead. We find that the crushing rate is controlled by the confining pressure. We also show that the shear strength of the packing is well expressed in terms of contact anisotropies and force anisotropies. The force anisotropy increases while the contact orientation anisotropy declines for increasing internal cohesion of the particles. These two effects compensate each other so that the shear strength is nearly independent of the internal cohesion of particles.
Stream-based Hebbian eigenfilter for real-time neuronal spike discrimination
2012-01-01
Background Principal component analysis (PCA) has been widely employed for automatic neuronal spike sorting. Calculating principal components (PCs) is computationally expensive and requires complex numerical operations and large memory resources, so substantial hardware resources are needed for hardware implementations of PCA. The general Hebbian algorithm (GHA) was proposed for calculating the PCs of neuronal spikes in our previous work, eliminating the need for the computationally expensive covariance analysis and eigenvalue decomposition of conventional PCA algorithms. However, large memory resources are still inherently required for storing a large volume of aligned spikes for training the PCs. Such a large memory consumes substantial hardware resources and contributes significant power dissipation, making GHA difficult to implement in portable or implantable multi-channel recording micro-systems. Method In this paper, we present a new algorithm for PCA-based spike sorting based on GHA, namely the stream-based Hebbian eigenfilter, which eliminates the inherent memory requirements of GHA while keeping the accuracy of spike sorting by utilizing the pseudo-stationarity of neuronal spikes. Because of the reduction in hardware storage requirements, the proposed algorithm leads to ultra-low hardware resource usage and power consumption in hardware implementations, which is critical for future multi-channel micro-systems. Both clinical and synthetic neural recording data sets were employed to evaluate the accuracy of the stream-based Hebbian eigenfilter. The performance of spike sorting using the stream-based eigenfilter and the computational complexity of the eigenfilter were rigorously evaluated and compared with conventional PCA algorithms.
Field-programmable gate arrays (FPGAs) were employed to implement the proposed algorithm, evaluate the hardware implementations and demonstrate the reduction in both power consumption and hardware memory achieved by streaming computing. Results and discussion Results demonstrate that the stream-based eigenfilter can achieve the same accuracy and is 10 times more computationally efficient when compared with conventional PCA algorithms. Hardware evaluations show that 90.3% of logic resources, 95.1% of power consumption and 86.8% of computing latency can be reduced by the stream-based eigenfilter when compared with PCA hardware. By utilizing the streaming method, 92% of memory resources and 67% of power consumption can be saved when compared with the direct implementation of GHA. Conclusion The stream-based Hebbian eigenfilter presents a novel approach to enable real-time spike sorting with reduced computational complexity and hardware costs. This new design can be further utilized for multi-channel neuro-physiological experiments or chronic implants. PMID:22490725
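The streaming character of GHA is easy to see in code. The following is a hedged numpy sketch of Sanger's rule on synthetic data (the dimensions, learning-rate schedule, and data model are illustrative assumptions, not the paper's implementation): no covariance matrix and no buffer of past samples is ever kept, which is exactly the property the eigenfilter exploits.

```python
import numpy as np

def sanger_update(W, x, lr):
    """One streaming GHA (Sanger's rule) step. W is (k, d), holding k
    principal-component estimates; x is a single (d,) sample. No
    covariance matrix or buffer of past samples is ever stored."""
    y = W @ x  # project the sample onto the current component estimates
    # the lower-triangular term decorrelates component i from components < i
    W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W

rng = np.random.default_rng(0)
d, k, n = 8, 2, 5000
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))     # random orthonormal basis
scales = np.array([5.0, 2.0] + [0.3] * (d - 2))  # two dominant directions
X = (rng.normal(size=(n, d)) * scales) @ Q.T     # synthetic data stream

W = rng.normal(size=(k, d)) * 0.1
for lr in (0.005, 0.002, 0.001):   # a few passes with a decaying rate
    for x in X:
        W = sanger_update(W, x, lr)

# the first learned component should align with the dominant direction Q[:, 0]
align = abs(W[0] @ Q[:, 0]) / np.linalg.norm(W[0])
```

Each update touches only the current sample and the (k, d) weight matrix, so the memory footprint is constant in the number of spikes — the property that lets the hardware version drop the spike buffer.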
Decision-related factors in pupil old/new effects: Attention, response execution, and false memory.
Brocher, Andreas; Graf, Tim
2017-07-28
In this study, we investigate the effects of decision-related factors on recognition memory in pupil old/new paradigms. In Experiment 1, we used an old/new paradigm with words and pseudowords and participants made lexical decisions during recognition rather than old/new decisions. Importantly, participants were instructed to focus on the nonword-likeness of presented items, not their word-likeness. We obtained no old/new effects. In Experiment 2, participants discriminated old from new words and old from new pseudowords during recognition, and they did so as quickly as possible. We found old/new effects for both words and pseudowords. In Experiment 3, we used materials and an old/new design known to elicit a large number of incorrect responses. For false alarms ("old" response for a new word), we found larger pupils than for correctly classified new items, starting at the point at which response execution was allowed (2750 ms post-stimulus onset). In contrast, pupil size for misses ("new" response for an old word) was statistically indistinguishable from pupil size in correct rejections. Taken together, our data suggest that pupil old/new effects result more from the intentional use of memory than from its automatic use. Copyright © 2017 Elsevier Ltd. All rights reserved.
Um, Ki Sung; Kwak, Yun Sik; Cho, Hune; Kim, Il Kon
2005-11-01
A basic assumption of the Health Level Seven (HL7) protocol is 'no limitation of message length'. However, most existing commercial HL7 interface engines do limit message length because they use the string array method, which runs in main memory during HL7 message parsing. Specifically, messages with image and multimedia data create a long string array and thus cause critical and fatal failures in the computer system. Consequently, HL7 messages cannot handle the image and multimedia data necessary in modern medical records. This study aims to solve this problem with the 'streaming algorithm' method. This new method for HL7 message parsing applies a character-stream object that processes data character by character between main memory and the hard disk, with the consequence that the processing load on main memory is alleviated. The main functions of this new engine are generating, parsing, validating, browsing, sending, and receiving HL7 messages. Also, the engine can parse and generate XML-formatted HL7 messages. This new HL7 engine successfully exchanged HL7 messages containing 10-megabyte images and discharge summary information between two university hospitals.
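The character-stream idea can be illustrated in a few lines. This is a hedged sketch, not the engine described above (the function names and buffer policy are assumptions of this illustration): it splits HL7 segments on the standard `\r` separator while buffering only the current segment, so peak memory is bounded by the longest segment rather than the whole message.

```python
import io

def iter_segments(stream, seg_sep="\r", chunk_size=1024):
    """Yield HL7 segments one at a time from a character stream.
    Only the current segment is buffered, so peak memory is bounded
    by the longest segment, not by the whole message."""
    buf = []
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            break
        for ch in chunk:
            if ch == seg_sep:
                if buf:
                    yield "".join(buf)
                    buf = []
            else:
                buf.append(ch)
    if buf:
        yield "".join(buf)   # trailing segment without a final separator

# a tiny two-segment ADT message (content is illustrative)
msg = "MSH|^~\\&|HIS|HOSP|LIS|LAB|200511011200||ADT^A01|123|P|2.4\rPID|1||12345||DOE^JOHN\r"
segments = list(iter_segments(io.StringIO(msg)))
fields = segments[1].split("|")   # "PID", "1", "", "12345", ...
```

In a real engine the stream would wrap a socket or a file on disk rather than an in-memory string, and large embedded payloads would be spilled to temporary files segment by segment instead of being joined into one string.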
Kynast, Jana; Lampe, Leonie; Luck, Tobias; Frisch, Stefan; Arelin, Katrin; Hoffmann, Karl-Titus; Loeffler, Markus; Riedel-Heller, Steffi G; Villringer, Arno; Schroeter, Matthias L
2018-06-01
Age-related white matter hyperintensities (WMH) are a manifestation of white matter damage seen on magnetic resonance imaging (MRI). They are related to vascular risk factors and cognitive impairment. This study investigated the cognitive profile at different stages of WMH in a large community-dwelling sample; 849 subjects aged 21 to 79 years were classified on the 4-stage Fazekas scale according to hyperintense lesions seen on individual T2-weighted fluid-attenuated inversion recovery MRI scans. The evaluation of cognitive functioning included seven domains of cognitive performance and five domains of subjective impairment, as proposed by the DSM-5. For the first time, the impact of age-related WMH on Theory of Mind was investigated. Differences between Fazekas groups were analyzed non-parametrically and effect sizes were computed. Effect sizes revealed a slight overall cognitive decline in Fazekas groups 1 and 2 relative to healthy subjects. Fazekas group 3 presented substantial decline in social cognition, attention and memory, although characterized by a high inter-individual variability. WMH groups reported subjective cognitive decline. We demonstrate that extensive WMH are associated with specific impairment in attention, memory, social cognition, and subjective cognitive performance. The detailed neuropsychological characterization of WMH offers new therapeutic possibilities for those affected by vascular cognitive decline.
a Cache Design Method for Spatial Information Visualization in 3d Real-Time Rendering Engine
NASA Astrophysics Data System (ADS)
Dai, X.; Xiong, H.; Zheng, X.
2012-07-01
A well-designed cache system has a positive impact on a 3D real-time rendering engine, and as the amount of visualization data grows the effect becomes more obvious. Caches are the basis on which the engine smoothly browses data that is out of core memory or comes from the internet. In this article, a new kind of cache based on multiple threads and large files is introduced. The memory cache consists of three parts: the rendering cache, the pre-rendering cache and the elimination cache. The rendering cache stores the data that is being rendered in the engine; the data that is dispatched according to the position of the viewpoint in the horizontal and vertical directions is stored in the pre-rendering cache; the data that is eliminated from the previous caches is stored in the elimination cache and is then written to the disk cache. Multiple large files are used in the disk cache. When a disk cache file reaches the limit length (128 MB is the top in the experiment), no item is eliminated from the file; instead, a new large cache file is created. If the number of large files exceeds the pre-set maximum, the earliest file is deleted from the disk. In this way, only one file is open for writing and reading while the rest are read-only, so the disk cache can be used in a highly asynchronous way. The size of each large file is limited so that it can be mapped into core memory to save loading time. Multiple threads are used to update the cache data: to load data into the rendering cache as soon as possible for rendering, to load data into the pre-rendering cache for rendering the next few frames, and to load data into the elimination cache which is not needed for the moment. In our experiment, two threads are designed. The first thread organizes the memory cache according to the viewpoint and maintains two lists: the adding list, which indexes the data that should be loaded into the pre-rendering cache immediately, and the deleting list, which indexes the data that is no longer visible in the rendered scene and should be moved to the elimination cache. The other thread moves the data between the memory and disk caches according to the adding and deleting lists, creates download requests when data is indexed in the adding list but cannot be found in either the memory cache or the disk cache, and moves elimination-cache data to the disk cache when the adding and deleting lists are empty. The cache designed as described above proved reliable and efficient in our experiment: data loading time and file I/O time decreased sharply, especially as the rendering data gets larger.
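A toy version of the three-part memory cache plus a disk tier might look as follows; the class name, capacities, and promotion policy are illustrative assumptions made here, not the engine's actual code.

```python
from collections import OrderedDict

class TieredTileCache:
    """Toy three-part memory cache (rendering / pre-rendering /
    elimination) backed by a dict standing in for the large-file
    disk cache."""

    def __init__(self, render_cap=64, prefetch_cap=128, evict_cap=32):
        self.rendering = OrderedDict()   # tiles used by the current frame
        self.prerender = OrderedDict()   # tiles near the viewpoint
        self.eliminate = OrderedDict()   # tiles queued for the disk cache
        self.disk = {}                   # stand-in for the large-file cache
        self.caps = (render_cap, prefetch_cap, evict_cap)

    def fetch(self, key, loader):
        # promotion order: rendering -> pre-rendering -> elimination -> disk -> load
        for tier in (self.rendering, self.prerender, self.eliminate):
            if key in tier:
                data = tier.pop(key)
                break
        else:
            data = self.disk.pop(key) if key in self.disk else loader(key)
        self.rendering[key] = data
        if len(self.rendering) > self.caps[0]:
            old_key, old_data = self.rendering.popitem(last=False)
            self.eliminate[old_key] = old_data   # oldest tile leaves the frame
        while len(self.eliminate) > self.caps[2]:
            k, v = self.eliminate.popitem(last=False)
            self.disk[k] = v                     # spill to the disk cache
        return data

    def prefetch(self, key, loader):
        # a background thread would fill this tier for the next few frames
        if key not in self.rendering and key not in self.prerender:
            self.prerender[key] = loader(key)
            while len(self.prerender) > self.caps[1]:
                self.prerender.popitem(last=False)

cache = TieredTileCache(render_cap=2, prefetch_cap=4, evict_cap=4)
for tile in ("a", "b", "c"):
    cache.fetch(tile, loader=str.upper)
# "a" has been pushed out of the rendering tier into the elimination tier
```

In the real engine the adding/deleting lists and the disk writes run on separate threads, and the disk tier is a set of memory-mapped 128 MB files rather than a dict; the tier structure and spill direction are the same.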
Memory-Augmented Cellular Automata for Image Analysis.
1978-11-01
case in which each cell has memory size proportional to the logarithm of the input size, showing the increased capabilities of these machines for executing a variety of basic image analysis and recognition tasks. (Author)
Jung, Ji Hyung; Kim, Sunghwan; Kim, Hyeonjung; Park, Jongnam; Oh, Joon Hak
2015-10-07
Nano-floating gate memory (NFGM) devices are transistor-type memory devices that use nanostructured materials as charge trap sites. They have recently attracted a great deal of attention due to their excellent performance, capability for multilevel programming, and suitability as platforms for integrated circuits. Herein, novel NFGM devices have been fabricated using semiconducting cobalt ferrite (CoFe2O4) nanoparticles (NPs) as charge trap sites and pentacene as a p-type semiconductor. Monodisperse CoFe2O4 NPs with different diameters have been synthesized by thermal decomposition and embedded in NFGM devices. The particle size effects on the memory performance have been investigated in terms of energy levels and particle-particle interactions. CoFe2O4 NP-based memory devices exhibit a large memory window (≈73.84 V), a high read current on/off ratio (read I_on/I_off) of ≈2.98 × 10^3, and excellent data retention. Fast switching behaviors are observed due to the exceptional charge trapping/release capability of CoFe2O4 NPs surrounded by the oleate layer, which acts as an alternative tunneling dielectric layer and simplifies the device fabrication process. Furthermore, the NFGM devices show excellent thermal stability, and flexible memory devices fabricated on plastic substrates exhibit remarkable mechanical and electrical stability. This study demonstrates a viable means of fabricating highly flexible, high-performance organic memory devices. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Anomalous annealing of floating gate errors due to heavy ion irradiation
NASA Astrophysics Data System (ADS)
Yin, Yanan; Liu, Jie; Sun, Youmei; Hou, Mingdong; Liu, Tianqi; Ye, Bing; Ji, Qinggang; Luo, Jie; Zhao, Peixiong
2018-03-01
Using the heavy ions provided by the Heavy Ion Research Facility in Lanzhou (HIRFL), the annealing of heavy-ion-induced floating gate (FG) errors in 34 nm and 25 nm NAND Flash memories has been studied. The single event upset (SEU) cross section of the FG and the evolution of the errors after irradiation are presented as functions of the ion linear energy transfer (LET) value, data pattern and feature size of the device. Different annealing rates for different ion LETs and different patterns are observed in the 34 nm and 25 nm memories. The variation with annealing time of the percentage of different error patterns in the 34 nm and 25 nm memories shows that the annealing of heavy-ion-induced FG errors takes place mainly in the cells directly hit under low-LET ion exposure, and also in other cells affected by the heavy ions when the ion LET is higher. The influence of multiple cell upsets (MCUs) on the annealing of FG errors is analyzed. MCUs with high error multiplicity, which account for the majority of the errors, can induce a large percentage of annealed errors.
Retest effects in working memory capacity tests: A meta-analysis.
Scharfen, Jana; Jansen, Katrin; Holling, Heinz
2018-06-15
The repeated administration of working memory capacity tests is common in clinical and research settings. For cognitive ability tests and different neuropsychological tests, meta-analyses have shown that they are prone to retest effects, which have to be accounted for when interpreting retest scores. Using a multilevel approach, this meta-analysis aims at showing the reproducibility of retest effects in working memory capacity tests for up to seven test administrations, and examines the impact of the length of the test-retest interval, test modality, equivalence of test forms and participant age on the size of retest effects. Furthermore, it is assessed whether the size of retest effects depends on the test paradigm. An extensive literature search revealed 234 effect sizes from 95 samples and 68 studies, in which healthy participants between 12 and 70 years repeatedly performed a working memory capacity test. Results yield a weighted average of g = 0.28 for retest effects from the first to the second test administration, and a significant increase in effect sizes was observed up to the fourth test administration. The length of the test-retest interval and publication year were found to moderate the size of retest effects. Retest effects differed between the paradigms of working memory capacity tests. These findings call for the development and use of appropriate experimental or statistical methods to address retest effects in working memory capacity tests.
General proactive interference and the N450 response.
Tays, William J; Dywan, Jane; Segalowitz, Sidney J
2009-10-25
Strategic repetition of verbal stimuli can effectively produce proactive interference (PI) effects in the Sternberg working memory task. Unique fronto-cortical activation to PI-eliciting letter probes has been interpreted as reflecting brain responses to PI. However, the use of only a small set of stimuli (e.g., letters and digits) requires constant repetition of stimuli in both PI and baseline trials, potentially creating a general PI effect in all conditions. We used event-related potentials to examine general PI effects by contrasting the interference-related frontal N450 response in two Sternberg tasks using a small versus large set size. We found that the N450 response differed significantly from baseline during the small set-size task only for response-conflict PI trials but not when PI was created solely from stimulus repetition. During the large set-size task N450 responses in both the familiarity-based and response-conflict PI conditions differed from baseline but not from each other. We conclude that the general stimulus repetition inherent in small set-size conditions can mask effects of familiarity-based PI and complicate the interpretation of any associated neural response.
Lawson, Chris A
2014-07-01
Three experiments with 81 3-year-olds (M = 3.62 years) examined the conditions that enable young children to use the sample size principle (SSP) of induction--the inductive rule that facilitates generalizations from large rather than small samples of evidence. In Experiment 1, children exhibited the SSP when exemplars were presented sequentially but not when exemplars were presented simultaneously. Results from Experiment 3 suggest that the advantage of sequential presentation is not due to the additional time to process the available input from the two samples but instead may be linked to better memory for specific individuals in the large sample. In addition, findings from Experiments 1 and 2 suggest that adherence to the SSP is mediated by the disparity between presented samples. Overall, these results reveal that the SSP appears early in development and is guided by basic cognitive processes triggered during the acquisition of input. Copyright © 2013 Elsevier Inc. All rights reserved.
Wang, Weijie; Loke, Desmond; Shi, Luping; Zhao, Rong; Yang, Hongxin; Law, Leong-Tat; Ng, Lung-Tat; Lim, Kian-Guan; Yeo, Yee-Chia; Chong, Tow-Chong; Lacaita, Andrea L.
2012-01-01
The quest for universal memory is driving the rapid development of memories with superior all-round capabilities in non-volatility, high speed, high endurance and low power. Phase-change materials are highly promising in this respect. However, their contradictory speed and stability properties present a key challenge towards this ambition. We reveal that as the device size decreases, the phase-change mechanism changes from the material inherent crystallization mechanism (either nucleation- or growth-dominated), to the hetero-crystallization mechanism, which resulted in a significant increase in PCRAM speeds. Reducing the grain size can further increase the speed of phase-change. Such grain size effect on speed becomes increasingly significant at smaller device sizes. Together with the nano-thermal and electrical effects, fast phase-change, good stability and high endurance can be achieved. These findings lead to a feasible solution to achieve a universal memory. PMID:22496956
Intelligent systems in the context of surrounding environment.
Wakeling, J; Bak, P
2001-11-01
We investigate the behavioral patterns of a population of agents, each controlled by a simple biologically motivated neural network model, when they are set in competition against each other in the minority model of Challet and Zhang. We explore the effects of changing agent characteristics, demonstrating that crowding behavior takes place among agents of similar memory, and show how this allows unique "rogue" agents with higher memory values to take advantage of a majority population. We also show that agents' analytic capability is largely determined by the size of the intermediary layer of neurons. In the context of these results, we discuss the general nature of natural and artificial intelligence systems, and suggest intelligence only exists in the context of the surrounding environment (embodiment).
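For readers unfamiliar with the minority model of Challet and Zhang, a minimal version can be sketched as follows. This is an illustrative toy (population size, memory length, and scoring details are assumptions of this sketch); the paper itself replaces these lookup-table agents with small neural networks.

```python
import numpy as np

def minority_game(n_agents=51, m=3, n_strategies=2, steps=500, seed=0):
    """Agents repeatedly choose side 0 or 1; those in the minority win.
    Each agent holds n_strategies random lookup tables over the 2**m
    possible histories of winning sides and plays whichever table has
    the best virtual score so far."""
    rng = np.random.default_rng(seed)
    n_hist = 2 ** m
    strat = rng.integers(0, 2, size=(n_agents, n_strategies, n_hist))
    scores = np.zeros((n_agents, n_strategies))
    history = int(rng.integers(0, n_hist))  # last m winning sides, bit-encoded
    attendance = []
    for _ in range(steps):
        best = scores.argmax(axis=1)        # each agent's best-scoring table
        actions = strat[np.arange(n_agents), best, history]
        n_ones = int(actions.sum())
        minority = int(n_ones < n_agents / 2)   # the side chosen by fewer agents
        attendance.append(n_ones)
        # virtual scoring: reward every table that predicted the minority side
        scores += (strat[:, :, history] == minority)
        history = ((history << 1) | minority) % n_hist
    return np.array(attendance)

att = minority_game()
volatility = att.var() / 51   # sigma^2 / N, the usual efficiency measure
```

Varying `m` across the population, as the study does, is what lets high-memory "rogue" agents exploit the crowding behavior of a low-memory majority.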
Does working memory load facilitate target detection?
Fruchtman-Steinbok, Tom; Kessler, Yoav
2016-02-01
Previous studies demonstrated that increasing working memory (WM) load delays performance of a concurrent task, by distracting attention and thus interfering with encoding and maintenance processes. The present study used a version of the change detection task with a target detection requirement during the retention interval. In contrast to the above prediction, target detection was faster following a larger set-size, specifically when presented shortly after the memory array (up to 400 ms). The effect of set-size on target detection was also evident when no memory retention was required. The set-size effect was also found using different modalities. Moreover, it was only observed when the memory array was presented simultaneously, but not sequentially. These results were explained by increased phasic alertness exerted by the larger visual display. The present study offers new evidence of ongoing attentional processes in the commonly-used change detection paradigm. Copyright © 2015 Elsevier B.V. All rights reserved.
Temporal production and visuospatial processing.
Benuzzi, Francesca; Basso, Gianpaolo; Nichelli, Paolo
2005-12-01
Current models of prospective timing hypothesize that estimated duration is influenced either by the attentional load or by the short-term memory requirements of a concurrent nontemporal task. In the present study, we addressed this issue with four dual-task experiments. In Exp. 1, the effect of memory load on both reaction time and temporal production was proportional to the number of items of a visuospatial pattern to hold in memory. In Exps. 2, 3, and 4, a temporal production task was combined with two visual search tasks involving either pre-attentive or attentional processing. Visual tasks interfered with temporal production: produced intervals were lengthened proportionally to the display size. In contrast, reaction times increased with display size only when a serial, effortful search was required. It appears that memory and perceptual set size, rather than nonspecific attentional or short-term memory load, can influence prospective timing.
Skalski, Linda M.; Towe, Sheri L.; Sikkema, Kathleen J.; Meade, Christina S.
2016-01-01
Background The most robust neurocognitive effect of marijuana use is memory impairment. Memory deficits are also high among persons living with HIV/AIDS, and marijuana is the most commonly used drug in this population. Yet research examining neurocognitive outcomes resulting from co-occurring marijuana and HIV is limited. Objective The primary objectives of this comprehensive review are to: (1) examine the literature on memory functioning in HIV-infected individuals; (2) examine the literature on memory functioning in marijuana users; (3) synthesize findings and propose a theoretical framework to guide future research. Method PubMed was searched for English publications 2000–2013. Twenty-two studies met inclusion criteria in the HIV literature, and 23 studies in the marijuana literature. Results Among HIV-infected individuals, memory deficits with medium to large effect sizes were observed. Marijuana users also demonstrated memory problems, but results were less consistent due to the diversity of samples. Conclusion A compensatory hypothesis, based on the cognitive aging literature, is proposed to provide a framework to explore the interaction between marijuana and HIV. There is some evidence that individuals infected with HIV recruit additional brain regions during memory tasks to compensate for HIV-related declines in neurocognitive functioning. Marijuana use causes impairment in similar brain systems, and thus it is hypothesized that the added neural strain of marijuana can exhaust neural resources, resulting in pronounced memory impairment. It will be important to test this hypothesis empirically, and future research priorities are discussed. PMID:27138170
A Cerebellar-model Associative Memory as a Generalized Random-access Memory
NASA Technical Reports Server (NTRS)
Kanerva, Pentti
1989-01-01
A versatile neural-net model is explained in terms familiar to computer scientists and engineers. It is called the sparse distributed memory, and it is a random-access memory for very long words (for patterns with thousands of bits). Its potential utility is the result of several factors: (1) a large pattern representing an object or a scene or a moment can encode a large amount of information about what it represents; (2) this information can serve as an address to the memory, and it can also serve as data; (3) the memory is noise tolerant--the information need not be exact; (4) the memory can be made arbitrarily large and hence an arbitrary amount of information can be stored in it; and (5) the architecture is inherently parallel, allowing large memories to be fast. Such memories can become important components of future computers.
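Properties (1) through (5) can be illustrated with a small numpy sketch of the sparse distributed memory's write and read cycle; the dimensions, activation radius, and function names below are assumptions of this illustration, not Kanerva's reference design.

```python
import numpy as np

def make_sdm(n_bits=256, n_locations=2000, radius=111, seed=0):
    """Hard locations with random n_bits addresses plus zeroed counters."""
    rng = np.random.default_rng(seed)
    addresses = rng.integers(0, 2, size=(n_locations, n_bits))
    counters = np.zeros((n_locations, n_bits), dtype=np.int32)
    return addresses, counters, radius

def _active(addresses, addr, radius):
    # Hamming distance from the cue address to every hard location
    return np.count_nonzero(addresses != addr, axis=1) <= radius

def sdm_write(mem, addr, data):
    addresses, counters, radius = mem
    act = _active(addresses, addr, radius)
    counters[act] += np.where(data == 1, 1, -1)   # +1 for a 1-bit, -1 for a 0-bit

def sdm_read(mem, addr):
    addresses, counters, radius = mem
    act = _active(addresses, addr, radius)
    return (counters[act].sum(axis=0) >= 0).astype(int)  # majority vote per bit

mem = make_sdm()
rng = np.random.default_rng(1)
addr = rng.integers(0, 2, size=256)
data = rng.integers(0, 2, size=256)
sdm_write(mem, addr, data)
recalled = sdm_read(mem, addr)
```

Writing adds ±1 to the bit counters of every hard location within the activation radius of the address; reading sums the counters of the activated locations and thresholds at zero. Because many locations vote on every bit, a noisy cue address still activates mostly the right locations, which is the source of the noise tolerance described above.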
Strategic Regulation of Grain Size in Memory Reporting over Time
ERIC Educational Resources Information Center
Goldsmith, M.; Koriat, A.; Pansky, A.
2005-01-01
As time passes, people often remember the gist of an event though they cannot remember its details. Can rememberers exploit this difference by strategically regulating the ''grain size'' of their answers over time, to avoid reporting wrong information? A metacognitive model of the control of grain size in memory reporting was examined in two…
Shape-memory polymer foam device for treating aneurysms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ortega, Jason M.; Benett, William J.; Small, Ward
A system for treating an aneurysm in a blood vessel or vein, wherein the aneurysm has a dome, an interior, and a neck. The system includes a shape memory polymer foam in the interior of the aneurysm between the dome and the neck. The shape memory polymer foam has pores that include a first multiplicity of pores having a first pore size and a second multiplicity of pores having a second pore size. The second pore size is larger than said first pore size. The first multiplicity of pores are located in the neck of the aneurysm. The second multiplicity of pores are located in the dome of the aneurysm.
Does precision decrease with set size?
Mazyar, Helga; van den Berg, Ronald; Ma, Wei Ji
2012-01-01
The brain encodes visual information with limited precision. Contradictory evidence exists as to whether the precision with which an item is encoded depends on the number of stimuli in a display (set size). Some studies have found evidence that precision decreases with set size, but others have reported constant precision. These groups of studies differed in two ways. The studies that reported a decrease used displays with heterogeneous stimuli and tasks with a short-term memory component, while the ones that reported constancy used homogeneous stimuli and tasks that did not require short-term memory. To disentangle the effects of heterogeneity and short-term memory involvement, we conducted two main experiments. In Experiment 1, stimuli were heterogeneous, and we compared a condition in which target identity was revealed before the stimulus display with one in which it was revealed afterward. In Experiment 2, target identity was fixed, and we compared heterogeneous and homogeneous distractor conditions. In both experiments, we compared an optimal-observer model in which precision is constant with set size with one in which it depends on set size. We found that precision decreases with set size when the distractors are heterogeneous, regardless of whether short-term memory is involved, but not when they are homogeneous. This suggests that heterogeneity, not short-term memory, is the critical factor. In addition, we found that precision exhibits variability across items and trials, which may partly be caused by attentional fluctuations. PMID:22685337
NASA Astrophysics Data System (ADS)
Itter, M.; D'Orangeville, L.; Dawson, A.; Kneeshaw, D.; Finley, A. O.
2017-12-01
Drought and insect defoliation have lasting impacts on the dynamics of the boreal forest. Impacts are expected to worsen under global climate change as hotter, drier conditions forecast for much of the boreal increase the frequency and severity of drought and defoliation events. Contemporary ecological theory predicts physiological feedbacks in tree responses to drought and defoliation amplify impacts potentially causing large-scale productivity losses and forest mortality. Quantifying the interactive impacts of drought and insect defoliation on regional forest health is difficult given delayed and persistent responses to disturbance events. We developed a Bayesian hierarchical model to estimate forest growth responses to interactions between drought and insect defoliation by species and size class. Delayed and persistent responses to past drought and defoliation were quantified using empirical memory functions allowing for improved detection of interactions. The model was applied to tree-ring data from stands in Western (Alberta) and Eastern (Québec) regions of the Canadian boreal forest with different species compositions, disturbance regimes, and regional climates. Western stands experience chronic water deficit and forest tent caterpillar (FTC) defoliation; Eastern stands experience irregular water deficit and spruce budworm (SBW) defoliation. Ecosystem memory to past water deficit peaked in the year previous to growth and decayed to zero within 5 (West) to 8 (East) years; memory to past defoliation ranged from 8 (West) to 12 (East) years. The drier regional climate and faster FTC defoliation dynamics (compared to SBW) likely contribute to shorter ecosystem memory in the West. Drought and defoliation had the largest negative impact on large-diameter, host tree growth. Surprisingly, a positive interaction was observed between drought and defoliation for large-diameter, non-host trees likely due to reduced stand-level competition for water. 
Results highlight the temporal persistence of drought and defoliation stress on boreal forest growth dynamics and provide an empirical estimate of their interactive effects with explicit uncertainty.
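The paper estimates empirical memory functions inside a Bayesian hierarchical model. As a minimal illustration of the general idea only (not the authors' model), a normalized exponential-decay memory weight over past water-deficit anomalies might look like the sketch below; the function names and the half-life parameter are ours.

```python
import numpy as np

def memory_weights(lag_years, half_life):
    """Illustrative memory function: the weight on a past disturbance
    decays exponentially with lag and is normalized to sum to one."""
    w = 0.5 ** (np.arange(lag_years) / half_life)
    return w / w.sum()

def weighted_memory_effect(past_deficit, half_life):
    """Weighted sum of past water-deficit anomalies (most recent first)."""
    w = memory_weights(len(past_deficit), half_life)
    return float(np.dot(w, past_deficit))

# Past 8 years of standardized water-deficit anomalies, most recent first.
deficit = np.array([1.2, 0.8, 0.1, -0.3, 0.0, 0.5, -0.1, 0.2])
effect = weighted_memory_effect(deficit, half_life=2.0)
```

A shorter half-life concentrates the weight on recent years, mimicking the faster memory decay reported for the Western stands.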
Watier, Nicholas; Healy, Christopher; Armstrong, Heather
2017-04-01
Occasionally, individuals perceive that someone is no longer paying attention to the discussion at hand even when there are no overt cues of inattentiveness. As a preliminary study of this phenomenon, we examined whether pupil diameter might be implicitly used to infer others' attentiveness. Forty participants (27 women, 13 men, M age = 19.7 years, SD = 2.8) were presented with images of male faces with either large or small pupils, and, in the context of a personnel selection scenario, participants then judged the attentiveness of the person in the image. Images of faces with large pupils were judged as more attentive, compared with images of faces with small pupils. Face recognition memory performance was not affected by depicted pupil size. Our results are consistent with the proposal that pupillary fluctuations can be an index of perceived attention, and they provide preliminary evidence that pupil dilation may be implicitly relied upon to infer attentional states.
Electrical Switching of Perovskite Thin-Film Resistors
NASA Technical Reports Server (NTRS)
Liu, Shangqing; Wu, Juan; Ignatiev, Alex
2010-01-01
Electronic devices that exploit electrical switching of physical properties of thin films of perovskite materials (especially colossal magnetoresistive materials) have been invented. Unlike some related prior devices, these devices function at room temperature and do not depend on externally applied magnetic fields. Devices of this type can be designed to function as sensors (exhibiting varying electrical resistance in response to varying temperature, magnetic field, electric field, and/or mechanical pressure) and as elements of electronic memories. The underlying principle is that the application of one or more short electrical pulse(s) can induce a reversible, irreversible, or partly reversible change in the electrical, thermal, mechanical, and magnetic properties of a thin perovskite film. The energy in the pulse must be large enough to induce the desired change but not so large as to destroy the film. Depending on the requirements of a specific application, the pulse(s) can have any of a large variety of waveforms (e.g., square, triangular, or sine) and be of positive, negative, or alternating polarity. In some applications, it could be necessary to use multiple pulses to induce successive incremental physical changes. In one class of applications, electrical pulses of suitable shapes, sizes, and polarities are applied to vary the detection sensitivities of sensors. Another class of applications arises in electronic circuits in which certain resistance values are required to be variable: Incorporating the affected resistors into devices of the present type makes it possible to control their resistances electrically over wide ranges, and the lifetimes of electrically variable resistors exceed those of conventional mechanically variable resistors. 
Another and potentially the most important class of applications is that of resistance-based nonvolatile-memory devices, such as a resistance random access memory (RRAM) described in the immediately following article, Electrically Variable Resistive Memory Devices (MFS-32511-1).
Loewenstein, Yaniv; Portugaly, Elon; Fromer, Menachem; Linial, Michal
2008-01-01
Motivation: UPGMA (average linking) is probably the most popular algorithm for hierarchical data clustering, especially in computational biology. However, UPGMA requires the entire dissimilarity matrix in memory. Due to this prohibitive requirement, UPGMA is not scalable to very large datasets. Application: We present a novel class of memory-constrained UPGMA (MC-UPGMA) algorithms. Given any practical memory size constraint, this framework guarantees the correct clustering solution without explicitly requiring all dissimilarities in memory. The algorithms are general and are applicable to any dataset. We present a data-dependent characterization of hardness and clustering efficiency. The presented concepts are applicable to any agglomerative clustering formulation. Results: We apply our algorithm to the entire collection of protein sequences, to automatically build a comprehensive evolutionary-driven hierarchy of proteins from sequence alone. The newly created tree captures protein families better than state-of-the-art large-scale methods such as CluSTr, ProtoNet4 or single-linkage clustering. We demonstrate that leveraging the entire mass embodied in all sequence similarities allows us to significantly improve on current protein family clusterings, which are unable to directly tackle the sheer mass of this data. Furthermore, we argue that non-metric constraints are an inherent complexity of the sequence space and should not be overlooked. The robustness of UPGMA allows significant improvement, especially for multidomain proteins, and for large or divergent families. Availability: A comprehensive tree built from all UniProt sequence similarities, together with navigation and classification tools, will be made available as part of the ProtoNet service. A C++ implementation of the algorithm is available on request. Contact: lonshy@cs.huji.ac.il PMID:18586742
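For intuition, the naive in-memory UPGMA (average linkage) whose quadratic memory footprint MC-UPGMA is designed to avoid can be sketched as follows; function and variable names are ours, and this is the textbook formulation, not the paper's memory-constrained algorithm.

```python
import numpy as np

def upgma(dist):
    """Naive UPGMA on a full distance matrix. Returns merge events as
    (cluster_a, cluster_b, height, new_cluster_size) tuples."""
    n = dist.shape[0]
    clusters = {i: [i] for i in range(n)}
    d = {(i, j): dist[i, j] for i in range(n) for j in range(i + 1, n)}
    merges, next_id = [], n
    while len(clusters) > 1:
        a, b = min(d, key=d.get)          # closest pair of clusters
        h = d[(a, b)]
        na, nb = len(clusters[a]), len(clusters[b])
        merged = clusters.pop(a) + clusters.pop(b)
        # Average-linkage update: distance from any cluster c to the new
        # cluster is the size-weighted mean of its distances to a and b.
        d2 = {k: v for k, v in d.items() if a not in k and b not in k}
        for c in clusters:
            da = d[(min(a, c), max(a, c))]
            db = d[(min(b, c), max(b, c))]
            d2[(c, next_id)] = (na * da + nb * db) / (na + nb)
        clusters[next_id] = merged
        merges.append((a, b, h / 2, na + nb))  # ultrametric height = d/2
        d, next_id = d2, next_id + 1
    return merges
```

Holding `d` for all pairs is exactly the O(n^2) requirement that makes this formulation infeasible for the full UniProt sequence collection.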
Real Time Large Memory Optical Pattern Recognition.
1984-06-01
Technical Report RR-84-9, "Real Time Large Memory Optical Pattern Recognition," Don A. Gregory, Research Directorate, US Army Missile Laboratory, US Army Missile Command, Redstone Arsenal, AL, June 1984.
Parallel effects of memory set activation and search on timing and working memory capacity.
Schweickert, Richard; Fortin, Claudette; Xi, Zhuangzhuang; Viau-Quesnel, Charles
2014-01-01
Accurately estimating a time interval is required in everyday activities such as driving or cooking. Estimating time is relatively easy, provided a person attends to it. But a brief shift of attention to another task usually interferes with timing. Most processes carried out concurrently with timing interfere with it. Curiously, some do not. Literature on a few processes suggests a general proposition, the Timing and Complex-Span Hypothesis: A process interferes with concurrent timing if and only if process performance is related to complex span. Complex-span is the number of items correctly recalled in order, when each item presented for study is followed by a brief activity. Literature on task switching, visual search, memory search, word generation and mental time travel supports the hypothesis. Previous work found that another process, activation of a memory set in long term memory, is not related to complex-span. If the Timing and Complex-Span Hypothesis is true, activation should not interfere with concurrent timing in dual-task conditions. We tested such activation in single-task memory search task conditions and in dual-task conditions where memory search was executed with concurrent timing. In Experiment 1, activating a memory set increased reaction time, with no significant effect on time production. In Experiment 2, set size and memory set activation were manipulated. Activation and set size had a puzzling interaction for time productions, perhaps due to difficult conditions, leading us to use a related but easier task in Experiment 3. In Experiment 3 increasing set size lengthened time production, but memory activation had no significant effect. Results here and in previous literature on the whole support the Timing and Complex-Span Hypotheses. Results also support a sequential organization of activation and search of memory. 
This organization predicts activation and set size have additive effects on reaction time and multiplicative effects on percent correct, which was found.
Enabling Graph Appliance for Genome Assembly
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, Rina; Graves, Jeffrey A; Lee, Sangkeun
2015-01-01
In recent years, there has been a huge growth in the amount of genomic data available as reads generated from various genome sequencers. The number of reads generated can be huge, ranging from hundreds to billions, with each read varying in size. Assembling such large amounts of data is one of the challenging computational problems for both biomedical and data scientists. Most of the genome assemblers developed have used de Bruijn graph techniques. A de Bruijn graph represents a collection of read sequences by billions of vertices and edges, which require large amounts of memory and computational power to store and process. This is the major drawback to de Bruijn graph assembly. Massively parallel, multi-threaded, shared memory systems can be leveraged to overcome some of these issues. The objective of our research is to investigate the feasibility and scalability issues of de Bruijn graph assembly on Cray's Urika-GD system; Urika-GD is a high performance graph appliance with a large shared memory and massively multithreaded custom processor designed for executing SPARQL queries over large-scale RDF data sets. However, to the best of our knowledge, there is no research on representing a de Bruijn graph as an RDF graph or finding Eulerian paths in RDF graphs using SPARQL for potential genome discovery. In this paper, we address the issues involved in representing de Bruijn graphs as RDF graphs and propose an iterative querying approach for finding Eulerian paths in large RDF graphs. We evaluate the performance of our implementation on real world Ebola genome datasets and illustrate how genome assembly can be accomplished with Urika-GD using iterative SPARQL queries.
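The paper encodes the graph as RDF and finds Eulerian paths via iterative SPARQL queries; as a plain in-memory stand-in for the underlying idea (illustrative only, not the Urika-GD implementation), a de Bruijn graph and a Hierholzer-style Eulerian path can be sketched in a few lines.

```python
from collections import defaultdict

def de_bruijn(reads, k):
    """Build a de Bruijn graph: nodes are (k-1)-mers, one edge per k-mer."""
    graph = defaultdict(list)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].append(kmer[1:])
    return graph

def eulerian_path(graph):
    """Hierholzer's algorithm; assumes an Eulerian path exists."""
    graph = {u: list(vs) for u, vs in graph.items()}
    out_deg = {u: len(vs) for u, vs in graph.items()}
    in_deg = defaultdict(int)
    for vs in graph.values():
        for v in vs:
            in_deg[v] += 1
    # Start at a node whose out-degree exceeds its in-degree, if any.
    start = next((u for u in graph if out_deg[u] - in_deg[u] == 1),
                 next(iter(graph)))
    stack, path = [start], []
    while stack:
        u = stack[-1]
        if graph.get(u):
            stack.append(graph[u].pop())
        else:
            path.append(stack.pop())
    return path[::-1]

def assemble(reads, k):
    """Spell the assembled sequence along the Eulerian path."""
    path = eulerian_path(de_bruijn(reads, k))
    return path[0] + "".join(node[-1] for node in path[1:])
```

For example, `assemble(["ATGG", "GGA"], 3)` reconstructs "ATGGA"; real assemblers must additionally handle repeats, errors, and multiple valid paths.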
A new tool called DISSECT for analysing large genomic data sets using a Big Data approach
Canela-Xandri, Oriol; Law, Andy; Gray, Alan; Woolliams, John A.; Tenesa, Albert
2015-01-01
Large-scale genetic and genomic data are increasingly available and the major bottleneck in their analysis is a lack of sufficiently scalable computational tools. To address this problem in the context of complex traits analysis, we present DISSECT. DISSECT is a new and freely available software that is able to exploit the distributed-memory parallel computational architectures of compute clusters, to perform a wide range of genomic and epidemiologic analyses, which currently can only be carried out on reduced sample sizes or under restricted conditions. We demonstrate the usefulness of our new tool by addressing the challenge of predicting phenotypes from genotype data in human populations using mixed-linear model analysis. We analyse simulated traits from 470,000 individuals genotyped for 590,004 SNPs in ∼4 h using the combined computational power of 8,400 processor cores. We find that prediction accuracies in excess of 80% of the theoretical maximum could be achieved with large sample sizes. PMID:26657010
Hua, Zhishan; Pal, Rohit; Srivannavit, Onnop; Burns, Mark A; Gulari, Erdogan
2008-03-01
This paper presents a novel optically addressed microactuator array (microfluidic "flash memory") with latched operation. Analogous to the address-data bus mediated memory address protocol in electronics, the microactuator array consists of individual phase-change based actuators addressed by localized heating through focused light patterns (address bus), which can be provided by a modified projector or high power laser pointer. A common pressure manifold (data bus) for the entire array is used to generate large deflections of the phase change actuators in the molten phase. The use of phase change material as the working media enables latched operation of the actuator array. After the initial light "writing" during which the phase is temporarily changed to molten, the actuated status is self-maintained by the solid phase of the actuator without power and pressure inputs. The microfluidic flash memory can be re-configured by a new light illumination pattern and common pressure signal. The proposed approach can achieve actuation of arbitrary units in a large-scale array without the need for complex external equipment such as solenoid valves and electrical modules, which leads to significantly simplified system implementation and compact system size. The proposed work therefore provides a flexible, energy-efficient, and low cost multiplexing solution for microfluidic applications based on physical displacements. As an example, the use of the latched microactuator array as "normally closed" or "normally open" microvalves is demonstrated. The phase-change wax is fully encapsulated and thus immune from contamination issues in fluidic environments.
Superdiffusion in a non-Markovian random walk model with a Gaussian memory profile
NASA Astrophysics Data System (ADS)
Borges, G. M.; Ferreira, A. S.; da Silva, M. A. A.; Cressoni, J. C.; Viswanathan, G. M.; Mariz, A. M.
2012-09-01
Most superdiffusive non-Markovian random walk models assume that correlations are maintained at all time scales, e.g., fractional Brownian motion, Lévy walks, the Elephant walk and Alzheimer walk models. In the latter two models the random walker can always "remember" the initial times near t = 0. Assuming jump size distributions with finite variance, the question naturally arises: is superdiffusion possible if the walker is unable to recall the initial times? We give a conclusive answer to this general question, by studying a non-Markovian model in which the walker's memory of the past is weighted by a Gaussian centered at time t/2, at which time the walker had one half the present age, and with a standard deviation σt which grows linearly as the walker ages. For large widths we find that the model behaves similarly to the Elephant model, but for small widths this Gaussian memory profile model behaves like the Alzheimer walk model. We also report that the phenomenon of amnestically induced persistence, known to occur in the Alzheimer walk model, arises in the Gaussian memory profile model. We conclude that memory of the initial times is not a necessary condition for generating (log-periodic) superdiffusion. We show that the phenomenon of amnestically induced persistence extends to the case of a Gaussian memory profile.
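A simulation sketch of such a walk is below, assuming an Elephant-walk-style update rule in which the walker repeats a recalled step with probability p and reverses it otherwise; the parameter names and the linear-width assumption are ours, not the authors' exact formulation.

```python
import random

def gaussian_memory_walk(n_steps, p, width_frac, seed=0):
    """Random walk whose memory of past steps is weighted by a Gaussian
    centered at t/2 with standard deviation width_frac * t. With
    probability p the walker repeats the recalled step, otherwise it
    reverses it. Returns the trajectory of positions."""
    rng = random.Random(seed)
    steps = [1]          # first step is +1 by convention
    x, traj = 1, [1]
    for t in range(1, n_steps):
        # Recall a past time from the Gaussian profile, clipped to [0, t-1].
        recalled = int(round(rng.gauss(t / 2, max(width_frac * t, 1e-9))))
        recalled = min(max(recalled, 0), t - 1)
        step = steps[recalled] if rng.random() < p else -steps[recalled]
        steps.append(step)
        x += step
        traj.append(x)
    return traj
```

With p = 1 every recalled step is repeated, giving fully persistent (ballistic) motion; intermediate p and varying width_frac let one probe the Elephant-like versus Alzheimer-like regimes numerically.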
Jamieson, Matthew; Cullen, Breda; McGee-Lennon, Marilyn; Brewster, Stephen; Evans, Jonathan J
2014-01-01
Technology can compensate for memory impairment. The efficacy of assistive technology for people with memory difficulties and the methodology of selected studies are assessed. A systematic search was performed and all studies that investigated the impact of technology on memory performance for adults with impaired memory resulting from acquired brain injury (ABI) or a degenerative disease were included. Two 10-point scales were used to compare each study to an ideally reported single case experimental design (SCED) study (SCED scale; Tate et al., 2008 ) or randomised control group study (PEDro-P scale; Maher, Sherrington, Herbert, Moseley, & Elkins, 2003 ). Thirty-two SCED (mean = 5.9 on the SCED scale) and 11 group studies (mean = 4.45 on the PEDro-P scale) were found. Baseline and intervention performance for each participant in the SCED studies was re-calculated using non-overlap of all pairs (Parker & Vannest, 2009 ) giving a mean score of 0.85 on a 0 to 1 scale (17 studies, n = 36). A meta-analysis of the efficacy of technology vs. control in seven group studies gave a large effect size (d = 1.27) (n = 147). It was concluded that prosthetic technology can improve performance on everyday tasks requiring memory. There is a specific need for investigations of technology for people with degenerative diseases.
Kim, Ana; Fagan, Anne M; Goate, Alison M; Benzinger, Tammie LS; Morris, John C; Head, Denise
2015-01-01
Brain-derived neurotrophic factor (BDNF) has been shown to be important for neuronal survival and synaptic plasticity in the hippocampus in non-human animals. The Val66Met polymorphism in the BDNF gene, involving a valine (Val) to methionine (Met) substitution at codon 66, has been associated with lower BDNF secretion in vitro. However, there have been mixed results regarding associations between either circulating BDNF or the BDNF Val66Met polymorphism with hippocampal volume and memory in humans. The current study examined the association of BDNF genotype and plasma BDNF with hippocampal volume and memory in two large independent cohorts of middle-aged and older adults (both cognitively normal and early-stage dementia). Sample sizes ranged from 123 to 649. Measures of the BDNF genotype, plasma BDNF, MRI-based hippocampal volume and memory performance were obtained from the Knight Alzheimer Disease Research Center (ADRC) and the Alzheimer’s Disease Neuroimaging Initiative (ADNI). There were no significant differences between BDNF Met+ and Met- groups on either hippocampal volume or memory in either cohort. In addition, plasma BDNF was not significantly associated with either hippocampal volume or memory in either cohort. Neither age, cognitive status nor gender moderated any of the relationships. Overall, current findings suggest that BDNF genotype and plasma BDNF may not be robust predictors for variance in hippocampal volume and memory in middle age and older adult cohorts. PMID:25784293
Kursawe, Michael A; Zimmer, Hubert D
2015-06-01
We investigated the impact of perceptual processing demands on visual working memory of coloured complex random polygons during change detection. Processing load was assessed by pupil size (Exp. 1) and additionally slow wave potentials (Exp. 2). Task difficulty was manipulated by presenting different set sizes (1, 2, 4 items) and by making different features (colour, shape, or both) task-relevant. Memory performance in the colour condition was better than in the shape and both condition which did not differ. Pupil dilation and the posterior N1 increased with set size independent of type of feature. In contrast, slow waves and a posterior P2 component showed set size effects but only if shape was task-relevant. In the colour condition slow waves did not vary with set size. We suggest that pupil size and N1 indicates different states of attentional effort corresponding to the number of presented items. In contrast, slow waves reflect processes related to encoding and maintenance strategies. The observation that their potentials vary with the type of feature (simple colour versus complex shape) indicates that perceptual complexity already influences encoding and storage and not only comparison of targets with memory entries at the moment of testing. Copyright © 2015 Elsevier B.V. All rights reserved.
Recall of briefly presented chess positions and its relation to chess skill.
Gong, Yanfei; Ericsson, K Anders; Moxley, Jerad H
2015-01-01
Individual differences in memory performance in a domain of expertise have traditionally been accounted for by previously acquired chunks of knowledge and patterns. These accounts have been examined experimentally mainly in chess. The role of chunks (clusters of chess pieces recalled in rapid succession during recall of chess positions) and their relations to chess skill are, however, under debate. By introducing an independent chunk-identification technique, namely repeated-recall technique, this study identified individual chunks for particular chess players. The study not only tested chess players with increasing chess expertise, but also tested non-chess players who should not have previously acquired any chess related chunks in memory. For recall of game positions significant differences between players and non-players were found in virtually all the characteristics of chunks recalled. Size of the largest chunks also correlates with chess skill within the group of rated chess players. Further research will help us understand how these memory encodings can explain large differences in chess skill.
Belcher, Annabelle M; Harrington, Rebecca A; Malkova, Ludise; Mishkin, Mortimer
2006-01-01
Earlier studies found that recognition memory for object-place associations was impaired in patients with relatively selective hippocampal damage (Vargha-Khadem et al., Science 1997; 277:376-380), but was unaffected after selective hippocampal lesions in monkeys (Malkova and Mishkin, J Neurosci 2003; 23:1956-1965). A potentially important methodological difference between the two studies is that the patients were required to remember a set of 20 object-place associations for several minutes, whereas the monkeys had to remember only two such associations at a time, and only for a few seconds. To approximate more closely the task given to the patients, we trained monkeys on several successive sets of 10 object-place pairs each, with each set requiring learning across days. Despite the increased associative memory demands, monkeys given hippocampal lesions were unimpaired relative to their unoperated controls, suggesting that differences other than set size and memory duration underlie the different outcomes in the human and animal studies. (c) 2005 Wiley-Liss, Inc.
Hippocampal and diencephalic pathology in developmental amnesia.
Dzieciol, Anna M; Bachevalier, Jocelyne; Saleem, Kadharbatcha S; Gadian, David G; Saunders, Richard; Chong, W K Kling; Banks, Tina; Mishkin, Mortimer; Vargha-Khadem, Faraneh
2017-01-01
Developmental amnesia (DA) is a selective episodic memory disorder associated with hypoxia-induced bilateral hippocampal atrophy of early onset. Despite the systemic impact of hypoxia-ischaemia, the resulting brain damage was previously reported to be largely limited to the hippocampus. However, the thalamus and the mammillary bodies are parts of the hippocampal-diencephalic network and are therefore also at risk of injury following hypoxic-ischaemic events. Here, we report a neuroimaging investigation of diencephalic damage in a group of 18 patients with DA (age range 11-35 years), and an equal number of controls. Importantly, we uncovered a marked degree of atrophy in the mammillary bodies in two thirds of our patients. In addition, as a group, patients had mildly reduced thalamic volumes. The size of the anterior-mid thalamic (AMT) segment was correlated with patients' visual memory performance. Thus, in addition to the hippocampus, the diencephalic structures also appear to play a role in the patients' memory deficit. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Qian, Shi-Bing; Wang, Yong-Ping; Shao, Yan; Liu, Wen-Jun; Ding, Shi-Jin
2017-02-01
For the first time, the growth of Ni nanoparticles (NPs) was explored by plasma-assisted atomic layer deposition (ALD) technique using NiCp2 and NH3 precursors. Influences of substrate temperature and deposition cycles on ALD Ni NPs were studied by field emission scanning electron microscope and X-ray photoelectron spectroscopy. By optimizing the process parameters, high-density and uniform Ni NPs were achieved in the case of 280 °C substrate temperature and 50 deposition cycles, exhibiting a density of 1.5 × 10^12 cm^-2 and a small size of 3-4 nm. Further, the above Ni NPs were used as charge storage medium of amorphous indium-gallium-zinc oxide (a-IGZO) thin film transistor (TFT) memory, demonstrating a high storage capacity for electrons. In particular, the nonvolatile memory exhibited an excellent programming characteristic, e.g., a large threshold voltage shift of 8.03 V was obtained after being programmed at 17 V for 5 ms.
NASA Astrophysics Data System (ADS)
Shanahan, Daniel
2008-05-01
The memory loophole supposes that the measurement of an entangled pair is influenced by the measurements of earlier pairs in the same run of measurements. To assert the memory loophole is thus to deny that measurement is intrinsically random. It is argued that measurement might instead involve a process of recovery and equilibrium in the measuring apparatus akin to that described in thermodynamics by Le Chatelier's principle. The predictions of quantum mechanics would then arise from conservation of the measured property in the combined system of apparatus and measured ensemble. Measurement would be consistent with classical laws of conservation, not simply in the classical limit of large numbers, but whatever the size of the ensemble. However variances from quantum mechanical predictions would be self-correcting and centripetal, rather than Markovian and increasing as under the standard theory. Entanglement correlations would persist, not because the entangled particles act in concert (which would entail nonlocality), but because the measurements of the particles were influenced by the one fluctuating state of imbalance in the process of measurement.
Power and Performance Trade-offs for Space Time Adaptive Processing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gawande, Nitin A.; Manzano Franco, Joseph B.; Tumeo, Antonino
Computational efficiency – performance relative to power or energy – is one of the most important concerns when designing RADAR processing systems. This paper analyzes power and performance trade-offs for a typical Space Time Adaptive Processing (STAP) application. We study STAP implementations for CUDA and OpenMP on two computationally efficient architectures, Intel Haswell Core I7-4770TE and NVIDIA Kayla with a GK208 GPU. We analyze the power and performance of STAP's computationally intensive kernels across the two hardware testbeds. We also show the impact and trade-offs of GPU optimization techniques. We show that data parallelism can be exploited for efficient implementation on the Haswell CPU architecture. The GPU architecture is able to process large size data sets without increase in power requirement. The use of shared memory has a significant impact on the power requirement for the GPU. A balance between the use of shared memory and main memory access leads to an improved performance in a typical STAP application.
Ultra low density biodegradable shape memory polymer foams with tunable physical properties
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singhal, Pooja; Wilson, Thomas S.; Cosgriff-Hernandez, Elizabeth
Compositions and/or structures of degradable shape memory polymers (SMPs) ranging in form from neat/unfoamed to ultra low density materials of down to 0.005 g/cc density. These materials show controllable degradation rate, actuation temperature and breadth of transitions along with high modulus and excellent shape memory behavior. A method of making extremely low density foams (down to 0.005 g/cc) via use of combined chemical and physical blowing agents, where the physical blowing agents may be a single compound or mixtures of two or more compounds, and other related methods, including the use of multiple co-blowing agents of successively higher boiling points in order to achieve a large range of densities for a fixed net chemical composition. Methods of optimizing the physical properties of the foams, such as porosity, cell size and distribution, and cell openness, are also described to further expand their uses and improve their performance.
NASA Astrophysics Data System (ADS)
Lim, Jae-Gab; Yang, Seung-Dong; Yun, Ho-Jin; Jung, Jun-Kyo; Park, Jung-Hyun; Lim, Chan; Cho, Gyu-seok; Park, Seong-gye; Huh, Chul; Lee, Hi-Deok; Lee, Ga-Won
2018-02-01
In this paper, a SONOS-type flash memory device with highly improved charge-trapping efficiency is suggested by using silicon nanocrystals (Si-NCs) embedded in a silicon nitride (SiNx) charge trapping layer. The Si-NCs were in-situ grown by PECVD without an additional post annealing process. The fabricated device shows high program/erase speed and retention properties suitable for multi-level cell (MLC) application. Excellent performance and reliability for MLC are demonstrated with a large memory window of ∼8.5 V and superior retention characteristics of 7% charge loss for 10 years. High resolution transmission electron microscopy images confirm the Si-NC formation and a size of around 1-2 nm, which is corroborated by X-ray photoelectron spectroscopy (XPS), where pure Si bonds increase. Besides, XPS analysis implies that more nitrogen atoms make stable bonds at the regular lattice points. Photoluminescence spectra also illustrate that Si-NC formation in SiNx is an effective method to form deep trap states.
Chang, Xueli; Du, Siliang; Li, Yingying; Fang, Shenghui
2018-01-01
Large size high resolution (HR) satellite image matching is a challenging task due to local distortion, repetitive structures, intensity changes and low efficiency. In this paper, a novel matching approach is proposed for the large size HR satellite image registration, which is based on coarse-to-fine strategy and geometric scale-invariant feature transform (SIFT). In the coarse matching step, a robust matching method scale restrict (SR) SIFT is implemented at low resolution level. The matching results provide geometric constraints which are then used to guide block division and geometric SIFT in the fine matching step. The block matching method can overcome the memory problem. In geometric SIFT, with area constraints, it is beneficial for validating the candidate matches and decreasing searching complexity. To further improve the matching efficiency, the proposed matching method is parallelized using OpenMP. Finally, the sensing image is rectified to the coordinate of reference image via Triangulated Irregular Network (TIN) transformation. Experiments are designed to test the performance of the proposed matching method. The experimental results show that the proposed method can decrease the matching time and increase the number of matching points while maintaining high registration accuracy. PMID:29702589
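The paper's pipeline is built on SR-SIFT and geometric SIFT with block division; as a simplified stand-in for the coarse-to-fine idea only (not the authors' method), template localization by normalized cross-correlation at a downsampled level, followed by a restricted full-resolution search, can be sketched as below. All names are ours, and image dimensions are assumed divisible by the downsampling factor.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def downsample(img, f):
    """Average-pool by factor f (dimensions assumed divisible by f)."""
    h, w = img.shape
    return img.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def coarse_to_fine_match(ref, tpl, f=2):
    """Locate tpl inside ref: exhaustive NCC at low resolution, then
    refine only within a small window around the coarse estimate."""
    rc, tc = downsample(ref, f), downsample(tpl, f)
    th, tw = tc.shape
    best, pos = -2.0, (0, 0)
    for y in range(rc.shape[0] - th + 1):
        for x in range(rc.shape[1] - tw + 1):
            s = ncc(rc[y:y + th, x:x + tw], tc)
            if s > best:
                best, pos = s, (y, x)
    # Fine search restricted to an f-pixel window at full resolution.
    cy, cx = pos[0] * f, pos[1] * f
    h, w = tpl.shape
    best, fine = -2.0, (cy, cx)
    for y in range(max(cy - f, 0), min(cy + f + 1, ref.shape[0] - h + 1)):
        for x in range(max(cx - f, 0), min(cx + f + 1, ref.shape[1] - w + 1)):
            s = ncc(ref[y:y + h, x:x + w], tpl)
            if s > best:
                best, fine = s, (y, x)
    return fine
```

The coarse pass shrinks the search space quadratically in f, which is the same efficiency motivation behind matching at a low resolution level before block-wise fine matching.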
Feduccia, Allison A; Mithoefer, Michael C
2018-06-08
MDMA-assisted psychotherapy for treatment of PTSD has recently progressed to Phase 3 clinical trials and received Breakthrough Therapy designation by the FDA. MDMA used as an adjunct during psychotherapy sessions has demonstrated effectiveness and acceptable safety in reducing PTSD symptoms in Phase 2 trials, with durable remission of PTSD diagnosis in 68% of participants. The underlying psychological and neurological mechanisms for the robust effects in mitigating PTSD are being investigated in animal models and in studies of healthy volunteers. This review explores the potential role of memory reconsolidation and fear extinction during MDMA-assisted psychotherapy. MDMA enhances release of monoamines (serotonin, norepinephrine, dopamine), hormones (oxytocin, cortisol), and other downstream signaling molecules (BDNF) to dynamically modulate emotional memory circuits. By reducing activation in brain regions implicated in the expression of fear- and anxiety-related behaviors, namely the amygdala and insula, and increasing connectivity between the amygdala and hippocampus, MDMA may allow for reprocessing of traumatic memories and emotional engagement with therapeutic processes. Based on the pharmacology of MDMA and the available translational literature of memory reconsolidation, fear learning, and PTSD, this review suggests a neurobiological rationale to explain, at least in part, the large effect sizes demonstrated for MDMA in treating PTSD. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
Sosic-Vasic, Zrinka; Hille, Katrin; Kröner, Julia; Spitzer, Manfred; Kornmeier, Jürgen
2018-01-01
Introduction: Consolidation is defined as the time necessary for memory stabilization after learning. In the present study we focused on effects of interference during the first 12 min of consolidation after learning. Participants had to learn a set of German–Japanese word pairs in an initial learning task and a different set of German–Japanese word pairs in a subsequent interference task. The interference task started, in different experimental conditions, at different time points (0, 3, 6, and 9 min) after the learning task and was followed by subsequent cued recall tests. In a control experiment the interference periods were replaced by rest periods without any interference. Results: The interference task decreased memory performance by up to 20%, with negative effects at all interference time points and large variability between participants concerning both the time point and the size of maximal interference. Further, fast learners seemed to be more affected by interference than slow learners. Discussion: Our results indicate that the first 12 min after learning are highly important for memory consolidation, without a general pattern concerning the precise time point of maximal interference across individuals. This finding raises doubts about generalized learning recipes and argues for individualized learning schedules. PMID:29503621
Torsion and bending properties of shape memory and superelastic nickel-titanium rotary instruments.
Ninan, Elizabeth; Berzins, David W
2013-01-01
Shape memory nickel-titanium (NiTi) rotary files have recently been introduced to the market. The objective of this study was to investigate the torsion and bending properties of shape memory files (CM Wire, HyFlex CM, and Phoenix Flex) and compare them with conventional (ProFile ISO and K3) and M-Wire (GT Series X and ProFile Vortex) NiTi files. Sizes 20, 30, and 40 (n = 12/size/taper) of 0.02 taper CM Wire, Phoenix Flex, K3, and ProFile ISO and 0.04 taper HyFlex CM, ProFile ISO, GT Series X, and Vortex were tested in torsion and bending per ISO 3630-1 guidelines by using a torsiometer. All data were statistically analyzed by analysis of variance and the Tukey-Kramer test (P = .05) to determine any significant differences between the files. Significant interactions were present among factors of size and file. Variability in maximum torque values was noted among the shape memory file brands, sometimes exhibiting the greatest or least torque depending on brand, size, and taper. In general, the shape memory files showed a high angle of rotation before fracture but were not statistically different from some of the other files. However, the shape memory files were more flexible, as evidenced by significantly lower bending moments (P < .008). Shape memory files show greater flexibility compared with several other NiTi rotary file brands. Copyright © 2013 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.
`Unlearning' has a stabilizing effect in collective memories
NASA Astrophysics Data System (ADS)
Hopfield, J. J.; Feinstein, D. I.; Palmer, R. G.
1983-07-01
Crick and Mitchison have presented a hypothesis for the functional role of dream sleep involving an `unlearning' process. We have independently carried out mathematical and computer modelling of learning and `unlearning' in a collective neural network of 30-1,000 neurones. The model network has a content-addressable memory or `associative memory' which allows it to learn and store many memories. A particular memory can be evoked in its entirety when the network is stimulated by any adequate-sized subpart of the information of that memory. But different memories of the same size are not equally easy to recall. Also, when memories are learned, spurious memories are also created and can also be evoked. Applying an `unlearning' process, similar to the learning processes but with a reversed sign and starting from a noise input, enhances the performance of the network in accessing real memories and in minimizing spurious ones. Although our model was not motivated by higher nervous function, our system displays behaviours which are strikingly parallel to those needed for the hypothesized role of `unlearning' in rapid eye movement (REM) sleep.
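The learning/unlearning scheme described in the abstract can be sketched in a few lines of Python. The network size, sweep counts, and unlearning rate below are illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 64, 5                      # neurons, stored patterns

# Hebbian outer-product storage of random +/-1 patterns.
patterns = rng.choice([-1, 1], size=(P, N))
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)

def settle(state, W, sweeps=10):
    """Asynchronous +/-1 updates until (approximately) a fixed point."""
    s = state.copy()
    for _ in range(sweeps):
        for i in rng.permutation(N):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

def unlearn(W, epsilon=0.005, trials=100):
    """Start from noise, let the net settle, then subtract a small
    Hebbian term: learning with a reversed sign, as in the abstract."""
    for _ in range(trials):
        s = settle(rng.choice([-1, 1], size=N), W)
        W = W - epsilon * np.outer(s, s)
        np.fill_diagonal(W, 0)
    return W

W = unlearn(W)

# A stored memory should still be recoverable from a corrupted cue.
cue = patterns[0].copy()
cue[:5] *= -1                     # flip 5 of 64 bits
recalled = settle(cue, W)
```

With a memory load well below capacity, a cue corrupted in a few bits should settle back onto the stored pattern, while the small reversed-sign updates preferentially erode spurious attractors reached from noise.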
Thermally efficient and highly scalable In2Se3 nanowire phase change memory
NASA Astrophysics Data System (ADS)
Jin, Bo; Kang, Daegun; Kim, Jungsik; Meyyappan, M.; Lee, Jeong-Soo
2013-04-01
The electrical characteristics of nonvolatile In2Se3 nanowire phase change memory are reported. Size-dependent memory switching behavior was observed in nanowires of varying diameters and the reduction in set/reset threshold voltage was as low as 3.45 V/6.25 V for a 60 nm nanowire, which is promising for highly scalable nanowire memory applications. Also, size-dependent thermal resistance of In2Se3 nanowire memory cells was estimated with values as high as 5.86×10^13 and 1.04×10^6 K/W for a 60 nm nanowire memory cell in the amorphous and crystalline phases, respectively. Such high thermal resistances are beneficial for improvement of thermal efficiency and thus reduction in programming power consumption based on Fourier's law. The evaluation of thermal resistance provides an avenue to develop thermally efficient memory cell architecture.
Bemark, Mats; Bergqvist, Peter; Stensson, Anneli; Holmberg, Anna; Mattsson, Johan; Lycke, Nils Y
2011-02-01
Adjuvants have traditionally been appreciated for their immunoenhancing effects, whereas their impact on immunological memory has largely been neglected. In this paper, we have compared three mechanistically distinct adjuvants: aluminum salts (Alum), Ribi (monophosphoryl lipid A), and the cholera toxin A1 fusion protein CTA1-DD. Their influence on long-term memory development was dramatically different. Whereas a single immunization i.p. with 4-hydroxy-3-nitrophenyl acetyl (NP)-chicken γ-globulin and adjuvant stimulated serum anti-NP IgG titers that were comparable at 5 wk, CTA1-DD-adjuvanted responses were maintained for >16 mo with a half-life of anti-NP IgG ∼36 wk, but <15 wk after Ribi or Alum. A CTA1-DD dose-dependent increase in germinal center (GC) size and numbers was found, with >60% of splenic B cell follicles hosting GC at an optimal CTA1-DD dose. Roughly 7% of these GC were NP specific. This GC-promoting effect correlated well with the persistence of long-term plasma cells in the bone marrow and memory B cells in the spleen. CTA1-DD also facilitated increased somatic hypermutation and affinity maturation of NP-specific IgG Abs in a dose-dependent fashion, hence arguing that large GCs promote not only higher Ab titers but also higher-quality Ab production. Adoptive transfer of splenic CD80(+), but not CD80(-), B cells, at 1 y after immunization demonstrated functional long-term anti-NP IgG and IgM memory cells. To our knowledge, this is the first report to specifically compare and document that adjuvants can differ considerably in their support of long-term immune responses. Differential effects on the GC reaction appear to be the basis for these differences.
Vector computer memory bank contention
NASA Technical Reports Server (NTRS)
Bailey, D. H.
1985-01-01
A number of vector supercomputers feature very large memories. Unfortunately the large capacity memory chips that are used in these computers are much slower than the fast central processing unit (CPU) circuitry. As a result, memory bank reservation times (in CPU ticks) are much longer than on previous generations of computers. A consequence of these long reservation times is that memory bank contention is sharply increased, resulting in significantly lowered performance rates. The phenomenon of memory bank contention in vector computers is analyzed using both a Markov chain model and a Monte Carlo simulation program. The results of this analysis indicate that future generations of supercomputers must either employ much faster memory chips or else feature very large numbers of independent memory banks.
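The effect of long bank reservation times can be illustrated with a small deterministic access-stream sketch (a simplification of the report's Markov chain and Monte Carlo analyses; the bank count and reservation time below are arbitrary):

```python
def stream_time(n_accesses, n_banks, reservation, stride=1):
    """Ticks needed to issue n_accesses, one per tick, stalling while
    the target bank is still reserved from a previous access."""
    free_at = [0] * n_banks                       # tick each bank frees up
    tick = 0
    for i in range(n_accesses):
        bank = (i * stride) % n_banks
        tick = max(tick + 1, free_at[bank] + 1)   # stall if bank is busy
        free_at[bank] = tick + reservation - 1
    return tick

# Unit stride spreads accesses across banks and hides the reservation
# time; a stride equal to the bank count hammers a single bank.
t_unit = stream_time(1000, n_banks=16, reservation=8, stride=1)
t_bad = stream_time(1000, n_banks=16, reservation=8, stride=16)
```

With 16 banks and an 8-tick reservation, the unit-stride stream finishes in one tick per element, while the pathological stride is limited entirely by the reservation time and runs about eight times slower, which is the contention effect the abstract describes.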
Numericware i: Identical by State Matrix Calculator
Kim, Bongsong; Beavis, William D
2017-01-01
We introduce software, Numericware i, to compute an identical by state (IBS) matrix from genotypic data. Calculating an IBS matrix with a large dataset requires large computer memory and lengthy processing time. Numericware i addresses these challenges with two algorithmic methods: multithreading and forward chopping. Multithreading allows computational routines to run concurrently on multiple central processing unit (CPU) processors. Forward chopping addresses memory limitations by dividing a dataset into appropriately sized subsets. Numericware i allows calculation of the IBS matrix for a large genotypic dataset using a laptop or a desktop computer. For comparison with different software, we calculated genetic relationship matrices using Numericware i, SPAGeDi, and TASSEL with the same genotypic dataset. Numericware i calculates IBS coefficients between 0 and 2, whereas SPAGeDi and TASSEL produce different ranges of values including negative values. The Pearson correlation coefficient between the matrices from Numericware i and TASSEL was high at .9972, whereas SPAGeDi showed low correlation with Numericware i (.0505) and TASSEL (.0587). With a high-dimensional dataset of 500 entities by 10 000 000 SNPs, Numericware i spent 382 minutes using 19 CPU threads and 64 GB memory by dividing the dataset into 3 pieces, whereas SPAGeDi and TASSEL failed with the same dataset. Numericware i is freely available for Windows and Linux under CC-BY 4.0 license at https://figshare.com/s/f100f33a8857131eb2db. PMID:28469375
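A minimal sketch of chunked IBS computation in the spirit of the abstract's "forward chopping" (the 0/1/2 genotype coding and the per-locus score 2 − |gi − gj| are common conventions assumed here, not necessarily Numericware i's exact formula):

```python
import numpy as np

def ibs_matrix(genos, chunk=64):
    """Pairwise IBS coefficients in [0, 2] for genotypes coded 0/1/2
    (rows = individuals, columns = SNPs). Accumulates over SNP chunks
    so only one slice is expanded in memory at a time."""
    n, m = genos.shape
    acc = np.zeros((n, n))
    for start in range(0, m, chunk):
        g = genos[:, start:start + chunk].astype(float)
        diff = np.abs(g[:, None, :] - g[None, :, :])   # (n, n, chunk)
        acc += (2.0 - diff).sum(axis=2)
    return acc / m

rng = np.random.default_rng(1)
G = rng.integers(0, 3, size=(5, 200))    # 5 individuals, 200 SNPs
M = ibs_matrix(G)
```

The chunk size only bounds peak memory; the result is identical whether the SNPs are processed in one piece or many, which is what makes the chopping strategy safe for datasets that do not fit in RAM.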
Stability of discrete memory states to stochastic fluctuations in neuronal systems
Miller, Paul; Wang, Xiao-Jing
2014-01-01
Noise can degrade memories by causing transitions from one memory state to another. For any biological memory system to be useful, the time scale of such noise-induced transitions must be much longer than the required duration for memory retention. Using biophysically-realistic modeling, we consider two types of memory in the brain: short-term memories maintained by reverberating neuronal activity for a few seconds, and long-term memories maintained by a molecular switch for years. Both systems require persistence of (neuronal or molecular) activity self-sustained by an autocatalytic process and, we argue, that both have limited memory lifetimes because of significant fluctuations. We will first discuss a strongly recurrent cortical network model endowed with feedback loops, for short-term memory. Fluctuations are due to highly irregular spike firing, a salient characteristic of cortical neurons. Then, we will analyze a model for long-term memory, based on an autophosphorylation mechanism of calcium/calmodulin-dependent protein kinase II (CaMKII) molecules. There, fluctuations arise from the fact that there are only a small number of CaMKII molecules at each postsynaptic density (putative synaptic memory unit). Our results are twofold. First, we demonstrate analytically and computationally the exponential dependence of stability on the number of neurons in a self-excitatory network, and on the number of CaMKII proteins in a molecular switch. Second, for each of the two systems, we implement graded memory consisting of a group of bistable switches. For the neuronal network we report interesting ramping temporal dynamics as a result of sequentially switching an increasing number of discrete, bistable, units. The general observation of an exponential increase in memory stability with the system size leads to a trade-off between the robustness of memories (which increases with the size of each bistable unit) and the total amount of information storage (which decreases with increasing unit size), which may be optimized in the brain through biological evolution. PMID:16822041
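The growth of memory lifetime with unit size can be demonstrated with a toy birth-death switch (the drift value and sizes are illustrative choices; this is not the network or CaMKII model of the paper):

```python
import random

def mean_escape_time(n, drift=0.55, trials=200, seed=7):
    """Mean steps for a birth-death switch of size n to decay from the
    fully 'on' state (k = n) down to k = n // 4, with an autocatalytic
    drift pushing the state back up. Escape time grows roughly
    exponentially with n, mirroring the abstract's stability result."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        k, t = n, 0
        while k > n // 4:
            t += 1
            if k < n and rng.random() < drift:
                k += 1        # autocatalysis restores the 'on' state
            else:
                k -= 1        # noise erodes it
        total += t
    return total / trials

t_small = mean_escape_time(8)
t_large = mean_escape_time(24)
```

Tripling the unit size makes the noise-induced escape dramatically slower, which is the robustness side of the robustness-versus-storage trade-off the abstract describes.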
Coordination of size-control, reproduction and generational memory in freshwater planarians
NASA Astrophysics Data System (ADS)
Yang, Xingbo; Kaj, Kelson J.; Schwab, David J.; Collins, Eva-Maria S.
2017-06-01
Uncovering the mechanisms that control size, growth, and division rates of organisms reproducing through binary division means understanding basic principles of their life cycle. Recent work has focused on how division rates are regulated in bacteria and yeast, but this question has not yet been addressed in more complex, multicellular organisms. We have, over the course of several years, assembled a unique large-scale data set on the growth and asexual reproduction of two freshwater planarian species, Dugesia japonica and Girardia tigrina, which reproduce by transverse fission and succeeding regeneration of head and tail pieces into new planarians. We show that generation-dependent memory effects in planarian reproduction need to be taken into account to accurately capture the experimental data. To achieve this, we developed a new additive model that mixes multiple size control strategies based on planarian size, growth, and time between divisions. Our model quantifies the proportions of each strategy in the mixed dynamics, revealing the ability of the two planarian species to utilize different strategies in a coordinated manner for size control. Additionally, we found that head and tail offspring of both species employ different mechanisms to monitor and trigger their reproduction cycles. Thus, we find a diversity of strategies not only between species but between heads and tails within species. Our additive model provides two advantages over existing 2D models that fit a multivariable splitting rate function to the data for size control: firstly, it can be fit to relatively small data sets and can thus be applied to systems where available data is limited. Secondly, it enables new biological insights because it explicitly shows the contributions of different size control strategies for each offspring type.
Computation of Sensitivity Derivatives of Navier-Stokes Equations using Complex Variables
NASA Technical Reports Server (NTRS)
Vatsa, Veer N.
2004-01-01
Accurate computation of sensitivity derivatives is becoming an important topic in Computational Fluid Dynamics (CFD) because of recent emphasis on using nonlinear CFD methods in aerodynamic design, optimization, stability and control related problems. Several techniques are available to compute gradients or sensitivity derivatives of desired flow quantities or cost functions with respect to selected independent (design) variables. Perhaps the most common and oldest method is to use straightforward finite-differences for the evaluation of sensitivity derivatives. Although very simple, this method is prone to errors associated with the choice of step sizes and can be cumbersome for geometric variables. The cost per design variable for computing sensitivity derivatives with central differencing is at least equal to the cost of three full analyses, but is usually much larger in practice due to the difficulty of choosing step sizes. Another approach gaining popularity is the use of Automatic Differentiation software (such as ADIFOR) to process the source code, which in turn can be used to evaluate the sensitivity derivatives of preselected functions with respect to chosen design variables. In principle, this approach is also very straightforward and quite promising. The main drawback is the large memory requirement, because memory use increases linearly with the number of design variables. ADIFOR software can also be cumbersome for large CFD codes and has not yet reached a full maturity level for production codes, especially in parallel computing environments.
Scaling of seismic memory with earthquake size
NASA Astrophysics Data System (ADS)
Zheng, Zeyu; Yamasaki, Kazuko; Tenenbaum, Joel; Podobnik, Boris; Tamura, Yoshiyasu; Stanley, H. Eugene
2012-07-01
It has been observed that discrete earthquake events possess memory, i.e., that events occurring in a particular location are dependent on the history of that location. We conduct an analysis to see whether continuous real-time data also display a similar memory and, if so, whether such autocorrelations depend on the size of earthquakes within close spatiotemporal proximity. We analyze the seismic wave form database recorded by 64 stations in Japan, including the 2011 “Great East Japan Earthquake,” one of the five most powerful earthquakes ever recorded, which resulted in a tsunami and devastating nuclear accidents. We explore the question of seismic memory through use of mean conditional intervals and detrended fluctuation analysis (DFA). We find that the wave form sign series show power-law anticorrelations while the interval series show power-law correlations. We find size dependence in earthquake autocorrelations: as the earthquake size increases, both of these correlation behaviors strengthen. We also find that the DFA scaling exponent α has no dependence on the earthquake hypocenter depth or epicentral distance.
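The DFA procedure the paper relies on is standard and easy to sketch (the window sizes and first-order detrending below are common defaults, not necessarily the authors' settings):

```python
import numpy as np

def dfa_exponent(x, scales=(8, 16, 32, 64, 128)):
    """Detrended fluctuation analysis: slope of log F(s) vs log s.
    alpha ~ 0.5 for uncorrelated noise, larger for persistent series."""
    y = np.cumsum(x - np.mean(x))                  # integrated profile
    F = []
    for s in scales:
        n_win = len(y) // s
        segs = y[:n_win * s].reshape(n_win, s)
        t = np.arange(s)
        # mean squared residual after removing a linear trend per window
        msr = [np.mean((seg - np.polyval(np.polyfit(t, seg, 1), t)) ** 2)
               for seg in segs]
        F.append(np.sqrt(np.mean(msr)))
    slope, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return slope

rng = np.random.default_rng(42)
alpha_white = dfa_exponent(rng.normal(size=4096))            # ~0.5
alpha_walk = dfa_exponent(np.cumsum(rng.normal(size=4096)))  # persistent
```

White noise should come out near alpha = 0.5 while an integrated (random-walk) series comes out well above 1, the kind of power-law correlation contrast the abstract reports between sign and interval series.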
Kamp, Siri-Maria; Brumback, Ty; Donchin, Emanuel
2013-11-01
We examined the degree to which ERP components elicited by items that are isolated from their context, either by their font size ("size isolates") or by their frequency of usage, are correlated with subsequent immediate recall. Study lists contained (a) 15 words including a size isolate, (b) 14 high frequency (HF) words with one low frequency word ("LF isolate"), or (c) 14 LF words with one HF word. We used spatiotemporal PCA to quantify ERP components. We replicated previously reported P300 subsequent memory effects for size isolates and found additional correlations with recall in the novelty P3, a right lateralized positivity, and a left lateralized slow wave that was distinct from the slow wave correlated with recall for nonisolates. LF isolates also showed evidence of a P300 subsequent memory effect and also elicited the left lateralized subsequent memory effect, supporting a role of distinctiveness in word frequency effects in recall. Copyright © 2013 Society for Psychophysiological Research.
A two-degrees-of-freedom miniature manipulator actuated by antagonistic shape memory alloys
NASA Astrophysics Data System (ADS)
Lai, Chih-Ming; Chu, Cheng-Yu; Lan, Chao-Chieh
2013-08-01
This paper presents a miniature manipulator that can provide rotations around two perpendicularly intersecting axes. Each axis is actuated by a pair of shape memory alloy (SMA) wires. SMA wire actuators are known for their large energy density and ease of actuation. These advantages make them ideal for applications that have stringent size and weight constraints. SMA actuators can be temperature-controlled to contract and relax like muscles. When correctly designed, antagonistic SMA actuators have a faster response and larger range of motion than bias-type SMA actuators. This paper proposes an antagonistic actuation model to determine the manipulator parameters that are required to generate sufficient workspace. Effects of SMA prestrain and spring stiffness on the manipulator are investigated. Taking advantage of proper prestrain, the actuator size can be made much smaller while maintaining the same motion. The use of springs in series with SMA can effectively reduce actuator stress. A controller and an anti-slack algorithm are developed to ensure fast and accurate motion. Speed, stress, and loading experiments are conducted to demonstrate the performance of the manipulator.
Hysteresis Modeling of Magnetic Shape Memory Alloy Actuator Based on Krasnosel'skii-Pokrovskii Model
Zhou, Miaolei; Wang, Shoubin; Gao, Wei
2013-01-01
As a new type of intelligent material, magnetic shape memory alloy (MSMA) performs well in actuator applications. Compared with traditional actuators, an MSMA actuator has the advantages of fast response and large deformation; however, the hysteresis nonlinearity of the MSMA actuator restricts further improvement of its control precision. In this paper, an improved Krasnosel'skii-Pokrovskii (KP) model is used to establish the hysteresis model of the MSMA actuator. To identify the weighting parameters of the KP operators, an improved gradient correction algorithm and a variable step-size recursive least squares estimation algorithm are proposed. To demonstrate the validity of the proposed modeling approach, simulation experiments are performed with both identification algorithms. Simulation results of both algorithms demonstrate that the proposed approach establishes an effective and accurate hysteresis model for the MSMA actuator and provides a foundation for improving its control precision. PMID:23737730
Pupil size reflects successful encoding and recall of memory in humans.
Kucewicz, Michal T; Dolezal, Jaromir; Kremen, Vaclav; Berry, Brent M; Miller, Laura R; Magee, Abigail L; Fabian, Vratislav; Worrell, Gregory A
2018-03-21
Pupil responses are known to indicate brain processes involved in perception, attention and decision-making. They can provide an accessible biomarker of human memory performance and cognitive states in general. Here we investigated changes in the pupil size during encoding and recall of word lists. Consistent patterns in the pupil response were found across and within distinct phases of the free recall task. The pupil was most constricted in the initial fixation phase and was gradually more dilated through the subsequent encoding, distractor and recall phases of the task, as the word items were maintained in memory. Within the final recall phase, retrieving memory for individual words was associated with pupil dilation in the absence of visual stimulation. Words that were successfully recalled showed significant differences in pupil response during their encoding compared to those that were forgotten: the pupil was more constricted before and more dilated after the onset of word presentation. Our results suggest pupil size as a potential biomarker for probing and modulation of memory processing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gylenhaal, J.; Bronevetsky, G.
2007-05-25
CLOMP is the C version of the Livermore OpenMP benchmark, developed to measure OpenMP overheads and other performance impacts due to threading (such as NUMA memory layouts, memory contention, and cache effects) in order to influence future system design. Current best-in-class implementations of OpenMP have overheads at least ten times larger than is required by many of our applications for effective use of OpenMP. This benchmark shows the significant negative performance impact of these relatively large overheads and of other thread effects. The CLOMP benchmark is highly configurable, allowing a variety of problem sizes and threading effects to be studied, and it carefully checks its results to catch many common threading errors. This benchmark is expected to be included as part of the Sequoia Benchmark suite for the Sequoia procurement.
Optimal neighborhood indexing for protein similarity search.
Peterlongo, Pierre; Noé, Laurent; Lavenier, Dominique; Nguyen, Van Hoa; Kucherov, Gregory; Giraud, Mathieu
2008-12-16
Similarity inference, one of the main bioinformatics tasks, has to face an exponential growth of the biological data. A classical approach used to cope with this data flow involves heuristics with large seed indexes. In order to speed up this technique, the index can be enhanced by storing additional information to limit the number of random memory accesses. However, this improvement leads to a larger index that may become a bottleneck. In the case of protein similarity search, we propose to decrease the index size by reducing the amino acid alphabet. The paper presents two main contributions. First, we show that an optimal neighborhood indexing combining an alphabet reduction and a longer neighborhood leads to a reduction of 35% of memory involved into the process, without sacrificing the quality of results nor the computational time. Second, our approach led us to develop a new kind of substitution score matrices and their associated e-value parameters. In contrast to usual matrices, these matrices are rectangular since they compare amino acid groups from different alphabets. We describe the method used for computing those matrices and we provide some typical examples that can be used in such comparisons. Supplementary data can be found on the website http://bioinfo.lifl.fr/reblosum. We propose a practical index size reduction of the neighborhood data, that does not negatively affect the performance of large-scale search in protein sequences. Such an index can be used in any study involving large protein data. Moreover, rectangular substitution score matrices and their associated statistical parameters can have applications in any study involving an alphabet reduction.
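The core idea, indexing neighborhoods over a reduced amino acid alphabet, can be sketched as follows (the 4-group reduction shown is illustrative, not the grouping optimized in the paper):

```python
# A 4-group amino acid reduction (hydrophobic / polar / acidic / basic);
# this grouping is an illustrative assumption, not the paper's choice.
GROUPS = {"AVLIMFWYC": "h", "STNQGP": "p", "DE": "a", "KRH": "b"}
REDUCE = {aa: g for aas, g in GROUPS.items() for aa in aas}

def reduced_index(seq, k=4):
    """Index every k-mer 'neighborhood' of a protein sequence over the
    reduced alphabet: the same neighborhood length yields far fewer
    distinct keys (at most 4**k instead of 20**k)."""
    r = "".join(REDUCE[a] for a in seq)
    index = {}
    for i in range(len(r) - k + 1):
        index.setdefault(r[i:i + k], []).append(i)
    return index

idx = reduced_index("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")
```

With 4 groups, a length-4 neighborhood has at most 256 distinct keys instead of 160,000, which is the index-size saving the abstract trades against a longer neighborhood; scoring hits across the two alphabets is what motivates the rectangular substitution matrices described above.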
A systematic review of cognitive performance in patients with childhood craniopharyngioma.
Özyurt, Jale; Müller, Hermann L; Thiel, Christiane M
2015-10-01
Craniopharyngiomas are rare brain tumors of the sellar/suprasellar region, often adversely affecting patients' physical and psychosocial functioning. Until a few years ago, knowledge on cognitive deficits in craniopharyngioma patients was based on little valid evidence, with considerable inconsistencies across studies. Findings from recent research, with partly larger sample sizes, add to existing evidence to provide a more clear and reliable picture. The current review aims to summarize and systemize current findings on cognitive deficits in childhood craniopharyngioma, taking account of patient- and treatment-related variables where possible. Those studies were included that reported results of childhood craniopharyngioma patients tested with formalized neuropsychological tests (irrespective of their age at study, group size ≥10). A systematic assignment of test results to subcomponents of broader cognitive domains (e.g. to specific memory systems and processes) allows for a first comprehensive overview of patterns of spared and impaired cognitive functions. We show that episodic memory recall in particular is impaired, largely sparing other memory components. In accordance with recent knowledge on mammillary function, patients with hypothalamic involvement appear to be at particular risk. Deficits in higher cognitive processes, relying on the integrity of the prefrontal cortex and its subcortical pathways, may also occur, but results are still inconsistent. To gain deeper insight into the pattern of deficits and their association with patient- and treatment-related variables, further multi-site research with larger cohorts is needed.
Monitoring the capacity of working memory: Executive control and effects of listening effort
Amichetti, Nicole M.; Stanley, Raymond S.; White, Alison G.
2013-01-01
In two experiments, we used an interruption-and-recall (IAR) task to explore listeners’ ability to monitor the capacity of working memory as new information arrived in real time. In this task, listeners heard recorded word lists with instructions to interrupt the input at the maximum point that would still allow for perfect recall. Experiment 1 demonstrated that the most commonly selected segment size closely matched participants’ memory span, as measured in a baseline span test. Experiment 2 showed that reducing the sound level of presented word lists to a suprathreshold but effortful listening level disrupted the accuracy of matching selected segment sizes with participants’ memory spans. The results are discussed in terms of whether online capacity monitoring may be subsumed under other, already enumerated working memory executive functions (inhibition, set shifting, and memory updating). PMID:23400826
Wesnes, Keith A; Aarsland, Dag; Ballard, Clive; Londos, Elisabet
2015-01-01
In both dementia with Lewy bodies (DLB) and Parkinson's disease dementia (PDD), attentional dysfunction is a core clinical feature together with disrupted episodic memory. This study evaluated the cognitive effects of memantine in DLB and PDD using automated tests of attention and episodic memory. A randomised double-blind, placebo-controlled, 24-week three centre trial of memantine (20 mg/day) was conducted in which tests of attention (simple and choice reaction time) and word recognition (immediate and delayed) from the CDR System were administered prior to dosing and again at 12 and 24 weeks. Although other results from this study have been published, the data from the CDR System tests were not included and are presented here for the first time. Data were available for 51 patients (21 DLB and 30 PDD). In both populations, memantine produced statistically significant medium to large effect sized improvements to choice reaction time, immediate and delayed word recognition. These are the first substantial improvements on cognitive tests of attention and episodic recognition memory identified with memantine in either DLB or PDD. Copyright © 2014 John Wiley & Sons, Ltd.
Selective weighting of action-related feature dimensions in visual working memory.
Heuer, Anna; Schubö, Anna
2017-08-01
Planning an action primes feature dimensions that are relevant for that particular action, increasing the impact of these dimensions on perceptual processing. Here, we investigated whether action planning also affects the short-term maintenance of visual information. In a combined memory and movement task, participants were to memorize items defined by size or color while preparing either a grasping or a pointing movement. Whereas size is a relevant feature dimension for grasping, color can be used to localize the goal object and guide a pointing movement. The results showed that memory for items defined by size was better during the preparation of a grasping movement than during the preparation of a pointing movement. Conversely, memory for color tended to be better when a pointing movement rather than a grasping movement was being planned. This pattern was not only observed when the memory task was embedded within the preparation period of the movement, but also when the movement to be performed was only indicated during the retention interval of the memory task. These findings reveal that a weighting of information in visual working memory according to action relevance can even be implemented at the representational level during maintenance, demonstrating that our actions continue to influence visual processing beyond the perceptual stage.
Visual Short-Term Memory Compared in Rhesus Monkeys and Humans
Elmore, L. Caitlin; Ma, Wei Ji; Magnotti, John F.; Leising, Kenneth J.; Passaro, Antony D.; Katz, Jeffrey S.; Wright, Anthony A.
2011-01-01
Summary Change detection is a popular task to study visual short-term memory (STM) in humans [1–4]. Much of this work suggests that STM has a fixed capacity of 4 ± 1 items [1–6]. Here we report the first comparison of change detection memory between humans and a species closely related to humans, the rhesus monkey. Monkeys and humans were tested in nearly identical procedures with overlapping display sizes. Although the monkeys’ STM was well fit by a 1-item fixed-capacity memory model, other monkey memory tests with 4-item lists have shown performance impossible to obtain with a 1-item capacity [7]. We suggest that this contradiction can be resolved using a continuous-resource approach more closely tied to the neural basis of memory [8,9]. In this view, items have a noisy memory representation whose noise level depends on display size due to distributed allocation of a continuous resource. In accord with this theory, we show that performance depends on the perceptual distance between items before and after the change, and d′ depends on display size in an approximately power law fashion. Our results open the door to combining the power of psychophysics, computation, and physiology to better understand the neural basis of STM. PMID:21596568
Gupta, Varun K.; Pech, Ulrike; Fulterer, Andreas; Ender, Anatoli; Mauermann, Stephan F.; Andlauer, Till F. M.; Beuschel, Christine; Thriene, Kerstin; Quentin, Christine; Schwärzel, Martin; Mielke, Thorsten; Madeo, Frank; Dengjel, Joern; Fiala, André; Sigrist, Stephan J.
2016-01-01
Memories are assumed to be formed by sets of synapses changing their structural or functional performance. The efficacy of forming new memories declines with advancing age, but the synaptic changes underlying age-induced memory impairment remain poorly understood. Recently, we found spermidine feeding to specifically suppress age-dependent impairments in forming olfactory memories, providing a means to search for synaptic changes involved in age-dependent memory impairment. Here, we show that a specific synaptic compartment, the presynaptic active zone (AZ), increases the size of its ultrastructural elaboration and releases significantly more synaptic vesicles with advancing age. These age-induced AZ changes, however, were fully suppressed by spermidine feeding. A genetically enforced enlargement of AZ scaffolds (four gene-copies of BRP) impaired memory formation in young animals. Thus, in the Drosophila nervous system, aging AZs seem to steer towards the upper limit of their operational range, limiting synaptic plasticity and contributing to impairment of memory formation. Spermidine feeding suppresses age-dependent memory impairment by counteracting these age-dependent changes directly at the synapse. PMID:27684064
NASA Astrophysics Data System (ADS)
Gao, Shuang; Zeng, Fei; Li, Fan; Wang, Minjuan; Mao, Haijun; Wang, Guangyue; Song, Cheng; Pan, Feng
2015-03-01
The search for self-rectifying resistive memories has aroused great attention due to their potential in high-density memory applications without additional access devices. Here we report the forming-free and self-rectifying bipolar resistive switching behavior of a simple Pt/TaOx/n-Si tri-layer structure. The forming-free phenomenon is attributed to the generation of a large amount of oxygen vacancies, in a TaOx region that is in close proximity to the TaOx/n-Si interface, via out-diffusion of oxygen ions from TaOx to n-Si. A maximum rectification ratio of ~6 × 10² is obtained when the Pt/TaOx/n-Si devices stay in a low resistance state, which originates from the existence of a Schottky barrier between the formed oxygen vacancy filament and the n-Si electrode. More importantly, numerical simulation reveals that the self-rectifying behavior itself can guarantee a maximum crossbar size of 212 × 212 (~44 kbit) on the premise of a 10% read margin. Moreover, satisfactory switching uniformity and retention performance are observed based on this simple tri-layer structure. All of these results demonstrate the great potential of this simple Pt/TaOx/n-Si tri-layer structure for access device-free high-density memory applications. Electronic supplementary information (ESI) available. See DOI: 10.1039/c4nr06406b
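Why rectification bounds the crossbar size can be illustrated with a crude lumped sneak-path model. Everything below (the resistance values, the assumed HRS/LRS ratio of 100, and the way the worst-case sneak paths are lumped) is an assumption for illustration; the paper's 212 × 212 figure comes from a full numerical simulation, not this sketch.

```python
# Crude lumped sneak-path model of a self-rectifying 1R crossbar (illustrative
# assumptions throughout, not the paper's simulation). Reading uses a pull-up
# divider; unselected cells are taken to be in LRS, and every worst-case sneak
# path crosses one reverse-biased cell boosted by the rectification ratio.
def read_margin(n, r_lrs=1e4, rect_ratio=600.0, r_pullup=1e4):
    r_rev = rect_ratio * r_lrs                   # reverse-biased crossing cell
    # coarse lumping: two forward segments shared by (n-1) paths, one
    # reverse-biased crossing shared by (n-1)**2 paths
    r_sneak = 2 * r_lrs / (n - 1) + r_rev / (n - 1) ** 2

    def v_out(r_cell):                           # normalized pull-up read-out
        r_eff = 1.0 / (1.0 / r_cell + 1.0 / r_sneak)   # cell || sneak network
        return r_pullup / (r_pullup + r_eff)

    return v_out(r_lrs) - v_out(100 * r_lrs)     # LRS vs. assumed-HRS swing

margins = {n: read_margin(n) for n in (8, 64, 256, 512)}
```

In this toy model the margin shrinks monotonically as the array grows, qualitatively reproducing why a finite rectification ratio imposes an upper bound on the usable crossbar size.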
A Low-Power Instruction Issue Queue for Microprocessors
NASA Astrophysics Data System (ADS)
Watanabe, Shingo; Chiyonobu, Akihiro; Sato, Toshinori
The instruction issue queue is a key component that extracts instruction-level parallelism (ILP) in modern out-of-order microprocessors. In order to exploit ILP and improve processor performance, the instruction queue size should be increased. However, it is difficult to increase the size, since the instruction queue is implemented as a content addressable memory (CAM), whose power consumption and delay are large. This paper introduces a low-power and scalable instruction queue that replaces the CAM with a RAM. In this queue, instructions are explicitly woken up. Evaluation results show that the proposed instruction queue decreases processor performance by only 1.9% on average. Furthermore, the total energy consumption is reduced by 54% on average.
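The RAM-based alternative can be sketched as follows: rather than broadcasting a completed result tag to a CAM that every entry compares against, each producer records the queue indices of its waiting consumers and wakes them with direct indexed writes. This is only a behavioral sketch of the explicit-wakeup idea, not the paper's hardware design.

```python
class IssueQueue:
    """Behavioral sketch of explicit wakeup (RAM-indexed, no CAM search)."""

    def __init__(self):
        self.pending = {}    # queue index -> outstanding source operands
        self.consumers = {}  # producer tag -> queue indices waiting on it
        self.ready = []      # indices eligible for issue, in wakeup order

    def dispatch(self, qidx, src_tags, in_flight):
        """Insert an instruction; src_tags are its source register tags."""
        waiting = [t for t in src_tags if t in in_flight]
        self.pending[qidx] = len(waiting)
        for t in waiting:
            self.consumers.setdefault(t, []).append(qidx)
        if not waiting:
            self.ready.append(qidx)

    def complete(self, tag):
        """Producer finished: wake consumers by direct index, no broadcast."""
        for qidx in self.consumers.pop(tag, []):
            self.pending[qidx] -= 1
            if self.pending[qidx] == 0:
                self.ready.append(qidx)
```

For example, an instruction waiting on tags r1 and r2 becomes ready only after both producers have called complete; no entry ever performs an associative tag comparison.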
Schedulers with load-store queue awareness
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Tong; Eichenberger, Alexandre E.; Jacob, Arpith C.
2017-02-07
In one embodiment, a computer-implemented method includes tracking a size of a load-store queue (LSQ) during compile time of a program. The size of the LSQ is time-varying and indicates how many memory access instructions of the program are on the LSQ. The method further includes scheduling, by a computer processor, a plurality of memory access instructions of the program based on the size of the LSQ.
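The idea above can be sketched as a toy list scheduler that models LSQ occupancy at compile time and defers memory instructions whenever the modeled queue would overflow. The structure and the fixed 2-cycle memory latency below are illustrative assumptions, not the patented scheduling algorithm.

```python
# Toy compile-time scheduler that tracks a time-varying model of load-store
# queue (LSQ) occupancy and delays memory instructions when the modeled queue
# is full. Illustrative sketch only; latency and capacity are assumptions.
def schedule(instrs, lsq_capacity, mem_latency=2):
    """instrs: list of (name, is_mem) in program order -> (cycle, name)."""
    placed, in_flight = [], []   # in_flight holds modeled LSQ retire cycles
    cycle = 0
    for name, is_mem in instrs:
        if is_mem:
            in_flight = [c for c in in_flight if c > cycle]
            while len(in_flight) >= lsq_capacity:   # modeled LSQ is full:
                cycle += 1                          # delay this memory op
                in_flight = [c for c in in_flight if c > cycle]
            in_flight.append(cycle + mem_latency)
        placed.append((cycle, name))
        cycle += 1
    return placed
```

With an LSQ capacity of one, two back-to-back loads are forced apart until the first load's modeled LSQ entry retires, while non-memory instructions are unaffected by the queue model.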
Schedulers with load-store queue awareness
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Tong; Eichenberger, Alexandre E.; Jacob, Arpith C.
2017-01-24
In one embodiment, a computer-implemented method includes tracking a size of a load-store queue (LSQ) during compile time of a program. The size of the LSQ is time-varying and indicates how many memory access instructions of the program are on the LSQ. The method further includes scheduling, by a computer processor, a plurality of memory access instructions of the program based on the size of the LSQ.
A Reaction-Diffusion Model for Synapse Growth and Long-Term Memory
NASA Astrophysics Data System (ADS)
Liu, Kang; Lisman, John; Hagan, Michael
Memory storage involves strengthening of synaptic transmission known as long-term potentiation (LTP). The late phase of LTP is associated with structural processes that enlarge the synapse. Yet, synapses must be stable, despite continual subunit turnover, over the lifetime of an encoded memory. These considerations suggest that synapses are variable-size stable structures (VSSS), meaning they can switch between multiple metastable structures with different sizes. The mechanisms underlying VSSS are poorly understood. While experiments and theory have suggested that the interplay between diffusion and receptor-scaffold interactions can lead to a preferred stable size for synaptic domains, such a mechanism cannot explain how synapses adopt widely different sizes. Here we develop a minimal reaction-diffusion model of VSSS for synapse growth, incorporating the recent observation from super-resolution microscopy that neural activity can build compositional heterogeneities within synaptic domains. We find that introducing such heterogeneities can change the stable domain size in a controlled manner. We discuss a potential connection between this model and experimental data on synapse sizes, and how it provides a possible mechanism to structurally encode graded long-term memory. We acknowledge the support from NSF INSPIRE Award number IOS-1526941 (KL, MFH, JL) and the Brandeis Center for Bioinspired Soft Materials, an NSF MRSEC, DMR-1420382 (MFH).
Yang, Hui-Ling; Chan, Pi-Tuan; Chang, Pi-Chen; Chiu, Huei-Ling; Sheen Hsiao, Shu-Tai; Chu, Hsin; Chou, Kuei-Ru
2018-02-01
A better understanding of whether people with cognitive disorders can improve their performance on memory tasks through memory-focused interventions is needed. The purpose of this study was to assess the effect of memory-focused interventions on cognitive disorders through a meta-analysis. Systematic review and meta-analysis. The online electronic databases PubMed, the Cochrane Library, Ovid-Medline, CINAHL, PsycINFO, Ageline, and Embase (up to May 2017) were used in this study. No language restriction was applied to the search. Objective memory (learning and memory function, immediate recall, delayed recall, and recognition) was the primary indicator, and subjective memory performance, global cognitive function, and depression were the secondary indicators. The Hedges' g of change, subgroup analyses, and meta-regression were analyzed on the basis of the characteristics of people with cognitive disorders. A total of 27 studies (2177 participants, mean age=75.80) reporting RCTs were included in the meta-analysis. The results indicated a medium-to-large effect of memory-focused interventions on learning and memory function (Hedges' g=0.62) and subjective memory performance (Hedges' g=0.67), a small-to-medium effect on delayed recall and depression, and a small effect on immediate recall and global cognitive function (all p<0.05) compared with the control. Subgroup analysis and meta-regression indicated that the effects on learning and memory function were more profound for memory training (as the intervention format), individual training, shorter treatment duration, and more than eight treatment sessions, and that the MMSE score was the most crucial moderator of effect size (β=-0.06, p=0.04). This is the first comprehensive meta-analysis of specific memory domains in people with cognitive disorders. The results revealed that memory-focused interventions effectively improved memory-related performance in people with cognitive disorders.
An appropriately designed intervention can effectively improve memory function, reduce disability progression, and improve mood state in people with cognitive disorders. Additional randomized controlled trials including measures of recognition, global cognitive function, and depression should be conducted and analyzed. Copyright © 2017 Elsevier Ltd. All rights reserved.
Word length, set size, and lexical factors: Re-examining what causes the word length effect.
Guitard, Dominic; Gabel, Andrew J; Saint-Aubin, Jean; Surprenant, Aimée M; Neath, Ian
2018-04-19
The word length effect, the better recall of lists of short (fewer syllables) than of long (more syllables) words, has been termed a benchmark effect of working memory. Despite this, experiments on the word length effect can yield quite different results depending on set size and stimulus properties. Seven experiments are reported that address these 2 issues. Experiment 1 replicated the finding of a preserved word length effect under concurrent articulation for large stimulus sets, which contrasts with the abolition of the word length effect by concurrent articulation for small stimulus sets. Experiment 2, however, demonstrated that when the short and long words are equated on more dimensions, concurrent articulation abolishes the word length effect for large stimulus sets. Experiment 3 shows a standard word length effect when output time is equated, but Experiments 4-6 show no word length effect when short and long words are equated on increasingly more dimensions that previous demonstrations have overlooked. Finally, Experiment 7 compared recall of small- and large-neighborhood words that were equated on all the dimensions used in Experiment 6 (except for those directly related to neighborhood size), and a neighborhood size effect was still observed. We conclude that lexical factors, rather than word length per se, are better predictors of when the word length effect will occur. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
X-Eye: a novel wearable vision system
NASA Astrophysics Data System (ADS)
Wang, Yuan-Kai; Fan, Ching-Tang; Chen, Shao-Ang; Chen, Hou-Ye
2011-03-01
This paper proposes a smart portable device, named the X-Eye, which provides a gesture interface with a small size but a large display for the application of photo capture and management. The wearable vision system is implemented on embedded systems and achieves real-time performance. The hardware of the system includes an asymmetric dual-core processor with an ARM core and a DSP core. The display device is a pico projector which has a small physical volume but can project a large screen. A triple-buffering mechanism is designed for efficient memory management. Software functions are partitioned and pipelined for effective parallel execution. Gesture recognition is achieved first by a color classification based on the expectation-maximization algorithm and a Gaussian mixture model (GMM). To improve the performance of the GMM, we devise a LUT (look-up table) technique. Fingertips are then extracted, and geometrical features of the fingertip's shape are matched to recognize the user's gesture commands. In order to verify the accuracy of the gesture recognition module, experiments were conducted in eight scenes with 400 test videos including the challenges of colorful backgrounds, low illumination, and flickering. The processing speed of the whole system, including gesture recognition, reaches a frame rate of 22.9 FPS. Experimental results give a 99% recognition rate. These results demonstrate that this small-size, large-screen wearable system provides an effective gesture interface with real-time performance.
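The LUT technique can be sketched as follows: evaluate the color model once per quantized color bin offline, then classify each pixel with a single table lookup instead of a per-pixel Gaussian evaluation. The single-Gaussian "skin" model below (mean, variance, threshold) is a made-up stand-in for the paper's trained mixture.

```python
import math

# Sketch of a LUT speed-up for GMM color classification. The single-Gaussian
# "skin" model (MEAN, VAR, THRESH) is an illustrative assumption, not the
# paper's trained mixture.
MEAN = (180.0, 120.0, 100.0)
VAR = (400.0, 400.0, 400.0)
THRESH = 1e-6

def density(rgb):
    """Diagonal-covariance Gaussian density at an RGB triple."""
    norm = math.sqrt(math.prod(2.0 * math.pi * v for v in VAR))
    expo = sum((x - m) ** 2 / (2.0 * v) for x, m, v in zip(rgb, MEAN, VAR))
    return math.exp(-expo) / norm

# One boolean per 8x8x8 color bin (32**3 = 32768 entries): the expensive
# density evaluation happens once per bin, offline, at each bin's center.
LUT = {(r, g, b): density((r * 8 + 4, g * 8 + 4, b * 8 + 4)) > THRESH
       for r in range(32) for g in range(32) for b in range(32)}

def is_skin(rgb):
    """Classify a pixel with one lookup (each channel quantized to 5 bits)."""
    return LUT[tuple(c >> 3 for c in rgb)]
```

At run time the classifier does no floating-point work at all, which is what makes the approach attractive on an embedded DSP.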
Naming game with biased assimilation over adaptive networks
NASA Astrophysics Data System (ADS)
Fu, Guiyuan; Zhang, Weidong
2018-01-01
The dynamics of the two-word naming game incorporating the influence of biased assimilation over an adaptive network is investigated in this paper. First, an extended naming game with biased assimilation (NGBA) is proposed. The hearer in NGBA accepts the received information in a biased manner: he may refuse to accept the conveyed word from the speaker with a predefined probability if the conveyed word differs from his current memory. Second, the adaptive network is formulated by rewiring the links. Theoretical analysis shows that the population in NGBA will eventually reach global consensus on either A or B. Numerical simulation results show that the larger the strength of biased assimilation on both words, the slower the convergence, while a larger strength of biased assimilation on only one word can slightly accelerate convergence; a larger population size makes convergence considerably slower as the population grows from a relatively small size, while this effect becomes minor once the population is large; adaptively reconnecting the existing links can greatly accelerate convergence, especially on a sparsely connected network.
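The biased-assimilation rule can be sketched in a fully mixed population (no network structure or adaptive rewiring, unlike the paper): a hearer who does not already hold the conveyed word refuses to adopt it with probability beta. All parameters below are illustrative.

```python
import random

# Minimal sketch of the two-word naming game with biased assimilation in a
# fully mixed population (no network structure or adaptive rewiring, unlike
# the paper); beta and the population size are illustrative parameters.
def naming_game(n_agents=50, beta=0.2, max_steps=200_000, seed=1):
    rng = random.Random(seed)
    memory = [{rng.choice("AB")} for _ in range(n_agents)]
    for step in range(max_steps):
        speaker, hearer = rng.sample(range(n_agents), 2)
        word = rng.choice(sorted(memory[speaker]))
        if word in memory[hearer]:
            # success: both agents collapse their memory to the agreed word
            memory[speaker] = {word}
            memory[hearer] = {word}
        elif rng.random() >= beta:
            # biased assimilation: adopt a conflicting word with prob. 1-beta
            memory[hearer].add(word)
        if len(memory[0]) == 1 and all(m == memory[0] for m in memory):
            return step  # global consensus on a single word reached
    return None
```

Running this with larger beta on both words should slow convergence, in line with the abstract's qualitative finding.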
Hetero-association for pattern translation
NASA Astrophysics Data System (ADS)
Yu, Francis T. S.; Lu, Thomas T.; Yang, Xiangyang
1991-09-01
A hetero-association neural network using an interpattern association algorithm is presented. By using simple logical rules, a hetero-association memory can be constructed based on the association between the input-output reference patterns. For optical implementation, a compact liquid-crystal-television neural network is used. Translations between English letters and Chinese characters, as well as between Arabic and Chinese numerals, are demonstrated. The authors show that the hetero-association model performs more effectively than the Hopfield model in retrieving large numbers of similar patterns.
Efficient Maintenance and Update of Nonbonded Lists in Macromolecular Simulations.
Chowdhury, Rezaul; Beglov, Dmitri; Moghadasi, Mohammad; Paschalidis, Ioannis Ch; Vakili, Pirooz; Vajda, Sandor; Bajaj, Chandrajit; Kozakov, Dima
2014-10-14
Molecular mechanics and dynamics simulations use distance-based cutoff approximations for faster computation of pairwise van der Waals and electrostatic energy terms. These approximations traditionally use a precalculated and periodically updated list of interacting atom pairs, known as the "nonbonded neighborhood lists" or nblists, in order to reduce the overhead of finding atom pairs that are within the distance cutoff. The size of nblists grows linearly with the number of atoms in the system and superlinearly with the distance cutoff, and as a result, they require a significant amount of memory for large molecular systems. The high space usage leads to poor cache performance, which slows computation for large distance cutoffs. Also, the high cost of updates means that one cannot afford to keep the data structure always synchronized with the configuration of the molecules when efficiency is at stake. We propose a dynamic octree data structure for implicit maintenance of nblists using space linear in the number of atoms but independent of the distance cutoff. The list can be updated very efficiently as the coordinates of atoms change during the simulation. Unlike explicit nblists, a single octree works for all distance cutoffs. In addition, the octree is a cache-friendly data structure, and hence it is less prone to cache-miss slowdowns on modern memory hierarchies than nblists. Octrees use almost two orders of magnitude less memory, which is crucial for simulation of large systems, and while they are comparable in performance to nblists when the distance cutoff is small, they outperform nblists for larger systems and large cutoffs. Our tests show that the octree implementation is approximately 1.5 times faster in practical use-case scenarios than nblists.
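The core idea can be illustrated with a minimal point-region octree: one tree answers neighbor queries for any cutoff radius, so no per-cutoff pair list has to be stored or rebuilt. This sketch omits the paper's efficient dynamic-update machinery; splitting thresholds are arbitrary.

```python
class Octree:
    """Minimal point-region octree for cutoff-radius neighbor queries."""

    def __init__(self, center, half, depth=0, max_pts=8, max_depth=8):
        self.center, self.half = center, half
        self.depth, self.max_pts, self.max_depth = depth, max_pts, max_depth
        self.points, self.children = [], None

    def insert(self, p):
        if self.children is None:
            self.points.append(p)
            if len(self.points) > self.max_pts and self.depth < self.max_depth:
                self._split()
        else:
            self._child(p).insert(p)

    def _split(self):
        h = self.half / 2.0
        self.children = [
            Octree(tuple(c + (h if (i >> d) & 1 else -h)
                         for d, c in enumerate(self.center)),
                   h, self.depth + 1, self.max_pts, self.max_depth)
            for i in range(8)]
        pts, self.points = self.points, []
        for p in pts:
            self._child(p).insert(p)

    def _child(self, p):
        return self.children[sum(1 << d for d in range(3)
                                 if p[d] >= self.center[d])]

    def within(self, q, r, out=None):
        """Collect all stored points within distance r of q."""
        out = [] if out is None else out
        # prune: skip boxes farther than r from q along any axis
        if any(abs(q[d] - self.center[d]) > self.half + r for d in range(3)):
            return out
        for p in self.points:
            if sum((p[d] - q[d]) ** 2 for d in range(3)) <= r * r:
                out.append(p)
        if self.children is not None:
            for c in self.children:
                c.within(q, r, out)
        return out
```

Because the cutoff r is a query argument rather than a property of the structure, the same tree serves small and large cutoffs alike, which is exactly the property the abstract highlights.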
Neuropsychological basic deficits in preschoolers at risk for ADHD: a meta-analysis.
Pauli-Pott, Ursula; Becker, Katja
2011-06-01
Widely accepted neuropsychological theories on attention deficit hyperactivity disorder (ADHD) assume that the complex symptoms of the disease arise from developmentally preceding neuropsychological basic deficits. These deficits in executive functions and delay aversion are presumed to emerge in the preschool period. The corresponding normative developmental processes include phases of relative stability and rapid change. These non-linear developmental processes might have implications for concurrent and predictive associations between basic deficits and ADHD symptoms. To derive a description of the nature and strength of these associations, a meta-analysis was conducted. It is assumed that weighted mean effect sizes differ between basic deficits and depend on age. The meta-analysis included 25 articles (n=3005 children) in which associations between assessments of basic deficits (i.e. response inhibition, interference control, delay aversion, working memory, flexibility, and vigilance/arousal) in the preschool period and concurrent or subsequent ADHD symptoms or diagnosis of ADHD had been analyzed. For response inhibition and delay aversion, mean effect sizes were of medium to large magnitude while the mean effect size for working memory was small. Meta-regression analyses revealed that effect sizes of delay aversion tasks significantly decreased with increasing age while effect sizes of interference control tasks and Continuous Performance Tests (CPTs) significantly increased. Depending on the normative maturational course of each skill, time windows might exist that allow for a more or less valid assessment of a specific deficit. In future research these time windows might help to describe early developing forms of ADHD and to identify children at risk. Copyright © 2011 Elsevier Ltd. All rights reserved.
Monkey Visual Short-Term Memory Directly Compared to Humans
Elmore, L. Caitlin; Wright, Anthony A.
2015-01-01
Two adult rhesus monkeys were trained to detect which item in an array of memory items had changed using the same stimuli, viewing times, and delays as used with humans. Although the monkeys were extensively trained, they were less accurate than humans with the same array sizes (2, 4, & 6 items), with both stimulus types (colored squares, clip art), and showed calculated memory capacities of about one item (or less). Nevertheless, the memory results from both monkeys and humans for both stimulus types were well characterized by the inverse power-law of display size. This characterization provides a simple and straightforward summary of a fundamental process of visual short-term memory (how VSTM declines with memory load) that emphasizes species similarities based upon similar functional relationships. By more closely matching monkey testing parameters to those of humans, the similar functional relationships strengthen the evidence suggesting similar processes underlying monkey and human VSTM. PMID:25706544
Koppel, Jonathan; Rubin, David C.
2016-01-01
The reminiscence bump is the increased proportion of autobiographical memories from youth and early adulthood observed in adults over 40. It is one of the most robust findings in autobiographical memory research. Although described as a single period of increased memories, a recent meta-analysis which reported the beginning and ending ages of the bump from individual studies found that different classes of cues produce distinct bumps that vary in size and temporal location. The bump obtained in response to cue words is both smaller and located earlier in the lifespan than the bump obtained when important memories are requested. The bump obtained in response to odor cues is even earlier. This variation in the size and location of the reminiscence bump argues for theories based primarily on retrieval rather than encoding and retention, which most current theories stress. Furthermore, it points to the need to develop theories of autobiographical memory that account for this flexibility in the memories retrieved. PMID:27141156
Does proactive interference play a significant role in visual working memory tasks?
Makovski, Tal
2016-10-01
Visual working memory (VWM) is an online memory buffer that is typically assumed to be immune to source memory confusions. Accordingly, the few studies that have investigated the role of proactive interference (PI) in VWM tasks found only a modest PI effect at best. In contrast, a recent study has found a substantial PI effect in that performance in a VWM task was markedly improved when all memory items were unique compared to the more standard condition in which only a limited set of objects was used. The goal of the present study was to reconcile this discrepancy between the findings, and to scrutinize the extent to which PI is involved in VWM tasks. Experiments 1-2 showed that the robust advantage in using unique memory items can also be found in a within-subject design and is largely independent of set size, encoding duration, or intertrial interval. Importantly, however, PI was found mainly when all items were presented at the same location, and the effect was greatly diminished when the items were presented, either simultaneously (Experiment 3) or sequentially (Experiments 4-5), at distinct locations. These results indicate that PI is spatially specific and that without the assistance of spatial information VWM is not protected from PI. Thus, these findings imply that spatial information plays a key role in VWM, and underscore the notion that VWM is more vulnerable to interference than is typically assumed. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Asymptotic theory of circular polarization memory.
Dark, Julia P; Kim, Arnold D
2017-09-01
We establish a quantitative theory of circular polarization memory, which is the unexpected persistence of the incident circular polarization state in a strongly scattering medium. Using an asymptotic analysis of the three-dimensional vector radiative transfer equation (VRTE) in the limit of strong scattering, we find that circular polarization memory must occur in a boundary layer near the portion of the boundary on which polarized light is incident. The boundary layer solution satisfies a one-dimensional conservative scattering VRTE. Through a spectral analysis of this boundary layer problem, we introduce the dominant mode, which is the slowest-decaying mode in the boundary layer. To observe circular polarization memory for a particular set of optical parameters, we find that this dominant mode must pass three tests: (1) this dominant mode is given by the largest, discrete eigenvalue of a reduced problem that corresponds to Fourier mode k=0 in the azimuthal angle, and depends only on Stokes parameters U and V; (2) the polarization state of this dominant mode is largely circular polarized so that |V|≫|U|; and (3) the circular polarization of this dominant mode is maintained for all directions so that V is sign-definite. By applying these three tests to numerical calculations for monodisperse distributions of Mie scatterers, we determine the values of the size and relative refractive index when circular polarization memory occurs. In addition, we identify a reduced, scalar-like problem that provides an accurate approximation for the dominant mode when circular polarization memory occurs.
Effects of a multimedia project on users' knowledge about normal forgetting and serious memory loss.
Mahoney, Diane Feeney; Tarlow, Barbara J; Jones, Richard N; Sandaire, Johnny
2002-01-01
The aim of the project was to develop and evaluate the effectiveness of a CD-ROM-based multimedia program as a tool to increase users' knowledge about the differences between "normal" forgetfulness and more serious memory loss associated with Alzheimer's disease. The research was a controlled randomized study conducted with 113 adults who were recruited from the community and who expressed a concern about memory loss in a family member. The intervention group (n=56) viewed a module entitled "Forgetfulness: What's Normal and What's Not" on a laptop computer in their homes; the control group (n=57) did not. Both groups completed a 25-item knowledge-about-memory-loss test (primary outcome) and a sociodemographic and technology usage questionnaire; the intervention group also completed a CD-ROM user's evaluation. The mean (SD) number of correct responses to the knowledge test was 14.2 (4.5) for controls and 19.7 (3.1) for intervention participants. This highly significant difference (p<0.001) corresponds to a very large effect size. The program was most effective for participants with a lower level of self-reported prior knowledge about memory loss and Alzheimer's disease (p=0.02). Viewers were very satisfied with the program and felt that it was easy to use and understand. They particularly valued having personal access to a confidential source that permitted them to become informed about memory loss without public disclosure. This multimedia CD-ROM technology program provides an efficient and effective means of teaching older adults about memory loss and ways to distinguish benign from serious memory loss. It uniquely balances public community outreach education and personal privacy.
Papoutsi, Athanasia; Sidiropoulou, Kyriaki; Poirazi, Panayiota
2014-07-01
Technological advances have revealed the existence of small clusters of co-active neurons in the neocortex. The functional implications of these microcircuits are in large part unexplored. Using a heavily constrained biophysical model of a L5 PFC microcircuit, we recently showed that these structures act as tunable modules of persistent activity, the cellular correlate of working memory. Here, we investigate the mechanisms that underlie persistent activity emergence (ON) and termination (OFF) and search for the minimum network size required for expressing these states within physiological regimes. We show that (a) NMDA-mediated dendritic spikes gate the induction of persistent firing in the microcircuit; (b) the minimum network size required for persistent activity induction is inversely proportional to the synaptic drive of each excitatory neuron; (c) relaxation of connectivity and synaptic delay constraints eliminates the gating effect of NMDA spikes, albeit at the cost of much larger networks; (d) persistent activity termination by increased inhibition depends on the strength of the synaptic input and is negatively modulated by dADP; and (e) slow synaptic mechanisms and network activity contain predictive information regarding the ability of a given stimulus to turn ON and/or OFF persistent firing in the microcircuit model. Overall, this study zooms out from dendrites to cell assemblies and suggests a tight interaction between dendritic non-linearities and network properties (size/connectivity) that may facilitate the short-term memory function of the PFC.
Examining procedural working memory processing in obsessive-compulsive disorder.
Shahar, Nitzan; Teodorescu, Andrei R; Anholt, Gideon E; Karmon-Presser, Anat; Meiran, Nachshon
2017-07-01
Previous research has suggested that a deficit in working memory might underlie obsessive-compulsive disorder (OCD) patients' difficulty in controlling their thoughts and actions. However, a recent meta-analysis found only small effect sizes for working memory deficits in OCD. Recently, a distinction has been made between declarative and procedural working memory. Working memory in OCD has been tested mostly using declarative measurements. However, OCD symptoms typically concern actions, making procedural working memory more relevant. Here, we tested the operation of procedural working memory in OCD. Participants with OCD and healthy controls performed a battery of choice reaction tasks under high and low procedural working memory demands. Reaction times (RT) were estimated using ex-Gaussian distribution fitting, revealing no group differences in the size of the RT distribution tail (i.e., the τ parameter), known to be sensitive to procedural working memory manipulations. Group differences, unrelated to working memory manipulations, were found in the leading edge of the RT distribution and analyzed using a two-stage evidence accumulation model. Modeling results suggested that perceptual difficulties might underlie the current group differences. In conclusion, our results suggest that procedural working memory processing is most likely intact in OCD, and raise a novel, yet untested, assumption regarding perceptual deficits in OCD. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
Montejo Carrasco, Pedro; Montenegro-Peña, Mercedes; López-Higes, Ramón; Estrada, Eduardo; Prada Crespo, David; Montejo Rubio, Christian; García Azorín, David
(i) To analyze whether general cognitive performance, perceived health and depression are predictors of Subjective Memory Complaints (SMC), contrasting their effect sizes; (ii) to analyze the relationship between SMC and objective memory by comparing a test that measures memory in daily life and a classical paired-associates test; (iii) to examine whether different subgroups, formed according to the MFE score, might behave differently with regard to the studied variables. Sample: 3921 community-dwelling people (mean age 70.41±4.70) without cognitive impairment. Consecutive non-probabilistic recruitment. Measures: Mini Cognitive Exam (MCE), the Rivermead Behavioural Memory Test (RBMT) of daily memory, Paired Associates Learning (PAL), Geriatric Depression Scale (GDS), Nottingham Health Profile (NHP). Dependent variable: Memory Failures Everyday Questionnaire (MFE). Two different dimensions explaining SMC were found: one subjective (MFE, GDS, NHP) and the other objective (RBMT, PAL, MCE), the former more strongly associated with SMC. SMC predictors were NHP, GDS, RBMT and PAL, in that order according to effect size. Considering MFE scores we subdivided the sample into three groups (low, medium, higher scores): the low MFE group was associated with GDS; the medium group with GDS, NHP and RBMT; and the higher group with age as well. The effect size for every variable tended to grow as the MFE score increased. SMC were associated with both health profile and depressive symptoms and, to a lesser degree, with memory and overall cognitive performance. In people with fewer SMC, complaints are only associated with depressive symptomatology. More SMC are associated with depression, poor health perception and lower memory. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Modeling Distributions of Immediate Memory Effects: No Strategies Needed?
ERIC Educational Resources Information Center
Beaman, C. Philip; Neath, Ian; Surprenant, Aimee M.
2008-01-01
Many models of immediate memory predict the presence or absence of various effects, but none have been tested to see whether they predict an appropriate distribution of effect sizes. The authors show that the feature model (J. S. Nairne, 1990) produces appropriate distributions of effect sizes for both the phonological confusion effect and the…
Elliott, Madison; Parente, Frederick
2014-01-01
To examine the efficacy of cognitive rehabilitation strategies specifically designed to improve memory after traumatic brain injury (TBI) and stroke vs. memory improvement with the passage of time. A meta-analysis was performed on 26 studies of memory retraining and recovery that were published between the years of 1985 and 2013. Effect sizes (ESs) from each study were calculated and converted to Pearson's r and then analysed to assess the overall effect size and the relationship among the ESs, patient demographics and treatment interventions. RESULTS indicated a significant average ES (r = 0.51) in the treatment intervention conditions, as well as a significant average ES (r = 0.31) in the control conditions, in which participants did not receive any treatment. The largest ESs occurred in studies of stroke patients and studies concerning working memory rehabilitation. RESULTS showed that memory rehabilitation was an effective therapeutic intervention, especially for stroke patients and for working memory as a treatment domain. However, the results also indicated that significant memory improvement occurred spontaneously over time.
The Metropolis Monte Carlo method with CUDA enabled Graphic Processing Units
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hall, Clifford; School of Physics, Astronomy, and Computational Sciences, George Mason University, 4400 University Dr., Fairfax, VA 22030; Ji, Weixiao
2014-02-01
We present a CPU–GPU system for runtime acceleration of large molecular simulations using GPU computation and memory swaps. The memory architecture of the GPU can be used both as container for simulation data stored on the graphics card and as floating-point code target, providing an effective means for the manipulation of atomistic or molecular data on the GPU. To fully take advantage of this mechanism, efficient GPU realizations of algorithms used to perform atomistic and molecular simulations are essential. Our system implements a versatile molecular engine, including inter-molecule interactions and orientational variables for performing the Metropolis Monte Carlo (MMC) algorithm, which is one type of Markov chain Monte Carlo. By combining memory objects with floating-point code fragments we have implemented an MMC parallel engine that entirely avoids the communication time of molecular data at runtime. Our runtime acceleration system is a forerunner of a new class of CPU–GPU algorithms exploiting memory concepts combined with threading for avoiding bus bandwidth and communication. The testbed molecular system used here is a condensed phase system of oligopyrrole chains. A benchmark shows a size scaling speedup of 60 for systems with 210,000 pyrrole monomers. Our implementation can easily be combined with MPI to connect in parallel several CPU–GPU duets. -- Highlights: •We parallelize the Metropolis Monte Carlo (MMC) algorithm on one CPU–GPU duet. •The Adaptive Tempering Monte Carlo employs MMC and profits from this CPU–GPU implementation. •Our benchmark shows a size scaling-up speedup of 62 for systems with 225,000 particles. •The testbed involves a polymeric system of oligopyrroles in the condensed phase. •The CPU–GPU parallelization includes dipole–dipole and Mie–Jones classic potentials.
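The accept/reject core of the Metropolis Monte Carlo algorithm named in this abstract can be sketched in a few lines. This is an illustrative serial sampler on a toy one-dimensional harmonic "molecule", not the paper's GPU-resident molecular engine; the function names and energy model are hypothetical.

```python
import math
import random

def metropolis_mc(energy, x0, n_steps, step_size, beta=1.0, rng=None):
    """Minimal serial Metropolis Monte Carlo sampler (illustrative only).

    Proposes symmetric random displacements and accepts each one with the
    standard Metropolis criterion min(1, exp(-beta * dE)).
    """
    rng = rng or random.Random(0)
    x, e = x0, energy(x0)
    samples, accepted = [], 0
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step_size, step_size)
        e_new = energy(x_new)
        # Downhill moves are always accepted; uphill moves with prob exp(-beta*dE).
        if e_new <= e or rng.random() < math.exp(-beta * (e_new - e)):
            x, e = x_new, e_new
            accepted += 1
        samples.append(x)
    return samples, accepted / n_steps

# Toy system: a single coordinate in a quadratic well, E(x) = x^2 / 2.
samples, acc = metropolis_mc(lambda x: 0.5 * x * x, x0=0.0,
                             n_steps=20000, step_size=1.0)
```

A GPU version such as the one described above would evaluate many proposals or many molecules in parallel; the sequential chain here only illustrates the update rule itself.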
LightAssembler: fast and memory-efficient assembly algorithm for high-throughput sequencing reads.
El-Metwally, Sara; Zakaria, Magdi; Hamza, Taher
2016-11-01
The deluge of current sequenced data has exceeded Moore's Law, more than doubling every 2 years since the next-generation sequencing (NGS) technologies were invented. Accordingly, we will be able to generate more and more data with high speed at fixed cost, but lack the computational resources to store, process and analyze it. With error-prone high-throughput NGS reads and genomic repeats, the assembly graph contains a massive amount of redundant nodes and branching edges. Most assembly pipelines require this large graph to reside in memory to start their workflows, which is intractable for mammalian genomes. Resource-efficient genome assemblers combine both the power of advanced computing techniques and innovative data structures to encode the assembly graph efficiently in computer memory. LightAssembler is a lightweight assembly algorithm designed to be executed on a desktop machine. It uses a pair of cache oblivious Bloom filters, one holding a uniform sample of [Formula: see text]-spaced sequenced [Formula: see text]-mers and the other holding [Formula: see text]-mers classified as likely correct, using a simple statistical test. LightAssembler contains a light implementation of the graph traversal and simplification modules that achieves assembly accuracy and contiguity comparable to other competing tools. Our method reduces the memory usage by [Formula: see text] compared to the resource-efficient assemblers using benchmark datasets from the GAGE and Assemblathon projects. While LightAssembler can be considered a gap-based sequence assembler, different gap sizes result in an almost constant assembly size and genome coverage. Availability: https://github.com/SaraEl-Metwally/LightAssembler Contact: sarah_almetwally4@mans.edu.eg Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
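The abstract's idea of holding a uniform sample of spaced k-mers in a Bloom filter can be sketched as follows. This is a simplified stand-in (a plain Bloom filter, not a cache-oblivious one, with hypothetical sizes and hash counts), not LightAssembler's actual data structure.

```python
import hashlib

class BloomFilter:
    """Tiny Bloom filter: probabilistic set membership with no false negatives."""
    def __init__(self, size_bits=1 << 16, n_hashes=3):
        self.size = size_bits
        self.n_hashes = n_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item):
        # Derive n_hashes bit positions from salted SHA-256 digests.
        for i in range(self.n_hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))

def spaced_kmers(read, k, gap):
    """Every gap-th k-mer of a read: a uniform sample, as in the abstract."""
    return [read[i:i + k] for i in range(0, len(read) - k + 1, gap)]

# Insert a sample of 5-mers from a toy read into the filter.
bf = BloomFilter()
for kmer in spaced_kmers("ACGTACGTACGTACGT", k=5, gap=2):
    bf.add(kmer)
```

Membership queries may return rare false positives (hence the statistical test mentioned in the abstract), but never false negatives, which is what makes the structure usable as a compact k-mer index.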
Efficient Aho-Corasick String Matching on Emerging Multicore Architectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tumeo, Antonino; Villa, Oreste; Secchi, Simone
String matching algorithms are critical to several scientific fields. Besides text processing and databases, emerging applications such as DNA and protein sequence analysis, data mining, information security software, antivirus and machine learning all exploit string matching algorithms [3]. All these applications usually process large quantities of textual data and require high performance and/or predictable execution times. Among all the string matching algorithms, one of the most studied, especially for text processing and security applications, is the Aho-Corasick algorithm. Aho-Corasick is an exact, multi-pattern string matching algorithm which performs the search in a time linearly proportional to the length of the input text, independently of pattern set size. However, depending on the implementation, when the number of patterns increases, the memory occupation may rise drastically. In turn, this can lead to significant variability in performance, due to memory access times and caching effects. This is a significant concern for many mission critical applications and modern high performance architectures. For example, security applications such as Network Intrusion Detection Systems (NIDS) must be able to scan network traffic against very large dictionaries in real time. Modern Ethernet links reach up to 10 Gbps, and malicious threats are already well over 1 million, and exponentially growing [28]. When performing the search, a NIDS should not slow down the network, or let network packets pass unchecked. Nevertheless, on current state-of-the-art cache based processors, there may be a large performance variability when dealing with big dictionaries and inputs that have different frequencies of matching patterns. In particular, when few patterns are matched and they are all in the cache, the procedure is fast.
Instead, when they are not in the cache, often because many patterns are matched and the caches are continuously thrashed, they must be retrieved from system memory and the procedure is slowed down by the increased latency. Efficient implementations of string matching algorithms have been the focus of several works, targeting Field Programmable Gate Arrays [4, 25, 15, 5], highly multi-threaded solutions like the Cray XMT [34], multicore processors [19] or heterogeneous processors like the Cell Broadband Engine [35, 22]. Recently, several researchers have also started to investigate the use of Graphic Processing Units (GPUs) for string matching algorithms in security applications [20, 10, 32, 33]. Most of these approaches mainly focus on reaching high peak performance, or try to optimize the memory occupation, rather than looking at performance stability. However, hardware solutions support only small dictionary sizes due to lack of memory and are difficult to customize, while platforms such as the Cell/B.E. are very complex to program.
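A minimal version of the Aho-Corasick automaton discussed above fits in a few dozen lines. This sketch builds the goto/fail/output tables with a breadth-first search and is purely illustrative of the algorithm, not of any of the optimized implementations surveyed here.

```python
from collections import deque

def build_aho_corasick(patterns):
    """Build goto/fail/output tables for multi-pattern matching."""
    goto, fail, out = [{}], [0], [set()]      # state 0 is the root
    for pat in patterns:
        s = 0
        for ch in pat:
            if ch not in goto[s]:
                goto.append({})
                fail.append(0)
                out.append(set())
                goto[s][ch] = len(goto) - 1
            s = goto[s][ch]
        out[s].add(pat)
    queue = deque(goto[0].values())           # depth-1 states fail to the root
    while queue:
        s = queue.popleft()
        for ch, t in goto[s].items():
            queue.append(t)
            # Follow fail links until a state with a ch-transition is found.
            f = fail[s]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[t] = goto[f].get(ch, 0)
            out[t] |= out[fail[t]]            # inherit matches ending here
    return goto, fail, out

def search(text, tables):
    """Report (start_index, pattern) for every match, in one pass over text."""
    goto, fail, out = tables
    s, hits = 0, []
    for i, ch in enumerate(text):
        while s and ch not in goto[s]:
            s = fail[s]
        s = goto[s].get(ch, 0)
        for pat in out[s]:
            hits.append((i - len(pat) + 1, pat))
    return hits

tables = build_aho_corasick(["he", "she", "his", "hers"])
hits = search("ushers", tables)
```

The single pass over the text, regardless of how many patterns are loaded, is the linear-time property the chapter refers to; the memory and cache behavior of the tables is where real implementations diverge.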
Efficient parallel and out of core algorithms for constructing large bi-directed de Bruijn graphs.
Kundeti, Vamsi K; Rajasekaran, Sanguthevar; Dinh, Hieu; Vaughn, Matthew; Thapar, Vishal
2010-11-15
Assembling genomic sequences from a set of overlapping reads is one of the most fundamental problems in computational biology. Algorithms addressing the assembly problem fall into two broad categories based on the data structures which they employ. The first class uses an overlap/string graph and the second type uses a de Bruijn graph. However, with the recent advances in short read sequencing technology, de Bruijn graph based algorithms seem to play a vital role in practice. Efficient algorithms for building these massive de Bruijn graphs are essential in large sequencing projects based on short reads. In an earlier work, an O(n/p) time parallel algorithm was given for this problem, where n is the size of the input and p is the number of processors. This algorithm enumerates all possible bi-directed edges which can overlap with a node and ends up generating Θ(nΣ) messages (Σ being the size of the alphabet). In this paper we present a Θ(n/p) time parallel algorithm with a communication complexity that is equal to that of parallel sorting and is not sensitive to Σ. The generality of our algorithm makes it very easy to extend it even to the out-of-core model, in which case it has an optimal I/O complexity of Θ((n/B) log_{M/B}(n/B)) (M being the main memory size and B being the size of the disk block). We demonstrate the scalability of our parallel algorithm on an SGI/Altix computer. A comparison of our algorithm with the previous approaches reveals that our algorithm is faster, both asymptotically and practically. We demonstrate the scalability of our sequential out-of-core algorithm by comparing it with the algorithm used by VELVET to build the bi-directed de Bruijn graph. Our experiments reveal that our algorithm can build the graph with a constant amount of memory, which clearly outperforms VELVET. We also provide efficient algorithms for the bi-directed chain compaction problem.
The bi-directed de Bruijn graph is a fundamental data structure for any sequence assembly program based on Eulerian approach. Our algorithms for constructing Bi-directed de Bruijn graphs are efficient in parallel and out of core settings. These algorithms can be used in building large scale bi-directed de Bruijn graphs. Furthermore, our algorithms do not employ any all-to-all communications in a parallel setting and perform better than the prior algorithms. Finally our out-of-core algorithm is extremely memory efficient and can replace the existing graph construction algorithm in VELVET.
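The basic construction underlying such assemblers can be illustrated with an ordinary (uni-directed, in-memory, serial) de Bruijn graph; the bi-directed, parallel and out-of-core machinery of the paper is well beyond this toy sketch.

```python
from collections import defaultdict

def de_bruijn_graph(reads, k):
    """Build a de Bruijn graph: nodes are (k-1)-mers, and each k-mer
    occurrence in the reads contributes one edge prefix -> suffix."""
    graph = defaultdict(list)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].append(kmer[1:])   # edge: first k-1 -> last k-1
    return graph

# One toy read; with k=3 its five 3-mers yield five edges.
g = de_bruijn_graph(["ACGTACG"], k=3)
```

In the Eulerian approach the assembled sequence corresponds to a path that uses every edge once; a bi-directed variant additionally merges each k-mer with its reverse complement, which this sketch omits.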
NASA Astrophysics Data System (ADS)
Massah, Mozhdeh; Kantz, Holger
2016-04-01
As we have one and only one earth and no replicas, climate characteristics are usually computed as time averages from a single time series. For understanding climate variability, it is essential to understand how close a single time average will typically be to an ensemble average. To answer this question, we study large deviation probabilities (LDP) of stochastic processes and characterize them by their dependence on the time window. In contrast to iid variables, for which there exists an analytical expression for the rate function, correlated variables such as auto-regressive (short-memory) and auto-regressive fractionally integrated moving average (long-memory) processes do not admit an analytical LDP. We study LDP for these processes in order to see how correlation affects this probability in comparison to iid data. Although short-range correlations lead to a simple correction of sample size, long-range correlations lead to a sub-exponential decay of LDP and hence to a very slow convergence of time averages. This effect is demonstrated for a 120 year long time series of daily temperature anomalies measured in Potsdam (Germany).
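The slow convergence of time averages under correlation can be checked with a small simulation. This sketch compares the spread of window averages for iid noise against a short-memory AR(1) process; all parameter values are arbitrary choices for illustration, and long-memory (ARFIMA) processes would widen the spread further still.

```python
import random

def ar1_series(phi, n, rng):
    """AR(1) process x_t = phi * x_{t-1} + Gaussian noise; phi=0 gives iid."""
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, 1.0)
        out.append(x)
    return out

def spread_of_time_averages(phi, window, n_reps, seed=0):
    """Variance of the time average over many independent windows:
    a proxy for how far a single time average strays from the ensemble mean."""
    rng = random.Random(seed)
    means = [sum(ar1_series(phi, window, rng)) / window
             for _ in range(n_reps)]
    mu = sum(means) / n_reps
    return sum((m - mu) ** 2 for m in means) / n_reps

var_iid = spread_of_time_averages(phi=0.0, window=200, n_reps=300)
var_ar = spread_of_time_averages(phi=0.9, window=200, n_reps=300)
```

For iid data the variance of the window mean decays like 1/n; positive autocorrelation inflates it by roughly the integrated autocorrelation time, which is the "correction of sample size" the abstract mentions for short-range correlations.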
NASA Technical Reports Server (NTRS)
Vanderplaats, Garrett; Townsend, James C. (Technical Monitor)
2002-01-01
The purpose of this research under the NASA Small Business Innovative Research program was to develop algorithms and associated software to solve very large nonlinear, constrained optimization tasks. Key issues included efficiency, reliability, memory, and gradient calculation requirements. This report describes the general optimization problem, ten candidate methods, and detailed evaluations of four candidates. The algorithm chosen for final development is a modern recreation of a 1960s external penalty function method that uses very limited computer memory and computational time. Although of lower efficiency, the new method can solve problems orders of magnitude larger than current methods. The resulting BIGDOT software has been demonstrated on problems with 50,000 variables and about 50,000 active constraints. For unconstrained optimization, it has solved a problem in excess of 135,000 variables. The method includes a technique for solving discrete variable problems that finds a "good" design, although a theoretical optimum cannot be guaranteed. It is very scalable in that the number of function and gradient evaluations does not change significantly with increased problem size. Test cases are provided to demonstrate the efficiency and reliability of the methods and software.
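An external penalty function method of the kind described above can be sketched as follows. This toy version uses finite-difference gradient descent as the inner unconstrained solver, with a step schedule tuned to the example problem; it bears no relation to the actual BIGDOT implementation, which targets problems with tens of thousands of variables.

```python
def exterior_penalty_minimize(f, ineq, x0, r0=1.0, growth=10.0,
                              n_outer=6, n_inner=300):
    """Exterior penalty method: solve  min f(x) s.t. g(x) <= 0 for all g in ineq
    by repeatedly minimizing  f(x) + r * sum(max(0, g(x))**2)  with growing r.
    The inner loop is finite-difference gradient descent whose step size
    shrinks with r (a crude schedule adequate for this toy quadratic problem).
    """
    def penalized(x, r):
        return f(x) + r * sum(max(0.0, g(x)) ** 2 for g in ineq)

    x, r, h = list(x0), r0, 1e-6
    for _ in range(n_outer):
        step = 0.4 / (1.0 + 4.0 * r)          # keep descent stable as r grows
        for _ in range(n_inner):
            base = penalized(x, r)
            grad = []
            for i in range(len(x)):
                xp = x[:]
                xp[i] += h
                grad.append((penalized(xp, r) - base) / h)
            x = [xi - step * gi for xi, gi in zip(x, grad)]
        r *= growth                            # tighten the penalty
    return x

# Toy problem: min x^2 + y^2  subject to  x + y >= 1; the optimum is (0.5, 0.5).
sol = exterior_penalty_minimize(lambda v: v[0] ** 2 + v[1] ** 2,
                                [lambda v: 1.0 - v[0] - v[1]],
                                x0=[0.0, 0.0])
```

The iterates approach the optimum from the infeasible side, which is the characteristic behavior of exterior penalty methods and why the penalty parameter must grow across outer iterations.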
Summer Proceedings 2016: The Center for Computing Research at Sandia National Laboratories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carleton, James Brian; Parks, Michael L.
Solving sparse linear systems from the discretization of elliptic partial differential equations (PDEs) is an important building block in many engineering applications. Sparse direct solvers can solve general linear systems, but are usually slower and use much more memory than effective iterative solvers. To overcome these two disadvantages, a hierarchical solver (LoRaSp) based on H2-matrices was introduced in [22]. Here, we have developed a parallel version of the algorithm in LoRaSp to solve large sparse matrices on distributed memory machines. On a single processor, the factorization time of our parallel solver scales almost linearly with the problem size for three-dimensional problems, as opposed to the quadratic scalability of many existing sparse direct solvers. Moreover, our solver leads to almost constant numbers of iterations, when used as a preconditioner for Poisson problems. On more than one processor, our algorithm has significant speedups compared to sequential runs. With this parallel algorithm, we are able to solve large problems much faster than many existing packages as demonstrated by the numerical experiments.
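The role a hierarchical solver plays as a preconditioner can be illustrated with a plain preconditioned conjugate gradient loop. In this sketch a trivial Jacobi (diagonal) preconditioner stands in for LoRaSp, and the matrix is a toy 1D Poisson operator; both substitutions are ours, not the paper's.

```python
def cg(matvec, b, precond=None, tol=1e-10, max_iter=500):
    """Preconditioned conjugate gradient for SPD systems (pure Python).

    `matvec` applies A; `precond` applies an approximate inverse M^-1.
    A hierarchical solver would be plugged in as `precond`.
    """
    n = len(b)
    x = [0.0] * n
    r = list(b)                                  # residual for x = 0
    z = precond(r) if precond else list(r)
    p = list(z)
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for it in range(max_iter):
        Ap = matvec(p)
        alpha = rz / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            return x, it + 1
        z = precond(r) if precond else list(r)
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x, max_iter

# 1D Poisson matrix A = tridiag(-1, 2, -1) applied matrix-free, n = 50.
n = 50
def A(v):
    return [2 * v[i] - (v[i - 1] if i else 0.0)
            - (v[i + 1] if i < n - 1 else 0.0) for i in range(n)]

b = [1.0] * n
x, iters = cg(A, b, precond=lambda r: [ri / 2.0 for ri in r])
```

A good preconditioner (hierarchical, multigrid, etc.) makes the iteration count nearly independent of problem size, which is exactly the "almost constant numbers of iterations" property claimed above; Jacobi is used here only to keep the sketch short.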
NASA Astrophysics Data System (ADS)
Fiandrotti, Attilio; Fosson, Sophie M.; Ravazzi, Chiara; Magli, Enrico
2018-04-01
Compressive sensing promises to enable bandwidth-efficient on-board compression of astronomical data by lifting the encoding complexity from the source to the receiver. The signal is recovered off-line, exploiting GPUs parallel computation capabilities to speedup the reconstruction process. However, inherent GPU hardware constraints limit the size of the recoverable signal and the speedup practically achievable. In this work, we design parallel algorithms that exploit the properties of circulant matrices for efficient GPU-accelerated sparse signals recovery. Our approach reduces the memory requirements, allowing us to recover very large signals with limited memory. In addition, it achieves a tenfold signal recovery speedup thanks to ad-hoc parallelization of matrix-vector multiplications and matrix inversions. Finally, we practically demonstrate our algorithms in a typical application of circulant matrices: deblurring a sparse astronomical image in the compressed domain.
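The key property exploited above, that circulant matrices are diagonalized by the Fourier transform, reduces a matrix-vector product to two transforms and a pointwise multiply. This sketch demonstrates the identity with a naive O(n^2) DFT for clarity; a real implementation would use an FFT (and, as in the paper, GPU kernels).

```python
import cmath

def dft(x, inverse=False):
    """Naive O(n^2) discrete Fourier transform (fine for a small demo)."""
    n = len(x)
    sign = 1 if inverse else -1
    out = [sum(x[k] * cmath.exp(sign * 2j * cmath.pi * j * k / n)
               for k in range(n)) for j in range(n)]
    return [v / n for v in out] if inverse else out

def circulant_matvec(first_col, v):
    """Compute y = C v for a circulant C with first column `first_col`.

    Since C = F^-1 diag(F c) F, the product never needs the O(n^2)
    entries of C; with an FFT this costs O(n log n) time and O(n) memory.
    """
    c_hat = dft(first_col)
    v_hat = dft(v)
    y_hat = [ci * vi for ci, vi in zip(c_hat, v_hat)]
    return [y.real for y in dft(y_hat, inverse=True)]

# Compare against the direct definition C[i][j] = c[(i - j) mod n].
c = [4.0, 1.0, 0.0, 2.0]
v = [1.0, 2.0, 3.0, 4.0]
fast = circulant_matvec(c, v)
```

The same diagonalization is what lets the compressed-domain deblurring above avoid storing the sensing matrix explicitly, which is the memory reduction the abstract describes.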
Attention has memory: priming for the size of the attentional focus.
Fuggetta, Giorgio; Lanfranchi, Silvia; Campana, Gianluca
2009-01-01
Repeating the same target features or spatial position, as well as repeating the same context (e.g. distractor sets) in visual search, leads to a decrease in reaction times. This modulation can occur on a trial-by-trial basis (the previous trial primes the following one), but can also occur across multiple trials (i.e. performance in the current trial can benefit from features, positions or contexts seen several trials earlier), and includes inhibition of different features, positions or contexts besides facilitation of the same ones. Here we asked whether a similar implicit memory mechanism exists for the size of the attentional focus. By manipulating the size of the attentional focus through the repetition of search arrays with the same vs. different size, we found both facilitation for the same array size and inhibition for a different array size, as well as a progressive improvement in performance with an increasing number of repetitions of search arrays of the same size. These results show that implicit memory for the size of the attentional focus can guide visual search even in the absence of feature or position priming, or of distractor contextual effects.
Zhu, Bi; Chen, Chuansheng; Loftus, Elizabeth F; He, Qinghua; Lei, Xuemei; Dong, Qi; Lin, Chongde
2016-11-01
There is a keen interest in identifying specific brain regions that are related to individual differences in true and false memories. Previous functional neuroimaging studies showed that activities in the hippocampus, right fusiform gyrus, and parahippocampal gyrus were associated with true and false memories, but no study thus far has examined whether the structures of these brain regions are associated with short-term and long-term true and false memories. To address that question, the current study analyzed data from 205 healthy young adults, who had valid data from both structural brain imaging and a misinformation task. In the misinformation task, subjects saw the crime scenarios, received misinformation, and took memory tests about the crimes an hour later and again after 1.5 years. Results showed that bilateral hippocampal volume was associated with short-term true and false memories, whereas right fusiform gyrus volume and surface area were associated with long-term true and false memories. This study provides the first evidence for the structural neural bases of individual differences in short-term and long-term true and false memories.
Fast and Accurate Simulation of the Cray XMT Multithreaded Supercomputer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Villa, Oreste; Tumeo, Antonino; Secchi, Simone
Irregular applications, such as data mining and analysis or graph-based computations, show unpredictable memory/network access patterns and control structures. Highly multithreaded architectures with large processor counts, like the Cray MTA-1, MTA-2 and XMT, appear to address their requirements better than commodity clusters. However, the research on highly multithreaded systems is currently limited by the lack of adequate architectural simulation infrastructures due to issues such as size of the machines, memory footprint, simulation speed, accuracy and customization. At the same time, Shared-memory MultiProcessors (SMPs) with multi-core processors have become an attractive platform to simulate large scale machines. In this paper, we introduce a cycle-level simulator of the highly multithreaded Cray XMT supercomputer. The simulator runs unmodified XMT applications. We discuss how we tackled the challenges posed by its development, detailing the techniques introduced to make the simulation as fast as possible while maintaining a high accuracy. By mapping XMT processors (ThreadStorm with 128 hardware threads) to host computing cores, the simulation speed remains constant as the number of simulated processors increases, up to the number of available host cores. The simulator supports zero-overhead switching among different accuracy levels at run-time and includes a network model that takes into account contention. On a modern 48-core SMP host, our infrastructure simulates a large set of irregular applications 500 to 2000 times slower than real time when compared to a 128-processor XMT, while remaining within 10% accuracy. Emulation is only from 25 to 200 times slower than real time.
Graf, Daniel; Beuerle, Matthias; Schurkus, Henry F; Luenser, Arne; Savasci, Gökcen; Ochsenfeld, Christian
2018-05-08
An efficient algorithm for calculating the random phase approximation (RPA) correlation energy is presented that is as accurate as the canonical molecular orbital resolution-of-the-identity RPA (RI-RPA) with the important advantage of an effective linear-scaling behavior (instead of quartic) for large systems due to a formulation in the local atomic orbital space. The high accuracy is achieved by utilizing optimized minimax integration schemes and the local Coulomb metric attenuated by the complementary error function for the RI approximation. The memory bottleneck of former atomic orbital (AO)-RI-RPA implementations ( Schurkus, H. F.; Ochsenfeld, C. J. Chem. Phys. 2016 , 144 , 031101 and Luenser, A.; Schurkus, H. F.; Ochsenfeld, C. J. Chem. Theory Comput. 2017 , 13 , 1647 - 1655 ) is addressed by precontraction of the large 3-center integral matrix with the Cholesky factors of the ground state density reducing the memory requirements of that matrix by a factor of [Formula: see text]. Furthermore, we present a parallel implementation of our method, which not only leads to faster RPA correlation energy calculations but also to a scalable decrease in memory requirements, opening the door for investigations of large molecules even on small- to medium-sized computing clusters. Although it is known that AO methods are highly efficient for extended systems, where sparsity allows for reaching the linear-scaling regime, we show that our work also extends the applicability when considering highly delocalized systems for which no linear scaling can be achieved. As an example, the interlayer distance of two covalent organic framework pore fragments (comprising 384 atoms in total) is analyzed.
Transparent Large Strain Thermoplastic Polyurethane Magneto-Active Nanocomposites
NASA Technical Reports Server (NTRS)
Yoonessi, Mitra; Carpen, Ileana; Peck, John; Sola, Francisco; Bail, Justin; Lerch, Bradley; Meador, Michael
2010-01-01
Smart adaptive materials are an important class of materials which can be used in space deployable structures, morphing wings, and structural air vehicle components where remote actuation can improve fuel efficiency. Adaptive materials can undergo deformation when exposed to external stimuli such as electric fields, thermal gradients, radiation (IR, UV, etc.), chemical and electrochemical actuation, and magnetic field. Large strain, controlled and repetitive actuation are important characteristics of smart adaptive materials. Polymer nanocomposites can be tailored as shape memory polymers and actuators. Magnetic actuation of polymer nanocomposites using a range of iron, iron cobalt, and iron manganese nanoparticles is presented. The iron-based nanoparticles were synthesized using the soft template (1) and Sun's (2) methods. The nanoparticles shape and size were examined using TEM. The crystalline structure and domain size were evaluated using WAXS. Surface modifications of the nanoparticles were performed to improve dispersion, and were characterized with IR and TGA. TPU nanocomposites exhibited actuation for approximately 2wt% nanoparticle loading in an applied magnetic field. Large deformation and fast recovery were observed. These nanocomposites represent a promising potential for new generation of smart materials.
Parallel computing for probabilistic fatigue analysis
NASA Technical Reports Server (NTRS)
Sues, Robert H.; Lua, Yuan J.; Smith, Mark D.
1993-01-01
This paper presents the results of Phase I research to investigate the most effective parallel processing software strategies and hardware configurations for probabilistic structural analysis. We investigate the efficiency of both shared and distributed-memory architectures via a probabilistic fatigue life analysis problem. We also present a parallel programming approach, the virtual shared-memory paradigm, that is applicable across both types of hardware. Using this approach, problems can be solved on a variety of parallel configurations, including networks of single or multiprocessor workstations. We conclude that it is possible to effectively parallelize probabilistic fatigue analysis codes; however, special strategies will be needed to achieve large-scale parallelism, to keep a large number of processors busy, and to treat problems with the large memory requirements encountered in practice. We also conclude that distributed-memory architecture is preferable to shared-memory for achieving large-scale parallelism; however, in the future, the currently emerging hybrid-memory architectures will likely be optimal.
Grain size of recall practice for lengthy text material: fragile and mysterious effects on memory.
Wissman, Kathryn T; Rawson, Katherine A
2015-03-01
The current research evaluated the extent to which the grain size of recall practice for lengthy text material affects recall during practice and subsequent memory. The grain size hypothesis states that a smaller vs. larger grain size will increase retrieval success during practice that in turn will enhance subsequent memory for lengthy text material. Participants were prompted to recall directly after studying each section (section recall) or after all sections had been studied (whole-text recall) during practice, and then all participants completed a final test after a delay. Results across 7 experiments (including 587 participants and 1,394 recall protocols) partially disconfirmed the predictions of the grain size hypothesis: Although the smaller grain size produced sizable recall advantages during practice as expected (ds from 1.02 to 1.87 across experiments), the advantage was substantially or completely attenuated across a delay. Experiments 2-7 falsified several plausible methodological and theoretical explanations for the fragility of the effect, indicating that it was not due to particular text materials, retrieval from working memory during practice, the length of the retention interval, the spacing between study and practice recall, a disproportionate increase in recall of unimportant details, or a deficit in integration of ideas across text sections. In sum, results conclusively establish an initially sizable but mysteriously fragile effect of grain size, for which an explanation remains elusive. PsycINFO Database Record (c) 2015 APA, all rights reserved.
Pigeon visual short-term memory directly compared to primates.
Wright, Anthony A; Elmore, L Caitlin
2016-02-01
Three pigeons were trained to remember arrays of 2-6 colored squares and detect which of two squares had changed color to test their visual short-term memory. Procedures (e.g., stimuli, displays, viewing times, delays) were similar to those used to test monkeys and humans. Following extensive training, pigeons performed slightly better than similarly trained monkeys, but both animal species were considerably less accurate than humans with the same array sizes (2, 4 and 6 items). Pigeons and monkeys showed calculated memory capacities of one item or less, whereas humans showed a memory capacity of 2.5 items. Despite the differences in calculated memory capacities, the pigeons' memory results, like those from monkeys and humans, were all well characterized by an inverse power-law function fit to d' values for the five display sizes. This characterization provides a simple, straightforward summary of the fundamental processing of visual short-term memory (how visual short-term memory declines with memory load) that emphasizes species similarities based upon similar functional relationships. By closely matching pigeon testing parameters to those of monkeys and humans, these similar functional relationships suggest similar underlying processes of visual short-term memory in pigeons, monkeys and humans. Copyright © 2015 Elsevier B.V. All rights reserved.
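The inverse power-law characterization described above can be reproduced in a few lines. The data below are synthetic, made up only to illustrate the functional form d' = a * n^(-b); they are not values from the study.

```python
import numpy as np

# Hypothetical display sizes and d' values following an inverse power law,
# d' = a * n^(-b), the form the abstract reports for all three species.
display_sizes = np.array([2.0, 3.0, 4.0, 5.0, 6.0])
d_prime = 4.0 * display_sizes ** -1.2   # synthetic, noise-free for clarity

# The fit is linear in log-log space: log d' = log a - b * log n.
slope, intercept = np.polyfit(np.log(display_sizes), np.log(d_prime), 1)
a_hat, b_hat = np.exp(intercept), -slope
```

With real data, the quality of this fit (rather than the parameter values) is what supports the claim of similar underlying processes across species.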
CWICOM: A Highly Integrated & Innovative CCSDS Image Compression ASIC
NASA Astrophysics Data System (ADS)
Poupat, Jean-Luc; Vitulli, Raffaele
2013-08-01
The space market is more and more demanding in terms of image compression performance. Earth-observation satellite instrument resolution, agility, and swath are continuously increasing, multiplying by 10 the volume of pictures acquired in one orbit. In parallel, satellite size and mass are decreasing, requiring innovative electronic technologies that reduce size, mass, and power consumption. Astrium, a leader in the market of combined compression-and-memory solutions for space applications, has developed a new image compression ASIC, which is presented in this paper. CWICOM is a high-performance and innovative image compression ASIC developed by Astrium in the frame of ESA contract n°22011/08/NLL/LvH. The objective of this ESA contract is to develop a radiation-hardened ASIC that implements the CCSDS 122.0-B-1 Standard for Image Data Compression, that has a SpaceWire interface for configuring and controlling the device, and that is compatible with the Sentinel-2 interface and with similar Earth Observation missions. CWICOM stands for CCSDS Wavelet Image COMpression ASIC. It is a large-dynamic-range, large-image, very-high-speed compression ASIC potentially relevant for compression of any 2D image with bi-dimensional data correlation, such as Earth observation, scientific data compression… The paper presents some of the main aspects of the CWICOM development, such as the algorithm and specification, the innovative memory organization, the validation approach, and the status of the project.
Curot, Jonathan; Busigny, Thomas; Valton, Luc; Denuelle, Marie; Vignal, Jean-Pierre; Maillard, Louis; Chauvel, Patrick; Pariente, Jérémie; Trebuchon, Agnès; Bartolomei, Fabrice; Barbeau, Emmanuel J
2017-07-01
Electrical brain stimulations (EBS) sometimes induce reminiscences, but it is largely unknown what type of memories they can trigger. We reviewed 80 years of literature on reminiscences induced by EBS and added our own database. We classified them according to modern conceptions of memory. We observed a surprisingly large variety of reminiscences covering all aspects of declarative memory. However, most were poorly detailed and only a few were episodic. This result does not support theories of a highly stable and detailed memory, as initially postulated and still widely believed by the general public. Moreover, memory networks could only be activated by some of their nodes: 94.1% of EBS were temporal, although the parietal and frontal lobes, also involved in memory networks, were stimulated. The qualitative nature of memories largely depended on the site of stimulation: EBS of the rhinal cortex mostly induced personal semantic reminiscences, while only hippocampal EBS induced episodic memories. This result supports the view that EBS can activate memory in predictable ways in humans. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
Optoelectronic fuzzy associative memory with controllable attraction basin sizes
NASA Astrophysics Data System (ADS)
Wen, Zhiqing; Campbell, Scott; Wu, Weishu; Yeh, Pochi
1995-10-01
We propose and demonstrate a new fuzzy associative memory model that provides an option to control the sizes of the attraction basins in neural networks. In our optoelectronic implementation we use spatial/polarization encoding to represent the fuzzy variables. Shadow casting of the encoded patterns is employed to yield the fuzzy-absolute difference between fuzzy variables.
Qian, Shi-Bing; Wang, Yong-Ping; Shao, Yan; Liu, Wen-Jun; Ding, Shi-Jin
2017-12-01
For the first time, the growth of Ni nanoparticles (NPs) was explored by the plasma-assisted atomic layer deposition (ALD) technique using NiCp2 and NH3 precursors. The influences of substrate temperature and deposition cycles on ALD Ni NPs were studied by field-emission scanning electron microscopy and X-ray photoelectron spectroscopy. By optimizing the process parameters, high-density and uniform Ni NPs were achieved at a substrate temperature of 280 °C with 50 deposition cycles, exhibiting a density of ~1.5 × 10^12 cm^-2 and a small size of 3~4 nm. Further, the above Ni NPs were used as the charge storage medium of an amorphous indium-gallium-zinc oxide (a-IGZO) thin-film transistor (TFT) memory, demonstrating a high storage capacity for electrons. In particular, the nonvolatile memory exhibited an excellent programming characteristic; e.g., a large threshold voltage shift of 8.03 V was obtained after being programmed at 17 V for 5 ms.
Image coding using entropy-constrained residual vector quantization
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Smith, Mark J. T.; Barnes, Christopher F.
1993-01-01
The residual vector quantization (RVQ) structure is exploited to produce a variable-length-codeword RVQ. Necessary conditions for the optimality of this RVQ are presented, and a new entropy-constrained RVQ (EC-RVQ) design algorithm is shown to be very effective in designing RVQ codebooks over a wide range of bit rates and vector sizes. The new EC-RVQ has several important advantages. It can outperform entropy-constrained VQ (ECVQ) in terms of peak signal-to-noise ratio (PSNR), memory, and computation requirements. It can also be used to design high-rate codebooks and codebooks with relatively large vector sizes. Experimental results indicate that when the new EC-RVQ is applied to image coding, very high quality is achieved at relatively low bit rates.
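As a rough illustration of the residual structure that RVQ exploits, the sketch below uses made-up toy data and codebooks simply sampled from that data; it is not the entropy-constrained design algorithm of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def nearest(codebook, x):
    """Index of the codeword closest to x (Euclidean distance)."""
    return int(np.argmin(np.linalg.norm(codebook - x, axis=1)))

# Toy data and two small stage codebooks. A real design would train these
# (e.g., with an entropy-constrained Lloyd-style algorithm); here the
# codewords are just sampled from the data for illustration.
data = rng.normal(size=(200, 4))
stage1 = data[rng.choice(200, 8, replace=False)]
residuals = np.array([x - stage1[nearest(stage1, x)] for x in data])
# Include the zero vector so the second stage can never make things worse.
stage2 = np.vstack([np.zeros(4), residuals[rng.choice(200, 7, replace=False)]])

def rvq_encode(x):
    i = nearest(stage1, x)                 # coarse stage
    j = nearest(stage2, x - stage1[i])     # residual stage
    return i, j

def rvq_decode(i, j):
    return stage1[i] + stage2[j]

# Two stages of 8 codewords address 64 effective reproduction vectors while
# storing and searching only 16: the memory/computation advantage of RVQ.
err1 = np.mean([np.linalg.norm(x - stage1[nearest(stage1, x)]) for x in data])
err2 = np.mean([np.linalg.norm(x - rvq_decode(*rvq_encode(x))) for x in data])
```

The second stage refines whatever error the first stage leaves, which is why the cascade can match larger codebooks at a fraction of the memory.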
Origin of temperature and field dependence of magnetic skyrmion size in ultrathin nanodots
NASA Astrophysics Data System (ADS)
Tomasello, R.; Guslienko, K. Y.; Ricci, M.; Giordano, A.; Barker, J.; Carpentieri, M.; Chubykalo-Fesenko, O.; Finocchio, G.
2018-02-01
Understanding the physical properties of magnetic skyrmions is important for fundamental research with the aim to develop new spintronic device paradigms where both logic and memory can be integrated at the same level. Here, we show a universal model based on the micromagnetic formalism that can be used to study skyrmion stability as a function of magnetic field and temperature. We consider ultrathin, circular ferromagnetic magnetic dots. Our results show that magnetic skyrmions with a small radius—compared to the dot radius—are always metastable, while large radius skyrmions form a stable ground state. The change of energy profile determines the weak (strong) size dependence of the metastable (stable) skyrmion as a function of temperature and/or field.
NASA Technical Reports Server (NTRS)
Kanerva, P.
1986-01-01
To determine the relation of the sparse, distributed memory to other architectures, a broad review of the literature was made. The memory is called a pattern memory because it works with large patterns of features (high-dimensional vectors). A pattern is stored in a pattern memory by distributing it over a large number of storage elements and by superimposing it over other stored patterns. A pattern is retrieved by mathematical or statistical reconstruction from the distributed elements. Three pattern memories are discussed.
Memory hierarchy using row-based compression
Loh, Gabriel H.; O'Connor, James M.
2016-10-25
A system includes a first memory and a device coupleable to the first memory. The device includes a second memory to cache data from the first memory. The second memory includes a plurality of rows, each row including a corresponding set of compressed data blocks of non-uniform sizes and a corresponding set of tag blocks. Each tag block represents a corresponding compressed data block of the row. The device further includes decompression logic to decompress data blocks accessed from the second memory. The device further includes compression logic to compress data blocks to be stored in the second memory.
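A minimal sketch of the row organization this abstract describes: each cache row holds compressed data blocks of non-uniform size plus one tag entry per block locating it. The layout, field names, and use of zlib are all hypothetical stand-ins, not the patented design.

```python
from dataclasses import dataclass
import zlib


@dataclass
class Tag:
    block_id: int   # which first-memory block this entry caches
    offset: int     # byte offset of the compressed data within the row
    length: int     # compressed size (non-uniform across blocks)


class CacheRow:
    """One row of the second memory: packed compressed blocks + tag blocks."""

    def __init__(self):
        self.data = bytearray()
        self.tags = []

    def store(self, block_id, raw):
        comp = zlib.compress(raw)          # stand-in for the compression logic
        self.tags.append(Tag(block_id, len(self.data), len(comp)))
        self.data += comp

    def load(self, block_id):
        for t in self.tags:                # tag blocks locate the data
            if t.block_id == block_id:
                comp = bytes(self.data[t.offset:t.offset + t.length])
                return zlib.decompress(comp)   # decompression on access
        return None                        # miss: fetch from first memory


row = CacheRow()
row.store(7, b"A" * 64)
row.store(9, b"heterogeneous payload")
```

Because compressed sizes vary per block, the per-block tag carrying offset and length is what makes lookup possible without fixed-size slots.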
Cytomegalovirus Reinfections Stimulate CD8 T-Memory Inflation.
Trgovcich, Joanne; Kincaid, Michelle; Thomas, Alicia; Griessl, Marion; Zimmerman, Peter; Dwivedi, Varun; Bergdall, Valerie; Klenerman, Paul; Cook, Charles H
2016-01-01
Cytomegalovirus (CMV) has been shown to induce large populations of CD8 T-effector memory cells that, unlike central memory cells, persist in large quantities following infection, a phenomenon commonly termed "memory inflation". Although murine models to date have shown very large and persistent CMV-specific T-cell expansions following infection, there is considerable variability in CMV-specific T-memory responses in humans. Historically, such memory inflation in humans has been assumed to be a consequence of reactivation events during the life of the host. Because basic information about CMV infection/re-infection and reactivation in immune-competent humans is not available, we used a murine model to test how primary infection, reinfection, and reactivation stimuli influence memory inflation. We show that low-titer infections induce "partial" memory inflation of both mCMV-specific CD8 T-cells and antibody. We show further that reinfection with different strains can boost partial memory inflation. Finally, we show preliminary results suggesting that a single strong reactivation stimulus does not stimulate memory inflation. Altogether, our results suggest that while high-titer primary infections can induce memory inflation, reinfections during the life of a host may be more important than previously appreciated.
A Lightweight White-Box Symmetric Encryption Algorithm against Node Capture for WSNs †
Shi, Yang; Wei, Wujing; He, Zongjian
2015-01-01
Wireless Sensor Networks (WSNs) are often deployed in hostile environments and, thus, nodes can potentially be captured by an adversary. This is a typical white-box attack context, i.e., the adversary may have total visibility of the implementation of the built-in cryptosystem and full control over its execution platform. Handling white-box attacks in a WSN scenario is a challenging task. Existing encryption algorithms for white-box attack contexts require a large memory footprint and, hence, are not applicable to wireless sensor network scenarios. As a countermeasure against the threat in this context, in this paper we propose a class of lightweight, secure implementations of the symmetric encryption algorithm SMS4. The basic idea of our approach is to merge several steps of the round function of SMS4 into table lookups, blended by randomly generated mixing bijections. Therefore, the size of the implementations is significantly reduced while the same level of security is kept. The security and efficiency of the proposed solutions are theoretically analyzed. Evaluation shows our solutions satisfy the requirements of sensor nodes in terms of limited memory size and low computational cost. PMID:26007737
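The table-merging idea can be illustrated generically. The sketch below uses a random toy S-box, not the real SMS4 round function; only the blending of a lookup with randomly generated mixing bijections is shown.

```python
import random

random.seed(1)

# Hypothetical 8-bit S-box standing in for one step of a round function
# (not the actual SMS4 S-box).
sbox = list(range(256))
random.shuffle(sbox)

# Randomly generated mixing bijections (input/output encodings).
p_in = list(range(256))
random.shuffle(p_in)
p_out = list(range(256))
random.shuffle(p_out)
inv_p_in = [0] * 256
for i, v in enumerate(p_in):
    inv_p_in[v] = i

# Merged table: T = p_out o sbox o p_in^{-1}. An attacker who dumps `table`
# from a captured node sees neither sbox nor the encodings separately.
table = [p_out[sbox[inv_p_in[x]]] for x in range(256)]

# Encoded values still compose correctly through the merged table:
x = 0x5A
assert table[p_in[x]] == p_out[sbox[x]]
```

In a full white-box design, adjacent tables share encodings so that the bijections cancel along the pipeline while never appearing in memory individually.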
NASA Technical Reports Server (NTRS)
Chan, Daniel C.; Darian, Armen; Sindir, Munir
1992-01-01
We have applied and compared two commonly used numerical methods for the solution of the Navier-Stokes equations with respect to efficiency and accuracy. The artificial compressibility method augments the continuity equation with a transient pressure term and allows one to solve the modified equations as a coupled system. Due to its implicit nature, one has the luxury of taking a large temporal integration step at the expense of higher memory requirements and larger operation counts per step. Meanwhile, the fractional step method splits the Navier-Stokes equations into a sequence of differential operators and integrates them in multiple steps. The memory requirement and operation count per time step are low; however, the restriction on the size of the time-marching step is more severe. To explore the strengths and weaknesses of these two methods, we used them to compute a two-dimensional driven cavity flow with Reynolds numbers of 100 and 1000, respectively. Three grid sizes, 41 x 41, 81 x 81, and 161 x 161, were used. The computations were considered converged after the L2-norm of the change of the dependent variables in two consecutive time steps fell below 10^-5.
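The stopping criterion quoted at the end can be sketched with a stand-in iteration; a Jacobi relaxation on a small grid is used here purely to show the convergence test, not the Navier-Stokes solvers themselves.

```python
import numpy as np

# Stand-in pseudo-time iteration on a 41 x 41 grid (one of the grid sizes
# used in the paper), stopped when the L2-norm of the change between two
# consecutive steps falls below 1e-5, as in the abstract.
n = 41
u = np.zeros((n, n))
u[0, :] = 1.0               # driven "lid" boundary value (illustrative)

tol, steps = 1e-5, 0
while True:
    u_new = u.copy()
    # Jacobi update of interior points from the four neighbors.
    u_new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                                + u[1:-1, :-2] + u[1:-1, 2:])
    change = np.linalg.norm(u_new - u)
    u, steps = u_new, steps + 1
    if change < tol:
        break
```

The same change-norm test applies to either method in the paper; what differs is how expensive each step is and how large a step is allowed.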
Single event upset vulnerability of selected 4K and 16K CMOS static RAM's
NASA Technical Reports Server (NTRS)
Kolasinski, W. A.; Koga, R.; Blake, J. B.; Brucker, G.; Pandya, P.; Petersen, E.; Price, W.
1982-01-01
Upset thresholds for bulk CMOS and CMOS/SOS RAMs were deduced after bombardment of the devices with 140 MeV Kr, 160 MeV Ar, and 33 MeV O beams in a cyclotron. The trials were performed to test prototype devices intended for space applications, to relate feature size to the critical upset charge, and to check the validity of computer simulation models. The tests were run on 4K and 16K memories built from six-transistor cells, in either hardened or unhardened configurations. The upset cross sections were calculated to determine the critical charge for upset from the soft errors observed in the irradiated cells. Computer simulations were found to deviate from the experimentally observed variation of the critical charge as the square of the feature size. Modeling the series resistors that decouple the inverter pairs of memory cells showed that, above some minimum resistance, a small increase in resistance produces a large increase in the critical charge; the experimental data showed this to be of questionable validity unless the resistance value is made dependent on the maximum allowed read-write time.
Parallel design of JPEG-LS encoder on graphics processing units
NASA Astrophysics Data System (ADS)
Duan, Hao; Fang, Yong; Huang, Bormin
2012-01-01
With recent technical advances in graphics processing units (GPUs), GPUs have outperformed CPUs in terms of compute capability and memory bandwidth, and many successful GPU applications to high-performance computing have been reported. JPEG-LS is an ISO/IEC standard for lossless image compression which utilizes adaptive context modeling and run-length coding to improve compression ratio. However, adaptive context modeling causes data dependency among adjacent pixels, and the run-length coding has to be performed sequentially. Hence, using JPEG-LS to compress large-volume hyperspectral image data is quite time-consuming. We implement an efficient parallel JPEG-LS encoder for lossless hyperspectral compression on an NVIDIA GPU using the Compute Unified Device Architecture (CUDA) programming technology. We use a block-parallel strategy, as well as such CUDA techniques as coalesced global memory access, parallel prefix sum, and asynchronous data transfer. We also show the relation between GPU speedup and AVIRIS block size, as well as the relation between compression ratio and AVIRIS block size. When AVIRIS images are divided into blocks, each with 64×64 pixels, we gain the best GPU performance, with a 26.3x speedup over the original CPU code.
Detection of memory loss of symmetry in the blockage of a turbulent flow within a duct
NASA Astrophysics Data System (ADS)
Santos, F. Rodrigues; da Silva Costa, G.; da Cunha Lima, A. T.; de Almeida, M. P.; da Cunha Lima, I. C.
This paper aims to detect memory loss of the symmetry of blockades in ducts and how far information on the asymmetry of the obstacles travels in the turbulent flow, using computational simulations with OpenFOAM. From a practical point of view, it seeks alternatives for detecting the formation of obstructions in pipelines. The numerical solutions of the Navier-Stokes equations were obtained with the solver PisoFOAM of the OpenFOAM library, using large eddy simulation (LES) as the turbulence model. Obstructions were placed near the duct inlet and, keeping the blockage ratio fixed, five combinations of obstacle sizes were adopted. The results show that information about the symmetry is preserved over a larger distance near the duct walls than in mid-channel. For an inlet velocity of 5 m/s, near the walls the memory is kept up to a distance of 40 times the duct width, while in mid-channel this distance is reduced by almost half. The maximum distance over which the symmetry-breaking memory is preserved is sensitive to Reynolds number variations in regions near the duct walls, while in mid-channel those variations do not cause relevant effects on the velocity distribution.
Experiments and Analyses of Data Transfers Over Wide-Area Dedicated Connections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S.; Liu, Qiang; Sen, Satyabrata
Dedicated wide-area network connections are increasingly employed in high-performance computing and big data scenarios. One might expect the performance and dynamics of data transfers over such connections to be easy to analyze due to the lack of competing traffic. However, non-linear transport dynamics and end-system complexities (e.g., multi-core hosts and distributed filesystems) can in fact make analysis surprisingly challenging. We present extensive measurements of memory-to-memory and disk-to-disk file transfers over 10 Gbps physical and emulated connections with 0–366 ms round trip times (RTTs). For memory-to-memory transfers, profiles of both TCP and UDT throughput as a function of RTT show concave and convex regions; large buffer sizes and more parallel flows lead to wider concave regions, which are highly desirable. TCP and UDT both also display complex throughput dynamics, as indicated by their Poincare maps and Lyapunov exponents. For disk-to-disk transfers, we determine that high throughput can be achieved via a combination of parallel I/O threads, parallel network threads, and direct I/O mode. Our measurements also show that Lustre filesystems can be mounted over long-haul connections using LNet routers, although challenges remain in jointly optimizing file I/O and transport method parameters to achieve peak throughput.
Attention mediates the flexible allocation of visual working memory resources.
Emrich, Stephen M; Lockhart, Holly A; Al-Aidroos, Naseem
2017-07-01
Though it is clear that it is impossible to store an unlimited amount of information in visual working memory (VWM), the limiting mechanisms remain elusive. While several models of VWM limitations exist, these typically characterize changes in performance as a function of the number of to-be-remembered items. Here, we examine whether changes in spatial attention could better account for VWM performance, independent of load. Across 2 experiments, performance was better predicted by the prioritization of memory items (i.e., attention) than by the number of items to be remembered (i.e., memory load). This relationship followed a power law, and held regardless of whether performance was assessed based on overall precision or any of 3 measures in a mixture model. Moreover, at large set sizes, even minimally attended items could receive a small proportion of resources, without any evidence for a discrete capacity limit on the number of items that could be maintained in VWM. Finally, the observed data were best fit by a variable-precision model in which response error was related to the proportion of resources allocated to each item, consistent with a model of VWM in which performance is determined by the continuous allocation of attentional resources during encoding. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Finke, Kathrin; Schwarzkopf, Wolfgang; Müller, Ulrich; Frodl, Thomas; Müller, Hermann J; Schneider, Werner X; Engel, Rolf R; Riedel, Michael; Möller, Hans-Jürgen; Hennig-Fast, Kristina
2011-11-01
Attention deficit hyperactivity disorder (ADHD) persists frequently into adulthood. The decomposition of endophenotypes by means of experimental neuro-cognitive assessment has the potential to improve diagnostic assessment, evaluation of treatment response, and disentanglement of genetic and environmental influences. We assessed four parameters of attentional capacity and selectivity derived from simple psychophysical tasks (verbal report of briefly presented letter displays) and based on a "theory of visual attention." These parameters are mathematically independent, quantitative measures, and previous studies have shown that they are highly sensitive for subtle attention deficits. Potential reductions of attentional capacity, that is, of perceptual processing speed and working memory storage capacity, were assessed with a whole report paradigm. Furthermore, possible pathologies of attentional selectivity, that is, selection of task-relevant information and bias in the spatial distribution of attention, were measured with a partial report paradigm. A group of 30 unmedicated adult ADHD patients and a group of 30 demographically matched healthy controls were tested. ADHD patients showed significant reductions of working memory storage capacity of a moderate to large effect size. Perceptual processing speed, task-based, and spatial selection were unaffected. The results imply a working memory deficit as an important source of behavioral impairments. The theory of visual attention parameter working memory storage capacity might constitute a quantifiable and testable endophenotype of ADHD.
Visual feature binding in younger and older adults: encoding and suffix interference effects.
Brown, Louise A; Niven, Elaine H; Logie, Robert H; Rhodes, Stephen; Allen, Richard J
2017-02-01
Three experiments investigated younger (18-25 yrs) and older (70-88 yrs) adults' temporary memory for colour-shape combinations (binding). We focused upon estimating the magnitude of the binding cost for each age group across encoding time (Experiment 1; 900/1500 ms), presentation format (Experiment 2; simultaneous/sequential), and interference (Experiment 3; control/suffix) conditions. In Experiment 1, encoding time did not differentially influence binding in the two age groups. In Experiment 2, younger adults exhibited poorer binding performance with sequential relative to simultaneous presentation, and serial position analyses highlighted a particular age-related difficulty remembering the middle item of a series (for all memory conditions). Experiments 1-3 demonstrated small to medium binding effect sizes in older adults across all encoding conditions, with binding less accurate than shape memory. However, younger adults also displayed negative effects of binding (small to large) in two of the experiments. Even when older adults exhibited a greater suffix interference effect in Experiment 3, this was for all memory types, not just binding. We therefore conclude that there is no consistent evidence for a visual binding deficit in healthy older adults. This relative preservation contrasts with the specific and substantial deficits in visual feature binding found in several recent studies of Alzheimer's disease.
NASA Astrophysics Data System (ADS)
Miyaji, Kousuke; Sun, Chao; Soga, Ayumi; Takeuchi, Ken
2014-01-01
A relational database management system (RDBMS) is designed with a NAND flash solid-state drive (SSD) as storage. By vertically integrating the storage engine (SE) and the flash translation layer (FTL), system performance is maximized and internal SSD overhead is minimized. The proposed RDBMS SE utilizes physical information about the NAND flash memory supplied by the FTL. The query operation is also optimized for the SSD. With these techniques, page-copy-less garbage collection is achieved and data fragmentation in the NAND flash memory is suppressed. As a result, RDBMS performance increases by 3.8 times, SSD power consumption decreases by 46%, and SSD lifetime increases by 61%. The effectiveness of the proposed scheme increases with larger erase block sizes, which matches the future scaling trend of three-dimensional (3D-) NAND flash memories. The preferable row data size for the proposed scheme is below 500 bytes for a 16 kbyte page size.
Trace: a high-throughput tomographic reconstruction engine for large-scale datasets
Bicer, Tekin; Gursoy, Doga; Andrade, Vincent De; ...
2017-01-28
Here, synchrotron light source and detector technologies enable scientists to perform advanced experiments. These scientific instruments and experiments produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used data acquisition techniques at light sources is computed tomography, which can generate tens of GB/s depending on the x-ray range. A large-scale tomographic dataset, such as a mouse brain, may require hours of computation time on a medium-size workstation. In this paper, we present Trace, a data-intensive computing middleware we developed for the implementation and parallelization of iterative tomographic reconstruction algorithms. Trace provides fine-grained reconstruction of tomography datasets using both (thread-level) shared-memory and (process-level) distributed-memory parallelization. Trace utilizes a special data structure called a replicated reconstruction object to maximize application performance. We also present the optimizations we have done on the replicated reconstruction objects and evaluate them using a shale and a mouse brain sinogram. Our experimental evaluations show that the applied optimizations and parallelization techniques can provide 158x speedup (using 32 compute nodes) over the single-core configuration, which decreases the reconstruction time of a sinogram (with 4501 projections and 22400 detector resolution) from 12.5 hours to less than 5 minutes per iteration.
ERIC Educational Resources Information Center
Sorqvist, Patrik; Ronnberg, Jerker
2012-01-01
Purpose: To investigate whether working memory capacity (WMC) modulates the effects of to-be-ignored speech on the memory of materials conveyed by to-be-attended speech. Method: Two tasks (reading span, Daneman & Carpenter, 1980; Ronnberg et al., 2008; and size-comparison span, Sorqvist, Ljungberg, & Ljung, 2010) were used to measure individual…
Linking Working Memory and Long-Term Memory: A Computational Model of the Learning of New Words
ERIC Educational Resources Information Center
Jones, Gary; Gobet, Fernand; Pine, Julian M.
2007-01-01
The nonword repetition (NWR) test has been shown to be a good predictor of children's vocabulary size. NWR performance has been explained using phonological working memory, which is seen as a critical component in the learning of new words. However, no detailed specification of the link between phonological working memory and long-term memory…
Oscillatory mechanisms of process binding in memory.
Klimesch, Wolfgang; Freunberger, Roman; Sauseng, Paul
2010-06-01
A central topic in cognitive neuroscience is the question of which processes underlie large-scale communication within and between different neural networks. The basic assumption is that oscillatory phase synchronization plays an important role in process binding--the transient linking of different cognitive processes--which may be considered a special type of large-scale communication. We investigate this question for memory processes on the basis of different types of oscillatory synchronization mechanisms. The reviewed findings suggest that theta and alpha phase coupling (and phase reorganization) reflect control processes in two large memory systems: a working memory system and a complex knowledge system that comprises semantic long-term memory. It is suggested that alpha phase synchronization may be interpreted in terms of processes that coordinate top-down control (a process guided by expectancy to focus on relevant search areas) and access to memory traces (a process leading to the activation of a memory trace). An analogous interpretation is suggested for theta oscillations and the controlled access to episodic memories. Copyright (c) 2009 Elsevier Ltd. All rights reserved.
Large-scale inverse model analyses employing fast randomized data reduction
NASA Astrophysics Data System (ADS)
Lin, Youzuo; Le, Ellen B.; O'Malley, Daniel; Vesselinov, Velimir V.; Bui-Thanh, Tan
2017-08-01
When the number of observations is large, it is computationally challenging to apply classical inverse modeling techniques. We have developed a new computationally efficient technique for solving inverse problems with a large number of observations (e.g., on the order of 10^7 or greater). Our method, which we call the randomized geostatistical approach (RGA), is built upon the principal component geostatistical approach (PCGA). We employ a data reduction technique combined with the PCGA to improve the computational efficiency and reduce the memory usage. Specifically, we employ a randomized numerical linear algebra technique based on a so-called "sketching" matrix to effectively reduce the dimension of the observations without losing the information content needed for the inverse analysis. In this way, the computational and memory costs for RGA scale with the information content rather than the size of the calibration data. Our algorithm is coded in Julia and implemented in the MADS open-source high-performance computational framework (http://mads.lanl.gov). We apply our new inverse modeling method to invert for a synthetic transmissivity field. Compared to a standard geostatistical approach (GA), our method is more efficient when the number of observations is large. Most importantly, our method is capable of solving larger inverse problems than the standard GA and PCGA approaches. Therefore, our new model inversion method is a powerful tool for solving large-scale inverse problems. The method can be applied in any field and is not limited to hydrogeological applications such as the characterization of aquifer heterogeneity.
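The effect of a sketching matrix can be demonstrated on a toy least-squares problem. The sizes and data below are made up, and ordinary least squares stands in for the inversion; RGA itself builds on the PCGA machinery, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical overdetermined problem: many observations, few parameters.
n_obs, n_par = 20000, 10
A = rng.normal(size=(n_obs, n_par))
x_true = rng.normal(size=n_par)
y = A @ x_true                       # noise-free synthetic observations

# "Sketching" matrix: a short, fat random matrix that compresses the
# observations while preserving the information needed for inversion.
k = 200
S = rng.normal(size=(k, n_obs)) / np.sqrt(k)

# Full-size solve versus solve on the sketched (k-row) system.
x_full, *_ = np.linalg.lstsq(A, y, rcond=None)
x_sketch, *_ = np.linalg.lstsq(S @ A, S @ y, rcond=None)
```

The sketched system has 200 rows instead of 20,000, so its cost and memory scale with the retained information rather than with the number of observations, which is the point the abstract makes.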
Direct Solve of Electrically Large Integral Equations for Problem Sizes to 1M Unknowns
NASA Technical Reports Server (NTRS)
Shaeffer, John
2008-01-01
Matrix methods for solving integral equations via direct LU factorization are presently limited to weeks to months of very expensive supercomputer time for problem sizes of several hundred thousand unknowns. This report presents matrix LU factor solutions for electromagnetic scattering problems for problem sizes to one million unknowns with thousands of right-hand sides that run in mere days on PC-level hardware. This EM solution is accomplished by exploiting the numerical low-rank nature of spatially blocked unknowns, using the Adaptive Cross Approximation to compress the rank-deficient blocks of the system Z matrix, the L and U factors, the right-hand-side forcing function, and the final current solution. This compressed matrix solution is applied to a frequency-domain EM solution of Maxwell's equations using a standard Method of Moments approach. The compressed matrix storage and operations count lead to orders-of-magnitude reductions in memory and run time.
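The Adaptive Cross Approximation at the heart of this compression can be sketched in a few lines. The illustrative pure-Python version below uses full pivoting (production codes typically use cheaper partial pivoting) to build a rank-k outer-product approximation of a numerically low-rank block; the test matrix is a hypothetical rank-2 example, not an actual Z-matrix block.

```python
def aca(A, tol=1e-10, max_rank=None):
    """Adaptive Cross Approximation with full pivoting (illustrative).
    Returns vectors us, vs such that A ~= sum_k outer(us[k], vs[k])."""
    m, n = len(A), len(A[0])
    R = [row[:] for row in A]                    # residual matrix
    us, vs = [], []
    for _ in range(max_rank or min(m, n)):
        # pivot = largest remaining residual entry
        i, j = max(((i, j) for i in range(m) for j in range(n)),
                   key=lambda ij: abs(R[ij[0]][ij[1]]))
        piv = R[i][j]
        if abs(piv) < tol:                       # block is numerically exhausted
            break
        u = [R[r][j] / piv for r in range(m)]    # scaled pivot column
        v = R[i][:]                              # pivot row
        us.append(u)
        vs.append(v)
        for r in range(m):                       # rank-1 update of the residual
            for c in range(n):
                R[r][c] -= u[r] * v[c]
    return us, vs

# Hypothetical rank-2 test block: A[i][j] = i*(j + 2) + j
A = [[i * (j + 2.0) + j for j in range(6)] for i in range(5)]
us, vs = aca(A)
approx = [[sum(u[i] * v[j] for u, v in zip(us, vs)) for j in range(6)]
          for i in range(5)]
```

Storing the cross vectors instead of the full block is what yields the memory savings: a rank-k approximation of an m×n block costs k(m+n) numbers instead of mn.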
LSG: An External-Memory Tool to Compute String Graphs for Next-Generation Sequencing Data Assembly.
Bonizzoni, Paola; Vedova, Gianluca Della; Pirola, Yuri; Previtali, Marco; Rizzi, Raffaella
2016-03-01
The large amount of short read data that has to be assembled in future applications, such as in metagenomics or cancer genomics, strongly motivates the investigation of disk-based approaches to index next-generation sequencing (NGS) data. Positive results in this direction stimulate the investigation of efficient external memory algorithms for de novo assembly from NGS data. Our article is also motivated by the open problem of designing a space-efficient algorithm to compute a string graph using an indexing procedure based on the Burrows-Wheeler transform (BWT). We have developed a disk-based algorithm for computing string graphs in external memory: the light string graph (LSG). LSG relies on a new representation of the FM-index that is exploited to use an amount of main memory that is independent of the size of the data set. Moreover, we have developed a pipeline for genome assembly from NGS data that integrates LSG with the assembly step of SGA (Simpson and Durbin, 2012), a state-of-the-art string graph-based assembler, and uses BEETL for indexing the input data. LSG is open source software and is available online. We have analyzed our implementation on an 875-million-read whole-genome dataset, on which LSG built the string graph using only 1 GB of main memory (reducing the memory occupation by a factor of 50 with respect to SGA), while requiring slightly more than twice the time of SGA. The analysis of the entire pipeline shows an important decrease in memory usage, with only a moderate increase in running time.
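The Burrows-Wheeler transform underlying the FM-index can be demonstrated with a naive in-memory construction over sorted rotations. This is only a sketch of the transform itself; LSG's external-memory representation is, of course, far more elaborate, and the sequence below is an arbitrary example.

```python
def bwt(s):
    """Burrows-Wheeler transform via sorted rotations ('$' terminator)."""
    s += "$"
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)

def inverse_bwt(last):
    """Invert the BWT by repeated prepend-and-sort (O(n^2 log n), illustrative)."""
    table = [""] * len(last)
    for _ in range(len(last)):
        table = sorted(last[i] + table[i] for i in range(len(last)))
    return next(row for row in table if row.endswith("$"))[:-1]

read = "GATTACA"
transformed = bwt(read)          # groups similar contexts together
recovered = inverse_bwt(transformed)
```

The transform is reversible and tends to group characters with similar contexts, which is what makes BWT-based indexes such as the FM-index both compact and searchable.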
The acute effects of cannabinoids on memory in humans: a review.
Ranganathan, Mohini; D'Souza, Deepak Cyril
2006-11-01
Cannabis is one of the most frequently used substances. Cannabis and its constituent cannabinoids are known to impair several aspects of cognitive function, with the most robust effects on short-term episodic and working memory in humans. A large body of the work in this area occurred in the 1970s before the discovery of cannabinoid receptors. Recent advances in the knowledge of cannabinoid receptors' function have rekindled interest in examining effects of exogenous cannabinoids on memory and in understanding the mechanism of these effects. The literature about the acute effects of cannabinoids on memory tasks in humans is reviewed. The limitations of the human literature, including issues of dose, route of administration, small sample sizes, sample selection, effects of other drug use, tolerance and dependence to cannabinoids, and the timing and sensitivity of psychological tests, are discussed. Finally, the human literature is discussed against the backdrop of preclinical findings. Acute administration of Delta-9-THC transiently impairs immediate and delayed free recall of information presented after, but not before, drug administration in a dose- and delay-dependent manner. In particular, cannabinoids increase intrusion errors. These effects are more robust with the inhaled and intravenous routes and correspond to peak drug levels. This profile of effects suggests that cannabinoids impair all stages of memory, including encoding, consolidation, and retrieval. Several mechanisms, including effects on long-term potentiation and long-term depression and the inhibition of neurotransmitter (GABA, glutamate, acetylcholine, dopamine) release, have been implicated in the amnestic effects of cannabinoids.
Future research in humans is necessary to characterize the neuroanatomical and neurochemical basis of the memory impairing effects of cannabinoids, to dissect out their effects on the various stages of memory and to bridge the expanding gap between the humans and preclinical literature.
Associative memory advantage in grapheme-color synesthetes compared to older, but not young adults
Pfeifer, Gaby; Rothen, Nicolas; Ward, Jamie; Chan, Dennis; Sigala, Natasha
2014-01-01
People with grapheme-color synesthesia perceive enriched experiences of colors in response to graphemes (letters, digits). In this study, we examined whether these synesthetes show a generic associative memory advantage for stimuli that do not elicit a synesthetic color. We used a novel between group design (14 young synesthetes, 14 young, and 14 older adults) with a self-paced visual associative learning paradigm and subsequent retrieval (immediate and delayed). Non-synesthesia inducing, achromatic fractal pair-associates were manipulated in visual similarity (high and low) and corresponded to high and low memory load conditions. The main finding was a learning and retrieval advantage of synesthetes relative to older, but not to younger, adults. Furthermore, the significance testing was supported with effect size measures and power calculations. Differences between synesthetes and older adults were found during dissimilar pair (high memory load) learning and retrieval at immediate and delayed stages. Moreover, we found a medium size difference between synesthetes and young adults for similar pair (low memory load) learning. Differences between young and older adults were also observed during associative learning and retrieval, but were of medium effect size coupled with low power. The results show a subtle associative memory advantage in synesthetes for non-synesthesia inducing stimuli, which can be detected against older adults. They also indicate that perceptual mechanisms (enhanced in synesthesia, declining as part of the aging process) can translate into a generic associative memory advantage, and may contribute to associative deficits accompanying healthy aging. PMID:25071664
Dynamics of social contagions with memory of nonredundant information
NASA Astrophysics Data System (ADS)
Wang, Wei; Tang, Ming; Zhang, Hai-Feng; Lai, Ying-Cheng
2015-07-01
A key ingredient in social contagion dynamics is reinforcement, as adopting a certain social behavior requires verification of its credibility and legitimacy. Memory of nonredundant information plays an important role in reinforcement, which so far has eluded theoretical analysis. We first propose a general social contagion model with reinforcement derived from nonredundant information memory. Then, we develop a unified edge-based compartmental theory to analyze this model, and a remarkable agreement with numerics is obtained on some specific models. We use a spreading threshold model as a specific example to understand the memory effect, in which each individual adopts a social behavior only when the cumulative pieces of information that the individual has received from his or her neighbors exceed an adoption threshold. Through analysis and numerical simulations, we find that the memory characteristic markedly affects the dynamics as quantified by the final adoption size. Strikingly, we uncover a transition phenomenon in which the dependence of the final adoption size on some key parameters, such as the transmission probability, can change from being discontinuous to being continuous. The transition can be triggered by proper parameters and structural perturbations to the system, such as decreasing individuals' adoption threshold, increasing initial seed size, or enhancing the network heterogeneity.
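A minimal deterministic version of such a spreading threshold model with nonredundant-information memory might look as follows. The network, seeds, and threshold below are illustrative assumptions, and this sketch omits the transmission probability of the full stochastic model; its point is only the memory rule, under which each informing neighbor is counted once, no matter how often it transmits.

```python
def spread(neighbors, seeds, threshold):
    """Spreading threshold model with memory of nonredundant information:
    each susceptible node permanently remembers WHICH adopted neighbors have
    informed it (redundant transmissions from the same neighbor are ignored)
    and adopts once that count reaches the adoption threshold."""
    adopted = set(seeds)
    informed_by = {v: set() for v in neighbors}   # nonredundant memory
    frontier = set(seeds)
    while frontier:
        new = set()
        for u in frontier:
            for v in neighbors[u]:
                if v in adopted:
                    continue
                informed_by[v].add(u)             # sets deduplicate for free
                if len(informed_by[v]) >= threshold:
                    new.add(v)
        adopted |= new
        frontier = new
    return adopted

# Hypothetical small network: densely connected core, sparse tail.
neighbors = {
    0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2, 4], 4: [3, 5], 5: [4],
}
final = spread(neighbors, seeds={0, 1}, threshold=2)
# nodes 4 and 5 never adopt: node 4 has only one adopted neighbor
```

The final adoption size is the quantity whose parameter dependence the article shows can switch between discontinuous and continuous growth.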
The contents of visual working memory reduce uncertainty during visual search.
Cosman, Joshua D; Vecera, Shaun P
2011-05-01
Information held in visual working memory (VWM) influences the allocation of attention during visual search, with targets matching the contents of VWM receiving processing benefits over those that do not. Such an effect could arise from multiple mechanisms: First, it is possible that the contents of working memory enhance the perceptual representation of the target. Alternatively, it is possible that when a target is presented among distractor items, the contents of working memory operate postperceptually to reduce uncertainty about the location of the target. In both cases, a match between the contents of VWM and the target should lead to facilitated processing. However, each effect makes distinct predictions regarding set-size manipulations; whereas perceptual enhancement accounts predict processing benefits regardless of set size, uncertainty reduction accounts predict benefits only with set sizes larger than 1, when there is uncertainty regarding the target location. In the present study, in which briefly presented, masked targets were presented in isolation, there was a negligible effect of the information held in VWM on target discrimination. However, in displays containing multiple masked items, information held in VWM strongly affected target discrimination. These results argue that working memory representations act at a postperceptual level to reduce uncertainty during visual search.
Multitasking the Davidson algorithm for the large, sparse eigenvalue problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Umar, V.M.; Fischer, C.F.
1989-01-01
The authors report how the Davidson algorithm, developed for handling the eigenvalue problem for the large, sparse matrices arising in quantum chemistry, was modified for use in atomic structure calculations. To date these calculations have used traditional eigenvalue methods, which limit the range of feasible calculations because of their excessive memory requirements and unsatisfactory performance attributable to time-consuming and costly processing of zero-valued elements. The replacement of a traditional matrix eigenvalue method by the Davidson algorithm reduced these limitations. Significant speedup was found, which varied with the size of the underlying problem and its sparsity. Furthermore, the range of matrix sizes that can be manipulated efficiently was expanded by more than one order of magnitude. On the CRAY X-MP the code was vectorized and the importance of gather/scatter was analyzed. A parallelized version of the algorithm obtained an additional 35% reduction in execution time. Speedup due to vectorization and concurrency was also measured on the Alliant FX/8.
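The performance gap described here comes from avoiding the "costly processing of zero valued elements." A minimal sketch of the storage idea (compressed sparse row format and a sparse matrix-vector product, the workhorse of Davidson-type iterative methods; not the Davidson iteration itself, and the small matrix is a hypothetical example):

```python
def to_csr(dense):
    """Compress a dense matrix to CSR form, storing only the nonzeros."""
    data, cols, rowptr = [], [], [0]
    for row in dense:
        for j, x in enumerate(row):
            if x != 0.0:
                data.append(x)
                cols.append(j)
        rowptr.append(len(data))       # running count of stored entries
    return data, cols, rowptr

def csr_matvec(csr, v):
    """Matrix-vector product touching only nonzero elements."""
    data, cols, rowptr = csr
    return [sum(data[k] * v[cols[k]] for k in range(rowptr[r], rowptr[r + 1]))
            for r in range(len(rowptr) - 1)]

dense = [
    [4.0, 0.0, 0.0, 1.0],
    [0.0, 3.0, 0.0, 0.0],
    [0.0, 0.0, 5.0, 2.0],
    [1.0, 0.0, 2.0, 6.0],
]
csr = to_csr(dense)                    # 8 stored values instead of 16
y = csr_matvec(csr, [1.0, 1.0, 1.0, 1.0])
```

For the very sparse Hamiltonian-like matrices of atomic structure calculations, both memory and work then scale with the nonzero count rather than the matrix dimension squared.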
Smith, Philip L; Lilburn, Simon D; Corbett, Elaine A; Sewell, David K; Kyllingsbæk, Søren
2016-09-01
We investigated the capacity of visual short-term memory (VSTM) in a phase discrimination task that required judgments about the configural relations between pairs of black and white features. Sewell et al. (2014) previously showed that VSTM capacity in an orientation discrimination task was well described by a sample-size model, which views VSTM as a resource comprised of a finite number of noisy stimulus samples. The model predicts the invariance of Σd′², the sum of squared sensitivities across items, for displays of different sizes. For phase discrimination, the set-size effect significantly exceeded that predicted by the sample-size model for both simultaneously and sequentially presented stimuli. Instead, the set-size effect and the serial position curves with sequential presentation were predicted by an attention-weighted version of the sample-size model, which assumes that one of the items in the display captures attention and receives a disproportionate share of resources. The choice probabilities and response time distributions from the task were well described by a diffusion decision model in which the drift rates embodied the assumptions of the attention-weighted sample-size model. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
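The sample-size model's invariance prediction can be illustrated numerically: if a fixed pool of noisy samples is split evenly among the K display items and each item's sensitivity scales with the square root of its share, then the sum of squared sensitivities is constant across set sizes. The pool size and unit sensitivity below are arbitrary assumptions, not values from the study.

```python
import math

def predicted_dprimes(total_samples, set_size, unit_dprime=1.0):
    """Sample-size model: a fixed pool of noisy samples is divided evenly
    among the K display items; per-item sensitivity (d') scales with the
    square root of the samples allocated to that item."""
    per_item = total_samples / set_size
    return [unit_dprime * math.sqrt(per_item)] * set_size

for K in (1, 2, 4):
    ds = predicted_dprimes(total_samples=16.0, set_size=K)
    sum_sq = sum(d * d for d in ds)
    # sum_sq is the same for every K: the invariance the model predicts
```

The finding reported above is that phase discrimination violates this invariance unless one item is assumed to capture a disproportionate share of the sample pool.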
Three-dimensional electrical resistivity model of a nuclear waste disposal site
NASA Astrophysics Data System (ADS)
Rucker, Dale F.; Levitt, Marc T.; Greenwood, William J.
2009-12-01
A three-dimensional (3D) modeling study was completed on a very large electrical resistivity survey conducted at a nuclear waste site in eastern Washington. The acquisition included 47 pole-pole two-dimensional (2D) resistivity profiles collected along parallel and orthogonal lines over an area of 850 m × 570 m. The data were geo-referenced and inverted using EarthImager3D (EI3D). EI3D runs on a Microsoft 32-bit operating system (e.g. WIN-2K, XP) with a maximum usable memory of 2 GB. The memory limits the size of the domain for the inversion model to 200 m × 200 m, based on the survey electrode density. Therefore, a series of overlapping models of increasing size was run to evaluate the effectiveness of dividing the survey area into smaller subdomains. The results of the smaller subdomains were compared to the inversion results of a single domain over a larger area using an upgraded form of EI3D that incorporates multi-processing capabilities and 32 GB of RAM. The contours from the smaller subdomains showed discontinuity at the boundaries between adjacent models, which does not match the hydrogeologic expectations given the nature of disposal at the site. At several boundaries, the contours of the low-resistivity areas close, leaving the appearance of disconnected plumes, or open contours at a boundary are not continued by the low-resistivity plume in the adjacent subdomain. The model results of the single large domain show a continuous monolithic plume within the central and western portion of the site, directly beneath the elongated trenches. It is recommended that, where possible, the domain not be subdivided, but instead encompass as much of the survey area as possible given the memory of available computing resources.
Radiation-Hardened Solid-State Drive
NASA Technical Reports Server (NTRS)
Sheldon, Douglas J.
2010-01-01
A method is provided for a radiation-hardened (rad-hard) solid-state drive for space mission memory applications by combining rad-hard and commercial off-the-shelf (COTS) non-volatile memories (NVMs) into a hybrid architecture. The architecture is controlled by a rad-hard ASIC (application specific integrated circuit) or an FPGA (field programmable gate array). Specific error handling and data management protocols are developed for use in a rad-hard environment. The rad-hard memories are smaller in overall memory density, but are used to control and manage radiation-induced errors in the main, and much larger density, non-rad-hard COTS memory devices. Small amounts of rad-hard memory are used as error buffers and temporary caches for radiation-induced errors in the large COTS memories. The rad-hard ASIC/FPGA implements a variety of error-handling protocols to manage these radiation-induced errors. The large COTS memory is triplicated for protection, and CRC-based counters are calculated for sub-areas in each COTS NVM array. These counters are stored in the rad-hard non-volatile memory. Through monitoring, rewriting, regeneration, triplication, and long-term storage, radiation-induced errors in the large NV memory are managed. The rad-hard ASIC/FPGA also interfaces with the external computer buses.
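A toy model of the triplicate-and-scrub scheme is sketched below in hypothetical Python, with zlib's CRC-32 standing in for the CRC-based counters; the real system operates on NVM arrays under ASIC/FPGA control, so this only illustrates the logic of majority voting plus CRC verification.

```python
import zlib

def store_triplicated(page):
    """Keep three copies of a page (as in the large COTS memory) plus a
    small CRC of the clean data (as held in the rad-hard memory)."""
    copies = [bytearray(page) for _ in range(3)]
    return copies, zlib.crc32(page)

def scrub(copies, crc):
    """Repair radiation-induced errors by per-byte majority vote, then
    verify the repaired page against the stored CRC."""
    repaired = bytearray()
    for a, b, c in zip(*copies):
        repaired.append(a if a in (b, c) else b)   # 2-of-3 vote per byte
    return bytes(repaired), zlib.crc32(bytes(repaired)) == crc

page = b"rover science data"
copies, crc = store_triplicated(page)
copies[1][3] ^= 0x40              # simulate a radiation-induced bit flip
data, ok = scrub(copies, crc)     # vote repairs it; CRC confirms
```

Single-copy upsets are corrected by the vote; the CRC catches the rarer case where two copies are corrupted at the same offset and the vote fails.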
Bistable electroactive polymer for refreshable Braille display with improved actuation stability
NASA Astrophysics Data System (ADS)
Niu, Xiaofan; Brochu, Paul; Stoyanov, Hristiyan; Yun, Sung Ryul; Pei, Qibing
2012-04-01
Poly(t-butyl acrylate) is a bistable electroactive polymer (BSEP) capable of rigid-to-rigid actuation. The BSEP combines the large-strain actuation of dielectric elastomers with shape memory property. We have introduced a material approach to overcome pull-in instability in poly(t-butyl acrylate) that significantly improves the actuation lifetime at strains greater than 100%. Refreshable Braille display devices with size of a smartphone screen have been fabricated to manifest a potential application of the BSEP. We will report the testing results of the devices by a Braille user.
VOTable JAVA Streaming Writer and Applications.
NASA Astrophysics Data System (ADS)
Kulkarni, P.; Kembhavi, A.; Kale, S.
2004-07-01
Virtual Observatory related tools use a new standard for data transfer called the VOTable format. This is a variant of the XML format that enables easy transfer of data over the web. We describe a streaming interface that can bridge the VOTable format, through a user-friendly graphical interface, with the FITS and ASCII formats, which are commonly used by astronomers. A streaming interface is important for efficient use of memory because of the large size of catalogues. The tools are developed in JAVA to provide a platform-independent interface. We have also developed a stand-alone version that can be used to convert data stored in ASCII or FITS format on a local machine. The streaming writer is being used successfully in VOPlot (see Kale et al. 2004 for a description of VOPlot). We present the test results of converting huge FITS and ASCII data into the VOTable format on machines that have only limited memory.
NASA Astrophysics Data System (ADS)
Chumlyakov, Yu. I.; Kireeva, I. V.; Kretinina, I. V.; Keinikh, K. S.; Kuts, O. A.; Kirillov, V. A.; Karaman, I.; Maier, H.
2013-12-01
Using single crystals of a Fe - 28% Ni - 17% Co - 11.5% Al - 2.5% Ta (at.%) alloy, oriented for tensile loading along the [001] direction, the shape-memory (SME) and superelasticity (SE) effects caused by reversible thermoelastic martensitic transformations (MTs) from a high-temperature fcc phase into a bct martensite are investigated. It is demonstrated that the conditions necessary for the thermoelastic MTs to occur are achieved by aging at 973 K for times (t) from 0.5 to 7.0 hours, which is accompanied by precipitation of γ'-phase particles, (FeNiCo)3(AlTa), with d < 8-12 nm. When the size of the γ'-precipitates becomes as large as d ≥ 8-12 nm, the MT becomes partially reversible. The physical causes underlying the kinetics of thermoelastic reversible fcc-bct MTs are discussed.
Memory-Efficient Onboard Rock Segmentation
NASA Technical Reports Server (NTRS)
Burl, Michael C.; Thompson, David R.; Bornstein, Benjamin J.; deGranville, Charles K.
2013-01-01
Rockster-MER is an autonomous perception capability that was uploaded to the Mars Exploration Rover Opportunity in December 2009. This software provides the vision front end for a larger software system known as AEGIS (Autonomous Exploration for Gathering Increased Science), which was recently named 2011 NASA Software of the Year. As the first step in AEGIS, Rockster-MER analyzes an image captured by the rover, and detects and automatically identifies the boundary contours of rocks and regions of outcrop present in the scene. This initial segmentation step reduces the data volume from millions of pixels into hundreds (or fewer) of rock contours. Subsequent stages of AEGIS then prioritize the best rocks according to scientist-defined preferences and take high-resolution, follow-up observations. Rockster-MER has performed robustly from the outset on the Mars surface under challenging conditions. Rockster-MER is a specially adapted, embedded version of the original Rockster algorithm ("Rock Segmentation Through Edge Regrouping" (NPO-44417), Software Tech Briefs, September 2008, p. 25). Although the new version performs the same basic task as the original code, the software has been (1) significantly upgraded to overcome the severe onboard resource limitations (CPU, memory, power, time) and (2) "bulletproofed" through code reviews and extensive testing and profiling to avoid the occurrence of faults. Because of the limited computational power of the RAD6000 flight processor on Opportunity (roughly two orders of magnitude slower than a modern workstation), the algorithm was heavily tuned to improve its speed. Several functional elements of the original algorithm were removed as a result of an extensive cost/benefit analysis conducted on a large set of archived rover images. The algorithm was also required to operate below a stringent 4 MB high-water memory ceiling; hence, numerous tricks and strategies were introduced to reduce the memory footprint.
Local filtering operations were re-coded to operate on horizontal data stripes across the image. Data types were reduced to smaller sizes where possible. Binary-valued intermediate results were squeezed into a more compact, one-bit-per-pixel representation through bit packing and bit manipulation macros. An estimated 16-fold reduction in memory footprint relative to the original Rockster algorithm was achieved. The resulting memory footprint is less than four times the base image size. Also, memory allocation calls were modified to draw from a static pool and consolidated to reduce memory management overhead and fragmentation. Rockster-MER has now been run onboard Opportunity numerous times as part of AEGIS with exceptional performance. Sample results are available on the AEGIS website at http://aegis.jpl.nasa.gov.
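The one-bit-per-pixel packing idea can be sketched in a few lines (illustrative Python; the flight code uses C bit-manipulation macros, and the tiny mask below is a made-up example):

```python
def pack_bits(mask):
    """Pack a binary-valued image (rows of 0/1 pixels) into one bit per
    pixel, instead of one byte per pixel."""
    flat = [b for row in mask for b in row]
    packed = bytearray((len(flat) + 7) // 8)
    for i, bit in enumerate(flat):
        if bit:
            packed[i // 8] |= 1 << (i % 8)   # set bit i
    return packed

def get_bit(packed, i):
    """Read pixel i back out of the packed representation."""
    return (packed[i // 8] >> (i % 8)) & 1

mask = [[1, 0, 0, 1], [0, 1, 1, 0], [1, 1, 0, 0]]
packed = pack_bits(mask)
# 12 pixels fit in 2 bytes instead of 12: an 8-fold reduction for
# binary intermediate results, at the cost of shift/mask work per access
```

The trade captured here, less memory in exchange for extra bit-manipulation instructions, is exactly the one a tight high-water memory ceiling forces.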
Neurocognitive dysfunction in subjects at clinical high risk for psychosis: A meta-analysis.
Zheng, Wei; Zhang, Qing-E; Cai, Dong-Bin; Ng, Chee H; Ungvari, Gabor S; Ning, Yu-Ping; Xiang, Yu-Tao
2018-05-05
Findings of neurocognitive dysfunction in subjects at Clinical High Risk for Psychosis (CHR-P) have been controversial. This meta-analysis systematically examined studies of neurocognitive functions using the MATRICS Consensus Cognitive Battery (MCCB) in CHR-P. An independent literature search of both English and Chinese databases was conducted by two reviewers. The standardized mean difference (SMD) was calculated using a random effects model to evaluate the effect size of the meta-analytic results. Six case-control studies (n = 396) comparing neurocognitive functions between CHR-P subjects (n = 197) and healthy controls (n = 199) using the MCCB were identified; 4 (66.7%) studies were rated as "high quality". Compared to healthy controls, CHR-P subjects showed impairment with large effect size in overall cognition (n = 128, SMD = -1.00, 95%CI: -1.38, -0.63, P < 0.00001; I² = 2%), processing speed (SMD = -1.21) and attention/vigilance (SMD = -0.83), and with medium effect size in working memory (SMD = -0.76), reasoning and problem solving (SMD = -0.71), visual learning (SMD = -0.68) and verbal learning (SMD = -0.67). No significant difference between CHR-P subjects and controls was found regarding social cognition (SMD = -0.33, 95%CI: -0.76, 0.10, P = 0.14; I² = 70%), a small effect size. Apart from social cognition, CHR-P subjects performed worse than healthy controls in all MCCB cognitive domains, particularly in processing speed, attention/vigilance and working memory. Copyright © 2018 Elsevier Ltd. All rights reserved.
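The SMD statistic used throughout this meta-analysis is a standardized mean difference (Cohen's d with a pooled standard deviation). A small worked example with hypothetical group summaries (not data from the reviewed studies):

```python
import math

def smd(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference: group difference divided by the
    pooled standard deviation (Cohen's d)."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2)
                          / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# Hypothetical cognition scores: cases vs. controls
d = smd(mean1=45.0, sd1=10.0, n1=50, mean2=55.0, sd2=10.0, n2=50)
# d = -1.0: cases score one pooled SD below controls, a "large" effect
# by the conventional benchmarks (|d| ~ 0.2 small, 0.5 medium, 0.8 large)
```

Negative SMDs in the abstract follow the same convention: CHR-P subjects scoring below healthy controls.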
High speed, very large (8 megabyte) first in/first out buffer memory (FIFO)
Baumbaugh, Alan E.; Knickerbocker, Kelly L.
1989-01-01
A fast FIFO (First In First Out) memory buffer capable of storing data at rates of 100 megabytes per second. The invention includes a data packer which concatenates small bit data words into large bit data words, a memory array having individual data storage addresses adapted to store the large bit data words, a data unpacker into which large bit data words from the array can be read and reconstructed into small bit data words, and a controller to control and keep track of the individual data storage addresses in the memory array into which data from the packer is being written and data to the unpacker is being read.
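The packer/unpacker pair can be modeled as bit-shifting concatenation. This is a hypothetical sketch, not the patented hardware design: the word widths, the low-order-first convention, and the sample data are all assumptions made for illustration.

```python
def pack_words(small_words, small_bits=8, large_bits=32):
    """Concatenate small-bit words into large-bit words (the 'data packer');
    the first small word lands in the low-order bits."""
    per = large_bits // small_bits
    mask = (1 << small_bits) - 1
    large = []
    for i in range(0, len(small_words), per):
        word = 0
        for k, w in enumerate(small_words[i:i + per]):
            word |= (w & mask) << (k * small_bits)
        large.append(word)
    return large

def unpack_words(large_words, small_bits=8, large_bits=32):
    """Reconstruct the original small-bit words (the 'data unpacker')."""
    per = large_bits // small_bits
    mask = (1 << small_bits) - 1
    return [(w >> (k * small_bits)) & mask
            for w in large_words for k in range(per)]

data = [0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77, 0x88]
packed = pack_words(data)      # four 8-bit words per 32-bit memory word
```

Packing lets the memory array run at a quarter of the small-word rate per storage address, which is how wide, slower memory can sustain a fast input stream.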
Exploring the effect of sleep and reduced interference on different forms of declarative memory.
Schönauer, Monika; Pawlizki, Annedore; Köck, Corinna; Gais, Steffen
2014-12-01
Many studies have found that sleep benefits declarative memory consolidation. However, fundamental questions on the specifics of this effect remain topics of discussion. It is not clear which forms of memory are affected by sleep and whether this beneficial effect is partly mediated by passive protection against interference. Moreover, a putative correlation between the structure of sleep and its memory-enhancing effects is still being discussed. In three experiments, we tested whether sleep differentially affects various forms of declarative memory. We varied verbal content (verbal/nonverbal), item type (single/associate), and recall mode (recall/recognition, cued/free recall) to examine the effect of sleep on specific memory subtypes. We compared within-subject differences in memory consolidation between intervals including sleep, active wakefulness, or quiet meditation, which reduced external as well as internal interference and rehearsal. Forty healthy adults aged 18-30 y, and 17 healthy adults aged 24-55 y with extensive meditation experience participated in the experiments. All types of memory were enhanced by sleep if the sample size provided sufficient statistical power. Smaller sample sizes showed an effect of sleep if a combined measure of different declarative memory scales was used. In a condition with reduced external and internal interference, performance was equal to one with high interference. Here, memory consolidation was significantly lower than in a sleep condition. We found no correlation between sleep structure and memory consolidation. Sleep does not preferentially consolidate a specific kind of declarative memory, but consistently promotes overall declarative memory formation. This effect is not mediated by reduced interference. © 2014 Associated Professional Sleep Societies, LLC.
Experimental Optoelectronic Associative Memory
NASA Technical Reports Server (NTRS)
Chao, Tien-Hsin
1992-01-01
Optoelectronic associative memory responds to an input image by displaying one of M remembered images. Which image to display is determined by optoelectronic analog computation of the resemblance between the input image and each remembered image. The system does not rely on precomputation and storage of an outer-product synapse matrix, so the size of the memory needed to store and process images is reduced.
Fan Size and Foil Type in Recognition Memory.
ERIC Educational Resources Information Center
Walls, Richard T.; And Others
An experiment involving 20 graduate and undergraduate students (7 males and 13 females) at West Virginia University (Morgantown) assessed "fan network structures" of recognition memory. A fan in network memory structure occurs when several facts are connected into a single node (concept). The more links from that concept to various…
A Memory-Based Model of Hick's Law
ERIC Educational Resources Information Center
Schneider, Darryl W.; Anderson, John R.
2011-01-01
We propose and evaluate a memory-based model of Hick's law, the approximately linear increase in choice reaction time with the logarithm of set size (the number of stimulus-response alternatives). According to the model, Hick's law reflects a combination of associative interference during retrieval from declarative memory and occasional savings…
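The regularity being modeled can be stated compactly: mean choice reaction time grows approximately linearly with log2 of the number of alternatives. A tiny numeric illustration of the functional form (the intercept and slope are hypothetical values in seconds, not parameters fitted by the authors):

```python
import math

def hicks_law_rt(set_size, a=0.2, b=0.15):
    """Hick's law: RT = a + b * log2(n + 1) for n stimulus-response
    alternatives (a, b are hypothetical intercept/slope values)."""
    return a + b * math.log2(set_size + 1)

rts = [hicks_law_rt(n) for n in (1, 2, 4, 8)]
# RT rises with set size, but with diminishing increments per added
# alternative, the logarithmic signature the memory-based model explains
```

In the memory-based account, this logarithmic growth falls out of associative interference during declarative retrieval rather than from an information-theoretic decision stage.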
Rodríguez, Rafael L; Briceño, R D; Briceño-Aguilar, Eduardo; Höbel, Gerlinde
2015-01-01
Nephila clavipes golden orb-web spiders accumulate prey larders on their webs and search for them if they are removed from their web. Spiders that lose larger larders (i.e., spiders that lose larders consisting of more prey items) search for longer intervals, indicating that the spiders form memories of the size of the prey larders they have accumulated, and use those memories to regulate recovery efforts when the larders are pilfered. Here, we ask whether the spiders represent prey counts (i.e., numerosity) or a continuous integration of prey quantity (mass) in their memories. We manipulated larder sizes in treatments that varied in either prey size or prey numbers but were equivalent in total prey quantity (mass). We then removed the larders to elicit searching and used the spiders' searching behavior as an assay of their representations in memory. Searching increased with prey quantity (larder size) and did so more steeply with higher prey counts than with single prey of larger sizes. Thus, Nephila spiders seem to track prey quantity in two ways, but to attend more to prey numerosity. We discuss alternatives for continuous accumulator mechanisms that remain to be tested against the numerosity hypothesis, and the evolutionary and adaptive significance of evidence suggestive of numerosity in a sit-and-wait invertebrate predator.
Working, declarative and procedural memory in specific language impairment
Lum, Jarrad A.G.; Conti-Ramsden, Gina; Page, Debra; Ullman, Michael T.
2012-01-01
According to the Procedural Deficit Hypothesis (PDH), abnormalities of brain structures underlying procedural memory largely explain the language deficits in children with specific language impairment (SLI). These abnormalities are posited to result in core deficits of procedural memory, which in turn explain the grammar problems in the disorder. The abnormalities are also likely to lead to problems with other, non-procedural functions, such as working memory, that rely at least partly on the affected brain structures. In contrast, declarative memory is expected to remain largely intact, and should play an important compensatory role for grammar. These claims were tested by examining measures of working, declarative and procedural memory in 51 children with SLI and 51 matched typically-developing (TD) children (mean age 10). Working memory was assessed with the Working Memory Test Battery for Children, declarative memory with the Children’s Memory Scale, and procedural memory with a visuo-spatial Serial Reaction Time task. As compared to the TD children, the children with SLI were impaired at procedural memory, even when holding working memory constant. In contrast, they were spared at declarative memory for visual information, and at declarative memory in the verbal domain after controlling for working memory and language. Visuo-spatial short-term memory was intact, whereas verbal working memory was impaired, even when language deficits were held constant. Correlation analyses showed neither visuo-spatial nor verbal working memory was associated with either lexical or grammatical abilities in either the SLI or TD children. Declarative memory correlated with lexical abilities in both groups of children. Finally, grammatical abilities were associated with procedural memory in the TD children, but with declarative memory in the children with SLI. These findings replicate and extend previous studies of working, declarative and procedural memory in SLI. 
Overall, we suggest that the evidence largely supports the predictions of the PDH. PMID:21774923
2005-04-06
Shape Memory Alloy - SMA wire Alloy: W6 Size: 0.20mm (as drawn 36% cold work, 0.0079") Manufacture date: 01/08/2009 Quantity: 36mm (120 ft) NiTi 16pt wire
Manipulations of attention during eating and their effects on later snack intake.
Higgs, Suzanne
2015-09-01
Manipulation of attention during eating has been reported to affect later consumption via changes in meal memory. The aim of the present studies was to examine the robustness of these effects and investigate moderating factors. Across three studies, attention to eating was manipulated via distraction (via a computer game or TV watching) or focusing of attention on eating, and effects on subsequent snack consumption and meal memory were assessed. The participants were predominantly lean, young women students and the designs were between-subjects. Distraction increased later snack intake; this effect was larger when participants were more motivated to engage with the distracter, and was offset when the distracter included food-related cues. Attention to eating reduced later snacking, and this effect was larger when participants imagined eating from their own perspective than when they imagined eating from a third-person perspective. Meal memory was impaired after distraction, but focusing on eating did not affect later meal memory, possibly explained by ceiling effects for the memory measure. The pattern of results suggests that attention manipulations during eating have robust effects on later eating and that the effect sizes are medium to large. The data are consistent with previous reports and add to the literature by suggesting that the type of attention manipulation is important in determining effects on later eating. The results further suggest that attentive eating may be a useful target in interventions to help with appetite control. Copyright © 2015 Elsevier Ltd. All rights reserved.
Sudo, Akihito; Sato, Akihiro; Hasegawa, Osamu
2009-06-01
Associative memory operating in a real environment must perform well in online incremental learning and be robust to noisy data because noisy associative patterns are presented sequentially in a real environment. We propose a novel associative memory that satisfies these requirements. Using the proposed method, new associative pairs that are presented sequentially can be learned accurately without forgetting previously learned patterns. The memory size of the proposed method increases adaptively with learning patterns. Therefore, it suffers neither redundancy nor insufficiency of memory size, even in an environment in which the maximum number of associative pairs to be presented is unknown before learning. Noisy inputs in real environments are classifiable into two types: noise-added original patterns and faultily presented random patterns. The proposed method deals with two types of noise. To our knowledge, no conventional associative memory addresses noise of both types. The proposed associative memory performs as a bidirectional one-to-many or many-to-one associative memory and deals not only with bipolar data, but also with real-valued data. Results demonstrate that the proposed method's features are important for application to an intelligent robot operating in a real environment. The originality of our work consists of two points: employing a growing self-organizing network for an associative memory, and discussing what features are necessary for an associative memory for an intelligent robot and proposing an associative memory that satisfies those requirements.
Method and apparatus for managing access to a memory
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeBenedictis, Erik
A method and apparatus for managing access to a memory of a computing system. A controller transforms a plurality of operations that represent a computing job into an operational memory layout that reduces the size of a selected portion of the memory that needs to be accessed to perform the computing job. The controller stores the operational memory layout in a plurality of memory cells within the selected portion of the memory. The controller controls the sequence by which a processor in the computing system accesses the memory to perform the computing job using the operational memory layout. The operational memory layout reduces the amount of energy consumed by the processor to perform the computing job.
Seo, Bo Am; Cho, Taesup; Lee, Daniel Z; Lee, Joong-Jae; Lee, Boyoung; Kim, Seong-Wook; Shin, Hee-Sup; Kang, Myoung-Goo
2018-06-18
Mutations in the human LARGE gene result in severe intellectual disability and muscular dystrophy. How LARGE mutation leads to intellectual disability, however, is unclear. In our proteomic study, LARGE was found to be a component of the AMPA-type glutamate receptor (AMPA-R) protein complex, a main player for learning and memory in the brain. Here, our functional study of LARGE showed that LARGE at the Golgi apparatus (Golgi) negatively controlled AMPA-R trafficking from the Golgi to the plasma membrane, leading to down-regulated surface and synaptic AMPA-R targeting. In LARGE knockdown mice, long-term potentiation (LTP) was occluded by synaptic AMPA-R overloading, resulting in impaired contextual fear memory. These findings indicate that the fine-tuning of AMPA-R trafficking by LARGE at the Golgi is critical for hippocampus-dependent memory in the brain. Our study thus provides insights into the pathophysiology underlying cognitive deficits in brain disorders associated with intellectual disability.
Self-folding with shape memory composites at the millimeter scale
NASA Astrophysics Data System (ADS)
Felton, S. M.; Becker, K. P.; Aukes, D. M.; Wood, R. J.
2015-08-01
Self-folding is an effective method for creating 3D shapes from flat sheets. In particular, shape memory composites—laminates containing shape memory polymers—have been used to self-fold complex structures and machines. To date, however, these composites have been limited to feature sizes larger than one centimeter. We present a new shape memory composite capable of folding millimeter-scale features. This technique can be activated by a global heat source for simultaneous folding, or by resistive heaters for sequential folding. It is capable of feature sizes ranging from 0.5 to 40 mm, and is compatible with multiple laminate compositions. We demonstrate the ability to produce complex structures and mechanisms by building two self-folding pieces: a model ship and a model bumblebee.
A quantitative meta-analysis of neurocognitive functioning in posttraumatic stress disorder
Scott, J. Cobb; Matt, Georg E.; Wrocklage, Kristen M.; Crnich, Cassandra; Jordan, Jessica; Southwick, Steven M.; Krystal, John H.; Schweinsburg, Brian C.
2014-01-01
Posttraumatic stress disorder (PTSD) is associated with regional alterations in brain structure and function that are hypothesized to contribute to symptoms and cognitive deficits associated with the disorder. We present here the first systematic meta-analysis of neurocognitive outcomes associated with PTSD to examine a broad range of cognitive domains and describe the profile of cognitive deficits, as well as modifying clinical factors and study characteristics. This report is based on data from 60 studies totaling 4,108 participants, including 1,779 with PTSD, 1,446 trauma-exposed comparison participants, and 895 healthy comparison participants without trauma exposure. Effect size estimates were calculated using a mixed-effects meta-analysis for nine cognitive domains: attention/working memory, executive functions, verbal learning, verbal memory, visual learning, visual memory, language, speed of information processing, and visuospatial abilities. Analyses revealed significant neurocognitive effects associated with PTSD, although these ranged widely in magnitude, with the largest effect sizes in verbal learning (d = −.62), speed of information processing (d = −.59), attention/working memory (d = −.50), and verbal memory (d = −.46). Effect size estimates were significantly larger in treatment-seeking than community samples and in studies that did not exclude participants with attention-deficit hyperactivity disorder, and effect sizes were affected by between-group IQ discrepancies and the gender composition of the PTSD groups. Our findings indicate that consideration of neuropsychological functioning in attention, verbal memory, and speed of information processing may have important implications for the effective clinical management of persons with PTSD. Results are further discussed in the context of cognitive models of PTSD and the limitations of this literature. PMID:25365762
Memory access in shared virtual memory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berrendorf, R.
1992-01-01
Shared virtual memory (SVM) is a virtual memory layer with a single address space on top of a distributed real memory on parallel computers. We examine the behavior and performance of SVM running a parallel program with medium-grained, loop-level parallelism on top of it. A simulator for the underlying parallel architecture can be used to examine the behavior of SVM more deeply. The influence of several parameters, such as the number of processors, page size, cold or warm start, and restricted page replication, is studied.
The contribution of stimulus frequency and recency to set-size effects.
van 't Wout, Félice
2018-06-01
Hick's law describes the increase in choice reaction time (RT) with the number of stimulus-response (S-R) mappings. However, in choice RT experiments, set-size is typically confounded with stimulus recency and frequency: With a smaller set-size, each stimulus occurs on average more frequently and more recently than with a larger set-size. To determine to what extent stimulus recency and frequency contribute to the set-size effect, stimulus set-size was manipulated independently of stimulus recency and frequency, by keeping recency and frequency constant for a subset of the stimuli. Although this substantially reduced the set-size effect (by approximately two-thirds for these stimuli), it did not eliminate it. Thus, the time required to retrieve an S-R mapping from memory is (at least in part) determined by the number of alternatives. In contrast, a recent task switching study (Van 't Wout et al., Journal of Experimental Psychology: Learning, Memory & Cognition, 41, 363-376, 2015) using the same manipulation found that the time required to retrieve a task-set from memory is not influenced by the number of alternatives per se. Hence, this experiment further supports a distinction between two levels of representation in task-set control: The level of task-sets, and the level of S-R mappings.
Does length or neighborhood size cause the word length effect?
Jalbert, Annie; Neath, Ian; Surprenant, Aimée M
2011-10-01
Jalbert, Neath, Bireta, and Surprenant (2011) suggested that past demonstrations of the word length effect, the finding that words with fewer syllables are recalled better than words with more syllables, included a confound: The short words had more orthographic neighbors than the long words. The experiments reported here test two predictions that would follow if neighborhood size is a more important factor than word length. In Experiment 1, we found that concurrent articulation removed the effect of neighborhood size, just as it removes the effect of word length. Experiment 2 demonstrated that this pattern is also found with nonwords. For Experiment 3, we factorially manipulated length and neighborhood size, and found only effects of the latter. These results are problematic for any theory of memory that includes decay offset by rehearsal, but they are consistent with accounts that include a redintegrative stage that is susceptible to disruption by noise. The results also confirm the importance of lexical and linguistic factors on memory tasks thought to tap short-term memory.
Size effect and scaling power-law for superelasticity in shape-memory alloys at the nanoscale.
Gómez-Cortés, Jose F; Nó, Maria L; López-Ferreño, Iñaki; Hernández-Saz, Jesús; Molina, Sergio I; Chuvilin, Andrey; San Juan, Jose M
2017-08-01
Shape-memory alloys capable of a superelastic stress-induced phase transformation and a high displacement actuation have promise for applications in micro-electromechanical systems for wearable healthcare and flexible electronic technologies. However, some of the fundamental aspects of their nanoscale behaviour remain unclear, including the question of whether the critical stress for the stress-induced martensitic transformation exhibits a size effect similar to that observed in confined plasticity. Here we provide evidence of a strong size effect on the critical stress that induces such a transformation: a threefold increase in the trigger stress as pillars, milled on [001] L2₁ single crystals of a Cu-Al-Ni shape-memory alloy, shrink from 2 μm to 260 nm in diameter. A power-law size dependence of n = -2 is observed for the nanoscale superelasticity. Our observation is supported by the atomic lattice shearing and an elastic model for homogeneous martensite nucleation.
Practical proof of CP element based design for 14nm node and beyond
NASA Astrophysics Data System (ADS)
Maruyama, Takashi; Takita, Hiroshi; Ikeno, Rimon; Osawa, Morimi; Kojima, Yoshinori; Sugatani, Shinji; Hoshino, Hiromi; Hino, Toshio; Ito, Masaru; Iizuka, Tetsuya; Komatsu, Satoshi; Ikeda, Makoto; Asada, Kunihiro
2013-03-01
To realize HVM (High Volume Manufacturing) with CP (Character Projection) based EBDW, shot count reduction is the essential key. All device circuits should be composed with predefined character parts; we call this methodology "CP element based design". In our previous work, we presented the following three concepts [2]. 1) Memory: We reported the prospects of affordability for the CP-stencil resource. 2) Logic cell: We adopted a multi-cell clustering approach in the physical synthesis. 3) Random interconnect: We proposed an ultra-regular layout scheme using fixed-size wiring tiles containing repeated tracks and cutting points at the tile edges. In this paper, we report the experimental proofs of these methodologies. In full-chip layout, CP stencil resource management is a critical key. From the MCC-POC (Proof of Concept) result [1], we assumed the total available CP stencil resource to be 9000 um2. All circuit macros must be laid out within this restriction. The assignment of CP-stencil resources to the memory macros is especially important, as they consume a considerable share of the resource owing to the various line-ups, such as 1RW- and 2RW-SRAMs, Register Files, and ROM, which require several varieties of large peripheral circuits. Furthermore, memory macros typically occupy more than 40% of die area in leading-edge logic LSI products, so the impact of shot-count increase is serious. To save CP-stencil resources, we constructed an automatic CP analyzing system. We developed two extraction modes: simple division by block, and layout-repeatability recognition. By properly selecting between these modes based on the characteristics of each peripheral circuit, we could minimize the consumption of CP stencil resources. The estimation for the 14 nm technology node was performed based on the analysis of a practical memory compiler.
The required resource for the memory macros proved affordable at 60% of the full CP stencil resource, and the wafer-level converted shot count proved sufficient to meet 100 WPH throughput. In logic cell design, circuit performance after cell clustering was verified. Cell clustering based on physical distance proved to incur a large penalty, mainly in wiring length. To reduce this design penalty, we proposed CP cell clustering based on logical distance. For shot-count reduction in random-interconnect area design, we proposed a more structured routing architecture consisting of track exchange and via-position arrangement. Putting these design approaches together, we can design CP stencils that hit the target throughput within the area constraint. Analysis of other macros, such as analog, I/O, and DUMMY, showed that no special CP design approach is needed beyond legacy pattern-matching CP extraction. From all these experimental results, we see good prospects for realizing a full CP element based layout.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Di, Sheng; Berrocal, Eduardo; Cappello, Franck
The silent data corruption (SDC) problem is attracting more and more attention because it is expected to have a great impact on exascale HPC applications. SDC faults are hazardous in that they pass unnoticed by hardware and can lead to wrong computation results. In this work, we formulate SDC detection as a runtime one-step-ahead prediction method, leveraging multiple linear prediction methods in order to improve the detection results. The contributions are twofold: (1) we propose an error feedback control model that can reduce the prediction errors for different linear prediction methods, and (2) we propose a spatial-data-based even-sampling method to minimize the detection overheads (including memory and computation cost). We implement our algorithms in the Fault Tolerance Interface, a fault tolerance library with multiple checkpoint levels, such that users can conveniently protect their HPC applications against both SDC errors and fail-stop errors. We evaluate our approach by using large-scale traces from well-known, large-scale HPC applications, as well as by running those HPC applications in a real cluster environment. Experiments show that our error feedback control model can improve detection sensitivity by 34-189% for bit-flip memory errors injected with bit positions in the range [20,30], without any degradation in detection accuracy. Furthermore, memory size can be reduced by 33% with our spatial-data even-sampling method, with only a slight and graceful degradation in detection sensitivity.
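The one-step-ahead idea in this abstract can be illustrated with a minimal sketch. This is not the paper's exact formulation: the two-point linear-extrapolation predictor, the fixed threshold, and the function name are illustrative assumptions, and replacing a flagged value by its prediction is only a crude stand-in for the paper's error feedback control model.

```python
# Minimal sketch (illustrative, not the paper's algorithm): predict each
# new sample of an HPC state variable one step ahead with a simple linear
# model, and flag a silent data corruption (SDC) when the observation
# deviates from the prediction by more than a noise bound.

def detect_sdc(series, threshold):
    """Return the indices where |observed - predicted| exceeds threshold.

    Predictor: linear extrapolation from the last two accepted values,
    pred[t] = 2*x[t-1] - x[t-2]. A flagged value is replaced by its
    prediction so one corruption does not poison later predictions.
    """
    x = list(series)
    suspects = []
    for t in range(2, len(x)):
        pred = 2 * x[t - 1] - x[t - 2]
        if abs(x[t] - pred) > threshold:
            suspects.append(t)
            x[t] = pred  # keep later predictions on the smooth trajectory
    return suspects

# Smoothly evolving data with one bit-flip-like spike injected at t = 6:
data = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 40.6, 0.7, 0.8]
suspects = detect_sdc(data, 1.0)  # → [6]
```

The sketch shows why prediction-based detection is cheap at runtime: each sample costs a couple of arithmetic operations, which is what makes the approach viable inside an HPC application loop.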
NASA Technical Reports Server (NTRS)
Muellerschoen, R. J.
1988-01-01
A unified method to permute vector-stored upper-triangular diagonal factorized covariance (UD) and vector stored upper-triangular square-root information filter (SRIF) arrays is presented. The method involves cyclical permutation of the rows and columns of the arrays and retriangularization with appropriate square-root-free fast Givens rotations or elementary slow Givens reflections. A minimal amount of computation is performed and only one scratch vector of size N is required, where N is the column dimension of the arrays. To make the method efficient for large SRIF arrays on a virtual memory machine, three additional scratch vectors each of size N are used to avoid expensive paging faults. The method discussed is compared with the methods and routines of Bierman's Estimation Subroutine Library (ESL).
BigView Image Viewing on Tiled Displays
NASA Technical Reports Server (NTRS)
Sandstrom, Timothy
2007-01-01
BigView allows for interactive panning and zooming of images of arbitrary size on desktop PCs running Linux. Additionally, it can work in a multi-screen environment where multiple PCs cooperate to view a single, large image. Using this software, one can explore on relatively modest machines images such as the Mars Orbiter Camera mosaic [92,160 × 33,280 pixels]. The images must first be converted into paged format, where the image is stored in 256 × 256 pages to allow rapid movement of pixels into texture memory. The format contains an image pyramid: a set of scaled versions of the original image. Each scaled image is 1/2 the size of the previous, starting with the original down to the smallest, which fits into a single 256 × 256 page.
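The paged-pyramid layout described above is easy to compute. The sketch below is illustrative, not BigView's code: the page size and mosaic dimensions come from the abstract, while the function name is hypothetical. It halves the image until one level fits in a single page, reporting the page grid at every level.

```python
# Sketch of a BigView-style paged image pyramid (illustrative).

PAGE = 256  # pages are 256 x 256 pixels

def pyramid_levels(width, height, page=PAGE):
    """Halve the image until one level fits in a single page.

    Returns a list of (width, height, pages_x, pages_y) per level,
    starting from the full-resolution image.
    """
    levels = []
    while True:
        pages_x = -(-width // page)   # ceiling division
        pages_y = -(-height // page)
        levels.append((width, height, pages_x, pages_y))
        if width <= page and height <= page:
            break
        width = max(1, width // 2)
        height = max(1, height // 2)
    return levels

# The Mars Orbiter Camera mosaic cited in the abstract:
levels = pyramid_levels(92160, 33280)
# Full resolution needs a 360 x 130 grid of pages; the coarsest
# level fits in one page, enabling instant zoomed-out display.
```

Paging means only the pages intersecting the current viewport (at the current pyramid level) need to be resident in texture memory, which is what makes panning a 92,160-pixel-wide mosaic interactive on a modest PC.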
Interpolation Approach To Computer-Generated Holograms
NASA Astrophysics Data System (ADS)
Yatagai, Toyohiko
1983-10-01
A computer-generated hologram (CGH) for reconstructing independent N × N resolution points would actually require a hologram made up of N × N sampling cells. For dependent sampling points of Fourier-transform CGHs, the required memory size for computation can be reduced by using an interpolation technique for reconstructed image points. We have made a mosaic hologram which consists of K × K subholograms with N × N sampling points multiplied by an appropriate weighting factor. It is shown that the mosaic hologram can reconstruct an image with NK × NK resolution points. The main advantage of the present algorithm is that a sufficiently large hologram of NK × NK sample points is synthesized from K × K subholograms which are successively calculated from the data of N × N sample points and also successively plotted.
ERIC Educational Resources Information Center
Swanson, H. Lee; Zheng, Xinhua; Jerman, Olga
2009-01-01
The purpose of the present study was to synthesize research that compares children with and without reading disabilities (RD) on measures of short-term memory (STM) and working memory (WM). Across a broad age, reading, and IQ range, 578 effect sizes (ESs) were computed, yielding a mean ES across studies of -0.89 (SD = 1.03). A total of 257 ESs…
Impact of workstations on criticality analyses at ABB combustion engineering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tarko, L.B.; Freeman, R.S.; O'Donnell, P.F.
1993-01-01
During 1991, ABB Combustion Engineering (ABB C-E) made the transition from a CDC Cyber 990 mainframe for nuclear criticality safety analyses to Hewlett-Packard (HP)/Apollo workstations. The primary motivation for this change was the improved economics of the workstation and maintaining state-of-the-art technology. The Cyber 990 utilized the NOS operating system with a 60-bit word size. The CPU memory size was limited to 131,100 words of directly addressable memory, with an extended 250,000 words available. The Apollo workstation environment at ABB consists of HP/Apollo 9000/400 series desktop units used by most application engineers, networked with HP/Apollo DN10000 platforms that use a 32-bit word size and function as the computer servers and network administrative CPUs, providing a virtual memory system.
White, Corey N.; Kapucu, Aycan; Bruno, Davide; Rotello, Caren M.; Ratcliff, Roger
2014-01-01
Recognition memory studies often find that emotional items are more likely than neutral items to be labeled as studied. Previous work suggests this bias is driven by increased memory strength/familiarity for emotional items. We explored strength and bias interpretations of this effect with the conjecture that emotional stimuli might seem more familiar because they share features with studied items from the same category. Categorical effects were manipulated in a recognition task by presenting lists with a small, medium, or large proportion of emotional words. The liberal memory bias for emotional words was only observed when a medium or large proportion of categorized words were presented in the lists. Similar, though weaker, effects were observed with categorized words that were not emotional (animal names). These results suggest that liberal memory bias for emotional items may be largely driven by effects of category membership. PMID:24303902
An FPGA Architecture for Extracting Real-Time Zernike Coefficients from Measured Phase Gradients
NASA Astrophysics Data System (ADS)
Moser, Steven; Lee, Peter; Podoleanu, Adrian
2015-04-01
Zernike modes are commonly used in adaptive optics systems to represent optical wavefronts. However, real-time calculation of Zernike modes is time consuming due to two factors: the large factorial components in the radial polynomials used to define them and the large inverse matrix calculation needed for the linear fit. This paper presents an efficient parallel method for calculating Zernike coefficients from phase gradients produced by a Shack-Hartmann sensor, and its real-time implementation on an FPGA by pre-calculation and storage of subsections of the large inverse matrix. The architecture exploits symmetries within the Zernike modes to achieve a significant reduction in memory requirements and a speed-up of 2.9 when compared to published results utilising a 2D-FFT method for a grid size of 8×8. Analysis of processor-element internal word-length requirements shows that 24-bit precision in precalculated values of the Zernike mode partial derivatives ensures less than 0.5% error per Zernike coefficient and an overall error of <1%. The design has been synthesized on a Xilinx Spartan-6 XC6SLX45 FPGA. The resource utilisation on this device is <3% of slice registers, <15% of slice LUTs, and approximately 48% of available DSP blocks, independent of the Shack-Hartmann grid size. Block RAM usage is <16% for Shack-Hartmann grid sizes up to 32×32.
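The linear-fit stage that the FPGA accelerates can be sketched numerically, with NumPy standing in for the hardware datapath. Once the pseudo-inverse of the Zernike-gradient matrix is precomputed and stored, the per-frame work reduces to one matrix-vector product, which is the pre-calculation the abstract exploits. The tiny tip/tilt/defocus basis and the 2×2 lenslet grid below are illustrative assumptions, not the paper's configuration.

```python
# Sketch of Zernike-coefficient fitting from Shack-Hartmann gradients
# via a precomputed pseudo-inverse (illustrative toy basis).
import numpy as np

# Lenslet sample points of a hypothetical 2x2 Shack-Hartmann grid:
xs, ys = np.meshgrid([-0.5, 0.5], [-0.5, 0.5])
x, y = xs.ravel(), ys.ravel()

# Partial derivatives of three low-order modes at each lenslet:
# tip (x): d/dx = 1, d/dy = 0; tilt (y): d/dx = 0, d/dy = 1;
# defocus (2x^2 + 2y^2 - 1): d/dx = 4x, d/dy = 4y.
A = np.column_stack([
    np.concatenate([np.ones_like(x), np.zeros_like(y)]),  # tip
    np.concatenate([np.zeros_like(x), np.ones_like(y)]),  # tilt
    np.concatenate([4 * x, 4 * y]),                       # defocus
])

A_pinv = np.linalg.pinv(A)  # precomputed once and stored

def zernike_coeffs(grad_x, grad_y):
    """Per-frame cost: a single matrix-vector product."""
    return A_pinv @ np.concatenate([grad_x, grad_y])

# Synthesize gradients from known coefficients and recover them:
true_c = np.array([0.3, -0.2, 0.1])
g = A @ true_c
c = zernike_coeffs(g[:4], g[4:])
```

Storing `A_pinv` (or subsections of it, as on the FPGA) trades memory for speed: the factorial evaluation of the radial polynomials and the matrix inversion both move offline.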
Vocabulary and Working Memory in Children Fit with Hearing Aids
ERIC Educational Resources Information Center
Stiles, Derek J.; McGregor, Karla K.; Bentler, Ruth A.
2012-01-01
Purpose: To determine whether children with mild-to-moderately severe sensorineural hearing loss (CHL) present with disturbances in working memory and whether these disturbances relate to the size of their receptive vocabularies. Method: Children 6 to 9 years of age participated. Aspects of working memory were tapped by articulation rate, forward…
Short-Term Memory in Orthogonal Neural Networks
NASA Astrophysics Data System (ADS)
White, Olivia L.; Lee, Daniel D.; Sompolinsky, Haim
2004-04-01
We study the ability of linear recurrent networks obeying discrete time dynamics to store long temporal sequences that are retrievable from the instantaneous state of the network. We calculate this temporal memory capacity for both distributed shift register and random orthogonal connectivity matrices. We show that the memory capacity of these networks scales with system size.
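The storage-and-retrieval setup analyzed in this abstract can be demonstrated numerically. The sketch below uses the shift-register connectivity, one of the two orthogonal matrix classes the abstract studies; the network size, input sequence, and function name are illustrative. The state evolves as x_{t+1} = W x_t + v u_t, and an input presented k steps ago is read out by projecting the instantaneous state onto W^k v.

```python
# Sketch of temporal memory in a linear recurrent network with an
# orthogonal (shift-register) connectivity matrix (illustrative).
import numpy as np

N = 8
W = np.roll(np.eye(N), 1, axis=0)  # orthogonal shift-register matrix
v = np.zeros(N)
v[0] = 1.0                         # input feeds the first unit

inputs = [0.7, -1.2, 0.4, 2.0, -0.3]
x = np.zeros(N)
for u in inputs:
    x = W @ x + v * u  # discrete-time linear dynamics

def recall(x, k):
    """Read out the input presented k steps ago (k = 0 is the latest)."""
    return (np.linalg.matrix_power(W, k) @ v) @ x

recalled = [recall(x, k) for k in range(len(inputs))]
# recalled reproduces the inputs in reverse (most recent first).
```

Here recovery is exact because the vectors W^k v are orthonormal for a shift register and the sequence length (5) is below the system size (N = 8), consistent with the abstract's point that memory capacity scales with system size.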
Meta-Analysis of Explicit Memory Studies in Populations with Intellectual Disability
ERIC Educational Resources Information Center
Lifshitz, Hefziba; Shtein, Sarit; Weiss, Izhak; Vakil, Eli
2011-01-01
This meta-analysis combines the effect size (ES) of 40 explicit memory experiments in populations with intellectual disability (ID). Eight meta-analyses were performed, as well as contrast tests between ES. The explicit memory of participants with ID was inferior to that of participants with typical development (TD). Relatively preserved explicit…
ERIC Educational Resources Information Center
Waring, Rebecca; Eadie, Patricia; Liow, Susan Rickard; Dodd, Barbara
2017-01-01
While little is known about why children make speech errors, it has been hypothesized that cognitive-linguistic factors may underlie phonological speech sound disorders. This study compared the phonological short-term and phonological working memory abilities (using immediate memory tasks) and receptive vocabulary size of 14 monolingual preschool…
Reading disabilities in children: A selective meta-analysis of the cognitive literature.
Kudo, Milagros F; Lussier, Cathy M; Swanson, H Lee
2015-05-01
This article synthesizes literature that compares the academic, cognitive, and behavioral performance of children with and without reading disabilities (RD). Forty-eight studies met the criteria for the meta-analysis, yielding 735 effect sizes (ESs) with an overall weighted ES of 0.98. Small to high ESs in favor of children without RD emerged on measures of cognition (rapid naming [ES = 0.89], phonological awareness [ES = 1.00], verbal working memory [ES = 0.79], short-term memory [ES = 0.56], visual-spatial memory [ES = 0.48], and executive processing [ES = 0.67]), academic achievement (pseudoword reading [ES = 1.85], math [ES = 1.20], vocabulary [ES = 0.83], spelling [ES = 1.25], and writing [ES = 1.20]), and behavior skills (ES = 0.80). Hierarchical linear modeling indicated that specific cognitive process measures (verbal working memory, visual-spatial memory, executive processing, and short-term memory) and intelligence measures (general and verbal intelligence) significantly moderated overall group effect size differences. Overall, the results supported the assumption that cognitive deficits in children with RD are persistent. Copyright © 2015. Published by Elsevier Ltd.
Meta-analysis of the association between rumination and reduced autobiographical memory specificity.
Chiu, Connie P Y; Griffith, James W; Lenaert, Bert; Raes, Filip; Hermans, Dirk; Barry, Tom J
2018-05-16
The CaRFAX model, proposed by Williams J. M. G. (2006. Capture and rumination, functional avoidance, and executive control (CaRFAX): Three processes that underlie overgeneral memory. Cognition and Emotion, 20, 548-568. doi: 10.1080/02699930500450465 ; Williams, J. M. G., Barnhofer, T., Crane, C., Herman, D., Raes, F., Watkins, E., & Dalgleish, T. (2007). Autobiographical memory specificity and emotional disorder. Psychological Bulletin, 133(1), 122-148. doi: 10.1037/0033-2909.133.1.122 ) posits that reduced autobiographical memory specificity, a key factor associated with the emergence and maintenance of emotional disorders, may result from heightened rumination. We provide the first meta-analysis of the relation between autobiographical memory specificity and trait rumination. PsycINFO, PsycARTICLES and MEDLINE databases were searched and the following were extracted: the correlation between the number of specific memories recalled in the Autobiographical Memory Test and self-reported trait rumination scores, and its sub-factors - brooding and reflection. The pooled effect size for the correlation between memory specificity and trait rumination was small (d = -.05) and did not differ significantly from zero (p = .09). The effect sizes for the correlation with brooding and reflection were not significantly different from zero. There is limited support for the association between trait rumination and memory specificity suggested in CaRFAX.
Automated quantitative muscle biopsy analysis system
NASA Technical Reports Server (NTRS)
Castleman, Kenneth R. (Inventor)
1980-01-01
An automated system to aid the diagnosis of neuromuscular diseases by producing fiber size histograms utilizing histochemically stained muscle biopsy tissue. Televised images of the microscopic fibers are processed electronically by a multi-microprocessor computer, which isolates, measures, and classifies the fibers and displays the fiber size distribution. The architecture of the multi-microprocessor computer, which is iterated to any required degree of complexity, features a series of individual microprocessors P_n, each receiving data from a shared memory M_(n-1) and outputting processed data to a separate shared memory M_(n+1) under control of a program stored in dedicated memory M_n.
Animacy and real-world size shape object representations in the human medial temporal lobes.
Blumenthal, Anna; Stojanoski, Bobby; Martin, Chris B; Cusack, Rhodri; Köhler, Stefan
2018-06-26
Identifying what an object is, and whether an object has been encountered before, is a crucial aspect of human behavior. Despite this importance, we do not yet have a complete understanding of the neural basis of these abilities. Investigations into the neural organization of human object representations have revealed category specific organization in the ventral visual stream in perceptual tasks. Interestingly, these categories fall within broader domains of organization, with reported distinctions between animate, inanimate large, and inanimate small objects. While there is some evidence for category specific effects in the medial temporal lobe (MTL), in particular in perirhinal and parahippocampal cortex, it is currently unclear whether domain level organization is also present across these structures. To this end, we used fMRI with a continuous recognition memory task. Stimuli were images of objects from several different categories, which were either animate or inanimate, or large or small within the inanimate domain. We employed representational similarity analysis (RSA) to test the hypothesis that object-evoked responses in MTL structures during recognition-memory judgments also show evidence for domain-level organization along both dimensions. Our data support this hypothesis. Specifically, object representations were shaped by either animacy, real-world size, or both, in perirhinal and parahippocampal cortex, and the hippocampus. While sensitivity to these dimensions differed across structures when probed individually, hinting at interesting links to functional differentiation, similarities in organization across MTL structures were more prominent overall. These results argue for continuity in the organization of object representations in the ventral visual stream and the MTL. © 2018 Wiley Periodicals, Inc.
GPU-accelerated iterative reconstruction for limited-data tomography in CBCT systems.
de Molina, Claudia; Serrano, Estefania; Garcia-Blas, Javier; Carretero, Jesus; Desco, Manuel; Abella, Monica
2018-05-15
Standard cone-beam computed tomography (CBCT) involves the acquisition of at least 360 projections rotating through 360 degrees. Nevertheless, there are cases in which only a few projections can be taken in a limited angular span, such as during surgery, where rotation of the source-detector pair is limited to less than 180 degrees. Reconstruction of limited data with the conventional method proposed by Feldkamp, Davis and Kress (FDK) results in severe artifacts. Iterative methods may compensate for the lack of data by including additional prior information, although they imply a high computational burden and memory consumption. We present an accelerated implementation of an iterative method for CBCT following the Split Bregman formulation, which reduces computational time through GPU-accelerated kernels. The implementation enables the reconstruction of large volumes (>1024³ pixels) using partitioning strategies in forward- and back-projection operations. We evaluated the algorithm on small-animal data for different scenarios with different numbers of projections, angular span, and projection size. Reconstruction time varied linearly with the number of projections and quadratically with projection size but remained almost unchanged with angular span. Forward- and back-projection operations represent 60% of the total computational burden. Efficient implementation using parallel processing and large-memory management strategies together with GPU kernels enables the use of advanced reconstruction approaches which are needed in limited-data scenarios. Our GPU implementation showed a significant time reduction (up to 48×) compared to a CPU-only implementation, reducing the total reconstruction time from several hours to a few minutes.
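The iterative-reconstruction principle the abstract contrasts with FDK can be illustrated with a classical Kaczmarz/ART sweep on a toy limited-data system: repeatedly project the image estimate onto the hyperplane defined by each measured ray sum. This is a minimal sketch of the iterative idea only, not the paper's Split Bregman method or its GPU kernels:

```python
import numpy as np

def kaczmarz(A, b, iters=500):
    """Algebraic reconstruction (Kaczmarz sweeps): project the current
    estimate onto each row's constraint hyperplane in turn. Illustrative
    only; the paper uses a Split Bregman formulation, not ART."""
    x = np.zeros(A.shape[1])
    row_norms = (A * A).sum(axis=1)
    for _ in range(iters):
        for i in range(A.shape[0]):
            if row_norms[i] > 0:
                x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

# Tiny 2x2 'image' measured by row, column and one diagonal ray sums,
# a toy stand-in for a limited-data acquisition.
truth = np.array([1.0, 2.0, 3.0, 4.0])      # pixels [a, b; c, d]
A = np.array([[1, 1, 0, 0],                 # row sums
              [0, 0, 1, 1],
              [1, 0, 1, 0],                 # column sums
              [0, 1, 0, 1],
              [1, 0, 0, 1]], dtype=float)   # one diagonal
b = A @ truth
recon = kaczmarz(A, b)
```

Because this toy system happens to have full column rank, the sweeps converge to the true pixel values; in genuinely underdetermined settings, the prior terms the paper adds are what select a plausible solution.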
Pakhomov, Serguei VS; Eberly, Lynn; Knopman, David
2016-01-01
A computational approach for estimating several indices of performance on the animal category verbal fluency task was validated and examined in a large longitudinal study of aging. The performance indices included the traditional verbal fluency score, size of semantic clusters, density of repeated words, as well as measures of semantic and lexical diversity. Change over time in these measures was modeled using mixed effects regression in several groups of participants, including those who remained cognitively normal throughout the study (CN) and those who were diagnosed with mild cognitive impairment (MCI) or Alzheimer’s disease (AD) dementia at some point subsequent to the baseline visit. The results of the study show that, with the exception of mean cluster size, the indices showed significantly greater declines in the MCI and AD dementia groups as compared to CN participants. Examination of associations between the indices and cognitive domains of memory, attention and visuospatial functioning showed that the traditional verbal fluency scores were associated with declines in all three domains, whereas semantic and lexical diversity measures were associated with declines only in the visuospatial domain. Baseline repetition density was associated with declines in memory and visuospatial domains. Examination of lexical and semantic diversity measures in subgroups with high vs. low attention scores (but normal functioning in other domains) showed that the performance of individuals with low attention was influenced more by word frequency than by the strength of semantic relatedness between words. These findings suggest that automatically computed semantic indices may be used to examine various aspects of cognitive performance affected by dementia. PMID:27245645
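Two of the simpler indices (the traditional fluency score and repetition density) can be sketched directly from a response list. This is a hypothetical simplification; the published measures also draw on timing and semantic-relatedness data:

```python
def fluency_indices(words):
    """Toy versions of two indices: the traditional fluency score
    (count of unique responses) and repetition density (fraction of
    responses that repeat an earlier word)."""
    seen, repeats = set(), 0
    for w in (w.lower() for w in words):
        if w in seen:
            repeats += 1
        else:
            seen.add(w)
    return {"fluency_score": len(seen),
            "repetition_density": repeats / len(words) if words else 0.0}

indices = fluency_indices(["cat", "dog", "horse", "cat", "cow", "dog"])
```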
Strategies for reducing large fMRI data sets for independent component analysis.
Wang, Ze; Wang, Jiongjiong; Calhoun, Vince; Rao, Hengyi; Detre, John A; Childress, Anna R
2006-06-01
In independent component analysis (ICA), principal component analysis (PCA) is generally used to reduce the raw data to a few principal components (PCs) through eigenvector decomposition (EVD) on the data covariance matrix. Although this works for spatial ICA (sICA) on moderately sized fMRI data, it is intractable for temporal ICA (tICA), since typical fMRI data have a high spatial dimension, resulting in an unmanageable data covariance matrix. To solve this problem, two practical data reduction methods are presented in this paper. The first solution is to calculate the PCs of tICA from the PCs of sICA. This approach works well for moderately sized fMRI data; however, it is highly computationally intensive, even intractable, when the number of scans increases. The second solution proposed is to perform PCA decomposition via a cascade recursive least squared (CRLS) network, which provides a uniform data reduction solution for both sICA and tICA. Without the need to calculate the covariance matrix, CRLS extracts PCs directly from the raw data, and the PC extraction can be terminated after computing an arbitrary number of PCs without the need to estimate the whole set of PCs. Moreover, when the whole data set becomes too large to be loaded into the machine memory, CRLS-PCA can save data retrieval time by reading the data once, while the conventional PCA requires numerous data retrieval steps for both covariance matrix calculation and PC extractions. Real fMRI data were used to evaluate the PC extraction precision, computational expense, and memory usage of the presented methods.
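The key idea of covariance-free PC extraction can be illustrated with Oja's rule, a much simpler stand-in for the CRLS network described above: the leading component is estimated from streaming rows of the data without ever forming the (possibly unmanageable) covariance matrix. A sketch under that simplification, not the paper's CRLS-PCA:

```python
import numpy as np

def oja_first_pc(X, lr=1e-3, epochs=5, seed=0):
    """Estimate the first principal component from streaming rows of X
    via Oja's rule with explicit renormalization; no covariance matrix
    is ever constructed."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for x in X:
            y = w @ x                       # projection onto current estimate
            w += lr * y * (x - y * w)       # Hebbian update with decay
            w /= np.linalg.norm(w)
    return w

rng = np.random.default_rng(1)
# 2000 samples whose variance is 100x larger along the first axis.
X = rng.normal(size=(2000, 2)) * np.array([10.0, 1.0])
w = oja_first_pc(X)
```

After a few passes, `w` aligns with the dominant axis, and the extraction can stop after any desired number of components, which is the property the abstract highlights for CRLS.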
Power/Performance Trade-offs of Small Batched LU Based Solvers on GPUs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Villa, Oreste; Fatica, Massimiliano; Gawande, Nitin A.
In this paper we propose and analyze a set of batched linear solvers for small matrices on Graphic Processing Units (GPUs), evaluating the various alternatives depending on the size of the systems to solve. We discuss three different solutions that operate with different levels of parallelization and GPU features. The first, exploiting the CUBLAS library, manages matrices of size up to 32x32 and employs Warp-level (one matrix, one Warp) parallelism and shared memory. The second works at Thread-block-level parallelism (one matrix, one Thread-block), still exploiting shared memory but managing matrices up to 76x76. The third is Thread-level parallel (one matrix, one thread) and can reach sizes up to 128x128, but it does not exploit shared memory and only relies on the high memory bandwidth of the GPU. The first and second solutions only support partial pivoting; the third easily supports partial and full pivoting, making it attractive for problems that require greater numerical stability. We analyze the trade-offs in terms of performance and power consumption as a function of the size of the linear systems that are simultaneously solved. We execute the three implementations on a Tesla M2090 (Fermi) and on a Tesla K20 (Kepler).
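The batched "one small matrix per work unit" pattern can be sketched serially: an in-place LU factorization with partial pivoting applied independently to each matrix in the batch. This NumPy sketch is the reference computation each GPU work unit performs, not any of the CUDA implementations compared in the paper:

```python
import numpy as np

def batched_lu_solve(A, b):
    """Solve A[i] x[i] = b[i] for a batch of small systems via LU with
    partial pivoting; each loop iteration corresponds to one GPU work
    unit (Warp, Thread-block, or thread) in the paper's three designs."""
    batch, n, _ = A.shape
    x = np.empty((batch, n))
    for i in range(batch):
        LU = A[i].astype(float).copy()
        rhs = b[i].astype(float).copy()
        for k in range(n - 1):              # in-place factorization
            p = k + np.argmax(np.abs(LU[k:, k]))
            if p != k:                      # partial pivoting row swap
                LU[[k, p]] = LU[[p, k]]
                rhs[[k, p]] = rhs[[p, k]]
            LU[k+1:, k] /= LU[k, k]
            LU[k+1:, k+1:] -= np.outer(LU[k+1:, k], LU[k, k+1:])
        for k in range(1, n):               # forward substitution (unit L)
            rhs[k] -= LU[k, :k] @ rhs[:k]
        for k in range(n - 1, -1, -1):      # back substitution
            rhs[k] = (rhs[k] - LU[k, k+1:] @ rhs[k+1:]) / LU[k, k]
        x[i] = rhs
    return x

rng = np.random.default_rng(2)
A = rng.normal(size=(4, 6, 6)) + 6.0 * np.eye(6)   # well-conditioned batch
b = rng.normal(size=(4, 6))
x = batched_lu_solve(A, b)
```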
Enabling MPEG-2 video playback in embedded systems through improved data cache efficiency
NASA Astrophysics Data System (ADS)
Soderquist, Peter; Leeser, Miriam E.
1999-01-01
Digital video decoding, enabled by the MPEG-2 Video standard, is an important future application for embedded systems, particularly PDAs and other information appliances. Many such systems require portability and wireless communication capabilities, and thus face severe limitations in size and power consumption. This places a premium on integration and efficiency, and favors software solutions for video functionality over specialized hardware. The processors in most embedded systems currently lack the computational power needed to perform video decoding, but a related and equally important problem is the required data bandwidth, and the need to cost-effectively ensure adequate data supply. MPEG data sets are very large, and generate significant amounts of excess memory traffic for standard data caches, up to 100 times the amount required for decoding. Meanwhile, cost and power limitations restrict cache sizes in embedded systems. Some systems, including many media processors, eliminate caches in favor of memories under direct, painstaking software control in the manner of digital signal processors. Yet MPEG data has locality which caches can exploit if properly optimized, providing fast, flexible, and automatic data supply. We propose a set of enhancements which target the specific needs of the heterogeneous types within the MPEG decoder working set. These optimizations significantly improve the efficiency of small caches, reducing cache-memory traffic by almost 70 percent, and can make an enhanced 4 KB cache perform better than a standard 1 MB cache. This performance improvement can enable high-resolution, full frame rate video playback in cheaper, smaller systems than would otherwise be possible.
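The cache-traffic question can be made concrete with a toy direct-mapped cache model: a streaming access pattern misses once per cache line regardless of cache size, while a working set that fits in the cache is fetched from memory only once across repeated passes. The parameters below are illustrative, not the paper's configurations:

```python
def cache_misses(addresses, cache_lines, line_size=16):
    """Count misses for a direct-mapped cache over a byte-address trace."""
    tags = [None] * cache_lines
    misses = 0
    for a in addresses:
        block = a // line_size          # memory block containing the byte
        idx = block % cache_lines       # direct-mapped placement
        if tags[idx] != block:
            tags[idx] = block           # fill on miss (evicts old block)
            misses += 1
    return misses

# Sequential streaming access (MPEG-like): every new line misses once.
m_seq = cache_misses(list(range(0, 4096, 4)), cache_lines=64)

# A working set that fits in the cache is fetched only once across passes.
m_reuse = cache_misses(list(range(0, 1024, 4)) * 2, cache_lines=64)
```

Here the 4096-byte stream touches 256 lines and misses 256 times, whereas the repeated 1024-byte working set misses only on its first pass (64 misses), the kind of locality the proposed cache enhancements aim to exploit.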
The cognitive profile of myotonic dystrophy type 1: A systematic review and meta-analysis.
Okkersen, Kees; Buskes, Melanie; Groenewoud, Johannes; Kessels, Roy P C; Knoop, Hans; van Engelen, Baziel; Raaphorst, Joost
2017-10-01
To examine the cognitive profile of patients with myotonic dystrophy type 1 (DM1) on the basis of a systematic review and meta-analysis of the literature. Embase, Medline and PsycInfo were searched for studies reporting ≥1 neuropsychological test in both DM1 patients and healthy controls. Search, data extraction and risk of bias analysis were independently performed by two authors to minimize error. Neuropsychological tests were categorized into 12 cognitive domains and effect sizes (Hedges' g) were calculated for each domain and for tests administered in ≥5 studies. DM1 participants demonstrated a significantly worse performance compared to controls in all cognitive domains. Effect sizes ranged from -.33 (small) for verbal memory to -1.01 (large) for visuospatial perception. Except for the domains global cognition, intelligence and social cognition, wide confidence intervals (CIs) were associated with moderate to marked statistical heterogeneity that necessitates careful interpretation of results. Out of the individual tests, the Rey-Osterrieth complex figure-copy (both non-verbal memory and visuoconstruction) showed consistent impairment with acceptable heterogeneity. In DM1 patients, cognitive deficits may include a variable combination of global cognitive impairment with involvement across different domains, including social cognition, memory and visuospatial functioning. Although DM1 is a heterogeneous disorder, our study shows that meta-analysis is feasible, contributes to the understanding of brain involvement and may direct bedside testing. The protocol for this study has been registered in PROSPERO (International prospective register of systematic reviews) under ID: 42016037415. Copyright © 2017 Elsevier Ltd. All rights reserved.
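The domain effect sizes reported here are Hedges' g, i.e. Cohen's d computed from the pooled SD and multiplied by a small-sample correction factor. A minimal sketch of that computation, with illustrative numbers rather than data from the review:

```python
import math

def hedges_g(mean1, sd1, n1, mean2, sd2, n2):
    """Hedges' g: Cohen's d from the pooled SD, times the small-sample
    correction J ~= 1 - 3/(4*df - 1)."""
    df = n1 + n2 - 2
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / df)
    d = (mean1 - mean2) / pooled_sd
    return d * (1 - 3 / (4 * df - 1))

# Hypothetical example: patients score 9.0 (SD 2.0, n=20) vs controls
# 11.0 (SD 2.0, n=20); d = -1.0 before correction.
g = hedges_g(9.0, 2.0, 20, 11.0, 2.0, 20)
```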
Ahern, Elayne; Semkovska, Maria
2017-01-01
Cognitive deficits are frequently observed in major depression. Yet, when these deficits emerge and how they relate to the depressed state is unclear. The aim of this 2-part systematic review and meta-analysis is to determine the pattern and extent of cognitive deficits during a first-episode of depression (FED) and their persistence following FED remission. Published, peer-reviewed articles on cognitive function in FED patients through October 2015 were searched. Meta-analyses with random-effects modeling were conducted. Part 1 assessed weighted, mean effect sizes of cognitive function in FED patients relative to healthy controls. Moderator analyses of clinical and demographical variables effects were conducted. Part 2 assessed weighted, mean effect sizes of change in cognitive function at remission compared with acute FED performance in longitudinal studies. Thirty-one studies including 994 FED patients were retained in Part 1. Relative to healthy controls, small to large impairments were observed across most cognitive domains. Remission was associated with a normalization of function in processing speed, learning and memory, autobiographical memory, shifting, and IQ. Lower FED age was associated with higher IQ, but more impairment in word-list delayed memory. Four studies including 92 FED patients were retained in Part 2. Following remission, FED patients showed small improvements in processing speed and shifting but persistent impairment in inhibition and verbal fluency. Significant cognitive deficits are already identifiable during a FED, with some functions showing persistent impairment upon remission. Clinicians must consider cognitive impairment alongside mood symptoms to ensure functional recovery from the FED. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Saklani, Reetu; Jaggi, Amteshwar; Singh, Nirmal
2010-07-01
We tested the neuroprotective effect of milrinone, a phosphodiesterase III inhibitor, in pharmacological preconditioning. Bilateral carotid artery occlusion for 12 min followed by reperfusion for 24 h produced ischemia-reperfusion (I/R) cerebral injury in male Swiss albino mice. Cerebral infarct size was measured using triphenyltetrazolium chloride staining. Memory was assessed using the Morris water maze test, and motor coordination was evaluated using the inclined beam walking test, rota-rod test, and lateral push test. Milrinone (50 microg/kg & 100 microg/kg i.v.) was administered 24 h before surgery in a separate group of animals to induce pharmacological preconditioning. I/R increased cerebral infarct size and impaired memory and motor coordination. Milrinone treatment significantly decreased cerebral infarct size and reversed I/R-induced impairments in memory and motor coordination. This neuroprotective effect was blocked by ruthenium red (3 mg/kg, s.c.), an intracellular ryanodine receptor blocker. These findings indicate that milrinone preconditioning exerts a marked neuroprotective effect on the ischemic brain, putatively due to increased intracellular calcium levels activating calcium-sensitive signal transduction cascades.
Prakash, Amit; Maikap, Siddheswar; Banerjee, Writam; Jana, Debanjan; Lai, Chao-Sung
2013-09-06
Improved switching characteristics were obtained from high-κ oxides AlOx, GdOx, HfOx, and TaOx in IrOx/high-κx/W structures because of a layer that formed at the IrOx/high-κx interface under external positive bias. The surface roughness and morphology of the bottom electrode in these devices were observed by atomic force microscopy. Device size was investigated using high-resolution transmission electron microscopy. More than 100 repeatable consecutive switching cycles were observed for positive-formatted memory devices, compared with only five unstable cycles for negative-formatted devices, because the positive-formatted devices contained an electrically formed interfacial layer that controlled 'SET/RESET' current overshoot. This phenomenon was independent of the switching material in the device. The electrically formed oxygen-rich interfacial layer at the IrOx/high-κx interface improved switching in both via-hole and cross-point structures. The switching mechanism was attributed to filamentary conduction and oxygen ion migration. Using the positive-formatted design approach, cross-point memory in an IrOx/AlOx/W structure was fabricated. This cross-point memory exhibited forming-free, uniform switching for >1,000 consecutive dc cycles with a small voltage/current operation of ±2 V/200 μA and high yield of >95% switchable with a large resistance ratio of >100. These properties make this cross-point memory particularly promising for high-density applications. Furthermore, this memory device also showed multilevel capability with a switching current as low as 10 μA and a RESET current of 137 μA, good pulse read endurance of each level (>10⁵ cycles), and data retention of >10⁴ s at a low current compliance of 50 μA at 85°C. Our improvement of the switching characteristics of this resistive memory device will aid in the design of memory stacks for practical applications.
Working memory deficit in patients with restless legs syndrome: an event-related potential study.
Kim, Sung Min; Choi, Jeong Woo; Lee, Chany; Lee, Byeong Uk; Koo, Yong Seo; Kim, Kyung Hwan; Jung, Ki-Young
2014-07-01
The aim of this study was to investigate whether there is a working memory (WM) deficit in restless legs syndrome (RLS) patients, using event-related potentials (ERPs) recorded during the Sternberg WM task. Thirteen drug-naive RLS patients and 13 healthy age-matched controls with no sleep disturbances participated in the present study. The P300 ERP was recorded during the Sternberg WM task using digits as mnemonic items. P300 amplitudes and reaction times were compared between groups (RLS vs. control) considering brain regions (frontal, central, and parietal) and memory load sizes (two, three, and four) as within-subject factors. Clinical and sleep-related variables were correlated with P300 amplitude. The reaction time in RLS patients was significantly longer than that of controls over all memory load sizes. The P300 amplitude at parietal regions in RLS patients was significantly lower than in controls regardless of memory load size, and was significantly negatively correlated with the duration of RLS history in RLS patients. Our study suggests that patients with severe RLS have WM deficits. Furthermore, the negative correlation of P300 amplitudes with the duration of RLS illness suggests that cerebral cortical dysfunction in RLS patients results from repeated RLS symptom attacks. Copyright © 2014 Elsevier B.V. All rights reserved.
Ising formulation of associative memory models and quantum annealing recall
NASA Astrophysics Data System (ADS)
Santra, Siddhartha; Shehab, Omar; Balu, Radhakrishnan
2017-12-01
Associative memory models, in theoretical neuro- and computer sciences, can generally store at most a linear number of memories. Recalling memories in these models can be understood as retrieval of the energy-minimizing configuration of classical Ising spins, closest in Hamming distance to an imperfect input memory, where the energy landscape is determined by the set of stored memories. We present an Ising formulation for associative memory models and consider the problem of memory recall using quantum annealing. We show that allowing for input-dependent energy landscapes allows storage of up to an exponential number of memories (in terms of the number of neurons). Further, we show how quantum annealing may naturally be used for recall tasks in such input-dependent energy landscapes, although the recall time may increase with the number of stored memories. Theoretically, we obtain the radius of attractor basins R(N) and the capacity C(N) of such a scheme and their tradeoffs. Our calculations establish that for randomly chosen memories the capacity of our model using the Hebbian learning rule can be expressed as a function of problem size as C(N) = O(e^(C1 N)), C1 ≥ 0, and the scheme succeeds on randomly chosen memory sets with probability (1 - e^(-C2 N)), C2 ≥ 0, with C1 + C2 = (0.5 - f)^2/(1 - f), where f = R(N)/N, 0 ≤ f ≤ 0.5, is the radius of attraction, in terms of the Hamming distance of an input probe from a stored memory, as a fraction of the problem size. We demonstrate the application of this scheme on a programmable quantum annealing device, the D-wave processor.
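The classical baseline this paper builds on, Hebbian storage with energy-descent recall on Ising spins, can be sketched directly. The input-dependent landscapes and quantum annealing are beyond this toy; the following is only the standard (linear-capacity) Hopfield recall the abstract contrasts against:

```python
import numpy as np

def hebbian_weights(patterns):
    """Standard Hebbian storage for a classical associative memory:
    W = (1/N) * sum over patterns of outer products, zero diagonal."""
    N = patterns.shape[1]
    W = (patterns.T @ patterns) / N
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, probe, max_iters=20):
    """Energy-descent recall: iterate s <- sign(W s) to a fixed point,
    i.e. retrieve the nearby energy-minimizing Ising configuration."""
    s = probe.copy()
    for _ in range(max_iters):
        s_new = np.where(W @ s >= 0, 1, -1)
        if np.array_equal(s_new, s):
            break
        s = s_new
    return s

rng = np.random.default_rng(3)
patterns = rng.choice([-1, 1], size=(3, 100))   # 3 memories, 100 neurons
W = hebbian_weights(patterns)
probe = patterns[0].copy()
probe[:10] *= -1                                # corrupt 10 of 100 bits
recovered = recall(W, probe)
```

With the memory load far below the Hebbian capacity, the corrupted probe falls inside the attractor basin and the stored pattern is retrieved exactly.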
A socio-hydrologic model of coupled water-agriculture dynamics with emphasis on farm size.
NASA Astrophysics Data System (ADS)
Brugger, D. R.; Maneta, M. P.
2015-12-01
Agricultural land cover dynamics in the U.S. are dominated by two trends: 1) total agricultural land is decreasing and 2) average farm size is increasing. These trends have important implications for the future of water resources because 1) growing more food on less land is due in large part to increased groundwater withdrawal and 2) larger farms can better afford both more efficient irrigation and more groundwater access. However, these large-scale trends are due to individual farm operators responding to many factors including climate, economics, and policy. It is therefore difficult to incorporate the trends into watershed-scale hydrologic models. Traditional scenario-based approaches are valuable for many applications, but there is typically no feedback between the hydrologic model and the agricultural dynamics, and so limited insight is gained into how agriculture co-evolves with water resources. We present a socio-hydrologic model that couples simplified hydrologic and agricultural economic dynamics, accounting for many factors that depend on farm size such as irrigation efficiency and returns to scale. We introduce an "economic memory" (EM) state variable that is driven by agricultural revenue and affects whether farms are sold when land market values exceed expected returns from agriculture. The model uses a Generalized Mixture Model of Gaussians to approximate the distribution of farm sizes in a study area, effectively lumping farms into "small," "medium," and "large" groups that have independent parameterizations. We apply the model in a semi-arid watershed in the upper Columbia River Basin, calibrating to data on streamflow, total agricultural land cover, and farm size distribution. The model is used to investigate the sensitivity of the coupled system to various hydrologic and economic scenarios such as increasing market value of land, reduced surface water availability, and increased irrigation efficiency in small farms.
Trace: a high-throughput tomographic reconstruction engine for large-scale datasets.
Bicer, Tekin; Gürsoy, Doğa; Andrade, Vincent De; Kettimuthu, Rajkumar; Scullin, William; Carlo, Francesco De; Foster, Ian T
2017-01-01
Modern synchrotron light sources and detectors produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used imaging techniques that generates data at tens of gigabytes per second is computed tomography (CT). Although CT experiments result in rapid data generation, the analysis and reconstruction of the collected data may require hours or even days of computation time with a medium-sized workstation, which hinders the scientific progress that relies on the results of analysis. We present Trace, a data-intensive computing engine that we have developed to enable high-performance implementation of iterative tomographic reconstruction algorithms for parallel computers. Trace provides fine-grained reconstruction of tomography datasets using both (thread-level) shared memory and (process-level) distributed memory parallelization. Trace utilizes a special data structure called replicated reconstruction object to maximize application performance. We also present the optimizations that we apply to the replicated reconstruction objects and evaluate them using tomography datasets collected at the Advanced Photon Source. Our experimental evaluations show that our optimizations and parallelization techniques can provide 158× speedup using 32 compute nodes (384 cores) over a single-core configuration and decrease the end-to-end processing time of a large sinogram (with 4501 × 1 × 22,400 dimensions) from 12.5 h to <5 min per iteration. The proposed tomographic reconstruction engine can efficiently process large-scale tomographic data using many compute nodes and minimize reconstruction times.
Drift in Neural Population Activity Causes Working Memory to Deteriorate Over Time.
Schneegans, Sebastian; Bays, Paul M
2018-05-23
Short-term memories are thought to be maintained in the form of sustained spiking activity in neural populations. Decreases in recall precision observed with increasing number of memorized items can be accounted for by a limit on total spiking activity, resulting in fewer spikes contributing to the representation of each individual item. Longer retention intervals likewise reduce recall precision, but it is unknown what changes in population activity produce this effect. One possibility is that spiking activity becomes attenuated over time, such that the same mechanism accounts for both effects of set size and retention duration. Alternatively, reduced performance may be caused by drift in the encoded value over time, without a decrease in overall spiking activity. Human participants of either sex performed a variable-delay cued recall task with a saccadic response, providing a precise measure of recall latency. Based on a spike integration model of decision making, if the effects of set size and retention duration are both caused by decreased spiking activity, we would predict a fixed relationship between recall precision and response latency across conditions. In contrast, the drift hypothesis predicts no systematic changes in latency with increasing delays. Our results show both an increase in latency with set size, and a decrease in response precision with longer delays within each set size, but no systematic increase in latency for increasing delay durations. These results were quantitatively reproduced by a model based on a limited neural resource in which working memories drift rather than decay with time. SIGNIFICANCE STATEMENT Rapid deterioration over seconds is a defining feature of short-term memory, but what mechanism drives this degradation of internal representations? Here, we extend a successful population coding model of working memory by introducing possible mechanisms of delay effects. 
We show that a decay in neural signal over time predicts that the time required for memory retrieval will increase with delay, whereas a random drift in the stored value predicts no effect of delay on retrieval time. Testing these predictions in a multi-item memory task with an eye movement response, we identified drift as a key mechanism of memory decline. These results provide evidence for a dynamic spiking basis for working memory, in contrast to recent proposals of activity-silent storage. Copyright © 2018 Schneegans and Bays.
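The drift account can be illustrated with a toy simulation: the stored feature value performs a random walk during the retention interval, so recall error grows with delay even though nothing in the representation weakens (and hence nothing predicts slower retrieval). Parameter values below are illustrative, not fitted to the paper's data:

```python
import numpy as np

def recall_error_sd(delay, n_trials=20000, diffusion=0.05, seed=4):
    """Drift model of delay effects: the remembered value diffuses over
    `delay` time steps, so the SD of (recalled - true) value grows as
    sqrt(delay) while signal strength stays constant."""
    rng = np.random.default_rng(seed)
    steps = rng.normal(scale=np.sqrt(diffusion), size=(n_trials, delay))
    errors = steps.sum(axis=1)          # accumulated drift per trial
    return errors.std()

sd_short = recall_error_sd(delay=1)
sd_long = recall_error_sd(delay=9)
```

A ninefold delay inflates the error SD by about a factor of three (sqrt(9)), reproducing the precision loss without invoking any attenuation of spiking activity.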
Skyrmion-skyrmion and skyrmion-edge repulsions in skyrmion-based racetrack memory
NASA Astrophysics Data System (ADS)
Zhang, Xichao; Zhao, G. P.; Fangohr, Hans; Liu, J. Ping; Xia, W. X.; Xia, J.; Morvan, F. J.
2015-01-01
Magnetic skyrmions are promising for building next-generation magnetic memories and spintronic devices due to their stability, small size and the extremely low currents needed to move them. In particular, skyrmion-based racetrack memory is attractive for information technology, where skyrmions are used to store information as data bits instead of traditional domain walls. Here we numerically demonstrate the impacts of skyrmion-skyrmion and skyrmion-edge repulsions on the feasibility of skyrmion-based racetrack memory. The reliable and practicable spacing between consecutive skyrmionic bits on the racetrack as well as the ability to adjust it are investigated. Clogging of skyrmionic bits is found at the end of the racetrack, leading to the reduction of skyrmion size. Further, we demonstrate an effective and simple method to avoid the clogging of skyrmionic bits, which ensures the elimination of skyrmionic bits beyond the reading element. Our results give guidance for the design and development of future skyrmion-based racetrack memory.
Neural Anatomy of Primary Visual Cortex Limits Visual Working Memory.
Bergmann, Johanna; Genç, Erhan; Kohler, Axel; Singer, Wolf; Pearson, Joel
2016-01-01
Despite the immense processing power of the human brain, working memory storage is severely limited, and the neuroanatomical basis of these limitations has remained elusive. Here, we show that the stable storage limits of visual working memory for over 9 s are bound by the precise gray matter volume of primary visual cortex (V1), defined by fMRI retinotopic mapping. Individuals with a bigger V1 tended to have greater visual working memory storage. This relationship was present independently for both surface size and thickness of V1 but absent in V2, V3 and for non-visual working memory measures. Additional whole-brain analyses confirmed the specificity of the relationship to V1. Our findings indicate that the size of primary visual cortex plays a critical role in limiting what we can hold in mind, acting like a gatekeeper in constraining the richness of working mental function. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
GenomicTools: a computational platform for developing high-throughput analytics in genomics.
Tsirigos, Aristotelis; Haiminen, Niina; Bilal, Erhan; Utro, Filippo
2012-01-15
Recent advances in sequencing technology have resulted in the dramatic increase of sequencing data, which, in turn, requires efficient management of computational resources, such as computing time, memory requirements as well as prototyping of computational pipelines. We present GenomicTools, a flexible computational platform, comprising both a command-line set of tools and a C++ API, for the analysis and manipulation of high-throughput sequencing data such as DNA-seq, RNA-seq, ChIP-seq and MethylC-seq. GenomicTools implements a variety of mathematical operations between sets of genomic regions thereby enabling the prototyping of computational pipelines that can address a wide spectrum of tasks ranging from pre-processing and quality control to meta-analyses. Additionally, the GenomicTools platform is designed to analyze large datasets of any size by minimizing memory requirements. In practical applications, where comparable, GenomicTools outperforms existing tools in terms of both time and memory usage. The GenomicTools platform (version 2.0.0) was implemented in C++. The source code, documentation, user manual, example datasets and scripts are available online at http://code.google.com/p/ibm-cbc-genomic-tools.
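The flavor of region operations such a platform exposes can be sketched with a streaming intersection of two coordinate-sorted interval lists, which needs only O(1) working memory beyond the output, in the spirit of the abstract's emphasis on minimizing memory requirements. This is an illustration of the approach, not GenomicTools' actual C++ implementation:

```python
def intersect_regions(a, b):
    """Streaming intersection of two coordinate-sorted region lists
    (half-open [start, end) intervals on one chromosome), linear time
    with constant extra memory."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        start = max(a[i][0], b[j][0])
        end = min(a[i][1], b[j][1])
        if start < end:
            out.append((start, end))
        # advance whichever interval ends first
        if a[i][1] <= b[j][1]:
            i += 1
        else:
            j += 1
    return out

hits = intersect_regions([(100, 200), (300, 400)],
                         [(150, 350), (390, 500)])
```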
Computational efficiency improvements for image colorization
NASA Astrophysics Data System (ADS)
Yu, Chao; Sharma, Gaurav; Aly, Hussein
2013-03-01
We propose an efficient algorithm for colorization of greyscale images. As in prior work, colorization is posed as an optimization problem: a user specifies the color for a few scribbles drawn on the greyscale image and the color image is obtained by propagating color information from the scribbles to surrounding regions, while maximizing the local smoothness of colors. In this formulation, colorization is obtained by solving a large sparse linear system, which normally requires substantial computation and memory resources. Our algorithm improves the computational performance through three innovations over prior colorization implementations. First, the linear system is solved iteratively without explicitly constructing the sparse matrix, which significantly reduces the required memory. Second, we formulate each iteration in terms of integral images obtained by dynamic programming, reducing repetitive computation. Third, we use a coarse-to-fine framework, where a lower-resolution subsampled image is first colorized and this low-resolution color image is upsampled to initialize the colorization process for the fine level. The improvements we develop provide significant speedup and memory savings compared to the conventional approach of solving the linear system directly using off-the-shelf sparse solvers, and allow us to colorize images with typical sizes encountered in realistic applications on typical commodity computing platforms.
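The first innovation, matrix-free iterative solving, can be illustrated with plain Jacobi iteration for scribble propagation: each unconstrained pixel is repeatedly replaced by the average of its neighbours, so the sparse smoothness system is solved without ever being assembled. A simplified sketch with uniform weights (the published formulation additionally weights neighbours by greyscale similarity):

```python
import numpy as np

def propagate_scribbles(fixed, mask, iters=2000):
    """Matrix-free Jacobi iteration on a 2-D grid: unconstrained pixels
    relax toward the mean of their 4-neighbours while scribbled pixels
    (mask=True) stay clamped to their specified values."""
    u = fixed.copy()
    for _ in range(iters):
        p = np.pad(u, 1, mode='edge')       # replicate borders
        avg = (p[:-2, 1:-1] + p[2:, 1:-1] +
               p[1:-1, :-2] + p[1:-1, 2:]) / 4
        u = np.where(mask, fixed, avg)      # keep scribbles fixed
    return u

# 1x5 'image': one color channel fixed at the two ends, interior solved.
fixed = np.array([[0.0, 0.0, 0.0, 0.0, 1.0]])
mask = np.array([[True, False, False, False, True]])
u = propagate_scribbles(fixed, mask)
```

The interior converges to the smoothest interpolant between the two scribbles (a linear ramp here); only the current image and one temporary are ever held in memory, never the system matrix.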
Identifying High-Rate Flows Based on Sequential Sampling
NASA Astrophysics Data System (ADS)
Zhang, Yu; Fang, Binxing; Luo, Hao
We consider the problem of fast identification of high-rate flows in backbone links with possibly millions of flows. Accurate identification of high-rate flows is important for active queue management, traffic measurement and network security such as detection of distributed denial of service attacks. It is difficult to directly identify high-rate flows in backbone links because tracking the possible millions of flows needs correspondingly large high speed memories. To reduce the measurement overhead, the deterministic 1-out-of-k sampling technique is adopted which is also implemented in Cisco routers (NetFlow). Ideally, a high-rate flow identification method should have short identification time, low memory cost and processing cost. Most importantly, it should be able to specify the identification accuracy. We develop two such methods. The first method is based on fixed sample size test (FSST) which is able to identify high-rate flows with user-specified identification accuracy. However, since FSST has to record every sampled flow during the measurement period, it is not memory efficient. Therefore the second novel method based on truncated sequential probability ratio test (TSPRT) is proposed. Through sequential sampling, TSPRT is able to remove the low-rate flows and identify the high-rate flows at the early stage which can reduce the memory cost and identification time respectively. According to the way to determine the parameters in TSPRT, two versions of TSPRT are proposed: TSPRT-M which is suitable when low memory cost is preferred and TSPRT-T which is suitable when short identification time is preferred. The experimental results show that TSPRT requires less memory and identification time in identifying high-rate flows while satisfying the accuracy requirement as compared to previously proposed methods.
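The sequential test underlying TSPRT is Wald's SPRT applied to the per-sample indicator "this sampled packet belongs to the flow under test". A textbook sketch with hypothetical rate thresholds and error bounds, not the paper's truncated variant or its parameter-selection rules:

```python
import math, random

def sprt_high_rate(sample_stream, p0=0.01, p1=0.05, alpha=0.01, beta=0.01,
                   max_samples=100000):
    """Wald's SPRT: H1 'flow accounts for >= p1 of sampled packets' vs
    H0 '<= p0', with target false-positive rate alpha and false-negative
    rate beta. Stops as soon as the log-likelihood ratio crosses a bound."""
    upper = math.log((1 - beta) / alpha)      # accept H1 (high-rate)
    lower = math.log(beta / (1 - alpha))      # accept H0 (low-rate)
    llr = 0.0
    for n, hit in enumerate(sample_stream, 1):
        llr += math.log(p1 / p0) if hit else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "high-rate", n
        if llr <= lower:
            return "low-rate", n
        if n >= max_samples:
            break
    return "undecided", n

rng = random.Random(5)
verdict_hi, n_hi = sprt_high_rate(rng.random() < 0.10 for _ in range(100000))
verdict_lo, n_lo = sprt_high_rate(rng.random() < 0.001 for _ in range(100000))
```

A genuinely high-rate flow crosses the upper boundary after a handful of hits, while a low-rate flow is dismissed early, which is exactly the memory- and time-saving early removal the abstract describes.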
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sreepathi, Sarat; Kumar, Jitendra; Mills, Richard T.
A proliferation of data from vast networks of remote sensing platforms (satellites, unmanned aircraft systems (UAS), airborne etc.), observational facilities (meteorological, eddy covariance etc.), state-of-the-art sensors, and simulation models offers unprecedented opportunities for scientific discovery. Unsupervised classification is a widely applied data mining approach to derive insights from such data. However, classification of very large data sets is a complex computational problem that requires efficient numerical algorithms and implementations on high performance computing (HPC) platforms. Additionally, increasing power, space, cooling and efficiency requirements have led to the deployment of hybrid supercomputing platforms with complex architectures and memory hierarchies like the Titan system at Oak Ridge National Laboratory. The advent of such accelerated computing architectures offers new challenges and opportunities for big data analytics in general and specifically, large scale cluster analysis in our case. Although there is an existing body of work on parallel cluster analysis, those approaches do not fully meet the needs imposed by the nature and size of our large data sets. Moreover, they had scaling limitations and were mostly limited to traditional distributed memory computing platforms. We present a parallel Multivariate Spatio-Temporal Clustering (MSTC) technique based on k-means cluster analysis that can target hybrid supercomputers like Titan. We developed a hybrid MPI, CUDA and OpenACC implementation that can utilize both CPU and GPU resources on computational nodes. We describe performance results on Titan that demonstrate the scalability and efficacy of our approach in processing large ecological data sets.
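The serial core that such a parallel MSTC implementation distributes is ordinary k-means. A minimal sketch is below; in the hybrid MPI/CUDA/OpenACC version described above, the rows of `X` would be partitioned across ranks or GPUs and the per-cluster sums all-reduced each iteration. The function name and arguments are illustrative.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Minimal k-means: alternate nearest-centre assignment and
    per-cluster mean update. X is (n_points, n_features)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)].astype(float)
    for _ in range(iters):
        # assignment step: squared distance to every centre
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        # update step: mean of each cluster (keep old centre if empty)
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(0)
    return centers, labels
```

The assignment step is embarrassingly parallel over points, which is why this kernel maps well onto the GPU resources the abstract mentions.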
Still searching for the engram.
Eichenbaum, Howard
2016-09-01
For nearly a century, neurobiologists have searched for the engram, the neural representation of a memory. Early studies showed that the engram is widely distributed both within and across brain areas and is supported by interactions among large networks of neurons. Subsequent research has identified engrams that support memory within dedicated functional systems for habit learning and emotional memory, but the engram for declarative memories has been elusive. Nevertheless, recent years have brought progress from molecular biological approaches that identify neurons and networks that are necessary and sufficient to support memory, and from recording approaches and population analyses that characterize the information coded by large neural networks. These new directions offer the promise of revealing the engrams for episodic and semantic memories.
Kristian Hill, S; Buchholz, Alison; Amsbaugh, Hayley; Reilly, James L; Rubin, Leah H; Gold, James M; Keefe, Richard S E; Pearlson, Godfrey D; Keshavan, Matcheri S; Tamminga, Carol A; Sweeney, John A
2015-08-01
Working memory impairment is well established in psychotic disorders. However, the relative magnitude, diagnostic specificity, familiality pattern, and degree of independence from generalized cognitive deficits across psychotic disorders remain unclear. Participants from the Bipolar and Schizophrenia Network on Intermediate Phenotypes (B-SNIP) study included probands with schizophrenia (N=289), psychotic bipolar disorder (N=227), schizoaffective disorder (N=165), their first-degree relatives (N=315, N=259, N=193, respectively), and healthy controls (N=289). All were administered the WMS-III Spatial Span working memory test and the Brief Assessment of Cognition in Schizophrenia (BACS) battery. All proband groups displayed significant deficits for both forward and backward span compared to controls. However, after covarying for generalized cognitive impairments (BACS composite), all proband groups showed a 74% or greater effect size reduction with only schizoaffective probands showing residual backward span deficits compared to controls. Significant familiality was seen in schizophrenia and bipolar pedigrees. In relatives, both forward and backward span deficits were again attenuated after covarying BACS scores and residual backward span deficits were seen in relatives of schizophrenia patients. Overall, both probands and relatives showed a similar pattern of robust working memory deficits that were largely attenuated when controlling for generalized cognitive deficits. Copyright © 2015 Elsevier B.V. All rights reserved.
Interfering with free recall of words: Detrimental effects of phonological competition.
Fernandes, Myra A; Wammes, Jeffrey D; Priselac, Sandra; Moscovitch, Morris
2016-09-01
We examined the effect of different distracting tasks, performed concurrently during memory retrieval, on recall of a list of words. By manipulating the type of material and processing (semantic, orthographic, and phonological) required in the distracting task, and comparing the magnitude of memory interference produced, we aimed to infer the kind of representation upon which retrieval of words depends. In Experiment 1, identifying odd digits concurrently during free recall disrupted memory, relative to a full attention condition, when the numbers were presented orthographically (e.g. nineteen), but not numerically (e.g. 19). In Experiment 2, a distracting task that required phonological-based decisions to either word or picture material produced large, but equivalent effects on recall of words. In Experiment 3, phonological-based decisions to pictures in a distracting task disrupted recall more than when the same pictures required semantically-based size estimations. In Experiment 4, a distracting task that required syllable decisions to line drawings interfered significantly with recall, while an equally difficult semantically-based color-decision task about the same line drawings did not. Together, these experiments demonstrate that the degree of memory interference experienced during recall of words depends primarily on whether the distracting task competes for phonological representations or processes, and less on competition for semantic, orthographic, or material-specific representations or processes. Copyright © 2016 Elsevier Ltd. All rights reserved.
Lim, Yen Ying; Villemagne, Victor L.; Laws, Simon M.; Ames, David; Pietrzak, Robert H.; Ellis, Kathryn A.; Harrington, Karra; Bourgeat, Pierrick; Bush, Ashley I.; Martins, Ralph N.; Masters, Colin L.; Rowe, Christopher C.; Maruff, Paul
2014-01-01
Objective: Cross-sectional genetic association studies have reported equivocal results on the relationship between the brain-derived neurotrophic factor (BDNF) Val66Met and risk of Alzheimer’s disease (AD). As AD is a neurodegenerative disease, genetic influences may become clearer from prospective study. We aimed to determine whether the BDNF Val66Met polymorphism influences changes in memory performance, hippocampal volume, and Aβ accumulation in adults with amnestic mild cognitive impairment (aMCI) and high Aβ. Methods: Thirty-four adults with aMCI were recruited from the Australian Imaging, Biomarkers and Lifestyle (AIBL) Study. Participants underwent PiB-PET and structural MRI neuroimaging, neuropsychological assessments and BDNF genotyping at baseline, 18 month, and 36 month assessments. Results: In individuals with aMCI and high Aβ, Met carriers showed significant and large decline in episodic memory (d = 0.90, p = .020) and hippocampal volume (d = 0.98, p = .035). BDNF Val66Met was unrelated to the rate of Aβ accumulation (d = −0.35, p = .401). Conclusions: Although preliminary due to the small sample size, results of this study suggest that high Aβ levels and Met carriage may be useful prognostic markers of accelerated decline in episodic memory, and reductions in hippocampal volume in individuals in the prodromal or MCI stage of AD. PMID:24475133
NASA Astrophysics Data System (ADS)
Yamamoto, K.; Murata, K.; Kimura, E.; Honda, R.
2006-12-01
In the Solar-Terrestrial Physics (STP) field, the amount of satellite observation data has been increasing every year. It is necessary to solve the following three problems to achieve large-scale statistical analyses of such data. (i) More CPU power and larger memory and disk sizes are required; the total power of personal computers is not enough to analyze such amounts of data. Super-computers provide a high performance CPU and a rich memory area, but they are usually separated from the Internet or connected only for the purpose of programming or data file transfer. (ii) Most of the observation data files are managed at distributed data sites over the Internet. Users have to know where the data files are located. (iii) Since no common data format in the STP field is available now, users have to prepare a reading program for each data set by themselves. To overcome problems (i) and (ii), we constructed a parallel and distributed data analysis environment based on the Gfarm reference implementation of the Grid Datafarm architecture. The Gfarm shares computational resources and performs parallel distributed processing. In addition, the Gfarm provides the Gfarm filesystem, which can be used as a virtual directory tree among nodes. The Gfarm environment is composed of three parts: a metadata server to manage distributed file information, filesystem nodes to provide computational resources, and a client that submits jobs to the metadata server and manages data processing schedules. In the present study, both data files and data processes are parallelized on the Gfarm with 6 filesystem nodes; each node has a 1 GHz Pentium V CPU, 256 MB of memory and a 40 GB disk. To evaluate the performance of the present Gfarm system, we scanned many data files, each about 300 MB in size, using three processing methods: sequential processing in one node, sequential processing by each node and parallel processing by each node.
As a result, comparing the number of files against the elapsed time, parallel and distributed processing shortened the elapsed time to about one fifth of that of sequential processing. On the other hand, sequential processing times were shorter in another experiment in which the file sizes were smaller than 100 KB. In that case, the elapsed time to scan one file is within one second, which implies that disk swapping took place in the case of parallel processing by each node. We note that the operation became unstable when the number of files exceeded 1000. To overcome problem (iii), we developed an original data class. This class supports reading data files in various formats: it defines schemata for every type of data, encapsulates the structure of the data files, and converts them into a common internal format. In addition, since this class provides a function for time re-sampling, users can easily convert multiple data arrays with different time resolutions onto a common time base. Finally, using the Gfarm, we achieved a high performance environment for large-scale statistical data analyses. It should be noted that the present method is effective only when individual data files are large enough. At present, we are restructuring a new Gfarm environment with 8 nodes, each with a 2 GHz Athlon 64 X2 dual-core CPU, 2 GB of memory and 1.2 TB of disk (using RAID 0). Our original class is to be implemented on the new Gfarm environment. In the present talk, we show the latest results from applying the present system to analyses of a huge number of satellite observation data files.
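The time re-sampling facility described for the data class can be sketched with linear interpolation onto a common time grid. This is a hypothetical minimal version (`common_grid` and its arguments are illustrative, not the class's actual interface):

```python
import numpy as np

def common_grid(series, t0, t1, dt):
    """Resample each (timestamps, values) series onto the grid
    [t0, t1) with step dt, so series with different native
    cadences share one time base."""
    t = np.arange(t0, t1, dt)
    return t, [np.interp(t, ts, vs) for ts, vs in series]
```

For example, a 1-second-cadence series and a 2-second-cadence series can both be interpolated onto a 0.5-second grid and then compared sample by sample.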
Computational scalability of large size image dissemination
NASA Astrophysics Data System (ADS)
Kooper, Rob; Bajcsy, Peter
2011-01-01
We have investigated the computational scalability of image pyramid building needed for dissemination of very large image data. The sources of large images include high resolution microscopes and telescopes, remote sensing and airborne imaging, and high resolution scanners. The term 'large' is understood from a user perspective: it means either larger than a display size or larger than a memory/disk to hold the image data. The application drivers for our work are digitization projects such as the Lincoln Papers project (each image scan is about 100-150MB or about 5000x8000 pixels with the total number to be around 200,000) and the UIUC library scanning project for historical maps from the 17th and 18th century (a smaller number of larger images). The goal of our work is to understand the computational scalability of web-based dissemination using image pyramids for these large image scans, as well as the preservation aspects of the data. We report our computational benchmarks for (a) building image pyramids to be disseminated using the Microsoft Seadragon library, (b) a computation execution approach using hyper-threading to generate image pyramids and to utilize the underlying hardware, and (c) an image pyramid preservation approach using various hard drive configurations of Redundant Array of Independent Disks (RAID) drives for input/output operations. The benchmarks are obtained with a map (334.61 MB, JPEG format, 17591x15014 pixels). The discussion combines the speed and preservation objectives.
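The pyramid-building step being benchmarked can be sketched as repeated 2x downsampling until the coarsest level fits a single tile. This is a simplified illustration (2x2 box filter, no tiling or Seadragon-specific layout; names are assumptions):

```python
import numpy as np

def build_pyramid(img, tile=256):
    """Halve the image until it fits one tile; returns levels from
    full resolution down to the coarsest (2x2 box-filter averaging)."""
    levels = [img]
    while max(levels[-1].shape[:2]) > tile:
        a = levels[-1]
        h, w = a.shape[0] // 2 * 2, a.shape[1] // 2 * 2   # crop odd edge
        a = a[:h, :w]
        half = (a[0::2, 0::2] + a[0::2, 1::2] +
                a[1::2, 0::2] + a[1::2, 1::2]) / 4.0
        levels.append(half)
    return levels
```

For a 5000x8000 scan this yields roughly five extra levels, and total pyramid storage stays under 4/3 of the original image size, which is the trade-off behind the dissemination benchmarks above.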
Accelerating 3D Elastic Wave Equations on Knights Landing based Intel Xeon Phi processors
NASA Astrophysics Data System (ADS)
Sourouri, Mohammed; Birger Raknes, Espen
2017-04-01
In advanced imaging methods like reverse-time migration (RTM) and full waveform inversion (FWI) the elastic wave equation (EWE) is numerically solved many times to create the seismic image or the elastic parameter model update. Thus, it is essential to optimize the solution time for solving the EWE as this will have a major impact on the total computational cost in running RTM or FWI. From a computational point of view, applications implementing EWEs are associated with two major challenges. The first challenge is the amount of memory-bound computations involved, while the second challenge is the execution of such computations over very large datasets. So far, multi-core processors have not been able to tackle these two challenges, which eventually led to the adoption of accelerators such as Graphics Processing Units (GPUs). Compared to conventional CPUs, GPUs are densely populated with many floating-point units and fast memory, a type of architecture that has proven to map well to many scientific computations. Despite its architectural advantages, full-scale adoption of accelerators has yet to materialize. First, accelerators require a significant programming effort imposed by programming models such as CUDA or OpenCL. Second, accelerators come with a limited amount of memory, which also requires explicit data transfers between the CPU and the accelerator over the slow PCI bus. The second generation of the Xeon Phi processor, based on the Knights Landing (KNL) architecture, promises the computational capabilities of an accelerator but requires only the same programming effort as traditional multi-core processors. The high computational performance is realized through many integrated cores (number of cores and tiles and memory varies with the model) organized in tiles that are connected via a 2D mesh-based interconnect. In contrast to accelerators, KNL is a self-hosted system, meaning explicit data transfers over the PCI bus are no longer required.
However, like most accelerators, KNL sports a memory subsystem consisting of low-level caches and 16GB of high-bandwidth MCDRAM memory. For capacity computing, up to 400GB of conventional DDR4 memory is provided. Such a strict hierarchical memory layout means that data locality is imperative if the true potential of this product is to be harnessed. In this work, we study a series of optimizations specifically targeting KNL for our EWE based application to reduce the time-to-solution for the following 3D model sizes in grid points: 128³, 256³ and 512³. We compare the results with an optimized version for multi-core CPUs running on a dual-socket Xeon E5 2680v3 system using OpenMP. Our initial naive implementation on the KNL is roughly 20% faster than the multi-core version, but by using only one thread per core and careful memory placement using the memkind library, we could achieve higher speedups. Additionally, using the MCDRAM as cache for problem sizes smaller than 16 GB unlocked further performance improvements. Depending on the problem size, our overall results indicate that the KNL based system is approximately 2.2x faster than the 24-core Xeon E5 2680v3 system, with only modest changes to the code.
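The memory-bound kernel at issue is a 3-D stencil sweep over the whole grid each time step. As a stand-in for the elastic system (which uses staggered grids and elastic parameters), a scalar-wave leapfrog step shows the access pattern; this numpy sketch is illustrative only and is not the paper's OpenMP/KNL code:

```python
import numpy as np

def wave_step(u, u_prev, c2dt2):
    """One leapfrog step of the scalar wave equation on a unit-spaced
    3-D grid: u_next = 2u - u_prev + c^2 dt^2 * laplacian(u).
    Only interior points are updated; each output value touches seven
    neighbours, making the kernel bandwidth-bound."""
    lap = (-6.0 * u[1:-1, 1:-1, 1:-1]
           + u[:-2, 1:-1, 1:-1] + u[2:, 1:-1, 1:-1]
           + u[1:-1, :-2, 1:-1] + u[1:-1, 2:, 1:-1]
           + u[1:-1, 1:-1, :-2] + u[1:-1, 1:-1, 2:])
    u_next = u.copy()
    u_next[1:-1, 1:-1, 1:-1] = (2.0 * u[1:-1, 1:-1, 1:-1]
                                - u_prev[1:-1, 1:-1, 1:-1] + c2dt2 * lap)
    return u_next
```

Because only a handful of flops are done per grid value loaded, keeping the working set in the 16 GB MCDRAM (rather than DDR4) is what governs the speedups quoted above.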
Chalfoun, J; Majurski, M; Peskin, A; Breen, C; Bajcsy, P; Brady, M
2015-10-01
New microscopy technologies are enabling image acquisition of terabyte-sized data sets consisting of hundreds of thousands of images. In order to retrieve and analyze the biological information in these large data sets, segmentation is needed to detect the regions containing cells or cell colonies. Our work with hundreds of large images (each 21,000×21,000 pixels) requires a segmentation method that: (1) yields high segmentation accuracy, (2) is applicable to multiple cell lines with various densities of cells and cell colonies, and several imaging modalities, (3) can process large data sets in a timely manner, (4) has a low memory footprint and (5) has a small number of user-set parameters that do not require adjustment during the segmentation of large image sets. None of the currently available segmentation methods meet all these requirements. Segmentation based on image gradient thresholding is fast and has a low memory footprint. However, existing techniques that automate the selection of the gradient image threshold do not work across image modalities, multiple cell lines, and a wide range of foreground/background densities (requirement 2) and all failed the requirement for robust parameters that do not require re-adjustment with time (requirement 5). We present a novel and empirically derived image gradient threshold selection method for separating foreground and background pixels in an image that meets all the requirements listed above. We quantify the difference between our approach and existing ones in terms of accuracy, execution speed, memory usage and number of adjustable parameters on a reference data set. This reference data set consists of 501 validation images with manually determined segmentations and image sizes ranging from 0.36 Megapixels to 850 Megapixels. It includes four different cell lines and two image modalities: phase contrast and fluorescent. 
Our new technique, called Empirical Gradient Threshold (EGT), is derived from this reference data set with a 10-fold cross-validation method. EGT segments cells or colonies with resulting Dice accuracy index measurements above 0.92 for all cross-validation data sets. EGT results have also been visually verified on a much larger data set that includes bright field and Differential Interference Contrast (DIC) images, 16 cell lines and 61 time-sequence data sets, for a total of 17,479 images. This method is implemented as an open-source plugin to ImageJ as well as a standalone executable that can be downloaded from the following link: https://isg.nist.gov/. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.
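The general idea of gradient-threshold segmentation can be sketched as thresholding the gradient-magnitude image. Note this toy version uses a fixed percentile cut, whereas EGT derives the threshold empirically from the gradient histogram; the function name and parameter are illustrative.

```python
import numpy as np

def gradient_threshold_mask(img, percentile=90.0):
    """Simplified foreground mask: compute per-pixel gradient magnitude
    and keep pixels above a percentile of the magnitude distribution.
    (EGT instead selects the cut automatically from the histogram shape,
    which is what makes it robust across modalities and cell lines.)"""
    gy, gx = np.gradient(img.astype(float))   # gradients along rows, cols
    mag = np.hypot(gx, gy)
    t = np.percentile(mag, percentile)
    return mag > t
```

On a flat image containing a bright square, the mask fires along the square's edges and stays off in flat background and flat interior, which is why such methods are fast and have a low memory footprint: one gradient pass and one threshold.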
Márquez Neila, Pablo; Baumela, Luis; González-Soriano, Juncal; Rodríguez, Jose-Rodrigo; DeFelipe, Javier; Merchán-Pérez, Ángel
2016-04-01
Recent electron microscopy (EM) imaging techniques permit the automatic acquisition of a large number of serial sections from brain samples. Manual segmentation of these images is tedious, time-consuming and requires a high degree of user expertise. Therefore, there is considerable interest in developing automatic segmentation methods. However, currently available methods are computationally demanding in terms of computer time and memory usage, and to work properly many of them require image stacks to be isotropic, that is, voxels must have the same size in the X, Y and Z axes. We present a method that works with anisotropic voxels and that is computationally efficient allowing the segmentation of large image stacks. Our approach involves anisotropy-aware regularization via conditional random field inference and surface smoothing techniques to improve the segmentation and visualization. We have focused on the segmentation of mitochondria and synaptic junctions in EM stacks from the cerebral cortex, and have compared the results to those obtained by other methods. Our method is faster than other methods with similar segmentation results. Our image regularization procedure introduces high-level knowledge about the structure of labels. We have also reduced memory requirements with the introduction of energy optimization in overlapping partitions, which permits the regularization of very large image stacks. Finally, the surface smoothing step improves the appearance of three-dimensional renderings of the segmented volumes.
Parallelized modelling and solution scheme for hierarchically scaled simulations
NASA Technical Reports Server (NTRS)
Padovan, Joe
1995-01-01
This two-part paper presents the results of a benchmarked analytical-numerical investigation into the operational characteristics of a unified parallel processing strategy for implicit fluid mechanics formulations. This hierarchical poly tree (HPT) strategy is based on multilevel substructural decomposition. The tree morphology is chosen to minimize memory, communications and computational effort. The methodology is general enough to apply to existing finite difference (FD), finite element (FEM), finite volume (FV) or spectral element (SE) based computer programs without an extensive rewrite of code. In addition to finding large reductions in memory, communications, and computational effort associated with a parallel computing environment, substantial reductions are generated in the sequential mode of application. Such improvements grow with increasing problem size. Along with a theoretical development of general 2-D and 3-D HPT, several techniques for expanding the problem size that the current generation of computers is capable of solving are presented and discussed. Among these techniques are several interpolative reduction methods. It was found that by combining several of these techniques, a relatively small interpolative reduction resulted in substantial performance gains. Several other unique features/benefits are discussed in this paper. Along with Part 1's theoretical development, Part 2 presents a numerical approach to the HPT along with four prototype CFD applications. These demonstrate the potential of the HPT strategy.
Groh, Claudia; Kelber, Christina; Grübel, Kornelia; Rössler, Wolfgang
2014-01-01
Hymenoptera possess voluminous mushroom bodies (MBs), brain centres associated with sensory integration, learning and memory. The mushroom body input region (calyx) is organized in distinct synaptic complexes (microglomeruli, MG) that can be quantified to analyse body size-related phenotypic plasticity of synaptic microcircuits in these small brains. Leaf-cutting ant workers (Atta vollenweideri) exhibit an enormous size polymorphism, which makes them outstanding subjects for investigating neuronal adaptations underlying division of labour and brain miniaturization. We particularly asked how size-related division of labour in polymorphic workers is reflected in the volume and total numbers of MG in olfactory calyx subregions. Whole brains of mini, media and large workers were immunolabelled with anti-synapsin antibodies, and mushroom body volumes as well as densities and absolute numbers of MG were determined by confocal imaging and three-dimensional analyses. The total brain volume and absolute volumes of olfactory mushroom body subdivisions were positively correlated with head widths, but mini workers had significantly larger MB to total brain ratios. Interestingly, the density of olfactory MG was remarkably independent of worker size. Consequently, absolute numbers of olfactory MG were still approximately three times higher in large compared with mini workers. The results show that the maximum packing density of synaptic microcircuits may represent a species-specific limit to brain miniaturization. PMID:24807257
The effects of delay duration on visual working memory for orientation.
Shin, Hongsup; Zou, Qijia; Ma, Wei Ji
2017-12-01
We used a delayed-estimation paradigm to characterize the joint effects of set size (one, two, four, or six) and delay duration (1, 2, 3, or 6 s) on visual working memory for orientation. We conducted two experiments: one with delay durations blocked, another with delay durations interleaved. As dependent variables, we examined four model-free metrics of dispersion as well as precision estimates in four simple models. We tested for effects of delay time using analyses of variance, linear regressions, and nested model comparisons. We found significant effects of set size and delay duration on both model-free and model-based measures of dispersion. However, the effect of delay duration was much weaker than that of set size, dependent on the analysis method, and apparent in only a minority of subjects. The highest forgetting slope found in either experiment at any set size was a modest 1.14°/s. As secondary results, we found a low rate of nontarget reports, and significant estimation biases towards oblique orientations (but no dependence of their magnitude on either set size or delay duration). Relative stability of working memory even at higher set sizes is consistent with earlier results for motion direction and spatial frequency. We compare with a recent study that performed a very similar experiment.
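The forgetting-slope analysis reported above (e.g. the largest observed slope of 1.14°/s) amounts to a least-squares fit of response dispersion against delay duration. A minimal sketch, with illustrative names:

```python
import numpy as np

def forgetting_slope(delays, dispersions):
    """Least-squares slope of response dispersion (degrees) against
    delay duration (seconds). A near-zero slope indicates relative
    stability of working memory over the delay."""
    d = np.asarray(delays, float)
    y = np.asarray(dispersions, float)
    slope, intercept = np.polyfit(d, y, 1)   # degree-1 polynomial fit
    return slope, intercept
```

Fitting this per subject and per set size, and testing the slopes against zero, is one way to quantify how weak the delay effect is relative to the set-size effect.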
Multigrid contact detection method
NASA Astrophysics Data System (ADS)
He, Kejing; Dong, Shoubin; Zhou, Zhaoyao
2007-03-01
Contact detection is a general problem of many physical simulations. This work presents an O(N) multigrid method for general contact detection problems (MGCD). The multigrid idea is integrated with contact detection problems. Both the time complexity and memory consumption of the MGCD are O(N). Unlike other methods, whose efficiencies are influenced strongly by the object size distribution, the performance of the MGCD is insensitive to the object size distribution. We compare the MGCD with the no binary search (NBS) method and the multilevel boxing method in three dimensions for both time complexity and memory consumption. For objects with similar sizes, the MGCD is as good as the NBS method, and both outperform the multilevel boxing method regarding memory consumption. For objects with diverse sizes, the MGCD outperforms both the NBS method and the multilevel boxing method. We use the MGCD to solve the contact detection problem for a granular simulation system based on the discrete element method. From this granular simulation, we obtain the packing density of monosize packing and of binary packing with a size ratio equal to 10. The packing density for monosize particles is 0.636. For binary packing with a size ratio equal to 10, when the number of small particles is 300 times the number of big particles, the maximal packing density of 0.824 is achieved.
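The single-grid building block of such methods is a uniform-cell broad phase: hash each object into a cell and test only pairs in neighbouring cells. This sketch is single-level (it assumes the cell edge is at least the largest sphere diameter); the multigrid MGCD instead keeps one grid per object size class, which is what makes it insensitive to the size distribution. Names are illustrative.

```python
from collections import defaultdict
from itertools import product

def broad_phase(spheres, cell):
    """Uniform-grid contact detection for spheres given as (x, y, z, r).
    Requires cell >= 2 * max radius so contacts never skip a cell."""
    grid = defaultdict(list)
    for i, (x, y, z, r) in enumerate(spheres):
        grid[(int(x // cell), int(y // cell), int(z // cell))].append(i)
    contacts = set()
    for (cx, cy, cz), members in grid.items():
        # gather candidates from the 27 surrounding cells
        cand = []
        for dx, dy, dz in product((-1, 0, 1), repeat=3):
            cand.extend(grid.get((cx + dx, cy + dy, cz + dz), ()))
        for i in members:
            xi, yi, zi, ri = spheres[i]
            for j in cand:
                if j <= i:          # each unordered pair tested once
                    continue
                xj, yj, zj, rj = spheres[j]
                d2 = (xi - xj) ** 2 + (yi - yj) ** 2 + (zi - zj) ** 2
                if d2 <= (ri + rj) ** 2:
                    contacts.add((i, j))
    return contacts
```

With near-uniform object density, each cell holds O(1) objects, giving the O(N) time and memory the abstract claims for the grid-based approach.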
Nijdam, Mirjam J; Martens, Irene J M; Reitsma, Johannes B; Gersons, Berthold P R; Olff, Miranda
2018-05-01
Individuals with post-traumatic stress disorder (PTSD) have neurocognitive deficits in verbal memory and executive functioning. In this study, we examined whether memory and executive functioning changed over the course of treatment and which clinical variables were associated with change. Neuropsychological assessments were administered at baseline and endpoint of a randomized controlled trial as a secondary outcome. Trauma survivors (n = 88) diagnosed with PTSD received trauma-focused psychotherapy within a 17-week randomized controlled trial. Neuropsychological tests were the California Verbal Learning Test, Rivermead Behavioural Memory Test, Stroop Color Word Test, and Trail Making Test. Significant, small- to medium-sized improvements in verbal memory, information processing speed, and executive functioning were found after trauma-focused psychotherapy (Cohen's d 0.16-0.68). Greater PTSD symptom decrease was significantly related to better post-treatment neurocognitive performance (all p < .005). Patients with comorbid depression improved more than patients with PTSD alone on interference tasks (p < .01). No differences emerged between treatment conditions and between patients on serotonergic antidepressants and those who were not. This study suggests that neurocognitive deficits in PTSD can improve over the course of trauma-focused psychotherapy and are therefore at least partly reversible. Improvements over treatment are in line with previous neuropsychological and neuroimaging studies and effect sizes exceed those of practice effects. Future research should determine whether these changes translate into improved functioning in the daily lives of the patients. Patients with PTSD have difficulties performing verbal memory tasks (e.g., remembering a grocery list, recall of a story) and executive functioning tasks (e.g., shifting attention between two tasks, ignoring irrelevant information to complete a task).
Verbal memory, information processing speed, and executive functioning significantly improved in patients with post-traumatic stress disorder over the course of trauma-focused psychotherapy. Improvements were equal in size for two different trauma-focused psychotherapies (Eye movement desensitization and reprocessing therapy and brief eclectic psychotherapy for PTSD). Medium-sized effects were found for recall of a story, whereas effects in other aspects of verbal memory, information processing speed, and executive functioning were small-sized. No causal attributions can be made because we could not include a control group without treatment for ethical reasons. Findings may be more reflective of patients who completed treatment than patients who prematurely dropped out as completers were overrepresented in our sample. © 2018 The British Psychological Society.
Monteiro-Junior, Renato Sobral; da Silva Figueiredo, Luiz Felipe; Maciel-Pinheiro, Paulo de Tarso; Abud, Erick Lohan Rodrigues; Braga, Ana Elisa Mendes Montalvão; Barca, Maria Lage; Engedal, Knut; Nascimento, Osvaldo José M; Deslandes, Andrea Camaz; Laks, Jerson
2017-06-01
Improvements in balance, gait and cognition are some of the benefits of exergames. Few studies have investigated the cognitive effects of exergames in institutionalized older persons. Our aim was to assess the acute effect of a single session of exergames on cognition in institutionalized older persons. Nineteen institutionalized older persons were randomly allocated to the Wii group (WG, n = 10, 86 ± 7 years, two males) or the control group (CG, n = 9, 86 ± 5 years, one male). The WG performed six exercises with virtual reality, whereas the CG performed six exercises without virtual reality. The verbal fluency test (VFT), digit span forward and digit span backward were used to evaluate semantic memory/executive function, short-term memory and working memory, respectively, before and after the exergames session, and Δ post- to pre-session (absolute) and Δ% (relative) values were calculated. Parametric (independent t test) and nonparametric (Mann-Whitney test) statistics and effect sizes were used to test for efficacy. VFT scores changed significantly within the WG (-3.07, df = 9, p = 0.013). We found no statistically significant differences between the two groups (p > 0.05). The effect size between groups for Δ% (median = 21%) showed a moderate effect for the WG (0.63). Our data show moderate improvement of semantic memory/executive function due to the exergames session. It is possible that cognitive brain areas are activated during exergames, increasing clinical response. A single session of exergames showed no significant improvement in short-term memory, working memory and semantic memory/executive function. The effect size for verbal fluency was promising, and future studies on this issue should be developed. RBR-6rytw2.
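Several of the abstracts above quote Cohen's d effect sizes. For reference, the standard pooled-standard-deviation form of Cohen's d for two independent groups can be computed as follows (a generic sketch, not any one study's analysis script):

```python
import math

def cohens_d(a, b):
    """Cohen's d for two independent samples, using the pooled
    standard deviation with sample variances (ddof = 1)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / sp
```

By the usual convention, |d| around 0.2 is a small effect, 0.5 moderate, and 0.8 large, which is how values such as the 0.63 above are read.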
Ihlefeld, Jon F.; Harris, David T.; Keech, Ryan; ...
2016-07-05
Ferroelectric materials are well-suited for a variety of applications because they can offer a combination of high performance and scaled integration. Examples of note include piezoelectrics to transform between electrical and mechanical energies, capacitors used to store charge, electro-optic devices, and non-volatile memory storage. Accordingly, they are widely used as sensors, actuators, energy storage and memory components, ultrasonic devices, and in consumer electronics products. Because these functional properties arise from a non-centrosymmetric crystal structure with spontaneous strain and a permanent electric dipole, the properties depend upon physical and electrical boundary conditions and, consequently, physical dimension. The change of properties with decreasing physical dimension is commonly referred to as a size effect. In thin films, size effects are widely observed, while in bulk ceramics, changes in properties from the values of large-grained specimens are most notable in samples with grain sizes below several microns. It is important to note that ferroelectricity typically persists to length scales of about 10 nm, but below this point is often absent. Despite the stability of ferroelectricity for dimensions greater than ~10 nm, the dielectric and piezoelectric coefficients of scaled ferroelectrics are suppressed relative to their bulk counterparts, in some cases by changes up to 80%. The loss of extrinsic contributions (domain and phase boundary motion) to the electromechanical response accounts for much of this suppression. In this article the current understanding of the underlying mechanisms for this behavior in perovskite ferroelectrics is reviewed. We focus on the intrinsic limits of ferroelectric response, the roles of electrical and mechanical boundary conditions, grain size and thickness effects, and extraneous effects related to processing. Ultimately, in many cases, multiple mechanisms combine to produce the observed scaling effects.
The relationship between baseline pupil size and intelligence.
Tsukahara, Jason S; Harrison, Tyler L; Engle, Randall W
2016-12-01
Pupil dilations of the eye are known to correspond to central cognitive processes. However, the relationship between pupil size and individual differences in cognitive ability is not as well studied. A peculiar finding that has cropped up in this research is that those high in cognitive ability have a larger pupil size, even during a passive baseline condition. Yet these findings were incidental and lacked a clear explanation. Therefore, in the present series of studies we systematically investigated whether pupil size during a passive baseline is associated with individual differences in working memory capacity and fluid intelligence. Across three studies we consistently found that baseline pupil size is, in fact, related to cognitive ability. We showed that this relationship could not be explained by differences in mental effort, and that the effect of working memory capacity and fluid intelligence on pupil size persisted even after 23 sessions and after taking into account the effect of novelty or familiarity with the environment. We also accounted for potential confounding variables such as age, ethnicity, and drug substances. Lastly, we found that it is fluid intelligence, more so than working memory capacity, that is related to baseline pupil size. In order to provide an explanation and suggestions for future research, we also consider our findings in the context of the underlying neural mechanisms involved. Copyright © 2016 Elsevier Inc. All rights reserved.
Payne, Brennan R.; Gross, Alden L.; Hill, Patrick L.; Parisi, Jeanine M.; Rebok, George W.; Stine-Morrow, Elizabeth A. L.
2018-01-01
With advancing age, episodic memory performance shows marked declines along with concurrent reports of lower subjective memory beliefs. Given that normative age-related declines in episodic memory co-occur with declines in other cognitive domains, we examined the relationship between memory beliefs and multiple domains of cognitive functioning. Confirmatory bi-factor structural equation models were used to parse the shared and independent variance among factors representing episodic memory, psychomotor speed, and executive reasoning in one large cohort study (Senior Odyssey, N = 462), and replicated using another large cohort of healthy older adults (ACTIVE, N = 2,802). Accounting for a general fluid cognitive functioning factor (comprised of the shared variance among measures of episodic memory, speed, and reasoning) attenuated the relationship between objective memory performance and subjective memory beliefs in both samples. Moreover, the general cognitive functioning factor was the strongest predictor of memory beliefs in both samples. These findings are consistent with the notion that dispositional memory beliefs may reflect perceptions of cognition more broadly. This may be one reason why memory beliefs have broad predictive validity for interventions that target fluid cognitive ability. PMID:27685541
The cortisol awakening response and memory performance in older men and women.
Almela, Mercedes; van der Meij, Leander; Hidalgo, Vanesa; Villada, Carolina; Salvador, Alicia
2012-12-01
The activity and regulation of the hypothalamic-pituitary-adrenal axis have been related to cognitive decline during aging. This study investigated whether the cortisol awakening response (CAR) is related to memory performance among older adults. The sample was composed of 88 participants (44 men and 44 women) from 55 to 77 years old. The memory assessment consisted of two tests measuring declarative memory (a paragraph recall test and a word list learning test) and two tests measuring working memory (a spatial span test and a spatial working memory test). Among those participants who showed the CAR on two consecutive days, we found that a greater CAR was related to poorer declarative memory performance in both men and women, and to better working memory performance only in men. The results of our study suggest that the relationship between the CAR and memory performance is negative in men and women when memory performance is largely dependent on hippocampal functioning (i.e. declarative memory), and positive, but only in men, when memory performance is largely dependent on prefrontal cortex functioning (i.e. working memory). Copyright © 2012 Elsevier Ltd. All rights reserved.
Toward Enhancing OpenMP's Work-Sharing Directives
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chapman, B M; Huang, L; Jin, H
2006-05-17
OpenMP provides a portable programming interface for shared memory parallel computers (SMPs). Although this interface has proven successful for small SMPs, it requires greater flexibility in light of the steadily growing size of individual SMPs and the recent advent of multithreaded chips. In this paper, we describe two application development experiences that exposed these expressivity problems in the current OpenMP specification. We then propose mechanisms to overcome these limitations, including thread subteams and thread topologies. Thus, we identify language features that improve OpenMP application performance on emerging and large-scale platforms while preserving ease of programming.
Deterministic Generation of All-Photonic Quantum Repeaters from Solid-State Emitters
NASA Astrophysics Data System (ADS)
Buterakos, Donovan; Barnes, Edwin; Economou, Sophia E.
2017-10-01
Quantum repeaters are nodes in a quantum communication network that allow reliable transmission of entanglement over large distances. It was recently shown that highly entangled photons in so-called graph states can be used for all-photonic quantum repeaters, which require substantially fewer resources compared to atomic-memory-based repeaters. However, standard approaches to building multiphoton entangled states through pairwise probabilistic entanglement generation severely limit the size of the state that can be created. Here, we present a protocol for the deterministic generation of large photonic repeater states using quantum emitters such as semiconductor quantum dots and defect centers in solids. We show that arbitrarily large repeater states can be generated using only one emitter coupled to a single qubit, potentially reducing the necessary number of photon sources by many orders of magnitude. Our protocol includes a built-in redundancy, which makes it resilient to photon loss.
Sparse distributed memory prototype: Principles of operation
NASA Technical Reports Server (NTRS)
Flynn, Michael J.; Kanerva, Pentti; Ahanin, Bahram; Bhadkamkar, Neal; Flaherty, Paul; Hickey, Philip
1988-01-01
Sparse distributed memory is a generalized random access memory (RAM) for long binary words. Such words can be written into and read from the memory, and they can be used to address the memory. The main attribute of the memory is sensitivity to similarity, meaning that a word can be read back not only by giving the original write address but also by giving one close to it, as measured by the Hamming distance between addresses. Large memories of this kind are expected to have wide use in speech and scene analysis, in signal detection and verification, and in adaptive control of automated equipment. The memory can be realized as a simple, massively parallel computer. Digital technology has reached a point where building large memories is becoming practical. The research is aimed at resolving major design issues that have to be faced in building the memories. The design of a prototype memory with 256-bit addresses and from 8K to 128K locations for 256-bit words is described. A key aspect of the design is extensive use of dynamic RAM and other standard components.
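The Hamming-distance addressing described above can be illustrated with a minimal sketch: a word is written into every hard location whose fixed random address lies within an activation radius of the cue, and read back by a majority vote over the counters of the activated locations. All parameters below are illustrative toy values, far smaller than the prototype's 256-bit addresses and 8K-128K locations:

```python
import numpy as np

rng = np.random.default_rng(0)

N_BITS = 64          # word/address length (toy value; prototype uses 256)
N_LOCATIONS = 2000   # hard storage locations
RADIUS = 25          # Hamming-distance activation radius

addresses = rng.integers(0, 2, (N_LOCATIONS, N_BITS))   # fixed random addresses
counters = np.zeros((N_LOCATIONS, N_BITS), dtype=int)   # one counter per bit per location

def activated(addr):
    """Locations whose address is within RADIUS Hamming bits of addr."""
    return np.count_nonzero(addresses != addr, axis=1) <= RADIUS

def write(addr, word):
    sel = activated(addr)
    counters[sel] += np.where(word == 1, 1, -1)   # increment for 1-bits, decrement for 0-bits

def read(addr):
    sel = activated(addr)
    sums = counters[sel].sum(axis=0)
    return (sums > 0).astype(int)                 # majority vote over activated locations

word = rng.integers(0, 2, N_BITS)
write(word, word)                                 # autoassociative store: address = content

noisy = word.copy()
noisy[:5] ^= 1                                    # flip 5 bits of the read cue
print(np.count_nonzero(read(noisy) != word))      # bit errors in the recovered word
```

Because the activation regions of nearby addresses overlap heavily, a cue with a few flipped bits still activates most of the locations the original write touched, which is what makes read-back tolerant to approximate addresses.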
Speech Perception and Short Term Memory Deficits in Persistent Developmental Speech Disorder
Kenney, Mary Kay; Barac-Cikoja, Dragana; Finnegan, Kimberly; Jeffries, Neal; Ludlow, Christy L.
2008-01-01
Children with developmental speech disorders may have additional deficits in speech perception and/or short-term memory. To determine whether these are only transient developmental delays that can accompany the disorder in childhood or persist as part of the speech disorder, adults with a persistent familial speech disorder were tested on speech perception and short-term memory. Nine adults with a persistent familial developmental speech disorder without language impairment were compared with 20 controls on tasks requiring the discrimination of fine acoustic cues for word identification and on measures of verbal and nonverbal short-term memory. Significant group differences were found in the slopes of the discrimination curves for first formant transitions for word identification with stop gaps of 40 and 20 ms with effect sizes of 1.60 and 1.56. Significant group differences also occurred on tests of nonverbal rhythm and tonal memory, and verbal short-term memory with effect sizes of 2.38, 1.56 and 1.73. No group differences occurred in the use of stop gap durations for word identification. Because frequency-based speech perception and short-term verbal and nonverbal memory deficits both persisted into adulthood in the speech-impaired adults, these deficits may be involved in the persistence of speech disorders without language impairment. PMID:15896836
How Does Knowledge Promote Memory? The Distinctiveness Theory of Skilled Memory
ERIC Educational Resources Information Center
Rawson, Katherine A.; Van Overschelde, James P.
2008-01-01
The robust effects of knowledge on memory for domain-relevant information reported in previous research have largely been attributed to improved organizational processing. The present research proposes the distinctiveness theory of skilled memory, which states that knowledge improves memory not only through improved organizational processing but…
[Subjective memory complaints, personality and prefrontal symptomatology in young adults].
Pedrero-Pérez, Eduardo J; Ruiz-Sánchez de León, José M
2013-10-01
This work explores two issues related to the appearance of subjective memory complaints in young adults: on the one hand, the possibility that the complaints result from attentional and executive deficits and, on the other, whether certain personality characteristics favour and modulate the clinical expression of these complaints. The Memory Failures of Everyday questionnaire (Spanish version), the Prefrontal Symptoms Inventory and the Revised Temperament and Character Inventory were administered to a sample of 1132 participants (900 from the general population and 232 on treatment for drug addiction). The correlations among memory complaints, prefrontal functioning in daily life and the dimensions of personality proposed by Cloninger were explored. The causal relationships among the variables were studied using structural methods. A strong correlation was observed between cognitive complaints and prefrontal symptoms, suggesting that the complaints are, in fact, a result of inadequate management of the attentional and executive functions that favours daily errors. A relationship with a large effect size was also observed between cognitive complaints and low self-directedness. This personality dimension offers an important predictive capacity regarding the appearance and intensity of the complaints, either directly or modulated by other dimensions, especially harm avoidance. The data support the idea that memory complaints are the result of the self-perception of daily faults and errors produced at the attentional and executive level (although they are taken as instances of mnemonic oversight) and that the clinical expression of these complaints is modulated by a personality profile.
Set shifting and working memory in adults with attention-deficit/hyperactivity disorder.
Rohlf, Helena; Jucksch, Viola; Gawrilow, Caterina; Huss, Michael; Hein, Jakob; Lehmkuhl, Ulrike; Salbach-Andrae, Harriet
2012-01-01
Compared to the high number of studies that investigated executive functions (EF) in children with attention-deficit/hyperactivity disorder (ADHD), little is known about the EF performance of adults with ADHD. This study compared 37 adults with ADHD (ADHD(total)) and 32 control participants, equivalent in age, intelligence quotient (IQ), sex, and years of education, in two domains of EF: set shifting and working memory. Additionally, the ADHD(total) group was subdivided into two subgroups: ADHD patients without comorbidity (ADHD(-), n = 19) and patients with at least one comorbid disorder (ADHD(+), n = 18). Participants completed two measures of set shifting (the trail making test, TMT, and a computerized card sorting test, CKV) and one measure of working memory (the digit span test, DS). Compared to the control group, the ADHD(total) group displayed deficits in set shifting and working memory. The differences between the groups were of medium-to-large effect size (TMT: d = 0.48; DS: d = 0.51; CKV: d = 0.74). The subgroup comparison of the ADHD(+) and ADHD(-) groups revealed poorer performance in general information processing speed for the ADHD(+) group. With regard to set shifting and working memory, no significant differences were found between the two subgroups. These results suggest that the deficits of the ADHD(total) group are attributable to ADHD rather than to comorbidity. An influence of comorbidity, however, could not be completely ruled out, as there was a trend toward poorer performance in the ADHD(+) group on some of the outcome measures.
Still searching for the engram
Eichenbaum, Howard
2016-01-01
For nearly a century neurobiologists have searched for the engram - the neural representation of a memory. Early studies showed that the engram is widely distributed both within and across brain areas and is supported by interactions among large networks of neurons. Subsequent research has identified engrams that support memory within dedicated functional systems for habit learning and emotional memory, but the engram for declarative memories has been elusive. Nevertheless, recent years have brought progress from molecular biological approaches that identify neurons and networks that are necessary and sufficient to support memory, and from recording approaches and population analyses that characterize the information coded by large neural networks. These new directions offer the promise of revealing the engrams for episodic and semantic memories. PMID:26944423
Evidence for age-associated cognitive decline from Internet game scores.
Geyer, Jason; Insel, Philip; Farzin, Faraz; Sternberg, Daniel; Hardy, Joseph L; Scanlon, Michael; Mungas, Dan; Kramer, Joel; Mackin, R Scott; Weiner, Michael W
2015-06-01
Lumosity's Memory Match (LMM) is an online game requiring visual working memory. Change in LMM scores may be associated with individual differences in age-related changes in working memory. Effects of age and time on LMM learning and forgetting rates were estimated using data from 1890 game sessions for users aged 40 to 79 years. There were significant effects of age on baseline LMM scores (β = -.31, standard error (SE) = .02, P < .0001) and on learning rates (β = -.0066, SE = .0008, P < .0001). A sample size of 202 subjects/arm was estimated for a 1-year study of subjects in the lower quartile of game performance. Online memory games have the potential to identify age-related decline in cognition and to identify subjects at risk for cognitive decline with smaller sample sizes and lower cost than traditional recruitment methods.
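The "202 subjects/arm" figure is the kind of estimate a standard two-sample power calculation produces. A minimal sketch of that textbook calculation follows; this is the generic normal-approximation formula, not necessarily the authors' method, and the effect size fed in is a hypothetical placeholder, not a value from the study:

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(delta, sd, alpha=0.05, power=0.80):
    """Textbook two-sample sample-size formula (normal approximation):
    n = 2 * ((z_{1-alpha/2} + z_{power}) * sd / delta)^2 per arm."""
    z = NormalDist().inv_cdf
    return ceil(2 * ((z(1 - alpha / 2) + z(power)) * sd / delta) ** 2)

# Hypothetical target: detect a 0.28-SD group difference in 1-year
# score change at two-sided alpha = 0.05 with 80% power.
print(n_per_arm(delta=0.28, sd=1.0))
```

Because the required n scales with the inverse square of the detectable difference, halving the difference roughly quadruples the sample size; this is one reason targeting subgroups with larger expected change (such as low-performing users) can shrink a trial.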
Nonlinear machine learning and design of reconfigurable digital colloids.
Long, Andrew W; Phillips, Carolyn L; Jankowski, Eric; Ferguson, Andrew L
2016-09-14
Digital colloids, a cluster of freely rotating "halo" particles tethered to the surface of a central particle, were recently proposed as ultra-high density memory elements for information storage. Rational design of these digital colloids for memory storage applications requires a quantitative understanding of the thermodynamic and kinetic stability of the configurational states within which information is stored. We apply nonlinear machine learning to Brownian dynamics simulations of these digital colloids to extract the low-dimensional intrinsic manifold governing digital colloid morphology, thermodynamics, and kinetics. By modulating the relative size ratio between halo particles and central particles, we investigate the size-dependent configurational stability and transition kinetics for the 2-state tetrahedral (N = 4) and 30-state octahedral (N = 6) digital colloids. We demonstrate the use of this framework to guide the rational design of a memory storage element to hold a block of text that trades off the competing design criteria of memory addressability and volatility.
NASA Astrophysics Data System (ADS)
Wu, Yueqian; Yang, Minglin; Sheng, Xinqing; Ren, Kuan Fang
2015-05-01
Light scattering properties of absorbing particles, such as mineral dusts, attract wide attention due to their importance in geophysical and environmental research. Due to the absorbing effect, light scattering properties of particles with absorption differ from those without absorption. Simple-shaped absorbing particles such as spheres and spheroids have been well studied with different methods, but little work on large complex-shaped particles has been reported. In this paper, the Surface Integral Equation (SIE) method with the Multilevel Fast Multipole Algorithm (MLFMA) is applied to study scattering properties of large non-spherical absorbing particles. SIEs are carefully discretized with piecewise linear basis functions on triangle patches to model the whole surface of the particle, hence computation resource needs increase much more slowly with the particle size parameter than for volume-discretized methods. To further improve its capability, MLFMA is parallelized with the Message Passing Interface (MPI) on a distributed-memory computer platform. Without loss of generality, we choose the computation of scattering matrix elements of absorbing dust particles as an example. The comparison of the scattering matrix elements computed by our method and the discrete dipole approximation (DDA) method for an ellipsoidal dust particle shows that the precision of our method is very good. The scattering matrix elements of large ellipsoidal dusts with different aspect ratios and size parameters are computed. To show the capability of the presented algorithm for complex-shaped particles, scattering by an asymmetric Chebyshev particle with size parameter larger than 600, complex refractive index m = 1.555 + 0.004i, and different orientations is studied.
Baijal, Shruti; Nakatani, Chie; van Leeuwen, Cees; Srinivasan, Narayanan
2013-06-07
Human observers show remarkable efficiency in statistical estimation; they are able, for instance, to estimate the mean size of visual objects, even if their number exceeds the capacity limits of focused attention. This ability has been understood as the result of a distinct mode of attention, i.e. distributed attention. Compared to the focused attention mode, working memory representations under distributed attention are proposed to be more compressed, leading to reduced working memory loads. An alternate proposal is that distributed attention uses less structured, feature-level representations. These would fill up working memory (WM) more, even when target set size is low. Using event-related potentials, we compared WM loading in a typical distributed attention task (mean size estimation) to that in a corresponding focused attention task (object recognition), using a measure called contralateral delay activity (CDA). Participants performed both tasks on 2, 4, or 8 different-sized target disks. In the recognition task, CDA amplitude increased with set size; notably, however, in the mean estimation task the CDA amplitude was high regardless of set size. In particular, for set size 2, the amplitude was higher in the mean estimation task than in the recognition task. The results showed that mean estimation involves full WM loading even with a low target set size. This suggests that in the distributed attention mode, representations are not compressed, but rather less structured than under focused attention conditions. Copyright © 2012 Elsevier Ltd. All rights reserved.
Impairing existing declarative memory in humans by disrupting reconsolidation
Chan, Jason C. K.; LaPaglia, Jessica A.
2013-01-01
During the past decade, a large body of research has shown that memory traces can become labile upon retrieval and must be restabilized. Critically, interrupting this reconsolidation process can abolish a previously stable memory. Although a large number of studies have demonstrated this reconsolidation associated amnesia in nonhuman animals, the evidence for its occurrence in humans is far less compelling, especially with regard to declarative memory. In fact, reactivating a declarative memory often makes it more robust and less susceptible to subsequent disruptions. Here we show that existing declarative memories can be selectively impaired by using a noninvasive retrieval–relearning technique. In six experiments, we show that this reconsolidation-associated amnesia can be achieved 48 h after formation of the original memory, but only if relearning occurred soon after retrieval. Furthermore, the amnesic effect persists for at least 24 h, cannot be attributed solely to source confusion and is attainable only when relearning targets specific existing memories for impairment. These results demonstrate that human declarative memory can be selectively rewritten during reconsolidation. PMID:23690586
McMorris, Terry; Sproule, John; Turner, Anthony; Hale, Beverley J
2011-03-01
The purpose of this study was to compare, using meta-analytic techniques, the effect of acute, intermediate intensity exercise on the speed and accuracy of performance of working memory tasks. It was hypothesized that acute, intermediate intensity exercise would have a significant beneficial effect on response time and that effect sizes for response time and accuracy data would differ significantly. Random-effects meta-analysis showed a significant, beneficial effect size for response time, g=-1.41 (p<0.001) but a significant detrimental effect size, g=0.40 (p<0.01), for accuracy. There was a significant difference between effect sizes (Z(diff)=3.85, p<0.001). It was concluded that acute, intermediate intensity exercise has a strong beneficial effect on speed of response in working memory tasks but a low to moderate, detrimental one on accuracy. There was no support for a speed-accuracy trade-off. It was argued that exercise-induced increases in brain concentrations of catecholamines result in faster processing but increases in neural noise may negatively affect accuracy. 2010 Elsevier Inc. All rights reserved.
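The pooled values reported above (g = -1.41 for response time, g = 0.40 for accuracy) come from standardized mean differences combined under a random-effects model. A generic sketch of that machinery follows (Hedges' g plus DerSimonian-Laird pooling); the study summaries fed in are hypothetical placeholders, not data from this meta-analysis:

```python
import math

def hedges_g(m1, m2, sd1, sd2, n1, n2):
    """Bias-corrected standardized mean difference (Hedges' g)."""
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    j = 1 - 3 / (4 * (n1 + n2) - 9)            # small-sample correction factor
    return j * (m1 - m2) / sp

def random_effects(gs, vs):
    """DerSimonian-Laird random-effects pooled estimate of study effect
    sizes gs with within-study variances vs."""
    w = [1 / v for v in vs]
    fixed = sum(wi * gi for wi, gi in zip(w, gs)) / sum(w)
    q = sum(wi * (gi - fixed) ** 2 for wi, gi in zip(w, gs))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(gs) - 1)) / c)   # between-study variance estimate
    w_re = [1 / (v + tau2) for v in vs]
    return sum(wi * gi for wi, gi in zip(w_re, gs)) / sum(w_re)

# Hypothetical single study: exercise vs control response times (ms).
g1 = hedges_g(510, 560, 70, 75, 24, 24)
print(round(g1, 2))

# Hypothetical study-level effect sizes and variances for pooling.
print(round(random_effects([-1.5, -1.2, -1.6], [0.12, 0.08, 0.15]), 2))
```

Faster responses in the exercise group give a negative g, matching the sign convention used in the abstract.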
Is Ginkgo biloba a cognitive enhancer in healthy individuals? A meta-analysis.
Laws, Keith R; Sweetnam, Hilary; Kondel, Tejinder K
2012-11-01
We conducted a meta-analysis to examine whether Ginkgo biloba (G. biloba) enhances cognitive function in healthy individuals. Scopus, Medline, Google Scholar databases and recent qualitative reviews were searched for studies examining the effects of G. biloba on cognitive function in healthy individuals. We identified randomised controlled trials containing data on memory (K = 13), executive function (K = 7) and attention (K = 8) from which effect sizes could be derived. The analyses provided measures of memory, executive function and attention in 1132, 534 and 910 participants, respectively. Effect sizes were non-significant and close to zero for memory (d = -0.04: 95%CI -0.17 to 0.07), executive function (d = -0.05: 95%CI -0.17 to 0.05) and attention (d = -0.08: 95%CI -0.21 to 0.02). Meta-regressions showed that effect sizes were not related to participant age, duration of the trial, daily dose, total dose or sample size. We report that G. biloba had no ascertainable positive effects on a range of targeted cognitive functions in healthy individuals. Copyright © 2012 John Wiley & Sons, Ltd.
Effects of motor congruence on visual working memory.
Quak, Michel; Pecher, Diane; Zeelenberg, Rene
2014-10-01
Grounded-cognition theories suggest that memory shares processing resources with perception and action. The motor system could be used to help memorize visual objects. In two experiments, we tested the hypothesis that people use motor affordances to maintain object representations in working memory. Participants performed a working memory task on photographs of manipulable and nonmanipulable objects. The manipulable objects were objects that required either a precision grip (i.e., small items) or a power grip (i.e., large items) to use. A concurrent motor task that could be congruent or incongruent with the manipulable objects caused no difference in working memory performance relative to nonmanipulable objects. Moreover, the precision- or power-grip motor task did not affect memory performance on small and large items differently. These findings suggest that the motor system plays no part in visual working memory.
NASA Astrophysics Data System (ADS)
Kino, Hisashi; Fukushima, Takafumi; Tanaka, Tetsu
2018-04-01
Charge-trapping memory requires increased bit density per cell and a larger memory window for lower-power operation. A tunnel field-effect transistor (TFET) can increase the bit density per cell owing to its steep subthreshold slope. In addition, a TFET has an asymmetric structure, which is promising for achieving a larger memory window. A TFET with an N-type gate shows a higher electric field between the P-type source and the N-type gate edge than the conventional FET structure. This high electric field enables a large amount of charge to be injected into the charge storage layer. In this study, we fabricated silicon-oxide-nitride-oxide-semiconductor (SONOS) memory devices with the TFET structure and observed a steep subthreshold slope and a larger memory window.
Exploring the Effect of Sleep and Reduced Interference on Different Forms of Declarative Memory
Schönauer, Monika; Pawlizki, Annedore; Köck, Corinna; Gais, Steffen
2014-01-01
Study Objectives: Many studies have found that sleep benefits declarative memory consolidation. However, fundamental questions on the specifics of this effect remain topics of discussion. It is not clear which forms of memory are affected by sleep and whether this beneficial effect is partly mediated by passive protection against interference. Moreover, a putative correlation between the structure of sleep and its memory-enhancing effects is still being discussed. Design: In three experiments, we tested whether sleep differentially affects various forms of declarative memory. We varied verbal content (verbal/nonverbal), item type (single/associate), and recall mode (recall/recognition, cued/free recall) to examine the effect of sleep on specific memory subtypes. We compared within-subject differences in memory consolidation between intervals including sleep, active wakefulness, or quiet meditation, which reduced external as well as internal interference and rehearsal. Participants: Forty healthy adults aged 18–30 y, and 17 healthy adults aged 24–55 y with extensive meditation experience participated in the experiments. Results: All types of memory were enhanced by sleep if the sample size provided sufficient statistical power. Smaller sample sizes showed an effect of sleep if a combined measure of different declarative memory scales was used. In a condition with reduced external and internal interference, performance was equal to one with high interference. Here, memory consolidation was significantly lower than in a sleep condition. We found no correlation between sleep structure and memory consolidation. Conclusions: Sleep does not preferentially consolidate a specific kind of declarative memory, but consistently promotes overall declarative memory formation. This effect is not mediated by reduced interference. Citation: Schönauer M, Pawlizki A, Köck C, Gais S. Exploring the effect of sleep and reduced interference on different forms of declarative memory. 
SLEEP 2014;37(12):1995-2007. PMID:25325490
NASA Astrophysics Data System (ADS)
Murguia, Silvia Briseño; Clauser, Arielle; Dunn, Heather; Fisher, Wendy; Snir, Yoav; Brennan, Raymond E.; Young, Marcus L.
2018-04-01
Shape memory alloys (SMAs) are of high interest as active, adaptive "smart" materials for applications such as sensors and actuators due to their unique properties, including the shape memory effect and pseudoelasticity. Binary NiTi SMAs have shown the most desirable properties, and consequently have generated the most commercial success. A major challenge for SMAs, in particular, is their well-known compositional sensitivity. Therefore, it is critical to control the powder composition and morphology. In this study, a low-pressure, low-temperature hydriding-pulverization-dehydriding method for preparing well-controlled compositions, size, and size distributions of SMA powders from wires is presented. Starting with three different diameters of as-drawn martensitic NiTi SMA wires, pre-alloyed NiTi powders of various well-controlled sizes are produced by hydrogen charging the wires in a heated H3PO4 solution. After hydrogen charging for different charging times, the wires are pulverized and subsequently dehydrided. The wires and the resulting powders are characterized using scanning electron microscopy, differential scanning calorimetry, and X-ray diffraction. The relationship between the wire diameter and powder size is investigated as a function of hydrogen charging time. The rate of diameter reduction after hydrogen charging of wire is also examined. Finally, the recovery behavior due to the shape memory effect is investigated after dehydriding.
NASA Technical Reports Server (NTRS)
Sohn, Andrew; Biswas, Rupak
1996-01-01
Solving the hard Satisfiability Problem is time consuming even for modest-sized problem instances. Solving the Random L-SAT Problem is especially difficult due to the ratio of clauses to variables. This report presents a parallel synchronous simulated annealing method for solving the Random L-SAT Problem on a large-scale distributed-memory multiprocessor. In particular, we use a parallel synchronous simulated annealing procedure, called Generalized Speculative Computation, which guarantees the same decision sequence as sequential simulated annealing. To demonstrate the performance of the parallel method, we have selected problem instances varying in size from 100-variables/425-clauses to 5000-variables/21,250-clauses. Experimental results on the AP1000 multiprocessor indicate that our approach can satisfy 99.9 percent of the clauses while giving almost a 70-fold speedup on 500 processors.
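The report's Generalized Speculative Computation parallelizes simulated annealing while preserving the sequential decision sequence; as background, a minimal sequential simulated-annealing SAT solver might look like the following sketch (the clause encoding, cooling schedule, and parameters are illustrative assumptions, not the paper's):

```python
import math
import random

def anneal_sat(clauses, n_vars, t0=2.0, cooling=0.999, steps=20000, seed=0):
    """Sequential simulated annealing for a random SAT instance.

    clauses: list of tuples of non-zero ints; literal v refers to variable
    abs(v)-1 and is negated when v < 0. Returns (assignment, #satisfied).
    """
    rng = random.Random(seed)
    assign = [rng.random() < 0.5 for _ in range(n_vars)]

    def n_satisfied(a):
        return sum(any(a[abs(v) - 1] == (v > 0) for v in c) for c in clauses)

    current = n_satisfied(assign)
    temp = t0
    for _ in range(steps):
        i = rng.randrange(n_vars)
        assign[i] = not assign[i]              # propose one variable flip
        proposed = n_satisfied(assign)
        # Accept improvements (and plateau moves) always; accept worse
        # states with Boltzmann probability exp(delta / temp).
        if proposed >= current or rng.random() < math.exp((proposed - current) / temp):
            current = proposed
        else:
            assign[i] = not assign[i]          # reject: undo the flip
        temp = max(temp * cooling, 1e-6)
        if current == len(clauses):
            break                              # all clauses satisfied
    return assign, current
```

Accepting equal-cost moves lets the chain walk across plateaus of the clause-count landscape, which matters for SAT instances with many ties.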
Enhancing Scalability and Efficiency of the TOUGH2_MP for Linux Clusters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Keni; Wu, Yu-Shu
2006-04-17
TOUGH2_MP, the parallel version of the TOUGH2 code, has been enhanced by implementing more efficient communication schemes. This enhancement is achieved by reducing the number of small messages and the volume of large ones. The message exchange speed is further improved by using non-blocking communications for both linear and nonlinear iterations. In addition, we have modified the AZTEC parallel linear-equation solver to use non-blocking communication. Through improved code structuring and bug fixing, the new version of the code is now more stable, while demonstrating similar or even better nonlinear iteration convergence speed than the original TOUGH2 code. As a result, the new version of TOUGH2_MP is significantly more efficient. In this paper, the scalability and efficiency of the parallel code are demonstrated by solving two large-scale problems. The testing results indicate that the speedup of the code may depend on both problem size and complexity. In general, the code has excellent scalability in memory requirements as well as computing time.
Virtual reality measures in neuropsychological assessment: a meta-analytic review.
Neguț, Alexandra; Matu, Silviu-Andrei; Sava, Florin Alin; David, Daniel
2016-02-01
Virtual reality-based assessment is a new paradigm for neuropsychological evaluation that may provide more ecological assessment than paper-and-pencil or computerized neuropsychological testing. Previous research has focused on the use of virtual reality in neuropsychological assessment, but no meta-analysis has examined the sensitivity of virtual reality-based measures in detecting cognitive impairment across various populations. We found eighteen studies that compared cognitive performance between clinical groups and healthy controls on virtual reality measures. Based on a random effects model, the results indicated a large effect size in favor of healthy controls (g = .95). For executive functions, memory and visuospatial analysis, subgroup analysis revealed moderate to large effect sizes, with superior performance in the case of healthy controls. Participants' mean age, type of clinical condition, type of exploration within virtual reality environments, and the presence of distractors were significant moderators. Our findings support the sensitivity of virtual reality-based measures in detecting cognitive impairment. They highlight the possibility of using virtual reality measures for neuropsychological assessment in research applications, as well as in clinical practice.
NASA Technical Reports Server (NTRS)
Kuwata, Yoshiaki; Blackmore, Lars; Wolf, Michael; Fathpour, Nanaz; Newman, Claire; Elfes, Alberto
2009-01-01
Hot air (Montgolfiere) balloons represent a promising vehicle system for possible future exploration of planets and moons with thick atmospheres such as Venus and Titan. To go to a desired location, this vehicle can primarily use the horizontal wind that varies with altitude, with a small amount of help from its own actuation. A main challenge is how to plan such a trajectory in a highly nonlinear and time-varying wind field. This paper poses this trajectory planning as a graph search on a space-time grid and addresses its computational aspects. When capturing the various time scales involved in the wind field over the duration of a long exploration mission, the size of the graph becomes excessively large. We show that the adjacency matrix of the graph is block-triangular, and by exploiting this structure, we decompose the large planning problem into several smaller subproblems, whose memory requirement stays almost constant as the problem size grows. The approach is demonstrated on a global reachability analysis of a possible Titan mission scenario.
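The block-triangular structure means edges only point forward in time, so the search can sweep the space-time grid one time layer at a time, keeping just two cost layers in memory regardless of horizon length. A minimal sketch of this idea (the `edges` callback and grid encoding are hypothetical, not the paper's implementation):

```python
def plan_time_expanded(n_cells, horizon, edges, start, goal):
    """Shortest path on a time-expanded grid where edges only point
    forward in time (block-triangular adjacency), so only two cost
    layers are ever held in memory.

    edges(t) -> iterable of (cell_from, cell_to, cost) transitions valid
    from layer t to layer t+1 (e.g. wind-dependent moves).
    """
    INF = float("inf")
    cost = [INF] * n_cells
    cost[start] = 0.0
    for t in range(horizon):
        nxt = [INF] * n_cells          # next layer only; prior layers freed
        for u, v, w in edges(t):
            if cost[u] + w < nxt[v]:
                nxt[v] = cost[u] + w
        cost = nxt
    return cost[goal]
```

Because each layer depends only on the previous one, memory use is proportional to the number of spatial cells, not to the full space-time graph.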
Working memory for visual features and conjunctions in schizophrenia.
Gold, James M; Wilk, Christopher M; McMahon, Robert P; Buchanan, Robert W; Luck, Steven J
2003-02-01
The visual working memory (WM) storage capacity of patients with schizophrenia was investigated using a change detection paradigm. Participants were presented with 2, 3, 4, or 6 colored bars with testing of both single feature (color, orientation) and feature conjunction conditions. Patients performed significantly worse than controls at all set sizes but demonstrated normal feature binding. Unlike controls, patient WM capacity declined at set size 6 relative to set size 4. Impairments with subcapacity arrays suggest a deficit in task set maintenance: Greater impairment for supercapacity set sizes suggests a deficit in the ability to selectively encode information for WM storage. Thus, the WM impairment in schizophrenia appears to be a consequence of attentional deficits rather than a reduction in storage capacity.
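Capacity in change-detection tasks of this kind is commonly summarized with Cowan's K, K = N × (H − FA), where N is set size, H the hit rate, and FA the false-alarm rate. A small sketch with made-up rates (the paper does not report these exact values) showing how an estimate can decline at a supercapacity set size:

```python
def cowan_k(set_size, hit_rate, false_alarm_rate):
    """Cowan's K capacity estimate for single-probe change detection:
    K = N * (H - FA)."""
    return set_size * (hit_rate - false_alarm_rate)

# Illustrative (invented) rates: the estimate can drop at set size 6
# relative to set size 4, the pattern reported for patients.
k4 = cowan_k(4, 0.85, 0.15)
k6 = cowan_k(6, 0.60, 0.20)
```

A decline of K with larger arrays, rather than a plateau, is what distinguishes an encoding-selection deficit from a simple storage limit.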
Dot size effects of nanocrystalline germanium on charging dynamics of memory devices
2013-01-01
The effect of the dot size of nanocrystalline germanium (NC Ge) on the charging dynamics of memory devices has been theoretically investigated. The calculations demonstrate that the charge stored in the NC Ge layer and the charging current at a given oxide voltage depend on the dot size, especially at sizes of a few nanometers. They have also been found to follow a tendency of initial increase, then saturation, and finally decrease with increasing dot size at any given charging time, which is caused by a compromise between the effects of the lowest conduction states and of the capacitance of the NC Ge layer on the tunneling. Experimental data from the literature have also been used to compare with and validate the theoretical analysis. PMID:23305228
Searching for the right word: Hybrid visual and memory search for words
Boettcher, Sage E. P.; Wolfe, Jeremy M.
2016-01-01
In “Hybrid Search” (Wolfe, 2012), observers search through visual space for any of multiple targets held in memory. With photorealistic objects as stimuli, response times (RTs) increase linearly with the visual set size and logarithmically with memory set size, even when over 100 items are committed to memory. It is well established that pictures of objects are particularly easy to memorize (Brady, Konkle, Alvarez, & Oliva, 2008). Would hybrid search performance be similar if the targets were words or phrases, where word order can be important and where the processes of memorization might be different? In Experiment One, observers memorized 2, 4, 8, or 16 words in 4 different blocks. After passing a memory test confirming memorization of the list, observers searched for these words in visual displays containing 2 to 16 words. Replicating Wolfe (2012), RTs increased linearly with the visual set size and logarithmically with the length of the word list. The word lists of Experiment One were random. In Experiment Two, words were drawn from phrases that observers reported knowing by heart (e.g., “London Bridge is falling down”). Observers were asked to provide four phrases ranging in length from 2 words to a phrase of no fewer than 20 words (range 21–86). Words longer than 2 characters from the phrase constituted the target list. Distractor words were matched for length and frequency. Even with these strongly ordered lists, results again replicated the curvilinear function of memory set size seen in hybrid search. One might expect serial position effects, perhaps reducing RTs for the first (primacy) and/or last (recency) members of a list (Atkinson & Shiffrin, 1968; Murdock, 1962). Surprisingly, we found no reliable effects of word order. Thus, in “London Bridge is falling down”, “London” and “down” are found no faster than “falling”. PMID:25788035
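The reported pattern (RT linear in visual set size, logarithmic in memory set size) can be written as a simple descriptive model; the intercept and slope values below are invented for illustration, not fitted to the study's data:

```python
import math

def hybrid_search_rt(visual_n, memory_n, base=500.0, v_slope=40.0, m_slope=120.0):
    """Illustrative hybrid-search response-time model (ms):
    RT = base + v_slope * visual_n + m_slope * log2(memory_n).
    All coefficients are made-up placeholders, not fitted values."""
    return base + v_slope * visual_n + m_slope * math.log2(memory_n)
```

Doubling the memory set adds a constant increment under this model, whereas doubling the visual display doubles its contribution, which is the signature contrast the experiments replicate.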
No Evidence for an Item Limit in Change Detection (Open Access)
2013-02-28
Item-limit models hold that visual working memory can store no more than a fixed number of items. Recent findings force us to consider the alternative view that working memory is limited by the precision of stimulus encoding, with mean precision decreasing as set size increases (continuous-resource models).
Out-of-Core Streamline Visualization on Large Unstructured Meshes
NASA Technical Reports Server (NTRS)
Ueng, Shyh-Kuang; Sikorski, K.; Ma, Kwan-Liu
1997-01-01
It's advantageous for computational scientists to have the capability to perform interactive visualization on their desktop workstations. For data on large unstructured meshes, this capability is not generally available. In particular, particle tracing on unstructured grids can result in a high percentage of non-contiguous memory accesses and therefore may perform very poorly with virtual memory paging schemes. The alternative of visualizing a lower resolution of the data degrades the original high-resolution calculations. This paper presents an out-of-core approach for interactive streamline construction on large unstructured tetrahedral meshes containing millions of elements. The out-of-core algorithm uses an octree to partition and restructure the raw data into subsets stored into disk files for fast data retrieval. A memory management policy tailored to the streamline calculations is used such that during the streamline construction only a very small amount of data are brought into the main memory on demand. By carefully scheduling computation and data fetching, the overhead of reading data from the disk is significantly reduced and good memory performance results. This out-of-core algorithm makes possible interactive streamline visualization of large unstructured-grid data sets on a single mid-range workstation with relatively low main-memory capacity: 5-20 megabytes. Our test results also show that this approach is much more efficient than relying on virtual memory and operating system's paging algorithms.
Decoding memory features from hippocampal spiking activities using sparse classification models.
Dong Song; Hampson, Robert E; Robinson, Brian S; Marmarelis, Vasilis Z; Deadwyler, Sam A; Berger, Theodore W
2016-08-01
To understand how memory information is encoded in the hippocampus, we build classification models to decode memory features from hippocampal CA3 and CA1 spatio-temporal patterns of spikes recorded from epilepsy patients performing a memory-dependent delayed match-to-sample task. The classification model consists of a set of B-spline basis functions for extracting memory features from the spike patterns, and a sparse logistic regression classifier for generating binary categorical output of memory features. Results show that the classification models can extract a significant amount of memory information with respect to the type of memory task and the category of sample images used in the task, despite the high level of variability in prediction accuracy due to the small sample size. These results support the hypothesis that memories are encoded in hippocampal activity and have important implications for the development of hippocampal memory prostheses.
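The B-spline feature extraction rests on the standard Cox-de Boor recursion; a minimal sketch (the knot vector and order below are illustrative, not the paper's settings):

```python
def bspline_basis(i, k, t, knots):
    """Cox-de Boor recursion: value of the i-th B-spline basis function
    of order k (degree k-1) at t, for a given non-decreasing knot vector."""
    if k == 1:
        # Order-1 basis: indicator of the half-open knot interval.
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    out = 0.0
    d1 = knots[i + k - 1] - knots[i]
    if d1 > 0:
        out += (t - knots[i]) / d1 * bspline_basis(i, k - 1, t, knots)
    d2 = knots[i + k] - knots[i + 1]
    if d2 > 0:
        out += (knots[i + k] - t) / d2 * bspline_basis(i + 1, k - 1, t, knots)
    return out
```

Evaluating each basis function against a spike train's timestamps yields the smooth temporal features that the sparse classifier then weights.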
Shared versus distributed memory multiprocessors
NASA Technical Reports Server (NTRS)
Jordan, Harry F.
1991-01-01
The question of whether multiprocessors should have shared or distributed memory has attracted a great deal of attention. Some researchers argue strongly for building distributed memory machines, while others argue just as strongly for programming shared memory multiprocessors. A great deal of research is underway on both types of parallel systems. Special emphasis is placed on systems with a very large number of processors for computation intensive tasks and considers research and implementation trends. It appears that the two types of systems will likely converge to a common form for large scale multiprocessors.
A new variable-resolution associative memory for high energy physics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Annovi, A.; Amerio, S.; Beretta, M.
2011-07-01
We describe an important advancement for the Associative Memory device (AM). The AM is a VLSI processor for pattern recognition based on Content Addressable Memory (CAM) architecture. The AM is optimized for on-line track finding in high-energy physics experiments. Pattern matching is carried out by finding track candidates in coarse resolution 'roads'. A large AM bank stores all trajectories of interest, called 'patterns', for a given detector resolution. The AM extracts roads compatible with a given event during detector read-out. Two important variables characterize the quality of the AM bank: its 'coverage' and the level of fake roads. The coverage, which describes the geometric efficiency of a bank, is defined as the fraction of tracks that match at least one pattern in the bank. Given a certain road size, the coverage of the bank can be increased simply by adding patterns to the bank, while the number of fakes unfortunately is roughly proportional to the number of patterns in the bank. Moreover, as the luminosity increases, the fake rate increases rapidly because of the increased silicon occupancy. To counter that, we must reduce the width of our roads. If we decrease the road width using the current technology, the system will become very large and extremely expensive. We propose an elegant solution to this problem: the 'variable resolution patterns'. Each pattern and each detector layer within a pattern will be able to use the optimal width, but we will use a 'don't care' feature (inspired by ternary CAMs) to increase the width when that is more appropriate. In other words, we can use patterns of variable shape. As a result we reduce the number of fake roads, while keeping the efficiency high and avoiding excessive bank size due to the reduced width. We describe the idea, the implementation in the new AM design, and the implementation of the algorithm in the simulation. Finally, we show the effectiveness of the 'variable resolution patterns' idea using simulated high-occupancy events in the ATLAS detector.
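The 'don't care' matching borrowed from ternary CAMs can be illustrated in a few lines: each layer of a pattern carries a care mask, and clearing low-order bits widens that layer's effective road. This is only a conceptual sketch, not the AM chip logic:

```python
def ternary_match(pattern, road):
    """Ternary-CAM-style match: pattern is one (value, care_mask) pair per
    detector layer; bits cleared in care_mask are 'don't care', so two or
    more adjacent fine-grained bins map onto one wider road."""
    return all((r & m) == (p & m) for (p, m), r in zip(pattern, road))

# Layer 0 matches at full resolution; layer 1 ignores its lowest bit,
# doubling that layer's effective road width.
pattern = [(0b1010, 0b1111), (0b1100, 0b1110)]
```

Varying the care mask per layer is exactly what lets each layer use its own optimal width within a single pattern.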
SU-E-T-395: Multi-GPU-Based VMAT Treatment Plan Optimization Using a Column-Generation Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tian, Z; Shi, F; Jia, X
Purpose: GPU has been employed to speed up VMAT optimizations from hours to minutes. However, its limited memory capacity makes it difficult to handle cases with a huge dose-deposition-coefficient (DDC) matrix, e.g. those with a large target size, multiple arcs, small beam angle intervals and/or small beamlet size. We propose multi-GPU-based VMAT optimization to solve this memory issue and make GPU-based VMAT more practical for clinical use. Methods: Our column-generation-based method generates apertures sequentially by iteratively searching for an optimal feasible aperture (referred to as the pricing problem, PP) and optimizing aperture intensities (referred to as the master problem, MP). The PP requires access to the large DDC matrix, which is implemented on a multi-GPU system. Each GPU stores a DDC sub-matrix corresponding to one fraction of beam angles and is only responsible for calculation related to those angles. Broadcast and parallel reduction schemes are adopted for inter-GPU data transfer. MP is a relatively small-scale problem and is implemented on one GPU. One head-and-neck cancer case was used for testing. Three different strategies for VMAT optimization on a single GPU were also implemented for comparison: (S1) truncating the DDC matrix to ignore its small-value entries for optimization; (S2) transferring the DDC matrix part by part to the GPU during optimization whenever needed; (S3) moving DDC-matrix-related calculation onto the CPU. Results: Our multi-GPU-based implementation reaches a good plan within 1 minute. Although S1 was 10 seconds faster than our method, the obtained plan quality is worse. Both S2 and S3 handle the full DDC matrix and hence yield the same plan as our method. However, the computation time is longer, namely 4 minutes and 30 minutes, respectively. Conclusion: Our multi-GPU-based VMAT optimization can effectively solve the limited memory issue with good plan quality and high efficiency, making GPU-based ultra-fast VMAT planning practical for real clinical use.
Memory for pure tone sequences without contour.
Lefebvre, Christine; Jolicœur, Pierre
2016-06-01
We presented pure tones interspersed with white noise sounds to disrupt contour perception in an acoustic short-term memory (ASTM) experiment during which we recorded the electroencephalogram. The memory set consisted of seven stimuli, 0, 1, 2, 3, or 4 of which were to-be-remembered tones. We estimated each participant's capacity, K, for each set size and measured the amplitude of the SAN (sustained anterior negativity, an ERP related to acoustic short-term memory). We correlated their K slopes with their SAN amplitude slopes as a function of set size, and found a significant link between performance and the SAN: a larger increase in SAN amplitude was linked with a larger number of stimuli maintained in ASTM. The SAN decreased in amplitude in the later portion of the silent retention interval, but the correlation between the SAN and capacity remained strong. These results show the SAN is not an index of contour but rather an index of the maintenance of individual objects in STM. This article is part of a Special Issue entitled SI: Auditory working memory. Copyright © 2016 Elsevier B.V. All rights reserved.
Korrelboom, Kees; Marissen, Marlies; van Assendelft, Tanja
2011-01-01
Self-esteem is a major concern in the treatment of patients with personality disorders in general. In patients with borderline personality disorder, low self-esteem is associated with factors contributing to suicidal and self-injurious behaviour. At the moment there are no well-proven interventions that specifically target low self-esteem. Recently, a new approach, Competitive Memory Training or COMET, aimed at the enhancement of retrieving beneficial information from memory, appeared to be successful in addressing low self-esteem in different patient populations. To assess whether COMET for low self-esteem is also an effective intervention for patients with personality disorders. 91 patients with personality disorders who were already in therapy in a regular mental health institution were randomly assigned to either 7 group sessions of COMET in addition to their regular therapy or to 7 weeks of ongoing regular therapy. These latter patients received COMET after their “7 weeks waiting period for COMET”. All patients that completed COMET were contacted 3 months later to assess whether the effects of COMET had remained stable. Compared to the patients who received regular therapy only, patients in the COMET + regular therapy condition improved significantly and with large effect sizes on indices of self-esteem and depression. Significant differential improvements on measures of autonomy and social optimism were also in favour of COMET, but had small to intermediate effect sizes. The therapeutic effects of COMET remained stable after 3 months on three out of the four outcome measures. COMET for low self-esteem seems to be an efficacious trans-diagnostic approach that can rather easily be implemented in the treatment of patients with personality disorders.
Chen, R S; Nadkarni, P; Marenco, L; Levin, F; Erdos, J; Miller, P L
2000-01-01
The entity-attribute-value representation with classes and relationships (EAV/CR) provides a flexible and simple database schema to store heterogeneous biomedical data. In certain circumstances, however, the EAV/CR model is known to retrieve data less efficiently than conventionally based database schemas. To perform a pilot study that systematically quantifies performance differences for database queries directed at real-world microbiology data modeled with EAV/CR and conventional representations, and to explore the relative merits of different EAV/CR query implementation strategies. Clinical microbiology data obtained over a ten-year period were stored using both database models. Query execution times were compared for four clinically oriented attribute-centered and entity-centered queries operating under varying conditions of database size and system memory. The performance characteristics of three different EAV/CR query strategies were also examined. Performance was similar for entity-centered queries in the two database models. Performance in the EAV/CR model was approximately three to five times less efficient than its conventional counterpart for attribute-centered queries. The differences in query efficiency became slightly greater as database size increased, although they were reduced with the addition of system memory. The authors found that EAV/CR queries formulated using multiple, simple SQL statements executed in batch were more efficient than single, large SQL statements. This paper describes a pilot project to explore issues in and compare query performance for EAV/CR and conventional database representations. Although attribute-centered queries were less efficient in the EAV/CR model, these inefficiencies may be addressable, at least in part, by the use of more powerful hardware or more memory, or both.
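The contrast between the two representations can be sketched with an in-memory SQLite database; the tables and attribute names below are hypothetical stand-ins, not the paper's actual microbiology schema. An attribute-centered query touches one column in the conventional table but must filter on both attribute name and value in the EAV table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
# Conventional schema: one column per attribute.
cur.execute("CREATE TABLE conv (id INTEGER PRIMARY KEY, organism TEXT, drug TEXT)")
# EAV schema: one row per (entity, attribute, value) triple.
cur.execute("CREATE TABLE eav (id INTEGER, attr TEXT, val TEXT)")

isolates = [(1, "E. coli", "ampicillin"), (2, "S. aureus", "oxacillin")]
cur.executemany("INSERT INTO conv VALUES (?, ?, ?)", isolates)
for rid, org, drug in isolates:
    cur.executemany("INSERT INTO eav VALUES (?, ?, ?)",
                    [(rid, "organism", org), (rid, "drug", drug)])

# Attribute-centered query ("which isolates are E. coli?"): a single
# column predicate conventionally, but an attr+val predicate in EAV.
conv_ids = [r[0] for r in cur.execute(
    "SELECT id FROM conv WHERE organism = ?", ("E. coli",))]
eav_ids = [r[0] for r in cur.execute(
    "SELECT id FROM eav WHERE attr = 'organism' AND val = ?", ("E. coli",))]
```

Both queries return the same entities; the EAV form simply does more row filtering per answer, which is one source of the performance gap the pilot study quantifies.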
Memory transfer optimization for a lattice Boltzmann solver on Kepler architecture nVidia GPUs
NASA Astrophysics Data System (ADS)
Mawson, Mark J.; Revell, Alistair J.
2014-10-01
The Lattice Boltzmann method (LBM) for solving fluid flow is naturally well suited to an efficient implementation for massively parallel computing, due to the prevalence of local operations in the algorithm. This paper presents and analyses the performance of a 3D lattice Boltzmann solver, optimized for third-generation nVidia GPU hardware, also known as 'Kepler'. We provide a review of previous optimization strategies and analyse data read/write times for different memory types. In LBM, the time propagation step, known as streaming, involves shifting data to adjacent locations and is central to parallel performance; here we examine three approaches which make use of different hardware options. Two of these make use of 'performance-enhancing' features of the GPU: shared memory and the new shuffle instruction found in Kepler-based GPUs. These are compared to a standard transfer of data which relies instead on optimized storage to increase coalesced access. It is shown that the simpler approach is most efficient; since the need for large numbers of registers per thread in LBM limits the block size, the efficiency of these special features is reduced. Detailed results are obtained for a D3Q19 LBM solver, which is benchmarked on nVidia K5000M and K20C GPUs. In the latter case the use of a read-only data cache is explored, and peak performance of over 1036 Million Lattice Updates Per Second (MLUPS) is achieved. The appearance of a periodic bottleneck in the solver performance is also reported, believed to be hardware related; spikes in iteration time occur with a frequency of around 11 Hz for both GPUs, independent of the size of the problem.
Neuropsychological Profiles on the WAIS-IV of Adults With ADHD.
Theiling, Johanna; Petermann, Franz
2016-11-01
The aim of the study was to investigate the pattern of neuropsychological profiles on the Wechsler Adult Intelligence Scale-IV (WAIS-IV) for adults with ADHD relative to randomly matched controls, and to assess overall intellectual ability discrepancies between the Full Scale Intelligence Quotient (FSIQ) and the General Ability Index (GAI). In all, 116 adults with ADHD and 116 controls between 16 and 71 years were assessed. Relative to controls, adults with ADHD show significant decrements in subtests with working memory and processing speed demands, with moderate to large effect sizes, and a higher GAI in comparison with the FSIQ. This suggests, first, that deficits identified with previous WAIS versions are robust in adults with ADHD and remain evident when assessed with the WAIS-IV; second, that the WAIS-IV reliably differentiates between patients and controls; and third, that a reduction of the FSIQ is most likely due to a decrement in working memory and processing speed abilities. The findings have essential implications for the diagnostic process. © The Author(s) 2014.
Mental simulation of routes during navigation involves adaptive temporal compression
Arnold, Aiden E.G.F.; Iaria, Giuseppe; Ekstrom, Arne D.
2016-01-01
Mental simulation is a hallmark feature of human cognition, allowing features from memories to be flexibly used during prospection. While past studies demonstrate the preservation of real-world features such as size and distance during mental simulation, their temporal dynamics remains unknown. Here, we compare mental simulations to navigation of routes in a large-scale spatial environment to test the hypothesis that such simulations are temporally compressed in an adaptive manner. Our results show that simulations occurred at 2.39x the speed it took to navigate a route, increasing in compression (3.57x) for slower movement speeds. Participant self-reports of vividness and spatial coherence of simulations also correlated strongly with simulation duration, providing an important link between subjective experiences of simulated events and how spatial representations are combined during prospection. These findings suggest that simulation of spatial events involve adaptive temporal mechanisms, mediated partly by the fidelity of memories used to generate the simulation. PMID:27568586
DOE Office of Scientific and Technical Information (OSTI.GOV)
Katti, Amogh; Di Fatta, Giuseppe; Naughton, Thomas
Future extreme-scale high-performance computing systems will be required to work under frequent component failures. The MPI Forum's User Level Failure Mitigation proposal has introduced an operation, MPI_Comm_shrink, to synchronize the alive processes on the list of failed processes, so that applications can continue to execute even in the presence of failures by adopting algorithm-based fault tolerance techniques. This MPI_Comm_shrink operation requires a failure detection and consensus algorithm. This paper presents three novel failure detection and consensus algorithms using gossiping. The proposed algorithms were implemented and tested using the Extreme-scale Simulator. The results show that in all algorithms the number of gossip cycles to achieve global consensus scales logarithmically with system size. The second algorithm also shows better scalability in terms of memory and network bandwidth usage and a perfect synchronization in achieving global consensus. The third approach is a three-phase distributed failure detection and consensus algorithm and provides consistency guarantees even in very large and extreme-scale systems while at the same time being memory and bandwidth efficient.
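As background on why gossip-based dissemination scales logarithmically, a toy push-gossip simulation (not one of the paper's three algorithms) can count the cycles needed for one piece of information, such as a failure notice, to reach all processes:

```python
import random

def gossip_cycles(n, seed=1):
    """Simulate simple push gossip: each informed process contacts one
    uniformly random peer per cycle. Returns the number of cycles until
    all n processes are informed; the informed set at most doubles per
    cycle, so the count grows roughly logarithmically with n."""
    rng = random.Random(seed)
    informed = {0}
    cycles = 0
    while len(informed) < n:
        informed |= {rng.randrange(n) for _ in range(len(informed))}
        cycles += 1
    return cycles
```

Since the informed set can at most double each cycle, at least log2(n) cycles are needed; random collisions add only a modest constant-factor overhead.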
NASA Astrophysics Data System (ADS)
Denisov, O. V.; Buligin, Y. I.; Ponomarev, A. E.; Ponomareva, I. A.; Lebedeva, V. V.
2017-01-01
An important direction in the development of shockproof devices for occupations associated with an increased risk of injury is reducing their overall size while preserving their energy-absorption capacity. A protective fixture for large joints, braced with coils of an elastic-plastic material with a shape memory effect, can effectively protect people from injury and can be used in the domain of occupational safety to reduce injuries from shocks or jolts. In the innovative anti-shock device, the elastic-plastic material applied is an equiatomic titanium-nickel alloy, which has phase-transition temperatures suitable for shape recovery. As a first-approximation experimental model, a shockproof device was adopted that comprises a bandage of coils of elastic-plastic material with a shape memory effect and electric contacts at the ends. This solution allows the device to deform plastically under impact, absorbing the impact energy, and then recover its original shape, in part by means of electric heating.
MoSbTe for high-speed and high-thermal-stability phase-change memory applications
NASA Astrophysics Data System (ADS)
Liu, Wanliang; Wu, Liangcai; Li, Tao; Song, Zhitang; Shi, Jianjun; Zhang, Jing; Feng, Songlin
2018-04-01
Mo-doped Sb1.8Te materials and electrical devices were investigated for high-thermal-stability and high-speed phase-change memory applications. The crystallization temperature (t_c = 185 °C) and 10-year data retention temperature (t_10-year = 112 °C) were greatly enhanced compared with those of Ge2Sb2Te5 (t_c = 150 °C, t_10-year = 85 °C) and pure Sb1.8Te (t_c = 166 °C, t_10-year = 74 °C). X-ray diffraction and transmission electron microscopy results show that the Mo dopant suppresses crystallization, reducing the crystalline grain size. Mo2.0(Sb1.8Te)98.0-based devices were fabricated to evaluate the reversible phase transition properties. SET/RESET with a large operation window can be realized using a 10 ns pulse, which is considerably better than that required for Ge2Sb2Te5 (∼50 ns). Furthermore, ∼1 × 10^6 switching cycles were achieved.
GPU-accelerated computing for Lagrangian coherent structures of multi-body gravitational regimes
NASA Astrophysics Data System (ADS)
Lin, Mingpei; Xu, Ming; Fu, Xiaoyu
2017-04-01
Based on a well-established theoretical foundation, Lagrangian Coherent Structures (LCSs) have elicited widespread research on the intrinsic structures of dynamical systems in many fields, including the field of astrodynamics. Although the application of LCSs in dynamical problems seems straightforward theoretically, its associated computational cost is prohibitive. We propose a block decomposition algorithm developed on Compute Unified Device Architecture (CUDA) platform for the computation of the LCSs of multi-body gravitational regimes. In order to take advantage of GPU's outstanding computing properties, such as Shared Memory, Constant Memory, and Zero-Copy, the algorithm utilizes a block decomposition strategy to facilitate computation of finite-time Lyapunov exponent (FTLE) fields of arbitrary size and timespan. Simulation results demonstrate that this GPU-based algorithm can satisfy double-precision accuracy requirements and greatly decrease the time needed to calculate final results, increasing speed by approximately 13 times. Additionally, this algorithm can be generalized to various large-scale computing problems, such as particle filters, constellation design, and Monte-Carlo simulation.
Pakhomov, Serguei V S; Eberly, Lynn; Knopman, David
2016-08-01
A computational approach for estimating several indices of performance on the animal-category verbal fluency task was validated and examined in a large longitudinal study of aging. The performance indices included the traditional verbal fluency score, the size of semantic clusters, the density of repeated words, and measures of semantic and lexical diversity. Change over time in these measures was modeled using mixed-effects regression in several groups of participants, including those who remained cognitively normal throughout the study (CN) and those who were diagnosed with mild cognitive impairment (MCI) or Alzheimer's disease (AD) dementia at some point after the baseline visit. The results show that, with the exception of mean cluster size, the indices declined significantly more in the MCI and AD dementia groups than in CN participants. Examination of associations between the indices and the cognitive domains of memory, attention, and visuospatial functioning showed that the traditional verbal fluency scores were associated with declines in all three domains, whereas semantic and lexical diversity measures were associated with declines only in the visuospatial domain. Baseline repetition density was associated with declines in the memory and visuospatial domains. Examination of lexical and semantic diversity measures in subgroups with high vs. low attention scores (but normal functioning in other domains) showed that the performance of individuals with low attention was influenced more by word frequency than by the strength of semantic relatedness between words. These findings suggest that automatically derived semantic indices may be used to examine various aspects of cognitive performance affected by dementia. Copyright © 2016 Elsevier Ltd. All rights reserved.
Overview of Non-Volatile Testing and Screening Methods
NASA Technical Reports Server (NTRS)
Irom, Farokh
2001-01-01
Testing methods for memories, including non-volatile memories, have become increasingly sophisticated as the devices become denser and more complex. Higher operating frequencies and faster rewrite times, as well as smaller feature sizes, have led to many testing challenges. This paper outlines several testing issues posed by novel memories, along with approaches to testing for radiation and reliability effects. We discuss methods for measuring Total Ionizing Dose (TID) effects.
BCH codes for large IC random-access memory systems
NASA Technical Reports Server (NTRS)
Lin, S.; Costello, D. J., Jr.
1983-01-01
In this report some shortened BCH codes for possible applications to large IC random-access memory systems are presented. These codes are given by their parity-check matrices. Encoding and decoding of these codes are discussed.
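These codes are specified by parity-check matrices and decoded by syndrome lookup; the same mechanics can be illustrated with the (7,4) Hamming code, the simplest single-error-correcting relative of BCH codes (an illustrative stand-in, not one of the report's shortened codes):

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code: column i is the binary
# representation of i+1, so a nonzero syndrome names the flipped bit directly.
H = np.array([[(i >> k) & 1 for i in range(1, 8)] for k in range(3)])

def correct(word):
    """Syndrome-decode a 7-bit word, correcting at most one flipped bit."""
    word = np.array(word) % 2
    syndrome = H @ word % 2
    pos = int(syndrome[0] + 2*syndrome[1] + 4*syndrome[2])  # 1-based bit index
    if pos:
        word[pos - 1] ^= 1   # flip the erroneous bit back
    return word
```

In a memory system the same syndrome computation is done in hardware on every read, so a single flipped cell is repaired transparently.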
Sparse distributed memory: Principles and operation
NASA Technical Reports Server (NTRS)
Flynn, M. J.; Kanerva, P.; Bhadkamkar, N.
1989-01-01
Sparse distributed memory is a generalized random-access memory (RAM) for long (1,000-bit) binary words. Such words can be written into and read from the memory, and they can also be used to address the memory. The main attribute of the memory is sensitivity to similarity: a word can be read back not only by giving the original write address but also by giving one close to it, as measured by the Hamming distance between addresses. Large memories of this kind are expected to find wide use in speech recognition and scene analysis, in signal detection and verification, and in adaptive control of automated equipment; in general, in dealing with real-world information in real time. The memory can be realized as a simple, massively parallel computer, and digital technology has reached a point where building large memories is becoming practical. The major design issues faced in building such memories were resolved. The design of a prototype memory with 256-bit addresses and from 8K to 128K locations for 256-bit words is described. A key aspect of the design is extensive use of dynamic RAM and other standard components.
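The read/write mechanics can be sketched in a few lines (a minimal software model in the spirit of Kanerva's design, not the prototype hardware described above; the parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

class SDM:
    """Minimal sparse distributed memory: hard locations with fixed random
    addresses, bit counters, and Hamming-radius activation."""

    def __init__(self, n_bits=256, n_locations=2000, radius=112):
        self.addresses = rng.integers(0, 2, (n_locations, n_bits))
        self.counters = np.zeros((n_locations, n_bits), dtype=int)
        self.radius = radius

    def _active(self, address):
        # A location participates if its address is within the Hamming radius.
        dist = np.count_nonzero(self.addresses != np.asarray(address), axis=1)
        return dist <= self.radius

    def write(self, address, word):
        # Counters move +1 for a 1 bit, -1 for a 0 bit, at active locations.
        self.counters[self._active(address)] += 2 * np.asarray(word) - 1

    def read(self, address):
        # Sum counters over active locations and threshold at zero.
        sums = self.counters[self._active(address)].sum(axis=0)
        return (sums > 0).astype(int)
```

Because the write is smeared over many locations, a read from a nearby (noisy) address still activates mostly the same locations and recovers the stored word, which is exactly the similarity sensitivity described above.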
Impact of encoding depth on awareness of perceptual effects in recognition memory.
Gardiner, J M; Gregg, V H; Mashru, R; Thaman, M
2001-04-01
Pictorial stimuli are more likely to be recognized if they are the same size, rather than a different size, at study and at test. This size congruency effect was replicated in two experiments in which the encoding variables were respectively undivided versus divided attention and level of processing. In terms of performance, these variables influenced recognition and did not influence size congruency effects. But in terms of awareness, measured by remember and know responses, these variables did influence size congruency effects. With undivided attention and with a deep level of processing, size congruency effects occurred only in remembering. With divided attention and with a shallow level of processing, size congruency effects occurred only in knowing. The results show that effects that occur in remembering may also occur independently in knowing. They support theories in which remembering and knowing reflect different memory processes or systems. They do not support the theory that remembering and knowing reflect differences in trace strength.
New trends in logic synthesis for both digital designing and data processing
NASA Astrophysics Data System (ADS)
Borowik, Grzegorz; Łuba, Tadeusz; Poźniak, Krzysztof
2016-09-01
FPGA devices are equipped with memory-based structures. These memories act as very large logic cells, where the number of inputs equals the number of address lines. At the same time, there is huge demand in the Internet of Things market for devices implementing virtual routers, intrusion detection systems, and the like, where such memories are crucial for realizing pattern-matching circuits, IP address tables, and other structures. Unfortunately, existing CAD tools are not well suited to exploiting the capabilities that such large memory blocks offer, due to the lack of appropriate synthesis procedures. This paper presents methods useful for memory-based implementations: minimization of the number of input variables and functional decomposition.
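Minimizing the number of input variables amounts to detecting which inputs a function actually depends on, since every variable dropped halves the address space of a memory-based (LUT) implementation. A brute-force sketch of that support check (a hypothetical helper, not the authors' synthesis procedure):

```python
def support(truth_table, n):
    """Return the indices of the input variables a Boolean function depends on.

    truth_table: list of 2**n output bits, where the input index is read as
    the n-bit assignment (bit k = variable k). A variable is in the support
    iff flipping it changes the output for some assignment.
    """
    deps = []
    for k in range(n):
        if any(truth_table[i] != truth_table[i ^ (1 << k)] for i in range(2**n)):
            deps.append(k)
    return deps
```

For example, f(x0, x1, x2) = x0 XOR x2 ignores x1, so a memory block with two address lines suffices instead of three.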
Sex differences in visual-spatial working memory: A meta-analysis.
Voyer, Daniel; Voyer, Susan D; Saint-Aubin, Jean
2017-04-01
Visual-spatial working memory measures are widely used in clinical and experimental settings. Furthermore, it has been argued that the male advantage in spatial abilities can be explained by a sex difference in visual-spatial working memory. Sex differences in visual-spatial working memory therefore have important implications for research, theory, and practice, but they have yet to be quantified. The present meta-analysis quantified the magnitude of sex differences in visual-spatial working memory and examined variables that might moderate them. The analysis used a set of 180 effect sizes from healthy males and females drawn from 98 samples ranging in mean age from 3 to 86 years. Multilevel meta-analysis was used on the overall data set to account for non-independent effect sizes. The data were also analyzed in separate task subgroups by means of multilevel and mixed-effects models. Results showed a small but significant male advantage (mean d = 0.155, 95% confidence interval = 0.087-0.223). All tasks produced a male advantage, except for memory for location, where a female advantage emerged. Age of the participants was a significant moderator: sex differences in visual-spatial working memory first appeared in the 13-17 years age group. Removing memory-for-location tasks from the sample affected the pattern of significant moderators. The present results indicate a male advantage in visual-spatial working memory, although age and the specific task modulate the magnitude and direction of the effects. Implications for clinical applications, cognitive model building, and experimental research are discussed.
Rizvi, Sanam Shahla; Chung, Tae-Sun
2010-01-01
Flash memory has become a widespread storage medium for modern wireless devices because of its attractive characteristics: non-volatility, small size, light weight, fast access speed, shock resistance, high reliability, and low power consumption. Sensor nodes are highly resource constrained in terms of processing speed, runtime memory, persistent storage, communication bandwidth, and energy. Therefore, for wireless sensor networks supporting sense, store, merge, and send schemes, an efficient and reliable file system that respects sensor node constraints is highly desirable. In this paper, we propose a novel log-structured file system for external NAND flash memory, called Proceeding to Intelligent service oriented memorY Allocation for flash based data centric Sensor devices in wireless sensor networks (PIYAS). This is the extended version of our previously proposed PIYA [1]. The main goals of the PIYAS scheme are to achieve instant mounting and a reduced SRAM footprint by keeping the memory-mapping information very small, and to provide high query-response throughput by allocating memory to sensor data according to network business rules. The scheme intelligently samples and stores the raw data and provides high in-network data availability by keeping the aggregate data for a longer period of time than any previous scheme. We propose effective garbage collection and wear-leveling schemes as well. The experimental results show that PIYAS is an optimized memory management scheme offering high performance for wireless sensor networks.
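The log-structured, out-of-place update discipline that such flash file systems rely on can be sketched with a toy page-mapped translation layer (illustrative only, not the PIYAS design; valid-page migration and wear leveling are omitted):

```python
class ToyFTL:
    """Toy flash translation layer: flash pages cannot be overwritten in
    place, so every update goes to a free page and remaps the logical page."""

    def __init__(self, n_pages=8):
        self.flash = [None] * n_pages   # physical page contents
        self.map = {}                   # logical page -> physical page number
        self.free = list(range(n_pages))
        self.invalid = set()            # stale copies awaiting erase

    def write(self, lpn, data):
        if not self.free:
            self.collect()              # reclaim space before writing
        ppn = self.free.pop(0)
        self.flash[ppn] = data
        if lpn in self.map:
            self.invalid.add(self.map[lpn])  # old copy becomes garbage
        self.map[lpn] = ppn

    def read(self, lpn):
        return self.flash[self.map[lpn]]

    def collect(self):
        # Garbage collection: erase invalidated pages, return them to the pool.
        for ppn in sorted(self.invalid):
            self.flash[ppn] = None
            self.free.append(ppn)
        self.invalid.clear()
```

Repeated updates to one logical page consume fresh physical pages until garbage collection reclaims the stale copies, which is why GC and wear-leveling policy dominate the performance of flash file systems.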
Keerativittayayut, Ruedeerat; Aoki, Ryuta; Sarabi, Mitra Taghizadeh; Jimura, Koji; Nakahara, Kiyoshi
2018-06-18
Although activation/deactivation of specific brain regions have been shown to be predictive of successful memory encoding, the relationship between time-varying large-scale brain networks and fluctuations of memory encoding performance remains unclear. Here we investigated time-varying functional connectivity patterns across the human brain in periods of 30-40 s, which have recently been implicated in various cognitive functions. During functional magnetic resonance imaging, participants performed a memory encoding task, and their performance was assessed with a subsequent surprise memory test. A graph analysis of functional connectivity patterns revealed that increased integration of the subcortical, default-mode, salience, and visual subnetworks with other subnetworks is a hallmark of successful memory encoding. Moreover, multivariate analysis using the graph metrics of integration reliably classified the brain network states into the period of high (vs. low) memory encoding performance. Our findings suggest that a diverse set of brain systems dynamically interact to support successful memory encoding. © 2018, Keerativittayayut et al.
Han, Seong-Ji; Glatman Zaretsky, Arielle; Andrade-Oliveira, Vinicius; Collins, Nicholas; Dzutsev, Amiran; Shaik, Jahangheer; Morais da Fonseca, Denise; Harrison, Oliver J; Tamoutounour, Samira; Byrd, Allyson L; Smelkinson, Margery; Bouladoux, Nicolas; Bliska, James B; Brenchley, Jason M; Brodsky, Igor E; Belkaid, Yasmine
2017-12-19
White adipose tissue bridges body organs and plays a fundamental role in host metabolism. To what extent adipose tissue also contributes to immune surveillance and long-term protective defense remains largely unknown. Here, we have shown that at steady state, white adipose tissue contained abundant memory lymphocyte populations. After infection, white adipose tissue accumulated large numbers of pathogen-specific memory T cells, including tissue-resident cells. Memory T cells in white adipose tissue expressed a distinct metabolic profile, and white adipose tissue from previously infected mice was sufficient to protect uninfected mice from lethal pathogen challenge. Induction of recall responses within white adipose tissue was associated with the collapse of lipid metabolism in favor of antimicrobial responses. Our results suggest that white adipose tissue represents a memory T cell reservoir that provides potent and rapid effector memory responses, positioning this compartment as a potential major contributor to immunological memory. Published by Elsevier Inc.
ERIC Educational Resources Information Center
Diegelmann, Soeren; Zars, Melissa; Zars, Troy
2006-01-01
Memories can have different strengths, largely dependent on the intensity of reinforcers encountered. The relationship between reinforcement and memory strength is evident in asymptotic memory curves, with the level of the asymptote related to the intensity of the reinforcer. Although this is likely a fundamental property of memory formation,…
High speed optical object recognition processor with massive holographic memory
NASA Technical Reports Server (NTRS)
Chao, T.; Zhou, H.; Reyes, G.
2002-01-01
Real-time object recognition using a compact grayscale optical correlator will be introduced. A holographic memory module for storing a large bank of optimum correlation filters, to accommodate the large data throughput rate needed for many real-world applications, has also been developed. System architecture of the optical processor and the holographic memory will be presented. Application examples of this object recognition technology will also be demonstrated.
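The matched filtering that an optical correlator performs in the Fourier plane has a direct digital analogue, cross-correlation via FFT; a minimal sketch (names are illustrative):

```python
import numpy as np

def correlate2d_fft(scene, template):
    """Circular cross-correlation of a scene with a template via FFT, the
    digital analogue of matched filtering in an optical Fourier plane."""
    F = np.fft.fft2(scene)
    # Conjugating the template spectrum turns convolution into correlation.
    H = np.conj(np.fft.fft2(template, s=scene.shape))
    return np.fft.ifft2(F * H).real
```

The correlation peak marks the location of the target in the scene; an optical correlator obtains the same product physically, with the filter bank (here, one template) stored holographically.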
NASA Astrophysics Data System (ADS)
Shi, Wei; Wang, Jiulin; Zheng, Jianming; Jiang, Jiuchun; Viswanathan, Vilayanur; Zhang, Ji-Guang
2016-04-01
In this work, we systematically investigated the memory effect of LiFePO4 cathodes in large-format full batteries. The electrochemical performance of the electrodes used in these batteries was also investigated separately in half-cells to reveal their intrinsic properties. We found that the memory effect of LiFePO4/graphite cells depends not only on the maximum state of charge reached during the memory-writing process, but also on the depth of discharge reached during that process. In addition, the voltage deviation in a LiFePO4/graphite full battery is more complex than in a LiFePO4/Li half-cell, especially for a large-format battery, which exhibits significant current variation in the region near its terminals. The memory effect should therefore be taken into account in advanced battery management systems to further extend the long-term cycling stability of Li-ion batteries with LiFePO4 cathodes.
Unsworth, Nash; Spillers, Gregory J; Brewer, Gene A
2012-01-01
Remembering previous experiences from one's personal past is a principal component of psychological well-being, personality, sense of self, decision making, and planning for the future. In the current study the ability to search for autobiographical information in memory was examined by having college students recall their Facebook friends. Individual differences in working memory capacity manifested themselves in the search of autobiographical memory through the total number of friends remembered, the number of clusters of friends, the size of clusters, and the speed with which participants could output their friends' names. Although working memory capacity was related to the ability to search autobiographical memory, participants did not differ in how they approached the search and used contextual cues to help query their memories. These results corroborate recent theorising, which suggests that working memory is a necessary component of self-generating contextual cues to strategically search memory for autobiographical information.
SU30. Long-Term Memory Deficits in Schizophrenia: Are All Things Equal?
Rossell, Susan
2017-01-01
Abstract Background: Over a century ago, Kraepelin and Bleuler noted that patients with schizophrenia had significant cognitive deficits; however, their observations with regard to long-term memory have not been borne out by empirical studies. They reported that episodic memory was intact but that the organization of memories, or semantic memory, was disordered. This study aimed to synthesize a century of research on the 2 long-term memory processes of episodic and semantic memory across the psychosis continuum: chronic patients, first-episode patients, high-risk-for-psychosis cohorts, and persons with high schizotypy. Methods: A systematic review and meta-analysis was completed for the 2 domains of long-term memory across the psychosis continuum. Search terms included long-term memory, episodic, semantic, and derivations of these terms. The data were synthesized independently for episodic and semantic memory. Four independent populations were investigated: chronic patients, first-episode patients, high-risk-for-psychosis cohorts, and persons with high schizotypy. Our approach followed the PRISMA guidelines. Thus, pooled mean effect sizes are reported for 8 analyses; each effect size compares a case cohort with a healthy control cohort. Results: For episodic memory, the results were as follows: chronic patients d = 1.12, first-episode patients d = 1.12, high risk d = 1.14, and high schizotypy d = 0.13; thus, there is poor evidence of episodic memory deficits in persons with high schizotypy. For semantic memory, the literature showed a different pattern: chronic patients d = 1.2, first-episode patients d = 1.08, high risk d = 1.16, and high schizotypy d = 0.95, a consistent degree of semantic memory deficit across the continuum.
Conclusion: The literature suggests a dissociated pattern of long-term memory deficits, whereby semantic memory abnormalities are more likely than episodic memory deficits to be considered endophenotypes or cognitive markers for schizophrenia. Differential patterns of semantic memory organization are argued to be present prior to the onset of the disorder. There is additional evidence to suggest that idiosyncratic storage of semantic material underlies the development of the unusual beliefs and speech patterns seen in delusions and formal thought disorder. Consequently, semantic memory might be a useful target for cognitive remediation.
Porous inorganic-organic shape memory polymers.
Zhang, Dawei; Burkes, William L; Schoener, Cody A; Grunlan, Melissa A
2012-06-21
Thermoresponsive shape memory polymers (SMPs) are a type of stimuli-sensitive materials that switch from a temporary shape back to their permanent shape upon exposure to heat. While the majority of SMPs have been fabricated in the solid form, porous SMP foams exhibit distinct properties and are better suited for certain applications, including some in the biomedical field. Like solid SMPs, SMP foams have been restricted to a limited group of organic polymer systems. In this study, we prepared inorganic-organic SMP foams based on the photochemical cure of a macromer comprised of inorganic polydimethylsiloxane (PDMS) segments and organic poly(ε-caprolactone) (PCL) segments, diacrylated PCL(40)-block-PDMS(37)-block-PCL(40). To achieve tunable pore size with high interconnectivity, the SMP foams were prepared via a refined solvent-casting/particulate-leaching (SCPL) method. By varying design parameters such as degree of salt fusion, macromer concentration in the solvent and salt particle size, the SMP foams with excellent shape memory behavior and tunable pore size, pore morphology, and modulus were obtained.
Child first language and adult second language are both tied to general-purpose learning systems.
Hamrick, Phillip; Lum, Jarrad A G; Ullman, Michael T
2018-02-13
Do the mechanisms underlying language in fact serve general-purpose functions that preexist this uniquely human capacity? To address this contentious and empirically challenging issue, we systematically tested the predictions of a well-studied neurocognitive theory of language motivated by evolutionary principles. Multiple metaanalyses were performed to examine predicted links between language and two general-purpose learning systems, declarative and procedural memory. The results tied lexical abilities to learning only in declarative memory, while grammar was linked to learning in both systems in both child first language and adult second language, in specific ways. In second language learners, grammar was associated with only declarative memory at lower language experience, but with only procedural memory at higher experience. The findings yielded large effect sizes and held consistently across languages, language families, linguistic structures, and tasks, underscoring their reliability and validity. The results, which met the predicted pattern, provide comprehensive evidence that language is tied to general-purpose systems both in children acquiring their native language and adults learning an additional language. Crucially, if language learning relies on these systems, then our extensive knowledge of the systems from animal and human studies may also apply to this domain, leading to predictions that might be unwarranted in the more circumscribed study of language. Thus, by demonstrating a role for these systems in language, the findings simultaneously lay a foundation for potentially important advances in the study of this critical domain.
Computational dissection of human episodic memory reveals mental process-specific genetic profiles
Luksys, Gediminas; Fastenrath, Matthias; Coynel, David; Freytag, Virginie; Gschwind, Leo; Heck, Angela; Jessen, Frank; Maier, Wolfgang; Milnik, Annette; Riedel-Heller, Steffi G.; Scherer, Martin; Spalek, Klara; Vogler, Christian; Wagner, Michael; Wolfsgruber, Steffen; Papassotiropoulos, Andreas; de Quervain, Dominique J.-F.
2015-01-01
Episodic memory performance is the result of distinct mental processes, such as learning, memory maintenance, and emotional modulation of memory strength. Such processes can be effectively dissociated using computational models. Here we performed gene set enrichment analyses of model parameters estimated from the episodic memory performance of 1,765 healthy young adults. We report robust and replicated associations of the amine compound SLC (solute-carrier) transporters gene set with the learning rate, of the collagen formation and transmembrane receptor protein tyrosine kinase activity gene sets with the modulation of memory strength by negative emotional arousal, and of the L1 cell adhesion molecule (L1CAM) interactions gene set with the repetition-based memory improvement. Furthermore, in a large functional MRI sample of 795 subjects we found that the association between L1CAM interactions and memory maintenance revealed large clusters of differences in brain activity in frontal cortical areas. Our findings provide converging evidence that distinct genetic profiles underlie specific mental processes of human episodic memory. They also provide empirical support to previous theoretical and neurobiological studies linking specific neuromodulators to the learning rate and linking neural cell adhesion molecules to memory maintenance. Furthermore, our study suggests additional memory-related genetic pathways, which may contribute to a better understanding of the neurobiology of human memory. PMID:26261317
Multi-GPU implementation of a VMAT treatment plan optimization algorithm.
Tian, Zhen; Peng, Fei; Folkerts, Michael; Tan, Jun; Jia, Xun; Jiang, Steve B
2015-06-01
Volumetric modulated arc therapy (VMAT) optimization is a computationally challenging problem due to its large data size, high degrees of freedom, and many hardware constraints. High-performance graphics processing units (GPUs) have been used to speed up the computations. However, a GPU's relatively small memory cannot hold cases with a large dose-deposition coefficient (DDC) matrix, e.g., those with a large target size, multiple targets, multiple arcs, and/or a small beamlet size. The main purpose of this paper is to report an implementation of a column-generation-based VMAT algorithm, previously developed in the authors' group, on a multi-GPU platform to solve this memory limitation. While the column-generation-based VMAT algorithm has been developed previously, the GPU implementation details have not been reported; hence, another purpose is to present the detailed techniques employed in the GPU implementation. The authors also use this particular problem as an example to study the feasibility of using a multi-GPU platform to solve large-scale problems in medical physics. The column-generation approach generates VMAT apertures sequentially by solving a pricing problem (PP) and a master problem (MP) iteratively. In the authors' method, the sparse DDC matrix is first stored on the CPU in coordinate list (COO) format. On the GPU side, this matrix is split into four submatrices according to beam angles, which are stored on four GPUs in compressed sparse row (CSR) format. Computation of beamlet prices, the first step in the PP, is accomplished using the multiple GPUs. A fast inter-GPU data transfer scheme is accomplished using peer-to-peer access. The remaining steps of the PP and the MP are implemented on the CPU or a single GPU due to their modest problem scale and computational loads. The Barzilai-Borwein algorithm with a subspace-step scheme is adopted to solve the MP.
A head and neck (H&N) cancer case is then used to validate the authors' method. The authors also compare their multi-GPU implementation with three single-GPU implementation strategies, i.e., truncating the DDC matrix (S1), repeatedly transferring the DDC matrix between CPU and GPU (S2), and porting computations involving the DDC matrix to the CPU (S3), in terms of both plan quality and computational efficiency. Two more H&N patient cases and three prostate cases are used to demonstrate the advantages of the authors' method. The multi-GPU implementation can finish the optimization process within ∼1 min for the H&N patient case. S1 leads to inferior plan quality, although its total time was 10 s shorter than the multi-GPU implementation's due to the reduced matrix size. S2 and S3 yield the same plan quality as the multi-GPU implementation but take ∼4 and ∼6 min, respectively. High computational efficiency was consistently achieved for the other five patient cases tested, with VMAT plans of clinically acceptable quality obtained within 23-46 s. By contrast, to obtain clinically comparable or acceptable plans for all six of these VMAT cases, the optimization time needed in a commercial CPU-based treatment planning system was found to be on the order of several minutes. The results demonstrate that the multi-GPU implementation of the authors' column-generation-based VMAT optimization can handle large-scale VMAT optimization problems efficiently without sacrificing plan quality. The authors' study may serve as an example to shed some light on other large-scale medical physics problems that require multi-GPU techniques.
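The matrix-partitioning step, splitting a COO matrix column-wise by beam-angle group and converting each block to CSR, can be sketched in pure Python (schematic only; function and argument names are illustrative, and the paper's implementation uses CUDA kernels):

```python
def coo_to_csr_blocks(rows, cols, vals, n_rows, col_splits):
    """Split a COO sparse matrix column-wise and convert each block to CSR.

    col_splits gives the block boundaries, e.g. [0, 2, 4] makes two blocks
    covering columns 0-1 and 2-3; each block is returned as the CSR triple
    (indptr, indices, data), with columns re-indexed from the block start.
    """
    blocks = []
    for lo, hi in zip(col_splits[:-1], col_splits[1:]):
        # Gather this block's entries and order them by row (CSR requirement).
        entries = [(r, c - lo, v) for r, c, v in zip(rows, cols, vals)
                   if lo <= c < hi]
        entries.sort()
        indptr = [0] * (n_rows + 1)
        indices, data = [], []
        for r, c, v in entries:
            indptr[r + 1] += 1      # count entries per row...
            indices.append(c)
            data.append(v)
        for i in range(n_rows):
            indptr[i + 1] += indptr[i]  # ...then prefix-sum into offsets
        blocks.append((indptr, indices, data))
    return blocks
```

Each CSR block is small enough to live on one device, and row-major layout makes per-beamlet dot products cheap, which is the motivation for the CPU-side COO store plus per-GPU CSR split described above.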
Scaling properties in time-varying networks with memory
NASA Astrophysics Data System (ADS)
Kim, Hyewon; Ha, Meesoon; Jeong, Hawoong
2015-12-01
The formation of network structure is mainly influenced by an individual node's activity and its memory, where activity can usually be interpreted as the individual inherent property and memory can be represented by the interaction strength between nodes. In our study, we define the activity through the appearance pattern in the time-aggregated network representation, and quantify the memory through the contact pattern of empirical temporal networks. To address the role of activity and memory in epidemics on time-varying networks, we propose temporal-pattern coarsening of activity-driven growing networks with memory. In particular, we focus on the relation between time-scale coarsening and spreading dynamics in the context of dynamic scaling and finite-size scaling. Finally, we discuss the universality issue of spreading dynamics on time-varying networks for various memory-causality tests.
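A toy generator in this spirit makes the roles of activity and memory concrete (the reinforcement probability k/(k + c) is a common modeling choice assumed here, not necessarily the memory kernel quantified from the empirical data):

```python
import random

def activity_driven_with_memory(n, steps, activity, c=0.5, seed=1):
    """Toy activity-driven temporal network with a memory kernel.

    At each step, an active node recontacts a previous partner with
    probability k / (k + c), where k is its number of distinct partners so
    far; otherwise it contacts a uniformly random other node. Returns the
    contact events (t, i, j) and each node's partner set.
    """
    rng = random.Random(seed)
    partners = [set() for _ in range(n)]
    events = []
    for t in range(steps):
        for i in range(n):
            if rng.random() < activity:
                k = len(partners[i])
                if k and rng.random() < k / (k + c):
                    j = rng.choice(sorted(partners[i]))   # return to an old tie
                else:
                    j = rng.randrange(n)
                    while j == i:                         # no self-loops
                        j = rng.randrange(n)
                partners[i].add(j)
                partners[j].add(i)
                events.append((t, i, j))
    return events, partners
```

Large c weakens memory (contacts stay exploratory); small c strengthens it, concentrating events on a few reinforced ties, which is the regime where memory reshapes spreading dynamics.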
Levy, Scott; Ferreira, Kurt B.; Bridges, Patrick G.; ...
2014-12-09
Building the next generation of extreme-scale distributed systems will require overcoming several challenges related to system resilience. As the number of processors in these systems grows, the failure rate increases proportionally. One of the most common sources of failure in large-scale systems is memory. In this paper, we propose a novel runtime for transparently exploiting memory content similarity to improve system resilience by reducing the rate at which memory errors lead to node failure. We evaluate the viability of this approach by examining memory snapshots collected from eight high-performance computing (HPC) applications and two important HPC operating systems. Based on the characteristics of the similarity uncovered, we conclude that our proposed approach shows promise for addressing system resilience in large-scale systems.
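The underlying idea, that pages with identical content can back each other up, can be sketched with content hashing (an illustrative toy; the paper's runtime and similarity analysis are more involved):

```python
import hashlib
from collections import defaultdict

def similarity_groups(pages):
    """Group memory pages with identical content by hashing.

    Pages sharing a digest hold the same bytes, so an uncorrectable error
    in one of them could in principle be repaired from a replica page
    instead of killing the node.
    """
    groups = defaultdict(list)
    for i, page in enumerate(pages):
        groups[hashlib.sha256(page).hexdigest()].append(i)
    return [idxs for idxs in groups.values() if len(idxs) > 1]
```

Scanning snapshots this way gives a lower bound on exploitable similarity; near-identical pages would need fuzzier matching than exact digests.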
[Anterograde declarative memory and its models].
Barbeau, E-J; Puel, M; Pariente, J
2010-01-01
Patient H.M.'s recent death provides the opportunity to highlight the importance of his contribution to a better understanding of the anterograde amnesic syndrome. The thorough study of this patient over five decades largely contributed to shaping the unitary model of declarative memory. This model holds that declarative memory is a single system that cannot be fractionated into subcomponents. As a system, it depends mainly on medial temporal lobe structures. The objective of this review is to present the main characteristics of the different modular models that have been proposed as alternatives to the unitary model. It is also an opportunity to present patients who, although less famous than H.M., made significant contributions to the field of memory. The characteristics of the five main modular models are presented, including the most recent one (the perceptual-mnemonic model). The differences, as well as the points on which these models converge, are highlighted. Different possibilities that could help reconcile the unitary and modular approaches are considered. Although the modular models differ significantly in many aspects, all converge on the notion that memory for single items and semantic memory can be dissociated from memory for complex material and context-rich episodes. In addition, these models converge on the brain structures critical for these processes: item and semantic memory, as well as familiarity, are thought to depend largely on anterior subhippocampal areas, while relational, context-rich memory and recollective experiences are thought to depend largely on the hippocampal formation. Copyright © 2010 Elsevier Masson SAS. All rights reserved.