Sample records for large memory requirement

  1. Comparing memory-efficient genome assemblers on stand-alone and cloud infrastructures.

    PubMed

    Kleftogiannis, Dimitrios; Kalnis, Panos; Bajic, Vladimir B

    2013-01-01

    A fundamental problem in bioinformatics is genome assembly. Next-generation sequencing (NGS) technologies produce large volumes of fragmented genome reads, which require large amounts of memory to assemble the complete genome efficiently. With recent improvements in DNA sequencing technologies, it is expected that the memory footprint required for the assembly process will increase dramatically and will emerge as a limiting factor in processing widely available NGS-generated reads. In this report, we compare current memory-efficient techniques for genome assembly with respect to quality, memory consumption and execution time. Our experiments prove that it is possible to generate draft assemblies of reasonable quality on conventional multi-purpose computers with very limited available memory by choosing suitable assembly methods. Our study reveals the minimum memory requirements for different assembly programs even when data volume exceeds memory capacity by orders of magnitude. By combining existing methodologies, we propose two general assembly strategies that can improve short-read assembly approaches and result in reduction of the memory footprint. Finally, we discuss the possibility of utilizing cloud infrastructures for genome assembly and we comment on some findings regarding suitable computational resources for assembly.

  2. Spiking neural network simulation: memory-optimal synaptic event scheduling.

    PubMed

    Stewart, Robert D; Gurney, Kevin N

    2011-06-01

    Spiking neural network simulations incorporating variable transmission delays require synaptic events to be scheduled prior to delivery. Conventional methods have memory requirements that scale with the total number of synapses in a network. We introduce novel scheduling algorithms for both discrete and continuous event delivery, where the memory requirement scales instead with the number of neurons. Superior algorithmic performance is demonstrated using large-scale, benchmarking network simulations.
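
    The shift described above, from memory that scales with synapses to memory that scales with neurons, can be sketched with a per-neuron circular delay buffer: each neuron keeps max_delay + 1 accumulator slots, and a synaptic event is added into the slot for its delivery step instead of being enqueued individually. This is an illustrative sketch only, not the paper's actual scheduling algorithms; all names are invented.

```python
class NeuronEventScheduler:
    """Schedule delayed synaptic events with memory proportional to
    n_neurons * (max_delay + 1) slots, independent of synapse count.
    (Illustrative sketch; not the algorithms of Stewart & Gurney.)"""

    def __init__(self, n_neurons, max_delay):
        self.size = max_delay + 1          # ring size; avoids slot collisions
        self.max_delay = max_delay
        self.t = 0
        self.rings = [[0.0] * self.size for _ in range(n_neurons)]

    def schedule(self, target, delay, weight):
        """Accumulate a weighted event due `delay` steps from now."""
        assert 1 <= delay <= self.max_delay
        self.rings[target][(self.t + delay) % self.size] += weight

    def advance(self):
        """Deliver all input due at the current step, then move time forward."""
        slot = self.t % self.size
        delivered = [ring[slot] for ring in self.rings]
        for ring in self.rings:
            ring[slot] = 0.0               # slot is reused max_delay + 1 steps later
        self.t += 1
        return delivered
```

Because events are accumulated into per-neuron slots rather than queued per synapse, adding more synapses costs no extra scheduling memory.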

  3. A fast sequence assembly method based on compressed data structures.

    PubMed

    Liang, Peifeng; Zhang, Yancong; Lin, Kui; Hu, Jinglu

    2014-01-01

    Assembling a large genome from next-generation sequencing reads requires large computer memory and long execution times. To reduce these requirements, a memory- and time-efficient assembler, the FMJ-Assembler, is presented by applying an FM-index in JR-Assembler, where FM stands for the FMR-index derived from the FM-index and BWT, and J for jumping extension. The FMJ-Assembler uses an expanded FM-index and BWT to compress read data and save memory, while the jumping-extension method reduces CPU time. An extensive comparison of the FMJ-Assembler with current assemblers shows that it achieves better or comparable overall assembly quality with lower memory use and less CPU time. These advantages indicate that the FMJ-Assembler will be an efficient assembly method for next-generation sequencing data.
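
    As background for the compressed structures named above, here is a minimal pure-Python sketch of the Burrows-Wheeler transform (BWT) and FM-index backward search for counting pattern occurrences. It is illustrative only (quadratic-time and uncompressed), not the FMJ-Assembler's implementation; the function names are invented.

```python
def bwt(text):
    """Burrows-Wheeler transform via sorted rotations ('$' terminates)."""
    text += "$"
    rotations = sorted(text[i:] + text[:i] for i in range(len(text)))
    return "".join(row[-1] for row in rotations)

def fm_count(bwt_str, pattern):
    """Count (possibly overlapping) occurrences of `pattern` in the original
    text using FM-index backward search over its BWT."""
    # C[c]: number of characters in the BWT lexicographically smaller than c
    sorted_bwt = sorted(bwt_str)
    C = {c: sorted_bwt.index(c) for c in set(bwt_str)}

    def occ(c, i):                 # occurrences of c in bwt_str[:i]
        return bwt_str[:i].count(c)

    lo, hi = 0, len(bwt_str)
    for c in reversed(pattern):    # extend the match one character at a time
        if c not in C:
            return 0
        lo = C[c] + occ(c, lo)
        hi = C[c] + occ(c, hi)
        if lo >= hi:
            return 0
    return hi - lo
```

A real assembler stores rank structures so `occ` is O(1) and the text is never decompressed; the search logic, however, is the same.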

  4. Parallel computing for probabilistic fatigue analysis

    NASA Technical Reports Server (NTRS)

    Sues, Robert H.; Lua, Yuan J.; Smith, Mark D.

    1993-01-01

    This paper presents the results of Phase I research to investigate the most effective parallel processing software strategies and hardware configurations for probabilistic structural analysis. We investigate the efficiency of both shared- and distributed-memory architectures via a probabilistic fatigue life analysis problem. We also present a parallel programming approach, the virtual shared-memory paradigm, that is applicable across both types of hardware. Using this approach, problems can be solved on a variety of parallel configurations, including networks of single- or multiprocessor workstations. We conclude that it is possible to effectively parallelize probabilistic fatigue analysis codes; however, special strategies will be needed to achieve large-scale parallelism, to keep large numbers of processors busy, and to treat problems with the large memory requirements encountered in practice. We also conclude that distributed-memory architectures are preferable to shared-memory architectures for achieving large-scale parallelism; however, in the future, the currently emerging hybrid-memory architectures will likely be optimal.

  5. Memory Metals (Marchon Eyewear)

    NASA Technical Reports Server (NTRS)

    1991-01-01

    Another commercial application of memory metal technology is found in a "smart" eyeglass frame that remembers its shape and its wearer's fit. A patented "memory encoding process" makes this possible. Heat is not required to return the glasses to shape. A large commercial market is anticipated.

  6. Memory Efficient Ranking.

    ERIC Educational Resources Information Center

    Moffat, Alistair; And Others

    1994-01-01

    Describes an approximate document ranking process that uses a compact array of in-memory, low-precision approximations for document length. Combined with another rule for reducing the memory required by partial similarity accumulators, the approximation heuristic allows the ranking of large document collections using less than one byte of memory…
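
    The flavor of such low-precision in-memory approximation can be sketched by quantizing document lengths onto a geometric scale so that each length fits in one byte. This is an assumed scheme for illustration, not the exact method of the article; the function names are invented.

```python
import math

def quantize_lengths(lengths, bits=8):
    """Map each positive document length to a low-precision code
    (at most 2**bits geometrically spaced levels), so the lengths of a
    large collection fit in one byte per document."""
    lo, hi = min(lengths), max(lengths)
    levels = 2 ** bits
    base = (hi / lo) ** (1.0 / (levels - 1)) if hi > lo else 1.0
    codes = []
    for x in lengths:
        code = 0 if base == 1.0 else round(math.log(x / lo, base))
        codes.append(min(code, levels - 1))
    return codes, lo, base

def dequantize(code, lo, base):
    """Recover an approximate length from its one-byte code."""
    return lo * base ** code
```

With 256 geometric levels the worst-case relative error stays below one quantization step, which is typically well under the precision a ranking formula needs.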

  7. Avoiding and tolerating latency in large-scale next-generation shared-memory multiprocessors

    NASA Technical Reports Server (NTRS)

    Probst, David K.

    1993-01-01

    A scalable solution to the memory-latency problem is necessary to prevent the large latencies of synchronization and memory operations inherent in large-scale shared-memory multiprocessors from degrading performance. We distinguish latency avoidance and latency tolerance. Latency is avoided when data are brought to nearby locales for future reference. Latency is tolerated when references are overlapped with other computation. Latency-avoiding locales include processor registers, data caches used temporally, and nearby memory modules. Tolerating communication latency requires parallelism, allowing the overlap of communication and computation. Latency-tolerating techniques include vector pipelining, data caches used spatially, prefetching in various forms, and multithreading in various forms. Relaxing the consistency model permits increased use of avoidance and tolerance techniques. Each model is a mapping from the program text to sets of partial orders on program operations; it is a convention about which temporal precedences among program operations are necessary. Information about temporal locality and parallelism constrains the use of avoidance and tolerance techniques. Suitable architectural primitives and compiler technology are required to exploit the increased freedom to reorder and overlap operations in relaxed models.

  8. The neural basis of involuntary episodic memories.

    PubMed

    Hall, Shana A; Rubin, David C; Miles, Amanda; Davis, Simon W; Wing, Erik A; Cabeza, Roberto; Berntsen, Dorthe

    2014-10-01

    Voluntary episodic memories require an intentional memory search, whereas involuntary episodic memories come to mind spontaneously without conscious effort. Cognitive neuroscience has largely focused on voluntary memory, leaving the neural mechanisms of involuntary memory largely unknown. We hypothesized that, because the main difference between voluntary and involuntary memory is the controlled retrieval processes required by the former, there would be greater frontal activity for voluntary than involuntary memories. Conversely, we predicted that other components of the episodic retrieval network would be similarly engaged in the two types of memory. During encoding, all participants heard sounds, half paired with pictures of complex scenes and half presented alone. During retrieval, paired and unpaired sounds were presented, panned to the left or to the right. Participants in the involuntary group were instructed to indicate the spatial location of the sound, whereas participants in the voluntary group were asked to additionally recall the pictures that had been paired with the sounds. All participants reported the incidence of their memories in a postscan session. Consistent with our predictions, voluntary memories elicited greater activity in dorsal frontal regions than involuntary memories, whereas other components of the retrieval network, including medial-temporal, ventral occipitotemporal, and ventral parietal regions were similarly engaged by both types of memories. These results clarify the distinct role of dorsal frontal and ventral occipitotemporal regions in predicting strategic retrieval and recalled information, respectively, and suggest that, although there are neural differences in retrieval, involuntary memories share neural components with established voluntary memory systems.

  9. The Neural Basis of Involuntary Episodic Memories

    PubMed Central

    Hall, Shana A.; Rubin, David C.; Miles, Amanda; Davis, Simon W.; Wing, Erik A.; Cabeza, Roberto; Berntsen, Dorthe

    2014-01-01

    Voluntary episodic memories require an intentional memory search, whereas involuntary episodic memories come to mind spontaneously without conscious effort. Cognitive neuroscience has largely focused on voluntary memory, leaving the neural mechanisms of involuntary memory largely unknown. We hypothesized that because the main difference between voluntary and involuntary memory is the controlled retrieval processes required by the former, there would be greater frontal activity for voluntary than involuntary memories. Conversely, we predicted that other components of the episodic retrieval network would be similarly engaged in the two types of memory. During encoding, all participants heard sounds, half paired with pictures of complex scenes and half presented alone. During retrieval, paired and unpaired sounds were presented panned to the left or to the right. Participants in the involuntary group were instructed to indicate the spatial location of the sound, whereas participants in the voluntary group were asked to additionally recall the pictures that had been paired with the sounds. All participants reported the incidence of their memories in a post-scan session. Consistent with our predictions, voluntary memories elicited greater activity in dorsal frontal regions than involuntary memories, whereas other components of the retrieval network, including medial temporal, ventral occipitotemporal, and ventral parietal regions were similarly engaged by both types of memories. These results clarify the distinct role of dorsal frontal and ventral occipitotemporal regions in predicting strategic retrieval and recalled information, respectively, and suggest that while there are neural differences in retrieval, involuntary memories share neural components with established voluntary memory systems. PMID:24702453

  10. Technical support for digital systems technology development. Task order 1: ISP contention analysis and control

    NASA Technical Reports Server (NTRS)

    Stehle, Roy H.; Ogier, Richard G.

    1993-01-01

    Alternatives for realizing a packet-based network switch for use on a frequency division multiple access/time division multiplexed (FDMA/TDM) geostationary communication satellite were investigated. Each of the eight downlink beams supports eight directed dwells. The design needed to accommodate multicast packets with very low probability of loss due to contention. Three switch architectures were designed and analyzed. An output-queued, shared bus system yielded a functionally simple system, utilizing a first-in, first-out (FIFO) memory per downlink dwell, but at the expense of a large total memory requirement. A shared memory architecture offered the most efficiency in memory requirements, requiring about half the memory of the shared bus design. The processing requirement for the shared-memory system adds system complexity that may offset the benefits of the smaller memory. An alternative design using a shared memory buffer per downlink beam decreases circuit complexity through a distributed design, and requires at most 1000 packets of memory more than the completely shared memory design. Modifications to the basic packet switch designs were proposed to accommodate circuit-switched traffic, which must be served on a periodic basis with minimal delay. Methods for dynamically controlling the downlink dwell lengths were developed and analyzed. These methods adapt quickly to changing traffic demands, and do not add significant complexity or cost to the satellite and ground station designs. Methods for reducing the memory requirement by not requiring the satellite to store full packets were also proposed and analyzed. In addition, optimal packet and dwell lengths were computed as functions of memory size for the three switch architectures.

  11. Semihierarchical quantum repeaters based on moderate lifetime quantum memories

    NASA Astrophysics Data System (ADS)

    Liu, Xiao; Zhou, Zong-Quan; Hua, Yi-Lin; Li, Chuan-Feng; Guo, Guang-Can

    2017-01-01

    The construction of large-scale quantum networks relies on the development of practical quantum repeaters. Many approaches have been proposed with the goal of outperforming the direct transmission of photons, but most of them are inefficient or difficult to implement with current technology. Here, we present a protocol that uses a semihierarchical structure to improve the entanglement distribution rate while reducing the requirement of memory time to a range of tens of milliseconds. This protocol can be implemented with a fixed distance of elementary links and fixed requirements on quantum memories, which are independent of the total distance. This configuration is especially suitable for scalable applications in large-scale quantum networks.

  12. Performance analysis and comparison of a minimum interconnections direct storage model with traditional neural bidirectional memories.

    PubMed

    Bhatti, A Aziz

    2009-12-01

    This study proposes an efficient and improved model of a direct-storage bidirectional memory, the improved bidirectional associative memory (IBAM), and emphasises the use of nanotechnology for implementing such large-scale neural network structures at considerably lower cost, with reduced complexity and less implementation area. This memory model directly stores the X and Y associated sets of M bipolar binary vectors in the form of (MxN(x)) and (MxN(y)) memory matrices, requires O(N), or about 30%, of the interconnections with weight strengths ranging between +/-1, and is computationally very efficient compared to sequential, intraconnected and other bidirectional associative memory (BAM) models of outer-product type, which require O(N(2)) complex interconnections with weight strengths ranging between +/-M. It is shown that the IBAM is functionally equivalent to, and possesses all attributes of, a BAM of outer-product type, yet it is a simple and robust structure: a very large scale integration (VLSI), optical and nanotechnology realisable, modular and expandable neural network bidirectional associative memory model in which the addition or deletion of a pair of vectors does not require changes in the strength of interconnections of the entire memory matrix. The retrieval process, signal-to-noise ratio, storage capacity and stability of the proposed model and of the traditional BAM are analysed. Constraints on and characteristics of unipolar and bipolar binaries for improved storage and retrieval are discussed. The simulation results show that the IBAM has log_e(N) times higher storage capacity, superior performance, and faster convergence and retrieval, when compared to traditional sequential and intraconnected bidirectional memories.

  13. An FMM-FFT Accelerated SIE Simulator for Analyzing EM Wave Propagation in Mine Environments Loaded With Conductors

    PubMed Central

    Sheng, Weitian; Zhou, Chenming; Liu, Yang; Bagci, Hakan; Michielssen, Eric

    2018-01-01

    A fast and memory efficient three-dimensional full-wave simulator for analyzing electromagnetic (EM) wave propagation in electrically large and realistic mine tunnels/galleries loaded with conductors is proposed. The simulator relies on Muller and combined field surface integral equations (SIEs) to account for scattering from mine walls and conductors, respectively. During the iterative solution of the system of SIEs, the simulator uses a fast multipole method-fast Fourier transform (FMM-FFT) scheme to reduce CPU and memory requirements. The memory requirement is further reduced by compressing large data structures via singular value and Tucker decompositions. The efficiency, accuracy, and real-world applicability of the simulator are demonstrated through characterization of EM wave propagation in electrically large mine tunnels/galleries loaded with conducting cables and mine carts. PMID:29726545

  14. Effects of motor congruence on visual working memory.

    PubMed

    Quak, Michel; Pecher, Diane; Zeelenberg, Rene

    2014-10-01

    Grounded-cognition theories suggest that memory shares processing resources with perception and action. The motor system could be used to help memorize visual objects. In two experiments, we tested the hypothesis that people use motor affordances to maintain object representations in working memory. Participants performed a working memory task on photographs of manipulable and nonmanipulable objects. The manipulable objects were objects that required either a precision grip (i.e., small items) or a power grip (i.e., large items) to use. A concurrent motor task that could be congruent or incongruent with the manipulable objects caused no difference in working memory performance relative to nonmanipulable objects. Moreover, the precision- or power-grip motor task did not affect memory performance on small and large items differently. These findings suggest that the motor system plays no part in visual working memory.

  15. Tunnel field-effect transistor charge-trapping memory with steep subthreshold slope and large memory window

    NASA Astrophysics Data System (ADS)

    Kino, Hisashi; Fukushima, Takafumi; Tanaka, Tetsu

    2018-04-01

    Charge-trapping memory requires increased bit density per cell and a larger memory window for lower-power operation. A tunnel field-effect transistor (TFET) can increase the bit density per cell owing to its steep subthreshold slope. In addition, the TFET's asymmetric structure is promising for achieving a larger memory window. A TFET with an N-type gate exhibits a higher electric field between the P-type source and the N-type gate edge than the conventional FET structure. This high electric field enables a large amount of charge to be injected into the charge storage layer. In this study, we fabricated silicon-oxide-nitride-oxide-semiconductor (SONOS) memory devices with the TFET structure and observed a steep subthreshold slope and a larger memory window.

  16. Local wavelet transform: a cost-efficient custom processor for space image compression

    NASA Astrophysics Data System (ADS)

    Masschelein, Bart; Bormans, Jan G.; Lafruit, Gauthier

    2002-11-01

    Thanks to its intrinsic scalability features, the wavelet transform has become increasingly popular as a decorrelator in image compression applications. Throughput, memory requirements and complexity are important parameters when developing hardware image compression modules. An implementation of the classical, global wavelet transform requires large memory sizes and implies a large latency between the availability of the input image and the production of minimal data entities for entropy coding. Image tiling methods, as proposed by JPEG2000, reduce the memory sizes and the latency, but inevitably introduce image artefacts. The Local Wavelet Transform (LWT), presented in this paper, is a low-complexity wavelet transform architecture using block-based processing that produces the same transformed images as the global wavelet transform. The architecture minimizes the processing latency with a limited amount of memory. Moreover, as the LWT is an instruction-based custom processor, it can be programmed for specific tasks, such as push-broom processing of infinite-length satellite images. These features make the LWT appropriate for space image compression, where high throughput, low memory sizes, low complexity, low power and push-broom processing are important requirements.

  17. Division of attention as a function of the number of steps, visual shifts, and memory load

    NASA Technical Reports Server (NTRS)

    Chechile, R. A.; Butler, K.; Gutowski, W.; Palmer, E. A.

    1986-01-01

    The effects on divided attention of visual shifts and long-term memory retrieval during a monitoring task are considered. A concurrent vigilance task was standardized under all experimental conditions. The results show that subjects can perform nearly perfectly on all of the time-shared tasks if long-term memory retrieval is not required for monitoring. With the requirement of memory retrieval, however, there was a large decrease in accuracy for all of the time-shared activities. It was concluded that the attentional demand of long-term memory retrieval is appreciable (even for a well-learned motor sequence), and thus memory retrieval results in a sizable reduction in subjects' ability to divide their attention. A selected bibliography on the divided-attention literature is provided.

  18. Discriminative Hierarchical K-Means Tree for Large-Scale Image Classification.

    PubMed

    Chen, Shizhi; Yang, Xiaodong; Tian, Yingli

    2015-09-01

    A key challenge in large-scale image classification is how to achieve efficiency in terms of both computation and memory without compromising classification accuracy. The learning-based classifiers achieve the state-of-the-art accuracies, but have been criticized for computational complexity that grows linearly with the number of classes. The nonparametric nearest neighbor (NN)-based classifiers naturally handle large numbers of categories, but incur prohibitively expensive computation and memory costs. In this brief, we present a novel classification scheme, i.e., the discriminative hierarchical K-means tree (D-HKTree), which combines the advantages of both learning-based and NN-based classifiers. The complexity of the D-HKTree grows only sublinearly with the number of categories, which is much better than the recent hierarchical support vector machine-based methods. The memory requirement is an order of magnitude less than that of the recent Naïve Bayesian NN-based approaches. The proposed D-HKTree classification scheme is evaluated on several challenging benchmark databases and achieves state-of-the-art accuracies with significantly lower computation cost and memory requirements.
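
    The underlying hierarchical k-means tree idea, descending to the nearest centroid at each level and scanning only a single leaf, can be sketched in pure Python. This illustrates the sublinear search structure only; the discriminative learning that distinguishes the D-HKTree is not shown, and all names are invented.

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two points (tuples)."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(pts):
    n = len(pts)
    return tuple(sum(xs) / n for xs in zip(*pts))

def kmeans(points, k, iters=20, seed=0):
    """Plain Lloyd's algorithm; returns final centers and clusters."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda j: dist2(p, centers[j]))].append(p)
        centers = [mean(c) if c else centers[i] for i, c in enumerate(clusters)]
    return centers, clusters

def build_tree(points, branch=2, leaf_size=2):
    """Recursively partition points with k-means into a search tree."""
    if len(points) <= leaf_size:
        return {"leaf": points}
    centers, clusters = kmeans(points, branch)
    pairs = [(c, cl) for c, cl in zip(centers, clusters) if cl]
    if len(pairs) < 2:                      # degenerate split: stop here
        return {"leaf": points}
    return {"centers": [c for c, _ in pairs],
            "children": [build_tree(cl, branch, leaf_size) for _, cl in pairs]}

def search(tree, q):
    """Greedy descent to the nearest centroid, then scan one leaf."""
    while "leaf" not in tree:
        i = min(range(len(tree["centers"])),
                key=lambda j: dist2(q, tree["centers"][j]))
        tree = tree["children"][i]
    return min(tree["leaf"], key=lambda p: dist2(q, p))
```

Only one root-to-leaf path is examined per query, so the work per query grows with the tree depth rather than the number of stored points.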

  19. A study of the viability of exploiting memory content similarity to improve resilience to memory errors

    DOE PAGES

    Levy, Scott; Ferreira, Kurt B.; Bridges, Patrick G.; ...

    2014-12-09

    Building the next generation of extreme-scale distributed systems will require overcoming several challenges related to system resilience. As the number of processors in these systems grows, the failure rate increases proportionally. One of the most common sources of failure in large-scale systems is memory. In this paper, we propose a novel runtime for transparently exploiting memory content similarity to improve system resilience by reducing the rate at which memory errors lead to node failure. We evaluate the viability of this approach by examining memory snapshots collected from eight high-performance computing (HPC) applications and two important HPC operating systems. Based on the characteristics of the similarity uncovered, we conclude that our proposed approach shows promise for addressing system resilience in large-scale systems.
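
    Memory content similarity of the kind examined here can be illustrated by hashing fixed-size pages of two snapshots and measuring what fraction of one snapshot's pages occur verbatim in the other. This is a sketch with an assumed 4 KiB page size, not the paper's runtime system; the names are invented.

```python
import hashlib

PAGE = 4096  # bytes per page (a typical size; an assumption here)

def page_hashes(snapshot, page_size=PAGE):
    """Hash each fixed-size page of a memory snapshot (a bytes object)."""
    return [hashlib.sha256(snapshot[i:i + page_size]).hexdigest()
            for i in range(0, len(snapshot), page_size)]

def similarity(snap_a, snap_b, page_size=PAGE):
    """Fraction of snap_a's pages whose exact content also appears in snap_b."""
    hashes_b = set(page_hashes(snap_b, page_size))
    hashes_a = page_hashes(snap_a, page_size)
    if not hashes_a:
        return 0.0
    shared = sum(1 for h in hashes_a if h in hashes_b)
    return shared / len(hashes_a)
```

A page whose content exists elsewhere can in principle be reconstructed after an uncorrectable error, which is the resilience opportunity the paper evaluates.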

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Juhee; Lee, Sungpyo; Lee, Moo Hyung

    Quasi-unipolar non-volatile organic transistor memory (NOTM) can combine the best characteristics of conventional unipolar and ambipolar NOTMs and, as a result, exhibit improved device performance. Unipolar NOTMs typically exhibit a large signal ratio between the programmed and erased current signals but also require a large voltage to program and erase the memory cells. Meanwhile, an ambipolar NOTM can be programmed and erased at lower voltages, but the resulting signal ratio is small. By embedding a discontinuous n-type fullerene layer within a p-type pentacene film, quasi-unipolar NOTMs are fabricated, in which signal storage utilizes both electrons and holes while the electrical signal relies only on hole conduction. These devices exhibit superior memory performance relative to both pristine unipolar pentacene devices and ambipolar fullerene/pentacene bilayer devices. The quasi-unipolar NOTM exhibited a larger signal ratio between the programmed and erased states while also reducing the voltage required to program and erase a memory cell. This simple approach should be readily applicable to various combinations of recently developed organic semiconductors and thereby make a significant impact on organic memory research.

  1. Application-Controlled Demand Paging for Out-of-Core Visualization

    NASA Technical Reports Server (NTRS)

    Cox, Michael; Ellsworth, David; Kutler, Paul (Technical Monitor)

    1997-01-01

    In the area of scientific visualization, input data sets are often very large. In visualization of Computational Fluid Dynamics (CFD) in particular, input data sets today can surpass 100 Gbytes, and are expected to scale with the ability of supercomputers to generate them. Some visualization tools already partition large data sets into segments, and load appropriate segments as they are needed. However, this does not remove the problem for two reasons: 1) there are data sets for which even the individual segments are too large for the largest graphics workstations, 2) many practitioners do not have access to workstations with the memory capacity required to load even a segment, especially since the state-of-the-art visualization tools tend to be developed by researchers with much more powerful machines. When the size of the data that must be accessed is larger than the size of memory, some form of virtual memory is simply required. This may be by segmentation, paging, or by paged segments. In this paper we demonstrate that complete reliance on operating system virtual memory for out-of-core visualization leads to poor performance. We then describe a paged segment system that we have implemented, and explore the principles of memory management that can be employed by the application for out-of-core visualization. We show that application control over some of these can significantly improve performance. We show that sparse traversal can be exploited by loading only those data actually required. We show also that application control over data loading can be exploited by 1) loading data from alternative storage format (in particular 3-dimensional data stored in sub-cubes), 2) controlling the page size. Both of these techniques effectively reduce the total memory required by visualization at run-time. We also describe experiments we have done on remote out-of-core visualization (when pages are read by demand from remote disk) whose results are promising.
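
    Application-controlled loading of 3-dimensional data stored in sub-cubes can be sketched as a block store that keeps only a few blocks resident under LRU replacement, so run-time memory is bounded by the cache capacity rather than the data-set size. Class and parameter names are illustrative assumptions, not the paper's system.

```python
from collections import OrderedDict

class SubCubeStore:
    """Out-of-core style access to a 3-D field stored as b x b x b sub-cubes.
    `backing` maps a block index (bx, by, bz) to nested lists; an LRU cache
    keeps at most `capacity` blocks resident, mimicking application-controlled
    demand paging. (Sketch only; real systems read blocks from disk.)"""

    def __init__(self, backing, b, capacity=2):
        self.backing = backing
        self.b = b
        self.capacity = capacity
        self.cache = OrderedDict()
        self.loads = 0                       # counts simulated disk reads

    def _block(self, key):
        if key in self.cache:
            self.cache.move_to_end(key)      # mark most recently used
            return self.cache[key]
        block = self.backing[key]            # in real use: read from disk
        self.loads += 1
        self.cache[key] = block
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)   # evict least recently used
        return block

    def get(self, x, y, z):
        b = self.b
        block = self._block((x // b, y // b, z // b))
        return block[x % b][y % b][z % b]
```

Sparse traversals touch only a few blocks, so most of the data set is never loaded, which is the effect the paper measures for out-of-core visualization.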

  2. A three-dimensional ground-water-flow model modified to reduce computer-memory requirements and better simulate confining-bed and aquifer pinchouts

    USGS Publications Warehouse

    Leahy, P.P.

    1982-01-01

    The Trescott computer program for modeling groundwater flow in three dimensions has been modified to (1) treat aquifer and confining bed pinchouts more realistically and (2) reduce the computer memory requirements needed for the input data. Using the original program, simulation of aquifer systems with nonrectangular external boundaries may result in a large number of nodes that are not involved in the numerical solution of the problem, but require computer storage. (USGS)
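
    The memory saving from excluding nodes outside a nonrectangular boundary can be illustrated by assigning equation numbers only to active cells, so arrays are dimensioned by the active-node count instead of the full bounding grid. A sketch in 2-D for brevity, not the modified Trescott code; names are invented.

```python
def compact_index(active_mask):
    """Map active grid cells to consecutive equation numbers, so solver
    storage scales with the number of active nodes rather than with the
    full rectangular bounding grid."""
    index, n = {}, 0
    for i, row in enumerate(active_mask):
        for j, active in enumerate(row):
            if active:
                index[(i, j)] = n
                n += 1
    return index, n
```

For an L-shaped aquifer, for example, the corner cells outside the boundary consume no storage at all.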

  3. Circuit engineering principles for construction of bipolar large-scale integrated circuit storage devices and very large-scale main memory

    NASA Astrophysics Data System (ADS)

    Neklyudov, A. A.; Savenkov, V. N.; Sergeyez, A. G.

    1984-06-01

    Memories are improved by increasing speed or the memory volume on a single chip. The most effective means for increasing speed in bipolar memories are current-control circuits with the lowest extraction times for a specific power consumption (1/4 pJ/bit). The current-control circuitry involves multistage current switches and circuits that accelerate transient processes in storage elements and links. Circuit principles for the design of bipolar memories with maximum speed for an assigned minimum of circuit topology are analyzed. Two main classes of storage with current control are considered: the ECL type and super-integrated injection-type storage, with data capacities of N = 1/4 and N = 4/16, respectively. The circuits reduce logic voltage differentials and the volumes of lexical and discharge buses and control-circuit buses. The limiting speed is determined by the anti-interference requirements of the memory in storage and extraction modes.

  4. Extreme Quantum Memory Advantage for Rare-Event Sampling

    NASA Astrophysics Data System (ADS)

    Aghamohammadi, Cina; Loomis, Samuel P.; Mahoney, John R.; Crutchfield, James P.

    2018-02-01

    We introduce a quantum algorithm for memory-efficient biased sampling of rare events generated by classical memoryful stochastic processes. Two efficiency metrics are used to compare quantum and classical resources for rare-event sampling. For a fixed stochastic process, the first is the classical-to-quantum ratio of required memory. We show for two example processes that there exists an infinite number of rare-event classes for which the memory ratio for sampling is larger than r, for any large real number r. Then, for a sequence of processes each labeled by an integer size N, we compare how the classical and quantum required memories scale with N. In this setting, since both memories can diverge as N → ∞, the efficiency metric tracks how fast they diverge. An extreme quantum memory advantage exists when the classical memory diverges in the limit N → ∞, but the quantum memory has a finite bound. We then show that finite-state Markov processes and spin chains exhibit a memory advantage for sampling of almost all of their rare-event classes.

  5. Cutting Edge: Protection by Antiviral Memory CD8 T Cells Requires Rapidly Produced Antigen in Large Amounts.

    PubMed

    Remakus, Sanda; Ma, Xueying; Tang, Lingjuan; Xu, Ren-Huan; Knudson, Cory; Melo-Silva, Carolina R; Rubio, Daniel; Kuo, Yin-Ming; Andrews, Andrew; Sigal, Luis J

    2018-05-15

    Numerous attempts to produce antiviral vaccines by harnessing memory CD8 T cells have failed. A barrier to progress is that we do not know what makes an Ag a viable target of protective CD8 T cell memory. We found that in mice susceptible to lethal mousepox (the mouse homolog of human smallpox), a dendritic cell vaccine that induced memory CD8 T cells fully protected mice when the infecting virus produced Ag in large quantities and with rapid kinetics. Protection did not occur when the Ag was produced in low amounts, even with rapid kinetics, and protection was only partial when the Ag was produced in large quantities but with slow kinetics. Hence, the amount and timing of Ag expression appear to be key determinants of memory CD8 T cell antiviral protective immunity. These findings may have important implications for vaccine design. Copyright © 2018 by The American Association of Immunologists, Inc.

  6. Space Radiation Effects in Advanced Flash Memories

    NASA Technical Reports Server (NTRS)

    Johnston, A. H.

    2001-01-01

    Memory storage requirements in space systems have steadily increased, much like storage requirements in terrestrial systems. Large arrays of dynamic memories (DRAMs) have been used in solid-state recorders, relying on a combination of shielding and error detection and correction (EDAC) to overcome the extreme sensitivity of DRAMs to space radiation. For example, a 2-Gbit memory (with 4-Mb DRAMs) used on the Clementine mission functioned perfectly during its moon mapping mission, in spite of an average of 71 memory bit flips per day from heavy ions. Although EDAC worked well with older types of memory circuits, newer DRAMs use extremely complex internal architectures, which has made it increasingly difficult to implement EDAC. Some newer DRAMs have also exhibited catastrophic latchup. Flash memories are an intriguing alternative to DRAMs because of their nonvolatile storage and extremely high storage density, particularly for applications where writing is done relatively infrequently. This paper discusses radiation effects in advanced flash memories, including general observations on scaling and architecture as well as the specific experience obtained at the Jet Propulsion Laboratory in evaluating high-density flash memories for use on the NASA mission to Europa, one of Jupiter's moons. This particular mission must pass through the Jovian radiation belts, which imposes a very demanding radiation requirement.
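    The EDAC principle described above can be illustrated with a minimal, purely hypothetical sketch of a Hamming(7,4) single-error-correcting code; real solid-state recorders use wider codes and hardware encoders, so this is only a toy model of how one heavy-ion bit flip is located and repaired.

```python
def hamming74_encode(d):
    """Encode 4 data bits into a 7-bit Hamming codeword
    (positions 1-7, parity bits at positions 1, 2 and 4)."""
    c = [0] * 8                      # index 0 unused for 1-based positions
    c[3], c[5], c[6], c[7] = d
    c[1] = c[3] ^ c[5] ^ c[7]
    c[2] = c[3] ^ c[6] ^ c[7]
    c[4] = c[5] ^ c[6] ^ c[7]
    return c[1:]

def hamming74_correct(code):
    """Recompute the parity checks; the syndrome is the 1-based
    position of a single flipped bit (0 means no error)."""
    c = [0] + list(code)
    syndrome = ((c[1] ^ c[3] ^ c[5] ^ c[7])
                | (c[2] ^ c[3] ^ c[6] ^ c[7]) << 1
                | (c[4] ^ c[5] ^ c[6] ^ c[7]) << 2)
    if syndrome:
        c[syndrome] ^= 1             # flip the upset bit back
    return c[1:]
```

    Any single upset in the 7-bit word is corrected, which is why scrubbing plus such a code can ride out a steady rate of single-bit flips like Clementine's.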

  7. Rapid solution of large-scale systems of equations

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.

    1994-01-01

    The analysis and design of complex aerospace structures requires the rapid solution of large systems of linear and nonlinear equations, eigenvalue extraction for buckling, vibration and flutter modes, structural optimization and design sensitivity calculation. Computers with multiple processors and vector capabilities can offer substantial computational advantages over traditional scalar computers for these analyses. These computers fall into two categories: shared memory computers and distributed memory computers. This presentation covers general-purpose, highly efficient algorithms for generation/assembly of element matrices, solution of systems of linear and nonlinear equations, eigenvalue and design sensitivity analysis, and optimization. All algorithms are coded in FORTRAN for shared memory computers and many are adapted to distributed memory computers. The capability and numerical performance of these algorithms will be addressed.

  8. Stream-based Hebbian eigenfilter for real-time neuronal spike discrimination

    PubMed Central

    2012-01-01

    Background Principal component analysis (PCA) has been widely employed for automatic neuronal spike sorting. Calculating principal components (PCs) is computationally expensive, and requires complex numerical operations and large memory resources. Substantial hardware resources are therefore needed for hardware implementations of PCA. The General Hebbian algorithm (GHA) has been proposed for calculating PCs of neuronal spikes in our previous work, which eliminates the need for the computationally expensive covariance analysis and eigenvalue decomposition of conventional PCA algorithms. However, large memory resources are still inherently required for storing a large volume of aligned spikes for training PCs. Such a large memory consumes substantial hardware resources and dissipates significant power, making GHA difficult to implement in portable or implantable multi-channel recording micro-systems. Method In this paper, we present a new algorithm for PCA-based spike sorting based on GHA, namely the stream-based Hebbian eigenfilter, which eliminates the inherent memory requirements of GHA while keeping the accuracy of spike sorting by utilizing the pseudo-stationarity of neuronal spikes. Because of the reduction in hardware storage requirements, the proposed algorithm can lead to ultra-low hardware resource usage and power consumption, which is critical for future multi-channel micro-systems. Both clinical and synthetic neural recording data sets were employed for evaluating the accuracy of the stream-based Hebbian eigenfilter. The performance of spike sorting using the stream-based eigenfilter and the computational complexity of the eigenfilter were rigorously evaluated and compared with conventional PCA algorithms. Field programmable gate arrays (FPGAs) were employed to implement the proposed algorithm, evaluate the hardware implementations and demonstrate the reduction in both power consumption and hardware memory achieved by the streaming computation. Results and discussion Results demonstrate that the stream-based eigenfilter can achieve the same accuracy and is 10 times more computationally efficient when compared with conventional PCA algorithms. Hardware evaluations show that 90.3% of logic resources, 95.1% of power consumption and 86.8% of computing latency can be reduced by the stream-based eigenfilter when compared with PCA hardware. By utilizing the streaming method, 92% of memory resources and 67% of power consumption can be saved when compared with the direct implementation of GHA. Conclusion The stream-based Hebbian eigenfilter presents a novel approach to enable real-time spike sorting with reduced computational complexity and hardware costs. This new design can be further utilized for multi-channel neuro-physiological experiments or chronic implants. PMID:22490725
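    The streaming idea the abstract builds on can be sketched with the textbook GHA update (Sanger's rule): one spike sample is consumed per step, so no window of aligned spikes has to be stored. This toy NumPy version is an illustration of the rule, not the authors' hardware eigenfilter.

```python
import numpy as np

def gha_step(W, x, lr):
    """One streaming update of the Generalized Hebbian Algorithm
    (Sanger's rule). W holds k principal-component estimates (k x d);
    x is a single d-dimensional sample, so no batch is kept in memory."""
    y = W @ x                                              # projections onto current PCs
    # dW = lr * (y x^T - lower_triangular(y y^T) W) deflates earlier PCs
    W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W
```

    Run on a stream whose first axis dominates the variance, the single estimated component rotates toward that axis without any covariance matrix ever being formed.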

  9. Comparison of Conjugate Gradient Density Matrix Search and Chebyshev Expansion Methods for Avoiding Diagonalization in Large-Scale Electronic Structure Calculations

    NASA Technical Reports Server (NTRS)

    Bates, Kevin R.; Daniels, Andrew D.; Scuseria, Gustavo E.

    1998-01-01

    We report a comparison of two linear-scaling methods which avoid the diagonalization bottleneck of traditional electronic structure algorithms. The Chebyshev expansion method (CEM) is implemented for carbon tight-binding calculations of large systems, and its memory and timing requirements are compared to those of our previously implemented conjugate gradient density matrix search (CG-DMS). Benchmark calculations are carried out on icosahedral fullerenes from C60 to C8640, and the linear-scaling memory and CPU requirements of the CEM are demonstrated. We show that the CPU requirements of the CEM and CG-DMS are similar for calculations with comparable accuracy.
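    The memory behaviour of the CEM comes from the two-term Chebyshev recurrence: however many terms the expansion uses, only three dense matrices are live at once. The following is a generic illustration of that recurrence, not the authors' tight-binding code; H is assumed pre-scaled so its spectrum lies in [-1, 1].

```python
import numpy as np

def chebyshev_matrix_function(H, coeffs):
    """Evaluate F = sum_k c_k T_k(H) via T_{k+1} = 2 H T_k - T_{k-1}.
    Memory stays at three matrices regardless of expansion order."""
    T_prev = np.eye(H.shape[0])
    T_curr = H.copy()
    F = coeffs[0] * T_prev + coeffs[1] * T_curr
    for c in coeffs[2:]:
        T_prev, T_curr = T_curr, 2.0 * H @ T_curr - T_prev
        F = F + c * T_curr
    return F
```

    With sparse H, each recurrence step is a sparse matrix product, which is what yields the linear scaling in system size.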

  10. Program Design for Retrospective Searches on Large Data Bases

    ERIC Educational Resources Information Center

    Thiel, L. H.; Heaps, H. S.

    1972-01-01

    Retrospective search of large data bases requires development of special techniques for automatic compression of data and minimization of the number of input-output operations to the computer files. The computer program should require a relatively small amount of internal memory. This paper describes the structure of such a program. (9 references)…

  11. Factors modulating the effect of divided attention during retrieval of words.

    PubMed

    Fernandes, Myra A; Moscovitch, Morris

    2002-07-01

    In this study, we examined variables modulating interference effects on episodic memory under divided attention conditions during retrieval for a list of unrelated words. In Experiment 1, we found that distracting tasks that required animacy or syllable decisions to visually presented words, without a memory load, produced large interference on free recall performance. In Experiment 2, a distracting task requiring phonemic decisions about nonsense words produced a far larger interference effect than one that required semantic decisions about pictures. In Experiment 3, we replicated the effect of the nonsense-word distracting task on memory and showed that an equally resource-demanding picture-based task produced significant interference with memory retrieval, although the effect was smaller in magnitude. Taken together, the results suggest that free recall is disrupted by competition for phonological or word-form representations during retrieval and, to a lesser extent, by competition for semantic representations.

  12. Low rank approximation methods for MR fingerprinting with large scale dictionaries.

    PubMed

    Yang, Mingrui; Ma, Dan; Jiang, Yun; Hamilton, Jesse; Seiberlich, Nicole; Griswold, Mark A; McGivney, Debra

    2018-04-01

    This work proposes new low rank approximation approaches with significant memory savings for large scale MR fingerprinting (MRF) problems. We introduce a compressed MRF with randomized singular value decomposition method to significantly reduce the memory requirement for calculating a low rank approximation of large sized MRF dictionaries. We further relax this requirement by exploiting the structures of MRF dictionaries in the randomized singular value decomposition space and fitting them to low-degree polynomials to generate high resolution MRF parameter maps. In vivo 1.5T and 3T brain scan data are used to validate the approaches. T1, T2, and off-resonance maps are in good agreement with those of the standard MRF approach. Moreover, the memory savings are up to 1000 times for the MRF-fast imaging with steady-state precession sequence and more than 15 times for the MRF-balanced, steady-state free precession sequence. The proposed compressed MRF with randomized singular value decomposition and dictionary fitting methods are memory efficient low rank approximation methods, which can benefit the usage of MRF in clinical settings. They also have great potential in large scale MRF problems, such as problems considering multi-component MRF parameters or high resolution in the parameter space. Magn Reson Med 79:2392-2400, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
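    The randomized SVD at the heart of the compression can be sketched in a few lines. This is a generic range-finder in the style of Halko et al. with illustrative sizes, not the authors' MRF code: the dictionary is never decomposed directly, only a small random sketch of it.

```python
import numpy as np

def randomized_svd(D, rank, oversample=10, seed=0):
    """Rank-r approximation of a tall dictionary D: sketch its column
    space with a random test matrix, orthonormalize, then take an
    exact SVD of the small projected matrix."""
    rng = np.random.default_rng(seed)
    sketch = D @ rng.standard_normal((D.shape[1], rank + oversample))
    Q, _ = np.linalg.qr(sketch)                  # orthonormal range basis
    U_small, s, Vt = np.linalg.svd(Q.T @ D, full_matrices=False)
    return Q @ U_small[:, :rank], s[:rank], Vt[:rank]
```

    Only the thin sketch and the (rank + oversample) x n projection are held in memory, which is where the reported savings over a full SVD of the dictionary come from.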

  13. Computational Issues in Damping Identification for Large Scale Problems

    NASA Technical Reports Server (NTRS)

    Pilkey, Deborah L.; Roe, Kevin P.; Inman, Daniel J.

    1997-01-01

    Two damping identification methods are tested for efficiency in large-scale applications. One is an iterative routine, and the other a least squares method. Numerical simulations have been performed on multiple degree-of-freedom models to test the effectiveness of the algorithms and the usefulness of parallel computation for these problems. High Performance Fortran is used to parallelize the algorithms. Tests were performed using the IBM SP2 at NASA Ames Research Center. The least squares method tested incurs high communication costs, which reduces the benefit of high performance computing. This method's memory requirement grows at a very rapid rate, meaning that larger problems can quickly exceed available computer memory. The iterative method's memory requirement grows at a much slower pace, and it is able to handle problems with 500+ degrees of freedom on a single processor. This method benefits from parallelization, and significant speedup can be seen for problems of 100+ degrees of freedom.

  14. Verbal learning on depressive pseudodementia: accentuate impairment of free recall, moderate on learning processes, and spared short-term and recognition memory.

    PubMed

    Paula, Jonas Jardim de; Miranda, Débora Marques; Nicolato, Rodrigo; Moraes, Edgar Nunes de; Bicalho, Maria Aparecida Camargos; Malloy-Diniz, Leandro Fernandes

    2013-09-01

    Depressive pseudodementia (DPD) is a clinical condition in which depressive symptoms are accompanied by cognitive and functional impairment characteristic of dementia. Memory complaints are among the most frequently reported cognitive symptoms in DPD. The present study aims to assess the verbal learning profile of elderly patients with DPD. Ninety-six older adults (34 DPD and 62 controls) were assessed with neuropsychological tests including the Rey auditory-verbal learning test (RAVLT). A multivariate general linear model was used to assess group differences while controlling for demographic factors. Moderate or large effects were found on all RAVLT components except short-term and recognition memory. DPD impairs verbal memory, with a large effect size on free recall and a moderate effect size on learning. Short-term storage and recognition memory are useful in clinical contexts when a differential diagnosis is required.

  15. Compact Holographic Data Storage

    NASA Technical Reports Server (NTRS)

    Chao, T. H.; Reyes, G. F.; Zhou, H.

    2001-01-01

    NASA's future missions would require massive high-speed onboard data storage capability for Space Science missions. For Space Science, such as the Europa Lander mission, the onboard data storage requirements would be focused on maximizing the spacecraft's ability to survive fault conditions (i.e., no loss in stored science data when the spacecraft enters the 'safe mode') and autonomously recover from them during NASA's long-life and deep space missions. This would require the development of non-volatile memory. In order to survive the stringent environment of space exploration missions, onboard memory requirements would also include: (1) surviving a high radiation environment (1 Mrad), (2) operating effectively and efficiently for a very long time (10 years), and (3) sustaining at least a billion write cycles. Therefore, the memory technology requirements of NASA's Earth Science and Space Science missions are large capacity, non-volatility, high transfer rate, high radiation resistance, high storage density, and high power efficiency. JPL, under current sponsorship from NASA Space Science and Earth Science Programs, is developing a high-density, nonvolatile and rad-hard Compact Holographic Data Storage (CHDS) system to enable large-capacity, high-speed, low-power-consumption read/write of data in a space environment. The entire read/write operation will be controlled with an electrooptic mechanism without any moving parts. The CHDS will consist of laser diodes, a photorefractive crystal, a spatial light modulator, a photodetector array, and an I/O electronic interface. In operation, pages of information would be recorded and retrieved with random access at high speed. The nonvolatile, rad-hard characteristics of the holographic memory will provide a revolutionary memory technology meeting the high-radiation challenge facing the Europa Lander mission. Additional information is contained in the original extended abstract.

  16. Memory efficient solution of the primitive equations for numerical weather prediction on the CYBER 205

    NASA Technical Reports Server (NTRS)

    Tuccillo, J. J.

    1984-01-01

    Numerical Weather Prediction (NWP), for both operational and research purposes, requires not only fast computational speed but also large memory. A technique for solving the Primitive Equations for atmospheric motion on the CYBER 205, as implemented in the Mesoscale Atmospheric Simulation System, which is fully vectorized and requires substantially less memory than other techniques such as the Leapfrog or Adams-Bashforth schemes, is discussed. The technique presented uses the Euler-backward time marching scheme. Also discussed are several techniques for reducing the computational time of the model by replacing slow intrinsic routines with faster algorithms that use only hardware vector instructions.
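    The memory argument can be illustrated on a scalar toy problem: a backward (implicit) Euler step needs only the current time level, whereas leapfrog must carry two. This is a minimal sketch of the scheme on linear decay, not the MASS code, where the implicit solve is of course far more involved.

```python
import math

def euler_backward_decay(y0, lam, dt, nsteps):
    """Backward Euler for dy/dt = -lam * y.  Each step solves
    y_new = y - dt * lam * y_new in closed form, so only one time
    level is stored; leapfrog would need the state at two levels."""
    y = y0
    for _ in range(nsteps):
        y = y / (1.0 + lam * dt)
    return y
```

    The scheme is also unconditionally stable for this problem: even absurdly large time steps keep the solution bounded, at the cost of accuracy.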

  17. TRPC3 channels critically regulate hippocampal excitability and contextual fear memory.

    PubMed

    Neuner, Sarah M; Wilmott, Lynda A; Hope, Kevin A; Hoffmann, Brian; Chong, Jayhong A; Abramowitz, Joel; Birnbaumer, Lutz; O'Connell, Kristen M; Tryba, Andrew K; Greene, Andrew S; Savio Chan, C; Kaczorowski, Catherine C

    2015-03-15

    Memory formation requires de novo protein synthesis, and memory disorders may result from misregulated synthesis of critical proteins that remain largely unidentified. Plasma membrane ion channels and receptors are likely candidates given their role in regulating neuron excitability, a candidate memory mechanism. Here we conduct targeted molecular monitoring and quantitation of hippocampal plasma membrane proteins from mice with intact or impaired contextual fear memory to identify putative candidates. Here we report contextual fear memory deficits correspond to increased Trpc3 gene and protein expression, and demonstrate TRPC3 regulates hippocampal neuron excitability associated with memory function. These data provide a mechanistic explanation for enhanced contextual fear memory reported herein following knockdown of TRPC3 in hippocampus. Collectively, TRPC3 modulates memory and may be a feasible target to enhance memory and treat memory disorders. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.

  18. VOP memory management in MPEG-4

    NASA Astrophysics Data System (ADS)

    Vaithianathan, Karthikeyan; Panchanathan, Sethuraman

    2001-03-01

    MPEG-4 is a multimedia standard built around Video Object Planes (VOPs). Generating VOPs for arbitrary video sequences is still a challenging problem that largely remains unsolved. Nevertheless, if the problem is treated by imposing certain constraints, solutions for specific application domains can be found. MPEG-4 applications in mobile devices are one such domain, where the opposing goals of low power and high throughput must both be met. Efficient memory management plays a major role in reducing power consumption. Memory management for VOPs is especially difficult because the lifetimes of these objects vary and may overlap. Varying object lifetimes require dynamic memory management, where memory fragmentation is a key problem that needs to be addressed. In general, memory management systems address this problem with a combination of strategy, policy and mechanism. For MPEG-4 based mobile devices that lack instruction processors, a hardware based memory management solution is necessary. In MPEG-4 based mobile devices that have a RISC processor, using a real-time operating system (RTOS) for this memory management task is not expected to be efficient, because the strategies and policies used by the RTOS are often tuned for handling memory segments of smaller sizes than object sizes. Hence, a memory management scheme specifically tuned for VOPs is important. In this paper, different strategies, policies and mechanisms for memory management are considered, and an efficient combination is proposed for VOP memory management, along with a hardware architecture that can handle the proposed combination.

  19. Thermoreversible Folding as a Route to the Unique Shape-Memory Character in Ductile Polymer Networks.

    PubMed

    McBride, Matthew K; Podgorski, Maciej; Chatani, Shunsuke; Worrell, Brady T; Bowman, Christopher N

    2018-06-21

    Ductile, cross-linked films were folded as a means to program temporary shapes without the need for complex heating cycles or specialized equipment. Certain cross-linked polymer networks, formed here with the thiol-isocyanate reaction, possessed the ability to be pseudoplastically deformed below the glass transition, and the original shape was recovered during heating through the glass transition. To circumvent the large forces required to plastically deform a glassy polymer network, we have utilized folding, which localizes the deformation in small creases, and achieved large dimensional changes with simple programming procedures. In addition to dimension changes, three-dimensional objects such as swans and airplanes were developed to demonstrate applying origami principles to shape memory. We explored the fundamental mechanical properties that are required to fold polymer sheets and observed that a yield point that does not correspond to catastrophic failure is required. Unfolding occurred during heating through the glass transition, indicating the vitrification of the network that maintained the temporary, folded shape. Folding was demonstrated as a powerful tool to simply and effectively program ductile shape-memory polymers without the need for thermal cycling.

  20. Parallel Clustering Algorithm for Large-Scale Biological Data Sets

    PubMed Central

    Wang, Minchao; Zhang, Wu; Ding, Wang; Dai, Dongbo; Zhang, Huiran; Xie, Hao; Chen, Luonan; Guo, Yike; Xie, Jiang

    2014-01-01

    Background Recent explosion of biological data poses a great challenge to traditional clustering algorithms. With the increasing scale of data sets, much larger memory and longer runtime are required for cluster identification problems. The affinity propagation algorithm outperforms many other classical clustering algorithms and is widely applied in biological research. However, its time and space complexity become a serious bottleneck when handling large-scale data sets. Moreover, the similarity matrix, whose construction takes a long runtime, is required before running the affinity propagation algorithm, since the algorithm clusters data sets based on the similarities between data pairs. Methods Two types of parallel architectures are proposed in this paper to accelerate the similarity matrix construction and the affinity propagation algorithm. A shared-memory architecture is used to construct the similarity matrix, and a distributed system is used for the affinity propagation algorithm, because of its large memory size and great computing capacity. An appropriate scheme of data partition and reduction is designed in our method in order to minimize the global communication cost among processes. Result A speedup of 100 is gained with 128 cores. The runtime is reduced from several hours to a few seconds, which indicates that the parallel algorithm is capable of handling large-scale data sets effectively. The parallel affinity propagation also achieves good performance when clustering large-scale gene data (microarray) and detecting families in large protein superfamilies. PMID:24705246
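    The responsibility/availability message passing that the paper parallelizes can be sketched serially in NumPy. This is the textbook Frey-Dueck update with damping, not the authors' distributed version; the two dense n x n matrices R and A are exactly what their implementation partitions across processes.

```python
import numpy as np

def affinity_propagation(S, iters=200, damping=0.5):
    """Serial affinity propagation on a similarity matrix S whose
    diagonal holds the preferences.  Returns an exemplar index for
    each point."""
    n = S.shape[0]
    R = np.zeros((n, n))
    A = np.zeros((n, n))
    for _ in range(iters):
        # Responsibilities: r(i,k) = s(i,k) - max_{k'!=k} (a + s)
        AS = A + S
        idx = np.argmax(AS, axis=1)
        first = AS[np.arange(n), idx].copy()
        AS[np.arange(n), idx] = -np.inf
        second = AS.max(axis=1)
        Rnew = S - first[:, None]
        Rnew[np.arange(n), idx] = S[np.arange(n), idx] - second
        R = damping * R + (1 - damping) * Rnew
        # Availabilities: sum positive responsibilities from other points
        Rp = np.maximum(R, 0)
        np.fill_diagonal(Rp, R.diagonal())
        Anew = np.minimum(0, Rp.sum(axis=0)[None, :] - Rp)
        np.fill_diagonal(Anew, Rp.sum(axis=0) - Rp.diagonal())
        A = damping * A + (1 - damping) * Anew
    return np.argmax(R + A, axis=1)
```

    Both update sweeps are embarrassingly row- or column-parallel, which is what makes the distributed partitioning in the paper effective.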

  1. A multiresolution approach to iterative reconstruction algorithms in X-ray computed tomography.

    PubMed

    De Witte, Yoni; Vlassenbroeck, Jelle; Van Hoorebeke, Luc

    2010-09-01

    In computed tomography, the application of iterative reconstruction methods in practical situations is impeded by their high computational demands. Especially in high resolution X-ray computed tomography, where reconstruction volumes contain a large number of volume elements (several gigavoxels), this computational burden has prevented their breakthrough in practice. Besides the large number of calculations, iterative algorithms require the entire volume to be kept in memory during reconstruction, which quickly becomes cumbersome for large data sets. To overcome this obstacle, we present a novel multiresolution reconstruction, which greatly reduces the required amount of memory without significantly affecting the reconstructed image quality. It is shown that, combined with an efficient implementation on a graphics processing unit, the multiresolution approach enables the application of iterative algorithms in the reconstruction of large volumes at an acceptable speed using only limited resources.
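    The multiresolution idea, solving a coarse version of the problem first and prolonging the result as the starting volume at full resolution, can be sketched on a toy 1-D deblurring problem. These are generic Landweber iterations with an invented blur operator, not the authors' GPU reconstruction.

```python
import numpy as np

def landweber(A, b, x0, iters, lr):
    """Basic iterative reconstruction: gradient steps on ||Ax - b||^2."""
    x = x0.copy()
    for _ in range(iters):
        x -= lr * (A.T @ (A @ x - b))
    return x

# Toy 1-D "scanner": b is a blurred view of the unknown signal x_true.
n = 64
A = np.zeros((n, n))
for i in range(n):
    A[i, max(0, i - 1):i + 2] = 1.0 / 3.0           # 3-tap blur operator
x_true = np.sin(np.linspace(0.0, 3.0, n))
b = A @ x_true

# Coarse solve with half the unknowns, then prolong as a warm start.
P = np.repeat(np.eye(n // 2), 2, axis=0)            # nearest-neighbour upsampling
x_coarse = landweber(A @ P, b, np.zeros(n // 2), 300, 0.4)
x_fine = landweber(A, b, P @ x_coarse, 300, 0.4)
```

    The coarse stage holds half the unknowns in memory (an eighth, for a 3-D volume), and the warm start means the expensive full-resolution stage needs fewer iterations for the same residual.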

  2. Set-Membership Identification for Robust Control Design

    DTIC Science & Technology

    1993-04-28

    system G can be regarded as having no memory in (18) in terms of G and 0, we get of events prior to t = 1, the initial time. Roughly, this means all...algorithm in [1]. Also in our application, the size of the matrices involved is quite large and special attention should be paid to the memory ...management and algorithmic implementation; otherwise huge amounts of memory will be required to perform the optimization even for modest values of M and N

  3. Multiplexed memory-insensitive quantum repeaters.

    PubMed

    Collins, O A; Jenkins, S D; Kuzmich, A; Kennedy, T A B

    2007-02-09

    Long-distance quantum communication via distant pairs of entangled quantum bits (qubits) is the first step towards secure message transmission and distributed quantum computing. To date, the most promising proposals require quantum repeaters to mitigate the exponential decrease in communication rate due to optical fiber losses. However, these are exquisitely sensitive to the lifetimes of their memory elements. We propose a multiplexing of quantum nodes that should enable the construction of quantum networks that are largely insensitive to the coherence times of the quantum memory elements.

  4. Verification of immune response optimality through cybernetic modeling.

    PubMed

    Batt, B C; Kompala, D S

    1990-02-09

    An immune response cascade that is T cell independent begins with the stimulation of virgin lymphocytes by antigen to differentiate into large lymphocytes. These immune cells can either replicate themselves or differentiate into plasma cells or memory cells. Plasma cells produce antibody at a specific rate up to two orders of magnitude greater than large lymphocytes. However, plasma cells have short life-spans and cannot replicate. Memory cells produce only surface antibody, but in the event of a subsequent infection by the same antigen, memory cells revert rapidly to large lymphocytes. Immunologic memory is maintained throughout the organism's lifetime. Many immunologists believe that the optimal response strategy calls for large lymphocytes to replicate first, then differentiate into plasma cells and when the antigen has been nearly eliminated, they form memory cells. A mathematical model incorporating the concept of cybernetics has been developed to study the optimality of the immune response. Derived from the matching law of microeconomics, cybernetic variables control the allocation of large lymphocytes to maximize the instantaneous antibody production rate at any time during the response in order to most efficiently inactivate the antigen. A mouse is selected as the model organism and bacteria as the replicating antigen. In addition to verifying the optimal switching strategy, results showing how the immune response is affected by antigen growth rate, initial antigen concentration, and the number of antibodies required to eliminate an antigen are included.

  5. Efficient algorithms for accurate hierarchical clustering of huge datasets: tackling the entire protein space.

    PubMed

    Loewenstein, Yaniv; Portugaly, Elon; Fromer, Menachem; Linial, Michal

    2008-07-01

    UPGMA (average linking) is probably the most popular algorithm for hierarchical data clustering, especially in computational biology. However, UPGMA requires the entire dissimilarity matrix in memory. Due to this prohibitive requirement, UPGMA is not scalable to very large datasets. We present a novel class of memory-constrained UPGMA (MC-UPGMA) algorithms. Given any practical memory size constraint, this framework guarantees the correct clustering solution without explicitly requiring all dissimilarities in memory. The algorithms are general and are applicable to any dataset. We present a data-dependent characterization of hardness and clustering efficiency. The presented concepts are applicable to any agglomerative clustering formulation. We apply our algorithm to the entire collection of protein sequences, to automatically build a comprehensive evolutionary-driven hierarchy of proteins from sequence alone. The newly created tree captures protein families better than state-of-the-art large-scale methods such as CluSTr, ProtoNet4 or single-linkage clustering. We demonstrate that leveraging the entire mass embodied in all sequence similarities allows us to significantly improve on current protein family clusterings, which are unable to directly tackle the sheer mass of this data. Furthermore, we argue that non-metric constraints are an inherent complexity of the sequence space and should not be overlooked. The robustness of UPGMA allows significant improvement, especially for multidomain proteins and for large or divergent families. A comprehensive tree built from all UniProt sequence similarities, together with navigation and classification tools, will be made available as part of the ProtoNet service. A C++ implementation of the algorithm is available on request.
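    For contrast with the memory-constrained variant, plain UPGMA with the whole dissimilarity matrix resident in memory looks like this. This is a naive O(n^3) sketch of standard average linkage, not the MC-UPGMA algorithm; its D matrix is exactly the prohibitive requirement the paper removes.

```python
import numpy as np

def upgma(D):
    """Average-linkage agglomerative clustering.  The full n x n
    dissimilarity matrix stays in memory, which is what prevents
    scaling to the entire protein space."""
    D = np.array(D, dtype=float)
    size = [1] * D.shape[0]
    active = set(range(D.shape[0]))
    merges = []
    while len(active) > 1:
        # closest active pair
        d, i, j = min((D[i, j], i, j) for i in active for j in active if i < j)
        merges.append((i, j, d))
        # distances from the merged cluster: size-weighted average
        row = (size[i] * D[i] + size[j] * D[j]) / (size[i] + size[j])
        D = np.vstack([D, row])
        D = np.column_stack([D, np.append(row, 0.0)])
        size.append(size[i] + size[j])
        active -= {i, j}
        active.add(len(size) - 1)
    return merges
```

    Each merge appends a new row and column, so memory grows as the square of the input size; MC-UPGMA's contribution is getting the same merge sequence without ever materializing that matrix.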

  6. Accelerating large-scale simulation of seismic wave propagation by multi-GPUs and three-dimensional domain decomposition

    NASA Astrophysics Data System (ADS)

    Okamoto, Taro; Takenaka, Hiroshi; Nakamura, Takeshi; Aoki, Takayuki

    2010-12-01

    We adopted the GPU (graphics processing unit) to accelerate the large-scale finite-difference simulation of seismic wave propagation. The simulation can benefit from the high memory bandwidth of the GPU because it is a "memory intensive" problem. In a single-GPU case we achieved a performance of about 56 GFlops, which was about 45-fold faster than that achieved by a single core of the host central processing unit (CPU). We confirmed that the optimized use of fast shared memory and registers was essential for performance. In the multi-GPU case with three-dimensional domain decomposition, the non-contiguous memory alignment in the ghost zones was found to impose a substantial data-transfer time between the GPU and the host node. This problem was solved by using contiguous memory buffers for the ghost zones. We achieved a performance of about 2.2 TFlops by using 120 GPUs and 330 GB of total memory: nearly (or more than) 2200 cores of host CPUs would be required to achieve the same performance. The weak scaling was nearly proportional to the number of GPUs. We therefore conclude that GPU computing for large-scale simulation of seismic wave propagation is a promising approach, as a faster simulation is possible with reduced computational resources compared to CPUs.
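    The fix described, copying each non-contiguous ghost face into a contiguous buffer before the transfer, can be sketched with NumPy as a host-side analogue (illustrative only, not the authors' CUDA code): each strided face view is gathered into one dense buffer so the exchange becomes a single bulk copy instead of many small strided ones.

```python
import numpy as np

def pack_ghost_faces(u):
    """Gather the six one-cell-thick interior faces of a 3-D subdomain
    into contiguous buffers, ready for one bulk transfer per face."""
    faces = (u[1, 1:-1, 1:-1],  u[-2, 1:-1, 1:-1],   # +/- x faces
             u[1:-1, 1, 1:-1],  u[1:-1, -2, 1:-1],   # +/- y faces
             u[1:-1, 1:-1, 1],  u[1:-1, 1:-1, -2])   # +/- z faces
    return [np.ascontiguousarray(f) for f in faces]
```

    On a GPU the same packing is done by a small kernel before the device-to-host copy, which is what removed the transfer bottleneck in the multi-GPU runs.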

  7. A comparison of several methods of solving nonlinear regression groundwater flow problems

    USGS Publications Warehouse

    Cooley, Richard L.

    1985-01-01

    Computational efficiency and computer memory requirements for four methods of minimizing functions were compared for four test nonlinear-regression steady state groundwater flow problems. The fastest methods were the Marquardt and quasi-linearization methods, which required almost identical computer times and numbers of iterations; the next fastest was the quasi-Newton method, and last was the Fletcher-Reeves method, which did not converge in 100 iterations for two of the problems. The fastest method per iteration was the Fletcher-Reeves method, and this was followed closely by the quasi-Newton method. The Marquardt and quasi-linearization methods were slower. For all four methods the speed per iteration was directly related to the number of parameters in the model. However, this effect was much more pronounced for the Marquardt and quasi-linearization methods than for the other two. Hence the quasi-Newton (and perhaps Fletcher-Reeves) method might be more efficient than either the Marquardt or quasi-linearization methods if the number of parameters in a particular model were large, although this remains to be proven. The Marquardt method required somewhat less central memory than the quasi-linearization method for three of the four problems. For all four problems the quasi-Newton method required roughly two thirds to three quarters of the memory required by the Marquardt method, and the Fletcher-Reeves method required slightly less memory than the quasi-Newton method. Memory requirements were not excessive for any of the four methods.
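    The Marquardt iteration the comparison centers on can be sketched as a textbook Levenberg-Marquardt step on a toy exponential model (not Cooley's groundwater code). The m x n Jacobian and n x n damped normal matrix it factors are what drive the memory costs measured in the paper.

```python
import numpy as np

def marquardt(residual, jacobian, p0, iters=100, lam=1e-3):
    """Minimal Marquardt iteration: damp the Gauss-Newton normal
    equations, shrinking the damping on success and growing it on
    failure so the step blends toward steepest descent when needed."""
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        r, J = residual(p), jacobian(p)
        A = J.T @ J
        step = np.linalg.solve(A + lam * np.diag(np.diag(A)), -J.T @ r)
        if np.sum(residual(p + step) ** 2) < np.sum(r ** 2):
            p, lam = p + step, lam * 0.5   # accept: trust Gauss-Newton more
        else:
            lam *= 10.0                    # reject: damp more heavily
    return p
```

    The diagonal scaling A + lam * diag(A) is Marquardt's variant; plain Levenberg damping would add lam * I instead.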

  8. Voluntary Running Depreciates the Requirement of Ca[superscript 2+]-Stimulated cAMP Signaling in Synaptic Potentiation and Memory Formation

    ERIC Educational Resources Information Center

    Zheng, Fei; Zhang, Ming; Ding, Qi; Sethna, Ferzin; Yan, Lily; Moon, Changjong; Yang, Miyoung; Wang, Hongbing

    2016-01-01

    Mental health and cognitive functions are influenced by both genetic and environmental factors. Although having active lifestyle with physical exercise improves learning and memory, how it interacts with the specific key molecular regulators of synaptic plasticity is largely unknown. Here, we examined the effects of voluntary running on long-term…

  9. Research on memory management in embedded systems

    NASA Astrophysics Data System (ADS)

    Huang, Xian-ying; Yang, Wu

    2005-12-01

    Memory is a scarce resource in embedded systems because of cost and size constraints, so embedded applications cannot use memory as freely as desktop applications can, even though data and code must still reside in memory to run. The purpose of this paper is to save memory during embedded application development and to guarantee operation under limited-memory conditions. Because embedded systems typically have little memory yet must run for long periods, one goal of this study is to construct an allocator that allocates memory effectively, tolerates long-running operation, and reduces memory fragmentation and exhaustion. Fragmentation and exhaustion depend on the memory allocation algorithm; static allocation produces no fragmentation, so this paper seeks an effective dynamic allocation algorithm that keeps fragmentation low. Data, which occupies a large share of memory, is also critical to keeping an application running correctly, and how much data fits in a given amount of memory depends on the chosen data structures. Techniques for designing application data in mobile phones are therefore also explained and discussed.
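
    The fragmentation-free behaviour such allocators aim for is easiest to see with a fixed-size block pool: allocation and release are O(1), and because every block has the same size, external fragmentation cannot occur. This is an illustrative sketch of the general technique, not the paper's allocator:

```python
class FixedBlockPool:
    """Fixed-size block allocator: a free list over a preallocated arena.

    All blocks are the same size, so alloc/free are O(1) and freed slots
    are always reusable -- no external fragmentation, and exhaustion is
    detected deterministically. Sizes and names here are illustrative.
    """
    def __init__(self, block_size, num_blocks):
        self.block_size = block_size
        self.arena = bytearray(block_size * num_blocks)  # static arena
        self.free_list = list(range(num_blocks))         # free block indices

    def alloc(self):
        if not self.free_list:
            raise MemoryError("pool exhausted")
        return self.free_list.pop()

    def free(self, idx):
        self.free_list.append(idx)

    def write(self, idx, data):
        off = idx * self.block_size
        self.arena[off:off + len(data)] = data

pool = FixedBlockPool(block_size=32, num_blocks=4)
a = pool.alloc()
b = pool.alloc()
pool.write(a, b"hello")
pool.free(a)
c = pool.alloc()   # reuses a's slot: the pool never grows or fragments
```

    Real embedded allocators typically keep several pools of different block sizes; the free-list idea stays the same.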

  10. Massively parallel support for a case-based planning system

    NASA Technical Reports Server (NTRS)

    Kettler, Brian P.; Hendler, James A.; Anderson, William A.

    1993-01-01

    Case-based planning (CBP), a kind of case-based reasoning, is a technique in which previously generated plans (cases) are stored in memory and can be reused to solve similar planning problems in the future. CBP can save considerable time over generative planning, in which a new plan is produced from scratch. CBP thus offers a potential (heuristic) mechanism for handling intractable problems. One drawback of CBP systems has been the need for a highly structured memory to reduce retrieval times. This approach requires significant domain engineering and complex memory indexing schemes to make these planners efficient. In contrast, our CBP system, CaPER, uses a massively parallel frame-based AI language (PARKA) and can do extremely fast retrieval of complex cases from a large, unindexed memory. The ability to do fast, frequent retrievals has many advantages: indexing is unnecessary; very large case bases can be used; memory can be probed in numerous alternate ways; and queries can be made at several levels, allowing more specific retrieval of stored plans that better fit the target problem with less adaptation. In this paper we describe CaPER's case retrieval techniques and some experimental results showing its good performance, even on large case bases.

  11. Planning paths to multiple targets: memory involvement and planning heuristics in spatial problem solving.

    PubMed

    Wiener, J M; Ehbauer, N N; Mallot, H A

    2009-09-01

    For large numbers of targets, path planning is a complex and computationally expensive task. Humans, however, usually solve such tasks quickly and efficiently. We present experiments studying human path planning performance and the cognitive processes and heuristics involved. Twenty-five places were arranged on a regular grid in a large room. Participants were repeatedly asked to solve traveling salesman problems (TSP), i.e., to find the shortest closed loop connecting a start location with multiple target locations. In Experiment 1, we tested whether humans employed the nearest neighbor (NN) strategy when solving the TSP. Results showed that subjects outperformed the NN-strategy, suggesting that it is not sufficient to explain human route planning behavior. As a second possible strategy we tested a hierarchical planning heuristic in Experiment 2, demonstrating that participants first plan a coarse route on the region level that is refined during navigation. To test for the relevance of spatial working memory (SWM) and spatial long-term memory (LTM) for planning performance and the planning heuristics applied, we varied the memory demands between conditions in Experiment 2. In one condition the target locations were directly marked, such that no memory was required; a second condition required participants to memorize the target locations during path planning (SWM); in a third condition, the target locations additionally had to be retrieved from LTM (SWM and LTM). Results showed that navigation performance decreased with increasing memory demands while the dependence on the hierarchical planning heuristic increased.
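
    The nearest-neighbor (NN) strategy tested in Experiment 1 can be sketched in a few lines: always walk to the closest unvisited target, then return to the start. The grid coordinates below are illustrative, not the experiment's actual room layout:

```python
import math

def nearest_neighbour_tour(start, targets):
    """Greedy NN heuristic for a closed TSP tour: repeatedly move to the
    nearest unvisited target, then close the loop back at the start."""
    tour, current = [start], start
    todo = set(targets)
    while todo:
        nxt = min(todo, key=lambda p: math.dist(current, p))
        todo.remove(nxt)
        tour.append(nxt)
        current = nxt
    tour.append(start)          # closed loop, as in the experiments
    return tour

def tour_length(tour):
    return sum(math.dist(a, b) for a, b in zip(tour, tour[1:]))

# A few targets on a small grid
targets = [(0, 1), (2, 0), (2, 2), (0, 2)]
tour = nearest_neighbour_tour((0, 0), targets)
```

    That subjects beat this greedy baseline is the paper's point: NN is cheap (quadratic time) but myopic, so it can lock itself into long return legs that humans avoid.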

  12. A fast low-power optical memory based on coupled micro-ring lasers

    NASA Astrophysics Data System (ADS)

    Hill, Martin T.; Dorren, Harmen J. S.; de Vries, Tjibbe; Leijtens, Xaveer J. M.; den Besten, Jan Hendrik; Smalbrugge, Barry; Oei, Yok-Siang; Binsma, Hans; Khoe, Giok-Djan; Smit, Meint K.

    2004-11-01

    The increasing speed of fibre-optic-based telecommunications has focused attention on high-speed optical processing of digital information. Complex optical processing requires a high-density, high-speed, low-power optical memory that can be integrated with planar semiconductor technology for buffering of decisions and telecommunication data. Recently, ring lasers with extremely small size and low operating power have been made, and we demonstrate here a memory element constructed by interconnecting these microscopic lasers. Our device occupies an area of 18 × 40 µm² on an InP/InGaAsP photonic integrated circuit, and switches within 20 ps with 5.5 fJ optical switching energy. Simulations show that the element has the potential for much smaller dimensions and switching times. Large numbers of such memory elements can be densely integrated and interconnected on a photonic integrated circuit: fast digital optical information processing systems employing large-scale integration should now be viable.

  13. YAdumper: extracting and translating large information volumes from relational databases to structured flat files.

    PubMed

    Fernández, José M; Valencia, Alfonso

    2004-10-12

    Downloading the information stored in relational databases into XML and other flat formats is a common task in bioinformatics. This periodic dumping of information requires considerable CPU time, disk and memory resources. YAdumper has been developed as a purpose-specific tool to deal with the integral structured information download of relational databases. YAdumper is a Java application that organizes database extraction following an XML template based on an external Document Type Declaration. Compared with other non-native alternatives, YAdumper substantially reduces memory requirements and considerably improves writing performance.
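
    The memory saving in such dumpers comes from streaming: each row is written as soon as it is fetched, so memory use stays flat however many rows the query returns. A stdlib-only sketch with sqlite3 (YAdumper's actual template/DTD mechanism is considerably richer than this):

```python
import io
import sqlite3
from xml.sax.saxutils import escape

def dump_table_to_xml(conn, table, out):
    """Stream one table to XML row by row; the cursor iterates lazily,
    so the whole result set is never materialized in memory."""
    cur = conn.execute(f"SELECT * FROM {table}")
    cols = [d[0] for d in cur.description]
    out.write(f"<{table}>\n")
    for row in cur:                       # lazy, row-at-a-time iteration
        out.write("  <row>")
        for col, val in zip(cols, row):
            out.write(f"<{col}>{escape(str(val))}</{col}>")
        out.write("</row>\n")
    out.write(f"</{table}>\n")

# Toy database; table and column names are illustrative
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE genes (name TEXT, note TEXT)")
conn.executemany("INSERT INTO genes VALUES (?, ?)",
                 [("brca1", "tumour <suppressor>"), ("tp53", "guardian")])
buf = io.StringIO()
dump_table_to_xml(conn, "genes", buf)
```

    Escaping each value as it is written is what keeps the output well-formed without ever holding the document in memory.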

  14. The sensory strength of voluntary visual imagery predicts visual working memory capacity.

    PubMed

    Keogh, Rebecca; Pearson, Joel

    2014-10-09

    How much we can actively hold in mind is severely limited and differs greatly from one person to the next. Why some individuals have greater capacities than others is largely unknown. Here, we investigated why such large variations in visual working memory (VWM) capacity might occur, by examining the relationship between visual working memory and visual mental imagery. To assess visual working memory capacity participants were required to remember the orientation of a number of Gabor patches and make subsequent judgments about relative changes in orientation. The sensory strength of voluntary imagery was measured using a previously documented binocular rivalry paradigm. Participants with greater imagery strength also had greater visual working memory capacity. However, they were no better on a verbal number working memory task. Introducing a uniform luminous background during the retention interval of the visual working memory task reduced memory capacity, but only for those with strong imagery. Likewise, for the good imagers increasing background luminance during imagery generation reduced its effect on subsequent binocular rivalry. Luminance increases did not affect any of the subgroups on the verbal number working memory task. Together, these results suggest that luminance was disrupting sensory mechanisms common to both visual working memory and imagery, and not a general working memory system. The disruptive selectivity of background luminance suggests that good imagers, unlike moderate or poor imagers, may use imagery as a mnemonic strategy to perform the visual working memory task. © 2014 ARVO.

  15. Forming-free resistive switching characteristics of Ag/CeO2/Pt devices with a large memory window

    NASA Astrophysics Data System (ADS)

    Zheng, Hong; Kim, Hyung Jun; Yang, Paul; Park, Jong-Sung; Kim, Dong Wook; Lee, Hyun Ho; Kang, Chi Jung; Yoon, Tae-Sik

    2017-05-01

    Ag/CeO2(∼45 nm)/Pt devices exhibited forming-free bipolar resistive switching with a large memory window (low-resistance-state (LRS)/high-resistance-state (HRS) ratio >10⁶) at a low switching voltage (below ±1–2 V) under voltage-sweep conditions. They also retained a large memory window (>10⁴) under pulsed operation (±5 V, 50 μs). The high oxygen ionic conductivity of the CeO2 layer as well as the migration of silver facilitated the formation of a filament for the transition to the LRS at a low voltage, without a high-voltage forming operation. In addition, a certain amount of defects in the CeO2 layer was required for a stable HRS with space-charge-limited conduction, which was confirmed by comparing devices with non-annealed and annealed CeO2 layers.

  16. Superior memory efficiency of quantum devices for the simulation of continuous-time stochastic processes

    NASA Astrophysics Data System (ADS)

    Elliott, Thomas J.; Gu, Mile

    2018-03-01

    Continuous-time stochastic processes pervade everyday experience, and the simulation of models of these processes is of great utility. Classical models of systems operating in continuous-time must typically track an unbounded amount of information about past behaviour, even for relatively simple models, enforcing limits on precision due to the finite memory of the machine. However, quantum machines can require less information about the past than even their optimal classical counterparts to simulate the future of discrete-time processes, and we demonstrate that this advantage extends to the continuous-time regime. Moreover, we show that this reduction in the memory requirement can be unboundedly large, allowing for arbitrary precision even with a finite quantum memory. We provide a systematic method for finding superior quantum constructions, and a protocol for analogue simulation of continuous-time renewal processes with a quantum machine.

  17. SODR Memory Control Buffer Control ASIC

    NASA Technical Reports Server (NTRS)

    Hodson, Robert F.

    1994-01-01

    The Spacecraft Optical Disk Recorder (SODR) is a state of the art mass storage system for future NASA missions requiring high transmission rates and a large capacity storage system. This report covers the design and development of an SODR memory buffer control applications specific integrated circuit (ASIC). The memory buffer control ASIC has two primary functions: (1) buffering data to prevent loss of data during disk access times, (2) converting data formats from a high performance parallel interface format to a small computer systems interface format. Ten 144 p in, 50 MHz CMOS ASIC's were designed, fabricated and tested to implement the memory buffer control function.

  18. Efficient algorithms for accurate hierarchical clustering of huge datasets: tackling the entire protein space

    PubMed Central

    Loewenstein, Yaniv; Portugaly, Elon; Fromer, Menachem; Linial, Michal

    2008-01-01

    Motivation: UPGMA (average linking) is probably the most popular algorithm for hierarchical data clustering, especially in computational biology. However, UPGMA requires the entire dissimilarity matrix in memory. Due to this prohibitive requirement, UPGMA is not scalable to very large datasets. Application: We present a novel class of memory-constrained UPGMA (MC-UPGMA) algorithms. Given any practical memory size constraint, this framework guarantees the correct clustering solution without explicitly requiring all dissimilarities in memory. The algorithms are general and are applicable to any dataset. We present a data-dependent characterization of hardness and clustering efficiency. The presented concepts are applicable to any agglomerative clustering formulation. Results: We apply our algorithm to the entire collection of protein sequences, to automatically build a comprehensive evolutionary-driven hierarchy of proteins from sequence alone. The newly created tree captures protein families better than state-of-the-art large-scale methods such as CluSTr, ProtoNet4 or single-linkage clustering. We demonstrate that leveraging the entire mass embodied in all sequence similarities makes it possible to significantly improve on current protein family clusterings, which cannot directly tackle the sheer mass of this data. Furthermore, we argue that non-metric constraints are an inherent complexity of the sequence space and should not be overlooked. The robustness of UPGMA allows significant improvement, especially for multidomain proteins, and for large or divergent families. Availability: A comprehensive tree built from all UniProt sequence similarities, together with navigation and classification tools, will be made available as part of the ProtoNet service. A C++ implementation of the algorithm is available on request. Contact: lonshy@cs.huji.ac.il PMID:18586742
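
    For contrast with the memory-constrained approach, here is textbook UPGMA and the O(n²) matrix footprint it incurs (this is plain average linkage, not the authors' MC-UPGMA; labels and distances are illustrative):

```python
def upgma(dist, labels):
    """Plain UPGMA (average linkage): repeatedly merge the closest pair
    and average distances weighted by cluster size. Note that every
    pairwise dissimilarity lives in `dist` at once -- the O(n^2)
    footprint MC-UPGMA is designed to avoid."""
    def key(a, b):
        return tuple(sorted((a, b)))

    sizes = {lab: 1 for lab in labels}     # cluster label -> leaf count
    merges = []
    while len(sizes) > 1:
        a, b = min((key(p, q) for p in sizes for q in sizes if p != q),
                   key=lambda k: dist[k])
        na, nb = sizes.pop(a), sizes.pop(b)
        new = f"({a},{b})"
        for c in sizes:                    # size-weighted average distances
            dist[key(new, c)] = (na * dist[key(a, c)] +
                                 nb * dist[key(b, c)]) / (na + nb)
        sizes[new] = na + nb
        merges.append((a, b))
    return merges

dists = {("A", "B"): 2.0, ("A", "C"): 4.0, ("B", "C"): 4.0}
merges = upgma(dists, ["A", "B", "C"])
```

    MC-UPGMA's contribution is to obtain the same merge sequence while only ever holding a bounded slice of `dist` in memory.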

  19. Large-Scale Cubic-Scaling Random Phase Approximation Correlation Energy Calculations Using a Gaussian Basis.

    PubMed

    Wilhelm, Jan; Seewald, Patrick; Del Ben, Mauro; Hutter, Jürg

    2016-12-13

    We present an algorithm for computing the correlation energy in the random phase approximation (RPA) in a Gaussian basis requiring O(N³) operations and O(N²) memory. The method is based on the resolution of the identity (RI) with the overlap metric, a reformulation of RI-RPA in the Gaussian basis, imaginary time, and imaginary frequency integration techniques, and the use of sparse linear algebra. Additional memory reduction without extra computations can be achieved by an iterative scheme that overcomes the memory bottleneck of canonical RPA implementations. We report a massively parallel implementation that is the key for the application to large systems. Finally, cubic-scaling RPA is applied to a thousand water molecules using a correlation-consistent triple-ζ quality basis.

  20. Fast, noise-free memory for photon synchronization at room temperature.

    PubMed

    Finkelstein, Ran; Poem, Eilon; Michel, Ohad; Lahad, Ohr; Firstenberg, Ofer

    2018-01-01

    Future quantum photonic networks require coherent optical memories for synchronizing quantum sources and gates of probabilistic nature. We demonstrate a fast ladder memory (FLAME) mapping the optical field onto the superposition between electronic orbitals of rubidium vapor. Using a ladder-level system of orbital transitions with nearly degenerate frequencies simultaneously enables high bandwidth, low noise, and long memory lifetime. We store and retrieve 1.7-ns-long pulses, containing 0.5 photons on average, and observe short-time external efficiency of 25%, memory lifetime (1/e) of 86 ns, and below 10⁻⁴ added noise photons. Consequently, coupling this memory to a probabilistic source would enhance the on-demand photon generation probability by a factor of 12, the highest number yet reported for a noise-free, room temperature memory. This paves the way toward the controlled production of large quantum states of light from probabilistic photon sources.

  1. Aho-Corasick String Matching on Shared and Distributed Memory Parallel Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tumeo, Antonino; Villa, Oreste; Chavarría-Miranda, Daniel

    String matching is at the core of many critical applications, including network intrusion detection systems, search engines, virus scanners, spam filters, DNA and protein sequencing, and data mining. For all of these applications string matching requires a combination of (sometimes all) the following characteristics: high and/or predictable performance, support for large data sets and flexibility of integration and customization. Many software-based implementations targeting conventional cache-based microprocessors fail to achieve high and predictable performance requirements, while Field-Programmable Gate Array (FPGA) implementations and dedicated hardware solutions fail to support large data sets (dictionary sizes) and are difficult to integrate and customize. The advent of multicore, multithreaded, and GPU-based systems is opening the possibility for software-based solutions to reach very high performance at a sustained rate. This paper compares several software-based implementations of the Aho-Corasick string searching algorithm for high performance systems. We discuss the implementation of the algorithm on several types of shared-memory high-performance architectures (Niagara 2, large x86 SMPs and Cray XMT), distributed memory with homogeneous processing elements (InfiniBand cluster of x86 multicores) and heterogeneous processing elements (InfiniBand cluster of x86 multicores with NVIDIA Tesla C10 GPUs). We describe in detail how each solution achieves the objectives of supporting large dictionaries, sustaining high performance, and enabling customization and flexibility using various data sets.
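
    The Aho-Corasick automaton at the heart of all the compared implementations can be sketched in plain Python: a trie over the patterns plus BFS-computed failure links lets a single pass over the text report every match of every pattern. This is illustrative only; the paper's tuned shared- and distributed-memory versions are far more elaborate:

```python
from collections import deque

def build_automaton(patterns):
    """Build the Aho-Corasick goto/fail/output tables for a pattern set."""
    goto, fail, out = [{}], [0], [set()]
    for pat in patterns:                  # trie construction
        s = 0
        for ch in pat:
            if ch not in goto[s]:
                goto.append({}); fail.append(0); out.append(set())
                goto[s][ch] = len(goto) - 1
            s = goto[s][ch]
        out[s].add(pat)
    queue = deque(goto[0].values())       # depth-1 states keep fail = 0
    while queue:                          # BFS assigns failure links
        s = queue.popleft()
        for ch, t in goto[s].items():
            queue.append(t)
            f = fail[s]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[t] = goto[f].get(ch, 0)
            out[t] |= out[fail[t]]        # inherit matches ending here
    return goto, fail, out

def search(text, automaton):
    """One pass over the text, reporting (start_index, pattern) pairs."""
    goto, fail, out = automaton
    s, hits = 0, []
    for i, ch in enumerate(text):
        while s and ch not in goto[s]:
            s = fail[s]
        s = goto[s].get(ch, 0)
        hits.extend((i - len(p) + 1, p) for p in out[s])
    return hits

hits = search("ushers", build_automaton(["he", "she", "his", "hers"]))
```

    The dictionary-size pressure the paper discusses is visible here: the goto table grows with the total length of all patterns, which is exactly what strains FPGA and cache-based designs.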

  2. Impact of Recent Hardware and Software Trends on High Performance Transaction Processing and Analytics

    NASA Astrophysics Data System (ADS)

    Mohan, C.

    In this paper, I survey briefly some of the recent and emerging trends in hardware and software features which impact high performance transaction processing and data analytics applications. These features include multicore processor chips, ultra large main memories, flash storage, storage class memories, database appliances, field programmable gate arrays, transactional memory, key-value stores, and cloud computing. While some applications, e.g., Web 2.0 ones, were initially built without traditional transaction processing functionality in mind, slowly system architects and designers are beginning to address such previously ignored issues. The availability, analytics and response time requirements of these applications were initially given more importance than ACID transaction semantics and resource consumption characteristics. A project at IBM Almaden is studying the implications of phase change memory on transaction processing, in the context of a key-value store. Bitemporal data management has also become an important requirement, especially for financial applications. Power consumption and heat dissipation properties are also major considerations in the emergence of modern software and hardware architectural features. Considerations relating to ease of configuration, installation, maintenance and monitoring, and improvement of total cost of ownership have resulted in database appliances becoming very popular. The MapReduce paradigm is now quite popular for large scale data analysis, in spite of the major inefficiencies associated with it.

  3. Limited memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method for the parameter estimation on geographically weighted ordinal logistic regression model (GWOLR)

    NASA Astrophysics Data System (ADS)

    Saputro, Dewi Retno Sari; Widyaningsih, Purnami

    2017-08-01

    In general, parameter estimation for the GWOLR model uses the maximum likelihood method, but this produces a system of nonlinear equations that is difficult to solve directly, so an approximate solution is needed. There are two popular numerical approaches: Newton's method and quasi-Newton (QN) methods. Newton's method requires considerable computation time because it evaluates the Jacobian matrix (derivatives) at every step. QN methods overcome this drawback by replacing derivative computation with direct function evaluations. The QN approach approximates the Hessian matrix, for instance with the Davidon-Fletcher-Powell (DFP) formula. The Broyden-Fletcher-Goldfarb-Shanno (BFGS) method is a QN method that shares the DFP formula's property of maintaining a positive definite Hessian approximation. Because the BFGS method requires large memory when executing the program, an algorithm with lower memory usage is needed, namely limited-memory BFGS (L-BFGS). The purpose of this research is to assess the efficiency of the L-BFGS method in the iterative and recursive computation of the Hessian matrix and its inverse for GWOLR parameter estimation. Based on the findings, the BFGS and L-BFGS methods have arithmetic operation counts of O(n²) and O(nm), respectively.
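
    The memory saving of L-BFGS comes from its two-loop recursion, which replaces the dense n×n inverse-Hessian approximation with only the last m update pairs, cutting storage from O(n²) to O(mn). A stdlib-only sketch, using a fixed unit step in place of a proper line search (the test problem is an illustrative quadratic, not the GWOLR likelihood):

```python
def lbfgs(grad, x0, m=5, iters=100):
    """Two-loop-recursion L-BFGS sketch: keep only the last m (s, y)
    pairs instead of a dense inverse-Hessian approximation."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    x, g = list(x0), grad(list(x0))
    S, Y = [], []                        # the m most recent (s, y) pairs
    for _ in range(iters):
        q, alphas = list(g), []
        for s, y in zip(reversed(S), reversed(Y)):   # first loop, newest first
            a = dot(s, q) / dot(y, s)
            alphas.append(a)
            q = [qi - a * yi for qi, yi in zip(q, y)]
        if S:                            # gamma * I as the initial Hessian
            gamma = dot(S[-1], Y[-1]) / dot(Y[-1], Y[-1])
            q = [gamma * qi for qi in q]
        for (s, y), a in zip(zip(S, Y), reversed(alphas)):  # second loop
            b = dot(y, q) / dot(y, s)
            q = [qi + (a - b) * si for qi, si in zip(q, s)]
        x_new = [xi - qi for xi, qi in zip(x, q)]    # unit step (no line search)
        g_new = grad(x_new)
        if max(map(abs, g_new)) < 1e-10:
            return x_new
        S.append([xn - xi for xn, xi in zip(x_new, x)])
        Y.append([gn - gi for gn, gi in zip(g_new, g)])
        if len(S) > m:                   # forget the oldest pair: O(m*n) memory
            S.pop(0); Y.pop(0)
        x, g = x_new, g_new
    return x

# Convex quadratic f(x) = 0.5 * sum((i+1) * x_i^2); gradient is (i+1) * x_i
quad_grad = lambda x: [(i + 1) * xi for i, xi in enumerate(x)]
x_min = lbfgs(quad_grad, [3.0, -2.0, 1.0])
```

    The O(nm) operation count quoted in the abstract is visible directly: each of the two loops does m vector operations of length n.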

  4. Inverse halftoning via robust nonlinear filtering

    NASA Astrophysics Data System (ADS)

    Shen, Mei-Yin; Kuo, C.-C. Jay

    1999-10-01

    A new blind inverse halftoning algorithm based on a nonlinear filtering technique of low computational complexity and low memory requirement is proposed in this research. It is called blind since we do not require the knowledge of the halftone kernel. The proposed scheme performs nonlinear filtering in conjunction with edge enhancement to improve the quality of an inverse halftoned image. Distinct features of the proposed approach include: efficiently smoothing halftone patterns in large homogeneous areas, additional edge enhancement capability to recover the edge quality and an excellent PSNR performance with only local integer operations and a small memory buffer.
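
    A median filter is one classic low-memory nonlinear filter of the kind the record describes; whether the paper uses exactly this filter is not stated, and its edge-enhancement stage is omitted here. The point of the sketch is the small footprint: only a 3-row window of the image is ever needed:

```python
import statistics

def median_filter3(img):
    """3x3 median filter over a 2D integer image (borders left as-is).
    Nonlinear smoothing like this suppresses halftone dot patterns in
    homogeneous areas using only local integer operations."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            neigh = [img[y + dy][x + dx]
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = statistics.median(neigh)
    return out

# A tiny checkerboard halftone patch: the median picks the local majority
halftone = [[0, 255, 0],
            [255, 0, 255],
            [0, 255, 0]]
smooth = median_filter3(halftone)
```

    Because the median of a 3×3 window is decided by majority, isolated halftone dots vanish while genuine edges (where the majority flips) survive better than under linear averaging.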

  5. Distributed Memory Parallel Computing with SEAWAT

    NASA Astrophysics Data System (ADS)

    Verkaik, J.; Huizer, S.; van Engelen, J.; Oude Essink, G.; Ram, R.; Vuik, K.

    2017-12-01

    Fresh groundwater reserves in coastal aquifers are threatened by sea-level rise, extreme weather conditions, increasing urbanization and associated groundwater extraction rates. To counteract these threats, accurate high-resolution numerical models are required to optimize the management of these precious reserves. The major model drawbacks are long run times and large memory requirements, limiting the predictive power of these models. Distributed memory parallel computing is an efficient technique for reducing run times and memory requirements, whereby the problem is divided over multiple processor cores. A new Parallel Krylov Solver (PKS) for SEAWAT is presented. PKS has recently been applied to MODFLOW and includes Conjugate Gradient (CG) and Biconjugate Gradient Stabilized (BiCGSTAB) linear accelerators. Both accelerators are preconditioned by an overlapping additive Schwarz preconditioner such that: a) subdomains are partitioned using Recursive Coordinate Bisection (RCB) load balancing, b) each subdomain uses local memory only and communicates with other subdomains by Message Passing Interface (MPI) within the linear accelerator, and c) the solver is fully integrated in SEAWAT. Within SEAWAT, the PKS-CG solver replaces the Preconditioned Conjugate Gradient (PCG) solver for the variable-density groundwater flow equation, and the PKS-BiCGSTAB solver replaces the Generalized Conjugate Gradient (GCG) solver for the advection-diffusion equation. PKS supports the third-order Total Variation Diminishing (TVD) scheme for computing advection. Benchmarks were performed on the Dutch national supercomputer (https://userinfo.surfsara.nl/systems/cartesius) using up to 128 cores, for a synthetic 3D Henry model (100 million cells) and the real-life Sand Engine model (∼10 million cells). The Sand Engine model was used to investigate the potential effect of the long-term morphological evolution of a large sand replenishment and climate change on fresh groundwater resources. Speed-ups of up to 40 were obtained with the new PKS solver.

  6. Memory Benchmarks for SMP-Based High Performance Parallel Computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoo, A B; de Supinski, B; Mueller, F

    2001-11-20

    As the speed gap between CPU and main memory continues to grow, memory accesses increasingly dominate the performance of many applications. The problem is particularly acute for symmetric multiprocessor (SMP) systems, where the shared memory may be accessed concurrently by a group of threads running on separate CPUs. Unfortunately, several key issues governing memory system performance in current systems are not well understood. Complex interactions between the levels of the memory hierarchy, buses or switches, DRAM back-ends, system software, and application access patterns can make it difficult to pinpoint bottlenecks and determine appropriate optimizations, and the situation is even more complex for SMP systems. To partially address this problem, we formulated a set of multi-threaded microbenchmarks for characterizing and measuring the performance of the underlying memory system in SMP-based high-performance computers. We report our use of these microbenchmarks on two important SMP-based machines. This paper has four primary contributions. First, we introduce a microbenchmark suite to systematically assess and compare the performance of different levels in SMP memory hierarchies. Second, we present a new tool based on hardware performance monitors to determine a wide array of memory system characteristics, such as cache sizes, quickly and easily; by using this tool, memory performance studies can be targeted to the full spectrum of performance regimes with many fewer data points than is otherwise required. Third, we present experimental results indicating that the performance of applications with large memory footprints remains largely constrained by memory. Fourth, we demonstrate that thread-level parallelism further degrades memory performance, even for the latest SMPs with hardware prefetching and switch-based memory interconnects.

  7. SIproc: an open-source biomedical data processing platform for large hyperspectral images.

    PubMed

    Berisha, Sebastian; Chang, Shengyuan; Saki, Sam; Daeinejad, Davar; He, Ziqi; Mankar, Rupali; Mayerich, David

    2017-04-10

    There has recently been significant interest within the vibrational spectroscopy community to apply quantitative spectroscopic imaging techniques to histology and clinical diagnosis. However, many of the proposed methods require collecting spectroscopic images that have a similar region size and resolution to the corresponding histological images. Since spectroscopic images contain significantly more spectral samples than traditional histology, the resulting data sets can approach hundreds of gigabytes to terabytes in size. This makes them difficult to store and process, and the tools available to researchers for handling large spectroscopic data sets are limited. Fundamental mathematical tools, such as MATLAB, Octave, and SciPy, are extremely powerful but require that the data be stored in fast memory. This memory limitation becomes impractical for even modestly sized histological images, which can be hundreds of gigabytes in size. In this paper, we propose an open-source toolkit designed to perform out-of-core processing of hyperspectral images. By taking advantage of graphical processing unit (GPU) computing combined with adaptive data streaming, our software alleviates common workstation memory limitations while achieving better performance than existing applications.
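
    The out-of-core pattern the record describes, streaming a file through a small fixed window so peak memory stays flat regardless of file size, can be sketched with the standard library alone. SIproc adds GPU computation on top of the streaming; only the memory side is shown, and the file layout below is illustrative:

```python
import mmap
import os
import struct
import tempfile

def file_max(path, chunk_vals=4096):
    """Reduce a large binary file of float32 samples while touching only
    one small window at a time, so peak memory is O(chunk), not O(file)."""
    item = struct.calcsize("f")
    best = float("-inf")
    with open(path, "rb") as f:
        with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
            for off in range(0, len(mm), chunk_vals * item):
                chunk = mm[off:off + chunk_vals * item]   # one window only
                vals = struct.unpack(f"{len(chunk) // item}f", chunk)
                best = max(best, max(vals))
    return best

# Write a synthetic data file far larger than the processing window
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(struct.pack("10000f", *(float(i % 997) for i in range(10000))))
peak = file_max(tmp.name)
os.unlink(tmp.name)
```

    Memory-mapping lets the operating system page data in and out on demand; the same loop structure works whether the file is kilobytes or terabytes.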

  8. Using a Large-scale Neural Model of Cortical Object Processing to Investigate the Neural Substrate for Managing Multiple Items in Short-term Memory.

    PubMed

    Liu, Qin; Ulloa, Antonio; Horwitz, Barry

    2017-11-01

    Many cognitive and computational models have been proposed to help understand working memory. In this article, we present a simulation study of cortical processing of visual objects during several working memory tasks using an extended version of a previously constructed large-scale neural model [Tagamets, M. A., & Horwitz, B. Integrating electrophysiological and anatomical experimental data to create a large-scale model that simulates a delayed match-to-sample human brain imaging study. Cerebral Cortex, 8, 310-320, 1998]. The original model consisted of arrays of Wilson-Cowan-type neuronal populations representing primary and secondary visual cortices, inferotemporal (IT) cortex, and pFC. We added a module representing entorhinal cortex, which functions as a gating module. We successfully implemented multiple working memory tasks using the same model and produced neuronal patterns in visual cortex, IT cortex, and pFC that match experimental findings. These working memory tasks can include distractor stimuli or can require that multiple items be retained in mind during a delay period (Sternberg's task). Besides electrophysiological and behavioral data, we also generated fMRI BOLD time series from our simulation. Our results support the involvement of IT cortex in working memory maintenance and suggest the cortical architecture underlying the neural mechanisms mediating particular working memory tasks. Furthermore, we noticed that, during simulations of memorizing a list of objects, the first and last items in the sequence were recalled best, consistent with the classic primacy and recency effects and pointing to a possible neural mechanism behind them.

  9. Fast maximum intensity projections of large medical data sets by exploiting hierarchical memory architectures.

    PubMed

    Kiefer, Gundolf; Lehmann, Helko; Weese, Jürgen

    2006-04-01

    Maximum intensity projections (MIPs) are an important visualization technique for angiographic data sets. Efficient data inspection requires frame rates of at least five frames per second at preserved image quality. Despite the advances in computer technology, this task remains a challenge. On the one hand, the sizes of computed tomography and magnetic resonance images are increasing rapidly. On the other hand, rendering algorithms do not automatically benefit from the advances in processor technology, especially for large data sets. This is due to the faster evolving processing power and the slower evolving memory access speed, which is bridged by hierarchical cache memory architectures. In this paper, we investigate memory access optimization methods and use them for generating MIPs on general-purpose central processing units (CPUs) and graphics processing units (GPUs), respectively. These methods can work on any level of the memory hierarchy, and we show that properly combined methods can optimize memory access on multiple levels of the hierarchy at the same time. We present performance measurements to compare different algorithm variants and illustrate the influence of the respective techniques. On current hardware, the efficient handling of the memory hierarchy for CPUs improves the rendering performance by a factor of 3 to 4. On GPUs, we observed that the effect is even larger, especially for large data sets. The methods can easily be adjusted to different hardware specifics, although their impact can vary considerably. They can also be used for other rendering techniques than MIPs, and their use for more general image processing task could be investigated in the future.
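
    A maximum intensity projection is simply an elementwise maximum over the slices of a volume; traversing the data in storage order keeps memory accesses sequential, which is the starting point for the cache-hierarchy optimizations the paper studies. A minimal sketch (the paper's CPU/GPU variants add tiling for specific cache levels, not shown here):

```python
def max_intensity_projection(volume):
    """MIP along the z axis of a z-y-x volume stored as nested lists.
    Scanning slice by slice in storage order gives sequential memory
    access, the basic cache-friendly traversal pattern."""
    depth, rows, cols = len(volume), len(volume[0]), len(volume[0][0])
    mip = [[float("-inf")] * cols for _ in range(rows)]
    for z in range(depth):              # one sequential pass per slice
        sl = volume[z]
        for y in range(rows):
            row, out = sl[y], mip[y]
            for x in range(cols):
                if row[x] > out[x]:
                    out[x] = row[x]
    return mip

# A tiny 2x2x2 "angiographic" volume (values illustrative)
vol = [[[1, 5], [2, 0]],
       [[4, 3], [9, 1]]]
mip = max_intensity_projection(vol)
```

    Reversing the loop order (x outermost) would compute the same result while striding through memory, which is exactly the access pattern whose cost the paper quantifies.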

  10. A Node Linkage Approach for Sequential Pattern Mining

    PubMed Central

    Navarro, Osvaldo; Cumplido, René; Villaseñor-Pineda, Luis; Feregrino-Uribe, Claudia; Carrasco-Ochoa, Jesús Ariel

    2014-01-01

    Sequential Pattern Mining is a widely addressed problem in data mining, with applications such as analyzing Web usage, examining purchase behavior, and text mining, among others. Nevertheless, with the dramatic increase in data volume, current approaches prove inefficient when dealing with large input datasets, large numbers of distinct symbols, and low minimum supports. In this paper, we propose a new sequential pattern mining algorithm, which follows a pattern-growth scheme to discover sequential patterns. Unlike most pattern-growth algorithms, our approach does not build a data structure to represent the input dataset, but instead accesses the required sequences through pseudo-projection databases, achieving better runtime and reducing memory requirements. Our algorithm traverses the search space in a depth-first fashion and only preserves in memory a pattern node linkage and the pseudo-projections required for the branch being explored at the time. Experimental results show that our new approach, the Node Linkage Depth-First Traversal algorithm (NLDFT), has better performance and scalability in comparison with state-of-the-art algorithms. PMID:24933123
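The pseudo-projection idea (representing each projected database as (sequence id, offset) pairs pointing into the original sequences, rather than copied suffixes) can be sketched as follows. The function name and data layout are illustrative, not taken from NLDFT:

```python
def pseudo_project(db, projections, symbol):
    # Extend each (sequence_id, offset) pointer past the next occurrence
    # of `symbol`; no suffix copies are made, so memory stays proportional
    # to the number of sequences alive in the current search branch.
    new_proj = []
    for sid, start in projections:
        pos = db[sid].find(symbol, start)
        if pos != -1:
            new_proj.append((sid, pos + 1))
    return new_proj
```

The support of the pattern grown so far is simply the length of the projection list, so infrequent branches can be pruned before any further memory is spent.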

  11. Think globally and solve locally: secondary memory-based network learning for automated multi-species function prediction

    PubMed Central

    2014-01-01

    Background Network-based learning algorithms for automated function prediction (AFP) are negatively affected by the limited coverage of experimental data and the limited number of a priori known functional annotations. As a consequence, their application to model organisms is often restricted to well characterized biological processes and pathways, and their effectiveness with poorly annotated species is relatively limited. A possible solution to this problem might consist in the construction of large networks including multiple species, but this in turn poses challenging computational problems, due to the scalability limitations of existing algorithms and the main-memory requirements induced by the construction of such networks. Distributed computation or the use of large computers could in principle address these issues, but raises further algorithmic problems and requires resources not available on simple off-the-shelf computers. Results We propose a novel framework for scalable network-based learning of multi-species protein functions based on both a local implementation of existing algorithms and the adoption of innovative technologies: we solve "locally" the AFP problem, by designing "vertex-centric" implementations of network-based algorithms, but we do not give up thinking "globally" by exploiting the overall topology of the network. This is made possible by the adoption of secondary memory-based technologies that allow the efficient use of the large storage available on disks, thus overcoming the main-memory limitations of modern off-the-shelf computers. This approach has been applied to the analysis of a large multi-species network including more than 300 species of bacteria and to a network with more than 200,000 proteins belonging to 13 Eukaryotic species. To our knowledge this is the first work where secondary-memory based network analysis has been applied to multi-species function prediction using biological networks with hundreds of thousands of proteins. 
Conclusions The combination of these algorithmic and technological approaches makes the analysis of large multi-species networks feasible on ordinary computers with limited speed and primary memory, and could eventually enable the analysis of huge networks (e.g., the whole proteomes available in SwissProt) on well-equipped stand-alone machines. PMID:24843788

  12. Dissecting neural pathways for forgetting in Drosophila olfactory aversive memory

    PubMed Central

    Shuai, Yichun; Hirokawa, Areekul; Ai, Yulian; Zhang, Min; Li, Wanhe; Zhong, Yi

    2015-01-01

    Recent studies have identified molecular pathways driving forgetting and supported the notion that forgetting is a biologically active process. The circuit mechanisms of forgetting, however, remain largely unknown. Here we report two sets of Drosophila neurons that account for the rapid forgetting of early olfactory aversive memory. We show that inactivating these neurons inhibits memory decay without altering learning, whereas activating them promotes forgetting. These neurons, including a cluster of dopaminergic neurons (PAM-β′1) and a pair of glutamatergic neurons (MBON-γ4>γ1γ2), terminate in distinct subdomains in the mushroom body and represent parallel neural pathways for regulating forgetting. Interestingly, although activity of these neurons is required for memory decay over time, they are not required for acute forgetting during reversal learning. Our results thus not only establish the presence of multiple neural pathways for forgetting in Drosophila but also suggest the existence of diverse circuit mechanisms of forgetting in different contexts. PMID:26627257

  13. Bundle block adjustment of large-scale remote sensing data with Block-based Sparse Matrix Compression combined with Preconditioned Conjugate Gradient

    NASA Astrophysics Data System (ADS)

    Zheng, Maoteng; Zhang, Yongjun; Zhou, Shunping; Zhu, Junfeng; Xiong, Xiaodong

    2016-07-01

    In recent years, new platforms and sensors have become available in photogrammetry, remote sensing, and computer vision, such as Unmanned Aerial Vehicles (UAVs), oblique camera systems, consumer digital cameras, and even mobile phone cameras. Images collected by all these kinds of sensors can serve as remote sensing data sources. These sensors can acquire large-scale remote sensing data consisting of a great number of images. Bundle block adjustment of large-scale data with conventional algorithms is very time- and memory-consuming, owing to the very large normal matrix arising from such data. In this paper, an efficient Block-based Sparse Matrix Compression (BSMC) method combined with the Preconditioned Conjugate Gradient (PCG) algorithm is used to develop a stable and efficient bundle block adjustment system for large-scale remote sensing data. The main contribution of this work is the BSMC-based PCG algorithm, which is more efficient in time and memory than the traditional algorithm without compromising accuracy. Eight real datasets were used to test the proposed method. Preliminary results show that the BSMC method can efficiently decrease the time and memory requirements for large-scale data.
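The PCG component can be sketched independently of the BSMC storage format: the normal matrix enters only through matrix-vector products, so a compressed or sparse representation can supply it without the matrix ever being formed densely. The Jacobi (diagonal) preconditioner in the usage note is an assumption for illustration:

```python
import numpy as np

def pcg(A_mul, b, M_inv, tol=1e-10, max_iter=1000):
    # Preconditioned conjugate gradient for SPD systems A x = b, where A
    # is accessed only through A_mul(v); M_inv applies the preconditioner.
    x = np.zeros_like(b)
    r = b - A_mul(x)
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A_mul(p)
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```

For example, with a dense matrix `A` one could call `pcg(lambda v: A @ v, b, lambda r: r / np.diag(A))`; a BSMC-style backend would replace the first lambda with a product over compressed blocks.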

  14. Large capacity temporary visual memory.

    PubMed

    Endress, Ansgar D; Potter, Mary C

    2014-04-01

    Visual working memory (WM) capacity is thought to be limited to 3 or 4 items. However, many cognitive activities seem to require larger temporary memory stores. Here, we provide evidence for a temporary memory store with much larger capacity than past WM capacity estimates. Further, based on previous WM research, we show that a single factor--proactive interference--is sufficient to bring capacity estimates down to the range of previous WM capacity estimates. Participants saw a rapid serial visual presentation of 5-21 pictures of familiar objects or words presented at rates of 4/s or 8/s, respectively, and thus too fast for strategies such as rehearsal. Recognition memory was tested with a single probe item. When new items were used on all trials, no fixed memory capacities were observed, with estimates of up to 9.1 retained pictures for 21-item lists, and up to 30.0 retained pictures for 100-item lists, and no clear upper bound to how many items could be retained. Further, memory items were not stored in a temporally stable form of memory but decayed almost completely after a few minutes. In contrast, when, as in most WM experiments, a small set of items was reused across all trials, thus creating proactive interference among items, capacity remained in the range reported in previous WM experiments. These results show that humans have a large-capacity temporary memory store in the absence of proactive interference, and raise the question of whether temporary memory in everyday cognitive processing is severely limited, as in WM experiments, or has the much larger capacity found in the present experiments.

  15. Practical Unitary Simulator for Non-Markovian Complex Processes

    NASA Astrophysics Data System (ADS)

    Binder, Felix C.; Thompson, Jayne; Gu, Mile

    2018-06-01

    Stochastic processes are as ubiquitous throughout the quantitative sciences as they are notorious for being difficult to simulate and predict. In this Letter, we propose a unitary quantum simulator for discrete-time stochastic processes which requires less internal memory than any classical analogue throughout the simulation. The simulator's internal memory requirements equal those of the best previous quantum models. However, in contrast to previous models, it only requires a (small) finite-dimensional Hilbert space. Moreover, since the simulator operates unitarily throughout, it avoids any unnecessary information loss. We provide a stepwise construction for simulators for a large class of stochastic processes hence directly opening the possibility for experimental implementations with current platforms for quantum computation. The results are illustrated for an example process.

  16. Functional cross‐hemispheric shift between object‐place paired associate memory and spatial memory in the human hippocampus

    PubMed Central

    Lee, Choong‐Hee; Ryu, Jungwon; Lee, Sang‐Hun; Kim, Hakjin

    2016-01-01

    ABSTRACT The hippocampus plays critical roles in both object‐based event memory and spatial navigation, but it is largely unknown whether the left and right hippocampi play functionally equivalent roles in these cognitive domains. To examine the hemispheric symmetry of human hippocampal functions, we used an fMRI scanner to measure BOLD activity while subjects performed tasks requiring both object‐based event memory and spatial navigation in a virtual environment. Specifically, the subjects were required to form object‐place paired associate memory after visiting four buildings containing discrete objects in a virtual plus maze. The four buildings were visually identical, and the subjects used distal visual cues (i.e., scenes) to differentiate the buildings. During testing, the subjects were required to identify one of the buildings when cued with a previously associated object, and when shifted to a random place, the subject was expected to navigate to the previously chosen building. We observed that the BOLD activity foci changed from the left hippocampus to the right hippocampus as task demand changed from identifying a previously seen object (object‐cueing period) to searching for its paired‐associate place (object‐cued place recognition period). Furthermore, the efficient retrieval of object‐place paired associate memory (object‐cued place recognition period) was correlated with the BOLD response of the left hippocampus, whereas the efficient retrieval of relatively pure spatial memory (spatial memory period) was correlated with the right hippocampal BOLD response. These findings suggest that the left and right hippocampi in humans might process qualitatively different information for remembering episodic events in space. © 2016 The Authors Hippocampus Published by Wiley Periodicals, Inc. PMID:27009679

  17. Electronic shift register memory based on molecular electron-transfer reactions

    NASA Technical Reports Server (NTRS)

    Hopfield, J. J.; Onuchic, Jose Nelson; Beratan, David N.

    1989-01-01

    The design of a shift register memory at the molecular level is described in detail. The memory elements are based on a chain of electron-transfer molecules incorporated on a very large scale integrated (VLSI) substrate, and the information is shifted by photoinduced electron-transfer reactions. The design requirements for such a system are discussed, and several realistic strategies for synthesizing these systems are presented. The immediate advantage of such a hybrid molecular/VLSI device would arise from the possible information storage density. The prospect of considerable savings of energy per bit processed also exists. This molecular shift register memory element design solves the conceptual problems associated with integrating molecular size components with larger (micron) size features on a chip.

  18. A bio-inspired memory model for structural health monitoring

    NASA Astrophysics Data System (ADS)

    Zheng, Wei; Zhu, Yong

    2009-04-01

    Long-term structural health monitoring (SHM) systems need intelligent management of the monitoring data. By analogy with the way the human brain processes memories, we present a bio-inspired memory model (BIMM) that does not require prior knowledge of the structure parameters. The model contains three time-domain areas: a sensory memory area, a short-term memory area and a long-term memory area. First, the initial parameters of the structural state are specified to establish safety criteria. Then the large amount of monitoring data that falls within the safety limits is filtered while the data outside the safety limits are captured instantly in the sensory memory area. Second, disturbance signals are distinguished from danger signals in the short-term memory area. Finally, the stable data of the structural balance state are preserved in the long-term memory area. A strategy for priority scheduling via fuzzy c-means for the proposed model is then introduced. An experiment on bridge tower deformation demonstrates that the proposed model can be applied for real-time acquisition, limited-space storage and intelligent mining of the monitoring data in a long-term SHM system.

  19. Protein sequence comparison based on K-string dictionary.

    PubMed

    Yu, Chenglong; He, Rong L; Yau, Stephen S-T

    2013-10-25

    The current K-string-based protein sequence comparisons require large amounts of computer memory because the dimension of the protein vector representation grows exponentially with K. In this paper, we propose a novel concept, the "K-string dictionary", to solve this high-dimensional problem. It allows us to use a much lower dimensional K-string-based frequency or probability vector to represent a protein, and thus significantly reduce the computer memory requirements for their implementation. Furthermore, based on this new concept, we use Singular Value Decomposition to analyze real protein datasets, and the improved protein vector representation allows us to obtain accurate gene trees. © 2013.
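The core idea, indexing only the K-strings that actually occur in the dataset rather than all 20^K possibilities, can be sketched as follows. The function names and the normalization are illustrative, not the authors' exact construction:

```python
from collections import Counter

def build_dictionary(seqs, k):
    # Collect only the K-strings observed in the dataset; this shared
    # dictionary replaces the full 20**k-dimensional index space.
    return sorted({s[i:i + k] for s in seqs for i in range(len(s) - k + 1)})

def kstring_vector(seq, k, dictionary):
    # Frequency vector over the dictionary: one entry per observed
    # K-string, giving a much lower-dimensional protein representation.
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = max(sum(counts.values()), 1)
    return [counts[s] / total for s in dictionary]
```

Stacking these vectors for a protein family yields the matrix to which Singular Value Decomposition can then be applied.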

  20. Towards a Quantum Memory assisted MDI-QKD node

    NASA Astrophysics Data System (ADS)

    Namazi, Mehdi; Vallone, Giuseppe; Jordaan, Bertus; Goham, Connor; Shahrokhshahi, Reihaneh; Villoresi, Paolo; Figueroa, Eden

    2017-04-01

    The creation of a large quantum network that permits the communication of quantum states and the secure distribution of cryptographic keys requires multiple operational quantum memories. In this work we present our progress towards building a prototypical quantum network that performs the memory-assisted measurement device independent QKD protocol. Currently our network combines the quantum part of the BB84 protocol with room-temperature quantum memory operation, while still maintaining relevant quantum bit error rates for single-photon level operation. We will also discuss our efforts to use a network of two room temperature quantum memories, receiving, storing and transforming randomly polarized photons in order to realize Bell state measurements. The work was supported by the US-Navy Office of Naval Research, Grant Number N00141410801, the National Science Foundation, Grant Number PHY-1404398 and the Simons Foundation, Grant Number SBF241180.

  1. Improving Unipolar Resistive Switching Uniformity with Cone-Shaped Conducting Filaments and Its Logic-In-Memory Application.

    PubMed

    Gao, Shuang; Liu, Gang; Chen, Qilai; Xue, Wuhong; Yang, Huali; Shang, Jie; Chen, Bin; Zeng, Fei; Song, Cheng; Pan, Feng; Li, Run-Wei

    2018-02-21

    Resistive random access memory (RRAM) with inherent logic-in-memory capability exhibits great potential to construct beyond von-Neumann computers. Particularly, unipolar RRAM is more promising because its single polarity operation enables large-scale crossbar logic-in-memory circuits with the highest integration density and simpler peripheral control circuits. However, unipolar RRAM usually exhibits poor switching uniformity because of random activation of conducting filaments and consequently cannot meet the strict uniformity requirement for logic-in-memory application. In this contribution, a new methodology that constructs cone-shaped conducting filaments by using a chemically active metal cathode is proposed to improve unipolar switching uniformity. Such a metal cathode reacts spontaneously with the oxide switching layer to form an interfacial layer, which together with the metal cathode itself can act as a load resistor to prevent the overgrowth of conducting filaments and thus make them more cone-like. In this way, the rupture of conducting filaments can be strictly limited to the tip region, making their residual parts favorable locations for subsequent filament growth and thus suppressing their random regeneration. As such, a novel "one switch + one unipolar RRAM cell" hybrid structure is capable of realizing all 16 Boolean logic functions for large-scale logic-in-memory circuits.

  2. Optimizing distance-based methods for large data sets

    NASA Astrophysics Data System (ADS)

    Scholl, Tobias; Brenner, Thomas

    2015-10-01

    Distance-based methods for measuring the spatial concentration of industries have gained increasing popularity in the spatial econometrics community. However, a limiting factor for using these methods is their computational complexity, since both their memory requirements and running times are O(n^2). In this paper, we present an algorithm with constant memory requirements and a shorter running time, enabling distance-based methods to deal with large data sets. We discuss three recent distance-based methods in spatial econometrics: the D&O-Index by Duranton and Overman (Rev Econ Stud 72(4):1077-1106, 2005), the M-function by Marcon and Puech (J Econ Geogr 10(5):745-762, 2010) and the Cluster-Index by Scholl and Brenner (Reg Stud (ahead-of-print):1-15, 2014). Finally, we present an alternative calculation for the latter index that allows the use of data sets with millions of firms.
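One way to see how constant memory is possible: accumulate pairwise distances directly into a fixed number of bins rather than materializing the O(n^2) distance matrix. This sketch shows only the memory side; the paper's algorithm also shortens the running time, which this naive double loop (still O(n^2) in time) does not:

```python
import math

def distance_histogram(points, bin_width, n_bins):
    # Stream over pairs and count distances into fixed bins: memory is
    # O(n_bins) regardless of n, since no distance matrix is stored.
    hist = [0] * n_bins
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            b = int(math.dist(points[i], points[j]) // bin_width)
            if b < n_bins:
                hist[b] += 1
    return hist
```

Kernel-smoothed statistics such as the D&O-Index can then be evaluated from the binned counts instead of the raw pair list.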

  3. On initial Brain Activity Mapping of episodic and semantic memory code in the hippocampus.

    PubMed

    Tsien, Joe Z; Li, Meng; Osan, Remus; Chen, Guifen; Lin, Longian; Wang, Phillip Lei; Frey, Sabine; Frey, Julietta; Zhu, Dajiang; Liu, Tianming; Zhao, Fang; Kuang, Hui

    2013-10-01

    It has been widely recognized that understanding the brain code will require large-scale recording and decoding of brain activity patterns. In 2007, with support from the Georgia Research Alliance, we launched the Brain Decoding Project Initiative, whose basic idea is now similarly advocated by the BRAIN project and the Brain Activity Map proposal. As planning for the BRAIN project is currently underway, we share our insights and lessons from our efforts in mapping real-time episodic memory traces in the hippocampus of freely behaving mice. We show that appropriate large-scale statistical methods are essential to decipher and measure real-time memory traces and neural dynamics. We also provide an example of how carefully designed, sometimes thinking-outside-the-box behavioral paradigms can be highly instrumental in unraveling the memory-coding cell-assembly organizing principle in the hippocampus. Our observations to date have led us to conclude that the specific-to-general categorical and combinatorial feature-coding cell-assembly mechanism represents an emergent property that enables neural networks to generate and organize not only episodic memory, but also semantic knowledge and imagination. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.

  4. On Initial Brain Activity Mapping of Associative Memory Code in the Hippocampus

    PubMed Central

    Tsien, Joe Z.; Li, Meng; Osan, Remus; Chen, Guifen; Lin, Longian; Lei Wang, Phillip; Frey, Sabine; Frey, Julietta; Zhu, Dajiang; Liu, Tianming; Zhao, Fang; Kuang, Hui

    2013-01-01

    It has been widely recognized that understanding the brain code will require large-scale recording and decoding of brain activity patterns. In 2007, with support from the Georgia Research Alliance, we launched the Brain Decoding Project Initiative, whose basic idea is now similarly advocated by the BRAIN project and the Brain Activity Map proposal. As planning for the BRAIN project is currently underway, we share our insights and lessons from our efforts in mapping real-time episodic memory traces in the hippocampus of freely behaving mice. We show that appropriate large-scale statistical methods are essential to decipher and measure real-time memory traces and neural dynamics. We also provide an example of how carefully designed, sometimes thinking-outside-the-box behavioral paradigms can be highly instrumental in unraveling the memory-coding cell-assembly organizing principle in the hippocampus. Our observations to date have led us to conclude that the specific-to-general categorical and combinatorial feature-coding cell-assembly mechanism represents an emergent property that enables neural networks to generate and organize not only episodic memory, but also semantic knowledge and imagination. PMID:23838072

  5. Dissociable loss of the representations in visual short-term memory.

    PubMed

    Li, Jie

    2016-01-01

    The present study investigated how information in visual short-term memory (VSTM) is lost. Participants memorized four items, one of which was later given higher priority by a retro-cue. Participants were then required to detect a possible change, either large or small, to one of the items. The results showed that detection performance for small changes to the uncued items was poorer than for the cued item, whereas large changes to any of the four memory items could be detected perfectly, indicating that the uncued representations lost some detailed information yet still retained some basic features in VSTM. The present study suggests that after being encoded into VSTM, information is not lost in an object-based manner; rather, the features of an item remain dissociable, so that they can be lost separately.

  6. Large Declarative Memories in ACT-R

    DTIC Science & Technology

    2009-12-01

    ... containing the persistent DM of interest; PDM-user: username required by the PostgreSQL DBMS for DB access; PDM-passwd: password required by the PostgreSQL DBMS ... "model-v5-DM" :pdm-user "Scott" :pdm-passwd "Open_Seseme" :pdm-resets-clear-db T :pdm-add-dm-serializes T :pdm-active T ... Figure 1: Activating and ...

  7. Gestures make memories, but what kind? Patients with impaired procedural memory display disruptions in gesture production and comprehension

    PubMed Central

    Klooster, Nathaniel B.; Cook, Susan W.; Uc, Ergun Y.; Duff, Melissa C.

    2015-01-01

    Hand gesture, a ubiquitous feature of human interaction, facilitates communication. Gesture also facilitates new learning, benefiting speakers and listeners alike. Thus, gestures must impact cognition beyond simply supporting the expression of already-formed ideas. However, the cognitive and neural mechanisms supporting the effects of gesture on learning and memory are largely unknown. We hypothesized that gesture's ability to drive new learning is supported by procedural memory and that procedural memory deficits will disrupt gesture production and comprehension. We tested this proposal in patients with intact declarative memory, but impaired procedural memory as a consequence of Parkinson's disease (PD), and healthy comparison participants with intact declarative and procedural memory. In separate experiments, we manipulated the gestures participants saw and produced in a Tower of Hanoi (TOH) paradigm. In the first experiment, participants solved the task either on a physical board, requiring high arching movements to manipulate the discs from peg to peg, or on a computer, requiring only flat, sideways movements of the mouse. When explaining the task, healthy participants with intact procedural memory displayed evidence of their previous experience in their gestures, producing higher, more arching hand gestures after solving on a physical board, and smaller, flatter gestures after solving on a computer. In the second experiment, healthy participants who saw high arching hand gestures in an explanation prior to solving the task subsequently moved the mouse with significantly higher curvature than those who saw smaller, flatter gestures prior to solving the task. These patterns were absent in both gesture production and comprehension experiments in patients with procedural memory impairment. These findings suggest that the procedural memory system supports the ability of gesture to drive new learning. PMID:25628556

  8. Naive T-cell receptor transgenic T cells help memory B cells produce antibody

    PubMed Central

    Duffy, Darragh; Yang, Chun-Ping; Heath, Andrew; Garside, Paul; Bell, Eric B

    2006-01-01

    Injection of the same antigen following primary immunization induces a classic secondary response characterized by a large quantity of high-affinity antibody of an immunoglobulin G class produced more rapidly than in the initial response – the products of memory B cells are qualitatively distinct from that of the original naive B lymphocytes. Very little is known of the help provided by the CD4 T cells that stimulate memory B cells. Using antigen-specific T-cell receptor transgenic CD4 T cells (DO11.10) as a source of help, we found that naive transgenic T cells stimulated memory B cells almost as well (in terms of quantity and speed) as transgenic T cells that had been recently primed. There was a direct correlation between serum antibody levels and the number of naive transgenic T cells transferred. Using T cells from transgenic interleukin-2-deficient mice we showed that interleukin-2 was not required for a secondary response, although it was necessary for a primary response. The results suggested that the signals delivered by CD4 T cells and required by memory B cells for their activation were common to both antigen-primed and naive CD4 T cells. PMID:17067314

  9. Levels of word processing and incidental memory: dissociable mechanisms in the temporal lobe.

    PubMed

    Castillo, E M; Simos, P G; Davis, R N; Breier, J; Fitzgerald, M E; Papanicolaou, A C

    2001-11-16

    Word recall is facilitated when deep (e.g. semantic) processing is applied during encoding. This fact raises the question of the existence of specific brain mechanisms supporting different levels of information processing that can modulate incidental memory performance. In this study we obtained spatiotemporal brain activation profiles, using magnetic source imaging, from 10 adult volunteers as they performed a shallow (phonological) processing task and a deep (semantic) processing task. When phonological analysis of the word stimuli into their constituent phonemes was required, activation was largely restricted to the posterior portion of the left superior temporal gyrus (area 22). Conversely, when access to lexical/semantic representations was required, activation was found predominantly in the left middle temporal gyrus and medial temporal cortex. The differential engagement of each mechanism during word encoding was associated with dramatic changes in subsequent incidental memory performance.

  10. Exploration of depth modeling mode one lossless wedgelets storage strategies for 3D-high efficiency video coding

    NASA Astrophysics Data System (ADS)

    Sanchez, Gustavo; Marcon, César; Agostini, Luciano Volcan

    2018-01-01

    3D-High Efficiency Video Coding (3D-HEVC) introduced tools to obtain higher efficiency in 3-D video coding, most of them related to depth map coding. Among these tools, depth modeling mode 1 (DMM-1) focuses on better encoding the edge regions of depth maps. The large memory required to store all wedgelet patterns is one of the bottlenecks in DMM-1 hardware design for both the encoder and the decoder, since many patterns must be stored. Three algorithms to reduce the DMM-1 memory requirements, and a hardware design targeting the most efficient of these algorithms, are presented. Experimental results demonstrate that the proposed solutions surpass related works, reducing the wedgelet memory by up to 78.8% without degrading encoding efficiency. Synthesis results demonstrate that the proposed algorithm reduces power dissipation by almost 75% compared to the standard approach.

  11. Influence of affective valence on working memory processes.

    PubMed

    Gotoh, Fumiko

    2008-02-01

    Recent research has revealed widespread effects of emotion on cognitive function and memory. However, the influence of affective valence on working or short-term memory remains largely unexplored. In two experiments, the present study examined the predictions that negative words would capture attention, that attention would be difficult to disengage from such negative words, and that the cost of attention switching would increase the time required to update information in working memory. Participants switched between two concurrent working memory tasks: word recognition and a working memory digit updating task. Experiment 1 showed substantial switching cost for negative words, relative to neutral words. Experiment 2 replicated the first experiment, using a self-report measure of anxiety to examine if switching cost is a function of an anxiety-related attention bias. Results did not support this hypothesis. In addition, Experiment 2 revealed switch costs for positive words without the effect of the attention bias from anxiety. The present study demonstrates the effect of affective valence on a specific component of working memory. Moreover, findings suggest why affective valence effects on working memory have not been found in previous research.

  12. Sparse distributed memory overview

    NASA Technical Reports Server (NTRS)

    Raugh, Mike

    1990-01-01

    The Sparse Distributed Memory (SDM) project is investigating the theory and applications of a massively parallel computing architecture, called sparse distributed memory, that will support the storage and retrieval of sensory and motor patterns characteristic of autonomous systems. The immediate objectives of the project are centered in studies of the memory itself and in the use of the memory to solve problems in speech, vision, and robotics. Investigation of methods for encoding sensory data is an important part of the research. Examples of NASA missions that may benefit from this work are Space Station, planetary rovers, and solar exploration. Sparse distributed memory offers promising technology for systems that must learn through experience and be capable of adapting to new circumstances, and for operating any large complex system requiring automatic monitoring and control. Sparse distributed memory is a massively parallel architecture motivated by efforts to understand how the human brain works. Sparse distributed memory is an associative memory, able to retrieve information from cues that only partially match patterns stored in the memory. It is able to store long temporal sequences derived from the behavior of a complex system, such as progressive records of the system's sensory data and correlated records of the system's motor controls.
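The write/read behavior described above can be sketched in a few lines in the spirit of Kanerva's design; the sizes, activation radius, and counter scheme here are illustrative assumptions, not the project's implementation:

```python
import numpy as np

class SDM:
    # Minimal sparse distributed memory over binary vectors: a fixed set
    # of random "hard locations" with per-bit counters.
    def __init__(self, n_locations=2000, dim=256, radius=120, seed=0):
        rng = np.random.default_rng(seed)
        self.addresses = rng.integers(0, 2, (n_locations, dim))
        self.counters = np.zeros((n_locations, dim), dtype=np.int64)
        self.radius = radius

    def _active(self, addr):
        # Hard locations within Hamming radius of the cue participate.
        return np.count_nonzero(self.addresses != addr, axis=1) <= self.radius

    def write(self, addr, data):
        # Store bits as +/-1 increments at every active location.
        self.counters[self._active(addr)] += 2 * np.asarray(data) - 1

    def read(self, addr):
        # Sum counters over active locations and threshold at zero.
        return (self.counters[self._active(addr)].sum(axis=0) > 0).astype(int)
```

Because many locations participate in each write, a read cued with a noisy or partial address still pools enough counters to reconstruct the stored pattern, which is what makes the memory associative.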

  13. Improving Memory Error Handling Using Linux

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlton, Michael Andrew; Blanchard, Sean P.; Debardeleben, Nathan A.

As supercomputers continue to get faster and more powerful, they will also have more nodes. If nothing is done, the amount of memory in supercomputer clusters will soon grow large enough that memory failures will become too frequent to manage by manually replacing memory DIMMs. "Improving Memory Error Handling Using Linux" is a process-oriented method to solve this problem by using the Linux kernel to disable (offline) faulty memory pages containing bad addresses, preventing them from being used again by a process. Offlining memory pages simplifies error handling and reduces both the hardware and manpower costs required to run Los Alamos National Laboratory (LANL) clusters. This process will be necessary for the future of supercomputing to allow the development of exascale computers: without memory error handling, it will not be feasible to manually replace the number of DIMMs that will fail daily on a machine with 32-128 petabytes of memory. Testing shows that offlining memory pages works and is relatively simple to use. As more testing is conducted, the entire process will be automated within the high-performance computing (HPC) monitoring software, Zenoss, at LANL.
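Linux exposes page offlining through sysfs when the kernel is built with memory-failure support. A minimal sketch of the interface follows; it only constructs the privileged command as a string rather than executing it, and the physical address used is purely illustrative.

```python
# Linux sysfs interface for page offlining (requires CONFIG_MEMORY_FAILURE).
# Writing a physical address to soft_offline_page migrates the page's contents
# and retires the page without killing the processes using it.
SOFT_OFFLINE = "/sys/devices/system/memory/soft_offline_page"

def soft_offline_command(phys_addr: int) -> str:
    """Build the (root-only) shell command that would offline the page
    containing phys_addr. Returned as a string for illustration; actually
    running it requires root and a kernel with memory-failure support."""
    return f"echo {phys_addr:#x} > {SOFT_OFFLINE}"

cmd = soft_offline_command(0x7f63a000)   # hypothetical faulty address
```

An automated pipeline like the one the abstract describes would feed addresses reported by the hardware error log into such a command instead of a hard-coded value.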

  14. Cognitive control over memory - individual differences in memory performance for emotional and neutral material.

    PubMed

    Wierzba, M; Riegel, M; Wypych, M; Jednoróg, K; Grabowska, A; Marchewka, A

    2018-02-28

It is widely accepted that people differ in memory performance. The ability to control one's memory depends on multiple factors, including the emotional properties of the memorized material. While it has been widely demonstrated that emotion can facilitate memory, it is unclear how emotion modifies our ability to suppress memory. One reason for the lack of consensus among researchers is that individual differences in memory performance were largely neglected in previous studies. We used the directed forgetting paradigm in an fMRI study, in which subjects viewed neutral and emotional words that they were instructed to remember or to forget. Subsequently, subjects' memory of these words was tested. Finally, they assessed the words on scales of valence, arousal, sadness and fear. We found that memory performance depended on instruction, as reflected in the engagement of the lateral prefrontal cortex (lateral PFC), irrespective of the emotional properties of the words. While lateral PFC engagement did not differ between neutral and emotional conditions, it correlated with behavioural performance when emotional - as opposed to neutral - words were presented. A deeper understanding of the underlying brain mechanisms is likely to require a study of individual differences in cognitive abilities to suppress memory.

  15. Large capacity temporary visual memory

    PubMed Central

    Endress, Ansgar D.; Potter, Mary C.

    2014-01-01

    Visual working memory (WM) capacity is thought to be limited to three or four items. However, many cognitive activities seem to require larger temporary memory stores. Here, we provide evidence for a temporary memory store with much larger capacity than past WM capacity estimates. Further, based on previous WM research, we show that a single factor — proactive interference — is sufficient to bring capacity estimates down to the range of previous WM capacity estimates. Participants saw a rapid serial visual presentation (RSVP) of 5 to 21 pictures of familiar objects or words presented at rates of 4/s or 8/s, respectively, and thus too fast for strategies such as rehearsal. Recognition memory was tested with a single probe item. When new items were used on all trials, no fixed memory capacities were observed, with estimates of up to 9.1 retained pictures for 21-item lists, and up to 30.0 retained pictures for 100-item lists, and no clear upper bound to how many items could be retained. Further, memory items were not stored in a temporally stable form of memory, but decayed almost completely after a few minutes. In contrast, when, as in most WM experiments, a small set of items was reused across all trials, thus creating proactive interference among items, capacity remained in the range reported in previous WM experiments. These results show that humans have a large-capacity temporary memory store in the absence of proactive interference, and raise the question of whether temporary memory in everyday cognitive processing is severely limited as in WM experiments, or has the much larger capacity found in the present experiments. PMID:23937181

  16. Opiate-associated contextual memory formation and retrieval are differentially modulated by dopamine D1 and D2 signaling in hippocampal-prefrontal connectivity.

    PubMed

    Wang, Yunpeng; Zhang, Hongying; Cui, Jingjing; Zhang, Jing; Yin, Fangyuan; Guo, Hao; Lai, Jianghua; Xing, Bo

    2018-04-17

Contextual memory driven by abused drugs such as opiates has a central role in maintenance and relapse of drug-taking behaviors. Although dopamine (DA) signaling favors memory storage and retrieval via regulation of hippocampal-prefrontal connectivity, its role in modulating opiate-associated contextual memory is largely unknown. Here, we report roles of DA signaling within the hippocampal-prefrontal circuit for opiate-related memories. Combining conditioned place preference (CPP) with molecular analyses, we investigated the DA D1 receptor (D1R) and extracellular signal-regulated kinase (ERK)-cAMP-response element binding protein (CREB) signaling, as well as DA D2 receptor (D2R) and protein kinase B (PKB or Akt)/glycogen synthase kinase 3 (GSK3) signaling in the ventral hippocampus (vHip) and medial prefrontal cortex (mPFC) during the formation of opiate-related associative memories. Morphine-CPP acquisition increased the activity of the D1R-ERK-CREB pathway in both the vHip and mPFC. Morphine-CPP reinstatement was associated with the D2R-mediated hyperactive GSK3 via Akt inhibition in the vHip and mPFC. Furthermore, integrated D1R-ERK-CREB and D2R-Akt-GSK3 pathways in the vHip-mPFC circuit are required for the acquisition and retrieval of the morphine contextual memory, respectively. Moreover, blockage of D1R or D2R signaling could alleviate normal Hip-dependent spatial memory. These results suggest that D1R and D2R signaling are differentially involved in the acquisition and retrieval of morphine contextual memory, and that DA signaling in the vHip-mPFC connection contributes to morphine-associated and normal memory, largely depending on opiate exposure states.

  17. WARP3D-Release 10.8: Dynamic Nonlinear Analysis of Solids using a Preconditioned Conjugate Gradient Software Architecture

    NASA Technical Reports Server (NTRS)

    Koppenhoefer, Kyle C.; Gullerud, Arne S.; Ruggieri, Claudio; Dodds, Robert H., Jr.; Healy, Brian E.

    1998-01-01

This report describes the theoretical background material and commands necessary to use the WARP3D finite element code. WARP3D is under continuing development as a research code for the solution of very large-scale, 3-D solid models subjected to static and dynamic loads. Specific features in the code oriented toward the investigation of ductile fracture in metals include a robust finite strain formulation, a general J-integral computation facility (with inertia, face loading), an element extinction facility to model crack growth, nonlinear material models including viscoplastic effects, and the Gurson-Tvergaard dilatant plasticity model for void growth. The nonlinear, dynamic equilibrium equations are solved using an incremental-iterative, implicit formulation with full Newton iterations to eliminate residual nodal forces. The history integration of the nonlinear equations of motion is accomplished with Newmark's Beta method. A central feature of WARP3D is a linear-preconditioned conjugate gradient (LPCG) solver implemented in an element-by-element format to replace a conventional direct linear equation solver. This software architecture dramatically reduces both the memory requirements and CPU time for very large, nonlinear solid models, since formation of the assembled (dynamic) stiffness matrix is avoided. Analyses thus exhibit the numerical stability for large time (load) steps provided by the implicit formulation, coupled with the low memory requirements characteristic of an explicit code. In addition to the much lower memory requirements of the LPCG solver, the CPU time required for solution of the linear equations during each Newton iteration is generally one-half or less of that required by a traditional direct solver. All other computational aspects of the code (element stiffnesses, element strains, stress updating, element internal forces) are implemented in the element-by-element, blocked architecture. This greatly improves vectorization of the code on uni-processor hardware and enables straightforward parallel-vector processing of element blocks on multi-processor hardware.
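The LPCG idea in the record above, solving the linear system without assembling the stiffness matrix, can be sketched with a generic Jacobi-preconditioned conjugate gradient loop. The operator only needs to supply matrix-vector products, which WARP3D accumulates element by element; the code below is an illustrative sketch with a dense stand-in matrix, not WARP3D's implementation.

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=200):
    """Jacobi (diagonal) preconditioned conjugate gradient for SPD systems.
    A may be any object supporting A @ x, so an assembled matrix is never
    required: an element-by-element operator works equally well."""
    x = np.zeros_like(b)
    r = b - A @ x                  # initial residual
    z = M_inv_diag * r             # apply the diagonal preconditioner
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p                 # the only place A is "touched"
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Small SPD test system standing in for the dynamic stiffness matrix
rng = np.random.default_rng(0)
B = rng.standard_normal((30, 30))
A = B.T @ B + 30.0 * np.eye(30)
b = rng.standard_normal(30)
x = pcg(A, b, 1.0 / np.diag(A))
```

Replacing the dense `A @ p` with a loop over element blocks is what removes the assembled-matrix memory cost the abstract describes.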

  18. A molecular shift register based on electron transfer

    NASA Technical Reports Server (NTRS)

    Hopfield, J. J.; Onuchic, Josenelson; Beratan, David N.

    1988-01-01

    An electronic shift-register memory at the molecular level is described. The memory elements are based on a chain of electron-transfer molecules and the information is shifted by photoinduced electron-transfer reactions. This device integrates designed electronic molecules onto a very large scale integrated (silicon microelectronic) substrate, providing an example of a 'molecular electronic device' that could actually be made. The design requirements for such a device and possible synthetic strategies are discussed. Devices along these lines should have lower energy usage and enhanced storage density.

  19. Equation solvers for distributed-memory computers

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.

    1994-01-01

    A large number of scientific and engineering problems require the rapid solution of large systems of simultaneous equations. The performance of parallel computers in this area now dwarfs traditional vector computers by nearly an order of magnitude. This talk describes the major issues involved in parallel equation solvers with particular emphasis on the Intel Paragon, IBM SP-1 and SP-2 processors.

  20. Rapid Encoding of New Memories by Individual Neurons in the Human Brain

    PubMed Central

    Ison, Matias J.; Quian Quiroga, Rodrigo; Fried, Itzhak

    2015-01-01

The creation of memories about real-life episodes requires rapid neuronal changes that may appear after a single occurrence of an event. How is such demand met by neurons in the medial temporal lobe (MTL), which plays a fundamental role in episodic memory formation? We recorded the activity of MTL neurons in neurosurgical patients while they learned new associations. Pairs of unrelated pictures, one of a person and another of a place, were used to construct a meaningful association modeling the episodic memory of meeting a person in a particular place. We found that a large proportion of responsive MTL neurons expanded their selectivity to encode these specific associations within a few trials: cells initially responsive to one picture started firing to the associated one but not to others. Our results provide a plausible neural substrate for the inception of associations, which are crucial for the formation of episodic memories. PMID:26139375

  1. Dissociating word stem completion and cued recall as a function of divided attention at retrieval.

    PubMed

    Clarke, A J Benjamin; Butler, Laurie T

    2008-10-01

    The aim of this study was to investigate the widely held, but largely untested, view that implicit memory (repetition priming) reflects an automatic form of retrieval. Specifically, in Experiment 1 we explored whether a secondary task (syllable monitoring), performed during retrieval, would disrupt performance on explicit (cued recall) and implicit (stem completion) memory tasks equally. Surprisingly, despite substantial memory and secondary costs to cued recall when performed with a syllable-monitoring task, the same manipulation had no effect on stem completion priming or on secondary task performance. In Experiment 2 we demonstrated that even when using a particularly demanding version of the stem completion task that incurred secondary task costs, the corresponding disruption to implicit memory performance was minimal. Collectively, the results are consistent with the view that implicit memory retrieval requires little or no processing capacity and is not seemingly susceptible to the effects of dividing attention at retrieval.

  2. Design of a fault tolerant airborne digital computer. Volume 2: Computational requirements and technology

    NASA Technical Reports Server (NTRS)

    Ratner, R. S.; Shapiro, E. B.; Zeidler, H. M.; Wahlstrom, S. E.; Clark, C. B.; Goldberg, J.

    1973-01-01

This final report summarizes the work on the design of a fault tolerant digital computer for aircraft. Volume 2 is composed of two parts. Part 1 is concerned with the computational requirements associated with an advanced commercial aircraft. Part 2 reviews the technology that will be available for the implementation of the computer in the 1975-1985 period. With regard to the computational task, 26 computations were categorized according to computational load, memory requirements, criticality, permitted down-time, and the need to save data in order to effect a roll-back. The technology part stresses the impact of large scale integration (LSI) on the realization of logic and memory. Also considered were module interconnection possibilities so as to minimize fault propagation.

  3. Serotonin is critical for rewarded olfactory short-term memory in Drosophila.

    PubMed

    Sitaraman, Divya; LaFerriere, Holly; Birman, Serge; Zars, Troy

    2012-06-01

    The biogenic amines dopamine, octopamine, and serotonin are critical in establishing normal memories. A common view for the amines in insect memory performance has emerged in which dopamine and octopamine are largely responsible for aversive and appetitive memories. Examination of the function of serotonin begins to challenge the notion of one amine type per memory because altering serotonin function also reduces aversive olfactory memory and place memory levels. Could the function of serotonin be restricted to the aversive domain, suggesting a more specific dopamine/serotonin system interaction? The function of the serotonergic system in appetitive olfactory memory was examined. By targeting the tetanus toxin light chain (TNT) and the human inwardly rectifying potassium channel (Kir2.1) to the serotonin neurons with two different GAL4 driver combinations, the serotonergic system was inhibited. Additional use of the GAL80(ts1) system to control expression of transgenes to the adult stage of the life cycle addressed a potential developmental role of serotonin in appetitive memory. Reduction in appetitive olfactory memory performance in flies with these transgenic manipulations, without altering control behaviors, showed that the serotonergic system is also required for normal appetitive memory. Thus, serotonin appears to have a more general role in Drosophila memory, and implies an interaction with both the dopaminergic and octopaminergic systems.

  4. Electrical Switching of Perovskite Thin-Film Resistors

    NASA Technical Reports Server (NTRS)

    Liu, Shangqing; Wu, Juan; Ignatiev, Alex

    2010-01-01

    Electronic devices that exploit electrical switching of physical properties of thin films of perovskite materials (especially colossal magnetoresistive materials) have been invented. Unlike some related prior devices, these devices function at room temperature and do not depend on externally applied magnetic fields. Devices of this type can be designed to function as sensors (exhibiting varying electrical resistance in response to varying temperature, magnetic field, electric field, and/or mechanical pressure) and as elements of electronic memories. The underlying principle is that the application of one or more short electrical pulse(s) can induce a reversible, irreversible, or partly reversible change in the electrical, thermal, mechanical, and magnetic properties of a thin perovskite film. The energy in the pulse must be large enough to induce the desired change but not so large as to destroy the film. Depending on the requirements of a specific application, the pulse(s) can have any of a large variety of waveforms (e.g., square, triangular, or sine) and be of positive, negative, or alternating polarity. In some applications, it could be necessary to use multiple pulses to induce successive incremental physical changes. In one class of applications, electrical pulses of suitable shapes, sizes, and polarities are applied to vary the detection sensitivities of sensors. Another class of applications arises in electronic circuits in which certain resistance values are required to be variable: Incorporating the affected resistors into devices of the present type makes it possible to control their resistances electrically over wide ranges, and the lifetimes of electrically variable resistors exceed those of conventional mechanically variable resistors. 
Another and potentially the most important class of applications is that of resistance-based nonvolatile-memory devices, such as a resistance random access memory (RRAM) described in the immediately following article, Electrically Variable Resistive Memory Devices (MFS-32511-1).

  5. Interfering with free recall of words: Detrimental effects of phonological competition.

    PubMed

    Fernandes, Myra A; Wammes, Jeffrey D; Priselac, Sandra; Moscovitch, Morris

    2016-09-01

    We examined the effect of different distracting tasks, performed concurrently during memory retrieval, on recall of a list of words. By manipulating the type of material and processing (semantic, orthographic, and phonological) required in the distracting task, and comparing the magnitude of memory interference produced, we aimed to infer the kind of representation upon which retrieval of words depends. In Experiment 1, identifying odd digits concurrently during free recall disrupted memory, relative to a full attention condition, when the numbers were presented orthographically (e.g. nineteen), but not numerically (e.g. 19). In Experiment 2, a distracting task that required phonological-based decisions to either word or picture material produced large, but equivalent effects on recall of words. In Experiment 3, phonological-based decisions to pictures in a distracting task disrupted recall more than when the same pictures required semantically-based size estimations. In Experiment 4, a distracting task that required syllable decisions to line drawings interfered significantly with recall, while an equally difficult semantically-based color-decision task about the same line drawings, did not. Together, these experiments demonstrate that the degree of memory interference experienced during recall of words depends primarily on whether the distracting task competes for phonological representations or processes, and less on competition for semantic or orthographic or material-specific representations or processes. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Sentinel 2 MMFU: The first European Mass Memory System Based on NAND-Flash Storage Technology

    NASA Astrophysics Data System (ADS)

    Staehle, M.; Cassel, M.; Lonsdorfer, U.; Gliem, F.; Walter, D.; Fichna, T.

    2011-08-01

Sentinel-2 is the multispectral optical mission of the EU-ESA GMES (Global Monitoring for Environment and Security) program, currently under development by Astrium GmbH in Friedrichshafen (Germany) for a launch in 2013. The mission features a 490 Mbit/s optical sensor operating at high duty cycles, requiring in turn a large 2.4 Tbit on-board storage capacity. This storage requirement motivated the selection of NAND-Flash technology, which had already been secured by a lengthy period (2004-2009) of detailed testing, analysis and qualification by Astrium GmbH, IDA and ESTEC. The mass memory system is currently being realized by Astrium GmbH.

  7. Reducing computational costs in large scale 3D EIT by using a sparse Jacobian matrix with block-wise CGLS reconstruction.

    PubMed

    Yang, C L; Wei, H Y; Adler, A; Soleimani, M

    2013-06-01

Electrical impedance tomography (EIT) is a fast and cost-effective technique that provides a tomographic conductivity image of a subject from boundary current-voltage data. This paper proposes a time- and memory-efficient method for solving a large-scale 3D EIT inverse problem using a parallel conjugate gradient (CG) algorithm. A 3D EIT system with a large number of measurements produces a large Jacobian matrix, which causes difficulties in both storage and inversion. One of the challenges in 3D EIT is to decrease the reconstruction time and memory usage while retaining image quality. Firstly, a sparse matrix reduction technique is proposed that uses thresholding to set very small values of the Jacobian matrix to zero. Converting the Jacobian matrix to a sparse format eliminates the zero elements, which reduces the memory requirement. Secondly, a block-wise CG method for parallel reconstruction has been developed. The proposed method has been tested using simulated data as well as experimental test samples. A sparse Jacobian with block-wise CG enables the large-scale EIT problem to be solved efficiently. Image quality measures are presented to quantify the effect of sparse matrix reduction on the reconstruction results.
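The thresholding step can be sketched as follows. This is a pure-NumPy illustration with a synthetic stand-in for the Jacobian (the paper's actual matrices come from the EIT forward model), storing only the significant entries in COO-style coordinate form.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for the EIT Jacobian: most sensitivities are near zero
J = rng.standard_normal((500, 2000)) * rng.random((500, 2000)) ** 8

threshold = 1e-3 * np.abs(J).max()     # drop entries below 0.1% of the peak
mask = np.abs(J) >= threshold
rows, cols = np.nonzero(mask)          # COO-style sparse storage:
vals = J[mask]                         # keep only the significant entries

def sparse_matvec(x):
    """Approximate J @ x using only the retained entries."""
    y = np.zeros(J.shape[0])
    np.add.at(y, rows, vals * x[cols])
    return y

x = rng.standard_normal(J.shape[1])
dense = J @ x
approx = sparse_matvec(x)
```

Here `vals.size / J.size` is the fraction of entries kept, and the per-row error of `approx` is bounded by the sum of the discarded magnitudes times the cue vector's magnitudes, which is the trade-off the paper quantifies with image quality measures.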

  8. Object-based benefits without object-based representations.

    PubMed

    Fougnie, Daryl; Cormiea, Sarah M; Alvarez, George A

    2013-08-01

    Influential theories of visual working memory have proposed that the basic units of memory are integrated object representations. Key support for this proposal is provided by the same object benefit: It is easier to remember multiple features of a single object than the same set of features distributed across multiple objects. Here, we replicate the object benefit but demonstrate that features are not stored as single, integrated representations. Specifically, participants could remember 10 features better when arranged in 5 objects compared to 10 objects, yet memory for one object feature was largely independent of memory for the other object feature. These results rule out the possibility that integrated representations drive the object benefit and require a revision of the concept of object-based memory representations. We propose that working memory is object-based in regard to the factors that enhance performance but feature based in regard to the level of representational failure. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  9. ERP correlates of object recognition memory in Down syndrome: Do active and passive tasks measure the same thing?

    PubMed

    Van Hoogmoed, A H; Nadel, L; Spanò, G; Edgin, J O

    2016-02-01

    Event related potentials (ERPs) can help to determine the cognitive and neural processes underlying memory functions and are often used to study populations with severe memory impairment. In healthy adults, memory is typically assessed with active tasks, while in patient studies passive memory paradigms are generally used. In this study we examined whether active and passive continuous object recognition tasks measure the same underlying memory process in typically developing (TD) adults and in individuals with Down syndrome (DS), a population with known hippocampal impairment. We further explored how ERPs in these tasks relate to behavioral measures of memory. Data-driven analysis techniques revealed large differences in old-new effects in the active versus passive task in TD adults, but no difference between these tasks in DS. The group with DS required additional processing in the active task in comparison to the TD group in two ways. First, the old-new effect started 150 ms later. Second, more repetitions were required to show the old-new effect. In the group with DS, performance on a behavioral measure of object-location memory was related to ERP measures across both tasks. In total, our results suggest that active and passive ERP memory measures do not differ in DS and likely reflect the use of implicit memory, but not explicit processing, on both tasks. Our findings highlight the need for a greater understanding of the comparison between active and passive ERP paradigms before they are inferred to measure similar functions across populations (e.g., infants or intellectual disability). Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. The epigenetic basis of memory formation and storage.

    PubMed

    Jarome, Timothy J; Thomas, Jasmyne S; Lubin, Farah D

    2014-01-01

    The formation of long-term memory requires a series of cellular and molecular changes that involve transcriptional regulation of gene expression. While these changes in gene transcription were initially thought to be largely regulated by the activation of transcription factors by intracellular signaling molecules, epigenetic mechanisms have emerged as an important regulator of transcriptional processes across multiple brain regions to form a memory circuit for a learned event or experience. Due to their self-perpetuating nature and ability to bidirectionally control gene expression, these epigenetic mechanisms have the potential to not only regulate initial memory formation but also modify and update memory over time. This chapter focuses on the established, but poorly understood, role for epigenetic mechanisms such as posttranslational modifications of histone proteins and DNA methylation at the different stages of memory storage. Additionally, this chapter emphasizes how these mechanisms interact to control the ideal epigenetic environment for memory formation and modification in neurons. The reader will gain insights into the limitations in our current understanding of epigenetic regulation of memory storage, especially in terms of their cell-type specificity and the lack of understanding in the interactions of various epigenetic modifiers to one another to impact gene expression changes during memory formation.

  11. Optimization of an organic memristor as an adaptive memory element

    NASA Astrophysics Data System (ADS)

    Berzina, Tatiana; Smerieri, Anteo; Bernabò, Marco; Pucci, Andrea; Ruggeri, Giacomo; Erokhin, Victor; Fontana, M. P.

    2009-06-01

The combination of memory and signal handling characteristics of a memristor makes it a promising candidate for adaptive bioinspired information processing systems. This poses stringent requirements on the basic device, such as stability and reproducibility over a large number of training/learning cycles, and a large anisotropy in the fundamental control material parameter, in our case the electrical conductivity. In this work we report results on the improved performance of electrochemically controlled polymeric memristors, where optimization of a conducting polymer (polyaniline) in the active channel and better environmental control of fabrication methods led to a large increase both in the absolute values of the conductivity in the partially oxidized state of polyaniline and in the on-off conductivity ratio. These improvements are crucial for the application of the organic memristor to adaptive complex signal handling networks.

  12. Multidimensional NMR inversion without Kronecker products: Multilinear inversion

    NASA Astrophysics Data System (ADS)

    Medellín, David; Ravi, Vivek R.; Torres-Verdín, Carlos

    2016-08-01

    Multidimensional NMR inversion using Kronecker products poses several challenges. First, kernel compression is only possible when the kernel matrices are separable, and in recent years, there has been an increasing interest in NMR sequences with non-separable kernels. Second, in three or more dimensions, the singular value decomposition is not unique; therefore kernel compression is not well-defined for higher dimensions. Without kernel compression, the Kronecker product yields matrices that require large amounts of memory, making the inversion intractable for personal computers. Finally, incorporating arbitrary regularization terms is not possible using the Lawson-Hanson (LH) or the Butler-Reeds-Dawson (BRD) algorithms. We develop a minimization-based inversion method that circumvents the above problems by using multilinear forms to perform multidimensional NMR inversion without using kernel compression or Kronecker products. The new method is memory efficient, requiring less than 0.1% of the memory required by the LH or BRD methods. It can also be extended to arbitrary dimensions and adapted to include non-separable kernels, linear constraints, and arbitrary regularization terms. Additionally, it is easy to implement because only a cost function and its first derivative are required to perform the inversion.
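For a separable 2-D problem, the equivalence of the Kronecker formulation and the multilinear (mode-wise) formulation, and the memory gap between them, can be illustrated directly. The dimensions below are arbitrary illustrative choices, much smaller than a real NMR inversion.

```python
import numpy as np

rng = np.random.default_rng(0)
m1, n1, m2, n2 = 30, 20, 25, 15
K1 = rng.random((m1, n1))              # kernel for the first dimension
K2 = rng.random((m2, n2))              # kernel for the second dimension
F = rng.random((n1, n2))               # 2-D distribution to be recovered

# Kronecker formulation: materializes an (m1*m2) x (n1*n2) matrix
big = np.kron(K1, K2)
d_kron = big @ F.ravel()

# Multilinear formulation: apply each kernel along its own mode;
# no large matrix is ever formed
d_modes = (K1 @ F @ K2.T).ravel()
```

The two products agree (for row-major `ravel`, `(K1 kron K2) vec(F) = vec(K1 F K2^T)`), but the Kronecker matrix stores `m1*m2*n1*n2` entries versus `m1*n1 + m2*n2` for the separate kernels, which is the memory saving the abstract reports, and the mode-wise form extends naturally to non-separable kernels applied dimension by dimension.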

  13. Application of holographic optical techniques to bulk memory.

    NASA Technical Reports Server (NTRS)

    Anderson, L. K.

    1971-01-01

    Current efforts to exploit the spatial redundancy and built-in imaging of holographic optical techniques to provide high information densities without critical alignment and tight mechanical tolerances are reviewed. Read-write-erase in situ operation is possible but is presently impractical because of limitations in available recording media. As these are overcome, it should prove feasible to build holographic bulk memories with mechanically replaceable hologram plates featuring very fast (less than 2 microsec) random access to large (greater than 100 million bit) data blocks and very high throughput (greater than 500 Mbit/sec). Using volume holographic storage it may eventually be possible to realize random-access mass memories which require no mechanical motion and yet provide very high capacity.

  14. An Adaptive Memory Interface Controller for Improving Bandwidth Utilization of Hybrid and Reconfigurable Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Castellana, Vito G.; Tumeo, Antonino; Ferrandi, Fabrizio

    Emerging applications such as data mining, bioinformatics, knowledge discovery, social network analysis are irregular. They use data structures based on pointers or linked lists, such as graphs, unbalanced trees or unstructures grids, which generates unpredictable memory accesses. These data structures usually are large, but difficult to partition. These applications mostly are memory bandwidth bounded and have high synchronization intensity. However, they also have large amounts of inherent dynamic parallelism, because they potentially perform a task for each one of the element they are exploring. Several efforts are looking at accelerating these applications on hybrid architectures, which integrate general purpose processorsmore » with reconfigurable devices. Some solutions, which demonstrated significant speedups, include custom-hand tuned accelerators or even full processor architectures on the reconfigurable logic. In this paper we present an approach for the automatic synthesis of accelerators from C, targeted at irregular applications. In contrast to typical High Level Synthesis paradigms, which construct a centralized Finite State Machine, our approach generates dynamically scheduled hardware components. While parallelism exploitation in typical HLS-generated accelerators is usually bound within a single execution flow, our solution allows concurrently running multiple execution flow, thus also exploiting the coarser grain task parallelism of irregular applications. Our approach supports multiple, multi-ported and distributed memories, and atomic memory operations. Its main objective is parallelizing as many memory operations as possible, independently from their execution time, to maximize the memory bandwidth utilization. This significantly differs from current HLS flows, which usually consider a single memory port and require precise scheduling of memory operations. 
    A key innovation of our approach is the generation of a memory interface controller, which dynamically maps concurrent memory accesses to multiple ports. We present a case study on a typical irregular kernel, graph Breadth First Search (BFS), exploring different tradeoffs in terms of parallelism and number of memories.
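    The effect of such a controller can be pictured with a toy arbiter: concurrent memory requests queue up, and at most `n_ports` of them are granted per cycle. This sketch is purely illustrative; the `schedule` function and its FIFO policy are assumptions for exposition, not the generated controller described in the paper.

```python
from collections import deque

def schedule(requests, n_ports):
    """Grant at most n_ports pending memory requests per cycle, FIFO.

    Returns the list of per-cycle grant groups, illustrating how a
    multi-ported interface drains a burst of concurrent accesses.
    """
    pending = deque(requests)
    cycles = []
    while pending:
        # One "cycle": pop up to n_ports requests and grant them together.
        granted = [pending.popleft() for _ in range(min(n_ports, len(pending)))]
        cycles.append(granted)
    return cycles
```

With two ports, five requests complete in three cycles instead of five, which is the bandwidth gain the paper's controller aims for.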

  15. Mild cognitive impairment and prospective memory: translating the evidence into neuropsychological practice.

    PubMed

    Kinsella, Glynda J; Pike, Kerryn E; Cavuoto, Marina G; Lee, Stephen D

    2018-04-30

    There has been a recent rapid development of research characterizing prospective memory performance in mild cognitive impairment (MCI) in older age. However, this body of literature remains largely separated from routine clinical practice in neuropsychology. Furthermore, there is emerging evidence of effective interventions to improve prospective memory performance. Therefore, our objective in this article was to offer a clinical neuropsychological perspective on the existing research in order to facilitate the translation of the evidence base into clinical practice. By conducting a critical review of the existing research related to prospective memory and MCI, we highlight how these data can be introduced into clinical practice, either within diagnostic assessment or clinical management. Prospective memory is impaired in older adults with MCI, with a pattern of performance that helps with differential diagnosis from healthy aging. Clinical neuropsychologists are encouraged to add prospective memory assessment to their toolbox for diagnostic evaluation of clients with MCI. Preliminary findings of prospective memory interventions in MCI are promising, but more work is required to determine how different approaches translate to increasing independence in everyday life.

  16. EBLAST: an efficient high-compression image transformation. 3. Application to Internet image and video transmission

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.; Ritter, Gerhard X.; Caimi, Frank M.

    2001-12-01

    A wide variety of digital image compression transforms developed for still imaging and broadcast video transmission are unsuitable for Internet video applications due to insufficient compression ratio, poor reconstruction fidelity, or excessive computational requirements. Examples include hierarchical transforms that require all, or a large portion of, a source image to reside in memory at one time, transforms that induce significant blocking effects at operationally salient compression ratios, and algorithms that require large amounts of floating-point computation. The latter constraint holds especially for video compression by small mobile imaging devices for transmission to, and compression on, platforms such as palmtop computers or personal digital assistants (PDAs). As Internet video requirements for frame rate and resolution increase to produce more detailed, less discontinuous motion sequences, a new class of compression transforms will be needed, especially for small memory models and displays such as those found on PDAs. In this, the third paper in the series, we discuss the EBLAST compression transform and its application to Internet communication. Leading transforms for compression of Internet video and still imagery are reviewed and analyzed, including GIF, JPEG, AWIC (wavelet-based), wavelet packets, and SPIHT, whose performance is compared with EBLAST. Performance analysis criteria include time and space complexity and quality of the decompressed image. The latter is determined by rate-distortion data obtained from a database of realistic test images. Discussion also includes issues such as robustness of the compressed format to channel noise. EBLAST has been shown to outperform JPEG and, unlike current wavelet compression transforms, supports fast implementation on embedded processors with small memory models.

  17. The influence of cognitive load on spatial search performance.

    PubMed

    Longstaffe, Kate A; Hood, Bruce M; Gilchrist, Iain D

    2014-01-01

    During search, executive function enables individuals to direct attention to potential targets, remember locations visited, and inhibit distracting information. In the present study, we investigated these executive processes in large-scale search. In our tasks, participants searched a room containing an array of illuminated locations embedded in the floor. The participants' task was to press the switches at the illuminated locations on the floor so as to locate a target that changed color when pressed. The perceptual salience of the search locations was manipulated by having some locations flashing and some static. Participants were more likely to search at flashing locations, even when they were explicitly informed that the target was equally likely to be at any location. In large-scale search, attention was captured by the perceptual salience of the flashing lights, leading to a bias to explore these targets. Despite this failure of inhibition, participants were able to restrict returns to previously visited locations, a measure of spatial memory performance. Participants were more able to inhibit exploration to flashing locations when they were not required to remember which locations had previously been visited. A concurrent digit-span memory task further disrupted inhibition during search, as did a concurrent auditory attention task. These experiments extend a load theory of attention to large-scale search, which relies on egocentric representations of space. High cognitive load on working memory leads to increased distractor interference, providing evidence for distinct roles for the executive subprocesses of memory and inhibition during large-scale search.

  18. Silicon photonic integrated circuits with electrically programmable non-volatile memory functions.

    PubMed

    Song, J-F; Lim, A E-J; Luo, X-S; Fang, Q; Li, C; Jia, L X; Tu, X-G; Huang, Y; Zhou, H-F; Liow, T-Y; Lo, G-Q

    2016-09-19

    Conventional silicon photonic integrated circuits do not normally possess memory functions, which require on-chip power in order to maintain circuit states in tuned or field-configured switching routes. In this context, we present an electrically programmable add/drop microring resonator with a wavelength shift of 426 pm between the ON/OFF states. Electrical pulses are used to control the choice of the state. Our experimental results show a wavelength shift of 2.8 pm/ms and a light intensity variation of ~0.12 dB/ms for a fixed wavelength in the OFF state. Theoretically, our device can accommodate up to 65 states of multi-level memory functions. Such memory functions can be integrated into wavelength division multiplexing (WDM) filters and applied to optical routers and computing architectures fulfilling large data downloading demands.

  19. Long-term memory consolidation: The role of RNA-binding proteins with prion-like domains.

    PubMed

    Sudhakaran, Indulekha P; Ramaswami, Mani

    2017-05-04

    Long-term and short-term memories differ primarily in the duration of their retention. At a molecular level, long-term memory (LTM) is distinguished from short-term memory (STM) by its requirement for new gene expression. In addition to transcription (nuclear gene expression), the translation of stored mRNAs is necessary for LTM formation. The mechanisms and functions of the temporal and spatial regulation of mRNAs required for LTM are a major contemporary problem, of interest from molecular, cell biological, neurobiological and clinical perspectives. This review discusses primary evidence in support of translational regulatory events involved in LTM and a model in which different phases of translation underlie distinct phases of consolidation of memories. However, it focuses largely on mechanisms of memory persistence and the role of prion-like domains in this defining aspect of long-term memory. We consider primary evidence for the concept that Cytoplasmic Polyadenylation Element Binding (CPEB) protein enables the persistence of formed memories by transforming, in a prion-like manner, from a soluble monomeric state to a self-perpetuating, persistent, polymeric, translationally active state required for maintaining persistent synaptic plasticity. We further discuss prion-like domains prevalent on several other RNA-binding proteins involved in the neuronal translational control underlying LTM. Growing evidence indicates that such RNA regulatory proteins are components of mRNP (RiboNucleoProtein) granules. In these proteins, prion-like domains, being intrinsically disordered, could mediate weak transient interactions that allow the assembly of RNP granules, a source of silenced mRNAs whose translation is necessary for LTM.
    We consider the structural bases for RNA granule formation as well as the functions of disordered domains, and discuss how these complicate the interpretation of existing experimental data relevant to general mechanisms by which prion-domain-containing RBPs function in synapse-specific plasticity underlying LTM.

  20. Hierarchical Traces for Reduced NSM Memory Requirements

    NASA Astrophysics Data System (ADS)

    Dahl, Torbjørn S.

    This paper presents work on using hierarchical long-term memory to reduce the memory requirements of nearest sequence memory (NSM) learning, a previously published, instance-based reinforcement learning algorithm. A hierarchical memory representation reduces the memory requirements by allowing traces to share common sub-sequences. We present moderated mechanisms for estimating discounted future rewards and for dealing with hidden state using hierarchical memory. We also present an experimental analysis of how the sub-sequence length affects the memory compression achieved and show that the reduced memory requirements do not affect the speed of learning. Finally, we analyse and discuss the persistence of the sub-sequences independent of specific trace instances.
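    The memory saving from sharing common sub-sequences can be illustrated with a toy trie over traces: traces with a common prefix store it only once, so node count grows with the number of distinct sub-sequences rather than with total trace length. This is a simplified prefix-sharing sketch only; the paper's NSM variant also attaches reward estimates and hidden-state handling to the stored structure.

```python
class TraceTrie:
    """Toy trie that stores traces with shared prefixes stored once."""

    def __init__(self):
        self.root = {}
        self.node_count = 0  # memory proxy: one node per distinct sub-sequence step

    def insert(self, trace):
        node = self.root
        for step in trace:
            if step not in node:
                node[step] = {}
                self.node_count += 1
            node = node[step]

    def contains_prefix(self, prefix):
        node = self.root
        for step in prefix:
            if step not in node:
                return False
            node = node[step]
        return True
```

Two traces ("a", "b", "c") and ("a", "b", "d") cost 4 nodes instead of the 6 steps a flat per-trace store would hold, which is the compression effect the paper quantifies against sub-sequence length.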

  1. Tracking the Time-Dependent Role of the Hippocampus in Memory Recall Using DREADDs.

    PubMed

    Varela, Carmen; Weiss, Sarah; Meyer, Retsina; Halassa, Michael; Biedenkapp, Joseph; Wilson, Matthew A; Goosens, Ki Ann; Bendor, Daniel

    2016-01-01

    The hippocampus is critical for the storage of new autobiographical experiences as memories. Following an initial encoding stage in the hippocampus, memories undergo a process of systems-level consolidation, which leads to greater stability through time and an increased reliance on neocortical areas for retrieval. The extent to which the retrieval of these consolidated memories still requires the hippocampus is unclear, as both spared and severely degraded remote memory recall have been reported following post-training hippocampal lesions. One difficulty in definitively addressing the role of the hippocampus in remote memory retrieval is the precision with which the entire volume of the hippocampal region can be inactivated. To address this issue, we used Designer Receptors Exclusively Activated by Designer Drugs (DREADDs), a chemical-genetic tool capable of highly specific neuronal manipulation over large volumes of brain tissue. We find that remote (>7 weeks after acquisition), but not recent (1-2 days after acquisition) contextual fear memories can be recalled after injection of the DREADD agonist (CNO) in animals expressing the inhibitory DREADD in the entire hippocampus. Our data demonstrate a time-dependent role of the hippocampus in memory retrieval, supporting the standard model of systems consolidation.

  2. Ageing and feature binding in visual working memory: The role of presentation time.

    PubMed

    Rhodes, Stephen; Parra, Mario A; Logie, Robert H

    2016-01-01

    A large body of research has clearly demonstrated that healthy ageing is accompanied by an associative memory deficit. Older adults exhibit disproportionately poor performance on memory tasks requiring the retention of associations between items (e.g., pairs of unrelated words). In contrast to this robust deficit, older adults' ability to form and temporarily hold bound representations of an object's surface features, such as colour and shape, appears to be relatively well preserved. The findings of one set of experiments, however, suggest that older adults may struggle to form temporary bound representations in visual working memory when given more time to study objects; these findings were based on between-participant comparisons across experimental paradigms. The present study directly assesses the role of presentation time in the ability of younger and older adults to bind shape and colour in visual working memory using a within-participant design. We report new evidence that giving older adults longer to study memory objects does not differentially affect their immediate memory for feature combinations relative to individual features. This is in line with a growing body of research suggesting that there is no age-related impairment in immediate memory for colour-shape binding.

  3. Tracking the Time-Dependent Role of the Hippocampus in Memory Recall Using DREADDs

    PubMed Central

    Varela, Carmen; Weiss, Sarah; Meyer, Retsina; Halassa, Michael; Biedenkapp, Joseph; Wilson, Matthew A.; Goosens, Ki Ann

    2016-01-01

    The hippocampus is critical for the storage of new autobiographical experiences as memories. Following an initial encoding stage in the hippocampus, memories undergo a process of systems-level consolidation, which leads to greater stability through time and an increased reliance on neocortical areas for retrieval. The extent to which the retrieval of these consolidated memories still requires the hippocampus is unclear, as both spared and severely degraded remote memory recall have been reported following post-training hippocampal lesions. One difficulty in definitively addressing the role of the hippocampus in remote memory retrieval is the precision with which the entire volume of the hippocampal region can be inactivated. To address this issue, we used Designer Receptors Exclusively Activated by Designer Drugs (DREADDs), a chemical-genetic tool capable of highly specific neuronal manipulation over large volumes of brain tissue. We find that remote (>7 weeks after acquisition), but not recent (1–2 days after acquisition) contextual fear memories can be recalled after injection of the DREADD agonist (CNO) in animals expressing the inhibitory DREADD in the entire hippocampus. Our data demonstrate a time-dependent role of the hippocampus in memory retrieval, supporting the standard model of systems consolidation. PMID:27145133

  4. Composing Data Parallel Code for a SPARQL Graph Engine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Castellana, Vito G.; Tumeo, Antonino; Villa, Oreste

    Big data analytics processes large amounts of data to extract knowledge from them. Semantic databases are big data applications that adopt the Resource Description Framework (RDF) to structure metadata through a graph-based representation. The graph-based representation provides several benefits, such as the possibility to perform in-memory processing with large amounts of parallelism. SPARQL is a language used to perform queries on RDF-structured data through graph matching. In this paper we present a tool that automatically translates SPARQL queries to parallel graph crawling and graph matching operations. The tool also supports complex SPARQL constructs, which require more than basic graph matching for their implementation. The tool generates parallel code annotated with OpenMP pragmas for x86 shared-memory multiprocessors (SMPs). With respect to commercial database systems such as Virtuoso, our approach reduces memory occupation due to join operations and provides higher performance. We show the scaling of the automatically generated graph-matching code on a 48-core SMP.
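    The basic graph-matching step that such a tool parallelizes can be sketched as a backtracking join over triple patterns, where terms beginning with "?" are variables bound against the RDF triples. This toy sequential matcher is illustrative only; the tool described above instead emits OpenMP-annotated parallel code for this kind of matching.

```python
def match(triples, patterns, binding=None):
    """Return all variable bindings satisfying every (s, p, o) pattern.

    Terms starting with "?" are variables; anything else must match the
    triple literally. Matching is a naive backtracking join over patterns.
    """
    binding = binding or {}
    if not patterns:
        return [binding]
    head, rest = patterns[0], patterns[1:]
    results = []
    for triple in triples:
        new = dict(binding)
        ok = True
        for term, value in zip(head, triple):
            if term.startswith("?"):
                # Bind the variable, or fail if it is bound to something else.
                if new.setdefault(term, value) != value:
                    ok = False
                    break
            elif term != value:
                ok = False
                break
        if ok:
            results += match(triples, rest, new)
    return results
```

A two-pattern query like `?x knows ?y . ?y knows ?z` joins on `?y`, exactly the kind of join whose memory cost the paper's approach reduces.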

  5. Empirical gradient threshold technique for automated segmentation across image modalities and cell lines.

    PubMed

    Chalfoun, J; Majurski, M; Peskin, A; Breen, C; Bajcsy, P; Brady, M

    2015-10-01

    New microscopy technologies are enabling image acquisition of terabyte-sized data sets consisting of hundreds of thousands of images. In order to retrieve and analyze the biological information in these large data sets, segmentation is needed to detect the regions containing cells or cell colonies. Our work with hundreds of large images (each 21,000×21,000 pixels) requires a segmentation method that: (1) yields high segmentation accuracy, (2) is applicable to multiple cell lines, with various densities of cells and cell colonies, and to several imaging modalities, (3) can process large data sets in a timely manner, (4) has a low memory footprint and (5) has a small number of user-set parameters that do not require adjustment during the segmentation of large image sets. None of the currently available segmentation methods meets all these requirements. Segmentation based on image gradient thresholding is fast and has a low memory footprint. However, existing techniques that automate the selection of the gradient image threshold do not work across image modalities, multiple cell lines, and a wide range of foreground/background densities (requirement 2), and all fail the requirement for robust parameters that do not require re-adjustment over time (requirement 5). We present a novel, empirically derived image gradient threshold selection method for separating foreground and background pixels in an image that meets all the requirements listed above. We quantify the difference between our approach and existing ones in terms of accuracy, execution speed, memory usage and number of adjustable parameters on a reference data set. This reference data set consists of 501 validation images with manually determined segmentations and image sizes ranging from 0.36 Megapixels to 850 Megapixels. It includes four different cell lines and two image modalities: phase contrast and fluorescent.
    Our new technique, called Empirical Gradient Threshold (EGT), is derived from this reference data set with a 10-fold cross-validation method. EGT segments cells or colonies with resulting Dice accuracy index measurements above 0.92 for all cross-validation data sets. EGT results have also been visually verified on a much larger data set that includes bright field and Differential Interference Contrast (DIC) images, 16 cell lines and 61 time-sequence data sets, for a total of 17,479 images. This method is implemented as an open-source plugin to ImageJ as well as a standalone executable that can be downloaded from the following link: https://isg.nist.gov/. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.
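    The core of gradient-threshold segmentation can be sketched in a few lines: compute a gradient-magnitude image and keep the pixels above a threshold. The published EGT method derives that threshold empirically from the gradient histogram; the fixed percentile below is a simplified stand-in for that model, and the function name and parameter are illustrative.

```python
import numpy as np

def gradient_threshold_segment(image, percentile=90.0):
    """Segment foreground by thresholding the gradient magnitude.

    NOTE: EGT fits its threshold empirically to a reference data set;
    the percentile here is a simplified placeholder for that model.
    """
    gy, gx = np.gradient(image.astype(float))  # finite-difference gradients
    mag = np.hypot(gx, gy)                     # per-pixel gradient magnitude
    thresh = np.percentile(mag, percentile)    # stand-in for the empirical model
    return mag > thresh                        # boolean foreground mask
```

On a synthetic bright square over a dark background, the mask picks out the high-gradient boundary while flat interior and background regions stay unmarked, which is why the approach is fast and has a low memory footprint.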

  6. Parallel Computing for Probabilistic Response Analysis of High Temperature Composites

    NASA Technical Reports Server (NTRS)

    Sues, R. H.; Lua, Y. J.; Smith, M. D.

    1994-01-01

    The objective of this Phase I research was to establish the required software and hardware strategies to achieve large scale parallelism in solving PCM problems. To meet this objective, several investigations were conducted. First, we identified the multiple levels of parallelism in PCM and the computational strategies to exploit these parallelisms. Next, several software and hardware efficiency investigations were conducted. These involved the use of three different parallel programming paradigms and solution of two example problems on both a shared-memory multiprocessor and a distributed-memory network of workstations.

  7. Remote Memory and Cortical Synaptic Plasticity Require Neuronal CCCTC-Binding Factor (CTCF).

    PubMed

    Kim, Somi; Yu, Nam-Kyung; Shim, Kyu-Won; Kim, Ji-Il; Kim, Hyopil; Han, Dae Hee; Choi, Ja Eun; Lee, Seung-Woo; Choi, Dong Il; Kim, Myung Won; Lee, Dong-Sung; Lee, Kyungmin; Galjart, Niels; Lee, Yong-Seok; Lee, Jae-Hyung; Kaang, Bong-Kiun

    2018-05-30

    The molecular mechanism of long-term memory has been extensively studied in the context of hippocampus-dependent recent memory examined within several days. However, months-old remote memory, maintained in the cortex over the long term, has not yet been investigated in depth at the molecular level. Various epigenetic mechanisms are known to be important for long-term memory, but how the 3D chromatin architecture and its regulator molecules contribute to neuronal plasticity and systems consolidation is still largely unknown. CCCTC-binding factor (CTCF) is an 11-zinc finger protein well known for its role as a genome architecture molecule. Male conditional knock-out mice in which CTCF is lost in excitatory neurons during adulthood showed normal recent memory in the contextual fear conditioning and spatial water maze tasks. However, they showed remarkable impairments in remote memory in both tasks. Underlying the remote memory-specific phenotypes, we observed that female CTCF conditional knock-out mice exhibit disrupted cortical LTP, but not hippocampal LTP. Similarly, we observed that CTCF deletion in inhibitory neurons caused partial impairment of remote memory. Through RNA sequencing, we observed that CTCF knockdown in cortical neuron culture caused altered expression of genes that are highly involved in cell adhesion, synaptic plasticity, and memory. These results suggest that remote memory storage in the cortex requires CTCF-mediated gene regulation in neurons, whereas recent memory formation in the hippocampus does not. SIGNIFICANCE STATEMENT CCCTC-binding factor (CTCF) is a well-known 3D genome architectural protein that regulates gene expression. Here, we use two different CTCF conditional knock-out mouse lines and reveal, for the first time, that CTCF is critically involved in the regulation of remote memory. We also show that CTCF is necessary for appropriate expression of genes, many of which we found to be involved in learning- and memory-related processes. Our study provides behavioral and physiological evidence for the involvement of CTCF-mediated gene regulation in remote long-term memory and advances our understanding of systems consolidation mechanisms. Copyright © 2018 the authors 0270-6474/18/385042-11$15.00/0.

  8. A Cerebellar-model Associative Memory as a Generalized Random-access Memory

    NASA Technical Reports Server (NTRS)

    Kanerva, Pentti

    1989-01-01

    A versatile neural-net model is explained in terms familiar to computer scientists and engineers. It is called the sparse distributed memory, and it is a random-access memory for very long words (for patterns with thousands of bits). Its potential utility is the result of several factors: (1) a large pattern representing an object or a scene or a moment can encode a large amount of information about what it represents; (2) this information can serve as an address to the memory, and it can also serve as data; (3) the memory is noise tolerant--the information need not be exact; (4) the memory can be made arbitrarily large and hence an arbitrary amount of information can be stored in it; and (5) the architecture is inherently parallel, allowing large memories to be fast. Such memories can become important components of future computers.
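    Kanerva's sparse distributed memory can be made concrete in a few lines: fixed random "hard locations" are activated when their address lies within a Hamming radius of the query address, writes increment or decrement per-bit counters at the activated locations, and reads take a majority vote over those counters. The class below is a minimal toy sketch; the sizes, radius, and names are illustrative choices, not parameters from the paper.

```python
import numpy as np

class SparseDistributedMemory:
    """Toy sparse distributed memory with Hamming-radius activation."""

    def __init__(self, n_bits=64, n_locations=500, radius=32, seed=0):
        rng = np.random.default_rng(seed)
        # Fixed random binary addresses of the hard locations.
        self.addresses = rng.integers(0, 2, size=(n_locations, n_bits))
        # One signed counter per bit per location.
        self.counters = np.zeros((n_locations, n_bits), dtype=np.int32)
        self.radius = radius

    def _active(self, address):
        # Locations whose address is within the Hamming radius of the query.
        return np.count_nonzero(self.addresses != address, axis=1) <= self.radius

    def write(self, address, data):
        act = self._active(address)
        # Increment counters where the data bit is 1, decrement where it is 0.
        self.counters[act] += np.where(data == 1, 1, -1)

    def read(self, address):
        act = self._active(address)
        sums = self.counters[act].sum(axis=0)
        return (sums > 0).astype(int)  # majority vote per bit
```

Storing a pattern at its own address (autoassociative use) lets a slightly corrupted address still activate mostly overlapping locations and recover the clean pattern, which is the noise tolerance described in point (3) above.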

  9. Distributed memory parallel Markov random fields using graph partitioning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heinemann, C.; Perciano, T.; Ushizima, D.

    Markov random field (MRF)-based algorithms have attracted a large amount of interest in image analysis due to their ability to exploit contextual information about data. Image data generated by experimental facilities, though, continues to grow larger and more complex, making it more difficult to analyze in a reasonable amount of time. Applying image processing algorithms to large datasets requires alternative approaches to circumvent performance problems. Aiming to provide scientists with a new tool to recover valuable information from such datasets, we developed a general purpose distributed-memory parallel MRF-based image analysis framework (MPI-PMRF). MPI-PMRF overcomes performance and memory limitations by distributing data and computations across processors. The proposed approach was successfully tested with synthetic and experimental datasets. Additionally, the performance of the MPI-PMRF framework is analyzed through a detailed scalability study. We show that a performance increase is obtained while maintaining an accuracy of the segmentation results higher than 98%. The contributions of this paper are: (a) development of a distributed-memory MRF framework; (b) measurement of the performance increase of the proposed approach; (c) verification of segmentation accuracy in both synthetic and experimental, real-world datasets.

  10. Quantum teleportation between remote atomic-ensemble quantum memories.

    PubMed

    Bao, Xiao-Hui; Xu, Xiao-Fan; Li, Che-Ming; Yuan, Zhen-Sheng; Lu, Chao-Yang; Pan, Jian-Wei

    2012-12-11

    Quantum teleportation and quantum memory are two crucial elements for large-scale quantum networks. With the help of prior distributed entanglement as a "quantum channel," quantum teleportation provides an intriguing means to faithfully transfer quantum states among distant locations without actual transmission of the physical carriers [Bennett CH, et al. (1993) Phys Rev Lett 70(13):1895-1899]. Quantum memory enables controlled storage and retrieval of fast-flying photonic quantum bits with stationary matter systems, which is essential to achieve the scalability required for large-scale quantum networks. Combining these two capabilities, here we realize quantum teleportation between two remote atomic-ensemble quantum memory nodes, each composed of ~10^8 rubidium atoms and connected by a 150-m optical fiber. The spin wave state of one atomic ensemble is mapped to a propagating photon and subjected to Bell state measurements with another single photon that is entangled with the spin wave state of the other ensemble. Two-photon detection events herald the success of teleportation with an average fidelity of 88(7)%. Besides its fundamental interest as a teleportation between two remote macroscopic objects, our technique may be useful for quantum information transfer between different nodes in quantum networks and distributed quantum computing.

  11. An onboard data analysis method to track the seasonal polar caps on Mars

    NASA Technical Reports Server (NTRS)

    Wagstaff, Kiri L.; Castano, Rebecca; Chien, Steve; Ivanov, Anton B.; Pounders, Erik; Titus, Timothy N.

    2005-01-01

    In this paper, we evaluate our method on uncalibrated THEMIS data and find 1) agreement with manual cap edge identifications to within 28.2 km, and 2) high accuracy even with a reduced context window, yielding large reductions in memory requirements.

  12. Scaling Irregular Applications through Data Aggregation and Software Multithreading

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morari, Alessandro; Tumeo, Antonino; Chavarría-Miranda, Daniel

    Bioinformatics, data analytics, semantic databases, and knowledge discovery are emerging high performance application areas that exploit dynamic, linked data structures such as graphs, unbalanced trees or unstructured grids. These data structures are usually very large, requiring significantly more memory than available on single shared-memory systems. Additionally, these data structures are difficult to partition on distributed-memory systems. They also present poor spatial and temporal locality, thus generating unpredictable memory and network accesses. The Partitioned Global Address Space (PGAS) programming model seems suitable for these applications, because it allows using a shared-memory abstraction across distributed-memory clusters. However, current PGAS languages and libraries are built to target regular remote data accesses and block transfers. Furthermore, they usually rely on the Single Program Multiple Data (SPMD) parallel control model, which is not well suited to the fine-grained, dynamic and unbalanced parallelism of irregular applications. In this paper we present GMT (Global Memory and Threading), a custom runtime library that enables efficient execution of irregular applications on commodity clusters. GMT integrates a PGAS data substrate with simple fork/join parallelism and provides automatic load balancing on a per-node basis. It implements multi-level aggregation and lightweight multithreading to maximize memory and network bandwidth with fine-grained data accesses and to tolerate long data access latencies. A key innovation in the GMT runtime is its thread specialization (workers, helpers and communication threads), which realizes the overall functionality. We compare our approach with other PGAS models, such as UPC running on GASNet, and with hand-optimized MPI code on a set of typical large-scale irregular applications, demonstrating speedups of an order of magnitude.
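    The aggregation idea can be sketched as per-destination buffering: fine-grained remote requests bound for the same node are coalesced and sent as one block once a buffer fills, trading a little latency for bandwidth. The class below is a toy model only; the buffer size and the `send` hook are illustrative assumptions, not GMT's actual interface.

```python
class Aggregator:
    """Toy per-destination request aggregator for fine-grained accesses."""

    def __init__(self, n_nodes, block_size, send):
        self.buffers = [[] for _ in range(n_nodes)]
        self.block_size = block_size
        self.send = send  # called as send(dest, list_of_requests)

    def put(self, dest, request):
        buf = self.buffers[dest]
        buf.append(request)
        # Flush automatically once a full block has accumulated.
        if len(buf) >= self.block_size:
            self.flush(dest)

    def flush(self, dest):
        # Send whatever is buffered for this destination, if anything.
        if self.buffers[dest]:
            self.send(dest, self.buffers[dest])
            self.buffers[dest] = []
```

Five one-word requests to the same node become three network messages instead of five; with realistic block sizes (hundreds of requests) the message count drops by orders of magnitude, which is where the reported speedups come from.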

  13. Effects of hippocampal lesions on the monkey's ability to learn large sets of object-place associations.

    PubMed

    Belcher, Annabelle M; Harrington, Rebecca A; Malkova, Ludise; Mishkin, Mortimer

    2006-01-01

    Earlier studies found that recognition memory for object-place associations was impaired in patients with relatively selective hippocampal damage (Vargha-Khadem et al., Science 1997; 277:376-380), but was unaffected after selective hippocampal lesions in monkeys (Malkova and Mishkin, J Neurosci 2003; 23:1956-1965). A potentially important methodological difference between the two studies is that the patients were required to remember a set of 20 object-place associations for several minutes, whereas the monkeys had to remember only two such associations at a time, and only for a few seconds. To approximate more closely the task given to the patients, we trained monkeys on several successive sets of 10 object-place pairs each, with each set requiring learning across days. Despite the increased associative memory demands, monkeys given hippocampal lesions were unimpaired relative to their unoperated controls, suggesting that differences other than set size and memory duration underlie the different outcomes in the human and animal studies. (c) 2005 Wiley-Liss, Inc.

  14. Power and Performance Trade-offs for Space Time Adaptive Processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gawande, Nitin A.; Manzano Franco, Joseph B.; Tumeo, Antonino

    Computational efficiency – performance relative to power or energy – is one of the most important concerns when designing RADAR processing systems. This paper analyzes power and performance trade-offs for a typical Space Time Adaptive Processing (STAP) application. We study STAP implementations for CUDA and OpenMP on two computationally efficient architectures, Intel Haswell Core I7-4770TE and NVIDIA Kayla with a GK208 GPU. We analyze the power and performance of STAP’s computationally intensive kernels across the two hardware testbeds. We also show the impact and trade-offs of GPU optimization techniques. We show that data parallelism can be exploited for efficient implementation on the Haswell CPU architecture. The GPU architecture is able to process large data sets without an increase in power requirements. The use of shared memory has a significant impact on the power requirement for the GPU. A balance between the use of shared memory and main memory access leads to improved performance in a typical STAP application.

  15. LRP8-Reelin-regulated Neuronal (LRN) Enhancer Signature Underlying Learning and Memory Formation

    PubMed Central

    Telese, Francesca; Ma, Qi; Perez, Patricia Montilla; Notani, Dimple; Oh, Soohwan; Li, Wenbo; Comoletti, Davide; Ohgi, Kenneth A.; Taylor, Havilah; Rosenfeld, Michael G.

    2015-01-01

    One of the exceptional properties of the brain is its ability to acquire new knowledge through learning and to store that information through memory. The epigenetic mechanisms linking changes in neuronal transcriptional programs to behavioral plasticity remain largely unknown. Here, we identify the epigenetic signature of the neuronal enhancers required for transcriptional regulation of synaptic plasticity genes during memory formation, linking this to Reelin signaling. The binding of Reelin to its receptor, LRP8, triggers activation of this cohort of LRP8-Reelin-regulated-Neuronal (LRN) enhancers that serve as the ultimate convergence point of a novel synapse-to-nucleus pathway. Reelin simultaneously regulates NMDA-receptor transmission, which reciprocally permits the required, γ-secretase-dependent cleavage of LRP8, revealing an unprecedented role for its intracellular domain in the regulation of synaptically generated signals. These results uncover an in vivo enhancer code serving as a critical molecular component of cognition and relevant to psychiatric disorders linked to defects in Reelin signaling. PMID:25892301

  16. Distinct roles of visual, parietal, and frontal motor cortices in memory-guided sensorimotor decisions.

    PubMed

    Goard, Michael J; Pho, Gerald N; Woodson, Jonathan; Sur, Mriganka

    2016-08-04

    Mapping specific sensory features to future motor actions is a crucial capability of mammalian nervous systems. We investigated the role of visual (V1), posterior parietal (PPC), and frontal motor (fMC) cortices for sensorimotor mapping in mice during performance of a memory-guided visual discrimination task. Large-scale calcium imaging revealed that V1, PPC, and fMC neurons exhibited heterogeneous responses spanning all task epochs (stimulus, delay, response). Population analyses demonstrated unique encoding of stimulus identity and behavioral choice information across regions, with V1 encoding stimulus, fMC encoding choice even early in the trial, and PPC multiplexing the two variables. Optogenetic inhibition during behavior revealed that all regions were necessary during the stimulus epoch, but only fMC was required during the delay and response epochs. Stimulus identity can thus be rapidly transformed into behavioral choice, requiring V1, PPC, and fMC during the transformation period, but only fMC for maintaining the choice in memory prior to execution.

  17. Large and reversible inverse magnetocaloric effect in Ni48.1Co2.9Mn35.0In14.0 metamagnetic shape memory microwire

    NASA Astrophysics Data System (ADS)

    Qu, Y. H.; Cong, D. Y.; Chen, Z.; Gui, W. Y.; Sun, X. M.; Li, S. H.; Ma, L.; Wang, Y. D.

    2017-11-01

    High-performance magnetocaloric materials should have a large reversible magnetocaloric effect and good heat exchange capability. Here, we developed a Ni48.1Co2.9Mn35.0In14.0 metamagnetic shape memory microwire with a large and reversible inverse magnetocaloric effect. As compared to the bulk counterpart, the microwire shows a better combination of magnetostructural transformation parameters (magnetization difference across transformation ΔM, transformation entropy change ΔStr, thermal hysteresis ΔThys, and transformation interval ΔTint) and thus greatly reduced critical field required for complete and reversible magnetic-field-induced transformation. A strong and reversible metamagnetic transition occurred in the microwire, which facilitates the achievement of large reversible magnetoresponsive effects. Consequently, a large and reversible magnetic-field-induced entropy change ΔSm of 12.8 J kg-1 K-1 under 5 T was achieved in the microwire, which is the highest value reported heretofore in Ni-Mn-based magnetic shape memory wires. Furthermore, since microwires have a high surface/volume ratio, they exhibit very good heat exchange capability. The present Ni48.1Co2.9Mn35.0In14.0 microwire shows great potential for magnetic refrigeration. This study may stimulate further development of high-performance magnetocaloric wires for high-efficiency and environmentally friendly solid-state cooling.

  18. Bim controls IL-15 availability and limits engagement of multiple BH3-only proteins

    PubMed Central

    Kurtulus, S; Sholl, A; Toe, J; Tripathi, P; Raynor, J; Li, K-P; Pellegrini, M; Hildeman, D A

    2015-01-01

    During the effector CD8+ T-cell response, transcriptional differentiation programs are engaged that promote effector T cells with varying memory potential. Although these differentiation programs have been used to explain which cells die as effectors and which cells survive and become memory cells, it is unclear if the lack of cell death enhances memory. Here, we investigated effector CD8+ T-cell fate in mice whose death program has been largely disabled because of the loss of Bim. Interestingly, the absence of Bim resulted in a significant enhancement of effector CD8+ T cells with more memory potential. Bim-driven control of memory T-cell development required T-cell-specific, but not dendritic cell-specific, expression of Bim. Both total and T-cell-specific loss of Bim promoted skewing toward memory precursors, by enhancing the survival of memory precursors, and limiting the availability of IL-15. Decreased IL-15 availability in Bim-deficient mice facilitated the elimination of cells with less memory potential via the additional pro-apoptotic molecules Noxa and Puma. Combined, these data show that Bim controls memory development by limiting the survival of pre-memory effector cells. Further, by preventing the consumption of IL-15, Bim limits the role of Noxa and Puma in causing the death of effector cells with less memory potential. PMID:25124553

  19. Bim controls IL-15 availability and limits engagement of multiple BH3-only proteins.

    PubMed

    Kurtulus, S; Sholl, A; Toe, J; Tripathi, P; Raynor, J; Li, K-P; Pellegrini, M; Hildeman, D A

    2015-01-01

    During the effector CD8+ T-cell response, transcriptional differentiation programs are engaged that promote effector T cells with varying memory potential. Although these differentiation programs have been used to explain which cells die as effectors and which cells survive and become memory cells, it is unclear if the lack of cell death enhances memory. Here, we investigated effector CD8+ T-cell fate in mice whose death program has been largely disabled because of the loss of Bim. Interestingly, the absence of Bim resulted in a significant enhancement of effector CD8+ T cells with more memory potential. Bim-driven control of memory T-cell development required T-cell-specific, but not dendritic cell-specific, expression of Bim. Both total and T-cell-specific loss of Bim promoted skewing toward memory precursors, by enhancing the survival of memory precursors, and limiting the availability of IL-15. Decreased IL-15 availability in Bim-deficient mice facilitated the elimination of cells with less memory potential via the additional pro-apoptotic molecules Noxa and Puma. Combined, these data show that Bim controls memory development by limiting the survival of pre-memory effector cells. Further, by preventing the consumption of IL-15, Bim limits the role of Noxa and Puma in causing the death of effector cells with less memory potential.

  20. Episodic Memories and Their Relevance for Psychoactive Drug Use and Addiction

    PubMed Central

    Müller, Christian P.

    2013-01-01

    The majority of adult people in western societies regularly consume psychoactive drugs. While this consumption is integrated in everyday life activities and controlled in most consumers, it may escalate and result in drug addiction. Non-addicted drug use requires the systematic establishment of highly organized behaviors, such as drug-seeking and -taking. While a significant role for classical and instrumental learning processes is well established in drug use and abuse, declarative drug memories have largely been neglected in research. Episodic memories are an important part of the declarative memories. Here a role of episodic drug memories in the establishment of non-addicted drug use and its transition to addiction is suggested. In relation to psychoactive drug consumption, episodic drug memories are formed when a person prepares for consumption, when the drug is consumed and, most important, when acute effects, withdrawal, craving, and relapse are experienced. Episodic drug memories are one-trial memories with emotional components that can be much stronger than “normal” episodic memories. Their establishment coincides with drug-induced neuronal activation and plasticity. These memories may be highly extinction resistant and influence psychoactive drug consumption, in particular during initial establishment and at the transition to “drug instrumentalization.” In that, understanding how addictive drugs interact with episodic memory circuits in the brain may provide crucial information for how drug use and addiction are established. PMID:23734106

  1. Episodic memories and their relevance for psychoactive drug use and addiction.

    PubMed

    Müller, Christian P

    2013-01-01

    The majority of adult people in western societies regularly consume psychoactive drugs. While this consumption is integrated in everyday life activities and controlled in most consumers, it may escalate and result in drug addiction. Non-addicted drug use requires the systematic establishment of highly organized behaviors, such as drug-seeking and -taking. While a significant role for classical and instrumental learning processes is well established in drug use and abuse, declarative drug memories have largely been neglected in research. Episodic memories are an important part of the declarative memories. Here a role of episodic drug memories in the establishment of non-addicted drug use and its transition to addiction is suggested. In relation to psychoactive drug consumption, episodic drug memories are formed when a person prepares for consumption, when the drug is consumed and, most important, when acute effects, withdrawal, craving, and relapse are experienced. Episodic drug memories are one-trial memories with emotional components that can be much stronger than "normal" episodic memories. Their establishment coincides with drug-induced neuronal activation and plasticity. These memories may be highly extinction resistant and influence psychoactive drug consumption, in particular during initial establishment and at the transition to "drug instrumentalization." In that, understanding how addictive drugs interact with episodic memory circuits in the brain may provide crucial information for how drug use and addiction are established.

  2. Insights from neuropsychology: pinpointing the role of the posterior parietal cortex in episodic and working memory

    PubMed Central

    Berryhill, Marian E.

    2012-01-01

    The role of posterior parietal cortex (PPC) in various forms of memory is a current topic of interest in the broader field of cognitive neuroscience. This large cortical region has been linked with a wide range of mnemonic functions affecting each stage of memory processing: encoding, maintenance, and retrieval. Yet, the precise role of the PPC in memory remains mysterious and controversial. Progress in understanding PPC function will require researchers to incorporate findings in a convergent manner from multiple experimental techniques rather than emphasizing a particular type of data. To facilitate this process, here, we review findings from the human neuropsychological research and examine the consequences to memory following PPC damage. Recent patient-based research findings have investigated two typically disconnected fields: working memory (WM) and episodic memory. The findings from patient participants with unilateral and bilateral PPC lesions performing diverse experimental paradigms are summarized. These findings are then related to findings from other techniques including neurostimulation (TMS and tDCS) and the influential and more abundant functional neuroimaging literature. We then review the strengths and weaknesses of hypotheses proposed to account for PPC function in these forms of memory. Finally, we address what missing evidence is needed to clarify the role(s) of the PPC in memory. PMID:22701406

  3. Visual Memory in Post-Anterior Right Temporal Lobectomy Patients and Adult Normative Data for the Brown Location Test (BLT)

    PubMed Central

    Brown, Franklin C.; Tuttle, Erin; Westerveld, Michael; Ferraro, F. Richard; Chmielowiec, Teresa; Vandemore, Michelle; Gibson-Beverly, Gina; Bemus, Lisa; Roth, Robert M.; Blumenfeld, Hal; Spencer, Dennis D.; Spencer, Susan S

    2010-01-01

    Several large and meta-analytic studies have failed to support a consistent relationship between visual or “nonverbal” memory deficits and right mesial temporal lobe changes. However, the Brown Location Test (BLT) is a recently developed dot location learning and memory test that uses a nonsymmetrical array and provides control over many of the confounding variables (e.g., verbal influence and drawing requirements) inherent in other measures of visual memory. In the present investigation, we evaluated the clinical utility of the BLT in patients who had undergone left or right anterior mesial temporal lobectomies. We also provide adult normative data of 298 healthy adults in order to provide standardized scores. Results revealed significantly worse performance on the BLT in the right as compared to left lobectomy group and the healthy adult normative sample. The present findings support a role for the right anterior-mesial temporal lobe in dot location learning and memory. PMID:20056493

  4. An Implicit Algorithm for the Numerical Simulation of Shape-Memory Alloys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Becker, R; Stolken, J; Jannetti, C

    Shape-memory alloys (SMA) have the potential to be used in a variety of interesting applications due to their unique properties of pseudoelasticity and the shape-memory effect. However, in order to design SMA devices efficiently, a physics-based constitutive model is required to accurately simulate the behavior of shape-memory alloys. The scope of this work is to extend the numerical capabilities of the SMA constitutive model developed by Jannetti et al. (2003) to handle large-scale polycrystalline simulations. The constitutive model is implemented within the finite-element software ABAQUS/Standard using a user-defined material subroutine, or UMAT. To improve the efficiency of the numerical simulations, so that polycrystalline specimens of shape-memory alloys can be modeled, a fully implicit algorithm has been implemented to integrate the constitutive equations. Using an implicit integration scheme increases the efficiency of the UMAT over the previously implemented explicit integration method by a factor of more than 100 for single-crystal simulations.
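    The payoff of implicit integration described above can be sketched with a toy one-dimensional example. This is not the Jannetti et al. constitutive model; the stiff linear rate equation and step size below are invented purely to illustrate why a backward-Euler update solved by Newton iteration tolerates time steps that break the explicit method:

```python
# Toy illustration of implicit (backward-Euler) integration of a stiff
# evolution law, the kind of update used when integrating constitutive
# equations. The model dx/dt = -k*x with large k is NOT the SMA model
# from the report; it only shows the stability difference.

def explicit_step(x, k, dt):
    # forward Euler: x_{n+1} = x_n + dt * f(x_n)
    return x + dt * (-k * x)

def implicit_step(x, k, dt, tol=1e-12, max_iter=50):
    # backward Euler: solve x_{n+1} = x_n + dt * f(x_{n+1}) by Newton iteration
    y = x  # initial guess
    for _ in range(max_iter):
        r = y - x + dt * k * y   # residual g(y) = y - x - dt*f(y)
        dr = 1.0 + dt * k        # derivative g'(y)
        y_new = y - r / dr
        if abs(y_new - y) < tol:
            return y_new
        y = y_new
    return y

k, dt = 1000.0, 0.01                # dt*k = 10: far beyond the explicit stability limit
x_exp = explicit_step(1.0, k, dt)   # overshoots wildly (negative, large magnitude)
x_imp = implicit_step(1.0, k, dt)   # stays bounded and decays toward 0
```

    The extra cost per step (the Newton solve) is repaid by taking far fewer, much larger steps, which is the source of the efficiency gain reported above.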

  5. Progress towards broadband Raman quantum memory in Bose-Einstein condensates

    NASA Astrophysics Data System (ADS)

    Saglamyurek, Erhan; Hrushevskyi, Taras; Smith, Benjamin; Leblanc, Lindsay

    2017-04-01

    Optical quantum memories are building blocks for quantum information technologies. Efficient and long-lived storage in combination with high-speed (broadband) operation are key features required for practical applications. While the realization has been a great challenge, Raman memory in Bose-Einstein condensates (BECs) is a promising approach, due to negligible decoherence from diffusion and collisions that leads to seconds-scale memory times, high efficiency due to large atomic density, the possibility for atom-chip integration with micro photonics, and the suitability of the far off-resonant Raman approach with storage of broadband photons (over GHz) [5]. Here we report our progress towards Raman memory in a BEC. We describe our apparatus recently built for producing BEC with 87Rb atoms, and present the observation of nearly pure BEC with 5x105 atoms at 40 nK. After showing our initial characterizations, we discuss the suitability of our system for Raman-based light storage in our BEC.

  6. Distributed Saturation

    NASA Technical Reports Server (NTRS)

    Chung, Ming-Ying; Ciardo, Gianfranco; Siminiceanu, Radu I.

    2007-01-01

    The Saturation algorithm for symbolic state-space generation has been a recent breakthrough in the exhaustive verification of complex systems, in particular globally-asynchronous/locally-synchronous systems. The algorithm uses a very compact Multiway Decision Diagram (MDD) encoding for states and the fastest symbolic exploration algorithm to date. The distributed version of Saturation uses the overall memory available on a network of workstations (NOW) to efficiently spread the memory load during the highly irregular exploration. A crucial factor in limiting the memory consumption during symbolic state-space generation is the ability to perform garbage collection to free up the memory occupied by dead nodes. However, garbage collection over a NOW requires nontrivial communication overhead. In addition, operation cache policies become critical when analyzing large-scale systems using the symbolic approach. In this technical report, we develop a garbage collection scheme and several operation cache policies to help solve extremely complex systems. Experiments show that our schemes improve the performance of the original distributed implementation, SmArTNow, in terms of time and memory efficiency.
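    The dead-node reclamation described above can be illustrated with a minimal reference-counting sketch. The Node and UniqueTable classes below are invented for illustration and are not SmArT's actual data structures; the distributed (NOW) aspect and operation caches are not modeled:

```python
# Minimal sketch of reference-counting garbage collection for decision-
# diagram nodes: a node becomes "dead" (reclaimable) when no external
# handle and no surviving parent refers to it.

class Node:
    def __init__(self, children):
        self.children = children   # tuple of child Nodes
        self.refs = 0              # parent + external references

class UniqueTable:
    def __init__(self):
        self.nodes = set()

    def make(self, children):
        node = Node(children)
        for c in children:
            c.refs += 1            # each parent edge holds a reference
        self.nodes.add(node)
        return node

    def release(self, node):
        # Drop one reference; reclaim the node and propagate to children
        # once it is dead, freeing memory during state-space generation.
        node.refs -= 1
        if node.refs <= 0 and node in self.nodes:
            self.nodes.discard(node)
            for c in node.children:
                self.release(c)

table = UniqueTable()
leaf = table.make(())
leaf.refs += 1                     # external reference keeps the leaf alive
root = table.make((leaf, leaf))
root.refs += 1                     # external reference held by the caller
table.release(root)                # root dies; leaf survives via its own ref
```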

  7. Ultra-High Density Holographic Memory Module with Solid-State Architecture

    NASA Technical Reports Server (NTRS)

    Markov, Vladimir B.

    2000-01-01

    NASA's terrestrial, space, and deep-space missions require technology that allows storing, retrieving, and processing a large volume of information. Holographic memory offers high-density data storage with parallel access and high throughput. Several methods exist for data multiplexing based on the fundamental principles of volume hologram selectivity. We recently demonstrated that spatial (amplitude-phase) encoding of the reference wave (SERW) looks promising as a way to increase the storage density. The SERW hologram offers a method of selectivity other than the traditional methods, such as spatial de-correlation between the recorded and reconstruction fields. In this report we present the experimental results of the SERW-hologram memory module with solid-state architecture, which is of particular interest for space operations.

  8. Exploiting short-term memory in soft body dynamics as a computational resource

    PubMed Central

    Nakajima, K.; Li, T.; Hauser, H.; Pfeifer, R.

    2014-01-01

    Soft materials are not only highly deformable, but they also possess rich and diverse body dynamics. Soft body dynamics exhibit a variety of properties, including nonlinearity, elasticity and potentially infinitely many degrees of freedom. Here, we demonstrate that such soft body dynamics can be employed to conduct certain types of computation. Using body dynamics generated from a soft silicone arm, we show that they can be exploited to emulate functions that require memory and to embed robust closed-loop control into the arm. Our results suggest that soft body dynamics have a short-term memory and can serve as a computational resource. This finding paves the way towards exploiting passive body dynamics for control of a large class of underactuated systems. PMID:25185579

  9. Role of Prefrontal Persistent Activity in Working Memory

    PubMed Central

    Riley, Mitchell R.; Constantinidis, Christos

    2016-01-01

    The prefrontal cortex is activated during working memory, as evidenced by fMRI results in human studies and neurophysiological recordings in animal models. Persistent activity during the delay period of working memory tasks, after the offset of stimuli that subjects are required to remember, has traditionally been thought of as the neural correlate of working memory. In the last few years several findings have cast doubt on the role of this activity. By some accounts, activity in other brain areas, such as the primary visual and posterior parietal cortex, is a better predictor of information maintained in visual working memory and working memory performance; dynamic patterns of activity may convey information without requiring persistent activity at all; and prefrontal neurons may be ill-suited to represent non-spatial information about the features and identity of remembered stimuli. Alternative interpretations about the role of the prefrontal cortex have thus been suggested, such as that it provides a top-down control of information represented in other brain areas, rather than maintaining a working memory trace itself. Here we review evidence for and against the role of prefrontal persistent activity, with a focus on visual neurophysiology. We show that persistent activity predicts behavioral parameters precisely in working memory tasks. We illustrate that prefrontal cortex represents features of stimuli other than their spatial location, and that this information is largely absent from early cortical areas during working memory. We examine memory models not dependent on persistent activity, and conclude that each of those models could mediate only a limited range of memory-dependent behaviors. We review activity decoded from brain areas other than the prefrontal cortex during working memory and demonstrate that these areas alone cannot mediate working memory maintenance, particularly in the presence of distractors. We finally discuss the discrepancy between BOLD activation and spiking activity findings, and point out that fMRI methods do not currently have the spatial resolution necessary to decode information within the prefrontal cortex, which is likely organized at the micrometer scale. Therefore, we make the case that prefrontal persistent activity is both necessary and sufficient for the maintenance of information in working memory. PMID:26778980

  10. Flexible language constructs for large parallel programs

    NASA Technical Reports Server (NTRS)

    Rosing, Matthew; Schnabel, Robert

    1993-01-01

    The goal of the research described is to develop flexible language constructs for writing large data parallel numerical programs for distributed memory (MIMD) multiprocessors. Previously, several models have been developed to support synchronization and communication. Models for global synchronization include SIMD (Single Instruction Multiple Data), SPMD (Single Program Multiple Data), and sequential programs annotated with data distribution statements. The two primary models for communication include implicit communication based on shared memory and explicit communication based on messages. None of these models by themselves seem sufficient to permit the natural and efficient expression of the variety of algorithms that occur in large scientific computations. An overview of a new language that combines many of these programming models in a clean manner is given. This is done in a modular fashion such that different models can be combined to support large programs. Within a module, the selection of a model depends on the algorithm and its efficiency requirements. An overview of the language and discussion of some of the critical implementation details is given.

  11. Real Time Large Memory Optical Pattern Recognition.

    DTIC Science & Technology

    1984-06-01

    Technical Report RR-84-9: Real Time Large Memory Optical Pattern Recognition. Don A. Gregory, Research Directorate, US Army Missile Laboratory, US Army Missile Command, Redstone Arsenal, AL, June 1984. (No abstract available; the record consists only of scanned cover-page text.)

  12. A Scalable Multicore Architecture With Heterogeneous Memory Structures for Dynamic Neuromorphic Asynchronous Processors (DYNAPs).

    PubMed

    Moradi, Saber; Qiao, Ning; Stefanini, Fabio; Indiveri, Giacomo

    2018-02-01

    Neuromorphic computing systems comprise networks of neurons that use asynchronous events for both computation and communication. This type of representation offers several advantages in terms of bandwidth and power consumption in neuromorphic electronic systems. However, managing the traffic of asynchronous events in large scale systems is a daunting task, both in terms of circuit complexity and memory requirements. Here, we present a novel routing methodology that employs both hierarchical and mesh routing strategies and combines heterogeneous memory structures for minimizing both memory requirements and latency, while maximizing programming flexibility to support a wide range of event-based neural network architectures, through parameter configuration. We validated the proposed scheme in a prototype multicore neuromorphic processor chip that employs hybrid analog/digital circuits for emulating synapse and neuron dynamics together with asynchronous digital circuits for managing the address-event traffic. We present a theoretical analysis of the proposed connectivity scheme, describe the methods and circuits used to implement such scheme, and characterize the prototype chip. Finally, we demonstrate the use of the neuromorphic processor with a convolutional neural network for the real-time classification of visual symbols being flashed to a dynamic vision sensor (DVS) at high speed.

  13. On-chip frame memory reduction using a high-compression-ratio codec in the overdrives of liquid-crystal displays

    NASA Astrophysics Data System (ADS)

    Wang, Jun; Min, Kyeong-Yuk; Chong, Jong-Wha

    2010-11-01

    Overdrive is commonly used to reduce the liquid-crystal response time and motion blur in liquid-crystal displays (LCDs). However, overdrive requires a large frame memory in order to store the previous frame for reference. In this paper, a high-compression-ratio codec is presented to compress the image data stored in the on-chip frame memory so that only 1 Mbit of on-chip memory is required in the LCD overdrives of mobile devices. The proposed algorithm further compresses the color bitmaps and representative values (RVs) resulting from block truncation coding (BTC). The color bitmaps are represented by a luminance bitmap, which is further reduced and reconstructed using median filter interpolation in the decoder, while the RVs are compressed using adaptive quantization coding (AQC). Interpolation and AQC can each provide three-level compression, which leads to 16 combinations. Using a rate-distortion analysis, we select the three optimal schemes to compress the image data for video graphics array (VGA), wide-VGA LCD, and standard-definition TV applications. Our simulation results demonstrate that the proposed schemes outperform interpolation BTC both in PSNR (by 1.479 to 2.205 dB) and in subjective visual quality.

  14. Inductive reasoning and implicit memory: evidence from intact and impaired memory systems.

    PubMed

    Girelli, Luisa; Semenza, Carlo; Delazer, Margarete

    2004-01-01

    In this study, we modified a classic problem-solving task, number series completion, in order to explore the contribution of implicit memory to inductive reasoning. Participants were required to complete number series sharing the same underlying algorithm (e.g., +2), differing in both constituent elements (e.g., 2 4 6 8 versus 5 7 9 11) and correct answers (e.g., 10 versus 13). In Experiment 1, reliable priming effects emerged, whether primes and targets were separated by four or ten fillers. Experiment 2 provided direct evidence that the observed facilitation arises at central stages of problem solving, namely the identification of the algorithm and its subsequent extrapolation. The observation of analogous priming effects in a severely amnesic patient strongly supports the hypothesis that the facilitation in number series completion was largely determined by implicit memory processes. These findings demonstrate that the influence of implicit processes extends to higher-level cognitive domains such as inductive reasoning.
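    The two central stages the abstract identifies, inferring the algorithm and extrapolating it, are easy to model in code. The sketch below handles only constant-step arithmetic series like the "+2" examples; it makes no claim about the actual stimuli used in the study:

```python
# Toy model of number series completion: infer the constant additive
# step shared by consecutive elements, then extrapolate one element.

def complete_series(series):
    diffs = {series[i + 1] - series[i] for i in range(len(series) - 1)}
    if len(diffs) != 1:
        raise ValueError("not a constant-step series")
    step = diffs.pop()           # the identified algorithm, e.g. +2
    return series[-1] + step     # its extrapolation

ans1 = complete_series([2, 4, 6, 8])    # same +2 rule...
ans2 = complete_series([5, 7, 9, 11])   # ...over different elements
```

    Note that the two series share the algorithm but not the elements or the answer, which is exactly the separation the priming manipulation relies on.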

  15. Memory Retrieval in Mice and Men

    PubMed Central

    Ben-Yakov, Aya; Dudai, Yadin; Mayford, Mark R.

    2015-01-01

    Retrieval, the use of learned information, was until recently mostly terra incognita in the neurobiology of memory, owing to shortage of research methods with the spatiotemporal resolution required to identify and dissect fast reactivation or reconstruction of complex memories in the mammalian brain. The development of novel paradigms, model systems, and new tools in molecular genetics, electrophysiology, optogenetics, in situ microscopy, and functional imaging, have contributed markedly in recent years to our ability to investigate brain mechanisms of retrieval. We review selected developments in the study of explicit retrieval in the rodent and human brain. The picture that emerges is that retrieval involves coordinated fast interplay of sparse and distributed corticohippocampal and neocortical networks that may permit permutational binding of representational elements to yield specific representations. These representations are driven largely by the activity patterns shaped during encoding, but are malleable, subject to the influence of time and interaction of the existing memory with novel information. PMID:26438596

  16. Optimization of memory use of fragment extension-based protein-ligand docking with an original fast minimum cost flow algorithm.

    PubMed

    Yanagisawa, Keisuke; Komine, Shunta; Kubota, Rikuto; Ohue, Masahito; Akiyama, Yutaka

    2018-06-01

    The need to accelerate large-scale protein-ligand docking in virtual screening against a huge compound database led researchers to propose a strategy that entails memorizing the evaluation result of the partial structure of a compound and reusing it to evaluate other compounds. However, the previous method required frequent disk accesses, resulting in insufficient acceleration. Thus, more efficient memory usage can be expected to lead to further acceleration, and optimal memory usage could be achieved by solving the minimum cost flow problem. In this research, we propose a fast algorithm for the minimum cost flow problem utilizing the characteristics of the graph generated for this problem as constraints. The proposed algorithm, which optimized memory usage, was approximately seven times faster compared to existing minimum cost flow algorithms. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
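    The memorize-and-reuse strategy being accelerated can be sketched as fragment-level memoization. The scoring function below is a stand-in, not a docking score, and the min-cost-flow scheduling of which cached results to keep in limited memory is not shown:

```python
# Sketch of partial-structure memoization: a compound's score is the sum
# of its fragments' scores, and a fragment evaluated for one compound is
# reused (not re-evaluated) for every later compound that contains it.

def make_scorer():
    cache = {}               # fragment -> score: the memory being managed
    stats = {"evals": 0}     # counts expensive fragment evaluations

    def score_fragment(frag):
        if frag not in cache:
            stats["evals"] += 1
            cache[frag] = sum(ord(c) for c in frag)   # stand-in for a real score
        return cache[frag]

    def score_compound(fragments):
        return sum(score_fragment(f) for f in fragments)

    return score_compound, stats

score_compound, stats = make_scorer()
score_compound(["C1=CC=CC=C1", "CCO"])   # two fresh fragment evaluations
score_compound(["CCO", "CCN"])           # "CCO" is reused from the cache
```

    With a bounded cache, deciding which fragment results to retain versus recompute is the optimization the paper casts as a minimum cost flow problem.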

  17. Investigation of Hafnium oxide/Copper resistive memory for advanced encryption applications

    NASA Astrophysics Data System (ADS)

    Briggs, Benjamin D.

    The Advanced Encryption Standard (AES) is a widely used encryption algorithm to protect data and communications in today's digital age. Modern AES CMOS implementations require large amounts of dedicated logic and must be tuned for either performance or power consumption. A high-throughput, low-power, and low-die-area AES implementation is required in the growing mobile sector. An emerging non-volatile memory device known as resistive memory (ReRAM) is a simple metal-insulator-metal capacitor device structure with the ability to switch between two stable resistance states. Currently, ReRAM is targeted as a non-volatile memory technology to eventually replace flash. Its advantages over flash include ease of fabrication, speed, and lower power consumption. In addition to memory, ReRAM can also be used in advanced logic implementations given its purely resistive behavior. The combination of a new non-volatile memory element, ReRAM, along with high-performance, low-power CMOS opens new avenues for logic implementations. This dissertation will cover the design and process implementation of a ReRAM-CMOS hybrid circuit, built using IBM's 10LPe process, for the improvement of hardware AES implementations. Furthermore, the device characteristics of ReRAM, specifically of the HfO2/Cu memory system, have not been fully correlated with its mechanisms of operation. Of particular interest to this work is the role of material properties such as the stoichiometry, crystallinity, and doping of the HfO2 layer, and their effect on the switching characteristics of resistive memory. Material properties were varied by a combination of atomic layer deposition and reactive sputtering of the HfO2 layer. Several studies will be discussed on how the above-mentioned material properties influence switching parameters and change the underlying physics of device operation.

  18. Importance of balanced architectures in the design of high-performance imaging systems

    NASA Astrophysics Data System (ADS)

    Sgro, Joseph A.; Stanton, Paul C.

    1999-03-01

    Imaging systems employed in demanding military and industrial applications, such as automatic target recognition and computer vision, typically require real-time high-performance computing resources. While high-performance computing systems have traditionally relied on proprietary architectures and custom components, recent advances in high-performance general-purpose microprocessor technology have produced an abundance of low-cost components suitable for use in high-performance computing systems. A common pitfall in the design of high-performance imaging systems, particularly systems employing scalable multiprocessor architectures, is the failure to balance computational and memory bandwidth. The performance of standard cluster designs, for example, in which several processors share a common memory bus, is typically constrained by memory bandwidth. The symptom characteristic of this problem is the failure of system performance to scale as more processors are added. The problem becomes exacerbated if I/O and memory functions share the same bus. The recent introduction of microprocessors with large internal caches and high-performance external memory interfaces makes it practical to design high-performance imaging systems with balanced computational and memory bandwidth. Real-world examples of such designs will be presented, along with a discussion of adapting algorithm design to best utilize available memory bandwidth.
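    The scaling pitfall described here can be captured with a back-of-the-envelope model: aggregate throughput is the lesser of total compute capability and what the shared memory bus can feed. The sketch below is illustrative only; the rates, function name, and parameters are hypothetical, not taken from the paper.

```python
def attainable_rate(n_procs, per_cpu_rate, bus_bandwidth, bytes_per_op):
    """Peak achievable operation rate for n_procs processors sharing one
    memory bus (all numbers are illustrative)."""
    compute_bound = n_procs * per_cpu_rate       # ops/s if never stalled
    memory_bound = bus_bandwidth / bytes_per_op  # ops/s the bus can feed
    return min(compute_bound, memory_bound)

# With a bus that can feed 200 Mops/s, performance stops scaling past 2 CPUs:
rates = [attainable_rate(p, 100e6, 800e6, 4) for p in (1, 2, 4, 8)]
```

    Once the memory bound is reached, adding processors leaves the rate flat, which is exactly the "failure to scale" symptom described above.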

  19. Low latency memory access and synchronization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blumrich, Matthias A.; Chen, Dong; Coteus, Paul W.

    A low latency memory system access is provided in association with a weakly-ordered multiprocessor system. Each processor in the multiprocessor shares resources, and each shared resource has an associated lock within a locking device that provides support for synchronization between the multiple processors in the multiprocessor and the orderly sharing of the resources. A processor only has permission to access a resource when it owns the lock associated with that resource, and an attempt by a processor to own a lock requires only a single load operation, rather than a traditional atomic load followed by store, such that the processor only performs a read operation and the hardware locking device performs the subsequent write operation rather than the processor. A simple prefetching scheme for non-contiguous data structures is also disclosed. A memory line is redefined so that, in addition to the normal physical memory data, every line includes a pointer that is large enough to point to any other line in the memory, wherein the pointers determine which memory line to prefetch rather than some other predictive algorithm. This enables hardware to effectively prefetch memory access patterns that are non-contiguous but repetitive.
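    As a rough illustration of the two mechanisms in this record, the sketch below models (a) a locking device where a single load both tests and acquires a lock, so the requesting processor performs only a read and the device records ownership itself, and (b) memory lines that carry a prefetch pointer. Class and method names are invented; this is not the patented hardware design.

```python
# Illustrative sketch (invented names, not the patented hardware).

class LockingDevice:
    def __init__(self, num_locks):
        self.owner = [None] * num_locks    # None means the lock is free

    def load_lock(self, lock_id, proc_id):
        """One load operation: returns True iff the lock is granted."""
        if self.owner[lock_id] is None:
            self.owner[lock_id] = proc_id  # the device, not the CPU, writes
        return self.owner[lock_id] == proc_id

    def release(self, lock_id, proc_id):
        if self.owner[lock_id] == proc_id:
            self.owner[lock_id] = None


# Pointer-directed prefetch: each memory line carries a pointer naming the
# next line to fetch, so non-contiguous but repetitive patterns can be
# prefetched without a predictive algorithm.
class Memory:
    def __init__(self, lines):
        self.lines = lines         # addr -> (data, pointer or None)
        self.prefetched = None

    def read(self, addr):
        data, nxt = self.lines[addr]
        self.prefetched = nxt      # hardware prefetches the pointed-to line
        return data
```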

  1. A Fast Method for the Segmentation of Synaptic Junctions and Mitochondria in Serial Electron Microscopic Images of the Brain.

    PubMed

    Márquez Neila, Pablo; Baumela, Luis; González-Soriano, Juncal; Rodríguez, Jose-Rodrigo; DeFelipe, Javier; Merchán-Pérez, Ángel

    2016-04-01

    Recent electron microscopy (EM) imaging techniques permit the automatic acquisition of a large number of serial sections from brain samples. Manual segmentation of these images is tedious, time-consuming and requires a high degree of user expertise. Therefore, there is considerable interest in developing automatic segmentation methods. However, currently available methods are computationally demanding in terms of computer time and memory usage, and to work properly many of them require image stacks to be isotropic, that is, voxels must have the same size in the X, Y and Z axes. We present a method that works with anisotropic voxels and that is computationally efficient, allowing the segmentation of large image stacks. Our approach involves anisotropy-aware regularization via conditional random field inference and surface smoothing techniques to improve the segmentation and visualization. We have focused on the segmentation of mitochondria and synaptic junctions in EM stacks from the cerebral cortex, and have compared the results to those obtained by other methods. Our method is faster than other methods with similar segmentation results. Our image regularization procedure introduces high-level knowledge about the structure of labels. We have also reduced memory requirements with the introduction of energy optimization in overlapping partitions, which permits the regularization of very large image stacks. Finally, the surface smoothing step improves the appearance of three-dimensional renderings of the segmented volumes.
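    The overlapping-partition idea used to bound memory can be sketched independently of the CRF details: process the stack one Z-partition at a time, with a halo of overlap providing context across boundaries, and keep only each partition's interior. The per-chunk `smooth_chunk` step below is a stand-in threshold, not the paper's regularizer, and all names are invented for the sketch.

```python
import numpy as np

def smooth_chunk(chunk):
    # Stand-in for the per-partition regularization step (the paper uses
    # anisotropy-aware conditional random field inference; a threshold
    # keeps this sketch self-contained).
    return (chunk > 0.5).astype(np.uint8)

def regularize_in_partitions(stack, chunk=4, overlap=1):
    """Process a large image stack in overlapping Z-partitions so that only
    one partition needs to be resident in memory at a time."""
    out = np.empty(stack.shape, dtype=np.uint8)
    z = 0
    while z < stack.shape[0]:
        lo = max(0, z - overlap)
        hi = min(stack.shape[0], z + chunk + overlap)
        part = smooth_chunk(stack[lo:hi])
        # Keep only the interior of the partition; the overlap supplies
        # context so labels agree across partition boundaries.
        out[z:min(z + chunk, stack.shape[0])] = part[z - lo:z - lo + chunk]
        z += chunk
    return out
```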

  2. Cost aware cache replacement policy in shared last-level cache for hybrid memory based fog computing

    NASA Astrophysics Data System (ADS)

    Jia, Gangyong; Han, Guangjie; Wang, Hao; Wang, Feng

    2018-04-01

    Fog computing requires a large main memory capacity to decrease latency and increase the Quality of Service (QoS). However, dynamic random access memory (DRAM), the most commonly used main memory technology, cannot be included in a fog computing system due to its high power consumption. In recent years, non-volatile memories (NVM) such as Phase-Change Memory (PCM) and Spin-transfer torque RAM (STT-RAM), with their low power consumption, have emerged as candidates to replace DRAM. Moreover, the recently proposed hybrid main memory, consisting of both DRAM and NVM, has shown promising advantages in terms of scalability and power consumption. However, the drawbacks of NVM, such as long read/write latency, give rise to asymmetric cache miss costs in the hybrid main memory. Current last-level cache (LLC) replacement policies assume a uniform miss cost, which results in poor LLC performance and adds to the cost of using NVM. In order to minimize the cache miss cost in the hybrid main memory, we propose a cost aware cache replacement policy (CACRP) that reduces the number of cache misses to NVM and improves cache performance for a hybrid memory system. Experimental results show that our CACRP improves LLC performance by up to 43.6% (15.5% on average) compared to LRU.
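    A toy illustration of cost-aware eviction (the exact CACRP algorithm is not reproduced here; the cost ratio, class, and victim-selection rule below are invented for the sketch): the victim is chosen by weighing how expensive a line is to refetch against how recently it was used, so cheap DRAM-backed lines tend to be evicted before costly NVM-backed ones.

```python
# Assumed latency ratio for the sketch only.
MISS_COST = {"DRAM": 1.0, "NVM": 4.0}

class CostAwareCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = {}   # addr -> (backing medium, last-use timestamp)
        self.clock = 0

    def access(self, addr, medium):
        """Returns True on a hit; on a miss at capacity, evicts the line
        minimizing miss_cost * last-use time (cold, cheap lines first)."""
        self.clock += 1
        hit = addr in self.lines
        if not hit and len(self.lines) >= self.capacity:
            victim = min(
                self.lines,
                key=lambda a: MISS_COST[self.lines[a][0]] * self.lines[a][1],
            )
            del self.lines[victim]
        self.lines[addr] = (medium, self.clock)
        return hit
```

    In the test below, a pure-recency policy such as LRU would have evicted the older NVM line; the cost-aware rule keeps it and sacrifices the cheaper DRAM line instead.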

  3. Effector functions of memory CTLs can be affected by signals received during reactivation.

    PubMed

    Lv, Yingjun; Mattson, Elliot; Bhadurihauck, Anjuli; Garcia, Karla; Li, Lei; Xiao, Zhengguo

    2017-08-01

    Memory cytotoxic T lymphocytes (CTLs) provide protection to the host against repeated insults from intracellular pathogens. However, it is not completely understood how the effector functions of memory CTLs are induced upon antigen challenge, which is directly related to the efficacy of their protection. Third-signal cytokines, such as IL-12 and type I interferon, have been suggested to be involved in the protective function of memory CTLs, but direct evidence is lacking. In this report, we found that memory CTLs need to be reactivated to exert effector functions. Infusion of a large population of quiescent memory CTLs did not lead to cancer control in tumor-bearing mice, whereas infusion of a reactivated memory CTL population did. This reactivation of memory CTLs requires cytokines such as IL-12 in addition to antigen, but is less dependent on costimulation and IL-2 than the activation of naive CTLs. Memory CTLs responded more quickly and with greater strength than their naive counterparts upon stimulation, which is associated with higher upregulation of important transcription factors such as T-bet and phosphorylated STAT4. In addition, memory CTLs underwent less expansion than naive CTLs upon pathogen challenge. In conclusion, the effector functions of established memory CTLs may be affected by certain cytokines such as IL-12 and type I IFN. Thus, a pathogen's ability to induce cytokines could contribute to the efficacy of protection of an established memory CTL population.

  4. GenomicTools: a computational platform for developing high-throughput analytics in genomics.

    PubMed

    Tsirigos, Aristotelis; Haiminen, Niina; Bilal, Erhan; Utro, Filippo

    2012-01-15

    Recent advances in sequencing technology have resulted in a dramatic increase in sequencing data, which, in turn, requires efficient management of computational resources such as computing time and memory, as well as rapid prototyping of computational pipelines. We present GenomicTools, a flexible computational platform, comprising both a command-line set of tools and a C++ API, for the analysis and manipulation of high-throughput sequencing data such as DNA-seq, RNA-seq, ChIP-seq and MethylC-seq. GenomicTools implements a variety of mathematical operations between sets of genomic regions thereby enabling the prototyping of computational pipelines that can address a wide spectrum of tasks ranging from pre-processing and quality control to meta-analyses. Additionally, the GenomicTools platform is designed to analyze large datasets of any size by minimizing memory requirements. In practical applications, where comparable, GenomicTools outperforms existing tools in terms of both time and memory usage. The GenomicTools platform (version 2.0.0) was implemented in C++. The source code, documentation, user manual, example datasets and scripts are available online at http://code.google.com/p/ibm-cbc-genomic-tools.
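    The kind of region-set operation such a platform provides can be illustrated with a linear sweep over sorted intervals, which keeps memory proportional to the output rather than to the product of the inputs. The function below is a generic sketch, not the GenomicTools API.

```python
def intersect_regions(a, b):
    """Intersection of two sorted lists of (start, end) half-open regions
    on one chromosome, computed by a linear sweep."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        lo = max(a[i][0], b[j][0])
        hi = min(a[i][1], b[j][1])
        if lo < hi:
            out.append((lo, hi))
        # Advance whichever interval ends first.
        if a[i][1] <= b[j][1]:
            i += 1
        else:
            j += 1
    return out
```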

  5. Vector computer memory bank contention

    NASA Technical Reports Server (NTRS)

    Bailey, D. H.

    1985-01-01

    A number of vector supercomputers feature very large memories. Unfortunately the large capacity memory chips that are used in these computers are much slower than the fast central processing unit (CPU) circuitry. As a result, memory bank reservation times (in CPU ticks) are much longer than on previous generations of computers. A consequence of these long reservation times is that memory bank contention is sharply increased, resulting in significantly lowered performance rates. The phenomenon of memory bank contention in vector computers is analyzed using both a Markov chain model and a Monte Carlo simulation program. The results of this analysis indicate that future generations of supercomputers must either employ much faster memory chips or else feature very large numbers of independent memory banks.
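    A minimal Monte Carlo model of the effect described, with assumed parameters (uniformly random bank selection, one issue attempt per CPU tick), shows how efficiency falls as bank reservation time grows relative to the number of banks. This is a sketch in the spirit of the paper's simulation, not its actual program.

```python
import random

def simulate_contention(n_banks, reservation, n_accesses, seed=1):
    """Monte Carlo sketch of memory bank contention: a stream of
    random-bank accesses; a busy bank stalls the stream until its
    reservation time expires. Returns the achieved fraction of the
    peak one-access-per-tick rate."""
    rng = random.Random(seed)
    free_at = [0] * n_banks                      # tick each bank frees up
    tick = 0
    for _ in range(n_accesses):
        bank = rng.randrange(n_banks)
        tick = max(tick + 1, free_at[bank] + 1)  # stall if bank is busy
        free_at[bank] = tick + reservation - 1
    return n_accesses / tick
```

    Consistent with the record's conclusion, efficiency recovers either by shortening the reservation time or by adding many more independent banks.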

  6. Vector computer memory bank contention

    NASA Technical Reports Server (NTRS)

    Bailey, David H.

    1987-01-01

    A number of vector supercomputers feature very large memories. Unfortunately the large capacity memory chips that are used in these computers are much slower than the fast central processing unit (CPU) circuitry. As a result, memory bank reservation times (in CPU ticks) are much longer than on previous generations of computers. A consequence of these long reservation times is that memory bank contention is sharply increased, resulting in significantly lowered performance rates. The phenomenon of memory bank contention in vector computers is analyzed using both a Markov chain model and a Monte Carlo simulation program. The results of this analysis indicate that future generations of supercomputers must either employ much faster memory chips or else feature very large numbers of independent memory banks.

  7. Accurate and Efficient Parallel Implementation of an Effective Linear-Scaling Direct Random Phase Approximation Method.

    PubMed

    Graf, Daniel; Beuerle, Matthias; Schurkus, Henry F; Luenser, Arne; Savasci, Gökcen; Ochsenfeld, Christian

    2018-05-08

    An efficient algorithm for calculating the random phase approximation (RPA) correlation energy is presented that is as accurate as the canonical molecular orbital resolution-of-the-identity RPA (RI-RPA) with the important advantage of an effective linear-scaling behavior (instead of quartic) for large systems due to a formulation in the local atomic orbital space. The high accuracy is achieved by utilizing optimized minimax integration schemes and the local Coulomb metric attenuated by the complementary error function for the RI approximation. The memory bottleneck of former atomic orbital (AO)-RI-RPA implementations (Schurkus, H. F.; Ochsenfeld, C. J. Chem. Phys. 2016, 144, 031101 and Luenser, A.; Schurkus, H. F.; Ochsenfeld, C. J. Chem. Theory Comput. 2017, 13, 1647-1655) is addressed by precontraction of the large 3-center integral matrix with the Cholesky factors of the ground state density, reducing the memory requirements of that matrix by a factor of [Formula: see text]. Furthermore, we present a parallel implementation of our method, which not only leads to faster RPA correlation energy calculations but also to a scalable decrease in memory requirements, opening the door for investigations of large molecules even on small- to medium-sized computing clusters. Although it is known that AO methods are highly efficient for extended systems, where sparsity allows for reaching the linear-scaling regime, we show that our work also extends the applicability when considering highly delocalized systems for which no linear scaling can be achieved. As an example, the interlayer distance of two covalent organic framework pore fragments (comprising 384 atoms in total) is analyzed.
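    Schematically, the precontraction replaces a tensor with two AO indices by one in which an AO index has been contracted down to the much smaller Cholesky rank of the density. The NumPy sketch below uses random toy data and invented dimensions; it illustrates only the shape and memory arithmetic, not the actual RI-RPA implementation.

```python
import numpy as np

# Toy dimensions (invented): auxiliary basis, AO basis, Cholesky rank.
n_aux, n_ao, n_occ = 50, 40, 8
rng = np.random.default_rng(0)

B = rng.standard_normal((n_aux, n_ao, n_ao))  # 3-center tensor B[P, mu, nu]
L = rng.standard_normal((n_ao, n_occ))        # Cholesky factor, P = L @ L.T

# Precontract one AO index with the Cholesky factor: B[P, mu, nu] -> B[P, mu, i]
B_small = np.einsum("pmn,ni->pmi", B, L)

memory_ratio = B.size / B_small.size          # equals n_ao / n_occ here
```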

  8. Short-term depression and transient memory in sensory cortex.

    PubMed

    Gillary, Grant; Heydt, Rüdiger von der; Niebur, Ernst

    2017-12-01

    Persistent neuronal activity is usually studied in the context of short-term memory localized in central cortical areas. Recent studies show that early sensory areas also can have persistent representations of stimuli which emerge quickly (over tens of milliseconds) and decay slowly (over seconds). Traditional positive feedback models cannot explain sensory persistence for at least two reasons: (i) They show attractor dynamics, with transient perturbations resulting in a quasi-permanent change of system state, whereas sensory systems return to the original state after a transient. (ii) As we show, those positive feedback models which decay to baseline lose their persistence when their recurrent connections are subject to short-term depression, a common property of excitatory connections in early sensory areas. Dual time constant network behavior has also been implemented by nonlinear afferents producing a large transient input followed by much smaller steady state input. We show that such networks require unphysiologically large onset transients to produce the rise and decay observed in sensory areas. Our study explores how memory and persistence can be implemented in another model class, derivative feedback networks. We show that these networks can operate with two vastly different time courses, changing their state quickly when new information is coming in but retaining it for a long time, and that these capabilities are robust to short-term depression. Specifically, derivative feedback networks with short-term depression that acts differentially on positive and negative feedback projections are capable of dynamically changing their time constant, thus allowing fast onset and slow decay of responses without requiring unrealistically large input transients.

  9. Translational Approaches Targeting Reconsolidation

    PubMed Central

    Kroes, Marijn C.W.; LeDoux, Joseph E.; Phelps, Elizabeth A.

    2017-01-01

    Maladaptive learned responses and memories contribute to psychiatric disorders that constitute a significant socio-economic burden. Primary treatment methods teach patients to inhibit maladaptive responses, but do not get rid of the memory itself, which explains why many patients experience a return of symptoms even after initially successful treatment. This highlights the need to discover more persistent and robust techniques to diminish maladaptive learned behaviours. One potentially promising approach is to alter the original memory, as opposed to inhibiting it, by targeting memory reconsolidation. Recent research shows that reactivating an old memory results in a period of memory flexibility and requires restorage, or reconsolidation, for the memory to persist. This reconsolidation period allows a window for modification of a specific old memory. Renewal of memory flexibility following reactivation holds great clinical potential as it enables targeting reconsolidation and changing of specific learned responses and memories that contribute to maladaptive mental states and behaviours. Here, we will review translational research on non-human animals, healthy human subjects, and clinical populations aimed at altering memories by targeting reconsolidation using biological treatments (electrical stimulation, noradrenergic antagonists) or behavioural interference (reactivation–extinction paradigm). Both approaches have been used successfully to modify aversive and appetitive memories, yet effectiveness in treating clinical populations has been limited. We will discuss that memory flexibility depends on the type of memory tested and the brain regions that underlie specific types of memory. Further, when and how we can most effectively reactivate a memory and induce flexibility is largely unclear. Finally, the development of drugs that can target reconsolidation and are safe for use in humans would optimize cross-species translations. 
    Increasing the understanding of the mechanism and limitations of memory flexibility upon reactivation should help optimize efficacy of treatments for psychiatric patients. PMID:27240676

  10. Computational aerodynamics development and outlook /Dryden Lecture in Research for 1979/

    NASA Technical Reports Server (NTRS)

    Chapman, D. R.

    1979-01-01

    Some past developments and current examples of computational aerodynamics are briefly reviewed. An assessment is made of the requirements on future computer memory and speed imposed by advanced numerical simulations, giving emphasis to the Reynolds averaged Navier-Stokes equations and to turbulent eddy simulations. Experimental scales of turbulence structure are used to determine the mesh spacings required to adequately resolve turbulent energy and shear. Assessment also is made of the changing market environment for developing future large computers, and of the projections of micro-electronics memory and logic technology that affect future computer capability. From the two assessments, estimates are formed of the future time scale in which various advanced types of aerodynamic flow simulations could become feasible. Areas of research judged especially relevant to future developments are noted.

  11. Topological computation based on direct magnetic logic communication.

    PubMed

    Zhang, Shilei; Baker, Alexander A; Komineas, Stavros; Hesjedal, Thorsten

    2015-10-28

    Non-uniform magnetic domains with non-trivial topology, such as vortices and skyrmions, are proposed as superior state variables for nonvolatile information storage. So far, the possibility of logic operations using topological objects has not been considered. Here, we demonstrate numerically that the topology of the system plays a significant role for its dynamics, using the example of vortex-antivortex pairs in a planar ferromagnetic film. Utilising the dynamical properties and geometrical confinement, direct logic communication between the topological memory carriers is realised. This way, no additional magnetic-to-electrical conversion is required. More importantly, the information carriers can spontaneously travel up to ~300 nm, for which no spin-polarised current is required. The derived logic scheme enables topological spintronics, which can be integrated into large-scale memory and logic networks capable of complex computations.

  12. Illusory expectations can affect retrieval-monitoring accuracy.

    PubMed

    McDonough, Ian M; Gallo, David A

    2012-03-01

    The present study investigated how expectations, even when illusory, can affect the accuracy of memory decisions. Participants studied words presented in large or small font for subsequent memory tests. Replicating prior work, judgments of learning indicated that participants expected to remember large words better than small words, even though memory for these words was equivalent on a standard test of recognition memory and subjective judgments. Critically, we also included tests that instructed participants to selectively search memory for either large or small words, thereby allowing different memorial expectations to contribute to performance. On these tests we found reduced false recognition when searching memory for large words relative to small words, such that the size illusion paradoxically affected accuracy measures (d' scores) in the absence of actual memory differences. Additional evidence for the role of illusory expectations was that (a) the accuracy effect was obtained only when participants searched memory for the aspect of the stimuli corresponding to illusory expectations (size instead of color) and (b) the accuracy effect was eliminated on a forced-choice test that prevented the influence of memorial expectations. These findings demonstrate the critical role of memorial expectations in the retrieval-monitoring process. 2012 APA, all rights reserved

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hull, L.C.

    The Prickett and Lonnquist two-dimensional groundwater model has been programmed for the Apple II microcomputer. Both leaky and nonleaky confined aquifers can be simulated. The model was adapted from the FORTRAN version of Prickett and Lonnquist. In the configuration presented here, the program requires 64 K of memory. Because of the large number of arrays used in the program, and the memory limitations of the Apple II, the maximum grid size that can be used is 20 rows by 20 columns. Input to the program is interactive, with prompting by the computer. Output consists of predicted head values at the row-column intersections (nodes).
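    A steady-state head calculation on the same 20 x 20 grid can be sketched with a simple Jacobi relaxation of the finite-difference equations. This is a generic Laplace solve with assumed boundary conditions, not Prickett and Lonnquist's iterative alternating-direction scheme; it only illustrates why array storage dominates the memory budget at this grid size.

```python
import numpy as np

n = 20                       # grid limited to 20 x 20 by available memory
head = np.zeros((n, n))
head[0, :] = 100.0           # assumed fixed-head boundary along one edge

for _ in range(2000):        # Jacobi relaxation of the 5-point stencil
    interior = 0.25 * (head[:-2, 1:-1] + head[2:, 1:-1]
                       + head[1:-1, :-2] + head[1:-1, 2:])
    head[1:-1, 1:-1] = interior
```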

  14. General Recommendations on Fatigue Risk Management for the Canadian Forces

    DTIC Science & Technology

    2010-04-01

    missions performed in aviation require an individual(s) to process large amount of information in a short period of time and to do this on a continuous...information processing required during sustained operations can deteriorate an individual’s ability to perform a task. Given the high operational tempo...memory, which, in turn, is utilized to perform human thought processes (Baddeley, 2003). While various versions of this theory exist, they all share

  15. Structural synaptic plasticity in the hippocampus induced by spatial experience and its implications in information processing.

    PubMed

    Carasatorre, M; Ramírez-Amaya, V; Díaz Cintra, S

    2016-10-01

    Long-lasting memory formation requires that groups of neurons processing new information develop the ability to reproduce the patterns of neural activity acquired by experience. Changes in synaptic efficiency let neurons organise to form ensembles that repeat certain activity patterns again and again. Among other changes in synaptic plasticity, structural modifications tend to be long-lasting which suggests that they underlie long-term memory. There is a large body of evidence supporting that experience promotes changes in the synaptic structure, particularly in the hippocampus. Structural changes to the hippocampus may be functionally implicated in stabilising acquired memories and encoding new information. Copyright © 2012 Sociedad Española de Neurología. Published by Elsevier España, S.L.U. All rights reserved.

  16. The mere exposure effect and recognition depend on the way you look!

    PubMed

    Willems, Sylvie; Dedonder, Jonathan; Van der Linden, Martial

    2010-01-01

    In line with Whittlesea and Price (2001), we investigated whether the memory effect measured with an implicit memory paradigm (mere exposure effect) and an explicit recognition task depended on perceptual processing strategies, regardless of whether the task required intentional retrieval. We found that manipulation intended to prompt functional implicit-explicit dissociation no longer had a differential effect when we induced similar perceptual strategies in both tasks. Indeed, the results showed that prompting a nonanalytic strategy ensured performance above chance on both tasks. Conversely, inducing an analytic strategy drastically decreased both explicit and implicit performance. Furthermore, we noted that the nonanalytic strategy involved less extensive gaze scanning than the analytic strategy and that memory effects under this processing strategy were largely independent of gaze movement.

  17. De Novo mRNA Synthesis Is Required for Both Consolidation and Reconsolidation of Fear Memories in the Amygdala

    ERIC Educational Resources Information Center

    Duvarci, Sevil; Nader, Karim; LeDoux, Joseph E.

    2008-01-01

    Memory consolidation is the process by which newly learned information is stabilized into long-term memory (LTM). Considerable evidence indicates that retrieval of a consolidated memory returns it to a labile state that requires it to be restabilized. Consolidation of new fear memories has been shown to require de novo RNA and protein synthesis in…

  18. Scalability improvements to NRLMOL for DFT calculations of large molecules

    NASA Astrophysics Data System (ADS)

    Diaz, Carlos Manuel

    Advances in high performance computing (HPC) have provided a way to treat large, computationally demanding tasks using thousands of processors. With the development of more powerful HPC architectures, the need to create efficient and scalable code has grown more important. Electronic structure calculations are valuable in understanding experimental observations and are routinely used for new materials predictions. For electronic structure calculations, memory and computation time grow with the number of atoms; memory requirements scale as N^2, where N is the number of atoms. While the recent advances in HPC offer platforms with large numbers of cores, the limited amount of memory available on a given node and poor scalability of the electronic structure code hinder the efficient usage of these platforms. This thesis will present some developments to overcome these bottlenecks in order to study large systems. These developments, which are implemented in the NRLMOL electronic structure code, involve the use of sparse matrix storage formats and the use of linear algebra with sparse and distributed matrices. These developments, along with other related developments, now allow ground-state density functional calculations using up to 25,000 basis functions and excited-state calculations using up to 17,000 basis functions while utilizing all cores on a node. An example on a light-harvesting triad molecule is described. Finally, future plans to further improve the scalability will be presented.
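    The memory saving from sparse storage can be illustrated with a hand-rolled CSR (compressed sparse row) layout, which keeps only nonzero entries plus index arrays instead of the full N x N matrix. NRLMOL's actual storage format and distributed linear algebra are more involved than this sketch, and the function names are invented.

```python
import numpy as np

def to_csr(dense, tol=0.0):
    """Convert a dense matrix to CSR arrays (data, column indices, row
    pointers), keeping only entries with |value| > tol."""
    data, cols, indptr = [], [], [0]
    for row in dense:
        for j, v in enumerate(row):
            if abs(v) > tol:
                data.append(v)
                cols.append(j)
        indptr.append(len(data))
    return np.array(data), np.array(cols), np.array(indptr)

def csr_matvec(data, cols, indptr, x):
    """Sparse matrix-vector product using the CSR arrays."""
    y = np.zeros(len(indptr) - 1)
    for i in range(len(y)):
        lo, hi = indptr[i], indptr[i + 1]
        y[i] = data[lo:hi] @ x[cols[lo:hi]]
    return y
```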

  19. Feedback associated with expectation for larger-reward improves visuospatial working memory performances in children with ADHD.

    PubMed

    Hammer, Rubi; Tennekoon, Michael; Cooke, Gillian E; Gayda, Jessica; Stein, Mark A; Booth, James R

    2015-08-01

    We tested the interactive effect of feedback and reward on visuospatial working memory in children with ADHD. Seventeen boys with ADHD and 17 Normal Control (NC) boys underwent functional magnetic resonance imaging (fMRI) while performing four visuospatial 2-back tasks that required monitoring the spatial location of letters presented on a display. Tasks varied in reward size (large; small) and feedback availability (no-feedback; feedback). While the performance of NC boys was high in all conditions, boys with ADHD exhibited higher performance (similar to those of NC boys) only when they received feedback associated with large-reward. Performance pattern in both groups was mirrored by neural activity in an executive function neural network comprised of few distinct frontal brain regions. Specifically, neural activity in the left and right middle frontal gyri of boys with ADHD became normal-like only when feedback was available, mainly when feedback was associated with large-reward. When feedback was associated with small-reward, or when large-reward was expected but feedback was not available, boys with ADHD exhibited altered neural activity in the medial orbitofrontal cortex and anterior insula. This suggests that contextual support normalizes activity in executive brain regions in children with ADHD, which results in improved working memory. Copyright © 2015 The Authors. Published by Elsevier Ltd.. All rights reserved.

  20. Proteinortho: detection of (co-)orthologs in large-scale analysis.

    PubMed

    Lechner, Marcus; Findeiss, Sven; Steiner, Lydia; Marz, Manja; Stadler, Peter F; Prohaska, Sonja J

    2011-04-28

    Orthology analysis is an important part of data analysis in many areas of bioinformatics such as comparative genomics and molecular phylogenetics. The ever-increasing flood of sequence data, and hence the rapidly increasing number of genomes that can be compared simultaneously, calls for efficient software tools as brute-force approaches with quadratic memory requirements become infeasible in practice. The rapid pace at which new data become available, furthermore, makes it desirable to compute genome-wide orthology relations for a given dataset rather than relying on relations listed in databases. The program Proteinortho described here is a stand-alone tool that is geared towards large datasets and makes use of distributed computing techniques when run on multi-core hardware. It implements an extended version of the reciprocal best alignment heuristic. We apply Proteinortho to compute orthologous proteins in the complete set of all 717 eubacterial genomes available at NCBI at the beginning of 2009. We identified thirty proteins present in 99% of all bacterial proteomes. Proteinortho significantly reduces the required amount of memory for orthology analysis compared to existing tools, allowing such computations to be performed on off-the-shelf hardware.
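    The reciprocal best alignment heuristic at the core of this approach can be sketched as follows. Scores and identifiers below are invented, and the real tool extends this with e-value and coverage thresholds, co-ortholog detection, and distributed computation.

```python
def best_hits(scores):
    """scores: dict (query, target) -> alignment score.
    Returns each query's best-scoring target."""
    best = {}
    for (q, t), s in scores.items():
        if q not in best or s > best[q][1]:
            best[q] = (t, s)
    return {q: t for q, (t, _) in best.items()}

def reciprocal_best_pairs(a_vs_b, b_vs_a):
    """Pairs (a, b) where a's best hit is b AND b's best hit is a."""
    fwd, rev = best_hits(a_vs_b), best_hits(b_vs_a)
    return sorted((q, t) for q, t in fwd.items() if rev.get(t) == q)
```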

  1. Quantum teleportation between remote atomic-ensemble quantum memories

    PubMed Central

    Bao, Xiao-Hui; Xu, Xiao-Fan; Li, Che-Ming; Yuan, Zhen-Sheng; Lu, Chao-Yang; Pan, Jian-Wei

    2012-01-01

    Quantum teleportation and quantum memory are two crucial elements for large-scale quantum networks. With the help of prior distributed entanglement as a “quantum channel,” quantum teleportation provides an intriguing means to faithfully transfer quantum states among distant locations without actual transmission of the physical carriers [Bennett CH, et al. (1993) Phys Rev Lett 70(13):1895–1899]. Quantum memory enables controlled storage and retrieval of fast-flying photonic quantum bits with stationary matter systems, which is essential to achieve the scalability required for large-scale quantum networks. Combining these two capabilities, here we realize quantum teleportation between two remote atomic-ensemble quantum memory nodes, each composed of ∼10⁸ rubidium atoms and connected by a 150-m optical fiber. The spin wave state of one atomic ensemble is mapped to a propagating photon and subjected to Bell state measurements with another single photon that is entangled with the spin wave state of the other ensemble. Two-photon detection events herald the success of teleportation with an average fidelity of 88(7)%. Besides its fundamental interest as a teleportation between two remote macroscopic objects, our technique may be useful for quantum information transfer between different nodes in quantum networks and distributed quantum computing. PMID:23144222

  2. An efficient photogrammetric stereo matching method for high-resolution images

    NASA Astrophysics Data System (ADS)

    Li, Yingsong; Zheng, Shunyi; Wang, Xiaonan; Ma, Hao

    2016-12-01

    Stereo matching of high-resolution images is a great challenge in photogrammetry. The main difficulty is the enormous processing workload, which involves substantial computing time and memory consumption. In recent years, the semi-global matching (SGM) method has been a promising approach for solving stereo problems on different data sets. However, the time complexity and memory demand of SGM are proportional to the scale of the images involved, leading to very high consumption when dealing with large images. To address this, this paper presents an efficient hierarchical matching strategy based on the SGM algorithm using single instruction multiple data (SIMD) instructions and structured parallelism in the central processing unit. The proposed method significantly reduces the computational time and memory required for large-scale stereo matching. The three-dimensional (3D) surface is reconstructed by triangulating and fusing redundant reconstruction information from multi-view matching results. Finally, three high-resolution aerial data sets are used to evaluate our improvement, and precise airborne laser scanner data from one data set is used to measure the accuracy of our reconstruction. Experimental results demonstrate that our method achieves remarkable time and memory savings while maintaining the density and precision of the derived 3D point cloud.
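    For orientation, the standard SGM recurrence that this kind of work accelerates can be sketched for a single scan path; this is an illustrative NumPy version, not the authors' hierarchical SIMD implementation, and the penalty values P1 and P2 and all names are assumptions:

```python
import numpy as np

def aggregate_left_to_right(cost, P1=1.0, P2=8.0):
    """One-path SGM cost aggregation (left-to-right along each image row).

    cost: float array of shape (H, W, D) holding per-pixel matching cost
    over D disparity candidates.  Implements the standard recurrence
      L(p, d) = C(p, d) + min(L(q, d), L(q, d-1)+P1, L(q, d+1)+P1,
                              min_k L(q, k)+P2) - min_k L(q, k)
    where q is the previous pixel on the path.
    """
    H, W, D = cost.shape
    L = np.empty_like(cost, dtype=float)
    L[:, 0] = cost[:, 0]
    for x in range(1, W):
        prev = L[:, x - 1]                           # shape (H, D)
        prev_min = prev.min(axis=1, keepdims=True)   # min over disparities
        up = np.full_like(prev, np.inf)
        down = np.full_like(prev, np.inf)
        up[:, :-1] = prev[:, 1:]                     # L(q, d+1)
        down[:, 1:] = prev[:, :-1]                   # L(q, d-1)
        best = np.minimum.reduce([
            prev, up + P1, down + P1,
            np.broadcast_to(prev_min + P2, prev.shape),
        ])
        L[:, x] = cost[:, x] + best - prev_min
    return L
```

    In full SGM, aggregated costs from 8 or 16 path directions are summed and the disparity with the minimal total is selected per pixel; memory use scales with H × W × D, which is the consumption problem the abstract addresses.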

  3. Autonomic Closure for Turbulent Flows Using Approximate Bayesian Computation

    NASA Astrophysics Data System (ADS)

    Doronina, Olga; Christopher, Jason; Hamlington, Peter; Dahm, Werner

    2017-11-01

    Autonomic closure is a new technique for achieving fully adaptive and physically accurate closure of coarse-grained turbulent flow governing equations, such as those solved in large eddy simulations (LES). Although autonomic closure has been shown in recent a priori tests to more accurately represent unclosed terms than do dynamic versions of traditional LES models, the computational cost of the approach makes it challenging to implement for simulations of practical turbulent flows at realistically high Reynolds numbers. The optimization step used in the approach introduces large matrices that must be inverted and is highly memory intensive. In order to reduce memory requirements, here we propose to use approximate Bayesian computation (ABC) in place of the optimization step, thereby yielding a computationally-efficient implementation of autonomic closure that trades memory-intensive for processor-intensive computations. The latter challenge can be overcome as co-processors such as general purpose graphical processing units become increasingly available on current generation petascale and exascale supercomputers. In this work, we outline the formulation of ABC-enabled autonomic closure and present initial results demonstrating the accuracy and computational cost of the approach.

  4. A Fast MoM Solver (GIFFT) for Large Arrays of Microstrip and Cavity-Backed Antennas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fasenfest, B J; Capolino, F; Wilton, D

    2005-02-02

    A straightforward numerical analysis of large arrays of arbitrary contour (and possibly missing elements) requires large memory storage and long computation times. Several techniques are currently under development to reduce this cost. One such technique is the GIFFT (Green's function interpolation and FFT) method discussed here, which belongs to the class of fast solvers for large structures. This method uses a modification of the standard AIM approach [1] that takes into account the reusability properties of matrices that arise from identical array elements. If the array consists of planar conducting bodies, the array elements are meshed using standard subdomain basis functions, such as the RWG basis. The Green's function is then projected onto a sparse regular grid of separable interpolating polynomials. This grid can then be used in a 2D or 3D FFT to accelerate the matrix-vector product used in an iterative solver [2]. The method has been proven to greatly reduce solve time by speeding up the matrix-vector product computation. The GIFFT approach also reduces fill time and memory requirements, since only the near element interactions need to be calculated exactly. The present work extends GIFFT to layered material Green's functions and multiregion interactions via slots in ground planes. In addition, a preconditioner is implemented to greatly reduce the number of iterations required for a solution. The general scheme of the GIFFT method is reported in [2]; this contribution is limited to presenting new results for array antennas made of slot-excited patches and cavity-backed patch antennas.

  5. LSG: An External-Memory Tool to Compute String Graphs for Next-Generation Sequencing Data Assembly.

    PubMed

    Bonizzoni, Paola; Vedova, Gianluca Della; Pirola, Yuri; Previtali, Marco; Rizzi, Raffaella

    2016-03-01

    The large amount of short read data that has to be assembled in future applications, such as in metagenomics or cancer genomics, strongly motivates the investigation of disk-based approaches to index next-generation sequencing (NGS) data. Positive results in this direction stimulate the investigation of efficient external memory algorithms for de novo assembly from NGS data. Our article is also motivated by the open problem of designing a space-efficient algorithm to compute a string graph using an indexing procedure based on the Burrows-Wheeler transform (BWT). We have developed a disk-based algorithm for computing string graphs in external memory: the light string graph (LSG). LSG relies on a new representation of the FM-index that keeps the main memory requirement independent of the size of the data set. Moreover, we have developed a pipeline for genome assembly from NGS data that integrates LSG with the assembly step of SGA (Simpson and Durbin, 2012), a state-of-the-art string graph-based assembler, and uses BEETL for indexing the input data. LSG is open source software and is available online. We have analyzed our implementation on an 875-million read whole-genome dataset, on which LSG built the string graph using only 1 GB of main memory (reducing memory usage by a factor of 50 with respect to SGA), while requiring slightly more than twice the running time of SGA. The analysis of the entire pipeline shows an important decrease in memory usage with only a moderate increase in running time.

  6. Large conditional single-photon cross-phase modulation

    NASA Astrophysics Data System (ADS)

    Beck, Kristin; Hosseini, Mahdi; Duan, Yiheng; Vuletic, Vladan

    2016-05-01

    Deterministic optical quantum logic requires a nonlinear quantum process that alters the phase of a quantum optical state by π through interaction with only one photon. Here, we demonstrate a large conditional cross-phase modulation between a signal field, stored inside an atomic quantum memory, and a control photon that traverses a high-finesse optical cavity containing the atomic memory. This approach avoids fundamental limitations associated with multimode effects for traveling optical photons. We measure a conditional cross-phase shift of up to π/3 between the retrieved signal and control photons, and confirm deterministic entanglement between the signal and control modes by extracting a positive concurrence. With a moderate improvement in cavity finesse, our system can reach a coherent phase shift of π at low loss, enabling deterministic and universal photonic quantum logic. Preprint: arXiv:1512.02166 [quant-ph]

  7. Evidence against decay in verbal working memory.

    PubMed

    Oberauer, Klaus; Lewandowsky, Stephan

    2013-05-01

    The article tests the assumption that forgetting in working memory for verbal materials is caused by time-based decay, using the complex-span paradigm. Participants encoded 6 letters for serial recall; each letter was preceded and followed by a processing period comprising 4 trials of difficult visual search. Processing duration, during which memory could decay, was manipulated via search set size. This manipulation increased retention interval by up to 100% without having any effect on recall accuracy. This result held with and without articulatory suppression. Two experiments using a dual-task paradigm showed that the visual search process required central attention. Thus, even when memory maintenance by central attention and by articulatory rehearsal was prevented, a large delay had no effect on memory performance, contrary to the decay notion. Most previous experiments that manipulated the retention interval and the opportunity for maintenance processes in complex span have confounded these variables with time pressure during processing periods. Three further experiments identified time pressure as the variable that affected recall. We conclude that time-based decay does not contribute to the capacity limit of verbal working memory. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  8. Understanding the function of visual short-term memory: transsaccadic memory, object correspondence, and gaze correction.

    PubMed

    Hollingworth, Andrew; Richard, Ashleigh M; Luck, Steven J

    2008-02-01

    Visual short-term memory (VSTM) has received intensive study over the past decade, with research focused on VSTM capacity and representational format. Yet, the function of VSTM in human cognition is not well understood. Here, the authors demonstrate that VSTM plays an important role in the control of saccadic eye movements. Intelligent human behavior depends on directing the eyes to goal-relevant objects in the world, yet saccades are very often inaccurate and require correction. The authors hypothesized that VSTM is used to remember the features of the current saccade target so that it can be rapidly reacquired after an errant saccade, a task faced by the visual system thousands of times each day. In 4 experiments, memory-based gaze correction was accurate, fast, automatic, and largely unconscious. In addition, a concurrent VSTM load interfered with memory-based gaze correction, but a verbal short-term memory load did not. These findings demonstrate that VSTM plays a direct role in a fundamentally important aspect of visually guided behavior, and they suggest the existence of previously unknown links between VSTM representations and the oculomotor system. PsycINFO Database Record (c) 2008 APA, all rights reserved.

  9. Method for prefetching non-contiguous data structures

    DOEpatents

    Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton On Hudson, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Hoenicke, Dirk [Ossining, NY; Ohmacht, Martin [Brewster, NY; Steinmacher-Burow, Burkhard D [Mount Kisco, NY; Takken, Todd E [Mount Kisco, NY; Vranas, Pavlos M [Bedford Hills, NY

    2009-05-05

    A low latency memory system access is provided in association with a weakly-ordered multiprocessor system. Each processor in the multiprocessor shares resources, and each shared resource has an associated lock within a locking device that provides support for synchronization between the multiple processors in the multiprocessor and the orderly sharing of the resources. A processor only has permission to access a resource when it owns the lock associated with that resource, and an attempt by a processor to own a lock requires only a single load operation, rather than a traditional atomic load followed by store, such that the processor only performs a read operation and the hardware locking device performs the subsequent write operation rather than the processor. A simple prefetching scheme for non-contiguous data structures is also disclosed. A memory line is redefined so that in addition to the normal physical memory data, every line includes a pointer that is large enough to point to any other line in the memory, wherein the pointers determine which memory line to prefetch rather than some other predictive algorithm. This enables hardware to effectively prefetch memory access patterns that are non-contiguous but repetitive.
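    The prefetching idea above embeds, in each memory line, a pointer to the line that should be fetched next. A toy software simulation of that idea (all names and structures are invented for illustration; the patented mechanism operates on physical memory lines in hardware):

```python
class Line:
    """A memory line: payload plus an embedded pointer to the next line to prefetch."""
    def __init__(self, data, prefetch_next=None):
        self.data = data
        self.prefetch_next = prefetch_next  # index of the line to prefetch, or None

def traverse_with_prefetch(memory, start):
    """Walk lines by following embedded pointers, recording the prefetch order."""
    order, idx = [], start
    while idx is not None:
        order.append(idx)
        idx = memory[idx].prefetch_next
    return order

# A non-contiguous but repetitive access pattern: line 0 -> line 7 -> line 3
memory = {0: Line("a", 7), 7: Line("b", 3), 3: Line("c", None)}
print(traverse_with_prefetch(memory, 0))  # [0, 7, 3]
```

    Because the pointer chain encodes the pattern directly, no history-based predictor is needed, which is the advantage the patent claims for repetitive non-contiguous accesses.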

  10. Human Stem Cell-like Memory T Cells Are Maintained in a State of Dynamic Flux.

    PubMed

    Ahmed, Raya; Roger, Laureline; Costa Del Amo, Pedro; Miners, Kelly L; Jones, Rhiannon E; Boelen, Lies; Fali, Tinhinane; Elemans, Marjet; Zhang, Yan; Appay, Victor; Baird, Duncan M; Asquith, Becca; Price, David A; Macallan, Derek C; Ladell, Kristin

    2016-12-13

    Adaptive immunity requires the generation of memory T cells from naive precursors selected in the thymus. The key intermediaries in this process are stem cell-like memory T (TSCM) cells, multipotent progenitors that can both self-renew and replenish more differentiated subsets of memory T cells. In theory, antigen specificity within the TSCM pool may be imprinted statically as a function of largely dormant cells and/or retained dynamically by more transitory subpopulations. To explore the origins of immunological memory, we measured the turnover of TSCM cells in vivo using stable isotope labeling with heavy water. The data indicate that TSCM cells in both young and elderly subjects are maintained by ongoing proliferation. In line with this finding, TSCM cells displayed limited telomere length erosion coupled with high expression levels of active telomerase and Ki67. Collectively, these observations show that TSCM cells exist in a state of perpetual flux throughout the human lifespan. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  11. FoxO6 regulates memory consolidation and synaptic function

    PubMed Central

    Salih, Dervis A.M.; Rashid, Asim J.; Colas, Damien; de la Torre-Ubieta, Luis; Zhu, Ruo P.; Morgan, Alexander A.; Santo, Evan E.; Ucar, Duygu; Devarajan, Keerthana; Cole, Christina J.; Madison, Daniel V.; Shamloo, Mehrdad; Butte, Atul J.; Bonni, Azad; Josselyn, Sheena A.; Brunet, Anne

    2012-01-01

    The FoxO family of transcription factors is known to slow aging downstream from the insulin/IGF (insulin-like growth factor) signaling pathway. The most recently discovered FoxO isoform in mammals, FoxO6, is highly enriched in the adult hippocampus. However, the importance of FoxO factors in cognition is largely unknown. Here we generated mice lacking FoxO6 and found that these mice display normal learning but impaired memory consolidation in contextual fear conditioning and novel object recognition. Using stereotactic injection of viruses into the hippocampus of adult wild-type mice, we found that FoxO6 activity in the adult hippocampus is required for memory consolidation. Genome-wide approaches revealed that FoxO6 regulates a program of genes involved in synaptic function upon learning in the hippocampus. Consistently, FoxO6 deficiency results in decreased dendritic spine density in hippocampal neurons in vitro and in vivo. Thus, FoxO6 may promote memory consolidation by regulating a program coordinating neuronal connectivity in the hippocampus, which could have important implications for physiological and pathological age-dependent decline in memory. PMID:23222102

  12. Parallel Implementation of MAFFT on CUDA-Enabled Graphics Hardware.

    PubMed

    Zhu, Xiangyuan; Li, Kenli; Salah, Ahmad; Shi, Lin; Li, Keqin

    2015-01-01

    Multiple sequence alignment (MSA) constitutes an extremely powerful tool for many biological applications including phylogenetic tree estimation, secondary structure prediction, and critical residue identification. However, aligning large biological sequences with popular tools such as MAFFT requires long runtimes on sequential architectures. Due to the ever-increasing sizes of sequence databases, there is increasing demand to accelerate this task. In this paper, we demonstrate how graphics processing units (GPUs), powered by the compute unified device architecture (CUDA), can be used as an efficient computational platform to accelerate the MAFFT algorithm. To fully exploit the GPU's capabilities for accelerating MAFFT, we have optimized the sequence data organization to eliminate the bandwidth bottleneck of memory access, designed a memory allocation and reuse strategy to make full use of the limited memory of GPUs, proposed a new modified-run-length encoding (MRLE) scheme to reduce memory consumption, and used high-performance shared memory to speed up I/O operations. Our implementation, tested on three NVIDIA GPUs, achieves a speedup of up to 11.28 on a Tesla K20m GPU compared to the sequential MAFFT 7.015.
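    The MRLE scheme itself is specific to the paper, but it builds on ordinary run-length encoding, which compresses repeated symbols into (symbol, count) pairs. A generic sketch of that underlying idea (names are illustrative, and this string version ignores the paper's GPU-specific modifications):

```python
def rle_encode(seq):
    """Run-length encode a string into (symbol, count) pairs."""
    runs = []
    for s in seq:
        if runs and runs[-1][0] == s:
            runs[-1][1] += 1
        else:
            runs.append([s, 1])
    return [(s, n) for s, n in runs]

def rle_decode(runs):
    """Invert rle_encode: expand (symbol, count) pairs back into a string."""
    return "".join(s * n for s, n in runs)

runs = rle_encode("AAAACCGT")
print(runs)                            # [('A', 4), ('C', 2), ('G', 1), ('T', 1)]
print(rle_decode(runs) == "AAAACCGT")  # True
```

    For alignment data with long runs of identical residues or gaps, this representation can cut memory consumption substantially, which is the motivation the abstract cites.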

  13. Satb2 determines miRNA expression and long-term memory in the adult central nervous system.

    PubMed

    Jaitner, Clemens; Reddy, Chethan; Abentung, Andreas; Whittle, Nigel; Rieder, Dietmar; Delekate, Andrea; Korte, Martin; Jain, Gaurav; Fischer, Andre; Sananbenesi, Farahnaz; Cera, Isabella; Singewald, Nicolas; Dechant, Georg; Apostolova, Galina

    2016-11-29

    SATB2 is a risk locus for schizophrenia and encodes a DNA-binding protein that regulates higher-order chromatin configuration. In the adult brain Satb2 is almost exclusively expressed in pyramidal neurons of two brain regions important for memory formation, the cerebral cortex and the CA1-hippocampal field. Here we show that Satb2 is required for key hippocampal functions since deletion of Satb2 from the adult mouse forebrain prevents the stabilization of synaptic long-term potentiation and markedly impairs long-term fear and object discrimination memory. At the molecular level, we find that synaptic activity and BDNF up-regulate Satb2, which itself binds to the promoters of coding and non-coding genes. Satb2 controls the hippocampal levels of a large cohort of miRNAs, many of which are implicated in synaptic plasticity and memory formation. Together, our findings demonstrate that Satb2 is critically involved in long-term plasticity processes in the adult forebrain that underlie the consolidation and stabilization of context-linked memory.

  14. shinyheatmap: Ultra fast low memory heatmap web interface for big data genomics.

    PubMed

    Khomtchouk, Bohdan B; Hennessy, James R; Wahlestedt, Claes

    2017-01-01

    Transcriptomics, metabolomics, metagenomics, and other various next-generation sequencing (-omics) fields are known for their production of large datasets, especially across single-cell sequencing studies. Visualizing such big data has posed technical challenges in biology, both in terms of available computational resources as well as programming acumen. Since heatmaps are used to depict high-dimensional numerical data as a colored grid of cells, efficiency and speed have often proven to be critical considerations in the process of successfully converting data into graphics. For example, rendering interactive heatmaps from large input datasets (e.g., 100k+ rows) has been computationally infeasible on both desktop computers and web browsers. In addition to memory requirements, programming skills and knowledge have frequently been barriers to entry for creating highly customizable heatmaps. We propose shinyheatmap: an advanced user-friendly heatmap software suite capable of efficiently creating highly customizable static and interactive biological heatmaps in a web browser. shinyheatmap is a low memory footprint program, making it particularly well-suited for the interactive visualization of extremely large datasets that cannot typically be computed in-memory due to size restrictions. Also, shinyheatmap features a built-in high performance web plug-in, fastheatmap, for rapidly plotting interactive heatmaps of datasets as large as 10⁵-10⁷ rows within seconds, effectively shattering previous performance benchmarks of heatmap rendering speed. shinyheatmap is hosted online as a freely available web server with an intuitive graphical user interface: http://shinyheatmap.com. The methods are implemented in R, and are available as part of the shinyheatmap project at: https://github.com/Bohdan-Khomtchouk/shinyheatmap. Users can access fastheatmap directly from within the shinyheatmap web interface, and all source code has been made publicly available on GitHub: https://github.com/Bohdan-Khomtchouk/fastheatmap.

  15. Distinct roles of visual, parietal, and frontal motor cortices in memory-guided sensorimotor decisions

    PubMed Central

    Goard, Michael J; Pho, Gerald N; Woodson, Jonathan; Sur, Mriganka

    2016-01-01

    Mapping specific sensory features to future motor actions is a crucial capability of mammalian nervous systems. We investigated the role of visual (V1), posterior parietal (PPC), and frontal motor (fMC) cortices for sensorimotor mapping in mice during performance of a memory-guided visual discrimination task. Large-scale calcium imaging revealed that V1, PPC, and fMC neurons exhibited heterogeneous responses spanning all task epochs (stimulus, delay, response). Population analyses demonstrated unique encoding of stimulus identity and behavioral choice information across regions, with V1 encoding stimulus, fMC encoding choice even early in the trial, and PPC multiplexing the two variables. Optogenetic inhibition during behavior revealed that all regions were necessary during the stimulus epoch, but only fMC was required during the delay and response epochs. Stimulus identity can thus be rapidly transformed into behavioral choice, requiring V1, PPC, and fMC during the transformation period, but only fMC for maintaining the choice in memory prior to execution. DOI: http://dx.doi.org/10.7554/eLife.13764.001 PMID:27490481

  16. Highly Stretchable Non-volatile Nylon Thread Memory

    NASA Astrophysics Data System (ADS)

    Kang, Ting-Kuo

    2016-04-01

    Integration of electronic elements into textiles, to afford e-textiles, can provide an ideal platform for the development of lightweight, thin, flexible, and stretchable e-textiles. This approach will enable us to meet the demands of the rapidly growing market of wearable electronics on arbitrary non-conventional substrates. However, the actual integration of e-textiles that undergo mechanical deformations during both assembly and daily wear, or that must satisfy the requirements of low-end applications, remains a challenge. Resistive memory elements can also be fabricated onto a nylon thread (NT) for e-textile applications. In this study, a simple dip-and-dry process using graphene-PEDOT:PSS (poly(3,4-ethylenedioxythiophene) polystyrene sulfonate) ink is proposed for the fabrication of a highly stretchable non-volatile NT memory. The NT memory exhibits typical write-once-read-many-times characteristics. The results show that an ON/OFF ratio of approximately 10³ is maintained for a retention time of 10⁶ s. Furthermore, high stretchability under strain and long-term digital-storage capability of the ON-OFF-ON states are demonstrated in the NT memory. The actual integration of the knitted NT memories into textiles will enable new design possibilities for low-cost and large-area e-textile memory applications.

  17. N-ras couples antigen receptor signaling to Eomesodermin and to functional CD8+ T cell memory but not to effector differentiation

    PubMed Central

    Iborra, Salvador; Ramos, Manuel; Arana, David M.; Lázaro, Silvia; Aguilar, Francisco; Santos, Eugenio; López, Daniel

    2013-01-01

    Signals from the TCR that specifically contribute to effector versus memory CD8+ T cell differentiation are poorly understood. Using mice and adoptively transferred T lymphocytes lacking the small GTPase N-ras, we found that N-ras–deficient CD8+ T cells differentiate efficiently into antiviral primary effectors but have a severe defect in generating protective memory cells. This defect was rescued, although only partly, by rapamycin-mediated inhibition of mammalian target of rapamycin (mTOR) in vivo. The memory defect correlated with a marked impairment in vitro and in vivo of the antigen-mediated early induction of T-box transcription factor Eomesodermin (Eomes), whereas T-bet was unaffected. Besides N-ras, early Eomes induction in vitro required phosphoinositide 3-kinase (PI3K)–AKT but not extracellular signal-regulated kinase (ERK) activation, and it was largely insensitive to rapamycin. Consistent with N-ras coupling Eomes to T cell memory, retrovirally enforced expression of Eomes in N-ras–deficient CD8+ T cells effectively rescued their memory differentiation. Thus, our study identifies a critical role for N-ras as a TCR-proximal regulator of Eomes for early determination of the CD8+ T cell memory fate. PMID:23776078

  18. [The consolidation of memory, one century on].

    PubMed

    Prado-Alcala, R A; Quirarte, G L

    The theory of memory consolidation, based on the work published by Georg Elias Muller and Alfons Pilzecker over a century ago, continues to guide research into the neurobiology of memory, either directly or indirectly. In their classic monographic work, they concluded that fixing memory requires the passage of time (consolidation) and that memory is vulnerable during this period of consolidation, as symptoms of amnesia appear when brain functioning is interfered with before the consolidation process is completed. Most of the experimental data concerning this phenomenon strongly support the theory. In this article we present a review of experiments that have made it possible to put forward a model that explains the amnesia produced in conventional learning conditions, as well as another model related to the protection of memory when the same instances of learning are submitted to a situation involving intensive training. Findings from relatively recent studies have shown that treatments that typically produce amnesia when they are administered immediately after a learning experience (during the period in which the memory would be consolidating itself) no longer have any effect when the instances of learning involve a relatively large number of trials or training sessions, or relatively high intensity aversive events. These results are not congruent with the prevailing theories about consolidation.

  19. Trace: a high-throughput tomographic reconstruction engine for large-scale datasets

    DOE PAGES

    Bicer, Tekin; Gursoy, Doga; Andrade, Vincent De; ...

    2017-01-28

    Here, synchrotron light source and detector technologies enable scientists to perform advanced experiments. These scientific instruments and experiments produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used data acquisition techniques at light sources is computed tomography, which can generate tens of GB/s depending on the x-ray range. A large-scale tomographic dataset, such as a mouse brain, may require hours of computation time with a medium-sized workstation. In this paper, we present Trace, a data-intensive computing middleware we developed for the implementation and parallelization of iterative tomographic reconstruction algorithms. Trace provides fine-grained reconstruction of tomography datasets using both (thread-level) shared memory and (process-level) distributed memory parallelization. Trace utilizes a special data structure called the replicated reconstruction object to maximize application performance. We also present the optimizations we have made to the replicated reconstruction objects and evaluate them using a shale and a mouse brain sinogram. Our experimental evaluations show that the applied optimizations and parallelization techniques can provide a 158x speedup (using 32 compute nodes) over a single-core configuration, which decreases the reconstruction time of a sinogram (with 4501 projections and 22400 detector resolution) from 12.5 hours to less than 5 minutes per iteration.

  1. Solution of large nonlinear quasistatic structural mechanics problems on distributed-memory multiprocessor computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blanford, M.

    1997-12-31

    Most commercially-available quasistatic finite element programs assemble element stiffnesses into a global stiffness matrix, then use a direct linear equation solver to obtain nodal displacements. However, for large problems (greater than a few hundred thousand degrees of freedom), the memory size and computation time required for this approach become prohibitive. Moreover, direct solution does not lend itself to the parallel processing needed for today's multiprocessor systems. This talk gives an overview of the iterative solution strategy of JAS3D, the nonlinear large-deformation quasistatic finite element program. Because its architecture is derived from an explicit transient-dynamics code, it never assembles a global stiffness matrix. The author describes the approach he used to implement the solver on multiprocessor computers, and shows examples of problems run on hundreds of processors with more than a million degrees of freedom. Finally, he describes some of the work he is presently doing to address the challenges of iterative convergence for ill-conditioned problems.
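    The key point above is that an iterative solver needs only the action of the stiffness operator on a vector, never the assembled matrix. A minimal sketch of a matrix-free conjugate gradient illustrates this; the operator is passed as a callable, which in a finite element code would sum unassembled element contributions (here a small dense matrix stands in for it, and all names are illustrative; JAS3D's actual solver and its nonlinear handling differ):

```python
import numpy as np

def conjugate_gradient(apply_A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive-definite A, using only x -> A @ x."""
    x = np.zeros_like(b)
    r = b - apply_A(x)          # initial residual
    p = r.copy()                # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rs / (p @ Ap)   # optimal step along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# The "matrix" is only ever applied, never stored in assembled form.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
x = conjugate_gradient(lambda v: A @ v, np.array([1.0, 2.0]))
print(np.allclose(A @ x, [1.0, 2.0]))  # True
```

    The memory footprint is then a handful of vectors of the problem size, rather than a global matrix, which is what makes such solvers attractive for millions of degrees of freedom on distributed-memory machines.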

  2. Flexible Language Constructs for Large Parallel Programs

    DOE PAGES

    Rosing, Matt; Schnabel, Robert

    1994-01-01

    The goal of the research described in this article is to develop flexible language constructs for writing large data-parallel numerical programs for distributed-memory (multiple instruction multiple data [MIMD]) multiprocessors. Previously, several models have been developed to support synchronization and communication. Models for global synchronization include single instruction multiple data (SIMD), single program multiple data (SPMD), and sequential programs annotated with data distribution statements. The two primary models for communication are implicit communication based on shared memory and explicit communication based on messages. None of these models by itself seems sufficient to permit the natural and efficient expression of the variety of algorithms that occur in large scientific computations. In this article, we give an overview of a new language that combines many of these programming models in a clean manner. This is done in a modular fashion such that different models can be combined to support large programs. Within a module, the selection of a model depends on the algorithm and its efficiency requirements. We also discuss some of the critical implementation details.

  3. The impact of supercomputers on experimentation: A view from a national laboratory

    NASA Technical Reports Server (NTRS)

    Peterson, V. L.; Arnold, J. O.

    1985-01-01

    The relative roles of large scale scientific computers and physical experiments in several science and engineering disciplines are discussed. Increasing dependence on computers is shown to be motivated both by the rapid growth in computer speed and memory, which permits accurate numerical simulation of complex physical phenomena, and by the rapid reduction in the cost of performing a calculation, which makes computation an increasingly attractive complement to experimentation. Computer speed and memory requirements are presented for selected areas of such disciplines as fluid dynamics, aerodynamics, aerothermodynamics, chemistry, atmospheric sciences, astronomy, and astrophysics, together with some examples of the complementary nature of computation and experiment. Finally, the impact of the emerging role of computers in the technical disciplines is discussed in terms of both the requirements for experimentation and the attainment of previously inaccessible information on physical processes.

  4. Integrated information storage and transfer with a coherent magnetic device

    PubMed Central

    Jia, Ning; Banchi, Leonardo; Bayat, Abolfazl; Dong, Guangjiong; Bose, Sougato

    2015-01-01

    Quantum systems are inherently dissipation-less, making them excellent candidates even for classical information processing. We propose to use an array of large-spin quantum magnets for realizing a device which has two modes of operation: memory and data-bus. While the weakly interacting low-energy levels are used as memory to store classical information (bits), the high-energy levels strongly interact with neighboring magnets and mediate the spatial movement of information through quantum dynamics. Despite the fact that memory and data-bus require different features, which are usually the prerogative of different physical systems – good isolation for the memory cells, and strong interactions for the transmission – our proposal avoids the notorious complexity of hybrid structures. The proposed mechanism can be realized with different setups. We specifically show that molecular magnets, as the most promising technology, can implement hundreds of operations within their coherence time, while adatoms on surfaces probed by a scanning tunneling microscope are a future possibility. PMID:26347152

  5. Dendritic spine dynamics in synaptogenesis after repeated LTP inductions: Dependence on pre-existing spine density

    PubMed Central

    Oe, Yuki; Tominaga-Yoshino, Keiko; Hasegawa, Sho; Ogura, Akihiko

    2013-01-01

    Not only from our daily experience but from learning experiments in animals, we know that the establishment of long-lasting memory requires repeated practice. However, cellular backgrounds underlying this repetition-dependent consolidation of memory remain largely unclear. We reported previously using organotypic slice cultures of rodent hippocampus that the repeated inductions of LTP (long-term potentiation) lead to a slowly developing long-lasting synaptic enhancement accompanied by synaptogenesis distinct from LTP itself, and proposed this phenomenon as a model system suitable for the analysis of the repetition-dependent consolidation of memory. Here we examined the dynamics of individual dendritic spines after repeated LTP-inductions and found the existence of two phases in the spines' stochastic behavior that eventually lead to the increase in spine density. This spine dynamics occurred preferentially in the dendritic segments having low pre-existing spine density. Our results may provide clues for understanding the cellular bases underlying the repetition-dependent consolidation of memory. PMID:23739837

  6. The search for a hippocampal engram.

    PubMed

    Mayford, Mark

    2014-01-05

    Understanding the molecular and cellular changes that underlie memory, the engram, requires the identification, isolation and manipulation of the neurons involved. This presents a major difficulty for complex forms of memory, for example hippocampus-dependent declarative memory, where the participating neurons are likely to be sparse, anatomically distributed and unique to each individual brain and learning event. In this paper, I discuss several new approaches to this problem. In vivo calcium imaging techniques provide a means of assessing the activity patterns of large numbers of neurons over long periods of time with precise anatomical identification. This provides important insight into how the brain represents complex information and how this is altered with learning. The development of techniques for the genetic modification of neural ensembles based on their natural, sensory-evoked, activity along with optogenetics allows direct tests of the coding function of these ensembles. These approaches provide a new methodological framework in which to examine the mechanisms of complex forms of learning at the level of the neurons involved in a specific memory.

  7. The search for a hippocampal engram

    PubMed Central

    Mayford, Mark

    2014-01-01

    Understanding the molecular and cellular changes that underlie memory, the engram, requires the identification, isolation and manipulation of the neurons involved. This presents a major difficulty for complex forms of memory, for example hippocampus-dependent declarative memory, where the participating neurons are likely to be sparse, anatomically distributed and unique to each individual brain and learning event. In this paper, I discuss several new approaches to this problem. In vivo calcium imaging techniques provide a means of assessing the activity patterns of large numbers of neurons over long periods of time with precise anatomical identification. This provides important insight into how the brain represents complex information and how this is altered with learning. The development of techniques for the genetic modification of neural ensembles based on their natural, sensory-evoked, activity along with optogenetics allows direct tests of the coding function of these ensembles. These approaches provide a new methodological framework in which to examine the mechanisms of complex forms of learning at the level of the neurons involved in a specific memory. PMID:24298162

  8. The Construction of Semantic Memory: Grammar-Based Representations Learned from Relational Episodic Information

    PubMed Central

    Battaglia, Francesco P.; Pennartz, Cyriel M. A.

    2011-01-01

    After acquisition, memories undergo a process of consolidation, making them more resistant to interference and brain injury. Memory consolidation involves systems-level interactions, most importantly between the hippocampus and associated structures, which take part in the initial encoding of memory, and the neocortex, which supports long-term storage. This dichotomy parallels the contrast between episodic memory (tied to the hippocampal formation), collecting an autobiographical stream of experiences, and semantic memory, a repertoire of facts and statistical regularities about the world, involving the neocortex at large. Experimental evidence points to a gradual transformation of memories, following encoding, from an episodic to a semantic character. This may require an exchange of information between different memory modules during inactive periods. We propose a theory for such interactions and for the formation of semantic memory, in which episodic memory is encoded as relational data. Semantic memory is modeled as a modified stochastic grammar, which learns to parse episodic configurations expressed as an association matrix. The grammar produces tree-like representations of episodes, describing the relationships between their main constituents at multiple levels of categorization, based on its current knowledge of world regularities. These regularities are learned by the grammar from episodic memory information, through an expectation-maximization procedure, analogous to the inside-outside algorithm for stochastic context-free grammars. We propose that a Monte-Carlo sampling version of this algorithm can be mapped onto the dynamics of "sleep replay" of previously acquired information in the hippocampus and neocortex. We propose that the model can reproduce several properties of semantic memory such as decontextualization, top-down processing, and creation of schemata. PMID:21887143
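    The inside pass of the inside-outside algorithm mentioned above can be sketched for a probabilistic context-free grammar in Chomsky normal form; the toy grammar and probabilities below are invented for illustration.

```python
from collections import defaultdict

def inside_probability(sentence, lexical, binary, start="S"):
    """Inside (CYK-style) pass: probability that `start` derives the
    sentence under a PCFG in Chomsky normal form.

    lexical: {(A, word): prob} for rules A -> word
    binary:  {(A, B, C): prob} for rules A -> B C
    """
    n = len(sentence)
    # inside[i][j][A] = P(A derives words i..j-1)
    inside = [[defaultdict(float) for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(sentence):
        for (A, word), prob in lexical.items():
            if word == w:
                inside[i][i + 1][A] += prob
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):          # split point
                for (A, B, C), prob in binary.items():
                    inside[i][j][A] += prob * inside[i][k][B] * inside[k][j][C]
    return inside[0][n][start]

# Toy grammar: S -> A A (1.0), A -> "a" (0.5), A -> "b" (0.5)
p = inside_probability(["a", "b"],
                       lexical={("A", "a"): 0.5, ("A", "b"): 0.5},
                       binary={("S", "A", "A"): 1.0})
# p == 1.0 * 0.5 * 0.5 == 0.25
```

    The outside pass and the expectation-maximization re-estimation step build on exactly these chart entries.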

  9. Perspectives in astrophysical databases

    NASA Astrophysics Data System (ADS)

    Frailis, Marco; de Angelis, Alessandro; Roberto, Vito

    2004-07-01

    Astrophysics has become a domain extremely rich in scientific data. Data mining tools are needed to extract information from such large data sets. This calls for an approach to data management that emphasizes the efficiency and simplicity of data access: efficiency is obtained using multidimensional access methods, and simplicity is achieved by properly handling metadata. Moreover, clustering and classification techniques on large data sets pose additional requirements in terms of computation and memory scalability and interpretability of results. In this study we review some possible solutions.

  10. Exploiting short-term memory in soft body dynamics as a computational resource.

    PubMed

    Nakajima, K; Li, T; Hauser, H; Pfeifer, R

    2014-11-06

    Soft materials are not only highly deformable, but they also possess rich and diverse body dynamics. Soft body dynamics exhibit a variety of properties, including nonlinearity, elasticity and potentially infinitely many degrees of freedom. Here, we demonstrate that such soft body dynamics can be employed to conduct certain types of computation. Using body dynamics generated from a soft silicone arm, we show that they can be exploited to emulate functions that require memory and to embed robust closed-loop control into the arm. Our results suggest that soft body dynamics have a short-term memory and can serve as a computational resource. This finding paves the way towards exploiting passive body dynamics for control of a large class of underactuated systems. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  11. The neural basis of novelty and appropriateness in processing of creative chunk decomposition.

    PubMed

    Huang, Furong; Fan, Jin; Luo, Jing

    2015-06-01

    Novelty and appropriateness have been recognized as the fundamental features of creative thinking. However, the brain mechanisms underlying these features remain largely unknown. In this study, we used event-related functional magnetic resonance imaging (fMRI) to dissociate these mechanisms in a revised creative chunk decomposition task in which participants were required to perform different types of chunk decomposition that systematically varied in novelty and appropriateness. We found that novelty processing involved functional areas for procedural memory (caudate), mental rewarding (substantia nigra, SN), and visual-spatial processing, whereas appropriateness processing was mediated by areas for declarative memory (hippocampus), emotional arousal (amygdala), and orthography recognition. These results indicate that non-declarative and declarative memory systems may jointly contribute to the two fundamental features of creative thinking. Copyright © 2015 Elsevier Inc. All rights reserved.

  12. Cryptography in the Bounded-Quantum-Storage Model

    NASA Astrophysics Data System (ADS)

    Schaffner, Christian

    2007-09-01

    This thesis initiates the study of cryptographic protocols in the bounded-quantum-storage model. On the practical side, simple protocols for Rabin Oblivious Transfer, 1-2 Oblivious Transfer and Bit Commitment are presented. No quantum memory is required for honest players, whereas the protocols can only be broken by an adversary controlling a large amount of quantum memory. The protocols are efficient, non-interactive and can be implemented with today's technology. On the theoretical side, new entropic uncertainty relations involving min-entropy are established and used to prove the security of protocols according to new strong security definitions. For instance, in the realistic setting of Quantum Key Distribution (QKD) against quantum-memory-bounded eavesdroppers, the uncertainty relation makes it possible to prove the security of QKD protocols while tolerating considerably higher error rates than the standard model with unbounded adversaries.

  13. Development of a shape memory alloy actuator for a robotic eye prosthesis

    NASA Astrophysics Data System (ADS)

    Bunton, T. B. Wolfe; Faulkner, M. G.; Wolfaardt, J.

    2005-08-01

    The quality of life of patients who wear an orbital prosthesis would be vastly improved if their prostheses were also able to execute vertical and horizontal motion. This requires appropriate actuation and control systems to create an intelligent prosthesis. A method of actuation that meets the demanding design criteria is currently not available. The present work considers an actuation system that follows a design philosophy of biomimicry, simplicity and space optimization. While several methods of actuation were considered, shape memory alloys were chosen for their high power density, high actuation forces and high displacements. The behaviour of specific shape memory alloys as an actuator was investigated to determine the force obtained, the transformation temperatures and details of the material processing. In addition, a large-scale prototype was constructed to validate the response of the proposed system.

  14. Physical principles and current status of emerging non-volatile solid state memories

    NASA Astrophysics Data System (ADS)

    Wang, L.; Yang, C.-H.; Wen, J.

    2015-07-01

    Today the influence of non-volatile solid-state memories on people's lives has become more prominent because of their non-volatility, low data latency, and high robustness. As a pioneering technology representative of non-volatile solid-state memories, flash memory has recently seen widespread application in many areas, ranging from electronic appliances such as cell phones and digital cameras to external storage devices such as universal serial bus (USB) memory. Moreover, owing to its large storage capacity, it is expected that in the near future flash memory will replace hard-disk drives as the dominant technology in the mass storage market, especially because of recently emerging solid-state drives. However, the rapid growth of global digital data has led to the need for flash memories with larger storage capacity, thus requiring a further downscaling of the cell size. Such miniaturization is expected to be extremely difficult because of the well-known scaling limit of flash memories. It is therefore necessary either to explore innovative technologies that can extend the areal density of flash memories beyond the scaling limits, or to vigorously develop alternative non-volatile solid-state memories, including ferroelectric random-access memory, magnetoresistive random-access memory, phase-change random-access memory, and resistive random-access memory. In this paper, we review the physical principles of flash memories and the technical challenges that limit their storage capacity. We then present a detailed discussion of novel technologies that can extend the storage density of flash memories beyond the commonly accepted limits. In each case, we subsequently discuss the physical principles of these new types of non-volatile solid-state memories as well as their respective merits and weaknesses when utilized for data storage applications. Finally, we predict the future prospects of the aforementioned solid-state memories for the next generation of data-storage devices based on a comparison of their performance.

  15. Tracking Control of Shape-Memory-Alloy Actuators Based on Self-Sensing Feedback and Inverse Hysteresis Compensation

    PubMed Central

    Liu, Shu-Hung; Huang, Tse-Shih; Yen, Jia-Yush

    2010-01-01

    Shape memory alloys (SMAs) offer a high power-to-weight ratio, large recovery strain, and low driving voltages, and have thus attracted considerable research attention. The difficulty of controlling SMA actuators arises from their highly nonlinear hysteresis and temperature dependence. This paper describes a combination of self-sensing and model-based control, where the model includes both the major and minor hysteresis loops as well as the thermodynamic effects. The self-sensing algorithm uses only the pulse width modulation (PWM) signal and requires no additional hardware. The method can achieve high-accuracy servo control and is especially suitable for miniaturized applications. PMID:22315530

  16. Hierarchical matrices implemented into the boundary integral approaches for gravity field modelling

    NASA Astrophysics Data System (ADS)

    Čunderlík, Róbert; Vipiana, Francesca

    2017-04-01

    Boundary integral approaches applied for gravity field modelling have recently been developed to solve the geodetic boundary value problems numerically, or to process satellite observations, e.g. from the GOCE satellite mission. In order to obtain numerical solutions of "cm-level" accuracy, such approaches require a very refined level of discretization or resolution. This leads to enormous memory requirements that need to be reduced. An implementation of Hierarchical Matrices (H-matrices) can significantly reduce the numerical complexity of these approaches. The main idea of H-matrices is to approximate the entire system matrix by splitting it into a family of submatrices. Large submatrices are stored in factorized representation, while small submatrices are stored in standard representation. This significantly reduces memory requirements while improving efficiency. The poster presents our preliminary results of implementing H-matrices into the existing boundary integral approaches based on the boundary element method or the method of fundamental solutions.
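    The factorized-block idea can be illustrated on a single admissible block: a smooth far-field kernel block compresses to a short product of factors. A sketch using a truncated SVD, which stands in for the adaptive cross approximation typically used in practice; the cluster geometry and tolerance are illustrative.

```python
import numpy as np

def low_rank_factors(block, tol=1e-10):
    """Compress an admissible (far-field) block into factors U, V with
    U @ V ~= block, truncating singular values below tol * s_max."""
    U, s, Vt = np.linalg.svd(block, full_matrices=False)
    rank = int(np.sum(s > tol * s[0]))
    return U[:, :rank] * s[:rank], Vt[:rank]

# Newtonian kernel 1/|x - y| between two well-separated point clusters
x = np.linspace(0.0, 1.0, 200)
y = np.linspace(10.0, 11.0, 200)
block = 1.0 / np.abs(x[:, None] - y[None, :])

U, V = low_rank_factors(block)
dense_storage = block.size            # 40000 entries in standard form
factored_storage = U.size + V.size    # a few thousand for smooth kernels
rel_err = np.linalg.norm(U @ V - block) / np.linalg.norm(block)
```

    The better the cluster separation, the faster the singular values decay, which is precisely the admissibility condition used to decide which submatrices get the factorized representation.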

  17. Communication: Practical and rigorous reduction of the many-electron quantum mechanical Coulomb problem to O(N^(2/3)) storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pederson, Mark R., E-mail: mark.pederson@science.doe.gov

    2015-04-14

    It is tacitly accepted that, for practical basis sets consisting of N functions, solution of the two-electron Coulomb problem in quantum mechanics requires storage of O(N^4) integrals in the small N limit. For localized functions, in the large N limit, or for planewaves, due to closure, the storage can be reduced to O(N^2) integrals. Here, it is shown that the storage can be further reduced to O(N^(2/3)) for separable basis functions. A practical algorithm that uses standard one-dimensional Gaussian-quadrature sums is demonstrated. The resulting algorithm allows for the simultaneous storage, or fast reconstruction, of any two-electron Coulomb integral required for a many-electron calculation on processors with limited memory and disk space. For example, for calculations involving a basis of 9171 planewaves, the memory required to effectively store all Coulomb integrals decreases from 2.8 Gbytes to less than 2.4 Mbytes.

  18. Communication: practical and rigorous reduction of the many-electron quantum mechanical Coulomb problem to O(N^(2/3)) storage.

    PubMed

    Pederson, Mark R

    2015-04-14

    It is tacitly accepted that, for practical basis sets consisting of N functions, solution of the two-electron Coulomb problem in quantum mechanics requires storage of O(N^4) integrals in the small N limit. For localized functions, in the large N limit, or for planewaves, due to closure, the storage can be reduced to O(N^2) integrals. Here, it is shown that the storage can be further reduced to O(N^(2/3)) for separable basis functions. A practical algorithm that uses standard one-dimensional Gaussian-quadrature sums is demonstrated. The resulting algorithm allows for the simultaneous storage, or fast reconstruction, of any two-electron Coulomb integral required for a many-electron calculation on processors with limited memory and disk space. For example, for calculations involving a basis of 9171 planewaves, the memory required to effectively store all Coulomb integrals decreases from 2.8 Gbytes to less than 2.4 Mbytes.
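    The separability idea behind the storage reduction can be illustrated with a toy example: a 3-D overlap integral of separable functions factorizes into three 1-D quadrature sums, so only 1-D tables ever need to be stored. This sketches the factorization principle only, not the paper's actual algorithm; the quadrature rule and test function are illustrative.

```python
import numpy as np

def overlap_3d_separable(f1d, g1d, nodes, weights):
    """Overlap <f|g> of two separable 3-D functions evaluated as a
    product of three 1-D quadrature sums; only 1-D tables are needed,
    never a full 3-D grid."""
    s = [np.sum(weights * f(nodes) * g(nodes)) for f, g in zip(f1d, g1d)]
    return s[0] * s[1] * s[2]

# 60-point Gauss-Legendre rule mapped from [-1, 1] to [-8, 8]
nodes, weights = np.polynomial.legendre.leggauss(60)
nodes, weights = 8.0 * nodes, 8.0 * weights

gauss = lambda t: np.exp(-t**2 / 2)    # 1-D factor of a 3-D Gaussian
val = overlap_3d_separable([gauss] * 3, [gauss] * 3, nodes, weights)
# each 1-D factor is the integral of e^(-t^2), i.e. sqrt(pi), so val ~= pi**1.5
```

    Storing three 1-D tables instead of one 3-D grid is the same trade the paper exploits for the two-electron integrals, at a much larger scale.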

  19. Memory scrutinized through electrical brain stimulation: A review of 80 years of experiential phenomena.

    PubMed

    Curot, Jonathan; Busigny, Thomas; Valton, Luc; Denuelle, Marie; Vignal, Jean-Pierre; Maillard, Louis; Chauvel, Patrick; Pariente, Jérémie; Trebuchon, Agnès; Bartolomei, Fabrice; Barbeau, Emmanuel J

    2017-07-01

    Electrical brain stimulations (EBS) sometimes induce reminiscences, but it is largely unknown what types of memories they can trigger. We reviewed 80 years of literature on reminiscences induced by EBS and added our own database. We classified them according to modern conceptions of memory. We observed a surprisingly large variety of reminiscences covering all aspects of declarative memory. However, most were poorly detailed and only a few were episodic. This result does not support theories of a highly stable and detailed memory, as initially postulated, and still widely believed as true by the general public. Moreover, memory networks could only be activated by some of their nodes: 94.1% of EBS were temporal, although the parietal and frontal lobes, also involved in memory networks, were stimulated. The qualitative nature of memories largely depended on the site of stimulation: EBS to rhinal cortex mostly induced personal semantic reminiscences, while only hippocampal EBS induced episodic memories. This result supports the view that EBS can activate memory in predictable ways in humans. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  20. Large conditional single-photon cross-phase modulation

    PubMed Central

    Hosseini, Mahdi; Duan, Yiheng; Vuletić, Vladan

    2016-01-01

    Deterministic optical quantum logic requires a nonlinear quantum process that alters the phase of a quantum optical state by π through interaction with only one photon. Here, we demonstrate a large conditional cross-phase modulation between a signal field, stored inside an atomic quantum memory, and a control photon that traverses a high-finesse optical cavity containing the atomic memory. This approach avoids fundamental limitations associated with multimode effects for traveling optical photons. We measure a conditional cross-phase shift of π/6 (and up to π/3 by postselection on photons that remain in the system longer than average) between the retrieved signal and control photons, and confirm deterministic entanglement between the signal and control modes by extracting a positive concurrence. By upgrading to a state-of-the-art cavity, our system can reach a coherent phase shift of π at low loss, enabling deterministic and universal photonic quantum logic. PMID:27519798

  1. Using a Cray Y-MP as an array processor for a RISC Workstation

    NASA Technical Reports Server (NTRS)

    Lamaster, Hugh; Rogallo, Sarah J.

    1992-01-01

    As microprocessors increase in power, the economics of centralized computing has changed dramatically. At the beginning of the 1980s, mainframes and supercomputers were often considered to be cost-effective machines for scalar computing. Today, microprocessor-based RISC (reduced-instruction-set computer) systems have displaced many uses of mainframes and supercomputers. Supercomputers are still cost competitive when processing jobs that require both large memory size and high memory bandwidth. One such application is array processing. Certain numerical operations are appropriate to execute in a Remote Procedure Call (RPC)-based environment. Matrix multiplication is an example of an operation that can have a sufficient number of arithmetic operations to amortize the cost of an RPC call. An experiment is described which demonstrates that matrix multiplication can be executed remotely on a large system faster than it can be executed locally on a workstation.
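    The amortization argument can be made concrete with a crude timing model: an n x n multiply costs about 2*n^3 floating-point operations but moves only about 3*n^2 words, so past some problem size the remote call wins. All rates and constants below are hypothetical round numbers, not measurements from the paper.

```python
def rpc_worthwhile(n, flops_remote, flops_local,
                   bytes_per_word=8, bandwidth=1e6, latency=0.05):
    """Crude break-even model for offloading an n x n matrix multiply:
    compute grows as n^3 while data transfer grows only as n^2.
    All rates here are hypothetical round numbers."""
    t_local = 2 * n**3 / flops_local
    t_transfer = 3 * n**2 * bytes_per_word / bandwidth
    t_remote = latency + t_transfer + 2 * n**3 / flops_remote
    return t_remote < t_local

# With a 100x faster remote machine, only large multiplies amortize the call:
small = rpc_worthwhile(50, flops_remote=1e9, flops_local=1e7)    # False
large = rpc_worthwhile(2000, flops_remote=1e9, flops_local=1e7)  # True
```

    The asymmetry between the n^3 compute term and the n^2 transfer term is the whole argument: for large enough n, transfer time and RPC latency become negligible.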

  2. An algorithm of discovering signatures from DNA databases on a computer cluster.

    PubMed

    Lee, Hsiao Ping; Sheu, Tzu-Fang

    2014-10-05

    Signatures are short sequences that are unique and not similar to any other sequence in a database; they can be used as the basis for identifying different species. Even though several signature discovery algorithms have been proposed in the past, these algorithms require the entire database to be loaded into memory, restricting the amount of data they can process and making them unable to handle databases with large amounts of data. Those algorithms also use sequential models and have slower discovery speeds, meaning that their efficiency can be improved. In this research, we introduce a divide-and-conquer strategy into signature discovery and propose a parallel signature discovery algorithm for a computer cluster. The algorithm applies the divide-and-conquer strategy to overcome the limitation of existing algorithms, which cannot process large databases, and uses a parallel computing mechanism to effectively improve the efficiency of signature discovery. Even when run with only the memory of regular personal computers, the algorithm can still process large databases, such as the human whole-genome EST database, that the existing algorithms could not. The algorithm proposed in this research is not limited by the amount of usable memory and can rapidly find signatures in large databases, making it useful in applications such as Next Generation Sequencing and other large database analysis and processing. The implementation of the proposed algorithm is available at http://www.cs.pu.edu.tw/~fang/DDCSDPrograms/DDCSD.htm.
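    The divide-and-conquer idea can be sketched as a map-and-merge over chunks of the database: each chunk is counted independently (which is what a cluster parallelizes), partial counts are merged, and k-mers seen exactly once are signature candidates. A simplification of the paper's method; the similarity criterion is reduced here to exact uniqueness.

```python
from collections import Counter

def kmer_counts(chunk, k):
    """Count k-mers in one chunk of sequences (small enough for memory)."""
    counts = Counter()
    for seq in chunk:
        for i in range(len(seq) - k + 1):
            counts[seq[i:i + k]] += 1
    return counts

def discover_signatures(chunks, k):
    """Divide-and-conquer sketch: count each chunk independently (one
    cluster node per chunk in the parallel version), merge the partial
    counts, and keep k-mers that occur exactly once in the database."""
    total = Counter()
    for chunk in chunks:
        total.update(kmer_counts(chunk, k))
    return {kmer for kmer, n in total.items() if n == 1}

chunks = [["ACGTACGT"], ["TACGTT"]]
sigs = discover_signatures(chunks, k=4)   # {"CGTA", "GTAC", "CGTT"}
```

    Because each chunk is processed independently, peak memory is bounded by the chunk size rather than by the whole database, which is the property the record emphasizes.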

  3. The application of a sparse, distributed memory to the detection, identification and manipulation of physical objects

    NASA Technical Reports Server (NTRS)

    Kanerva, P.

    1986-01-01

    To determine the relation of the sparse, distributed memory to other architectures, a broad review of the literature was made. The memory is called a pattern memory because it works with large patterns of features (high-dimensional vectors). A pattern is stored in a pattern memory by distributing it over a large number of storage elements and by superimposing it over other stored patterns. A pattern is retrieved by mathematical or statistical reconstruction from the distributed elements. Three pattern memories are discussed.
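    A minimal Kanerva-style sparse distributed memory can be sketched directly from the description above: a pattern is written to every hard location within a Hamming radius of its address and read back by summing counters and taking a per-bit majority vote. The sizes, radius, and noise level below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

class SparseDistributedMemory:
    """Minimal Kanerva-style SDM: a pattern is distributed over all hard
    locations within Hamming radius r of its address, and retrieved by
    summing counters and taking a majority vote per bit."""
    def __init__(self, n_locations, dim, radius):
        self.addresses = rng.integers(0, 2, size=(n_locations, dim))
        self.counters = np.zeros((n_locations, dim), dtype=int)
        self.radius = radius

    def _active(self, address):
        # Hard locations whose random address lies within the radius
        return np.sum(self.addresses != address, axis=1) <= self.radius

    def write(self, address, pattern):
        self.counters[self._active(address)] += 2 * pattern - 1  # +1/-1 votes

    def read(self, address):
        votes = self.counters[self._active(address)].sum(axis=0)
        return (votes > 0).astype(int)

sdm = SparseDistributedMemory(n_locations=2000, dim=64, radius=28)
pattern = rng.integers(0, 2, size=64)
sdm.write(pattern, pattern)     # autoassociative storage
noisy = pattern.copy()
noisy[:5] ^= 1                  # corrupt 5 of 64 bits
recalled = sdm.read(noisy)      # statistical reconstruction
```

    Retrieval from the corrupted address still recovers the stored pattern because the active location sets of nearby addresses overlap heavily, which is the "statistical reconstruction" the record describes.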

  4. An FPGA-Based Massively Parallel Neuromorphic Cortex Simulator

    PubMed Central

    Wang, Runchun M.; Thakur, Chetan S.; van Schaik, André

    2018-01-01

    This paper presents a massively parallel and scalable neuromorphic cortex simulator designed for simulating large and structurally connected spiking neural networks, such as complex models of various areas of the cortex. The main novelty of this work is the abstraction of a neuromorphic architecture into clusters represented by minicolumns and hypercolumns, analogously to the fundamental structural units observed in neurobiology. Without this approach, simulating large-scale fully connected networks needs prohibitively large memory to store look-up tables for point-to-point connections. Instead, we use a novel architecture, based on the structural connectivity in the neocortex, such that all the required parameters and connections can be stored in on-chip memory. The cortex simulator can be easily reconfigured for simulating different neural networks without any change in hardware structure by programming the memory. A hierarchical communication scheme allows one neuron to have a fan-out of up to 200 k neurons. As a proof-of-concept, an implementation on one Altera Stratix V FPGA was able to simulate 20 million to 2.6 billion leaky-integrate-and-fire (LIF) neurons in real time. We verified the system by emulating a simplified auditory cortex (with 100 million neurons). This cortex simulator achieved a low power dissipation of 1.62 μW per neuron. With the advent of commercially available FPGA boards, our system offers an accessible and scalable tool for the design, real-time simulation, and analysis of large-scale spiking neural networks. PMID:29692702

  5. An FPGA-Based Massively Parallel Neuromorphic Cortex Simulator.

    PubMed

    Wang, Runchun M; Thakur, Chetan S; van Schaik, André

    2018-01-01

    This paper presents a massively parallel and scalable neuromorphic cortex simulator designed for simulating large and structurally connected spiking neural networks, such as complex models of various areas of the cortex. The main novelty of this work is the abstraction of a neuromorphic architecture into clusters represented by minicolumns and hypercolumns, analogously to the fundamental structural units observed in neurobiology. Without this approach, simulating large-scale fully connected networks needs prohibitively large memory to store look-up tables for point-to-point connections. Instead, we use a novel architecture, based on the structural connectivity in the neocortex, such that all the required parameters and connections can be stored in on-chip memory. The cortex simulator can be easily reconfigured for simulating different neural networks without any change in hardware structure by programming the memory. A hierarchical communication scheme allows one neuron to have a fan-out of up to 200 k neurons. As a proof-of-concept, an implementation on one Altera Stratix V FPGA was able to simulate 20 million to 2.6 billion leaky-integrate-and-fire (LIF) neurons in real time. We verified the system by emulating a simplified auditory cortex (with 100 million neurons). This cortex simulator achieved a low power dissipation of 1.62 μW per neuron. With the advent of commercially available FPGA boards, our system offers an accessible and scalable tool for the design, real-time simulation, and analysis of large-scale spiking neural networks.
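    The leaky-integrate-and-fire update that such simulators run per neuron per time step can be sketched in vectorized form; the wiring, constants, and function name below are hypothetical, not the FPGA implementation.

```python
import numpy as np

def simulate_lif(weights, input_current, steps, v_thresh=1.0, leak=0.9):
    """Vectorized leaky integrate-and-fire dynamics: leak the membrane,
    integrate synaptic input from last step's spikes plus external
    current, spike at threshold, reset to zero."""
    n = weights.shape[0]
    v = np.zeros(n)
    spikes = np.zeros(n)
    history = []
    for _ in range(steps):
        v = leak * v + weights @ spikes + input_current
        spikes = (v >= v_thresh).astype(float)
        v = np.where(spikes > 0, 0.0, v)   # reset neurons that fired
        history.append(spikes.copy())
    return np.array(history)

# Hypothetical two-neuron wiring: neuron 0 is driven externally and
# excites neuron 1, which starts firing one step later.
w = np.array([[0.0, 0.0],
              [1.5, 0.0]])                 # w[post, pre]
hist = simulate_lif(w, input_current=np.array([1.2, 0.0]), steps=4)
```

    The dense weight matrix here is exactly what does not scale, which is why the simulator above replaces point-to-point look-up tables with structured minicolumn/hypercolumn connectivity.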

  6. A generalized memory test algorithm

    NASA Technical Reports Server (NTRS)

    Milner, E. J.

    1982-01-01

A general algorithm for testing digital computer memory is presented. The test checks that (1) every bit can be cleared and set in each memory word, and (2) bits are not erroneously cleared and/or set elsewhere in memory at the same time. The algorithm can be applied to any size memory block and any size memory word. It is concise and efficient, requiring very few cycles through memory. For example, a test of 16-bit-word-size memory requires only 384 cycles through memory. Approximately 15 seconds were required to test a 32K block of such memory, using a microcomputer having a cycle time of 133 nanoseconds.

  7. Cytomegalovirus Reinfections Stimulate CD8 T-Memory Inflation.

    PubMed

    Trgovcich, Joanne; Kincaid, Michelle; Thomas, Alicia; Griessl, Marion; Zimmerman, Peter; Dwivedi, Varun; Bergdall, Valerie; Klenerman, Paul; Cook, Charles H

    2016-01-01

Cytomegalovirus (CMV) has been shown to induce large populations of CD8 T-effector memory cells that, unlike central memory cells, persist in large quantities following infection, a phenomenon commonly termed "memory inflation". Although murine models to date have shown very large and persistent CMV-specific T-cell expansions following infection, there is considerable variability in CMV-specific T-memory responses in humans. Historically, such memory inflation in humans has been assumed to be a consequence of reactivation events during the life of the host. Because basic information about CMV infection/re-infection and reactivation in immune-competent humans is not available, we used a murine model to test how primary infection, reinfection, and reactivation stimuli influence memory inflation. We show that low-titer infections induce "partial" memory inflation of both mCMV-specific CD8 T-cells and antibody. We show further that reinfection with different strains can boost partial memory inflation. Finally, we show preliminary results suggesting that a single strong reactivation stimulus does not stimulate memory inflation. Altogether, our results suggest that while high-titer primary infections can induce memory inflation, reinfections during the life of a host may be more important than previously appreciated.

  8. Implementing Parquet equations using HPX

    NASA Astrophysics Data System (ADS)

    Kellar, Samuel; Wagle, Bibek; Yang, Shuxiang; Tam, Ka-Ming; Kaiser, Hartmut; Moreno, Juana; Jarrell, Mark

A new C++ runtime system (HPX) enables simulations of complex systems to run more efficiently on parallel and heterogeneous systems. This increased efficiency allows for solutions to larger simulations of the parquet approximation for a system with impurities. The relevance of the parquet equations depends upon the ability to solve systems that require long runs and large amounts of memory. These limitations, in addition to numerical complications arising from the stability of the solutions, necessitate running on large distributed systems. As computational resources trend towards the exascale and the limitations arising from computational resources vanish, the efficiency of large-scale simulations becomes a focus. HPX facilitates efficient simulations through intelligent overlapping of computation and communication. Simulations such as the parquet equations, which require the transfer of large amounts of data, should benefit from HPX implementations. Supported by the NSF EPSCoR Cooperative Agreement No. EPS-1003897 with additional support from the Louisiana Board of Regents.
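The "overlapping of computation and communication" that HPX provides can be illustrated with a small futures-based pipeline. HPX itself is a C++ runtime; this Python analogue (with stand-in `fetch`/`compute` functions) only shows the principle: while block *i* is being processed, the transfer of block *i+1* is already in flight, so transfer latency is hidden behind useful work.

```python
# Sketch of computation/communication overlap via futures. fetch() is a
# stand-in for a remote data transfer, compute() for local work; both are
# illustrative, not HPX APIs.

from concurrent.futures import ThreadPoolExecutor
import time

def fetch(block_id):
    """Simulated remote transfer of one data block."""
    time.sleep(0.01)
    return [block_id] * 4

def compute(block):
    return sum(block)

def pipeline(n_blocks):
    results = []
    with ThreadPoolExecutor(max_workers=1) as pool:
        pending = pool.submit(fetch, 0)          # prefetch the first block
        for i in range(n_blocks):
            block = pending.result()             # wait for the in-flight block
            if i + 1 < n_blocks:
                pending = pool.submit(fetch, i + 1)  # start the next transfer...
            results.append(compute(block))           # ...and compute meanwhile
    return results
```

In a real HPX program the same shape is expressed with `hpx::future` continuations, and the runtime schedules the overlap automatically.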

  9. Simulating Non-Fickian Transport across Péclet Regimes by doing Lévy Flights in the Rank Space of Velocity

    NASA Astrophysics Data System (ADS)

    Most, S.; Dentz, M.; Bolster, D.; Bijeljic, B.; Nowak, W.

    2017-12-01

Transport in real porous media shows non-Fickian characteristics. In the Lagrangian perspective this leads to skewed distributions of particle arrival times. The skewness is triggered by the particles' memory of velocity, which persists over a characteristic length. Capturing process memory is essential to represent non-Fickianity thoroughly. Classical non-Fickian models (e.g., CTRW models) simulate the effects of memory but not the mechanisms leading to process memory. CTRWs have been applied successfully in many studies but nonetheless have drawbacks. In classical CTRWs each particle makes a spatial transition for which it adopts a random transit time. Consecutive transit times are drawn independently of each other, which is only valid for sufficiently large spatial transitions. If we want to apply a finer numerical resolution than that, we have to implement memory into the simulation. Recent CTRW methods use transition matrices to simulate correlated transit times. However, deriving such transition matrices requires transport data from a fine-scale transport simulation, and the obtained transition matrix is valid only for that single Péclet regime. The CTRW method we propose overcomes all three drawbacks: 1) We simulate transport without restrictions on transition length. 2) We parameterize our CTRW without requiring a transport simulation. 3) Our parameterization scales across Péclet regimes. We do so by sampling the pore-scale velocity distribution to generate correlated transit times as a Lévy flight on the CDF axis of velocities with reflection at 0 and 1. The Lévy flight is parametrized only by the correlation length. We explicitly model memory, including the evolution and decay of non-Fickianity, so the model extends from local via pre-asymptotic to asymptotic scales.

  10. Identifying High-Rate Flows Based on Sequential Sampling

    NASA Astrophysics Data System (ADS)

    Zhang, Yu; Fang, Binxing; Luo, Hao

We consider the problem of fast identification of high-rate flows in backbone links with possibly millions of flows. Accurate identification of high-rate flows is important for active queue management, traffic measurement and network security, such as the detection of distributed denial-of-service attacks. It is difficult to directly identify high-rate flows in backbone links because tracking the possible millions of flows requires correspondingly large high-speed memories. To reduce the measurement overhead, the deterministic 1-out-of-k sampling technique is adopted, which is also implemented in Cisco routers (NetFlow). Ideally, a high-rate flow identification method should have short identification time, low memory cost and low processing cost. Most importantly, it should be able to specify the identification accuracy. We develop two such methods. The first method is based on the fixed sample size test (FSST), which is able to identify high-rate flows with user-specified identification accuracy. However, since FSST has to record every sampled flow during the measurement period, it is not memory efficient. Therefore a second, novel method based on the truncated sequential probability ratio test (TSPRT) is proposed. Through sequential sampling, TSPRT is able to remove the low-rate flows and identify the high-rate flows at an early stage, which reduces the memory cost and identification time respectively. According to the way the parameters in TSPRT are determined, two versions of TSPRT are proposed: TSPRT-M, which is suitable when low memory cost is preferred, and TSPRT-T, which is suitable when short identification time is preferred. The experimental results show that TSPRT requires less memory and identification time in identifying high-rate flows while satisfying the accuracy requirement, as compared to previously proposed methods.

  11. Two Components of Aversive Memory in Drosophila, Anesthesia-Sensitive and Anesthesia-Resistant Memory, Require Distinct Domains Within the Rgk1 Small GTPase.

    PubMed

    Murakami, Satoshi; Minami-Ohtsubo, Maki; Nakato, Ryuichiro; Shirahige, Katsuhiko; Tabata, Tetsuya

    2017-05-31

Multiple components have been identified that exhibit different stabilities for aversive olfactory memory in Drosophila. These components have been defined by behavioral and genetic studies, and genes specifically required for individual components have also been identified. Intermediate-term memory generated after single-cycle conditioning is divided into anesthesia-sensitive memory (ASM) and anesthesia-resistant memory (ARM), with the latter being more stable. We determined that the ASM and ARM pathways converged on the Rgk1 small GTPase and that the N-terminal domain-deleted Rgk1 was sufficient for ASM formation, whereas the full-length form was required for ARM formation. Rgk1 specifically accumulates at the synaptic sites of the Kenyon cells (KCs), the intrinsic neurons of the mushroom bodies, which play a pivotal role in olfactory memory formation. A higher than normal Rgk1 level enhanced memory retention, which is consistent with the result that Rgk1 suppressed Rac-dependent memory decay; these findings suggest that rgk1 bolsters ASM via the suppression of forgetting. We propose that Rgk1 plays a pivotal role in the regulation of memory stabilization by serving as a molecular node that resides at KC synapses, where the ASM and ARM pathways may interact. SIGNIFICANCE STATEMENT Memory consists of multiple components. Drosophila olfactory memory serves as a fundamental model with which to investigate the mechanisms that underlie memory formation and has provided genetic and molecular means to identify the components of memory, namely short-term, intermediate-term, and long-term memory, depending on how long the memory lasts. Intermediate memory is further divided into anesthesia-sensitive memory (ASM) and anesthesia-resistant memory (ARM), with the latter being more stable. We have identified a small GTPase in Drosophila, Rgk1, which plays a pivotal role in the regulation of olfactory memory stability. Rgk1 is required for both ASM and ARM. Moreover, the N-terminal domain-deleted Rgk1 was sufficient for ASM formation, whereas the full-length form was required for ARM formation. Copyright © 2017 the authors.

  12. Trace: a high-throughput tomographic reconstruction engine for large-scale datasets.

    PubMed

    Bicer, Tekin; Gürsoy, Doğa; Andrade, Vincent De; Kettimuthu, Rajkumar; Scullin, William; Carlo, Francesco De; Foster, Ian T

    2017-01-01

    Modern synchrotron light sources and detectors produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used imaging techniques that generates data at tens of gigabytes per second is computed tomography (CT). Although CT experiments result in rapid data generation, the analysis and reconstruction of the collected data may require hours or even days of computation time with a medium-sized workstation, which hinders the scientific progress that relies on the results of analysis. We present Trace, a data-intensive computing engine that we have developed to enable high-performance implementation of iterative tomographic reconstruction algorithms for parallel computers. Trace provides fine-grained reconstruction of tomography datasets using both (thread-level) shared memory and (process-level) distributed memory parallelization. Trace utilizes a special data structure called replicated reconstruction object to maximize application performance. We also present the optimizations that we apply to the replicated reconstruction objects and evaluate them using tomography datasets collected at the Advanced Photon Source. Our experimental evaluations show that our optimizations and parallelization techniques can provide 158× speedup using 32 compute nodes (384 cores) over a single-core configuration and decrease the end-to-end processing time of a large sinogram (with 4501 × 1 × 22,400 dimensions) from 12.5 h to <5 min per iteration. The proposed tomographic reconstruction engine can efficiently process large-scale tomographic data using many compute nodes and minimize reconstruction times.

  13. Efficiently modeling neural networks on massively parallel computers

    NASA Technical Reports Server (NTRS)

    Farber, Robert M.

    1993-01-01

Neural networks are a very useful tool for analyzing and modeling complex real world systems. Applying neural network simulations to real world problems generally involves large amounts of data and massive amounts of computation. To efficiently handle the computational requirements of large problems, we have implemented at Los Alamos a highly efficient neural network compiler for serial computers, vector computers, vector parallel computers, and fine grain SIMD computers such as the CM-2 connection machine. This paper describes the mapping used by the compiler to implement feed-forward backpropagation neural networks for a SIMD (Single Instruction Multiple Data) architecture parallel computer. Thinking Machines Corporation has benchmarked our code at 1.3 billion interconnects per second (approximately 3 gigaflops) on a 64,000 processor CM-2 connection machine (Singer 1990). This mapping is applicable to other SIMD computers and can be implemented on MIMD computers such as the CM-5 connection machine. Our mapping has virtually no communications overhead, with the exception of the communications required for a global summation across the processors (which has sub-linear runtime growth, on the order of O(log(number of processors))). We can efficiently model very large neural networks which have many neurons and interconnects, and our mapping can extend to arbitrarily large networks (within memory limitations) by merging the memory space of separate processors with fast adjacent-processor interprocessor communications. This paper considers the simulation of only feed-forward neural networks, although the method is extendable to recurrent networks.
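The one communication step the mapping cannot avoid, the global summation, achieves its O(log P) growth through a combining-tree (recursive doubling) pattern. A serial sketch of that communication structure (on the CM-2 this is done in the machine's reduction network, not in software like this):

```python
# Tree reduction: sum one value per "processor" in ceil(log2(P)) combining
# steps. At each step, processors at distance `step` exchange and combine,
# halving the number of active partial sums.

def tree_sum(values):
    """Return (total, number_of_combining_steps)."""
    vals = list(values)
    step = 1
    steps_taken = 0
    while step < len(vals):
        for i in range(0, len(vals) - step, 2 * step):
            vals[i] += vals[i + step]     # partner exchange at distance `step`
        step *= 2
        steps_taken += 1
    return vals[0], steps_taken
```

With 64 processors the total arrives after 6 combining steps rather than 63 serial additions, which is the sub-linear growth cited above.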

  14. Parallel Multivariate Spatio-Temporal Clustering of Large Ecological Datasets on Hybrid Supercomputers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sreepathi, Sarat; Kumar, Jitendra; Mills, Richard T.

A proliferation of data from vast networks of remote sensing platforms (satellites, unmanned aircraft systems (UAS), airborne etc.), observational facilities (meteorological, eddy covariance etc.), state-of-the-art sensors, and simulation models offers unprecedented opportunities for scientific discovery. Unsupervised classification is a widely applied data mining approach to derive insights from such data. However, classification of very large data sets is a complex computational problem that requires efficient numerical algorithms and implementations on high performance computing (HPC) platforms. Additionally, increasing power, space, cooling and efficiency requirements have led to the deployment of hybrid supercomputing platforms with complex architectures and memory hierarchies like the Titan system at Oak Ridge National Laboratory. The advent of such accelerated computing architectures offers new challenges and opportunities for big data analytics in general and specifically, large scale cluster analysis in our case. Although there is an existing body of work on parallel cluster analysis, those approaches do not fully meet the needs imposed by the nature and size of our large data sets. Moreover, they had scaling limitations and were mostly limited to traditional distributed memory computing platforms. We present a parallel Multivariate Spatio-Temporal Clustering (MSTC) technique based on k-means cluster analysis that can target hybrid supercomputers like Titan. We developed a hybrid MPI, CUDA and OpenACC implementation that can utilize both CPU and GPU resources on computational nodes. We describe performance results on Titan that demonstrate the scalability and efficacy of our approach in processing large ecological data sets.
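For readers unfamiliar with the underlying k-means step, here is a minimal serial sketch of the assign/update loop; the paper's contribution is the MPI/CUDA/OpenACC parallelization of exactly this loop across CPU and GPU resources, which is not shown here.

```python
# Minimal k-means: alternate (1) assigning each point to its nearest centre
# by squared Euclidean distance and (2) moving each centre to the mean of
# its assigned points.

def kmeans(points, centers, iters=10):
    for _ in range(iters):
        # assignment step
        clusters = [[] for _ in centers]
        for p in points:
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            clusters[d.index(min(d))].append(p)
        # update step (keep a centre unchanged if its cluster is empty)
        centers = [
            tuple(sum(x) / len(cl) for x in zip(*cl)) if cl else c
            for cl, c in zip(clusters, centers)
        ]
    return centers
```

In the parallel setting, the assignment step is embarrassingly parallel over points, and the update step reduces per-cluster sums across nodes, which maps naturally onto MPI reductions and GPU kernels.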

  15. Oscillatory mechanisms of process binding in memory.

    PubMed

    Klimesch, Wolfgang; Freunberger, Roman; Sauseng, Paul

    2010-06-01

A central topic in cognitive neuroscience is the question of which processes underlie large-scale communication within and between different neural networks. The basic assumption is that oscillatory phase synchronization plays an important role for process binding (the transient linking of different cognitive processes), which may be considered a special type of large-scale communication. We investigate this question for memory processes on the basis of different types of oscillatory synchronization mechanisms. The reviewed findings suggest that theta and alpha phase coupling (and phase reorganization) reflect control processes in two large memory systems: a working memory system and a complex knowledge system that comprises semantic long-term memory. It is suggested that alpha phase synchronization may be interpreted in terms of processes that coordinate top-down control (a process guided by expectancy to focus on relevant search areas) and access to memory traces (a process leading to the activation of a memory trace). An analogous interpretation is suggested for theta oscillations and the controlled access to episodic memories. Copyright (c) 2009 Elsevier Ltd. All rights reserved.
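One standard way such phase coupling is quantified (offered here as general background, not as this review's specific method) is the phase-locking value: the magnitude of the mean phase-difference vector between two signals, where 1 means perfectly locked phases and values near 0 mean no consistent phase relation.

```python
# Phase-locking value (PLV) between two instantaneous-phase time series.
# Each phase difference is mapped to a unit vector on the complex plane;
# consistent differences add coherently, inconsistent ones cancel.

import cmath

def plv(phases_a, phases_b):
    diffs = [cmath.exp(1j * (a - b)) for a, b in zip(phases_a, phases_b)]
    return abs(sum(diffs) / len(diffs))
```

In practice the instantaneous phases would come from a Hilbert or wavelet transform of band-filtered (e.g., theta or alpha) EEG signals.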

  16. User Requirements in Identifying Desired Works in a Large Library. Final Report.

    ERIC Educational Resources Information Center

    Lipetz, Ben-Ami

    Utilization of the card catalog in the main library (Sterling Memorial Library) of Yale University was studied over a period of more than a year. Traffic flow in the catalog was observed, and was used as the basis for scheduling interviews with a representative sample of catalog users at the moment of catalog use. More than 2000 interviews were…

  17. WinHPC System Policies | High-Performance Computing | NREL

    Science.gov Websites

Jobs requiring high CPU utilization or large amounts of memory should be run on the worker nodes rather than on WinHPC02. User data are removed when NREL worker status is discontinued, so users should make arrangements to save their data. Licenses are returned to the license pool, and become available to other users, when users close the application.

  18. Expert system shell to reason on large amounts of data

    NASA Technical Reports Server (NTRS)

    Giuffrida, Gionanni

    1994-01-01

The current database management systems (DBMSs) do not provide a sophisticated environment for developing rule-based expert system applications. Some of the new DBMSs come with some sort of rule mechanism; these are active and deductive database systems. However, neither of these is full-featured enough to support implementations based entirely on rules. On the other hand, current expert system shells do not provide any link with external databases. That is, all the data are kept in the system working memory, which is maintained in main memory. For some applications the limited size of the available working memory can constrain development. Typically these are applications that require reasoning over huge amounts of data, which do not fit into the computer's main memory. Moreover, in some cases these data may already be available in database systems and continuously updated while the expert system is running. This paper proposes an architecture that employs knowledge-discovery techniques to reduce the amount of data to be stored in main memory; in this architecture a standard DBMS is coupled with a rule-based language. The data are stored in the DBMS. An interface between the two systems is responsible for inducing knowledge from the set of relations. Such induced knowledge is then transferred to the rule-based language's working memory.
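A toy sketch of the proposed coupling, with all names and the "induction" step invented for illustration: rather than copying every row of a relation into the expert system's working memory, the interface queries the DBMS and asserts only a compact summary fact.

```python
# Hypothetical interface sketch (illustrative names; the "induction" here is
# just a thresholded selection standing in for real knowledge discovery).
# Only the induced summary, not the raw relation, reaches working memory.

import sqlite3

def induce_threshold_rule(conn, table, column, threshold):
    """Summarize a relation as one fact: the keys whose `column` exceeds
    `threshold`. Table/column names are interpolated for brevity only."""
    cur = conn.execute(
        f"SELECT key FROM {table} WHERE {column} > ?", (threshold,))
    keys = {row[0] for row in cur}
    return {"rule": f"{column}_gt_{threshold}", "keys": keys}

# The full relation lives in the DBMS...
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (key TEXT, value REAL)")
conn.executemany("INSERT INTO readings VALUES (?, ?)",
                 [("a", 1.0), ("b", 5.0), ("c", 9.0)])

# ...while working memory holds only the induced fact.
working_memory = [induce_threshold_rule(conn, "readings", "value", 4.0)]
```

The working-memory footprint then scales with the number of induced facts rather than the number of database tuples, which is the point of the architecture.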

  19. Durability of carbon fiber reinforced shape memory polymer composites in space

    NASA Astrophysics Data System (ADS)

    Jang, Joon Hyeok; Hong, Seok Bin; Ahn, Yong San; Kim, Jin-Gyun; Nam, Yong-Youn; Lee, Geun Ho; Yu, Woong-Ryeol

    2016-04-01

Shape memory polymer (SMP) is a smart polymer that exhibits a shape memory effect upon external stimuli. Recently, shape memory polymer composites (SMPCs) have been considered for space structures instead of shape memory alloys due to their deformability, light weight and large recovery ratio, requiring characterization of their mechanical properties in the harsh space environment and prediction of the durability of SMPCs in space. As such, the durability of carbon fiber reinforced shape memory polymer composites (CF-SMPCs) was investigated using an accelerated testing method based on short-term testing of CF-SMPCs under harsh conditions. CF-SMPCs were prepared using woven carbon fabrics and a thermoset SMP via a vacuum-assisted resin transfer molding process. Bending tests at a constant strain rate were conducted on CF-SMPCs using a universal tensile machine (UTM), and storage modulus tests were conducted using dynamic mechanical thermal analysis (DMTA). Using the results, a master curve based on the time-temperature superposition principle was constructed, through which the mechanical properties of CF-SMPCs at harsh temperatures were predicted. CF-SMPCs would be exposed to simulated space environments under ultraviolet radiation at various temperatures. The mechanical properties, including flexural and tensile strength, and the shape memory properties of the SMPCs would be measured using the UTM before and after such exposures for comparison. Finally, the durability of the SMPCs in space would be assessed by developing a degradation model of the SMPC.
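The time-temperature superposition step can be made concrete with the commonly used WLF (Williams-Landel-Ferry) form of the shift factor, which slides data measured at temperature T onto a master curve at a reference temperature. The constants below are the textbook "universal" values, an assumption for illustration; an actual study fits its own constants to the DMTA data.

```python
# WLF time-temperature superposition sketch. log10(a_T) is the horizontal
# shift applied to data taken at temperature T to place it on the master
# curve at T_ref; c1, c2 are material constants (universal values assumed).

def wlf_log_shift(T, T_ref, c1=17.44, c2=51.6):
    """log10 of the shift factor a_T (temperatures in the same units)."""
    return -c1 * (T - T_ref) / (c2 + (T - T_ref))

def shifted_time(t, T, T_ref):
    """Equivalent time at T_ref for a measurement of duration t at T."""
    return t / (10 ** wlf_log_shift(T, T_ref))
```

This is what makes accelerated testing work: a short test at an elevated temperature maps onto a much longer equivalent time at the reference temperature.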

  20. Never forget a name: white matter connectivity predicts person memory

    PubMed Central

    Metoki, Athanasia; Alm, Kylie H.; Wang, Yin; Ngo, Chi T.; Olson, Ingrid R.

    2018-01-01

    Through learning and practice, we can acquire numerous skills, ranging from the simple (whistling) to the complex (memorizing operettas in a foreign language). It has been proposed that complex learning requires a network of brain regions that interact with one another via white matter pathways. One candidate white matter pathway, the uncinate fasciculus (UF), has exhibited mixed results for this hypothesis: some studies have shown UF involvement across a range of memory tasks, while other studies report null results. Here, we tested the hypothesis that the UF supports associative memory processes and that this tract can be parcellated into subtracts that support specific types of memory. Healthy young adults performed behavioral tasks (two face-name learning tasks, one word pair memory task) and underwent a diffusion-weighted imaging scan. Our results revealed that variation in UF microstructure was significantly associated with individual differences in performance on both face-name tasks, as well as the word association memory task. A UF sub-tract, functionally defined by its connectivity between face-selective regions in the anterior temporal lobe and orbitofrontal cortex, selectively predicted face-name learning. In contrast, connectivity between the fusiform face patch and both anterior face patches had no predictive validity. These findings suggest that there is a robust and replicable relationship between the UF and associative learning and memory. Moreover, this large white matter pathway can be subdivided to reveal discrete functional profiles. PMID:28646241

  1. The components of working memory updating: an experimental decomposition and individual differences.

    PubMed

    Ecker, Ullrich K H; Lewandowsky, Stephan; Oberauer, Klaus; Chee, Abby E H

    2010-01-01

Working memory updating (WMU) has been identified as a cognitive function of prime importance for everyday tasks and has also been found to be a significant predictor of higher mental abilities. Yet, little is known about the constituent processes of WMU. We suggest that operations required in a typical WMU task can be decomposed into 3 major component processes: retrieval, transformation, and substitution. We report a large-scale experiment that instantiated all possible combinations of those 3 component processes. Results show that the 3 components make independent contributions to updating performance. We additionally present structural equation models that link WMU task performance and working memory capacity (WMC) measures. These feature the methodological advancement of estimating interindividual covariation and experimental effects on mean updating measures simultaneously. The modeling results imply that WMC is a strong predictor of WMU skills in general, although some component processes (in particular, substitution skills) were independent of WMC. Hence, the reported predictive power of WMU measures may rely largely on common WM functions also measured in typical WMC tasks, although substitution skills may make an independent contribution to predicting higher mental abilities. (PsycINFO Database Record (c) 2009 APA, all rights reserved).

  2. Lidocaine attenuates anisomycin-induced amnesia and release of norepinephrine in the amygdala

    PubMed Central

    Sadowski, Renee N.; Canal, Clint E.; Gold, Paul E.

    2011-01-01

    When administered near the time of training, protein synthesis inhibitors such as anisomycin impair later memory. A common interpretation of these findings is that memory consolidation requires new protein synthesis initiated by training. However, recent findings support an alternative interpretation that abnormally large increases in neurotransmitter release after injections of anisomycin may be responsible for producing amnesia. In the present study, a local anesthetic was administered prior to anisomycin injections in an attempt to mitigate neurotransmitter actions and thereby attenuate the resulting amnesia. Rats received lidocaine and anisomycin injections into the amygdala 130 and 120 min, respectively, prior to inhibitory avoidance training. Memory tests 48 hr later revealed that lidocaine attenuated anisomycin-induced amnesia. In other rats, in vivo microdialysis was performed at the site of amygdala infusion of lidocaine and anisomycin. As seen previously, anisomycin injections produced large increases in release of norepinephrine in the amygdala. Lidocaine attenuated the anisomycin-induced increase in release of norepinephrine but did not reverse anisomycin inhibition of protein synthesis, as assessed by c-Fos immunohistochemistry. These findings are consistent with past evidence suggesting that anisomycin causes amnesia by initiating abnormal release of neurotransmitters in response to the inhibition of protein synthesis. PMID:21453778

  3. Proteinortho: Detection of (Co-)orthologs in large-scale analysis

    PubMed Central

    2011-01-01

    Background Orthology analysis is an important part of data analysis in many areas of bioinformatics such as comparative genomics and molecular phylogenetics. The ever-increasing flood of sequence data, and hence the rapidly increasing number of genomes that can be compared simultaneously, calls for efficient software tools, as brute-force approaches with quadratic memory requirements become infeasible in practice. The rapid pace at which new data become available, furthermore, makes it desirable to compute genome-wide orthology relations for a given dataset rather than relying on relations listed in databases. Results The program Proteinortho described here is a stand-alone tool that is geared towards large datasets and makes use of distributed computing techniques when run on multi-core hardware. It implements an extended version of the reciprocal best alignment heuristic. We apply Proteinortho to compute orthologous proteins in the complete set of all 717 eubacterial genomes available at NCBI at the beginning of 2009. We identified thirty proteins present in 99% of all bacterial proteomes. Conclusions Proteinortho significantly reduces the required amount of memory for orthology analysis compared to existing tools, allowing such computations to be performed on off-the-shelf hardware. PMID:21526987
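The reciprocal best alignment heuristic at the core of such tools can be sketched compactly: gene a in genome A and gene b in genome B are called putative orthologs when each is the other's best-scoring match. (Proteinortho extends this basic scheme with adaptive score cutoffs and co-ortholog handling, which are not shown here.)

```python
# Reciprocal best hit (RBH) sketch. scores map (query, subject) pairs to
# alignment scores, e.g. from all-vs-all BLAST between two genomes.

def best_hits(scores):
    """{(query, subject): score} -> {query: best-scoring subject}"""
    best = {}
    for (q, s), sc in scores.items():
        if q not in best or sc > best[q][1]:
            best[q] = (s, sc)
    return {q: s for q, (s, _) in best.items()}

def reciprocal_best_hits(a_vs_b, b_vs_a):
    """Pairs (a, b) where a's best hit is b AND b's best hit is a."""
    fwd, rev = best_hits(a_vs_b), best_hits(b_vs_a)
    return {(a, b) for a, b in fwd.items() if rev.get(b) == a}
```

Note that this formulation only ever keeps one best hit per query in memory, rather than the full quadratic score matrix, which is the kind of saving the abstract refers to.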

  4. A Potential Spatial Working Memory Training Task to Improve Both Episodic Memory and Fluid Intelligence

    PubMed Central

    Rudebeck, Sarah R.; Bor, Daniel; Ormond, Angharad; O’Reilly, Jill X.; Lee, Andy C. H.

    2012-01-01

    One current challenge in cognitive training is to create a training regime that benefits multiple cognitive domains, including episodic memory, without relying on a large battery of tasks, which can be time-consuming and difficult to learn. By giving careful consideration to the neural correlates underlying episodic and working memory, we devised a computerized working memory training task in which neurologically healthy participants were required to monitor and detect repetitions in two streams of spatial information (spatial location and scene identity) presented simultaneously (i.e. a dual n-back paradigm). Participants’ episodic memory abilities were assessed before and after training using two object and scene recognition memory tasks incorporating memory confidence judgments. Furthermore, to determine the generalizability of the effects of training, we also assessed fluid intelligence using a matrix reasoning task. By examining the difference between pre- and post-training performance (i.e. gain scores), we found that the trainers, compared to non-trainers, exhibited a significant improvement in fluid intelligence after 20 days. Interestingly, pre-training fluid intelligence performance, but not training task improvement, was a significant predictor of post-training fluid intelligence improvement, with lower pre-training fluid intelligence associated with greater post-training gain. Crucially, trainers who improved the most on the training task also showed an improvement in recognition memory as captured by d-prime scores and estimates of recollection and familiarity memory. Training task improvement was a significant predictor of gains in recognition and familiarity memory performance, with greater training improvement leading to more marked gains. In contrast, lower pre-training recollection memory scores, and not training task improvement, led to greater recollection memory performance after training. 
Our findings demonstrate that practice on a single working memory task can potentially improve aspects of both episodic memory and fluid intelligence, and that an extensive training regime with multiple tasks may not be necessary. PMID:23209740
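The dual n-back scoring logic described in this record can be sketched as follows (parameters are illustrative; the authors' exact task settings are not reproduced here). On each trial the participant monitors two simultaneous streams, spatial location and scene identity, and must flag a repetition in either stream relative to n trials back:

```python
# Dual n-back target computation: for each trial, report which of the two
# streams (location, scene) repeats its value from n trials earlier.
# Trials before index n can never be targets.

def nback_targets(locations, scenes, n):
    targets = []
    for i in range(len(locations)):
        if i < n:
            targets.append((False, False))
        else:
            targets.append((locations[i] == locations[i - n],
                            scenes[i] == scenes[i - n]))
    return targets
```

Comparing these ground-truth targets against participants' responses yields the hit/false-alarm counts from which measures such as d-prime are computed.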

  5. A potential spatial working memory training task to improve both episodic memory and fluid intelligence.

    PubMed

    Rudebeck, Sarah R; Bor, Daniel; Ormond, Angharad; O'Reilly, Jill X; Lee, Andy C H

    2012-01-01

    One current challenge in cognitive training is to create a training regime that benefits multiple cognitive domains, including episodic memory, without relying on a large battery of tasks, which can be time-consuming and difficult to learn. By giving careful consideration to the neural correlates underlying episodic and working memory, we devised a computerized working memory training task in which neurologically healthy participants were required to monitor and detect repetitions in two streams of spatial information (spatial location and scene identity) presented simultaneously (i.e. a dual n-back paradigm). Participants' episodic memory abilities were assessed before and after training using two object and scene recognition memory tasks incorporating memory confidence judgments. Furthermore, to determine the generalizability of the effects of training, we also assessed fluid intelligence using a matrix reasoning task. By examining the difference between pre- and post-training performance (i.e. gain scores), we found that the trainers, compared to non-trainers, exhibited a significant improvement in fluid intelligence after 20 days. Interestingly, pre-training fluid intelligence performance, but not training task improvement, was a significant predictor of post-training fluid intelligence improvement, with lower pre-training fluid intelligence associated with greater post-training gain. Crucially, trainers who improved the most on the training task also showed an improvement in recognition memory as captured by d-prime scores and estimates of recollection and familiarity memory. Training task improvement was a significant predictor of gains in recognition and familiarity memory performance, with greater training improvement leading to more marked gains. In contrast, lower pre-training recollection memory scores, and not training task improvement, led to greater recollection memory performance after training. Our findings demonstrate that practice on a single working memory task can potentially improve aspects of both episodic memory and fluid intelligence, and that an extensive training regime with multiple tasks may not be necessary.

  6. Filamentary model in resistive switching materials

    NASA Astrophysics Data System (ADS)

    Jasmin, Alladin C.

    2017-12-01

    The need for next generation computer devices is increasing as the demand for efficient data processing increases. The amount of data generated every second also increases which requires large data storage devices. Oxide-based memory devices are being studied to explore new research frontiers thanks to modern advances in nanofabrication. Various oxide materials are studied as active layers for non-volatile memory. This technology has potential application in resistive random-access-memory (ReRAM) and can be easily integrated in CMOS technologies. The long term perspective of this research field is to develop devices which mimic how the brain processes information. To realize such application, a thorough understanding of the charge transport and switching mechanism is important. A new perspective in the multistate resistive switching based on current-induced filament dynamics will be discussed. A simple equivalent circuit of the device gives quantitative information about the nature of the conducting filament at different resistance states.

  7. Multinode reconfigurable pipeline computer

    NASA Technical Reports Server (NTRS)

    Nosenchuck, Daniel M. (Inventor); Littman, Michael G. (Inventor)

    1989-01-01

    A multinode parallel-processing computer is made up of a plurality of interconnected, large capacity nodes each including a reconfigurable pipeline of functional units such as Integer Arithmetic Logic Processors, Floating Point Arithmetic Processors, Special Purpose Processors, etc. The reconfigurable pipeline of each node is connected to a multiplane memory by a Memory-ALU switch NETwork (MASNET). The reconfigurable pipeline includes three (3) basic substructures formed from functional units which have been found to be sufficient to perform the bulk of all calculations. The MASNET controls the flow of signals from the memory planes to the reconfigurable pipeline and vice versa. The nodes are connectable together by an internode data router (hyperspace router) so as to form a hypercube configuration. The capability of the nodes to conditionally configure the pipeline at each tick of the clock, without requiring a pipeline flush, permits many powerful algorithms to be implemented directly.

  8. Hippocampal-cortical interaction in decision making

    PubMed Central

    Yu, Jai Y.; Frank, Loren M.

    2014-01-01

    When making a decision it is often necessary to consider the available alternatives in order to choose the most appropriate option. This deliberative process, where the pros and cons of each option are considered, relies on memories of past actions and outcomes. The hippocampus and prefrontal cortex are required for memory encoding, memory retrieval and decision making, but it is unclear how these areas support deliberation. Here we examine the potential neural substrates of these processes in the rat. The rat is a powerful model to investigate the network mechanisms underlying deliberation in the mammalian brain given the anatomical and functional conservation of its hippocampus and prefrontal cortex to other mammalian systems. Importantly, it is amenable to large-scale neural recording while performing laboratory tasks that exploit its natural decision-making behavior. Focusing on findings in the rat, we discuss how hippocampal-cortical interactions could provide a neural substrate for deliberative decision making. PMID:24530374

  9. [Method of file sorting for mini- and microcomputers].

    PubMed

    Chau, N; Legras, B; Benamghar, L; Martin, J

    1983-05-01

    The authors describe a new file-sorting method which belongs to the class of direct-addressing sorting methods. It makes use of a variant of the classical technique of 'virtual memory'. It is particularly well suited to mini- and micro-computers which have a small core memory (32 K words, for example) and are fitted with a direct-access peripheral device, such as a disc unit. When the file to be sorted is medium-sized (a few thousand records), the running of the program takes place essentially inside the core memory and, consequently, the method becomes very fast. This is very important because most medical files handled in our laboratory are in this category. However, the method is also suitable for big computers and large files; its implementation is easy. It does not require any magnetic tape unit, and it seems to us to be one of the fastest methods available.

  10. Simultaneous analysis of large INTEGRAL/SPI datasets: Optimizing the computation of the solution and its variance using sparse matrix algorithms

    NASA Astrophysics Data System (ADS)

    Bouchet, L.; Amestoy, P.; Buttari, A.; Rouet, F.-H.; Chauvin, M.

    2013-02-01

    Nowadays, analyzing and reducing the ever larger astronomical datasets is becoming a crucial challenge, especially for long cumulated observation times. The INTEGRAL/SPI X/γ-ray spectrometer is an instrument for which it is essential to process many exposures at the same time in order to increase the low signal-to-noise ratio of the weakest sources. In this context, the conventional methods for data reduction are inefficient and sometimes not feasible at all. Processing several years of data simultaneously requires computing not only the solution of a large system of equations, but also the associated uncertainties. We aim at reducing the computation time and the memory usage. Since the SPI transfer function is sparse, we have used some popular methods for the solution of large sparse linear systems; we briefly review these methods. We use the Multifrontal Massively Parallel Solver (MUMPS) to compute the solution of the system of equations. We also need to compute the variance of the solution, which amounts to computing selected entries of the inverse of the sparse matrix corresponding to our linear system. This can be achieved through one of the latest features of the MUMPS software that has been partly motivated by this work. In this paper we provide a brief presentation of this feature and evaluate its effectiveness on astrophysical problems requiring the processing of large datasets simultaneously, such as the study of the entire emission of the Galaxy. We used these algorithms to solve the large sparse systems arising from SPI data processing and to obtain both their solutions and the associated variances. In conclusion, thanks to these newly developed tools, processing large datasets arising from SPI is now feasible with both a reasonable execution time and a low memory usage.
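
    The variance computation described above amounts to extracting selected (here, diagonal) entries of the inverse of a sparse matrix. As an illustrative sketch only (not the MUMPS selected-inverse feature itself), the same idea can be shown with SciPy's sparse LU factorization on a hypothetical toy system: factor once, then reuse the factorization for both the solution and the requested entries of the inverse, never forming the dense inverse.

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import splu

# Hypothetical tiny system A x = b standing in for the (much larger)
# SPI normal equations; SciPy's sparse LU plays the role of MUMPS.
A = csc_matrix(np.array([[4.0, 1.0, 0.0],
                         [1.0, 3.0, 1.0],
                         [0.0, 1.0, 2.0]]))
b = np.array([1.0, 2.0, 3.0])

lu = splu(A)      # factor the sparse matrix once
x = lu.solve(b)   # solution of the linear system

# Variances of the solution are selected (diagonal) entries of A^{-1}:
# reuse the factorization to solve A y_i = e_i and keep only entry i.
n = A.shape[0]
variances = np.array([lu.solve(np.eye(n)[:, i])[i] for i in range(n)])
```

    For the system sizes quoted in the paper, the point of the selected-inverse approach is precisely that only the requested entries are computed, rather than one dense solve per column as in this toy version.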

  11. Study on advanced information processing system

    NASA Technical Reports Server (NTRS)

    Shin, Kang G.; Liu, Jyh-Charn

    1992-01-01

    Issues related to the reliability of a redundant system with large main memory are addressed. In particular, the Fault-Tolerant Processor (FTP) for Advanced Launch System (ALS) is used as a basis for our presentation. When the system is free of latent faults, the probability of system crash due to nearly-coincident channel faults is shown to be insignificant even when the outputs of computing channels are infrequently voted on. In particular, using channel error maskers (CEMs) is shown to improve reliability more effectively than increasing the number of channels for applications with long mission times. Even without using a voter, most memory errors can be immediately corrected by CEMs implemented with conventional coding techniques. In addition to their ability to enhance system reliability, CEMs (with a low hardware overhead) can be used to reduce not only the need of memory realignment, but also the time required to realign channel memories in case, albeit rare, such a need arises. Using CEMs, we have developed two schemes, called Scheme 1 and Scheme 2, to solve the memory realignment problem. In both schemes, most errors are corrected by CEMs, and the remaining errors are masked by a voter.

  12. What Infant Memory Tells Us about Infantile Amnesia: Long-Term Recall and Deferred Imitation

    PubMed Central

    Meltzoff, Andrew N.

    2013-01-01

    Long-term recall memory was assessed using a nonverbal method requiring subjects to reenact a past event from memory (deferred imitation). A large sample of infants (N = 192), evenly divided between 14- and 16-month-olds, was tested across two experiments. A delay of 2 months was used in Experiment 1 and a delay of 4 months in Experiment 2. In both experiments two treatment groups were used. In one treatment group, motor practice (immediate imitation) was allowed before the delay was imposed; in the other group, subjects were prevented from motor practice before the delay. Age-matched control groups were used to assess the spontaneous production of the target acts in the absence of exposure to the model in both experiments. The results demonstrated significant deferred imitation for both treatment groups at both delay intervals, and moreover showed that infants retained and imitated multiple acts. These findings suggest that infants have a nonverbal declarative memory system that supports the recall of past events across long-term delays. The implications of these findings for the multiple memory system debate in cognitive science and neuroscience and for theories of infantile amnesia are considered. PMID:7622990

  13. Study on fault-tolerant processors for advanced launch system

    NASA Technical Reports Server (NTRS)

    Shin, Kang G.; Liu, Jyh-Charn

    1990-01-01

    Issues related to the reliability of a redundant system with large main memory are addressed. The Fault-Tolerant Processor (FTP) for the Advanced Launch System (ALS) is used as a basis for the presentation. When the system is free of latent faults, the probability of system crash due to multiple channel faults is shown to be insignificant even when voting on the outputs of computing channels is infrequent. Using channel error maskers (CEMs) is shown to improve reliability more effectively than increasing redundancy or the number of channels for applications with long mission times. Even without using a voter, most memory errors can be immediately corrected by those CEMs implemented with conventional coding techniques. In addition to their ability to enhance system reliability, CEMs (with a very low hardware overhead) can be used to dramatically reduce not only the need of memory realignment, but also the time required to realign channel memories in case, albeit rare, such a need arises. Using CEMs, two different schemes were developed to solve the memory realignment problem. In both schemes, most errors are corrected by CEMs, and the remaining errors are masked by a voter.

  14. Voluntary running depreciates the requirement of Ca2+-stimulated cAMP signaling in synaptic potentiation and memory formation

    PubMed Central

    Zheng, Fei; Zhang, Ming; Ding, Qi; Sethna, Ferzin; Yan, Lily; Moon, Changjong; Yang, Miyoung

    2016-01-01

    Mental health and cognitive functions are influenced by both genetic and environmental factors. Although having an active lifestyle with physical exercise improves learning and memory, how it interacts with the specific key molecular regulators of synaptic plasticity is largely unknown. Here, we examined the effects of voluntary running on long-term potentiation (LTP) and memory formation in mice lacking type 1 adenylyl cyclase (AC1), a neurospecific synaptic enzyme that contributes to Ca2+-stimulated cAMP production. Following 1 mo of voluntary running-wheel exercise, the impaired LTP and object recognition memory in AC1 knockout (KO) mice were significantly attenuated. Running up-regulated exon II mRNA level of BDNF (brain-derived neurotrophic factor), though it failed to increase exon I and IV mRNAs in the hippocampus of AC1 KO mice. Intrahippocampal infusion of recombinant BDNF was sufficient to rescue LTP and object recognition memory defects in AC1 KO mice. Therefore, voluntary running and exogenous BDNF application overcome the defective Ca2+-stimulated cAMP signaling. Our results also demonstrate that alteration in Ca2+-stimulated cAMP can affect the molecular outcome of physical exercise. PMID:27421897

  15. Experimental febrile seizures induce age-dependent structural plasticity and improve memory in mice.

    PubMed

    Tao, K; Ichikawa, J; Matsuki, N; Ikegaya, Y; Koyama, R

    2016-03-24

    Population-based studies have demonstrated that children with a history of febrile seizure (FS) perform better than age-matched controls at hippocampus-dependent memory tasks. Here, we report that FSs induce two distinct structural reorganizations in the hippocampus and bidirectionally modify future learning abilities in an age-dependent manner. Compared with age-matched controls, adult mice that had experienced experimental FSs induced by hyperthermia (HT) on postnatal day 14 (P14-HT) performed better in a cognitive task that requires dentate granule cells (DGCs). The enhanced memory performance correlated with an FS-induced persistent increase in the density of large mossy fiber terminals (LMTs) of the DGCs. The memory enhancement was not observed in mice that had experienced HT-induced seizures at P11 which exhibited abnormally located DGCs in addition to the increased LMT density. The ectopic DGCs of the P11-HT mice were abolished by the diuretic bumetanide, and this pharmacological treatment unveiled the masked memory enhancement. Thus, this work provides a novel basis for age-dependent structural plasticity in which FSs influence future brain function. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.

  16. Radiation-Hardened Solid-State Drive

    NASA Technical Reports Server (NTRS)

    Sheldon, Douglas J.

    2010-01-01

    A method is provided for a radiation-hardened (rad-hard) solid-state drive for space mission memory applications by combining rad-hard and commercial off-the-shelf (COTS) non-volatile memories (NVMs) into a hybrid architecture. The architecture is controlled by a rad-hard ASIC (application specific integrated circuit) or an FPGA (field programmable gate array). Specific error handling and data management protocols are developed for use in a rad-hard environment. The rad-hard memories are smaller in overall memory density, but are used to control and manage radiation-induced errors in the main, and much larger density, non-rad-hard COTS memory devices. Small amounts of rad-hard memory are used as error buffers and temporary caches for radiation-induced errors in the large COTS memories. The rad-hard ASIC/FPGA implements a variety of error-handling protocols to manage these radiation-induced errors. The large COTS memory is triplicated for protection, and CRC-based counters are calculated for sub-areas in each COTS NVM array. These counters are stored in the rad-hard non-volatile memory. Through monitoring, rewriting, regeneration, triplication, and long-term storage, radiation-induced errors in the large NV memory are managed. The rad-hard ASIC/FPGA also interfaces with the external computer buses.

  17. Maternal scaffolding in a disadvantaged global context: The influence of working memory and cognitive capacities.

    PubMed

    Obradović, Jelena; Portilla, Ximena A; Tirado-Strayer, Nicole; Siyal, Saima; Rasheed, Muneera A; Yousafzai, Aisha K

    2017-03-01

    The current study focuses on maternal cognitive capacities as determinants of parenting in a highly disadvantaged global context, where children's experiences at home are often the 1st and only opportunity for learning and intellectual growth. In a large sample of 1,291 biological mothers of preschool-aged children in rural Pakistan, we examined the unique association of maternal working memory skills (independent of related cognitive capacities) with cognitively stimulating parenting behaviors. Path analysis revealed that directly assessed working memory, short-term memory, and verbal intelligence independently predicted greater levels of observed maternal scaffolding behaviors. Mothers from poorer families demonstrated lower levels of working memory, short-term memory, and verbal intelligence. However, mothers' participation in an early childhood parenting intervention that ended 2 years prior to this study contributed to greater levels of working memory skills and verbal intelligence. Further, all 3 domains of maternal cognitive capacity mediated the effect of family economic resources on maternal scaffolding, and verbal intelligence also mediated the effect of early parenting intervention exposure on maternal scaffolding. The study demonstrates the unique relevance of maternal working memory for scaffolding behaviors that required continuously monitoring the child's engagement, providing assistance, and minimizing external distractions. These results highlight the importance of directly targeting maternal cognitive capacities in poor women with little or no formal education, using a 2-generation intervention approach that includes activities known to promote parental executive functioning and literacy. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  18. Developmental Abilities to Form Chunks in Immediate Memory and Its Non-Relationship to Span Development.

    PubMed

    Mathy, Fabien; Fartoukh, Michael; Gauvrit, Nicolas; Guida, Alessandro

    2016-01-01

    Both adults and children -by the time they are 2-3 years old- have a general ability to recode information to increase memory efficiency. This paper aims to evaluate the ability of untrained children aged 6-10 years old to deploy such a recoding process in immediate memory. A large sample of 374 children were given a task of immediate serial report based on SIMON®, a classic memory game made of four colored buttons (red, green, yellow, blue) requiring players to reproduce a sequence of colors within which repetitions eventually occur. It was hypothesized that a primitive ability across all ages (since theoretically already available in toddlers) to detect redundancies allows the span to increase whenever information can be recoded on the fly. The chunkable condition prompted the formation of chunks based on the perceived structure of color repetition within to-be-recalled sequences of colors. Our result shows a similar linear improvement of memory span with age for both chunkable and non-chunkable conditions. The amount of information retained in immediate memory systematically increased for the groupable sequences across all age groups, independently of the average age-group span that was measured on sequences that contained fewer repetitions. This result shows that chunking gives young children an equal benefit as older children. We discuss the role of recoding in the expansion of capacity in immediate memory and the potential role of data compression in the formation of chunks in long-term memory.

  19. Developmental Abilities to Form Chunks in Immediate Memory and Its Non-Relationship to Span Development

    PubMed Central

    Mathy, Fabien; Fartoukh, Michael; Gauvrit, Nicolas; Guida, Alessandro

    2016-01-01

    Both adults and children –by the time they are 2–3 years old– have a general ability to recode information to increase memory efficiency. This paper aims to evaluate the ability of untrained children aged 6–10 years old to deploy such a recoding process in immediate memory. A large sample of 374 children were given a task of immediate serial report based on SIMON®, a classic memory game made of four colored buttons (red, green, yellow, blue) requiring players to reproduce a sequence of colors within which repetitions eventually occur. It was hypothesized that a primitive ability across all ages (since theoretically already available in toddlers) to detect redundancies allows the span to increase whenever information can be recoded on the fly. The chunkable condition prompted the formation of chunks based on the perceived structure of color repetition within to-be-recalled sequences of colors. Our result shows a similar linear improvement of memory span with age for both chunkable and non-chunkable conditions. The amount of information retained in immediate memory systematically increased for the groupable sequences across all age groups, independently of the average age-group span that was measured on sequences that contained fewer repetitions. This result shows that chunking gives young children an equal benefit as older children. We discuss the role of recoding in the expansion of capacity in immediate memory and the potential role of data compression in the formation of chunks in long-term memory. PMID:26941675

  20. Design and testing of the first 2D Prototype Vertically Integrated Pattern Recognition Associative Memory

    NASA Astrophysics Data System (ADS)

    Liu, T.; Deptuch, G.; Hoff, J.; Jindariani, S.; Joshi, S.; Olsen, J.; Tran, N.; Trimpl, M.

    2015-02-01

    An associative memory-based track finding approach has been proposed for a Level 1 tracking trigger to cope with increasing luminosities at the LHC. The associative memory uses a massively parallel architecture to tackle the intrinsically complex combinatorics of track finding algorithms, thus avoiding the typical power law dependence of execution time on occupancy and solving the pattern recognition in times roughly proportional to the number of hits. This is of crucial importance given the large occupancies typical of hadronic collisions. The design of an associative memory system capable of dealing with the complexity of HL-LHC collisions and with the short latency required by Level 1 triggering poses significant, as yet unsolved, technical challenges. For this reason, an aggressive R&D program has been launched at Fermilab to advance state-of-the-art associative memory technology, the so-called VIPRAM (Vertically Integrated Pattern Recognition Associative Memory) project. The VIPRAM leverages emerging 3D vertical integration technology to build faster and denser Associative Memory devices. The first step is to implement in conventional VLSI the associative memory building blocks that can be used in 3D stacking; in other words, the building blocks are laid out as if it is a 3D design. In this paper, we report on the first successful implementation of a 2D VIPRAM demonstrator chip (protoVIPRAM00). The results show that these building blocks are ready for 3D stacking.
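
    The "time roughly proportional to the number of hits" behavior of an associative memory can be sketched in software: each incoming hit is broadcast to every stored pattern at once, and a pattern fires when all of its detector layers have matched. A minimal illustrative model (not the VIPRAM design, and with hypothetical layer/address encodings):

```python
def associative_match(patterns, hits):
    """Software sketch of associative-memory track finding: each stored
    pattern keeps one match flag per detector layer; every incoming hit
    is broadcast to all patterns at once, so the work grows with the
    number of hits (times patterns), not with hit combinatorics."""
    n_layers = len(patterns[0])
    flags = [[False] * n_layers for _ in patterns]
    for layer, address in hits:              # broadcast each hit once
        for p, pattern in enumerate(patterns):
            if pattern[layer] == address:
                flags[p][layer] = True
    # a pattern ("road") fires when all of its layers have matched
    return [p for p, f in enumerate(flags) if all(f)]
```

    In the real device the inner loop over patterns is the massively parallel content-addressable-memory comparison performed in hardware within a single clock tick.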

  1. Design and testing of the first 2D Prototype Vertically Integrated Pattern Recognition Associative Memory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, T.; Deptuch, G.; Hoff, J.

    An associative memory-based track finding approach has been proposed for a Level 1 tracking trigger to cope with increasing luminosities at the LHC. The associative memory uses a massively parallel architecture to tackle the intrinsically complex combinatorics of track finding algorithms, thus avoiding the typical power law dependence of execution time on occupancy and solving the pattern recognition in times roughly proportional to the number of hits. This is of crucial importance given the large occupancies typical of hadronic collisions. The design of an associative memory system capable of dealing with the complexity of HL-LHC collisions and with the short latency required by Level 1 triggering poses significant, as yet unsolved, technical challenges. For this reason, an aggressive R&D program has been launched at Fermilab to advance state-of-the-art associative memory technology, the so-called VIPRAM (Vertically Integrated Pattern Recognition Associative Memory) project. The VIPRAM leverages emerging 3D vertical integration technology to build faster and denser Associative Memory devices. The first step is to implement in conventional VLSI the associative memory building blocks that can be used in 3D stacking; in other words, the building blocks are laid out as if it is a 3D design. In this paper, we report on the first successful implementation of a 2D VIPRAM demonstrator chip (protoVIPRAM00). The results show that these building blocks are ready for 3D stacking.

  2. High speed, very large (8 megabyte) first in/first out buffer memory (FIFO)

    DOEpatents

    Baumbaugh, Alan E.; Knickerbocker, Kelly L.

    1989-01-01

    A fast FIFO (First In First Out) memory buffer capable of storing data at rates of 100 megabytes per second. The invention includes a data packer which concatenates small bit data words into large bit data words, a memory array having individual data storage addresses adapted to store the large bit data words, a data unpacker into which large bit data words from the array can be read and reconstructed into small bit data words, and a controller to control and keep track of the individual data storage addresses in the memory array into which data from the packer is being written and data to the unpacker is being read.
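
    The packer/unpacker round trip described above, concatenating small-bit data words into large storage words and reconstructing them on readout, can be sketched as follows; the 8-bit-into-32-bit word widths are illustrative, not the patent's actual geometry:

```python
def pack_words(small, small_bits=8, large_bits=32):
    """Concatenate small-bit words into large-bit storage words
    (little-endian within each large word; the tail is zero-padded)."""
    per = large_bits // small_bits
    mask = (1 << small_bits) - 1
    large = []
    for i in range(0, len(small), per):
        word = 0
        for j, s in enumerate(small[i:i + per]):
            word |= (s & mask) << (j * small_bits)
        large.append(word)
    return large

def unpack_words(large, count, small_bits=8, large_bits=32):
    """Reconstruct the original small-bit words from packed storage words."""
    per = large_bits // small_bits
    mask = (1 << small_bits) - 1
    small = []
    for word in large:
        for j in range(per):
            small.append((word >> (j * small_bits)) & mask)
    return small[:count]
```

    Packing this way lets the memory array run at one write per large word while the input stream arrives as many small words, which is how the buffer sustains its high ingest rate.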

  3. Solving the corner-turning problem for large interferometers

    NASA Astrophysics Data System (ADS)

    Lutomirski, Andrew; Tegmark, Max; Sanchez, Nevada J.; Stein, Leo C.; Urry, W. Lynn; Zaldarriaga, Matias

    2011-01-01

    The so-called corner-turning problem is a major bottleneck for radio telescopes with large numbers of antennas. The problem is essentially that of rapidly transposing a matrix that is too large to store on one single device; in radio interferometry, it occurs because data from each antenna need to be routed to an array of processors each of which will handle a limited portion of the data (say, a frequency range) but requires input from each antenna. We present a low-cost solution allowing the correlator to transpose its data in real time, without contending for bandwidth, via a butterfly network requiring neither additional RAM nor expensive general-purpose switching hardware. We discuss possible implementations of this using FPGA, CMOS, analog logic and optical technology, and conclude that the corner-turner cost can be small even for upcoming massive radio arrays.
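
    The butterfly exchange behind such a corner turn can be simulated in software: at each stage, node i swaps with the partner differing in one address bit the half of its buffer whose destination differs in that bit, so after log2(n) pairwise half-exchanges every item sits at its destination. A minimal sketch (hypothetical (destination, payload) pairs stand in for antenna data; real implementations move the data through wiring, not lists):

```python
def butterfly_corner_turn(buffers):
    """All-to-all exchange ("corner turn") over a butterfly/hypercube
    network: at stage s, node i swaps with partner i XOR 2**s the half of
    its buffer whose destination differs from i in bit s.  After log2(n)
    stages every (dest, payload) pair resides at its destination node."""
    n = len(buffers)
    assert n & (n - 1) == 0, "node count must be a power of two"
    bit = 1
    while bit < n:
        for i in range(n):
            partner = i ^ bit
            if i < partner:  # handle each pair once per stage
                keep_i = [x for x in buffers[i] if (x[0] & bit) == (i & bit)]
                send_i = [x for x in buffers[i] if (x[0] & bit) != (i & bit)]
                keep_p = [x for x in buffers[partner] if (x[0] & bit) == (partner & bit)]
                send_p = [x for x in buffers[partner] if (x[0] & bit) != (partner & bit)]
                buffers[i] = keep_i + send_p
                buffers[partner] = keep_p + send_i
        bit <<= 1
    return buffers
```

    Each stage moves exactly half of every buffer, so link bandwidth is used evenly and no stage contends for a shared switch, which is the property the paper exploits.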

  4. Dynamic reconfiguration of frontal brain networks during executive cognition in humans

    PubMed Central

    Braun, Urs; Schäfer, Axel; Walter, Henrik; Erk, Susanne; Romanczuk-Seiferth, Nina; Haddad, Leila; Schweiger, Janina I.; Grimm, Oliver; Heinz, Andreas; Tost, Heike; Meyer-Lindenberg, Andreas; Bassett, Danielle S.

    2015-01-01

    The brain is an inherently dynamic system, and executive cognition requires dynamically reconfiguring, highly evolving networks of brain regions that interact in complex and transient communication patterns. However, a precise characterization of these reconfiguration processes during cognitive function in humans remains elusive. Here, we use a series of techniques developed in the field of “dynamic network neuroscience” to investigate the dynamics of functional brain networks in 344 healthy subjects during a working-memory challenge (the “n-back” task). In contrast to a control condition, in which dynamic changes in cortical networks were spread evenly across systems, the effortful working-memory condition was characterized by a reconfiguration of frontoparietal and frontotemporal networks. This reconfiguration, which characterizes “network flexibility,” employs transient and heterogeneous connectivity between frontal systems, which we refer to as “integration.” Frontal integration predicted neuropsychological measures requiring working memory and executive cognition, suggesting that dynamic network reconfiguration between frontal systems supports those functions. Our results characterize dynamic reconfiguration of large-scale distributed neural circuits during executive cognition in humans and have implications for understanding impaired cognitive function in disorders affecting connectivity, such as schizophrenia or dementia. PMID:26324898

  5. Modular data acquisition system and its use in gas-filled detector readout at ESRF

    NASA Astrophysics Data System (ADS)

    Sever, F.; Epaud, F.; Poncet, F.; Grave, M.; Rey-Bakaikoa, V.

    1996-09-01

    Since 1992, 18 ESRF beamlines have been opened to users. Although the data acquisition requirements vary a lot from one beamline to another, we are trying to implement a modular data acquisition system architecture that fits the largest possible number of acquisition projects at ESRF. Common to all of these systems are large acquisition memories and the requirement to visualize the data during an acquisition run and to transfer them quickly after the run to safe storage. We developed a general memory API handling the acquisition memory and its organization and another library that provides calls for transferring the data over TCP/IP sockets. Interesting utility programs using these libraries are the `online display' program and the `data transfer' program. The data transfer program as well as an acquisition control program rely on our well-established `device server model', which was originally designed for the machine control system and then successfully reused in beamline control systems. In the second half of this paper, the acquisition system for a 2D gas-filled detector is presented, which is one of the first concrete examples using the proposed modular data acquisition architecture.

  6. Computational efficiency improvements for image colorization

    NASA Astrophysics Data System (ADS)

    Yu, Chao; Sharma, Gaurav; Aly, Hussein

    2013-03-01

    We propose an efficient algorithm for colorization of greyscale images. As in prior work, colorization is posed as an optimization problem: a user specifies the color for a few scribbles drawn on the greyscale image and the color image is obtained by propagating color information from the scribbles to surrounding regions, while maximizing the local smoothness of colors. In this formulation, colorization is obtained by solving a large sparse linear system, which normally requires substantial computation and memory resources. Our algorithm improves the computational performance through three innovations over prior colorization implementations. First, the linear system is solved iteratively without explicitly constructing the sparse matrix, which significantly reduces the required memory. Second, we formulate each iteration in terms of integral images obtained by dynamic programming, reducing repetitive computation. Third, we use a coarse-to-fine framework, where a lower resolution subsampled image is first colorized and this low resolution color image is upsampled to initialize the colorization process for the fine level. The improvements we develop provide significant speedup and memory savings compared to the conventional approach of solving the linear system directly using off-the-shelf sparse solvers, and allow us to colorize images with typical sizes encountered in realistic applications on typical commodity computing platforms.
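
    The first innovation, solving the smoothness system iteratively without ever assembling the sparse matrix, can be sketched with matrix-free Jacobi sweeps on a single color channel. This is a toy stand-in, not the paper's algorithm: the intensity-dependent weights, integral-image formulation, and coarse-to-fine acceleration are all omitted.

```python
import numpy as np

def propagate_colors(channel, scribble_mask, iters=500):
    """Matrix-free Jacobi sweeps for scribble propagation: every
    unconstrained pixel is repeatedly replaced by the mean of its four
    neighbours while scribbled pixels stay clamped, solving the sparse
    smoothness system without ever storing its matrix."""
    u = channel.astype(float).copy()
    for _ in range(iters):
        p = np.pad(u, 1, mode='edge')           # replicate border pixels
        neigh = (p[:-2, 1:-1] + p[2:, 1:-1] +
                 p[1:-1, :-2] + p[1:-1, 2:]) / 4.0
        u = np.where(scribble_mask, channel, neigh)
    return u
```

    Each sweep touches only the image arrays themselves, so memory use is a few image-sized buffers rather than a sparse matrix with one row per pixel.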

  7. Persistence of Gender Related-Effects on Visuo-Spatial and Verbal Working Memory in Right Brain-Damaged Patients.

    PubMed

    Piccardi, Laura; Matano, Alessandro; D'Antuono, Giovanni; Marin, Dario; Ciurli, Paola; Incoccia, Chiara; Verde, Paola; Guariglia, Paola

    2016-01-01

    The aim of the present study was to verify whether gender differences in verbal and visuo-spatial working memory would persist following right cerebral lesions. To pursue our aim we investigated a large sample (n = 346) of right brain-damaged patients and healthy participants (n = 272) for the presence of gender effects in performing Corsi and Digit Test. We also assessed a subgroup of patients (n = 109) for the nature (active vs. passive) of working memory tasks. We tested working memory (WM) administering the Corsi Test (CBT) and the Digit Span (DS) using two different versions: forward (fCBT and fDS), in which subjects were required to repeat stimuli in the same order that they were presented; and backward (bCBT and bDS), in which subjects were required to repeat stimuli in the opposite order of presentation. In this way, passive storage and active processing of working memory were assessed. Our results showed the persistence of gender-related effects in spite of the presence of right brain lesions. We found that men outperformed women both in CBT and DS, regardless of active and passive processing of verbal and visuo-spatial stimuli. The presence of visuo-spatial disorders (i.e., hemineglect) can affect the performance on Corsi Test. In our sample, men and women were equally affected by hemineglect, therefore it did not mask the gender effect. Generally speaking, the persistence of the men's superiority in visuo-spatial tasks may be interpreted as a protective factor, at least for men, within other life factors such as level of education or kind of profession before retirement.

  8. Algorithms for Performance, Dependability, and Performability Evaluation using Stochastic Activity Networks

    NASA Technical Reports Server (NTRS)

    Deavours, Daniel D.; Qureshi, M. Akber; Sanders, William H.

    1997-01-01

    Modeling tools and technologies are important for aerospace development. At the University of Illinois, we have worked on advancing the state of the art in modeling with Markov reward models in two important areas: reducing the memory necessary to numerically solve systems represented as stochastic activity networks and other stochastic Petri net extensions while still obtaining solutions in a reasonable amount of time, and finding numerically stable and memory-efficient methods to solve for the reward accumulated during a finite mission time. A long-standing problem when modeling with high-level formalisms such as stochastic activity networks is the so-called state-space explosion, where the number of states increases exponentially with the size of the high-level model. Thus, the corresponding Markov model becomes prohibitively large and solution is constrained by the size of primary memory. To reduce the memory necessary to numerically solve complex systems, we propose new methods that can tolerate such large state spaces and that do not require any special structure in the model (as many other techniques do). First, we develop methods that generate rows and columns of the state transition-rate matrix on the fly, eliminating the need to explicitly store the matrix at all. Next, we introduce a new iterative solution method, called modified adaptive Gauss-Seidel, that exhibits locality in its use of data from the state transition-rate matrix, permitting us to cache portions of the matrix and hence reduce the solution time. Finally, we develop a new memory- and computationally efficient technique for Gauss-Seidel-based solvers that avoids the need for generating rows of A in order to solve Ax = b. This is a significant performance improvement for on-the-fly methods as well as for other recent solution techniques based on Kronecker operators. Taken together, these new results show that one can solve very large models without any special structure.
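    The on-the-fly idea, generating rows of the transition-rate matrix on demand instead of storing the matrix, can be illustrated with a generic matrix-free Gauss-Seidel solver for Ax = b. This is a sketch under the assumption of a user-supplied row generator; the paper's modified adaptive Gauss-Seidel adds caching and adaptive orderings not shown here:

```python
import numpy as np

def gauss_seidel_on_the_fly(row_of, n, b, sweeps=200):
    """Gauss-Seidel for A x = b where row i of A is generated on demand.

    row_of(i) returns (indices, values) for the nonzeros of row i, so the
    matrix is never stored explicitly -- only the solution vector x is kept.
    """
    x = np.zeros(n)
    for _ in range(sweeps):
        for i in range(n):
            idx, val = row_of(i)
            diag, s = 0.0, b[i]
            for j, a in zip(idx, val):
                if j == i:
                    diag = a          # A[i, i]
                else:
                    s -= a * x[j]     # uses freshly updated entries
            x[i] = s / diag
    return x
```

    Memory use is one vector of length n, however many nonzeros the model's transition-rate matrix would have held.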

  9. Parallel Simulation of Unsteady Turbulent Flames

    NASA Technical Reports Server (NTRS)

    Menon, Suresh

    1996-01-01

    Time-accurate simulation of turbulent flames in high Reynolds number flows is a challenging task since both fluid dynamics and combustion must be modeled accurately. To numerically simulate this phenomenon, very large computer resources (both time and memory) are required. Although current vector supercomputers are capable of providing adequate resources for simulations of this nature, their high cost and limited availability make practical use of such machines less than satisfactory. At the same time, the explicit time integration algorithms used in unsteady flow simulations often possess a very high degree of parallelism, making them very amenable to efficient implementation on large-scale parallel computers. Under these circumstances, distributed memory parallel computers offer an excellent near-term solution for greatly increased computational speed and memory, at a cost that may render the unsteady simulations of the type discussed above more feasible and affordable. This paper discusses the study of unsteady turbulent flames using a simulation algorithm that is capable of retaining high parallel efficiency on distributed memory parallel architectures. Numerical studies are carried out using large-eddy simulation (LES). In LES, the scales larger than the grid are computed using a time- and space-accurate scheme, while the unresolved small scales are modeled using eddy viscosity based subgrid models. This is acceptable for the moment/energy closure since the small scales primarily provide a dissipative mechanism for the energy transferred from the large scales. However, for combustion to occur, the species must first undergo mixing at the small scales and then come into molecular contact. Therefore, global models cannot be used.
Recently, a new model for turbulent combustion was developed, in which the combustion is modeled within the subgrid (small scales) using a methodology that simulates the mixing, the molecular transport, and the chemical kinetics within each LES grid cell. Finite-rate kinetics can be included without any closure, and this approach actually provides a means to predict the turbulent rates and the turbulent flame speed. The subgrid combustion model requires resolution of the local time scales associated with small-scale mixing, molecular diffusion, and chemical kinetics; therefore, within each grid cell, a significant amount of computation must be carried out before the large-scale (LES-resolved) effects are incorporated. This approach is thus uniquely suited for parallel processing and has been implemented on various systems such as the Intel Paragon, IBM SP-2, Cray T3D, and SGI Power Challenge (PC) using the system-independent Message Passing Interface (MPI) library. In this paper, timing data on these machines are reported along with some characteristic results.

  10. Empirical performance of the multivariate normal universal portfolio

    NASA Astrophysics Data System (ADS)

    Tan, Choon Peng; Pang, Sook Theng

    2013-09-01

    Universal portfolios generated by the multivariate normal distribution are studied, with emphasis on the case where the variables are dependent, namely, where the covariance matrix is not diagonal. The moving-order multivariate normal universal portfolio requires a very long running time and a large amount of computer memory to implement. With the objective of reducing memory and implementation time, the finite-order universal portfolio is introduced. Some stock-price data sets are selected from the local stock exchange and the finite-order universal portfolio is run on the data sets for small finite order. Empirically, it is shown that the portfolio can outperform the moving-order Dirichlet universal portfolio of Cover and Ordentlich [2] for certain parameters in the selected data sets.
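    The universal-portfolio idea underlying both variants can be illustrated with Cover's original construction for two assets: hold the wealth-weighted average of all constant-rebalanced portfolios (CRPs) on a parameter grid. This is a generic sketch, not the multivariate normal or finite-order portfolio of the paper, which replace the uniform prior and the full price history with a normal prior and a bounded-order window:

```python
import numpy as np

def universal_portfolio(price_relatives, grid=101):
    """Cover-style universal portfolio for two assets.

    Each day the portfolio held is the wealth-weighted average of all
    candidate CRPs b = (p, 1 - p) on a uniform grid; its final wealth
    equals the average wealth of the candidates.  `grid` and the uniform
    prior are illustrative choices.
    """
    ps = np.linspace(0.0, 1.0, grid)     # candidate CRP weights on asset 1
    wealth = np.ones(grid)               # running wealth S_t(b) per candidate
    total = 1.0
    for x1, x2 in price_relatives:       # daily price relatives of the assets
        b1 = (wealth @ ps) / wealth.sum()        # today's mixture portfolio
        total *= b1 * x1 + (1.0 - b1) * x2
        wealth *= ps * x1 + (1.0 - ps) * x2      # update each candidate CRP
    return total
```

    The grid makes the memory cost explicit: a moving-order scheme must track many more portfolio histories, which is the burden the finite-order variant in the paper is designed to reduce.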

  11. Working, declarative and procedural memory in specific language impairment

    PubMed Central

    Lum, Jarrad A.G.; Conti-Ramsden, Gina; Page, Debra; Ullman, Michael T.

    2012-01-01

    According to the Procedural Deficit Hypothesis (PDH), abnormalities of brain structures underlying procedural memory largely explain the language deficits in children with specific language impairment (SLI). These abnormalities are posited to result in core deficits of procedural memory, which in turn explain the grammar problems in the disorder. The abnormalities are also likely to lead to problems with other, non-procedural functions, such as working memory, that rely at least partly on the affected brain structures. In contrast, declarative memory is expected to remain largely intact, and should play an important compensatory role for grammar. These claims were tested by examining measures of working, declarative and procedural memory in 51 children with SLI and 51 matched typically-developing (TD) children (mean age 10). Working memory was assessed with the Working Memory Test Battery for Children, declarative memory with the Children’s Memory Scale, and procedural memory with a visuo-spatial Serial Reaction Time task. As compared to the TD children, the children with SLI were impaired at procedural memory, even when holding working memory constant. In contrast, they were spared at declarative memory for visual information, and at declarative memory in the verbal domain after controlling for working memory and language. Visuo-spatial short-term memory was intact, whereas verbal working memory was impaired, even when language deficits were held constant. Correlation analyses showed neither visuo-spatial nor verbal working memory was associated with either lexical or grammatical abilities in either the SLI or TD children. Declarative memory correlated with lexical abilities in both groups of children. Finally, grammatical abilities were associated with procedural memory in the TD children, but with declarative memory in the children with SLI. These findings replicate and extend previous studies of working, declarative and procedural memory in SLI. 
Overall, we suggest that the evidence largely supports the predictions of the PDH. PMID:21774923

  12. Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems

    PubMed Central

    Ehsan, Shoaib; Clark, Adrian F.; ur Rehman, Naveed; McDonald-Maier, Klaus D.

    2015-01-01

    The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of the integral image present several design challenges due to strict timing and hardware limitations. Although calculation of the integral image consists only of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow a substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of the integral image in embedded vision systems, the paper presents two algorithms which allow a substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems. PMID:26184211
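    The recursive equations referred to above are the standard integral-image recurrence. Below is a serial reference version (the baseline that the row-parallel hardware decomposition restructures) together with the four-lookup box sum it enables:

```python
import numpy as np

def integral_image(img):
    """Serial reference recurrence:
    ii(y, x) = img(y, x) + ii(y-1, x) + ii(y, x-1) - ii(y-1, x-1)."""
    ii = np.zeros_like(img, dtype=np.int64)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            ii[y, x] = (img[y, x]
                        + (ii[y - 1, x] if y else 0)
                        + (ii[y, x - 1] if x else 0)
                        - (ii[y - 1, x - 1] if y and x else 0))
    return ii

def box_sum(ii, top, left, bottom, right):
    """Sum of img[top..bottom, left..right] from at most four lookups,
    independent of box size -- the property SURF-style detectors rely on."""
    total = ii[bottom, right]
    if top:
        total -= ii[top - 1, right]
    if left:
        total -= ii[bottom, left - 1]
    if top and left:
        total += ii[top - 1, left - 1]
    return total
```

    The serial data dependence visible in the inner loop is exactly what the paper's decomposition breaks, so that several values per row can be produced in parallel.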

  13. Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems.

    PubMed

    Ehsan, Shoaib; Clark, Adrian F; Naveed ur Rehman; McDonald-Maier, Klaus D

    2015-07-10

    The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of the integral image present several design challenges due to strict timing and hardware limitations. Although calculation of the integral image consists only of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow a substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of the integral image in embedded vision systems, the paper presents two algorithms which allow a substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems.

  14. Chaotic Traversal (CHAT): Very Large Graphs Traversal Using Chaotic Dynamics

    NASA Astrophysics Data System (ADS)

    Changaival, Boonyarit; Rosalie, Martin; Danoy, Grégoire; Lavangnananda, Kittichai; Bouvry, Pascal

    2017-12-01

    Graph traversal algorithms find applications in various fields such as routing problems, natural language processing, and database querying. Traversal can be considered a first stepping stone toward knowledge extraction from a graph, which is now a popular topic. Classical solutions such as Breadth-First Search (BFS) and Depth-First Search (DFS) require huge amounts of memory for exploring very large graphs. In this research, we present a novel memoryless graph traversal algorithm, Chaotic Traversal (CHAT), which integrates chaotic dynamics to traverse large unknown graphs via the Lozi map and the Rössler system. To compare the effects of various dynamics on our algorithm, we present an original way to explore a parameter space using a bifurcation diagram with respect to the topological structure of attractors. The resulting algorithm is efficient and undemanding of resources, and is therefore very suitable for partial traversal of very large and/or unknown environment graphs. The performance of CHAT using the Lozi map is shown to be superior to that of the commonly known random walk, in terms of the number of nodes visited (coverage percentage) and computation time, when the environment is unknown and memory usage is restricted.
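    A minimal sketch of the idea: the Lozi map x' = 1 - a|x| + y, y' = bx supplies a deterministic chaotic sequence, and at each node that sequence, rather than any stored visited-set, selects the outgoing edge. The parameters, seed, and edge-selection rule below are illustrative assumptions, not the CHAT algorithm itself:

```python
def lozi_traverse(neighbors, start, steps, a=1.7, b=0.5):
    """Memoryless walk steered by the Lozi map x' = 1 - a|x| + y, y' = b x.

    `neighbors` maps node -> list of adjacent nodes.  The walk's only state
    is (x, y, current node); the returned trail is for inspection and is
    never consulted when choosing edges.  a = 1.7, b = 0.5 is the classic
    chaotic regime of the Lozi map.
    """
    x, y = 0.1, 0.1
    node = start
    trail = [start]
    for _ in range(steps):
        x, y = 1.0 - a * abs(x) + y, b * x          # one Lozi iteration
        nbrs = neighbors[node]
        node = nbrs[int(abs(x) * 1e6) % len(nbrs)]  # chaos picks the edge
        trail.append(node)
    return trail
```

    Unlike BFS/DFS, nothing grows with graph size here, which is what makes the approach attractive for partial traversal of very large graphs.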

  15. Size Reduction of Hamiltonian Matrix for Large-Scale Energy Band Calculations Using Plane Wave Bases

    NASA Astrophysics Data System (ADS)

    Morifuji, Masato

    2018-01-01

    We present a method of reducing the size of the Hamiltonian matrix used in calculations of electronic states. In electronic-state calculations using plane wave basis functions, a large number of plane waves is often required to obtain precise results. Even using state-of-the-art techniques, the Hamiltonian matrix often becomes very large. The large computational time and memory necessary for diagonalization limit the widespread use of band calculations. We show a procedure for deriving a reduced Hamiltonian constructed from a small number of low-energy bases by renormalizing the high-energy bases. We demonstrate numerically that a significant speedup of eigenstate evaluation is achieved without losing accuracy.
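    The abstract does not spell out the renormalization procedure; the standard construction in this spirit is Löwdin partitioning (a Schur complement), in which the high-energy block is folded into an effective Hamiltonian on the low-energy bases, H_eff(E) = H_AA + H_AB (E - H_BB)^(-1) H_BA. The snippet below checks the underlying identity numerically and is offered as an assumption about the flavor of the paper's approach, not its actual scheme:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 8, 3                            # full basis size, kept low-energy bases
M = rng.normal(size=(n, n))
H = (M + M.T) / 2.0                    # a random Hermitian stand-in Hamiltonian

H_AA, H_AB = H[:k, :k], H[:k, k:]
H_BA, H_BB = H[k:, :k], H[k:, k:]

E0 = np.linalg.eigvalsh(H)[0]          # exact lowest eigenvalue of the full H

# Fold the high-energy block into a k x k effective Hamiltonian
# (Schur complement / Loewdin partitioning) evaluated at E0:
H_eff = H_AA + H_AB @ np.linalg.solve(E0 * np.eye(n - k) - H_BB, H_BA)

# The exact eigenvalue reappears as an eigenvalue of the small matrix.
gap = np.abs(np.linalg.eigvalsh(H_eff) - E0).min()
```

    Diagonalizing a k x k matrix instead of an n x n one is the source of the speedup and memory saving the abstract reports.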

  16. Activation of LVGCCs and CB1 Receptors Required for Destabilization of Reactivated Contextual Fear Memories

    ERIC Educational Resources Information Center

    Suzuki, Akinobu; Mukawa, Takuya; Tsukagoshi, Akinori; Frankland, Paul W.; Kida, Satoshi

    2008-01-01

    Previous studies have shown that inhibiting protein synthesis shortly after reactivation impairs the subsequent expression of a previously consolidated fear memory. This has suggested that reactivation returns a memory to a labile state and that protein synthesis is required for the subsequent restabilization of memory. While the molecular…

  17. Time and resource limits on working memory: cross-age consistency in counting span performance.

    PubMed

    Ransdell, Sarah; Hecht, Steven

    2003-12-01

    This longitudinal study separated resource demand effects from those of retention interval in a counting span task among 100 children tested in grade 2 and again in grades 3 and 4. A last card large counting span condition had an equivalent memory load to a last card small, but the last card large required holding the count over a longer retention interval. In all three waves of assessment, the last card large condition was found to be less accurate than the last card small. A model predicting reading comprehension showed that age was a significant predictor when entered first accounting for 26% of the variance, but counting span accounted for a further 22% of the variance. Span at Wave 1 accounted for significant unique variance at Wave 2 and at Wave 3. Results were similar for math calculation with age accounting for 31% of the variance and counting span accounting for a further 34% of the variance. Span at Wave 1 explained unique variance in math at Wave 2 and at Wave 3.

  18. LARGE, an intellectual disability-associated protein, regulates AMPA-type glutamate receptor trafficking and memory.

    PubMed

    Seo, Bo Am; Cho, Taesup; Lee, Daniel Z; Lee, Joong-Jae; Lee, Boyoung; Kim, Seong-Wook; Shin, Hee-Sup; Kang, Myoung-Goo

    2018-06-18

    Mutations in the human LARGE gene result in severe intellectual disability and muscular dystrophy. How LARGE mutation leads to intellectual disability, however, is unclear. In our proteomic study, LARGE was found to be a component of the AMPA-type glutamate receptor (AMPA-R) protein complex, a main player for learning and memory in the brain. Here, our functional study of LARGE showed that LARGE at the Golgi apparatus (Golgi) negatively controlled AMPA-R trafficking from the Golgi to the plasma membrane, leading to down-regulated surface and synaptic AMPA-R targeting. In LARGE knockdown mice, long-term potentiation (LTP) was occluded by synaptic AMPA-R overloading, resulting in impaired contextual fear memory. These findings indicate that the fine-tuning of AMPA-R trafficking by LARGE at the Golgi is critical for hippocampus-dependent memory in the brain. Our study thus provides insights into the pathophysiology underlying cognitive deficits in brain disorders associated with intellectual disability.

  19. FPGA-Based, Self-Checking, Fault-Tolerant Computers

    NASA Technical Reports Server (NTRS)

    Some, Raphael; Rennels, David

    2004-01-01

    A proposed computer architecture would exploit the capabilities of commercially available field-programmable gate arrays (FPGAs) to enable computers to detect and recover from bit errors. The main purpose of the proposed architecture is to enable fault-tolerant computing in the presence of single-event upsets (SEUs). [An SEU is a spurious bit flip (also called a soft error) caused by a single impact of ionizing radiation.] The architecture would also enable recovery from some soft errors caused by electrical transients and, to some extent, from intermittent and permanent (hard) errors caused by aging of electronic components. A typical FPGA of the current generation contains one or more complete processor cores, memories, and high-speed serial input/output (I/O) channels, making it possible to shrink a board-level processor node to a single integrated-circuit chip. Custom, highly efficient microcontrollers, general-purpose computers, custom I/O processors, and signal processors can be rapidly and efficiently implemented by use of FPGAs. Unfortunately, FPGAs are susceptible to SEUs. Prior efforts to mitigate the effects of SEUs have yielded solutions that degrade system performance and require support from external hardware and software. In comparison with other fault-tolerant-computing architectures (e.g., triple modular redundancy), the proposed architecture could be implemented with less circuitry and lower power demand. Moreover, the fault-tolerant computing functions would require only minimal support from circuitry outside the central processing units (CPUs) of computers, would not require any software support, and would be largely transparent to software and to other computer hardware. There would be two types of modules: a self-checking processor module and a memory system (see figure). The self-checking processor module would be implemented on a single FPGA and would be capable of detecting its own internal errors.
It would contain two CPUs executing identical programs in lock step, with comparison of their outputs to detect errors. It would also contain various cache and local memory circuits, communication circuits, and configurable special-purpose processors that would use self-checking checkers. (The basic principle of the self-checking checker method is to utilize logic circuitry that generates error signals whenever there is an error in either the checker or the circuit being checked.) The memory system would comprise a main memory and a hardware-controlled checkpointing system (CPS) based on a buffer memory denoted the recovery cache. The main memory would contain random-access memory (RAM) chips and FPGAs that would, in addition to everything else, implement double-error-detecting and single-error-correcting memory functions to enable recovery from single-bit errors.
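    The lockstep self-checking pair can be illustrated conceptually: two identical processors step through the same program on the same inputs, and a comparator flags the first step at which their outputs disagree. A toy software sketch of the principle (real designs compare in hardware every cycle; the function names and fault-injection hook are illustrative):

```python
def lockstep_run(program, inputs, fault_at=None, fault_bit=0):
    """Self-checking pair: two identical 'CPUs' run the same program in
    lock step, and a comparator checks their outputs at every step.

    `program` is any step function state -> (state, output).  Optionally,
    a single-event upset is simulated by flipping one bit of CPU A's
    output at step `fault_at`.
    """
    state_a = state_b = 0
    for step, word in enumerate(inputs):
        state_a, out_a = program(state_a, word)
        state_b, out_b = program(state_b, word)
        if step == fault_at:
            out_a ^= 1 << fault_bit    # simulated SEU on CPU A's output
        if out_a != out_b:
            return ("error", step)     # comparator raises the error signal
    return ("ok", None)
```

    On an error signal, the architecture described above would roll the processors back to the last checkpoint held in the recovery cache rather than halting.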

  20. Metronomic cyclophosphamide eradicates large implanted GL261 gliomas by activating antitumor Cd8+ T-cell responses and immune memory

    PubMed Central

    Wu, Junjie; Waxman, David J

    2015-01-01

    Cancer chemotherapy using cytotoxic drugs can induce immunogenic tumor cell death; however, dosing regimens and schedules that enable single-agent chemotherapy to induce adaptive immune-dependent ablation of large, established tumors with activation of long-term immune memory have not been identified. Here, we investigate this issue in a syngeneic, implanted GL261 glioma model in immune-competent mice given cyclophosphamide on a 6-day repeating metronomic schedule. Two cycles of metronomic cyclophosphamide treatment induced sustained upregulation of tumor-associated CD8+ cytotoxic T lymphocyte (CTL) cells, natural killer (NK) cells, macrophages, and other immune cells. Expression of CTL- and NK-cell-shared effectors peaked on Day 6, and then declined by Day 9 after the second cyclophosphamide injection and correlated inversely with the expression of the regulatory T cell (Treg) marker Foxp3. Sustained tumor regression leading to tumor ablation was achieved after several cyclophosphamide treatment cycles. Tumor ablation required CD8+ T cells, as shown by immunodepletion studies, and was associated with immunity to re-challenge with GL261 glioma cells, but not B16-F10 melanoma or Lewis lung carcinoma cells. Rejection of GL261 tumor re-challenge was associated with elevated CTLs in blood and increased CTL infiltration in tumors, consistent with the induction of long-term, specific CD8+ T-cell anti-GL261 tumor memory. Co-depletion of CD8+ T cells and NK cells did not inhibit tumor regression beyond CD8+ T-cell depletion alone, suggesting that the metronomic cyclophosphamide-activated NK cells function via CD8a+ T cells. Taken together, these findings provide proof-of-concept that single-agent chemotherapy delivered on an optimized metronomic schedule can eradicate large, established tumors and induce long-term immune memory. PMID:26137402

  1. Correlated individual differences suggest a common mechanism underlying metacognition in visual perception and visual short-term memory.

    PubMed

    Samaha, Jason; Postle, Bradley R

    2017-11-29

    Adaptive behaviour depends on the ability to introspect accurately about one's own performance. Whether this metacognitive ability is supported by the same mechanisms across different tasks is unclear. We investigated the relationship between metacognition of visual perception and metacognition of visual short-term memory (VSTM). Experiments 1 and 2 required subjects to estimate the perceived or remembered orientation of a grating stimulus and rate their confidence. We observed strong positive correlations between individual differences in metacognitive accuracy between the two tasks. This relationship was not accounted for by individual differences in task performance or average confidence, and was present across two different metrics of metacognition and in both experiments. A model-based analysis of data from a third experiment showed that a cross-domain correlation only emerged when both tasks shared the same task-relevant stimulus feature. That is, metacognition for perception and VSTM were correlated when both tasks required orientation judgements, but not when the perceptual task was switched to require contrast judgements. In contrast with previous results comparing perception and long-term memory, which have largely provided evidence for domain-specific metacognitive processes, the current findings suggest that metacognition of visual perception and VSTM is supported by a domain-general metacognitive architecture, but only when both domains share the same task-relevant stimulus feature. © 2017 The Author(s).

  2. A requirement for the immediate early gene zif268 in reconsolidation of recognition memory after retrieval.

    PubMed

    Bozon, Bruno; Davis, Sabrina; Laroche, Serge

    2003-11-13

    Recent research has revived interest in the possibility that previously consolidated memories need to reconsolidate when recalled to return to accessible long-term memory. Evidence suggests that both consolidation and reconsolidation of certain types of memory require protein synthesis, but whether similar molecular mechanisms are involved remains unclear. Here, we explore whether zif268, an activity-dependent inducible immediate early gene (IEG) required for consolidation of new memories, is also recruited for reconsolidation of recognition memory following reactivation. We show that when a consolidated memory for objects is recalled, zif268 mutant mice are impaired in further long-term but not short-term recognition memory. The impairment is specific to reactivation with the previously memorized objects in the relevant context, occurs in delayed recall, and does not recover over several days. These findings indicate that IEG-mediated transcriptional regulation in neurons is one common molecular mechanism for the storage of newly formed and reactivated recognition memories.

  3. Insulin signaling is acutely required for long-term memory in Drosophila.

    PubMed

    Chambers, Daniel B; Androschuk, Alaura; Rosenfelt, Cory; Langer, Steven; Harding, Mark; Bolduc, Francois V

    2015-01-01

    Memory formation has been shown recently to be dependent on energy status in Drosophila. A well-established energy sensor is the insulin signaling (InS) pathway. Previous studies in various animal models including human have revealed the role of insulin levels in short-term memory but its role in long-term memory remains less clear. We therefore investigated genetically the spatial and temporal role of InS using the olfactory learning and long-term memory model in Drosophila. We found that InS is involved in both learning and memory. InS in the mushroom body is required for learning and long-term memory whereas long-term memory specifically is impaired after InS signaling disruption in the ellipsoid body, where it regulates the level of p70s6k, a downstream target of InS and a marker of protein synthesis. Finally, we show also that InS is acutely required for long-term memory formation in adult flies.

  4. A processing approach to the working memory/long-term memory distinction: evidence from the levels-of-processing span task.

    PubMed

    Rose, Nathan S; Craik, Fergus I M

    2012-07-01

    Recent theories suggest that performance on working memory (WM) tasks involves retrieval from long-term memory (LTM). To examine whether WM and LTM tests have common principles, Craik and Tulving's (1975) levels-of-processing paradigm, which is known to affect LTM, was administered as a WM task: Participants made uppercase, rhyme, or category-membership judgments about words, and immediate recall of the words was required after every 3 or 8 processing judgments. In Experiment 1, immediate recall did not demonstrate a levels-of-processing effect, but a subsequent LTM test (delayed recognition) of the same words did show a benefit of deeper processing. Experiment 2 showed that surprise immediate recall of 8-item lists did demonstrate a levels-of-processing effect, however. A processing account of the conditions in which levels-of-processing effects are and are not found in WM tasks was advanced, suggesting that the extent to which levels-of-processing effects are similar between WM and LTM tests largely depends on the amount of disruption to active maintenance processes. 2012 APA, all rights reserved

  5. Working memory capacity and the scope and control of attention.

    PubMed

    Shipstead, Zach; Harrison, Tyler L; Engle, Randall W

    2015-08-01

    Complex span and visual arrays are two common measures of working memory capacity that are respectively treated as measures of attention control and storage capacity. A recent analysis of these tasks concluded that (1) complex span performance has a relatively stronger relationship to fluid intelligence and (2) this is due to the requirement that people engage control processes while performing this task. The present study examines the validity of these conclusions by examining two large data sets that include a more diverse set of visual arrays tasks and several measures of attention control. We conclude that complex span and visual arrays account for similar amounts of variance in fluid intelligence. The disparity relative to the earlier analysis is attributed to the present study involving a more complete measure of the latent ability underlying the performance of visual arrays. Moreover, we find that both types of working memory task have strong relationships to attention control. This indicates that the ability to engage attention in a controlled manner is a critical aspect of working memory capacity, regardless of the type of task that is used to measure this construct.

  6. Protein-Based Three-Dimensional Memories and Associative Processors

    NASA Astrophysics Data System (ADS)

    Birge, Robert

    2008-03-01

    The field of bioelectronics has benefited from the fact that nature has often solved problems of a similar nature to those which must be solved to create molecular electronic or photonic devices that operate with efficiency and reliability. Retinal proteins show great promise in bioelectronic devices because they operate with high efficiency (~0.65), high cyclicity (>10^7), operate over an extended wavelength range (360-630 nm) and can convert light into changes in voltage, pH, absorption or refractive index. This talk will focus on a retinal protein called bacteriorhodopsin, the proton pump of the organism Halobacterium salinarum. Two memories based on this protein will be described. The first is an optical three-dimensional memory. This memory stores information using volume elements (voxels), and provides as much as a thousand-fold improvement in effective capacity over current technology. A unique branching reaction of a variant of bacteriorhodopsin is used to turn each protein into an optically addressed latched AND gate. Although three working prototypes have been developed, a number of cost/performance and architectural issues must be resolved prior to commercialization. The major issue is that the native protein provides a very inefficient branching reaction. Genetic engineering has improved performance by nearly 500-fold, but a further order of magnitude improvement is needed. Protein-based holographic associative memories will also be discussed. The human brain stores and retrieves information via association, and human intelligence is intimately connected to the nature and enormous capacity of this associative search and retrieval process. To a first order approximation, creativity can be viewed as the association of two seemingly disparate concepts to form a totally new construct. Thus, artificial intelligence requires large-scale associative memories.
Current computer hardware does not provide an optimal environment for creating artificial intelligence due to the serial nature of random access memories. Software cannot provide a satisfactory work-around that does not introduce unacceptable latency. Holographic associative memories provide a useful approach to large scale associative recall. Bacteriorhodopsin has long been recognized for its outstanding holographic properties, and when utilized in the Paek and Psaltis design, provides a high-speed real-time associative memory with variable thresholding and feedback. What remains is to make an associative memory capable of high-speed association and long-term data storage. The use of directed evolution to create a protein with the necessary unique properties will be discussed.

  7. Enabling Graph Appliance for Genome Assembly

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singh, Rina; Graves, Jeffrey A; Lee, Sangkeun

    2015-01-01

    In recent years, there has been huge growth in the amount of genomic data available as reads generated from various genome sequencers. The number of reads generated can be enormous, ranging from hundreds to billions of nucleotides in total, with individual reads varying in size. Assembling such large amounts of data is one of the challenging computational problems for both biomedical and data scientists. Most genome assemblers developed to date use de Bruijn graph techniques. A de Bruijn graph represents a collection of read sequences by billions of vertices and edges, which require large amounts of memory and computational power to store and process. This is the major drawback of de Bruijn graph assembly. Massively parallel, multi-threaded, shared-memory systems can be leveraged to overcome some of these issues. The objective of our research is to investigate the feasibility and scalability issues of de Bruijn graph assembly on Cray's Urika-GD system; Urika-GD is a high-performance graph appliance with a large shared memory and a massively multithreaded custom processor designed for executing SPARQL queries over large-scale RDF data sets. However, to the best of our knowledge, there is no research on representing a de Bruijn graph as an RDF graph or on finding Eulerian paths in RDF graphs using SPARQL for potential genome discovery. In this paper, we address the issues involved in representing a de Bruijn graph as an RDF graph and propose an iterative querying approach for finding Eulerian paths in large RDF graphs. We evaluate the performance of our implementation on real-world Ebola genome datasets and illustrate how genome assembly can be accomplished on Urika-GD using iterative SPARQL queries.
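The core construction behind this record (k-mers as graph edges, assembly as an Eulerian path) can be sketched in a few lines. This is a minimal in-memory illustration of the general de Bruijn technique, not the paper's RDF/SPARQL implementation on Urika-GD; the reads and k value in the usage example are made up.

```python
from collections import defaultdict

def de_bruijn_edges(reads, k):
    """Build de Bruijn edges: each k-mer links its (k-1)-mer prefix to its suffix."""
    graph = defaultdict(list)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].append(kmer[1:])
    return graph

def eulerian_path(graph):
    """Hierholzer's algorithm: walk edges until stuck, backtrack, splice."""
    graph = {node: list(succs) for node, succs in graph.items()}
    indeg = defaultdict(int)
    for succs in graph.values():
        for s in succs:
            indeg[s] += 1
    # Start at a node with surplus out-degree if one exists, else anywhere.
    start = next((n for n in graph if len(graph[n]) > indeg[n]), next(iter(graph)))
    stack, path = [start], []
    while stack:
        node = stack[-1]
        if graph.get(node):
            stack.append(graph[node].pop())
        else:
            path.append(stack.pop())
    return path[::-1]

def assemble(reads, k):
    """Spell the contig by following the Eulerian path one base at a time."""
    path = eulerian_path(de_bruijn_edges(reads, k))
    return path[0] + "".join(node[-1] for node in path[1:])

# e.g. assemble(["ATGGC", "GCCTA"], 3) reconstructs "ATGGCCTA"
```

Representing each edge as an RDF triple and replacing the in-memory walk with iterative SPARQL queries is, in essence, the translation the paper investigates.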

  8. Automatic mesh adaptivity for hybrid Monte Carlo/deterministic neutronics modeling of difficult shielding problems

    DOE PAGES

    Ibrahim, Ahmad M.; Wilson, Paul P.H.; Sawan, Mohamed E.; ...

    2015-06-30

    The CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques dramatically increase the efficiency of neutronics modeling, but their use in the accurate design analysis of very large and geometrically complex nuclear systems has been limited by the large number of processors and the memory requirements of their preliminary deterministic calculations and final Monte Carlo calculation. Three mesh adaptivity algorithms were developed to reduce the memory requirements of CADIS and FW-CADIS without sacrificing their efficiency improvement. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility. Using these algorithms resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation and, additionally, increased the efficiency of the Monte Carlo simulation by a factor of at least 3.4. The three algorithms enabled this difficult calculation to be accurately solved using an FW-CADIS simulation on a regular computer cluster, eliminating the need for a world-class supercomputer.

  9. An adaptive vector quantization scheme

    NASA Technical Reports Server (NTRS)

    Cheung, K.-M.

    1990-01-01

    Vector quantization is known to be an effective compression scheme to achieve a low bit rate so as to minimize communication channel bandwidth and also to reduce digital memory storage while maintaining the necessary fidelity of the data. However, the large number of computations required in vector quantizers has been a handicap in using vector quantization for low-rate source coding. An adaptive vector quantization algorithm is introduced that is inherently suitable for simple hardware implementation because it has a simple architecture. It allows fast encoding and decoding because it requires only addition and subtraction operations.
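The computational bottleneck described above is the nearest-codeword search. As a generic illustration (not the specific adaptive algorithm introduced in this record), a search under the L1 distortion uses only additions, subtractions and comparisons, which is what makes such schemes attractive for simple hardware; the codebook in the usage note is invented.

```python
def encode(vector, codebook):
    """Return the index of the nearest codeword under the L1
    (sum-of-absolute-differences) distortion, which needs only
    additions, subtractions and comparisons."""
    best_index, best_dist = 0, float("inf")
    for i, codeword in enumerate(codebook):
        dist = sum(abs(a - b) for a, b in zip(vector, codeword))
        if dist < best_dist:
            best_index, best_dist = i, dist
    return best_index

def decode(index, codebook):
    """Decoding is a plain table look-up: only the index is transmitted."""
    return codebook[index]
```

For example, with codebook `[(0, 0, 0), (8, 8, 8), (0, 8, 0)]`, the input `(1, 7, 0)` encodes to index 2; the channel carries the index alone, which is the source of the bit-rate saving.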

  10. NRL Review 1991

    DTIC Science & Technology

    1991-05-01


  11. A Specific Role for Hippocampal Mossy Fiber's Zinc in Rapid Storage of Emotional Memories

    ERIC Educational Resources Information Center

    Ceccom, Johnatan; Halley, Hélène; Daumas, Stéphanie; Lassalle, Jean Michel

    2014-01-01

    We investigated the specific role of zinc present in large amounts in the synaptic vesicles of mossy fibers and coreleased with glutamate in the CA3 region. In previous studies, we have shown that blockade of zinc after release has no effect on the consolidation of spatial learning, while zinc is required for the consolidation of contextual fear…

  12. Memory bias for negative emotional words in recognition memory is driven by effects of category membership

    PubMed Central

    White, Corey N.; Kapucu, Aycan; Bruno, Davide; Rotello, Caren M.; Ratcliff, Roger

    2014-01-01

    Recognition memory studies often find that emotional items are more likely than neutral items to be labeled as studied. Previous work suggests this bias is driven by increased memory strength/familiarity for emotional items. We explored strength and bias interpretations of this effect with the conjecture that emotional stimuli might seem more familiar because they share features with studied items from the same category. Categorical effects were manipulated in a recognition task by presenting lists with a small, medium, or large proportion of emotional words. The liberal memory bias for emotional words was only observed when a medium or large proportion of categorized words were presented in the lists. Similar, though weaker, effects were observed with categorized words that were not emotional (animal names). These results suggest that liberal memory bias for emotional items may be largely driven by effects of category membership. PMID:24303902

  13. Memory bias for negative emotional words in recognition memory is driven by effects of category membership.

    PubMed

    White, Corey N; Kapucu, Aycan; Bruno, Davide; Rotello, Caren M; Ratcliff, Roger

    2014-01-01

    Recognition memory studies often find that emotional items are more likely than neutral items to be labelled as studied. Previous work suggests this bias is driven by increased memory strength/familiarity for emotional items. We explored strength and bias interpretations of this effect with the conjecture that emotional stimuli might seem more familiar because they share features with studied items from the same category. Categorical effects were manipulated in a recognition task by presenting lists with a small, medium or large proportion of emotional words. The liberal memory bias for emotional words was only observed when a medium or large proportion of categorised words were presented in the lists. Similar, though weaker, effects were observed with categorised words that were not emotional (animal names). These results suggest that liberal memory bias for emotional items may be largely driven by effects of category membership.

  14. New realisation of Preisach model using adaptive polynomial approximation

    NASA Astrophysics Data System (ADS)

    Liu, Van-Tsai; Lin, Chun-Liang; Wing, Home-Young

    2012-09-01

    Modelling systems with hysteresis has received considerable attention recently due to increasing accuracy requirements in engineering applications. The classical Preisach model (CPM) is the most popular model for describing hysteresis and can be represented by an infinite but countable set of first-order reversal curves (FORCs). The use of look-up tables is one way to approximate the CPM in practice. The data in those tables correspond to samples of a finite number of FORCs. This approach, however, faces two major problems: firstly, it requires a large amount of memory space to obtain an accurate prediction of hysteresis; secondly, it is difficult to derive efficient ways to modify the data table to reflect the timing effect of elements with hysteresis. To overcome these problems, this article proposes the idea of using a set of polynomials to emulate the CPM instead of table look-up. The polynomial approximation requires less memory space for data storage. Furthermore, the polynomial coefficients can be obtained accurately by using least-squares approximation or an adaptive identification algorithm, which also opens the possibility of accurately tracking hysteresis model parameters.
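The table-free idea rests on ordinary least-squares polynomial fitting: a handful of coefficients replace a dense table of curve samples. The sketch below fits a single curve via the normal equations; it is a generic illustration with invented sample data, not the article's full Preisach emulator (which would maintain one polynomial per FORC).

```python
def polyfit_ls(xs, ys, degree):
    """Least-squares polynomial fit via the normal equations A^T A c = A^T y,
    solved by Gaussian elimination with partial pivoting.
    Returns coefficients for c[0] + c[1]*x + c[2]*x**2 + ..."""
    n = degree + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    for col in range(n):                       # forward elimination
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coeffs = [0.0] * n                         # back substitution
    for i in range(n - 1, -1, -1):
        coeffs[i] = (b[i] - sum(A[i][j] * coeffs[j]
                                for j in range(i + 1, n))) / A[i][i]
    return coeffs

def poly_eval(coeffs, x):
    """Evaluate the fitted polynomial: a few multiply-adds per query,
    in place of a memory-hungry table look-up."""
    return sum(c * x ** i for i, c in enumerate(coeffs))
```

The memory trade is the point: `degree + 1` floats per curve versus however many table rows the same accuracy would demand, and the coefficients can be re-estimated online by an adaptive identification algorithm.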

  15. The role of working memory in inferential sentence comprehension.

    PubMed

    Pérez, Ana Isabel; Paolieri, Daniela; Macizo, Pedro; Bajo, Teresa

    2014-08-01

    Existing literature on inference making is large and varied. Trabasso and Magliano (Discourse Process 21(3):255-287, 1996) proposed the existence of three types of inferences: explicative, associative and predictive. In addition, the authors suggested that these inferences were related to working memory (WM). In the present experiment, we investigated whether WM capacity plays a role in our ability to answer comprehension sentences that require text information based on these types of inferences. Participants with high and low WM span read two narratives with four paragraphs each. After each paragraph was read, they were presented with four true/false comprehension sentences. One required verbatim information and the other three implied explicative, associative and predictive inferential information. Results demonstrated that only the explicative and predictive comprehension sentences required WM: participants with high verbal WM were more accurate in giving explanations and also faster at making predictions relative to participants with low verbal WM span; in contrast, no WM differences were found in the associative comprehension sentences. These results are interpreted in terms of the causal nature underlying these types of inferences.

  16. Ecstasy (MDMA) and memory function: a meta-analytic update.

    PubMed

    Laws, Keith R; Kokkalis, Joy

    2007-08-01

    A meta-analysis was conducted to examine the impact of recreational ecstasy use on short-term memory (STM), long-term memory (LTM), verbal and visual memory. We located 26 studies containing memory data for ecstasy and non-ecstasy users from which effect sizes could be derived. The analyses provided measures of STM and LTM in 610 and 439 ecstasy users and revealed moderate-to-large effect sizes (Cohen's d) of d = -0.63 and d = -0.87, respectively. The difference between STM versus LTM was non-significant. The effect size for verbal memory was large (d = -1.00) and significantly larger than the small effect size for visual memory (d = -0.27). Indeed, our analyses indicate that visual memory may be affected more by concurrent cannabis use. Finally, we found that the total lifetime number of ecstasy tablets consumed did not significantly predict memory performance. Copyright 2007 John Wiley & Sons, Ltd.

  17. Enhancement of fear memory by retrieval through reconsolidation

    PubMed Central

    Fukushima, Hotaka; Zhang, Yue; Archbold, Georgia; Ishikawa, Rie; Nader, Karim; Kida, Satoshi

    2014-01-01

    Memory retrieval is considered to have roles in memory enhancement. Recently, memory reconsolidation was suggested to reinforce or integrate new information into reactivated memory. Here, we show that reactivated inhibitory avoidance (IA) memory is enhanced through reconsolidation under conditions in which memory extinction is not induced. This memory enhancement is mediated by neurons in the amygdala, hippocampus, and medial prefrontal cortex (mPFC) through the simultaneous activation of calcineurin-induced proteasome-dependent protein degradation and cAMP responsive element binding protein-mediated gene expression. Interestingly, the amygdala is required for memory reconsolidation and enhancement, whereas the hippocampus and mPFC are required for only memory enhancement. Furthermore, memory enhancement triggered by retrieval utilizes distinct mechanisms to strengthen IA memory by additional learning that depends only on the amygdala. Our findings indicate that reconsolidation functions to strengthen the original memory and show the dynamic nature of reactivated memory through protein degradation and gene expression in multiple brain regions. DOI: http://dx.doi.org/10.7554/eLife.02736.001 PMID:24963141

  18. The effect of nonadiabaticity on the efficiency of quantum memory based on an optical cavity

    NASA Astrophysics Data System (ADS)

    Veselkova, N. G.; Sokolov, I. V.

    2017-07-01

    Quantum efficiency is an important characteristic of quantum memory devices aimed at recording, storing and reading the quantum state of light signals. In the case of memory based on an ensemble of cold atoms placed in an optical cavity, the efficiency is restricted, in particular, by relaxation processes in the system of active atomic levels. We show how the effect of relaxation on the quantum efficiency can be determined in a memory-usage regime in which the evolution of signals in time is not arbitrarily slow on the scale of the field lifetime in the cavity, and in which the frequently used approximation of adiabatic elimination of the quantized cavity-mode field cannot be applied. Taking into account the effect of nonadiabaticity on memory quality is of interest because, in order to increase the field-medium coupling parameter, a higher cavity quality factor is required, whereas storing and processing sequences of many signals in the memory implies that their duration is reduced. We consider the applicability of the well-known efficiency estimates based on the system cooperativity parameter and derive a more general estimate. In connection with the theoretical description of this type of memory, we also discuss qualitative differences in the behavior of a random source introduced into the Heisenberg-Langevin equations for atomic variables in the cases of large and small numbers of atoms.

  19. HBLAST: Parallelised sequence similarity--A Hadoop MapReducable basic local alignment search tool.

    PubMed

    O'Driscoll, Aisling; Belogrudov, Vladislav; Carroll, John; Kropp, Kai; Walsh, Paul; Ghazal, Peter; Sleator, Roy D

    2015-04-01

    The recent exponential growth of genomic databases has resulted in the common task of sequence alignment becoming one of the major bottlenecks in the field of computational biology. It is typical for these large datasets and complex computations to require cost prohibitive High Performance Computing (HPC) to function. As such, parallelised solutions have been proposed but many exhibit scalability limitations and are incapable of effectively processing "Big Data" - the name attributed to datasets that are extremely large, complex and require rapid processing. The Hadoop framework, comprised of distributed storage and a parallelised programming framework known as MapReduce, is specifically designed to work with such datasets but it is not trivial to efficiently redesign and implement bioinformatics algorithms according to this paradigm. The parallelisation strategy of "divide and conquer" for alignment algorithms can be applied to both data sets and input query sequences. However, scalability is still an issue due to memory constraints or large databases, with very large database segmentation leading to additional performance decline. Herein, we present Hadoop Blast (HBlast), a parallelised BLAST algorithm that proposes a flexible method to partition both databases and input query sequences using "virtual partitioning". HBlast presents improved scalability over existing solutions and well balanced computational work load while keeping database segmentation and recompilation to a minimum. Enhanced BLAST search performance on cheap memory constrained hardware has significant implications for in field clinical diagnostic testing; enabling faster and more accurate identification of pathogenic DNA in human blood or tissue samples. Copyright © 2015 Elsevier Inc. All rights reserved.
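The "virtual partitioning" strategy described above amounts to slicing both the database and the query set by index ranges rather than physically segmenting and recompiling files, and then pairing the slices off as map tasks. The sketch below is a schematic of that idea only; the function names and task layout are illustrative assumptions, not HBlast's actual API.

```python
def virtual_partitions(n_records, n_parts):
    """Split record indices into nearly equal contiguous ranges, without
    copying or re-indexing the underlying data (the 'virtual' part)."""
    base, extra = divmod(n_records, n_parts)
    ranges, start = [], 0
    for i in range(n_parts):
        size = base + (1 if i < extra else 0)
        ranges.append((start, start + size))
        start += size
    return ranges

def map_tasks(n_db_records, n_db_parts, n_query_seqs, n_query_parts):
    """Pair every query slice with every database slice; each pair is one
    map task, balancing load across workers with no file segmentation."""
    return [(q, d)
            for q in virtual_partitions(n_query_seqs, n_query_parts)
            for d in virtual_partitions(n_db_records, n_db_parts)]
```

Because each worker only ever holds one database slice, per-node memory stays bounded, which is what makes the approach viable on cheap, memory-constrained hardware.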

  20. The Relationship between Processing and Storage in Working Memory Span: Not Two Sides of the Same Coin

    ERIC Educational Resources Information Center

    Maehara, Yukio; Saito, Satoru

    2007-01-01

    In working memory (WM) span tests, participants maintain memory items while performing processing tasks. In this study, we examined the impact of task processing requirements on memory-storage activities, looking at the stimulus order effect and the impact of storage requirements on processing activities, testing the processing time effect in WM…

  1. Long-Term Memory for Instrumental Responses Does Not Undergo Protein Synthesis-Dependent Reconsolidation upon Retrieval

    ERIC Educational Resources Information Center

    Hernandez, Pepe J.; Kelley, Ann E.

    2004-01-01

    Recent evidence indicates that certain forms of memory, upon recall, may return to a labile state requiring the synthesis of new proteins in order to preserve or reconsolidate the original memory trace. While the initial consolidation of "instrumental memories" has been shown to require de novo protein synthesis in the nucleus accumbens, it is not…

  2. ERP C250 Shows the Elderly (Cognitively Normal, Alzheimer’s Disease) Store More Stimuli in Short-Term Memory than Young Adults Do

    PubMed Central

    Chapman, Robert M.; Gardner, Margaret N.; Mapstone, Mark; Klorman, Rafael; Porsteinsson, Anton P.; Dupree, Haley M.; Antonsdottir, Inga M.; Kamalyan, Lily

    2016-01-01

    Objective To determine how aging and dementia affect the brain’s initial storing of task-relevant and irrelevant information in short-term memory. Methods We used brain Event-Related Potentials (ERPs) to measure short-term memory storage (ERP component C250) in 36 Young Adults, 36 Normal Elderly, and 36 early-stage AD subjects. Participants performed the Number-Letter task, a cognitive paradigm requiring memory storage of a first relevant stimulus to compare it with a second stimulus. Results In Young Adults, C250 was more positive for the first task-relevant stimulus compared to all other stimuli. C250 in Normal Elderly and AD subjects was roughly the same to relevant and irrelevant stimuli in intratrial parts 1–3 but not 4. The AD group had lower C250 to relevant stimuli in part 1. Conclusions Both normal aging and dementia cause less differentiation of relevant from irrelevant information in initial storage. There was a large aging effect involving differences in the pattern of C250 responses of the Young Adult versus the Normal Elderly/AD groups. Also, a potential dementia effect was obtained. Significance C250 is a candidate tool for measuring short-term memory performance on a biological level, as well as a potential marker for memory changes due to normal aging and dementia. PMID:27178862

  3. The efficacy of cognitive prosthetic technology for people with memory impairments: a systematic review and meta-analysis.

    PubMed

    Jamieson, Matthew; Cullen, Breda; McGee-Lennon, Marilyn; Brewster, Stephen; Evans, Jonathan J

    2014-01-01

    Technology can compensate for memory impairment. The efficacy of assistive technology for people with memory difficulties and the methodology of selected studies are assessed. A systematic search was performed and all studies that investigated the impact of technology on memory performance for adults with impaired memory resulting from acquired brain injury (ABI) or a degenerative disease were included. Two 10-point scales were used to compare each study to an ideally reported single case experimental design (SCED) study (SCED scale; Tate et al., 2008 ) or randomised control group study (PEDro-P scale; Maher, Sherrington, Herbert, Moseley, & Elkins, 2003 ). Thirty-two SCED (mean = 5.9 on the SCED scale) and 11 group studies (mean = 4.45 on the PEDro-P scale) were found. Baseline and intervention performance for each participant in the SCED studies was re-calculated using non-overlap of all pairs (Parker & Vannest, 2009 ) giving a mean score of 0.85 on a 0 to 1 scale (17 studies, n = 36). A meta-analysis of the efficacy of technology vs. control in seven group studies gave a large effect size (d = 1.27) (n = 147). It was concluded that prosthetic technology can improve performance on everyday tasks requiring memory. There is a specific need for investigations of technology for people with degenerative diseases.

  4. Efficient checkpointing schemes for depletion perturbation solutions on memory-limited architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stripling, H. F.; Adams, M. L.; Hawkins, W. D.

    2013-07-01

    We describe a methodology for decreasing the memory footprint and machine I/O load associated with the need to access a forward solution during an adjoint solve. Specifically, we are interested in the depletion perturbation equations, where terms in the adjoint Bateman and transport equations depend on the forward flux solution. Checkpointing is the procedure of storing snapshots of the forward solution to disk and using these snapshots to recompute the parts of the forward solution that are necessary for the adjoint solve. For large problems, however, the storage cost of just a few copies of an angular flux vector can exceed the available RAM on the host machine. We propose a methodology that does not checkpoint the angular flux vector; instead, we write and store converged source moments, which are typically of a much lower dimension than the angular flux solution. This reduces the memory footprint and I/O load of the problem, but requires that we perform single sweeps to reconstruct flux vectors on demand. We argue that this trade-off is exactly the kind of algorithm that will scale on advanced, memory-limited architectures. We analyze the cost, in terms of FLOPS and memory footprint, of five checkpointing schemes. We also provide computational results that support the analysis and show that the memory-for-work trade-off does improve time to solution.
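The memory saving from checkpointing source moments instead of the angular flux follows from simple bookkeeping: both quantities are discretized over cells and energy groups, but the flux additionally carries an angle dimension that dwarfs the handful of stored moments. The discretization numbers below are invented for illustration and are not from the paper.

```python
def words_stored(n_cells, n_groups, per_cell_unknowns):
    """Floating-point words for one snapshot of a transport quantity
    discretized over cells, energy groups, and a per-cell dimension
    (angles for the angular flux, moments for the source)."""
    return n_cells * n_groups * per_cell_unknowns

# Invented example discretization: 1M cells, 40 groups,
# 288 discrete angles versus 4 stored source moments.
flux_words = words_stored(1_000_000, 40, 288)
moment_words = words_stored(1_000_000, 40, 4)
ratio = flux_words / moment_words  # each moment checkpoint is 72x smaller
```

The price of the smaller checkpoint is one transport sweep per reconstructed flux vector, which is the memory-for-work trade-off the paper analyzes.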

  5. Still searching for the engram.

    PubMed

    Eichenbaum, Howard

    2016-09-01

    For nearly a century, neurobiologists have searched for the engram: the neural representation of a memory. Early studies showed that the engram is widely distributed both within and across brain areas and is supported by interactions among large networks of neurons. Subsequent research has identified engrams that support memory within dedicated functional systems for habit learning and emotional memory, but the engram for declarative memories has been elusive. Nevertheless, recent years have brought progress from molecular biological approaches that identify neurons and networks that are necessary and sufficient to support memory, and from recording approaches and population analyses that characterize the information coded by large neural networks. These new directions offer the promise of revealing the engrams for episodic and semantic memories.

  6. An Optimization Code for Nonlinear Transient Problems of a Large Scale Multidisciplinary Mathematical Model

    NASA Astrophysics Data System (ADS)

    Takasaki, Koichi

    This paper presents a program for the multidisciplinary optimization and identification problem of the nonlinear model of large aerospace vehicle structures. The program constructs the global matrix of the dynamic system in the time direction by the p-version finite element method (pFEM), and the basic matrix for each pFEM node in the time direction is described by a sparse matrix similarly to the static finite element problem. The algorithm used by the program does not require the Hessian matrix of the objective function and so has low memory requirements. It also has a relatively low computational cost, and is suited to parallel computation. The program was integrated as a solver module of the multidisciplinary analysis system CUMuLOUS (Computational Utility for Multidisciplinary Large scale Optimization of Undense System) which is under development by the Aerospace Research and Development Directorate (ARD) of the Japan Aerospace Exploration Agency (JAXA).
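The low-memory claim for Hessian-free optimization is easy to see in a first-order sketch: the iterate and its gradient cost O(n) words each, where a dense Hessian would cost O(n^2). The generic gradient-descent loop below illustrates the pattern only; the step size and quadratic test function in the usage note are illustrative, and this is not CUMuLOUS code.

```python
def gradient_descent(grad, x0, lr=0.1, steps=200):
    """Hessian-free first-order minimization: only the current iterate
    and gradient are held in memory, O(n) words instead of the O(n^2)
    a dense Hessian would require."""
    x = list(x0)
    for _ in range(steps):
        g = grad(x)
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x
```

For instance, with the gradient of f(x) = (x0 - 3)^2 + (x1 + 1)^2, the loop converges to the minimizer (3, -1) from the origin. Each parallel worker needs only its slice of x and g, which is why such methods distribute well.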

  7. Parallel processing for scientific computations

    NASA Technical Reports Server (NTRS)

    Alkhatib, Hasan S.

    1995-01-01

    The scope of this project dealt with the investigation of the requirements to support distributed computing of scientific computations over a cluster of cooperative workstations. Various experiments on computations for the solution of simultaneous linear equations were performed in the early phase of the project to gain experience in the general nature and requirements of scientific applications. A specification of a distributed integrated computing environment, DICE, based on a distributed shared memory communication paradigm has been developed and evaluated. The distributed shared memory model facilitates porting existing parallel algorithms that have been designed for shared memory multiprocessor systems to the new environment. The potential of this new environment is to provide supercomputing capability through the utilization of the aggregate power of workstations cooperating in a cluster interconnected via a local area network. Workstations, generally, do not have the computing power to tackle complex scientific applications, making them primarily useful for visualization, data reduction, and filtering as far as complex scientific applications are concerned. There is a tremendous amount of computing power that is left unused in a network of workstations. Very often a workstation is simply sitting idle on a desk. A set of tools can be developed to take advantage of this potential computing power to create a platform suitable for large scientific computations. The integration of several workstations into a logical cluster of distributed, cooperative, computing stations presents an alternative to shared memory multiprocessor systems. In this project we designed and evaluated such a system.

  8. Changing patterns of brain activation during maze learning.

    PubMed

    Van Horn, J D; Gold, J M; Esposito, G; Ostrem, J L; Mattay, V; Weinberger, D R; Berman, K F

    1998-05-18

    Recent research has found that patterns of brain activation involving the frontal cortex during novel task performance change dramatically following practice and repeat performance. Evidence for differential left vs. right frontal lobe activation, respectively, during episodic memory encoding and retrieval has also been reported. To examine these potentially related issues regional cerebral blood flow (rCBF) was measured in 15 normal volunteers using positron emission tomography (PET) during the naive and practiced performance of a maze task paradigm. SPM analysis indicated a largely right-sided, frontal lobe activation during naive performance. Following training and practice, performance of the same maze task elicited a more posterior pattern of rCBF activation involving posterior cingulate and precuneus. The change in the pattern of rCBF activation between novel and practiced task conditions agrees with results found in previous studies using repeat task methodology, and indicates that the neural circuitry required for encoding novel task information differs from that required when the same task has become familiar and information is being recalled. The right-sided preponderance of activation during naive performance may relate to task novelty and the spatially-based nature of the stimuli, whereas posterior areas activated during repeat performance are those previously found to be associated with visuospatial memory recall. Activation of these areas, however, does not agree with previously reported findings of left-sided activation during verbal episodic memory encoding and right-sided activation during retrieval, suggesting different neural substrates for verbal and visuospatial processing within memory. Copyright 1998 Elsevier Science B.V.

  9. Inverted-U shaped dopamine actions on human working memory and cognitive control

    PubMed Central

    Cools, R; D’Esposito, M

    2011-01-01

    Brain dopamine has long been implicated in cognitive control processes, including working memory. However, the precise role of dopamine in cognition is not well understood, partly because there is large variability in the response to dopaminergic drugs both across different behaviors and across different individuals. We review evidence from a series of studies with experimental animals, healthy humans and patients with Parkinson’s disease, which highlight two important factors that contribute to this large variability. First, the existence of an optimum dopamine level for cognitive function implicates the need to take into account baseline levels of dopamine when isolating dopamine’s effects. Second, cognitive control is a multi-factorial phenomenon, requiring a dynamic balance between cognitive stability and cognitive flexibility. These distinct components might implicate the prefrontal cortex and the striatum respectively. Manipulating dopamine will thus have paradoxical consequences for distinct cognitive control processes depending on distinct basal or optimal levels of dopamine in different brain regions. PMID:21531388

  10. Modeling and development of a twisting wing using inductively heated shape memory alloy actuators

    NASA Astrophysics Data System (ADS)

    Saunders, Robert N.; Hartl, Darren J.; Boyd, James G.; Lagoudas, Dimitris C.

    2015-04-01

    Wing twisting has been shown to improve aircraft flight performance. The potential benefits of a twisting wing are often outweighed by the mass of the system required to twist the wing. Shape memory alloy (SMA) actuators repeatedly demonstrate abilities and properties that are ideal for aerospace actuation systems. Recent advances have shown an SMA torsional actuator that can be manufactured and trained with the ability to generate large twisting deformations under substantial loading. The primary disadvantage of implementing large SMA actuators has been their slow actuation time compared to conventional actuators. However, inductive heating of an SMA actuator allows it to generate a full actuation cycle in just seconds rather than minutes. The aim of this work is to demonstrate an experimental wing being twisted to approximately 10 degrees using an inductively heated SMA torsional actuator. This study also considers a 3-D electromagnetic thermo-mechanical model of the SMA-wing system and compares these results with experiments to demonstrate modeling capabilities.

  11. A class of hybrid finite element methods for electromagnetics: A review

    NASA Technical Reports Server (NTRS)

    Volakis, J. L.; Chatterjee, A.; Gong, J.

    1993-01-01

    Integral equation methods have generally been the workhorse for antenna and scattering computations. In the case of antennas, they continue to be the prominent computational approach, but for scattering applications the requirement for large-scale computations has turned researchers' attention to near neighbor methods such as the finite element method, which has low O(N) storage requirements and is readily adaptable in modeling complex geometrical features and material inhomogeneities. In this paper, we review three hybrid finite element methods for simulating composite scatterers, conformal microstrip antennas, and finite periodic arrays. Specifically, we discuss the finite element method and its application to electromagnetic problems when combined with the boundary integral, absorbing boundary conditions, and artificial absorbers for terminating the mesh. Particular attention is given to large-scale simulations, methods, and solvers for achieving low memory requirements and code performance on parallel computing architectures.

  12. Hippocampal activation during the recall of remote spatial memories in radial maze tasks.

    PubMed

    Schlesiger, Magdalene I; Cressey, John C; Boublil, Brittney; Koenig, Julie; Melvin, Neal R; Leutgeb, Jill K; Leutgeb, Stefan

    2013-11-01

    Temporally graded retrograde amnesia is observed in human patients with medial temporal lobe lesions as well as in animal models of medial temporal lobe lesions. A time-limited role for these structures in memory recall has also been suggested by the observation that the rodent hippocampus and entorhinal cortex are activated during the retrieval of recent but not of remote memories. One notable exception is the recall of remote memories for platform locations in the water maze, which requires an intact hippocampus and results in hippocampal activation irrespective of the age of the memory. These findings raise the question whether the hippocampus is always involved in the recall of spatial memories or, alternatively, whether it might be required for procedural computations in the water maze task, such as for calculating a path to a hidden platform. We performed spatial memory testing in radial maze tasks to distinguish between these possibilities. Radial maze tasks require a choice between spatial locations on a center platform and thus have a lesser requirement for navigation than the water maze. However, we used a behavioral design in the radial maze that retained other aspects of the standard water maze task, such as the use of multiple start locations and retention testing in a single trial. Using the immediate early gene c-fos as a marker for neuronal activation, we found that all hippocampal subregions were more activated during the recall of remote compared to recent spatial memories. In areas CA3 and CA1, activation during remote memory testing was higher than in rats that were merely reexposed to the testing environment after the same time interval. Conversely, Fos levels in the dentate gyrus were increased after retention testing to the extent that was also observed in the corresponding exposure control group. This pattern of hippocampal activation was also obtained in a second version of the task that only used a single start arm instead of multiple start arms. The CA3 and CA1 activation during remote memory recall is consistent with the interpretation that an older memory might require increased pattern completion and/or relearning after longer time intervals. Irrespective of whether the hippocampus is required for remote memory recall, the hippocampus might engage in computations that either support recall of remote memories or that update remote memories. Copyright © 2013 Elsevier Inc. All rights reserved.

  13. Numericware i: Identical by State Matrix Calculator

    PubMed Central

    Kim, Bongsong; Beavis, William D

    2017-01-01

    We introduce software, Numericware i, to compute an identical-by-state (IBS) matrix from genotypic data. Calculating an IBS matrix for a large dataset requires large amounts of computer memory and lengthy processing time. Numericware i addresses these challenges with 2 algorithmic methods: multithreading and forward chopping. Multithreading allows computational routines to run concurrently on multiple central processing unit (CPU) processors. Forward chopping addresses the memory limitation by dividing a dataset into appropriately sized subsets. Numericware i allows calculation of the IBS matrix for a large genotypic dataset using a laptop or a desktop computer. For comparison with different software, we calculated genetic relationship matrices using Numericware i, SPAGeDi, and TASSEL with the same genotypic dataset. Numericware i calculates IBS coefficients between 0 and 2, whereas SPAGeDi and TASSEL produce different ranges of values including negative values. The Pearson correlation coefficient between the matrices from Numericware i and TASSEL was high at .9972, whereas SPAGeDi showed low correlation with Numericware i (.0505) and TASSEL (.0587). With a high-dimensional dataset of 500 entities by 10 000 000 SNPs, Numericware i spent 382 minutes using 19 CPU threads and 64 GB memory by dividing the dataset into 3 pieces, whereas SPAGeDi and TASSEL failed with the same dataset. Numericware i is freely available for Windows and Linux under CC-BY 4.0 license at https://figshare.com/s/f100f33a8857131eb2db. PMID:28469375
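
    The chunked computation described above can be sketched in a few lines. This is an illustrative sketch, not Numericware i itself: the `ibs_matrix` function and the 0/1/2 allele-count encoding are assumptions. It shows how processing SNPs in slices ("forward chopping") keeps only one subset of the genotype matrix in memory at a time while still producing IBS coefficients between 0 and 2.

```python
# Illustrative sketch (not Numericware i): computing an identical-by-state
# (IBS) matrix from genotypes coded 0/1/2, processing SNPs in chunks
# ("forward chopping") so the full genotype matrix never has to be resident.

def ibs_matrix(genotypes, chunk_size=2):
    """genotypes: list of per-individual lists of 0/1/2 allele counts.
    Returns a matrix of IBS coefficients in [0, 2]."""
    n = len(genotypes)
    m = len(genotypes[0])
    # Accumulate summed per-SNP similarity for every pair across chunks.
    acc = [[0.0] * n for _ in range(n)]
    for start in range(0, m, chunk_size):
        chunk = [ind[start:start + chunk_size] for ind in genotypes]
        for i in range(n):
            for j in range(n):
                acc[i][j] += sum(2 - abs(a - b) for a, b in zip(chunk[i], chunk[j]))
    # Normalize by the number of SNPs: identical genotype vectors give 2.0.
    return [[acc[i][j] / m for j in range(n)] for i in range(n)]

geno = [[0, 1, 2, 2], [0, 1, 2, 0], [2, 1, 0, 0]]
M = ibs_matrix(geno)
```

    Identical genotype vectors yield the maximum coefficient of 2.0, matching the value range reported for the tool.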

  14. Persistence of Gender Related-Effects on Visuo-Spatial and Verbal Working Memory in Right Brain-Damaged Patients

    PubMed Central

    Piccardi, Laura; Matano, Alessandro; D’Antuono, Giovanni; Marin, Dario; Ciurli, Paola; Incoccia, Chiara; Verde, Paola; Guariglia, Paola

    2016-01-01

    The aim of the present study was to verify if gender differences in verbal and visuo-spatial working memory would persist following right cerebral lesions. To pursue our aim we investigated a large sample (n. 346) of right brain-damaged patients and healthy participants (n. 272) for the presence of gender effects in performing Corsi and Digit Test. We also assessed a subgroup of patients (n. 109) for the nature (active vs. passive) of working memory tasks. We tested working memory (WM) administering the Corsi Test (CBT) and the Digit Span (DS) using two different versions: forward (fCBT and fDS), subjects were required to repeat stimuli in the same order that they were presented; and backward (bCBT and bDS), subjects were required to repeat stimuli in the opposite order of presentation. In this way, passive storage and active processing of working memory were assessed. Our results showed the persistence of gender-related effects in spite of the presence of right brain lesions. We found that men outperformed women both in CBT and DS, regardless of active and passive processing of verbal and visuo-spatial stimuli. The presence of visuo-spatial disorders (i.e., hemineglect) can affect the performance on Corsi Test. In our sample, men and women were equally affected by hemineglect, therefore it did not mask the gender effect. Generally speaking, the persistence of the men’s superiority in visuo-spatial tasks may be interpreted as a protective factor, at least for men, within other life factors such as level of education or kind of profession before retirement. PMID:27445734

  15. A new modified listening span task to enhance validity of working memory assessment for people with and without aphasia.

    PubMed

    Ivanova, Maria V; Hallowell, Brooke

    2014-01-01

    Deficits in working memory (WM) are an important subset of cognitive processing deficits associated with aphasia. However, there are serious limitations to research on WM in aphasia largely due to the lack of an established valid measure of WM impairment for this population. The aim of the current study was to address shortcomings of previous measures by developing and empirically evaluating a novel WM task with a sentence-picture matching processing component designed to circumvent confounds inherent in existing measures of WM in aphasia. The novel WM task was presented to persons with (n=27) and without (n=33) aphasia. Results demonstrated high concurrent validity of a novel WM task. Individuals with aphasia performed significantly worse on all conditions of the WM task compared to individuals without aphasia. Different patterns of performance across conditions were observed for the two groups. Additionally, WM capacity was significantly related to auditory comprehension abilities in individuals with mild aphasia but not those with moderate aphasia. Strengths of the novel WM task are that it allows for differential control for length versus complexity of verbal stimuli and indexing of the relative influence of each, minimizes metalinguistic requirements, enables control for complexity of processing components, allows participants to respond with simple gestures or verbally, and eliminates reading requirements. Results support the feasibility and validity of using a novel task to assess WM in individuals with and without aphasia. Readers will be able to (1) discuss the limitations of current working memory measures for individuals with aphasia; (2) describe how task design features of a new working memory task for people with aphasia address shortcomings of existing measures; (3) summarize the evidence supporting the validity of the novel working memory task. Copyright © 2014 Elsevier Inc. All rights reserved.

  16. Aged Tg2576 mice are impaired on social memory and open field habituation tests.

    PubMed

    Deacon, R M J; Koros, E; Bornemann, K D; Rawlins, J N P

    2009-02-11

    In a previous publication [Deacon RMJ, Cholerton LL, Talbot K, Nair-Roberts RG, Sanderson DJ, Romberg C, et al. Age-dependent and -independent behavioral deficits in Tg2576 mice. Behav Brain Res 2008;189:126-38] we found that very few cognitive tests were suitable for demonstrating deficits in Tg2576 mice, an amyloid over-expression model of Alzheimer's disease, even at 23 months of age. However, in a retrospective analysis of a separate project on these mice, tests of social memory and open field habituation revealed large cognitive impairments. Controls showed good open field habituation, but Tg2576 mice were hyperactive and failed to habituate. In the test of social memory for a juvenile mouse, controls showed considerably less social investigation on the second meeting, indicating memory of the juvenile, whereas Tg2576 mice did not show this decrement. As a control for olfactory sensitivity, on which social memory relies, the ability to find a food pellet hidden under wood chip bedding was assessed. Tg2576 mice found the pellet as quickly as controls. As this test requires digging ability, this was independently assessed in tests of burrowing and directly observed digging. In line with previous results and the hippocampal dysfunction characteristic of aged Tg2576 mice, they both burrowed and dug less than controls.

  17. Crystallographic and general use programs for the XDS Sigma 5 computer

    NASA Technical Reports Server (NTRS)

    Snyder, R. L.

    1973-01-01

    Programs in basic FORTRAN 4 are described, which fall into three categories: (1) interactive programs to be executed under time sharing (BTM); (2) non-interactive programs which are executed in batch processing mode (BPM); and (3) large non-interactive programs which require more memory than is available in the normal BPM/BTM operating system and must be run overnight on a special system called XRAY which releases about 45,000 words of memory to the user. Programs in categories (1) and (2) are stored as FORTRAN source files in the account FSNYDER. Programs in category (3) are stored in the XRAY system as load modules. The type of file in account FSNYDER is identified by the first two letters in the name.

  18. Exploration versus exploitation in space, mind, and society

    PubMed Central

    Hills, Thomas T.; Todd, Peter M.; Lazer, David; Redish, A. David; Couzin, Iain D.

    2015-01-01

    Search is a ubiquitous property of life. Although diverse domains have worked on search problems largely in isolation, recent trends across disciplines indicate that the formal properties of these problems share similar structures and, often, similar solutions. Moreover, internal search (e.g., memory search) shows similar characteristics to external search (e.g., spatial foraging), including shared neural mechanisms consistent with a common evolutionary origin across species. Search problems and their solutions also scale from individuals to societies, underlying and constraining problem solving, memory, information search, and scientific and cultural innovation. In summary, search represents a core feature of cognition, with a vast influence on its evolution and processes across contexts and requiring input from multiple domains to understand its implications and scope. PMID:25487706

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Chao; Pouransari, Hadi; Rajamanickam, Sivasankaran

    We present a parallel hierarchical solver for general sparse linear systems on distributed-memory machines. For large-scale problems, this fully algebraic algorithm is faster and more memory-efficient than sparse direct solvers because it exploits the low-rank structure of fill-in blocks. Depending on the accuracy of low-rank approximations, the hierarchical solver can be used either as a direct solver or as a preconditioner. The parallel algorithm is based on data decomposition and requires only local communication for updating boundary data on every processor. Moreover, the computation-to-communication ratio of the parallel algorithm is approximately the volume-to-surface-area ratio of the subdomain owned by every processor. We also provide various numerical results to demonstrate the versatility and scalability of the parallel algorithm.

  20. Associative learning performance is impaired in zebrafish (Danio rerio) by the NMDA-R antagonist MK-801

    PubMed Central

    Sison, Margarette; Gerlai, Robert

    2011-01-01

    The zebrafish is gaining popularity in behavioral neuroscience perhaps because of a promise of efficient large scale mutagenesis and drug screens that could identify a substantial number of yet undiscovered molecular players involved in complex traits. Learning and memory are complex functions of the brain and the analysis of their mechanisms may benefit from such large scale zebrafish screens. One bottleneck in this research is the paucity of appropriate behavioral screening paradigms, which may be due to the relatively uncharacterized nature of the behavior of this species. Here we show that zebrafish exhibit good learning performance in a task adapted from the mammalian literature, a plus maze in which zebrafish are required to associate a neutral visual stimulus with the presence of conspecifics, the rewarding unconditioned stimulus. Furthermore, we show that MK-801, a non-competitive NMDA-R antagonist, impairs memory performance in this maze when administered right after training or just before recall but not when given before training at a dose that does not impair motor function, perception or motivation. These results suggest that the plus maze associative learning paradigm has face and construct validity and that zebrafish may become an appropriate and translationally relevant study species for the analysis of the mechanisms of vertebrate, including mammalian, learning and memory. PMID:21596149

  1. A Fault-Tolerant Radiation-Robust Mass Storage Concept for Highly Scaled Flash Memory

    NASA Astrophysics Data System (ADS)

    Fuchs, Cristian M.; Trinitis, Carsten; Appel, Nicolas; Langer, Martin

    2015-09-01

    Future space missions will require vast amounts of data to be stored and processed aboard spacecraft. While satisfying operational mission requirements, storage systems must guarantee data integrity and recover damaged data throughout the mission. NAND-flash memories have become popular for space-borne high performance mass memory scenarios, though future storage concepts will rely upon highly scaled flash or other memory technologies. With modern flash memory, single bit erasure coding and RAID based concepts are insufficient. Thus, a fully run-time configurable, high-performance, dependable storage concept is needed that requires only a minimal set of logic or software. The solution is based on composite erasure coding and can be adjusted for altered mission duration or changing environmental conditions.

  2. Accessing Information in Working Memory: Can the Focus of Attention Grasp Two Elements at the Same Time?

    ERIC Educational Resources Information Center

    Oberauer, Klaus; Bialkova, Svetlana

    2009-01-01

    Processing information in working memory requires selective access to a subset of working-memory contents by a focus of attention. Complex cognition often requires joint access to 2 items in working memory. How does the focus select 2 items? Two experiments with an arithmetic task and 1 with a spatial task investigate time demands for successive…

  3. Quantum random access memory.

    PubMed

    Giovannetti, Vittorio; Lloyd, Seth; Maccone, Lorenzo

    2008-04-25

    A random access memory (RAM) uses n bits to randomly address N = 2^n distinct memory cells. A quantum random access memory (QRAM) uses n qubits to address any quantum superposition of N memory cells. We present an architecture that exponentially reduces the requirements for a memory call: O(log N) switches need be thrown instead of the N used in conventional (classical or quantum) RAM designs. This yields a more robust QRAM algorithm, as it in general requires entanglement among exponentially fewer gates, and leads to an exponential decrease in the power needed for addressing. A quantum optical implementation is presented.
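
    The scaling claim can be illustrated with a purely classical analogue of tree-based routing (this is an assumption-laden sketch, not a quantum simulation: the `route` function and its node-indexing scheme are invented for illustration). An n-bit address steers a signal down a binary tree, setting one switch per level, so only n = log2(N) switches are thrown instead of the N required by a full fan-out decoder.

```python
# Classical routing analogue of logarithmic addressing (illustrative only):
# each address bit selects left (0) or right (1) at one tree level, so an
# n-bit address throws n switches to reach one of N = 2^n cells.

def route(address_bits):
    """Return (cell_index, switches_thrown) for routing down a binary tree."""
    node = 0
    thrown = []
    for level, bit in enumerate(address_bits):
        thrown.append((level, node))     # one switch set per tree level
        node = 2 * node + bit            # descend left (0) or right (1)
    return node, thrown                  # node is the addressed cell index

cell, thrown = route([1, 0, 1])          # address 0b101 addresses cell 5 of 8
```

    Three switches suffice for eight cells, versus eight in a fan-out design; the gap widens exponentially with address length.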

  4. Methodology for sensitivity analysis, approximate analysis, and design optimization in CFD for multidisciplinary applications

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Hou, Gene W.

    1994-01-01

    The straightforward automatic-differentiation and the hand-differentiated incremental iterative methods are interwoven to produce a hybrid scheme that captures some of the strengths of each strategy. With this compromise, discrete aerodynamic sensitivity derivatives are calculated with the efficient incremental iterative solution algorithm of the original flow code. Moreover, the principal advantage of automatic differentiation is retained (i.e., all complicated source code for the derivative calculations is constructed quickly with accuracy). The basic equations for second-order sensitivity derivatives are presented; four methods are compared. Each scheme requires that large systems are solved first for the first-order derivatives and, in all but one method, for the first-order adjoint variables. Of these latter three schemes, two require no solutions of large systems thereafter. For the other two for which additional systems are solved, the equations and solution procedures are analogous to those for the first order derivatives. From a practical viewpoint, implementation of the second-order methods is feasible only with software tools such as automatic differentiation, because of the extreme complexity and large number of terms. First- and second-order sensitivities are calculated accurately for two airfoil problems, including a turbulent flow example; both geometric-shape and flow-condition design variables are considered. Several methods are tested; results are compared on the basis of accuracy, computational time, and computer memory. For first-order derivatives, the hybrid incremental iterative scheme obtained with automatic differentiation is competitive with the best hand-differentiated method; for six independent variables, it is at least two to four times faster than central finite differences and requires only 60 percent more memory than the original code; the performance is expected to improve further in the future.

  5. Masked multichannel analyzer

    DOEpatents

    Winiecki, A.L.; Kroop, D.C.; McGee, M.K.; Lenkszus, F.R.

    1984-01-01

    An analytical instrument and particularly a time-of-flight-mass spectrometer for processing a large number of analog signals irregularly spaced over a spectrum, with programmable masking of portions of the spectrum where signals are unlikely in order to reduce memory requirements and/or with a signal capturing assembly having a plurality of signal capturing devices fewer in number than the analog signals for use in repeated cycles within the data processing time period.
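
    A minimal software analogue of the masking idea may clarify it (the class and its interface are hypothetical, not taken from the patent): counters are allocated only for unmasked spectrum regions where signals are expected, so memory scales with the channels of interest rather than the full spectrum width.

```python
# Hypothetical sketch of programmable spectrum masking: only channels inside
# unmasked ranges get counters; events in masked regions are discarded.

class MaskedAnalyzer:
    def __init__(self, n_channels, unmasked_ranges):
        self.n_channels = n_channels     # full spectrum width (for reference)
        # Map each unmasked channel to a compact counter index.
        self.index = {}
        for lo, hi in unmasked_ranges:
            for ch in range(lo, hi):
                self.index.setdefault(ch, len(self.index))
        self.counts = [0] * len(self.index)

    def record(self, channel):
        i = self.index.get(channel)
        if i is not None:                # masked channels are simply dropped
            self.counts[i] += 1

    def count(self, channel):
        i = self.index.get(channel)
        return 0 if i is None else self.counts[i]

# 10,000-channel spectrum, but only 20 channels of interest get counters.
mca = MaskedAnalyzer(10000, [(100, 110), (5000, 5010)])
mca.record(105); mca.record(105); mca.record(4000)   # channel 4000 is masked
```

    Here 20 counters stand in for 10,000, mirroring the patent's goal of reducing memory requirements by masking spectrum regions where signals are unlikely.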

  6. Masked multichannel analyzer

    DOEpatents

    Winiecki, Alan L.; Kroop, David C.; McGee, Marilyn K.; Lenkszus, Frank R.

    1986-01-01

    An analytical instrument and particularly a time-of-flight-mass spectrometer for processing a large number of analog signals irregularly spaced over a spectrum, with programmable masking of portions of the spectrum where signals are unlikely in order to reduce memory requirements and/or with a signal capturing assembly having a plurality of signal capturing devices fewer in number than the analog signals for use in repeated cycles within the data processing time period.

  7. Barnes maze testing strategies with small and large rodent models.

    PubMed

    Rosenfeld, Cheryl S; Ferguson, Sherry A

    2014-02-26

    Spatial learning and memory of laboratory rodents is often assessed via navigational ability in mazes, most popular of which are the water and dry-land (Barnes) mazes. Improved performance over sessions or trials is thought to reflect learning and memory of the escape cage/platform location. Considered less stressful than water mazes, the Barnes maze is a relatively simple design of a circular platform top with several holes equally spaced around the perimeter edge. All but one of the holes are false-bottomed or blind-ending, while one leads to an escape cage. Mildly aversive stimuli (e.g. bright overhead lights) provide motivation to locate the escape cage. Latency to locate the escape cage can be measured during the session; however, additional endpoints typically require video recording. From those video recordings, use of automated tracking software can generate a variety of endpoints that are similar to those produced in water mazes (e.g. distance traveled, velocity/speed, time spent in the correct quadrant, time spent moving/resting, and confirmation of latency). Type of search strategy (i.e. random, serial, or direct) can be categorized as well. Barnes maze construction and testing methodologies can differ for small rodents, such as mice, and large rodents, such as rats. For example, while extra-maze cues are effective for rats, smaller wild rodents may require intra-maze cues with a visual barrier around the maze. Appropriate stimuli must be identified which motivate the rodent to locate the escape cage. Both Barnes and water mazes can be time consuming as 4-7 test trials are typically required to detect improved learning and memory performance (e.g. shorter latencies or path lengths to locate the escape platform or cage) and/or differences between experimental groups. Even so, the Barnes maze is a widely employed behavioral assessment measuring spatial navigational abilities and their potential disruption by genetic, neurobehavioral manipulations, or drug/toxicant exposure.

  8. Decomposing the relationship between cognitive functioning and self-referent memory beliefs in older adulthood: What’s memory got to do with it?

    PubMed Central

    Payne, Brennan R.; Gross, Alden L.; Hill, Patrick L.; Parisi, Jeanine M.; Rebok, George W.; Stine-Morrow, Elizabeth A. L.

    2018-01-01

    With advancing age, episodic memory performance shows marked declines along with concurrent reports of lower subjective memory beliefs. Given that normative age-related declines in episodic memory co-occur with declines in other cognitive domains, we examined the relationship between memory beliefs and multiple domains of cognitive functioning. Confirmatory bi-factor structural equation models were used to parse the shared and independent variance among factors representing episodic memory, psychomotor speed, and executive reasoning in one large cohort study (Senior Odyssey, N = 462), and replicated using another large cohort of healthy older adults (ACTIVE, N = 2,802). Accounting for a general fluid cognitive functioning factor (comprised of the shared variance among measures of episodic memory, speed, and reasoning) attenuated the relationship between objective memory performance and subjective memory beliefs in both samples. Moreover, the general cognitive functioning factor was the strongest predictor of memory beliefs in both samples. These findings are consistent with the notion that dispositional memory beliefs may reflect perceptions of cognition more broadly. This may be one reason why memory beliefs have broad predictive validity for interventions that target fluid cognitive ability. PMID:27685541

  9. Decomposing the relationship between cognitive functioning and self-referent memory beliefs in older adulthood: what's memory got to do with it?

    PubMed

    Payne, Brennan R; Gross, Alden L; Hill, Patrick L; Parisi, Jeanine M; Rebok, George W; Stine-Morrow, Elizabeth A L

    2017-07-01

    With advancing age, episodic memory performance shows marked declines along with concurrent reports of lower subjective memory beliefs. Given that normative age-related declines in episodic memory co-occur with declines in other cognitive domains, we examined the relationship between memory beliefs and multiple domains of cognitive functioning. Confirmatory bi-factor structural equation models were used to parse the shared and independent variance among factors representing episodic memory, psychomotor speed, and executive reasoning in one large cohort study (Senior Odyssey, N = 462), and replicated using another large cohort of healthy older adults (ACTIVE, N = 2802). Accounting for a general fluid cognitive functioning factor (comprised of the shared variance among measures of episodic memory, speed, and reasoning) attenuated the relationship between objective memory performance and subjective memory beliefs in both samples. Moreover, the general cognitive functioning factor was the strongest predictor of memory beliefs in both samples. These findings are consistent with the notion that dispositional memory beliefs may reflect perceptions of cognition more broadly. This may be one reason why memory beliefs have broad predictive validity for interventions that target fluid cognitive ability.

  10. The cortisol awakening response and memory performance in older men and women.

    PubMed

    Almela, Mercedes; van der Meij, Leander; Hidalgo, Vanesa; Villada, Carolina; Salvador, Alicia

    2012-12-01

    The activity and regulation of the hypothalamus-pituitary-adrenal axis has been related to cognitive decline during aging. This study investigated whether the cortisol awakening response (CAR) is related to memory performance among older adults. The sample was composed of 88 participants (44 men and 44 women) from 55 to 77 years old. The memory assessment consisted of two tests measuring declarative memory (a paragraph recall test and a word list learning test) and two tests measuring working memory (a spatial span test and a spatial working memory test). Among those participants who showed the CAR on two consecutive days, we found that a greater CAR was related to poorer declarative memory performance in both men and women, and to better working memory performance only in men. The results of our study suggest that the relationship between CAR and memory performance is negative in men and women when memory performance is largely dependent on hippocampal functioning (i.e. declarative memory), and positive, but only in men, when memory performance is largely dependent on prefrontal cortex functioning (i.e. working memory). Copyright © 2012 Elsevier Ltd. All rights reserved.

  11. Sparse distributed memory prototype: Principles of operation

    NASA Technical Reports Server (NTRS)

    Flynn, Michael J.; Kanerva, Pentti; Ahanin, Bahram; Bhadkamkar, Neal; Flaherty, Paul; Hickey, Philip

    1988-01-01

    Sparse distributed memory is a generalized random access memory (RAM) for long binary words. Such words can be written into and read from the memory, and they can be used to address the memory. The main attribute of the memory is sensitivity to similarity, meaning that a word can be read back not only by giving the original right address but also by giving one close to it as measured by the Hamming distance between addresses. Large memories of this kind are expected to have wide use in speech and scene analysis, in signal detection and verification, and in adaptive control of automated equipment. The memory can be realized as a simple, massively parallel computer. Digital technology has reached a point where building large memories is becoming practical. The research is aimed at resolving major design issues that have to be faced in building the memories. The design of a prototype memory with 256-bit addresses and from 8K to 128K locations for 256-bit words is described. A key aspect of the design is extensive use of dynamic RAM and other standard components.
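
    The similarity-sensitive read/write cycle described above can be captured in a toy implementation (a sketch with deliberately tiny parameters; the prototype used 256-bit words and hardware counters). A word is written into every hard location whose address lies within a Hamming radius of the write address, and read back by summing the counters of activated locations and thresholding each bit.

```python
# Toy sparse distributed memory: write distributes a word across all hard
# locations within a Hamming radius of the address; read sums counters of
# activated locations and thresholds each bit position.
import random

random.seed(1)
WORD = 16                                # word/address length in bits
RADIUS = 5                               # activation radius (Hamming distance)
locations = [[random.randint(0, 1) for _ in range(WORD)] for _ in range(200)]
counters = [[0] * WORD for _ in locations]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def write(addr, word):
    for loc, ctr in zip(locations, counters):
        if hamming(addr, loc) <= RADIUS:
            for k, bit in enumerate(word):
                ctr[k] += 1 if bit else -1

def read(addr):
    sums = [0] * WORD
    for loc, ctr in zip(locations, counters):
        if hamming(addr, loc) <= RADIUS:
            for k in range(WORD):
                sums[k] += ctr[k]
    return [1 if s > 0 else 0 for s in sums]

addr = [random.randint(0, 1) for _ in range(WORD)]
word = [random.randint(0, 1) for _ in range(WORD)]
write(addr, word)
```

    Because many locations share each stored word, a read from an address close to the original (as measured by Hamming distance) still activates overlapping locations and recovers the word, which is the "sensitivity to similarity" property.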

  12. Serotonin–mushroom body circuit modulating the formation of anesthesia-resistant memory in Drosophila

    PubMed Central

    Lee, Pei-Tseng; Lin, Hsuan-Wen; Chang, Yu-Hsuan; Fu, Tsai-Feng; Dubnau, Josh; Hirsh, Jay; Lee, Tzumin; Chiang, Ann-Shyn

    2011-01-01

    Pavlovian olfactory learning in Drosophila produces two genetically distinct forms of intermediate-term memories: anesthesia-sensitive memory, which requires the amnesiac gene, and anesthesia-resistant memory (ARM), which requires the radish gene. Here, we report that ARM is specifically enhanced or inhibited in flies with elevated or reduced serotonin (5HT) levels, respectively. The requirement for 5HT was additive with the memory defect of the amnesiac mutation but was occluded by the radish mutation. This result suggests that 5HT and Radish protein act on the same pathway for ARM formation. Three supporting lines of evidence indicate that ARM formation requires 5HT released from only two dorsal paired medial (DPM) neurons onto the mushroom bodies (MBs), the olfactory learning and memory center in Drosophila: (i) DPM neurons were 5HT-antibody immunopositive; (ii) temporal inhibition of 5HT synthesis or release from DPM neurons, but not from other serotonergic neurons, impaired ARM formation; (iii) knocking down the expression of d5HT1A serotonin receptors in α/β MB neurons, which are innervated by DPM neurons, inhibited ARM formation. Thus, in addition to the Amnesiac peptide required for anesthesia-sensitive memory formation, the two DPM neurons also release 5HT acting on MB neurons for ARM formation. PMID:21808003

  13. The Visual Orientation Memory of "Drosophila" Requires Foraging (PKG) Upstream of Ignorant (RSK2) in Ring Neurons of the Central Complex

    ERIC Educational Resources Information Center

    Kuntz, Sara; Poeck, Burkhard; Sokolowski, Marla B.; Strauss, Roland

    2012-01-01

    Orientation and navigation in a complex environment requires path planning and recall to exert goal-driven behavior. Walking "Drosophila" flies possess a visual orientation memory for attractive targets which is localized in the central complex of the adult brain. Here we show that this type of working memory requires the cGMP-dependent protein…

  14. The w-effect in interferometric imaging: from a fast sparse measurement operator to superresolution

    NASA Astrophysics Data System (ADS)

    Dabbech, A.; Wolz, L.; Pratley, L.; McEwen, J. D.; Wiaux, Y.

    2017-11-01

    Modern radio telescopes, such as the Square Kilometre Array, will probe the radio sky over large fields of view, which results in large w-modulations of the sky image. This effect complicates the relationship between the measured visibilities and the image under scrutiny. In algorithmic terms, it gives rise to massive memory and computational time requirements. Yet, it can be a blessing in terms of reconstruction quality of the sky image. In recent years, several works have shown that large w-modulations promote the spread spectrum effect. Within the compressive sensing framework, this effect increases the incoherence between the sensing basis and the sparsity basis of the signal to be recovered, leading to better estimation of the sky image. In this article, we revisit the w-projection approach using convex optimization in realistic settings, where the measurement operator couples the w-terms in Fourier and the de-gridding kernels. We provide sparse, thus fast, models of the Fourier part of the measurement operator through adaptive sparsification procedures. Consequently, memory requirements and computational cost are significantly alleviated at the expense of introducing errors on the radio interferometric data model. We present a first investigation of the impact of the sparse variants of the measurement operator on the image reconstruction quality. We finally analyse the interesting superresolution potential associated with the spread spectrum effect of the w-modulation, and showcase it through simulations. Our C++ code is available online on GitHub.
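
    The trade-off behind sparsifying a measurement operator can be illustrated generically (this is not the paper's adaptive procedure; the thresholding rule and the toy kernel are assumptions): entries below a fraction of the kernel's peak magnitude are dropped, shrinking storage at the cost of a controlled model error.

```python
# Generic sparsification by magnitude thresholding (illustrative): keep only
# kernel entries whose magnitude is at least `fraction` of the peak value.

def sparsify(kernel, fraction):
    """kernel: dict mapping (row, col) -> value. Returns the thresholded dict."""
    peak = max(abs(v) for v in kernel.values())
    return {k: v for k, v in kernel.items() if abs(v) >= fraction * peak}

# Toy banded kernel whose entries decay away from the diagonal.
dense = {(i, j): 1.0 / (1 + abs(i - j)) for i in range(8) for j in range(8)}
sparse = sparsify(dense, 0.5)            # retains only |i - j| <= 1 here
```

    Here 64 stored entries shrink to 22; in an interferometric setting the dropped mass translates into the data-model error the authors trade against memory and speed.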

  15. How Does Knowledge Promote Memory? The Distinctiveness Theory of Skilled Memory

    ERIC Educational Resources Information Center

    Rawson, Katherine A.; Van Overschelde, James P.

    2008-01-01

    The robust effects of knowledge on memory for domain-relevant information reported in previous research have largely been attributed to improved organizational processing. The present research proposes the distinctiveness theory of skilled memory, which states that knowledge improves memory not only through improved organizational processing but…

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katti, Amogh; Di Fatta, Giuseppe; Naughton, Thomas

    Future extreme-scale high-performance computing systems will be required to work under frequent component failures. The MPI Forum's User Level Failure Mitigation proposal has introduced an operation, MPI_Comm_shrink, to synchronize the alive processes on the list of failed processes, so that applications can continue to execute even in the presence of failures by adopting algorithm-based fault tolerance techniques. This MPI_Comm_shrink operation requires a failure detection and consensus algorithm. This paper presents three novel failure detection and consensus algorithms using Gossiping. The proposed algorithms were implemented and tested using the Extreme-scale Simulator. The results show that in all algorithms the number of Gossip cycles to achieve global consensus scales logarithmically with system size. The second algorithm also shows better scalability in terms of memory and network bandwidth usage and a perfect synchronization in achieving global consensus. The third approach is a three-phase distributed failure detection and consensus algorithm and provides consistency guarantees even in very large and extreme-scale systems while at the same time being memory and bandwidth efficient.
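
    The logarithmic scaling of gossip cycles reported in the abstract can be illustrated with a toy push-gossip simulation. This is a rough sketch, not the authors' algorithms: the process naming, the single-seed failure list, and the uniform random peer choice are all illustrative assumptions.

```python
import math
import random

def gossip_consensus(n_alive, failed, seed=0):
    """Toy push-gossip: count cycles until every alive process
    knows the full list of failed processes."""
    rng = random.Random(seed)
    target = set(failed)
    # only process 0 starts out knowing the failures it detected itself
    knowledge = [set() for _ in range(n_alive)]
    knowledge[0] = set(failed)
    cycles = 0
    while any(k != target for k in knowledge):
        cycles += 1
        # each process pushes its current failure list to one random peer
        for i in range(n_alive):
            knowledge[rng.randrange(n_alive)] |= knowledge[i]
    return cycles

# cycles-to-consensus grows roughly like log2 of the system size
for n in (16, 256, 4096):
    print(n, gossip_consensus(n, ["rank-7"]), round(math.log2(n)))
```

Running the sketch for increasing system sizes shows the cycle count growing far slower than the node count, consistent with the logarithmic scaling the paper measures in the Extreme-scale Simulator.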

  17. A fast Fourier transform on multipoles (FFTM) algorithm for solving Helmholtz equation in acoustics analysis.

    PubMed

    Ong, Eng Teo; Lee, Heow Pueh; Lim, Kian Meng

    2004-09-01

    This article presents a fast algorithm for the efficient solution of the Helmholtz equation. The method is based on the translation theory of multipole expansions. Here, the speedup comes from the convolution nature of the translation operators, which can be evaluated rapidly using fast Fourier transform algorithms. Also, the computations of the translation operators are accelerated by using the recursive formulas developed recently by Gumerov and Duraiswami [SIAM J. Sci. Comput. 25, 1344-1381 (2003)]. It is demonstrated that the algorithm can produce good accuracy with a relatively low order of expansion. Efficiency analyses of the algorithm reveal that it has computational complexities of O(N^a), where a ranges from 1.05 to 1.24. However, this method requires substantially more memory to store the translation operators as compared to the fast multipole method. Hence, despite its simplicity in implementation, this memory requirement issue may limit the application of this algorithm to solving very large-scale problems.

  18. High efficiency coherent optical memory with warm rubidium vapour

    PubMed Central

    Hosseini, M.; Sparkes, B.M.; Campbell, G.; Lam, P.K.; Buchler, B.C.

    2011-01-01

    By harnessing aspects of quantum mechanics, communication and information processing could be radically transformed. Promising forms of quantum information technology include optical quantum cryptographic systems and computing using photons for quantum logic operations. As with current information processing systems, some form of memory will be required. Quantum repeaters, which are required for long distance quantum key distribution, require quantum optical memory as do deterministic logic gates for optical quantum computing. Here, we present results from a coherent optical memory based on warm rubidium vapour and show 87% efficient recall of light pulses, the highest efficiency measured to date for any coherent optical memory suitable for quantum information applications. We also show storage and recall of up to 20 pulses from our system. These results show that simple warm atomic vapour systems have clear potential as a platform for quantum memory. PMID:21285952

  19. High efficiency coherent optical memory with warm rubidium vapour.

    PubMed

    Hosseini, M; Sparkes, B M; Campbell, G; Lam, P K; Buchler, B C

    2011-02-01

    By harnessing aspects of quantum mechanics, communication and information processing could be radically transformed. Promising forms of quantum information technology include optical quantum cryptographic systems and computing using photons for quantum logic operations. As with current information processing systems, some form of memory will be required. Quantum repeaters, which are required for long distance quantum key distribution, require quantum optical memory as do deterministic logic gates for optical quantum computing. Here, we present results from a coherent optical memory based on warm rubidium vapour and show 87% efficient recall of light pulses, the highest efficiency measured to date for any coherent optical memory suitable for quantum information applications. We also show storage and recall of up to 20 pulses from our system. These results show that simple warm atomic vapour systems have clear potential as a platform for quantum memory.

  20. Storage and computationally efficient permutations of factorized covariance and square-root information arrays

    NASA Technical Reports Server (NTRS)

    Muellerschoen, R. J.

    1988-01-01

    A unified method to permute vector stored Upper triangular Diagonal factorized covariance and vector stored upper triangular Square Root Information arrays is presented. The method involves cyclic permutation of the rows and columns of the arrays and retriangularization with fast (slow) Givens rotations (reflections). Minimal computation is performed, and a one dimensional scratch array is required. To make the method efficient for large arrays on a virtual memory machine, computations are arranged so as to avoid expensive paging faults. This method is potentially important for processing large volumes of radio metric data in the Deep Space Network.
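
    The core operation described above, cyclically permuting the rows and columns of a triangular factor and then restoring triangularity with Givens rotations, can be sketched on a small dense matrix. This is a simplified illustration, not the paper's vector-stored UD/SRIF implementation; the matrix values and the dense storage are assumptions for clarity.

```python
import math

def givens(a, b):
    """Return (c, s) so that [[c, s], [-s, c]] maps (a, b) to (r, 0)."""
    if b == 0.0:
        return 1.0, 0.0
    r = math.hypot(a, b)
    return a / r, b / r

def retriangularize(M):
    """Sweep out subdiagonal entries column by column with Givens
    rotations, restoring upper-triangular form in place."""
    n = len(M)
    for j in range(n):
        for i in range(n - 1, j, -1):
            c, s = givens(M[i - 1][j], M[i][j])
            for k in range(n):
                x, y = M[i - 1][k], M[i][k]
                M[i - 1][k] = c * x + s * y
                M[i][k] = -s * x + c * y
    return M

# cyclically permute the rows and columns of an upper-triangular factor
# (first row and column moved to the end), then retriangularize
U = [[4.0, 1.0, 2.0],
     [0.0, 3.0, 5.0],
     [0.0, 0.0, 6.0]]
P = [row[1:] + row[:1] for row in U[1:] + U[:1]]
R = retriangularize(P)
```

Because the rotations are orthogonal, the magnitude of the determinant (the product of the diagonal entries) is preserved through the permutation and retriangularization, which is the property that lets the factorized covariance or square-root information content survive the reordering.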

  1. Still searching for the engram

    PubMed Central

    Eichenbaum, Howard

    2016-01-01

    For nearly a century neurobiologists have searched for the engram - the neural representation of a memory. Early studies showed that the engram is widely distributed both within and across brain areas and is supported by interactions among large networks of neurons. Subsequent research has identified engrams that support memory within dedicated functional systems for habit learning and emotional memory, but the engram for declarative memories has been elusive. Nevertheless, recent years have brought progress from molecular biological approaches that identify neurons and networks that are necessary and sufficient to support memory, and from recording approaches and population analyses that characterize the information coded by large neural networks. These new directions offer the promise of revealing the engrams for episodic and semantic memories. PMID:26944423

  2. Method and device for maximizing memory system bandwidth by accessing data in a dynamically determined order

    NASA Technical Reports Server (NTRS)

    Schwab, Andrew J. (Inventor); Aylor, James (Inventor); Hitchcock, Charles Young (Inventor); Wulf, William A. (Inventor); McKee, Sally A. (Inventor); Moyer, Stephen A. (Inventor); Klenke, Robert (Inventor)

    2000-01-01

    A data processing system is disclosed which comprises a data processor and memory control device for controlling the access of information from the memory. The memory control device includes temporary storage and decision ability for determining what order to execute the memory accesses. The compiler detects the requirements of the data processor and selects the data to stream to the memory control device which determines a memory access order. The order in which to access said information is selected based on the location of information stored in the memory. The information is repeatedly accessed from memory and stored in the temporary storage until all streamed information is accessed. The information is stored until required by the data processor. The selection of the order in which to access information maximizes bandwidth and decreases the retrieval time.

  3. A Critical Role for the Nucleus Reuniens in Long-Term, But Not Short-Term Associative Recognition Memory Formation.

    PubMed

    Barker, Gareth R I; Warburton, Elizabeth Clea

    2018-03-28

    Recognition memory for single items requires the perirhinal cortex (PRH), whereas recognition of an item and its associated location requires a functional interaction among the PRH, hippocampus (HPC), and medial prefrontal cortex (mPFC). Although the precise mechanisms through which these interactions are effected are unknown, the nucleus reuniens (NRe) has bidirectional connections with each region and thus may play a role in recognition memory. Here we investigated, in male rats, whether specific manipulations of NRe function affected performance of recognition memory for single items, object location, or object-in-place associations. Permanent lesions in the NRe significantly impaired long-term, but not short-term, object-in-place associative recognition memory, whereas single-item recognition memory and object location memory were unaffected. Temporary inactivation of the NRe during distinct phases of the object-in-place task revealed its importance in both the encoding and retrieval stages of long-term associative recognition memory. Infusions of specific receptor antagonists showed that encoding was dependent on muscarinic and nicotinic cholinergic neurotransmission, whereas NMDA receptor neurotransmission was not required. Finally, we found that long-term object-in-place memory required protein synthesis within the NRe. These data reveal a specific role for the NRe in long-term associative recognition memory through its interactions with the HPC and mPFC, but not the PRH. The delay-dependent involvement of the NRe suggests that it is not a simple relay station between brain regions but, rather, during high mnemonic demand, facilitates interactions between the mPFC and HPC, a process that requires both cholinergic neurotransmission and protein synthesis.
SIGNIFICANCE STATEMENT Recognizing an object and its associated location, which is fundamental to our everyday memory, requires specific hippocampal-cortical interactions, potentially facilitated by the nucleus reuniens (NRe) of the thalamus. However, the role of the NRe itself in associative recognition memory is unknown. Here, we reveal the crucial role of the NRe in the encoding and retrieval of long-term object-in-place memory, but not in memory for an individual object or location, and show that this involvement depends on cholinergic receptors and protein synthesis. This is the first demonstration that the NRe is a key node within an associative recognition memory network and not just a simple relay for information within the network. Rather, we argue, the NRe actively modulates information processing during long-term associative memory formation. Copyright © 2018 the authors 0270-6474/18/383208-10$15.00/0.

  4. Impairing existing declarative memory in humans by disrupting reconsolidation

    PubMed Central

    Chan, Jason C. K.; LaPaglia, Jessica A.

    2013-01-01

    During the past decade, a large body of research has shown that memory traces can become labile upon retrieval and must be restabilized. Critically, interrupting this reconsolidation process can abolish a previously stable memory. Although a large number of studies have demonstrated this reconsolidation associated amnesia in nonhuman animals, the evidence for its occurrence in humans is far less compelling, especially with regard to declarative memory. In fact, reactivating a declarative memory often makes it more robust and less susceptible to subsequent disruptions. Here we show that existing declarative memories can be selectively impaired by using a noninvasive retrieval–relearning technique. In six experiments, we show that this reconsolidation-associated amnesia can be achieved 48 h after formation of the original memory, but only if relearning occurred soon after retrieval. Furthermore, the amnesic effect persists for at least 24 h, cannot be attributed solely to source confusion and is attainable only when relearning targets specific existing memories for impairment. These results demonstrate that human declarative memory can be selectively rewritten during reconsolidation. PMID:23690586

  5. A Blocked Linear Method for Optimizing Large Parameter Sets in Variational Monte Carlo

    DOE PAGES

    Zhao, Luning; Neuscamman, Eric

    2017-05-17

    We present a modification to variational Monte Carlo's linear method optimization scheme that addresses a critical memory bottleneck while maintaining compatibility with both the traditional ground-state variational principle and our recently introduced variational principle for excited states. For wave function ansatzes with tens of thousands of variables, our modification reduces the required memory per parallel process from tens of gigabytes to hundreds of megabytes, making the methodology a much better fit for modern supercomputer architectures in which data communication and per-process memory consumption are primary concerns. We verify the efficacy of the new optimization scheme in small molecule tests involving both the Hilbert space Jastrow antisymmetric geminal power ansatz and real space multi-Slater Jastrow expansions. Satisfied with its performance, we have added the optimizer to the QMCPACK software package, with which we demonstrate on a hydrogen ring a prototype approach for making systematically convergent, non-perturbative predictions of Mott insulators' optical band gaps.

  6. Shape-Memory Effect and Pseudoelasticity in Fe-Mn-Based Alloys

    NASA Astrophysics Data System (ADS)

    La Roca, P.; Baruj, A.; Sade, M.

    2017-03-01

    Several Fe-based alloys are being considered as potential candidates for applications which require shape-memory behavior or superelastic properties. The possibility of using fabrication methods which are well known in the steel industry is very attractive and encourages a large amount of research in the field. In the present article, Fe-Mn-based alloys are mainly addressed. On the one hand, attention is paid to the shape-memory effect where the alloys contain (a) a maximum amount of Mn up to around 30 wt%, (b) several possible substitutional elements like Si, Cr, Ni, Co, and Nb and (c) some possible interstitial elements like C. On the other hand, superelastic alloys are analyzed, mainly the Fe-Mn-Al-Ni system discovered a few years ago. The most noticeable properties resulting from the martensitic transformations which are responsible for the mentioned properties, i.e., the fcc-hcp in the first case and the bcc-fcc in the latter are discussed. Selected potential applications are also analyzed.

  7. Ultralow-power switching via defect engineering in germanium telluride phase-change memory devices.

    PubMed

    Nukala, Pavan; Lin, Chia-Chun; Composto, Russell; Agarwal, Ritesh

    2016-01-25

    Crystal-amorphous transformation achieved via the melt-quench pathway in phase-change memory involves fundamentally inefficient energy conversion events; and this translates to large switching current densities, responsible for chemical segregation and device degradation. Alternatively, introducing defects in the crystalline phase can engineer carrier localization effects enhancing carrier-lattice coupling; and this can efficiently extract work required to introduce bond distortions necessary for amorphization from input electrical energy. Here, by pre-inducing extended defects and thus carrier localization effects in crystalline GeTe via high-energy ion irradiation, we show tremendous improvement in amorphization current densities (0.13-0.6 MA cm(-2)) compared with the melt-quench strategy (∼50 MA cm(-2)). We show scaling behaviour and good reversibility on these devices, and explore several intermediate resistance states that are accessible during both amorphization and recrystallization pathways. Existence of multiple resistance states, along with ultralow-power switching and scaling capabilities, makes this approach promising in context of low-power memory and neuromorphic computation.

  8. Ultralow-power switching via defect engineering in germanium telluride phase-change memory devices

    PubMed Central

    Nukala, Pavan; Lin, Chia-Chun; Composto, Russell; Agarwal, Ritesh

    2016-01-01

    Crystal–amorphous transformation achieved via the melt-quench pathway in phase-change memory involves fundamentally inefficient energy conversion events; and this translates to large switching current densities, responsible for chemical segregation and device degradation. Alternatively, introducing defects in the crystalline phase can engineer carrier localization effects enhancing carrier–lattice coupling; and this can efficiently extract work required to introduce bond distortions necessary for amorphization from input electrical energy. Here, by pre-inducing extended defects and thus carrier localization effects in crystalline GeTe via high-energy ion irradiation, we show tremendous improvement in amorphization current densities (0.13–0.6 MA cm−2) compared with the melt-quench strategy (∼50 MA cm−2). We show scaling behaviour and good reversibility on these devices, and explore several intermediate resistance states that are accessible during both amorphization and recrystallization pathways. Existence of multiple resistance states, along with ultralow-power switching and scaling capabilities, makes this approach promising in context of low-power memory and neuromorphic computation. PMID:26805748

  9. Modulation of neuronal signal transduction and memory formation by synaptic zinc.

    PubMed

    Sindreu, Carlos; Storm, Daniel R

    2011-01-01

    The physiological role of synaptic zinc has remained largely enigmatic since its initial detection in hippocampal mossy fibers over 50 years ago. The past few years have witnessed a number of studies highlighting the ability of zinc ions to regulate ion channels and intracellular signaling pathways implicated in neuroplasticity, and others that shed some light on the elusive role of synaptic zinc in learning and memory. Recent behavioral studies using knock-out mice for the synapse-specific zinc transporter ZnT-3 indicate that vesicular zinc is required for the formation of memories dependent on the hippocampus and the amygdala, two brain centers that are prominently innervated by zinc-rich fibers. A common theme emerging from this research is the activity-dependent regulation of the Erk1/2 mitogen-activated-protein kinase pathway by synaptic zinc through diverse mechanisms in neurons. Here we discuss current knowledge on how synaptic zinc may play a role in cognition through its impact on neuronal signaling.

  10. Modulation of Neuronal Signal Transduction and Memory Formation by Synaptic Zinc

    PubMed Central

    Sindreu, Carlos; Storm, Daniel R.

    2011-01-01

    The physiological role of synaptic zinc has remained largely enigmatic since its initial detection in hippocampal mossy fibers over 50 years ago. The past few years have witnessed a number of studies highlighting the ability of zinc ions to regulate ion channels and intracellular signaling pathways implicated in neuroplasticity, and others that shed some light on the elusive role of synaptic zinc in learning and memory. Recent behavioral studies using knock-out mice for the synapse-specific zinc transporter ZnT-3 indicate that vesicular zinc is required for the formation of memories dependent on the hippocampus and the amygdala, two brain centers that are prominently innervated by zinc-rich fibers. A common theme emerging from this research is the activity-dependent regulation of the Erk1/2 mitogen-activated-protein kinase pathway by synaptic zinc through diverse mechanisms in neurons. Here we discuss current knowledge on how synaptic zinc may play a role in cognition through its impact on neuronal signaling. PMID:22084630

  11. Novel memory architecture for video signal processor

    NASA Astrophysics Data System (ADS)

    Hung, Jen-Sheng; Lin, Chia-Hsing; Jen, Chein-Wei

    1993-11-01

    An on-chip memory architecture for a video signal processor (VSP) is proposed. This memory structure is a two-level design for the different data localities in video applications. The upper level--Memory A--provides enough storage capacity to reduce the impact of limited chip I/O bandwidth, and the lower level--Memory B--provides enough data parallelism and flexibility to meet the requirements of multiple reconfigurable pipeline function units in a single VSP chip. The needed memory size is decided by memory-usage analysis of the video algorithms and the number of function units. Both levels of memory adopt a dual-port memory scheme to sustain simultaneous read and write operations. In particular, Memory B uses multiple one-read-one-write memory banks to emulate a true multiport memory, so the configuration of Memory B can be changed into several sets of memories with variable read/write ports by adjusting the bus switches. The numbers of read and write ports in the proposed memory can then meet the requirements of the data-flow patterns in different video coding algorithms. We have completed a prototype memory design using 1.2-micrometer SPDM SRAM technology and will fabricate it through TSMC in Taiwan.

  12. Rapid, experience-dependent translation of neurogranin enables memory encoding.

    PubMed

    Jones, Kendrick J; Templet, Sebastian; Zemoura, Khaled; Kuzniewska, Bozena; Pena, Franciso X; Hwang, Hongik; Lei, Ding J; Haensgen, Henny; Nguyen, Shannon; Saenz, Christopher; Lewis, Michael; Dziembowska, Magdalena; Xu, Weifeng

    2018-06-19

    Experience induces de novo protein synthesis in the brain and protein synthesis is required for long-term memory. It is important to define the critical temporal window of protein synthesis and identify newly synthesized proteins required for memory formation. Using a behavioral paradigm that temporally separates the contextual exposure from the association with fear, we found that protein synthesis during the transient window of context exposure is required for contextual memory formation. Among an array of putative activity-dependent translational neuronal targets tested, we identified one candidate, a schizophrenia-associated candidate mRNA, neurogranin (Ng, encoded by the Nrgn gene) responding to novel-context exposure. The Ng mRNA was recruited to the actively translating mRNA pool upon novel-context exposure, and its protein levels were rapidly increased in the hippocampus. By specifically blocking activity-dependent translation of Ng using virus-mediated molecular perturbation, we show that experience-dependent translation of Ng in the hippocampus is required for contextual memory formation. We further interrogated the molecular mechanism underlying the experience-dependent translation of Ng, and found that fragile-X mental retardation protein (FMRP) interacts with the 3'UTR of the Nrgn mRNA and is required for activity-dependent translation of Ng in the synaptic compartment and contextual memory formation. Our results reveal that FMRP-mediated, experience-dependent, rapid enhancement of Ng translation in the hippocampus during the memory acquisition enables durable context memory encoding. Copyright © 2018 the Author(s). Published by PNAS.

  13. Rapid, experience-dependent translation of neurogranin enables memory encoding

    PubMed Central

    Jones, Kendrick J.; Templet, Sebastian; Zemoura, Khaled; Pena, Franciso X.; Hwang, Hongik; Lei, Ding J.; Haensgen, Henny; Nguyen, Shannon; Saenz, Christopher; Lewis, Michael; Dziembowska, Magdalena

    2018-01-01

    Experience induces de novo protein synthesis in the brain and protein synthesis is required for long-term memory. It is important to define the critical temporal window of protein synthesis and identify newly synthesized proteins required for memory formation. Using a behavioral paradigm that temporally separates the contextual exposure from the association with fear, we found that protein synthesis during the transient window of context exposure is required for contextual memory formation. Among an array of putative activity-dependent translational neuronal targets tested, we identified one candidate, a schizophrenia-associated candidate mRNA, neurogranin (Ng, encoded by the Nrgn gene) responding to novel-context exposure. The Ng mRNA was recruited to the actively translating mRNA pool upon novel-context exposure, and its protein levels were rapidly increased in the hippocampus. By specifically blocking activity-dependent translation of Ng using virus-mediated molecular perturbation, we show that experience-dependent translation of Ng in the hippocampus is required for contextual memory formation. We further interrogated the molecular mechanism underlying the experience-dependent translation of Ng, and found that fragile-X mental retardation protein (FMRP) interacts with the 3′UTR of the Nrgn mRNA and is required for activity-dependent translation of Ng in the synaptic compartment and contextual memory formation. Our results reveal that FMRP-mediated, experience-dependent, rapid enhancement of Ng translation in the hippocampus during the memory acquisition enables durable context memory encoding. PMID:29880715

  14. Non Volatile Flash Memory Radiation Tests

    NASA Technical Reports Server (NTRS)

    Irom, Farokh; Nguyen, Duc N.; Allen, Greg

    2012-01-01

    The commercial flash memory industry has grown rapidly in recent years because of widespread use in cell phones, MP3 players and digital cameras. At the same time, there has been increased interest in the use of high-density commercial nonvolatile flash memories in space because of ever-increasing data requirements and strict power constraints. Because of their complex structure, flash memories cannot be treated as simple memories with regard to testing and analysis, and it becomes quite challenging to determine how they will respond in radiation environments.

  15. A Memory Efficient Network Encryption Scheme

    NASA Astrophysics Data System (ADS)

    El-Fotouh, Mohamed Abo; Diepold, Klaus

    In this paper, we studied two widely used encryption schemes in network applications. Shortcomings were found in both schemes, as they consume either more memory to gain high throughput or less memory at the cost of low throughput. As the number of internet users increases each day, the need has arisen for a scheme that has low memory requirements and at the same time possesses high speed. We used the SSM model [1] to construct an encryption scheme based on the AES. The proposed scheme possesses high throughput together with low memory requirements.

  16. Molecular Basis of 9G4 B Cell Autoreactivity in Human Systemic Lupus Erythematosus

    PubMed Central

    Richardson, Christopher; Chida, Asiya Seema; Adlowitz, Diana; Silver, Lin; Fox, Erin; Jenks, Scott A.; Palmer, Elise; Wang, Youliang; Heimburg-Molinaro, Jamie; Li, Quan-Zhen; Mohan, Chandra; Cummings, Richard; Tipton, Christopher

    2013-01-01

    9G4+ IgG Abs expand in systemic lupus erythematosus (SLE) in a disease-specific fashion and react with different lupus Ags including B cell Ags and apoptotic cells. Their shared use of VH4-34 represents a unique system to understand the molecular basis of lupus autoreactivity. In this study, a large panel of recombinant 9G4+ mAbs from single naive and memory cells was generated and tested against B cells, apoptotic cells, and other Ags. Mutagenesis eliminated the framework-1 hydrophobic patch (HP) responsible for the 9G4 idiotype. The expression of the HP in unselected VH4-34 cells was assessed by deep sequencing. We found that 9G4 Abs recognize several Ags following two distinct structural patterns. B cell binding is dependent on the HP, whereas anti-nuclear Abs, apoptotic cells, and dsDNA binding are HP independent and correlate with positively charged H chain third CDR. The majority of mutated VH4-34 memory cells retain the HP, thereby suggesting selection by Ags that require this germline structure. Our findings show that the germline-encoded HP is compulsory for the anti–B cell reactivity largely associated with 9G4 Abs in SLE but is not required for reactivity against apoptotic cells, dsDNA, chromatin, anti-nuclear Abs, or cardiolipin. Given that the lupus memory compartment contains a majority of HP+ VH4-34 cells but decreased B cell reactivity, additional HP-dependent Ags must participate in the selection of this compartment. This study represents the first analysis, to our knowledge, of VH-restricted autoreactive B cells specifically expanded in SLE and provides the foundation to understand the antigenic forces at play in this disease. PMID:24108696

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haugen, Carl C.; Forget, Benoit; Smith, Kord S.

    Most high-performance computing systems being deployed currently and envisioned for the future rely on heavy parallelism across many computational nodes and many concurrent cores. These heavily parallel systems often have relatively little memory per core but large amounts of computing capability. This places a significant constraint on how data storage is handled in many Monte Carlo codes. The constraint is even tighter in fully coupled multiphysics simulations, which require simulations of many physical phenomena to be carried out concurrently on individual processing nodes, further reducing the amount of memory available for storage of Monte Carlo data. As such, there has been a move towards on-the-fly nuclear data generation to reduce the memory requirements associated with interpolation between pre-generated large nuclear data tables for a selection of system temperatures. Methods have previously been developed and implemented in MIT's OpenMC Monte Carlo code for both the resolved and unresolved resonance regimes, but are currently absent for the thermal energy regime. While there are many components involved in generating a thermal neutron scattering cross section on the fly, this work focuses on a proposed method for determining the energy and direction of a neutron after a thermal incoherent inelastic scattering event: a rejection-sampling method that uses the thermal scattering kernel to determine the correct outgoing energy and angle. The goal of this project is to treat the full S(α, β) kernel for graphite, to assist in high-fidelity simulations of the TREAT reactor at Idaho National Laboratory. The method is, however, sufficiently general to be applicable to other thermal scattering materials, and can be initially validated with the continuous analytic free-gas model.
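
    The rejection-sampling idea at the heart of the proposed method can be sketched generically. The code below is illustrative only: it samples from a Maxwellian-like stand-in distribution, not the actual S(α, β) kernel, and the temperature, energy range, and pdf shape are all assumptions.

```python
import math
import random

def rejection_sample(pdf, pdf_max, lo, hi, rng):
    """Sample x in [lo, hi] from an unnormalized pdf by rejection:
    propose uniformly, accept with probability pdf(x) / pdf_max."""
    while True:
        x = rng.uniform(lo, hi)
        if rng.uniform(0.0, pdf_max) <= pdf(x):
            return x

# illustrative stand-in for an outgoing-energy kernel: a Maxwellian-like
# shape sqrt(E) * exp(-E / kT), NOT the actual S(alpha, beta) kernel
kT = 0.0253  # eV, room temperature
pdf = lambda E: math.sqrt(E) * math.exp(-E / kT)
pdf_max = pdf(kT / 2)  # the mode of sqrt(E) * exp(-E / kT) is at E = kT / 2

rng = random.Random(42)
samples = [rejection_sample(pdf, pdf_max, 0.0, 20 * kT, rng)
           for _ in range(5000)]
mean_E = sum(samples) / len(samples)  # should land near 1.5 * kT
```

The benefit in the on-the-fly setting is that only the kernel evaluation (and its bound) must be held in memory, rather than large pre-tabulated outgoing-energy distributions for each temperature.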

  18. Memory handling in the ATLAS submission system from job definition to sites limits

    NASA Astrophysics Data System (ADS)

    Forti, A. C.; Walker, R.; Maeno, T.; Love, P.; Rauschmayr, N.; Filipcic, A.; Di Girolamo, A.

    2017-10-01

    In the past few years the increased luminosity of the LHC, changes in the Linux kernel and a move to a 64-bit architecture have affected the memory usage of ATLAS jobs, and the ATLAS workload management system had to be adapted to be more flexible and pass memory parameters to the batch systems, which in the past was not necessary. This paper describes the steps required to add the capability to better handle memory requirements, including a review of how each component's definition and parametrization of memory is mapped to the other components, and what changes had to be applied to make the submission chain work. These changes range from the definition of tasks and the way task memory requirements are set using scout jobs, through the new memory tool developed for that purpose, to how these values are used by the submission component of the system and how the jobs are treated by the sites through the CEs, batch systems and ultimately the kernel.

  19. Modulation of learning and memory by cytokines: signaling mechanisms and long term consequences.

    PubMed

    Donzis, Elissa J; Tronson, Natalie C

    2014-11-01

    This review describes the role of cytokines and their downstream signaling cascades on the modulation of learning and memory. Immune proteins are required for many key neural processes and dysregulation of these functions by systemic inflammation can result in impairments of memory that persist long after the resolution of inflammation. Recent research has demonstrated that manipulations of individual cytokines can modulate learning, memory, and synaptic plasticity. The many conflicting findings, however, have prevented a clear understanding of the precise role of cytokines in memory. Given the complexity of inflammatory signaling, understanding its modulatory role requires a shift in focus from single cytokines to a network of cytokine interactions and elucidation of the cytokine-dependent intracellular signaling cascades. Finally, we propose that whereas signal transduction and transcription may mediate short-term modulation of memory, long-lasting cellular and molecular mechanisms such as epigenetic modifications and altered neurogenesis may be required for the long lasting impact of inflammation on memory and cognition. Copyright © 2014 Elsevier Inc. All rights reserved.

  20. Oct1 and OCA-B are selectively required for CD4 memory T cell function.

    PubMed

    Shakya, Arvind; Goren, Alon; Shalek, Alex; German, Cody N; Snook, Jeremy; Kuchroo, Vijay K; Yosef, Nir; Chan, Raymond C; Regev, Aviv; Williams, Matthew A; Tantin, Dean

    2015-11-16

    Epigenetic changes are crucial for the generation of immunological memory. Failure to generate or maintain these changes will result in poor memory responses. Similarly, augmenting or stabilizing the correct epigenetic states offers a potential method of enhancing memory. Yet the transcription factors that regulate these processes are poorly defined. We find that the transcription factor Oct1 and its cofactor OCA-B are selectively required for the in vivo generation of CD4(+) memory T cells. More importantly, the memory cells that are formed do not respond properly to antigen reencounter. In vitro, both proteins are required to maintain a poised state at the Il2 target locus in resting but previously stimulated CD4(+) T cells. OCA-B is also required for the robust reexpression of multiple other genes including Ifng. ChIPseq identifies ∼50 differentially expressed direct Oct1 and OCA-B targets. We identify an underlying mechanism involving OCA-B recruitment of the histone lysine demethylase Jmjd1a to targets such as Il2, Ifng, and Zbtb32. The findings pinpoint Oct1 and OCA-B as central mediators of CD4(+) T cell memory. © 2015 Shakya et al.

  1. Oct1 and OCA-B are selectively required for CD4 memory T cell function

    PubMed Central

    Shakya, Arvind; Goren, Alon; Shalek, Alex; German, Cody N.; Snook, Jeremy; Kuchroo, Vijay K.; Yosef, Nir; Chan, Raymond C.; Regev, Aviv

    2015-01-01

    Epigenetic changes are crucial for the generation of immunological memory. Failure to generate or maintain these changes will result in poor memory responses. Similarly, augmenting or stabilizing the correct epigenetic states offers a potential method of enhancing memory. Yet the transcription factors that regulate these processes are poorly defined. We find that the transcription factor Oct1 and its cofactor OCA-B are selectively required for the in vivo generation of CD4+ memory T cells. More importantly, the memory cells that are formed do not respond properly to antigen reencounter. In vitro, both proteins are required to maintain a poised state at the Il2 target locus in resting but previously stimulated CD4+ T cells. OCA-B is also required for the robust reexpression of multiple other genes including Ifng. ChIPseq identifies ∼50 differentially expressed direct Oct1 and OCA-B targets. We identify an underlying mechanism involving OCA-B recruitment of the histone lysine demethylase Jmjd1a to targets such as Il2, Ifng, and Zbtb32. The findings pinpoint Oct1 and OCA-B as central mediators of CD4+ T cell memory. PMID:26481684

  2. Static Memory Deduplication for Performance Optimization in Cloud Computing.

    PubMed

    Jia, Gangyong; Han, Guangjie; Wang, Hao; Yang, Xuan

    2017-04-27

    In a cloud computing environment, the number of virtual machines (VMs) on a single physical server and the number of applications running on each VM are continuously growing. This has led to an enormous increase in the demand for memory capacity and a subsequent increase in energy consumption in the cloud. Insufficient memory has become a major bottleneck for scalability and performance of virtualization interfaces in cloud computing. To address this problem, memory deduplication techniques, which reduce memory demand through page sharing, are being adopted. However, such techniques suffer from overhead in terms of the number of online comparisons required for deduplication. In this paper, we propose a static memory deduplication (SMD) technique which can reduce the memory capacity requirement and provide performance optimization in cloud computing. The main innovation of SMD is that the process of page detection is performed offline, thus potentially reducing the performance cost, especially in terms of response time. In SMD, page comparisons are restricted to the code segment, which has the highest shared content. Our experimental results show that SMD efficiently reduces the memory capacity requirement and improves performance. We demonstrate that, compared to other approaches, the cost in terms of response time is negligible.
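The core idea of page-level deduplication, detecting pages with identical content and keeping one physical copy, can be sketched roughly as follows. This is an illustrative content-hashing sketch, not the authors' implementation.

```python
import hashlib

PAGE = 4096  # typical page size in bytes

def dedup_pages(image: bytes):
    """Offline page-level deduplication sketch.

    Splits a memory image into fixed-size pages, fingerprints each page,
    and keeps one physical copy per distinct content. Returns
    (unique_pages, mapping), where mapping[i] is the index of the
    physical page backing virtual page i.
    """
    unique, seen, mapping = [], {}, []
    for off in range(0, len(image), PAGE):
        page = image[off:off + PAGE]
        key = hashlib.sha256(page).digest()   # content fingerprint
        if key not in seen:
            seen[key] = len(unique)
            unique.append(page)
        mapping.append(seen[key])
    return unique, mapping

# Three zero-filled pages plus one distinct page: only two copies survive.
image = bytes(PAGE) * 3 + b"\x01" * PAGE
unique, mapping = dedup_pages(image)
```

Doing this offline, as SMD proposes, moves the hashing and comparison cost out of the latency-sensitive path.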

  3. Static Memory Deduplication for Performance Optimization in Cloud Computing

    PubMed Central

    Jia, Gangyong; Han, Guangjie; Wang, Hao; Yang, Xuan

    2017-01-01

    In a cloud computing environment, the number of virtual machines (VMs) on a single physical server and the number of applications running on each VM are continuously growing. This has led to an enormous increase in the demand for memory capacity and a subsequent increase in energy consumption in the cloud. Insufficient memory has become a major bottleneck for scalability and performance of virtualization interfaces in cloud computing. To address this problem, memory deduplication techniques, which reduce memory demand through page sharing, are being adopted. However, such techniques suffer from overhead in terms of the number of online comparisons required for deduplication. In this paper, we propose a static memory deduplication (SMD) technique which can reduce the memory capacity requirement and provide performance optimization in cloud computing. The main innovation of SMD is that the process of page detection is performed offline, thus potentially reducing the performance cost, especially in terms of response time. In SMD, page comparisons are restricted to the code segment, which has the highest shared content. Our experimental results show that SMD efficiently reduces the memory capacity requirement and improves performance. We demonstrate that, compared to other approaches, the cost in terms of response time is negligible. PMID:28448434

  4. Efficient Maintenance and Update of Nonbonded Lists in Macromolecular Simulations.

    PubMed

    Chowdhury, Rezaul; Beglov, Dmitri; Moghadasi, Mohammad; Paschalidis, Ioannis Ch; Vakili, Pirooz; Vajda, Sandor; Bajaj, Chandrajit; Kozakov, Dima

    2014-10-14

    Molecular mechanics and dynamics simulations use distance based cutoff approximations for faster computation of pairwise van der Waals and electrostatic energy terms. These approximations traditionally use a precalculated and periodically updated list of interacting atom pairs, known as the "nonbonded neighborhood lists" or nblists, in order to reduce the overhead of finding atom pairs that are within distance cutoff. The size of nblists grows linearly with the number of atoms in the system and superlinearly with the distance cutoff, and as a result, they require significant amount of memory for large molecular systems. The high space usage leads to poor cache performance, which slows computation for large distance cutoffs. Also, the high cost of updates means that one cannot afford to keep the data structure always synchronized with the configuration of the molecules when efficiency is at stake. We propose a dynamic octree data structure for implicit maintenance of nblists using space linear in the number of atoms but independent of the distance cutoff. The list can be updated very efficiently as the coordinates of atoms change during the simulation. Unlike explicit nblists, a single octree works for all distance cutoffs. In addition, octree is a cache-friendly data structure, and hence, it is less prone to cache miss slowdowns on modern memory hierarchies than nblists. Octrees use almost 2 orders of magnitude less memory, which is crucial for simulation of large systems, and while they are comparable in performance to nblists when the distance cutoff is small, they outperform nblists for larger systems and large cutoffs. Our tests show that octree implementation is approximately 1.5 times faster in practical use case scenarios as compared to nblists.
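The idea of answering cutoff-neighbor queries from a spatial index rather than an explicit pair list can be illustrated with a uniform cell grid, a simplified stand-in for the paper's dynamic octree: atoms are binned into cells of side equal to the cutoff, so each atom only checks the 27 surrounding cells instead of every other atom.

```python
from itertools import product
from math import floor

def neighbors_within(atoms, cutoff):
    """Find all atom index pairs (i, j), i < j, within `cutoff` distance.

    Uses a uniform cell grid of side `cutoff`; memory is linear in the
    number of atoms and no explicit pair list is ever stored.
    """
    cells = {}
    for i, (x, y, z) in enumerate(atoms):
        key = (floor(x / cutoff), floor(y / cutoff), floor(z / cutoff))
        cells.setdefault(key, []).append(i)

    pairs, c2 = set(), cutoff * cutoff
    for (cx, cy, cz), members in cells.items():
        for dx, dy, dz in product((-1, 0, 1), repeat=3):
            for j in cells.get((cx + dx, cy + dy, cz + dz), ()):
                for i in members:
                    if i < j:
                        xi, yi, zi = atoms[i]
                        xj, yj, zj = atoms[j]
                        if (xi - xj)**2 + (yi - yj)**2 + (zi - zj)**2 <= c2:
                            pairs.add((i, j))
    return pairs

atoms = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (5.0, 5.0, 5.0)]
pairs = neighbors_within(atoms, cutoff=1.5)
```

As with the octree, the same structure serves any cutoff (here by rebinning), and updating it as atoms move costs far less than rebuilding an explicit pair list.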

  5. Prospects for Geostationary Doppler Weather Radar

    NASA Technical Reports Server (NTRS)

    Tanelli, Simone; Fang, Houfei; Durden, Stephen L.; Im, Eastwood; Rhamat-Samii, Yahya

    2009-01-01

    A novel mission concept, namely NEXRAD in Space (NIS), was developed for detailed monitoring of hurricanes, cyclones, and severe storms from a geostationary orbit. This mission concept requires a space deployable 35-m diameter reflector that operates at 35-GHz with a surface figure accuracy requirement of 0.21 mm RMS. This reflector is well beyond the current state-of-the-art. To implement this mission concept, several potential technologies associated with large, lightweight, spaceborne reflectors have been investigated by this study. These spaceborne reflector technologies include mesh reflector technology, inflatable membrane reflector technology and Shape Memory Polymer reflector technology.

  6. Out-of-Core Streamline Visualization on Large Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Ueng, Shyh-Kuang; Sikorski, K.; Ma, Kwan-Liu

    1997-01-01

    It's advantageous for computational scientists to have the capability to perform interactive visualization on their desktop workstations. For data on large unstructured meshes, this capability is not generally available. In particular, particle tracing on unstructured grids can result in a high percentage of non-contiguous memory accesses and therefore may perform very poorly with virtual memory paging schemes. The alternative of visualizing a lower resolution of the data degrades the original high-resolution calculations. This paper presents an out-of-core approach for interactive streamline construction on large unstructured tetrahedral meshes containing millions of elements. The out-of-core algorithm uses an octree to partition and restructure the raw data into subsets stored into disk files for fast data retrieval. A memory management policy tailored to the streamline calculations is used such that during the streamline construction only a very small amount of data are brought into the main memory on demand. By carefully scheduling computation and data fetching, the overhead of reading data from the disk is significantly reduced and good memory performance results. This out-of-core algorithm makes possible interactive streamline visualization of large unstructured-grid data sets on a single mid-range workstation with relatively low main-memory capacity: 5-20 megabytes. Our test results also show that this approach is much more efficient than relying on virtual memory and operating system's paging algorithms.
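The demand-driven memory policy described above can be sketched as a small LRU block cache. This is illustrative only; `loader` stands in for reading one octree-partitioned subset file from disk.

```python
from collections import OrderedDict

class BlockCache:
    """Tiny on-demand block cache in the spirit of an out-of-core policy.

    Keeps at most `capacity` data blocks in RAM; a requested block is
    fetched via `loader` (simulating a disk read) only when absent, and
    the least recently used block is evicted first.
    """
    def __init__(self, capacity, loader):
        self.capacity = capacity
        self.loader = loader
        self.blocks = OrderedDict()
        self.disk_reads = 0

    def get(self, block_id):
        if block_id in self.blocks:
            self.blocks.move_to_end(block_id)     # mark as recently used
            return self.blocks[block_id]
        self.disk_reads += 1
        data = self.loader(block_id)              # "read from disk"
        self.blocks[block_id] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)       # evict the LRU block
        return data

cache = BlockCache(capacity=2, loader=lambda b: f"cells-of-{b}")
for b in [0, 1, 0, 2, 0]:                         # accesses along a streamline
    cache.get(b)
```

Because a streamline advances through spatially adjacent cells, repeated hits on the cached block dominate and only occasional accesses pay the disk-read cost.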

  7. In-situ TOF neutron diffraction studies of cyclic softening in superelasticity of a NiFeGaCo shape memory alloy

    DOE PAGES

    Yang, Hui; Yu, Dunji; Chen, Yan; ...

    2016-10-24

    Real-time in-situ neutron diffraction was conducted during uniaxial cyclic compression of a Ni49.3Fe18Ga27Co5.7 shape memory alloy to explore the mechanism of its room-temperature superelasticity, which was manifested by almost fully recoverable large strains and apparent cyclic softening. Based on Rietveld refinements, the evolution of the martensite volume fraction was monitored in situ, indicating an increasing amount of residual martensite with increasing load cycles. Real-time changes in the intensities and lattice strains of {hkl} reflections for each phase were obtained by fitting individual peaks, revealing quantitative information on phase transformation kinetics as a function of grain orientation and stress/strain partitioning. Moreover, a large compressive residual stress was evidenced in the parent phase, which should be balanced by the residual martensite after the second unloading cycle. As a result, the large compressive residual stress found in the parent austenite phase may account for the cyclic effect on the critical stress required for triggering the martensitic transformation in subsequent loading.

  8. Shared versus distributed memory multiprocessors

    NASA Technical Reports Server (NTRS)

    Jordan, Harry F.

    1991-01-01

    The question of whether multiprocessors should have shared or distributed memory has attracted a great deal of attention. Some researchers argue strongly for building distributed memory machines, while others argue just as strongly for programming shared memory multiprocessors. A great deal of research is underway on both types of parallel systems. Special emphasis is placed on systems with a very large number of processors for computation-intensive tasks, and research and implementation trends are considered. It appears that the two types of systems will likely converge to a common form for large-scale multiprocessors.

  9. Use of an eight-arm radial water maze to assess working and reference memory following neonatal brain injury.

    PubMed

    Penley, Stephanie C; Gaudet, Cynthia M; Threlkeld, Steven W

    2013-12-04

    Working and reference memory are commonly assessed using the land based radial arm maze. However, this paradigm requires pretraining, food deprivation, and may introduce scent cue confounds. The eight-arm radial water maze is designed to evaluate reference and working memory performance simultaneously by requiring subjects to use extra-maze cues to locate escape platforms and remedies the limitations observed in land based radial arm maze designs. Specifically, subjects are required to avoid the arms previously used for escape during each testing day (working memory) as well as avoid the fixed arms, which never contain escape platforms (reference memory). Re-entries into arms that have already been used for escape during a testing session (and thus the escape platform has been removed) and re-entries into reference memory arms are indicative of working memory deficits. Alternatively, first entries into reference memory arms are indicative of reference memory deficits. We used this maze to compare performance of rats with neonatal brain injury and sham controls following induction of hypoxia-ischemia and show significant deficits in both working and reference memory after eleven days of testing. This protocol could be easily modified to examine many other models of learning impairment.

  10. Architecture of security management unit for safe hosting of multiple agents

    NASA Astrophysics Data System (ADS)

    Gilmont, Tanguy; Legat, Jean-Didier; Quisquater, Jean-Jacques

    1999-04-01

    In such growing areas as remote applications in large public networks, electronic commerce, digital signatures, intellectual property and copyright protection, and even operating system extensibility, the hardware security level offered by existing processors is insufficient. They lack protection mechanisms that prevent the user from tampering with critical data owned by those applications. Some devices are exceptions, but have neither enough processing power nor enough memory to stand up to such applications (e.g. smart cards). This paper proposes the architecture of a secure processor, in which the classical memory management unit is extended into a new security management unit. It allows ciphered code execution and ciphered data processing. An internal permanent memory can store cipher keys and critical data for several client agents simultaneously. The ordinary supervisor privilege scheme is replaced by a privilege inheritance mechanism that is better suited to operating system extensibility. The result is a secure processor that has hardware support for extensible multitask operating systems, and can be used for both general applications and critical applications needing strong protection. The security management unit and the internal permanent memory can be added to an existing CPU core without loss of performance, and do not require it to be modified.

  11. Replacement of the Faces subtest by Visual Reproductions within Wechsler Memory Scale-Third Edition (WMS-III) visual memory indexes: implications for discrepancy analysis.

    PubMed

    Hawkins, Keith A; Tulsky, David S

    2004-06-01

    Within discrepancy analysis differences between scores are examined for abnormality. Although larger differences are generally associated with rising impairment probabilities, the relationship between discrepancy size and abnormality varies across score pairs in relation to the correlation between the contrasted scores in normal subjects. Examinee ability level also affects the size of discrepancies observed normally. Wechsler Memory Scale-Third Edition (WMS-III) visual index scores correlate only modestly with other Wechsler Adult Intelligence Scale-Third Edition (WAIS-III) and WMS-III index scores; consequently, differences between these scores and others have to be very large before they become unusual, especially for subjects of higher intelligence. The substitution of the Faces subtest by Visual Reproductions within visual memory indexes formed by the combination of WMS-III visual subtests (creating immediate recall, delayed recall, and combined immediate and delayed index scores) results in higher correlation coefficients, and a decline in the discrepancy size required to surpass base rate thresholds for probable impairment. This gain appears not to occur at the cost of a diminished sensitivity to diverse pathologies. New WMS-III discrepancy base rate data are supplied to complement those currently available to clinicians.

  12. Neural basis for generalized quantifier comprehension.

    PubMed

    McMillan, Corey T; Clark, Robin; Moore, Peachie; Devita, Christian; Grossman, Murray

    2005-01-01

    Generalized quantifiers like "all cars" are semantically well understood, yet we know little about their neural representation. Our model of quantifier processing includes a numerosity device, operations that combine number elements and working memory. Semantic theory posits two types of quantifiers: first-order quantifiers identify a number state (e.g. "at least 3") and higher-order quantifiers additionally require maintaining a number state actively in working memory for comparison with another state (e.g. "less than half"). We used BOLD fMRI to test the hypothesis that all quantifiers recruit inferior parietal cortex associated with numerosity, while only higher-order quantifiers recruit prefrontal cortex associated with executive resources like working memory. Our findings showed that first-order and higher-order quantifiers both recruit right inferior parietal cortex, suggesting that a numerosity component contributes to quantifier comprehension. Moreover, only probes of higher-order quantifiers recruited right dorsolateral prefrontal cortex, suggesting involvement of executive resources like working memory. We also observed activation of thalamus and anterior cingulate that may be associated with selective attention. Our findings are consistent with a large-scale neural network centered in frontal and parietal cortex that supports comprehension of generalized quantifiers.

  13. Processing speed and working memory span: their differential role in superficial and deep memory processes in schizophrenia.

    PubMed

    Brébion, Gildas; Bressan, Rodrigo A; Pilowsky, Lyn S; David, Anthony S

    2011-05-01

    Previous work has suggested that decrements in both processing speed and working memory span play a role in the memory impairment observed in patients with schizophrenia. We undertook a study to examine the effect of these two factors simultaneously. A sample of 49 patients with schizophrenia and 43 healthy controls underwent a battery of verbal and visual memory tasks. Superficial and deep encoding memory measures were tallied. We conducted regression analyses on the various memory measures, using processing speed and working memory span as independent variables. In the patient group, processing speed was a significant predictor of superficial and deep memory measures in verbal and visual memory. Working memory span was an additional significant predictor of the deep memory measures only. Regression analyses involving all participants revealed that the effect of diagnosis on all the deep encoding memory measures was reduced to non-significance when processing speed was entered in the regression. Decreased processing speed is involved in the verbal and visual memory deficits of patients, whether the task requires superficial or deep encoding. Working memory is involved only insofar as the task requires a certain amount of effort.

  14. Longitudinal growth and morphology of the hippocampus through childhood: Impact of prematurity and implications for memory and learning.

    PubMed

    Thompson, Deanne K; Omizzolo, Cristina; Adamson, Christopher; Lee, Katherine J; Stargatt, Robyn; Egan, Gary F; Doyle, Lex W; Inder, Terrie E; Anderson, Peter J

    2014-08-01

    The effects of prematurity on hippocampal development through early childhood are largely unknown. The aims of this study were to (1) compare the shape of the very preterm (VPT) hippocampus to that of full-term (FT) children at 7 years of age, and determine if hippocampal shape is associated with memory and learning impairment in VPT children, (2) compare change in shape and volume of the hippocampi from term-equivalent to 7 years of age between VPT and FT children, and determine if development of the hippocampi over time predicts memory and learning impairment in VPT children. T1 and T2 magnetic resonance images were acquired at both term equivalent and 7 years of age in 125 VPT and 25 FT children. Hippocampi were manually segmented and shape was characterized by boundary point distribution models at both time-points. Memory and learning outcomes were measured at 7 years of age. The VPT group demonstrated less hippocampal infolding than the FT group at 7 years. Hippocampal growth between infancy and 7 years was less in the VPT compared with the FT group, but the change in shape was similar between groups. There was little evidence that the measures of hippocampal development were related to memory and learning impairments in the VPT group. This study suggests that the developmental trajectory of the human hippocampus is altered in VPT children, but this does not predict memory and learning impairment. Further research is required to elucidate the mechanisms for memory and learning difficulties in VPT children. Copyright © 2014 Wiley Periodicals, Inc.

  15. Antagonism at NMDA receptors, but not β-adrenergic receptors, disrupts the reconsolidation of pavlovian conditioned approach and instrumental transfer for ethanol-associated conditioned stimuli.

    PubMed

    Milton, Amy L; Schramm, Moritz J W; Wawrzynski, James R; Gore, Felicity; Oikonomou-Mpegeti, Faye; Wang, Nancy Q; Samuel, Daniel; Economidou, Daina; Everitt, Barry J

    2012-02-01

    Reconsolidation is the process by which memories require restabilisation following destabilisation at retrieval. Since even old, well-established memories become susceptible to disruption following reactivation, treatments based upon disrupting reconsolidation could provide a novel form of therapy for neuropsychiatric disorders based upon maladaptive memories, such as drug addiction. Pavlovian cues are potent precipitators of relapse to drug-seeking behaviour and influence instrumental drug seeking through at least three psychologically and neurobiologically distinct processes: conditioned reinforcement, conditioned approach (autoshaping) and conditioned motivation (pavlovian-instrumental transfer or PIT). We have previously demonstrated that the reconsolidation of memories underlying the conditioned reinforcing properties of drug cues depends upon NMDA receptor (NMDAR)- and β-adrenergic receptor (βAR)-mediated signalling. However, it is unknown whether the drug cue memory representations underlying conditioned approach and PIT depend upon the same mechanisms. Using orally self-administered ethanol as a reinforcer in two separate experiments, we investigated whether the reconsolidation of the memories underlying conditioned approach and PIT requires βAR- and NMDAR-dependent neurotransmission. For ethanol self-administering but non-dependent rats, the memories underlying conditioned approach and PIT for a pavlovian drug cue were disrupted by the administration of the NMDAR antagonist MK-801, but not the administration of the βAR antagonist propranolol, when given in conjunction with memory reactivation. As for natural reinforcers, NMDARs are required for the reconsolidation of all aspects of pavlovian drug memories, but βARs are only required for the memory representation underlying conditioned reinforcement. These results indicate the potential utility of treatments based upon disrupting cue-drug memory reconsolidation in preventing relapse.

  16. Generating unstructured nuclear reactor core meshes in parallel

    DOE PAGES

    Jain, Rajeev; Tautges, Timothy J.

    2014-10-24

    Recent advances in supercomputers and parallel solver techniques have enabled users to run large simulation problems using millions of processors. Techniques for multiphysics nuclear reactor core simulations are under active development in several countries. Most of these techniques require large unstructured meshes that can be hard to generate on standalone desktop computers because of high memory requirements, limited processing power, and other complexities. We have previously reported on a hierarchical lattice-based approach for generating reactor core meshes. Here, we describe efforts to exploit coarse-grained parallelism during the reactor assembly and reactor core mesh generation processes. We highlight several reactor core examples, including a very high temperature reactor, a full-core model of the MONJU reactor, a ¼ pressurized water reactor core, the fast reactor Experimental Breeder Reactor-II core with a XX09 assembly, and an advanced breeder test reactor core. The times required to generate large mesh models, along with speedups obtained from running these problems in parallel, are reported. A graphical user interface to the tools described here has also been developed.

  17. Integrated Optical Information Processing

    DTIC Science & Technology

    1988-08-01

    applications in optical disk memory systems [9]. This device is constructed in a glass/SiO2/Si waveguide. The choice of a Si substrate allows for the...contact mask) were formed in the photoresist deposited on all of the samples, we covered the unwanted gratings on each sample with cover glass slides...processing, let us consider TeO2 (v = 620 m/s) as a potential substrate for applications requiring large time delays. This consideration is despite

  18. Validation Test Report for the Automated Optical Processing System (AOPS) Version 4.8

    DTIC Science & Technology

    2013-06-28

    be familiar with UNIX; BASH shell programming; and remote sensing, particularly regarding computer processing of satellite data. The system memory and storage requirements are difficult to gauge. The amount of memory needed is dependent upon the amount and type of satellite data you wish to...process; the larger the area, the larger the memory requirement. For example, the entire Atlantic Ocean will require more processing power than the

  19. BCH codes for large IC random-access memory systems

    NASA Technical Reports Server (NTRS)

    Lin, S.; Costello, D. J., Jr.

    1983-01-01

    In this report some shortened BCH codes for possible applications to large IC random-access memory systems are presented. These codes are given by their parity-check matrices. Encoding and decoding of these codes are discussed.
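As a minimal illustration of a code defined by its parity-check matrix, the sketch below uses the (7,4) Hamming code, the simplest binary BCH code, to correct a single bit error; the shortened BCH codes in the report follow the same syndrome-decoding principle with larger matrices.

```python
# Parity-check matrix of the (7,4) Hamming code. Column i is the binary
# expansion of position i+1, so a nonzero syndrome directly names the
# flipped bit position.
H = [
    [0, 0, 0, 1, 1, 1, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [1, 0, 1, 0, 1, 0, 1],
]

def syndrome(word):
    """Compute H * word over GF(2)."""
    return [sum(h * w for h, w in zip(row, word)) % 2 for row in H]

def correct(word):
    """Return a corrected copy of `word`, fixing at most one bit error."""
    s = syndrome(word)
    pos = s[0] * 4 + s[1] * 2 + s[2]     # syndrome read as a binary number
    fixed = list(word)
    if pos:
        fixed[pos - 1] ^= 1              # flip the erroneous bit
    return fixed

codeword = [1, 0, 1, 0, 1, 0, 1]         # a valid Hamming(7,4) codeword
```

In a memory system the syndrome computation sits on the read path: a zero syndrome passes the word through, a nonzero one flips the indicated bit before the data leaves the memory controller.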

  20. Context Memory Formation Requires Activity-Dependent Protein Degradation in the Hippocampus

    ERIC Educational Resources Information Center

    Cullen, Patrick K.; Ferrara, Nicole C.; Pullins, Shane E.; Helmstetter, Fred J.

    2017-01-01

    Numerous studies have indicated that the consolidation of contextual fear memories supported by an aversive outcome like footshock requires de novo protein synthesis as well as protein degradation mediated by the ubiquitin-proteasome system (UPS). Context memory formed in the absence of an aversive stimulus by simple exposure to a novel…

  1. ERP C250 shows the elderly (cognitively normal, Alzheimer's disease) store more stimuli in short-term memory than Young Adults do.

    PubMed

    Chapman, Robert M; Gardner, Margaret N; Mapstone, Mark; Klorman, Rafael; Porsteinsson, Anton P; Dupree, Haley M; Antonsdottir, Inga M; Kamalyan, Lily

    2016-06-01

    To determine how aging and dementia affect the brain's initial storing of task-relevant and irrelevant information in short-term memory. We used brain Event-Related Potentials (ERPs) to measure short-term memory storage (ERP component C250) in 36 Young Adults, 36 Normal Elderly, and 36 early-stage AD subjects. Participants performed the Number-Letter task, a cognitive paradigm requiring memory storage of a first relevant stimulus to compare it with a second stimulus. In Young Adults, C250 was more positive for the first task-relevant stimulus compared to all other stimuli. C250 in Normal Elderly and AD subjects was roughly the same to relevant and irrelevant stimuli in Intratrial Parts 1-3 but not 4. The AD group had lower C250 to relevant stimuli in part 1. Both normal aging and dementia cause less differentiation of relevant from irrelevant information in initial storage. There was a large aging effect involving differences in the pattern of C250 responses of the Young Adult versus the Normal Elderly/AD groups. Also, a potential dementia effect was obtained. C250 is a candidate tool for measuring short-term memory performance on a biological level, as well as a potential marker for memory changes due to normal aging and dementia. Copyright © 2016 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  2. B Cells and B Cell Blasts Withstand Cryopreservation While Retaining Their Functionality for Producing Antibody.

    PubMed

    Fecher, Philipp; Caspell, Richard; Naeem, Villian; Karulin, Alexey Y; Kuerten, Stefanie; Lehmann, Paul V

    2018-05-31

    In individuals who have once developed humoral immunity to an infectious/foreign antigen, the antibodies present in their body can mediate instant protection when the antigen re-enters. Such antigen-specific antibodies can be readily detected in the serum. Long term humoral immunity is, however, also critically dependent on the ability of memory B cells to engage in a secondary antibody response upon re-exposure to the antigen. Antibody molecules in the body are short lived, having a half-life of weeks, while memory B cells have a life span of decades. Therefore, the presence of serum antibodies is not always a reliable indicator of B cell memory and comprehensive monitoring of humoral immunity requires that both serum antibodies and memory B cells be assessed. The prevailing view is that resting memory B cells and B cell blasts in peripheral blood mononuclear cells (PBMC) cannot be cryopreserved without losing their antibody secreting function, and regulated high throughput immune monitoring of B cell immunity is therefore confined to-and largely limited by-the need to test freshly isolated PBMC. Using optimized protocols for freezing and thawing of PBMC, and four color ImmunoSpot ® analysis for the simultaneous detection of all immunoglobulin classes/subclasses we show here that both resting memory B cells and B cell blasts retain their ability to secrete antibody after thawing, and thus demonstrate the feasibility of B cell immune monitoring using cryopreserved PBMC.

  3. Sparse distributed memory: Principles and operation

    NASA Technical Reports Server (NTRS)

    Flynn, M. J.; Kanerva, P.; Bhadkamkar, N.

    1989-01-01

    Sparse distributed memory is a generalized random access memory (RAM) for long (1000 bit) binary words. Such words can be written into and read from the memory, and they can also be used to address the memory. The main attribute of the memory is sensitivity to similarity, meaning that a word can be read back not only by giving the original write address but also by giving one close to it as measured by the Hamming distance between addresses. Large memories of this kind are expected to have wide use in speech recognition and scene analysis, in signal detection and verification, and in adaptive control of automated equipment; in general, in dealing with real-world information in real time. The memory can be realized as a simple, massively parallel computer. Digital technology has reached a point where building large memories is becoming practical. Major design issues faced in building such memories were resolved. The design of a prototype memory with 256 bit addresses and from 8 to 128 K locations for 256 bit words is described. A key aspect of the design is extensive use of dynamic RAM and other standard components.
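    The write/read mechanics described above can be sketched compactly. Below is a minimal illustration, not the prototype design: it assumes random hard-location addresses, a Hamming-radius activation rule, and per-bit up/down counters; the sizes and radius are toy values chosen for a small example.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, R = 256, 2000, 128                 # word length, hard locations, activation radius (toy values)

addresses = rng.integers(0, 2, (M, N))   # fixed random hard-location addresses
counters = np.zeros((M, N), dtype=int)   # one up/down counter per bit per location

def active(addr):
    """Select locations within Hamming distance R of the probe address."""
    return np.count_nonzero(addresses != addr, axis=1) <= R

def write(addr, word):
    sel = active(addr)
    counters[sel] += np.where(word == 1, 1, -1)   # +1 for 1-bits, -1 for 0-bits

def read(addr):
    sel = active(addr)
    return (counters[sel].sum(axis=0) > 0).astype(int)  # per-bit majority over selected locations

word = rng.integers(0, 2, N)
write(word, word)                                   # autoassociative store: address = data

noisy = word.copy()
noisy[rng.choice(N, 10, replace=False)] ^= 1        # probe from a nearby address (10 bits flipped)
print(np.array_equal(read(noisy), word))            # nearby probe recovers the stored word
```

    The "sensitivity to similarity" in the abstract is exactly this activation rule: a probe address within the radius of the original write address selects a heavily overlapping set of locations, so the counter sums still vote for the original word.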

  4. New trends in logic synthesis for both digital designing and data processing

    NASA Astrophysics Data System (ADS)

    Borowik, Grzegorz; Łuba, Tadeusz; Poźniak, Krzysztof

    2016-09-01

    FPGA devices are equipped with memory-based structures. These memories act as very large logic cells where the number of inputs equals the number of address lines. At the same time, there is a huge demand in the Internet of Things market for devices implementing virtual routers, intrusion detection systems, etc., where such memories are crucial for realizing pattern matching circuits, IP address tables, and other structures. Unfortunately, existing CAD tools are not well suited to utilize the capabilities that such large memory blocks offer, due to the lack of appropriate synthesis procedures. This paper presents methods which are useful for memory-based implementations: minimization of the number of input variables and functional decomposition.
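    The "memory as a very large logic cell" idea can be made concrete: the function's truth table is written into a memory whose address lines carry the inputs, so every evaluation becomes a single lookup. A minimal sketch (the example function `f` and the helper names are illustrative, not from the paper):

```python
def truth_table(fn, n):
    """Tabulate an n-input Boolean function into a 2**n-entry memory image
    (address bit i carries input i, LSB-first)."""
    return [fn(*(((addr >> i) & 1) for i in range(n))) for addr in range(2 ** n)]

# Example 4-input function: f(a, b, c, d) = (a AND b) OR (c XOR d)
f = lambda a, b, c, d: int((a and b) or (c ^ d))

lut = truth_table(f, 4)          # a 16-entry "memory"; the logic is now just data

def lookup(a, b, c, d):
    return lut[a | (b << 1) | (c << 2) | (d << 3)]   # pack the inputs into the address

# The lookup agrees with the original function on every input combination.
assert all(lookup(a, b, c, d) == f(a, b, c, d)
           for a in (0, 1) for b in (0, 1) for c in (0, 1) for d in (0, 1))
print(lookup(1, 1, 0, 0), lookup(0, 0, 1, 0))   # 1 1
```

    Functional decomposition, one of the methods the paper names, addresses the case where the function has more inputs than a single memory block has address lines: the function is split into a cascade of smaller lookup tables.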

  5. Earliest Memories and Recent Memories of Highly Salient Events--Are They Similar?

    ERIC Educational Resources Information Center

    Peterson, Carole; Fowler, Tania; Brandeau, Katherine M.

    2015-01-01

    Four- to 11-year-old children were interviewed about 2 different sorts of memories in the same home visit: recent memories of highly salient and stressful events--namely, injuries serious enough to require hospital emergency room treatment--and their earliest memories. Injury memories were scored for amount of unique information, completeness…

  6. Large-scale network integration in the human brain tracks temporal fluctuations in memory encoding performance.

    PubMed

    Keerativittayayut, Ruedeerat; Aoki, Ryuta; Sarabi, Mitra Taghizadeh; Jimura, Koji; Nakahara, Kiyoshi

    2018-06-18

    Although activation/deactivation of specific brain regions have been shown to be predictive of successful memory encoding, the relationship between time-varying large-scale brain networks and fluctuations of memory encoding performance remains unclear. Here we investigated time-varying functional connectivity patterns across the human brain in periods of 30-40 s, which have recently been implicated in various cognitive functions. During functional magnetic resonance imaging, participants performed a memory encoding task, and their performance was assessed with a subsequent surprise memory test. A graph analysis of functional connectivity patterns revealed that increased integration of the subcortical, default-mode, salience, and visual subnetworks with other subnetworks is a hallmark of successful memory encoding. Moreover, multivariate analysis using the graph metrics of integration reliably classified the brain network states into the period of high (vs. low) memory encoding performance. Our findings suggest that a diverse set of brain systems dynamically interact to support successful memory encoding. © 2018, Keerativittayayut et al.

  7. White Adipose Tissue Is a Reservoir for Memory T Cells and Promotes Protective Memory Responses to Infection.

    PubMed

    Han, Seong-Ji; Glatman Zaretsky, Arielle; Andrade-Oliveira, Vinicius; Collins, Nicholas; Dzutsev, Amiran; Shaik, Jahangheer; Morais da Fonseca, Denise; Harrison, Oliver J; Tamoutounour, Samira; Byrd, Allyson L; Smelkinson, Margery; Bouladoux, Nicolas; Bliska, James B; Brenchley, Jason M; Brodsky, Igor E; Belkaid, Yasmine

    2017-12-19

    White adipose tissue bridges body organs and plays a fundamental role in host metabolism. To what extent adipose tissue also contributes to immune surveillance and long-term protective defense remains largely unknown. Here, we have shown that at steady state, white adipose tissue contained abundant memory lymphocyte populations. After infection, white adipose tissue accumulated large numbers of pathogen-specific memory T cells, including tissue-resident cells. Memory T cells in white adipose tissue expressed a distinct metabolic profile, and white adipose tissue from previously infected mice was sufficient to protect uninfected mice from lethal pathogen challenge. Induction of recall responses within white adipose tissue was associated with the collapse of lipid metabolism in favor of antimicrobial responses. Our results suggest that white adipose tissue represents a memory T cell reservoir that provides potent and rapid effector memory responses, positioning this compartment as a potential major contributor to immunological memory. Published by Elsevier Inc.

  8. Genetic Dissociation of Acquisition and Memory Strength in the Heat-Box Spatial Learning Paradigm in "Drosophila"

    ERIC Educational Resources Information Center

    Diegelmann, Soeren; Zars, Melissa; Zars, Troy

    2006-01-01

    Memories can have different strengths, largely dependent on the intensity of reinforcers encountered. The relationship between reinforcement and memory strength is evident in asymptotic memory curves, with the level of the asymptote related to the intensity of the reinforcer. Although this is likely a fundamental property of memory formation,…

  9. NMDA Receptors Are Not Required for Pattern Completion During Associative Memory Recall

    PubMed Central

    Gu, Yiran; Cui, Zhenzhong; Tsien, Joe Z.

    2011-01-01

    Pattern completion, the ability to retrieve complete memories initiated by subsets of external cues, has been a major focus of many computation models. A previous study reported that such pattern completion requires NMDA receptors in the hippocampus. However, such a claim was derived from a non-inducible gene knockout experiment in which the NMDA receptors were absent throughout all stages of memory processes as well as the animal's adult life. This raises the critical question of whether the previously described results were truly due to a requirement for the NMDA receptors in retrieval. Here, we have examined the role of the NMDA receptors in pattern completion via inducible knockout of NMDA receptors limited to the memory retrieval stage. By using two independent mouse lines, we found that inducible knockout mice, lacking the NMDA receptor in either the forebrain or hippocampal CA1 region at the time of memory retrieval, exhibited normal recall of associative spatial reference memory regardless of whether retrievals took place under full-cue or partial-cue conditions. Moreover, systemic antagonism of the NMDA receptor during retention tests also had no effect on full-cue or partial-cue recall of spatial water maze memories. Thus, both genetic and pharmacological experiments collectively demonstrate that pattern completion during spatial associative memory recall does not require the NMDA receptor in the hippocampus or forebrain. PMID:21559402

  10. High speed optical object recognition processor with massive holographic memory

    NASA Technical Reports Server (NTRS)

    Chao, T.; Zhou, H.; Reyes, G.

    2002-01-01

    Real-time object recognition using a compact grayscale optical correlator will be introduced. A holographic memory module for storing a large bank of optimum correlation filters, to accommodate the large data throughput rate needed for many real-world applications, has also been developed. System architecture of the optical processor and the holographic memory will be presented. Application examples of this object recognition technology will also be demonstrated.

  11. Using data tagging to improve the performance of Kanerva's sparse distributed memory

    NASA Technical Reports Server (NTRS)

    Rogers, David

    1988-01-01

    The standard formulation of Kanerva's sparse distributed memory (SDM) involves the selection of a large number of data storage locations, followed by averaging the data contained in those locations to reconstruct the stored data. A variant of this model is discussed, in which the predominant pattern is the focus of reconstruction. First, one architecture is proposed which returns the predominant pattern rather than the average pattern. However, this model will require too much storage for most uses. Next, a hybrid model is proposed, called tagged SDM, which approximates the results of the predominant pattern machine, but is nearly as efficient as Kanerva's original formulation. Finally, some experimental results are shown which confirm that significant improvements in the recall capability of SDM can be achieved using the tagged architecture.
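    The difference between the standard average-pattern read and a predominant-pattern read can be shown with a toy example. This illustrates only the reconstruction rules; the tag mechanics of the hybrid model are not detailed in the abstract, so they are not sketched here.

```python
from collections import Counter

# Words that ended up stored in the same set of selected locations
# (tuples so they are hashable; one word occurs twice).
stored = [(1, 1, 0, 0), (1, 1, 0, 0), (1, 0, 1, 0), (0, 1, 1, 1), (0, 0, 1, 1)]

# Standard SDM read: average the stored data, i.e. a per-bit majority vote.
average = tuple(int(2 * sum(bits) > len(stored)) for bits in zip(*stored))

# Predominant-pattern read: return the single most frequent stored word.
predominant = Counter(stored).most_common(1)[0][0]

print(average)       # (1, 1, 1, 0): a blend that matches *no* stored word
print(predominant)   # (1, 1, 0, 0): the word actually written most often
```

    The averaging read can thus return a blend of patterns that was never stored, while the predominant-pattern machine returns a genuine stored word at a much higher storage cost; the tagged architecture described above approximates the latter at close to the former's cost.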

  12. Clomp

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gylenhaal, J.; Bronevetsky, G.

    2007-05-25

    CLOMP is the C version of the Livermore OpenMP benchmark developed to measure OpenMP overheads and other performance impacts due to threading (like NUMA memory layouts, memory contention, cache effects, etc.) in order to influence future system design. Current best-in-class implementations of OpenMP have overheads at least ten times larger than is required by many of our applications for effective use of OpenMP. This benchmark shows the significant negative performance impact of these relatively large overheads and of other thread effects. The CLOMP benchmark is highly configurable to allow a variety of problem sizes and threading effects to be studied, and it carefully checks its results to catch many common threading errors. This benchmark is expected to be included as part of the Sequoia Benchmark suite for the Sequoia procurement.
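    CLOMP's measurement idea — run the same small loop serially and under threading, verify the answers match, and compare the times to expose dispatch overhead — can be sketched schematically. The sketch below is in Python rather than C/OpenMP and is not the benchmark itself; the workload and sizes are invented for illustration (and in CPython the GIL adds its own serialization on top of dispatch cost).

```python
import time
from concurrent.futures import ThreadPoolExecutor

def work(chunk):
    """A small, cheap loop body, deliberately too small to amortize dispatch cost."""
    return sum(i * i for i in chunk)

data = [list(range(j * 500, (j + 1) * 500)) for j in range(64)]

t0 = time.perf_counter()
serial = [work(c) for c in data]
t_serial = time.perf_counter() - t0

t0 = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:   # timing includes pool startup/teardown
    threaded = list(pool.map(work, data))
t_threaded = time.perf_counter() - t0

assert serial == threaded   # like CLOMP's self-checks: threading must not change the answer
print(f"dispatch overhead factor: {t_threaded / max(t_serial, 1e-9):.2f}x")
```

    The reported factor is the benchmark's point: for loop bodies this small, per-task threading overhead dominates, which is exactly the regime where CLOMP shows OpenMP overheads mattering.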

  13. Primary auditory cortex regulates threat memory specificity.

    PubMed

    Wigestrand, Mattis B; Schiff, Hillary C; Fyhn, Marianne; LeDoux, Joseph E; Sears, Robert M

    2017-01-01

    Distinguishing threatening from nonthreatening stimuli is essential for survival and stimulus generalization is a hallmark of anxiety disorders. While auditory threat learning produces long-lasting plasticity in primary auditory cortex (Au1), it is not clear whether such Au1 plasticity regulates memory specificity or generalization. We used muscimol infusions in rats to show that discriminatory threat learning requires Au1 activity specifically during memory acquisition and retrieval, but not during consolidation. Memory specificity was similarly disrupted by infusion of PKMζ inhibitor peptide (ZIP) during memory storage. Our findings show that Au1 is required at critical memory phases and suggest that Au1 plasticity enables stimulus discrimination. © 2016 Wigestrand et al.; Published by Cold Spring Harbor Laboratory Press.

  14. Influence of memory effect on the state-of-charge estimation of large-format Li-ion batteries based on LiFePO4 cathode

    NASA Astrophysics Data System (ADS)

    Shi, Wei; Wang, Jiulin; Zheng, Jianming; Jiang, Jiuchun; Viswanathan, Vilayanur; Zhang, Ji-Guang

    2016-04-01

    In this work, we systematically investigated the influence of the memory effect of LiFePO4 cathodes in large-format full batteries. The electrochemical performance of the electrodes used in these batteries was also investigated separately in half-cells to reveal their intrinsic properties. We noticed that the memory effect of LiFePO4/graphite cells depends not only on the maximum state of charge reached during the memory writing process, but is also affected by the depth of discharge reached during the memory writing process. In addition, the voltage deviation in a LiFePO4/graphite full battery is more complex than in a LiFePO4/Li half-cell, especially for a large-format battery, which exhibits a significant current variation in the region near its terminals. Therefore, the memory effect should be taken into account in advanced battery management systems to further extend the long-term cycling stabilities of Li-ion batteries using LiFePO4 cathodes.

  15. Provably unbounded memory advantage in stochastic simulation using quantum mechanics

    NASA Astrophysics Data System (ADS)

    Garner, Andrew J. P.; Liu, Qing; Thompson, Jayne; Vedral, Vlatko; Gu, Mile

    2017-10-01

    Simulating the stochastic evolution of real quantities on a digital computer requires a trade-off between the precision to which these quantities are approximated, and the memory required to store them. The statistical accuracy of the simulation is thus generally limited by the internal memory available to the simulator. Here, using tools from computational mechanics, we show that quantum processors with a fixed finite memory can simulate stochastic processes of real variables to arbitrarily high precision. This demonstrates a provable, unbounded memory advantage that a quantum simulator can exhibit over its best possible classical counterpart.

  16. Process Performance of Optima XEx Single Wafer High Energy Implanter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, J. H.; Yoon, Jongyoon; Kondratenko, S.

    2011-01-07

    To meet the process requirements for well formation in future CMOS memory production, high energy implanters require more robust angle, dose, and energy control while maintaining high productivity. The Optima XEx high energy implanter meets these requirements by integrating a traditional LINAC beamline with a robust single wafer handling system. To achieve beam angle control, Optima XEx can control both the horizontal and vertical beam angles to within 0.1 degrees using advanced beam angle measurement and correction. Accurate energy calibration and energy trim functions accelerate process matching by eliminating energy calibration errors. The large volume process chamber and UDC (upstream dose control) using Faraday cups outside of the process chamber precisely control implant dose regardless of any chamber pressure increase due to PR (photoresist) outgassing. An optimized RF LINAC accelerator improves reliability and enables singly charged phosphorus and boron energies up to 1200 keV and 1500 keV respectively with higher beam currents. A new single wafer endstation combined with increased beam performance leads to overall increased productivity. We report on the advanced performance of Optima XEx observed during tool installation and volume production at an advanced memory fab.

  17. Binding, relational memory, and recall of naturalistic events: a developmental perspective.

    PubMed

    Sluzenski, Julia; Newcombe, Nora S; Kovacs, Stacie L

    2006-01-01

    This research was an investigation of children's performance on a task that requires memory binding. In Experiments 1 and 2, 4-year-olds, 6-year-olds, and adults viewed complex pictures and were tested on memory for isolated parts in the pictures and on the part combinations (combination condition). The results suggested improvement in memory for the combinations between the ages of 4 and 6 years but not in memory for the isolated parts. In Experiments 2 and 3, the authors also examined the developmental relationship between performance in the combination condition and free recall of a naturalistic event, finding preliminary evidence that performance on a memory task that requires binding is positively related to performance in episodic memory. ((c) 2006 APA, all rights reserved).

  18. Computational dissection of human episodic memory reveals mental process-specific genetic profiles

    PubMed Central

    Luksys, Gediminas; Fastenrath, Matthias; Coynel, David; Freytag, Virginie; Gschwind, Leo; Heck, Angela; Jessen, Frank; Maier, Wolfgang; Milnik, Annette; Riedel-Heller, Steffi G.; Scherer, Martin; Spalek, Klara; Vogler, Christian; Wagner, Michael; Wolfsgruber, Steffen; Papassotiropoulos, Andreas; de Quervain, Dominique J.-F.

    2015-01-01

    Episodic memory performance is the result of distinct mental processes, such as learning, memory maintenance, and emotional modulation of memory strength. Such processes can be effectively dissociated using computational models. Here we performed gene set enrichment analyses of model parameters estimated from the episodic memory performance of 1,765 healthy young adults. We report robust and replicated associations of the amine compound SLC (solute-carrier) transporters gene set with the learning rate, of the collagen formation and transmembrane receptor protein tyrosine kinase activity gene sets with the modulation of memory strength by negative emotional arousal, and of the L1 cell adhesion molecule (L1CAM) interactions gene set with the repetition-based memory improvement. Furthermore, in a large functional MRI sample of 795 subjects we found that the association between L1CAM interactions and memory maintenance revealed large clusters of differences in brain activity in frontal cortical areas. Our findings provide converging evidence that distinct genetic profiles underlie specific mental processes of human episodic memory. They also provide empirical support to previous theoretical and neurobiological studies linking specific neuromodulators to the learning rate and linking neural cell adhesion molecules to memory maintenance. Furthermore, our study suggests additional memory-related genetic pathways, which may contribute to a better understanding of the neurobiology of human memory. PMID:26261317

  19. Computational dissection of human episodic memory reveals mental process-specific genetic profiles.

    PubMed

    Luksys, Gediminas; Fastenrath, Matthias; Coynel, David; Freytag, Virginie; Gschwind, Leo; Heck, Angela; Jessen, Frank; Maier, Wolfgang; Milnik, Annette; Riedel-Heller, Steffi G; Scherer, Martin; Spalek, Klara; Vogler, Christian; Wagner, Michael; Wolfsgruber, Steffen; Papassotiropoulos, Andreas; de Quervain, Dominique J-F

    2015-09-01

    Episodic memory performance is the result of distinct mental processes, such as learning, memory maintenance, and emotional modulation of memory strength. Such processes can be effectively dissociated using computational models. Here we performed gene set enrichment analyses of model parameters estimated from the episodic memory performance of 1,765 healthy young adults. We report robust and replicated associations of the amine compound SLC (solute-carrier) transporters gene set with the learning rate, of the collagen formation and transmembrane receptor protein tyrosine kinase activity gene sets with the modulation of memory strength by negative emotional arousal, and of the L1 cell adhesion molecule (L1CAM) interactions gene set with the repetition-based memory improvement. Furthermore, in a large functional MRI sample of 795 subjects we found that the association between L1CAM interactions and memory maintenance revealed large clusters of differences in brain activity in frontal cortical areas. Our findings provide converging evidence that distinct genetic profiles underlie specific mental processes of human episodic memory. They also provide empirical support to previous theoretical and neurobiological studies linking specific neuromodulators to the learning rate and linking neural cell adhesion molecules to memory maintenance. Furthermore, our study suggests additional memory-related genetic pathways, which may contribute to a better understanding of the neurobiology of human memory.

  20. Mapping of the Underlying Neural Mechanisms of Maintenance and Manipulation in Visuo-Spatial Working Memory Using An n-back Mental Rotation Task: A Functional Magnetic Resonance Imaging Study.

    PubMed

    Lamp, Gemma; Alexander, Bonnie; Laycock, Robin; Crewther, David P; Crewther, Sheila G

    2016-01-01

    Mapping of the underlying neural mechanisms of visuo-spatial working memory (WM) has been shown to consistently elicit activity in right hemisphere dominant fronto-parietal networks. However to date, the bulk of neuroimaging literature has focused largely on the maintenance aspect of visuo-spatial WM, with a scarcity of research into the aspects of WM involving manipulation of information. Thus, this study aimed to compare maintenance-only with maintenance and manipulation of visuo-spatial stimuli (3D cube shapes) utilizing a 1-back task while functional magnetic resonance imaging (fMRI) scans were acquired. Sixteen healthy participants (9 women, M = 23.94 years, SD = 2.49) were required to perform the 1-back task with or without mentally rotating the shapes 90° on a vertical axis. When no rotation was required (maintenance-only condition), a right hemispheric lateralization was revealed across fronto-parietal areas. However, when the task involved maintaining and manipulating the same stimuli through 90° rotation, activation was primarily seen in the bilateral parietal lobe and left fusiform gyrus. The findings confirm that the well-established right lateralized fronto-parietal networks are likely to underlie simple maintenance of visuo-spatial stimuli. The results also suggest that the added demand of manipulation of information maintained online appears to require further neural recruitment of functionally related areas. In particular mental rotation of visuospatial stimuli required bilateral parietal areas, and the left fusiform gyrus potentially to maintain a categorical or object representation. It can be concluded that WM is a complex neural process involving the interaction of an increasingly large network.

  1. Mapping of the Underlying Neural Mechanisms of Maintenance and Manipulation in Visuo-Spatial Working Memory Using An n-back Mental Rotation Task: A Functional Magnetic Resonance Imaging Study

    PubMed Central

    Lamp, Gemma; Alexander, Bonnie; Laycock, Robin; Crewther, David P.; Crewther, Sheila G.

    2016-01-01

    Mapping of the underlying neural mechanisms of visuo-spatial working memory (WM) has been shown to consistently elicit activity in right hemisphere dominant fronto-parietal networks. However to date, the bulk of neuroimaging literature has focused largely on the maintenance aspect of visuo-spatial WM, with a scarcity of research into the aspects of WM involving manipulation of information. Thus, this study aimed to compare maintenance-only with maintenance and manipulation of visuo-spatial stimuli (3D cube shapes) utilizing a 1-back task while functional magnetic resonance imaging (fMRI) scans were acquired. Sixteen healthy participants (9 women, M = 23.94 years, SD = 2.49) were required to perform the 1-back task with or without mentally rotating the shapes 90° on a vertical axis. When no rotation was required (maintenance-only condition), a right hemispheric lateralization was revealed across fronto-parietal areas. However, when the task involved maintaining and manipulating the same stimuli through 90° rotation, activation was primarily seen in the bilateral parietal lobe and left fusiform gyrus. The findings confirm that the well-established right lateralized fronto-parietal networks are likely to underlie simple maintenance of visuo-spatial stimuli. The results also suggest that the added demand of manipulation of information maintained online appears to require further neural recruitment of functionally related areas. In particular mental rotation of visuospatial stimuli required bilateral parietal areas, and the left fusiform gyrus potentially to maintain a categorical or object representation. It can be concluded that WM is a complex neural process involving the interaction of an increasingly large network. PMID:27199694

  2. Fast and Accurate Simulation of the Cray XMT Multithreaded Supercomputer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Villa, Oreste; Tumeo, Antonino; Secchi, Simone

    Irregular applications, such as data mining and analysis or graph-based computations, show unpredictable memory/network access patterns and control structures. Highly multithreaded architectures with large processor counts, like the Cray MTA-1, MTA-2 and XMT, appear to address their requirements better than commodity clusters. However, the research on highly multithreaded systems is currently limited by the lack of adequate architectural simulation infrastructures due to issues such as size of the machines, memory footprint, simulation speed, accuracy and customization. At the same time, Shared-memory MultiProcessors (SMPs) with multi-core processors have become an attractive platform to simulate large scale machines. In this paper, we introduce a cycle-level simulator of the highly multithreaded Cray XMT supercomputer. The simulator runs unmodified XMT applications. We discuss how we tackled the challenges posed by its development, detailing the techniques introduced to make the simulation as fast as possible while maintaining a high accuracy. By mapping XMT processors (ThreadStorm with 128 hardware threads) to host computing cores, the simulation speed remains constant as the number of simulated processors increases, up to the number of available host cores. The simulator supports zero-overhead switching among different accuracy levels at run-time and includes a network model that takes into account contention. On a modern 48-core SMP host, our infrastructure simulates a large set of irregular applications 500 to 2000 times slower than real time when compared to a 128-processor XMT, while remaining accurate to within 10%. Emulation is only from 25 to 200 times slower than real time.

  3. The Dynamics of Memory: Context-Dependent Updating

    ERIC Educational Resources Information Center

    Hupbach, Almut; Hardt, Oliver; Gomez, Rebecca; Nadel, Lynn

    2008-01-01

    Understanding the dynamics of memory change is one of the current challenges facing cognitive neuroscience. Recent animal work on memory reconsolidation shows that memories can be altered long after acquisition. When reactivated, memories can be modified and require a restabilization (reconsolidation) process. We recently extended this finding to…

  4. Mechanisms of Translation Control Underlying Long-lasting Synaptic Plasticity and the Consolidation of Long-term Memory

    PubMed Central

    Santini, Emanuela; Huynh, Thu N.; Klann, Eric

    2018-01-01

    The complexity of memory formation and its persistence is a phenomenon that has been studied intensely for centuries. Memory exists in many forms and is stored in various brain regions. Generally speaking, memories are reorganized into broadly distributed cortical networks over time through systems level consolidation. At the cellular level, storage of information is believed to initially occur via altered synaptic strength by processes such as long-term potentiation (LTP). New protein synthesis is required for long-lasting synaptic plasticity as well as for the formation of long-term memory. The mammalian target of rapamycin complex 1 (mTORC1) is a critical regulator of cap-dependent protein synthesis and is required for numerous forms of long-lasting synaptic plasticity and long-term memory. As such, the study of mTORC1 and protein factors that control translation initiation and elongation have enhanced our understanding of how the process of protein synthesis is regulated during memory formation. Herein we will discuss the molecular mechanisms that regulate protein synthesis as well as pharmacological and genetic manipulations that demonstrate the requirement for proper translational control in long-lasting synaptic plasticity and long-term memory formation. PMID:24484700

  5. Activation of the Transcription Factor NF-[Kappa]B by Retrieval Is Required for Long-Term Memory Reconsolidation

    ERIC Educational Resources Information Center

    Maldonado, Hector; Romano, Arturo; Merlo, Emiliano; Freudenthal, Ramiro

    2005-01-01

    Several studies support that stored memories undergo a new period of consolidation after retrieval. It is not known whether this process, termed reconsolidation, requires the same transcriptional mechanisms involved in consolidation. Increasing evidence supports the participation of the transcription factor NF-[Kappa]B in memory. This was…

  6. Two Waves of Transcription Are Required for Long-Term Memory in the Honeybee

    ERIC Educational Resources Information Center

    Lefer, Damien; Perisse, Emmanuel; Hourcade, Benoit; Sandoz, JeanChristophe; Devaud, Jean-Marc

    2013-01-01

    Storage of information into long-term memory (LTM) usually requires at least two waves of transcription in many species. However, there is no clear evidence of this phenomenon in insects, which are influential models for memory studies. We measured retention in honeybees after injecting a transcription inhibitor at different times before and after…

  7. Myosin II Motor Activity in the Lateral Amygdala Is Required for Fear Memory Consolidation

    ERIC Educational Resources Information Center

    Gavin, Cristin F.; Rubio, Maria D.; Young, Erica; Miller, Courtney; Rumbaugh, Gavin

    2012-01-01

    Learning induces dynamic changes to the actin cytoskeleton that are required to support memory formation. However, the molecular mechanisms that mediate filamentous actin (F-actin) dynamics during learning and memory are poorly understood. Myosin II motors are highly expressed in actin-rich growth structures including dendritic spines, and we have…

  8. Practicing What Is Preached: Self-Reflections on Memory in a Memory Course

    ERIC Educational Resources Information Center

    Conrad, Nicole J.

    2013-01-01

    To apply several principles of memory covered in a first-year university memory course, I developed a series of one-page self-reflection papers on memory that require students to engage with the material in a meaningful way. These short papers cover topics related to memory, and the assignment itself applies these same principles, reinforcing…

  9. p300/CBP Histone Acetyltransferase Activity Is Required for Newly Acquired and Reactivated Fear Memories in the Lateral Amygdala

    ERIC Educational Resources Information Center

    Maddox, Stephanie A.; Watts, Casey S.; Schafe, Glenn E.

    2013-01-01

    Modifications in chromatin structure have been widely implicated in memory and cognition, most notably using hippocampal-dependent memory paradigms including object recognition, spatial memory, and contextual fear memory. Relatively little is known, however, about the role of chromatin-modifying enzymes in amygdala-dependent memory formation.…

  10. Cross-linguistic and cross-cultural effects on verbal working memory and vocabulary: testing language-minority children with an immigrant background.

    PubMed

    de Abreu, Pascale M J Engel; Baldassi, Martine; Puglisi, Marina L; Befi-Lopes, Debora M

    2013-04-01

    In this study, the authors explored the impact of test language and cultural status on vocabulary and working memory performance in multilingual language-minority children. Twenty 7-year-old Portuguese-speaking immigrant children living in Luxembourg completed several assessments of first (L1)- and second-language (L2) vocabulary (comprehension and production), executive-loaded working memory (counting recall and backward digit recall), and verbal short-term memory (digit recall and nonword repetition). Cross-linguistic task performance was compared within individuals. The language-minority children were also compared with multilingual language-majority children from Luxembourg and Portuguese-speaking monolinguals from Brazil without an immigrant background matched on age, sex, socioeconomic status, and nonverbal reasoning. Results showed that (a) verbal working memory measures involving numerical memoranda were relatively independent of test language and cultural status; (b) language status had an impact on the repetition of high- but not on low-wordlike L2 nonwords; (c) large cross-linguistic and cross-cultural effects emerged for productive vocabulary; (d) cross-cultural effects were less pronounced for vocabulary comprehension with no differences between groups if only L1 words relevant to the home context were considered. The study indicates that linguistic and cognitive assessments for language-minority children require careful choice among measures to ensure valid results. Implications for testing culturally and linguistically diverse children are discussed.

  11. Brain systems underlying attentional control and emotional distraction during working memory encoding.

    PubMed

    Ziaei, Maryam; Peira, Nathalie; Persson, Jonas

    2014-02-15

    Goal-directed behavior requires that cognitive operations can be protected from emotional distraction induced by task-irrelevant emotional stimuli. The brain processes involved in attending to relevant information while filtering out irrelevant information are still largely unknown. To investigate the neural and behavioral underpinnings of attending to task-relevant emotional stimuli while ignoring irrelevant stimuli, we used fMRI to assess brain responses during attentional instructed encoding within an emotional working memory (WM) paradigm. We showed that instructed attention to emotion during WM encoding resulted in enhanced performance, by means of increased memory performance and reduced reaction time, compared to passive viewing. A similar performance benefit was also demonstrated for recognition memory performance, although for positive pictures only. Functional MRI data revealed a network of regions involved in directed attention to emotional information for both positive and negative pictures that included medial and lateral prefrontal cortices, fusiform gyrus, insula, the parahippocampal gyrus, and the amygdala. Moreover, we demonstrate that regions in the striatum, and regions associated with the default-mode network were differentially activated for emotional distraction compared to neutral distraction. Activation in a sub-set of these regions was related to individual differences in WM and recognition memory performance, thus likely contributing to performing the task at an optimal level. The present results provide initial insights into the behavioral and neural consequences of instructed attention and emotional distraction during WM encoding. © 2013.

  12. Updating schematic emotional facial expressions in working memory: Response bias and sensitivity.

    PubMed

    Tamm, Gerly; Kreegipuu, Kairi; Harro, Jaanus; Cowan, Nelson

    2017-01-01

    It is unclear if positive, negative, or neutral emotional expressions have an advantage in short-term recognition. Moreover, it is unclear from previous studies of working memory for emotional faces whether effects of emotions comprise response bias or sensitivity. The aim of this study was to compare how schematic emotional expressions (sad, angry, scheming, happy, and neutral) are discriminated and recognized in an updating task (2-back recognition) in a representative sample of a birth cohort of young adults. Schematic facial expressions allow control of identity processing, which is separate from expression processing, and have been used extensively in attention research but not much, until now, in working memory research. We found that expressions with a U-curved mouth (i.e., upwardly curved), namely happy and scheming expressions, favoured a bias towards recognition (i.e., towards indicating that the probe and the stimulus in working memory are the same). Other effects of emotional expression were considerably smaller (1-2% of the variance explained) compared to a large proportion of variance that was explained by the physical similarity of items being compared. We suggest that the nature of the stimuli plays a role in this. The present application of signal detection methodology with emotional, schematic faces in a working memory procedure requiring fast comparisons helps to resolve important contradictions that have emerged in the emotional perception literature. Copyright © 2016 Elsevier B.V. All rights reserved.

  13. NFκB–Pim-1–Eomesodermin axis is critical for maintaining CD8 T-cell memory quality

    PubMed Central

    Knudson, Karin M.; Saxena, Vikas; Altman, Amnon; Daniels, Mark A.; Teixeiro, Emma

    2017-01-01

    T-cell memory is critical for long-term immunity. However, the factors involved in maintaining the persistence, function, and phenotype of the memory pool are undefined. Eomesodermin (Eomes) is required for the establishment of the memory pool. Here, we show that in T cells transitioning to memory, the expression of high levels of Eomes is not constitutive but rather requires a continuum of cell-intrinsic NFκB signaling. Failure to maintain NFκB signals after the peak of the response led to impaired Eomes expression and a defect in the maintenance of CD8 T-cell memory. Strikingly, we found that antigen receptor [T-cell receptor (TCR)] signaling regulates this process through expression of the NFκB-dependent kinase proviral integration site for Moloney murine leukemia virus-1 (PIM-1), which in turn regulates NFκB and Eomes. T cells defective in TCR-dependent NFκB signaling were impaired in late expression of Pim-1, Eomes, and CD8 memory. These defects were rescued when TCR-dependent NFκB signaling was restored. We also found that NFκB–Pim-1 signals were required at memory to maintain memory CD8 T-cell longevity, effector function, and Eomes expression. Hence, an NFκB–Pim-1–Eomes axis regulates Eomes levels to maintain memory fitness. PMID:28193872

  14. [Anterograde declarative memory and its models].

    PubMed

    Barbeau, E-J; Puel, M; Pariente, J

    2010-01-01

    Patient H.M.'s recent death provides the opportunity to highlight the importance of his contribution to a better understanding of the anterograde amnesic syndrome. The thorough study of this patient over five decades largely contributed to shaping the unitary model of declarative memory. This model holds that declarative memory is a single system that cannot be fractionated into subcomponents. As a system, it depends mainly on medial temporal lobe structures. The objective of this review is to present the main characteristics of different modular models that have been proposed as alternatives to the unitary model. It is also an opportunity to present different patients who, although less famous than H.M., made significant contributions to the field of memory. The characteristics of the five main modular models are presented, including the most recent one (the perceptual-mnemonic model). The differences as well as how these models converge are highlighted. Different possibilities that could help reconcile unitary and modular approaches are considered. Although modular models differ significantly in many aspects, all converge to the notion that memory for single items and semantic memory could be dissociated from memory for complex material and context-rich episodes. In addition, these models converge concerning the involvement of critical brain structures for these stages: Item and semantic memory, as well as familiarity, are thought to largely depend on anterior subhippocampal areas, while relational, context-rich memory and recollective experiences are thought to largely depend on the hippocampal formation. Copyright © 2010 Elsevier Masson SAS. All rights reserved.

  15. Feature bindings are maintained in visual short-term memory without sustained focused attention.

    PubMed

    Delvenne, Jean-François; Cleeremans, Axel; Laloyaux, Cédric

    2010-01-01

    Does the maintenance of feature bindings in visual short-term memory (VSTM) require sustained focused attention? This issue was investigated in three experiments, in which memory for single features (i.e., colors or shapes) was compared with memory for feature bindings (i.e., the link between the color and shape of an object). Attention was manipulated during the memory retention interval with a retro-cue, which allows attention to be directed and focused on a subset of memory items. The retro-cue was presented 700 ms after the offset of the memory display and 700 ms before the onset of the test display. If the maintenance of feature bindings - but not of individual features - in memory requires sustained focused attention, the retro-cue should not affect memory performance. Contrary to this prediction, we found that both memory for feature bindings and memory for individual features were equally improved by the retro-cue. Therefore, this finding does not support the view that sustained focused attention is needed to properly maintain feature bindings in VSTM.

  16. The Development of Time-Based Prospective Memory in Childhood: The Role of Working Memory Updating

    ERIC Educational Resources Information Center

    Voigt, Babett; Mahy, Caitlin E. V.; Ellis, Judi; Schnitzspahn, Katharina; Krause, Ivonne; Altgassen, Mareike; Kliegel, Matthias

    2014-01-01

    This large-scale study examined the development of time-based prospective memory (PM) across childhood and the roles that working memory updating and time monitoring play in driving age effects in PM performance. One hundred and ninety-seven children aged 5 to 14 years completed a time-based PM task where working memory updating load was…

  17. Synapsin Is Selectively Required for Anesthesia-Sensitive Memory

    ERIC Educational Resources Information Center

    Knapek, Stephan; Gerber, Bertram; Tanimoto, Hiromu

    2010-01-01

    Odor-shock memory in "Drosophila melanogaster" consists of heterogeneous components each with different dynamics. We report that a null mutant for the evolutionarily conserved synaptic protein Synapsin entails a memory deficit selectively in early memory, leaving later memory as well as sensory motor function unaffected. Notably, a consolidated…

  18. Conceptual design and feasibility evaluation model of a 10 to the 8th power bit oligatomic mass memory. Volume 2: Feasibility evaluation model

    NASA Technical Reports Server (NTRS)

    Horst, R. L.; Nordstrom, M. J.

    1972-01-01

    The partially populated oligatomic mass memory feasibility model is described and evaluated. A system was desired to verify the feasibility of the oligatomic (mirror) memory approach as applicable to large scale solid state mass memories.

  19. PGAS in-memory data processing for the Processing Unit of the Upgraded Electronics of the Tile Calorimeter of the ATLAS Detector

    NASA Astrophysics Data System (ADS)

    Ohene-Kwofie, Daniel; Otoo, Ekow

    2015-10-01

    The ATLAS detector, operated at the Large Hadron Collider (LHC), records proton-proton collisions at CERN every 50 ns, resulting in a sustained data flow of up to PB/s. The upgraded Tile Calorimeter of the ATLAS experiment will sustain about 5 PB/s of digital throughput. These massive data rates require extremely fast data capture and processing. Although there has been a steady increase in the processing speed of CPU/GPGPU assembled for high performance computing, the rate of data input and output, even under parallel I/O, has not kept up with the general increase in computing speeds. The problem then is whether one can implement an I/O subsystem infrastructure capable of meeting the computational speeds of the advanced computing systems at the petascale and exascale level. We propose a system architecture that leverages the Partitioned Global Address Space (PGAS) model of computing to maintain an in-memory data-store for the Processing Unit (PU) of the upgraded electronics of the Tile Calorimeter, which is proposed to be used as a high-throughput general-purpose co-processor to the sROD of the upgraded Tile Calorimeter. The physical memory of the PUs is aggregated into a large global logical address space using RDMA-capable interconnects such as PCI-Express to enhance data processing throughput.

  20. Balanced Branching in Transcription Termination

    NASA Technical Reports Server (NTRS)

    Harrington, K. J.; Laughlin, R. B.; Liang, S.

    2001-01-01

    The theory of stochastic transcription termination based on free-energy competition requires two or more reaction rates to be delicately balanced over a wide range of physical conditions. A large body of work on glasses and large molecules suggests that this should be impossible in such a large system in the absence of a new organizing principle of matter. We review the experimental literature of termination and find no evidence for such a principle but many troubling inconsistencies, most notably anomalous memory effects. These suggest that termination has a deterministic component and may conceivably be not stochastic at all. We find that a key experiment by Wilson and von Hippel allegedly refuting deterministic termination was an incorrectly analyzed regulatory effect of Mg(2+) binding.

  1. The Neuroscience of Memory: Implications for the Courtroom

    PubMed Central

    2014-01-01

    Although memory can be hazy at times, it is often assumed that memories of violent or otherwise stressful events are so well-encoded that they are largely indelible and that confidently retrieved memories are likely to be accurate. However, findings from basic psychological research and neuroscience studies indicate that memory is a reconstructive process that is susceptible to distortion. In the courtroom, even minor memory distortions can have severe consequences that are in part driven by common misunderstandings about memory, e.g. expecting memory to be more veridical than it may actually be. PMID:23942467

  2. Molecular Mechanisms in Perirhinal Cortex Selectively Necessary for Discrimination of Overlapping Memories, but Independent of Memory Persistence

    PubMed Central

    Miranda, Magdalena; Kent, Brianne A.; Weisstaub, Noelia V.

    2017-01-01

    Successful memory involves not only remembering over time but also keeping memories distinct. The ability to separate similar experiences into distinct memories is a main feature of episodic memory. Discrimination of overlapping representations has been investigated in the dentate gyrus of the hippocampus (DG), but little is known about this process in other regions such as the perirhinal cortex (Prh). We found in male rats that perirhinal brain-derived neurotrophic factor (BDNF) is required for separable storage of overlapping, but not distinct, object representations, which is identical to its role in the DG for spatial representations. Also, activity-regulated cytoskeletal-associated protein (Arc) is required for disambiguation of object memories, as measured by infusion of antisense oligonucleotides. This is the first time Arc has been implicated in the discrimination of objects with overlapping features. Although molecular mechanisms for object memory have been shown previously in Prh, these have been dependent on delay, suggesting a role specifically in memory duration. BDNF and Arc involvement were independent of delay—the same demand for memory persistence was present in all conditions—but only when discrimination of similar objects was required were these mechanisms recruited and necessary. Finally, we show that BDNF and Arc participate in the same pathway during consolidation of overlapping object memories. We provide novel evidence regarding the proteins involved in disambiguation of object memories outside the DG and suggest that, despite the anatomical differences, similar mechanisms underlie this process in the DG and Prh that are engaged depending on the similarity of the stimuli. PMID:29085903

  3. Intact implicit verbal relational memory in medial temporal lobe amnesia

    PubMed Central

    Verfaellie, Mieke; LaRocque, Karen F.; Keane, Margaret M.

    2012-01-01

    To elucidate the role of the hippocampus in unaware relational memory, the present study examined the performance of amnesic patients with medial temporal lobe (MTL) lesions on a cued category-exemplar generation task. In contrast to a prior study in which amnesic patients showed impaired performance (Verfaellie et al., Cognitive, Affective, and Behavioral Neuroscience, 2006, 6, 91–101), the current study employed a task that required active processing of the context word at test. In this version of the task, amnesic patients, like control participants, showed enhanced category exemplar priming when the context word associated with the target at study was reinstated at test. The finding of intact implicit memory for novel associations following hippocampal lesions in a task that requires flexible use of retrieval cues is inconsistent with a relational memory view that suggests that the hippocampus is critical for all forms of relational memory, regardless of awareness. Instead, it suggests that unaware memory for within-domain associations does not require MTL mediation. PMID:22609574

  4. A Simulation System Based on the Actor Paradigm

    DTIC Science & Technology

    1988-02-01

    of the protocol. Shared memory communication requires the programmer to wait and signal semaphores explicitly to synchronize the communicating parties...wide range of possibilities within the same basic protocol. The simplicity of the primitive operation set affords those creating new operations...more flexibility (Ada has a large and complicated primitive set).

  5. A new family of Polak-Ribiere-Polyak conjugate gradient method with the strong-Wolfe line search

    NASA Astrophysics Data System (ADS)

    Ghani, Nur Hamizah Abdul; Mamat, Mustafa; Rivaie, Mohd

    2017-08-01

    The conjugate gradient (CG) method is an important technique in unconstrained optimization, due to its effectiveness and low memory requirements. The focus of this paper is to introduce a new CG method for solving large-scale unconstrained optimization problems. Theoretical proofs show that the new method fulfills the sufficient descent condition if the strong Wolfe-Powell inexact line search is used. Besides, computational results show that our proposed method outperforms other existing CG methods.
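
    The record's new CG formula is not given in the abstract, but the general shape of a PRP-family iteration can be sketched. The code below is a minimal illustration using the classical PRP+ update with a simple Armijo backtracking search standing in for the strong Wolfe-Powell search; the test problem, safeguard, and all constants are illustrative choices, not the paper's method.

    ```python
    import numpy as np

    def prp_cg(f, grad, x0, tol=1e-8, max_iter=500):
        """Nonlinear CG with the classical PRP+ update.  Armijo backtracking
        stands in for the strong Wolfe-Powell search used in the paper."""
        x = np.asarray(x0, float)
        g = grad(x)
        d = -g
        for _ in range(max_iter):
            if np.linalg.norm(g) < tol:
                break
            if g @ d >= 0:          # safeguard: restart with steepest descent
                d = -g
            alpha, c, rho = 1.0, 1e-4, 0.5
            while f(x + alpha * d) > f(x) + c * alpha * (g @ d) and alpha > 1e-12:
                alpha *= rho        # Armijo backtracking
            x_new = x + alpha * d
            g_new = grad(x_new)
            beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # PRP+ formula
            d = -g_new + beta * d
            x, g = x_new, g_new
        return x

    # small convex quadratic test problem: minimum at A^{-1} b = [0.2, 0.4]
    A = np.array([[3.0, 1.0], [1.0, 2.0]])
    b = np.array([1.0, 1.0])
    x_star = prp_cg(lambda x: 0.5 * x @ A @ x - b @ x,
                    lambda x: A @ x - b, np.zeros(2))
    print(np.round(x_star, 4))      # [0.2 0.4]
    ```

    Note the low memory footprint the abstract emphasizes: the iteration stores only the current point, gradient, and direction, never a matrix.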

  6. A model for memory systems based on processing modes rather than consciousness.

    PubMed

    Henke, Katharina

    2010-07-01

    Prominent models of human long-term memory distinguish between memory systems on the basis of whether learning and retrieval occur consciously or unconsciously. Episodic memory formation requires the rapid encoding of associations between different aspects of an event which, according to these models, depends on the hippocampus and on consciousness. However, recent evidence indicates that the hippocampus mediates rapid associative learning with and without consciousness in humans and animals, for long-term and short-term retention. Consciousness seems to be a poor criterion for differentiating between declarative (or explicit) and nondeclarative (or implicit) types of memory. A new model is therefore required in which memory systems are distinguished based on the processing operations involved rather than by consciousness.

  7. Digitally controlled twelve-pulse firing generator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berde, D.; Ferrara, A.A.

    1981-01-01

    Control System Studies for the Tokamak Fusion Test Reactor (TFTR) indicate that accurate thyristor firing in the AC-to-DC conversion system is required in order to achieve good regulation of the various field currents. Rapid update and exact firing angle control are required to avoid instabilities, large eddy currents, or parasitic oscillations. The Prototype Firing Generator was designed to satisfy these requirements. To achieve the required ±0.77° firing accuracy, a three-phase-locked loop reference was designed; otherwise, the Firing Generator employs digital circuitry. The unit, housed in a standard CAMAC crate, operates under microcomputer control. Functions are performed under program control, which resides in nonvolatile read-only memory. Communication with the CICADA control system is provided via an 11-bit parallel interface.

  8. Nanocubes for real-time exploration of spatiotemporal datasets.

    PubMed

    Lins, Lauro; Klosowski, James T; Scheidegger, Carlos

    2013-12-01

    Consider real-time exploration of large multidimensional spatiotemporal datasets with billions of entries, each defined by a location, a time, and other attributes. Are certain attributes correlated spatially or temporally? Are there trends or outliers in the data? Answering these questions requires aggregation over arbitrary regions of the domain and attributes of the data. Many relational databases implement the well-known data cube aggregation operation, which in a sense precomputes every possible aggregate query over the database. Data cubes are sometimes assumed to take a prohibitively large amount of space, and to consequently require disk storage. In contrast, we show how to construct a data cube that fits in a modern laptop's main memory, even for billions of entries; we call this data structure a nanocube. We present algorithms to compute and query a nanocube, and show how it can be used to generate well-known visual encodings such as heatmaps, histograms, and parallel coordinate plots. When compared to exact visualizations created by scanning an entire dataset, nanocube plots have bounded screen error across a variety of scales, thanks to a hierarchical structure in space and time. We demonstrate the effectiveness of our technique on a variety of real-world datasets, and present memory, timing, and network bandwidth measurements. We find that the timings for the queries in our examples are dominated by network and user-interaction latencies.
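
    The precomputation idea behind data cubes can be miniaturized: count every possible aggregate up front so that any query becomes a single lookup. The sketch below is a deliberately flat toy cube (2^d keys per record for d dimensions); the nanocube itself makes this fit in a laptop's memory by sharing hierarchical spatial and temporal prefixes between keys, which this sketch does not attempt. The event log and field names are invented.

    ```python
    from collections import Counter
    from itertools import product

    ALL = "*"   # wildcard meaning "aggregate over this dimension"

    def build_cube(records, dims):
        """Toy data cube: for each record, bump the count of all 2^d keys
        obtained by replacing any subset of its dimension values with the
        wildcard.  Every aggregate query is then a single dictionary lookup."""
        cube = Counter()
        for rec in records:
            vals = tuple(rec[d] for d in dims)
            for key in product(*((v, ALL) for v in vals)):
                cube[key] += 1
        return cube

    # invented event log: (city, hour, device)
    records = [
        {"city": "NYC", "hour": 9,  "device": "ios"},
        {"city": "NYC", "hour": 9,  "device": "android"},
        {"city": "LA",  "hour": 12, "device": "ios"},
    ]
    cube = build_cube(records, ["city", "hour", "device"])
    print(cube[("NYC", ALL, ALL)])     # 2  (all NYC events)
    print(cube[(ALL, ALL, "ios")])     # 2  (all iOS events)
    print(cube[(ALL, ALL, ALL)])       # 3  (grand total)
    ```

    The space blow-up of this flat version is exactly why the paper's hierarchical sharing matters at billions of entries.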

  9. Evaluation of the eigenvalue method in the solution of transient heat conduction problems

    NASA Astrophysics Data System (ADS)

    Landry, D. W.

    1985-01-01

    The eigenvalue method is evaluated to determine the advantages and disadvantages of the method as compared to fully explicit, fully implicit, and Crank-Nicolson methods. Time comparisons and accuracy comparisons are made in an effort to rank the eigenvalue method in relation to the comparison schemes. The eigenvalue method is used to solve the parabolic heat equation in multidimensions with transient temperatures. Extensions into three dimensions are made to determine the method's feasibility in handling large geometry problems requiring great numbers of internal mesh points. The eigenvalue method proves to be slightly better in accuracy than the comparison routines because of an exact treatment, as opposed to a numerical approximation, of the time derivative in the heat equation. It has the potential of being a very powerful routine in solving long transient type problems. The method is not well suited to finely meshed grid arrays or large regions because of the time and memory requirements necessary for calculating large sets of eigenvalues and eigenvectors.
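
    As a concrete illustration of the method evaluated above (a sketch assuming the standard second-difference discretization of a 1-D rod with zero-temperature ends; the grid size, diffusivity, and initial condition are arbitrary choices, not the report's): diagonalizing the semi-discrete system du/dt = Au once yields the time-exact solution u(t) = V exp(Λt) Vᵀu₀, so any time is reached in one step, at the memory cost of storing the full eigenvector matrix.

    ```python
    import numpy as np

    n, L, alpha = 49, 1.0, 1.0          # interior points, rod length, diffusivity
    h = L / (n + 1)
    x = np.linspace(h, L - h, n)
    # standard second-difference matrix for u_xx with Dirichlet ends
    A = (alpha / h**2) * (np.diag(-2.0 * np.ones(n))
                          + np.diag(np.ones(n - 1), 1)
                          + np.diag(np.ones(n - 1), -1))
    lam, V = np.linalg.eigh(A)          # A symmetric: V orthogonal, lam real

    def solve(u0, t):
        """Jump directly to time t; exact in time, one O(n^2) transform."""
        return V @ (np.exp(lam * t) * (V.T @ u0))

    u0 = np.sin(np.pi * x)              # first Fourier mode of the continuum rod
    u = solve(u0, t=0.1)
    err = np.max(np.abs(u - np.exp(-np.pi**2 * 0.1) * u0))
    print(err)                          # small: spatial discretization error only
    ```

    The trade-off the report identifies is visible here: storing `lam` and the dense `V` costs O(n²) memory, which is what penalizes finely meshed grids.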

  10. Working Memory and Aging: Separating the Effects of Content and Context

    PubMed Central

    Bopp, Kara L.; Verhaeghen, Paul

    2009-01-01

    In three experiments, we investigated the hypothesis that age-related differences in working memory might be due to the inability to bind content with context. Participants were required to find a repeating stimulus within a single series (no context memory required) or within multiple series (necessitating memory for context). Response time and accuracy were examined in two task domains: verbal and visuospatial. Binding content with context led to longer processing time and poorer accuracy in both age groups, even when working memory load was held constant. Although older adults were overall slower and less accurate than younger adults, the need for context memory did not differentially affect their performance. It is therefore unlikely that age differences in working memory are due to specific age-related problems with content-with-context binding. PMID:20025410

  11. The visual orientation memory of Drosophila requires Foraging (PKG) upstream of Ignorant (RSK2) in ring neurons of the central complex

    PubMed Central

    Kuntz, Sara; Poeck, Burkhard; Sokolowski, Marla B.; Strauss, Roland

    2012-01-01

    Orientation and navigation in a complex environment requires path planning and recall to exert goal-driven behavior. Walking Drosophila flies possess a visual orientation memory for attractive targets which is localized in the central complex of the adult brain. Here we show that this type of working memory requires the cGMP-dependent protein kinase encoded by the foraging gene in just one type of ellipsoid-body ring neurons. Moreover, genetic and epistatic interaction studies provide evidence that Foraging functions upstream of the Ignorant Ribosomal-S6 Kinase 2, thus revealing a novel neuronal signaling pathway necessary for this type of memory in Drosophila. PMID:22815538

  12. Memory Enhancement Induced by Post-Training Intrabasolateral Amygdala Infusions of [beta]-Adrenergic or Muscarinic Agonists Requires Activation of Dopamine Receptors: Involvement of Right, but Not Left, Basolateral Amygdala

    ERIC Educational Resources Information Center

    LaLumiere, Ryan T.; McGaugh, James L.

    2005-01-01

    Previous findings indicate that the noradrenergic, dopaminergic, and cholinergic innervations of the basolateral amygdala (BLA) modulate memory consolidation. The current study investigated whether memory enhancement induced by post-training intra-BLA infusions of a [beta]-adrenergic or muscarinic cholinergic agonist requires concurrent activation…

  13. Designing Next Generation Massively Multithreaded Architectures for Irregular Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tumeo, Antonino; Secchi, Simone; Villa, Oreste

    Irregular applications, such as data mining or graph-based computations, show unpredictable memory/network access patterns and control structures. Massively multi-threaded architectures with large node count, like the Cray XMT, have been shown to address their requirements better than commodity clusters. In this paper we present the approaches that we are currently pursuing to design future generations of these architectures. First, we introduce the Cray XMT and compare it to other multithreaded architectures. We then propose an evolution of the architecture, integrating multiple cores per node and next generation network interconnect. We advocate the use of hardware support for remote memory reference aggregation to optimize network utilization. For this evaluation we developed a highly parallel, custom simulation infrastructure for multi-threaded systems. Our simulator executes unmodified XMT binaries with very large datasets, capturing effects due to contention and hot-spotting, while predicting execution times with greater than 90% accuracy. We also discuss the FPGA prototyping approach that we are employing to study efficient support for irregular applications in next generation manycore processors.

  14. Very Large Scale Optimization

    NASA Technical Reports Server (NTRS)

    Vanderplaats, Garrett; Townsend, James C. (Technical Monitor)

    2002-01-01

    The purpose of this research under the NASA Small Business Innovative Research program was to develop algorithms and associated software to solve very large nonlinear, constrained optimization tasks. Key issues included efficiency, reliability, memory, and gradient calculation requirements. This report describes the general optimization problem, ten candidate methods, and detailed evaluations of four candidates. The algorithm chosen for final development is a modern recreation of a 1960s external penalty function method that uses very limited computer memory and computational time. Although of lower efficiency, the new method can solve problems orders of magnitude larger than current methods. The resulting BIGDOT software has been demonstrated on problems with 50,000 variables and about 50,000 active constraints. For unconstrained optimization, it has solved a problem in excess of 135,000 variables. The method includes a technique for solving discrete variable problems that finds a "good" design, although a theoretical optimum cannot be guaranteed. It is very scalable in that the number of function and gradient evaluations does not change significantly with increased problem size. Test cases are provided to demonstrate the efficiency and reliability of the methods and software.
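
    The external penalty idea the report builds on can be sketched as follows. This is the textbook quadratic exterior penalty, not the BIGDOT implementation; the test problem, inner gradient-descent solver, step rule, and constants are all invented for illustration. Memory use stays minimal because each outer stage is just an unconstrained minimization warm-started from the last.

    ```python
    import numpy as np

    def penalty_minimize(f, grad_f, cons, grad_cons, x0,
                         r0=1.0, growth=10.0, outers=6, inners=2000):
        """External (quadratic) penalty method: solve a sequence of
        unconstrained problems  f(x) + r * sum(max(0, c_i(x))^2)
        with growing r, warm-starting each stage from the previous one."""
        x = np.asarray(x0, float)
        r = r0
        for _ in range(outers):
            step = 1.0 / (2.0 + 4.0 * r)   # crude step sized to this toy's curvature
            for _ in range(inners):
                g = grad_f(x)
                for c, gc in zip(cons, grad_cons):
                    v = c(x)
                    if v > 0:              # only violated constraints contribute
                        g = g + 2.0 * r * v * gc(x)
                x = x - step * g
            r *= growth
        return x

    # minimize x1^2 + x2^2  subject to  x1 + x2 >= 1  (optimum at (0.5, 0.5))
    f = lambda x: x @ x
    grad_f = lambda x: 2.0 * x
    c = lambda x: 1.0 - x[0] - x[1]        # written in c(x) <= 0 form
    gc = lambda x: np.array([-1.0, -1.0])
    x_star = penalty_minimize(f, grad_f, [c], [gc], np.zeros(2))
    print(np.round(x_star, 3))             # approaches [0.5 0.5] from outside
    ```

    The iterates approach the constraint boundary from the infeasible side as r grows, which is the characteristic behavior (and the known weakness) of exterior penalty methods.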

  15. DEM Based Modeling: Grid or TIN? The Answer Depends

    NASA Astrophysics Data System (ADS)

    Ogden, F. L.; Moreno, H. A.

    2015-12-01

    The availability of petascale supercomputing power has enabled process-based hydrological simulations on large watersheds and two-way coupling with mesoscale atmospheric models. Of course with increasing watershed scale come corresponding increases in watershed complexity, including wide ranging water management infrastructure and objectives, and ever increasing demands for forcing data. Simulations of large watersheds using grid-based models apply a fixed resolution over the entire watershed. In large watersheds, this means an enormous number of grids, or coarsening of the grid resolution to reduce memory requirements. One alternative to grid-based methods is the triangular irregular network (TIN) approach. TINs provide the flexibility of variable resolution, which allows optimization of computational resources by providing high resolution where necessary and low resolution elsewhere. TINs also increase required effort in model setup, parameter estimation, and coupling with forcing data which are often gridded. This presentation discusses the costs and benefits of the use of TINs compared to grid-based methods, in the context of large watershed simulations within the traditional gridded WRF-HYDRO framework and the new TIN-based ADHydro high performance computing watershed simulator.

  16. Enabling MPEG-2 video playback in embedded systems through improved data cache efficiency

    NASA Astrophysics Data System (ADS)

    Soderquist, Peter; Leeser, Miriam E.

    1999-01-01

    Digital video decoding, enabled by the MPEG-2 Video standard, is an important future application for embedded systems, particularly PDAs and other information appliances. Many such systems require portability and wireless communication capabilities, and thus face severe limitations in size and power consumption. This places a premium on integration and efficiency, and favors software solutions for video functionality over specialized hardware. The processors in most embedded systems currently lack the computational power needed to perform video decoding, but a related and equally important problem is the required data bandwidth, and the need to cost-effectively ensure adequate data supply. MPEG data sets are very large, and generate significant amounts of excess memory traffic for standard data caches, up to 100 times the amount required for decoding. Meanwhile, cost and power limitations restrict cache sizes in embedded systems. Some systems, including many media processors, eliminate caches in favor of memories under direct, painstaking software control in the manner of digital signal processors. Yet MPEG data has locality which caches can exploit if properly optimized, providing fast, flexible, and automatic data supply. We propose a set of enhancements which target the specific needs of the heterogeneous data types within the MPEG decoder working set. These optimizations significantly improve the efficiency of small caches, reducing cache-memory traffic by almost 70 percent, and can make an enhanced 4 KB cache perform better than a standard 1 MB cache. This performance improvement can enable high-resolution, full-frame-rate video playback in cheaper, smaller systems than would otherwise be possible.
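
    To make the cache-traffic framing concrete, here is a toy direct-mapped cache model. This is an assumption-laden sketch: the cache sizes, line length, and address trace are invented, and none of the paper's actual enhancements are modeled. It simply shows why streaming a frame-sized working set defeats a small cache, so that each pass pays full miss traffic.

    ```python
    class DirectMappedCache:
        """Minimal direct-mapped cache model that counts miss traffic."""
        def __init__(self, size_bytes, line_bytes):
            self.line_bytes = line_bytes
            self.lines = size_bytes // line_bytes
            self.tags = [None] * self.lines
            self.accesses = 0
            self.misses = 0

        def access(self, addr):
            self.accesses += 1
            block = addr // self.line_bytes    # memory block number
            idx = block % self.lines           # cache line it maps to
            if self.tags[idx] != block:        # miss: fetch block, evict old tag
                self.tags[idx] = block
                self.misses += 1

    # two sequential passes over a 64 KB buffer (4-byte accesses): a 4 KB
    # cache gets no cross-pass reuse, a 128 KB cache retains the buffer
    for size in (4 * 1024, 128 * 1024):
        cache = DirectMappedCache(size, 32)
        for _ in range(2):
            for addr in range(0, 64 * 1024, 4):
                cache.access(addr)
        print(size // 1024, "KB:", cache.misses / cache.accesses)
    # -> 4 KB: 0.125    128 KB: 0.0625
    ```

    The paper's point is that targeted enhancements can recover much of the large cache's benefit at the small cache's cost, rather than simply enlarging the cache.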

  17. NMDA receptor- and ERK-dependent histone methylation changes in the lateral amygdala bidirectionally regulate fear memory formation.

    PubMed

    Gupta-Agarwal, Swati; Jarome, Timothy J; Fernandez, Jordan; Lubin, Farah D

    2014-07-01

    It is well established that fear memory formation requires de novo gene transcription in the amygdala. We provide evidence that epigenetic mechanisms in the form of histone lysine methylation in the lateral amygdala (LA) are regulated by NMDA receptor (NMDAR) signaling and involved in gene transcription changes necessary for fear memory consolidation. Here we found increases in histone H3 lysine 9 dimethylation (H3K9me2) levels in the LA at 1 h following auditory fear conditioning, which continued to be temporally regulated up to 25 h following behavioral training. Additionally, we demonstrate that inhibiting the H3K9me2 histone lysine methyltransferase G9a (H/KMTs-G9a) in the LA impaired fear memory, while blocking the H3K9me2 histone lysine demethylase LSD1 (H/KDM-LSD1) enhanced fear memory, suggesting that H3K9me2 in the LA can bidirectionally regulate fear memory formation. Furthermore, we show that NMDAR activity differentially regulated the recruitment of H/KMT-G9a, H/KDM-LSD1, and subsequent H3K9me2 levels at a target gene promoter. This was largely regulated by GluN2B- but not GluN2A-containing NMDARs via ERK activation. Moreover, fear memory deficits associated with NMDAR or ERK blockade were successfully rescued through pharmacologically inhibiting LSD1, suggesting that enhancements of H3K9me2 levels within the LA can rescue fear memory impairments that result from hypofunctioning NMDARs or loss of ERK signaling. Together, the present study suggests that histone lysine methylation regulation in the LA via NMDAR-ERK-dependent signaling is involved in fear memory formation. © 2014 Gupta-Agarwal et al.; Published by Cold Spring Harbor Laboratory Press.

  18. Autoreactive Memory CD4+ T Lymphocytes that mediate Chronic Uveitis Reside in the Bone Marrow through STAT3-dependent Mechanisms

    PubMed Central

    Oh, Hyun-Mee; Yu, Cheng-Rong; Lee, YongJun; Chan, Chi-Chao; Maminishkis, Arvydas; Egwuagu, Charles E.

    2011-01-01

    Organ-specific autoimmune diseases are usually characterized by repeated cycles of remission and recurrent inflammation. However, where the autoreactive memory T-cells reside between episodes of recurrent inflammation is largely unknown. In this study, we have established a mouse model of chronic uveitis characterized by progressive photoreceptor-cell loss, retinal degeneration, focal retinitis, retinal vasculitis, multifocal choroiditis and choroidal neovascularization, providing for the first time a useful model for studying the long-term pathological consequences of chronic inflammation of the neuroretina. We show that several months after the inception of acute uveitis, autoreactive memory T-cells specific to the retinal autoantigen IRBP relocated to the bone marrow (BM). The IRBP-specific memory T-cells (IL-7RαHiLy6CHiCD4+) resided in the BM in a resting state but upon re-stimulation converted to IL-17-/IFN-γ-expressing effectors (IL-7RαLowLy6CLowCD4+) that mediated uveitis. We further show that T-cells from STAT3-deficient (CD4-STAT3KO) mice are defective in α4β1 and osteopontin expression, defects that correlated with the inability of IRBP-specific memory CD4-STAT3KO T-cells to traffic into the BM. We adoptively transferred uveitis to naïve mice using BM cells from WT mice with chronic uveitis but not BM cells from CD4-STAT3KO mice, providing direct evidence that the memory T-cells that mediate uveitis reside in the BM and that a STAT3-dependent mechanism may be required for their migration into and retention in the BM. Identifying the BM as a survival niche for T-cells that cause uveitis suggests that the BM stromal cells that provide survival signals to autoreactive memory T-cells, and the STAT3-dependent mechanisms that mediate their relocation into the BM, are attractive therapeutic targets that can be exploited to selectively deplete memory T-cells that drive chronic inflammation. PMID:21832158

  19. Short-Term Memory Trace in Rapidly Adapting Synapses of Inferior Temporal Cortex

    PubMed Central

    Sugase-Miyamoto, Yasuko; Liu, Zheng; Wiener, Matthew C.; Optican, Lance M.; Richmond, Barry J.

    2008-01-01

    Visual short-term memory tasks depend upon both the inferior temporal cortex (ITC) and the prefrontal cortex (PFC). Activity in some neurons persists after the first (sample) stimulus is shown. This delay-period activity has been proposed as an important mechanism for working memory. In ITC neurons, intervening (nonmatching) stimuli wipe out the delay-period activity; hence, the role of ITC in memory must depend upon a different mechanism. Here, we look for a possible mechanism by contrasting memory effects in two architectonically different parts of ITC: area TE and the perirhinal cortex. We found that a large proportion (80%) of stimulus-selective neurons in area TE of macaque ITCs exhibit a memory effect during the stimulus interval. During a sequential delayed matching-to-sample task (DMS), the noise in the neuronal response to the test image was correlated with the noise in the neuronal response to the sample image. Neurons in perirhinal cortex did not show this correlation. These results led us to hypothesize that area TE contributes to short-term memory by acting as a matched filter. When the sample image appears, each TE neuron captures a static copy of its inputs by rapidly adjusting its synaptic weights to match the strength of their individual inputs. Input signals from subsequent images are multiplied by those synaptic weights, thereby computing a measure of the correlation between the past and present inputs. The total activity in area TE is sufficient to quantify the similarity between the two images. This matched filter theory provides an explanation of what is remembered, where the trace is stored, and how comparison is done across time, all without requiring delay period activity. Simulations of a matched filter model match the experimental results, suggesting that area TE neurons store a synaptic memory trace during short-term visual memory. PMID:18464917
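
    The matched-filter idea above can be illustrated with a minimal numerical sketch (a toy model for illustration, not the authors' simulation): the sample pattern is snapshotted into the synaptic weights, and the response to a later input is the weighted sum, i.e. the correlation between past and present inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

def matched_filter_response(sample, test):
    # The sample input is copied into the synaptic weights ("rapidly
    # adapting synapses"); a later input is multiplied by those weights,
    # so the summed activity measures the correlation between the
    # remembered sample and the current stimulus.
    weights = sample.copy()
    return float(weights @ test)

sample = rng.normal(size=256)                    # population input for the sample image
match_resp = matched_filter_response(sample, sample)
nonmatch_resp = matched_filter_response(sample, rng.normal(size=256))
print(match_resp > nonmatch_resp)  # the matching image yields the larger response
```

    No delay-period activity is needed: the trace lives in the weights, and the comparison happens only when the next stimulus arrives.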

  20. Multiprocessing MCNP on an IBM RS/6000 cluster

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McKinney, G.W.; West, J.T.

    1993-01-01

    The advent of high-performance computer systems has brought to maturity programming concepts like vectorization, multiprocessing, and multitasking. While there are many schools of thought as to the most significant factor in obtaining order-of-magnitude increases in performance, such speedup can only be achieved by integrating the computer system and application code. Vectorization leads to faster manipulation of arrays by overlapping instruction CPU cycles. Discrete ordinates codes, which require the solving of large matrices, have proved to be major beneficiaries of vectorization. Monte Carlo transport, on the other hand, typically contains numerous logic statements and requires extensive redevelopment to benefit from vectorization. Multiprocessing and multitasking provide additional CPU cycles via multiple processors. Such systems are generally designed with either common memory access (multitasking) or distributed memory access. In both cases, theoretical speedup, as a function of the number of processors P and the fraction f of task time that multiprocesses, can be formulated using Amdahl's law: S(f, P) = 1/(1 - f + f/P). However, for most applications, this theoretical limit cannot be achieved because of additional terms (e.g., multitasking overhead, memory overlap, etc.) that are not included in Amdahl's law. Monte Carlo transport is a natural candidate for multiprocessing because the particle tracks are generally independent, and the precision of the result increases as the square root of the number of particles tracked.
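
    Amdahl's law as quoted above is easy to evaluate directly; a few lines of Python (an illustration, not from the report) show how the speedup saturates at 1/(1 - f) regardless of processor count:

```python
def amdahl_speedup(f, p):
    """S(f, P) = 1 / (1 - f + f/P): theoretical speedup on P processors
    when a fraction f of the task time multiprocesses."""
    return 1.0 / ((1.0 - f) + f / p)

# With f = 0.9, the speedup can never exceed 1/(1 - 0.9) = 10,
# no matter how many processors are added:
for p in (2, 8, 64, 1024):
    print(p, round(amdahl_speedup(0.9, p), 2))
```

    Real measurements fall below these values because of the overhead terms (multitasking overhead, memory overlap) that the law omits.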

  1. Episodic memory in aspects of large-scale brain networks

    PubMed Central

    Jeong, Woorim; Chung, Chun Kee; Kim, June Sic

    2015-01-01

    Understanding human episodic memory in aspects of large-scale brain networks has become one of the central themes in neuroscience over the last decade. Traditionally, episodic memory was regarded as mostly relying on medial temporal lobe (MTL) structures. However, recent studies have suggested involvement of more widely distributed cortical network and the importance of its interactive roles in the memory process. Both direct and indirect neuro-modulations of the memory network have been tried in experimental treatments of memory disorders. In this review, we focus on the functional organization of the MTL and other neocortical areas in episodic memory. Task-related neuroimaging studies together with lesion studies suggested that specific sub-regions of the MTL are responsible for specific components of memory. However, recent studies have emphasized that connectivity within MTL structures and even their network dynamics with other cortical areas are essential in the memory process. Resting-state functional network studies also have revealed that memory function is subserved by not only the MTL system but also a distributed network, particularly the default-mode network (DMN). Furthermore, researchers have begun to investigate memory networks throughout the entire brain not restricted to the specific resting-state network (RSN). Altered patterns of functional connectivity (FC) among distributed brain regions were observed in patients with memory impairments. Recently, studies have shown that brain stimulation may impact memory through modulating functional networks, carrying future implications of a novel interventional therapy for memory impairment. PMID:26321939

  2. Mass storage technology in networks

    NASA Astrophysics Data System (ADS)

    Ishii, Katsunori; Takeda, Toru; Itao, Kiyoshi; Kaneko, Reizo

    1990-08-01

    Trends and features of mass storage subsystems in networks are surveyed and their key technologies spotlighted. Storage subsystems are becoming increasingly important in new network systems in which communications and data processing are systematically combined. These systems require a new class of high-performance mass-information storage in order to effectively utilize their processing power. The requirements of high transfer rates, high transaction rates and large storage capacities, coupled with high functionality, fault tolerance and flexibility in configuration, are major challenges in storage subsystems. Recent progress in optical disk technology has improved the performance of on-line external memories based on optical disk drives, which are competing with mid-range magnetic disks. Optical disks are more effective than magnetic disks for low-traffic, random-access files storing multimedia data that require large capacity, such as in archival use and in information distribution by ROM disks. Finally, image-coded document file servers for local area network use that employ 130 mm rewritable magneto-optical disk subsystems are demonstrated.

  3. A divide-and-conquer algorithm for large-scale de novo transcriptome assembly through combining small assemblies from existing algorithms.

    PubMed

    Sze, Sing-Hoi; Parrott, Jonathan J; Tarone, Aaron M

    2017-12-06

    While the continued development of high-throughput sequencing has facilitated studies of entire transcriptomes in non-model organisms, the incorporation of an increasing amount of RNA-Seq libraries has made de novo transcriptome assembly difficult. Although algorithms that can assemble a large amount of RNA-Seq data are available, they are generally very memory-intensive and can only be used to construct small assemblies. We develop a divide-and-conquer strategy that allows these algorithms to be utilized, by subdividing a large RNA-Seq data set into small libraries. Each individual library is assembled independently by an existing algorithm, and a merging algorithm is developed to combine these assemblies by picking a subset of high quality transcripts to form a large transcriptome. When compared to existing algorithms that return a single assembly directly, this strategy achieves comparable or increased accuracy as memory-efficient algorithms that can be used to process a large amount of RNA-Seq data, and comparable or decreased accuracy as memory-intensive algorithms that can only be used to construct small assemblies. Our divide-and-conquer strategy allows memory-intensive de novo transcriptome assembly algorithms to be utilized to construct large assemblies.
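
    The divide-and-conquer strategy can be sketched in a few lines (a hypothetical toy: `toy_assemble` stands in for an existing memory-intensive assembler, and scoring transcripts by length is purely illustrative):

```python
# Split a large read set into small libraries, assemble each independently,
# then merge by keeping the best-scoring version of each transcript.

def split_into_libraries(reads, size):
    return [reads[i:i + size] for i in range(0, len(reads), size)]

def toy_assemble(library):
    # Stand-in for an existing assembler run on one small library:
    # returns candidate "transcripts" with a quality score (here, length).
    return {read: len(read) for read in set(library)}

def merge_assemblies(assemblies):
    # Keep the highest-quality copy of each transcript across assemblies.
    merged = {}
    for asm in assemblies:
        for transcript, score in asm.items():
            if score > merged.get(transcript, -1):
                merged[transcript] = score
    return merged

reads = ["ACGT", "ACGTA", "TTGC", "ACGT", "TTGCA"]
libs = split_into_libraries(reads, 2)
final = merge_assemblies(toy_assemble(lib) for lib in libs)
print(sorted(final))  # union of transcripts, deduplicated across libraries
```

    The peak memory requirement is set by the largest single library rather than the full data set, which is what lets memory-intensive assemblers participate in large assemblies.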

  4. A comparison of multiprocessor scheduling methods for iterative data flow architectures

    NASA Technical Reports Server (NTRS)

    Storch, Matthew

    1993-01-01

    A comparative study is made between the Algorithm to Architecture Mapping Model (ATAMM) and three other related multiprocessing models from the published literature. The primary focus of all four models is the non-preemptive scheduling of large-grain iterative data flow graphs as required in real-time systems, control applications, signal processing, and pipelined computations. Important characteristics of the models such as injection control, dynamic assignment, multiple node instantiations, static optimum unfolding, range-chart guided scheduling, and mathematical optimization are identified. The models from the literature are compared with the ATAMM for performance, scheduling methods, memory requirements, and complexity of scheduling and design procedures.

  5. Ultrasound Picture Archiving And Communication Systems

    NASA Astrophysics Data System (ADS)

    Koestner, Ken; Hottinger, C. F.

    1982-01-01

    The ideal ultrasonic image communication and storage system must be flexible in order to optimize speed and minimize storage requirements. Various ultrasonic imaging modalities are quite different in data volume and speed requirements. Static imaging, for example B-Scanning, involves acquisition of a large amount of data that is averaged or accumulated in a desired manner. The image is then frozen in image memory before transfer and storage. Images are commonly a 512 x 512 point array, each point 6 bits deep. Transfer of such an image over a serial line at 9600 baud would require about three minutes. Faster transfer times are possible; for example, we have developed a parallel image transfer system using direct memory access (DMA) that reduces the time to 16 seconds. Data in this format requires 256K bytes for storage. Data compression can be utilized to reduce these requirements. Real-time imaging has much more stringent requirements for speed and storage. The amount of actual data per frame in real-time imaging is reduced due to physical limitations on ultrasound. For example, 100 scan lines (480 points long, 6 bits deep) can be acquired during a frame at a 30 per second rate. In order to transmit and save this data at a real-time rate requires a transfer rate of 8.6 Megabaud. A real-time archiving system would be complicated by the necessity of specialized hardware to interpolate between scan lines and perform desirable greyscale manipulation on recall. Image archiving for cardiology and radiology would require data transfer at this high rate to preserve temporal (cardiology) and spatial (radiology) information.
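
    The transfer figures quoted above follow from simple arithmetic; a quick sanity check (the one-byte-per-point storage layout is an inference from the quoted 256K-byte figure):

```python
# Static B-scan image: 512 x 512 points, 6 bits per point.
static_bits = 512 * 512 * 6

def transfer_seconds(bits, baud):
    return bits / baud

# Over a 9600-baud serial line: ~164 s, i.e. "about three minutes".
print(transfer_seconds(static_bits, 9600) / 60)

# Stored one point per byte: 512 * 512 = 262,144 bytes, the quoted 256K.
storage_bytes = 512 * 512

# Real-time: 100 scan lines x 480 points x 6 bits, at 30 frames per second.
realtime_baud = 100 * 480 * 6 * 30
print(realtime_baud / 1e6)  # 8.64, the quoted "8.6 Megabaud"
```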

  6. Review of optical memory technologies

    NASA Technical Reports Server (NTRS)

    Chen, D.

    1972-01-01

    Optical technologies for meeting the demands of large capacity fast access time memory are discussed in terms of optical phenomena and laser applications. The magneto-optic and electro-optic approaches are considered to be the most promising memory approaches.

  7. An alternative design for a sparse distributed memory

    NASA Technical Reports Server (NTRS)

    Jaeckel, Louis A.

    1989-01-01

    A new design for a Sparse Distributed Memory, called the selected-coordinate design, is described. As in the original design, there are a large number of memory locations, each of which may be activated by many different addresses (binary vectors) in a very large address space. Each memory location is defined by specifying ten selected coordinates (bit positions in the address vectors) and a set of corresponding assigned values, consisting of one bit for each selected coordinate. A memory location is activated by an address if, for all ten of the location's selected coordinates, the corresponding bits in the address vector match the respective assigned value bits, regardless of the other bits in the address vector. Some comparative memory capacity and signal-to-noise ratio estimates for both the new and original designs are given. A few possible hardware embodiments of the new design are described.
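
    The activation rule reads directly as code; a minimal sketch (the 1000-bit address width is an assumption for illustration; only the ten selected coordinates come from the text):

```python
import random

random.seed(1)
ADDRESS_BITS = 1000   # assumed address width for the example
K = 10                # selected coordinates per memory location (from the design)

def make_location():
    # A location = ten selected bit positions plus one assigned bit each.
    coords = random.sample(range(ADDRESS_BITS), K)
    values = [random.randint(0, 1) for _ in coords]
    return list(zip(coords, values))

def is_activated(location, address):
    # Active iff every selected coordinate of the address matches its
    # assigned value bit; all other address bits are ignored.
    return all(address[c] == v for c, v in location)

loc = make_location()
addr = [random.randint(0, 1) for _ in range(ADDRESS_BITS)]
match = addr[:]
for c, v in loc:
    match[c] = v                   # force agreement on the ten selected coordinates
print(is_activated(loc, match))    # True by construction
```

    A uniformly random address activates a given location with probability 2^-10, so each location responds to a large but sparse subset of the address space.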

  8. Diagrams increase the recall of nondepicted text when understanding is also increased.

    PubMed

    Serra, Michael J

    2010-02-01

    Multimedia presentations typically produce better memory and understanding than do single-medium presentations. Little research, however, has considered the effect of multimedia on memory for nonmultimedia information within a large multimedia presentation (e.g., nondepicted text in a large text with diagrams). To this end, the present two experiments compared memory for target text information that was either depicted in diagrams or not. Participants (n = 180) studied either a text-only version of a text about lightning or a text-with-diagrams version in which half the target information was depicted in diagrams. Memory was tested with both free recall and cued recall questions. Overall, diagrams did not affect memory for the entire text; diagrams increased memory only for the information they depicted. Diagrams exerted a generalized effect on free recall only when diagrams increased the overall understanding of the text (i.e., when the participants studied the materials twice before the test).

  9. Large-Scale Fluorescence Calcium-Imaging Methods for Studies of Long-Term Memory in Behaving Mammals

    PubMed Central

    Jercog, Pablo; Rogerson, Thomas; Schnitzer, Mark J.

    2016-01-01

    During long-term memory formation, cellular and molecular processes reshape how individual neurons respond to specific patterns of synaptic input. It remains poorly understood how such changes impact information processing across networks of mammalian neurons. To observe how networks encode, store, and retrieve information, neuroscientists must track the dynamics of large ensembles of individual cells in behaving animals, over timescales commensurate with long-term memory. Fluorescence Ca2+-imaging techniques can monitor hundreds of neurons in behaving mice, opening exciting avenues for studies of learning and memory at the network level. Genetically encoded Ca2+ indicators allow neurons to be targeted by genetic type or connectivity. Chronic animal preparations permit repeated imaging of neural Ca2+ dynamics over multiple weeks. Together, these capabilities should enable unprecedented analyses of how ensemble neural codes evolve throughout memory processing and provide new insights into how memories are organized in the brain. PMID:27048190

  10. Perirhinal cortical inactivation impairs object-in-place memory and disrupts task-dependent firing in hippocampal CA1, but not in CA3.

    PubMed

    Lee, Inah; Park, Seong-Beom

    2013-01-01

    Objects and their locations can associatively define an event and a conjoint representation of object-place can form an event memory. Remembering how to respond to a certain object in a spatial context is dependent on both hippocampus and perirhinal cortex (PER). However, the relative functional contributions of the two regions are largely unknown in object-place associative memory. We investigated the PER influence on hippocampal firing in a goal-directed object-place memory task by comparing the firing patterns of CA1 and CA3 of the dorsal hippocampus between conditions of PER muscimol inactivation and vehicle control infusions. Rats were required to choose one of the two objects in a specific spatial context (regardless of the object positions in the context), which was shown to be dependent on both hippocampus and PER. Inactivation of PER with muscimol (MUS) severely disrupted performance of well-trained rats, resulting in response bias (i.e., choosing any object on a particular side). MUS did not significantly alter the baseline firing rates of hippocampal neurons. We measured the similarity in firing patterns between two trial conditions in which the same target objects were chosen on opposite sides within the same arm [object-in-place (O-P) strategy] and compared the results with the similarity in firing between two trial conditions in which the rat chose any object encountered on a particular side [response-in-place (R-P) strategy]. We found that the similarity in firing patterns for O-P trials was significantly reduced with MUS compared to control conditions (CTs). Importantly, this was largely because MUS injections affected the O-P firing patterns in CA1 neurons, but not in CA3. The results suggest that PER is critical for goal-directed organization of object-place associative memory in the hippocampus presumably by influencing how object information is associated with spatial information in CA1 according to task demand.

  11. Total recall in distributive associative memories

    NASA Technical Reports Server (NTRS)

    Danforth, Douglas G.

    1991-01-01

    Iterative error correction of asymptotically large associative memories is equivalent to a one-step learning rule. This rule is the inverse of the activation function of the memory. Spectral representations of nonlinear activation functions are used to obtain the inverse in closed form for Sparse Distributed Memory, Selected-Coordinate Design, and Radial Basis Functions.

  12. Content Analysis of Memory and Memory-Related Research Studies on Children with Hearing Loss

    ERIC Educational Resources Information Center

    Dogan, Murat; Hasanoglu, Gülcihan

    2016-01-01

    Memory plays a profound role in explaining language development, academic learning, and learning disabilities. Even though there is a large body of research on language development, literacy skills, other academic skills, and intellectual characteristics of children with hearing loss, there is no holistic study on their memory processes.…

  13. Level of Processing Modulates the Neural Correlates of Emotional Memory Formation

    ERIC Educational Resources Information Center

    Ritchey, Maureen; LaBar, Kevin S.; Cabeza, Roberto

    2011-01-01

    Emotion is known to influence multiple aspects of memory formation, including the initial encoding of the memory trace and its consolidation over time. However, the neural mechanisms whereby emotion impacts memory encoding remain largely unexplored. The present study used a levels-of-processing manipulation to characterize the impact of emotion on…

  14. Memory Transmission in Small Groups and Large Networks: An Agent-Based Model.

    PubMed

    Luhmann, Christian C; Rajaram, Suparna

    2015-12-01

    The spread of social influence in large social networks has long been an interest of social scientists. In the domain of memory, collaborative memory experiments have illuminated cognitive mechanisms that allow information to be transmitted between interacting individuals, but these experiments have focused on small-scale social contexts. In the current study, we took a computational approach, circumventing the practical constraints of laboratory paradigms and providing novel results at scales unreachable by laboratory methodologies. Our model embodied theoretical knowledge derived from small-group experiments and replicated foundational results regarding collaborative inhibition and memory convergence in small groups. Ultimately, we investigated large-scale, realistic social networks and found that agents are influenced by the agents with which they interact, but we also found that agents are influenced by nonneighbors (i.e., the neighbors of their neighbors). The similarity between these results and the reports of behavioral transmission in large networks offers a major theoretical insight by linking behavioral transmission to the spread of information. © The Author(s) 2015.

  15. Staging memory for massively parallel processor

    NASA Technical Reports Server (NTRS)

    Batcher, Kenneth E. (Inventor)

    1988-01-01

    The invention herein relates to a computer organization capable of rapidly processing extremely large volumes of data. A staging memory is provided having a main stager portion consisting of a large number of memory banks which are accessed in parallel to receive, store, and transfer data words simultaneous with each other. Substager portions interconnect with the main stager portion to match input and output data formats with the data format of the main stager portion. An address generator is coded for accessing the data banks for receiving or transferring the appropriate words. Input and output permutation networks arrange the lineal order of data into and out of the memory banks.

  16. Memory Network For Distributed Data Processors

    NASA Technical Reports Server (NTRS)

    Bolen, David; Jensen, Dean; Millard, ED; Robinson, Dave; Scanlon, George

    1992-01-01

    The Universal Memory Network (UMN) is a modular, digital data-communication system enabling computers with differing bus architectures to share 32-bit-wide data between locations up to 3 km apart with less than one millisecond of latency. It makes it possible to design sophisticated real-time and near-real-time data-processing systems without data-transfer "bottlenecks". This enterprise network permits transmission of a volume of data equivalent to an encyclopedia each second. Facilities benefiting from the Universal Memory Network include telemetry stations, simulation facilities, power plants, large laboratories, and any other facility sharing very large volumes of data. The main hub of the UMN is a reflection center that connects smaller hubs called Shared Memory Interfaces.

  17. mTORC1 controls long-term memory retrieval.

    PubMed

    Pereyra, Magdalena; Katche, Cynthia; de Landeta, Ana Belén; Medina, Jorge H

    2018-06-08

    Understanding how stored information emerges is a main question in the neurobiology of memory that is now increasingly gaining attention. However, the molecular events underlying this memory stage, including the involvement of protein synthesis, are not well defined. Mammalian target of rapamycin complex 1 (mTORC1), a central regulator of protein synthesis, has been implicated in synaptic plasticity and is required for memory formation. Using inhibitory avoidance (IA), we evaluated the role of mTORC1 in memory retrieval. Infusion of a selective mTORC1 inhibitor, rapamycin, into the dorsal hippocampus 15 or 40 min, but not 3 h, before testing at 24 h reversibly disrupted memory expression, even in animals that had already expressed IA memory. Emetine, a general protein synthesis inhibitor, provoked a similar impairment. mTORC1 inhibition did not interfere with short-term memory retrieval. When infused before a test at 7 or 14, but not at 28, days after training, rapamycin impaired memory expression. mTORC1 blockade in the retrosplenial cortex, another structure required for IA memory, also impaired memory retention. In addition, pretest intrahippocampal rapamycin infusion impaired object-location memory retrieval. Our results support the idea that ongoing protein synthesis mediated by activation of the mTORC1 pathway is necessary for long-term but not short-term memory.

  18. Multi-floor cascading ferroelectric nanostructures: multiple data writing-based multi-level non-volatile memory devices

    NASA Astrophysics Data System (ADS)

    Hyun, Seung; Kwon, Owoong; Lee, Bom-Yi; Seol, Daehee; Park, Beomjin; Lee, Jae Yong; Lee, Ju Hyun; Kim, Yunseok; Kim, Jin Kon

    2016-01-01

    Multiple data writing-based multi-level non-volatile memory has gained strong attention for next-generation memory devices to quickly accommodate an extremely large number of data bits because it is capable of storing multiple data bits in a single memory cell at once. However, all previously reported devices have failed to store a large number of data bits due to the macroscale cell size and have not allowed fast access to the stored data due to slow single data writing. Here, we introduce a novel three-dimensional multi-floor cascading polymeric ferroelectric nanostructure, successfully operating as an individual cell. In one cell, each floor has its own piezoresponse and the piezoresponse of one floor can be modulated by the bias voltage applied to the other floor, which means simultaneously written data bits in both floors can be identified. This could achieve multi-level memory through a multiple data writing process.

  19. Developmental Dissociation Between the Maturation of Procedural Memory and Declarative Memory

    PubMed Central

    Finn, Amy S.; Kalra, Priya B.; Goetz, Calvin; Leonard, Julia A.; Sheridan, Margaret A.; Gabrieli, John D. E.

    2015-01-01

    Declarative memory and procedural memory are known to be two fundamentally different kinds of memory that are dissociable in their psychological characteristics and measurement (explicit versus implicit) and in the neural systems that subserve each kind of memory. Declarative memory abilities are known to improve from childhood through young adulthood, but the developmental maturation of procedural memory is largely unknown. We compared 10-year-old children and young adults on measures of declarative memory, working memory capacity, and four measures of procedural memory that have been strongly dissociated from declarative memory (mirror tracing, rotary pursuit, probabilistic classification, and artificial grammar). Children had lesser declarative memory ability and lesser working memory capacity than the adults, but exhibited learning equivalent to adults on all four measures of procedural memory. Declarative and procedural memory are, therefore, developmentally dissociable, with procedural memory being adult-like by age 10 and declarative memory continuing to mature into young adulthood. PMID:26560675

  20. Distractor devaluation requires visual working memory.

    PubMed

    Goolsby, Brian A; Shapiro, Kimron L; Raymond, Jane E

    2009-02-01

    Visual stimuli seen previously as distractors in a visual search task are subsequently evaluated more negatively than those seen as targets. An attentional inhibition account for this distractor-devaluation effect posits that associative links between attentional inhibition and to-be-ignored stimuli are established during search, stored, and then later reinstantiated, implying that distractor devaluation may require visual working memory (WM) resources. To assess this, we measured distractor devaluation with and without a concurrent visual WM load. Participants viewed a memory array, performed a simple search task, evaluated one of the search items (or a novel item), and then viewed a memory test array. Although distractor devaluation was observed with low (and no) WM load, it was absent when WM load was increased. This result supports the notions that active association of current attentional states with stimuli requires WM and that memory for these associations plays a role in affective response.

  1. Experimentally modeling stochastic processes with less memory by the use of a quantum processor

    PubMed Central

    Palsson, Matthew S.; Gu, Mile; Ho, Joseph; Wiseman, Howard M.; Pryde, Geoff J.

    2017-01-01

    Computer simulation of observable phenomena is an indispensable tool for engineering new technology, understanding the natural world, and studying human society. However, the most interesting systems are often so complex that simulating their future behavior demands storing immense amounts of information regarding how they have behaved in the past. For increasingly complex systems, simulation becomes increasingly difficult and is ultimately constrained by resources such as computer memory. Recent theoretical work shows that quantum theory can reduce this memory requirement beyond ultimate classical limits, as measured by a process’ statistical complexity, C. We experimentally demonstrate this quantum advantage in simulating stochastic processes. Our quantum implementation observes a memory requirement of Cq = 0.05 ± 0.01, far below the ultimate classical limit of C = 1. Scaling up this technique would substantially reduce the memory required in simulations of more complex systems. PMID:28168218

  2. Programming Models for Concurrency and Real-Time

    NASA Astrophysics Data System (ADS)

    Vitek, Jan

    Modern real-time applications are increasingly large, complex and concurrent systems which must meet stringent performance and predictability requirements. Programming those systems requires fundamental advances in programming languages and runtime systems. This talk presents our work on Flexotasks, a programming model for concurrent, real-time systems inspired by stream-processing and concurrent active objects. Among the key innovations in Flexotasks is that it supports both real-time garbage collection and region-based memory with an ownership type system for static safety. Communication between tasks is performed by channels with a linear type discipline to avoid copying messages, and by a non-blocking transactional memory facility. We have evaluated our model empirically within two distinct implementations, one based on Purdue's Ovm research virtual machine framework and the other on WebSphere, IBM's production real-time virtual machine. We have written a number of small programs, as well as a 30 KLOC avionics collision detector application. We show that Flexotasks are capable of executing periodic threads at 10 KHz with a standard deviation of 1.2 us and have performance competitive with hand-coded C programs.

  3. Epidemic failure detection and consensus for extreme parallelism

    DOE PAGES

    Katti, Amogh; Di Fatta, Giuseppe; Naughton, Thomas; ...

    2017-02-01

    Future extreme-scale high-performance computing systems will be required to work under frequent component failures. The MPI Forum's User Level Failure Mitigation proposal has introduced an operation, MPI_Comm_shrink, to synchronize the alive processes on the list of failed processes, so that applications can continue to execute even in the presence of failures by adopting algorithm-based fault tolerance techniques. This MPI_Comm_shrink operation requires a failure detection and consensus algorithm. This paper presents three novel failure detection and consensus algorithms using Gossiping. The proposed algorithms were implemented and tested using the Extreme-scale Simulator. The results show that in all algorithms the number of Gossip cycles to achieve global consensus scales logarithmically with system size. The second algorithm also shows better scalability in terms of memory and network bandwidth usage and a perfect synchronization in achieving global consensus. The third approach is a three-phase distributed failure detection and consensus algorithm and provides consistency guarantees even in very large and extreme-scale systems while at the same time being memory and bandwidth efficient.
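    The abstract does not reproduce the algorithms themselves. As a minimal sketch of why gossip-based dissemination reaches global agreement in a number of cycles that grows only logarithmically with system size, here is a toy synchronous push-gossip simulation; the function name, fanout, and seeding are illustrative assumptions, not the paper's design:

```python
import random

def gossip_consensus(n, failed, fanout=2, seed=0):
    """Toy synchronous push-gossip: count the cycles until every alive
    process knows the full set of failed processes."""
    rng = random.Random(seed)
    alive = [r for r in range(n) if r not in failed]
    views = {r: set() for r in alive}   # each process's suspected-failed set
    views[alive[0]] = set(failed)       # one detector observes the failures
    cycles = 0
    while any(view != failed for view in views.values()):
        cycles += 1
        pushes = []
        for r in alive:
            if views[r]:                # only informed processes gossip
                peers = rng.sample([p for p in alive if p != r], fanout)
                pushes.append((peers, set(views[r])))  # snapshot this round
        for peers, view in pushes:
            for p in peers:
                views[p] |= view
        # (a real implementation would also detect new failures each cycle)
    return cycles
```

    Because the set of informed processes grows multiplicatively each cycle, doubling the system size adds only a roughly constant number of extra cycles.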

  4. Working memory capacity and the top-down control of visual search: Exploring the boundaries of "executive attention".

    PubMed

    Kane, Michael J; Poole, Bradley J; Tuholski, Stephen W; Engle, Randall W

    2006-07-01

    The executive attention theory of working memory capacity (WMC) proposes that measures of WMC broadly predict higher order cognitive abilities because they tap important and general attention capabilities (R. W. Engle & M. J. Kane, 2004). Previous research demonstrated WMC-related differences in attention tasks that required restraint of habitual responses or constraint of conscious focus. To further specify the executive attention construct, the present experiments sought boundary conditions of the WMC-attention relation. Three experiments correlated individual differences in WMC, as measured by complex span tasks, and executive control of visual search. In feature-absence search, conjunction search, and spatial configuration search, WMC was unrelated to search slopes, although they were large and reliably measured. Even in a search task designed to require the volitional movement of attention (J. M. Wolfe, G. A. Alvarez, & T. S. Horowitz, 2000), WMC was irrelevant to performance. Thus, WMC is not associated with all demanding or controlled attention processes, which poses problems for some general theories of WMC. Copyright 2006 APA, all rights reserved.

  5. Uncertainty-Dependent Extinction of Fear Memory in an Amygdala-mPFC Neural Circuit Model

    PubMed Central

    Li, Yuzhe; Nakae, Ken; Ishii, Shin; Naoki, Honda

    2016-01-01

    Uncertainty of fear conditioning is crucial for the acquisition and extinction of fear memory. Fear memory acquired through partial pairings of a conditioned stimulus (CS) and an unconditioned stimulus (US) is more resistant to extinction than that acquired through full pairings; this effect is known as the partial reinforcement extinction effect (PREE). Although the PREE has been explained by psychological theories, the neural mechanisms underlying the PREE remain largely unclear. Here, we developed a neural circuit model based on three distinct types of neurons (fear, persistent and extinction neurons) in the amygdala and medial prefrontal cortex (mPFC). In the model, the fear, persistent and extinction neurons encode predictions of net severity, of unconditioned stimulus (US) intensity, and of net safety, respectively. Our simulation successfully reproduces the PREE. We revealed that unpredictability of the US during extinction was represented by the combined responses of the three types of neurons, which are critical for the PREE. In addition, we extended the model to include amygdala subregions and the mPFC to address a recent finding that the ventral mPFC (vmPFC) is required for consolidating extinction memory but not for memory retrieval. Furthermore, model simulations led us to propose a novel procedure to enhance extinction learning through re-conditioning with a stronger US; strengthened fear memory up-regulates the extinction neuron, which, in turn, further inhibits the fear neuron during re-extinction. Thus, our models increased the understanding of the functional roles of the amygdala and vmPFC in the processing of uncertainty in fear conditioning and extinction. PMID:27617747

  6. Uncertainty-Dependent Extinction of Fear Memory in an Amygdala-mPFC Neural Circuit Model.

    PubMed

    Li, Yuzhe; Nakae, Ken; Ishii, Shin; Naoki, Honda

    2016-09-01

    Uncertainty of fear conditioning is crucial for the acquisition and extinction of fear memory. Fear memory acquired through partial pairings of a conditioned stimulus (CS) and an unconditioned stimulus (US) is more resistant to extinction than that acquired through full pairings; this effect is known as the partial reinforcement extinction effect (PREE). Although the PREE has been explained by psychological theories, the neural mechanisms underlying the PREE remain largely unclear. Here, we developed a neural circuit model based on three distinct types of neurons (fear, persistent and extinction neurons) in the amygdala and medial prefrontal cortex (mPFC). In the model, the fear, persistent and extinction neurons encode predictions of net severity, of unconditioned stimulus (US) intensity, and of net safety, respectively. Our simulation successfully reproduces the PREE. We revealed that unpredictability of the US during extinction was represented by the combined responses of the three types of neurons, which are critical for the PREE. In addition, we extended the model to include amygdala subregions and the mPFC to address a recent finding that the ventral mPFC (vmPFC) is required for consolidating extinction memory but not for memory retrieval. Furthermore, model simulations led us to propose a novel procedure to enhance extinction learning through re-conditioning with a stronger US; strengthened fear memory up-regulates the extinction neuron, which, in turn, further inhibits the fear neuron during re-extinction. Thus, our models increased the understanding of the functional roles of the amygdala and vmPFC in the processing of uncertainty in fear conditioning and extinction.

  7. Different Phases of Long-Term Memory Require Distinct Temporal Patterns of PKA Activity after Single-Trial Classical Conditioning

    ERIC Educational Resources Information Center

    Michel, Maximilian; Kemenes, Ildiko; Muller, Uli; Kemenes, Gyorgy

    2008-01-01

    The cAMP-dependent protein kinase (PKA) is known to play a critical role in both transcription-independent short-term or intermediate-term memory and transcription-dependent long-term memory (LTM). Although distinct phases of LTM already have been demonstrated in some systems, it is not known whether these phases require distinct temporal patterns…

  8. Camera memory study for large space telescope. [charge coupled devices

    NASA Technical Reports Server (NTRS)

    Hoffman, C. P.; Brewer, J. E.; Brager, E. A.; Farnsworth, D. L.

    1975-01-01

    Specifications were developed for a memory system to be used as the storage media for camera detectors on the large space telescope (LST) satellite. Detectors with limited internal storage time, such as intensified charge-coupled devices and silicon intensified targets, are implied. The general characteristics of different approaches to the memory system are reported, with comparisons made within the guidelines set forth for the LST application. Priority ordering of comparisons is on the basis of cost, reliability, power, and physical characteristics. Specific rationales are provided for the rejection of unsuitable memory technologies. A recommended technology was selected and used to establish specifications for a breadboard memory. Procurement scheduling is provided for delivery of system breadboards in 1976, prototypes in 1978, and space qualified units in 1980.

  9. Dopamine D1/D5 receptors in the dorsal hippocampus are required for the acquisition and expression of a single trial cocaine-associated memory.

    PubMed

    Kramar, Cecilia P; Barbano, M Flavia; Medina, Jorge H

    2014-12-01

    The role of the hippocampus in memory supporting associative learning between contexts and unconditioned stimuli is well documented. Hippocampal dopamine neurotransmission modulates synaptic plasticity and memory processing of fear-motivated and spatial learning tasks. Much less is known about the involvement of the hippocampus and its D1/D5 dopamine receptors in the acquisition, consolidation and expression of memories for drug-associated experiences, more particularly, in the processing of single-pairing cocaine conditioned place preference (CPP) training. To determine the temporal dynamics of cocaine CPP memory formation, we trained rats in a one-pairing CPP paradigm and tested them at different time intervals after conditioning. The cocaine-associated memory lasted 24 h but not 72 h. Then, we bilaterally infused the dorsal hippocampus with the GABA-A receptor agonist muscimol or the D1/D5 dopamine receptor antagonist SCH 23390 at different stages to evaluate the mechanisms involved in the acquisition, consolidation or expression of cocaine CPP memory. Blockade of D1/D5 dopamine receptors at the moment of training impaired the acquisition of cocaine CPP memories, without having any effect when administered immediately or 12 h after training. The expression of cocaine CPP memory was also affected by the administration of SCH 23390 at the moment of the test. Conversely, muscimol impaired the consolidation of cocaine CPP memory only when administered 12 h post conditioning. These findings suggest that dopaminergic inputs to the dorsal hippocampus are required for the acquisition and expression of one-trial cocaine-associated memory, while neural activity of this structure is required for the late consolidation of these types of memories. Copyright © 2014 Elsevier Inc. All rights reserved.

  10. Temperature induced complementary switching in titanium oxide resistive random access memory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Panda, D., E-mail: dpanda@nist.edu; Department of Electronics Engineering and Institute of Electronics, National Chiao Tung University, Hsinchu 30010, Taiwan; Simanjuntak, F. M.

    2016-07-15

    On the way towards high memory density and computer performance, a considerable development in energy efficiency represents the foremost aspiration in future information technology. A complementary resistive switch consists of two antiserial resistive switching memory (RRAM) elements and allows for the construction of large passive crossbar arrays by solving the sneak-path problem, in combination with a drastic reduction of the power consumption. Here we present a titanium oxide based complementary RRAM (CRRAM) device with a Pt top and TiN bottom electrode. A subsequent post-metal anneal at 400°C induces the complementary switching. A forming voltage of 4.3 V is required for this device to initiate the switching process. The same device also exhibits bipolar switching at lower compliance current, Ic < 50 μA. The CRRAM device has high reliability. Formation of an intermediate titanium oxynitride layer is confirmed from the cross-sectional HRTEM analysis. The origin of the complementary switching mechanism is discussed with AES, HRTEM analysis and a schematic diagram. This paper provides valuable data along with analysis on the origin of CRRAM for the application in nanoscale devices.

  11. A Component-Based FPGA Design Framework for Neuronal Ion Channel Dynamics Simulations

    PubMed Central

    Mak, Terrence S. T.; Rachmuth, Guy; Lam, Kai-Pui; Poon, Chi-Sang

    2008-01-01

    Neuron-machine interfaces such as dynamic clamp and brain-implantable neuroprosthetic devices require real-time simulations of neuronal ion channel dynamics. Field Programmable Gate Array (FPGA) has emerged as a high-speed digital platform ideal for such application-specific computations. We propose an efficient and flexible component-based FPGA design framework for neuronal ion channel dynamics simulations, which overcomes certain limitations of the recently proposed memory-based approach. A parallel processing strategy is used to minimize computational delay, and a hardware-efficient factoring approach for calculating exponential and division functions in neuronal ion channel models is used to conserve resource consumption. Performances of the various FPGA design approaches are compared theoretically and experimentally in corresponding implementations of the AMPA and NMDA synaptic ion channel models. Our results suggest that the component-based design framework provides a more memory economic solution as well as more efficient logic utilization for large word lengths, whereas the memory-based approach may be suitable for time-critical applications where a higher throughput rate is desired. PMID:17190033

  12. The Glasgow Voice Memory Test: Assessing the ability to memorize and recognize unfamiliar voices.

    PubMed

    Aglieri, Virginia; Watson, Rebecca; Pernet, Cyril; Latinus, Marianne; Garrido, Lúcia; Belin, Pascal

    2017-02-01

    One thousand one hundred and twenty subjects as well as a developmental phonagnosic subject (KH) along with age-matched controls performed the Glasgow Voice Memory Test, which assesses the ability to encode and immediately recognize, through an old/new judgment, both unfamiliar voices (delivered as vowels, making language requirements minimal) and bell sounds. The inclusion of non-vocal stimuli allows the detection of significant dissociations between the two categories (vocal vs. non-vocal stimuli). The distributions of accuracy and sensitivity scores (d') reflected a wide range of individual differences in voice recognition performance in the population. As expected, KH showed a dissociation between the recognition of voices and bell sounds, her performance being significantly poorer than matched controls for voices but not for bells. By providing normative data of a large sample and by testing a developmental phonagnosic subject, we demonstrated that the Glasgow Voice Memory Test, available online and accessible from all over the world, can be a valid screening tool (~5 min) for a preliminary detection of potential cases of phonagnosia and of "super recognizers" for voices.

  13. Conditions Database for the Belle II Experiment

    NASA Astrophysics Data System (ADS)

    Wood, L.; Elsethagen, T.; Schram, M.; Stephan, E.

    2017-10-01

    The Belle II experiment at KEK is preparing for first collisions in 2017. Processing the large amounts of data that will be produced will require conditions data to be readily available to systems worldwide in a fast and efficient manner that is straightforward for both the user and maintainer. The Belle II conditions database was designed with a straightforward goal: make it as easily maintainable as possible. To this end, HEP-specific software tools were avoided as much as possible and industry standard tools used instead. HTTP REST services were selected as the application interface, which provide a high-level interface to users through the use of standard libraries such as curl. The application interface itself is written in Java and runs in an embedded Payara-Micro Java EE application server. Scalability at the application interface is provided by use of Hazelcast, an open source In-Memory Data Grid (IMDG) providing distributed in-memory computing and supporting the creation and clustering of new application interface instances as demand increases. The IMDG provides fast and efficient access to conditions data via in-memory caching.

  14. Hyperactivity in boys with attention-deficit/hyperactivity disorder (ADHD): The role of executive and non-executive functions.

    PubMed

    Hudec, Kristen L; Alderson, R Matt; Patros, Connor H G; Lea, Sarah E; Tarle, Stephanie J; Kasper, Lisa J

    2015-01-01

    Motor activity of boys (age 8-12 years) with (n=19) and without (n=18) ADHD was objectively measured with actigraphy across experimental conditions that varied with regard to demands on executive functions. Activity exhibited during two n-back (1-back, 2-back) working memory tasks was compared to activity during a choice-reaction time (CRT) task that placed relatively fewer demands on executive processes and during a simple reaction time (SRT) task that required mostly automatic processing with minimal executive demands. Results indicated that children in the ADHD group exhibited greater activity compared to children in the non-ADHD group. Further, both groups exhibited the greatest activity during conditions with high working memory demands, followed by the reaction time and control task conditions, respectively. The findings indicate that large-magnitude increases in motor activity are predominantly associated with increased demands on working memory, though demands on non-executive processes are sufficient to elicit small to moderate increases in motor activity as well. Published by Elsevier Ltd.

  15. FFTs in external or hierarchical memory

    NASA Technical Reports Server (NTRS)

    Bailey, David H.

    1989-01-01

    A description is given of advanced techniques for computing an ordered FFT on a computer with external or hierarchical memory. These algorithms (1) require as few as two passes through the external data set, (2) use strictly unit stride, long vector transfers between main memory and external storage, (3) require only a modest amount of scratch space in main memory, and (4) are well suited for vector and parallel computation. Performance figures are included for implementations of some of these algorithms on Cray supercomputers. Of interest is the fact that a main memory version outperforms the current Cray library FFT routines on the Cray-2, the Cray X-MP, and the Cray Y-MP systems. Using all eight processors on the Cray Y-MP, this main memory routine runs at nearly 2 Gflops.
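    A common way to realize such a two-pass external FFT is the four-step factorization of a length-N transform with N = N1·N2: N1 transforms of length N2, a twiddle-factor multiplication, a transpose, and N2 transforms of length N1, with each pass streaming contiguous blocks. The sketch below illustrates the arithmetic only, not the I/O scheduling, and uses naive DFTs as a stand-in for optimized in-memory block FFTs:

```python
import cmath

def dft(x):
    # Naive O(n^2) DFT, standing in for an in-memory FFT on one block.
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * cmath.pi * j * k / n) for j in range(n))
            for k in range(n)]

def four_step_fft(x, n1, n2):
    """Length n1*n2 FFT via the four-step factorization.

    Each pass reads the data in contiguous blocks, which is what lets an
    external-memory implementation use unit-stride transfers.
    """
    n = n1 * n2
    assert len(x) == n
    w = lambda e: cmath.exp(-2j * cmath.pi * e / n)
    # Step 1: n1 independent length-n2 DFTs over the strided sub-sequences.
    a = [dft([x[j1 + n1 * j2] for j2 in range(n2)]) for j1 in range(n1)]
    # Step 2: twiddle-factor multiplication.
    for j1 in range(n1):
        for k2 in range(n2):
            a[j1][k2] *= w(j1 * k2)
    # Step 3 (after an implicit transpose): n2 independent length-n1 DFTs.
    b = [dft([a[j1][k2] for j1 in range(n1)]) for k2 in range(n2)]
    # Step 4: reassemble the output in natural order, X[k2 + n2*k1].
    return [b[k2][k1] for k1 in range(n1) for k2 in range(n2)]
```

    With N1 ≈ N2 ≈ √N, each block of length √N fits in main memory while the full data set lives in external storage, which matches the two-pass, modest-scratch-space properties listed above.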

  16. Wide-Range Motion Estimation Architecture with Dual Search Windows for High Resolution Video Coding

    NASA Astrophysics Data System (ADS)

    Dung, Lan-Rong; Lin, Meng-Chun

    This paper presents a memory-efficient motion estimation (ME) technique for high-resolution video compression. The main objective is to reduce external memory access, especially for limited local memory resources. Reducing memory access can substantially cut the notorious power consumption. The key to reducing memory accesses is a center-biased algorithm, which performs the motion vector (MV) search with the minimum search data. While considering data reusability, the proposed dual-search-windowing (DSW) approach uses the secondary search window only when the search requires it. By doing so, the loading of search windows can be alleviated, reducing the required external memory bandwidth. The proposed techniques can save up to 81% of external memory bandwidth and require only 135 MBytes/sec, while the quality degradation is less than 0.2 dB for 720p HDTV clips coded at 8 Mbits/sec.
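    The center-biased principle the abstract relies on can be illustrated with a greedy small-diamond search; this is a generic stand-in, not the paper's DSW scheme, and the cost function and search radius are assumptions:

```python
def center_biased_search(cost, max_radius=8):
    """Small-diamond search for the motion vector minimising `cost(mv)`.

    Starts at the zero vector (the centre bias) and moves greedily to the
    best of the four diamond neighbours; because most real-world motion
    vectors lie near zero, far fewer search-window pixels are read than
    in an exhaustive full search.
    """
    mv = (0, 0)
    best = cost(mv)
    while max(abs(mv[0]), abs(mv[1])) < max_radius:
        cands = [(mv[0] + dx, mv[1] + dy)
                 for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
        step = min(cands, key=cost)
        if cost(step) >= best:
            break                 # local minimum reached: stop early
        mv, best = step, cost(step)
    return mv
```

    Each step only needs the blocks around the current candidate, so the working set stays small; a secondary window would be fetched only when the walk leaves the primary one.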

  17. Dielectric elastomer memory

    NASA Astrophysics Data System (ADS)

    O'Brien, Benjamin M.; McKay, Thomas G.; Xie, Sheng Q.; Calius, Emilio P.; Anderson, Iain A.

    2011-04-01

    Life shows us that the distribution of intelligence throughout flexible muscular networks is a highly successful solution to a wide range of challenges, for example: human hearts, octopi, or even starfish. Recreating this success in engineered systems requires soft actuator technologies with embedded sensing and intelligence. Dielectric Elastomer Actuator(s) (DEA) are promising due to their large stresses and strains, as well as quiet, flexible, multimodal operation. Recently dielectric elastomer devices were presented with built-in sensor, driver, and logic capability enabled by a new concept called the Dielectric Elastomer Switch(es) (DES). DES use electrode piezoresistivity to control the charge on DEA and enable the distribution of intelligence throughout a DEA device. In this paper we advance the capabilities of DES further to form volatile memory elements. A set-reset flip-flop with an inverted reset line was developed based on DES and DEA. With a 3200 V supply the flip-flop behaved appropriately and demonstrated the creation of dielectric elastomer memory capable of changing state in response to 1 second long set and reset pulses. This memory opens up applications such as oscillator, de-bounce, timing, and sequential logic circuits; all of which could be distributed throughout biomimetic actuator arrays. Future work will include miniaturisation to improve response speed, implementation into more complex circuits, and investigation of longer lasting and more sensitive switching materials.

  18. Comparing reactive and memory-one strategies of direct reciprocity

    NASA Astrophysics Data System (ADS)

    Baek, Seung Ki; Jeong, Hyeong-Chai; Hilbe, Christian; Nowak, Martin A.

    2016-05-01

    Direct reciprocity is a mechanism for the evolution of cooperation based on repeated interactions. When individuals meet repeatedly, they can use conditional strategies to enforce cooperative outcomes that would not be feasible in one-shot social dilemmas. Direct reciprocity requires that individuals keep track of their past interactions and find the right response. However, there are natural bounds on strategic complexity: Humans find it difficult to remember past interactions accurately, especially over long timespans. Given these limitations, it is natural to ask how complex strategies need to be for cooperation to evolve. Here, we study stochastic evolutionary game dynamics in finite populations to systematically compare the evolutionary performance of reactive strategies, which only respond to the co-player's previous move, and memory-one strategies, which take into account both their own and the co-player's previous move. In both cases, we compare deterministic strategy and stochastic strategy spaces. For reactive strategies and small costs, we find that stochasticity benefits cooperation, because it allows for generous tit-for-tat. For memory-one strategies and small costs, we find that stochasticity does not increase the propensity for cooperation, because the deterministic rule of win-stay, lose-shift works best. For memory-one strategies and large costs, however, stochasticity can augment cooperation.
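    The strategy spaces being compared can be made concrete with a small iterated-game simulation. In this sketch a memory-one strategy maps the pair (own last move, co-player's last move) to a cooperation probability, and a reactive strategy is the special case that ignores its own last move; the payoff values and the generosity level are illustrative assumptions, not taken from the paper:

```python
import random

BENEFIT, COST = 2.0, 1.0   # donation game: cooperating costs COST, gives BENEFIT

def play(strat1, strat2, rounds, rng):
    """Average per-round payoffs of two memory-one strategies.

    A strategy is a dict mapping (my_last, their_last) to the probability
    of cooperating next round.  Both players open with cooperation.
    """
    m1 = m2 = 'C'
    p1 = p2 = 0.0
    for _ in range(rounds):
        n1 = 'C' if rng.random() < strat1[(m1, m2)] else 'D'
        n2 = 'C' if rng.random() < strat2[(m2, m1)] else 'D'
        p1 += (BENEFIT if n2 == 'C' else 0) - (COST if n1 == 'C' else 0)
        p2 += (BENEFIT if n1 == 'C' else 0) - (COST if n2 == 'C' else 0)
        m1, m2 = n1, n2
    return p1 / rounds, p2 / rounds

# Win-stay, lose-shift: repeat your move after a good payoff (CC, DC),
# switch after a bad one (CD, DD).  Deterministic memory-one rule.
WSLS = {('C', 'C'): 1, ('C', 'D'): 0, ('D', 'C'): 0, ('D', 'D'): 1}

# Generous tit-for-tat: reactive, copies the co-player but forgives a
# defection with probability 0.3 (the generosity level is an assumption).
GTFT = {(m, t): (1 if t == 'C' else 0.3) for m in 'CD' for t in 'CD'}
```

    Starting from mutual cooperation, both pairings sustain the full cooperative payoff of BENEFIT - COST per round; the strategies differ in how they recover after errors, which is what drives the evolutionary comparison in the paper.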

  19. Phosphatidylserine and the human brain.

    PubMed

    Glade, Michael J; Smith, Kyl

    2015-06-01

    The aim of this study was to assess the roles and importance of phosphatidylserine (PS), an endogenous phospholipid and dietary nutrient, in human brain biochemistry, physiology, and function. A scientific literature search was conducted on MEDLINE for relevant articles regarding PS and the human brain published before June 2014. Additional publications were identified from references provided in original papers; 127 articles were selected for inclusion in this review. A large body of scientific evidence describes the interactions among PS, cognitive activity, cognitive aging, and retention of cognitive functioning ability. Phosphatidylserine is required for healthy nerve cell membranes and myelin. Aging of the human brain is associated with biochemical alterations and structural deterioration that impair neurotransmission. Exogenous PS (300-800 mg/d) is absorbed efficiently in humans, crosses the blood-brain barrier, and safely slows, halts, or reverses biochemical alterations and structural deterioration in nerve cells. It supports human cognitive functions, including the formation of short-term memory, the consolidation of long-term memory, the ability to create new memories, the ability to retrieve memories, the ability to learn and recall information, the ability to focus attention and concentrate, the ability to reason and solve problems, language skills, and the ability to communicate. It also supports locomotor functions, especially rapid reactions and reflexes. Copyright © 2015 Elsevier Inc. All rights reserved.

  20. Comparing reactive and memory-one strategies of direct reciprocity

    PubMed Central

    Baek, Seung Ki; Jeong, Hyeong-Chai; Hilbe, Christian; Nowak, Martin A.

    2016-01-01

    Direct reciprocity is a mechanism for the evolution of cooperation based on repeated interactions. When individuals meet repeatedly, they can use conditional strategies to enforce cooperative outcomes that would not be feasible in one-shot social dilemmas. Direct reciprocity requires that individuals keep track of their past interactions and find the right response. However, there are natural bounds on strategic complexity: Humans find it difficult to remember past interactions accurately, especially over long timespans. Given these limitations, it is natural to ask how complex strategies need to be for cooperation to evolve. Here, we study stochastic evolutionary game dynamics in finite populations to systematically compare the evolutionary performance of reactive strategies, which only respond to the co-player's previous move, and memory-one strategies, which take into account both their own and the co-player's previous move. In both cases, we compare deterministic strategy and stochastic strategy spaces. For reactive strategies and small costs, we find that stochasticity benefits cooperation, because it allows for generous tit-for-tat. For memory-one strategies and small costs, we find that stochasticity does not increase the propensity for cooperation, because the deterministic rule of win-stay, lose-shift works best. For memory-one strategies and large costs, however, stochasticity can augment cooperation. PMID:27161141

  1. Turbulent diffusion with memories and intrinsic shear

    NASA Technical Reports Server (NTRS)

    Tchen, C. M.

    1974-01-01

    The first part of the present theory is devoted to the derivation of a Fokker-Planck equation. The eddies smaller than the hydrodynamic scale of the diffusion cloud form a diffusivity, while the inhomogeneous, bigger eddies give rise to a nonuniform migratory drift. This introduces an eddy-induced shear which reflects on the large-scale diffusion. The eddy-induced shear does not require the presence of a permanent wind shear and is intrinsic to the diffusion. Secondly, a transport theory of diffusivity is developed by the method of repeated-cascade and is based upon a relaxation of a chain of memories with decreasing information. The full range of diffusion consists of inertia, composite, and shear subranges, for which variance and eddy diffusivities are predicted. The coefficients are evaluated. Comparison with experiments in the upper atmosphere and oceans is made.

  2. A Parallel Rendering Algorithm for MIMD Architectures

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.; Orloff, Tobias

    1991-01-01

    Applications such as animation and scientific visualization demand high performance rendering of complex three dimensional scenes. To deliver the necessary rendering rates, highly parallel hardware architectures are required. The challenge is then to design algorithms and software which effectively use the hardware parallelism. A rendering algorithm targeted to distributed memory MIMD architectures is described. For maximum performance, the algorithm exploits both object-level and pixel-level parallelism. The behavior of the algorithm is examined both analytically and experimentally. Its performance for large numbers of processors is found to be limited primarily by communication overheads. An experimental implementation for the Intel iPSC/860 shows increasing performance from 1 to 128 processors across a wide range of scene complexities. It is shown that minimal modifications to the algorithm will adapt it for use on shared memory architectures as well.

  3. An ASIC memory buffer controller for a high speed disk system

    NASA Technical Reports Server (NTRS)

    Hodson, Robert F.; Campbell, Steve

    1993-01-01

    The need for large capacity, high speed mass memory storage devices has become increasingly evident at NASA during the past decade. High performance mass storage systems are crucial to present and future NASA systems. Spaceborne data storage system requirements have grown in response to the increasing amounts of data generated and processed by orbiting scientific experiments. Predictions indicate increases in the volume of data by orders of magnitude during the next decade. Current predictions are for storage capacities on the order of terabits (Tb), with data rates exceeding one gigabit per second (Gbps). As part of the design effort for a state of the art mass storage system, NASA Langley has designed a 144 CMOS ASIC to support high speed data transfers. This paper discusses the system architecture, ASIC design and some of the lessons learned in the development process.

  4. GPU-accelerated algorithms for compressed signals recovery with application to astronomical imagery deblurring

    NASA Astrophysics Data System (ADS)

    Fiandrotti, Attilio; Fosson, Sophie M.; Ravazzi, Chiara; Magli, Enrico

    2018-04-01

    Compressive sensing promises to enable bandwidth-efficient on-board compression of astronomical data by lifting the encoding complexity from the source to the receiver. The signal is recovered off-line, exploiting GPUs' parallel computation capabilities to speed up the reconstruction process. However, inherent GPU hardware constraints limit the size of the recoverable signal and the speedup practically achievable. In this work, we design parallel algorithms that exploit the properties of circulant matrices for efficient GPU-accelerated sparse signal recovery. Our approach reduces the memory requirements, allowing us to recover very large signals with limited memory. In addition, it achieves a tenfold signal recovery speedup thanks to ad-hoc parallelization of matrix-vector multiplications and matrix inversions. Finally, we practically demonstrate our algorithms in a typical application of circulant matrices: deblurring a sparse astronomical image in the compressed domain.
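    The property of circulant matrices that makes this memory reduction possible is that a circulant matrix-vector product is a circular convolution, so it can be computed from the matrix's first column alone via Fourier transforms, with no O(n^2) matrix storage. A minimal sketch, in which naive DFTs stand in for the GPU FFTs and all names are illustrative:

```python
import cmath

def dft(x, sign=-1):
    # Naive O(n^2) (inverse-capable) DFT; a real implementation uses an FFT.
    n = len(x)
    return [sum(x[j] * cmath.exp(sign * 2j * cmath.pi * j * k / n)
                for j in range(n)) for k in range(n)]

def circulant_matvec(c, x):
    """y = C @ x for the circulant matrix C whose first column is c.

    Computed as a circular convolution in the Fourier domain: transform
    c and x, multiply pointwise, and transform back.  Only the length-n
    vector c is stored, never the n-by-n matrix.
    """
    n = len(c)
    fc, fx = dft(c), dft(x)
    # Inverse DFT (sign=+1) with 1/n normalisation; inputs are real here.
    return [v.real / n for v in dft([a * b for a, b in zip(fc, fx)], sign=+1)]
```

    Multiplying by a unit impulse recovers a column of C, which is a convenient correctness check; in the recovery algorithms, the same trick is applied to both the sensing operator and its adjoint inside each iteration.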

  5. Elevated-Confined Phase-Change Random Access Memory Cells

    NASA Astrophysics Data System (ADS)

    Lee, Hock Koon; Shi, Luping; Zhao, Rong; Yang, Hongxin; Lim, Kian Guan; Li, Jianming; Chong, Tow Chong

    2010-04-01

    A new elevated-confined phase-change random access memory (PCRAM) cell structure that reduces power consumption is proposed. In this structure, the confined phase-change region sits on top of a small metal column enclosed by a dielectric at the sides, so heat is retained more effectively beneath the phase-change region. In the conventional structure, by contrast, the confined phase-change region sits directly above a large planar bottom metal electrode, which readily conducts most of the induced heat away. Simulations of the elevated-confined structure showed a more uniform temperature profile around the active region and a higher peak temperature at the phase-change layer (PCL). Experimental results showed that the elevated-confined PCRAM cell requires a lower programming power and has better scalability than a conventional confined PCRAM cell.

  6. Striatal degeneration impairs language learning: evidence from Huntington's disease.

    PubMed

    De Diego-Balaguer, R; Couette, M; Dolbeau, G; Dürr, A; Youssov, K; Bachoud-Lévi, A-C

    2008-11-01

    Although the role of the striatum in language processing is still largely unclear, a number of recent proposals have outlined its specific contribution. Converging evidence suggests that the striatum is involved in those aspects of rule application that require non-automatized behaviour. This is the main characteristic of the earliest phases of language acquisition, which require the online detection of distant dependencies and the creation of syntactic categories by means of rule learning. Learning of sequences and categorization in non-language domains is known to require striatal recruitment. We therefore hypothesized that the striatum should play a prominent role in the extraction of rules when learning a language. We studied 13 pre-symptomatic gene-carriers (pre-HD) and 22 early stage Huntington's disease patients, both groups characterized by progressive degeneration of the striatum, and 21 late stage Huntington's disease patients (18 stage II, two stage III and one stage IV), in whom cortical degeneration accompanies striatal degeneration. When presented with a simplified artificial language from which words and rules could be extracted, early stage Huntington's disease patients (stage I) were impaired on the learning test, showing a greater impairment in rule than in word learning compared with the 20 age- and education-matched controls. Huntington's disease patients at later stages were impaired in both word and rule learning. While spared in their overall performance, gene-carriers who had learned a set of abstract artificial language rules were impaired in transferring those rules to similar artificial language structures. Correlation analyses among several neuropsychological tests of executive function showed that rule learning correlated with tests requiring working memory and attentional control, while word learning correlated with a test involving episodic memory. These learning impairments correlated significantly with the bicaudate ratio. The overall results support striatal involvement in rule extraction from speech and suggest that language acquisition requires several aspects of memory and executive function for word and rule learning.

  7. Time Constraints and Resource Sharing in Adults' Working Memory Spans

    ERIC Educational Resources Information Center

    Barrouillet, Pierre; Bernardin, Sophie; Camos, Valerie

    2004-01-01

    This article presents a new model that accounts for working memory spans in adults, the time-based resource-sharing model. The model assumes that both components (i.e., processing and maintenance) of the main working memory tasks require attention and that memory traces decay as soon as attention is switched away. Because memory retrievals are…

  8. Functional Integrity of the Retrosplenial Cortex Is Essential for Rapid Consolidation and Recall of Fear Memory

    ERIC Educational Resources Information Center

    Katche, Cynthia; Dorman, Guido; Slipczuk, Leandro; Cammarota, Martin; Medina, Jorge H.

    2013-01-01

    Memory storage is a temporally graded process involving different phases and different structures in the mammalian brain. Cortical plasticity is essential to store stable memories, but little is known regarding its involvement in memory processing. Here we show that fear memory consolidation requires early post-training macromolecular synthesis in…

  9. Large-Scale Modeling of Epileptic Seizures: Scaling Properties of Two Parallel Neuronal Network Simulation Algorithms

    DOE PAGES

    Pesce, Lorenzo L.; Lee, Hyong C.; Hereld, Mark; ...

    2013-01-01

    Our limited understanding of the relationship between the behavior of individual neurons and large neuronal networks is an important limitation in current epilepsy research and may be one of the main causes of our inadequate ability to treat the disease. Addressing this problem directly via experiments is impossibly complex; thus, we have been developing and studying medium- to large-scale simulations of detailed neuronal networks to guide us. Flexibility in the connection schemas and a complete description of the cortical tissue seem necessary for this purpose. In this paper we examine some of the basic issues encountered in these multiscale simulations. We have determined the detailed behavior of two such simulators on parallel computer systems. The observed memory and computation-time scaling behavior for a distributed memory implementation was very good over the range studied, both in terms of network sizes (2,000 to 400,000 neurons) and processor pool sizes (1 to 256 processors). Our simulations required between a few megabytes and about 150 gigabytes of RAM and lasted between a few minutes and about a week, well within the capability of most multinode clusters. Therefore, simulations of epileptic seizures on networks with millions of cells should be feasible on current supercomputers.
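
    As a rough illustration of the linear memory scaling reported above, the largest measured point (about 150 GB for 400,000 neurons) can be extrapolated to the multi-million-cell networks mentioned in the conclusion. The per-neuron cost below is derived only from those two reported numbers and is an assumption, not a measured figure:

    ```python
    # End-points reported above: ~150 GB of RAM for a 400,000-neuron network.
    neurons_measured = 400_000
    ram_measured_gb = 150.0
    gb_per_neuron = ram_measured_gb / neurons_measured  # assumed linear scaling

    def predicted_ram_gb(n_neurons):
        # Naive linear extrapolation of the distributed-memory footprint.
        return gb_per_neuron * n_neurons

    for n in (1_000_000, 2_000_000, 4_000_000):
        print(f"{n:>9,} neurons -> ~{predicted_ram_gb(n):,.0f} GB of RAM")
    ```

    Under this linear assumption a million-neuron network needs roughly 375 GB, which is indeed within reach of current multinode clusters and supercomputers, as the abstract concludes.
    
    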

  10. Optically simulating a quantum associative memory

    NASA Astrophysics Data System (ADS)

    Howell, John C.; Yeazell, John A.; Ventura, Dan

    2000-10-01

    This paper discusses the realization of a quantum associative memory using linear integrated optics. An associative memory produces a full pattern of bits when presented with only a partial pattern. Quantum computers have the potential to store large numbers of patterns and hence to far surpass any classical neural-network realization of an associative memory. In this work, two three-qubit associative memories implemented with linear integrated optics are described. In addition, corrupted, invented and degenerate memories are discussed.
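
    The quantum optical implementation itself is beyond a short sketch, but the classical behaviour being compared against (completing a full bit pattern from a partial or corrupted cue) can be illustrated with a Hopfield-style associative memory. This is a classical analogue for intuition only, not the paper's quantum scheme:

    ```python
    import numpy as np

    def train(patterns):
        # Hebbian outer-product rule over bipolar (+1/-1) patterns.
        n = patterns.shape[1]
        W = np.zeros((n, n))
        for p in patterns:
            W += np.outer(p, p)
        np.fill_diagonal(W, 0)  # no self-connections
        return W / n

    def recall(W, cue, steps=10):
        # Synchronous updates drive the state toward a stored fixed point.
        s = cue.copy()
        for _ in range(steps):
            s = np.where(W @ s >= 0, 1, -1)
        return s

    patterns = np.array([
        [1, -1, 1, -1, 1, -1, 1, -1],
        [1, 1, 1, 1, -1, -1, -1, -1],
    ])
    W = train(patterns)

    cue = patterns[0].copy()
    cue[0] = -cue[0]           # corrupt one bit of the stored pattern
    restored = recall(W, cue)  # recovers patterns[0]
    ```

    A classical network of this kind stores only on the order of 0.14n patterns reliably, which is the capacity limit quantum associative memories aim to surpass.
    
    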

  11. Recollection-Dependent Memory for Event Duration in Large-Scale Spatial Navigation

    ERIC Educational Resources Information Center

    Brunec, Iva K.; Ozubko, Jason D.; Barense, Morgan D.; Moscovitch, Morris

    2017-01-01

    Time and space represent two key aspects of episodic memories, forming the spatiotemporal context of events in a sequence. Little is known, however, about how temporal information, such as the duration and the order of particular events, are encoded into memory, and if it matters whether the memory representation is based on recollection or…

  12. Both a Nicotinic Single Nucleotide Polymorphism (SNP) and a Noradrenergic SNP Modulate Working Memory Performance when Attention Is Manipulated

    ERIC Educational Resources Information Center

    Greenwood, Pamela M.; Sundararajan, Ramya; Lin, Ming-Kuan; Kumar, Reshma; Fryxell, Karl J.; Parasuraman, Raja

    2009-01-01

    We investigated the relation between the two systems of visuospatial attention and working memory by examining the effect of normal variation in cholinergic and noradrenergic genes on working memory performance under attentional manipulation. We previously reported that working memory for location was impaired following large location precues,…

  13. Phonological Memory and the Acquisition of Grammar in Child L2 Learners

    ERIC Educational Resources Information Center

    Verhagen, Josje; Leseman, Paul; Messer, Marielle

    2015-01-01

    Previous studies show that second language (L2) learners with large phonological memory spans outperform learners with smaller memory spans on tests of L2 grammar. The current study investigated the relationship between phonological memory and L2 grammar in more detail than has been done earlier. Specifically, we asked how phonological memory…

  14. Frequency Monitoring: A Methodology for Assessing the Organization of Information

    DTIC Science & Technology

    1988-08-01

    Memory & Cognition, 6, 410-415. Tulving, E. (1972). Episodic and semantic memory. In E. Tulving & W. Donaldson (Eds.), Organization and memory. New... are stored in episodic memory (Tulving, 1972). These global-level memory units enable people to make important decisions about such significant... semantically similar. However, as indicated earlier, an advantage of frequency-estimation tests of memory is that they do not require the presentation of

  15. Elastocaloric effect in poly(vinylidene fluoride-trifluoroethylene-chlorotrifluoroethylene) terpolymer

    NASA Astrophysics Data System (ADS)

    Yoshida, Yukihiro; Yuse, Kaori; Guyomar, Daniel; Capsal, Jean-Fabien; Sebald, Gael

    2016-06-01

    The elastocaloric properties of poly(vinylidene fluoride-trifluoroethylene-chlorotrifluoroethylene) [P(VDF-TrFE-CTFE)] terpolymer were directly characterized using an infrared imaging camera. At a strain of 12%, a reversible adiabatic temperature variation of 2.15 °C was measured, corresponding to an isothermal entropy variation of 21.5 kJ m⁻³ K⁻¹ or 11 J kg⁻¹ K⁻¹. In comparison with other elastocaloric materials, P(VDF-TrFE-CTFE) appears to represent a trade-off between the large stresses required in shape memory alloys and the large strains required in natural rubber. The internal energy of the P(VDF-TrFE-CTFE) polymer was found to be independent of the strain, resulting in complete conversion of the mechanical work into heat, as for pure elastomeric materials. The elastocaloric effect therefore originates from a pure entropic elasticity, which is likely to be related to the amorphous phase of the polymer only.

  16. KCNQ Channels Regulate Age-Related Memory Impairment

    PubMed Central

    Cavaliere, Sonia; Malik, Bilal R.; Hodge, James J. L.

    2013-01-01

    In humans, KCNQ2/3 heteromeric channels form an M-current that acts as a brake on neuronal excitability, with mutations causing a form of epilepsy. The M-current is a key regulator of the neuronal plasticity underlying associative memory and the ethanol response in mammals. Previous work has shown that many of the molecules and plasticity mechanisms underlying changes in alcohol behaviour and addiction are shared with those of memory. We show that mutation of the single KCNQ channel in Drosophila (dKCNQ) causes decrements in associative short- and long-term memory, with KCNQ function in the mushroom body α/β neurons being required for short-term memory. Ethanol disrupts memory in wild-type flies, but not in a KCNQ null mutant background, suggesting that KCNQ may be a direct target of ethanol, the blockade of which interferes with the plasticity machinery required for memory formation. We show that, as in humans, Drosophila display age-related memory impairment, with the KCNQ mutant memory defect mimicking the effect of age on memory. KCNQ expression normally decreases in aging brains, and KCNQ overexpression in the mushroom body neurons of KCNQ mutants rescues age-related memory impairment. Therefore KCNQ is a central plasticity molecule that regulates age-dependent memory impairment. PMID:23638087

  17. The ERM protein Moesin is essential for neuronal morphogenesis and long-term memory in Drosophila.

    PubMed

    Freymuth, Patrick S; Fitzsimons, Helen L

    2017-08-29

    Moesin is a cytoskeletal adaptor protein that plays an important role in modification of the actin cytoskeleton. Rearrangement of the actin cytoskeleton drives both neuronal morphogenesis and the structural changes in neurons that are required for long-term memory formation. Moesin has been identified as a candidate memory gene in Drosophila; however, whether it is required for memory formation has not been evaluated. Here, we investigate the role of Moesin in neuronal morphogenesis and in short- and long-term memory formation in the courtship suppression assay, a model of associative memory. We found that both knockdown and overexpression of Moesin led to defects in axon growth and guidance as well as in dendritic arborization. Moreover, reduction of Moesin expression, or expression of a constitutively active phosphomimetic form, in the adult Drosophila brain had no effect on short-term memory but prevented long-term memory formation, an effect that was independent of its role in development. These results indicate a critical role for Moesin in both neuronal morphogenesis and long-term memory formation.

  18. Differential Involvement of Brain-Derived Neurotrophic Factor in Reconsolidation and Consolidation of Conditioned Taste Aversion Memory

    PubMed Central

    Wang, Yue; Zhang, Tian-Yi; Xin, Jian; Li, Ting; Yu, Hui; Li, Na; Chen, Zhe-Yu

    2012-01-01

    Consolidated memory can re-enter states of transient instability following reactivation, which is referred to as reconsolidation, and the exact molecular mechanisms underlying this process remain unexplored. Brain-derived neurotrophic factor (BDNF) plays a critical role in synaptic plasticity and memory processes. We have recently observed that BDNF signaling in the central nuclei of the amygdala (CeA) and insular cortex (IC) was involved in the consolidation of conditioned taste aversion (CTA) memory. However, whether BDNF in the CeA or IC is required for memory reconsolidation is still unclear. In the present study, using a CTA memory paradigm, we observed increased BDNF expression in the IC but not in the CeA during CTA reconsolidation. We further determined that BDNF synthesis and signaling in the IC but not in the CeA was required for memory reconsolidation. The differential, spatial-specific roles of BDNF in memory consolidation and reconsolidation suggest that dissociative molecular mechanisms underlie reconsolidation and consolidation, which might provide novel targets for manipulating newly encoded and reactivated memories without causing universal amnesia. PMID:23185492

  19. Medial prefrontal-hippocampal connectivity during emotional memory encoding predicts individual differences in the loss of associative memory specificity.

    PubMed

    Berkers, Ruud M W J; Klumpers, Floris; Fernández, Guillén

    2016-10-01

    Emotionally charged items are often remembered better, whereas a paradoxical loss of specificity is found for associative emotional information (specific memory). The balance between specific and generalized emotional memories appears to show large individual differences, potentially related to differences in (the risk for) affective disorders that are characterized by 'overgeneralized' emotional memories. Here, we investigate the neural underpinnings of individual differences in emotional associative memory. A large group of healthy male participants was scanned while encoding associations between face photographs and written occupational identities of either neutral ('driver') or negative ('murderer') valence. Subsequently, memory was tested by prompting participants to retrieve the occupational identity corresponding to each face. Whereas in both valence categories a similar number of faces was correctly labeled with 'neutral' and 'negative' identities (gist memory), specific associations were remembered less accurately when the occupational identity was negative rather than neutral (specific memory). This pattern of results suggests reduced memory specificity for associations containing a negatively valenced component. The encoding of these negative associations was paired with a selective increase in medial prefrontal cortex activity and medial prefrontal-hippocampal connectivity. Individual differences in valence-specific neural connectivity were predictive of valence-specific reductions in memory specificity. The relationship between loss of emotional memory specificity and medial prefrontal-hippocampal connectivity is in line with the hypothesized role of a medial prefrontal-hippocampal circuit in regulating memory specificity, and warrants further investigation in individuals displaying 'overgeneralized' emotional memories.

  20. Accelerating 3D Elastic Wave Equations on Knights Landing based Intel Xeon Phi processors

    NASA Astrophysics Data System (ADS)

    Sourouri, Mohammed; Birger Raknes, Espen

    2017-04-01

    In advanced imaging methods like reverse-time migration (RTM) and full waveform inversion (FWI), the elastic wave equation (EWE) is numerically solved many times to create the seismic image or the elastic parameter model update. It is therefore essential to optimize the solution time for the EWE, as this has a major impact on the total computational cost of running RTM or FWI. From a computational point of view, applications implementing EWEs face two major challenges. The first is the amount of memory-bound computation involved; the second is the execution of such computations over very large datasets. So far, multi-core processors have not been able to tackle these two challenges, which eventually led to the adoption of accelerators such as Graphics Processing Units (GPUs). Compared with conventional CPUs, GPUs are densely populated with many floating-point units and fast memory, a type of architecture that has proven to map well to many scientific computations. Despite these architectural advantages, full-scale adoption of accelerators has yet to materialize. First, accelerators require a significant programming effort imposed by programming models such as CUDA or OpenCL. Second, accelerators come with a limited amount of memory and require explicit data transfers between the CPU and the accelerator over the slow PCI bus. The second generation of the Xeon Phi processor, based on the Knights Landing (KNL) architecture, promises the computational capabilities of an accelerator but requires the same programming effort as traditional multi-core processors. The high computational performance is realized through many integrated cores (the number of cores, tiles and memory varies with the model) organized in tiles that are connected via a 2D mesh-based interconnect. In contrast to accelerators, KNL is a self-hosted system, meaning explicit data transfers over the PCI bus are no longer required. However, like most accelerators, KNL has a memory subsystem consisting of low-level caches and 16 GB of high-bandwidth MCDRAM memory. For capacity computing, up to 400 GB of conventional DDR4 memory is provided. Such a strict hierarchical memory layout means that data locality is imperative if the true potential of this product is to be harnessed. In this work, we study a series of optimizations specifically targeting KNL for our EWE-based application, to reduce the time-to-solution for the following 3D model sizes in grid points: 128³, 256³ and 512³. We compare the results with an optimized version for multi-core CPUs running on a dual-socket Xeon E5 2680v3 system using OpenMP. Our initial naive implementation on the KNL is roughly 20% faster than the multi-core version, but by using only one thread per core and careful memory placement using the memkind library, we could achieve higher speedups. Additionally, using the MCDRAM as cache for problem sizes smaller than 16 GB unlocked further performance improvements. Depending on the problem size, our overall results indicate that the KNL-based system is approximately 2.2x faster than the 24-core Xeon E5 2680v3 system, with only modest changes to the code.
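
    A quick way to reason about the MCDRAM-as-cache result above is to estimate whether each model size fits within the 16 GB of MCDRAM. The number of field arrays per grid point (12) and the use of 4-byte floats below are illustrative assumptions, not figures from the paper; the actual count depends on the elastic solver's formulation:

    ```python
    MCDRAM_BYTES = 16 * 1024**3  # 16 GB of high-bandwidth memory on KNL

    def footprint_bytes(n, fields=12, bytes_per_value=4):
        # n^3 grid points, `fields` arrays per point (velocities, stresses,
        # material parameters; assumed count), single-precision values.
        return n**3 * fields * bytes_per_value

    for n in (128, 256, 512):
        needed = footprint_bytes(n)
        gb = needed / 1024**3
        status = "fits in" if needed <= MCDRAM_BYTES else "exceeds"
        print(f"{n}^3 grid: {gb:6.2f} GB -> {status} the 16 GB MCDRAM")
    ```

    Under these assumptions even the 512³ grid needs about 6 GB, consistent with all three model sizes benefiting from MCDRAM caching; doubling the resolution once more would exceed the 16 GB budget and spill into DDR4.
    
    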

Top