Don't make cache too complex: A simple probability-based cache management scheme for SSDs.
Baek, Seungjae; Cho, Sangyeun; Choi, Jongmoo
2017-01-01
Solid-state drives (SSDs) have recently become a common storage component in computer systems, and they are fueled by continued bit cost reductions achieved with smaller feature sizes and multiple-level cell technologies. However, as the flash memory stores more bits per cell, the performance and reliability of the flash memory degrade substantially. To solve this problem, a fast non-volatile memory (NVM)-based cache has been employed within SSDs to reduce the long latency required to write data. Absorbing small writes in a fast NVM cache can also reduce the number of flash memory erase operations. To maximize the benefits of an NVM cache, it is important to increase the NVM cache utilization. In this paper, we propose and study ProCache, a simple NVM cache management scheme that makes cache-entrance decisions based on random probability testing. Our scheme is motivated by the observation that frequently written hot data will eventually enter the cache with a high probability, and that infrequently accessed cold data will not enter the cache easily. Owing to its simplicity, ProCache is easy to implement at a substantially smaller cost than similar previously studied techniques. We evaluate ProCache and conclude that it achieves performance comparable to a more complex reference counter-based cache-management scheme. PMID:28358897
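The cache-entrance rule described above is simple enough to sketch directly. Below is a minimal illustration of probability-based cache admission, assuming a fixed admission probability and LRU eviction inside the NVM cache; the class and parameter names are hypothetical and not taken from the paper.

    import random
    from collections import OrderedDict

    class ProbabilisticNVMCache:
        """Admit a written block into the NVM cache only if a random test
        passes, so frequently written (hot) blocks are admitted with high
        cumulative probability while cold blocks tend to bypass the cache."""

        def __init__(self, capacity, admit_prob=0.1):
            self.capacity = capacity        # NVM cache size in blocks
            self.admit_prob = admit_prob    # per-write admission probability
            self.blocks = OrderedDict()     # LRU order: oldest entry first

        def write(self, block_id, data):
            if block_id in self.blocks:     # already cached: update in place
                self.blocks.move_to_end(block_id)
                self.blocks[block_id] = data
                return "nvm-hit"
            if random.random() < self.admit_prob:    # probabilistic entrance test
                if len(self.blocks) >= self.capacity:
                    self.blocks.popitem(last=False)  # evict LRU block to flash
                self.blocks[block_id] = data
                return "nvm-admitted"
            return "flash-direct"           # cold write goes straight to flash

Under this rule a block written k times has probability 1 - (1 - p)^k of having entered the cache, so hot data almost surely ends up cached while rarely written data stays out.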
Resource Management Scheme Based on Ubiquitous Data Analysis
Lee, Heung Ki; Jung, Jaehee
2014-01-01
Resource management of the main memory and process handler is critical to enhancing the system performance of a web server. Owing to the transaction delay time that affects incoming requests from web clients, web server systems utilize several web processes to anticipate future requests. This procedure is able to decrease the web generation time because there are enough processes to handle the incoming requests from web browsers. However, inefficient process management results in low service quality for the web server system. Proper pregenerated process mechanisms are required for dealing with the clients' requests. Unfortunately, it is difficult to predict how many requests a web server system is going to receive. If a web server system builds too many web processes, it wastes a considerable amount of memory space, and thus performance is reduced. We propose an adaptive web process manager scheme based on the analysis of web log mining. In the proposed scheme, the number of web processes is controlled through prediction of incoming requests, and accordingly, the web process management scheme consumes the least possible web transaction resources. In experiments, real web trace data were used to prove the improved performance of the proposed scheme. PMID:25197692
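The control loop sketched in this abstract, predicting incoming requests and sizing the process pool accordingly, can be illustrated with a toy policy. The moving-average forecast and per-worker capacity below are illustrative assumptions, not the paper's log-mining predictor.

    import math
    from collections import deque

    class AdaptiveProcessPool:
        """Choose how many pre-generated web processes to keep alive from a
        rolling forecast of incoming requests, so memory is not wasted on
        idle workers."""

        def __init__(self, per_worker=10, min_workers=2, max_workers=64):
            self.per_worker = per_worker      # requests one worker can absorb
            self.min_workers = min_workers
            self.max_workers = max_workers
            self.history = deque(maxlen=12)   # recent requests-per-interval counts

        def observe(self, request_count):
            self.history.append(request_count)

        def target_workers(self):
            if not self.history:
                return self.min_workers
            forecast = sum(self.history) / len(self.history)  # moving average
            needed = math.ceil(forecast / self.per_worker)
            return max(self.min_workers, min(self.max_workers, needed))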
Rizvi, Sanam Shahla; Chung, Tae-Sun
2010-01-01
Flash memory has become a widespread storage medium for modern wireless devices because of its effective characteristics like non-volatility, small size, light weight, fast access speed, shock resistance, high reliability and low power consumption. Sensor nodes are highly resource constrained in terms of limited processing speed, runtime memory, persistent storage, communication bandwidth and finite energy. Therefore, for wireless sensor networks supporting sense, store, merge and send schemes, an efficient and reliable file system is highly required with consideration of sensor node constraints. In this paper, we propose a novel log structured external NAND flash memory based file system, called Proceeding to Intelligent service oriented memorY Allocation for flash based data centric Sensor devices in wireless sensor networks (PIYAS). This is the extended version of our previously proposed PIYA [1]. The main goals of the PIYAS scheme are to achieve instant mounting and reduced SRAM usage by keeping the memory mapping information to a very small size, and to provide high query response throughput by allocating memory to sensor data according to network business rules. The scheme intelligently samples and stores the raw data and provides high in-network data availability by keeping the aggregate data for a longer period of time than any other scheme has done before. We propose effective garbage collection and wear-leveling schemes as well. The experimental results show that PIYAS is an optimized memory management scheme allowing high performance for wireless sensor networks.
Moradi, Saber; Qiao, Ning; Stefanini, Fabio; Indiveri, Giacomo
2018-02-01
Neuromorphic computing systems comprise networks of neurons that use asynchronous events for both computation and communication. This type of representation offers several advantages in terms of bandwidth and power consumption in neuromorphic electronic systems. However, managing the traffic of asynchronous events in large scale systems is a daunting task, both in terms of circuit complexity and memory requirements. Here, we present a novel routing methodology that employs both hierarchical and mesh routing strategies and combines heterogeneous memory structures for minimizing both memory requirements and latency, while maximizing programming flexibility to support a wide range of event-based neural network architectures, through parameter configuration. We validated the proposed scheme in a prototype multicore neuromorphic processor chip that employs hybrid analog/digital circuits for emulating synapse and neuron dynamics together with asynchronous digital circuits for managing the address-event traffic. We present a theoretical analysis of the proposed connectivity scheme, describe the methods and circuits used to implement such scheme, and characterize the prototype chip. Finally, we demonstrate the use of the neuromorphic processor with a convolutional neural network for the real-time classification of visual symbols being flashed to a dynamic vision sensor (DVS) at high speed.
VOP memory management in MPEG-4
NASA Astrophysics Data System (ADS)
Vaithianathan, Karthikeyan; Panchanathan, Sethuraman
2001-03-01
MPEG-4 is a multimedia standard that requires Video Object Planes (VOPs). Generation of VOPs for an arbitrary video sequence is still a challenging problem that largely remains unsolved. Nevertheless, if this problem is treated by imposing certain constraints, solutions for specific application domains can be found. MPEG-4 applications in mobile devices are one such domain, where the opposing goals of low power and high throughput must both be met. Efficient memory management plays a major role in reducing the power consumption. Specifically, efficient memory management for VOPs is difficult because the lifetimes of these objects vary and may overlap. Varying object lifetimes require dynamic memory management, where memory fragmentation is a key problem that needs to be addressed. In general, memory management systems address this problem by following a combination of strategy, policy and mechanism. For MPEG-4-based mobile devices that lack instruction processors, a hardware-based memory management solution is necessary. In MPEG-4-based mobile devices that have a RISC processor, using a real-time operating system (RTOS) for this memory management task is not expected to be efficient, because the strategies and policies used by the RTOS are often tuned for handling memory segments of smaller sizes compared to object sizes. Hence, a memory management scheme specifically tuned for VOPs is important. In this paper, different strategies, policies and mechanisms for memory management are considered, and an efficient combination is proposed for VOP memory management, along with a hardware architecture that can handle the proposed combination.
Havens: Explicit Reliable Memory Regions for HPC Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hukerikar, Saurabh; Engelmann, Christian
2016-01-01
Supporting error resilience in future exascale-class supercomputing systems is a critical challenge. Due to transistor scaling trends and increasing memory density, scientific simulations are expected to experience more interruptions caused by transient errors in the system memory. Existing hardware-based detection and recovery techniques will be inadequate to manage the presence of high memory fault rates. In this paper we propose a partial memory protection scheme based on region-based memory management. We define the concept of regions called havens that provide fault protection for program objects. We provide reliability for the regions through a software-based parity protection mechanism. Our approach enables critical program objects to be placed in these havens. The fault coverage provided by our approach is application agnostic, unlike algorithm-based fault tolerance techniques.
A Memory Efficient Network Encryption Scheme
NASA Astrophysics Data System (ADS)
El-Fotouh, Mohamed Abo; Diepold, Klaus
In this paper, we study the two encryption schemes widely used in network applications. Shortcomings have been found in both, as these schemes either consume more memory to gain high throughput or use little memory at the cost of low throughput. As the number of internet users increases each day, the need has arisen for a scheme that has low memory requirements and at the same time possesses high speed. We used the SSM model [1] to construct an encryption scheme based on the AES. The proposed scheme possesses high throughput together with low memory requirements.
Genetic algorithms with memory- and elitism-based immigrants in dynamic environments.
Yang, Shengxiang
2008-01-01
In recent years the genetic algorithm community has shown a growing interest in studying dynamic optimization problems. Several approaches have been devised. The random immigrants and memory schemes are two major ones. The random immigrants scheme addresses dynamic environments by maintaining the population diversity while the memory scheme aims to adapt genetic algorithms quickly to new environments by reusing historical information. This paper investigates a hybrid memory and random immigrants scheme, called memory-based immigrants, and a hybrid elitism and random immigrants scheme, called elitism-based immigrants, for genetic algorithms in dynamic environments. In these schemes, the best individual from memory or the elite from the previous generation is retrieved as the base to create immigrants into the population by mutation. This way, not only can diversity be maintained but it is done more efficiently to adapt genetic algorithms to the current environment. Based on a series of systematically constructed dynamic problems, experiments are carried out to compare genetic algorithms with the memory-based and elitism-based immigrants schemes against genetic algorithms with traditional memory and random immigrants schemes and a hybrid memory and multi-population scheme. The sensitivity analysis regarding some key parameters is also carried out. Experimental results show that the memory-based and elitism-based immigrants schemes efficiently improve the performance of genetic algorithms in dynamic environments.
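The immigrants mechanism described above maps directly to a few lines of code. The sketch below shows the elitism-based variant for a bit-string genetic algorithm: mutated copies of the current elite replace the worst individuals each generation. The immigrant ratio and mutation rate are illustrative values, not the paper's exact parameters.

    import random

    def mutate(genome, p_mut):
        """Bit-flip mutation used to derive immigrants from a base individual."""
        return [bit ^ 1 if random.random() < p_mut else bit for bit in genome]

    def elitism_based_immigrants(population, fitness, ratio=0.2, p_mut=0.1):
        """Replace the worst individuals with mutated copies of the elite,
        maintaining diversity in the neighborhood of the current optimum."""
        ranked = sorted(population, key=fitness, reverse=True)
        elite = ranked[0]
        n_imm = max(1, int(ratio * len(population)))
        immigrants = [mutate(elite, p_mut) for _ in range(n_imm)]
        return ranked[:len(population) - n_imm] + immigrants

The memory-based variant is identical in structure except that the base individual is the best point retrieved from the memory rather than the previous generation's elite.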
Compiler-directed cache management in multiprocessors
NASA Technical Reports Server (NTRS)
Cheong, Hoichi; Veidenbaum, Alexander V.
1990-01-01
The necessity of finding alternatives to hardware-based cache coherence strategies for large-scale multiprocessor systems is discussed. Three different software-based strategies sharing the same goals and general approach are presented. They consist of a simple invalidation approach, a fast selective invalidation scheme, and a version control scheme. The strategies are suitable for shared-memory multiprocessor systems with interconnection networks and a large number of processors. Results of trace-driven simulations conducted on numerical benchmark routines to compare the performance of the three schemes are presented.
NASA Astrophysics Data System (ADS)
Miyaji, Kousuke; Sun, Chao; Soga, Ayumi; Takeuchi, Ken
2014-01-01
A relational database management system (RDBMS) is designed based on a NAND flash solid-state drive (SSD) for storage. By vertically integrating the storage engine (SE) and the flash translation layer (FTL), system performance is maximized and the internal SSD overhead is minimized. The proposed RDBMS SE utilizes physical information about the NAND flash memory which is supplied by the FTL. The query operation is also optimized for the SSD. By these treatments, page-copy-less garbage collection is achieved and data fragmentation in the NAND flash memory is suppressed. As a result, RDBMS performance increases by 3.8 times, the power consumption of the SSD decreases by 46% and the SSD lifetime increases by 61%. The effectiveness of the proposed scheme increases with larger erase block sizes, which matches the future scaling trend of three-dimensional (3D) NAND flash memories. The preferable row data size for the proposed scheme is below 500 bytes for a 16 kbyte page size.
Qin, Zhongyuan; Zhang, Xinshuai; Feng, Kerong; Zhang, Qunfang; Huang, Jie
2014-01-01
With the rapid development and widespread adoption of wireless sensor networks (WSNs), security has become an increasingly prominent problem. How to establish a session key in node communication is a challenging task for WSNs. Considering the limitations in WSNs, such as low computing capacity, small memory, power supply limitations and price, we propose an efficient identity-based key management (IBKM) scheme, which exploits the Bloom filter to authenticate the communication sensor node with storage efficiency. The security analysis shows that IBKM can prevent several attacks effectively with acceptable computation and communication overhead. PMID:25264955
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pin, F.G.; Bender, S.R.
Most fuzzy logic-based reasoning schemes developed for robot control are fully reactive, i.e., the reasoning modules consist of fuzzy rule bases that represent direct mappings from the stimuli provided by the perception systems to the responses implemented by the motion controllers. Due to their totally reactive nature, such reasoning systems can encounter problems such as infinite loops and limit cycles. In this paper, we propose an approach to remedy these problems by adding a memory and memory-related behaviors to basic reactive systems. Three major types of memory behaviors are addressed: memory creation, memory management, and memory utilization. These are first presented, and examples of their implementation for the recognition of limit cycles during the navigation of an autonomous robot in a priori unknown environments are then discussed.
[CMACPAR: a modified parallel neuro-controller for control processes].
Ramos, E; Surós, R
1999-01-01
CMACPAR is a parallel neurocontroller oriented to real-time systems such as control processes. Its main characteristics are a fast learning algorithm, a reduced number of calculations, great generalization capacity, local learning and intrinsic parallelism. This type of neurocontroller is used in real-time applications required by refineries, hydroelectric plants, factories, etc. In this work we present the analysis and the parallel implementation of a modified scheme of the Cerebellar Model CMAC for n-dimensional space projection using a medium-granularity parallel neurocontroller. The proposed memory management allows for a significant reduction in training time and required memory size.
Architecture of security management unit for safe hosting of multiple agents
NASA Astrophysics Data System (ADS)
Gilmont, Tanguy; Legat, Jean-Didier; Quisquater, Jean-Jacques
1999-04-01
In such growing areas as remote applications in large public networks, electronic commerce, digital signatures, intellectual property and copyright protection, and even operating system extensibility, the hardware security level offered by existing processors is insufficient. They lack protection mechanisms that prevent the user from tampering with critical data owned by those applications. Some devices are exceptions, but have neither enough processing power nor enough memory to stand up to such applications (e.g., smart cards). This paper proposes an architecture for a secure processor, in which the classical memory management unit is extended into a new security management unit. It allows ciphered code execution and ciphered data processing. An internal permanent memory can store cipher keys and critical data for several client agents simultaneously. The ordinary supervisor privilege scheme is replaced by a privilege inheritance mechanism that is better suited to operating system extensibility. The result is a secure processor that has hardware support for extensible multitask operating systems, and can be used for both general applications and critical applications needing strong protection. The security management unit and the internal permanent memory can be added to an existing CPU core without loss of performance, and do not require it to be modified.
Setting a disordered password on a photonic memory
NASA Astrophysics Data System (ADS)
Su, Shih-Wei; Gou, Shih-Chuan; Chew, Lock Yue; Chang, Yu-Yen; Yu, Ite A.; Kalachev, Alexey; Liao, Wen-Te
2017-06-01
An all-optical method of setting a disordered password on different schemes of photonic memory is theoretically studied. While photons are regarded as ideal information carriers, it is imperative to implement such data protection on all-optical storage. However, we wish to address the intrinsic risk of data breaches in existing schemes of photonic memory. We theoretically demonstrate a protocol using spatially disordered laser fields to encrypt data stored on an optical memory, namely, encrypted photonic memory. To address broadband storage, we also investigate a scheme of disordered echo memory with a fidelity approaching unity. The proposed method increases the difficulty for an eavesdropper to retrieve the stored photon without the preset password, even when the randomized and stored photon state is nearly perfectly cloned. Our results pave the way to significantly reducing the exposure of memories, required for long-distance communication, to eavesdropping, and therefore restrict the optimal attack on communication protocols. The present scheme also increases the sensitivity of detecting any eavesdropper and so raises the security level of photonic information technology.
DARPA Status Report - November 1988
1988-11-01
Wavelength assignment algorithm considering the state of neighborhood links for OBS networks
NASA Astrophysics Data System (ADS)
Tanaka, Yu; Hirota, Yusuke; Tode, Hideki; Murakami, Koso
2005-10-01
Recently, optical WDM technology has been introduced into backbone networks. Meanwhile, Optical Burst Switching (OBS) has become a realistic candidate for the future optical switching scheme. OBS systems do not consider buffering in intermediate nodes. Thus, it is an important issue to avoid overlapping wavelength reservations between partially interfering paths. To solve this problem, a wavelength assignment scheme that uses priority management tables has been proposed. This method achieves a reduction of the burst blocking probability. However, the priority management tables require huge memory space. In this paper, we propose a wavelength assignment algorithm that reduces both the number of priority management tables and the burst blocking probability. To reduce the number of priority management tables, we allocate and manage them for each link. To reduce the burst blocking probability, our method announces information about changes of priorities to intermediate nodes. We evaluate its performance in terms of the burst blocking probability and the reduction rate of priority management tables.
Coherent storage of temporally multimode light using a spin-wave atomic frequency comb memory
NASA Astrophysics Data System (ADS)
Gündoǧan, M.; Mazzera, M.; Ledingham, P. M.; Cristiani, M.; de Riedmatten, H.
2013-04-01
We report on the coherent and multi-temporal mode storage of light using the full atomic frequency comb memory scheme. The scheme involves the transfer of optical atomic excitations in Pr3+:Y2SiO5 to spin waves in hyperfine levels using strong single-frequency transfer pulses. Using this scheme, a total of five temporal modes are stored and recalled on-demand from the memory. The coherence of the storage and retrieval is characterized using a time-bin interference measurement resulting in visibilities higher than 80%, independent of the storage time. This coherent and multimode spin-wave memory is promising as a quantum memory for light.
Fast Initialization of Bubble-Memory Systems
NASA Technical Reports Server (NTRS)
Looney, K. T.; Nichols, C. D.; Hayes, P. J.
1986-01-01
Improved scheme several orders of magnitude faster than normal initialization scheme. State-of-the-art commercial bubble-memory device used. Hardware interface designed connects controlling microprocessor to bubble-memory circuitry. System software written to exercise various functions of bubble-memory system, and comparison made between normal and fast techniques. Future implementations of approach utilize E2PROM (electrically-erasable programmable read-only memory) to provide greater system flexibility. Fast-initialization technique applicable to all bubble-memory devices.
Communication Optimizations for a Wireless Distributed Prognostic Framework
NASA Technical Reports Server (NTRS)
Saha, Sankalita; Saha, Bhaskar; Goebel, Kai
2009-01-01
Distributed architecture for prognostics is an essential step in prognostic research in order to enable feasible real-time system health management. Communication overhead is an important design problem for such systems. In this paper we focus on communication issues faced in the distributed implementation of an important class of algorithms for prognostics - particle filters. In spite of being computation and memory intensive, particle filters lend themselves well to distributed implementation except for one significant step - resampling. We propose a new resampling scheme called parameterized resampling that attempts to reduce communication between collaborating nodes in a distributed wireless sensor network. Analysis and comparison with relevant resampling schemes is also presented, and a battery health management system is used as a target application. The proposed scheme performs significantly better than existing schemes, reducing both the communication message length and the total number of messages exchanged while not compromising prediction accuracy and precision. Future work will explore the effects of the new resampling scheme on the overall computational performance of the whole system, as well as a full implementation of the scheme on Sun SPOT devices, and will investigate different network architectures for efficient communication.
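The abstract does not spell out the parameterized resampling scheme itself, so the sketch below shows the standard systematic resampling step that any particle filter must perform, the step whose communication cost a distributed implementation has to manage. This is textbook resampling under stated assumptions, not the proposed algorithm.

    import random

    def systematic_resample(particles, weights):
        """Systematic resampling: one random offset and N evenly spaced
        pointers into the cumulative weight distribution."""
        n = len(particles)
        total = sum(weights)
        cumulative, running = [], 0.0
        for w in weights:
            running += w
            cumulative.append(running)
        offset = random.random() / n
        resampled, j = [], 0
        for i in range(n):
            target = (offset + i / n) * total
            while cumulative[j] < target:
                j += 1
            resampled.append(particles[j])
        return resampled

In a wireless distributed setting each node holds a subset of the particles, so the cumulative-weight bookkeeping above is what forces nodes to exchange messages; reducing the length and number of those messages is what the paper's parameterized scheme targets.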
NASA Astrophysics Data System (ADS)
Cavaglieri, Daniele; Bewley, Thomas
2015-04-01
Implicit/explicit (IMEX) Runge-Kutta (RK) schemes are effective for time-marching ODE systems with both stiff and nonstiff terms on the RHS; such schemes implement an (often A-stable or better) implicit RK scheme for the stiff part of the ODE, which is often linear, and, simultaneously, a (more convenient) explicit RK scheme for the nonstiff part of the ODE, which is often nonlinear. Low-storage RK schemes are especially effective for time-marching high-dimensional ODE discretizations of PDE systems on modern (cache-based) computational hardware, in which memory management is often the most significant computational bottleneck. In this paper, we develop and characterize eight new low-storage implicit/explicit RK schemes which have higher accuracy and better stability properties than the only low-storage implicit/explicit RK scheme available previously, the venerable second-order Crank-Nicolson/Runge-Kutta-Wray (CN/RKW3) algorithm that has dominated the DNS/LES literature for the last 25 years, while requiring similar storage (two, three, or four registers of length N) and comparable floating-point operations per timestep.
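For context, the low-storage idea can be made concrete with the classic explicit two-register (2N) formulation that underlies schemes such as the RKW3 half of CN/RKW3. This is a standard sketch, not one of the paper's new IMEX schemes; the coefficients shown are the commonly quoted explicit RK3 set.

    import numpy as np

    def low_storage_rk3(f, u, dt, nsteps):
        """March du/dt = f(u) with a two-register low-storage RK3: only the
        solution u and one accumulator s are stored, whatever the size of u."""
        A = (0.0, -5.0 / 9.0, -153.0 / 128.0)   # Williamson-form coefficients
        B = (1.0 / 3.0, 15.0 / 16.0, 8.0 / 15.0)
        s = np.zeros_like(u)
        for _ in range(nsteps):
            for a, b in zip(A, B):
                s = a * s + dt * f(u)   # register 1: stage accumulator
                u = u + b * s           # register 2: the solution itself
        return u

    # e.g. exponential decay du/dt = -u over one time unit
    u_final = low_storage_rk3(lambda v: -v, np.array([1.0]), 0.01, 100)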
PCM-Based Durable Write Cache for Fast Disk I/O
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Zhuo; Wang, Bin; Carpenter, Patrick
2012-01-01
Flash based solid-state devices (FSSDs) have been adopted within the memory hierarchy to improve the performance of hard disk drive (HDD) based storage system. However, with the fast development of storage-class memories, new storage technologies with better performance and higher write endurance than FSSDs are emerging, e.g., phase-change memory (PCM). Understanding how to leverage these state-of-the-art storage technologies for modern computing systems is important to solve challenging data intensive computing problems. In this paper, we propose to leverage PCM for a hybrid PCM-HDD storage architecture. We identify the limitations of traditional LRU caching algorithms for PCM-based caches, and develop a novel hash-based write caching scheme called HALO to improve random write performance of hard disks. To address the limited durability of PCM devices and solve the degraded spatial locality in traditional wear-leveling techniques, we further propose novel PCM management algorithms that provide effective wear-leveling while maximizing access parallelism. We have evaluated this PCM-based hybrid storage architecture using applications with a diverse set of I/O access patterns. Our experimental results demonstrate that the HALO caching scheme leads to an average reduction of 36.8% in execution time compared to the LRU caching scheme, and that the SFC wear leveling extends the lifetime of PCM by a factor of 21.6.
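The abstract names HALO's goal, improving random-write behavior through hashing, without detailing the algorithm, so the following is one plausible reading rather than HALO itself: dirty blocks are bucketed in the PCM cache by hashed block address, and a bucket is destaged to the HDD in sorted, near-sequential order. All names and the bucketing rule are assumptions.

    class HashedWriteCache:
        """Bucket dirty blocks by their logical block address (LBA) so that
        a bucket can later be flushed in sorted order, turning scattered
        random writes into a near-sequential destage pattern."""

        def __init__(self, n_buckets=1024):
            self.buckets = [dict() for _ in range(n_buckets)]

        def write(self, lba, data):
            self.buckets[lba % len(self.buckets)][lba] = data  # held in PCM

        def flush(self, bucket_index, disk_write):
            bucket = self.buckets[bucket_index]
            for lba in sorted(bucket):       # sequentialize the HDD writes
                disk_write(lba, bucket[lba])
            bucket.clear()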
A cache-aided multiprocessor rollback recovery scheme
NASA Technical Reports Server (NTRS)
Wu, Kun-Lung; Fuchs, W. Kent
1989-01-01
This paper demonstrates how previous uniprocessor cache-aided recovery schemes can be applied to multiprocessor architectures, for recovering from transient processor failures, utilizing private caches and a global shared memory. As with cache-aided uniprocessor recovery, the multiprocessor cache-aided recovery scheme of this paper can be easily integrated into standard bus-based snoopy cache coherence protocols. A consistent shared memory state is maintained without the necessity of global check-pointing.
An adaptive replacement algorithm for paged-memory computer systems.
NASA Technical Reports Server (NTRS)
Thorington, J. M., Jr.; Irwin, J. D.
1972-01-01
A general class of adaptive replacement schemes for use in paged memories is developed. One such algorithm, called SIM, is simulated using a probability model that generates memory traces, and the results of the simulation of this adaptive scheme are compared with those obtained using the best nonlookahead algorithms. A technique for implementing this type of adaptive replacement algorithm with state of the art digital hardware is also presented.
NASA Astrophysics Data System (ADS)
Liu, Yan; Fan, Xi; Chen, Houpeng; Wang, Yueqing; Liu, Bo; Song, Zhitang; Feng, Songlin
2017-08-01
Multilevel data storage for phase-change memory (PCM) has attracted much attention in the memory market as a way to implement high-capacity memory systems and reduce cost-per-bit. In this work, we present a universal programming method using SET stair-case current pulses in PCM cells, which exploits the optimum programming scheme to achieve 2-bit/4-state resistance levels with equal logarithmic intervals. The SET stair-case waveform can be optimized by TCAD real-time simulation to realize multilevel data storage efficiently in an arbitrary phase change material. Experimental results from a 1 k-bit PCM test-chip have validated the proposed multilevel programming scheme. The scheme improves the information storage density, the robustness of the resistance levels and the energy efficiency, while avoiding process complexity.
Energy-efficient writing scheme for magnetic domain-wall motion memory
NASA Astrophysics Data System (ADS)
Kim, Kab-Jin; Yoshimura, Yoko; Ham, Woo Seung; Ernst, Rick; Hirata, Yuushou; Li, Tian; Kim, Sanghoon; Moriyama, Takahiro; Nakatani, Yoshinobu; Ono, Teruo
2017-04-01
We present an energy-efficient magnetic domain-writing scheme for domain wall (DW) motion-based memory devices. A cross-shaped nanowire is employed to inject a domain into the nanowire through current-induced DW propagation. The energy required for injecting the magnetic domain is more than one order of magnitude lower than that for the conventional field-based writing scheme. The proposed scheme is beneficial for device miniaturization because the threshold current for DW propagation scales with the device size, which cannot be achieved in the conventional field-based technique.
Realization of the revival of silenced echo (ROSE) quantum memory scheme in orthogonal geometry
NASA Astrophysics Data System (ADS)
Minnegaliev, M. M.; Gerasimov, K. I.; Urmancheev, R. V.; Moiseev, S. A.; Chanelière, T.; Louchet-Chauvet, A.
2018-02-01
We demonstrated a quantum memory scheme based on revival of silenced echo in orthogonal geometry in a Tm3+:Y3Al5O12 crystal. A retrieval efficiency of ˜14% was demonstrated with a 36 µs storage time. In this scheme, for the first time, we also implemented suppression of the revived echo signal by applying an external electric field; the echo signal was then recovered on demand by applying a second electric pulse with opposite polarity. This technique opens up possibilities for realizing addressing in multi-qubit quantum memory in Tm3+:Y3Al5O12 crystals.
Experimental evaluation of multiprocessor cache-based error recovery
NASA Technical Reports Server (NTRS)
Janssens, Bob; Fuchs, W. K.
1991-01-01
Several variations of cache-based checkpointing for rollback error recovery in shared-memory multiprocessors have been recently developed. By modifying the cache replacement policy, these techniques use the inherent redundancy in the memory hierarchy to periodically checkpoint the computation state. Three schemes, differing in the manner in which they avoid rollback propagation, are evaluated. By simulation with address traces from parallel applications running on an Encore Multimax shared-memory multiprocessor, the performance effect of integrating the recovery schemes in the cache coherence protocol is evaluated. The results indicate that the cache-based schemes can provide checkpointing capability with low performance overhead but uncontrollably high variability in the checkpoint interval.
Study on advanced information processing system
NASA Technical Reports Server (NTRS)
Shin, Kang G.; Liu, Jyh-Charn
1992-01-01
Issues related to the reliability of a redundant system with large main memory are addressed. In particular, the Fault-Tolerant Processor (FTP) for Advanced Launch System (ALS) is used as a basis for our presentation. When the system is free of latent faults, the probability of system crash due to nearly-coincident channel faults is shown to be insignificant even when the outputs of computing channels are infrequently voted on. In particular, using channel error maskers (CEMs) is shown to improve reliability more effectively than increasing the number of channels for applications with long mission times. Even without using a voter, most memory errors can be immediately corrected by CEMs implemented with conventional coding techniques. In addition to their ability to enhance system reliability, CEMs--with a low hardware overhead--can be used to reduce not only the need of memory realignment, but also the time required to realign channel memories in case, albeit rare, such a need arises. Using CEMs, we have developed two schemes, called Scheme 1 and Scheme 2, to solve the memory realignment problem. In both schemes, most errors are corrected by CEMs, and the remaining errors are masked by a voter.
Continuous-variable quantum computing in optical time-frequency modes using quantum memories.
Humphreys, Peter C; Kolthammer, W Steven; Nunn, Joshua; Barbieri, Marco; Datta, Animesh; Walmsley, Ian A
2014-09-26
We develop a scheme for time-frequency encoded continuous-variable cluster-state quantum computing using quantum memories. In particular, we propose a method to produce, manipulate, and measure two-dimensional cluster states in a single spatial mode by exploiting the intrinsic time-frequency selectivity of Raman quantum memories. Time-frequency encoding enables the scheme to be extremely compact, requiring a number of memories that is a linear function of only the number of different frequencies in which the computational state is encoded, independent of its temporal duration. We therefore show that quantum memories can be a powerful component for scalable photonic quantum information processing architectures.
An Efficient Means of Adaptive Refinement Within Systems of Overset Grids
NASA Technical Reports Server (NTRS)
Meakin, Robert L.
1996-01-01
An efficient means of adaptive refinement within systems of overset grids is presented. Problem domains are segregated into near-body and off-body fields. Near-body fields are discretized via overlapping body-fitted grids that extend only a short distance from body surfaces. Off-body fields are discretized via systems of overlapping uniform Cartesian grids of varying levels of refinement. A novel off-body grid generation and management scheme provides the mechanism for carrying out adaptive refinement of off-body flow dynamics and solid body motion. The scheme allows for very efficient use of memory resources, and flow solvers and domain connectivity routines that can exploit the structure inherent to uniform Cartesian grids.
A controlled ac Stark echo for quantum memories.
Ham, Byoung S
2017-08-09
A quantum memory protocol of controlled ac Stark echoes (CASE), based on a double rephasing photon echo scheme via controlled Rabi flopping, is proposed. The double rephasing scheme of photon echoes inherently satisfies the no-population-inversion requirement for quantum memories, but the resultant absorptive echo remains a fundamental problem. Herein, it is reported that the first echo in the double rephasing scheme can be dynamically controlled, using unbalanced ac Stark shifts, so that it does not affect the second echo. Then, the second echo is coherently controlled to be emissive via controlled coherence conversion. Finally, a near-perfect ultralong CASE is presented using a backward echo scheme. Compared with other methods such as dc Stark echoes, the present protocol is all-optical, with the advantages of wavelength-selective dynamic control of quantum processing for erasing, buffering, and channel multiplexing.
NASA Astrophysics Data System (ADS)
Kim, Do-Bin; Kwon, Dae Woong; Kim, Seunghyun; Lee, Sang-Ho; Park, Byung-Gook
2018-02-01
To obtain high channel boosting potential and reduce program disturbance in channel-stacked NAND flash memory with layer selection by multilevel (LSM) operation, a new program scheme using a boosted common source line (CSL) is proposed. The proposed scheme can be achieved by applying a proper bias to each layer through its own CSL. Technology computer-aided design (TCAD) simulations are performed to verify the validity of the new method in LSM. Through TCAD simulation, it is revealed that the program disturbance characteristics are effectively improved by the proposed scheme.
NASA Technical Reports Server (NTRS)
1981-01-01
Presentations of a conference on the use of ruggedized minicomputers are summarized. The following topics are discussed: (1) the role of minicomputers in the development and/or certification of commercial or military airplanes in both the United States and Europe; (2) generalized software error detection techniques; (3) real time software development tools; (4) a redundancy management research tool for aircraft navigation/flight control sensors; (5) extended memory management techniques using a high order language; and (6) some comments on establishing a system maintenance scheme. Copies of presentation slides are also included.
Memory-assisted quantum key distribution resilient against multiple-excitation effects
NASA Astrophysics Data System (ADS)
Lo Piparo, Nicolò; Sinclair, Neil; Razavi, Mohsen
2018-01-01
Memory-assisted measurement-device-independent quantum key distribution (MA-MDI-QKD) has recently been proposed as a technique to improve the rate-versus-distance behavior of QKD systems by using existing, or nearly-achievable, quantum technologies. The promise is that MA-MDI-QKD would require less demanding quantum memories than the ones needed for probabilistic quantum repeaters. Nevertheless, early investigations suggest that, in order to beat the conventional memory-less QKD schemes, the quantum memories used in the MA-MDI-QKD protocols must have high bandwidth-storage products and short interaction times. Among different types of quantum memories, ensemble-based memories offer some of the required specifications, but they typically suffer from multiple excitation effects. To avoid the latter issue, in this paper, we propose two new variants of MA-MDI-QKD both relying on single-photon sources for entangling purposes. One is based on known techniques for entanglement distribution in quantum repeaters. This scheme turns out to offer no advantage even if one uses ideal single-photon sources. By finding the root cause of the problem, we then propose another setup, which can outperform single memory-less setups even if we allow for some imperfections in our single-photon sources. For such a scheme, we compare the key rate for different types of ensemble-based memories and show that certain classes of atomic ensembles can improve the rate-versus-distance behavior.
Design of a Variational Multiscale Method for Turbulent Compressible Flows
NASA Technical Reports Server (NTRS)
Diosady, Laslo Tibor; Murman, Scott M.
2013-01-01
A spectral-element framework is presented for the simulation of subsonic compressible high-Reynolds-number flows. The focus of the work is maximizing the efficiency of the computational schemes to enable unsteady simulations with a large number of spatial and temporal degrees of freedom. A collocation scheme is combined with optimized computational kernels to provide a residual evaluation with computational cost independent of order of accuracy up to 16th order. The optimized residual routines are used to develop a low-memory implicit scheme based on a matrix-free Newton-Krylov method. A preconditioner based on the finite-difference diagonalized ADI scheme is developed which maintains the low memory of the matrix-free implicit solver, while providing improved convergence properties. Emphasis on low memory usage throughout the solver development is leveraged to implement a coupled space-time DG solver which may offer further efficiency gains through adaptivity in both space and time.
Broadband multiresonator quantum memory-interface.
Moiseev, S A; Gerasimov, K I; Latypov, R R; Perminov, N S; Petrovnin, K V; Sherstyukov, O N
2018-03-05
In this paper we experimentally demonstrated a broadband scheme of the multiresonator quantum memory-interface. The microwave photonic scheme consists of a system of mini-resonators strongly interacting with a common broadband resonator coupled with the external waveguide. We have implemented impedance-matched quantum storage in this scheme via controllable tuning of the mini-resonator frequencies and of the coupling of the common resonator with the external waveguide. A proof-of-principle experiment has been demonstrated for broadband microwave pulses, where a quantum efficiency of 16.3% was achieved at room temperature. By using the obtained experimental spectroscopic data, the dynamics of the signal retrieval has been simulated and promising results were found for high-Q mini-resonators in microwave and optical frequency ranges. The results pave the way for the experimental implementation of a broadband quantum memory-interface with quite high efficiency η > 0.99 on the basis of modern technologies, including optical quantum memory at room temperature.
Highly Efficient Coherent Optical Memory Based on Electromagnetically Induced Transparency
NASA Astrophysics Data System (ADS)
Hsiao, Ya-Fen; Tsai, Pin-Ju; Chen, Hung-Shiue; Lin, Sheng-Xiang; Hung, Chih-Chiao; Lee, Chih-Hsi; Chen, Yi-Hsin; Chen, Yong-Fan; Yu, Ite A.; Chen, Ying-Cheng
2018-05-01
Quantum memory is an important component in the long-distance quantum communication based on the quantum repeater protocol. To outperform the direct transmission of photons with quantum repeaters, it is crucial to develop quantum memories with high fidelity, high efficiency and a long storage time. Here, we achieve a storage efficiency of 92.0 (1.5)% for a coherent optical memory based on the electromagnetically induced transparency scheme in optically dense cold atomic media. We also obtain a useful time-bandwidth product of 1200, considering only storage where the retrieval efficiency remains above 50%. Both are the best record to date in all kinds of schemes for the realization of optical memory. Our work significantly advances the pursuit of a high-performance optical memory and should have important applications in quantum information science.
Performance of hashed cache data migration schemes on multicomputers
NASA Technical Reports Server (NTRS)
Hiranandani, Seema; Saltz, Joel; Mehrotra, Piyush; Berryman, Harry
1991-01-01
After conducting an examination of several data-migration mechanisms which permit an explicit and controlled mapping of data to memory, a set of schemes for storage and retrieval of off-processor array elements is experimentally evaluated and modeled. All schemes considered have their basis in the use of hash tables for efficient access of nonlocal data. The techniques in question are those of hashed cache, partial enumeration, and full enumeration; in these, nonlocal data are stored in hash tables, so that the operative difference lies in the amount of memory used by each scheme and in the retrieval mechanism used for nonlocal data.
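The idea shared by all three schemes, hash-table access to off-processor elements, can be sketched as follows for a one-dimensional block-distributed array; class and method names are illustrative.

    class NonlocalElementCache:
        """Serve local elements from the local partition and off-processor
        elements from a hash table filled during a communication phase."""

        def __init__(self, local_array, local_start):
            self.local = local_array
            self.start = local_start           # first global index owned locally
            self.table = {}                    # nonlocal global index -> value

        def store_remote(self, global_index, value):
            self.table[global_index] = value   # filled by a gather step

        def get(self, global_index):
            offset = global_index - self.start
            if 0 <= offset < len(self.local):
                return self.local[offset]      # local access, no table lookup
            return self.table[global_index]    # nonlocal: hashed-cache lookup

As the abstract notes, the hashed-cache and enumeration variants differ mainly in how much of this table is materialized in advance, trading memory for retrieval speed.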
NASA Astrophysics Data System (ADS)
Gaudreau, Louis; Bogan, Alex; Korkusinski, Marek; Studenikin, Sergei; Austing, D. Guy; Sachrajda, Andrew S.
2017-09-01
Long distance entanglement distribution is an important problem for quantum information technologies to solve. Current optical schemes are known to have fundamental limitations. A coherent photon-to-spin interface built with quantum dots (QDs) in a direct bandgap semiconductor can provide a solution for efficient entanglement distribution. QD circuits offer integrated spin processing for full Bell state measurement (BSM) analysis and spin quantum memory. Crucially the photo-generated spins can be heralded by non-destructive charge detection techniques. We review current schemes to transfer a polarization-encoded state or a time-bin-encoded state of a photon to the state of a spin in a QD. The spin may be that of an electron or that of a hole. We describe adaptations of the original schemes to employ heavy holes which have a number of attractive properties including a g-factor that is tunable to zero for QDs in an appropriately oriented external magnetic field. We also introduce simple throughput scaling models to demonstrate the potential performance advantage of full BSM capability in a QD scheme, even when the quantum memory is imperfect, over optical schemes relying on linear optical elements and ensemble quantum memories.
Role of memory errors in quantum repeaters
NASA Astrophysics Data System (ADS)
Hartmann, L.; Kraus, B.; Briegel, H.-J.; Dür, W.
2007-03-01
We investigate the influence of memory errors in the quantum repeater scheme for long-range quantum communication. We show that the communication distance is limited in standard operation mode due to memory errors resulting from unavoidable waiting times for classical signals. We show how to overcome these limitations by (i) improving local memory and (ii) introducing two operational modes of the quantum repeater. In both operational modes, the repeater is run blindly, i.e., without waiting for classical signals to arrive. In the first scheme, entanglement purification protocols based on one-way classical communication are used, allowing communication over arbitrary distances. However, the error thresholds for noise in local control operations are very stringent. The second scheme makes use of entanglement purification protocols with two-way classical communication and inherits the favorable error thresholds of the repeater run in standard mode. One can increase the possible communication distance by an order of magnitude with reasonable overhead in physical resources. We outline the architecture of a quantum repeater that can possibly ensure intercontinental quantum communication.
Fast Pixel Buffer For Processing With Lookup Tables
NASA Technical Reports Server (NTRS)
Fisher, Timothy E.
1992-01-01
Proposed scheme for buffering data on intensities of picture elements (pixels) of image increases rate of processing beyond that attainable when data read, one pixel at a time, from main image memory. Scheme applied in design of specialized image-processing circuitry. Intended to optimize performance of processor in which electronic equivalent of address-lookup table used to address those pixels in main image memory required for processing.
SKIRT: Hybrid parallelization of radiative transfer simulations
NASA Astrophysics Data System (ADS)
Verstocken, S.; Van De Putte, D.; Camps, P.; Baes, M.
2017-07-01
We describe the design, implementation and performance of the new hybrid parallelization scheme in our Monte Carlo radiative transfer code SKIRT, which has been used extensively for modelling the continuum radiation of dusty astrophysical systems including late-type galaxies and dusty tori. The hybrid scheme combines distributed memory parallelization, using the standard Message Passing Interface (MPI) to communicate between processes, and shared memory parallelization, providing multiple execution threads within each process to avoid duplication of data structures. The synchronization between multiple threads is accomplished through atomic operations without high-level locking (also called lock-free programming). This improves the scaling behaviour of the code and substantially simplifies the implementation of the hybrid scheme. The result is an extremely flexible solution that adjusts to the number of available nodes, processors and memory, and consequently performs well on a wide variety of computing architectures.
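The hybrid structure described above (message passing between processes, threads sharing a single copy of the data within each process) has the following shape. This is a minimal sketch assuming mpi4py and a thread pool, not SKIRT's actual C++ implementation; in particular, the real code synchronizes threads with atomic operations where this sketch only leaves a comment.

    # Run under MPI, e.g.: mpirun -n 4 python hybrid_sketch.py
    import numpy as np
    from mpi4py import MPI
    from concurrent.futures import ThreadPoolExecutor

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    grid = np.zeros(1000)                 # one copy per process, shared by threads
    packets = range(rank, 100_000, size)  # work distributed across processes

    def trace(packet_id):
        cell = packet_id % grid.size
        grid[cell] += 1.0                 # not thread-safe: the real code uses
                                          # lock-free atomic updates here

    with ThreadPoolExecutor(max_workers=4) as pool:  # threads share `grid`
        list(pool.map(trace, packets))

    totals = np.zeros_like(grid)
    comm.Reduce(grid, totals, op=MPI.SUM, root=0)    # combine process copies
    if rank == 0:
        print(totals.sum())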
Molecular implementation of molecular shift register memories
NASA Technical Reports Server (NTRS)
Beratan, David N. (Inventor); Onuchic, Jose N. (Inventor)
1991-01-01
An electronic shift register memory (20) at the molecular level is described. The memory elements are based on a chain of electron transfer molecules (22) and the information is shifted by photoinduced (26) electron transfer reactions. Thus, multi-step sequences of charge transfer reactions are used to move charge with high efficiency down a molecular chain. The device integrates compositions of the invention onto a VLSI substrate (36), providing an example of a molecular electronic device which may be fabricated. Three energy level schemes, molecular implementation of these schemes, optical excitation strategies, charge amplification strategies, and error correction strategies are described.
Out-of-Core Streamline Visualization on Large Unstructured Meshes
NASA Technical Reports Server (NTRS)
Ueng, Shyh-Kuang; Sikorski, K.; Ma, Kwan-Liu
1997-01-01
It's advantageous for computational scientists to have the capability to perform interactive visualization on their desktop workstations. For data on large unstructured meshes, this capability is not generally available. In particular, particle tracing on unstructured grids can result in a high percentage of non-contiguous memory accesses and therefore may perform very poorly with virtual memory paging schemes. The alternative of visualizing a lower resolution of the data degrades the original high-resolution calculations. This paper presents an out-of-core approach for interactive streamline construction on large unstructured tetrahedral meshes containing millions of elements. The out-of-core algorithm uses an octree to partition and restructure the raw data into subsets stored into disk files for fast data retrieval. A memory management policy tailored to the streamline calculations is used such that during the streamline construction only a very small amount of data are brought into the main memory on demand. By carefully scheduling computation and data fetching, the overhead of reading data from the disk is significantly reduced and good memory performance results. This out-of-core algorithm makes possible interactive streamline visualization of large unstructured-grid data sets on a single mid-range workstation with relatively low main-memory capacity: 5-20 megabytes. Our test results also show that this approach is much more efficient than relying on virtual memory and operating system's paging algorithms.
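The memory management policy described above, fetching octree-partitioned blocks on demand and keeping only a few resident, can be sketched as a small LRU set; the loader callback and resident-set size are illustrative assumptions, not the paper's exact policy.

    from collections import OrderedDict

    class OutOfCoreBlocks:
        """Fetch mesh blocks from disk only when a streamline enters them,
        keeping a small LRU-managed resident set in main memory."""

        def __init__(self, load_block, max_resident=8):
            self.load_block = load_block    # block_id -> block data (disk read)
            self.resident = OrderedDict()   # LRU order of in-memory blocks
            self.max_resident = max_resident

        def get(self, block_id):
            if block_id in self.resident:
                self.resident.move_to_end(block_id)
                return self.resident[block_id]
            data = self.load_block(block_id)        # disk access only on a miss
            self.resident[block_id] = data
            if len(self.resident) > self.max_resident:
                self.resident.popitem(last=False)   # evict least recently used
            return data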
A digital memories based user authentication scheme with privacy preservation
Liu, JunLiang; Lyu, Qiuyun; Wang, Qiuhua; Yu, Xiangxiang
2017-01-01
The traditional username/password or PIN based authentication scheme, which still remains the most popular form of authentication, has been proved insecure, unmemorable and vulnerable to guessing, dictionary attack, key-logger, shoulder-surfing and social engineering. Based on this, a large number of new alternative methods have recently been proposed. However, most of them rely on users being able to accurately recall complex and unmemorable information or using extra hardware (such as a USB Key), which makes authentication more difficult and confusing. In this paper, we propose a Digital Memories based user authentication scheme adopting homomorphic encryption and a public key encryption design which can protect users’ privacy effectively, prevent tracking and provide multi-level security in an Internet & IoT environment. Also, we prove the superior reliability and security of our scheme compared to other schemes and present a performance analysis and promising evaluation results. PMID:29190659
Realization of Quantum Digital Signatures without the Requirement of Quantum Memory
NASA Astrophysics Data System (ADS)
Collins, Robert J.; Donaldson, Ross J.; Dunjko, Vedran; Wallden, Petros; Clarke, Patrick J.; Andersson, Erika; Jeffers, John; Buller, Gerald S.
2014-07-01
Digital signatures are widely used to provide security for electronic communications, for example, in financial transactions and electronic mail. Currently used classical digital signature schemes, however, only offer security relying on unproven computational assumptions. In contrast, quantum digital signatures offer information-theoretic security based on laws of quantum mechanics. Here, security against forging relies on the impossibility of perfectly distinguishing between nonorthogonal quantum states. A serious drawback of previous quantum digital signature schemes is that they require long-term quantum memory, making them impractical at present. We present the first realization of a scheme that does not need quantum memory and which also uses only standard linear optical components and photodetectors. In our realization, the recipients measure the distributed quantum signature states using a new type of quantum measurement, quantum state elimination. This significantly advances quantum digital signatures as a quantum technology with potential for real applications.
Room Temperature Memory for Few Photon Polarization Qubits
NASA Astrophysics Data System (ADS)
Kupchak, Connor; Mittiga, Thomas; Jordaan, Bertus; Namazi, Mehdi; Nolleke, Christian; Figueroa, Eden
2014-05-01
We have developed a room temperature quantum memory device based on Electromagnetically Induced Transparency capable of reliably storing and retrieving polarization qubits on the few photon level. Our system is realized in a vapor of 87Rb atoms utilizing a Λ-type energy level scheme. We create a dual-rail storage scheme mediated by an intense control field to allow storage and retrieval of any arbitrary polarization state. Upon retrieval, we employ a filtering system to sufficiently remove the strong pump field, and subject retrieved light states to polarization tomography. To date, our system has produced signal-to-noise ratios near unity with a memory fidelity of >80 % using coherent state qubits containing four photons on average. Our results thus demonstrate the feasibility of room temperature systems for the storage of single-photon-level photonic qubits. Such room temperature systems will be attractive for future long distance quantum communication schemes.
Efficient entanglement distillation without quantum memory.
Abdelkhalek, Daniela; Syllwasschy, Mareike; Cerf, Nicolas J; Fiurášek, Jaromír; Schnabel, Roman
2016-05-31
Entanglement distribution between distant parties is an essential component to most quantum communication protocols. Unfortunately, decoherence effects such as phase noise in optical fibres are known to demolish entanglement. Iterative (multistep) entanglement distillation protocols have long been proposed to overcome decoherence, but their probabilistic nature makes them inefficient since the success probability decays exponentially with the number of steps. Quantum memories have been contemplated to make entanglement distillation practical, but suitable quantum memories are not realised to date. Here, we present the theory for an efficient iterative entanglement distillation protocol without quantum memories and provide a proof-of-principle experimental demonstration. The scheme is applied to phase-diffused two-mode-squeezed states and proven to distil entanglement for up to three iteration steps. The data are indistinguishable from those that an efficient scheme using quantum memories would produce. Since our protocol includes the final measurement it is particularly promising for enhancing continuous-variable quantum key distribution.
Efficient entanglement distillation without quantum memory
Abdelkhalek, Daniela; Syllwasschy, Mareike; Cerf, Nicolas J.; Fiurášek, Jaromír; Schnabel, Roman
2016-01-01
Entanglement distribution between distant parties is an essential component to most quantum communication protocols. Unfortunately, decoherence effects such as phase noise in optical fibres are known to demolish entanglement. Iterative (multistep) entanglement distillation protocols have long been proposed to overcome decoherence, but their probabilistic nature makes them inefficient since the success probability decays exponentially with the number of steps. Quantum memories have been contemplated to make entanglement distillation practical, but suitable quantum memories are not realised to date. Here, we present the theory for an efficient iterative entanglement distillation protocol without quantum memories and provide a proof-of-principle experimental demonstration. The scheme is applied to phase-diffused two-mode-squeezed states and proven to distil entanglement for up to three iteration steps. The data are indistinguishable from those that an efficient scheme using quantum memories would produce. Since our protocol includes the final measurement it is particularly promising for enhancing continuous-variable quantum key distribution. PMID:27241946
Precision spectral manipulation of optical pulses using a coherent photon echo memory.
Buchler, B C; Hosseini, M; Hétet, G; Sparkes, B M; Lam, P K
2010-04-01
Photon echo schemes are excellent candidates for high efficiency coherent optical memory. They are capable of high-bandwidth multipulse storage, pulse resequencing and have been shown theoretically to be compatible with quantum information applications. One particular photon echo scheme is the gradient echo memory (GEM). In this system, an atomic frequency gradient is induced in the direction of light propagation leading to a Fourier decomposition of the optical spectrum along the length of the storage medium. This Fourier encoding allows precision spectral manipulation of the stored light. In this Letter, we show frequency shifting, spectral compression, spectral splitting, and fine dispersion control of optical pulses using GEM.
NASA Technical Reports Server (NTRS)
Tuccillo, J. J.
1984-01-01
Numerical Weather Prediction (NWP), for both operational and research purposes, requires not only fast computational speed but also large memory. A technique for solving the Primitive Equations for atmospheric motion on the CYBER 205, as implemented in the Mesoscale Atmospheric Simulation System, is discussed; it is fully vectorized and requires substantially less memory than other techniques such as the leapfrog or Adams-Bashforth schemes. The technique presented uses the Euler-backward time marching scheme. Also discussed are several techniques for reducing the computational time of the model by replacing slow intrinsic routines with faster algorithms that use only hardware vector instructions.
Operating systems [of computers]
NASA Technical Reports Server (NTRS)
Denning, P. J.; Brown, R. L.
1984-01-01
A computer operating system creates a hierarchy of levels of abstraction, so that at a given level all details concerning lower levels can be ignored. This hierarchical structure separates functions according to their complexity, characteristic time scale, and level of abstraction. The lowest levels include the system's hardware; concepts associated explicitly with the coordination of multiple tasks appear at intermediate levels, which conduct 'primitive processes'. The software semaphore is the mechanism controlling primitive processes that must be synchronized. At higher levels lie, in rising order, access to the secondary storage devices of a particular machine, a 'virtual memory' scheme for managing the main and secondary memories, communication between processes by way of a mechanism called a 'pipe', access to external input and output devices, and a hierarchy of directories cataloguing the hardware and software objects to which access must be controlled.
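As a concrete illustration of the semaphore mechanism described in the entry above, here is a minimal Python sketch in which a counting semaphore coordinates several 'primitive processes' (threads); the process count and slot count are illustrative, not taken from the abstract.

```python
import threading

# Two slots guard a shared resource; only two "primitive processes"
# (threads, here) may hold it at once: P/wait on entry, V/signal on exit.
slots = threading.Semaphore(2)

def primitive_process(pid: int) -> None:
    with slots:  # blocks until a slot is free
        print(f"process {pid} holds a slot")

threads = [threading.Thread(target=primitive_process, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```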
Design of on-board parallel computer on nano-satellite
NASA Astrophysics Data System (ADS)
You, Zheng; Tian, Hexiang; Yu, Shijie; Meng, Li
2007-11-01
This paper presents a scheme for an on-board parallel computer system designed for a nano-satellite. Based on the development requirements that a nano-satellite should have small volume, low weight, low power consumption, and intelligence, this scheme departs from the traditional single-computer and dual-computer systems in an effort to improve dependability, capability, and intelligence simultaneously. Following an integrated design method, it employs a shared-memory parallel computer system as the main structure; connects the telemetry system, attitude control system, and payload system by an intelligent bus; designs management that can handle static tasks and dynamic task scheduling and protect and recover on-site status in light of the parallel algorithms; and establishes fault diagnosis, restoration, and system-restructuring mechanisms. The result is an on-board parallel computer system with high dependability, capability, and intelligence, flexible management of hardware resources, an excellent software system, and high extensibility, which satisfies the concept and trend of integrated electronic design.
Compression of CCD raw images for digital still cameras
NASA Astrophysics Data System (ADS)
Sriram, Parthasarathy; Sudharsanan, Subramania
2005-03-01
Lossless compression of raw CCD images captured using color filter arrays has several benefits. The benefits include improved storage capacity, reduced memory bandwidth, and lower power consumption for digital still camera processors. The paper discusses the benefits in detail and proposes the use of a computationally efficient block adaptive scheme for lossless compression. Experimental results are provided that indicate that the scheme performs well for CCD raw images attaining compression factors of more than two. The block adaptive method also compares favorably with JPEG-LS. A discussion is provided indicating how the proposed lossless coding scheme can be incorporated into digital still camera processors enabling lower memory bandwidth and storage requirements.
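To make the block-adaptive idea concrete, the sketch below picks, per block, the better of two simple predictors and keeps the residuals; the paper's actual predictor set and entropy coder are not specified here, so both are assumptions, and the wrap-around at block borders is a simplification.

```python
import numpy as np

def block_adaptive_residuals(img: np.ndarray, bs: int = 8) -> list:
    """For each bs x bs block of an 8-bit image, keep the residuals of the
    better of two predictors (left neighbour vs. upper neighbour). Entropy
    coding of the residuals (e.g. Golomb-Rice) is omitted from this sketch."""
    h, w = img.shape
    out = []
    for by in range(0, h, bs):
        for bx in range(0, w, bs):
            blk = img[by:by + bs, bx:bx + bs].astype(np.int32)
            left = blk - np.roll(blk, 1, axis=1)  # horizontal predictor
            up = blk - np.roll(blk, 1, axis=0)    # vertical predictor
            out.append(left if np.abs(left).sum() <= np.abs(up).sum() else up)
    return out
```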
Holographic memory system based on projection recording of computer-generated 1D Fourier holograms.
Betin, A Yu; Bobrinev, V I; Donchenko, S S; Odinokov, S B; Evtikhiev, N N; Starikov, R S; Starikov, S N; Zlokazov, E Yu
2014-10-01
Utilization of computer generation of holographic structures significantly simplifies the optical scheme used to record the microholograms in a holographic memory record system. Digital holographic synthesis also allows the nonlinear errors of the record system to be taken into account to improve microhologram quality. Multiplexed recording of holograms is a widespread technique to increase the data record density. In this article we present a holographic memory system based on digital synthesis of amplitude one-dimensional (1D) Fourier transform holograms and the multiplexed recording of these holograms onto the holographic carrier using an optical projection scheme. 1D Fourier transform holograms are very sensitive to the orientation of the anamorphic optical element (cylindrical lens) that is required for encoded data object reconstruction. The multiplexed recording of several holograms with different orientations in an optical projection scheme allowed reconstruction of the data object from each hologram by rotating the cylindrical lens to the corresponding angle. We also discuss two optical schemes for reading out the recorded holograms: a full-page readout system and a line-by-line readout system. We consider the benefits of both systems and present the results of experimental modeling of 1D Fourier hologram non-multiplexed and multiplexed recording and reconstruction.
Kinetic energy classification and smoothing for compact B-spline basis sets in quantum Monte Carlo
Krogel, Jaron T.; Reboredo, Fernando A.
2018-01-25
Quantum Monte Carlo calculations of defect properties of transition metal oxides have become feasible in recent years due to increases in computing power. As the system size has grown, availability of on-node memory has become a limiting factor. Saving memory while minimizing computational cost is now a priority. The main growth in memory demand stems from the B-spline representation of the single particle orbitals, especially for heavier elements such as transition metals where semi-core states are present. Despite the associated memory costs, splines are computationally efficient. In this paper, we explore alternatives to reduce the memory usage of splined orbitals without significantly affecting numerical fidelity or computational efficiency. We make use of the kinetic energy operator to both classify and smooth the occupied set of orbitals prior to splining. By using a partitioning scheme based on the per-orbital kinetic energy distributions, we show that memory savings of about 50% is possible for select transition metal oxide systems. Finally, for production supercells of practical interest, our scheme incurs a performance penalty of less than 5%.
Kinetic energy classification and smoothing for compact B-spline basis sets in quantum Monte Carlo
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krogel, Jaron T.; Reboredo, Fernando A.
Quantum Monte Carlo calculations of defect properties of transition metal oxides have become feasible in recent years due to increases in computing power. As the system size has grown, availability of on-node memory has become a limiting factor. Saving memory while minimizing computational cost is now a priority. The main growth in memory demand stems from the B-spline representation of the single particle orbitals, especially for heavier elements such as transition metals where semi-core states are present. Despite the associated memory costs, splines are computationally efficient. In this paper, we explore alternatives to reduce the memory usage of splined orbitals without significantly affecting numerical fidelity or computational efficiency. We make use of the kinetic energy operator to both classify and smooth the occupied set of orbitals prior to splining. By using a partitioning scheme based on the per-orbital kinetic energy distributions, we show that memory savings of about 50% is possible for select transition metal oxide systems. Finally, for production supercells of practical interest, our scheme incurs a performance penalty of less than 5%.
Kinetic energy classification and smoothing for compact B-spline basis sets in quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Krogel, Jaron T.; Reboredo, Fernando A.
2018-01-01
Quantum Monte Carlo calculations of defect properties of transition metal oxides have become feasible in recent years due to increases in computing power. As the system size has grown, availability of on-node memory has become a limiting factor. Saving memory while minimizing computational cost is now a priority. The main growth in memory demand stems from the B-spline representation of the single particle orbitals, especially for heavier elements such as transition metals where semi-core states are present. Despite the associated memory costs, splines are computationally efficient. In this work, we explore alternatives to reduce the memory usage of splined orbitals without significantly affecting numerical fidelity or computational efficiency. We make use of the kinetic energy operator to both classify and smooth the occupied set of orbitals prior to splining. By using a partitioning scheme based on the per-orbital kinetic energy distributions, we show that memory savings of about 50% is possible for select transition metal oxide systems. For production supercells of practical interest, our scheme incurs a performance penalty of less than 5%.
NASA Technical Reports Server (NTRS)
Chung, Ming-Ying; Ciardo, Gianfranco; Siminiceanu, Radu I.
2007-01-01
The Saturation algorithm for symbolic state-space generation has been a recent breakthrough in the exhaustive verification of complex systems, in particular globally-asynchronous/locally-synchronous systems. The algorithm uses a very compact Multiway Decision Diagram (MDD) encoding for states and the fastest symbolic exploration algorithm to date. The distributed version of Saturation uses the overall memory available on a network of workstations (NOW) to efficiently spread the memory load during the highly irregular exploration. A crucial factor in limiting the memory consumption during symbolic state-space generation is the ability to perform garbage collection to free up the memory occupied by dead nodes. However, garbage collection over a NOW requires a nontrivial communication overhead. In addition, operation cache policies become critical while analyzing large-scale systems using the symbolic approach. In this technical report, we develop a garbage collection scheme and several operation cache policies to help solve extremely complex systems. Experiments show that our schemes improve the performance of the original distributed implementation, SmArTNow, in terms of time and memory efficiency.
Face recognition by applying wavelet subband representation and kernel associative memory.
Zhang, Bai-Ling; Zhang, Haihong; Ge, Shuzhi Sam
2004-01-01
In this paper, we propose an efficient face recognition scheme which has two features: 1) representation of face images by two-dimensional (2-D) wavelet subband coefficients and 2) recognition by a modular, personalised classification method based on kernel associative memory models. Compared to PCA projections and low-resolution "thumb-nail" image representations, wavelet subband coefficients can efficiently capture substantial facial features while keeping computational complexity low. As there are usually very limited samples, we constructed an associative memory (AM) model for each person and proposed to improve the performance of AM models by kernel methods. Specifically, we first applied kernel transforms to each possible training pair of face samples and then mapped the high-dimensional feature space back to the input space. Our scheme of using modular autoassociative memory for face recognition is inspired by the same motivation as using autoencoders for optical character recognition (OCR), for which the advantages have been proven. By associative memory, all the prototypical faces of one particular person are used to reconstruct themselves, and the reconstruction error for a probe face image is used to decide if the probe face is from the corresponding person. We carried out extensive experiments on three standard face recognition datasets, the FERET data, the XM2VTS data, and the ORL data. Detailed comparisons with earlier published results are provided, and our proposed scheme offers better recognition accuracy on all of the face datasets.
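The reconstruction-error decision rule lends itself to a compact sketch. The version below uses a plain linear autoassociative memory (a least-squares projector onto each person's training faces) and omits the kernel transform, so it is the linear special case, with illustrative names and shapes.

```python
import numpy as np

class AssociativeMemory:
    """Per-person autoassociative memory: a least-squares projector onto the
    span of that person's training faces. The paper's kernel transform is
    omitted, so this is the plain linear special case."""
    def __init__(self, faces: np.ndarray):      # faces: (dim, n_samples)
        self.W = faces @ np.linalg.pinv(faces)  # reconstructs stored faces

    def error(self, probe: np.ndarray) -> float:
        return float(np.linalg.norm(probe - self.W @ probe))

def identify(probe: np.ndarray, memories: dict) -> str:
    # the person whose memory reconstructs the probe best wins
    return min(memories, key=lambda name: memories[name].error(probe))
```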
Coherence rephasing combined with spin-wave storage using chirped control pulses
NASA Astrophysics Data System (ADS)
Demeter, Gabor
2014-06-01
Photon-echo based optical quantum memory schemes often employ intermediate steps to transform optical coherences to spin coherences for longer storage times. We analyze a scheme that uses three identical chirped control pulses for coherence rephasing in an inhomogeneously broadened ensemble of three-level Λ systems. The pulses induce a cyclic permutation of the atomic populations in the adiabatic regime. Optical coherences created by a signal pulse are stored as spin coherences at an intermediate time interval, and are rephased for echo emission when the ensemble is returned to the initial state. Echo emission during a possible partial rephasing when the medium is inverted can be suppressed with an appropriate choice of control pulse wave vectors. We demonstrate that the scheme works in an optically dense ensemble, despite control pulse distortions during propagation. It integrates conveniently the spin-wave storage step into memory schemes based on a second rephasing of the atomic coherences.
New-Sum: A Novel Online ABFT Scheme For General Iterative Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tao, Dingwen; Song, Shuaiwen; Krishnamoorthy, Sriram
Emerging high-performance computing platforms, with large component counts and lower power margins, are anticipated to be more susceptible to soft errors in both logic circuits and memory subsystems. We present an online algorithm-based fault tolerance (ABFT) approach to efficiently detect and recover soft errors for general iterative methods. We design a novel checksum-based encoding scheme for matrix-vector multiplication that is resilient to both arithmetic and memory errors. Our design decouples the checksum updating process from the actual computation, and allows adaptive checksum overhead control. Building on this new encoding mechanism, we propose two online ABFT designs that can effectively recover from errors when combined with a checkpoint/rollback scheme.
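The abstract builds on the classic checksum encoding for matrix-vector products, which can be sketched as follows; the paper's decoupled checksum updating and adaptive overhead control are not modeled, and the tolerance handling is an assumption.

```python
import numpy as np

def checked_matvec(A: np.ndarray, x: np.ndarray, tol: float = 1e-8):
    """ABFT-style checked product: encode the checksum row c = 1^T A once;
    after computing y = A x, a mismatch between sum(y) and c @ x flags an
    arithmetic or memory error somewhere in the computation."""
    c = A.sum(axis=0)  # checksum encoding of A
    y = A @ x
    ref = c @ x
    if abs(y.sum() - ref) > tol * (1.0 + abs(ref)):
        raise RuntimeError("soft error detected; roll back to checkpoint")
    return y
```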
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mittal, Sparsh; Zhang, Zhao
With each CMOS technology generation, leakage energy consumption has been dramatically increasing and hence, managing leakage power consumption of large last-level caches (LLCs) has become a critical issue in modern processor design. In this paper, we present EnCache, a novel software-based technique which uses dynamic profiling-based cache reconfiguration for saving cache leakage energy. EnCache uses a simple hardware component called profiling cache, which dynamically predicts energy efficiency of an application for 32 possible cache configurations. Using these estimates, system software reconfigures the cache to the most energy efficient configuration. EnCache uses dynamic cache reconfiguration and hence, it does not require offline profiling or tuning the parameter for each application. Furthermore, EnCache optimizes directly for the overall memory subsystem (LLC and main memory) energy efficiency instead of the LLC energy efficiency alone. The experiments performed with an x86-64 simulator and workloads from SPEC2006 suite confirm that EnCache provides larger energy saving than a conventional energy saving scheme. For single core and dual-core system configurations, the average savings in memory subsystem energy over a shared baseline configuration are 30.0% and 27.3%, respectively.
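Once the profiling cache has produced per-configuration energy estimates, the reconfiguration decision reduces to an argmin; a toy sketch with invented configuration names and numbers:

```python
def pick_cache_config(energy_estimates: dict) -> str:
    """Return the most energy-efficient LLC configuration, given estimated
    memory-subsystem energy per candidate configuration (as EnCache's
    profiling cache would supply for its 32 candidates)."""
    return min(energy_estimates, key=energy_estimates.get)

# Illustrative estimates (joules) for three of the candidate configurations.
print(pick_cache_config({"2MB_8way": 41.2, "1MB_8way": 38.7, "1MB_4way": 39.9}))
```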
NASA Technical Reports Server (NTRS)
Li, Yue (Inventor); Bruck, Jehoshua (Inventor)
2018-01-01
A data device includes a memory having a plurality of memory cells configured to store data values in accordance with a predetermined, optionally applied rank modulation scheme, and a memory controller that receives a current error count from an error decoder of the data device for one or more data operations of the flash memory device and selects an operating mode for data scrubbing in accordance with the received error count and a program cycles count.
Cache-based error recovery for shared memory multiprocessor systems
NASA Technical Reports Server (NTRS)
Wu, Kun-Lung; Fuchs, W. Kent; Patel, Janak H.
1989-01-01
A multiprocessor cache-based checkpointing and recovery scheme for recovering from transient processor errors in a shared-memory multiprocessor with private caches is presented. New implementation techniques that use checkpoint identifiers and recovery stacks to reduce performance degradation in processor utilization during normal execution are examined. This cache-based checkpointing technique prevents rollback propagation, provides for rapid recovery, and can be integrated into standard cache coherence protocols. An analytical model is used to estimate the relative performance of the scheme during normal execution. Extensions that take error latency into account are presented.
NASA Astrophysics Data System (ADS)
Collins, Robert J.; Donaldson, Ross J.; Dunjko, Vedran; Wallden, Petros; Clarke, Patrick J.; Andersson, Erika; Jeffers, John; Buller, Gerald S.
2014-10-01
Classical digital signatures are commonly used in e-mail, electronic financial transactions and other forms of electronic communications to ensure that messages have not been tampered with in transit, and that messages are transferrable. The security of commonly used classical digital signature schemes relies on the computational difficulty of inverting certain mathematical functions. However, at present, there are no such one-way functions which have been proven to be hard to invert. With enough computational resources certain implementations of classical public key cryptosystems can be, and have been, broken with current technology. It is nevertheless possible to construct information-theoretically secure signature schemes, including quantum digital signature schemes. Quantum signature schemes can be made information-theoretically secure based on the laws of quantum mechanics, while comparable classical protocols require additional resources such as secret communication and a trusted authority. Early demonstrations of quantum digital signatures required quantum memory, rendering them impractical at present. Our present implementation is based on a protocol that does not require quantum memory. It also uses the new technique of unambiguous quantum state elimination. Here we report experimental results for a test-bed system, recorded with a variety of different operating parameters, along with a discussion of aspects of the system security.
Reducing the PAPR in FBMC-OQAM systems with low-latency trellis-based SLM technique
NASA Astrophysics Data System (ADS)
Bulusu, S. S. Krishna Chaitanya; Shaiek, Hmaied; Roviras, Daniel
2016-12-01
Filter-bank multi-carrier (FBMC) modulations, and more specifically FBMC-offset quadrature amplitude modulation (OQAM), are seen as an interesting alternative to orthogonal frequency division multiplexing (OFDM) for the 5th generation radio access technology. In this paper, we investigate the problem of peak-to-average power ratio (PAPR) reduction for FBMC-OQAM signals. Recently, it has been shown that FBMC-OQAM with the trellis-based selected mapping (TSLM) scheme is not only superior to any scheme based on a symbol-by-symbol approach but also outperforms OFDM with the classical SLM scheme. This paper is an extension of that work, in which we analyze TSLM in terms of computational complexity, required hardware memory, and latency. We propose an improvement to TSLM that requires much less hardware memory than the originally proposed TSLM and also has low latency. Additionally, the impact of the time duration of the partial PAPR on the performance of TSLM is studied, and its lower bound is identified by proposing a suitable time duration. A thorough and fair performance comparison is also made with an existing trellis-based scheme proposed in the literature. The simulation results show that the proposed low-latency TSLM yields better PAPR reduction performance with relatively low hardware memory requirements.
SX User's Manual for SX version 2.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, S.A.; Braddy, D.
1993-01-04
Scheme is a lexically scoped, properly tail recursive dialect of the LISP programming language. The PACT implementation is described abstractly in Abelson and Sussman's book, Structure and Interpretation of Computer Programs. It features all of the "essential procedures" described in the "Revised Report on Scheme" which defines the standard for Scheme. In PACT, Scheme is implemented as a library; however, a small driver delivers a stand alone Scheme interpreter. The PACT implementation features a reference counting incremental garbage collector. This distributes the overhead of memory management throughout the running of Scheme code. It also tends to keep Scheme from trying to grab the entire machine on which it is running, which some garbage collection schemes will attempt to do. SX is perhaps the ultimate PACT statement. It is simply Scheme plus the other parts of PACT. A more precise way to describe it is as a dialect of LISP with extensions for PGS, PDB, PDBX, PML, and PANACEA. What this yields is an interpretive language whose primitive procedures span the functionality of all of PACT. Like the Scheme implementation which it extends, SX provides both a library and a stand alone application. The stand alone interpreter is the engine behind applications such as PDBView and PDBDiff. The SX library is the heart of TRANSL, a tool to translate data files from one database format to another. The modularization and layering make it possible to use the PACT components like building blocks. In addition, SX contains functionality which is the generalization of that found in ULTRA II. This means that as the development of SX proceeds, an SX driven application will be able to perform arbitrary dimensional presentation, analysis, and manipulation tasks. Because of the fundamental unity of these two PACT parts, they are documented in a single manual. The first part will cover the standard Scheme functionality and the second part will discuss the SX extensions.
SX User's Manual for SX version 2.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, S.A.; Braddy, D.
1993-01-04
Scheme is a lexically scoped, properly tail recursive dialect of the LISP programming language. The PACT implementation is described abstractly in Abelson and Sussman's book, Structure and Interpretation of Computer Programs. It features all of the "essential procedures" described in the "Revised Report on Scheme" which defines the standard for Scheme. In PACT, Scheme is implemented as a library; however, a small driver delivers a stand alone Scheme interpreter. The PACT implementation features a reference counting incremental garbage collector. This distributes the overhead of memory management throughout the running of Scheme code. It also tends to keep Scheme from trying to grab the entire machine on which it is running, which some garbage collection schemes will attempt to do. SX is perhaps the ultimate PACT statement. It is simply Scheme plus the other parts of PACT. A more precise way to describe it is as a dialect of LISP with extensions for PGS, PDB, PDBX, PML, and PANACEA. What this yields is an interpretive language whose primitive procedures span the functionality of all of PACT. Like the Scheme implementation which it extends, SX provides both a library and a stand alone application. The stand alone interpreter is the engine behind applications such as PDBView and PDBDiff. The SX library is the heart of TRANSL, a tool to translate data files from one database format to another. The modularization and layering make it possible to use the PACT components like building blocks. In addition, SX contains functionality which is the generalization of that found in ULTRA II. This means that as the development of SX proceeds, an SX driven application will be able to perform arbitrary dimensional presentation, analysis, and manipulation tasks. Because of the fundamental unity of these two PACT parts, they are documented in a single manual. The first part will cover the standard Scheme functionality and the second part will discuss the SX extensions.
Fractional Steps methods for transient problems on commodity computer architectures
NASA Astrophysics Data System (ADS)
Krotkiewski, M.; Dabrowski, M.; Podladchikov, Y. Y.
2008-12-01
Fractional Steps methods are suitable for modeling transient processes that are central to many geological applications. Low memory requirements and modest computational complexity facilitate calculations on high-resolution three-dimensional models. An efficient implementation of Alternating Direction Implicit/Locally One-Dimensional schemes for an Opteron-based shared memory system is presented. The memory bandwidth usage, the main bottleneck on modern computer architectures, is specially addressed. High efficiency of above 2 GFlops per CPU is sustained for problems of 1 billion degrees of freedom. The optimized sequential implementation of all 1D sweeps is comparable in execution time to copying the used data in the memory. Scalability of the parallel implementation on up to 8 CPUs is close to perfect. Performing one timestep of the Locally One-Dimensional scheme on a system of 1000³ unknowns on 8 CPUs takes only 11 s. We validate the LOD scheme using a computational model of an isolated inclusion subject to a constant far-field flux. Next, we study numerically the evolution of a diffusion front and the effective thermal conductivity of composites consisting of multiple inclusions and compare the results with predictions based on the differential effective medium approach. Finally, application of the developed parabolic solver is suggested for a real-world problem of fluid transport and reactions inside a reservoir.
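Each implicit 1D sweep of an ADI/LOD scheme reduces to one tridiagonal solve per grid line, which is why the memory traffic of moving line data dominates the modest arithmetic; a standard Thomas-algorithm sketch follows (not the authors' optimized implementation).

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system: a is the sub-diagonal (a[0] unused),
    b the diagonal, c the super-diagonal (c[-1] unused), d the right-hand
    side. One such solve per grid line is the entire cost of an implicit
    1D sweep in an ADI/LOD fractional-step scheme."""
    a, b, c, d = (np.asarray(v, dtype=float) for v in (a, b, c, d))
    n = d.size
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):  # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):  # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```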
Non-binary LDPC-coded modulation for high-speed optical metro networks with backpropagation
NASA Astrophysics Data System (ADS)
Arabaci, Murat; Djordjevic, Ivan B.; Saunders, Ross; Marcoccia, Roberto M.
2010-01-01
To simultaneously mitigate the linear and nonlinear channel impairments in high-speed optical communications, we propose the use of non-binary low-density-parity-check-coded modulation in combination with a coarse backpropagation method. By employing backpropagation, we reduce the memory in the channel and in return obtain significant reductions in the complexity of the channel equalizer, which is exponentially proportional to the channel memory. We then compensate for the remaining channel distortions using forward error correction based on non-binary LDPC codes. We propose the non-binary-LDPC-coded modulation scheme because, compared to a bit-interleaved binary-LDPC-coded modulation scheme employing turbo equalization, the proposed scheme lowers the computational complexity and latency of the overall system while providing impressively larger coding gains.
NASA Astrophysics Data System (ADS)
Wang, Jianhua; Cheng, Lianglun; Wang, Tao; Peng, Xiaodong
2016-03-01
Table look-up plays a very important role in the decoding process of context-based adaptive variable length decoding (CAVLD) in H.264/advanced video coding (AVC). However, frequent table look-up operations result in many table memory accesses, and thus high table power consumption. To solve the problem of the large number of table memory accesses incurred by current methods, and thereby reduce power consumption, a memory-efficient table look-up optimized algorithm is presented for CAVLD. The contribution of this paper is that index search technology is introduced to reduce memory accesses for table look-up, and hence table power consumption. Specifically, in our scheme, we use index search to reduce memory accesses by reducing the searching and matching operations for code_word, taking advantage of the internal relationship among the length of zeros in code_prefix, the value of code_suffix, and code_length, thus saving the power consumption of table look-up. The experimental results show that our proposed table look-up algorithm based on index search can lower memory access consumption by about 60% compared with table look-up by sequential search, and thus save considerable power for CAVLD in H.264/AVC.
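A hedged sketch of the index-search idea: key the table directly by the leading-zero count of code_prefix and the value of code_suffix, so each decode is a single lookup rather than a sequential scan over candidate codewords; the entries below are placeholders, not the real H.264 CAVLC tables.

```python
# Hypothetical index-search table keyed by (leading zeros in the code
# prefix, code suffix value); one dictionary lookup replaces a sequential
# scan, touching the table memory only once per decoded symbol.
TABLE = {
    (0, 1): ("coeff_token", 0),
    (1, 1): ("coeff_token", 1),
    (2, 1): ("coeff_token", 2),
}

def decode_symbol(prefix_zeros: int, suffix: int):
    return TABLE.get((prefix_zeros, suffix))  # None signals an invalid code
```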
Xu, Wei; Morishita, Wade; Buckmaster, Paul S; Pang, Zhiping P; Malenka, Robert C; Südhof, Thomas C
2012-03-08
Neurons encode information by firing spikes in isolation or bursts and propagate information by spike-triggered neurotransmitter release that initiates synaptic transmission. Isolated spikes trigger neurotransmitter release unreliably but with high temporal precision. In contrast, bursts of spikes trigger neurotransmission reliably (i.e., boost transmission fidelity), but the resulting synaptic responses are temporally imprecise. However, the relative physiological importance of different spike-firing modes remains unclear. Here, we show that knockdown of synaptotagmin-1, the major Ca(2+) sensor for neurotransmitter release, abrogated neurotransmission evoked by isolated spikes but only delayed, without abolishing, neurotransmission evoked by bursts of spikes. Nevertheless, knockdown of synaptotagmin-1 in the hippocampal CA1 region did not impede acquisition of recent contextual fear memories, although it did impair the precision of such memories. In contrast, knockdown of synaptotagmin-1 in the prefrontal cortex impaired all remote fear memories. These results indicate that different brain circuits and types of memory employ distinct spike-coding schemes to encode and transmit information.
Ising formulation of associative memory models and quantum annealing recall
NASA Astrophysics Data System (ADS)
Santra, Siddhartha; Shehab, Omar; Balu, Radhakrishnan
2017-12-01
Associative memory models, in theoretical neuro- and computer sciences, can generally store at most a linear number of memories. Recalling memories in these models can be understood as retrieval of the energy-minimizing configuration of classical Ising spins, closest in Hamming distance to an imperfect input memory, where the energy landscape is determined by the set of stored memories. We present an Ising formulation for associative memory models and consider the problem of memory recall using quantum annealing. We show that allowing for input-dependent energy landscapes allows storage of up to an exponential number of memories (in terms of the number of neurons). Further, we show how quantum annealing may naturally be used for recall tasks in such input-dependent energy landscapes, although the recall time may increase with the number of stored memories. Theoretically, we obtain the radius of attractor basins R(N) and the capacity C(N) of such a scheme and their tradeoffs. Our calculations establish that for randomly chosen memories the capacity of our model using the Hebbian learning rule as a function of problem size can be expressed as C(N) = O(e^{C_1 N}), C_1 ≥ 0, and succeeds on randomly chosen memory sets with a probability of (1 − e^{−C_2 N}), C_2 ≥ 0, with C_1 + C_2 = (0.5 − f)^2/(1 − f), where f = R(N)/N, 0 ≤ f ≤ 0.5, is the radius of attraction in terms of the Hamming distance of an input probe from a stored memory as a fraction of the problem size. We demonstrate the application of this scheme on a programmable quantum annealing device, the D-Wave processor.
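For orientation, the classical (non-quantum) counterpart of the recall task is the Hopfield model with Hebbian storage; the sketch below performs energy-descent recall of ±1 patterns and does not model the paper's input-dependent landscapes or quantum annealing.

```python
import numpy as np

def hebbian_weights(memories: np.ndarray) -> np.ndarray:
    """Hebbian outer-product rule for p stored patterns of N +/-1 spins;
    memories has shape (p, N)."""
    p, N = memories.shape
    W = memories.T @ memories / N
    np.fill_diagonal(W, 0.0)  # no self-coupling
    return W

def recall(W: np.ndarray, probe: np.ndarray, sweeps: int = 20) -> np.ndarray:
    """Asynchronous sign updates descend the Ising energy, driving the probe
    toward the stored pattern nearest in Hamming distance."""
    s = probe.copy()
    for _ in range(sweeps):
        for i in np.random.permutation(len(s)):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s
```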
Electronic implementation of associative memory based on neural network models
NASA Technical Reports Server (NTRS)
Moopenn, A.; Lambe, John; Thakoor, A. P.
1987-01-01
An electronic embodiment of a neural network based associative memory in the form of a binary connection matrix is described. The nature of false memory errors, their effect on the information storage capacity of binary connection matrix memories, and a novel technique to eliminate such errors with the help of asymmetrical extra connections are discussed. The stability of the matrix memory system incorporating a unique local inhibition scheme is analyzed in terms of local minimization of an energy function. The memory's stability, dynamic behavior, and recall capability are investigated using a 32-'neuron' electronic neural network memory with a 1024-programmable binary connection matrix.
Data traffic reduction schemes for sparse Cholesky factorizations
NASA Technical Reports Server (NTRS)
Naik, Vijay K.; Patrick, Merrell L.
1988-01-01
Load distribution schemes are presented which minimize the total data traffic in the Cholesky factorization of dense and sparse, symmetric, positive definite matrices on multiprocessor systems with local and shared memory. The total data traffic in factoring an n x n sparse, symmetric, positive definite matrix representing an n-vertex regular 2-D grid graph using n^α (α ≤ 1) processors is shown to be O(n^{1+α/2}). It is O(n^{3/2}) when n^α (α ≥ 1) processors are used. Under the conditions of uniform load distribution, these results are shown to be asymptotically optimal. The schemes allow efficient use of up to O(n) processors before the total data traffic reaches the maximum value of O(n^{3/2}). The partitioning employed within the scheme allows a better utilization of the data accessed from shared memory than that of previously published methods.
Strategies for Energy Efficient Resource Management of Hybrid Programming Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Dong; Supinski, Bronis de; Schulz, Martin
2013-01-01
Many scientific applications are programmed using hybrid programming models that use both message-passing and shared-memory, due to the increasing prevalence of large-scale systems with multicore, multisocket nodes. Previous work has shown that energy efficiency can be improved using software-controlled execution schemes that consider both the programming model and the power-aware execution capabilities of the system. However, such approaches have focused on identifying optimal resource utilization for one programming model, either shared-memory or message-passing, in isolation. The potential solution space, thus the challenge, increases substantially when optimizing hybrid models since the possible resource configurations increase exponentially. Nonetheless, with the accelerating adoption of hybrid programming models, we increasingly need improved energy efficiency in hybrid parallel applications on large-scale systems. In this work, we present new software-controlled execution schemes that consider the effects of dynamic concurrency throttling (DCT) and dynamic voltage and frequency scaling (DVFS) in the context of hybrid programming models. Specifically, we present predictive models and novel algorithms based on statistical analysis that anticipate application power and time requirements under different concurrency and frequency configurations. We apply our models and methods to the NPB MZ benchmarks and selected applications from the ASC Sequoia codes. Overall, we achieve substantial energy savings (8.74% on average and up to 13.8%) with some performance gain (up to 7.5%) or negligible performance loss.
Lai, Ying-Chih; Hsu, Fang-Chi; Chen, Jian-Yu; He, Jr-Hau; Chang, Ting-Chang; Hsieh, Ya-Ping; Lin, Tai-Yuan; Yang, Ying-Jay; Chen, Yang-Fang
2013-05-21
A newly designed transferable and flexible label-like organic memory based on a graphene electrode behaves like a sticker, and can be readily placed on desired substrates or devices for diversified purposes. The memory label reveals excellent performance despite its physical presentation. This may greatly extend the memory applications in various advanced electronics and provide a simple scheme to integrate with other electronics.
An authenticated image encryption scheme based on chaotic maps and memory cellular automata
NASA Astrophysics Data System (ADS)
Bakhshandeh, Atieh; Eslami, Ziba
2013-06-01
This paper introduces a new image encryption scheme based on chaotic maps, cellular automata and permutation-diffusion architecture. In the permutation phase, a piecewise linear chaotic map is utilized to confuse the plain-image and in the diffusion phase, we employ the Logistic map as well as a reversible memory cellular automata to obtain an efficient and secure cryptosystem. The proposed method admits advantages such as highly secure diffusion mechanism, computational efficiency and ease of implementation. A novel property of the proposed scheme is its authentication ability which can detect whether the image is tampered during the transmission or not. This is particularly important in applications where image data or part of it contains highly sensitive information. Results of various analyses manifest high security of this new method and its capability for practical image encryption.
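A toy single-round permutation-diffusion pass in the spirit of the scheme, assuming an 8-bit grayscale image; the piecewise linear chaotic map, the memory cellular automata, and the authentication mechanism of the paper are all omitted, and the keyed permutation here is an illustrative stand-in for the chaotic confusion stage.

```python
import numpy as np

def logistic_stream(x0: float, n: int, r: float = 3.99) -> np.ndarray:
    """Byte key stream from the Logistic map x_{k+1} = r * x_k * (1 - x_k)."""
    out, x = np.empty(n), x0
    for k in range(n):
        x = r * x * (1.0 - x)
        out[k] = x
    return (out * 255).astype(np.uint8)

def encrypt(img: np.ndarray, perm_key: int, x0: float) -> np.ndarray:
    """One confusion (keyed pixel permutation) plus diffusion (XOR with a
    chaotic key stream) round. Decryption XORs first, then applies the
    inverse permutation."""
    flat = img.reshape(-1)
    order = np.random.default_rng(perm_key).permutation(flat.size)
    return (flat[order] ^ logistic_stream(x0, flat.size)).reshape(img.shape)
```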
A 64Cycles/MB, Luma-Chroma Parallelized H.264/AVC Deblocking Filter for 4K × 2K Applications
NASA Astrophysics Data System (ADS)
Shen, Weiwei; Fan, Yibo; Zeng, Xiaoyang
In this paper, a high-throughput deblocking filter is presented for the H.264/AVC standard, catering to video applications with 4K × 2K (4096 × 2304) ultra-definition resolution. In order to strengthen the parallelism without simply increasing the area, we propose a luma-chroma parallel method. Meanwhile, this work reduces the number of processing cycles, the amount of external memory traffic, and the working frequency by using triple four-stage pipeline filters and a luma-chroma interlaced sequence. Furthermore, it eliminates most unnecessary off-chip memory bandwidth with a highly reusable memory scheme, and adopts a "slide window" buffer scheme. As a result, our design can support 4K × 2K at 30 fps applications at a working frequency of only 70.8 MHz.
Generating Data Flow Programs from Nonprocedural Specifications.
1983-03-01
With the I-structures, Gajski points out, it is difficult to know ahead of time the optimal memory allocation scheme to partition large arrays. Memory contention problems may occur for frequently accessed elements stored in the same memory module. Gajski observes that these are the same problems which ...
Towards a Low-Cost Remote Memory Attestation for the Smart Grid
Yang, Xinyu; He, Xiaofei; Yu, Wei; Lin, Jie; Li, Rui; Yang, Qingyu; Song, Houbing
2015-01-01
In the smart grid, measurement devices may be compromised by adversaries, and their operations could be disrupted by attacks. A number of schemes to efficiently and accurately detect these compromised devices remotely have been proposed. Nonetheless, most of the existing schemes for detecting compromised devices depend on the incremental response time in the attestation process, which is sensitive to data transmission delay and leads to high computation and network overhead. To address the issue, in this paper, we propose a low-cost remote memory attestation scheme (LRMA), which can efficiently and accurately detect compromised smart meters considering real-time network delay and achieve low computation and network overhead. In LRMA, the impact of real-time network delay on detecting compromised nodes can be eliminated via investigating the time differences reported from relay nodes. Furthermore, the attestation frequency in LRMA is dynamically adjusted with the compromised probability of each node, and then, the total number of attestations could be reduced while low computation and network overhead can be achieved. Through a combination of extensive theoretical analysis and evaluations, our data demonstrate that our proposed scheme can achieve better detection capacity and lower computation and network overhead in comparison to existing schemes. PMID:26307998
Towards a Low-Cost Remote Memory Attestation for the Smart Grid.
Yang, Xinyu; He, Xiaofei; Yu, Wei; Lin, Jie; Li, Rui; Yang, Qingyu; Song, Houbing
2015-08-21
In the smart grid, measurement devices may be compromised by adversaries, and their operations could be disrupted by attacks. A number of schemes to efficiently and accurately detect these compromised devices remotely have been proposed. Nonetheless, most of the existing schemes for detecting compromised devices depend on the incremental response time in the attestation process, which is sensitive to data transmission delay and leads to high computation and network overhead. To address the issue, in this paper, we propose a low-cost remote memory attestation scheme (LRMA), which can efficiently and accurately detect compromised smart meters considering real-time network delay and achieve low computation and network overhead. In LRMA, the impact of real-time network delay on detecting compromised nodes can be eliminated via investigating the time differences reported from relay nodes. Furthermore, the attestation frequency in LRMA is dynamically adjusted with the compromised probability of each node, and then, the total number of attestations could be reduced while low computation and network overhead can be achieved. Through a combination of extensive theoretical analysis and evaluations, our data demonstrate that our proposed scheme can achieve better detection capacity and lower computation and network overhead in comparison to existing schemes.
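The nonce-based memory checksum at the core of such attestation schemes can be sketched as below; LRMA's relay-reported time differences and dynamically adjusted attestation frequency are not modeled, and all names are illustrative.

```python
import hashlib, os

def attest(memory_image: bytes, nonce: bytes) -> str:
    """Prover side of a toy checksum attestation: hash the firmware image
    together with a fresh verifier-chosen nonce so precomputed or replayed
    answers are useless."""
    return hashlib.sha256(nonce + memory_image).hexdigest()

nonce = os.urandom(16)
expected = attest(b"firmware-v1.2", nonce)  # verifier's golden copy
reported = attest(b"firmware-v1.2", nonce)  # smart meter's response
print("compromised" if reported != expected else "clean")
```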
Design and DSP implementation of star image acquisition and star point fast acquiring and tracking
NASA Astrophysics Data System (ADS)
Zhou, Guohui; Wang, Xiaodong; Hao, Zhihang
2006-02-01
A star sensor is a special high-accuracy photoelectric sensor, and attitude acquisition time is an important functional index of a star sensor. In this paper, the design target is a dynamic performance of 10 attitude samples per second. On the basis of analyzing the CCD signal timing and star image processing, a new design and a special parallel architecture for improving star image processing are presented. In the design, the operation of moving the data in expanded windows including the stars to the on-chip memory of the DSP is arranged in the invalid period of the CCD frame signal. While the CCD is saving the star image to memory, the DSP processes the data in the on-chip memory. This parallelism greatly improves the efficiency of processing. The scheme proposed here results in enormous savings of the memory normally required. In the scheme, the DSP HOLD mode and CPLD technology are used to create a memory shared between the CCD and the DSP. The efficiency of processing is discussed in numerical tests. The five lightest stars are acquired in only 3.5 ms in the star acquisition stage. In 43 us, the data in five expanded windows including stars are moved into the internal memory of the DSP, and in 1.6 ms, five star coordinates are obtained in the star tracking stage.
Alyahya, Mohammad
2012-02-01
Organizational structure is built through dynamic processes which blend historical force and management decisions, as a part of a broader process of constructing organizational memory (OM). OM is considered to be one of the main competences leading to the organization's success. This study focuses on the impact of the Quality and Outcome Framework (QOF), which is a Pay-for-Performance scheme, on general practitioner (GP) practices in the UK. The study is based on semistructured interviews with four GP practices in the north of England involving 39 informants. The findings show that the way practices assigned different functions into specialized units, divisions or departments shows the degree of specialization in their organizational structures. More specialized unit arrangements, such as an IT division, particular chronic disease clinics or competence-based job distributions enhanced procedural memory development through enabling regular use of knowledge in specific context, which led to competence building. In turn, such competence at particular functions or jobs made it possible for the practices to achieve their goals more efficiently. This study concludes that organizational structure contributed strongly to the enhancement of OM, which in turn led to better organizational competence.
Asynchronous Communication Scheme For Hypercube Computer
NASA Technical Reports Server (NTRS)
Madan, Herb S.
1988-01-01
Scheme devised for asynchronous-message communication system for Mark III hypercube concurrent-processor network. Network consists of up to 1,024 processing elements connected electrically as though they were at the corners of a 10-dimensional cube. Each node contains two Motorola 68020 processors along with a Motorola 68881 floating-point processor utilizing up to 4 megabytes of shared dynamic random-access memory. Scheme intended to support applications requiring passage of both polled or solicited and unsolicited messages.
A study of an arbiter function in the structures of a shared bus
NASA Astrophysics Data System (ADS)
Seck, J.-P.
The results of a comparative study of synchronous and asynchronous arbiters for managing user access to a shared bus are presented. The best available method is determined to be modular arbiter structures attached only to the decision module. Linear and circular arbitration strategies are examined for suitability for automatic decision-making. A multiple-strategies arbiter scheme is devised, involving the superposition of various strategies of one sequential machine into another. It is then possible to modify the strategy on-line if the current strategy is ineffective. The utilization of a multiple structure of cascading arbiter devices is noted to be effective if response time is not a critical matter. Finally, attention is given to automatic circuit testing and fault detection. An example is furnished in terms of a management system for a shared memory in a multimicroprocessor structure.
Study on fault-tolerant processors for advanced launch system
NASA Technical Reports Server (NTRS)
Shin, Kang G.; Liu, Jyh-Charn
1990-01-01
Issues related to the reliability of a redundant system with large main memory are addressed. The Fault-Tolerant Processor (FTP) for the Advanced Launch System (ALS) is used as a basis for the presentation. When the system is free of latent faults, the probability of a system crash due to multiple channel faults is shown to be insignificant even when voting on the outputs of computing channels is infrequent. Using channel error maskers (CEMs) is shown to improve reliability more effectively than increasing redundancy or the number of channels for applications with long mission times. Even without using a voter, most memory errors can be immediately corrected by those CEMs implemented with conventional coding techniques. In addition to their ability to enhance system reliability, CEMs (with a very low hardware overhead) can be used to dramatically reduce not only the need for memory realignment, but also the time required to realign channel memories in the rare case that such a need arises. Using CEMs, two different schemes were developed to solve the memory realignment problem. In both schemes, most errors are corrected by CEMs, and the remaining errors are masked by a voter.
Power impact of loop buffer schemes for biomedical wireless sensor nodes.
Artes, Antonio; Ayala, Jose L; Catthoor, Francky
2012-11-06
Instruction memory organisations are pointed out as one of the major sources of energy consumption in embedded systems. As these systems are characterised by restrictive resources and a low energy budget, any enhancement in this component allows not only a decrease in energy consumption but also a better distribution of the energy budget throughout the system. Loop buffering is an effective scheme to reduce energy consumption in instruction memory organisations. In this paper, the loop buffer concept is applied to real-life embedded applications that are widely used in biomedical wireless sensor nodes, to show which loop buffer scheme is more suitable for applications with a given behaviour. Post-layout simulations demonstrate that a trade-off exists between the complexity of the loop buffer architecture and the energy savings of utilising it. Therefore, the use of loop buffer architectures to optimise the instruction memory organisation from the energy-efficiency point of view should be evaluated carefully, taking into account two factors: (1) the percentage of the application's execution time spent executing loops, and (2) the distribution of that execution time over each of the loops that form the application.
Design and Analysis of a Dynamic Mobility Management Scheme for Wireless Mesh Network
Roy, Sudipta
2013-01-01
Seamless mobility management of the mesh clients (MCs) in wireless mesh network (WMN) has drawn a lot of attention from the research community. A number of mobility management schemes such as mesh network with mobility management (MEMO), mesh mobility management (M3), and wireless mesh mobility management (WMM) have been proposed. The common problem with these schemes is that they impose uniform criteria on all the MCs for sending route update message irrespective of their distinct characteristics. This paper proposes a session-to-mobility ratio (SMR) based dynamic mobility management scheme for handling both internet and intranet traffic. To reduce the total communication cost, this scheme considers each MC's session and mobility characteristics by dynamically determining optimal threshold SMR value for each MC. A numerical analysis of the proposed scheme has been carried out. Comparison with other schemes shows that the proposed scheme outperforms MEMO, M3, and WMM with respect to total cost. PMID:24311982
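Under one natural reading of the abstract, each client's route-update decision compares its measured session-to-mobility ratio with a per-client threshold; the sketch below assumes that reading and does not reproduce the paper's cost model.

```python
def should_update_route(session_arrivals: int, handoffs: int,
                        threshold: float) -> bool:
    """Session-to-mobility ratio test for one mesh client. A client whose
    sessions arrive often relative to its movements (high SMR) benefits from
    eager route updates; a highly mobile, rarely contacted client does not.
    The threshold stands in for the per-client optimum the paper derives;
    its exact form is not reproduced here."""
    smr = session_arrivals / max(handoffs, 1)
    return smr >= threshold
```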
A practically unconditionally gradient stable scheme for the N-component Cahn-Hilliard system
NASA Astrophysics Data System (ADS)
Lee, Hyun Geun; Choi, Jeong-Whan; Kim, Junseok
2012-02-01
We present a practically unconditionally gradient stable conservative nonlinear numerical scheme for the N-component Cahn-Hilliard system modeling the phase separation of an N-component mixture. The scheme is based on a nonlinear splitting method and is solved by an efficient and accurate nonlinear multigrid method. The scheme allows us to convert the N-component Cahn-Hilliard system into a system of N-1 binary Cahn-Hilliard equations and significantly reduces the required computer memory and CPU time. We observe that our numerical solutions are consistent with the linear stability analysis results. We also demonstrate the efficiency of the proposed scheme with various numerical experiments.
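The conversion to N−1 binary equations follows from the mass constraint on the phase fractions; in commonly used Cahn-Hilliard notation (assumed here, not quoted from the paper):

```latex
% Mass constraint eliminates the N-th component:
\sum_{i=1}^{N} c_i = 1
\;\Longrightarrow\;
c_N = 1 - \sum_{i=1}^{N-1} c_i,
% so only c_1, ..., c_{N-1} need be stored and evolved:
\partial_t c_i = \nabla \cdot \left( M \, \nabla \mu_i \right),
\quad
\mu_i = F'(c_i) - \epsilon^2 \Delta c_i,
\quad i = 1, \dots, N-1.
```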
NASA Astrophysics Data System (ADS)
Nji, Jones; Li, Guoqiang
2012-02-01
The purpose of this study is to investigate the potential of a shape-memory-polymer (SMP)-based particulate composite to heal structural-length-scale damage with small thermoplastic additive contents through a close-then-heal (CTH) self-healing scheme that was introduced in a previous study (Li and Uppu 2010 Compos. Sci. Technol. 70 1419-27). The idea is to achieve reasonable healing efficiencies with minimal sacrifice in structural load capacity. By first closing cracks, the gap between the two crack surfaces is narrowed and a smaller amount of thermoplastic particles is required to achieve healing. The particulate composite was fabricated by dispersing copolyester thermoplastic particles in a shape memory polymer matrix. It is found that, for small thermoplastic contents of less than 10%, the CTH scheme followed in this study heals structural-length-scale damage in the SMP particulate composite to a meaningful extent and with less sacrifice of structural capacity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Luning; Neuscamman, Eric
We present a modification to variational Monte Carlo's linear method optimization scheme that addresses a critical memory bottleneck while maintaining compatibility with both the traditional ground state variational principle and our recently introduced variational principle for excited states. For wave function ansatzes with tens of thousands of variables, our modification reduces the required memory per parallel process from tens of gigabytes to hundreds of megabytes, making the methodology a much better fit for modern supercomputer architectures in which data communication and per-process memory consumption are primary concerns. We verify the efficacy of the new optimization scheme in small molecule tests involving both the Hilbert space Jastrow antisymmetric geminal power ansatz and real space multi-Slater Jastrow expansions. Satisfied with its performance, we have added the optimizer to the QMCPACK software package, with which we demonstrate on a hydrogen ring a prototype approach for making systematically convergent, non-perturbative predictions of Mott insulators' optical band gaps.
Background Noise Analysis in a Few-Photon-Level Qubit Memory
NASA Astrophysics Data System (ADS)
Mittiga, Thomas; Kupchak, Connor; Jordaan, Bertus; Namazi, Mehdi; Nölleke, Christian; Figueroa, Eden
2014-05-01
We have developed an electromagnetically-induced-transparency-based polarization qubit memory. The device is composed of a dual-rail probe field polarization setup collinear with an intense control field to store and retrieve any arbitrary polarization state by addressing a Λ-type energy level scheme in a 87Rb vapor cell. To achieve a signal-to-background ratio at the few-photon level sufficient for polarization tomography of the retrieved state, the intense control field is filtered out through an etalon filtering system. We have developed an analytical model predicting the influence of the signal-to-background ratio on the fidelities and compared it to experimental data. Experimentally measured global fidelities closely follow the theoretical prediction as the signal-to-background ratio decreases. These results suggest the plausibility of employing room temperature memories to store photonic qubits at the single photon level and for future applications in long distance quantum communication schemes.
Data traffic reduction schemes for Cholesky factorization on asynchronous multiprocessor systems
NASA Technical Reports Server (NTRS)
Naik, Vijay K.; Patrick, Merrell L.
1989-01-01
Communication requirements of Cholesky factorization of dense and sparse symmetric, positive definite matrices are analyzed. The communication requirement is characterized by the data traffic generated on multiprocessor systems with local and shared memory. Lower bound proofs are given to show that, when the load is uniformly distributed, the data traffic associated with factoring an n x n dense matrix using n^α (α ≤ 2) processors is Ω(n^(2+α/2)). For n x n sparse matrices representing a √n x √n regular grid graph, the data traffic is shown to be Ω(n^(1+α/2)), α ≤ 1. Partitioning schemes that are variations of the block assignment scheme are described, and the data traffic generated by these schemes is shown to be asymptotically optimal. The schemes allow efficient use of up to O(n^2) processors in the dense case and up to O(n) processors in the sparse case before the total data traffic reaches its maximum value of O(n^3) and O(n^(3/2)), respectively. It is shown that the block-based partitioning schemes make better use of the data accessed from shared memory and thus generate less data traffic than schemes based on column-wise wrap-around assignment.
Programmable fuzzy associative memory processor
NASA Astrophysics Data System (ADS)
Shao, Lan; Liu, Liren; Li, Guoqiang
1996-02-01
An optical system based on the method of spatial area-coding and a multiple-image scheme is proposed for fuzzy associative memory processing. The fuzzy maximum operation is accomplished by a ferroelectric liquid crystal PROM instead of a computer-based approach. A relative subsethood measure is introduced as a criterion for evaluating recall.
Acoustic-emissive memory effect in coal samples under triaxial axial-symmetric compression
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shkuratnik, V.L.; Filimonov, Y.L.; Kuchurin, S.V.
2006-05-15
The experimental data are presented for production and manifestation of the Kaiser effect in coal samples subjected to triaxial loading by the Karman scheme in the first cycle and to various loading modes in the second cycle. The Kaiser effect is identified with the help of a deformation memory effect.
Holographic Compact Disk Read-Only Memories
NASA Technical Reports Server (NTRS)
Liu, Tsuen-Hsi
1996-01-01
Compact disk read-only memories (CD-ROMs) of the proposed type store digital data in volume holograms instead of in surface differentially reflective elements. Holographic CD-ROMs consist largely of parts similar to those used in conventional CD-ROMs, yet achieve 10 or more times the data-storage capacity and throughput by use of a wavelength-multiplexing/volume-hologram scheme.
Discriminative Hierarchical K-Means Tree for Large-Scale Image Classification.
Chen, Shizhi; Yang, Xiaodong; Tian, Yingli
2015-09-01
A key challenge in large-scale image classification is how to achieve efficiency in terms of both computation and memory without compromising classification accuracy. Learning-based classifiers achieve state-of-the-art accuracies but have been criticized for computational complexity that grows linearly with the number of classes. Nonparametric nearest neighbor (NN)-based classifiers naturally handle large numbers of categories, but incur prohibitively expensive computation and memory costs. In this brief, we present a novel classification scheme, the discriminative hierarchical K-means tree (D-HKTree), which combines the advantages of both learning-based and NN-based classifiers. The complexity of the D-HKTree grows only sublinearly with the number of categories, which is much better than the recent hierarchical support vector machine based methods. The memory requirement is an order of magnitude less than that of the recent Naïve Bayesian NN-based approaches. The proposed D-HKTree classification scheme is evaluated on several challenging benchmark databases and achieves state-of-the-art accuracies with significantly lower computation cost and memory requirements.
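As a toy illustration of the hierarchical K-means tree idea behind D-HKTree (recursively cluster the data so that a query descends one branch per level, giving sublinear lookup cost), here is a sketch built on scikit-learn; the function names and parameters are ours, and the discriminative layer that distinguishes D-HKTree from a plain HK-tree is omitted.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_hk_tree(X, branching=4, min_leaf=16, depth=0, max_depth=8):
    """Recursively cluster X into `branching` children until leaves are small."""
    if len(X) <= min_leaf or depth >= max_depth:
        return {"leaf": True, "points": X}
    km = KMeans(n_clusters=branching, n_init=4, random_state=0).fit(X)
    children = [build_hk_tree(X[km.labels_ == c], branching, min_leaf,
                              depth + 1, max_depth) for c in range(branching)]
    return {"leaf": False, "km": km, "children": children}

def query(node, x):
    """Descend one branch per level, then scan only the reached leaf."""
    while not node["leaf"]:
        c = node["km"].predict(x.reshape(1, -1))[0]
        node = node["children"][c]
    pts = node["points"]
    return pts[np.argmin(((pts - x) ** 2).sum(axis=1))]

X = np.random.default_rng(0).normal(size=(2000, 8))
tree = build_hk_tree(X)
print(query(tree, X[0]))  # nearest stored point within the visited leaf
```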
Criterion for correct recalls in associative-memory neural networks
NASA Astrophysics Data System (ADS)
Ji, Han-Bing
1992-12-01
A novel weighted outer-product learning (WOPL) scheme for associative memory neural networks (AMNNs) is presented. In the scheme, each fundamental memory is allocated a learning weight to direct its correct recall. Both the Hopfield and multiple-training models are instances of the WOPL model with certain sets of learning weights. A necessary condition for choosing learning weights that preserve the convergence property of the WOPL model is obtained through neural dynamics. A criterion for choosing learning weights for correct associative recall of the fundamental memories is proposed. An important parameter called the signal-to-noise ratio gain (SNRG) is devised, and it is found empirically that each SNRG has its own threshold value, meaning that a fundamental memory can be correctly recalled when its corresponding SNRG is greater than or equal to that threshold. Furthermore, a theorem is given, and theoretical results on the conditions on SNRGs and learning weights for good associative recall performance of the WOPL model are obtained. In principle, when all SNRGs or learning weights satisfy the theoretically obtained conditions, the asymptotic storage capacity of the WOPL model grows at the greatest rate known, in a certain stochastic sense, for AMNNs, and the WOPL model can thus achieve correct recall of all fundamental memories. Representative computer simulations confirm the criterion and the theoretical analysis.
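Concretely, the WOPL rule builds the connection matrix as W = sum_k w_k x_k x_k^T with zero diagonal, reducing to Hopfield's outer-product rule when every learning weight w_k = 1. A minimal numpy sketch with illustrative bipolar patterns and weights (not taken from the paper):

```python
import numpy as np

def wopl_weights(patterns, weights):
    """Weighted outer-product learning: W = sum_k w_k * x_k x_k^T, zero diagonal."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for x, w in zip(patterns, weights):
        W += w * np.outer(x, x)
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, probe, steps=20):
    """Synchronous recall: iterate sign(Wx) until a fixed point (or step limit)."""
    x = probe.copy()
    for _ in range(steps):
        x_new = np.where(W @ x >= 0, 1, -1)
        if np.array_equal(x_new, x):
            break
        x = x_new
    return x

# Example: two bipolar memories; the second gets a larger learning weight.
mems = np.array([[1, -1, 1, -1, 1, -1], [1, 1, -1, -1, 1, 1]])
W = wopl_weights(mems, weights=[1.0, 2.0])
noisy = np.array([1, 1, -1, -1, -1, 1])   # corrupted copy of the second memory
print(recall(W, noisy))                   # recovers the second memory
```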
Efficient checkpointing schemes for depletion perturbation solutions on memory-limited architectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stripling, H. F.; Adams, M. L.; Hawkins, W. D.
2013-07-01
We describe a methodology for decreasing the memory footprint and machine I/O load associated with the need to access a forward solution during an adjoint solve. Specifically, we are interested in the depletion perturbation equations, where terms in the adjoint Bateman and transport equations depend on the forward flux solution. Checkpointing is the procedure of storing snapshots of the forward solution to disk and using these snapshots to recompute the parts of the forward solution that are necessary for the adjoint solve. For large problems, however, the storage cost of just a few copies of an angular flux vector can exceed the available RAM on the host machine. We propose a methodology that does not checkpoint the angular flux vector; instead, we write and store converged source moments, which are typically of a much lower dimension than the angular flux solution. This reduces the memory footprint and I/O load of the problem, but requires that we perform single sweeps to reconstruct flux vectors on demand. We argue that this trade-off is exactly the kind of algorithm that will scale on advanced, memory-limited architectures. We analyze the cost, in terms of FLOPS and memory footprint, of five checkpointing schemes. We also provide computational results that support the analysis and show that the memory-for-work trade-off does improve time to solution. (authors)
From Three-Photon Greenberger-Horne-Zeilinger States to Ballistic Universal Quantum Computation.
Gimeno-Segovia, Mercedes; Shadbolt, Pete; Browne, Dan E; Rudolph, Terry
2015-07-10
Single photons, manipulated using integrated linear optics, constitute a promising platform for universal quantum computation. A series of increasingly efficient proposals have shown linear-optical quantum computing to be formally scalable. However, existing schemes typically require extensive adaptive switching, which is experimentally challenging and noisy, thousands of photon sources per renormalized qubit, and/or large quantum memories for repeat-until-success strategies. Our work overcomes all these problems. We present a scheme to construct a cluster state universal for quantum computation, which uses no adaptive switching, no large memories, and which is at least an order of magnitude more resource efficient than previous passive schemes. Unlike previous proposals, it is constructed entirely from loss-detecting gates and offers robustness to photon loss. Even without the use of an active loss-tolerant encoding, our scheme naturally tolerates a total loss rate of ∼1.6% in the photons detected in the gates. This scheme uses only 3-photon Greenberger-Horne-Zeilinger states as a resource, together with a passive linear-optical network. We fully describe and model the iterative process of cluster generation, including photon loss and gate failure. This demonstrates that building a linear-optical quantum computer need be less challenging than previously thought.
Compact continuous-variable entanglement distillation.
Datta, Animesh; Zhang, Lijian; Nunn, Joshua; Langford, Nathan K; Feito, Alvaro; Plenio, Martin B; Walmsley, Ian A
2012-02-10
We introduce a new scheme for continuous-variable entanglement distillation that requires only linear temporal and constant physical or spatial resources. Distillation is the process by which high-quality entanglement may be distributed between distant nodes of a network in the unavoidable presence of decoherence. The known versions of this protocol scale exponentially in space and doubly exponentially in time. Our optimal scheme therefore provides exponential improvements over existing protocols. It uses a fixed-resource module, an entanglement distillery, comprising only four quantum memories of at most 50% storage efficiency, and allows a feasible experimental implementation. Tangible quantum advantages are obtainable by using existing off-resonant Raman quantum memories outside their conventional role of storage.
Error recovery in shared memory multiprocessors using private caches
NASA Technical Reports Server (NTRS)
Wu, Kun-Lung; Fuchs, W. Kent; Patel, Janak H.
1990-01-01
The problem of recovering from processor transient faults in shared memory multiprocessor systems is examined. A user-transparent checkpointing and recovery scheme using private caches is presented. Processes can recover from errors due to faulty processors by restarting from the checkpointed computation state. Implementation techniques using checkpoint identifiers and recovery stacks are examined as a means of reducing performance degradation in processor utilization during normal execution. This cache-based checkpointing technique prevents rollback propagation, provides rapid recovery, and can be integrated into standard cache coherence protocols. An analytical model is used to estimate the relative performance of the scheme during normal execution. Extensions to take error latency into account are presented.
Reducing Interprocessor Dependence in Recoverable Distributed Shared Memory
NASA Technical Reports Server (NTRS)
Janssens, Bob; Fuchs, W. Kent
1994-01-01
Checkpointing techniques in parallel systems use dependency tracking and/or message logging to ensure that a system rolls back to a consistent state. Traditional dependency tracking in distributed shared memory (DSM) systems is expensive because of high communication frequency. In this paper we show that, if designed correctly, a DSM system only needs to consider dependencies due to the transfer of blocks of data, resulting in reduced dependency tracking overhead and reduced potential for rollback propagation. We develop an ownership timestamp scheme to tolerate the loss of block state information and develop a passive server model of execution where interactions between processors are considered atomic. With our scheme, dependencies are significantly reduced compared to the traditional message-passing model.
Efficient multiuser quantum cryptography network based on entanglement.
Xue, Peng; Wang, Kunkun; Wang, Xiaoping
2017-04-04
We present an efficient quantum key distribution protocol with a certain entangled state to solve a special cryptographic task. Also, we provide a proof of security of this protocol by generalizing the proof of the modified Lo-Chau scheme. Based on this two-user scheme, a quantum cryptography network protocol is proposed without any quantum memory.
Efficient multiuser quantum cryptography network based on entanglement
Xue, Peng; Wang, Kunkun; Wang, Xiaoping
2017-01-01
We present an efficient quantum key distribution protocol with a certain entangled state to solve a special cryptographic task. Also, we provide a proof of security of this protocol by generalizing the proof of the modified Lo-Chau scheme. Based on this two-user scheme, a quantum cryptography network protocol is proposed without any quantum memory. PMID:28374854
Efficient multiuser quantum cryptography network based on entanglement
NASA Astrophysics Data System (ADS)
Xue, Peng; Wang, Kunkun; Wang, Xiaoping
2017-04-01
We present an efficient quantum key distribution protocol with a certain entangled state to solve a special cryptographic task. Also, we provide a proof of security of this protocol by generalizing the proof of the modified Lo-Chau scheme. Based on this two-user scheme, a quantum cryptography network protocol is proposed without any quantum memory.
Nonlinear Fluid Computations in a Distributed Environment
NASA Technical Reports Server (NTRS)
Atwood, Christopher A.; Smith, Merritt H.
1995-01-01
The performance of a loosely and tightly-coupled workstation cluster is compared against a conventional vector supercomputer for the solution of the Reynolds-averaged Navier-Stokes equations. The application geometries include a transonic airfoil, a tiltrotor wing/fuselage, and a wing/body/empennage/nacelle transport. Decomposition is of the manager-worker type, with solution of one grid zone per worker process coupled using the PVM message passing library. Task allocation is determined by grid size and processor speed, subject to available memory penalties. Each fluid zone is computed using an implicit diagonal scheme in an overset mesh framework, while relative body motion is accomplished using an additional worker process to re-establish grid communication.
Flash memory management system and method utilizing multiple block list windows
NASA Technical Reports Server (NTRS)
Chow, James (Inventor); Gender, Thomas K. (Inventor)
2005-01-01
The present invention provides a flash memory management system and method with increased performance. The flash memory management system provides the ability to efficiently manage and allocate flash memory use in a way that improves reliability and longevity, while maintaining good performance levels. The flash memory management system includes a free block mechanism, a disk maintenance mechanism, and a bad block detection mechanism. The free block mechanism provides efficient sorting of free blocks to facilitate selecting low use blocks for writing. The disk maintenance mechanism provides for the ability to efficiently clean flash memory blocks during processor idle times. The bad block detection mechanism provides the ability to better detect when a block of flash memory is likely to go bad. The flash status mechanism stores information in fast access memory that describes the content and status of the data in the flash disk. The new bank detection mechanism provides the ability to automatically detect when new banks of flash memory are added to the system. Together, these mechanisms provide a flash memory management system that can improve the operational efficiency of systems that utilize flash memory.
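As a rough sketch of the free block mechanism described above (steering writes to low-use blocks), the pool below keeps free flash blocks in a min-heap keyed by erase count; the class and method names are hypothetical, not taken from the patent.

```python
import heapq

class FreeBlockPool:
    """Keep free flash blocks in a min-heap keyed by erase count,
    so writes are steered to the least-worn block (wear leveling)."""
    def __init__(self):
        self._heap = []  # entries are (erase_count, block_id)

    def release(self, block_id, erase_count):
        """Return an erased block to the free pool."""
        heapq.heappush(self._heap, (erase_count, block_id))

    def allocate(self):
        """Hand out the free block with the fewest erase cycles."""
        erase_count, block_id = heapq.heappop(self._heap)
        return block_id, erase_count

pool = FreeBlockPool()
for bid, erases in [(0, 12), (1, 3), (2, 7)]:
    pool.release(bid, erases)
print(pool.allocate())   # -> (1, 3): the least-used block is chosen first
```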
Fixed-Rate Compressed Floating-Point Arrays.
Lindstrom, Peter
2014-12-01
Current compression schemes for floating-point data commonly take fixed-precision values and compress them to a variable-length bit stream, complicating memory management and random access. We present a fixed-rate, near-lossless compression scheme that maps small blocks of 4^d values in d dimensions to a fixed, user-specified number of bits per block, thereby allowing read and write random access to compressed floating-point data at block granularity. Our approach is inspired by fixed-rate texture compression methods widely adopted in graphics hardware, but has been tailored to the high dynamic range and precision demands of scientific applications. Our compressor is based on a new, lifted, orthogonal block transform and embedded coding, allowing each per-block bit stream to be truncated at any point if desired, thus facilitating bit rate selection using a single compression scheme. To avoid compression or decompression upon every data access, we employ a software write-back cache of uncompressed blocks. Our compressor has been designed with computational simplicity and speed in mind to allow for the possibility of a hardware implementation, and uses only a small number of fixed-point arithmetic operations per compressed value. We demonstrate the viability and benefits of lossy compression in several applications, including visualization, quantitative data analysis, and numerical simulation.
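The key property enabling random access is that every block of 4^d values occupies the same user-specified number of bits, so a block's position in the stream is a closed-form function of its index. A small sketch of that addressing arithmetic (illustrative helpers only, not the paper's codec):

```python
def block_offset_bits(block_index, bits_per_block):
    """Fixed-rate layout: block i starts at bit i * bits_per_block."""
    return block_index * bits_per_block

def block_index_3d(i, j, k, nx, ny):
    """Linear index of the 4x4x4 block containing element (i, j, k)
    of an nx x ny x nz array (nx, ny assumed to be multiples of 4)."""
    bx, by, bz = i // 4, j // 4, k // 4
    return bx + (nx // 4) * (by + (ny // 4) * bz)

# Element (5, 9, 2) of a 64^3 array at 512 bits (64 bytes) per block:
idx = block_index_3d(5, 9, 2, 64, 64)
print(idx, block_offset_bits(idx, 512) // 8, "byte offset")
```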
Power Impact of Loop Buffer Schemes for Biomedical Wireless Sensor Nodes
Artes, Antonio; Ayala, Jose L.; Catthoor, Francky
2012-01-01
Instruction memory organisations are pointed out as one of the major sources of energy consumption in embedded systems. As these systems are characterised by restrictive resources and a low-energy budget, any enhancement in this component makes it possible not only to decrease the energy consumption but also to distribute the energy budget better throughout the system. Loop buffering is an effective scheme to reduce energy consumption in instruction memory organisations. In this paper, the loop buffer concept is applied to real-life embedded applications that are widely used in biomedical Wireless Sensor Nodes, to show which loop buffer scheme is more suitable for applications with a given behaviour. Post-layout simulations demonstrate that a trade-off exists between the complexity of the loop buffer architecture and the energy savings of utilising it. Therefore, the use of loop buffer architectures to optimise the instruction memory organisation from the energy-efficiency point of view should be evaluated carefully, taking into account two factors: (1) the percentage of the execution time of the application that is related to the execution of the loops, and (2) the distribution of the execution time percentage over each one of the loops that form the application. PMID:23202202
On ways to overcome the magical capacity limit of working memory.
Turi, Zsolt; Alekseichuk, Ivan; Paulus, Walter
2018-04-01
The ability to simultaneously process and maintain multiple pieces of information is limited. Over the past 50 years, observational methods have provided a large amount of insight regarding the neural mechanisms that underpin the mental capacity that we refer to as "working memory." More than 20 years ago, a neural coding scheme was proposed for working memory. As a result of technological developments, we can now not only observe but can also influence brain rhythms in humans. Building on these novel developments, we have begun to externally control brain oscillations in order to extend the limits of working memory.
Multimodal properties and dynamics of gradient echo quantum memory.
Hétet, G; Longdell, J J; Sellars, M J; Lam, P K; Buchler, B C
2008-11-14
We investigate the properties of a recently proposed gradient echo memory (GEM) scheme for information mapping between optical and atomic systems. We show that GEM can be described by the dynamic formation of polaritons in k space. This picture highlights the flexibility and robustness with regards to the external control of the storage process. Our results also show that, as GEM is a frequency-encoding memory, it can accurately preserve the shape of signals that have large time-bandwidth products, even at moderate optical depths. At higher optical depths, we show that GEM is a high fidelity multimode quantum memory.
NASA Astrophysics Data System (ADS)
Lai, Siyan; Xu, Ying; Shao, Bo; Guo, Menghan; Lin, Xiaola
2017-04-01
In this paper we study the Monte Carlo method for solving systems of linear algebraic equations (SLAE) based on shared memory. Former research demonstrated that GPUs can effectively speed up the computations of this problem. Our purpose is to optimize the Monte Carlo simulation specifically for the GPU memory architecture. Random numbers are organized to be stored in shared memory, which accelerates the parallel algorithm. Bank conflicts can be avoided by our Collaborative Thread Arrays (CTA) scheme. The results of experiments show that the shared-memory-based strategy can speed up the computations by up to 3X.
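For background, the Monte Carlo treatment of an SLAE written as x = Hx + b estimates components of x by averaging weighted random walks whose transition probabilities follow |H|. A CPU-only numpy sketch of that estimator (the paper's GPU shared-memory layout and CTA scheme are not modeled here):

```python
import numpy as np

def mc_solve_component(H, b, i, walks=5000, max_len=30, rng=None):
    """Estimate x_i for x = H x + b via the Neumann-series random walk.
    Assumes H has no all-zero rows and spectral radius < 1."""
    rng = rng or np.random.default_rng(0)
    n = len(b)
    # Transition probabilities proportional to |H| along each row.
    p = np.abs(H) / np.abs(H).sum(axis=1, keepdims=True)
    total = 0.0
    for _ in range(walks):
        k, weight, score = i, 1.0, b[i]
        for _ in range(max_len):
            j = rng.choice(n, p=p[k])
            weight *= H[k, j] / p[k, j]   # importance-sampling weight
            score += weight * b[j]
            k = j
        total += score
    return total / walks

H = np.array([[0.1, 0.2], [0.3, 0.1]])
b = np.array([1.0, 2.0])
# Monte Carlo estimate vs. the direct solution of (I - H) x = b:
print(mc_solve_component(H, b, 0), np.linalg.solve(np.eye(2) - H, b)[0])
```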
Nonvolatile memory with Co-SiO2 core-shell nanocrystals as charge storage nodes in floating gate
NASA Astrophysics Data System (ADS)
Liu, Hai; Ferrer, Domingo A.; Ferdousi, Fahmida; Banerjee, Sanjay K.
2009-11-01
In this letter, we report a nanocrystal floating-gate memory with Co-SiO2 core-shell nanocrystal charge storage nodes. By using a water-in-oil microemulsion scheme, Co-SiO2 core-shell nanocrystals were synthesized and closely packed to achieve a high-density matrix in the floating gate without aggregation. The insulator shell also helps increase the thermal stability of the nanocrystal metal core during the fabrication process, improving memory performance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wojahn, Christopher K.
2015-10-20
This HDL code (hereafter referred to as "software") implements circuitry in Xilinx Virtex-5QV Field Programmable Gate Array (FPGA) hardware. This software allows the device to self-check the consistency of its own configuration memory for radiation-induced errors. The software then provides the capability to correct any single-bit errors detected in the memory using the device's inherent circuitry, or reload corrupted memory frames when larger errors occur that cannot be corrected with the device's built-in error correction and detection scheme.
NASA Astrophysics Data System (ADS)
Li, Will X. Y.; Cui, Ke; Zhang, Wei
2017-04-01
Cognitive neural prosthesis is a manmade device which can be used to restore or compensate for lost human cognitive modalities. The generalized Laguerre-Volterra (GLV) network serves as a robust mathematical underpinning for the development of such prosthetic instruments. In this paper, a hardware implementation scheme of the Gauss error function for the GLV network targeting reconfigurable platforms is reported. Numerical approximations are formulated which transform the computation of the nonelementary function into combinational operations of elementary functions, so that memory-intensive look-up table (LUT) based approaches can be circumvented. The computational precision is made adjustable with the utilization of an error compensation scheme, which is proposed based on experimental observation of the mathematical characteristics of the error trajectory. The precision can be further customized by exploiting the run-time characteristics of the reconfigurable system. Compared to the polynomial expansion based implementation scheme, the utilization of slice LUTs, occupied slices, and DSP48E1s on a Xilinx XC6VLX240T field-programmable gate array has decreased by 94.2%, 94.1%, and 90.0%, respectively. Compared to the look-up table based scheme, 1.0 × 10^17 bits of storage can be spared under the maximum allowable error of 1.0 × 10^-3. The proposed implementation scheme can be employed in the study of large-scale neural ensemble activity and in the design and development of neural prosthetic devices.
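As a software analogue of replacing LUTs with elementary operations, the sketch below evaluates the Gauss error function with the classic Abramowitz and Stegun rational approximation 7.1.26, which needs only multiplies, adds, and one exponential. This particular formula is our illustrative choice; it is not claimed to be the approximation or error-compensation scheme used in the paper.

```python
import math

def erf_approx(x):
    """Abramowitz & Stegun 7.1.26: erf via elementary ops only (|err| < 1.5e-7)."""
    sign = -1.0 if x < 0 else 1.0
    x = abs(x)
    t = 1.0 / (1.0 + 0.3275911 * x)
    # Degree-5 polynomial in t, evaluated in Horner form.
    poly = t * (0.254829592 + t * (-0.284496736 + t * (1.421413741
           + t * (-1.453152027 + t * 1.061405429))))
    return sign * (1.0 - poly * math.exp(-x * x))

print(erf_approx(0.5), math.erf(0.5))  # agree to about 7 decimal places
```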
Box schemes and their implementation on the iPSC/860
NASA Technical Reports Server (NTRS)
Chattot, J. J.; Merriam, M. L.
1991-01-01
Research on algorithms for efficiently solving fluid flow problems on massively parallel computers is continued in the present paper. Attention is given to the implementation of a box scheme on the iPSC/860, a massively parallel computer with a peak speed of 10 Gflops and a memory of 128 Mwords. A domain decomposition approach to parallelism is used.
A Blocked Linear Method for Optimizing Large Parameter Sets in Variational Monte Carlo
Zhao, Luning; Neuscamman, Eric
2017-05-17
We present a modification to variational Monte Carlo's linear method optimization scheme that addresses a critical memory bottleneck while maintaining compatibility with both the traditional ground state variational principle and our recently introduced variational principle for excited states. For wave function ansatzes with tens of thousands of variables, our modification reduces the required memory per parallel process from tens of gigabytes to hundreds of megabytes, making the methodology a much better fit for modern supercomputer architectures in which data communication and per-process memory consumption are primary concerns. We verify the efficacy of the new optimization scheme in small molecule tests involving both the Hilbert space Jastrow antisymmetric geminal power ansatz and real space multi-Slater Jastrow expansions. Satisfied with its performance, we have added the optimizer to the QMCPACK software package, with which we demonstrate on a hydrogen ring a prototype approach for making systematically convergent, non-perturbative predictions of Mott insulators' optical band gaps.
A mixed parallel strategy for the solution of coupled multi-scale problems at finite strains
NASA Astrophysics Data System (ADS)
Lopes, I. A. Rodrigues; Pires, F. M. Andrade; Reis, F. J. P.
2018-02-01
A mixed parallel strategy for the solution of homogenization-based multi-scale constitutive problems undergoing finite strains is proposed. The approach aims to reduce the computational time and memory requirements of non-linear coupled simulations that use finite element discretization at both scales (FE^2). In the first level of the algorithm, a non-conforming domain decomposition technique, based on the FETI method combined with a mortar discretization at the interface of macroscopic subdomains, is employed. A master-slave scheme, which distributes tasks by macroscopic element and adopts dynamic scheduling, is then used for each macroscopic subdomain composing the second level of the algorithm. This strategy allows the parallelization of FE^2 simulations in computers with either shared memory or distributed memory architectures. The proposed strategy preserves the quadratic rates of asymptotic convergence that characterize the Newton-Raphson scheme. Several examples are presented to demonstrate the robustness and efficiency of the proposed parallel strategy.
Gradient Echo Quantum Memory in Warm Atomic Vapor
Pinel, Olivier; Hosseini, Mahdi; Sparkes, Ben M.; Everett, Jesse L.; Higginbottom, Daniel; Campbell, Geoff T.; Lam, Ping Koy; Buchler, Ben C.
2013-01-01
Gradient echo memory (GEM) is a protocol for storing optical quantum states of light in atomic ensembles. The primary motivation for such a technology is that quantum key distribution (QKD), which uses Heisenberg uncertainty to guarantee security of cryptographic keys, is limited in transmission distance. The development of a quantum repeater is a possible path to extend QKD range, but a repeater will need a quantum memory. In our experiments we use a gas of rubidium 87 vapor that is contained in a warm gas cell. This makes the scheme particularly simple. It is also a highly versatile scheme that enables in-memory refinement of the stored state, such as frequency shifting and bandwidth manipulation. The basis of the GEM protocol is to absorb the light into an ensemble of atoms that has been prepared in a magnetic field gradient. The reversal of this gradient leads to rephasing of the atomic polarization and thus recall of the stored optical state. We will outline how we prepare the atoms and this gradient and also describe some of the pitfalls that need to be avoided, in particular four-wave mixing, which can give rise to optical gain. PMID:24300586
Gradient echo quantum memory in warm atomic vapor.
Pinel, Olivier; Hosseini, Mahdi; Sparkes, Ben M; Everett, Jesse L; Higginbottom, Daniel; Campbell, Geoff T; Lam, Ping Koy; Buchler, Ben C
2013-11-11
Gradient echo memory (GEM) is a protocol for storing optical quantum states of light in atomic ensembles. The primary motivation for such a technology is that quantum key distribution (QKD), which uses Heisenberg uncertainty to guarantee security of cryptographic keys, is limited in transmission distance. The development of a quantum repeater is a possible path to extend QKD range, but a repeater will need a quantum memory. In our experiments we use a gas of rubidium 87 vapor that is contained in a warm gas cell. This makes the scheme particularly simple. It is also a highly versatile scheme that enables in-memory refinement of the stored state, such as frequency shifting and bandwidth manipulation. The basis of the GEM protocol is to absorb the light into an ensemble of atoms that has been prepared in a magnetic field gradient. The reversal of this gradient leads to rephasing of the atomic polarization and thus recall of the stored optical state. We will outline how we prepare the atoms and this gradient and also describe some of the pitfalls that need to be avoided, in particular four-wave mixing, which can give rise to optical gain.
NASA Astrophysics Data System (ADS)
Wang, Jun; Min, Kyeong-Yuk; Chong, Jong-Wha
2010-11-01
Overdrive is commonly used to reduce the liquid-crystal response time and motion blur in liquid-crystal displays (LCDs). However, overdrive requires a large frame memory in order to store the previous frame for reference. In this paper, a high-compression-ratio codec is presented to compress the image data stored in the on-chip frame memory so that only 1 Mbit of on-chip memory is required in the LCD overdrives of mobile devices. The proposed algorithm further compresses the color bitmaps and representative values (RVs) resulting from the block truncation coding (BTC). The color bitmaps are represented by a luminance bitmap, which is further reduced and reconstructed using median filter interpolation in the decoder, while the RVs are compressed using adaptive quantization coding (AQC). Interpolation and AQC can provide three-level compression, which leads to 16 combinations. Using a rate-distortion analysis, we select the three optimal schemes to compress the image data for video graphics array (VGA), wide-VGA LCD, and standard-definition TV applications. Our simulation results demonstrate that the proposed schemes outperform interpolation BTC both in PSNR (by 1.479 to 2.205 dB) and in subjective visual quality.
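For context, the BTC stage that the codec builds on reduces each pixel block to a one-bit map plus two representative values chosen to preserve the block's mean and standard deviation. A minimal sketch of classic BTC on a 4x4 grayscale block (the paper's AQC and interpolation stages are omitted):

```python
import numpy as np

def btc_encode(block):
    """Classic BTC: bitmap = (pixel >= mean); the two representative
    values preserve the block's mean and standard deviation."""
    mean, std = block.mean(), block.std()
    bitmap = block >= mean
    q = bitmap.sum()                      # number of pixels above the mean
    m = block.size
    if q in (0, m):                       # flat block: one level suffices
        return bitmap, mean, mean
    lo = mean - std * np.sqrt(q / (m - q))
    hi = mean + std * np.sqrt((m - q) / q)
    return bitmap, lo, hi

def btc_decode(bitmap, lo, hi):
    return np.where(bitmap, hi, lo)

block = np.array([[12, 200, 13, 11], [15, 210, 205, 14],
                  [12, 13, 208, 11], [10, 12, 14, 202]], dtype=float)
bitmap, lo, hi = btc_encode(block)
print(btc_decode(bitmap, lo, hi))
```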
NASA Technical Reports Server (NTRS)
Zhang, Jun; Ge, Lixin; Kouatchou, Jules
2000-01-01
A new fourth order compact difference scheme for the three dimensional convection diffusion equation with variable coefficients is presented. The novelty of this new difference scheme is that it only requires 15 grid points and that it can be decoupled with two colors. The entire computational grid can be updated in two parallel subsweeps with the Gauss-Seidel type iterative method. This is compared with the known 19 point fourth order compact difference scheme, which requires four colors to decouple the computational grid. Numerical results, with multigrid methods implemented on a shared memory parallel computer, are presented to compare the 15 point and the 19 point fourth order compact schemes.
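To see why color decoupling enables parallel subsweeps, consider red-black (two-color) Gauss-Seidel on the simple 5-point Laplacian: every point of one color depends only on points of the other color, so all points of a color can be updated concurrently. The sketch below illustrates the coloring idea only; it uses the second-order stencil, not the paper's 15-point fourth-order compact scheme.

```python
import numpy as np

def red_black_gauss_seidel(u, f, h, sweeps=100):
    """Two-color Gauss-Seidel for -Laplace(u) = f on a unit-square grid
    with Dirichlet boundary values already stored in u."""
    for _ in range(sweeps):
        for color in (0, 1):  # each color's updates are mutually independent
            for i in range(1, u.shape[0] - 1):
                for j in range(1, u.shape[1] - 1):
                    if (i + j) % 2 == color:
                        u[i, j] = 0.25 * (u[i-1, j] + u[i+1, j]
                                          + u[i, j-1] + u[i, j+1]
                                          + h * h * f[i, j])
    return u

n = 17
u = np.zeros((n, n)); f = np.ones((n, n))
red_black_gauss_seidel(u, f, h=1.0 / (n - 1))
print(u[n // 2, n // 2])  # approximate value at the grid center
```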
Resilient and Robust High Performance Computing Platforms for Scientific Computing Integrity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, Yier
As technology advances, computer systems are subject to increasingly sophisticated cyber-attacks that compromise both their security and integrity. High performance computing platforms used in commercial and scientific applications involving sensitive, or even classified data, are frequently targeted by powerful adversaries. This situation is made worse by a lack of fundamental security solutions that both perform efficiently and are effective at preventing threats. Current security solutions fail to address the threat landscape and ensure the integrity of sensitive data. As challenges rise, both private and public sectors will require robust technologies to protect their computing infrastructure. The research outcomes from this project try to address all these challenges. For example, we present LAZARUS, a novel technique to harden kernel Address Space Layout Randomization (KASLR) against paging-based side-channel attacks. In particular, our scheme allows for fine-grained protection of the virtual memory mappings that implement the randomization. We demonstrate the effectiveness of our approach by hardening a recent Linux kernel with LAZARUS, mitigating all of the previously presented side-channel attacks on KASLR. Our extensive evaluation shows that LAZARUS incurs only 0.943% overhead for standard benchmarks, and is therefore highly practical. We also introduced HA2lloc, a hardware-assisted allocator that is capable of leveraging an extended memory management unit to detect memory errors in the heap. We also perform testing using HA2lloc in a simulation environment and find that the approach is capable of preventing common memory vulnerabilities.
Reducing noise in a Raman quantum memory.
Bustard, Philip J; England, Duncan G; Heshami, Khabat; Kupchak, Connor; Sussman, Benjamin J
2016-11-01
Optical quantum memories are an important component of future optical and hybrid quantum technologies. Raman schemes are strong candidates for use with ultrashort optical pulses due to their broad bandwidth; however, the elimination of deleterious four-wave mixing noise from Raman memories is critical for practical applications. Here, we demonstrate a quantum memory using the rotational states of hydrogen molecules at room temperature. Polarization selection rules prohibit four-wave mixing, allowing the storage and retrieval of attenuated coherent states with a mean photon number of 0.9 and a pulse duration of 175 fs. The 1/e memory lifetime is 85.5 ps, demonstrating a time-bandwidth product of ≈480 in a memory that is well suited for use with broadband heralded down-conversion and fiber-based photon sources.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sancho Pitarch, Jose Carlos; Kerbyson, Darren; Lang, Mike
Increasing the core count on current and future processors is posing critical challenges to the memory subsystem to efficiently handle concurrent memory requests. The current trend to cope with this challenge is to increase the number of memory channels available to the processor's memory controller. In this paper we investigate the effectiveness of this approach on the performance of parallel scientific applications. Specifically, we explore the trade-off between employing multiple memory channels per memory controller and the use of multiple memory controllers. Experiments conducted on two current state-of-the-art multicore processors, a 6-core AMD Istanbul and a 4-core Intel Nehalem-EP, for a wide range of production applications show that there is a diminishing return when increasing the number of memory channels per memory controller. In addition, we show that this performance degradation can be efficiently addressed by increasing the ratio of memory controllers to channels while keeping the number of memory channels constant. Significant performance improvements, up to 28%, can be achieved in this scheme when using two memory controllers, each with one channel, compared with one controller with two memory channels.
Multi-GPU and multi-CPU accelerated FDTD scheme for vibroacoustic applications
NASA Astrophysics Data System (ADS)
Francés, J.; Otero, B.; Bleda, S.; Gallego, S.; Neipp, C.; Márquez, A.; Beléndez, A.
2015-06-01
The Finite-Difference Time-Domain (FDTD) method is applied to the analysis of vibroacoustic problems and to study the propagation of longitudinal and transversal waves in stratified media. The potential of the scheme and the relevance of each acceleration strategy for massive FDTD computations are demonstrated in this work. In this paper, we propose two new specific implementations of the bi-dimensional FDTD scheme using multi-CPU and multi-GPU, respectively. In the first implementation, an open source message passing interface (OMPI) has been included in order to massively exploit the resources of a biprocessor station with two Intel Xeon processors. Moreover, regarding the CPU code version, the streaming SIMD extensions (SSE) and the advanced vector extensions (AVX) have been included, together with shared memory approaches that take advantage of multi-core platforms. The second implementation, the multi-GPU code version, is based on the peer-to-peer communications available in CUDA on two GPUs (NVIDIA GTX 670). Subsequently, this paper presents an accurate analysis of the influence of the different code versions, including shared memory approaches, vector instructions and multi-processors (both CPU and GPU), and compares them in order to delimit the degree of improvement obtained with distributed solutions based on multi-CPU and multi-GPU. The performance of both approaches was analysed, and it is demonstrated that adding shared memory schemes to CPU computing substantially improves the performance of vector instructions, enlarging the simulation sizes that use the CPU cache memory efficiently. In this case GPU computing is roughly twice as fast as the fine-tuned CPU version for both one and two nodes. However, for massive computations, explicit vector instructions are not worthwhile, since memory bandwidth is the limiting factor and the performance tends to be the same as that of the sequential version with auto-vectorisation and the shared memory approach. In this scenario GPU computing is the best option, since it provides homogeneous behaviour. More specifically, the speedup of GPU computing reaches an upper limit of 12 for both one and two GPUs, whereas the performance reaches peak values of 80 GFlops and 146 GFlops for one GPU and two GPUs, respectively. Finally, the method is applied to an earth crust profile in order to demonstrate the potential of our approach and the necessity of applying acceleration strategies in this type of application.
Complementary-encoding holographic associative memory using a photorefractive crystal
NASA Astrophysics Data System (ADS)
Yuan, ShiFu; Wu, Minxian; Yan, Yingbai; Jin, Guofan
1996-06-01
We present a holographic implementation of accurate associative memory with only one holographic memory system. In the implementation, the stored and test images are coded by using a complementary-encoding method. The recalled complete image is also a coded image that can be decoded with a decoding mask to obtain the original image or its complement. The experiment shows that complementary encoding can efficiently increase the addressing accuracy in a simple way. As an alternative to the above complementary-encoding method, a scheme that uses a complementary area-encoding method is also proposed for the holographic implementation of gray-level image associative memory with accurate addressing.
Exploring the Use of Discrete Gestures for Authentication
NASA Astrophysics Data System (ADS)
Chong, Ming Ki; Marsden, Gary
Research in user authentication has been a growing field in HCI. Previous studies have shown that people's graphical memory can be used to increase password memorability. On the other hand, with the increasing number of devices with built-in motion sensors, kinesthetic memory (or muscle memory) can also be exploited for authentication. This paper presents a novel knowledge-based authentication scheme, called gesture password, which uses discrete gestures as password elements. The research presents a study of multiple password retention using PINs and gesture passwords. The study reports that although participants could use kinesthetic memory to remember gesture passwords, retention of PINs is far superior to retention of gesture passwords.
Binary synaptic connections based on memory switching in a-Si:H for artificial neural networks
NASA Technical Reports Server (NTRS)
Thakoor, A. P.; Lamb, J. L.; Moopenn, A.; Khanna, S. K.
1987-01-01
A scheme for nonvolatile associative electronic memory storage with high information storage density is proposed which is based on neural network models and which uses a matrix of two-terminal passive interconnections (synapses). It is noted that the massive parallelism in the architecture would require the ON state of a synaptic connection to be unusually weak (highly resistive). Memory switching using a-Si:H along with ballast resistors patterned from amorphous Ge-metal alloys is investigated for a binary programmable read only memory matrix. The fabrication of a 1600 synapse test array of uniform connection strengths and a-Si:H switching elements is discussed.
Chung, Yun Won; Kwon, Jae Kyun; Park, Suwon
2014-01-01
One of the key technologies to support mobility of the mobile station (MS) in mobile communication systems is location management, which consists of location update and paging. In this paper, an improved movement-based location management scheme with two movement thresholds is proposed, considering the bursty data traffic characteristics of packet-switched (PS) services. The analytical modeling of the location update and paging signaling loads of the proposed scheme is developed thoroughly, and the performance of the proposed scheme is compared with that of the conventional scheme. We show that the proposed scheme outperforms the conventional scheme in terms of total signaling load with an appropriate selection of movement thresholds.
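For illustration, the control flow of a movement-based scheme with two thresholds can be sketched as follows: the MS counts cell-boundary crossings and triggers a location update at a threshold that depends on whether a bursty PS session is in progress. The threshold values and the session test below are hypothetical placeholders, not the paper's analytical model.

```python
class MovementBasedLU:
    """Location update after d crossings; a separate (typically smaller)
    threshold applies while a bursty PS session is in progress."""
    def __init__(self, d_idle=5, d_session=2):
        self.d_idle, self.d_session = d_idle, d_session
        self.crossings = 0

    def on_cell_crossing(self, session_active):
        self.crossings += 1
        threshold = self.d_session if session_active else self.d_idle
        if self.crossings >= threshold:
            self.crossings = 0
            return "LOCATION_UPDATE"   # signal sent to the network
        return None

mc = MovementBasedLU()
events = [mc.on_cell_crossing(session_active=False) for _ in range(5)]
print(events)  # the update fires on the 5th crossing when idle
```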
Updating schematic emotional facial expressions in working memory: Response bias and sensitivity.
Tamm, Gerly; Kreegipuu, Kairi; Harro, Jaanus; Cowan, Nelson
2017-01-01
It is unclear whether positive, negative, or neutral emotional expressions have an advantage in short-term recognition. Moreover, it is unclear from previous studies of working memory for emotional faces whether the effects of emotions comprise response bias or sensitivity. The aim of this study was to compare how schematic emotional expressions (sad, angry, scheming, happy, and neutral) are discriminated and recognized in an updating task (2-back recognition) in a representative sample of a birth cohort of young adults. Schematic facial expressions allow control of identity processing, which is separate from expression processing, and have been used extensively in attention research but not much, until now, in working memory research. We found that expressions with a U-curved mouth (i.e., upwardly curved), namely happy and scheming expressions, favoured a bias towards recognition (i.e., towards indicating that the probe and the stimulus in working memory are the same). Other effects of emotional expression were considerably smaller (1-2% of the variance explained) compared to the large proportion of variance that was explained by the physical similarity of the items being compared. We suggest that the nature of the stimuli plays a role in this. The present application of signal detection methodology with emotional, schematic faces in a working memory procedure requiring fast comparisons helps to resolve important contradictions that have emerged in the emotional perception literature. Copyright © 2016 Elsevier B.V. All rights reserved.
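The response-bias versus sensitivity distinction is conventionally quantified with signal detection measures: sensitivity d' = z(H) - z(FA) and criterion c = -(z(H) + z(FA))/2, where z is the inverse normal CDF. A small sketch with made-up counts (the log-linear correction keeps hit and false-alarm rates away from 0 and 1):

```python
from statistics import NormalDist

def dprime_and_bias(hits, misses, false_alarms, correct_rejections):
    """Sensitivity d' and criterion c from a recognition confusion table."""
    z = NormalDist().inv_cdf
    h = (hits + 0.5) / (hits + misses + 1)              # corrected hit rate
    fa = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d_prime = z(h) - z(fa)
    criterion = -(z(h) + z(fa)) / 2  # negative c = liberal ("same") bias
    return d_prime, criterion

print(dprime_and_bias(hits=42, misses=8, false_alarms=15, correct_rejections=35))
```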
NASA Astrophysics Data System (ADS)
Zellmann, Stefan; Percan, Yvonne; Lang, Ulrich
2015-01-01
Reconstruction of 2-d image primitives or of 3-d volumetric primitives is one of the most common operations performed by the rendering components of modern visualization systems. Because this operation is often aided by GPUs, reconstruction is typically restricted to first-order interpolation. With the advent of in situ visualization, the assumption that rendering algorithms are in general executed on GPUs is however no longer adequate. We thus propose a framework that provides versatile texture filtering capabilities: up to third-order reconstruction using various types of cubic filtering and interpolation primitives; cache-optimized algorithms that integrate seamlessly with GPGPU rendering or with software rendering that was optimized for cache-friendly "Structure of Array" (SoA) access patterns; a memory management layer (MML) that gracefully hides the complexities of extra data copies necessary for memory access optimizations such as swizzling, for rendering on GPGPUs, or for reconstruction schemes that rely on pre-filtered data arrays. We prove the effectiveness of our software architecture by integrating it into and validating it using the open source direct volume rendering (DVR) software DeskVOX.
A general purpose subroutine for fast fourier transform on a distributed memory parallel machine
NASA Technical Reports Server (NTRS)
Dubey, A.; Zubair, M.; Grosch, C. E.
1992-01-01
One issue which is central in developing a general purpose Fast Fourier Transform (FFT) subroutine on a distributed memory parallel machine is the data distribution. It is possible that different users would like to use the FFT routine with different data distributions. Thus, there is a need to design FFT schemes on distributed memory parallel machines which can support a variety of data distributions. An FFT implementation on a distributed memory parallel machine which works for a number of data distributions commonly encountered in scientific applications is presented. The problem of rearranging the data after computing the FFT is also addressed. The performance of the implementation is evaluated on the Intel iPSC/860 distributed memory parallel machine.
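One common way to organize an FFT over distributed data is the transpose method: each process transforms its local rows, a global transpose (the all-to-all step) redistributes the data, and rows are transformed again. A single-process numpy sketch of that dataflow, with "process slabs" of rows standing in for the distributed pieces (no actual message passing is modeled):

```python
import numpy as np

def fft2_transpose_method(a, nprocs=4):
    """2-D FFT via row FFT -> transpose -> row FFT, processing one
    'process slab' of rows at a time to mimic a block-row distribution."""
    a = a.astype(complex).copy()
    for r in np.array_split(np.arange(a.shape[0]), nprocs):
        a[r] = np.fft.fft(a[r], axis=1)  # each process: FFT of its local rows
    a = a.T.copy()                       # the all-to-all transpose step
    for r in np.array_split(np.arange(a.shape[0]), nprocs):
        a[r] = np.fft.fft(a[r], axis=1)  # second pass on the transposed rows
    return a.T

x = np.random.default_rng(1).normal(size=(8, 8))
print(np.allclose(fft2_transpose_method(x), np.fft.fft2(x)))  # True
```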
Goose management schemes to resolve conflicts with agriculture: Theory, practice and effects.
Eythórsson, Einar; Tombre, Ingunn M; Madsen, Jesper
2017-03-01
In 2012, the four countries hosting the Svalbard population of pink-footed goose Anser brachyrhynchus along its flyway launched an International Species Management Plan for the population. One of the aims was to reduce conflicts between geese and agriculture to an acceptable level. Since 2006, Norway has offered subsidies to farmers that provide refuge areas for geese on their land. We evaluate the mid-Norwegian goose management subsidy scheme, with a view to its adjustment to prevailing ecological and socio-economic parameters. The analysis indicates that the legitimacy of the scheme is highly dependent on transparency of knowledge management and accountability of the management scheme to the farming community. Among farmers, as well as front-line officials, outcomes of prioritisation processes within the scheme are judged unfair when there is an evident mismatch between payments and genuine damage. We suggest how the scheme can be made fairer and more responsive to ecological changes, within a framework of adaptive management.
Novel memory architecture for video signal processor
NASA Astrophysics Data System (ADS)
Hung, Jen-Sheng; Lin, Chia-Hsing; Jen, Chein-Wei
1993-11-01
An on-chip memory architecture for a video signal processor (VSP) is proposed. This memory structure is a two-level design for the different data localities in video applications. The upper level, Memory A, provides enough storage capacity to reduce the impact of the limited chip I/O bandwidth, and the lower level, Memory B, provides enough data parallelism and flexibility to meet the requirements of multiple reconfigurable pipeline function units in a single VSP chip. The needed memory size is determined by a memory usage analysis of video algorithms and the number of function units. Both levels of memory adopt a dual-port memory scheme to sustain simultaneous read and write operations. In particular, Memory B uses multiple one-read-one-write memory banks to emulate a true multiport memory. Therefore, one can change the configuration of Memory B to several sets of memories with variable numbers of read/write ports by adjusting the bus switches. The numbers of read and write ports in the proposed memory can then meet the requirements of the data-flow patterns in different video coding algorithms. We have finished a prototype memory design using 1.2-μm SPDM SRAM technology and will fabricate it through TSMC in Taiwan.
Large efficiency at telecom wavelength for optical quantum memories.
Dajczgewand, Julián; Le Gouët, Jean-Louis; Louchet-Chauvet, Anne; Chanelière, Thierry
2014-05-01
We implement the ROSE protocol in an erbium-doped solid, compatible with the telecom range. The ROSE scheme is an adaptation of the standard two-pulse photon echo to make it suitable for a quantum memory. We observe a retrieval efficiency of 40% for a weak laser pulse in the forward direction by using specific orientations of the light polarizations, magnetic field, and crystal axes.
Transitional circuitry for studying the properties of DNA
NASA Astrophysics Data System (ADS)
Trubochkina, N.
2018-01-01
The article is devoted to a new view of the structure of DNA as an intellectual scheme possessing the properties of logic and memory. The theory of transitional circuitry, developed by the author for optimal computer circuits, revealed a striking structural similarity between the mathematical models of transitional silicon logic and memory elements in solid-state circuitry and the atomic models of parts of DNA.
Decentralising Zimbabwe’s water management: The case of Guyu-Chelesa irrigation scheme
NASA Astrophysics Data System (ADS)
Tambudzai, Rashirayi; Everisto, Mapedza; Gideon, Zhou
Smallholder irrigation schemes are largely supply driven, excluding the beneficiaries from management decisions and from the choice of the irrigation schemes that would best suit their local needs. It is against this background that the decentralisation framework and the Dublin Principles on Integrated Water Resource Management (IWRM) emphasise the need for a participatory approach to water management. The Zimbabwean government has gone a step further in decentralising the management of irrigation schemes, that is, promoting farmer-managed irrigation schemes so as to ensure effective management of scarce community-based land and water resources. The study set out to investigate the way in which the Guyu-Chelesa irrigation scheme is managed, with specific emphasis on the role of the Irrigation Management Committee (IMC), the level of accountability, and the powers devolved to the IMC. Merrey's 2008 critique of IWRM also informs this study, which views irrigation as going beyond infrastructure by looking at how institutions and decision-making processes play out at various levels, including at the irrigation scheme level. The study was positioned on the hypothesis that 'decentralised or autonomous irrigation management enhances the sustainability and effectiveness of irrigation schemes'. To validate or falsify the stated hypothesis, data was gathered using desk research, in the form of reviews of articles and documents from within the scheme, and field research, in the form of questionnaire surveys, key informant interviews and field observation. The Statistical Package for the Social Sciences was used to analyse data quantitatively, whilst content analysis was utilised to analyse qualitative data thematically. Comparative analysis was carried out, as the Guyu-Chelesa irrigation scheme was compared with other smallholder irrigation schemes' experiences within Zimbabwe and the sub-Saharan African region at large. The findings were that whilst the scheme is a model of a decentralised entity whose importance lies in improving food security and employment creation within the community, it falls short of representing a downwardly accountable decentralised irrigation scheme. The scheme faces various challenges, which include operation below capacity, the absence of specialised technical personnel to address infrastructural breakdowns, uneven distribution of water pressure, an incapacitated Irrigation Management Committee (IMC), the absence of a locally legitimate constitution, compromised beneficiary participation, and unclear lines of communication between the various institutions involved in water management. Understanding decentralisation is important, since one of the key tenets of IWRM is stakeholder participation, which the decentralisation framework interrogates.
NASA Astrophysics Data System (ADS)
Aluguri, R.; Kumar, D.; Simanjuntak, F. M.; Tseng, T.-Y.
2017-09-01
A bipolar transistor selector was connected in series with a resistive switching memory device to study its memory characteristics for application in cross-bar array memory. The metal-oxide-based p-n-p bipolar transistor selector showed good selectivity of about 10^4 with high retention and long endurance, demonstrating its usefulness in cross-bar RRAM devices. Zener tunneling is found to be the main conduction phenomenon behind the high selectivity. The 1BT-1R device demonstrated good memory characteristics, with non-linearity of 2 orders, selectivity of about 2 orders, and long retention of more than 10^5 s. A one-bit-line pull-up scheme shows that a 650 kb cross-bar array made with these 1BT-1R devices works well with more than 10% read margin, proving its potential for future memory technology applications.
Managing Chemotherapy Side Effects: Memory Changes
National Cancer Institute patient guide on managing memory changes as a side effect of chemotherapy: what is causing these changes, and tips for coping, such as getting help to remember things and writing things down.
Extended memory management under RTOS
NASA Technical Reports Server (NTRS)
Plummer, M.
1981-01-01
A technique for extended memory management in ROLM 1666 computers using FORTRAN is presented. A general software system to which the technique is ideally suited is described, along with the memory manager's interface with the system. The protocols by which the manager is invoked are presented, as well as the methods used by the manager.
Efficient scheme for parametric fitting of data in arbitrary dimensions.
Pang, Ning-Ning; Tzeng, Wen-Jer; Kao, Hisen-Ching
2008-07-01
We propose an efficient scheme for parametric fitting expressed in terms of the Legendre polynomials. For continuous systems, our scheme is exact and the derived explicit expression is very helpful for further analytical studies. For discrete systems, our scheme is almost as accurate as the method of singular value decomposition. Through a few numerical examples, we show that our algorithm costs much less CPU time and memory space than the method of singular value decomposition. Thus, our algorithm is very suitable for a large amount of data fitting. In addition, the proposed scheme can also be used to extract the global structure of fluctuating systems. We then derive the exact relation between the correlation function and the detrended variance function of fluctuating systems in arbitrary dimensions and give a general scaling analysis.
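To make the comparison concrete, here is a minimal sketch of a least-squares fit in the Legendre basis using NumPy's legendre module; this is the standard approach the paper benchmarks against singular value decomposition, not the authors' own explicit expressions.

    import numpy as np
    from numpy.polynomial import legendre as L

    # Noisy samples of a smooth function on [-1, 1], the natural domain
    # of the Legendre polynomials.
    rng = np.random.default_rng(0)
    x = np.linspace(-1.0, 1.0, 2000)
    y = np.sin(3 * x) + 0.05 * rng.standard_normal(x.size)

    coef = L.legfit(x, y, deg=10)   # least-squares fit in the Legendre basis
    y_fit = L.legval(x, coef)
    print("rms residual:", np.sqrt(np.mean((y - y_fit) ** 2)))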
Distributed Memory Parallel Computing with SEAWAT
NASA Astrophysics Data System (ADS)
Verkaik, J.; Huizer, S.; van Engelen, J.; Oude Essink, G.; Ram, R.; Vuik, K.
2017-12-01
Fresh groundwater reserves in coastal aquifers are threatened by sea-level rise, extreme weather conditions, increasing urbanization and associated groundwater extraction rates. To counteract these threats, accurate high-resolution numerical models are required to optimize the management of these precious reserves. The major drawbacks of such models are long run times and large memory requirements, which limit their predictive power. Distributed memory parallel computing is an efficient technique for reducing run times and memory requirements, where the problem is divided over multiple processor cores. A new Parallel Krylov Solver (PKS) for SEAWAT is presented. PKS has recently been applied to MODFLOW and includes Conjugate Gradient (CG) and Biconjugate Gradient Stabilized (BiCGSTAB) linear accelerators. Both accelerators are preconditioned by an overlapping additive Schwarz preconditioner such that: (a) subdomains are partitioned using Recursive Coordinate Bisection (RCB) load balancing; (b) each subdomain uses local memory only and communicates with other subdomains by Message Passing Interface (MPI) within the linear accelerator; and (c) the solver is fully integrated in SEAWAT. Within SEAWAT, the PKS-CG solver replaces the Preconditioned Conjugate Gradient (PCG) solver for solving the variable-density groundwater flow equation, and the PKS-BiCGSTAB solver replaces the Generalized Conjugate Gradient (GCG) solver for solving the advection-diffusion equation. PKS supports the third-order Total Variation Diminishing (TVD) scheme for computing advection. Benchmarks were performed on the Dutch national supercomputer (https://userinfo.surfsara.nl/systems/cartesius) using up to 128 cores, for a synthetic 3D Henry model (100 million cells) and the real-life Sand Engine model (about 10 million cells). The Sand Engine model was used to investigate the potential effect of the long-term morphological evolution of a large sand replenishment and climate change on fresh groundwater resources. Speed-ups of up to 40 were obtained with the new PKS solver.
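As an aside for readers unfamiliar with RCB, the partitioning step can be sketched in a few lines: recursively split the largest part at the median of its widest coordinate axis. This toy version (synthetic points, no MPI) only illustrates the load-balancing idea, not the PKS implementation.

    import numpy as np

    def rcb(points, n_parts):
        # Recursive Coordinate Bisection: repeatedly split the largest part
        # at the median of its widest coordinate axis.
        parts = [np.arange(len(points))]
        while len(parts) < n_parts:
            parts.sort(key=len, reverse=True)
            idx = parts.pop(0)
            ext = points[idx].max(axis=0) - points[idx].min(axis=0)
            axis = int(np.argmax(ext))                 # widest extent
            order = np.argsort(points[idx][:, axis])
            half = len(idx) // 2
            parts += [idx[order[:half]], idx[order[half:]]]
        return parts

    cells = np.random.default_rng(1).random((100_000, 3))  # synthetic cell centers
    for i, p in enumerate(rcb(cells, 8)):
        print(f"subdomain {i}: {len(p)} cells")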
Memory Applications Using Resonant Tunneling Diodes
NASA Astrophysics Data System (ADS)
Shieh, Ming-Huei
Resonant tunneling diodes (RTDs) producing unique folding current-voltage (I-V) characteristics have attracted considerable research attention due to their promising applications in signal processing and multi-valued logic. The negative differential resistance of RTDs renders the operating points self-latching and stable. We have proposed a multiple-dimensional multiple-state RTD-based static random-access memory (SRAM) cell in which the number of stable states can be significantly increased to (N+1)^m or more for m N-peak RTDs connected in series. The proposed cells take advantage of the hysteresis and folding I-V characteristics of RTDs. Several cell designs are presented and evaluated. A two-dimensional nine-state memory cell has been implemented and demonstrated by a breadboard circuit using two 2-peak RTDs. The hysteresis phenomenon in a series of RTDs is also further analyzed. The switch model provided in SPICE 3 can be utilized to simulate the hysteretic I-V characteristics of RTDs, and a simple macro-circuit is described to model the hysteretic I-V characteristic of an RTD for circuit simulation. A new scheme for storing word-wide multiple-bit information very efficiently in a single memory cell using RTDs is proposed. An efficient and inexpensive periphery circuit to read from and write into the cell is also described. Simulation results on the design of a 3-bit memory cell scheme using one-peak RTDs are also presented. Finally, a binary transistor-less memory cell composed only of a pair of RTDs and an ordinary rectifier diode is presented and investigated. A simple means of reading and writing information from or into this memory cell is also discussed.
NASA Astrophysics Data System (ADS)
Yang, L. M.; Shu, C.; Yang, W. M.; Wu, J.
2018-04-01
High consumption of memory and computational effort is the major barrier preventing the widespread use of the discrete velocity method (DVM) in the simulation of flows in all flow regimes. To overcome this drawback, an implicit DVM with a memory reduction technique for solving a steady discrete velocity Boltzmann equation (DVBE) is presented in this work. In the method, the distribution functions in the whole discrete velocity space do not need to be stored; they are calculated from the macroscopic flow variables. As a result, its memory requirement is of the same order as that of a conventional Euler/Navier-Stokes solver. At the same time, it is more efficient than the explicit DVM for the simulation of various flows. To make the method efficient for solving flow problems in all flow regimes, a prediction step is introduced to estimate the local equilibrium state of the DVBE. In the prediction step, the distribution function at the cell interface is calculated by the local solution of the DVBE. When the cell size is less than the mean free path, the prediction step has almost no effect on the solution. However, when the cell size is much larger than the mean free path, the prediction step dominates the solution so as to provide reasonable results in such a flow regime. In addition, to further improve the computational efficiency of the developed scheme in the continuum flow regime, the implicit technique is also introduced into the prediction step. Numerical results showed that the proposed implicit scheme provides reasonable results in all flow regimes and significantly increases the computational efficiency in the continuum flow regime as compared with existing DVM solvers.
Ensuring correct rollback recovery in distributed shared memory systems
NASA Technical Reports Server (NTRS)
Janssens, Bob; Fuchs, W. Kent
1995-01-01
Distributed shared memory (DSM) implemented on a cluster of workstations is an increasingly attractive platform for executing parallel scientific applications. Checkpointing and rollback techniques can be used in such a system to allow the computation to progress in spite of the temporary failure of one or more processing nodes. This paper presents the design of an independent checkpointing method for DSM that takes advantage of DSM's specific properties to reduce error-free and rollback overhead. The scheme reduces the dependencies that need to be considered for correct rollback to those resulting from transfers of pages. Furthermore, in-transit messages can be recovered without the use of logging. We extend the scheme to a DSM implementation using lazy release consistency, where the frequency of dependencies is further reduced.
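The paper's central bookkeeping idea, that only page transfers create inter-node rollback dependencies, can be illustrated with a toy tracker; the class and method names here are hypothetical, and real recovery must also consider checkpoint positions.

    from collections import defaultdict, deque

    class PageDependencyTracker:
        """Record which nodes consumed pages from which producers since
        their last checkpoints; a failure then forces a rollback of only
        the transitive consumers of the failed node's pages."""
        def __init__(self):
            self.consumers = defaultdict(set)   # producer -> consumer nodes

        def page_transfer(self, src, dst):
            if src != dst:
                self.consumers[src].add(dst)

        def rollback_set(self, failed):
            # BFS over the dependency edges gives the transitive closure.
            seen, frontier = {failed}, deque([failed])
            while frontier:
                node = frontier.popleft()
                for c in self.consumers[node] - seen:
                    seen.add(c)
                    frontier.append(c)
            return seen

    t = PageDependencyTracker()
    t.page_transfer(0, 1)          # node 1 read a page produced by node 0
    t.page_transfer(1, 2)
    t.page_transfer(3, 4)
    print(t.rollback_set(0))       # {0, 1, 2}: nodes 3 and 4 are unaffected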
A voting-based star identification algorithm utilizing local and global distribution
NASA Astrophysics Data System (ADS)
Fan, Qiaoyun; Zhong, Xuyang; Sun, Junhua
2018-03-01
A novel star identification algorithm based on a voting scheme is presented in this paper. In the proposed algorithm, the global and local distributions of sensor stars are fully utilized, and a stratified voting scheme is adopted to obtain the candidates for sensor stars. Database optimization is employed to reduce the memory requirement and improve the robustness of the proposed algorithm. Simulation shows that the proposed algorithm achieves a 99.81% identification rate with 2-pixel standard deviation positional noise and 0.322-Mv magnitude noise. Compared with two similar algorithms, the proposed algorithm is more robust to noise, and its average identification time and required memory are lower. Furthermore, a real sky test shows that the proposed algorithm performs well on real star images.
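A toy version of pairwise-distance voting (using the global distribution only, without the paper's stratification or database optimization) might look like the following sketch.

    import numpy as np

    def vote_identify(obs, catalog, tol=5e-4):
        # Vote for (observed star -> catalog star) assignments from every
        # observed pair whose angular distance matches a catalog pair.
        cat_ang = np.arccos(np.clip(catalog @ catalog.T, -1.0, 1.0))
        votes = np.zeros((len(obs), len(catalog)), dtype=int)
        for i in range(len(obs)):
            for j in range(i + 1, len(obs)):
                d = np.arccos(np.clip(obs[i] @ obs[j], -1.0, 1.0))
                a, b = np.nonzero(np.abs(cat_ang - d) < tol)
                np.add.at(votes[i], a, 1)   # the symmetric matrix yields both
                np.add.at(votes[j], b, 1)   # orderings of each matching pair
        return votes.argmax(axis=1)         # most-voted catalog star per sensor star

    rng = np.random.default_rng(2)
    catalog = rng.normal(size=(200, 3))
    catalog /= np.linalg.norm(catalog, axis=1, keepdims=True)
    true_ids = rng.choice(200, size=8, replace=False)
    obs = catalog[true_ids] + 1e-4 * rng.normal(size=(8, 3))   # add sensor noise
    obs /= np.linalg.norm(obs, axis=1, keepdims=True)
    print(np.array_equal(vote_identify(obs, catalog), true_ids))   # True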
Network resiliency through memory health monitoring and proactive management
Andrade Costa, Carlos H.; Cher, Chen-Yong; Park, Yoonho; Rosenburg, Bryan S.; Ryu, Kyung D.
2017-11-21
A method for managing a network queue memory includes receiving sensor information about the network queue memory, predicting a memory failure in the network queue memory based on the sensor information, and outputting a notification through a plurality of nodes forming a network and using the network queue memory, the notification configuring communications between the nodes.
Distributed computing for membrane-based modeling of action potential propagation.
Porras, D; Rogers, J M; Smith, W M; Pollard, A E
2000-08-01
Action potential propagation simulations with physiologic membrane currents and macroscopic tissue dimensions are computationally expensive. We, therefore, analyzed distributed computing schemes to reduce execution time in workstation clusters by parallelizing solutions with message passing. Four schemes were considered in two-dimensional monodomain simulations with the Beeler-Reuter membrane equations. Parallel speedups measured with each scheme were compared to theoretical speedups, recognizing the relationship between speedup and code portions that executed serially. A data decomposition scheme based on total ionic current provided the best performance. Analysis of communication latencies in that scheme led to a load-balancing algorithm in which measured speedups at 89 +/- 2% and 75 +/- 8% of theoretical speedups were achieved in homogeneous and heterogeneous clusters of workstations. Speedups in this scheme with the Luo-Rudy dynamic membrane equations exceeded 3.0 with eight distributed workstations. Cluster speedups were comparable to those measured during parallel execution on a shared memory machine.
Log-less metadata management on metadata server for parallel file systems.
Liao, Jianwei; Xiao, Guoqiang; Peng, Xiaoning
2014-01-01
This paper presents a novel metadata management mechanism on the metadata server (MDS) for parallel and distributed file systems. In this technique, the client file system backs up the metadata requests it has sent and the metadata server has handled, so that the MDS does not need to log metadata changes to nonvolatile storage to achieve a highly available metadata service, while also improving metadata-processing performance. Because the client file system backs up certain sent metadata requests in its memory, the overhead of handling these backup requests is much smaller than that incurred by a metadata server that adopts logging or journaling to provide a highly available metadata service. The experimental results show that this newly proposed mechanism can significantly improve the speed of metadata processing and deliver better I/O data throughput than conventional metadata management schemes, that is, logging or journaling on the MDS. Moreover, complete metadata recovery can be achieved by replaying the backup logs cached by all involved clients when the metadata server has crashed or otherwise become non-operational.
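A minimal sketch of the mechanism, with illustrative names and a single client (real recovery must merge and order the backup logs of all involved clients):

    class MDS:
        """Metadata server that keeps its namespace in memory only."""
        def __init__(self):
            self.table = {}

        def apply(self, op):
            kind, path, value = op
            if kind == "set":
                self.table[path] = value
            else:                          # "del"
                self.table.pop(path, None)

    class Client:
        """Client file system backing up acknowledged metadata requests,
        which lets the MDS skip synchronous logging/journaling."""
        def __init__(self, mds):
            self.mds, self.backlog = mds, []

        def request(self, op):
            self.mds.apply(op)             # in-memory update, no log write
            self.backlog.append(op)        # cheap in-client backup

        def ack_flush(self):
            self.backlog.clear()           # MDS persisted its state: drop backups

    mds = MDS()
    c = Client(mds)
    c.request(("set", "/a", 1)); c.request(("set", "/b", 2)); c.request(("del", "/a", None))
    mds.table = {}                         # simulate an MDS crash
    for op in c.backlog:                   # recovery: replay client-side backups
        mds.apply(op)
    print(mds.table)                       # {'/b': 2}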
A microkernel design for component-based parallel numerical software systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balay, S.
1999-01-13
What is the minimal software infrastructure, and what type of conventions are needed, to simplify development of sophisticated parallel numerical application codes using a variety of software components that are not necessarily available as source code? We propose an opaque object-based model where the objects are dynamically loadable from the file system or network. The microkernel required to manage such a system needs to include, at most: (1) a few basic services, namely a mechanism for loading objects at run time via dynamic link libraries, and consistent schemes for error handling and memory management; and (2) selected methods that all objects share, to deal with object life (destruction, reference counting, relationships) and object observation (viewing, profiling, tracing). We are experimenting with these ideas in the context of extensible numerical software within the ALICE (Advanced Large-scale Integrated Computational Environment) project, where we are building the microkernel to manage the interoperability among various tools for large-scale scientific simulations. This paper presents some preliminary observations and conclusions from our work with microkernel design.
NASA Astrophysics Data System (ADS)
Chang, Liang-Shun; Lin, Chrong Jung; King, Ya-Chin
2014-01-01
The temperature-dependent characteristics of random telegraph noise (RTN) in contact resistive random access memory (CRRAM) are studied in this work. In addition to the bi-level switching, the occurrence of middle states in the RTN signal is investigated. Based on these unique temperature-dependent characteristics, a new temperature sensing scheme is proposed for applications in ultra-low power sensor modules.
Improving the Rainbow Attack by Reusing Colours
NASA Astrophysics Data System (ADS)
Ågren, Martin; Johansson, Thomas; Hell, Martin
Hashing or encrypting a key or a password is a vital part in most network security protocols. The most practical generic attack on such schemes is a time memory trade-off attack. Such an attack inverts any one-way function using a trade-off between memory and execution time. Existing techniques include the Hellman attack and the rainbow attack, where the latter uses different reduction functions ("colours") within a table.
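For readers unfamiliar with the construction, the following toy rainbow-style trade-off inverts a truncated hash using column-dependent reduction functions ("colours"); the 24-bit key space and the parameters are illustrative only and far below cryptographic scale.

    import hashlib

    def H(x: bytes) -> bytes:
        return hashlib.sha256(x).digest()[:3]          # toy 24-bit one-way function

    def R(col: int, h: bytes) -> bytes:
        # Column-dependent reduction: map a hash value back into the key space.
        v = (int.from_bytes(h, "big") + col) % 2**24
        return v.to_bytes(3, "big")

    CHAIN_LEN, N_CHAINS = 256, 4096

    def build_table():
        table = {}
        for s in range(N_CHAINS):
            x = start = s.to_bytes(3, "big")
            for col in range(CHAIN_LEN):
                x = R(col, H(x))
            table[x] = start                           # store endpoint -> start only
        return table

    def invert(table, target):
        for k in range(CHAIN_LEN - 1, -1, -1):         # guess the target's column
            x = R(k, target)
            for col in range(k + 1, CHAIN_LEN):
                x = R(col, H(x))
            if x in table:                             # candidate chain: replay it
                y = table[x]
                for col in range(CHAIN_LEN):
                    if H(y) == target:
                        return y
                    y = R(col, H(y))
        return None                                    # key not covered by the table

    table = build_table()
    key = b"\x00\x00\x2a"                              # walk onto a stored chain
    for col in range(10):
        key = R(col, H(key))
    print(invert(table, H(key)) == key)                # True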
Workflow of the Grover algorithm simulation incorporating CUDA and GPGPU
NASA Astrophysics Data System (ADS)
Lu, Xiangwen; Yuan, Jiabin; Zhang, Weiwei
2013-09-01
The Grover quantum search algorithm, one of only a few representative quantum algorithms, can speed up many classical algorithms that use search heuristics. No true quantum computer has yet been developed. For the present, simulation is one effective means of verifying the search algorithm. In this work, we focus on the simulation workflow using a compute unified device architecture (CUDA). Two simulation workflow schemes are proposed. These schemes combine the characteristics of the Grover algorithm and the parallelism of general-purpose computing on graphics processing units (GPGPU). We also analyzed the optimization of memory space and memory access from this perspective. We implemented four programs on CUDA to evaluate the performance of schemes and optimization. Through experimentation, we analyzed the organization of threads suited to Grover algorithm simulations, compared the storage costs of the four programs, and validated the effectiveness of optimization. Experimental results also showed that the distinguished program on CUDA outperformed the serial program of libquantum on a CPU with a speedup of up to 23 times (12 times on average), depending on the scale of the simulation.
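Independently of CUDA, the state-vector simulation being parallelized reduces to two dense operations per iteration, the oracle phase flip and the inversion about the mean; a minimal NumPy sketch of that kernel:

    import numpy as np

    def grover(n_qubits, marked):
        n = 2 ** n_qubits
        state = np.full(n, 1.0 / np.sqrt(n))        # uniform superposition
        for _ in range(int(np.floor(np.pi / 4 * np.sqrt(n)))):
            state[marked] *= -1.0                   # oracle: phase-flip marked item
            state = 2.0 * state.mean() - state      # diffusion: invert about the mean
        return state

    amps = grover(10, marked=777)
    print("success probability:", float(amps[777] ** 2))   # close to 1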
Diagnostic reasoning strategies and diagnostic success.
Coderre, S; Mandin, H; Harasym, P H; Fick, G H
2003-08-01
Cognitive psychology research supports the notion that experts use mental frameworks or "schemes", both to organize knowledge in memory and to solve clinical problems. The central purpose of this study was to determine the relationship between problem-solving strategies and the likelihood of diagnostic success. Think-aloud protocols were collected to determine the diagnostic reasoning used by experts and non-experts when attempting to diagnose clinical presentations in gastroenterology. Using logistic regression analysis, the study found a relationship between diagnostic reasoning strategy and the likelihood of diagnostic success. Compared to hypothetico-deductive reasoning, the odds of diagnostic success were significantly greater when subjects used the diagnostic strategies of pattern recognition and scheme-inductive reasoning. Two other factors emerged as independent determinants of diagnostic success: expertise and clinical presentation. Not surprisingly, experts outperformed novices, while the content areas of the clinical cases in the four clinical presentations demonstrated varying degrees of difficulty and thus diagnostic success. These findings have significant implications for medical educators: they support the introduction of "schemes" as a means of enhancing memory organization and improving diagnostic success.
Agglomeration Multigrid for an Unstructured-Grid Flow Solver
NASA Technical Reports Server (NTRS)
Frink, Neal; Pandya, Mohagna J.
2004-01-01
An agglomeration multigrid scheme has been implemented into the sequential version of the NASA code USM3Dns, a tetrahedral cell-centered finite volume Euler/Navier-Stokes flow solver. The efficiency and robustness of the multigrid-enhanced flow solver have been assessed for three configurations assuming inviscid flow and one configuration assuming viscous fully turbulent flow. The inviscid studies include transonic flow over the ONERA M6 wing and a generic business jet with flow-through nacelles, and low subsonic flow over a high-lift trapezoidal wing. The viscous case includes fully turbulent flow over the RAE 2822 rectangular wing. The multigrid solutions converged in 12%-33% of the Central Processing Unit (CPU) time required by the solutions obtained without multigrid. For all of the inviscid cases, multigrid in conjunction with an explicit time-stepping scheme performed best with regard to run-time memory and CPU time requirements. However, for the viscous case, multigrid had to be used with an implicit backward Euler time-stepping scheme, which increased the run-time memory requirement by 22% as compared to the run made without multigrid.
Automatic selection of dynamic data partitioning schemes for distributed memory multicomputers
NASA Technical Reports Server (NTRS)
Palermo, Daniel J.; Banerjee, Prithviraj
1995-01-01
For distributed memory multicomputers such as the Intel Paragon, the IBM SP-2, the NCUBE/2, and the Thinking Machines CM-5, the quality of the data partitioning for a given application is crucial to obtaining high performance. This task has traditionally been the user's responsibility, but in recent years much effort has been directed to automating the selection of data partitioning schemes. Several researchers have proposed systems that are able to produce data distributions that remain in effect for the entire execution of an application. For complex programs, however, such static data distributions may be insufficient to obtain acceptable performance. The selection of distributions that dynamically change over the course of a program's execution adds another dimension to the data partitioning problem. In this paper, we present a technique that can be used to automatically determine which partitionings are most beneficial over specific sections of a program while taking into account the added overhead of performing redistribution. This system is being built as part of the PARADIGM (PARAllelizing compiler for DIstributed memory General-purpose Multicomputers) project at the University of Illinois. The complete system will provide a fully automated means to parallelize programs written in a serial programming model obtaining high performance on a wide range of distributed-memory multicomputers.
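The underlying selection problem, choosing one distribution per program phase while charging for redistribution between phases, is a shortest-path computation; the following sketch uses a Viterbi-style dynamic program with made-up costs, not PARADIGM's cost models.

    def best_distributions(exec_cost, redist_cost):
        # exec_cost[p][d]: cost of phase p under distribution d
        # redist_cost[a][b]: cost of redistributing from a to b between phases
        n_p, n_d = len(exec_cost), len(exec_cost[0])
        cost, back = list(exec_cost[0]), []
        for p in range(1, n_p):
            prev, cost = cost, []
            back.append([])
            for d in range(n_d):
                e = min(range(n_d), key=lambda a: prev[a] + redist_cost[a][d])
                cost.append(prev[e] + redist_cost[e][d] + exec_cost[p][d])
                back[-1].append(e)
        best = min(range(n_d), key=cost.__getitem__)
        total = cost[best]
        seq = [best]
        for choices in reversed(back):     # backtrack the optimal sequence
            best = choices[best]
            seq.append(best)
        return seq[::-1], total

    # two distributions (0: row-block, 1: column-block), three phases
    exec_cost = [[10, 40], [50, 15], [12, 35]]
    redist = [[0, 8], [8, 0]]
    print(best_distributions(exec_cost, redist))   # ([0, 1, 0], 53)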
Van Swol, Lyn M
2008-04-01
To assess performance and processes in collective and individual memory, participants watched two job candidates on video. Beforehand, half the participants were told they would be tested on their memory of the interviews, and the other half were asked to make a decision to hire one of the candidates. Afterwards, participants completed a recognition memory task in either a group or individual condition. Groups had better recognition memory than individuals. Individuals made more false positives than false negatives and groups exaggerated this. Post-hoc analysis found that groups only exaggerated the tendency towards false positives on items that reflected negatively on the job candidate. There was no significant difference between instruction conditions. When reaching consensus on the recognition task, groups tended to choose the correct answer if at least two members had the correct answer. This method of consensus is discussed as a factor in groups' superior memory performance.
Plaçais, Pierre-Yves; Trannoy, Séverine; Friedrich, Anja B; Tanimoto, Hiromu; Preat, Thomas
2013-11-14
One of the challenges facing memory research is to combine network- and cellular-level descriptions of memory encoding. In this context, Drosophila offers the opportunity to decipher, down to single-cell resolution, memory-relevant circuits in connection with the mushroom bodies (MBs), prominent structures for olfactory learning and memory. Although the MB-afferent circuits involved in appetitive learning were recently described, the circuits underlying appetitive memory retrieval remain unknown. We identified two pairs of cholinergic neurons efferent from the MB α vertical lobes, named MB-V3, that are necessary for the retrieval of appetitive long-term memory (LTM). Furthermore, LTM retrieval was correlated to an enhanced response to the rewarded odor in these neurons. Strikingly, though, silencing the MB-V3 neurons did not affect short-term memory (STM) retrieval. This finding supports a scheme of parallel appetitive STM and LTM processing. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.
Turning back the hands of time: autobiographical memories in dementia cued by a museum setting.
Miles, Amanda N; Fischer-Mogensen, Lise; Nielsen, Nadia H; Hermansen, Stine; Berntsen, Dorthe
2013-09-01
The current study examined the effects of cuing autobiographical memory retrieval in 12 older participants with dementia through immersion into a historically authentic environment that recreated the material and cultural context of the participants' youth. Participants conversed in either an everyday setting (control condition) or a museum setting furnished in early twentieth century style (experimental condition) while being presented with condition matched cues. Conversations were coded for memory content based on an adapted version of Levine, Svoboda, Hay, Winocur, and Moscovitch (2002) coding scheme. More autobiographical memories were recalled in the museum setting, and these memories were more elaborated, more spontaneous and included especially more internal (episodic) details compared to memories in the control condition. The findings have theoretical and practical implications by showing that the memories retrieved in the museum setting were both quantitatively and qualitatively different from memories retrieved during a control condition. Copyright © 2013 Elsevier Inc. All rights reserved.
Low-Storage, Explicit Runge-Kutta Schemes for the Compressible Navier-Stokes Equations
NASA Technical Reports Server (NTRS)
Kennedy, Chistopher A.; Carpenter, Mark H.; Lewis, R. Michael
1999-01-01
The derivation of low-storage explicit Runge-Kutta (ERK) schemes has been performed in the context of integrating the compressible Navier-Stokes equations via direct numerical simulation. Optimization of ERK methods is done across a broad range of properties, such as stability and accuracy, efficiency, linear and nonlinear stability, error control reliability, step change stability, and dissipation/dispersion accuracy, subject to varying degrees of memory economization. Following van der Houwen and Wray, 16 ERK pairs are presented using from two to five registers of memory per equation, per grid point, and having accuracies from third- to fifth-order. Methods have been assessed using the differential equation testing code DETEST and with the 1D wave equation. Two of the methods have been applied to the DNS of a compressible jet as well as methane-air and hydrogen-air flames. The derived 3(2) and 4(3) pairs are competitive with existing full-storage methods. Although a substantial efficiency penalty accompanies use of two- and three-register, fifth-order methods, the best contemporary full-storage methods can be nearly matched while still saving two to three registers of memory.
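To make the storage idea concrete, here is Williamson's classic 2N-storage third-order scheme, a predecessor of the pairs derived in the paper: each stage overwrites a single scratch register instead of retaining all stage derivatives.

    import numpy as np

    # Williamson (1980) 2N-storage RK3 coefficients.
    A = [0.0, -5.0 / 9.0, -153.0 / 128.0]
    B = [1.0 / 3.0, 15.0 / 16.0, 8.0 / 15.0]
    C = [0.0, 1.0 / 3.0, 3.0 / 4.0]            # stage times

    def step(f, u, t, dt):
        du = np.zeros_like(u)                  # the single scratch register
        for a, b, c in zip(A, B, C):
            du = a * du + dt * f(t + c * dt, u)
            u = u + b * du
        return u

    u, t, dt = np.array([1.0]), 0.0, 0.01
    for _ in range(100):                       # integrate u' = -u to t = 1
        u, t = step(lambda s, y: -y, u, t, dt), t + dt
    print(float(u[0]), np.exp(-1.0))           # third-order agreement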
Memory management in genome-wide association studies
2009-01-01
Genome-wide association is a powerful tool for the identification of genes that underlie common diseases. Genome-wide association studies generate billions of genotypes and pose significant computational challenges for most users including limited computer memory. We applied a recently developed memory management tool to two analyses of North American Rheumatoid Arthritis Consortium studies and measured the performance in terms of central processing unit and memory usage. We conclude that our memory management approach is simple, efficient, and effective for genome-wide association studies. PMID:20018047
Numerical study of read scheme in one-selector one-resistor crossbar array
NASA Astrophysics Data System (ADS)
Kim, Sungho; Kim, Hee-Dong; Choi, Sung-Jin
2015-12-01
A comprehensive numerical circuit analysis of read schemes for a one selector-one resistor (1S1R) resistance-change memory crossbar array is carried out. Three schemes, the ground, V/2, and V/3 schemes, are compared with each other in terms of sensing margin and power consumption. Without the aid of a complex analytical approach or SPICE-based simulation, a simple numerical iteration method is developed to simulate the entire current flows and node voltages within a crossbar array. Understanding such phenomena is essential in successfully evaluating the electrical specifications of selectors for suppressing the intrinsic drawbacks of crossbar arrays, such as sneak current paths and series line resistance problems. This method provides a quantitative tool for the accurate analysis of crossbar arrays and provides guidelines for developing an optimal read scheme, array configuration, and selector device specifications.
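Ignoring series line resistance, the flavor of the scheme comparison can already be reproduced with a back-of-the-envelope model; the power-law cell model and all numbers below are illustrative, not the paper's.

    import numpy as np

    def i_cell(v, g, alpha=10.0):
        # Toy 1S1R cell: alpha > 1 mimics selector nonlinearity, which
        # suppresses the current of half-selected cells.
        return np.sign(v) * g * np.abs(v) ** alpha

    def compare(n=128, v=1.0, g_lrs=1e-4, g_hrs=1e-6, alpha=10.0):
        # (bias on cells feeding the sense node, their count,
        #  bias on the other half-selected cells, their count)
        schemes = {
            "ground": (0.0,     n - 1, v,       n - 1),
            "V/2":    (v / 2.0, n - 1, v / 2.0, n - 1),
            "V/3":    (v / 3.0, n - 1, v / 3.0, (n - 1) + (n - 1) ** 2),
        }
        i_hi, i_lo = i_cell(v, g_lrs, alpha), i_cell(v, g_hrs, alpha)
        for name, (v_s, n_s, v_o, n_o) in schemes.items():
            sneak = n_s * i_cell(v_s, g_lrs, alpha)     # worst case: all LRS
            margin = (i_hi - i_lo) / (i_hi + sneak)     # normalized read margin
            power = v * i_hi + (n_s * v_s * i_cell(v_s, g_lrs, alpha)
                                + n_o * v_o * i_cell(v_o, g_lrs, alpha))
            print(f"{name:6s}  margin = {margin:5.3f}  array power = {power:.3e}")

    compare()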
Raskin, Sarah A; Maye, Jacqueline; Rogers, Alexandra; Correll, David; Zamroziewicz, Marta; Kurtz, Matthew
2014-05-01
Impaired adherence to medication regimens is a serious concern for individuals with schizophrenia, linked to relapse and poorer outcomes. One possible reason for poor adherence to medication is a poor ability to remember future intentions, labeled prospective memory. It has been demonstrated in several studies that individuals with schizophrenia have impairments in prospective memory that are linked to everyday life skills. However, there have been no studies, to our knowledge, examining the relationship of a clinical measure of prospective memory to medication management skills, a key element of successful adherence. In this study, 41 individuals with schizophrenia and 25 healthy adults were administered a standardized test battery that included measures of prospective memory, medication management skills, neurocognition, and symptoms. Individuals with schizophrenia demonstrated impairments in prospective memory (both time- and event-based) relative to healthy controls. Performance on the test of prospective memory was correlated with the standardized measure of medication management in individuals with schizophrenia. Moreover, the test of prospective memory predicted skills in medication adherence even after measures of neurocognition were accounted for. This suggests that prospective memory may play a key role in medication management skills and thus should be a target of cognitive remediation programs.
Application of an efficient hybrid scheme for aeroelastic analysis of advanced propellers
NASA Technical Reports Server (NTRS)
Srivastava, R.; Sankar, N. L.; Reddy, T. S. R.; Huff, D. L.
1989-01-01
An efficient 3-D hybrid scheme is applied for solving the Euler equations to analyze advanced propellers. The scheme treats the spanwise direction semi-explicitly and the other two directions implicitly, without affecting the accuracy, as compared to a fully implicit scheme. This leads to a reduction in computer time and memory requirements. The calculated power coefficients for two advanced propellers, SR3 and SR7L, at various advance ratios showed good correlation with experiment. Spanwise distributions of the elemental power coefficient and of steady pressure coefficient differences also showed good agreement with experiment. A study of the effect of structural flexibility on the performance of the advanced propellers showed that structural deformation due to centrifugal and aerodynamic loading should be included for better correlation.
On securing wireless sensor network--novel authentication scheme against DOS attacks.
Raja, K Nirmal; Beno, M Marsaline
2014-10-01
Wireless sensor networks are generally deployed for collecting data from various environments. Several application-specific sensor network cryptography algorithms have been proposed in the research literature. However, WSNs have many constraints, including low computation capability, limited memory, limited energy resources, and vulnerability to physical capture, which impose unique security challenges and make improvements necessary. This paper presents a novel security mechanism and algorithm for wireless sensor network security, along with an application of this algorithm. The proposed scheme provides strong authentication against denial-of-service (DoS) attacks. The scheme is simulated using Network Simulator 2 (NS2) and analyzed in terms of the network packet delivery ratio; the analysis found that throughput is improved.
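The abstract does not detail the proposed algorithm; as a generic illustration of lightweight authentication that lets a node discard forged or replayed packets before spending further energy (a common building block of DoS-resistant sensor network designs), consider this hypothetical sketch.

    import hmac, hashlib, os

    KEY = os.urandom(16)                       # pre-shared pairwise key (illustrative)

    def seal(counter, payload):
        msg = counter.to_bytes(4, "big") + payload
        tag = hmac.new(KEY, msg, hashlib.sha256).digest()[:8]
        return msg + tag

    class Receiver:
        def __init__(self):
            self.last_counter = -1

        def accept(self, packet):
            msg, tag = packet[:-8], packet[-8:]
            expect = hmac.new(KEY, msg, hashlib.sha256).digest()[:8]
            if not hmac.compare_digest(tag, expect):
                return None                    # forged: drop before doing any work
            counter = int.from_bytes(msg[:4], "big")
            if counter <= self.last_counter:
                return None                    # replayed: drop
            self.last_counter = counter
            return msg[4:]

    rx = Receiver()
    pkt = seal(1, b"temp=23")
    print(rx.accept(pkt))                      # b'temp=23'
    print(rx.accept(pkt))                      # None: replay rejected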
Student’s scheme in solving mathematics problems
NASA Astrophysics Data System (ADS)
Setyaningsih, Nining; Juniati, Dwi; Suwarsono
2018-03-01
The purpose of this study was to investigate students' schemes in solving mathematics problems. Schemes are data structures for representing the concepts stored in memory. In this study, we examined them in the context of solving mathematics problems, especially on the topic of ratio and proportion. Schemes are related to problem solving under the assumption that a system is developed in the human mind by acquiring a structure in which problem-solving procedures are integrated with some concepts. The data were collected through interviews and students' written work. The results revealed students' schemes in solving ratio and proportion problems as follows: (1) the content scheme, where students can describe the selected components of the problem according to their prior knowledge; (2) the formal scheme, where students can construct a mental model based on the components selected from the problem and can use existing schemes to build planning steps and create something that will be used to solve the problem; and (3) the language scheme, where students can identify the terms or symbols of the components of the problem. Therefore, when using different strategies to solve problems, students' schemes in solving ratio and proportion problems will also differ.
Birch, G F; Gunns, T J; Chapman, D; Harrison, D
2016-05-01
As coastal populations increase, considerable pressures are exerted on estuarine environments. Recently, there has been a trend towards the development and use of estuarine assessment schemes as a decision support tool in the management of these environments. Such schemes offer a method by which complex environmental data are converted into a readily understandable and communicable format for informed decision making and effective distribution of limited management resources. The reliability and effectiveness of these schemes are often limited by complex assessment frameworks, poor data management and the use of ineffective environmental indicators. The current scheme aims to improve reliability in the reporting of estuarine condition by adopting a concise assessment framework, employing high-value indicators and, in a unique approach, applying fuzzy logic in indicator evaluation. Using Sydney estuary as a case study, each of the 15 sub-catchment/sub-estuary systems was assessed using the current scheme. Results identified that poor sediment quality was a significant issue in Blackwattle/Rozelle Bay, Iron Cove and Hen and Chicken Bay, while poor water quality was of particular concern in Duck River, Homebush Bay and the Parramatta River. Overall results of the assessment scheme were used to prioritise the management of each sub-catchment/sub-estuary assessed, with Blackwattle/Rozelle Bay, Homebush Bay, Iron Cove and Duck River considered to be in need of a high-priority management response. A report card format, using letter grades, was employed to convey the results of the assessment in a readily understood manner to estuarine managers and members of the public. Letter grades also provide benchmarking and performance monitoring ability, allowing estuarine managers to set improvement targets and assess the effectiveness of management strategies. The current assessment scheme provides an effective, integrated and consistent assessment of estuarine health and an effective decision support tool to maximise the efficient distribution of limited management resources.
Hard Real-Time: C++ Versus RTSJ
NASA Technical Reports Server (NTRS)
Dvorak, Daniel L.; Reinholtz, William K.
2004-01-01
In the domain of hard real-time systems, which language is better: C++ or the Real-Time Specification for Java (RTSJ)? Although ordinary Java provides a more productive programming environment than C++ due to its automatic memory management, that benefit does not apply to RTSJ when using NoHeapRealtimeThread and non-heap memory areas. As a result, RTSJ programmers must manage non-heap memory explicitly. While that is not a deterrent for veteran real-time programmers, for whom explicit memory management is common, the lack of certain language features in RTSJ (and Java) makes manual memory management harder to accomplish safely than in C++. This paper illustrates the problem for practitioners in the context of moving data and managing memory in a real-time producer/consumer pattern. The relative ease of implementation and safety of the C++ programming model suggests that RTSJ has a struggle ahead in the domain of hard real-time applications, despite its other attractive features.
Cooperative single-photon subradiant states in a three-dimensional atomic array
NASA Astrophysics Data System (ADS)
Jen, H. H.
2016-11-01
We propose complete superradiant and subradiant states that can be manipulated and prepared in a three-dimensional atomic array. The subradiant states can be realized by absorbing a single photon and imprinting spatially-dependent phases on the atomic system. We find that the collective decay rates and associated cooperative Lamb shifts depend strongly on the phases we manage to imprint, and subradiant states of long lifetime can be found for various lattice spacings and atom numbers. We also investigate both optically thin and thick atomic arrays, which can serve for systematic studies of super- and sub-radiance. Our proposal offers an alternative scheme for quantum memory of light in a three-dimensional array of two-level atoms, which is applicable and potentially advantageous in quantum information processing.
NASA Astrophysics Data System (ADS)
Li, Hao; Xie, Lunguo
2013-03-01
The design of the cache system for a chip multiprocessor (CMP) faces many challenges because future CMPs will have more cores and greater on-chip cache capacity. There are two baseline L2 cache designs: the private scheme, in which each L2 slice is treated as a private L2 cache, and the shared scheme, in which all L2 slices are treated as a large L2 cache shared by all cores. Private caches provide the lowest hit latency but reduce the total effective cache capacity. A shared L2 cache increases the effective cache capacity but incurs long hit latencies when data is on a remote tile. This paper presents a new Controlled Replication (CR) policy to reduce the capacity occupied by redundant shared replicas. The new CR policy yields a larger effective capacity than the victim replication scheme and a lower hit latency than the shared scheme. We evaluate the various schemes using full-system simulation of parallel applications. Results show that CR reduces the average memory access latency of the shared scheme by an average of 13%, providing better overall performance than the victim replication and shared schemes.
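One plausible reading of the replication decision, creating a local replica only once a remote line shows reuse so that single-use lines never consume replica capacity, can be sketched as follows; the latencies and the reuse threshold are hypothetical, and capacity/eviction handling is omitted.

    from collections import defaultdict

    L_LOCAL, L_REMOTE = 14, 45             # hypothetical L2 hit latencies (cycles)

    class ControlledReplication:
        """Per-tile policy: the first remote hit only records reuse;
        the second creates a local replica."""
        def __init__(self, threshold=2):
            self.replicas = set()
            self.remote_hits = defaultdict(int)
            self.threshold = threshold

        def access(self, line):
            if line in self.replicas:
                return L_LOCAL
            self.remote_hits[line] += 1
            if self.remote_hits[line] >= self.threshold:
                self.replicas.add(line)    # reuse detected: replicate locally
            return L_REMOTE

    cr = ControlledReplication()
    trace = ["A", "B", "A", "A", "C", "A"]
    lat = [cr.access(x) for x in trace]
    print(lat, sum(lat) / len(lat))        # later "A" accesses become local hits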
Hierarchically clustered adaptive quantization CMAC and its learning convergence.
Teddy, S D; Lai, E M K; Quek, C
2007-11-01
The cerebellar model articulation controller (CMAC) neural network (NN) is a well-established computational model of the human cerebellum. Nevertheless, there are two major drawbacks associated with the uniform quantization scheme of the CMAC network: (1) a constant output resolution across the entire input space and (2) the generalization-accuracy dilemma. Moreover, the size of the CMAC network is an exponential function of the number of inputs. Depending on the characteristics of the training data, only a small percentage of the entire set of CMAC memory cells is utilized, so efficient utilization of CMAC memory is a crucial issue. One approach is to quantize the input space nonuniformly, but for existing nonuniformly quantized CMAC systems there is a tradeoff between memory efficiency and computational complexity. Inspired by the underlying organizational mechanism of the human brain, this paper presents a novel CMAC architecture named hierarchically clustered adaptive quantization CMAC (HCAQ-CMAC). HCAQ-CMAC employs hierarchical clustering for the nonuniform quantization of the input space, identifying significant input segments and allocating more memory cells to these regions. The stability of the HCAQ-CMAC network is theoretically guaranteed by a proof of its learning convergence. The performance of the proposed network is benchmarked against the original CMAC network, as well as two other existing CMAC variants, on two real-life applications, namely, automated control of car maneuvers and modeling of human blood glucose dynamics. The experimental results demonstrate that the HCAQ-CMAC network offers an efficient memory allocation scheme and improves the generalization and accuracy of the network output, achieving better or comparable performance with smaller memory usage.
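The effect of nonuniform quantization can be illustrated with a density-adaptive 1-D quantizer that places cell boundaries at empirical quantiles, a deliberately simplified stand-in for the paper's hierarchical clustering.

    import numpy as np

    def adaptive_edges(samples, n_cells):
        # More training data in a region -> more (narrower) cells there.
        return np.quantile(samples, np.linspace(0.0, 1.0, n_cells + 1))

    rng = np.random.default_rng(0)
    data = np.concatenate([rng.normal(0.2, 0.02, 5000),   # dense operating region
                           rng.normal(0.8, 0.20, 500)])   # sparse region
    edges = adaptive_edges(data, 8)
    print(np.round(edges, 3))              # most boundaries cluster near 0.2
    cell_index = np.searchsorted(edges[1:-1], data)       # input -> memory cell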
A class Hierarchical, object-oriented approach to virtual memory management
NASA Technical Reports Server (NTRS)
Russo, Vincent F.; Campbell, Roy H.; Johnston, Gary M.
1989-01-01
The Choices family of operating systems exploits class hierarchies and object-oriented programming to facilitate the construction of customized operating systems for shared memory and networked multiprocessors. The software is being used in the Tapestry laboratory to study the performance of algorithms, mechanisms, and policies for parallel systems. Described here are the architectural design and class hierarchy of the Choices virtual memory management system. The software and hardware mechanisms and policies of a virtual memory system implement a memory hierarchy that exploits the trade-off between response times and storage capacities. In Choices, the notion of a memory hierarchy is captured by abstract classes. Concrete subclasses of those abstractions implement a virtual address space, segmentation, paging, physical memory management, secondary storage, and remote (that is, networked) storage. Captured in the notion of a memory hierarchy are classes that represent memory objects. These classes provide a storage mechanism that contains encapsulated data and have methods to read or write the memory object. Each of these classes provides specializations to represent the memory hierarchy.
Sparse grid techniques for particle-in-cell schemes
NASA Astrophysics Data System (ADS)
Ricketson, L. F.; Cerfon, A. J.
2017-02-01
We propose the use of sparse grids to accelerate particle-in-cell (PIC) schemes. By using the so-called ‘combination technique’ from the sparse grids literature, we are able to dramatically increase the size of the spatial cells in multi-dimensional PIC schemes while paying only a slight penalty in grid-based error. The resulting increase in cell size allows us to reduce the statistical noise in the simulation without increasing total particle number. We present initial proof-of-principle results from test cases in two and three dimensions that demonstrate the new scheme’s efficiency, both in terms of computation time and memory usage.
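The combination technique itself is compact enough to show directly: interpolants on a family of anisotropic component grids are combined with +1/-1 weights, giving near-fine-grid accuracy at a fraction of the points. The sketch below applies it to plain function interpolation rather than to a PIC field solve.

    import math

    def grid_interp(f, l1, l2, x, y):
        # Bilinear interpolation of f on a (2**l1 + 1) x (2**l2 + 1) grid.
        n1, n2 = 2 ** l1, 2 ** l2
        i, j = min(int(x * n1), n1 - 1), min(int(y * n2), n2 - 1)
        tx, ty = x * n1 - i, y * n2 - j
        h1, h2 = 1.0 / n1, 1.0 / n2
        return ((1 - tx) * (1 - ty) * f(i * h1, j * h2)
                + tx * (1 - ty) * f((i + 1) * h1, j * h2)
                + (1 - tx) * ty * f(i * h1, (j + 1) * h2)
                + tx * ty * f((i + 1) * h1, (j + 1) * h2))

    def combination(f, n, x, y):
        # Classic 2-D combination: sum over |l| = n minus sum over |l| = n - 1.
        pos = sum(grid_interp(f, l1, n - l1, x, y) for l1 in range(1, n))
        neg = sum(grid_interp(f, l1, n - 1 - l1, x, y) for l1 in range(1, n - 1))
        return pos - neg

    f = lambda x, y: math.sin(math.pi * x) * math.sin(math.pi * y)
    print(combination(f, 8, 0.37, 0.61), f(0.37, 0.61))   # values agree closely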
Radiation-Hardened Solid-State Drive
NASA Technical Reports Server (NTRS)
Sheldon, Douglas J.
2010-01-01
A method is provided for a radiation-hardened (rad-hard) solid-state drive for space mission memory applications by combining rad-hard and commercial off-the-shelf (COTS) non-volatile memories (NVMs) into a hybrid architecture. The architecture is controlled by a rad-hard ASIC (application specific integrated circuit) or an FPGA (field programmable gate array). Specific error handling and data management protocols are developed for use in a rad-hard environment. The rad-hard memories are smaller in overall memory density, but are used to control and manage radiation-induced errors in the main, and much larger density, non-rad-hard COTS memory devices. Small amounts of rad-hard memory are used as error buffers and temporary caches for radiation-induced errors in the large COTS memories. The rad-hard ASIC/FPGA implements a variety of error-handling protocols to manage these radiation-induced errors. The large COTS memory is triplicated for protection, and CRC-based counters are calculated for sub-areas in each COTS NVM array. These counters are stored in the rad-hard non-volatile memory. Through monitoring, rewriting, regeneration, triplication, and long-term storage, radiation-induced errors in the large NV memory are managed. The rad-hard ASIC/FPGA also interfaces with the external computer buses.
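The protection loop described above (triplication, CRC checks, scrubbing) reduces to a simple software pattern; the sketch below is a toy illustration, not the ASIC/FPGA protocol.

    import zlib

    class TripleStore:
        """Keep three replicas of a block plus a CRC32; reads vote
        bytewise and scrub (rewrite) replicas that lost the vote."""
        def __init__(self, block: bytes):
            self.replicas = [bytearray(block) for _ in range(3)]
            self.crc = zlib.crc32(block)

        def read(self) -> bytes:
            voted = bytes(
                y if y == z else x      # majority; if all differ, the CRC catches it
                for x, y, z in zip(*self.replicas)
            )
            if zlib.crc32(voted) != self.crc:
                raise IOError("uncorrectable multi-replica upset")
            for r in self.replicas:     # scrub: repair disagreeing replicas
                r[:] = voted
            return voted

    store = TripleStore(b"payload data")
    store.replicas[1][3] ^= 0xFF        # simulate a radiation-induced bit flip
    print(store.read())                 # b'payload data' (replica 1 repaired)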
Protecting Quantum Correlation from Correlated Amplitude Damping Channel
NASA Astrophysics Data System (ADS)
Huang, Zhiming; Zhang, Cai
2017-08-01
In this work, we investigate the dynamics of quantum correlation, measured by measurement-induced nonlocality (MIN) and local quantum uncertainty (LQU), in a correlated amplitude damping (CAD) channel. We find that the memory parameter influences MIN and LQU differently. In addition, we propose a scheme to protect quantum correlation by executing a prior weak measurement (WM) and a post-measurement reversal (MR). However, better protection of quantum correlation by the scheme implies a lower success probability (SP).
Long-distance quantum communication over noisy networks without long-time quantum memory
NASA Astrophysics Data System (ADS)
Mazurek, Paweł; Grudka, Andrzej; Horodecki, Michał; Horodecki, Paweł; Łodyga, Justyna; Pankowski, Łukasz; PrzysieŻna, Anna
2014-12-01
The problem of sharing entanglement over large distances is crucial for implementations of quantum cryptography. A possible scheme for long-distance entanglement sharing and quantum communication exploits networks whose nodes share Einstein-Podolsky-Rosen (EPR) pairs. In Perseguers et al. [Phys. Rev. A 78, 062324 (2008), 10.1103/PhysRevA.78.062324] the authors put forward an important isomorphism between storing quantum information in a dimension D and transmission of quantum information in a D +1 -dimensional network. We show that it is possible to obtain long-distance entanglement in a noisy two-dimensional (2D) network, even when taking into account that encoding and decoding of a state is exposed to an error. For 3D networks we propose a simple encoding and decoding scheme based solely on syndrome measurements on 2D Kitaev topological quantum memory. Our procedure constitutes an alternative scheme of state injection that can be used for universal quantum computation on 2D Kitaev code. It is shown that the encoding scheme is equivalent to teleporting the state, from a specific node into a whole two-dimensional network, through some virtual EPR pair existing within the rest of network qubits. We present an analytic lower bound on fidelity of the encoding and decoding procedure, using as our main tool a modified metric on space-time lattice, deviating from a taxicab metric at the first and the last time slices.
High order parallel numerical schemes for solving incompressible flows
NASA Technical Reports Server (NTRS)
Lin, Avi; Milner, Edward J.; Liou, May-Fun; Belch, Richard A.
1992-01-01
The use of parallel computers for numerically solving flow fields has gained much importance in recent years. This paper introduces a new high order numerical scheme for computational fluid dynamics (CFD) specifically designed for parallel computational environments. A distributed MIMD system gives the flexibility of treating different elements of the governing equations with totally different numerical schemes in different regions of the flow field. The parallel decomposition of the governing operator to be solved is the primary parallel split. The primary parallel split was studied using a hypercube like architecture having clusters of shared memory processors at each node. The approach is demonstrated using examples of simple steady state incompressible flows. Future studies should investigate the secondary split because, depending on the numerical scheme that each of the processors applies and the nature of the flow in the specific subdomain, it may be possible for a processor to seek better, or higher order, schemes for its particular subcase.
A gas kinetic scheme for hybrid simulation of partially rarefied flows
NASA Astrophysics Data System (ADS)
Colonia, S.; Steijl, R.; Barakos, G.
2017-06-01
Approaches to predict flow fields that display rarefaction effects incur a cost in computational time and memory considerably higher than methods commonly employed for continuum flows. For this reason, to simulate flow fields where continuum and rarefied regimes coexist, hybrid techniques have been introduced. In the present work, analytically defined gas-kinetic schemes based on the Shakhov and Rykov models for monoatomic and diatomic gas flows, respectively, are proposed and evaluated with the aim to be used in the context of hybrid simulations. This should reduce the region where more expensive methods are needed by extending the validity of the continuum formulation. Moreover, since for high-speed rarefied gas flows it is necessary to take into account the nonequilibrium among the internal degrees of freedom, the extension of the approach to employ diatomic gas models including the rotational relaxation process is a mandatory first step towards realistic simulations. Compared to previous works of Xu and coworkers, the presented scheme is defined directly on the basis of kinetic models which involve a Prandtl number correction. Moreover, the methods are defined fully analytically instead of making use of Taylor expansions for the evaluation of the required derivatives. The scheme has been tested for various test cases and Mach numbers, proving to produce reliable predictions in agreement with other approaches for near-continuum flows. Finally, the performance of the scheme, in terms of memory and computational time, compared to discrete velocity methods makes it a compelling alternative to more complex methods for hybrid simulations of weakly rarefied flows.
A smart checkpointing scheme for improving the reliability of clustering routing protocols.
Min, Hong; Jung, Jinman; Kim, Bongjae; Cho, Yookun; Heo, Junyoung; Yi, Sangho; Hong, Jiman
2010-01-01
In wireless sensor networks, system architectures and applications are designed to consider both resource constraints and scalability, because such networks are composed of numerous sensor nodes with various sensors and actuators, small memories, low-power microprocessors, radio modules, and batteries. Clustering routing protocols based on data aggregation schemes aimed at minimizing packet numbers have been proposed to meet these requirements. In clustering routing protocols, the cluster head plays an important role. The cluster head collects data from its member nodes and aggregates the collected data. To improve reliability and reduce recovery latency, we propose a checkpointing scheme for the cluster head. In the proposed scheme, backup nodes monitor and checkpoint the current state of the cluster head periodically. We also derive the checkpointing interval that maximizes reliability while using the same amount of energy consumed by clustering routing protocols that operate without checkpointing. Experimental comparisons with existing non-checkpointing schemes show that our scheme reduces both energy consumption and recovery latency.
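The checkpointing-interval trade-off can be explored with a toy Monte Carlo model; the failure rate, checkpoint cost, and horizon below are made-up numbers, not the paper's derivation.

    import random

    def simulate(interval, horizon=10_000.0, mtbf=200.0, e_ckpt=0.5, seed=1):
        # Cluster head checkpoints every `interval`; on a failure, the work
        # done since the last checkpoint must be redone by the backup node.
        rng = random.Random(seed)
        t = last_ckpt = redone = 0.0
        n_ckpt = 0
        next_fail = rng.expovariate(1.0 / mtbf)
        while t < horizon:
            if next_fail < last_ckpt + interval:
                t = next_fail
                redone += t - last_ckpt            # lost work to re-collect
                last_ckpt = t
                next_fail = t + rng.expovariate(1.0 / mtbf)
            else:
                t = last_ckpt + interval
                last_ckpt = t
                n_ckpt += 1
        return redone, n_ckpt * e_ckpt

    for T in (5.0, 20.0, 80.0, 320.0):
        redone, energy = simulate(T)
        print(f"interval {T:5.0f}: redone work {redone:7.1f}, checkpoint cost {energy:7.1f}")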
The global increase of noxious bloom occurrences has increased the need for phytoplankton management schemes. Such schemes require the ability to predict phytoplankton succession. Equilibrium Resources Competition theory, which is popular for predicting succession in lake systems...
On metaheuristic "failure modes": a case study in Tabu search for job-shop scheduling.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Watson, Jean-Paul
2005-06-01
In this paper, we analyze the relationship between pool maintenance schemes, long-term memory mechanisms, and search space structure, with the goal of placing metaheuristic design on a more concrete foundation.
Gpu Implementation of a Viscous Flow Solver on Unstructured Grids
NASA Astrophysics Data System (ADS)
Xu, Tianhao; Chen, Long
2016-06-01
Graphics processing units have gained popularity in scientific computing over the past several years due to their outstanding parallel computing capability. Computational fluid dynamics applications involve large amounts of calculation, so a recent GPU card, whose peak computing performance and memory bandwidth are much higher than those of a contemporary high-end CPU, is preferable. We herein focus on the detailed implementation of our GPU-targeted Reynolds-averaged Navier-Stokes solver based on the finite-volume method. The solver employs a vertex-centered scheme on unstructured grids so as to be capable of handling complex topologies. Multiple optimizations are carried out to improve the memory access performance and kernel utilization. Both steady and unsteady flow simulation cases are carried out using an explicit Runge-Kutta scheme. The solver with GPU acceleration in this paper is demonstrated to have competitive advantages over the CPU-targeted one.
NASA Astrophysics Data System (ADS)
Dentoni Litta, Eugenio; Ritzenthaler, Romain; Schram, Tom; Spessot, Alessio; O’Sullivan, Barry; Machkaoutsan, Vladimir; Fazan, Pierre; Ji, Yunhyuck; Mannaert, Geert; Lorant, Christophe; Sebaai, Farid; Thiam, Arame; Ercken, Monique; Demuynck, Steven; Horiguchi, Naoto
2018-04-01
Integration of high-k/metal gate stacks in peripheral transistors is a major candidate to ensure continued scaling of dynamic random access memory (DRAM) technology. In this paper, the CMOS integration of diffusion and gate replacement (D&GR) high-k/metal gate stacks is investigated, evaluating four different approaches for the critical patterning step of removing the N-type field effect transistor (NFET) effective work function (eWF) shifter stack from the P-type field effect transistor (PFET) area. The effect of plasma exposure during the patterning step is investigated in detail and found to have a strong impact on threshold voltage tunability. A CMOS integration scheme based on an experimental wet-compatible photoresist is developed and the fulfillment of the main device metrics [equivalent oxide thickness (EOT), eWF, gate leakage current density, on/off currents, short channel control] is demonstrated.
Experimental investigation of practical unforgeable quantum money
NASA Astrophysics Data System (ADS)
Bozzio, Mathieu; Orieux, Adeline; Trigo Vidarte, Luis; Zaquine, Isabelle; Kerenidis, Iordanis; Diamanti, Eleni
2018-01-01
Wiesner's unforgeable quantum money scheme is widely celebrated as the first quantum information application. Based on the no-cloning property of quantum mechanics, this scheme allows for the creation of credit cards used in authenticated transactions offering security guarantees impossible to achieve by classical means. However, despite its central role in quantum cryptography, its experimental implementation has remained elusive because of the lack of quantum memories and of practical verification techniques. Here, we experimentally implement a quantum money protocol relying on classical verification that rigorously satisfies the security condition for unforgeability. Our system exploits polarization encoding of weak coherent states of light and operates under conditions that ensure compatibility with state-of-the-art quantum memories. We derive working regimes for our system using a security analysis taking into account all practical imperfections. Our results constitute a major step towards a real-world realization of this milestone protocol.
NASA Astrophysics Data System (ADS)
Tu, H.-Yu.; Tasneem, Sarah
Most modern microprocessors employ on-chip cache memories to meet the memory bandwidth demand. These caches now occupy a large share of chip real estate. Moreover, the continuous down-scaling of transistors increases the possibility of defects in the cache area, which already occupies more than 50% of the chip area. For this reason, various techniques have been proposed to tolerate defects in cache blocks. These techniques can be classified into three categories: cache line disabling, replacement with spare blocks, and decoder reconfiguration without spare blocks. This chapter examines each of these fault-tolerant techniques for a fixed, typical size and organization of L1 cache, through extended simulation of the individual techniques using the SPEC2000 benchmarks. The design and characteristics of each technique are summarized with a view to evaluating the scheme. We then present our simulation results and a comparative study of the three methods.
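The first of the three categories, cache line disabling, is easy to picture with a toy simulator; the sketch below is a hypothetical model, not the chapter's simulation setup: accesses that map to a defective line bypass the cache and always count as misses.

    class FaultTolerantCache:
        # Toy direct-mapped cache with line disabling (illustrative model).
        def __init__(self, n_lines, faulty=()):
            self.n_lines = n_lines
            self.tags = [None] * n_lines
            self.disabled = set(faulty)        # indices of defective lines
            self.hits = self.misses = 0

        def access(self, addr):
            line = addr % self.n_lines
            tag = addr // self.n_lines
            if line in self.disabled:          # line disabled: go to memory
                self.misses += 1
                return False
            if self.tags[line] == tag:
                self.hits += 1
                return True
            self.tags[line] = tag              # fill on miss
            self.misses += 1
            return False

    cache = FaultTolerantCache(64, faulty={3, 17})
    for a in range(256):
        cache.access(a % 96)
    print(cache.hits, cache.misses)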
NASA Astrophysics Data System (ADS)
Li, Fu-Hai; Chiu, Yung-Yueh; Lee, Yen-Hui; Chang, Ru-Wei; Yang, Bo-Jun; Sun, Wein-Town; Lee, Eric; Kuo, Chao-Wei; Shirota, Riichiro
2013-04-01
In this study, we precisely investigate the charge distribution in the SiN layer under dynamic programming by channel-hot-hole-induced hot-electron injection (CHHIHE) in a p-channel silicon-oxide-nitride-oxide-silicon (SONOS) memory device. In the dynamic programming scheme, the gate voltage is increased as a staircase with a fixed step amplitude, which prohibits the injection of holes into the SiN layer. A calibrated three-dimensional device simulation is compared with the measured programming characteristics. It is found, for the first time, that the hot-electron injection point quickly traverses from the drain to the source side, synchronized with the expansion of the charged area in the SiN layer. As a result, the injected charges quickly spread almost uniformly over the whole channel area during a short programming period, which affords a large tolerance against lateral trapped-charge diffusion during baking.
Coherent all-optical control of ultracold atom arrays in permanent magnetic traps.
Abdelrahman, Ahmed; Mukai, Tetsuya; Häffner, Hartmut; Byrnes, Tim
2014-02-10
We propose a hybrid architecture for quantum information processing based on magnetically trapped ultracold atoms coupled via optical fields. The ultracold atoms, which can be either Bose-Einstein condensates or ensembles, are trapped in permanent magnetic traps and are placed in microcavities, connected by silica based waveguides on an atom chip structure. At each trapping center, the ultracold atoms form spin coherent states, serving as a quantum memory. An all-optical scheme is used to initialize, measure and perform a universal set of quantum gates on the single and two spin-coherent states where entanglement can be generated addressably between spatially separated trapped ultracold atoms. This allows for universal quantum operations on the spin coherent state quantum memories. We give detailed derivations of the composite cavity system mediated by a silica waveguide as well as the control scheme. Estimates for the necessary experimental conditions for a working hybrid device are given.
A robot arm simulation with a shared memory multiprocessor machine
NASA Technical Reports Server (NTRS)
Kim, Sung-Soo; Chuang, Li-Ping
1989-01-01
A parallel processing scheme for a single chain robot arm is presented for high speed computation on a shared memory multiprocessor. A recursive formulation that is derived from a virtual work form of the d'Alembert equations of motion is utilized for robot arm dynamics. A joint drive system that consists of a motor rotor and gears is included in the arm dynamics model, in order to take into account gyroscopic effects due to the spinning of the rotor. The fine grain parallelism of mechanical and control subsystem models is exploited, based on independent computation associated with bodies, joint drive systems, and controllers. Efficiency and effectiveness of the parallel scheme are demonstrated through simulations of a telerobotic manipulator arm. Two different mechanical subsystem models, i.e., with and without gyroscopic effects, are compared, to show the trade-off between efficiency and accuracy.
Long-distance quantum key distribution with imperfect devices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lo Piparo, Nicoló; Razavi, Mohsen
2014-12-04
Quantum key distribution over probabilistic quantum repeaters is addressed. We compare, under practical assumptions, two such schemes in terms of their secure key generation rate per memory, R_QKD. The two schemes under investigation are the one proposed by Duan et al. in [Nat. 414, 413 (2001)] and that of Sangouard et al. proposed in [Phys. Rev. A 76, 050301 (2007)]. We consider various sources of imperfections in the latter protocol, such as a nonzero double-photon probability for the source, dark count per pulse, channel loss and inefficiencies in photodetectors and memories, to find the rate for different nesting levels. We determine the maximum value of the double-photon probability beyond which it is not possible to share a secret key anymore. We find the crossover distance for up to three nesting levels. We finally compare the two protocols.
Obermann, Konrad; Chanturidze, Tata; Glazinski, Bernd; Dobberschuetz, Karin; Steinhauer, Heiko; Schmidt, Jean-Olivier
2018-02-20
Managers and administrators in charge of social protection and health financing, service purchasing and provision play a crucial role in harnessing the potential advantage of prudent organization, management and purchasing of health services, thereby supporting the attainment of Universal Health Coverage. However, very little is known about the needed quantity and quality of such staff, in particular when it comes to those institutions managing mandatory health insurance schemes and purchasing services. As many health care systems in low- and middle-income countries move towards independent institutions (both purchasers and providers) there is a clear need to have good data on staff and administrative cost in different social health protection schemes as a basis for investing in the development of a cadre of health managers and administrators for such schemes. We report on a systematic literature review of human resources in health management and administration in social protection schemes and suggest some aspects in moving research, practical applications and the policy debate forward.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Levine, Benjamin G., E-mail: ben.levine@temple.edu; Stone, John E., E-mail: johns@ks.uiuc.edu; Kohlmeyer, Axel, E-mail: akohlmey@temple.edu
2011-05-01
The calculation of radial distribution functions (RDFs) from molecular dynamics trajectory data is a common and computationally expensive analysis task. The rate limiting step in the calculation of the RDF is building a histogram of the distance between atom pairs in each trajectory frame. Here we present an implementation of this histogramming scheme for multiple graphics processing units (GPUs). The algorithm features a tiling scheme to maximize the reuse of data at the fastest levels of the GPU's memory hierarchy and dynamic load balancing to allow high performance on heterogeneous configurations of GPUs. Several versions of the RDF algorithm are presented, utilizing the specific hardware features found on different generations of GPUs. We take advantage of larger shared memory and atomic memory operations available on state-of-the-art GPUs to accelerate the code significantly. The use of atomic memory operations allows the fast, limited-capacity on-chip memory to be used much more efficiently, resulting in a fivefold increase in performance compared to the version of the algorithm without atomic operations. The ultimate version of the algorithm running in parallel on four NVIDIA GeForce GTX 480 (Fermi) GPUs was found to be 92 times faster than a multithreaded implementation running on an Intel Xeon 5550 CPU. On this multi-GPU hardware, the RDF between two selections of 1,000,000 atoms each can be calculated in 26.9 s per frame. The multi-GPU RDF algorithms described here are implemented in VMD, a widely used and freely available software package for molecular dynamics visualization and analysis.
Stone, John E.; Kohlmeyer, Axel
2011-01-01
The calculation of radial distribution functions (RDFs) from molecular dynamics trajectory data is a common and computationally expensive analysis task. The rate limiting step in the calculation of the RDF is building a histogram of the distance between atom pairs in each trajectory frame. Here we present an implementation of this histogramming scheme for multiple graphics processing units (GPUs). The algorithm features a tiling scheme to maximize the reuse of data at the fastest levels of the GPU’s memory hierarchy and dynamic load balancing to allow high performance on heterogeneous configurations of GPUs. Several versions of the RDF algorithm are presented, utilizing the specific hardware features found on different generations of GPUs. We take advantage of larger shared memory and atomic memory operations available on state-of-the-art GPUs to accelerate the code significantly. The use of atomic memory operations allows the fast, limited-capacity on-chip memory to be used much more efficiently, resulting in a fivefold increase in performance compared to the version of the algorithm without atomic operations. The ultimate version of the algorithm running in parallel on four NVIDIA GeForce GTX 480 (Fermi) GPUs was found to be 92 times faster than a multithreaded implementation running on an Intel Xeon 5550 CPU. On this multi-GPU hardware, the RDF between two selections of 1,000,000 atoms each can be calculated in 26.9 seconds per frame. The multi-GPU RDF algorithms described here are implemented in VMD, a widely used and freely available software package for molecular dynamics visualization and analysis. PMID:21547007
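The rate-limiting histogramming step is easy to state in a few lines. The serial NumPy sketch below computes one frame's pair-distance histogram, the quantity the paper tiles through GPU shared memory and accumulates with atomic adds; the function and argument names are ours.

    import numpy as np

    def rdf_histogram(frame_a, frame_b, r_max, n_bins):
        # Pairwise distances via broadcasting: an (Na, Nb, 3) difference
        # array.  For large selections this matrix is exactly what the
        # GPU versions tile to fit in fast on-chip memory.
        d = np.linalg.norm(frame_a[:, None, :] - frame_b[None, :, :],
                           axis=-1)
        hist, edges = np.histogram(d, bins=n_bins, range=(0.0, r_max))
        return hist, edges

    a = np.random.rand(500, 3)
    b = np.random.rand(500, 3)
    hist, edges = rdf_histogram(a, b, r_max=1.0, n_bins=100)
    print(hist.sum())   # == 500 * 500 pairs counted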
A mega-analysis of memory reports from eight peer-reviewed false memory implantation studies.
Scoboria, Alan; Wade, Kimberley A; Lindsay, D Stephen; Azad, Tanjeem; Strange, Deryn; Ost, James; Hyman, Ira E
2017-02-01
Understanding that suggestive practices can promote false beliefs and false memories for childhood events is important in many settings (e.g., psychotherapeutic, medical, and legal). The generalisability of findings from memory implantation studies has been questioned due to variability in estimates across studies. Such variability is partly due to false memories having been operationalised differently across studies and to differences in memory induction techniques. We explored ways of defining false memory based on memory science and developed a reliable coding system that we applied to reports from eight published implantation studies (N = 423). Independent raters coded transcripts using seven criteria: accepting the suggestion, elaboration beyond the suggestion, imagery, coherence, emotion, memory statements, and not rejecting the suggestion. Using this scheme, 30.4% of cases were classified as false memories and another 23% were classified as having accepted the event to some degree. When the suggestion included self-relevant information, an imagination procedure, and was not accompanied by a photo depicting the event, the memory formation rate was 46.1%. Our research demonstrates a useful procedure for systematically combining data that are not amenable to meta-analysis, and provides the most valid estimate of false memory formation and associated moderating factors within the implantation literature to date.
Key Management Scheme Based on Route Planning of Mobile Sink in Wireless Sensor Networks.
Zhang, Ying; Liang, Jixing; Zheng, Bingxin; Jiang, Shengming; Chen, Wei
2016-01-29
In many wireless sensor network application scenarios, the key management scheme with a Mobile Sink (MS) should be fully investigated. This paper proposes a key management scheme based on dynamic clustering and optimal route choice for the MS. The concept of the Traveling Salesman Problem with Neighbor areas (TSPN) is applied to dynamic clustering for data exchange, and a selection probability is used in MS route planning. The proposed scheme extends static key management to dynamic key management by considering the dynamic clustering and mobility of MSs, which can effectively balance the total energy consumption during the activities. Considering the different resources available to the member nodes and the sink node, the session key between a cluster head and the MS is established by a modified Elliptic Curve Cryptography (ECC) encryption with the Diffie-Hellman key exchange (ECDH) algorithm, and the session key between a member node and its cluster head is built with a binary symmetric polynomial. By analyzing the security of data storage, data transfer and the mechanism of dynamic key management, the proposed scheme is shown to improve the resilience of the network's key management system while satisfying higher connectivity and storage efficiency.
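For the member-node side, a binary symmetric polynomial yields pairwise keys because f(u, v) = f(v, u); the sketch below is a minimal toy instance of that idea. The field size, degree, and all names are illustrative, and the paper's clustering and ECDH parts are not reproduced.

    import random

    P = 2**31 - 1          # illustrative prime field size
    T = 3                  # polynomial degree (collusion threshold)

    def symmetric_coeffs(seed=0):
        # a[i][j] == a[j][i] makes f(x, y) = sum a[i][j] x^i y^j symmetric.
        rnd = random.Random(seed)
        a = [[0] * (T + 1) for _ in range(T + 1)]
        for i in range(T + 1):
            for j in range(i, T + 1):
                a[i][j] = a[j][i] = rnd.randrange(P)
        return a

    def share(a, u):
        # Node u stores the coefficients of f(u, y) as its key share.
        return [sum(a[i][j] * pow(u, i, P) for i in range(T + 1)) % P
                for j in range(T + 1)]

    def pairwise_key(share_u, v):
        # Evaluating the share at the peer's ID gives the common key.
        return sum(c * pow(v, j, P) for j, c in enumerate(share_u)) % P

    a = symmetric_coeffs()
    assert pairwise_key(share(a, 17), 42) == pairwise_key(share(a, 42), 17)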
Parallel-Vector Algorithm for Rapid Structural Analysis
NASA Technical Reports Server (NTRS)
Agarwal, Tarun R.; Nguyen, Duc T.; Storaasli, Olaf O.
1993-01-01
New algorithm developed to overcome deficiency of skyline storage scheme by use of variable-band storage scheme. Exploits both parallel and vector capabilities of modern high-performance computers. Gives engineers and designers opportunity to include more design variables and constraints during optimization of structures. Enables use of more refined finite-element meshes to obtain improved understanding of complex behaviors of aerospace structures leading to better, safer designs. Not only attractive for current supercomputers but also for next generation of shared-memory supercomputers.
Newton-like methods for Navier-Stokes solution
NASA Astrophysics Data System (ADS)
Qin, N.; Xu, X.; Richards, B. E.
1992-12-01
The paper reports on Newton-like methods, called SFDN-alpha-GMRES and SQN-alpha-GMRES, that have been devised and proven to be powerful schemes for large nonlinear problems typical of viscous compressible Navier-Stokes solutions. They can be applied using a partially converged solution from a conventional explicit or approximate implicit method. Developments have included the efficient parallelization of the schemes on a distributed-memory parallel computer. The methods are illustrated using a RISC workstation and a transputer parallel system, respectively, to solve a hypersonic vortical flow.
Distributed memory compiler methods for irregular problems: Data copy reuse and runtime partitioning
NASA Technical Reports Server (NTRS)
Das, Raja; Ponnusamy, Ravi; Saltz, Joel; Mavriplis, Dimitri
1991-01-01
Outlined here are two methods which we believe will play an important role in any distributed memory compiler able to handle sparse and unstructured problems. We describe how to link runtime partitioners to distributed memory compilers. In our scheme, programmers can implicitly specify how data and loop iterations are to be distributed between processors. This insulates users from having to deal explicitly with potentially complex algorithms that carry out work and data partitioning. We also describe a viable mechanism for tracking and reusing copies of off-processor data. In many programs, several loops access the same off-processor memory locations. As long as it can be verified that the values assigned to off-processor memory locations remain unmodified, we show that we can effectively reuse stored off-processor data. We present experimental data from a 3-D unstructured Euler solver run on iPSC/860 to demonstrate the usefulness of our methods.
NASA Astrophysics Data System (ADS)
Guarcello, Claudio; Solinas, Paolo; Braggio, Alessandro; Di Ventra, Massimiliano; Giazotto, Francesco
2018-01-01
We propose a superconducting thermal memory device that exploits the thermal hysteresis in a flux-controlled temperature-biased superconducting quantum-interference device (SQUID). This system reveals a flux-controllable temperature bistability, which can be used to define two well-distinguishable thermal logic states. We discuss a suitable writing-reading procedure for these memory states. The time of the memory writing operation is expected to be on the order of approximately 0.2 ns for a Nb-based SQUID in thermal contact with a phonon bath at 4.2 K. We suggest a noninvasive readout scheme for the memory states based on the measurement of the effective resonance frequency of a tank circuit inductively coupled to the SQUID. The proposed device paves the way for a practical implementation of thermal logic and computation. The advantage of this proposal is that it represents also an example of harvesting thermal energy in superconducting circuits.
A High Fuel Consumption Efficiency Management Scheme for PHEVs Using an Adaptive Genetic Algorithm
Lee, Wah Ching; Tsang, Kim Fung; Chi, Hao Ran; Hung, Faan Hei; Wu, Chung Kit; Chui, Kwok Tai; Lau, Wing Hong; Leung, Yat Wah
2015-01-01
A high fuel efficiency management scheme for plug-in hybrid electric vehicles (PHEVs) has been developed. In order to achieve fuel consumption reduction, an adaptive genetic algorithm scheme has been designed to adaptively manage the energy resource usage. The objective function of the genetic algorithm is implemented by designing a fuzzy logic controller which closely monitors and resembles the driving conditions and environment of PHEVs, thus trading off between petrol and electricity for optimal driving efficiency. Comparison between calculated results and published data shows that the efficiency achieved by the fuzzified genetic algorithm is 10% better than that of existing schemes. The developed scheme, if fully adopted, would help reduce over 600 tons of CO2 emissions worldwide every day. PMID:25587974
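A bare-bones genetic loop of the kind the scheme adapts might look like the sketch below; the fitness function stands in for the paper's fuzzy-logic controller and is purely an assumption, as are all parameter values.

    import random

    def fitness(split, demand=40.0):
        # Toy objective (illustrative, not the paper's controller):
        # penalize petrol use, electricity use, and deviation from a
        # supposed sweet-spot petrol fraction of 0.3.
        petrol = split * demand
        electric = (1.0 - split) * demand
        return -(1.2 * petrol + 0.8 * electric + 5.0 * abs(split - 0.3))

    def genetic_search(pop_size=30, gens=100, mut=0.1):
        pop = [random.random() for _ in range(pop_size)]
        for _ in range(gens):
            pop.sort(key=fitness, reverse=True)
            elite = pop[: pop_size // 2]           # selection
            children = []
            while len(children) < pop_size - len(elite):
                p, q = random.sample(elite, 2)
                child = (p + q) / 2.0              # crossover
                if random.random() < mut:          # mutation
                    child = min(1.0, max(0.0,
                                child + random.gauss(0, 0.1)))
                children.append(child)
            pop = elite + children
        return max(pop, key=fitness)

    print(genetic_search())    # best petrol/electric split found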
Energy-aware Thread and Data Management in Heterogeneous Multi-core, Multi-memory Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Su, Chun-Yi
By 2004, microprocessor design focused on multicore scaling—increasing the number of cores per die in each generation—as the primary strategy for improving performance. These multicore processors typically equip multiple memory subsystems to improve data throughput. In addition, these systems employ heterogeneous processors such as GPUs and heterogeneous memories like non-volatile memory to improve performance, capacity, and energy efficiency. With the increasing volume of hardware resources and system complexity caused by heterogeneity, future systems will require intelligent ways to manage hardware resources. Early research to improve performance and energy efficiency on heterogeneous, multi-core, multi-memory systems focused on tuning a single primitive or at best a few primitives in the systems. The key limitation of past efforts is their lack of a holistic approach to resource management that balances the tradeoff between performance and energy consumption. In addition, the shift from simple, homogeneous systems to these heterogeneous, multicore, multi-memory systems requires in-depth understanding of efficient resource management for scalable execution, including new models that capture the interchange between performance and energy, smarter resource management strategies, and novel low-level performance/energy tuning primitives and runtime systems. Tuning an application to control available resources efficiently has become a daunting challenge; managing resources in automation is still a dark art since the tradeoffs among programming, energy, and performance remain insufficiently understood. In this dissertation, I have developed theories, models, and resource management techniques to enable energy-efficient execution of parallel applications through thread and data management in these heterogeneous multi-core, multi-memory systems. I study the effect of dynamic concurrent throttling on the performance and energy of multi-core, non-uniform memory access (NUMA) systems. I use critical path analysis to quantify memory contention in the NUMA memory system and determine thread mappings. In addition, I implement a runtime system that combines concurrent throttling and a novel thread mapping algorithm to manage thread resources and improve energy efficient execution in multi-core, NUMA systems.
Computer memory management system
Kirk, III, Whitson John
2002-01-01
A computer memory management system utilizing a memory structure system of "intelligent" pointers in which information related to the use status of the memory structure is designed into the pointer. Through this pointer system, the present invention provides essentially automatic memory management (often referred to as garbage collection) by allowing relationships between objects to have definite memory-management behavior, by use of a coding protocol which describes when relationships should be maintained and when they should be broken. In one aspect, the system allows automatic breaking of strong links to facilitate object garbage collection, coupled with relationship adjectives which define deletion of associated objects. In another aspect, the present invention includes simple-to-use infinite undo/redo functionality in that it has the capability, through a simple function call, to undo all of the changes made to a data model since the previous 'valid state' was noted.
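The strong-versus-breakable-link behavior described here resembles what weak references give in garbage-collected languages; the Python sketch below is only an analogy to the patented pointer protocol, not its implementation.

    import weakref

    class Node:
        def __init__(self, name):
            self.name = name
            self.children = []       # strong links: keep children alive
            self.parent = None       # weak link: never blocks collection

        def add(self, child):
            child.parent = weakref.ref(self)   # break the reference cycle
            self.children.append(child)

    root = Node("root")
    root.add(Node("leaf"))
    leaf = root.children[0]
    print(leaf.parent().name)    # "root" while the parent is alive
    del root                     # CPython reclaims root immediately here
    print(leaf.parent())         # None: the weak link broke automatically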
Single-exposure visual memory judgments are reflected in inferotemporal cortex
Meyer, Travis
2018-01-01
Our visual memory percepts of whether we have encountered specific objects or scenes before are hypothesized to manifest as decrements in neural responses in inferotemporal cortex (IT) with stimulus repetition. To evaluate this proposal, we recorded IT neural responses as two monkeys performed a single-exposure visual memory task designed to measure the rates of forgetting with time. We found that a weighted linear read-out of IT was a better predictor of the monkeys’ forgetting rates and reaction time patterns than a strict instantiation of the repetition suppression hypothesis, expressed as a total spike count scheme. Behavioral predictions could be attributed to visual memory signals that were reflected as repetition suppression and were intermingled with visual selectivity, but only when combined across the most sensitive neurons. PMID:29517485
Lisman, John E; Jensen, Ole
2013-03-20
Theta and gamma frequency oscillations occur in the same brain regions and interact with each other, a process called cross-frequency coupling. Here, we review evidence for the following hypothesis: that the dual oscillations form a code for representing multiple items in an ordered way. This form of coding has been most clearly demonstrated in the hippocampus, where different spatial information is represented in different gamma subcycles of a theta cycle. Other experiments have tested the functional importance of oscillations and their coupling. These involve correlation of oscillatory properties with memory states, correlation with memory performance, and effects of disrupting oscillations on memory. Recent work suggests that this coding scheme coordinates communication between brain regions and is involved in sensory as well as memory processes.
Scalable quantum memory in the ultrastrong coupling regime.
Kyaw, T H; Felicetti, S; Romero, G; Solano, E; Kwek, L-C
2015-03-02
Circuit quantum electrodynamics, consisting of superconducting artificial atoms coupled to on-chip resonators, represents a prime candidate to implement the scalable quantum computing architecture because of the presence of good tunability and controllability. Furthermore, recent advances have pushed the technology towards the ultrastrong coupling regime of light-matter interaction, where the qubit-resonator coupling strength reaches a considerable fraction of the resonator frequency. Here, we propose a qubit-resonator system operating in that regime, as a quantum memory device and study the storage and retrieval of quantum information in and from the Z2 parity-protected quantum memory, within experimentally feasible schemes. We are also convinced that our proposal might pave a way to realize a scalable quantum random-access memory due to its fast storage and readout performances.
Scalable quantum memory in the ultrastrong coupling regime
Kyaw, T. H.; Felicetti, S.; Romero, G.; Solano, E.; Kwek, L.-C.
2015-01-01
Circuit quantum electrodynamics, consisting of superconducting artificial atoms coupled to on-chip resonators, represents a prime candidate to implement the scalable quantum computing architecture because of the presence of good tunability and controllability. Furthermore, recent advances have pushed the technology towards the ultrastrong coupling regime of light-matter interaction, where the qubit-resonator coupling strength reaches a considerable fraction of the resonator frequency. Here, we propose a qubit-resonator system operating in that regime, as a quantum memory device and study the storage and retrieval of quantum information in and from the Z2 parity-protected quantum memory, within experimentally feasible schemes. We are also convinced that our proposal might pave a way to realize a scalable quantum random-access memory due to its fast storage and readout performances. PMID:25727251
Suga, Hiroshi; Suzuki, Hiroya; Shinomura, Yuma; Kashiwabara, Shota; Tsukagoshi, Kazuhito; Shimizu, Tetsuo; Naitoh, Yasuhisa
2016-01-01
Highly stable, nonvolatile, high-temperature memory based on resistance switching was realized using a polycrystalline platinum (Pt) nanogap. The operating temperature of the memory can be drastically increased by the presence of a sharp-edged Pt crystal facet in the nanogap. A short distance between the facet edges maintains the nanogap shape at high temperature, and the sharp shape of the nanogap densifies the electric field to maintain a stable current flow due to field migration. Even at 873 K, which is a significantly higher temperature than feasible for conventional semiconductor memory, the nonvolatility of the proposed memory allows stable ON and OFF currents, with fluctuations of less than or equal to 10%, to be maintained for longer than eight hours. An advantage of this nanogap scheme for high-temperature memory is its secure operation achieved through the assembly and disassembly of a Pt needle in a high electric field. PMID:27725705
Secure SCADA communication by using a modified key management scheme.
Rezai, Abdalhossein; Keshavarzi, Parviz; Moravej, Zahra
2013-07-01
This paper presents and evaluates a new cryptographic key management scheme which increases the efficiency and security of Supervisory Control And Data Acquisition (SCADA) communication. In the proposed key management scheme, two key update phases are used: session key update and master key update. In the session key update phase, session keys are generated in the master station. In the master key update phase, the Elliptic Curve Diffie-Hellman (ECDH) protocol is used. The Poisson process is also used to model the Security Index (SI) and Quality of Service (QoS). Our analysis shows that the proposed key management scheme not only supports the required speed in the MODBUS implementation but also has several advantages compared to other key management schemes for secure communication in SCADA networks.
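The master key update phase uses ECDH, which the sketch below illustrates with the third-party Python cryptography package; the curve choice, the HKDF post-processing, and the info label are our assumptions, and the SCADA/MODBUS specifics are not reproduced.

    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    master = ec.generate_private_key(ec.SECP256R1())   # master station
    remote = ec.generate_private_key(ec.SECP256R1())   # remote terminal

    # Each side combines its private key with the peer's public key and
    # arrives at the same shared secret.
    shared_m = master.exchange(ec.ECDH(), remote.public_key())
    shared_r = remote.exchange(ec.ECDH(), master.public_key())
    assert shared_m == shared_r

    # Derive the new master key from the shared secret.
    new_key = HKDF(algorithm=hashes.SHA256(), length=32,
                   salt=None, info=b"scada-master-key").derive(shared_m)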
High-quality lossy compression: current and future trends
NASA Astrophysics Data System (ADS)
McLaughlin, Steven W.
1995-01-01
This paper is concerned with current and future trends in the lossy compression of real sources such as imagery, video, speech and music. We put all lossy compression schemes into a common framework where each can be characterized in terms of three well-defined advantages: cell shape, region shape and memory advantages. We concentrate on image compression and discuss how new entropy-constrained trellis-based compressors achieve cell-shape, region-shape and memory gain, resulting in high fidelity and high compression.
Quantum memory on a charge qubit in an optical microresonator
NASA Astrophysics Data System (ADS)
Tsukanov, A. V.
2017-10-01
A quantum-memory unit scheme based on a semiconductor structure with quantum dots is proposed. The unit includes a microresonator with single and double quantum dots performing frequency-converter and charge-qubit functions, respectively. The writing process is carried out in several stages and is controlled by the optical fields of the resonator and a laser. It is shown that, to achieve a high writing probability, it is necessary to use high-Q resonators and to be able to suppress relaxation processes in the quantum dots.
Cognitive Rehabilitation of Episodic Memory Disorders: From Theory to Practice
Ptak, Radek; der Linden, Martial Van; Schnider, Armin
2010-01-01
Memory disorders are among the most frequent and most debilitating cognitive impairments following acquired brain damage. Cognitive remediation strategies attempt to restore lost memory capacity, provide compensatory techniques or teach the use of external memory aids. Memory rehabilitation has strongly been influenced by memory theory, and the interaction between both has stimulated the development of techniques such as spaced retrieval, vanishing cues or errorless learning. These techniques partly rely on implicit memory and therefore enable even patients with dense amnesia to acquire new information. However, knowledge acquired in this way is often strongly domain-specific and inflexible. In addition, individual patients with amnesia respond differently to distinct interventions. The factors underlying these differences have not yet been identified. Behavioral management of memory failures therefore often relies on a careful description of environmental factors and measurement of associated behavioral disorders such as unawareness of memory failures. The current evidence suggests that patients with less severe disorders benefit from self-management techniques and mnemonics whereas rehabilitation of severely amnesic patients should focus on behavior management, the transmission of domain-specific knowledge through implicit memory processes and the compensation for memory deficits with memory aids. PMID:20700383
Method and apparatus for managing access to a memory
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeBenedictis, Erik
A method and apparatus for managing access to a memory of a computing system. A controller transforms a plurality of operations that represent a computing job into an operational memory layout that reduces a size of a selected portion of the memory that needs to be accessed to perform the computing job. The controller stores the operational memory layout in a plurality of memory cells within the selected portion of the memory. The controller controls a sequence by which a processor in the computing system accesses the memory to perform the computing job using the operational memory layout. The operational memory layout reduces an amount of energy consumed by the processor to perform the computing job.
Research on multi-user encrypted search scheme in cloud environment
NASA Astrophysics Data System (ADS)
Yu, Zonghua; Lin, Sui
2017-05-01
Aiming at the existing problems of multi-user encrypted search schemes in the cloud computing environment, a basic multi-user encrypted search scheme is proposed first, and the basic scheme is then extended with anonymous hierarchical management of authority. Compared with most existing schemes, the scheme protects not only keyword information but also user identity privacy; at the same time, data owners, rather than the cloud server, directly control user query permissions. In addition, through the use of special query-key generation rules, hierarchical management of users' query permissions is achieved. The security analysis shows that the scheme is secure, and the performance analysis and experimental data show that it is practical.
An Adaptive Flow Solver for Air-Borne Vehicles Undergoing Time-Dependent Motions/Deformations
NASA Technical Reports Server (NTRS)
Singh, Jatinder; Taylor, Stephen
1997-01-01
This report describes a concurrent Euler flow solver for flows around complex 3-D bodies. The solver is based on a cell-centered finite volume methodology on 3-D unstructured tetrahedral grids. In this algorithm, spatial discretization for the inviscid convective term is accomplished using an upwind scheme. A localized reconstruction is done for flow variables which is second order accurate. Evolution in time is accomplished using an explicit three-stage Runge-Kutta method which has second order temporal accuracy. This is adapted for concurrent execution using another proven methodology based on concurrent graph abstraction. This solver operates on heterogeneous network architectures. These architectures may include a broad variety of UNIX workstations and PCs running Windows NT, symmetric multiprocessors and distributed-memory multi-computers. The unstructured grid is generated using commercial grid generation tools. The grid is automatically partitioned using a concurrent algorithm based on heat diffusion. This results in memory requirements that are inversely proportional to the number of processors. The solver uses automatic granularity control and resource management techniques both to balance load and communication requirements, and deal with differing memory constraints. These ideas are again based on heat diffusion. Results are subsequently combined for visualization and analysis using commercial CFD tools. Flow simulation results are demonstrated for a constant section wing at subsonic, transonic, and a supersonic case. These results are compared with experimental data and numerical results of other researchers. Performance results are under way for a variety of network topologies.
Evolutionary dynamics and the phase structure of the minority game
NASA Astrophysics Data System (ADS)
Yuan, Baosheng; Chen, Kan
2004-06-01
We show that a simple evolutionary scheme, when applied to the minority game (MG), changes the phase structure of the game. In this scheme each agent evolves individually whenever his wealth reaches the specified bankruptcy level, in contrast to the evolutionary schemes used in the previous works. We show that evolution greatly suppresses herding behavior, and it leads to better overall performance of the agents. Similar to the standard nonevolutionary MG, the dependence of the standard deviation σ on the number of agents N and the memory length m can be characterized by a universal curve. We suggest a crowd-anticrowd theory for understanding the effect of evolution in the MG.
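A minimal core of the minority game, plus a bankruptcy-triggered restart standing in for the paper's individual evolution, can be coded as below; all parameter values are arbitrary, and sigma is the standard deviation of attendance discussed above.

    import random

    M, N, S, T = 3, 201, 2, 2000   # memory bits, agents (odd), strategies, rounds
    HIST, BANKRUPT = 2 ** M, 50

    class Agent:
        def __init__(self):
            # Each strategy maps every m-bit history to an action in {-1, +1}.
            self.strats = [[random.choice((-1, 1)) for _ in range(HIST)]
                           for _ in range(S)]
            self.scores = [0] * S
            self.wealth = 0

        def act(self, h):
            return self.strats[self.scores.index(max(self.scores))][h]

    agents = [Agent() for _ in range(N)]
    h, attendance = 0, []
    for _ in range(T):
        acts = [a.act(h) for a in agents]
        A = sum(acts)
        attendance.append(A)
        minority = -1 if A > 0 else 1
        for a, act in zip(agents, acts):
            a.wealth += 1 if act == minority else -1
            if a.wealth < -BANKRUPT:       # evolutionary step: the agent
                a.__init__()               # restarts with fresh strategies
            for s in range(S):
                if a.strats[s][h] == minority:
                    a.scores[s] += 1
        h = ((h << 1) | (1 if minority == 1 else 0)) % HIST

    mean = sum(attendance) / T
    print((sum((x - mean) ** 2 for x in attendance) / T) ** 0.5)  # sigma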
Research on SEU hardening of heterogeneous Dual-Core SoC
NASA Astrophysics Data System (ADS)
Huang, Kun; Hu, Keliu; Deng, Jun; Zhang, Tao
2017-08-01
Single-Event Upset (SEU) hardening can be implemented in various ways. However, some schemes require substantial human, material and financial resources. This paper proposes a simple SEU-hardening scheme for a Heterogeneous Dual-core SoC (HD SoC) that combines three techniques. First, automatic Triple Modular Redundancy (TMR) is adopted to harden the register files of the processor and the instruction-fetch module. Second, Hamming codes are used to harden the random access memory (RAM). Last, a software signature technique is applied to check the programs running on the CPU. The scheme consumes no additional resources and has little influence on CPU performance, and the underlying technologies are mature, easy to implement and low-cost. According to the simulation results, the scheme satisfies the basic demands of SEU hardening.
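The TMR voter and Hamming-coded RAM are both compact enough to sketch; the code below shows a bitwise majority vote and a textbook Hamming(7,4) single-error correction, as a generic illustration rather than the paper's actual RTL.

    def tmr_vote(a, b, c):
        # Bitwise majority vote over three redundant register copies.
        return (a & b) | (a & c) | (b & c)

    def hamming74_encode(d):
        # d: 4-bit int.  Codeword positions 1..7 hold p1 p2 d1 p3 d2 d3 d4.
        d1, d2, d3, d4 = (d >> 3) & 1, (d >> 2) & 1, (d >> 1) & 1, d & 1
        p1 = d1 ^ d2 ^ d4
        p2 = d1 ^ d3 ^ d4
        p3 = d2 ^ d3 ^ d4
        return [p1, p2, d1, p3, d2, d3, d4]

    def hamming74_decode(c):
        # The syndrome is the 1-based position of the flipped bit (0 = ok).
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
        s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
        syndrome = s1 + 2 * s2 + 4 * s3
        if syndrome:
            c = list(c)
            c[syndrome - 1] ^= 1        # correct the single-bit upset
        return (c[2] << 3) | (c[4] << 2) | (c[5] << 1) | c[6]

    word = hamming74_encode(0b1011)
    word[4] ^= 1                        # inject an SEU
    assert hamming74_decode(word) == 0b1011
    assert tmr_vote(0b1100, 0b1100, 0b0110) == 0b1100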
The use and management of water in the Likangala Irrigation Scheme Complex in Southern Malawi
NASA Astrophysics Data System (ADS)
Mulwafu, Wapulumuka O.; Nkhoma, Bryson G.
This paper examines the uses and management of water for agriculture in the Lake Chilwa catchment area in Zomba district of Southern Malawi. It focuses on the Likangala Rice Irrigation Scheme Complex situated along the Likangala River. The scheme is one of the largest government-run schemes. Established in the late 1960s by the government to meet the growing demand for rice, the scheme contributes greatly to the agricultural industry of the country. Besides, the scheme was established to ensure maximum utilization of Malawi's largest wetland, which, due to its hydromorphic soils and the littoral floodplains, does not favour the production of traditional upland seasonal crops such as maize. The scheme's overdependence on water from the Likangala River has attracted a considerable degree of academic interest in the use and management of the river to ensure that there is equity and efficiency for both productive and domestic users. The paper focuses on four main issues: the historical development of the scheme, the distribution of water to farmers, social relations, and the overall contribution of the scheme towards the social and economic development of the area and the country in general. The paper contends that the growing population of the basin and the increase in the number of formal and informal smallholder farmers contribute greatly to the growth of competition and conflicts over water, which tend to undermine the economic potential of the scheme. Furthermore, the paper provides the clearest indication of the need for a realistic and informed water management policy and strategy to solve the growing problem of social inequity without necessarily compromising the production of rice in the scheme.
Bermuda Triangle: a subsystem of the 168/E interfacing scheme used by Group B at SLAC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oxoby, G.J.; Levinson, L.J.; Trang, Q.H.
1979-12-01
The Bermuda Triangle system is a method of interfacing several 168/E microprocessors to a central system for control of the processors and overlaying their memories. The system is a three-way interface with I/O ports to a large buffer memory, a PDP11 Unibus and a bus to the 168/E processors. Data may be transferred bidirectionally between any two ports. Two Bermuda Triangles are used, one for the program memory and one for the data memory. The program buffer memory stores the overlay programs for the 168/E, and the data buffer memory stores the incoming raw data, the data portion of the overlays, and the outgoing processed events. This buffering is necessary since the memories of the 168/E microprocessors are small compared to the main program and the amount of data being processed. The link to the computer facility is via a Unibus-to-IBM-channel interface. A PDP11/04 controls the data flow. 7 figures, 4 tables. (RWR)
Pay-for-performance in disease management: a systematic review of the literature.
de Bruin, Simone R; Baan, Caroline A; Struijs, Jeroen N
2011-10-14
Pay-for-performance (P4P) is increasingly implemented in the healthcare system to encourage improvements in healthcare quality. P4P is a payment model that rewards healthcare providers for meeting pre-established targets for delivery of healthcare services by financial incentives. Based on their performance, healthcare providers receive either additional or reduced payment. Currently, little is known about P4P schemes intending to improve delivery of chronic care through disease management. The objectives of this paper are therefore to provide an overview of P4P schemes used to stimulate delivery of chronic care through disease management and to provide insight into their effects on healthcare quality and costs. A systematic PubMed search was performed for English language papers published between 2000 and 2010 describing P4P schemes related to the implementation of disease management. Wagner's chronic care model was used to make disease management operational. Eight P4P schemes were identified, introduced in the USA (n = 6), Germany (n = 1), and Australia (n = 1). Five P4P schemes were part of a larger scheme of interventions to improve quality of care, whereas three P4P schemes were solely implemented. Most financial incentives were rewards, selective, and granted on the basis of absolute performance. More variation was found in incented entities and the basis for providing incentives. Information about motivation, certainty, size, frequency, and duration of the financial incentives was generally limited. Five studies were identified that evaluated the effects of P4P on healthcare quality. Most studies showed positive effects of P4P on healthcare quality. No studies were found that evaluated the effects of P4P on healthcare costs. The number of P4P schemes to encourage disease management is limited. Hardly any information is available about the effects of such schemes on healthcare quality and costs.
Pay-for-performance in disease management: a systematic review of the literature
2011-01-01
Background Pay-for-performance (P4P) is increasingly implemented in the healthcare system to encourage improvements in healthcare quality. P4P is a payment model that rewards healthcare providers for meeting pre-established targets for delivery of healthcare services by financial incentives. Based on their performance, healthcare providers receive either additional or reduced payment. Currently, little is known about P4P schemes intending to improve delivery of chronic care through disease management. The objectives of this paper are therefore to provide an overview of P4P schemes used to stimulate delivery of chronic care through disease management and to provide insight into their effects on healthcare quality and costs. Methods A systematic PubMed search was performed for English language papers published between 2000 and 2010 describing P4P schemes related to the implementation of disease management. Wagner's chronic care model was used to make disease management operational. Results Eight P4P schemes were identified, introduced in the USA (n = 6), Germany (n = 1), and Australia (n = 1). Five P4P schemes were part of a larger scheme of interventions to improve quality of care, whereas three P4P schemes were solely implemented. Most financial incentives were rewards, selective, and granted on the basis of absolute performance. More variation was found in incented entities and the basis for providing incentives. Information about motivation, certainty, size, frequency, and duration of the financial incentives was generally limited. Five studies were identified that evaluated the effects of P4P on healthcare quality. Most studies showed positive effects of P4P on healthcare quality. No studies were found that evaluated the effects of P4P on healthcare costs. Conclusion The number of P4P schemes to encourage disease management is limited. Hardly any information is available about the effects of such schemes on healthcare quality and costs. PMID:21999234
Real-Time and Memory Correlation via Acousto-Optic Processing,
1978-06-01
acousto-optic technology as an answer to these requirements appears very attractive. Three fundamental signal-processing schemes using the acousto-optic interaction have been investigated: (i) real-time correlation and convolution, (ii) Fourier and discrete Fourier transformation, and (iii
From network heterogeneities to familiarity detection and hippocampal memory management
Wang, Jane X.; Poe, Gina; Zochowski, Michal
2009-01-01
Hippocampal-neocortical interactions are key to the rapid formation of novel associative memories in the hippocampus and consolidation to long term storage sites in the neocortex. We investigated the role of network correlates during information processing in hippocampal-cortical networks. We found that changes in the intrinsic network dynamics due to the formation of structural network heterogeneities alone act as a dynamical and regulatory mechanism for stimulus novelty and familiarity detection, thereby controlling memory management in the context of memory consolidation. This network dynamic, coupled with an anatomically established feedback between the hippocampus and the neocortex, recovered heretofore unexplained properties of neural activity patterns during memory management tasks which we observed during sleep in multiunit recordings from behaving animals. Our simple dynamical mechanism shows an experimentally matched progressive shift of memory activation from the hippocampus to the neocortex and thus provides the means to achieve an autonomous off-line progression of memory consolidation. PMID:18999453
NASA Astrophysics Data System (ADS)
Kazakova, E. I.; Medvedev, A. N.; Kolomytseva, A. O.; Demina, M. I.
2017-11-01
The paper presents a mathematical model for the management of blasting schemes in the presence of random disturbances. Based on the lemmas and theorems proved, a stable control functional is formulated. A universal classification of blasting schemes is developed, with the following main classification attributes: the in-plan orientation of the charging-well rows relative to the rock block; the presence of cuts in the blasting schemes; the separation of the well series into elements; and the sequence of the blasting. The periodic regularity of the transition from one short-delay blasting scheme to another is proved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, K.; Tsai, H.; Liu, Y. Y.
Radio frequency identification (RFID) is one of today's most rapidly growing technologies in the automatic data collection industry. Although commercial applications are already widespread, the use of this technology for managing nuclear materials is only in its infancy. Employing an RFID system has the potential to offer an immense payback: enhanced safety and security, reduced need for manned surveillance, real-time access to status and event history data, and overall cost-effectiveness. The Packaging Certification Program (PCP) in the U.S. Department of Energy's (DOE's) Office of Environmental Management (EM), Office of Packaging and Transportation (EM-63), is developing an RFID system for nuclear materials management. The system consists of battery-powered RFID tags with onboard sensors and memories, a reader network, application software, a database server and web pages. The tags monitor and record critical parameters, including the status of seals, movement of objects, and environmental conditions of the nuclear material packages in real time. They also provide instant warnings or alarms when preset thresholds for the sensors are exceeded. The information collected by the readers is transmitted to a dedicated central database server that can be accessed by authorized users across the DOE complex via a secured network. The onboard memory of the tags allows the materials manifest and event history data to reside with the packages throughout their life cycles in storage, transportation, and disposal. Data security is currently based on Advanced Encryption Standard-256. The software provides easy-to-use graphical interfaces that allow access to all vital information once the security and privilege requirements are met. An innovative scheme has been developed for managing batteries in service for more than 10 years without needing to be changed. A miniature onboard dosimeter is being developed for applications that require radiation surveillance. A field demonstration of the RFID system was recently conducted to assess its performance. The preliminary results of the demonstration are reported in this paper.
CaLRS: A Critical-Aware Shared LLC Request Scheduling Algorithm on GPGPU
Ma, Jianliang; Meng, Jinglei; Chen, Tianzhou; Wu, Minghui
2015-01-01
Ultra-high thread-level parallelism in modern GPUs usually issues numerous memory requests simultaneously, so plenty of memory requests are always waiting at each bank of the shared LLC (L2 in this paper) and of global memory. For global memory, various schedulers have already been developed to adjust the request sequence, but we find that little work has focused on the service sequence at the shared LLC. We measured that many GPU applications always queue at LLC banks for service, which provides an opportunity to optimize the service order at the LLC. By adjusting the service order of GPU memory requests, we can improve the schedulability of the SMs. We therefore propose a critical-aware shared-LLC request scheduling algorithm (CaLRS). The priority representative of a memory request is central to CaLRS: we use the number of memory requests that originate from the same warp but have not yet been serviced when a request arrives at the shared LLC bank to represent the criticality of its warp. Experiments show that the proposed scheme can effectively boost SM schedulability by promoting the scheduling priority of memory requests with high criticality, and thereby indirectly improves GPU performance. PMID:25729772
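A toy software model of the bank-queue policy could look like the sketch below, where requests are popped highest-criticality first; the class and field names are ours and the hardware details are abstracted away.

    import heapq

    class CaLRSBank:
        # Toy model of one shared-LLC bank queue (illustrative only).
        def __init__(self):
            self.heap = []
            self.seq = 0                 # FIFO tie-break among equals

        def enqueue(self, warp_id, pending_from_warp):
            # Negate criticality: heapq pops the smallest item first.
            heapq.heappush(self.heap,
                           (-pending_from_warp, self.seq, warp_id))
            self.seq += 1

        def service(self):
            crit, _, warp_id = heapq.heappop(self.heap)
            return warp_id, -crit

    bank = CaLRSBank()
    bank.enqueue(warp_id=0, pending_from_warp=1)
    bank.enqueue(warp_id=1, pending_from_warp=4)   # warp 1 nearly stalled
    print(bank.service())                          # -> (1, 4): serve it first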
Silva, Bhagya Nathali; Khan, Murad; Han, Kijun
2018-02-25
The emergence of smart devices and smart appliances has highly favored the realization of the smart home concept. Modern smart home systems handle a wide range of user requirements. Energy management and energy conservation are in the spotlight when deploying sophisticated smart homes. However, the performance of energy management systems is highly influenced by user behaviors and adopted energy management approaches. Appliance scheduling is widely accepted as an effective mechanism to manage domestic energy consumption. Hence, we propose a smart home energy management system that reduces unnecessary energy consumption by integrating an automated switching off system with load balancing and appliance scheduling algorithm. The load balancing scheme acts according to defined constraints such that the cumulative energy consumption of the household is managed below the defined maximum threshold. The scheduling of appliances adheres to the least slack time (LST) algorithm while considering user comfort during scheduling. The performance of the proposed scheme has been evaluated against an existing energy management scheme through computer simulation. The simulation results have revealed a significant improvement gained through the proposed LST-based energy management scheme in terms of cost of energy, along with reduced domestic energy consumption facilitated by an automated switching off mechanism.
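A least-slack-time pick under a cumulative power cap can be sketched in a few lines; the field names and numbers below are assumptions, not the paper's data model.

    def lst_schedule(appliances, now, max_power):
        # Rank appliances by slack = deadline - now - remaining run time,
        # then admit them greedily while the cumulative load stays below
        # the threshold (the load-balancing constraint).
        ranked = sorted(appliances,
                        key=lambda a: a["deadline"] - now - a["remaining"])
        chosen, load = [], 0.0
        for a in ranked:
            if load + a["power"] <= max_power:
                chosen.append(a["name"])
                load += a["power"]
        return chosen

    apps = [
        {"name": "washer", "deadline": 18, "remaining": 2, "power": 0.5},
        {"name": "dryer",  "deadline": 20, "remaining": 1, "power": 1.8},
        {"name": "ev",     "deadline": 30, "remaining": 6, "power": 3.3},
    ]
    print(lst_schedule(apps, now=15, max_power=4.0))  # EV deferred: over cap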
Long distance quantum communication using quantum error correction
NASA Technical Reports Server (NTRS)
Gingrich, R. M.; Lee, H.; Dowling, J. P.
2004-01-01
We describe a quantum error correction scheme that can increase the effective absorption length of the communication channel. This device can play the role of a quantum transponder when placed in series, or a cyclic quantum memory when inserted in an optical loop.
Shi, Yan; Wang, Hao Gang; Li, Long; Chan, Chi Hou
2008-10-01
A multilevel Green's function interpolation method based on two kinds of multilevel partitioning schemes--the quasi-2D and the hybrid partitioning scheme--is proposed for analyzing electromagnetic scattering from objects comprising both conducting and dielectric parts. The problem is formulated using the surface integral equation for homogeneous dielectric and conducting bodies. A quasi-2D multilevel partitioning scheme is devised to improve the efficiency of the Green's function interpolation. In contrast to previous multilevel partitioning schemes, noncubic groups are introduced to discretize the whole EM structure in this quasi-2D multilevel partitioning scheme. Based on the detailed analysis of the dimension of the group in this partitioning scheme, a hybrid quasi-2D/3D multilevel partitioning scheme is proposed to effectively handle objects with fine local structures. Selection criteria for some key parameters relating to the interpolation technique are given. The proposed algorithm is ideal for the solution of problems involving objects such as missiles, microstrip antenna arrays, photonic bandgap structures, etc. Numerical examples are presented to show that CPU time is between O(N) and O(N log N) while the computer memory requirement is O(N).
An adaptive vector quantization scheme
NASA Technical Reports Server (NTRS)
Cheung, K.-M.
1990-01-01
Vector quantization is known to be an effective compression scheme to achieve a low bit rate so as to minimize communication channel bandwidth and also to reduce digital memory storage while maintaining the necessary fidelity of the data. However, the large number of computations required in vector quantizers has been a handicap in using vector quantization for low-rate source coding. An adaptive vector quantization algorithm is introduced that is inherently suitable for simple hardware implementation because it has a simple architecture. It allows fast encoding and decoding because it requires only addition and subtraction operations.
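Nearest-codeword search using only additions and subtractions (a mean-absolute-difference metric) is the hardware-friendly core of such an encoder; the sketch below shows that search, while the paper's adaptive codebook update is not reproduced.

    def nearest_codeword(vector, codebook):
        # Absolute-difference distance: only additions and subtractions,
        # matching the simple-hardware spirit of the scheme.
        best, best_dist = 0, float("inf")
        for idx, code in enumerate(codebook):
            dist = sum(abs(v - c) for v, c in zip(vector, code))
            if dist < best_dist:
                best, best_dist = idx, dist
        return best

    codebook = [[0, 0, 0, 0], [8, 8, 8, 8], [16, 16, 16, 16]]
    print(nearest_codeword([7, 9, 8, 6], codebook))   # -> 1

The encoder transmits only the winning index, and the decoder looks the vector back up in its copy of the codebook, which is where the bit-rate saving comes from.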
Multimode cavity-assisted quantum storage via continuous phase-matching control
NASA Astrophysics Data System (ADS)
Kalachev, Alexey; Kocharovskaya, Olga
2013-09-01
A scheme for spatial multimode quantum memory is developed such that spatial-temporal structure of a weak signal pulse can be stored and recalled via cavity-assisted off-resonant Raman interaction with a strong angular-modulated control field in an extended Λ-type atomic ensemble. It is shown that effective multimode storage is possible when the Raman coherence spatial grating involves wave vectors with different longitudinal components relative to the paraxial signal field. The possibilities of implementing the scheme in the solid-state materials are discussed.
Navier-Stokes calculations for DFVLR F5-wing in wind tunnel using Runge-Kutta time-stepping scheme
NASA Technical Reports Server (NTRS)
Vatsa, V. N.; Wedan, B. W.
1988-01-01
A three-dimensional Navier-Stokes code using an explicit multistage Runge-Kutta type of time-stepping scheme is used for solving the transonic flow past a finite wing mounted inside a wind tunnel. Flow past the same wing in free air was also computed to assess the effect of wind-tunnel walls on such flows. Numerical efficiency is enhanced through vectorization of the computer code. A Cyber 205 computer with 32 million words of internal memory was used for these computations.
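An explicit multistage Runge-Kutta update of the kind used in such solvers restarts every stage from the step-initial state; the sketch below uses a common four-stage coefficient set, which is an assumption rather than the paper's exact scheme.

    import numpy as np

    def multistage_rk(residual, u, dt, alphas=(0.25, 1.0 / 3.0, 0.5, 1.0)):
        # Jameson-type multistage stepping: each stage re-evaluates the
        # residual but always updates from the stage-0 state u0.
        u0 = u.copy()
        for a in alphas:
            u = u0 - a * dt * residual(u)
        return u

    # Scalar decay du/dt = -u as a smoke test.
    u = np.array([1.0])
    for _ in range(100):
        u = multistage_rk(lambda x: x, u, dt=0.1)
    print(u)    # approaches exp(-10)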
Mergias, I; Moustakas, K; Papadopoulos, A; Loizidou, M
2007-08-25
Each alternative scheme for treating a vehicle at its end of life has its own consequences from a social, environmental, economic and technical point of view. Furthermore, the criteria used to determine these consequences are often contradictory and not equally important. In the presence of multiple conflicting criteria, an optimal alternative scheme never exists. A multiple-criteria decision aid (MCDA) method to aid the Decision Maker (DM) in selecting the best compromise scheme for the management of End-of-Life Vehicles (ELVs) is presented in this paper. The constitution of a set of alternative schemes, the selection of a list of relevant criteria to evaluate these alternative schemes and the choice of an appropriate management system are also analyzed in this framework. The proposed procedure relies on the PROMETHEE method, which belongs to the well-known family of multiple criteria outranking methods. For this purpose, level, linear and Gaussian functions are used as preference functions.
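The level, linear and Gaussian preference functions named here have standard closed forms in the PROMETHEE literature. A sketch of those forms, with the indifference threshold q, strict preference threshold p, and Gaussian parameter s as user-chosen inputs:

```python
import math

def level_pref(d, q, p):
    """Level criterion: 0 up to q, 1/2 between q and p, 1 beyond p."""
    if d <= q:
        return 0.0
    return 0.5 if d <= p else 1.0

def linear_pref(d, q, p):
    """Linear criterion: ramps from 0 at the indifference threshold q
    to 1 at the strict preference threshold p."""
    if d <= q:
        return 0.0
    return (d - q) / (p - q) if d <= p else 1.0

def gaussian_pref(d, s):
    """Gaussian criterion with inflection parameter s."""
    return 0.0 if d <= 0 else 1.0 - math.exp(-d * d / (2 * s * s))
```

Here d is the pairwise difference between two alternatives on one criterion; PROMETHEE aggregates these preference degrees over all criteria and pairs to rank the alternatives.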
NASA Astrophysics Data System (ADS)
Zhang, Chuang; Guo, Zhaoli; Chen, Songze
2017-12-01
An implicit kinetic scheme is proposed to solve the stationary phonon Boltzmann transport equation (BTE) for multiscale heat transfer problems. Compared to the conventional discrete ordinate method, the present method employs a macroscopic equation to accelerate the convergence in the diffusive regime. The macroscopic equation can be taken as a moment equation for the phonon BTE. The heat flux in the macroscopic equation is evaluated from the nonequilibrium distribution function in the BTE, while the equilibrium state in the BTE is determined by the macroscopic equation. These two processes exchange information across scales, such that the method is applicable to problems with a wide range of Knudsen numbers. Implicit discretization is implemented to solve both the macroscopic equation and the BTE. In addition, a memory reduction technique, originally developed for the stationary kinetic equation, is also extended to the phonon BTE. Numerical comparisons show that the present scheme can predict reasonable results in both ballistic and diffusive regimes with high efficiency, while the memory requirement is on the same order as solving the Fourier law of heat conduction. The excellent agreement with benchmark results and the rapid convergence history show that the proposed macro-micro coupling is a feasible solution to multiscale heat transfer problems.
Measurement and tricubic interpolation of the magnetic field for the OLYMPUS experiment
NASA Astrophysics Data System (ADS)
Bernauer, J. C.; Diefenbach, J.; Elbakian, G.; Gavrilov, G.; Goerrissen, N.; Hasell, D. K.; Henderson, B. S.; Holler, Y.; Karyan, G.; Ludwig, J.; Marukyan, H.; Naryshkin, Y.; O'Connor, C.; Russell, R. L.; Schmidt, A.; Schneekloth, U.; Suvorov, K.; Veretennikov, D.
2016-07-01
The OLYMPUS experiment used a 0.3 T toroidal magnetic spectrometer to measure the momenta of outgoing charged particles. In order to accurately determine particle trajectories, knowledge of the magnetic field was needed throughout the spectrometer volume. For that purpose, the magnetic field was measured at over 36,000 positions using a three-dimensional Hall probe actuated by a system of translation tables. We used these field data to fit a numerical magnetic field model, which could be employed to calculate the magnetic field at any point in the spectrometer volume. Calculations with this model were computationally intensive; for analysis applications where speed was crucial, we pre-computed the magnetic field and its derivatives on an evenly spaced grid so that the field could be interpolated between grid points. We developed a spline-based interpolation scheme suitable for SIMD implementations, with a memory layout chosen to minimize space and optimize the cache behavior to quickly calculate field values. This scheme requires only one-eighth of the memory needed to store necessary coefficients compared with a previous scheme (Lekien and Marsden, 2005 [1]). This method was accurate for the vast majority of the spectrometer volume, though special fits and representations were needed to improve the accuracy close to the magnet coils and along the toroidal axis.
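The precompute-then-interpolate idea can be pictured with a much simpler stand-in for the tricubic spline scheme described above: trilinear interpolation on an evenly spaced grid. The field array, origin, and spacing below are hypothetical, and the OLYMPUS memory layout and SIMD details are not reproduced.

```python
import numpy as np

def trilinear(field, origin, spacing, point):
    """Trilinear interpolation of a precomputed vector field sampled on
    an evenly spaced grid; `field` has shape (nx, ny, nz, 3). Assumes
    `point` lies strictly inside the grid."""
    t = (np.asarray(point) - origin) / spacing   # fractional grid index
    i = np.floor(t).astype(int)
    f = t - i                                    # weights in [0, 1)
    out = np.zeros(3)
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = (f[0] if dx else 1 - f[0]) \
                  * (f[1] if dy else 1 - f[1]) \
                  * (f[2] if dz else 1 - f[2])
                out += w * field[i[0] + dx, i[1] + dy, i[2] + dz]
    return out
```

A spline scheme like the one described replaces the eight corner values with precomputed spline coefficients per cell, which is where the memory layout and the reported factor-of-eight saving come in.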
NASA Astrophysics Data System (ADS)
Tomaro, Robert F.
1998-07-01
The present research is aimed at developing a higher-order, spatially accurate scheme for both steady and unsteady flow simulations using unstructured meshes. The resulting scheme must work on a variety of general problems to ensure the creation of a flexible, reliable and accurate aerodynamic analysis tool. To calculate the flow around complex configurations, unstructured grids and the associated flow solvers have been developed. Efficient simulations require the minimum use of computer memory and computational time. Unstructured flow solvers typically require more computer memory than a structured flow solver due to the indirect addressing of the cells. The approach taken in the present research was to modify an existing three-dimensional unstructured flow solver to first decrease the computational time required for a solution and then to increase the spatial accuracy. The terms required to simulate flow involving non-stationary grids were also implemented. First, an implicit solution algorithm was implemented to replace the existing explicit procedure. Several test cases, including internal and external, inviscid and viscous, two-dimensional, three-dimensional and axisymmetric problems, were simulated for comparison between the explicit and implicit solution procedures. The increased efficiency and robustness of the modified code due to the implicit algorithm were demonstrated. Two unsteady test cases, a plunging airfoil and a wing undergoing bending and torsion, were simulated using the implicit algorithm modified to include the terms required for a moving and/or deforming grid. Second, a higher-than-second-order spatially accurate scheme was developed and implemented into the baseline code. Third- and fourth-order spatially accurate schemes were implemented and tested. The original dissipation was modified to include higher-order terms and modified near shock waves to limit pre- and post-shock oscillations. The unsteady cases were repeated using the higher-order spatially accurate code, and the new solutions were compared with those obtained using the second-order spatially accurate scheme. Finally, the increased efficiency of using an implicit solution algorithm in a production Computational Fluid Dynamics flow solver was demonstrated for steady and unsteady flows. A third- and fourth-order spatially accurate scheme has been implemented, creating a basis for a state-of-the-art aerodynamic analysis tool.
Kaplan, Warren A; Ashigbie, Paul G; Brooks, Mohamad I; Wirtz, Veronika J
2017-01-01
Many middle-income countries are scaling up health insurance schemes to provide financial protection and access to affordable medicines to poor and uninsured populations. Although there is a wealth of evidence on how high-income countries with mature insurance schemes manage cost-effective use of medicines, there is limited evidence on the strategies used in middle-income countries. This paper compares the medicines management strategies that four insurance schemes in middle-income countries use to improve access and cost-effective use of medicines among beneficiaries. We compare key strategies promoting cost-effective medicines use in the New Rural Cooperative Medical Scheme (NCMS) in China, the National Health Insurance Scheme in Ghana, Jamkesmas in Indonesia and Seguro Popular in Mexico. Through the peer-reviewed and grey literature as of late 2013, we identified strategies that met our inclusion criteria as well as any evidence showing if, and/or how, these strategies affected medicines management. Stakeholders involved in and affected by medicines coverage policies in these insurance schemes were asked to provide relevant documents describing the medicines-related aspects of these insurance programs. We also asked them specifically to identify publications discussing the unintended consequences of the strategies implemented. Use of formularies, bulk procurement, standard treatment guidelines and separation of prescribing and dispensing were present in all four schemes. Also, increased transparency through publication of tender agreements and procurement prices was introduced in all four. Common strategies shared by three out of four schemes were medicine price negotiation or rebates, generic reference pricing, fixed salaries for prescribers, accredited preferred provider networks, disease management programs, and monitoring of medicines purchases. Cost-sharing and payment for performance were rarely used. There was a lack of performance monitoring strategies in all schemes. Most of the strategies used in the insurance schemes focus on containing expenditure growth, including budget caps on pharmaceutical expenditures (Mexico) and ceiling prices on medicines (all four countries). There were few strategies targeting quality improvement, as healthcare providers are mostly paid through fixed salaries, irrespective of the quality of their prescribing or the health outcomes actually achieved. Monitoring healthcare system performance has received little attention.
Heuristic pattern correction scheme using adaptively trained generalized regression neural networks.
Hoya, T; Chambers, J A
2001-01-01
In many pattern classification problems, an intelligent neural system is required which can learn newly encountered but misclassified patterns incrementally, while keeping a good classification performance over the past patterns stored in the network. In this paper, a heuristic pattern correction scheme is proposed using adaptively trained generalized regression neural networks (GRNNs). The scheme is based upon both network growing and dual-stage shrinking mechanisms. In the network growing phase, a subset of the misclassified patterns in each incoming data set is iteratively added into the network until all the patterns in the incoming data set are classified correctly. Then, the redundancy in the growing phase is removed in the dual-stage network shrinking. Both long- and short-term memory models are considered in the network shrinking, which are motivated by biological studies of the brain. The learning capability of the proposed scheme is investigated through extensive simulation studies.
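A GRNN is essentially a Nadaraya-Watson kernel estimator over stored patterns, which makes the growing phase above easy to picture: misclassified patterns are simply added as new kernel centres. A minimal sketch, with the smoothing parameter and one-hot label encoding as assumptions and the dual-stage shrinking omitted:

```python
import numpy as np

class GRNN:
    """Minimal generalized regression neural network: stored patterns act
    as Gaussian kernel centres, and prediction is a kernel-weighted vote
    over stored one-hot targets."""
    def __init__(self, sigma=1.0):
        self.sigma, self.X, self.Y = sigma, None, None

    def add(self, X, Y):
        """Grow the network by storing new patterns X with one-hot labels Y."""
        self.X = X if self.X is None else np.vstack([self.X, X])
        self.Y = Y if self.Y is None else np.vstack([self.Y, Y])

    def predict(self, x):
        d2 = ((self.X - x) ** 2).sum(axis=1)
        w = np.exp(-d2 / (2 * self.sigma ** 2))
        return int(np.argmax(w @ self.Y))   # class with the largest vote
```

In the growing phase sketched above, `add` would be called only on the misclassified patterns of each incoming set, iterating until the whole set is classified correctly.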
NASA Astrophysics Data System (ADS)
Rossi, Francesco; Londrillo, Pasquale; Sgattoni, Andrea; Sinigardi, Stefano; Turchetti, Giorgio
2012-12-01
We present `jasmine', an implementation of a fully relativistic, 3D, electromagnetic Particle-In-Cell (PIC) code, capable of running simulations in various laser plasma acceleration regimes on Graphics-Processing-Unit (GPU) HPC clusters. Standard energy/charge preserving FDTD-based algorithms have been implemented using double precision and quadratic (or arbitrary sized) shape functions for the particle weighting. When porting a PIC scheme to the GPU architecture (or, in general, a shared memory environment), the particle-to-grid operations (e.g. the evaluation of the current density) require special care to avoid memory inconsistencies and conflicts. Here we present a robust implementation of this operation that is efficient for any number of particles per cell and particle shape function order. Our algorithm exploits the exposed GPU memory hierarchy and avoids the use of atomic operations, which can hurt performance especially when many particles lie in the same cell. We show the code's multi-GPU scalability results and present a dynamic load-balancing algorithm. The code is written using a python-based C++ meta-programming technique which translates into a high level of modularity and allows for easy performance tuning and simple extension of the core algorithms to various simulation schemes.
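One common way to make particle-to-grid deposition conflict-free without atomics, on GPUs or any shared-memory machine, is to sort particles by cell index and then reduce each run of equal indices. The NumPy sketch below shows the idea with nearest-grid-point weighting; it is a conceptual stand-in, not jasmine's algorithm, which handles arbitrary shape-function orders.

```python
import numpy as np

def deposit_charge(cell_idx, charge, n_cells):
    """Conflict-free charge deposition: sort particles by cell index and
    sum each run of equal indices, so no two writers ever target the
    same cell concurrently (nearest-grid-point weighting for brevity)."""
    order = np.argsort(cell_idx)
    c, q = cell_idx[order], charge[order]
    rho = np.zeros(n_cells)
    starts = np.flatnonzero(np.r_[True, c[1:] != c[:-1]])  # run boundaries
    rho[c[starts]] = np.add.reduceat(q, starts)            # segment sums
    return rho
```

On a GPU the sort becomes a key-value sort and the segment sum a parallel reduction; the point is the same: writers never collide, so no atomic operations are needed.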
A Case Study on Neural Inspired Dynamic Memory Management Strategies for High Performance Computing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vineyard, Craig Michael; Verzi, Stephen Joseph
As high performance computing architectures pursue more computational power there is a need for increased memory capacity and bandwidth as well. A multi-level memory (MLM) architecture addresses this need by combining multiple memory types with different characteristics as varying levels of the same architecture. How to efficiently utilize this memory infrastructure is an unknown challenge, and in this research we sought to investigate whether neural inspired approaches can meaningfully help with memory management. In particular we explored neurogenesis inspired resource allocation, and were able to show a neural inspired mixed controller policy can beneficially impact how MLM architectures utilize memory.
SMT-Aware Instantaneous Footprint Optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roy, Probir; Liu, Xu; Song, Shuaiwen
Modern architectures employ simultaneous multithreading (SMT) to increase thread-level parallelism. SMT threads share many functional units and the whole memory hierarchy of a physical core. Without a careful code design, SMT threads can easily contend with each other for these shared resources, causing severe performance degradation. Minimizing SMT thread contention for HPC applications running on dedicated platforms is very challenging, because they usually spawn threads within Single Program Multiple Data (SPMD) models. To address this important issue, we introduce a simple scheme for SMT-aware code optimization, which aims to reduce the memory contention across SMT threads.
Remote preparation of an atomic quantum memory.
Rosenfeld, Wenjamin; Berner, Stefan; Volz, Jürgen; Weber, Markus; Weinfurter, Harald
2007-02-02
Storage and distribution of quantum information are key elements of quantum information processing and future quantum communication networks. Here, using atom-photon entanglement as the main physical resource, we experimentally demonstrate the preparation of a distant atomic quantum memory. Applying a quantum teleportation protocol on a locally prepared state of a photonic qubit, we realized this so-called remote state preparation on a single, optically trapped 87Rb atom. We evaluated the performance of this scheme by the full tomography of the prepared atomic state, reaching an average fidelity of 82%.
Zhang, Rui; Garner, Sean R; Hau, Lene Vestergaard
2009-12-04
A Bose-Einstein condensate confined in an optical dipole trap is used to generate long-term coherent memory for light, and storage times of more than 1 s are observed. Phase coherence of the condensate as well as controlled manipulations of elastic and inelastic atomic scattering processes are utilized to increase the storage fidelity by several orders of magnitude over previous schemes. The results have important applications for creation of long-distance quantum networks and for generation of entangled states of light and matter.
Large-Constraint-Length, Fast Viterbi Decoder
NASA Technical Reports Server (NTRS)
Collins, O.; Dolinar, S.; Hsu, In-Shek; Pollara, F.; Olson, E.; Statman, J.; Zimmerman, G.
1990-01-01
Scheme for efficient interconnection makes VLSI design feasible. Concept for fast Viterbi decoder provides for processing of convolutional codes of constraint length K up to 15 and rates of 1/2 to 1/6. Fully parallel (but bit-serial) architecture developed for decoder of K = 7 implemented in single dedicated VLSI circuit chip. Contains six major functional blocks. VLSI circuits perform branch metric computations, add-compare-select operations, and then store decisions in traceback memory. Traceback processor reads appropriate memory locations and puts out decoded bits. Used as building block for decoders of larger K.
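The add-compare-select and traceback stages described above can be made concrete with a small software Viterbi decoder. The sketch below uses a K = 3, rate-1/2 convolutional code with generators (7, 5) octal, chosen for brevity; the hardware decoder described here handles K up to 15 and lower rates.

```python
import numpy as np

G = (0b111, 0b101)   # generator polynomials of the K=3, rate-1/2 code
NSTATES = 4          # 2**(K-1) encoder states

def parity(x):
    return bin(x).count("1") & 1

def encode_step(state, bit):
    reg = (bit << 2) | state                   # (b_t, b_t-1, b_t-2)
    out = [parity(reg & g) for g in G]
    return ((bit << 1) | (state >> 1)), out    # shift in the new bit

def viterbi_decode(received):
    """Hard-decision Viterbi: add-compare-select at each step, then a
    traceback through the stored decisions (the 'traceback memory')."""
    T = len(received) // 2
    INF = 10**9
    metric = [0] + [INF] * (NSTATES - 1)       # encoder starts in state 0
    decisions = np.zeros((T, NSTATES, 2), dtype=np.int64)
    for t in range(T):
        r = received[2 * t:2 * t + 2]
        new = [INF] * NSTATES
        for s in range(NSTATES):
            for b in (0, 1):
                ns, out = encode_step(s, b)
                m = metric[s] + (out[0] != r[0]) + (out[1] != r[1])
                if m < new[ns]:                # compare-select
                    new[ns] = m
                    decisions[t, ns] = (s, b)
        metric = new
    s = int(np.argmin(metric))                 # traceback from best state
    bits = []
    for t in range(T - 1, -1, -1):
        s, b = decisions[t, s]
        bits.append(int(b))
    return bits[::-1]
```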
NASA Astrophysics Data System (ADS)
Wang, Tai-Min; Chien, Wei-Yu; Hsu, Chia-Ling; Lin, Chrong Jung; King, Ya-Chin
2018-04-01
In this paper, we present a new differential p-channel multiple-time programmable (MTP) memory cell that is fully compatible with advanced 16 nm CMOS fin field-effect transistor (FinFET) logic processes. This differential MTP cell stores complementary data in floating gates coupled by a slot contact structure, which makes differential read currents possible on a single cell. In nanoscale CMOS FinFET logic processes, the gate dielectric layer becomes too thin to retain charges inside floating gates for nonvolatile data storage. By using a differential architecture, the sensing window of the cell can be extended and maintained by an advanced blanket boost scheme. The charge retention problem in floating gate cells can be improved by periodically restoring lost charges when significant read window narrowing occurs. In addition to high programming efficiency, these p-channel MTP cells also exhibit good cycling endurance as well as disturbance immunity. The blanket boost scheme can remedy the charge loss problem under thin gate dielectrics.
Self-adaptive relevance feedback based on multilevel image content analysis
NASA Astrophysics Data System (ADS)
Gao, Yongying; Zhang, Yujin; Fu, Yu
2001-01-01
In current content-based image retrieval systems, it is generally accepted that obtaining high-level image features is a key to improving the querying. Among the related techniques, relevance feedback has become a hot research aspect because it combines information from the user to refine the querying results. In practice, many methods have been proposed to achieve the goal of relevance feedback. In this paper, a new scheme for relevance feedback is proposed. Unlike previous methods for relevance feedback, our scheme provides a self-adaptive operation. First, based on multi-level image content analysis, the relevant images from the user can be automatically analyzed at different levels and the querying modified according to the different analysis results. Second, to make it more convenient for the user, the relevance feedback procedure can be conducted with or without memory. To test the performance of the proposed method, a practical semantic-based image retrieval system has been established, and the querying results gained by our self-adaptive relevance feedback are given.
NASA Astrophysics Data System (ADS)
Gupta, Manish K.; Navarro, Erik J.; Moulder, Todd A.; Mueller, Jason D.; Balouchi, Ashkan; Brown, Katherine L.; Lee, Hwang; Dowling, Jonathan P.
2015-05-01
The storage of quantum states and their distribution over long distances are essential for emerging quantum technologies such as quantum networks and long-distance quantum cryptography. The implementation of polarization-based quantum communication is limited by signal loss and decoherence caused by the birefringence of a single-mode fiber. We investigate the Knill dynamical decoupling scheme, implemented using half-wave plates in a single-mode fiber, to minimize decoherence of the polarization qubit, and show that a fidelity greater than 99% can be achieved in the absence of rotation error and a fidelity greater than 96% in the presence of rotation error. Such a scheme can be used to preserve any quantum state with high fidelity and has potential applications for constructing all-optical quantum memory, quantum delay lines, and quantum repeaters. The authors would like to acknowledge the support from the Air Force Office of Scientific Research, the Army Research Office, and the National Science Foundation.
Foli, Samson; Ros-Tonen, Mirjam A F; Reed, James; Sunderland, Terry
2018-07-01
In recognition of the failures of sectoral approaches to overcome global challenges of biodiversity loss, climate change, food insecurity and poverty, scientific discourse on biodiversity conservation and sustainable development is shifting towards integrated landscape governance arrangements. Current landscape initiatives, however, depend heavily on external actors and funding, raising the question of whether, how, and under what conditions locally embedded resource management schemes can serve as entry points for the implementation of integrated landscape approaches. This paper assesses the entry-point potential of three established natural resource management schemes in West Africa that target landscape degradation with the involvement of local communities: the Chantier d'Aménagement Forestier scheme encompassing forest management sites across Burkina Faso, and the Modified Taungya System and community wildlife resource management (CREMA) initiatives in Ghana. Based on a review of the current literature, we analyze the extent to which design principles that define a landscape approach apply to these schemes. We found that the CREMA meets most of the desired criteria, but that its scale may be too limited to guarantee effective landscape governance, hence requiring upscaling. Conversely, the other two initiatives lack fundamental design components regarding integrated approaches, continual learning, and capacity building. Monitoring and evaluation bodies and participatory learning and negotiation platforms could enhance the schemes' alignment with integrated landscape approaches.
Reasons for withdrawing belief in vivid autobiographical memories.
Scoboria, Alan; Boucher, Chantal; Mazzoni, Giuliana
2015-01-01
Previous studies have shown that many people hold personal memories for events that they no longer believe occurred. This study examines the reasons that people provide for choosing to reduce autobiographical belief in vividly recollected autobiographical memories. A body of non-believed memories provided by 374 individuals was reviewed to develop a qualitatively derived categorisation system. The final scheme consisted of 8 major categories (in descending order of mention): social feedback, event plausibility, alternative attributions, general memory beliefs, internal event features, consistency with external evidence, views of self/others, and personal motivation, along with numerous sub-categories. Independent raters coded the reports and judged the primary reason that each person provided for withdrawing belief. The nature of each category, frequency of category endorsement, category overlap and phenomenological ratings are presented, following which links to related literature and implications are discussed. This study documents that a wide variety of recollective and non-recollective sources of information influence decision-making about the occurrence of autobiographical events.
An Implicit Algorithm for the Numerical Simulation of Shape-Memory Alloys
DOE Office of Scientific and Technical Information (OSTI.GOV)
Becker, R; Stolken, J; Jannetti, C
Shape-memory alloys (SMA) have the potential to be used in a variety of interesting applications due to their unique properties of pseudoelasticity and the shape-memory effect. However, in order to design SMA devices efficiently, a physics-based constitutive model is required to accurately simulate the behavior of shape-memory alloys. The scope of this work is to extend the numerical capabilities of the SMA constitutive model developed by Jannetti et al. (2003) to handle large-scale polycrystalline simulations. The constitutive model is implemented within the finite-element software ABAQUS/Standard using a user-defined material subroutine, or UMAT. To improve the efficiency of the numerical simulations, so that polycrystalline specimens of shape-memory alloys can be modeled, a fully implicit algorithm has been implemented to integrate the constitutive equations. Using an implicit integration scheme increases the efficiency of the UMAT over the previously implemented explicit integration method by a factor of more than 100 for single crystal simulations.
Protecting solid-state spins from a strongly coupled environment
NASA Astrophysics Data System (ADS)
Chen, Mo; Calvin Sun, Won Kyu; Saha, Kasturi; Jaskula, Jean-Christophe; Cappellaro, Paola
2018-06-01
Quantum memories are critical for solid-state quantum computing devices and a good quantum memory requires both long storage time and fast read/write operations. A promising system is the nitrogen-vacancy (NV) center in diamond, where the NV electronic spin serves as the computing qubit and a nearby nuclear spin as the memory qubit. Previous works used remote, weakly coupled 13C nuclear spins, trading read/write speed for long storage time. Here we focus instead on the intrinsic strongly coupled 14N nuclear spin. We first quantitatively understand its decoherence mechanism, identifying as its source the electronic spin that acts as a quantum fluctuator. We then propose a scheme to protect the quantum memory from the fluctuating noise by applying dynamical decoupling on the environment itself. We demonstrate a factor of 3 enhancement of the storage time in a proof-of-principle experiment, showing the potential for a quantum memory that combines fast operation with long coherence time.
Drawing a dog: The role of working memory and executive function.
Panesi, Sabrina; Morra, Sergio
2016-12-01
Previous research suggests that young children draw animals by adapting their scheme for the human figure. This can be considered an early form of drawing flexibility. This study investigated preschoolers' ability to draw a dog that is different from the human figure. The role of working memory capacity and executive function was examined. The participants were 123 children (36-73 months old) who were required to draw both a person and a dog. The dog figure was scored on a list of features that could render it different from the human figure. Regression analyses showed that both working memory capacity and executive function predicted development in the dog drawing; the dog drawing score correlated with working memory capacity and executive function, even partialling out age, motor coordination, and drawing ability (measured with Goodenough's Draw-a-Man test). These results suggest that both working memory capacity and executive function play an important role in the early development of drawing flexibility. The implications regarding executive functions and working memory are also discussed.
Memory recall and spike-frequency adaptation
NASA Astrophysics Data System (ADS)
Roach, James P.; Sander, Leonard M.; Zochowski, Michal R.
2016-05-01
The brain can reproduce memories from partial data; this ability is critical for memory recall. The process of memory recall has been studied using autoassociative networks such as the Hopfield model. This kind of model reliably converges to stored patterns that contain the memory. However, it is unclear how the behavior is controlled by the brain so that after convergence to one configuration, it can proceed with recognition of another one. In the Hopfield model, this happens only through unrealistic changes of an effective global temperature that destabilizes all stored configurations. Here we show that spike-frequency adaptation (SFA), a common mechanism affecting neuron activation in the brain, can provide state-dependent control of pattern retrieval. We demonstrate this in a Hopfield network modified to include SFA, and also in a model network of biophysical neurons. In both cases, SFA allows for selective stabilization of attractors with different basins of attraction, and also for temporal dynamics of attractor switching that is not possible in standard autoassociative schemes. The dynamics of our models give a plausible account of different sorts of memory retrieval.
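The attractor-switching role of adaptation can be sketched with a toy Hopfield network in which a slow per-neuron variable raises the effective threshold of persistently active units. This is a loose illustration of the mechanism, not the authors' model; the adaptation gain, time constant, and update rule are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 100, 3
patterns = rng.choice([-1, 1], size=(P, N))
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0)                  # Hebbian weights, no self-coupling

def recall_with_sfa(steps=5000, g_adapt=0.8, tau=300.0):
    """Asynchronous Hopfield updates plus a slow per-neuron adaptation
    variable a_i that tracks recent activity and raises the effective
    threshold of persistently active units, so the network can leave an
    attractor instead of staying in it forever."""
    s = np.where(rng.random(N) < 0.9, patterns[0], -patterns[0])  # noisy cue
    a = np.zeros(N)
    overlaps = []
    for _ in range(steps):
        i = rng.integers(N)
        h = W[i] @ s - g_adapt * a[i]   # adaptation biases against +1
        s[i] = 1 if h >= 0 else -1
        a += (0.5 * (s + 1) - a) / tau  # a_i -> 1 for active units, 0 otherwise
        overlaps.append(patterns @ s / N)
    return np.array(overlaps)           # overlap with each stored pattern
```

With the adaptation gain set to zero this reduces to standard Hopfield recall; with it switched on, the currently retrieved pattern slowly destabilizes, which is the state-dependent control of retrieval the abstract describes.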
Interactive high-resolution isosurface ray casting on multicore processors.
Wang, Qin; JaJa, Joseph
2008-01-01
We present a new method for the interactive rendering of isosurfaces using ray casting on multi-core processors. This method consists of a combination of an object-order traversal that coarsely identifies possible candidate 3D data blocks for each small set of contiguous pixels, and an isosurface ray casting strategy tailored for the resulting limited-size lists of candidate 3D data blocks. While static screen partitioning is widely used in the literature, our scheme performs dynamic allocation of groups of ray casting tasks to ensure almost equal loads among the different threads running on multi-cores while maintaining spatial locality. We also make careful use of the memory management environment commonly present in multi-core processors. We test our system on a two-processor Clovertown platform, each processor a quad-core 1.86 GHz Intel Xeon, for a number of widely different benchmarks. The detailed experimental results show that our system is efficient and scalable, and achieves high cache performance and excellent load balancing, resulting in an overall performance that is superior to any of the previous algorithms. In fact, we achieve interactive isosurface rendering on a 1024×1024 screen for all the datasets tested, up to the maximum size of the main memory of our platform.
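The dynamic allocation of ray-casting task groups can be pictured as threads pulling small contiguous-pixel tiles from a shared queue, so an expensive tile never stalls a statically assigned screen region. A minimal sketch, where the tile size, thread count, and `cast` callback are placeholders rather than the paper's implementation:

```python
import queue, threading

def render_dynamic(width, height, cast, block=32, n_threads=8):
    """Dynamic load balancing: worker threads repeatedly pull small tiles
    of contiguous pixels from a shared queue. `cast(x0, y0, x1, y1)`
    stands in for the per-tile ray caster."""
    tiles = queue.Queue()
    for y in range(0, height, block):
        for x in range(0, width, block):
            tiles.put((x, y, min(x + block, width), min(y + block, height)))

    def worker():
        while True:
            try:
                tile = tiles.get_nowait()
            except queue.Empty:
                return
            cast(*tile)

    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()

render_dynamic(256, 256, cast=lambda x0, y0, x1, y1: None)  # placeholder caster
```

Small contiguous tiles keep the spatial locality that the candidate-block lists rely on, while the shared queue provides the near-equal thread loads reported above.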
A direct method for unfolding the resolution function from measurements of neutron induced reactions
NASA Astrophysics Data System (ADS)
Žugec, P.; Colonna, N.; Sabate-Gilarte, M.; Vlachoudis, V.; Massimi, C.; Lerendegui-Marco, J.; Stamatopoulos, A.; Bacak, M.; Warren, S. G.; n TOF Collaboration
2017-12-01
The paper explores the numerical stability and the computational efficiency of a direct method for unfolding the resolution function from measurements of neutron induced reactions. A detailed resolution function formalism is laid out, followed by an overview of challenges present in a practical implementation of the method. A special matrix storage scheme is developed in order both to facilitate the memory management of the resolution function matrix and to increase the computational efficiency of the matrix multiplication and decomposition procedures. Due to its admirable computational properties, a Cholesky decomposition is at the heart of the unfolding procedure. With the smallest but necessary modification of the matrix to be decomposed, the method is successfully applied to systems of size 10⁵ × 10⁵. However, the amplification of the uncertainties during the direct inversion procedures limits the applicability of the method to high-precision measurements of neutron induced reactions.
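The Cholesky-based solve, including a small stabilizing modification of the matrix, can be sketched generically. Below, a least-squares unfolding via the normal equations with a tiny diagonal term added before factorization; this regularization form is an assumption standing in for the "smallest but necessary modification" the authors describe.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def unfold(R, y, eps=1e-10):
    """Least-squares unfolding via the normal equations. A tiny diagonal
    term is added so the Cholesky factorization of R^T R succeeds even
    when R is nearly singular."""
    A = R.T @ R + eps * np.eye(R.shape[1])
    b = R.T @ y
    c, low = cho_factor(A)          # symmetric positive-definite factorization
    return cho_solve((c, low), b)
```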
Ahmed, Abdulghani Ali; Xue Li, Chua
2018-01-01
Cloud storage service allows users to store their data online, so that they can remotely access, maintain, manage, and back up data from anywhere via the Internet. Although helpful, this storage creates a challenge to digital forensic investigators and practitioners in collecting, identifying, acquiring, and preserving evidential data. This study proposes an investigation scheme for analyzing data remnants and determining probative artifacts in a cloud environment. Using pCloud as a case study, this research collected the data remnants available on end-user device storage following the storing, uploading, and accessing of data in the cloud storage. Data remnants were collected from several sources, including client software files, directory listings, prefetch, registry, network PCAP, browser, and memory and link files. Results demonstrate that the collected remnant data are beneficial in determining a sufficient number of artifacts about the investigated cybercrime.
NASA Astrophysics Data System (ADS)
Minnegaliev, M. M.; Dyakonov, I. V.; Gerasimov, K. I.; Kalinkin, A. A.; Kulik, S. P.; Moiseev, S. A.; Saygin, M. Yu; Urmancheev, R. V.
2018-04-01
We produced optical waveguides in the ¹⁶⁷Er³⁺:⁷LiYF₄ crystal with diameters ranging from 30 to 100 μm by using the depressed-cladding approach with a femtosecond laser. Stationary and coherent spectroscopy was performed on the 809 nm optical transitions between the hyperfine sublevels of the ⁴I₁₅/₂ and ⁴I₉/₂ multiplets of ¹⁶⁷Er³⁺ ions both inside and outside the waveguides. It was found that the spectra of ¹⁶⁷Er³⁺ were slightly broadened and shifted inside the waveguides compared to the bulk crystal spectra. We managed to observe a two-pulse photon echo on this transition and determined phase relaxation times for each waveguide. The experimental results show that such rare-earth-doped crystal waveguides can be used in optical quantum memory and integrated quantum schemes.
FRIT characterized hierarchical kernel memory arrangement for multiband palmprint recognition
NASA Astrophysics Data System (ADS)
Kisku, Dakshina R.; Gupta, Phalguni; Sing, Jamuna K.
2015-10-01
In this paper, we present a hierarchical kernel associative memory (H-KAM) based computational model with Finite Ridgelet Transform (FRIT) representation for multispectral palmprint recognition. To characterize a multispectral palmprint image, the Finite Ridgelet Transform is used to achieve a very compact and distinctive representation of linear singularities, capturing singularities along lines and edges. The proposed system uses the Finite Ridgelet Transform to represent the multispectral palmprint image, which is then modeled by Kernel Associative Memories. Finally, the recognition scheme is thoroughly tested on the benchmark multispectral palmprint database CASIA. For recognition, a Bayesian classifier is used. The experimental results demonstrate the robustness of the proposed system under different wavelengths of palm images.
Vacuum-induced quantum memory in an opto-electromechanical system
NASA Astrophysics Data System (ADS)
Qin, Li-Guo; Wang, Zhong-Yang; Wu, Shi-Chao; Gong, Shang-Qing; Ma, Hong-Yang; Jing, Jun
2018-03-01
We propose a scheme to implement electrically controlled quantum memory based on vacuum-induced transparency (VIT) in a high-Q tunable cavity, which is capacitively coupled to a mechanically variable capacitor by a charged mechanical cavity mirror as an interface. We analyze the changes of the cavity photons arising from vacuum-induced-Raman process and discuss VIT in an atomic ensemble trapped in the cavity. By slowly adjusting the voltage on the capacitor, the VIT can be adiabatically switched on or off, meanwhile, the transfer between the probe photon state and the atomic spin state can be electrically and adiabatically modulated. Therefore, we demonstrate a vacuum-induced quantum memory by electrically manipulating the mechanical mirror of the cavity based on electromagnetically induced transparency mechanism.
[Short-term memory characteristics of vibration intensity tactile perception on human wrist].
Hao, Fei; Chen, Li-Juan; Lu, Wei; Song, Ai-Guo
2014-12-25
In this study, a recall experiment and a recognition experiment were designed to assess the human wrist's short-term memory characteristics of tactile perception of vibration intensity, using a novel homemade vibrotactile display device based on the spatiotemporal combination of vibrations from multiple micro vibration motors. Based on the experimental data, the short-term memory span, recognition accuracy and reaction time for vibration intensity were analyzed. The experimental results support several conclusions: (1) the average short-term memory span of tactile perception of vibration intensity is 3 ± 1 items; (2) the greater the defined difference between two adjacent discrete intensities of vibrotactile stimulation, the better the average short-term memory span of the human wrist; (3) there is a clear difference in average short-term memory span for vibration intensity between males and females; (4) the mechanism of information extraction in short-term memory of vibrotactile display is a traversing scanning process by comparison; (5) the recognition accuracy and reaction time of vibrotactile display compare unfavourably with those of vision and audition. The results of this study are important for designing vibrotactile display coding schemes.
CMOS imager for pointing and tracking applications
NASA Technical Reports Server (NTRS)
Sun, Chao (Inventor); Pain, Bedabrata (Inventor); Yang, Guang (Inventor); Heynssens, Julie B. (Inventor)
2006-01-01
Systems and techniques to realize pointing and tracking applications with CMOS imaging devices. In general, in one implementation, the technique includes: sampling multiple rows and multiple columns of an active pixel sensor array into a memory array (e.g., an on-chip memory array), and reading out the multiple rows and multiple columns sampled in the memory array to provide image data with reduced motion artifact. Various operation modes may be provided, including TDS, CDS, CQS, a tracking mode to read out multiple windows, and/or a mode employing a sample-first-read-later readout scheme. The tracking mode can take advantage of a diagonal switch array. The diagonal switch array, the active pixel sensor array and the memory array can be integrated onto a single imager chip with a controller. This imager device can be part of a larger imaging system for both space-based applications and terrestrial applications.
On nonlinear finite element analysis in single-, multi- and parallel-processors
NASA Technical Reports Server (NTRS)
Utku, S.; Melosh, R.; Islam, M.; Salama, M.
1982-01-01
Numerical solution of nonlinear equilibrium problems of structures by means of Newton-Raphson type iterations is reviewed. Each step of the iteration is shown to correspond to the solution of a linear problem, therefore the feasibility of the finite element method for nonlinear analysis is established. Organization and flow of data for various types of digital computers, such as single-processor/single-level memory, single-processor/two-level-memory, vector-processor/two-level-memory, and parallel-processors, with and without sub-structuring (i.e. partitioning) are given. The effect of the relative costs of computation, memory and data transfer on substructuring is shown. The idea of assigning comparable size substructures to parallel processors is exploited. Under Cholesky type factorization schemes, the efficiency of parallel processing is shown to decrease due to the occasional shared data, just as that due to the shared facilities.
Spiers Memorial Lecture. Molecular mechanics and molecular electronics.
Beckman, Robert; Beverly, Kris; Boukai, Akram; Bunimovich, Yuri; Choi, Jang Wook; DeIonno, Erica; Green, Johnny; Johnston-Halperin, Ezekiel; Luo, Yi; Sheriff, Bonnie; Stoddart, Fraser; Heath, James R
2006-01-01
We describe our research into building integrated molecular electronics circuitry for a diverse set of functions, and with a focus on the fundamental scientific issues that surround this project. In particular, we discuss experiments aimed at understanding the function of bistable rotaxane molecular electronic switches by correlating the switching kinetics and ground state thermodynamic properties of those switches in various environments, ranging from the solution phase to a Langmuir monolayer of the switching molecules sandwiched between two electrodes. We discuss various devices, low bit-density memory circuits, and ultra-high density memory circuits that utilize the electrochemical switching characteristics of these molecules in conjunction with novel patterning methods. We also discuss interconnect schemes that are capable of bridging the micrometre to submicrometre length scales of conventional patterning approaches to the near-molecular length scales of the ultra-dense memory circuits. Finally, we discuss some of the challenges associated with fabricated ultra-dense molecular electronic integrated circuits.
NASA Astrophysics Data System (ADS)
Cao, Jian; Chen, Jing-Bo; Dai, Meng-Xue
2018-01-01
An efficient finite-difference frequency-domain modeling of seismic wave propagation relies on the discrete schemes and appropriate solving methods. The average-derivative optimal scheme for scalar wave modeling is advantageous in terms of storage savings for the system of linear equations and flexibility for arbitrary directional sampling intervals. However, using a LU-decomposition-based direct solver to solve its resulting system of linear equations is very costly in both memory and computational requirements. To address this issue, we consider establishing a multigrid-preconditioned BiCGSTAB iterative solver fit for the average-derivative optimal scheme. The choice of the preconditioning matrix and its corresponding multigrid components is made with the help of Fourier spectral analysis and local mode analysis, respectively, which is important for convergence. Furthermore, we find that for computation with unequal directional sampling intervals, the anisotropic smoothing in the multigrid preconditioner may affect the convergence rate of this iterative solver. Successful numerical applications of this iterative solver to homogeneous and heterogeneous models in 2D and 3D are presented, where a significant reduction of computer memory and an improvement of computational efficiency are demonstrated by comparison with the direct solver. In the numerical experiments, we also show that unequal directional sampling intervals will weaken the advantage of this multigrid-preconditioned iterative solver in computing speed or, even worse, could reduce its accuracy in some cases, which implies the need for reasonable control of the directional sampling interval in the discretization.
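The structure of a preconditioned BiCGSTAB solve can be sketched with SciPy, using an incomplete-LU factorization as a stand-in for the multigrid preconditioner described above; the toy matrix below is a shifted 2D Laplacian, not the average-derivative optimal scheme's matrix.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# toy matrix: a shifted 2D Laplacian standing in for the matrix that the
# average-derivative optimal discretization would produce
n = 64
I = sp.eye(n)
T = sp.diags([1.0, 1.0], [-1, 1], shape=(n, n))
A = (3.5 * sp.eye(n * n) - sp.kron(I, T) - sp.kron(T, I)).tocsc()
b = np.random.default_rng(0).standard_normal(n * n)

# incomplete-LU preconditioner as a stand-in for the multigrid cycle
ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)
M = spla.LinearOperator(A.shape, ilu.solve)

x, info = spla.bicgstab(A, b, M=M)   # info == 0 indicates convergence
```

The iterative solver only ever applies A and the preconditioner to vectors, which is where the memory saving over an LU-based direct solve comes from.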
Non-volatile main memory management methods based on a file system.
Oikawa, Shuichi
2014-01-01
There are upcoming non-volatile (NV) memory technologies that provide byte addressability and high performance; PCM, MRAM, and STT-RAM are such examples. Such NV memory can be used as storage because of its data persistency without power supply, while it can be used as main memory because of its high performance that matches up with DRAM. A number of studies have investigated its use for main memory and storage; they were, however, conducted independently. This paper presents methods that enable the integration of main memory and file system management for NV memory. Such integration makes NV memory simultaneously utilizable as both main memory and storage. The presented methods use a file system as their basis for NV memory management. We implemented the proposed methods in the Linux kernel and performed an evaluation on the QEMU system emulator. The evaluation results show that 1) the proposed methods can perform comparably to the existing DRAM memory allocator and significantly better than page swapping, 2) their performance is affected by the internal data structures of a file system, and 3) data structures appropriate for traditional hard disk drives do not always work effectively for byte-addressable NV memory. We also evaluated the effects caused by the longer access latency of NV memory through cycle-accurate full-system simulation. The results show that the effect on page allocation cost is limited if the increase in latency is moderate.
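The core idea, that a file on byte-addressable NV memory can be used directly as main memory, can be pictured from user space with a file-backed memory mapping, even though the paper's kernel-level integration cannot be reproduced this way. A minimal sketch, with the file name and region size as arbitrary choices:

```python
import mmap, os

# Map a file and use the mapping directly as working memory; on a system
# with NV main memory the same load/store path would hit persistent,
# byte-addressable storage.
path, size = "nvm_region.bin", 1 << 20

fd = os.open(path, os.O_CREAT | os.O_RDWR)
os.ftruncate(fd, size)
buf = mmap.mmap(fd, size)        # load/store access, no read()/write() calls

buf[0:5] = b"hello"              # ordinary byte-addressable stores
buf.flush()                      # msync: on NVM this is the persistence point
assert bytes(buf[0:5]) == b"hello"
buf.close()
os.close(fd)
```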
Silva, Bhagya Nathali; Khan, Murad; Han, Kijun
2018-01-01
The emergence of smart devices and smart appliances has highly favored the realization of the smart home concept. Modern smart home systems handle a wide range of user requirements. Energy management and energy conservation are in the spotlight when deploying sophisticated smart homes. However, the performance of energy management systems is highly influenced by user behaviors and adopted energy management approaches. Appliance scheduling is widely accepted as an effective mechanism to manage domestic energy consumption. Hence, we propose a smart home energy management system that reduces unnecessary energy consumption by integrating an automated switching off system with load balancing and appliance scheduling algorithm. The load balancing scheme acts according to defined constraints such that the cumulative energy consumption of the household is managed below the defined maximum threshold. The scheduling of appliances adheres to the least slack time (LST) algorithm while considering user comfort during scheduling. The performance of the proposed scheme has been evaluated against an existing energy management scheme through computer simulation. The simulation results have revealed a significant improvement gained through the proposed LST-based energy management scheme in terms of cost of energy, along with reduced domestic energy consumption facilitated by an automated switching off mechanism. PMID:29495346
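Least-slack-time selection under a cumulative power cap can be sketched in a few lines. Slack is the time to an appliance's deadline minus its remaining run time; the appliance fields and numbers below are hypothetical, not the paper's model.

```python
def schedule_lst(appliances, now, max_load):
    """Pick appliances to run in this slot by least slack time, subject
    to a cumulative power cap. Each appliance is a dict with hypothetical
    fields: deadline, remaining (run time left), and power (kW)."""
    ready = sorted(appliances,
                   key=lambda a: (a["deadline"] - now) - a["remaining"])
    load, chosen = 0.0, []
    for a in ready:
        if load + a["power"] <= max_load:   # load-balancing constraint
            chosen.append(a)
            load += a["power"]
    return chosen

# toy usage
apps = [
    {"name": "washer", "deadline": 18, "remaining": 2, "power": 1.2},
    {"name": "dryer",  "deadline": 20, "remaining": 1, "power": 2.0},
    {"name": "ev",     "deadline": 23, "remaining": 5, "power": 3.3},
]
print([a["name"] for a in schedule_lst(apps, now=16, max_load=4.0)])
```

Here the washer has zero slack and is scheduled first; the EV charger would push the load past the cap and is deferred, so the dryer fills the remaining headroom.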
Forensic Analysis of Windows® Virtual Memory Incorporating the System's Page-File
2008-12-01
Management and Budget, Paperwork Reduction Project (0704-0188) Washington DC 20503. 1. AGENCY USE ONLY (Leave blank) 2. REPORT DATE December...data in a meaningful way. One reason for this is how memory is managed by the operating system. Data belonging to one process can be distributed...way. One reason for this is how memory is managed by the operating system. Data belonging to one process can be distributed arbitrarily across
NASA Astrophysics Data System (ADS)
Tanakamaru, Shuhei; Fukuda, Mayumi; Higuchi, Kazuhide; Esumi, Atsushi; Ito, Mitsuyoshi; Li, Kai; Takeuchi, Ken
2011-04-01
A dynamic codeword transition ECC scheme is proposed for highly reliable solid-state drives, SSDs. By monitoring the error number or the write/erase cycles, the ECC codeword dynamically increases from 512 Byte (+parity) to 1 KByte, 2 KByte, 4 KByte…32 KByte. The proposed ECC with a larger codeword decreases the failure rate after ECC. As a result, the acceptable raw bit error rate, BER, before ECC is enhanced. Assuming a NAND Flash memory which requires 8-bit correction in 512 Byte codeword ECC, a 17-times higher acceptable raw BER than the conventional fixed 512 Byte codeword ECC is realized for the mobile phone application without interleaving. For the MP3 player, digital-still camera and high-speed memory card applications with dual-channel interleaving, a 15-times higher acceptable raw BER is achieved. Finally, for the SSD application with 8-channel interleaving, a 13-times higher acceptable raw BER is realized. Because the ratio of user data to parity bits is the same in each ECC codeword, no additional memory area is required. Note that the reliability of the SSD is improved after manufacturing without a cost penalty. Compared with the conventional ECC with a fixed large 32 KByte codeword, the proposed scheme achieves lower power consumption by introducing a "best-effort" type of operation. In the proposed scheme, during most of the lifetime of the SSD, a weak ECC with a shorter codeword such as 512 Byte (+parity), 1 KByte or 2 KByte is used, and 98% lower power consumption is realized. At the end of the SSD's life, a strong ECC with a 32 KByte codeword is used and highly reliable operation is achieved. The random read performance, estimated by the latency, is also discussed. The latency is below 1.5 ms for ECC codewords up to 32 KByte, which is below the 2 ms average latency of a 15,000 rpm HDD.
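The trade-off between codeword size and acceptable raw BER can be illustrated with a binomial tail: a codeword fails when more than t correctable bit errors occur, and at the same parity ratio a longer codeword averages out error clustering, surviving a raw BER closer to t/n. The t values and raw BER below are illustrative only, not the paper's code parameters.

```python
from scipy.stats import binom

def codeword_fail_prob(n_bits, t, raw_ber):
    """Probability that more than t bit errors land in an n-bit codeword,
    i.e. the codeword fails t-bit-correcting ECC (independent errors)."""
    return binom.sf(t, n_bits, raw_ber)

# same parity ratio, different codeword sizes, raw BER of 1.5e-3
print(codeword_fail_prob(512 * 8, t=8, raw_ber=1.5e-3))      # fails frequently
print(codeword_fail_prob(32768 * 8, t=512, raw_ber=1.5e-3))  # essentially never
```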
... living. Functions affected include memory, language skills, visual perception, problem solving, self-management, and the ability to ...
NASA Astrophysics Data System (ADS)
Leggett, C.; Binet, S.; Jackson, K.; Levinthal, D.; Tatarkhanov, M.; Yao, Y.
2011-12-01
Thermal limitations have forced CPU manufacturers to shift from simply increasing clock speeds to improve processor performance, to producing chip designs with multi- and many-core architectures. Further, the cores themselves can run multiple threads as a zero-overhead context switch, allowing low-level resource sharing (Intel Hyperthreading). To maximize bandwidth and minimize memory latency, memory access has become non-uniform (NUMA). As manufacturers add more cores to each chip, a careful understanding of the underlying architecture is required in order to fully utilize the available resources. We present AthenaMP and the ATLAS event loop manager, the driver of the simulation and reconstruction engines, which have been rewritten to make use of multiple cores by means of event-based parallelism and final-stage I/O synchronization. However, initial studies on 8 and 16 core Intel architectures have shown marked non-linearities as parallel process counts increase, with as much as 30% reductions in event throughput in some scenarios. Since the Intel Nehalem architecture (both Gainestown and Westmere) will be the most common choice for the next round of hardware procurements, an understanding of these scaling issues is essential. Using hardware-based event counters and Intel's Performance Tuning Utility, we have studied the performance bottlenecks at the hardware level and discovered optimization schemes to maximize processor throughput. We have also produced optimization mechanisms, common to all large experiments, that address the extreme nature of today's HEP code, which, due to its size, places huge burdens on the memory infrastructure of today's processors.
Is soil moisture initialization important for seasonal to decadal predictions?
NASA Astrophysics Data System (ADS)
Stacke, Tobias; Hagemann, Stefan
2014-05-01
The state of soil moisture can have a significant impact on regional climate conditions on short time scales of up to several months. However, focusing on seasonal to decadal time scales, it is not clear whether the predictive skill of a global Earth System Model might be enhanced by assimilating soil moisture data or improving the initial soil moisture conditions with respect to observations. As a first attempt to answer this question, we set up an experiment to investigate the lifetime (memory) of extreme soil moisture states in the coupled land-atmosphere model ECHAM6-JSBACH, which is part of the Max Planck Institute for Meteorology's Earth System Model (MPI-ESM). This experiment consists of an ensemble of 3-year simulations which are initialized with extremely wet and dry soil moisture states for different seasons and years. Instead of using common thresholds like wilting point or critical soil moisture, the extreme states were extracted from a reference simulation to ensure that they are within the range of simulated climate variability. As a prerequisite for this experiment, the soil hydrology in JSBACH was improved by replacing the bucket-type soil hydrology scheme with a multi-layer scheme. This new scheme is a more realistic representation of the soil, including percolation and diffusion fluxes between up to five separate layers, the limitation of bare soil evaporation to the uppermost soil layer, and the addition of a long-term water storage below the root zone in regions with deep soil. While the hydrological cycle is not strongly affected by this new scheme, it has some impact on the simulated soil moisture memory, which is mostly strengthened due to the additional deep-layer water storage. Ensemble statistics of the initialization experiment indicate perturbation lifetimes from just a few days up to several seasons for some regions. In general, the strongest effects are seen for wet initialization during northern winter over cold and humid regions, while the shortest memory is found during northern spring. For most regions, the soil moisture memory is either sensitive to wet or to dry perturbations, indicating that soil moisture anomalies interact with the respective weather pattern for a given year and might be able to enhance or dampen extreme conditions. To further investigate this effect, the simulations will be repeated using JSBACH with prescribed meteorological forcing to better disentangle the direct effects of soil moisture initialization and the atmospheric response.
A Regev-type fully homomorphic encryption scheme using modulus switching.
Chen, Zhigang; Wang, Jian; Chen, Liqun; Song, Xinxia
2014-01-01
A critical challenge in a fully homomorphic encryption (FHE) scheme is to manage noise. The modulus switching technique is currently the most efficient noise management technique. When using the modulus switching technique to design and implement a FHE scheme, choosing concrete parameters is an important step, but to the best of our knowledge this step has drawn very little attention in the existing FHE literature. The contributions of this paper are twofold. On one hand, we propose a function for the lower bound of the dimension value in the modulus switching technique, depending on LWE-specific security levels. On the other hand, as a case study, we modify the Brakerski FHE scheme (in Crypto 2012) by using the modulus switching technique. We recommend concrete parameter values for our proposed scheme and provide a security analysis. Our results show that the modified FHE scheme is more efficient than the original Brakerski scheme at the same security level.
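The rounding step at the heart of modulus switching can be sketched for a single coefficient: scale by p/q and round to the nearest integer of the correct parity, so the message bit survives while the noise shrinks proportionally. This toy (BGV-style parity preservation, with arbitrary q, p, and coefficient) ignores the ciphertext vector structure and the parameter selection that the paper actually addresses.

```python
def mod_switch(c, q, p):
    """Toy BGV-style modulus switch for a single coefficient: scale c
    from modulus q down to modulus p, rounding to the nearest integer
    with the same parity as c so the message bit (mod 2) is preserved.
    Real schemes apply this to every coefficient of the ciphertext."""
    exact = p * c / q
    cand = round(exact)
    if (cand - c) % 2:                      # wrong parity: step to the
        cand += 1 if exact > cand else -1   # nearest integer that fits
    return cand % p

q, p = 2**40, 2**20
print(mod_switch(123456789012, q, p))       # stand-in coefficient
```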
Brooding Is Related to Neural Alterations during Autobiographical Memory Retrieval in Aging
Schneider, Sophia; Brassen, Stefanie
2016-01-01
Brooding rumination is considered a central aspect of depression in midlife. As older people tend to review their past, rumination tendency might be particularly crucial in late life, since it might hinder older adults from adequately evaluating previous events. We scanned 22 non-depressed older adults with varying degrees of brooding tendency with functional magnetic resonance imaging (fMRI) while they performed the construction and elaboration of autobiographical memories. Behavioral findings demonstrate that brooders reported lower mood states, needed more time for memory construction and rated their memories as less detailed and less positive. On the neural level, brooding tendency was related to increased amygdala activation during the search for specific memories and reduced engagement of cortical networks during elaboration. Moreover, coupling patterns of the subgenual cingulate cortex with the hippocampus (HC) and the amygdala predicted the detail and less positive valence of memories in brooders. Our findings support the hypothesis that ruminative thinking interferes with the search for specific memories while facilitating the uncontrolled retrieval of negatively biased self-schemes. The observed neurobehavioral dysfunctions might put older people with brooding tendency at high risk of becoming depressed when reviewing their past. Training of autobiographical memory ability might therefore be a promising approach to increase resilience against depression in late life. PMID:27695414
Campbell, Princess Christina; Korie, Patrick Chukwuemeka; Nnaji, Feziechukwu Collins
2014-01-01
Background: The National Health Insurance Scheme (NHIS), operated in Nigeria mainly by health maintenance organisations (HMOs), took off formally in June 2005. In view of the inherent risks in the operation of any social health insurance, it is necessary to efficiently manage these risks for the sustainability of the scheme. Consequently, the risk-management strategies deployed by HMOs need regular assessment. This study assessed risk management in the Nigerian social health insurance scheme among HMOs. Materials and Methods: Cross-sectional survey of 33 HMOs participating in the NHIS. Results: Utilisation of standard risk-management strategies by the HMOs was 11 (52.6%). The other risk-management strategies not utilised in the NHIS, 10 (47.4%), were risk equalisation and reinsurance. As many as 11 (52.4%) of the participating HMOs had a weak enrollee base (less than 30,000) and poor monthly premiums, and these impacted negatively on the HMOs such that a large percentage, 12 (54.1%), were unable to meet their financial obligations. Most of the HMOs, 15 (71.4%), participated in the Millennium Development Goal (MDG) maternal and child health insurance programme. Conclusions: A weak enrollee base and poor monthly premiums predisposed the HMOs to financial risk, which impacted negatively on overall performance in service delivery in the NHIS, further worsened by the non-utilisation of risk equalisation and reinsurance as risk-management strategies. There is a need to make the scheme compulsory and to introduce risk equalisation and reinsurance. PMID:25298605
Healthcare knowledge management through building and operationalising healthcare enterprise memory.
Cheah, Y N; Abidi, S S
1999-01-01
In this paper we suggest that the healthcare enterprise needs to be more conscious of its vast knowledge resources and to exploit knowledge management techniques to manage them efficiently. The development of a healthcare enterprise memory is suggested as a solution, together with a novel approach advocating the operationalisation of healthcare enterprise memories, leading to the modelling of healthcare processes for strategic planning. As an example, we present a simulation of service delivery time in a hospital's outpatient department (OPD).
An Inviscid Decoupled Method for the Roe FDS Scheme in the Reacting Gas Path of FUN3D
NASA Technical Reports Server (NTRS)
Thompson, Kyle B.; Gnoffo, Peter A.
2016-01-01
An approach is described to decouple the species continuity equations from the mixture continuity, momentum, and total energy equations for the Roe flux difference splitting scheme. This decoupling simplifies the implicit system, so that the flow solver can be made significantly more efficient, with very little penalty on overall scheme robustness. Most importantly, the computational cost of the point implicit relaxation is shown to scale linearly with the number of species for the decoupled system, whereas the fully coupled approach scales quadratically. Also, the decoupled method significantly reduces the cost in wall time and memory in comparison to the fully coupled approach. This work lays the foundation for development of an efficient adjoint solution procedure for high speed reacting flow.
Feshbach resonance management for Bose-Einstein condensates.
Kevrekidis, P G; Theocharis, G; Frantzeskakis, D J; Malomed, Boris A
2003-06-13
An experimentally realizable scheme of periodic sign-changing modulation of the scattering length is proposed for Bose-Einstein condensates, similar to dispersion-management schemes in fiber optics. Because the scattering length is controlled via the Feshbach resonance, the scheme is named Feshbach-resonance management. The modulational-instability analysis of the quasiuniform condensate driven by this scheme leads to an analog of the Kronig-Penney model. The ensuing stable localized structures are found. These include breathers, which oscillate between the Thomas-Fermi and Gaussian configurations, or may be similar to the two-soliton state of the nonlinear Schrödinger equation, and a nearly static state ("odd soliton") with a nested dark soliton. An overall phase diagram for breathers is constructed, and full stability of the odd solitons is numerically established.
A new third order finite volume weighted essentially non-oscillatory scheme on tetrahedral meshes
NASA Astrophysics Data System (ADS)
Zhu, Jun; Qiu, Jianxian
2017-11-01
In this paper a third order finite volume weighted essentially non-oscillatory (WENO) scheme is designed for solving hyperbolic conservation laws on tetrahedral meshes. Compared with other finite volume WENO schemes designed on tetrahedral meshes, the crucial advantages of the new scheme are its simplicity and compactness: only six unequal-size spatial stencils are used for reconstructing polynomials of unequal degree in the WENO-type spatial procedures, and the positive linear weights can be chosen easily without considering the topology of the meshes. The key innovation of the scheme is to use a quadratic polynomial defined on a big central spatial stencil to obtain third order numerical approximations at any point inside the target tetrahedral cell in smooth regions, and to switch to at least one of five linear polynomials defined on small biased/central spatial stencils to sustain sharp shock transitions while keeping the essentially non-oscillatory property. By performing these new procedures in the spatial reconstruction and adopting a third order TVD Runge-Kutta time discretization method for solving the ordinary differential equation (ODE), the new scheme's memory occupancy is decreased and its computing efficiency is increased, making it suitable for large scale engineering computations on tetrahedral meshes. Some numerical results are provided to illustrate the good performance of the scheme.
Market behavior and performance of different strategy evaluation schemes.
Baek, Yongjoo; Lee, Sang Hoon; Jeong, Hawoong
2010-08-01
Strategy evaluation schemes are a crucial factor in any agent-based market model, as they determine the agents' strategy preferences and consequently their behavioral pattern. This study investigates how the strategy evaluation schemes adopted by agents affect their performance in conjunction with the market circumstances. We observe the performance of three strategy evaluation schemes, the history-dependent wealth game, the trend-opposing minority game, and the trend-following majority game, in a stock market where the price is exogenously determined. The price is either directly adopted from real stock market indices or generated with a Markov chain of order ≤ 2. Each scheme's success is quantified by the average wealth accumulated by the traders equipped with the scheme. The wealth game, as it learns from the history, shows relatively good performance unless the market is highly unpredictable. The majority game is successful in a trendy market dominated by long periods of sustained price increase or decrease. On the other hand, the minority game is suitable for a market with persistent zigzag price patterns. We also discuss the consequences of implementing finite memory in the scoring processes of strategies. Our findings suggest under which market circumstances each evaluation scheme is appropriate for modeling the behavior of real market traders.
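As a toy illustration of how the three evaluation schemes differ, the sketch below scores a single +1/-1 trading action against the subsequent price change. The function name and the use of price direction as a proxy for the majority side are our own simplifying assumptions, not the authors' model code.

```python
# Toy per-step scoring of the three evaluation schemes (illustrative sketch):
# action is +1 (buy) or -1 (sell); dp is the price change that follows.

def score_step(action, dp, scheme):
    if scheme == "majority":   # trend-following: win when moving with the crowd
        return 1 if action * dp > 0 else -1
    if scheme == "minority":   # trend-opposing: win when on the minority side
        return 1 if action * dp < 0 else -1
    if scheme == "wealth":     # history-dependent wealth: mark-to-market profit
        return action * dp
    raise ValueError(scheme)

# A scheme's success is then the cumulative score of the traders using it.
print(score_step(+1, 2.5, "wealth"))   # a long position gains when price rises
```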
DOE Office of Scientific and Technical Information (OSTI.GOV)
Batiy, V.G.; Stojanov, A.I.; Schmieman, E.
2007-07-01
A methodological approach to optimizing the solid radwaste management schemes of the Object Shelter (Shelter) and the ChNPP industrial site during their transformation into an ecologically safe system was developed. On the basis of the model studies conducted, an ALARA analysis was carried out to choose the optimum variant of the solid radwaste management schemes and technologies. Criteria for choosing the optimum schemes, aimed at optimizing doses and financial expenses and at minimizing the amount of radwaste formed, were developed for this ALARA analysis. (authors)
An improved algorithm for evaluating trellis phase codes
NASA Technical Reports Server (NTRS)
Mulligan, M. G.; Wilson, S. G.
1982-01-01
A method is described for evaluating the minimum distance parameters of trellis phase codes, including CPFSK, partial response FM, and more importantly, coded CPM (continuous phase modulation) schemes. The algorithm provides dramatically faster execution times and lesser memory requirements than previous algorithms. Results of sample calculations and timing comparisons are included.
An improved algorithm for evaluating trellis phase codes
NASA Technical Reports Server (NTRS)
Mulligan, M. G.; Wilson, S. G.
1984-01-01
A method is described for evaluating the minimum distance parameters of trellis phase codes, including CPFSK, partial response FM, and more importantly, coded CPM (continuous phase modulation) schemes. The algorithm provides dramatically faster execution times and lesser memory requirements than previous algorithms. Results of sample calculations and timing comparisons are included.
Effects of partitioning and scheduling sparse matrix factorization on communication and load balance
NASA Technical Reports Server (NTRS)
Venugopal, Sesh; Naik, Vijay K.
1991-01-01
A block based, automatic partitioning and scheduling methodology is presented for sparse matrix factorization on distributed memory systems. Using experimental results, this technique is analyzed for communication and load imbalance overhead. To study the performance effects, these overheads were compared with those obtained from a straightforward 'wrap mapped' column assignment scheme. All experimental results were obtained using test sparse matrices from the Harwell-Boeing data set. The results show that there is a communication and load balance tradeoff. The block based method results in lower communication cost whereas the wrap mapped scheme gives better load balance.
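To make the comparison concrete, the sketch below contrasts the two column-assignment strategies discussed above; the helper names and block size are illustrative assumptions, not the paper's code. Wrap (cyclic) mapping spreads columns evenly for load balance, while block mapping keeps neighbouring columns on one processor to cut communication.

```python
# Sketch of wrap-mapped vs block-based column assignment (illustrative names).

def wrap_map(num_cols, num_procs):
    """Wrap (cyclic) mapping: column j goes to processor j mod P."""
    return [j % num_procs for j in range(num_cols)]

def block_map(num_cols, num_procs, block_size):
    """Block mapping: contiguous blocks of columns are dealt out in turn."""
    return [(j // block_size) % num_procs for j in range(num_cols)]

print(wrap_map(8, 3))      # [0, 1, 2, 0, 1, 2, 0, 1]
print(block_map(8, 3, 2))  # [0, 0, 1, 1, 2, 2, 0, 0]
```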
Adaptive implicit-explicit and parallel element-by-element iteration schemes
NASA Technical Reports Server (NTRS)
Tezduyar, T. E.; Liou, J.; Nguyen, T.; Poole, S.
1989-01-01
Adaptive implicit-explicit (AIE) and grouped element-by-element (GEBE) iteration schemes are presented for the finite element solution of large-scale problems in computational mechanics and physics. The AIE approach is based on the dynamic arrangement of the elements into differently treated groups. The GEBE procedure, which is a way of rewriting the EBE formulation to make its parallel processing potential and implementation more clear, is based on the static arrangement of the elements into groups with no inter-element coupling within each group. Various numerical tests performed demonstrate the savings in the CPU time and memory.
Analytic redundancy management for SCOLE
NASA Technical Reports Server (NTRS)
Montgomery, Raymond C.
1988-01-01
The objective of this work is to develop a practical sensor analytic redundancy management scheme for flexible spacecraft and to demonstrate it using the SCOLE experimental apparatus. The particular scheme to be used is taken from previous work on the Grid apparatus by Williams and Montgomery.
A Hybrid Key Management Scheme for WSNs Based on PPBR and a Tree-Based Path Key Establishment Method
Zhang, Ying; Liang, Jixing; Zheng, Bingxin; Chen, Wei
2016-01-01
With the development of wireless sensor networks (WSNs), in most application scenarios traditional WSNs with static sink nodes will gradually be replaced by Mobile Sinks (MSs), and the corresponding applications require a secure communication environment. Current key management research pays little attention to the security of sensor networks with an MS. This paper proposes a hybrid key management scheme based on Polynomial Pool-based key pre-distribution and Basic Random key pre-distribution (PPBR), to be used in WSNs with an MS. The scheme takes full advantage of these two kinds of methods to make the key system harder to crack. The storage effectiveness and the network resilience can be significantly enhanced as well. A tree-based path key establishment method is introduced to effectively solve the problem of communication link connectivity. Simulation clearly shows that the proposed scheme performs better in terms of network resilience, connectivity and storage effectiveness compared to other widely used schemes. PMID:27070624
A scheme of hidden-structure attribute-based encryption with multiple authorities
NASA Astrophysics Data System (ADS)
Ling, J.; Weng, A. X.
2018-05-01
In most CP-ABE schemes with a hidden access structure, all the user attributes and the key generation are managed by a single authority. The key generation efficiency decreases as the number of users increases, and the data face security issues if the sole authority is attacked. We propose a scheme of hidden-structure attribute-based encryption with multiple authorities, which introduces multiple semi-trusted attribute authorities, avoiding this threat even if one or more authorities are attacked. We also realize user revocation by managing a revocation list. Based on the DBDH assumption, we prove that our scheme has IND-CMA security. The analysis shows that our scheme improves the key generation efficiency.
Inverse halftoning via robust nonlinear filtering
NASA Astrophysics Data System (ADS)
Shen, Mei-Yin; Kuo, C.-C. Jay
1999-10-01
A new blind inverse halftoning algorithm based on a nonlinear filtering technique of low computational complexity and low memory requirement is proposed in this research. It is called blind since we do not require the knowledge of the halftone kernel. The proposed scheme performs nonlinear filtering in conjunction with edge enhancement to improve the quality of an inverse halftoned image. Distinct features of the proposed approach include: efficiently smoothing halftone patterns in large homogeneous areas, additional edge enhancement capability to recover the edge quality and an excellent PSNR performance with only local integer operations and a small memory buffer.
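A rough sketch of the described pipeline, nonlinear smoothing of the halftone dots followed by an edge boost, is given below; the filter window sizes and gain are illustrative assumptions, and the actual algorithm is not reproduced here.

```python
# Rough sketch of blind inverse halftoning: median smoothing to remove dot
# patterns plus an unsharp-mask edge enhancement (parameters illustrative).
import numpy as np
from scipy.ndimage import median_filter, uniform_filter

def inverse_halftone(halftone, gain=0.6):
    smooth = median_filter(halftone.astype(float), size=3)  # kill dot patterns
    blur = uniform_filter(smooth, size=5)
    # Boost high-frequency content to recover edge quality.
    return np.clip(smooth + gain * (smooth - blur), 0.0, 255.0)
```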
Speeding up local correlation methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kats, Daniel
2014-12-28
We present two techniques that can substantially speed up local correlation methods. The first one allows one to avoid the expensive transformation of the electron-repulsion integrals from atomic orbitals to the virtual space. The second one introduces an algorithm for the residual equations in the local perturbative treatment that, in contrast to the standard scheme, does not require holding the amplitudes or residuals in memory. It is shown that even an interpreter-based implementation of the proposed algorithm in the context of the local MP2 method is faster and requires less memory than highly optimized variants of conventional algorithms.
Low-Density Parity-Check Code Design Techniques to Simplify Encoding
NASA Astrophysics Data System (ADS)
Perez, J. M.; Andrews, K.
2007-11-01
This work describes a method for encoding low-density parity-check (LDPC) codes based on the accumulate-repeat-4-jagged-accumulate (AR4JA) scheme, using the low-density parity-check matrix H instead of the dense generator matrix G. Using the H matrix to encode allows a significant reduction in memory consumption and gives the encoder design great flexibility. Also described are new hardware-efficient codes, based on the same kind of protographs, which require less memory storage and area, allowing at the same time a reduction in the encoding delay.
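The following toy example shows why encoding from H avoids storing a dense G; it assumes a hypothetical parity-check matrix already in systematic form H = [P | I], which is far simpler than the AR4JA construction but illustrates the principle: H c^T = 0 (mod 2) gives the parity bits directly as p = P m (mod 2).

```python
# Toy encoding from a systematic parity-check matrix H = [P | I]
# (hypothetical small code, not the AR4JA scheme itself).
import numpy as np

P = np.array([[1, 0, 1],      # illustrative 2x3 parity part of H = [P | I]
              [0, 1, 1]])

def encode(msg):
    parity = P.dot(msg) % 2           # p = P m (mod 2)
    return np.concatenate([msg, parity])

m = np.array([1, 0, 1])
c = encode(m)
H = np.hstack([P, np.eye(2, dtype=int)])
assert not (H.dot(c) % 2).any()       # valid codeword: H c^T = 0 (mod 2)
print(c)
```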
All linear optical quantum memory based on quantum error correction.
Gingrich, Robert M; Kok, Pieter; Lee, Hwang; Vatan, Farrokh; Dowling, Jonathan P
2003-11-21
When photons are sent through a fiber as part of a quantum communication protocol, the error that is most difficult to correct is photon loss. Here we propose and analyze a two-to-four qubit encoding scheme, which can recover the loss of one qubit in transmission. This device acts as a repeater when it is placed in series to cover a distance larger than the attenuation length of the fiber, and as an optical quantum memory when it is inserted in a fiber loop. We call this dual-purpose device a "quantum transponder."
Wilhelm, Jan; Seewald, Patrick; Del Ben, Mauro; Hutter, Jürg
2016-12-13
We present an algorithm for computing the correlation energy in the random phase approximation (RPA) in a Gaussian basis requiring O(N³) operations and O(N²) memory. The method is based on the resolution of the identity (RI) with the overlap metric, a reformulation of RI-RPA in the Gaussian basis, imaginary time and imaginary frequency integration techniques, and the use of sparse linear algebra. Additional memory reduction without extra computations can be achieved by an iterative scheme that overcomes the memory bottleneck of canonical RPA implementations. We report a massively parallel implementation that is the key for the application to large systems. Finally, cubic-scaling RPA is applied to a thousand water molecules using a correlation-consistent triple-ζ quality basis.
ERIC Educational Resources Information Center
Amoroso, Lisa M.; Loyd, Denise Lewin; Hoobler, Jenny M.
2012-01-01
The Fritz J. Roethlisberger Memorial Award for the best article in the 2011 "Journal of Management Education" goes to Rae Andre for her article, Using Leadered Groups in Organizational Behavior and Management Survey Courses ("Journal of Management Education," Volume 35, Number 5, pp. 596-619). In keeping with Roethlisberger's legacy, this year's…
Science Planning and Orbit Classification for Solar Probe Plus
NASA Astrophysics Data System (ADS)
Kusterer, M. B.; Fox, N. J.; Rodgers, D. J.; Turner, F. S.
2016-12-01
There are a number of challenges for the Science Planning Team (SPT) of the Solar Probe Plus (SPP) Mission. Since SPP is using a decoupled payload operations approach, tight coordination between the mission operations and payload teams will be required. The payload teams must manage the volume of data that they write to the spacecraft solid-state recorders (SSR) for their individual instruments for downlink to the ground. Making this process more difficult, the geometry of the celestial bodies and the spacecraft during some of the SPP mission orbits causes limited uplink and downlink opportunities. The payload teams will also be required to coordinate power-on opportunities, command uplink opportunities, and data transfers from instrument memory to the spacecraft SSR with the operations team. The SPT also intends to coordinate observations with other spacecraft and ground-based systems. To solve these challenges, detailed orbit activity planning is required in advance for each orbit. An orbit planning process is being created to facilitate the coordination of spacecraft and payload activities for each orbit. An interactive Science Planning Tool is being designed to integrate the payload data volume and priority allocations, spacecraft ephemeris, attitude, downlink and uplink schedules, spacecraft and payload activities, and the ephemerides of other spacecraft. It will be used during science planning to select the instrument data priorities and data volumes that satisfy the orbit data volume constraints and the power-on, command uplink and data transfer time periods. To aid in the initial stages of science planning we have created an orbit classification scheme based on downlink availability and significant science events. Different types of challenges arise in the management of science data driven by orbital geometry and operational constraints, and this scheme attempts to identify the patterns that emerge.
Resource efficient data compression algorithms for demanding, WSN based biomedical applications.
Antonopoulos, Christos P; Voros, Nikolaos S
2016-02-01
During the last few years, medical research areas of critical importance, such as epilepsy monitoring and study, have increasingly utilized wireless sensor network technologies in order to achieve better understanding and significant breakthroughs. However, the limited memory and communication bandwidth offered by WSN platforms is a significant shortcoming for such demanding application scenarios. Although data compression can mitigate such deficiencies, there is a lack of objective and comprehensive evaluation of the relevant approaches, and even more so of specialized approaches targeting specific demanding applications. The research work presented in this paper focuses on implementing and offering an in-depth experimental study of prominent existing as well as novel proposed compression algorithms. All algorithms have been implemented in a common Matlab framework. A major contribution of this paper, which differentiates it from similar research efforts, is the employment of real-world electroencephalography (EEG) and electrocardiography (ECG) datasets comprising the two most demanding epilepsy modalities. Emphasis is put on WSN applications, so the respective metrics focus on compression rate and execution latency for the selected datasets. The evaluation results reveal significant performance and behavioral characteristics of the algorithms related to their complexity and the negative effect on compression latency that accompanies an increased compression rate. The proposed schemes offer a considerable advantage in achieving the optimum tradeoff between compression rate and latency. Specifically, the proposed algorithm combines a highly competitive compression rate with minimum latency, thus exhibiting real-time capabilities. Additionally, one of the proposed schemes is compared against state-of-the-art general-purpose compression algorithms, also exhibiting considerable advantages as far as the compression rate is concerned.
Buffer Management Simulation in ATM Networks
NASA Technical Reports Server (NTRS)
Yaprak, E.; Xiao, Y.; Chronopoulos, A.; Chow, E.; Anneberg, L.
1998-01-01
This paper presents a simulation of a new dynamic buffer allocation management scheme in ATM networks. To achieve this objective, an algorithm that detects congestion and updates the dynamic buffer allocation scheme was developed for the OPNET simulation package via the creation of a new ATM module.
Centrally managed unified shared virtual address space
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilkes, John
Systems, apparatuses, and methods for managing a unified shared virtual address space. A host may execute system software and manage a plurality of nodes coupled to the host. The host may send work tasks to the nodes, and for each node, the host may externally manage the node's view of the system's virtual address space. Each node may have a central processing unit (CPU) style memory management unit (MMU) with an internal translation lookaside buffer (TLB). In one embodiment, the host may be coupled to a given node via an input/output memory management unit (IOMMU) interface, where the IOMMU frontend interface shares the TLB with the given node's MMU. In another embodiment, the host may control the given node's view of virtual address space via memory-mapped control registers.
The scheme machine: A case study in progress in design derivation at system levels
NASA Technical Reports Server (NTRS)
Johnson, Steven D.
1995-01-01
The Scheme Machine is one of several design projects of the Digital Design Derivation group at Indiana University. It differs from the other projects in its focus on issues of system design and its connection to surrounding research in programming language semantics, compiler construction, and programming methodology underway at Indiana and elsewhere. The genesis of the project dates to the early 1980s, when digital design derivation research branched from the surrounding research effort in programming languages. Both branches have continued to develop in parallel, with this particular project serving as a bridge. However, by 1990 there remained little real interaction between the branches and recently we have undertaken to reintegrate them. On the software side, researchers have refined a mathematically rigorous (but not mechanized) treatment starting with the fully abstract semantic definition of Scheme and resulting in an efficient implementation consisting of a compiler and virtual machine model, the latter typically realized with a general purpose microprocessor. The derivation includes a number of sophisticated factorizations and representations and is also a deep example of the underlying engineering methodology. The hardware research has created a mechanized algebra supporting the tedious and massive transformations often seen at lower levels of design. This work has progressed to the point that large scale devices, such as processors, can be derived from first-order finite state machine specifications. This is roughly where the language oriented research stops; thus, together, the two efforts establish a thread from the highest levels of abstract specification to detailed digital implementation. The Scheme Machine project challenges hardware derivation research in several ways, although the individual components of the system are of a similar scale to those we have worked with before. The machine has a custom dual-ported memory to support garbage collection. It consists of four tightly coupled processes--processor, collector, allocator, memory--with a very non-trivial synchronization relationship. Finally, there are deep issues of representation for the run-time objects of a symbolic processing language. The research centers on verification through integrated formal reasoning systems, but is also involved with modeling and prototyping environments. Since the derivation algebra is based on an executable modeling language, there is an opportunity to incorporate design animation in the design process. We are looking for ways to move smoothly and incrementally from executable specifications into hardware realization. For example, we can run the garbage collector specification, a Scheme program, directly against the physical memory prototype, and similarly, the instruction processor model against the heap implementation.
NASA Astrophysics Data System (ADS)
Hut, Rolf; Amisigo, Barnabas A.; Steele-Dunne, Susan; van de Giesen, Nick
2015-12-01
Reduction of Used Memory Ensemble Kalman Filtering (RumEnKF) is introduced as a variant on the Ensemble Kalman Filter (EnKF). RumEnKF differs from EnKF in that it does not store the entire ensemble, but rather only saves the first two moments of the ensemble distribution. In this way, the number of ensemble members that can be calculated is less dependent on available memory and depends mainly on available computing power (CPU). RumEnKF is developed to make optimal use of current-generation supercomputer architecture, where the number of available floating point operations (flops) increases more rapidly than the available memory and where inter-node communication can quickly become a bottleneck. RumEnKF reduces the used memory compared to the EnKF when the number of ensemble members is greater than half the number of state variables. In this paper, three simple models are used (auto-regressive, low dimensional Lorenz and high dimensional Lorenz) to show that RumEnKF performs similarly to the EnKF. Furthermore, it is also shown that increasing the ensemble size has a similar impact on the estimation error from the three algorithms.
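One standard way to maintain the first two moments incrementally, so that no full ensemble ever needs to be held in memory, is Welford's online algorithm; the sketch below (with illustrative variable names, not the RumEnKF code) folds ensemble members in one at a time.

```python
# Welford's online update of mean and (scaled) variance: only two running
# quantities are stored, never the full ensemble (illustrative sketch).

def update_moments(count, mean, m2, member):
    """Fold one ensemble member into the running first two moments."""
    count += 1
    delta = member - mean
    mean += delta / count
    m2 += delta * (member - mean)   # m2 / (count - 1) is the sample variance
    return count, mean, m2

count, mean, m2 = 0, 0.0, 0.0
for x in [1.0, 2.0, 4.0, 7.0]:
    count, mean, m2 = update_moments(count, mean, m2, x)
print(mean, m2 / (count - 1))        # 3.5 and the sample variance 7.0
```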
Qubit-loss-free fusion of atomic W states via photonic detection
NASA Astrophysics Data System (ADS)
Ding, Cheng-Yun; Kong, Fan-Zhen; Yang, Qing; Yang, Ming; Cao, Zhuo-Liang
2018-06-01
In this paper, we propose two new qubit-loss-free (QLF) fusion schemes for W states in a cavity QED system. Resonant interactions between atoms and a single cavity mode constitute the main fusion mechanism, with which atomic |W_{n+m}> and |W_{n+m+q}> states can be generated, respectively, from a |Wn> and a |Wm>; and from a |Wn>, a |Wm> and a |Wq>, by detecting the cavity mode. The QLF property of the schemes makes them more efficient and simpler than currently existing ones, and fewer intermediate steps and memory resources are required for generating a target large-scale W state. Furthermore, the fusion of atomic states can be realized via detection on the cavity mode rather than the much more complicated atomic detection, which makes our schemes feasible. In addition, the analyses of the optimal resource cost and the experimental feasibility indicate that the present schemes are simple and efficient, and may be implementable within current experimental techniques.
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Hou, Gene W.
1994-01-01
The straightforward automatic-differentiation and the hand-differentiated incremental iterative methods are interwoven to produce a hybrid scheme that captures some of the strengths of each strategy. With this compromise, discrete aerodynamic sensitivity derivatives are calculated with the efficient incremental iterative solution algorithm of the original flow code. Moreover, the principal advantage of automatic differentiation is retained (i.e., all complicated source code for the derivative calculations is constructed quickly with accuracy). The basic equations for second-order sensitivity derivatives are presented; four methods are compared. Each scheme requires that large systems are solved first for the first-order derivatives and, in all but one method, for the first-order adjoint variables. Of these latter three schemes, two require no solutions of large systems thereafter. For the other two for which additional systems are solved, the equations and solution procedures are analogous to those for the first order derivatives. From a practical viewpoint, implementation of the second-order methods is feasible only with software tools such as automatic differentiation, because of the extreme complexity and large number of terms. First- and second-order sensitivities are calculated accurately for two airfoil problems, including a turbulent flow example; both geometric-shape and flow-condition design variables are considered. Several methods are tested; results are compared on the basis of accuracy, computational time, and computer memory. For first-order derivatives, the hybrid incremental iterative scheme obtained with automatic differentiation is competitive with the best hand-differentiated method; for six independent variables, it is at least two to four times faster than central finite differences and requires only 60 percent more memory than the original code; the performance is expected to improve further in the future.
Brown, Adam D; Addis, Donna Rose; Romano, Tracy A; Marmar, Charles R; Bryant, Richard A; Hirst, William; Schacter, Daniel L
2014-01-01
Individuals with post-traumatic stress disorder (PTSD) tend to retrieve autobiographical memories with less episodic specificity, referred to as overgeneralised autobiographical memory. In line with evidence that autobiographical memory overlaps with one's capacity to imagine the future, recent work has also shown that individuals with PTSD also imagine themselves in the future with less episodic specificity. To date most studies quantify episodic specificity by the presence of a distinct event. However, this method does not distinguish between the numbers of internal (episodic) and external (semantic) details, which can provide additional insights into remembering the past and imagining the future. This study employed the Autobiographical Interview (AI) coding scheme to the autobiographical memory and imagined future event narratives generated by combat veterans with and without PTSD. Responses were coded for the number of internal and external details. Compared to combat veterans without PTSD, those with PTSD generated more external than internal details when recalling past or imagining future events, and fewer internal details were associated with greater symptom severity. The potential mechanisms underlying these bidirectional deficits and clinical implications are discussed.
Adaptive efficient compression of genomes
2012-01-01
Modern high-throughput sequencing technologies are able to generate DNA sequences at an ever increasing rate. In parallel to the decreasing experimental time and cost necessary to produce DNA sequences, computational requirements for analysis and storage of the sequences are steeply increasing. Compression is a key technology to deal with this challenge. Recently, referential compression schemes, storing only the differences between a to-be-compressed input and a known reference sequence, have gained a lot of interest in this field. However, memory requirements of the current algorithms are high and run times are often slow. In this paper, we propose an adaptive, parallel and highly efficient referential sequence compression method which allows fine-tuning of the trade-off between required memory and compression speed. When using 12 MB of memory, our method is on par with the best previous algorithms for human genomes in terms of compression ratio (400:1) and compression speed. In contrast, it compresses a complete human genome in just 11 seconds when provided with 9 GB of main memory, which is almost three times faster than the best competitor while using less main memory. PMID:23146997
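The core idea of referential compression can be shown in a few lines: emit (offset, length) matches against the reference and literals where no long match exists. The brute-force matcher below is a deliberately simple illustration, far slower and simpler than the paper's method.

```python
# Minimal sketch of referential compression (illustrative, not the paper's
# algorithm): store matches against a reference plus literal bases.

def ref_compress(target, reference, min_match=4):
    out, i = [], 0
    while i < len(target):
        best_off, best_len = -1, 0
        for off in range(len(reference)):          # brute force for clarity
            l = 0
            while (off + l < len(reference) and i + l < len(target)
                   and reference[off + l] == target[i + l]):
                l += 1
            if l > best_len:
                best_off, best_len = off, l
        if best_len >= min_match:
            out.append(("match", best_off, best_len))
            i += best_len
        else:
            out.append(("literal", target[i]))
            i += 1
    return out

print(ref_compress("ACGTACGTTT", "ACGTACGA"))
```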
Parallel Implementation of MAFFT on CUDA-Enabled Graphics Hardware.
Zhu, Xiangyuan; Li, Kenli; Salah, Ahmad; Shi, Lin; Li, Keqin
2015-01-01
Multiple sequence alignment (MSA) constitutes an extremely powerful tool for many biological applications including phylogenetic tree estimation, secondary structure prediction, and critical residue identification. However, aligning large biological sequences with popular tools such as MAFFT requires long runtimes on sequential architectures. Due to the ever increasing sizes of sequence databases, there is increasing demand to accelerate this task. In this paper, we demonstrate how graphics processing units (GPUs), powered by the compute unified device architecture (CUDA), can be used as an efficient computational platform to accelerate the MAFFT algorithm. To fully exploit the GPU's capabilities for accelerating MAFFT, we have optimized the sequence data organization to eliminate the bandwidth bottleneck of memory access, designed a memory allocation and reuse strategy to make full use of the limited memory of GPUs, proposed a new modified-run-length encoding (MRLE) scheme to reduce memory consumption, and used high-performance shared memory to speed up I/O operations. Our implementation, tested on three NVIDIA GPUs, achieves a speedup of up to 11.28 on a Tesla K20m GPU compared to the sequential MAFFT 7.015.
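For reference, the baseline idea underlying MRLE is plain run-length encoding, sketched below; the "modifications" of the paper's MRLE scheme are not reproduced here.

```python
# Plain run-length encoding, the idea underlying MRLE (hedged sketch;
# the paper's modifications are not reproduced).

def rle_encode(seq):
    out, i = [], 0
    while i < len(seq):
        j = i
        while j < len(seq) and seq[j] == seq[i]:
            j += 1
        out.append((seq[i], j - i))    # (symbol, run length)
        i = j
    return out

print(rle_encode("AAAABBBCCD"))        # [('A', 4), ('B', 3), ('C', 2), ('D', 1)]
```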
Solid State Spin-Wave Quantum Memory for Time-Bin Qubits.
Gündoğan, Mustafa; Ledingham, Patrick M; Kutluer, Kutlu; Mazzera, Margherita; de Riedmatten, Hugues
2015-06-12
We demonstrate the first solid-state spin-wave optical quantum memory with on-demand read-out. Using the full atomic frequency comb scheme in a Pr(3+):Y2SiO5 crystal, we store weak coherent pulses at the single-photon level with a signal-to-noise ratio >10. Narrow-band spectral filtering based on spectral hole burning in a second Pr(3+):Y2SiO5 crystal is used to filter out the excess noise created by control pulses to reach an unconditional noise level of (2.0±0.3)×10(-3) photons per pulse. We also report spin-wave storage of photonic time-bin qubits with conditional fidelities higher than achievable by a measure and prepare strategy, demonstrating that the spin-wave memory operates in the quantum regime. This makes our device the first demonstration of a quantum memory for time-bin qubits, with on-demand read-out of the stored quantum information. These results represent an important step for the use of solid-state quantum memories in scalable quantum networks.
A Regev-Type Fully Homomorphic Encryption Scheme Using Modulus Switching
Chen, Zhigang; Wang, Jian; Song, Xinxia
2014-01-01
A critical challenge in a fully homomorphic encryption (FHE) scheme is to manage noise. The modulus switching technique is currently the most efficient noise management technique. When using modulus switching to design and implement an FHE scheme, choosing concrete parameters is an important step, but to the best of our knowledge this step has drawn very little attention in the existing FHE literature. The contributions of this paper are twofold. On one hand, we propose a function for the lower bound of the dimension value in the switching techniques, depending on the LWE-specific security levels. On the other hand, as a case study, we modify the Brakerski FHE scheme (Crypto 2012) by using the modulus switching technique. We recommend concrete parameter values for our proposed scheme and provide a security analysis. Our results show that the modified FHE scheme is more efficient than the original Brakerski scheme at the same security level. PMID:25093212
Brady, Timothy F; Konkle, Talia; Alvarez, George A
2009-11-01
The information that individuals can hold in working memory is quite limited, but researchers have typically studied this capacity using simple objects or letter strings with no associations between them. However, in the real world there are strong associations and regularities in the input. In an information theoretic sense, regularities introduce redundancies that make the input more compressible. The current study shows that observers can take advantage of these redundancies, enabling them to remember more items in working memory. In 2 experiments, covariance was introduced between colors in a display so that over trials some color pairs were more likely to appear than other color pairs. Observers remembered more items from these displays than from displays where the colors were paired randomly. The improved memory performance cannot be explained by simply guessing the high-probability color pair, suggesting that observers formed more efficient representations to remember more items. Further, as observers learned the regularities, their working memory performance improved in a way that is quantitatively predicted by a Bayesian learning model and optimal encoding scheme. These results suggest that the underlying capacity of the individuals' working memory is unchanged, but the information they have to remember can be encoded in a more compressed fashion.
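A back-of-envelope calculation shows why such regularities compress; the numbers below are hypothetical illustrations, not the study's stimulus statistics: a color pair drawn uniformly from 8 x 8 combinations costs 6 bits, while a biased pairing costs substantially less, so more pairs fit in the same underlying capacity.

```python
# Shannon entropy of uniform vs biased color-pair distributions
# (hypothetical numbers, purely illustrative of the compression argument).
import math

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

uniform = [1 / 64] * 64                      # colors paired randomly
biased = [0.80 / 8] * 8 + [0.20 / 56] * 56   # 8 high-probability pairs
print(entropy(uniform))   # 6.0 bits per pair
print(entropy(biased))    # ~4.3 bits per pair: the input is more compressible
```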
Memory Compression Techniques for Network Address Management in MPI
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, Yanfei; Archer, Charles J.; Blocksome, Michael
MPI allows applications to treat processes as a logical collection of integer ranks for each MPI communicator, while internally translating these logical ranks into actual network addresses. In current MPI implementations the management and lookup of such network addresses use memory sizes that are proportional to the number of processes in each communicator. In this paper, we propose a new mechanism, called AV-Rankmap, for managing such translation. AV-Rankmap takes advantage of logical patterns in rank-address mapping that most applications naturally tend to have, and it exploits the fact that some parts of network address structures are naturally more performance critical than others. It uses this information to compress the memory used for network address management. We demonstrate that AV-Rankmap can achieve performance similar to or better than that of other MPI implementations while using significantly less memory.
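The sketch below illustrates the kind of pattern exploitation described, with entirely hypothetical names and encoding (not the AV-Rankmap code): when the rank-to-address mapping happens to be affine (base + rank * stride), two integers replace an N-entry lookup table.

```python
# Hypothetical sketch of compressing a rank-to-address mapping by detecting
# a regular (affine) pattern; falls back to a full table otherwise.

def compress_mapping(addrs):
    if len(addrs) >= 2:
        base, stride = addrs[0], addrs[1] - addrs[0]
        if all(a == base + i * stride for i, a in enumerate(addrs)):
            return ("affine", base, stride)    # O(1) memory
    return ("table", list(addrs))              # O(N) fallback

def lookup(mapping, rank):
    if mapping[0] == "affine":
        _, base, stride = mapping
        return base + rank * stride
    return mapping[1][rank]

m = compress_mapping([100, 104, 108, 112])
print(m, lookup(m, 3))    # ('affine', 100, 4) 112
```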
An Interaction of Screen Colour and Lesson Task in CAL
ERIC Educational Resources Information Center
Clariana, Roy B.
2004-01-01
Colour is a common feature in computer-aided learning (CAL), though the instructional effects of screen colour are not well understood. This investigation considers the effects of different CAL study tasks with feedback on posttest performance and on posttest memory of the lesson colour scheme. Graduate students (n=68) completed a computer-based…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jannetti, C.; Becker, R.
The software is an ABAQUS/Standard UMAT (user-defined material behavior subroutine) that implements the constitutive model for shape-memory alloy materials developed by Jannetti et al. (2003a), using a fully implicit time integration scheme to integrate the constitutive equations. The UMAT is used in conjunction with ABAQUS/Standard to perform finite-element analysis of SMA materials.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shkuratnik, V.L.; Filimonov, Y.L.; Kuchurin, S.V.
2007-01-15
Experimental data are presented on the formation and manifestation of acoustic-emission and deformation memory effects in anthracite specimens at different stages of triaxial cyclic deformation by the Kármán scheme, in the pre-limit and post-limit zones.
The Effects of Televised Preplays on Children's Attention and Comprehension.
ERIC Educational Resources Information Center
Calvert, Sandra L.
The purpose of this study was to assess developmental differences in children's visual attention to, and comprehension of, a prosocial television program as a function of varying "preplay" formats. (Preplays were defined as advance organizers designed to help a child select, order, and integrate critical televised content into a memory scheme.) To…
Poetry in the Adult Literacy Classroom. Teacher to Teacher.
ERIC Educational Resources Information Center
Padak, Nancy
Some adult learners and teachers have negative memories of their previous encounters with poetry because too much emphasis was placed on the poem's "intent" or dissecting poems to determine their rhyme schemes. However, poetry can be an effective complement to instruction in adult literacy classrooms and can serve as an effective instructional…
Fast, Massively Parallel Data Processors
NASA Technical Reports Server (NTRS)
Heaton, Robert A.; Blevins, Donald W.; Davis, ED
1994-01-01
The proposed fast, massively parallel data processor contains an 8x16 array of processing elements with an efficient interconnection scheme and options for flexible local control. Processing elements communicate with each other on an "X" interconnection grid and with external memory via a high-capacity input/output bus. This approach to conditional operation nearly doubles the speed of various arithmetic operations.
Introduction to Parallel Computing
1992-05-01
[Fragmented table-of-contents and text excerpts: "Multiple Instruction Stream, Multiple Data Stream Machines"; "Networks of Machines". The recoverable text discusses building shared memory from independent memory units connected to the processors by an interconnection network; many different interconnection schemes have been considered, and crossbar switching networks are still too expensive to be practical for connecting large numbers of processors.]
How to Speak an Authentication Secret Securely from an Eavesdropper
NASA Astrophysics Data System (ADS)
O'Gorman, Lawrence; Brotman, Lynne; Sammon, Michael
When authenticating over the telephone or a mobile headset, the user cannot always ensure that no eavesdropper hears the password or authentication secret. We describe an eavesdropper-resistant, challenge-response authentication scheme for spoken authentication where an attacker can hear the user's voiced responses. This scheme requires the user to memorize a small number of plaintext-ciphertext pairs. At authentication, these are challenged in random order and interspersed with camouflage elements. It is shown that the responses can be made to appear random, so that no information about the memorized secret can be learned by eavesdroppers. We describe the method along with the parameter-value tradeoffs between security strength, authentication time, and memory effort. This scheme was designed for user authentication of wireless headsets used for hands-free communication by healthcare staff at a hospital.
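The flow can be sketched as follows; the pair encoding, camouflage items, and simulated-user helper are illustrative assumptions, not the published protocol details: real plaintexts are mixed with camouflage items in random order, and only the user knows which spoken digits actually matter.

```python
# Illustrative sketch of the challenge-response flow with camouflage items
# (names and encoding are assumptions, not the paper's exact protocol).
import random

pairs = {"apple": "7", "river": "2", "stone": "9"}   # memorized secret pairs
camouflage = ["cloud", "tiger", "lamp"]              # answered with any digit

def respond(item):
    """Simulated user: memorized digit for real pairs, random digit otherwise."""
    return pairs.get(item, str(random.randint(0, 9)))

def run_challenge():
    items = list(pairs) + camouflage
    random.shuffle(items)                # random order hides which items matter
    return all(respond(it) == pairs[it] for it in items if it in pairs)

print(run_challenge())                   # True for the legitimate user
```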
Spin-orbit torque induced magnetic vortex polarity reversal utilizing spin-Hall effect
NASA Astrophysics Data System (ADS)
Li, Cheng; Cai, Li; Liu, Baojun; Yang, Xiaokuo; Cui, Huanqing; Wang, Sen; Wei, Bo
2018-05-01
We propose an effective magnetic vortex polarity reversal scheme that makes use of the spin-orbit torque introduced by the spin-Hall effect in heavy-metal/ferromagnet multilayer structures, which can achieve subnanosecond polarity reversal without endangering structural stability. Micromagnetic simulations are performed to investigate the spin-Hall-effect-driven dynamic evolution of the magnetic vortex. The mechanism of magnetic vortex polarity reversal is uncovered by a quantitative analysis of the exchange energy density, the magnetostatic energy density, and their total. The simulation results indicate that the magnetic vortex polarity is reversed through the nucleation-annihilation process of a topological vortex-antivortex pair. This scheme is an attractive option for ultra-fast magnetic vortex polarity reversal and can serve as a guideline for choosing a polarity reversal scheme in vortex-based random access memory.
Global Static Indexing for Real-Time Exploration of Very Large Regular Grids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pascucci, V; Frank, R
2001-07-23
In this paper we introduce a new indexing scheme for progressive traversal and visualization of large regular grids. We demonstrate the potential of our approach by providing a tool that displays at interactive rates planar slices of scalar field data with very modest computing resources. We obtain unprecedented results both in terms of absolute performance and, more importantly, in terms of scalability. On a laptop computer we provide real time interaction with a 2048³ grid (8 Giga-nodes) using only 20 MB of memory. On an SGI Onyx we slice interactively an 8192³ grid (1/2 tera-nodes) using only 60 MB of memory. The scheme relies simply on the determination of an appropriate reordering of the rectilinear grid data and a progressive construction of the output slice. The reordering minimizes the amount of I/O performed during the out-of-core computation. The progressive and asynchronous computation of the output provides flexible quality/speed tradeoffs and a time-critical and interruptible user interface.
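A classic example of a locality-preserving reordering for regular grids is Z-order (Morton) indexing, sketched below as a generic illustration of the idea; it is not necessarily the exact order used in the paper. Interleaving the bits of (x, y, z) keeps spatially close grid nodes close on disk, which is what makes out-of-core slicing cheap.

```python
# Z-order (Morton) index for a regular 3D grid: interleave the coordinate
# bits so that spatial locality maps to index locality (generic illustration).

def morton3(x, y, z, bits=10):
    idx = 0
    for b in range(bits):
        idx |= ((x >> b) & 1) << (3 * b)
        idx |= ((y >> b) & 1) << (3 * b + 1)
        idx |= ((z >> b) & 1) << (3 * b + 2)
    return idx

print(morton3(3, 1, 0))   # neighbours in space map to nearby indices
```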
Shape memory alloy wire for self-sensing servo actuation
NASA Astrophysics Data System (ADS)
Josephine Selvarani Ruth, D.; Dhanalakshmi, K.
2017-01-01
This paper reports on the development of a straightforward approach to realise self-sensing shape memory alloy (SMA) wire actuated control. A differential electrical resistance measurement circuit (the sensorless signal conditioning (SSC) circuit) is designed; its sensing signal is used directly as the feedback for control. Antagonistic SMA wire actuators designed for servo actuation are realised in self-sensing actuation (SSA) mode for direct control with differential electrical resistance feedback. The self-sensing scheme is established on a 1-DOF manipulator with discrete time sliding mode control, which demonstrates good control performance regardless of the disturbance and loading conditions. The uniqueness of this work is the design of the generic electronic SSC circuit for an SMA actuated system, for measurement and control. Concerning the implementation of the self-sensing technique in SMA, this scheme retains the systematic control architecture by using the sensing signal (the self-sensed electrical resistance corresponding to the system position) for feedback, without requiring any of the processing needed by previously reported SSA techniques for SMA.
Fast Entanglement Establishment via Local Dynamics for Quantum Repeater Networks
NASA Astrophysics Data System (ADS)
Gyongyosi, Laszlo; Imre, Sandor
Quantum entanglement is a necessity for future quantum communication networks, the quantum internet, and long-distance quantum key distribution. Current approaches to entanglement distribution require high-delay entanglement transmission, entanglement swapping to extend the range of entanglement, high-cost entanglement purification, and long-lived quantum memories. We introduce a fundamental protocol for establishing entanglement in quantum communication networks. The proposed scheme does not require entanglement transmission between the nodes, high-cost entanglement swapping, entanglement purification, or long-lived quantum memories. The protocol reliably establishes a maximally entangled system between the remote nodes via dynamics generated by local Hamiltonians. The method eliminates the main drawbacks of current schemes, allowing fast entanglement establishment with minimized delay. Our solution provides a fundamental method for future long-distance quantum key distribution, quantum repeater networks, the quantum internet, and quantum-networking protocols. This work was partially supported by the GOP-1.1.1-11-2012-0092 project sponsored by the EU and European Structural Fund, by the Hungarian Scientific Research Fund - OTKA K-112125, and by the COST Action MP1006.
The implementation of an aeronautical CFD flow code onto distributed memory parallel systems
NASA Astrophysics Data System (ADS)
Ierotheou, C. S.; Forsey, C. R.; Leatham, M.
2000-04-01
The parallelization of an industrially important in-house computational fluid dynamics (CFD) code for calculating the airflow over complex aircraft configurations using the Euler or Navier-Stokes equations is presented. The code discussed is the flow solver module of the SAUNA CFD suite. This suite uses a novel grid system that may include block-structured hexahedral or pyramidal grids, unstructured tetrahedral grids or a hybrid combination of both. To assist in the rapid convergence to a solution, a number of convergence acceleration techniques are employed including implicit residual smoothing and a multigrid full approximation storage scheme (FAS). Key features of the parallelization approach are the use of domain decomposition and encapsulated message passing to enable the execution in parallel using a single programme multiple data (SPMD) paradigm. In the case where a hybrid grid is used, a unified grid partitioning scheme is employed to define the decomposition of the mesh. The parallel code has been tested using both structured and hybrid grids on a number of different distributed memory parallel systems and is now routinely used to perform industrial scale aeronautical simulations.
Vision and the representation of the surroundings in spatial memory
Tatler, Benjamin W.; Land, Michael F.
2011-01-01
One of the paradoxes of vision is that the world as it appears to us and the image on the retina at any moment are not much like each other. The visual world seems to be extensive and continuous across time. However, the manner in which we sample the visual environment is neither extensive nor continuous. How does the brain reconcile these differences? Here, we consider existing evidence from both static and dynamic viewing paradigms together with the logical requirements of any representational scheme that would be able to support active behaviour. While static scene viewing paradigms favour extensive, but perhaps abstracted, memory representations, dynamic settings suggest sparser and task-selective representation. We suggest that in dynamic settings where movement within extended environments is required to complete a task, the combination of visual input, egocentric and allocentric representations work together to allow efficient behaviour. The egocentric model serves as a coding scheme in which actions can be planned, but also offers a potential means of providing the perceptual stability that we experience. PMID:21242146
Kim, Gyungock; Park, Hyundai; Joo, Jiho; Jang, Ki-Seok; Kwack, Myung-Joon; Kim, Sanghoon; Kim, In Gyoo; Oh, Jin Hyuk; Kim, Sun Ae; Park, Jaegyu; Kim, Sanggi
2015-06-10
When silicon photonic integrated circuits (PICs), designed for transmitting and receiving optical data, are successfully monolithically integrated into major silicon electronic chips as chip-level optical I/Os (inputs/outputs), they will bring innovative changes to data computing and communications. Here, we propose a new photonic integration scheme: a single-chip optical transceiver based on a monolithically integrated vertical photonic I/O device set, including the light source, on bulk silicon. This scheme can solve the major issues that impede practical implementation of silicon-based chip-level optical interconnects. We demonstrated a prototype of a single-chip photonic transceiver with monolithically integrated vertical-illumination-type Ge-on-Si photodetectors and VCSELs-on-Si on the same bulk-silicon substrate, operating at up to 50 Gb/s and 20 Gb/s, respectively. The prototype realized 20 Gb/s low-power chip-level optical interconnects at λ ~ 850 nm between fabricated chips. This approach can have a significant impact on practical electronic-photonic integration in high performance computers (HPC), CPU-memory interfaces, hybrid memory cubes, and LAN, SAN, data center and network applications.
Stochastic quasi-Newton molecular simulations
NASA Astrophysics Data System (ADS)
Chau, C. D.; Sevink, G. J. A.; Fraaije, J. G. E. M.
2010-08-01
We report a new and efficient factorized algorithm for the determination of the adaptive compound mobility matrix B in a stochastic quasi-Newton method (S-QN) that does not require additional potential evaluations. For one-dimensional and two-dimensional test systems, we previously showed that S-QN gives rise to efficient configurational space sampling with good thermodynamic consistency [C. D. Chau, G. J. A. Sevink, and J. G. E. M. Fraaije, J. Chem. Phys. 128, 244110 (2008); doi:10.1063/1.2943313]. Potential applications of S-QN are quite ambitious, and include structure optimization, analysis of correlations and automated extraction of cooperative modes. However, the potential can only be fully exploited if the computational and memory requirements of the original algorithm are significantly reduced. In this paper, we consider a factorized mobility matrix B = JJ^T and focus on the nontrivial fundamentals of an efficient algorithm for updating the noise multiplier J. The new algorithm requires O(n^2) multiplications per time step instead of the O(n^3) multiplications in the original scheme due to Cholesky decomposition. In a recursive form, the update scheme circumvents matrix storage and enables limited-memory implementation, in the spirit of the well-known limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method, allowing for a further reduction of the computational effort to O(n). We analyze in detail the performance of the factorized (FSU) and limited-memory (L-FSU) algorithms in terms of convergence and (multiscale) sampling, for an elementary but relevant system that involves multiple time and length scales. Finally, we use this analysis to formulate conditions for the simulation of the complex high-dimensional potential energy landscapes of interest.
Francini, Andrea
2013-05-14
An advance over the prior art is made in accordance with the principles of the present invention, which is directed to a new system and method for a buffer management scheme called Periodic Early Discard (PED). The invention builds on the observation that, in the presence of TCP traffic, the length of a queue can be stabilized by selecting an appropriate frequency for packet dropping. For any combination of the number of TCP connections and the distribution of their RTT values, there exists an ideal packet drop frequency that prevents the queue from overflowing or underflowing. While the value of the ideal packet drop frequency may change quickly over time, is sensitive to the series of TCP connections affected by past packet losses, and above all is impossible to compute inline, it is possible to approximate it with a margin of error that allows the queue occupancy to be kept within a pre-defined range for extended periods of time. The PED scheme aims at tracking the (unknown) ideal packet drop frequency, adjusting the approximated value based on the evolution of the queue occupancy, with corrections of the approximated packet drop frequency occurring at a timescale comparable to the aggregate time constant of the set of TCP connections that traverse the queue.
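A minimal sketch of the periodic-discard idea follows; the parameter values, band thresholds, and multiplicative adjustment rule are illustrative assumptions of ours, not the patented algorithm: drop one packet every 1/f arrivals and nudge the drop frequency f to keep queue occupancy inside a target band.

```python
# Illustrative sketch of Periodic Early Discard: periodic (not random) drops
# at an adaptively tuned frequency f (parameters and update rule assumed).

class PeriodicEarlyDiscard:
    def __init__(self, low=200, high=800, f=0.01):
        self.low, self.high = low, high   # target occupancy band (packets)
        self.f = f                        # current drop frequency
        self.credit = 0.0

    def on_arrival(self, queue_len):
        # Adjust f slowly, at a timescale comparable to the TCP time constant.
        if queue_len > self.high:
            self.f *= 1.1                 # queue growing: drop more often
        elif queue_len < self.low:
            self.f *= 0.9                 # queue draining: drop less often
        self.credit += self.f
        if self.credit >= 1.0:            # one drop every 1/f arrivals
            self.credit -= 1.0
            return "drop"
        return "enqueue"
```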
Minimizing the Disruptive Effects of Prospective Memory in Simulated Air Traffic Control
Loft, Shayne; Smith, Rebekah E.; Remington, Roger
2015-01-01
Prospective memory refers to remembering to perform an intended action in the future. Failures of prospective memory can occur in air traffic control. In two experiments, we examined the utility of external aids for facilitating air traffic management in a simulated air traffic control task with prospective memory requirements. Participants accepted and handed off aircraft and detected aircraft conflicts. The prospective memory task involved remembering to deviate from a routine operating procedure when accepting target aircraft. External aids that contained details of the prospective memory task appeared and flashed when target aircraft needed acceptance. In Experiment 1, external aids presented either adjacent or non-adjacent to each of the 20 target aircraft presented over the 40 min test phase reduced prospective memory error by 11% compared to a condition without external aids. In Experiment 2, only a single target aircraft was presented a significant time (39-42 min) after presentation of the prospective memory instruction, and the external aids reduced prospective memory error by 34%. In both experiments, costs to the efficiency of non-prospective memory air traffic management (non-target aircraft acceptance response time, conflict detection response time) were reduced by non-adjacent aids compared to no aids or adjacent aids. In contrast, in both experiments, the efficiency of prospective memory air traffic management (target aircraft acceptance response time) was facilitated by adjacent aids compared to non-adjacent aids. Together, these findings have potential implications for the design of automated alerting systems to maximize multi-task performance in work settings where operators monitor and control demanding perceptual displays. PMID:24059825
JuxtaView - A tool for interactive visualization of large imagery on scalable tiled displays
Krishnaprasad, N.K.; Vishwanath, V.; Venkataraman, S.; Rao, A.G.; Renambot, L.; Leigh, J.; Johnson, A.E.; Davis, B.
2004-01-01
JuxtaView is a cluster-based application for viewing ultra-high-resolution images on scalable tiled displays. In JuxtaView, we present a new parallel computing and distributed memory approach for out-of-core montage visualization, using LambdaRAM, a software-based network-level cache system. The ultimate goal of JuxtaView is to enable a user to interactively roam through potentially terabytes of distributed, spatially referenced image data such as those from electron microscopes, satellites and aerial photographs. In working towards this goal, we describe our first prototype implemented over a local area network, where the image is distributed using LambdaRAM, on the memory of all nodes of a PC cluster driving a tiled display wall. Aggressive pre-fetching schemes employed by LambdaRAM help to reduce the latency involved in remote memory access. We compare LambdaRAM with a more traditional memory-mapped file approach for out-of-core visualization. © 2004 IEEE.
Large memory capacity in chaotic artificial neural networks: a view of the anti-integrable limit.
Lin, Wei; Chen, Guanrong
2009-08-01
In the literature, it was reported that the chaotic artificial neural network model with sinusoidal activation functions possesses a large memory capacity as well as a remarkable ability of retrieving the stored patterns, better than the conventional chaotic model with only monotonic activation functions such as sigmoidal functions. This paper, from the viewpoint of the anti-integrable limit, elucidates the mechanism inducing the superiority of the model with periodic activation functions that includes sinusoidal functions. Particularly, by virtue of the anti-integrable limit technique, this paper shows that any finite-dimensional neural network model with periodic activation functions and properly selected parameters has much more abundant chaotic dynamics that truly determine the model's memory capacity and pattern-retrieval ability. To some extent, this paper mathematically and numerically demonstrates that an appropriate choice of the activation functions and control scheme can lead to a large memory capacity and better pattern-retrieval ability of the artificial neural network models.
An effective and secure key-management scheme for hierarchical access control in E-medicine system.
Odelu, Vanga; Das, Ashok Kumar; Goswami, Adrijit
2013-04-01
Recently, several hierarchical access control schemes have been proposed in the literature to secure e-medicine systems. However, most of them are either insecure against the man-in-the-middle attack or require high storage and computational overheads. Wu and Chen proposed a key management method to solve dynamic access control problems in a user hierarchy based on a hybrid cryptosystem. Though their scheme improves computational efficiency over Nikooghadam et al.'s approach, it suffers from large storage space for public parameters in the public domain and computational inefficiency due to costly elliptic curve point multiplication. Recently, Nikooghadam and Zakerolhosseini showed that Wu-Chen's scheme is vulnerable to the man-in-the-middle attack. In order to remedy this security weakness in Wu-Chen's scheme, they proposed a secure scheme which is again based on ECC (elliptic curve cryptography) and an efficient one-way hash function. However, their scheme incurs huge computational cost for providing verification of public information in the public domain, as it uses an ECC digital signature, which is costly compared to a symmetric-key cryptosystem. In this paper, we propose an effective access control scheme in a user hierarchy which is based only on a symmetric-key cryptosystem and an efficient one-way hash function. We show that our scheme significantly reduces the storage space for both public and private domains, as well as the computational complexity, when compared to Wu-Chen's scheme, Nikooghadam-Zakerolhosseini's scheme, and other related schemes. Through informal and formal security analysis, we further show that our scheme is secure against different attacks, including the man-in-the-middle attack. Moreover, dynamic access control problems in our scheme are also solved efficiently compared to other related schemes, making our scheme much more suitable for practical applications of e-medicine systems.
Memories for life: a review of the science and technology
O'Hara, Kieron; Morris, Richard; Shadbolt, Nigel; Hitch, Graham J; Hall, Wendy; Beagrie, Neil
2006-01-01
This paper discusses scientific, social and technological aspects of memory. Recent developments in our understanding of memory processes and mechanisms, and their digital implementation, have placed the encoding, storage, management and retrieval of information at the forefront of several fields of research. At the same time, the divisions between the biological, physical and the digital worlds seem to be dissolving. Hence, opportunities for interdisciplinary research into memory are being created, between the life sciences, social sciences and physical sciences. Such research may benefit from immediate application into information management technology as a testbed. The paper describes one initiative, memories for life, as a potential common problem space for the various interested disciplines. PMID:16849265
Efficient Secure and Privacy-Preserving Route Reporting Scheme for VANETs
NASA Astrophysics Data System (ADS)
Zhang, Yuanfei; Pei, Qianwen; Dai, Feifei; Zhang, Lei
2017-10-01
The vehicular ad-hoc network (VANET) is a core component of intelligent traffic management systems and can support a variety of applications such as accident prediction, route reporting, etc. Due to the problems caused by traffic congestion, route reporting has become a promising application that can help a driver find an optimal route and save travel time. Before enjoying the convenience of route reporting, security and privacy-preserving issues need to be addressed. In this paper, we propose a new secure and privacy-preserving route reporting scheme for VANETs. In our scheme, only an authenticated vehicle can use the route reporting service provided by the traffic management center. Further, a vehicle may receive the response from the traffic management center with low latency and without violating the privacy of the vehicle. Experiment results show that our scheme is much more efficient than the existing one.
Efficient Inversion of Multi-frequency and Multi-Source Electromagnetic Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gary D. Egbert
2007-03-22
The project covered by this report focused on development of efficient but robust non-linear inversion algorithms for electromagnetic induction data, in particular for data collected with multiple receivers and multiple transmitters, a situation extremely common in geophysical EM subsurface imaging methods. A key observation is that for such multi-transmitter problems each step in commonly used linearized iterative limited memory search schemes such as conjugate gradients (CG) requires solution of forward and adjoint EM problems for each of the N frequencies or sources, essentially generating data sensitivities for an N dimensional data-subspace. These multiple sensitivities allow a good approximation to the full Jacobian of the data mapping to be built up in many fewer search steps than would be required by application of textbook optimization methods, which take no account of the multiplicity of forward problems that must be solved for each search step. We have applied this idea to develop a hybrid inversion scheme that combines features of the iterative limited memory type methods with a Newton-type approach using a partial calculation of the Jacobian. Initial tests on 2D problems show that the new approach produces results essentially identical to a Newton type Occam minimum structure inversion, while running more rapidly than an iterative (fixed regularization parameter) CG style inversion. Memory requirements, while greater than for something like CG, are modest enough that even in 3D the scheme should allow 3D inverse problems to be solved on a common desktop PC, at least for modest (~100 sites, 15-20 frequencies) data sets. A secondary focus of the research has been development of a modular system for EM inversion, using an object oriented approach. This system has proven useful for more rapid prototyping of inversion algorithms, in particular allowing initial development and testing to be conducted with two-dimensional example problems, before approaching more computationally cumbersome three-dimensional problems.
NASA Astrophysics Data System (ADS)
Mielikainen, Jarno; Huang, Bormin; Huang, Allen H.
2014-10-01
The Purdue-Lin scheme is a relatively sophisticated microphysics scheme in the Weather Research and Forecasting (WRF) model. The scheme includes six classes of hydrometeors: water vapor, cloud water, rain, cloud ice, snow and graupel. The scheme is very suitable for massively parallel computation as there are no interactions among horizontal grid points. In this paper, we accelerate the Purdue-Lin scheme using Intel Many Integrated Core (MIC) architecture hardware. The Intel Xeon Phi is a high-performance coprocessor consisting of up to 61 cores, connected to a CPU via the PCI Express (PCIe) bus. We discuss in detail the code optimization issues encountered while tuning the Purdue-Lin microphysics Fortran code for the Xeon Phi. In particular, achieving good performance required utilizing multiple cores, using wide vector operations, and making efficient use of memory. The results show that the optimizations improved the performance of the original code on the Xeon Phi 5110P by a factor of 4.2x. Furthermore, the same optimizations improved performance on the Intel Xeon E5-2603 CPU by a factor of 1.2x compared to the original code.
Market behavior and performance of different strategy evaluation schemes
NASA Astrophysics Data System (ADS)
Baek, Yongjoo; Lee, Sang Hoon; Jeong, Hawoong
2010-08-01
Strategy evaluation schemes are a crucial factor in any agent-based market model, as they determine the agents’ strategy preferences and consequently their behavioral patterns. This study investigates how the strategy evaluation schemes adopted by agents affect their performance in conjunction with the market circumstances. We observe the performance of three strategy evaluation schemes, the history-dependent wealth game, the trend-opposing minority game, and the trend-following majority game, in a stock market where the price is exogenously determined. The price is either directly adopted from real stock market indices or generated with a Markov chain of order ≤ 2. Each scheme’s success is quantified by the average wealth accumulated by the traders equipped with the scheme. The wealth game, as it learns from the history, shows relatively good performance unless the market is highly unpredictable. The majority game is successful in a trendy market dominated by long periods of sustained price increase or decrease. On the other hand, the minority game is suitable for a market with persistent zigzag price patterns. We also discuss the consequence of implementing finite memory in the scoring processes of strategies. Our findings suggest under which market circumstances each evaluation scheme is appropriate for modeling the behavior of real market traders.
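A compact sketch of the three scoring rules as we read them from this abstract (signatures and names are illustrative): the minority game rewards a strategy for opposing the aggregate action, the majority game for following it, and the wealth game for the profit its advice would actually have realized.

    def update_scores(scores, predictions, excess_demand, price_return, scheme):
        """Score each strategy after one round.

        predictions   : dict of strategy name -> advised action in {+1, -1}
        excess_demand : aggregate action A of all traders this round
        price_return  : realized price change over the round
        """
        sign = (excess_demand > 0) - (excess_demand < 0)
        for name, a in predictions.items():
            if scheme == "minority":       # reward opposing the crowd
                scores[name] += -a * sign
            elif scheme == "majority":     # reward following the trend
                scores[name] += a * sign
            else:                          # wealth game: reward realized profit
                scores[name] += a * price_return
        return scores

Implementing finite memory, as discussed at the end of the abstract, would amount to exponentially discounting `scores` before each update.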
Impact of subsidies on cancer genetic testing uptake in Singapore.
Li, Shao-Tzu; Yuen, Jeanette; Zhou, Ke; Binte Ishak, Nur Diana; Chen, Yanni; Met-Domestici, Marie; Chan, Sock Hoai; Tan, Yee Pin; Allen, John Carson; Lim, Soon Thye; Soo, Khee Chee; Ngeow, Joanne
2017-04-01
Previous reports cite the high cost of clinical cancer genetic testing as a main barrier to patients' willingness to test. We report findings of a pilot study that evaluates how different subsidy schemes impact genetic testing uptake and the total cost of cancer management. We included all patients who attended the Cancer Genetics Service at the National Cancer Centre Singapore (January 2014-May 2016). Two subsidy schemes, the blanket scheme (100% subsidy to all eligible patients) and the varied scheme (patients received 50%-100% subsidy dependent on financial status), were compared. We estimated total spending on cancer management from the government's perspective using a decision model. 445 patients were included. In contrast to the blanket scheme, the varied scheme saw a higher attendance of patients (34 vs 8 patients per month), of which a higher proportion underwent genetic testing (5% vs 38%), while lowering subsidy spending per person (S$1098 vs S$1161). The varied scheme may potentially save cost by reducing unnecessary cancer surveillance when the first-degree relatives' uptake rate is above 36%. Provision of subsidy leads to a considerable increase in the genetic testing uptake rate. From the government's perspective, subsidising genetic testing may potentially reduce the total cost of cancer management. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
Fast quantum Monte Carlo on a GPU
NASA Astrophysics Data System (ADS)
Lutsyshyn, Y.
2015-02-01
We present a scheme for the parallelization of quantum Monte Carlo method on graphical processing units, focusing on variational Monte Carlo simulation of bosonic systems. We use asynchronous execution schemes with shared memory persistence, and obtain an excellent utilization of the accelerator. The CUDA code is provided along with a package that simulates liquid helium-4. The program was benchmarked on several models of Nvidia GPU, including Fermi GTX560 and M2090, and the Kepler architecture K20 GPU. Special optimization was developed for the Kepler cards, including placement of data structures in the register space of the Kepler GPUs. Kepler-specific optimization is discussed.
Control of photon storage time using phase locking.
Ham, Byoung S
2010-01-18
A photon echo storage-time extension protocol is presented, using a phase-locking method in a three-level backward propagation scheme, where phase locking serves as a conditional stopper of the rephasing process in conventional two-pulse photon echoes. The backward propagation scheme solves the critical problems of extremely low retrieval efficiency and of spontaneous emission noise caused by the π rephasing pulse in photon-echo-based quantum memories. The physics of the storage-time extension lies in the imminent population transfer from the excited state to an auxiliary spin state by a phase-locking control pulse. We numerically demonstrate that the storage time is lengthened by the spin dephasing time.
Analysis of the cable equation with non-local and non-singular kernel fractional derivative
NASA Astrophysics Data System (ADS)
Karaagac, Berat
2018-02-01
Recently a new concept of differentiation was introduced in the literature, in which the kernel was converted from non-local and singular to non-local and non-singular. One of the great advantages of this new kernel is its ability to portray fading memory as well as well-defined memory of the system under investigation. In this paper the cable equation, which is used to develop mathematical models of signal decay in submarine or underwater telegraphic cables, is analysed using the Atangana-Baleanu fractional derivative, owing to the ability of the new fractional derivative to describe non-local fading memory. The existence and uniqueness of the more generalized model is presented in detail via the fixed point theorem. A new numerical scheme is used to solve the new equation. In addition, stability, convergence and numerical simulations are presented.
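For reference, the Atangana-Baleanu derivative in the Caputo sense is commonly written as follows (we quote the standard definition from the fractional-calculus literature; the paper's exact normalization function B(α) may differ):

    {}^{ABC}_{\;a}D^{\alpha}_{t}\,f(t)
      = \frac{B(\alpha)}{1-\alpha}\int_{a}^{t} f'(\tau)\,
        E_{\alpha}\!\left[-\frac{\alpha}{1-\alpha}\,(t-\tau)^{\alpha}\right]\mathrm{d}\tau,
      \qquad 0 < \alpha < 1.

The Mittag-Leffler kernel E_α is finite at τ = t, unlike the singular power-law kernel of the classical Caputo derivative, which is what gives the operator the non-local, non-singular fading memory the abstract refers to.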
Adiabatic passage in photon-echo quantum memories
NASA Astrophysics Data System (ADS)
Demeter, Gabor
2013-11-01
Photon-echo-based quantum memories use inhomogeneously broadened, optically thick ensembles of absorbers to store a weak optical signal and employ various protocols to rephase the atomic coherences for information retrieval. We study the application of two consecutive, frequency-chirped control pulses for coherence rephasing in an ensemble with a “natural” inhomogeneous broadening. Although propagation effects distort the two control pulses differently, chirped pulses that drive adiabatic passage can rephase atomic coherences in an optically thick storage medium. Combined with spatial phase-mismatching techniques to prevent primary echo emission, coherences can be rephased around the ground state to achieve secondary echo emission with close to unit efficiency. Potential advantages over similar schemes working with π pulses include greater potential signal fidelity, reduced noise due to spontaneous emission, and better capability for the storage of multiple memory channels.
Sheng, Weitian; Zhou, Chenming; Liu, Yang; Bagci, Hakan; Michielssen, Eric
2018-01-01
A fast and memory efficient three-dimensional full-wave simulator for analyzing electromagnetic (EM) wave propagation in electrically large and realistic mine tunnels/galleries loaded with conductors is proposed. The simulator relies on Muller and combined field surface integral equations (SIEs) to account for scattering from mine walls and conductors, respectively. During the iterative solution of the system of SIEs, the simulator uses a fast multipole method-fast Fourier transform (FMM-FFT) scheme to reduce CPU and memory requirements. The memory requirement is further reduced by compressing large data structures via singular value and Tucker decompositions. The efficiency, accuracy, and real-world applicability of the simulator are demonstrated through characterization of EM wave propagation in electrically large mine tunnels/galleries loaded with conducting cables and mine carts. PMID:29726545
Optimized Laplacian image sharpening algorithm based on graphic processing unit
NASA Astrophysics Data System (ADS)
Ma, Tinghuai; Li, Lu; Ji, Sai; Wang, Xin; Tian, Yuan; Al-Dhelaan, Abdullah; Al-Rodhaan, Mznah
2014-12-01
In classical Laplacian image sharpening, all pixels are processed one by one, which leads to a large amount of computation. Traditional Laplacian sharpening on a CPU is considerably time-consuming, especially for large pictures. In this paper, we propose a parallel implementation of Laplacian sharpening based on the Compute Unified Device Architecture (CUDA), a computing platform for Graphics Processing Units (GPUs), and analyze the impact of picture size on performance as well as the relationship between data transfer time and parallel computing time. Further, exploiting the different characteristics of the GPU memory types, an improved scheme of our method is developed, which uses shared memory instead of global memory and further increases the efficiency. Experimental results prove that the two novel algorithms outperform the traditional sequential method based on OpenCV in terms of computing speed.
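A CPU reference of the stencil in numpy (this is our sketch of standard Laplacian sharpening, not the paper's CUDA code): the GPU version evaluates the same per-pixel expression with one thread per pixel, and the improved scheme stages each thread block's neighbourhood in shared memory to avoid repeated global-memory reads.

    import numpy as np

    def laplacian_sharpen(img, strength=1.0):
        """Sharpen a grayscale image: out = img - strength * laplacian(img)."""
        padded = np.pad(img.astype(np.float64), 1, mode="edge")
        lap = (padded[:-2, 1:-1] + padded[2:, 1:-1] +      # up + down
               padded[1:-1, :-2] + padded[1:-1, 2:] -      # left + right
               4.0 * padded[1:-1, 1:-1])                   # centre
        return np.clip(img - strength * lap, 0, 255)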
Memory feedback PID control for exponential synchronisation of chaotic Lur'e systems
NASA Astrophysics Data System (ADS)
Zhang, Ruimei; Zeng, Deqiang; Zhong, Shouming; Shi, Kaibo
2017-09-01
This paper studies the problem of exponential synchronisation of chaotic Lur'e systems (CLSs) via memory feedback proportional-integral-derivative (PID) control scheme. First, a novel augmented Lyapunov-Krasovskii functional (LKF) is constructed, which can make full use of the information on time delay and activation function. Second, improved synchronisation criteria are obtained by using new integral inequalities, which can provide much tighter bounds than what the existing integral inequalities can produce. In comparison with existing results, in which only proportional control or proportional derivative (PD) control is used, less conservative results are derived for CLSs by PID control. Third, the desired memory feedback controllers are designed in terms of the solution to linear matrix inequalities. Finally, numerical simulations of Chua's circuit and neural network are provided to show the effectiveness and advantages of the proposed results.
NASA Astrophysics Data System (ADS)
Suzuki, Yosuke; Ebina, Kuniyoshi; Tanaka, Shigenori
2016-08-01
A computational scheme to describe the coherent dynamics of excitation energy transfer (EET) in molecular systems is proposed on the basis of generalized master equations with memory kernels. This formalism takes into account those physical effects in electron-bath coupling system such as the spin symmetry of excitons, the inelastic electron tunneling and the quantum features of nuclear motions, thus providing a theoretical framework to perform an ab initio description of EET through molecular simulations for evaluating the spectral density and the temporal correlation function of electronic coupling. Some test calculations have then been carried out to investigate the dependence of exciton population dynamics on coherence memory, inelastic tunneling correlation time, magnitude of electronic coupling, quantum correction to temporal correlation function, reorganization energy and energy gap.
Cerebellar models of associative memory: Three papers from IEEE COMPCON spring 1989
NASA Technical Reports Server (NTRS)
Raugh, Michael R. (Editor)
1989-01-01
Three papers are presented on the following topics: (1) a cerebellar-model associative memory as a generalized random-access memory; (2) theories of the cerebellum - two early models of associative memory; and (3) intelligent network management and functional cerebellum synthesis.
Rudasingwa, Martin; Soeters, Robert; Bossuyt, Michel
2015-01-01
To strengthen health care delivery, the Burundian Government, in collaboration with international NGOs, piloted performance-based financing (PBF) in 2006. Health facilities were assigned, using a simple matching method, to begin the PBF scheme or to continue with the traditional input-based funding. Our objective was to analyse the effect of the PBF scheme on the quality of health services between 2006 and 2008. We conducted the analysis in 16 health facilities with the PBF scheme and 13 health facilities without it. We analysed the PBF effect by using 58 composite quality indicators of eight health services: care management, outpatient care, maternity care, prenatal care, family planning, laboratory services, medicines management and materials management. The differences in quality improvement between the two groups of health facilities were assessed by applying descriptive statistics, a paired non-parametric Wilcoxon signed-rank test and a simple difference-in-difference approach at a significance level of 5%. We found an improvement of the quality of care in the PBF group and a significant deterioration in the non-PBF group in the same four health services: care management, outpatient care, maternity care, and prenatal care. The findings suggest a PBF effect of between 38 and 66 percentage points (p<0.001) in the quality scores of care management, outpatient care, prenatal care, and maternal care. We found no PBF effect on clinical support services: laboratory services, medicines management, and materials management. The PBF scheme in Burundi contributed to the improvement of the health services that were strongly under the control of medical personnel (physicians and nurses) within a short time of two years. The clinical support services that did not significantly improve were strongly under the control of laboratory technicians, pharmacists and non-medical personnel. PMID:25948432
Akama-Garren, Elliot H.; Bianchi, Matt T.; Leveroni, Catherine; Cole, Andrew J.; Cash, Sydney S.; Westover, M. Brandon
2016-01-01
Objectives: Anterior temporal lobectomy is curative for many patients with disabling medically refractory temporal lobe epilepsy, but carries an inherent risk of disabling verbal memory loss. Although accurate prediction of iatrogenic memory loss is becoming increasingly possible, it remains unclear how much weight such predictions should have in surgical decision making. Here we aim to create a framework that facilitates a systematic and integrated assessment of the relative risks and benefits of surgery versus medical management for patients with left temporal lobe epilepsy. Methods: We constructed a Markov decision model to evaluate the probabilistic outcomes and associated health utilities associated with choosing to undergo a left anterior temporal lobectomy versus continuing with medical management for patients with medically refractory left temporal lobe epilepsy. Three base-cases were considered, representing a spectrum of surgical candidates encountered in practice, with varying degrees of epilepsy-related disability and potential for decreased quality of life in response to post-surgical verbal memory deficits. Results: For patients with moderately severe seizures and moderate risk of verbal memory loss, medical management was the preferred decision, with increased quality-adjusted life expectancy. However, the preferred choice was sensitive to clinically meaningful changes in several parameters, including quality of life impact of verbal memory decline, quality of life with seizures, mortality rate with medical management, probability of remission following surgery, and probability of remission with medical management. Significance: Our decision model suggests that for patients with left temporal lobe epilepsy, quantitative assessment of risk and benefit should guide recommendation of therapy. In particular, risk for and potential impact of verbal memory decline should be carefully weighed against the degree of disability conferred by continued seizures on a patient-by-patient basis. PMID:25244498
Akama-Garren, Elliot H; Bianchi, Matt T; Leveroni, Catherine; Cole, Andrew J; Cash, Sydney S; Westover, M Brandon
2014-11-01
Anterior temporal lobectomy is curative for many patients with disabling medically refractory temporal lobe epilepsy, but carries an inherent risk of disabling verbal memory loss. Although accurate prediction of iatrogenic memory loss is becoming increasingly possible, it remains unclear how much weight such predictions should have in surgical decision making. Here we aim to create a framework that facilitates a systematic and integrated assessment of the relative risks and benefits of surgery versus medical management for patients with left temporal lobe epilepsy. We constructed a Markov decision model to evaluate the probabilistic outcomes and associated health utilities associated with choosing to undergo a left anterior temporal lobectomy versus continuing with medical management for patients with medically refractory left temporal lobe epilepsy. Three base-cases were considered, representing a spectrum of surgical candidates encountered in practice, with varying degrees of epilepsy-related disability and potential for decreased quality of life in response to post-surgical verbal memory deficits. For patients with moderately severe seizures and moderate risk of verbal memory loss, medical management was the preferred decision, with increased quality-adjusted life expectancy. However, the preferred choice was sensitive to clinically meaningful changes in several parameters, including quality of life impact of verbal memory decline, quality of life with seizures, mortality rate with medical management, probability of remission following surgery, and probability of remission with medical management. Our decision model suggests that for patients with left temporal lobe epilepsy, quantitative assessment of risk and benefit should guide recommendation of therapy. In particular, risk for and potential impact of verbal memory decline should be carefully weighed against the degree of disability conferred by continued seizures on a patient-by-patient basis. Wiley Periodicals, Inc. © 2014 International League Against Epilepsy.
Memory Management of Multimedia Services in Smart Homes
NASA Astrophysics Data System (ADS)
Kamel, Ibrahim; Muhaureq, Sanaa A.
Nowadays there is a wide spectrum of applications that run in smart home environments. Consequently, the home gateway, a central component in the smart home, must manage many applications despite limited memory resources. OSGi is a middleware standard for home gateways that models services as dependent components. Moreover, these applications might differ in their importance. Services collaborate and complement each other to achieve the required results. This paper addresses the following problem: given a home gateway that hosts several applications with different priorities and arbitrary dependencies among them, which application or service should be stopped or evicted from memory when the gateway runs out of memory and needs to start a new service? Note that stopping a given service means that all the services that depend on it will be stopped too. Because of the service dependencies, traditional memory management techniques from the operating systems literature might not be efficient. Our goal is to stop the least important and the least number of services. The paper presents a novel algorithm for home gateway memory management. The proposed algorithm takes into consideration the priority of the application and the dependencies between different services, in addition to the amount of memory occupied by each service. We implemented the proposed algorithm and performed many experiments to evaluate its performance and execution time. The proposed algorithm is implemented as a part of the OSGi (Open Service Gateway initiative) framework. We used best fit and worst fit as yardsticks to show the effectiveness of the proposed algorithm.
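A greedy single-victim sketch of the dependency-aware eviction problem described above (the data layout and function names are our assumptions, not the paper's algorithm): stopping a service cascades to everything that transitively depends on it, so a victim is chosen by the smallest total priority, then the smallest cascade, among candidates that free enough memory.

    def services_to_stop(candidate, depends_on):
        """Transitive set of services that must stop if `candidate` stops.
        depends_on maps each service to the set of services it requires."""
        stopped, frontier = set(), {candidate}
        while frontier:
            s = frontier.pop()
            stopped.add(s)
            frontier |= {d for d, reqs in depends_on.items()
                         if s in reqs and d not in stopped}
        return stopped

    def pick_victim(services, depends_on, needed_memory):
        """services[name] = {"memory": bytes, "priority": importance},
        where a larger priority means a more important application."""
        best = None
        for name in services:
            cascade = services_to_stop(name, depends_on)
            freed = sum(services[s]["memory"] for s in cascade)
            if freed < needed_memory:
                continue                     # would not free enough memory
            cost = (sum(services[s]["priority"] for s in cascade), len(cascade))
            if best is None or cost < best[0]:
                best = (cost, cascade)
        return best[1] if best else None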
Appropriate technology for domestic wastewater management in under-resourced regions of the world
NASA Astrophysics Data System (ADS)
Oladoja, Nurudeen Abiola
2017-11-01
The centralized wastewater management system is the modern-day waste management practice, but its high cost and stringent requirements for construction and operation have made it less attractive in the under-resourced regions of the world. Considering these challenges, the use of a decentralized wastewater management system, the on-site treatment system, as an appropriate technology for domestic wastewater treatment is hereby advocated. Adopting this technology helps save money, protects homeowners' investment, promotes better watershed management, offers an appropriate solution for low-density communities, provides suitable alternatives for varying site conditions and furnishes effective solutions for ecologically sensitive areas. In light of this, an overview of the on-site treatment scheme, at the laboratory scale, pilot study stage, and field trials, was conducted to highlight the operational principles, strengths and shortcomings of the scheme. The operational requirements for establishing and operating the scheme, and best management practices to enhance its performance and sustainability, are proffered.
NMRPipe: a multidimensional spectral processing system based on UNIX pipes.
Delaglio, F; Grzesiek, S; Vuister, G W; Zhu, G; Pfeifer, J; Bax, A
1995-11-01
The NMRPipe system is a UNIX software environment of processing, graphics, and analysis tools designed to meet current routine and research-oriented multidimensional processing requirements, and to anticipate and accommodate future demands and developments. The system is based on UNIX pipes, which allow programs running simultaneously to exchange streams of data under user control. In an NMRPipe processing scheme, a stream of spectral data flows through a pipeline of processing programs, each of which performs one component of the overall scheme, such as Fourier transformation or linear prediction. Complete multidimensional processing schemes are constructed as simple UNIX shell scripts. The processing modules themselves maintain and exploit accurate records of data sizes, detection modes, and calibration information in all dimensions, so that schemes can be constructed without the need to explicitly define or anticipate data sizes or storage details of real and imaginary channels during processing. The asynchronous pipeline scheme provides other substantial advantages, including high flexibility, favorable processing speeds, choice of both all-in-memory and disk-bound processing, easy adaptation to different data formats, simpler software development and maintenance, and the ability to distribute processing tasks on multi-CPU computers and computer networks.
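Python generators give a compact analogy to this pipeline architecture (an illustration of the idea only, not NMRPipe code): each stage consumes a stream and yields a transformed stream, so stages compose like programs chained with UNIX pipes and never need the whole data set in memory at once.

    import numpy as np

    def fourier_stage(stream):
        """Consume a stream of 1-D vectors, emit their Fourier transforms."""
        for vector in stream:
            yield np.fft.fft(vector)

    def scale_stage(stream, factor):
        for vector in stream:
            yield factor * vector

    # compose a processing scheme the way a shell script chains programs
    source = (np.sin(np.linspace(0.0, 6.28 * k, 64)) for k in range(1, 4))
    for out in scale_stage(fourier_stage(source), 0.5):
        print(np.abs(out).argmax())    # dominant frequency bin per vector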
Demonstration of spatial-light-modulation-based four-wave mixing in cold atoms
NASA Astrophysics Data System (ADS)
Juo, Jz-Yuan; Lin, Jia-Kang; Cheng, Chin-Yao; Liu, Zi-Yu; Yu, Ite A.; Chen, Yong-Fan
2018-05-01
Long-distance quantum optical communications usually require efficient wave-mixing processes to convert the wavelengths of single photons. Many quantum applications based on electromagnetically induced transparency (EIT) have been proposed and demonstrated at the single-photon level, such as quantum memories, all-optical transistors, and cross-phase modulations. However, EIT-based four-wave mixing (FWM) in a resonant double-Λ configuration has a maximum conversion efficiency (CE) of 25% because of absorptive loss due to spontaneous emission. An improved scheme using spatially modulated intensities of two control fields has been theoretically proposed to overcome this conversion limit. In this study, we first demonstrate wavelength conversion from 780 to 795 nm with a 43% CE by using this scheme at an optical density (OD) of 19 in cold 87Rb atoms. According to the theoretical model, the CE in the proposed scheme can further increase to 96% at an OD of 240 under ideal conditions, thereby attaining an identical CE to that of the previous nonresonant double-Λ scheme at half the OD. This spatial-light-modulation-based FWM scheme can achieve a near-unity CE, thus providing an easy method of implementing an efficient quantum wavelength converter for all-optical quantum information processing.
Quantum pattern recognition with multi-neuron interactions
NASA Astrophysics Data System (ADS)
Fard, E. Rezaei; Aghayar, K.; Amniat-Talab, M.
2018-03-01
We present a quantum neural network with multi-neuron interactions for pattern recognition tasks, combining an extended classical Hopfield network with adiabatic quantum computation. This scheme can be used as an associative memory to retrieve partial patterns with any number of unknown bits. We also propose a preprocessing approach to classifying the pattern space S to suppress spurious patterns. The results of pattern clustering show that, for pattern association, the number of weights (η) should equal the number of unknown bits in the input pattern (d). It is also remarkable that the associative memory function depends on the location of the unknown bits in addition to d and the load parameter α.
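The classical component of this scheme can be sketched as an ordinary Hopfield associative memory in which only the unknown bits of the probe are relaxed (a hedged illustration; the multi-neuron quantum extension and the adiabatic step are not modeled here):

    import numpy as np

    def hopfield_train(patterns):
        """Hebbian weight matrix for +/-1 pattern vectors."""
        P = np.asarray(patterns, dtype=float)
        W = P.T @ P / len(P)
        np.fill_diagonal(W, 0.0)
        return W

    def hopfield_recall(W, probe, unknown, iters=20):
        """Retrieve a stored pattern from a probe whose `unknown` index
        positions are unreliable: only those bits are updated."""
        x = np.asarray(probe, dtype=float).copy()
        for _ in range(iters):
            for i in unknown:
                x[i] = 1.0 if W[i] @ x >= 0 else -1.0
        return x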
Devolved School Management in Tayside Region. Research Report Series.
ERIC Educational Resources Information Center
Wilson, Valerie; And Others
This report contains findings of an evaluation of the first phase of Tayside Region's (Scotland) Devolved School Management (DSM) scheme. The evaluation sought to evaluate the first phase of implementation and to suggest ways in which the scheme and accompanying training might be improved. Sixty schools chose to participate in the first phase,…
Yu, Si; Gui, Xiaolin; Lin, Jiancai; Tian, Feng; Zhao, Jianqiang; Dai, Min
2014-01-01
Cloud computing is attracting increasing attention for its capacity to free developers from infrastructure management tasks. However, recent works reveal that side channel attacks can lead to privacy leakage in the cloud. Enhancing isolation between users is an effective solution to eliminate such attacks. In this paper, to eliminate side channel attacks, we investigate the isolation enhancement scheme from the aspect of virtual machine (VM) management. The security-awareness VMs management scheme (SVMS), a VM isolation enhancement scheme to defend against side channel attacks, is proposed. First, we use the aggressive conflict-of-interest relation (ACIR) and aggressive in-ally-with relation (AIAR) to describe user constraint relations. Second, based on the Chinese wall policy, we put forward four isolation rules. Third, VM placement and migration algorithms are designed to enforce VM isolation between conflicting users. Finally, based on the normal distribution, we conduct a series of experiments to evaluate SVMS. The experimental results show that SVMS is efficient in guaranteeing isolation between VMs owned by conflicting users, while the resource utilization rate decreases, but not by much.
Gui, Xiaolin; Lin, Jiancai; Tian, Feng; Zhao, Jianqiang; Dai, Min
2014-01-01
Cloud computing is attracting increasing attention for its capacity to free developers from infrastructure management tasks. However, recent works reveal that side channel attacks can lead to privacy leakage in the cloud. Enhancing isolation between users is an effective solution to eliminate such attacks. In this paper, to eliminate side channel attacks, we investigate the isolation enhancement scheme from the aspect of virtual machine (VM) management. The security-awareness VMs management scheme (SVMS), a VM isolation enhancement scheme to defend against side channel attacks, is proposed. First, we use the aggressive conflict-of-interest relation (ACIR) and aggressive in-ally-with relation (AIAR) to describe user constraint relations. Second, based on the Chinese wall policy, we put forward four isolation rules. Third, VM placement and migration algorithms are designed to enforce VM isolation between conflicting users. Finally, based on the normal distribution, we conduct a series of experiments to evaluate SVMS. The experimental results show that SVMS is efficient in guaranteeing isolation between VMs owned by conflicting users, while the resource utilization rate decreases, but not by much. PMID:24688434
Water management challenges at Mushandike irrigation scheme in Runde catchment, Zimbabwe
NASA Astrophysics Data System (ADS)
Malanco, Jose A.; Makurira, Hodson; Kaseke, Evans; Gumindoga, Webster
2018-05-01
Mushandike Irrigation Scheme, constructed in 1939, is located in Masvingo District and is one of the oldest irrigation schemes in Zimbabwe. Since 2002, the scheme has experienced severe water shortages resulting in poor crop yields. The low crop yields have led to loss of income for the smallholder farmers who constitute the irrigation scheme, leading to water conflicts. The water stress at the scheme has been largely attributed to climate change and the uncontrolled expansion of the land under irrigation, which is currently about 1000 ha against a design area of 613 ha. This study sought to determine the actual causes of the water shortage at Mushandike Irrigation Scheme. Hydro-climatic data were analysed to establish if the Mushandike River system generates enough water to guarantee the calculated annual yield of the dam. Irrigation demands and efficiencies were compared against water availability and dam releases to establish if there is any deficit. The Spearman's rank correlation results of 0.196 for rainfall and 0.48 for evaporation confirmed positive but insignificant long-term changes in hydro-climatic conditions in the catchment. Water budgets established that the dam yield of 9.2 × 10⁶ m³ year⁻¹ is sufficient to support the expanded area of 1000 ha, provided in-field water management efficiencies are adopted. The study concludes that the water shortages currently experienced at the scheme are a result of inefficient water management (e.g. over-abstraction from the dam beyond the firm yield, adoption of inefficient irrigation methods and high channel losses in the canal system) and are not related to hydro-climatic conditions. The study also sees no value in considering inter-basin water transfer to cushion the losses being experienced at the scheme.
NASA Technical Reports Server (NTRS)
Ansari, Nirwan; Liu, Dequan
1991-01-01
A neural-network-based traffic management scheme for a satellite communication network is described. The scheme consists of two levels of management. The front end of the scheme is a derivation of Kohonen's self-organization model to configure maps for the satellite communication network dynamically. The model consists of three stages. The first stage is the pattern recognition task, in which an exemplar map that best meets the current network requirements is selected. The second stage is the analysis of the discrepancy between the chosen exemplar map and the state of the network, and the adaptive modification of the chosen exemplar map to conform closely to the network requirement (input data pattern) by means of Kohonen's self-organization. In the third stage, certain performance criteria determine whether a new map is generated to replace the originally chosen map. A state-dependent routing algorithm, which assigns each incoming call to a proper path, is used to make the network more efficient and to lower the call block rate. Simulation results demonstrate that the scheme, which combines self-organization and the state-dependent routing mechanism, provides better performance in terms of call block rate than schemes that have only either the self-organization mechanism or the routing mechanism.
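The heart of the second stage is the standard Kohonen update, which in this setting pulls the chosen exemplar map toward the current network requirement. A generic one-dimensional-topology sketch (names and parameters are illustrative, not the paper's implementation):

    import numpy as np

    def kohonen_step(weights, x, lr=0.1, sigma=1.0):
        """One self-organizing update of an exemplar map.

        weights : (n_units, dim) map entries
        x       : (dim,) current input data pattern
        """
        bmu = np.argmin(np.linalg.norm(weights - x, axis=1))   # winner
        dist = np.abs(np.arange(len(weights)) - bmu)           # 1-D topology
        h = np.exp(-(dist ** 2) / (2.0 * sigma ** 2))          # neighbourhood
        return weights + lr * h[:, None] * (x - weights)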
2014-09-30
[Figure residue; only diagram labels are recoverable: Mental Domain (Ω), Goal Management, World (Ψ), Mission & Goals, World Model, Episodic Memory, Semantic Memory, Activations Trace, Meta-Level Control, Introspective Monitoring, Reasoning Trace, Strategies, Metaknowledge, Self Model.] …it is from incorrect or missing memory associations (i.e., indices). Similarly, correct information may exist in the input stream, but may not be
Matching soil salinization and cropping systems in communally managed irrigation schemes
NASA Astrophysics Data System (ADS)
Malota, Mphatso; Mchenga, Joshua
2018-03-01
The occurrence of soil salinization in irrigation schemes can be a good indicator of when to introduce highly salt-tolerant crops. This study assessed the level of soil salinization in the communally managed, 233 ha Nkhate irrigation scheme in the Lower Shire Valley region of Malawi. Soil samples were collected within the 0-0.4 m soil depth from eight randomly selected irrigation blocks. Irrigation water samples were also collected from five randomly selected locations along the Nkhate River, which supplies irrigation water to the scheme. Salinity of both the soil and the irrigation water samples was determined using an electrical conductivity (EC) meter. Analysis of the results indicated that even for crops with very low salinity tolerance (ECi < 2 dS/m), the irrigation water was suitable for irrigation purposes. However, root-zone soil salinity profiles showed that leaching of salts was not adequate and that the leaching requirement for the scheme needs to be revisited and consistently adhered to during irrigation operation. The study concluded that the cropping system at the scheme needs to be adjusted to match prevailing soil and irrigation water salinity levels.
Woods, Steven Paul; Weinborn, Michael; Maxwell, Brenton R.; Gummery, Alice; Mo, Kevin; Ng, Amanda R. J.; Bucks, Romola S.
2014-01-01
Background Identifying potentially modifiable risk factors for medication non-adherence in older adults is important in order to enhance screening and intervention efforts designed to improve medication-taking behavior and health outcomes. The current study sought to determine the unique contribution of prospective memory (i.e., “remembering to remember”) to successful self-reported medication management in older adults. Methods Sixty-five older adults with current medication prescriptions completed a comprehensive research evaluation of sociodemographic, psychiatric, and neurocognitive functioning, which included the Memory for Adherence to Medication Scale (MAMS), Prospective and Retrospective Memory Questionnaire (PRMQ), and a performance-based measure of prospective memory that measured both semantically-related and semantically-unrelated cue-intention (i.e., when-what) pairings. Results A series of hierarchical regressions controlling for biopsychosocial, other neurocognitive, and medication-related factors showed that elevated complaints on the PM scale of the PRMQ and worse performance on an objective semantically-unrelated event-based prospective memory task were independent predictors of poorer medication adherence as measured by the MAMS. Conclusions Prospective memory plays an important role in self-report of successful medication management among older adults. Findings may have implications for screening for older individuals “at risk” of non-adherence, as well as the development of prospective memory-based interventions to improve medication adherence and, ultimately, long-term health outcomes in older adults. PMID:24410357
A Structure Memory for Data Flow Computers
1977-09-01
with a FET+ before the result is sent to the destination cells. If one of those cells is a SELECT that issues a FET- to reduce the reference count, the … it … in a lAD packet through lADO. Since a reference count scheme is used for recovering unused cells, the controller watches for words whose reference
Shark: Fast Data Analysis Using Coarse-grained Distributed Memory
2013-05-01
…often MySQL or Derby) with a namespace for tables, table metadata, and partition information. Table data is stored in an HDFS directory, while a … saving time and space for large data sets. This is achieved with support for custom SerDe (serialization/deserialization) Java interface implementations
A Program Structure for Event-Based Speech Synthesis by Rules within a Flexible Segmental Framework.
ERIC Educational Resources Information Center
Hill, David R.
1978-01-01
A program structure based on recently developed techniques for operating system simulation has the required flexibility for use as a speech synthesis algorithm research framework. This program makes synthesis possible with less rigid time and frequency-component structure than simpler schemes. It also meets real-time operation and memory-size…
Improving Working Memory Efficiency by Reframing Metacognitive Interpretation of Task Difficulty
ERIC Educational Resources Information Center
Autin, Frederique; Croizet, Jean-Claude
2012-01-01
Working memory capacity, our ability to manage incoming information for processing purposes, predicts achievement on a wide range of intellectual abilities. Three randomized experiments (N = 310) tested the effectiveness of a brief psychological intervention designed to boost working memory efficiency (i.e., state working memory capacity) by…
File Usage Analysis and Resource Usage Prediction: a Measurement-Based Study. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Devarakonda, Murthy V.-S.
1987-01-01
A probabilistic scheme was developed to predict process resource usage in UNIX. Given the identity of the program being run, the scheme predicts CPU time, file I/O, and memory requirements of a process at the beginning of its life. The scheme uses a state-transition model of the program's resource usage in its past executions for prediction. The states of the model are the resource regions obtained from an off-line cluster analysis of processes run on the system. The proposed method is shown to work on data collected from a VAX 11/780 running 4.3 BSD UNIX. The results show that the predicted values correlate well with the actual. The coefficient of correlation between the predicted and actual values of CPU time is 0.84. Errors in prediction are mostly small. Some 82% of errors in CPU time prediction are less than 0.5 standard deviations of process CPU time.
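The prediction machinery lends itself to a compact sketch (a hypothetical illustration; the class layout and names are ours): states are the resource regions produced by the off-line clustering, transition counts are accumulated per program, and a prediction is the centroid of the most frequent successor state.

    from collections import defaultdict

    class ResourcePredictor:
        def __init__(self, centroids):
            # state id -> (cpu_time, file_io, memory) region centroid
            self.centroids = centroids
            self.counts = defaultdict(lambda: defaultdict(int))

        def observe(self, program, prev_state, next_state):
            """Record one past execution's state transition."""
            self.counts[(program, prev_state)][next_state] += 1

        def predict(self, program, last_state):
            """Predict resources for a new process of `program`."""
            successors = self.counts.get((program, last_state))
            if not successors:
                return self.centroids[last_state]   # no history: stay put
            likeliest = max(successors, key=successors.get)
            return self.centroids[likeliest]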
Predictability of process resource usage - A measurement-based study on UNIX
NASA Technical Reports Server (NTRS)
Devarakonda, Murthy V.; Iyer, Ravishankar K.
1989-01-01
A probabilistic scheme is developed to predict process resource usage in UNIX. Given the identity of the program being run, the scheme predicts CPU time, file I/O, and memory requirements of a process at the beginning of its life. The scheme uses a state-transition model of the program's resource usage in its past executions for prediction. The states of the model are the resource regions obtained from an off-line cluster analysis of processes run on the system. The proposed method is shown to work on data collected from a VAX 11/780 running 4.3 BSD UNIX. The results show that the predicted values correlate well with the actual. The correlation coefficient between the predicted and actual values of CPU time is 0.84. Errors in prediction are mostly small. Some 82 percent of errors in CPU time prediction are less than 0.5 standard deviations of process CPU time.
NASA Astrophysics Data System (ADS)
Wang, RuLin; Zheng, Xiao; Kwok, YanHo; Xie, Hang; Chen, GuanHua; Yam, ChiYung
2015-04-01
Understanding electronic dynamics on material surfaces is fundamentally important for applications including nanoelectronics, inhomogeneous catalysis, and photovoltaics. Practical approaches based on time-dependent density functional theory for open systems have been developed to characterize the dissipative dynamics of electrons in bulk materials. The accuracy and reliability of such approaches depend critically on how the electronic structure and memory effects of surrounding material environment are accounted for. In this work, we develop a novel squared-Lorentzian decomposition scheme, which preserves the positive semi-definiteness of the environment spectral matrix. The resulting electronic dynamics is guaranteed to be both accurate and convergent even in the long-time limit. The long-time stability of electronic dynamics simulation is thus greatly improved within the current decomposition scheme. The validity and usefulness of our new approach are exemplified via two prototypical model systems: quasi-one-dimensional atomic chains and two-dimensional bilayer graphene.
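One way to see why a squared-Lorentzian basis preserves positivity (the parametrization below is our illustration, not necessarily the authors' exact form): each term is manifestly nonnegative for real ω, so any fit with nonnegative weights λ_k keeps the environment spectral function positive semidefinite, which a plain Lorentzian expansion with sign-unrestricted weights cannot guarantee.

    J(\omega) \;\approx\; \sum_{k} \lambda_k
      \left[\frac{\eta_k^{2}}{(\omega-\Omega_k)^{2}+\eta_k^{2}}\right]^{2},
    \qquad \lambda_k \ge 0.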
Predictability of process resource usage: A measurement-based study of UNIX
NASA Technical Reports Server (NTRS)
Devarakonda, Murthy V.; Iyer, Ravishankar K.
1987-01-01
A probabilistic scheme is developed to predict process resource usage in UNIX. Given the identity of the program being run, the scheme predicts CPU time, file I/O, and memory requirements of a process at the beginning of its life. The scheme uses a state-transition model of the program's resource usage in its past executions for prediction. The states of the model are the resource regions obtained from an off-line cluster analysis of processes run on the system. The proposed method is shown to work on data collected from a VAX 11/780 running 4.3 BSD UNIX. The results show that the predicted values correlate well with the actual. The correlation coefficient between the predicted and actual values of CPU time is 0.84. Errors in prediction are mostly small. Some 82% of errors in CPU time prediction are less than 0.5 standard deviations of process CPU time.
NASA Astrophysics Data System (ADS)
Li, Zhong-sheng; Bai, Chao-ying; Sun, Yao-chong
2013-08-01
In this paper, we use the staggered grid, the auxiliary grid, the rotated staggered grid and the non-staggered grid finite-difference methods to simulate the wavefield propagation in 2D elastic tilted transversely isotropic (TTI) and viscoelastic TTI media, respectively. Under the stability conditions, we choose different spatial and temporal intervals to get wavefront snapshots and synthetic seismograms to compare the four algorithms in terms of computational accuracy, CPU time, phase shift, frequency dispersion and amplitude preservation. The numerical results show that: (1) the rotated staggered grid scheme has the least memory cost and the fastest running speed; (2) the non-staggered grid scheme has the highest computational accuracy and least phase shift; (3) the staggered grid has less frequency dispersion even when the spatial interval becomes larger.
A 3D staggered-grid finite difference scheme for poroelastic wave equation
NASA Astrophysics Data System (ADS)
Zhang, Yijie; Gao, Jinghuai
2014-10-01
Three dimensional numerical modeling has been a viable tool for understanding wave propagation in real media. The poroelastic media can better describe the phenomena of hydrocarbon reservoirs than acoustic and elastic media. However, the numerical modeling in 3D poroelastic media demands significantly more computational capacity, including both computational time and memory. In this paper, we present a 3D poroelastic staggered-grid finite difference (SFD) scheme. During the procedure, parallel computing is implemented to reduce the computational time. Parallelization is based on domain decomposition, and communication between processors is performed using message passing interface (MPI). Parallel analysis shows that the parallelized SFD scheme significantly improves the simulation efficiency and 3D decomposition in domain is the most efficient. We also analyze the numerical dispersion and stability condition of the 3D poroelastic SFD method. Numerical results show that the 3D numerical simulation can provide a real description of wave propagation.
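A minimal mpi4py sketch of the communication step in such a scheme (our simplification to a 1-D slab split with blocking exchanges; the paper finds a full 3-D domain decomposition most efficient):

    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    # local slab along z with one ghost layer on each side
    nz, ny, nx = 16, 32, 32
    field = np.zeros((nz + 2, ny, nx))
    up = rank - 1 if rank > 0 else MPI.PROC_NULL
    down = rank + 1 if rank < size - 1 else MPI.PROC_NULL

    # ghost-layer exchange before each finite-difference update step
    comm.Sendrecv(field[1], dest=up, recvbuf=field[-1], source=down)
    comm.Sendrecv(field[-2], dest=down, recvbuf=field[0], source=up)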
Fourier-Accelerated Nodal Solvers (FANS) for homogenization problems
NASA Astrophysics Data System (ADS)
Leuschner, Matthias; Fritzen, Felix
2017-11-01
Fourier-based homogenization schemes are useful to analyze heterogeneous microstructures represented by 2D or 3D image data. These iterative schemes involve discrete periodic convolutions with global ansatz functions (mostly fundamental solutions). The convolutions are efficiently computed using the fast Fourier transform. FANS operates on nodal variables on regular grids and converges to finite element solutions. Compared to established Fourier-based methods, the number of convolutions is reduced by FANS. Additionally, fast iterations are possible by assembling the stiffness matrix. Due to the related memory requirement, the method is best suited for medium-sized problems. A comparative study involving established Fourier-based homogenization schemes is conducted for a thermal benchmark problem with a closed-form solution. Detailed technical and algorithmic descriptions are given for all methods considered in the comparison. Furthermore, many numerical examples focusing on convergence properties for both thermal and mechanical problems, including also plasticity, are presented.
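For orientation, here is a minimal version of the classic Fourier-based fixed-point scheme that FANS is benchmarked against (our sketch for a 2-D periodic thermal problem with scalar conductivity; it is not the FANS algorithm itself):

    import numpy as np

    def fft_thermal_basic_scheme(k, E=(1.0, 0.0), tol=1e-8, max_it=500):
        """Fixed-point iteration g <- E - Gamma0 * ((k - k0) g) on a pixel
        grid, where Gamma0 is the periodic Green operator of a reference
        medium k0; returns the local temperature-gradient field."""
        N = k.shape[0]
        k0 = 0.5 * (k.min() + k.max())                # reference medium
        xi = 2.0 * np.pi * np.fft.fftfreq(N)
        XI = np.stack(np.meshgrid(xi, xi, indexing="ij"))   # (2, N, N)
        xi2 = (XI ** 2).sum(0)
        xi2[0, 0] = 1.0                               # avoid divide-by-zero

        g = np.stack([np.full((N, N), E[0]), np.full((N, N), E[1])])
        for _ in range(max_it):
            tau_h = np.fft.fft2((k - k0) * g)         # polarization flux
            # Green operator: Gamma0(xi) tau = xi (xi . tau_h) / (k0 |xi|^2)
            g_h = -XI * ((XI * tau_h).sum(0) / (k0 * xi2))
            g_h[:, 0, 0] = np.array(E) * N * N        # enforce mean gradient
            g_new = np.real(np.fft.ifft2(g_h))
            if np.linalg.norm(g_new - g) < tol * np.linalg.norm(g):
                return g_new
            g = g_new
        return g

Each iteration costs a handful of FFTs; FANS reduces the number of such convolutions per iteration and, by assembling a stiffness matrix, trades memory for faster iterations.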
NASA Astrophysics Data System (ADS)
Hegde, Ganapathi; Vaya, Pukhraj
2013-10-01
This article presents a parallel architecture for the 3-D discrete wavelet transform (3-DDWT). The proposed design is based on the 1-D pipelined lifting scheme. The architecture is fully scalable beyond the present coherent Daubechies filter bank (9, 7). This 3-DDWT architecture has advantages such as no group-of-pictures restriction and reduced memory referencing. It offers low power consumption, low latency and high throughput. The computing technique is based on the concept that the lifting scheme minimises the storage requirement. The application-specific integrated circuit implementation of the proposed architecture is done by synthesising it using a 65 nm Taiwan Semiconductor Manufacturing Company standard cell library. It offers a speed of 486 MHz with a power consumption of 2.56 mW. This architecture is suitable for real-time video compression even with large frame dimensions.
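The storage economy of lifting is easiest to see in one dimension. The sketch below uses the shorter LeGall (5, 3) transform rather than the (9, 7) bank of the paper, purely for brevity (periodic boundary and even-length signal assumed): each predict/update stage works in place on the split samples, so very little intermediate memory is live at any time.

    import numpy as np

    def lifting_53_forward(x):
        """One level of the LeGall (5,3) wavelet via lifting."""
        x = np.asarray(x, dtype=np.float64)
        assert len(x) % 2 == 0
        s, d = x[0::2].copy(), x[1::2].copy()
        d -= 0.5 * (s + np.roll(s, -1))     # predict odds from even pair
        s += 0.25 * (np.roll(d, 1) + d)     # update evens from details
        return s, d                          # approximation, detail

    def lifting_53_inverse(s, d):
        """Exact inverse: undo the lifting steps in reverse order."""
        s = s - 0.25 * (np.roll(d, 1) + d)
        d = d + 0.5 * (s + np.roll(s, -1))
        x = np.empty(2 * len(s))
        x[0::2], x[1::2] = s, d
        return x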
A Survey of Knowledge Management Research & Development at NASA Ames Research Center
NASA Technical Reports Server (NTRS)
Keller, Richard M.; Clancy, Daniel (Technical Monitor)
2002-01-01
This chapter catalogs knowledge management research and development activities at NASA Ames Research Center as of April 2002. A general categorization scheme for knowledge management systems is first introduced. This categorization scheme divides knowledge management capabilities into five broad categories: knowledge capture, knowledge preservation, knowledge augmentation, knowledge dissemination, and knowledge infrastructure. Each of nearly 30 knowledge management systems developed at Ames is then classified according to this system. Finally, a capsule description of each system is presented along with information on deployment status, funding sources, contact information, and both published and internet-based references.
An Efficient Method for Detecting Misbehaving Zone Manager in MANET
NASA Astrophysics Data System (ADS)
Rafsanjani, Marjan Kuchaki; Pakzad, Farzaneh; Asadinia, Sanaz
In recent years, one of the wireless technologies that has grown tremendously is the mobile ad hoc network (MANET), in which mobile nodes organize themselves without the help of any predefined infrastructure. MANETs are highly vulnerable to attack due to their open medium, dynamically changing network topology, cooperative algorithms, lack of centralized monitoring and management, and lack of a clear line of defense. In this paper, we report our progress in developing intrusion detection (ID) capabilities for MANET. In our proposed scheme, a network with a distributed hierarchical architecture is partitioned into zones, each with one zone manager. The zone manager is responsible for monitoring the cluster heads in its zone, and the cluster heads are in charge of monitoring their members. However, the most important problem is how the trustworthiness of the zone manager itself can be established. We therefore propose a scheme in which "honest neighbors" of the zone manager validate their zone manager. These honest neighbors prevent false accusations and also pardon the manager for a first instance of misbehavior; however, if the manager repeats its misbehavior, it loses its management role. Our scheme thus improves intrusion detection and also provides a more reliable network.
NASA Astrophysics Data System (ADS)
Haron, Adib; Mahdzair, Fazren; Luqman, Anas; Osman, Nazmie; Junid, Syed Abdul Mutalib Al
2018-03-01
One of the most significant constraints of the Von Neumann architecture is the limited bandwidth between memory and processor. The cost of moving data back and forth between memory and processor is considerably higher than that of the computation in the processor itself. This constraint significantly impacts Big Data and data-intensive applications such as DNA analysis and comparison, which spend most of their processing time moving data. Recently, the in-memory processing concept was proposed, based on the capability to perform logic operations on the physical memory structure using a crossbar topology and non-volatile resistive-switching memristor technology. This paper proposes a scheme to map a digital equality comparator circuit onto a memristive memory crossbar array. The 2-bit, 4-bit, 8-bit, 16-bit, 32-bit, and 64-bit equality comparator circuits are mapped onto the memristive memory crossbar array using material implication logic in sequential and parallel methods. The simulation results show that, for the 64-bit word size, the parallel mapping exhibits 2.8× better total execution time than the sequential mapping, but with a trade-off in terms of energy consumption and area utilization. Meanwhile, the total crossbar area can be reduced by 1.2× for sequential mapping and 1.5× for parallel mapping, both by using the overlapping technique.
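The logic being mapped can be modelled abstractly: an n-bit equality comparator is a bitwise XNOR followed by an AND reduction, and counting reduction steps suggests why the parallel mapping wins on execution time for wide words. The following toy Python model (not a memristor or crossbar simulation, and the step counts are idealized) illustrates the idea:

```python
# Toy model: equality = bitwise XNOR then AND-reduce. A sequential mapping
# chains the ANDs one implication at a time; a parallel mapping uses a tree.
from math import ceil, log2

def equal(a, b, n):
    xnor = [(a >> i & 1) == (b >> i & 1) for i in range(n)]
    return all(xnor)

for n in (2, 4, 8, 16, 32, 64):
    seq_steps = n - 1                 # AND chain, one step per bit
    par_steps = ceil(log2(n))         # balanced AND tree across the array
    print(f"{n:2d}-bit: sequential {seq_steps:2d} AND steps, parallel {par_steps} levels")

assert equal(0b1011, 0b1011, 4) and not equal(0b1011, 0b1001, 4)
```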
Ludmer, Rachel; Edelson, Micah G; Dudai, Yadin
2015-02-01
Flexible mnemonic mechanisms that adjust to different internal mental states can provide a major adaptive advantage. However, little is known regarding how this flexibility is achieved in the human brain. We examined brain activity during retrieval of false memories of a movie, generated by exposing participants to misleading information. Half of the participants suspected the memory manipulation (Distrustful), whereas the other half did not (Naïve). Distrustful displayed more accurate memory performance and a brain signature different from that of Naïve. In Distrustful, the ability to differentiate true from false information was driven by a qualitatively distinct hippocampal activity for endorsed items, consistent with the view that hippocampal encoding allows recollection of a specific source. Conversely, in Naïve, BOLD differences between true and false memories were linearly correlated with accuracy across participants, suggesting that Naïve subjects needed to reinstate and evaluate stored information to discern true from false. We propose that our results lend support to models suggesting that hippocampal activity can exhibit different computational schemes, depending on memorandum attributes. Furthermore, we show that trust, considered as a subjective state of mind, may alter basic hippocampal strategies, influencing the ability to separate real from false memory. © 2014 Wiley Periodicals, Inc.
Investigation and design of a Project Management Decision Support System for the 4950th Test Wing.
1986-03-01
all decision makers is the need for memory aids (reports, handwritten notes, mental memory joggers, etc.). 4. Even in similar decision making ... memories to synthesize a decision-making process based on their individual styles, skills, and knowledge (Sprague, 1982: 106). Control mechanisms ... representations shown in Figures 4.9 and 4.10 provide a means to this objective. By enabling a manager to make and record reasonable changes to
Effective 3-D surface modeling for geographic information systems
NASA Astrophysics Data System (ADS)
Yüksek, K.; Alparslan, M.; Mendi, E.
2013-11-01
In this work, we propose a dynamic, flexible and interactive urban digital terrain platform (DTP) with the spatial data and query processing capabilities of Geographic Information Systems (GIS), multimedia database functionality and a graphical modeling infrastructure. A new data element, called a Geo-Node, which stores images, spatial data and 3-D CAD objects, is developed using an efficient data structure. The system effectively handles the transfer of Geo-Nodes between main memory and secondary storage with an optimized Directional Replacement Policy (DRP) based buffer management scheme. Polyhedron structures are used in Digital Surface Modeling (DSM), and the smoothing process is performed by interpolation. The experimental results show that our framework achieves high performance and works effectively with urban scenes, independently of the amount of spatial data and the image size. The proposed platform may contribute to the development of various applications such as Web GIS systems based on 3-D graphics standards (e.g. X3-D and VRML) and services which integrate multi-dimensional spatial information and satellite/aerial imagery.
Effective 3-D surface modeling for geographic information systems
NASA Astrophysics Data System (ADS)
Yüksek, K.; Alparslan, M.; Mendi, E.
2016-01-01
In this work, we propose a dynamic, flexible and interactive urban digital terrain platform with the spatial data and query processing capabilities of geographic information systems, multimedia database functionality and a graphical modeling infrastructure. A new data element, called a Geo-Node, which stores images, spatial data and 3-D CAD objects, is developed using an efficient data structure. The system effectively handles the transfer of Geo-Nodes between main memory and secondary storage with an optimized directional replacement policy (DRP) based buffer management scheme. Polyhedron structures are used in digital surface modeling, and the smoothing process is performed by interpolation. The experimental results show that our framework achieves high performance and works effectively with urban scenes, independently of the amount of spatial data and the image size. The proposed platform may contribute to the development of various applications such as Web GIS systems based on 3-D graphics standards (e.g., X3-D and VRML) and services which integrate multi-dimensional spatial information and satellite/aerial imagery.
Configurable memory system and method for providing atomic counting operations in a memory device
Bellofatto, Ralph E.; Gara, Alan G.; Giampapa, Mark E.; Ohmacht, Martin
2010-09-14
A memory system and method for providing atomic memory-based counter operations to operating systems and applications that make the most efficient use of counter-backing memory and of virtual and physical address space, while simplifying operating system memory management and enabling the counter-backing memory to be used for purposes other than counter-backing storage when desired. The encoding and address decoding enabled by the invention provide all of this functionality through a combination of software and hardware.
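A software analogue of the atomic counting operation such a device exposes is a fetch-and-add that remains correct under concurrent access. The sketch below emulates it with a lock in Python purely for illustration; the patented hardware achieves the same semantics without software locking:

```python
# Hedged software analogue of an atomic memory-based counter (fetch-and-add).
import threading

class AtomicCounter:
    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()   # the hardware needs no lock

    def fetch_and_add(self, delta=1):
        with self._lock:
            old = self._value
            self._value += delta
            return old

ctr = AtomicCounter()
threads = [threading.Thread(target=lambda: [ctr.fetch_and_add() for _ in range(1000)])
           for _ in range(8)]
for t in threads: t.start()
for t in threads: t.join()
assert ctr.fetch_and_add(0) == 8000    # no updates lost under contention
```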
Simulation of the Australian Mobilesat signalling scheme
NASA Technical Reports Server (NTRS)
Rahman, Mushfiqur
1990-01-01
The proposed Australian Mobilesat system will provide a range of circuit switched voice/data services using the B-series satellites. The reliability of the signalling scheme between the Network Management Station (NMS) and the mobile terminal (MT) is of critical importance to the performance of the overall system. Simulation results of the performance of the signalling scheme under various channel conditions and coding schemes are presented.
Defence Technology Strategy for the Demands of the 21st Century
2006-10-01
understanding of human capability in the CBM role. Ownership of the intellectual property behind algorithms may be sovereign [10], but implementation will... synchronisation schemes. · coding schemes. · modulation techniques. · access schemes. · smart spectrum usage. · low probability of intercept. · implementation... modulation techniques; access schemes; smart spectrum usage; low probability of intercept Spectrum and bandwidth management · cross layer technologies to
Massively parallel support for a case-based planning system
NASA Technical Reports Server (NTRS)
Kettler, Brian P.; Hendler, James A.; Anderson, William A.
1993-01-01
Case-based planning (CBP), a kind of case-based reasoning, is a technique in which previously generated plans (cases) are stored in memory and can be reused to solve similar planning problems in the future. CBP can save considerable time over generative planning, in which a new plan is produced from scratch. CBP thus offers a potential (heuristic) mechanism for handling intractable problems. One drawback of CBP systems has been the need for a highly structured memory to reduce retrieval times. This approach requires significant domain engineering and complex memory indexing schemes to make these planners efficient. In contrast, our CBP system, CaPER, uses a massively parallel frame-based AI language (PARKA) and can do extremely fast retrieval of complex cases from a large, unindexed memory. The ability to do fast, frequent retrievals has many advantages: indexing is unnecessary; very large case bases can be used; memory can be probed in numerous alternate ways; and queries can be made at several levels, allowing more specific retrieval of stored plans that better fit the target problem with less adaptation. In this paper we describe CaPER's case retrieval techniques and some experimental results showing its good performance, even on large case bases.
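The retrieval style described, scoring every stored case against the probe with no index, can be sketched with a data-parallel pass; here numpy vectorization stands in for PARKA's massive parallelism, and the binary feature encoding is invented for illustration:

```python
# Unindexed case retrieval: score all cases against the probe in one pass.
import numpy as np

rng = np.random.default_rng(1)
case_base = rng.integers(0, 2, size=(100_000, 64))   # 100k cases, 64 binary features
probe = rng.integers(0, 2, size=64)                  # target problem description

# Similarity = number of matching features; no index structure needed.
scores = (case_base == probe).sum(axis=1)
best = np.argsort(scores)[-5:][::-1]                 # top-5 candidate plans
print("best cases:", best, "scores:", scores[best])
```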
Generalization Through the Recurrent Interaction of Episodic Memories
Kumaran, Dharshan; McClelland, James L.
2012-01-01
In this article, we present a perspective on the role of the hippocampal system in generalization, instantiated in a computational model called REMERGE (recurrency and episodic memory results in generalization). We expose a fundamental, but neglected, tension between prevailing computational theories that emphasize the function of the hippocampus in pattern separation (Marr, 1971; McClelland, McNaughton, & O'Reilly, 1995), and empirical support for its role in generalization and flexible relational memory (Cohen & Eichenbaum, 1993; Eichenbaum, 1999). Our account provides a means by which to resolve this conflict, by demonstrating that the basic representational scheme envisioned by complementary learning systems theory (McClelland et al., 1995), which relies upon orthogonalized codes in the hippocampus, is compatible with efficient generalization—as long as there is recurrence rather than unidirectional flow within the hippocampal circuit or, more widely, between the hippocampus and neocortex. We propose that recurrent similarity computation, a process that facilitates the discovery of higher-order relationships between a set of related experiences, expands the scope of classical exemplar-based models of memory (e.g., Nosofsky, 1984) and allows the hippocampus to support generalization through interactions that unfold within a dynamically created memory space. PMID:22775499
The Effect of Funding Scheme on the Performance of Navy Repair Activities
2005-03-01
...a funding scheme called Navy Working Capital Fund while others in the same region were under the Resource Management System (also referred to as
Development of Next Generation Memory Test Experiment for Deployment on a Small Satellite
NASA Technical Reports Server (NTRS)
MacLeod, Todd; Ho, Fat D.
2012-01-01
The original Memory Test Experiment successfully flew on the FASTSAT satellite launched in November 2010. It contained a single Ramtron 512K ferroelectric memory. The memory device went through many thousands of read/write cycles and recorded any errors that were encountered. The original mission was scheduled to last 6 months but was extended to 18 months. New opportunities exist to launch a similar satellite, and considerations for a new memory test experiment should be examined. The original experiment had to be designed and integrated in less than two months, so it was a simple design using readily available parts. The follow-on experiment needs to be more sophisticated and encompass more technologies. This paper lays out the considerations for the design and development of this follow-on flight memory experiment. It also details the results from the original Memory Test Experiment that flew on board FASTSAT. Design considerations for the new experiment include the number and type of memory devices to be used, the kinds of tests that will be performed, other data needed to analyze the results, and the best use of the limited resources on a small satellite. The memory technologies considered are FRAM, FLASH, SONOS, Resistive Memory, Phase Change Memory, Nano-wire Memory, Magneto-resistive Memory, Standard DRAM, and Standard SRAM. The kinds of tests that could be performed are read/write operations, non-volatile memory retention, write cycle endurance, power measurements, and testing of Error Detection and Correction schemes. Other data that may help analyze the results are the GPS location of recorded errors, time stamps of all data recorded, radiation measurements, temperature, and other activities being performed by the satellite. The resources of power, volume, mass, temperature, processing power, and telemetry bandwidth are extremely limited on a small satellite. Design considerations must ensure that the experiment does not interfere with the satellite's primary mission.
Chen, Mingchen; Lin, Xingcheng; Zheng, Weihua; Onuchic, José N; Wolynes, Peter G
2016-08-25
The associative memory, water mediated, structure and energy model (AWSEM) is a coarse-grained force field with transferable tertiary interactions that incorporates local-in-sequence energetic biases using bioinformatically derived structural information about peptide fragments with locally similar sequences, which we call memories. The memory information from the protein data bank (PDB) database guides proper protein folding. The structural information about available sequences in the database varies in quality and can sometimes lead to locally frustrated free energy landscapes. One way out of this difficulty is to construct the input fragment memory information from all-atom simulations of portions of the complete polypeptide chain. In this paper, we investigate more completely this approach, first put forward by Kwac and Wolynes, by studying its structure prediction capabilities for six α-helical proteins. This scheme, which we call the atomistic associative memory, water mediated, structure and energy model (AAWSEM), amounts to an ab initio protein structure prediction method that starts from the ground up without using bioinformatic input. The free energy profiles from AAWSEM show that atomistic fragment memories are sufficient to guide correct folding when tertiary forces are included. AAWSEM combines the efficiency of coarse-grained simulations on the full protein level with the local structural accuracy achievable from all-atom simulations of only parts of a large protein. The results suggest that a hybrid use of atomistic fragment memory and database memory in structure prediction may well be optimal for many practical applications.
Nonvolatile reconfigurable sequential logic in a HfO2 resistive random access memory array.
Zhou, Ya-Xiong; Li, Yi; Su, Yu-Ting; Wang, Zhuo-Rui; Shih, Ling-Yi; Chang, Ting-Chang; Chang, Kuan-Chang; Long, Shi-Bing; Sze, Simon M; Miao, Xiang-Shui
2017-05-25
Resistive random access memory (RRAM) based reconfigurable logic provides a temporal programmable dimension to realize Boolean logic functions and is regarded as a promising route to build non-von Neumann computing architectures. In this work, a reconfigurable operation method is proposed to perform nonvolatile sequential logic in an HfO2-based RRAM array. Eight kinds of Boolean logic functions can be implemented within the same hardware fabric. During the logic computing processes, the RRAM devices in an array are flexibly configured in a bipolar or complementary structure. The validity was demonstrated by experimentally implemented NAND and XOR logic functions and a theoretically designed 1-bit full adder. With the trade-off between temporal and spatial computing complexity, our method makes better use of limited computing resources, thus providing an attractive scheme for the construction of logic-in-memory systems.
Crew exploration vehicle (CEV) attitude control using a neural-immunology/memory network
NASA Astrophysics Data System (ADS)
Weng, Liguo; Xia, Min; Wang, Wei; Liu, Qingshan
2015-01-01
This paper addresses the problem of the crew exploration vehicle (CEV) attitude control. CEVs are NASA's next-generation human spaceflight vehicles, and they use reaction control system (RCS) jet engines for attitude adjustment, which calls for control algorithms for firing the small propulsion engines mounted on vehicles. In this work, the resultant CEV dynamics combines both actuation and attitude dynamics. Therefore, it is highly nonlinear and even coupled with significant uncertainties. To cope with this situation, a neural-immunology/memory network is proposed. It is inspired by the human memory and immune systems. The control network does not rely on precise system dynamics information. Furthermore, the overall control scheme has a simple structure and demands much less computation as compared with most existing methods, making it attractive for real-time implementation. The effectiveness of this approach is also verified via simulation.
Pfeiffer, P.; Egusquiza, I. L.; Di Ventra, M.; ...
2016-07-06
Technology based on memristors, resistors with memory whose resistance depends on the history of the crossing charges, has lately enhanced the classical paradigm of computation with neuromorphic architectures. However, in contrast to the known quantized models of passive circuit elements, such as inductors, capacitors or resistors, the design and realization of a quantum memristor is still missing. Here, we introduce the concept of a quantum memristor as a quantum dissipative device, whose decoherence mechanism is controlled by a continuous-measurement feedback scheme, which accounts for the memory. Indeed, we provide numerical simulations showing that memory effects actually persist in the quantum regime. Our quantization method, specifically designed for superconducting circuits, may be extended to other quantum platforms, allowing for memristor-type constructions in different quantum technologies. As a result, the proposed quantum memristor is then a building block for neuromorphic quantum computation and quantum simulations of non-Markovian systems.
78 FR 23866 - Airworthiness Directives; the Boeing Company
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-23
... operational software in the cabin management system, and loading new software into the mass memory card. The...-200 and -300 series airplanes. The proposed AD would have required installing new operational software in the cabin management system, and loading new software into the mass memory card. Since the...
Hemiboreal forest: natural disturbances and the importance of ecosystem legacies to management
Kalev Jogiste; Henn Korjus; John Stanturf; Lee E. Frelich; Endijs Baders; Janis Donis; Aris Jansons; Ahto Kangur; Kajar Koster; Diana Laarmann; Tiit Maaten; Vitas Marozas; Marek Metslaid; Kristi Nigul; Olga Polyachenko; Tiit Randveer; Floortje Vodde
2017-01-01
The condition of forest ecosystems depends on the temporal and spatial pattern of management interventions and natural disturbances. Remnants of previous conditions persisting after disturbances, or ecosystem legacies, collectively comprise ecosystem memory. Ecosystem memory in turn contributes to resilience and possibilities of ecosystem reorganization...
Low-power chip-level optical interconnects based on bulk-silicon single-chip photonic transceivers
NASA Astrophysics Data System (ADS)
Kim, Gyungock; Park, Hyundai; Joo, Jiho; Jang, Ki-Seok; Kwack, Myung-Joon; Kim, Sanghoon; Kim, In Gyoo; Kim, Sun Ae; Oh, Jin Hyuk; Park, Jaegyu; Kim, Sanggi
2016-03-01
We present a new scheme for chip-level photonic I/Os, based on monolithically integrated vertical photonic devices on bulk silicon, which increases the integration level of PICs to a complete photonic transceiver (TRx) including a chip-level light source. A prototype of the single-chip photonic TRx based on a bulk silicon substrate demonstrated 20 Gb/s low-power chip-level optical interconnects between fabricated chips, proving that this scheme can offer compact, low-cost chip-level I/O solutions and have a significant impact on practical electronic-photonic integration in high-performance computers (HPC), cpu-memory interfaces, 3D-ICs, and LAN/SAN/data-center and network applications.
An effective write policy for software coherence schemes
NASA Technical Reports Server (NTRS)
Chen, Yung-Chin; Veidenbaum, Alexander V.
1992-01-01
The authors study the write behavior and evaluate the performance of various write strategies and buffering techniques for a MIN-based multiprocessor system using a simple software coherence scheme. Hit ratios, memory latencies, total execution time, and total write traffic are used as the performance indices. The write-through write-allocate no-fetch cache using a write-back write buffer is shown to have better performance than both write-through and write-back caches. This type of write buffer is effective in reducing the volume as well as the bursts of write traffic. On average, the use of a write-back cache reduces the total write traffic generated by a write-through cache by 60 percent.
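A toy traffic model makes the buffering effect concrete. The sketch below (assumptions mine: a small LRU write buffer and a store stream skewed toward a few hot lines) counts memory writes under plain write-through versus a coalescing write-back buffer:

```python
# Toy write-traffic model: write-through sends every store to memory; a small
# write-back buffer coalesces repeated stores and writes a line out on eviction.
from collections import OrderedDict
import random

random.seed(0)
hot = range(8)                            # a few frequently written lines
stores = [random.choice(hot) if random.random() < 0.9 else random.randrange(8, 64)
          for _ in range(10_000)]         # line addresses of 10k stores

wt_traffic = len(stores)                  # write-through: one memory write per store

buf, capacity, wb_traffic = OrderedDict(), 8, 0
for line in stores:
    if line in buf:
        buf.move_to_end(line)             # coalesced in the buffer: no traffic
    else:
        if len(buf) == capacity:
            buf.popitem(last=False)       # evict LRU entry -> one memory write
            wb_traffic += 1
        buf[line] = True
wb_traffic += len(buf)                    # final flush of buffered lines

print(f"memory writes: write-through {wt_traffic}, write-back buffer {wb_traffic}")
```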
Numerical simulation of three dimensional transonic flows
NASA Technical Reports Server (NTRS)
Sahu, Jubaraj; Steger, Joseph L.
1987-01-01
The three-dimensional flow over a projectile has been computed using an implicit, approximately factored, partially flux-split algorithm. A simple composite grid scheme has been developed in which a single grid is partitioned into a series of smaller grids for applications which require an external large-memory device, such as the SSD of the CRAY X-MP/48, or multitasking. The accuracy and stability of the composite grid scheme have been tested by numerically simulating the flow over an ellipsoid at angle of attack and comparing the solution with a single-grid solution. The flowfield over a projectile at M = 0.96 and 4 deg angle of attack has been computed using a fine grid and compared with experiment.
A Framework for Simulating Turbine-Based Combined-Cycle Inlet Mode-Transition
NASA Technical Reports Server (NTRS)
Le, Dzu K.; Vrnak, Daniel R.; Slater, John W.; Hessel, Emil O.
2012-01-01
A simulation framework based on the Memory-Mapped-Files technique was created to operate multiple numerical processes in locked time-steps and send I/O data synchronously to one another to simulate system dynamics. This simulation scheme is currently used to study the complex interactions between inlet flow dynamics, variable-geometry actuation mechanisms, and flow controls in the transition from supersonic to hypersonic conditions and vice versa. A study of Mode-Transition Control for a high-speed inlet wind-tunnel model with this MMF-based framework is presented to illustrate the scheme and demonstrate its usefulness in simulating supersonic and hypersonic inlet dynamics and controls, as well as other types of complex systems.
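The Memory-Mapped-Files technique itself is easy to sketch: co-simulating processes exchange I/O data through a shared mapped region instead of sockets. In the minimal Python illustration below, one script maps the same file twice to stand in for two lock-stepped processes; the record layout is invented:

```python
# Two views of one memory-mapped file standing in for two co-simulating
# processes: a writer ("inlet model") and a reader ("controller").
import mmap, os, struct, tempfile

path = os.path.join(tempfile.mkdtemp(), "shared.bin")
with open(path, "wb") as f:
    f.write(b"\x00" * 16)                 # reserve space: step counter + one signal

fd_a = open(path, "r+b"); side_a = mmap.mmap(fd_a.fileno(), 16)
fd_b = open(path, "r+b"); side_b = mmap.mmap(fd_b.fileno(), 16)

for step in range(5):
    side_a[0:12] = struct.pack("<id", step, 0.1 * step)   # writer updates region
    k, u = struct.unpack("<id", side_b[0:12])             # reader sees it at once
    assert k == step
    print(f"step {k}: signal value u = {u:.2f}")
```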
Parallel solution of high-order numerical schemes for solving incompressible flows
NASA Technical Reports Server (NTRS)
Milner, Edward J.; Lin, Avi; Liou, May-Fun; Blech, Richard A.
1993-01-01
A new parallel numerical scheme for solving incompressible steady-state flows is presented. The algorithm uses a finite-difference approach to solving the Navier-Stokes equations. The algorithms are scalable and expandable: they may be used with only two processors or with as many processors as are available, and any size grid may be used. Four processors of the NASA LeRC Hypercluster were used to solve for steady-state flow in a driven square cavity. The Hypercluster was configured in a distributed-memory, hypercube-like architecture. By using a 50-by-50 finite-difference solution grid, an efficiency of 74 percent (a speedup of 2.96) was obtained.
Yé, M; Aninanya, G A; Sié, A; Kakoko, D C V; Chatio, S; Kagoné, M; Prytherch, H; Loukanova, S; Williams, J E; Sauerborn, R
2014-01-01
Performance-based incentives (PBIs) are currently receiving attention as a strategy for improving the quality of care that health providers deliver. Experiences from several African countries have shown that PBIs can trigger improvements, particularly in the area of maternal and neonatal health. The involvement of health workers in deciding how their performance should be measured is recommended. Only limited information is available about how such schemes can be made sustainable. This study explored the types of PBIs that rural health workers suggested, their ideas regarding the management and sustainability of such schemes, and their views on which indicators best lend themselves to the monitoring of performance. In this article, the authors report the findings from a cross-country survey conducted in Burkina Faso, Ghana and Tanzania. The study was exploratory with a qualitative methodology. In-depth interviews were conducted with 29 maternal and neonatal healthcare providers, four district health managers and two policy makers (35 respondents in total) from one district in each of the three countries. The respondents were purposively selected from six peripheral health facilities. Care was taken to include providers who had a management role. By also including respondents from the district and policy levels, a comparison of perspectives across different levels of the health system was facilitated. The collected data were coded and analysed with the support of NVivo v8 software. The most frequently suggested PBIs amongst the respondents in Burkina Faso were training with per-diems, bonuses and recognition of work done. The respondents in Tanzania favoured training with per-diems, as well as payment of overtime and timely promotion. The respondents in Ghana also called for training, including paid study leave, payment of overtime and recognition schemes for health workers or facilities. Respondents in the three countries supported the mobilisation of local resources to make incentive schemes more sustainable. There was a general view that it was easier to integrate the cost of non-financial incentives into local budgets. There were concerns at the provider level in all three countries about the fairness of such schemes. District managers were worried about the workload that would be required to manage the schemes. The providers themselves were less clear about which indicators best lent themselves to the purpose of performance monitoring. District managers and policy makers most commonly suggested indicators that were in line with national maternal and neonatal healthcare indicators. The study showed that health workers have considerable interest in performance-based incentive schemes and are concerned about their sustainability. There is a need to further explore the use of non-financial incentives in PBI schemes, as such incentives were considered to stand a greater chance of being integrated into local budgets. Ensuring the participation of healthcare providers in the design of such schemes is likely to achieve buy-in and endorsement from the health workers involved. However, input from managers and policy makers is essential to keep expectations realistic and to ensure the indicators selected fit the purpose and are part of routine reporting systems.
Action versus Result-Oriented Schemes in a Grassland Agroecosystem: A Dynamic Modelling Approach
Sabatier, Rodolphe; Doyen, Luc; Tichit, Muriel
2012-01-01
Effects of agri-environment schemes (AES) on biodiversity remain controversial. While most AES are action-oriented, result-oriented and habitat-oriented schemes have recently been proposed as a solution to improve AES efficiency. The objective of this study was to compare action-oriented, habitat-oriented and result-oriented schemes in terms of ecological and productive performance as well as in terms of management flexibility. We developed a dynamic modelling approach based on the viable control framework to carry out a long term assessment of the three schemes in a grassland agroecosystem. The model explicitly links grazed grassland dynamics to bird population dynamics. It is applied to lapwing conservation in wet grasslands in France. We ran the model to assess the three AES scenarios. The model revealed the grazing strategies respecting ecological and productive constraints specific to each scheme. Grazing strategies were assessed by both their ecological and productive performance. The viable control approach made it possible to obtain the whole set of viable grazing strategies and therefore to quantify the management flexibility of the grassland agroecosystem. Our results showed that habitat and result-oriented scenarios led to much higher ecological performance than the action-oriented one. Differences in both ecological and productive performance between the habitat and result-oriented scenarios were limited. Flexibility of the grassland agroecosystem in the result-oriented scenario was much higher than in that of habitat-oriented scenario. Our model confirms the higher flexibility as well as the better ecological and productive performance of result-oriented schemes. A larger use of result-oriented schemes in conservation may also allow farmers to adapt their management to local conditions and to climatic variations. PMID:22496746
NASA Technical Reports Server (NTRS)
Brooner, W. G.; Nichols, D. A.
1972-01-01
Development of a scheme for utilizing remote sensing technology in an operational program for regional land use planning and land resource management applications. The scheme uses remote sensing imagery as one of several potential inputs to derive desired and necessary data, and considers several alternative approaches to the expansion and/or reduction and analysis of data using automated data handling techniques. Within this scheme is a five-stage program development which includes: (1) preliminary coordination, (2) interpretation and encoding, (3) creation of data base files, (4) data analysis and generation of desired products, and (5) applications.
The simulation of the non-Markovian behaviour of a two-level system
NASA Astrophysics Data System (ADS)
Semina, I.; Petruccione, F.
2016-05-01
Non-Markovian relaxation dynamics of a two-level system is studied with the help of the non-linear stochastic Schrödinger equation with coloured Ornstein-Uhlenbeck noise. This stochastic Schrödinger equation is investigated numerically with an adapted Platen scheme. It is shown that the memory effects have a significant impact on the dynamics of the system.
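For illustration, the coloured Ornstein-Uhlenbeck noise that drives such a stochastic Schrödinger equation can be generated with its exact one-step discretization, as sketched below (this is not the adapted Platen scheme; tau and sigma are arbitrary):

```python
# Ornstein-Uhlenbeck noise via exact discretization; its exponentially
# decaying autocorrelation is the "memory" of the coloured noise.
import numpy as np

rng = np.random.default_rng(2)
tau, sigma, dt, n = 1.0, 0.5, 0.01, 100_000
a = np.exp(-dt / tau)                     # exact one-step decay factor
z = np.empty(n); z[0] = 0.0
for k in range(n - 1):
    z[k + 1] = a * z[k] + sigma * np.sqrt(1 - a * a) * rng.standard_normal()

lag = int(tau / dt)                       # check: corr(lag = tau) ~ exp(-1)
emp = np.corrcoef(z[:-lag], z[lag:])[0, 1]
print(f"empirical corr at lag tau: {emp:.2f} (theory {np.exp(-1):.2f})")
```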
ERIC Educational Resources Information Center
Kis, Viktoria
2016-01-01
Realising the potential of work-based learning schemes as a driver of productivity requires careful design and support. The length of work-based learning schemes should be adapted to the profile of productivity gains. A scheme that is too long for a given skill set might be unattractive for learners and waste public resources, but a scheme that is…
Two-dimensional Euler and Navier-Stokes time-accurate simulations of fan rotor flows
NASA Technical Reports Server (NTRS)
Boretti, A. A.
1990-01-01
Two numerical methods are presented which describe the unsteady flow field in the blade-to-blade plane of an axial fan rotor. These methods solve the compressible, time-dependent Euler and the compressible, turbulent, time-dependent Navier-Stokes conservation equations for mass, momentum, and energy. The Navier-Stokes equations are written in Favre-averaged form and are closed with an approximate two-equation turbulence model that includes low Reynolds number and compressibility effects. The unsteady aerodynamic component is obtained by superposing inflow or outflow unsteadiness on the steady conditions through time-dependent boundary conditions. The integration in space is performed using a finite volume scheme, and the integration in time using k-stage Runge-Kutta schemes, k = 2,5. The numerical integration algorithm reduces the computational cost of an unsteady simulation involving high-frequency disturbances in both CPU time and memory requirements. Less than 200 sec of CPU time is required to advance the Euler equations on a computational grid of about 2000 grid points during 10,000 time steps on a CRAY Y-MP computer, with a required memory of less than 0.3 megawords.
Kim, Gyungock; Park, Hyundai; Joo, Jiho; Jang, Ki-Seok; Kwack, Myung-Joon; Kim, Sanghoon; Kim, In Gyoo; Oh, Jin Hyuk; Kim, Sun Ae; Park, Jaegyu; Kim, Sanggi
2015-01-01
When silicon photonic integrated circuits (PICs), designed for transmitting and receiving optical data, are successfully monolithically integrated into major silicon electronic chips as chip-level optical I/Os (inputs/outputs), they will bring innovative changes to data computing and communications. Here, we propose a new photonic integration scheme, a single-chip optical transceiver based on a monolithically integrated vertical photonic I/O device set, including the light source, on bulk silicon. This scheme can solve the major issues which impede the practical implementation of silicon-based chip-level optical interconnects. We demonstrated a prototype of a single-chip photonic transceiver with monolithically integrated vertical-illumination type Ge-on-Si photodetectors and VCSELs-on-Si on the same bulk-silicon substrate, operating up to 50 Gb/s and 20 Gb/s, respectively. The prototype realized 20 Gb/s low-power chip-level optical interconnects for λ ~ 850 nm between fabricated chips. This approach can have a significant impact on practical electronic-photonic integration in high-performance computers (HPC), cpu-memory interfaces, hybrid memory cubes, and LAN, SAN, data center and network applications. PMID:26061463
Recognition of Telugu characters using neural networks.
Sukhaswami, M B; Seetharamulu, P; Pujari, A K
1995-09-01
The aim of the present work is to recognize printed and handwritten Telugu characters using artificial neural networks (ANNs). Earlier work on the recognition of Telugu characters was done using conventional pattern recognition techniques. We make an initial attempt here at using neural networks for recognition, with the aim of improving upon earlier methods, which do not perform effectively in the presence of noise and distortion in the characters. The Hopfield model of a neural network working as an associative memory is initially chosen for recognition purposes. Due to limitations in the capacity of the Hopfield neural network, we propose a new scheme named the Multiple Neural Network Associative Memory (MNNAM). The limitation in storage capacity has been overcome by combining multiple neural networks which work in parallel. It is also demonstrated that the Hopfield network is suitable for recognizing noisy printed characters as well as handwritten characters written by different "hands" in a variety of styles. Detailed experiments have been carried out using several learning strategies, and the results are reported. It is shown that satisfactory recognition is possible using the proposed strategy. A detailed preprocessing scheme for Telugu characters from digitized documents is also described.
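A minimal Hopfield associative memory of the kind combined in the MNNAM scheme can be sketched as follows: Hebbian storage of ±1 patterns, then iterative recall from a noisy probe. The random patterns stand in for character bitmaps:

```python
# Hopfield associative memory: Hebbian weights, synchronous sign updates.
import numpy as np

rng = np.random.default_rng(3)
patterns = rng.choice([-1, 1], size=(5, 100))        # 5 stored "characters"
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)                               # no self-connections

probe = patterns[0].copy()
flip = rng.choice(100, size=15, replace=False)
probe[flip] *= -1                                    # corrupt 15% of the bits

state = probe
for _ in range(10):                                  # iterate to a fixed point
    state = np.where(W @ state >= 0, 1, -1)
print("recovered stored pattern:", np.array_equal(state, patterns[0]))
```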
A ripple-spreading genetic algorithm for the aircraft sequencing problem.
Hu, Xiao-Bing; Di Paolo, Ezequiel A
2011-01-01
When genetic algorithms (GAs) are applied to combinatorial problems, permutation representations are usually adopted. As a result, such GAs are often confronted with feasibility and memory-efficiency problems. With the aircraft sequencing problem (ASP) as a study case, this paper reports on a novel binary-representation-based GA scheme for combinatorial problems. Unlike existing GAs for the ASP, which typically use permutation representations based on aircraft landing order, the new GA introduces a novel ripple-spreading model which transforms the original landing-order-based ASP solutions into value-based ones. In the new scheme, arriving aircraft are projected as points into an artificial space. A deterministic method inspired by the natural phenomenon of ripple-spreading on liquid surfaces is developed, which uses a few parameters as input to connect points on this space to form a landing sequence. A traditional GA, free of feasibility and memory-efficiency problems, can then be used to evolve the ripple-spreading related parameters in order to find an optimal sequence. Since the ripple-spreading model is the centerpiece of the new algorithm, it is called the ripple-spreading GA (RSGA). The advantages of the proposed RSGA are illustrated by extensive comparative studies for the case of the ASP.
A New Proxy Electronic Voting Scheme Achieved by Six-Particle Entangled States
NASA Astrophysics Data System (ADS)
Cao, Hai-Jing; Ding, Li-Yuan; Jiang, Xiu-Li; Li, Peng-Fei
2018-03-01
In this paper, we use a quantum proxy signature to construct a new secret electronic voting scheme. In our scheme, six-particle entangled states function as quantum channels. The voter Alice, the Vote Management Center Bob, and the scrutineer Charlie only perform two-particle measurements in the Bell bases to realize the electronic voting process, so the scheme reduces the technical difficulty and increases operational efficiency. We use quantum key distribution and the one-time pad to guarantee its unconditional security. The significant advantage of our scheme is that the transmitted information capacity is twice that of other schemes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Biswas, Ayan K.; Bandyopadhyay, Supriyo; Atulasimha, Jayasimha
We show that the energy dissipated to write bits in spin-transfer-torque random access memory can be reduced by an order of magnitude if a surface acoustic wave (SAW) is launched underneath the magneto-tunneling junctions (MTJs) storing the bits. The SAW-generated strain rotates the magnetization of each MTJ's soft magnet from the easy axis towards the hard axis, whereupon passage of a small spin-polarized current through a target MTJ selectively switches it to the desired state with >99.99% probability at room temperature, thereby writing the bit. The other MTJs return to their original states at the completion of the SAW cycle.
NASA Astrophysics Data System (ADS)
Bullimore, Blaise
2014-10-01
Management of anthropogenic activities that cause pressure on estuarine wildlife and biodiversity is beset by a wide range of challenges. Some, such as the differing environmental and socio-economic objectives and conflicting views and priorities, are common to many estuaries; others are site specific. The Carmarthen Bay and Estuaries European Marine Site encompasses four estuaries of European wildlife and conservation importance and considerable socio-economic value. The estuaries and their wildlife are subject to a range of pressures and threats and the statutory authorities responsible for management in and adjacent to the Site have developed a management scheme to address these. Preparation of the management scheme included an assessment of human activities known to occur in and adjacent to the Site for their potential to cause a threat to the designated habitats and species features, and identified actions the management authorities need to take to minimise or eliminate pressures and threats. To deliver the scheme the partner authorities need to accept the requirement for management actions and work together to achieve them. The Welsh Government also needs to work with these authorities because it is responsible for management of many of the most important pressure-causing activities. However, the absence of statutory obligations for partnership working has proved an impediment to successful management.
BARI+: A Biometric Based Distributed Key Management Approach for Wireless Body Area Networks
Muhammad, Khaliq-ur-Rahman Raazi Syed; Lee, Heejo; Lee, Sungyoung; Lee, Young-Koo
2010-01-01
Wireless body area networks (WBAN) consist of resource constrained sensing devices just like other wireless sensor networks (WSN). However, they differ from WSN in topology, scale and security requirements. Due to these differences, key management schemes designed for WSN are inefficient and unnecessarily complex when applied to WBAN. Considering the key management issue, WBAN are also different from WPAN because WBAN can use random biometric measurements as keys. We highlight the differences between WSN and WBAN and propose an efficient key management scheme, which makes use of biometrics and is specifically designed for WBAN domain. PMID:22319333
Key management schemes using routing information frames in secure wireless sensor networks
NASA Astrophysics Data System (ADS)
Kamaev, V. A.; Finogeev, A. G.; Finogeev, A. A.; Parygin, D. S.
2017-01-01
The article considers the problems and objectives of key management for data encryption in the wireless sensor networks (WSN) of SCADA systems. The structure of the key information in the ZigBee network and methods of obtaining keys are discussed. The use of hybrid key management schemes is most suitable for WSN: a symmetric session key is used to encrypt the sensor data, while asymmetric keys are used to encrypt the session key transmitted within the routing information. Three algorithms of hybrid key management using routing information frames, determined by the routing methods and the WSN topology, are presented.
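The hybrid pattern described, a symmetric session key for the data and asymmetric keys to transport the session key, can be sketched with the Python `cryptography` package as below. This is a generic illustration; ZigBee framing and the routing-information transport are omitted:

```python
# Hybrid key management in miniature: RSA-OAEP wraps a symmetric session key,
# and the session key (Fernet) encrypts the sensor data.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.fernet import Fernet

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

sink_priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
session_key = Fernet.generate_key()                    # symmetric session key

wrapped = sink_priv.public_key().encrypt(session_key, oaep)   # key transport
token = Fernet(session_key).encrypt(b"pressure=2.31")         # sensor data

unwrapped = sink_priv.decrypt(wrapped, oaep)                  # at the receiver
assert Fernet(unwrapped).decrypt(token) == b"pressure=2.31"
```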
BARI+: a biometric based distributed key management approach for wireless body area networks.
Muhammad, Khaliq-ur-Rahman Raazi Syed; Lee, Heejo; Lee, Sungyoung; Lee, Young-Koo
2010-01-01
Wireless body area networks (WBAN) consist of resource constrained sensing devices just like other wireless sensor networks (WSN). However, they differ from WSN in topology, scale and security requirements. Due to these differences, key management schemes designed for WSN are inefficient and unnecessarily complex when applied to WBAN. Considering the key management issue, WBAN are also different from WPAN because WBAN can use random biometric measurements as keys. We highlight the differences between WSN and WBAN and propose an efficient key management scheme, which makes use of biometrics and is specifically designed for WBAN domain.
Digital-Analog Hybrid Scheme and Its Application to Chaotic Random Number Generators
NASA Astrophysics Data System (ADS)
Yuan, Zeshi; Li, Hongtao; Miao, Yunchi; Hu, Wen; Zhu, Xiaohua
2017-12-01
Practical random number generation (RNG) circuits are typically achieved with analog devices or digital approaches. Digital-based techniques, which use field-programmable gate arrays (FPGAs), graphics processing units (GPUs), etc., usually perform better than analog methods, as they are programmable, efficient and robust. However, digital realizations suffer from the effect of finite precision: the generated random numbers (RNs) are actually periodic instead of truly random. To tackle this limitation, in this paper we propose a novel digital-analog hybrid scheme that employs a digital unit as the main body and minimal analog devices to generate physical RNs. Moreover, the possibility of realizing the proposed scheme with only one memory element is discussed. Without loss of generality, we use the capacitor and the memristor along with an FPGA to construct the proposed hybrid system, and a chaotic true random number generator (TRNG) circuit is realized, producing physical RNs at a throughput on the Gbit/s scale. These RNs successfully pass all the tests in the NIST SP800-22 package, confirming the significance of the scheme in practical applications. In addition, the use of this new scheme is not restricted to RNGs; it also provides a strategy for addressing the effect of finite precision in other digital systems.
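The finite-precision effect motivating the hybrid design is easy to demonstrate: a chaotic map iterated in fixed precision must eventually cycle. The sketch below quantizes the logistic map to 16 bits (an arbitrary stand-in for a digital implementation) and detects the cycle:

```python
# Finite precision makes a "chaotic" digital generator periodic: the 16-bit
# fixed-point logistic map has at most 2^16 + 1 states, so it must cycle.
SCALE = 1 << 16

def logistic_q(x):                        # x is an integer in [0, SCALE]
    return (4 * x * (SCALE - x)) // SCALE

x, seen = SCALE // 3, {}
for step in range(1 << 20):
    if x in seen:
        print(f"cycle entered at step {seen[x]}, period {step - seen[x]}")
        break
    seen[x] = step
    x = logistic_q(x)
```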
A robust and high-performance queue management controller for large round trip time networks
NASA Astrophysics Data System (ADS)
Khoshnevisan, Ladan; Salmasi, Farzad R.
2016-05-01
Congestion management for the transmission control protocol is of utmost importance to prevent packet loss within a network, which necessitates strategies for active queue management. The most widely applied active queue management strategies have inherent disadvantages that lead to suboptimal performance and even instability in the case of large round trip times and/or external disturbances. This paper presents an internal model control robust queue management scheme with two degrees of freedom in order to restrict the undesired effects of large and small round trip times and parameter variations on queue management. Conventional approaches such as proportional-integral and random early detection procedures lead to unstable behaviour due to large delay. Moreover, the internal model control-Smith scheme suffers from large oscillations due to large round trip times. On the other hand, other schemes such as internal model control-proportional integral and derivative show excessively sluggish performance for small round trip time values. To overcome these shortcomings, we introduce a system entailing two individual controllers for queue management and disturbance rejection, simultaneously. Simulation results based on Matlab/Simulink and also Network Simulator 2 (NS2) demonstrate the effectiveness of the procedure and verify the analytical approach.
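As a point of reference, the proportional-integral AQM baseline that the paper argues degrades with large round trip times can be simulated in a few lines. The toy model below (arrival model and gains invented for illustration, and no RTT delay modelled) steers the queue toward a reference length by adjusting a drop probability:

```python
# Toy PI active queue management: the drop probability p is updated from the
# queue-length error so the queue settles near the reference length.
import random
random.seed(4)

q, q_ref, p = 0.0, 100.0, 0.0             # queue length, reference, drop prob
a, b = 2.0e-3, 1.8e-3                     # PI gains (invented)
e_prev = q - q_ref
for t in range(2000):
    arrivals = 120 * (1 - p) * random.uniform(0.8, 1.2)  # load reacts to drops
    q = max(0.0, q + arrivals - 100)                     # service: 100 pkt/step
    e = q - q_ref
    p = min(1.0, max(0.0, p + a * e - b * e_prev))       # PI update
    e_prev = e
print(f"queue after 2000 steps: {q:.0f} packets (reference {q_ref:.0f})")
```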
Lee, Linda; Weston, W Wayne; Hillier, Loretta; Archibald, Douglas; Lee, Joseph
2018-06-21
Family physicians often find themselves inadequately prepared to manage dementia. This article describes the curriculum for a resident training intervention in Primary Care Collaborative Memory Clinics (PCCMC), outlines its underlying educational principles, and examines its impact on residents' ability to provide dementia care. PCCMCs are family physician-led interprofessional clinic teams that provide evidence-informed comprehensive assessment and management of memory concerns. Within PCCMCs residents learn to apply a structured approach to assessment, diagnosis, and management; training consists of a tutorial covering various topics related to dementia followed by work-based learning within the clinic. Significantly more residents who trained in PCCMCs (sample = 98), as compared to those in usual training programs (sample = 35), reported positive changes in knowledge, ability, and confidence in ability to assess and manage memory problems. The PCCMC training intervention for family medicine residents provides a significant opportunity for residents to learn about best clinical practices and interprofessional care needed for optimal dementia care integrated within primary care practice.
An Investigation of Unified Memory Access Performance in CUDA
Landaverde, Raphael; Zhang, Tiansheng; Coskun, Ayse K.; Herbordt, Martin
2015-01-01
Managing memory between the CPU and GPU is a major challenge in GPU computing. A programming model, Unified Memory Access (UMA), has recently been introduced by Nvidia to simplify the complexities of memory management while claiming good overall performance. In this paper, we investigate this programming model and evaluate its performance and the simplifications it brings to programming, based on our experimental results. We find that beyond on-demand data transfers to the CPU, the GPU is also able to request subsets of the data it requires on demand. This feature allows UMA to outperform full data transfer methods for certain parallel applications and small data sizes. We also find, however, that for the majority of applications and memory access patterns, the performance overheads associated with UMA are significant, while the simplifications to the programming model restrict flexibility for adding future optimizations. PMID:26594668
NASA Astrophysics Data System (ADS)
Sultana, Tahmina; Takagi, Hiroaki; Morimatsu, Miki; Teramoto, Hiroshi; Li, Chun-Biu; Sako, Yasushi; Komatsuzaki, Tamiki
2013-12-01
We present a novel scheme to extract a multiscale state space network (SSN) from single-molecule time series. The multiscale SSN is a type of hidden Markov model that takes into account both multiple states buried in the measurement and memory effects in the process of the observable whenever they exist. Most biological systems function in a nonstationary manner across multiple timescales. Combined with a recently established nonlinear time series analysis based on information theory, a simple scheme is proposed to deal with the multiscale and nonstationary properties of a discrete time series. We derived an explicit analytical expression of the autocorrelation function in terms of the SSN. To demonstrate the potential of our scheme, we investigated single-molecule time series of dissociation and association kinetics between epidermal growth factor receptor (EGFR) on the plasma membrane and its adaptor protein Ash/Grb2 (Grb2) in an in vitro reconstituted system. We found that our formula successfully reproduces their autocorrelation function for a wide range of timescales (up to 3 s), and the underlying SSNs change their topographical structure as a function of the timescale; while the corresponding SSN is simple at the short timescale (0.033-0.1 s), the SSN at the longer timescales (0.1 s to ~3 s) becomes rather complex in order to capture multiscale nonstationary kinetics emerging at longer timescales.
Combining Distributed and Shared Memory Models: Approach and Evolution of the Global Arrays Toolkit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nieplocha, Jarek; Harrison, Robert J.; Kumar, Mukul
2002-07-29
Both shared memory and distributed memory models have advantages and shortcomings. The shared memory model is much easier to use, but it ignores data locality/placement. Given the hierarchical nature of the memory subsystems in modern computers, this characteristic can have a negative impact on performance and scalability. Various techniques, such as code restructuring to increase data reuse and introducing blocking in data accesses, can address the problem and yield performance competitive with message passing [Singh], though at the cost of compromising ease of use. Distributed memory models such as message passing or one-sided communication offer performance and scalability but compromise ease of use. In this context, the message-passing model is sometimes referred to as "assembly programming for scientific computing". The Global Arrays toolkit [GA1, GA2] attempts to offer the best features of both models. It implements a shared-memory programming model in which data locality is managed explicitly by the programmer. This management is achieved by explicit calls to functions that transfer data between a global address space (a distributed array) and local storage. In this respect, the GA model has similarities to the distributed shared-memory models that provide an explicit acquire/release protocol. However, the GA model acknowledges that remote data is slower to access than local data and allows data locality to be explicitly specified and hence managed. The GA model exposes to the programmer the hierarchical memory of modern high-performance computer systems and, by recognizing the communication overhead for remote data transfer, promotes data reuse and locality of reference. This paper describes the characteristics of the Global Arrays programming model, the capabilities of the toolkit, and its evolution.
Huang, Min; Liu, Zhaoqing; Qiao, Liyan
2014-10-10
While the NAND flash memory is widely used as the storage medium in modern sensor systems, the aggressive shrinking of process geometry and an increase in the number of bits stored in each memory cell will inevitably degrade the reliability of NAND flash memory. In particular, it's critical to enhance metadata reliability, which occupies only a small portion of the storage space, but maintains the critical information of the file system and the address translations of the storage system. Metadata damage will cause the system to crash or a large amount of data to be lost. This paper presents Asymmetric Programming, a highly reliable metadata allocation strategy for MLC NAND flash memory storage systems. Our technique exploits for the first time the property of the multi-page architecture of MLC NAND flash memory to improve the reliability of metadata. The basic idea is to keep metadata in most significant bit (MSB) pages which are more reliable than least significant bit (LSB) pages. Thus, we can achieve relatively low bit error rates for metadata. Based on this idea, we propose two strategies to optimize address mapping and garbage collection. We have implemented Asymmetric Programming on a real hardware platform. The experimental results show that Asymmetric Programming can achieve a reduction in the number of page errors of up to 99.05% with the baseline error correction scheme.
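The benefit of steering metadata to MSB pages can be illustrated with a toy reliability model (all numbers invented): pages alternate MSB/LSB along each block, and the expected raw bit errors of an MSB-only metadata placement are compared with a reliability-blind one:

```python
# Toy model of MSB-aware metadata placement in MLC NAND. Error rates and
# geometry are invented; only the relative comparison is meaningful.
import random
random.seed(5)

P_ERR = {"MSB": 1e-5, "LSB": 8e-5}        # illustrative raw bit error rates
pages = ["MSB" if i % 2 == 0 else "LSB" for i in range(4096)]
BITS_PER_PAGE = 16384

def expected_bit_errors(assignment):
    return sum(P_ERR[pages[p]] * BITS_PER_PAGE for p in assignment)

n_meta = 256                               # pages holding metadata
msb_first = [i for i in range(4096) if pages[i] == "MSB"][:n_meta]
blind = random.sample(range(4096), n_meta)

print(f"expected metadata bit errors -> MSB placement: "
      f"{expected_bit_errors(msb_first):.1f}, blind: {expected_bit_errors(blind):.1f}")
```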
Huang, Min; Liu, Zhaoqing; Qiao, Liyan
2014-01-01
While the NAND flash memory is widely used as the storage medium in modern sensor systems, the aggressive shrinking of process geometry and an increase in the number of bits stored in each memory cell will inevitably degrade the reliability of NAND flash memory. In particular, it's critical to enhance metadata reliability, which occupies only a small portion of the storage space, but maintains the critical information of the file system and the address translations of the storage system. Metadata damage will cause the system to crash or a large amount of data to be lost. This paper presents Asymmetric Programming, a highly reliable metadata allocation strategy for MLC NAND flash memory storage systems. Our technique exploits for the first time the property of the multi-page architecture of MLC NAND flash memory to improve the reliability of metadata. The basic idea is to keep metadata in most significant bit (MSB) pages which are more reliable than least significant bit (LSB) pages. Thus, we can achieve relatively low bit error rates for metadata. Based on this idea, we propose two strategies to optimize address mapping and garbage collection. We have implemented Asymmetric Programming on a real hardware platform. The experimental results show that Asymmetric Programming can achieve a reduction in the number of page errors of up to 99.05% with the baseline error correction scheme. PMID:25310473
NASA Astrophysics Data System (ADS)
Ohsawa, Takashi; Ikeda, Shoji; Hanyu, Takahiro; Ohno, Hideo; Endoh, Tetsuo
2014-01-01
Array operation currents in spin-transfer-torque magnetic random access memories (STT-MRAMs) that use four differential pair type magnetic tunnel junction (MTJ)-based memory cells (4T2MTJ, two 6T2MTJs and 8T2MTJ) are simulated and compared with those in SRAM. With L3 cache applications in mind, it is assumed that the memories have a 32 Mbyte capacity accessed 64 bytes at a time in parallel. All the STT-MRAMs except the 8T2MTJ one are designed with a 32 bit fine-grained power gating scheme applied to eliminate static currents in the memory cells that are not accessed. The 8T2MTJ STT-MRAM, whose cell design concept is not suitable for fine-grained power gating, loads and saves the 32 Mbyte of data in 64 byte units per 1 Mbit sub-array in 2 × 10³ cycles. It is shown that the array operation current of the 4T2MTJ STT-MRAM is 70 mA averaged over 15 ns write cycles at Vdd = 0.9 V. This is the smallest among the STT-MRAMs, about half that of the low standby power (LSTP) SRAM, whose array operation current is totally dominated by the cells' subthreshold leakage.
Community-based memorials to September 11, 2001: environmental stewardship as memory work
Erika S. Svendsen; Lindsay K. Campbell
2014-01-01
This chapter investigates how people use trees, parks, gardens, and other natural resources as raw materials in and settings for memorials to September 11, 2001. In particular, we focus on 'found space living memorials', which we define as sites that are community-managed, re-appropriated from their prior use, often carved out of the public right-of-way, and...
Borghi, Josephine; Maluka, Stephen; Kuwawenaruwa, August; Makawia, Suzan; Tantau, Juma; Mtei, Gemini; Ally, Mariam; Macha, Jane
2013-06-13
The National Health Insurance Fund (NHIF), a compulsory formal sector scheme, took over the management of the Community Health Fund (CHF), a voluntary informal sector scheme, in 2009. This study assesses the origins of the reform, its effect on management and reporting structures, financial flow adequacy, reform communication and acceptability to key stakeholders, and initial progress towards universal coverage. The study relied on national data sources and an in-depth collective case study of a rural and an urban district to assess awareness and acceptability of the reform, and fund availability and use relative to need in a sample of facilities. The reform was driven by a national desire to expand coverage and increase access to services. Despite initial delays, the CHF has been embedded within the NHIF organisational structure, bringing more intensive and qualified supervision closer to the district. National CHF membership has more than doubled. However, awareness of the reform was limited below the district level due to the reform's top-down nature. The reform was generally acceptable to key stakeholders, who expected that benefits between schemes would be harmonised. The reform was unable to institute changes to the CHF design or district management structures because it has so far been unable to change CHF legislation, which also limits facility capacity to use CHF revenue. Further, revenue generated is currently insufficient to offset treatment and administration costs, and the reform did not improve the revenue-to-cost ratio. Administrative costs are also likely to have increased as a result of the reform. Informal sector schemes can benefit from merger with formal sector schemes through improved data systems, supervision, and management support. However, effects will be maximised if legal frameworks can be harmonised early on, and a reduction in administrative costs is not guaranteed.
Lloyd, G C
1996-01-01
Contends that as techniques to motivate, empower and reward staff become ever more sophisticated and expensive, one of the most obvious, though overlooked, ways of tapping the creativity of employees is the suggestion scheme. A staff suggestion scheme may well be dismissed as a simplistic and outdated vehicle by proponents of modern management methods, but to its owners it can be like a classic model--needing just a little care and attention in order for it to run smoothly and at a very low cost. Proposes that readers should spare some time to consider introducing a suggestion scheme as an entry level initiative and a precursor to more sophisticated, elaborate and costly change management mechanisms.
New User Support in the University Network with DACS Scheme
ERIC Educational Resources Information Center
Odagiri, Kazuya; Yaegashi, Rihito; Tadauchi, Masaharu; Ishii, Naohiro
2007-01-01
Purpose: The purpose of this paper is to propose and examine the new user support in university network. Design/methodology/approach: The new user support is realized by use of DACS (Destination Addressing Control System) Scheme which manages a whole network system through communication control on a client computer. This DACS Scheme has been…
FDTD simulation of EM wave propagation in 3-D media
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, T.; Tripp, A.C.
1996-01-01
A finite-difference, time-domain solution to Maxwell's equations has been developed for simulating electromagnetic wave propagation in 3-D media. The algorithm allows arbitrary electrical conductivity and permittivity variations within a model. The staggered grid technique of Yee is used to sample the fields. A new optimized second-order difference scheme is designed to approximate the spatial derivatives. Like the conventional fourth-order difference scheme, the optimized second-order scheme needs four discrete values to calculate a single derivative. However, the optimized scheme is accurate over a wider wavenumber range. Compared to the fourth-order scheme, the optimized scheme imposes stricter limitations on the time step sizes but allows coarser grids. The net effect is that the optimized scheme is more efficient in terms of computation time and memory requirement than the fourth-order scheme. The temporal derivatives are approximated by second-order central differences throughout. The Liao transmitting boundary conditions are used to truncate an open problem. A reflection coefficient analysis shows that this transmitting boundary condition works very well. However, it is subject to instability. A method that can be easily implemented is proposed to stabilize the boundary condition. The finite-difference solution is compared to closed-form solutions for conducting and nonconducting whole spaces and to an integral-equation solution for a 3-D body in a homogeneous half-space. In all cases, the finite-difference solutions are in good agreement with the other solutions. Finally, the use of the algorithm is demonstrated with a 3-D model. Numerical results show that both the magnetic field response and electric field response can be useful for shallow-depth and small-scale investigations.
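The optimized stencil keeps the four-point footprint of the conventional fourth-order staggered-grid difference. A minimal Python sketch, using the standard fourth-order staggered coefficients (9/8 and -1/24) as placeholders; the paper's optimized coefficients are tuned over a wider wavenumber range and are not given in the abstract:

```python
import numpy as np

def staggered_dx(f, dx, c1=9.0/8.0, c2=-1.0/24.0):
    """Four-point staggered-grid first derivative, sampled midway between
    grid points: df[i+1/2] = (c1*(f[i+1]-f[i]) + c2*(f[i+2]-f[i-1]))/dx.
    c1, c2 default to the conventional fourth-order values; an optimized
    second-order scheme would substitute wavenumber-tuned coefficients."""
    f = np.asarray(f, dtype=float)
    return (c1 * (f[2:-1] - f[1:-2]) + c2 * (f[3:] - f[:-3])) / dx
```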
Chapman, J F; Cook, R
2002-10-01
The Blood Stocks Management Scheme (BSMS) has been established as a joint venture between the National Blood Service (NBS) in England and North Wales and participating hospitals to monitor the blood supply chain. Stock and wastage data are submitted to a web-based data-management system, facilitating continuous and complete red cell data collection and 'real time' data extraction. The data-management system enables peer review of performance in respect of stock holding levels and red cell wastage. The BSMS has developed an innovative web-based data-management system that enables data collection and benchmarking of practice, which should drive changes in stock management practice, therefore optimizing the use of donated blood.
Distributed Systems Technology Survey.
1987-03-01
...and protocols. 2. Hardware Technology: Economic factors were a major reason for the proliferation of distributed systems. Processors, memory, and magnetic and optical... destined messages and perform the appropriate forwarding. There is ... agreement that a lightweight process mechanism is essential to support commonly used... Xerox PARC environment [31]. Shared file servers, discussed below, are essential to the success of such a scheme. 11. Security: A distributed...
Parallelization Issues and Particle-In-Cell Codes.
NASA Astrophysics Data System (ADS)
Elster, Anne Cathrine
1994-01-01
"Everything should be made as simple as possible, but not simpler." Albert Einstein. The field of parallel scientific computing has concentrated on parallelization of individual modules such as matrix solvers and factorizers. However, many applications involve several interacting modules. Our analyses of a particle-in-cell code modeling charged particles in an electric field, show that these accompanying dependencies affect data partitioning and lead to new parallelization strategies concerning processor, memory and cache utilization. Our test-bed, a KSR1, is a distributed memory machine with a globally shared addressing space. However, most of the new methods presented hold generally for hierarchical and/or distributed memory systems. We introduce a novel approach that uses dual pointers on the local particle arrays to keep the particle locations automatically partially sorted. Complexity and performance analyses with accompanying KSR benchmarks, have been included for both this scheme and for the traditional replicated grids approach. The latter approach maintains load-balance with respect to particles. However, our results demonstrate it fails to scale properly for problems with large grids (say, greater than 128-by-128) running on as few as 15 KSR nodes, since the extra storage and computation time associated with adding the grid copies, becomes significant. Our grid partitioning scheme, although harder to implement, does not need to replicate the whole grid. Consequently, it scales well for large problems on highly parallel systems. It may, however, require load balancing schemes for non-uniform particle distributions. Our dual pointer approach may facilitate this through dynamically partitioned grids. We also introduce hierarchical data structures that store neighboring grid-points within the same cache -line by reordering the grid indexing. This alignment produces a 25% savings in cache-hits for a 4-by-4 cache. A consideration of the input data's effect on the simulation may lead to further improvements. For example, in the case of mean particle drift, it is often advantageous to partition the grid primarily along the direction of the drift. The particle-in-cell codes for this study were tested using physical parameters, which lead to predictable phenomena including plasma oscillations and two-stream instabilities. An overview of the most central references related to parallel particle codes is also given.
Interference due to shared features between action plans is influenced by working memory span.
Fournier, Lisa R; Behmer, Lawrence P; Stubblefield, Alexandra M
2014-12-01
In this study, we examined the interactions between the action plans that we hold in memory and the actions that we carry out, asking whether the interference due to shared features between action plans is due to selection demands imposed on working memory. Individuals with low and high working memory spans learned arbitrary motor actions in response to two different visual events (A and B), presented in a serial order. They planned a response to the first event (A) and while maintaining this action plan in memory they then executed a speeded response to the second event (B). Afterward, they executed the action plan for the first event (A) maintained in memory. Speeded responses to the second event (B) were delayed when it shared an action feature (feature overlap) with the first event (A), relative to when it did not (no feature overlap). The size of the feature-overlap delay was greater for low-span than for high-span participants. This indicates that interference due to overlapping action plans is greater when fewer working memory resources are available, suggesting that this interference is due to selection demands imposed on working memory. Thus, working memory plays an important role in managing current and upcoming action plans, at least for newly learned tasks. Also, managing multiple action plans is compromised in individuals who have low versus high working memory spans.
NASA Technical Reports Server (NTRS)
Baldwin, Rod
1994-01-01
In recent years there have been immense pressures to enact changes on the air traffic control organizations of most states. In addition, many of these states are or have been subject to great political, sociological and economic changes. Consequently, any new schemes must be considered within the context of national or even international changes. Europe has its own special problems, and many of these are particularly pertinent when considering human factors certification programs. Although these problems must also be considered in the wider context of change, it is usually very difficult to identify which forces are pressing in support of human factors aspects and which forces are resisting change. There are a large number of aspects which must be taken into account if human factors certification programs are to be successfully implemented. Certification programs would be new ventures, and like many new ventures it will be essential to ensure that managers have the skills, commitment and experience to manage the programs effectively. However, they must always be aware of the content and the degree of certainty to which the human factors principles can be applied - as Debons and Horne have carefully described. It will be essential to avoid the well known pitfalls which occur in the implementation of performance appraisal schemes. While most appraisal schemes are usually extremely well thought out, they often do not produce good results because they are not implemented properly and staff therefore do not have faith in them. If the manager does not have the commitment and interest in his/her staff as human beings, then the schemes will not be effective. Thus, one aspect of considering human factors certification schemes is within the context of a managed organization. This paper outlines some of the management factors which need to be considered for the air traffic control services. Many of the points received attention during the plenary sessions while others were covered by the working groups when the question arose of how various aspects of human factors certification programs would be managed. Management and organizational issues will certainly need to be included in any frame of reference by those who may be involved in developing certification programs.
Fast and memory efficient text image compression with JBIG2.
Ye, Yan; Cosman, Pamela
2003-01-01
In this paper, we investigate ways to reduce encoding time, memory consumption and substitution errors for text image compression with JBIG2. We first look at page striping where the encoder splits the input image into horizontal stripes and processes one stripe at a time. We propose dynamic dictionary updating procedures for page striping to reduce the bit rate penalty it incurs. Experiments show that splitting the image into two stripes can save 30% of encoding time and 40% of physical memory with a small coding loss of about 1.5%. Using more stripes brings further savings in time and memory but the return diminishes. We also propose an adaptive way to update the dictionary only when it has become out-of-date. The adaptive updating scheme can resolve the time versus bit rate tradeoff and the memory versus bit rate tradeoff well simultaneously. We then propose three speedup techniques for pattern matching, the most time-consuming encoding activity in JBIG2. When combined together, these speedup techniques can save up to 75% of the total encoding time with at most 1.7% of bit rate penalty. Finally, we look at improving reconstructed image quality for lossy compression. We propose enhanced prescreening and feature monitored shape unifying to significantly reduce substitution errors in the reconstructed images.
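The update-when-stale policy can be illustrated as follows; a Python sketch with symbols reduced to hashable bitmaps, and with a staleness threshold and dictionary representation that are illustrative rather than the paper's:

```python
def encode_striped(stripes, stale_threshold=0.3):
    """Page-striping sketch: process one horizontal stripe at a time and
    refresh the symbol dictionary only when it has gone stale, i.e. when
    too many symbols in the current stripe miss the dictionary. Stripes
    are given as lists of hashable symbol bitmaps; real JBIG2 symbol
    extraction and entropy coding are omitted."""
    dictionary, refreshes = set(), 0
    for symbols in stripes:
        misses = [s for s in symbols if s not in dictionary]
        if symbols and len(misses) / len(symbols) > stale_threshold:
            dictionary |= set(symbols)        # adaptive dictionary update
            refreshes += 1
        # ...encode the stripe against `dictionary` here...
    return refreshes
```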
An Efficient Scheme for Updating Sparse Cholesky Factors
NASA Technical Reports Server (NTRS)
Raghavan, Padma
2002-01-01
Raghavan had earlier developed the software package DSCPACK, which can be used for solving sparse linear systems where the coefficient matrix is symmetric and positive definite (this project was not funded by NASA but by agencies such as NSF). DSCPACK-S is the serial code and DSCPACK-P is a parallel implementation suitable for multiprocessors or networks-of-workstations with message passing using MPI. The main algorithm used is the Cholesky factorization of a sparse symmetric positive definite matrix A = LL^T. The code can also compute the factorization A = LDL^T. The complexity of the software arises from several factors relating to the sparsity of the matrix A. A sparse N x N matrix A typically has fewer than cN nonzeroes, where c is a small constant. If the matrix were dense, it would have O(N^2) nonzeroes. The most complicated part of such sparse Cholesky factorization relates to fill-in, i.e., zeroes in the original matrix that become nonzeroes in the factor L. An efficient implementation depends to a large extent on complex data structures and on techniques from graph theory to reduce, identify, and manage fill. DSCPACK is based on an efficient multifrontal implementation with fill-managing algorithms and implementation arising from earlier research by Raghavan and others. Sparse Cholesky factorization is typically a four-step process: (1) ordering to compute a fill-reducing numbering, (2) symbolic factorization to determine the nonzero structure of L, (3) numeric factorization to compute L, and (4) triangular solution to solve L^T x = y and Ly = b. The first two steps are symbolic and are performed using the graph of the matrix. The numeric factorization step is of dominant cost, and there are several schemes for improving performance by exploiting the nested and dense structure of groups of columns in the factor. The latter are aimed at better utilization of the cache-memory hierarchy on modern processors to prevent cache misses and provide execution rates (operations/second) that are close to the peak rates for dense matrix computations. Currently, DSCPACK is being used in an application at NASA directed by J. Newman and M. James. We propose the implementation of efficient schemes for updating the LL^T or LDL^T factors computed in DSCPACK-S to meet the computational requirements of their project. A brief description is provided in the next section.
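The four steps can be mimicked on a toy system with standard library routines. A sketch using a dense factor for brevity and reverse Cuthill-McKee as a stand-in ordering; DSCPACK itself uses multifrontal sparse kernels and its own fill-reducing ordering:

```python
import numpy as np
from scipy.linalg import solve_triangular
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import reverse_cuthill_mckee

A = np.array([[4., 1., 0., 1.],
              [1., 3., 1., 0.],
              [0., 1., 2., 0.],
              [1., 0., 0., 2.]])   # small SPD example
b = np.ones(4)

# (1) Ordering: a fill-reducing permutation (RCM as a stand-in).
p = reverse_cuthill_mckee(csr_matrix(A != 0), symmetric_mode=True)
Ap, bp = A[np.ix_(p, p)], b[p]

# (2) Symbolic factorization is skipped here (dense storage); a sparse
#     code would predict the nonzero structure of L before step (3).
# (3) Numeric factorization: Ap = L L^T.
L = np.linalg.cholesky(Ap)

# (4) Triangular solves L y = b, then L^T x = y; undo the permutation.
y = solve_triangular(L, bp, lower=True)
xp = solve_triangular(L.T, y, lower=False)
x = np.empty_like(xp); x[p] = xp
```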
DOE Office of Scientific and Technical Information (OSTI.GOV)
Havu, V.; Fritz Haber Institute of the Max Planck Society, Berlin; Blum, V.
2009-12-01
We consider the problem of developing O(N) scaling grid-based operations needed in many central operations when performing electronic structure calculations with numeric atom-centered orbitals as basis functions. We outline the overall formulation of localized algorithms, and specifically the creation of localized grid batches. The choice of the grid partitioning scheme plays an important role in the performance and memory consumption of the grid-based operations. Three different top-down partitioning methods are investigated, and compared with formally more rigorous yet much more expensive bottom-up algorithms. We show that a conceptually simple top-down grid partitioning scheme achieves essentially the same efficiency as the more rigorous bottom-up approaches.
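One conceptually simple top-down partitioner, in the spirit of (but not necessarily identical to) the schemes compared in the paper, is a recursive bisection of the grid points:

```python
import numpy as np

def partition_points(points, max_batch):
    """Top-down partitioning sketch: recursively bisect along the widest
    axis until every batch holds at most max_batch grid points."""
    if len(points) <= max_batch:
        return [points]
    spans = points.max(axis=0) - points.min(axis=0)
    axis = int(np.argmax(spans))                 # widest spatial extent
    order = np.argsort(points[:, axis])
    half = len(points) // 2
    left, right = points[order[:half]], points[order[half:]]
    return partition_points(left, max_batch) + partition_points(right, max_batch)
```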
Memory Effects and Nonequilibrium Correlations in the Dynamics of Open Quantum Systems
NASA Astrophysics Data System (ADS)
Morozov, V. G.
2018-01-01
We propose a systematic approach to the dynamics of open quantum systems in the framework of Zubarev's nonequilibrium statistical operator method. The approach is based on the relation between ensemble means of the Hubbard operators and the matrix elements of the reduced statistical operator of an open quantum system. This key relation allows deriving master equations for open systems following a scheme conceptually identical to the scheme used to derive kinetic equations for distribution functions. The advantage of the proposed formalism is that some relevant dynamical correlations between an open system and its environment can be taken into account. To illustrate the method, we derive a non-Markovian master equation containing the contribution of nonequilibrium correlations associated with energy conservation.
Relevance feedback-based building recognition
NASA Astrophysics Data System (ADS)
Li, Jing; Allinson, Nigel M.
2010-07-01
Building recognition is a nontrivial task in computer vision research which can be utilized in robot localization, mobile navigation, etc. However, existing building recognition systems usually encounter the following two problems: 1) extracted low level features cannot reveal the true semantic concepts; and 2) they usually involve high dimensional data which require heavy computational costs and memory. Relevance feedback (RF), widely applied in multimedia information retrieval, is able to bridge the gap between the low level visual features and high level concepts; while dimensionality reduction methods can mitigate the high-dimensional problem. In this paper, we propose a building recognition scheme which integrates the RF and subspace learning algorithms. Experimental results undertaken on our own building database show that the newly proposed scheme appreciably enhances the recognition accuracy.
Euler, Johannes; Heldt, Sonja
2018-04-15
The European Union Water Framework Directive (EU WFD, 2000) calls for active inclusion of the public in the governance of waterbodies to enhance the effectiveness and legitimacy of water management schemes across the EU. As complex socio-ecological systems, river basins in western Europe could benefit from further support for inclusive management schemes. This paper makes use of case studies from Germany, England and Spain to explore the potential opportunities and challenges of different participatory management approaches. Grounded in theoretical considerations around participation within ecological management schemes, including Arnstein's Ladder of Participation and commons theories, this work provides an evaluation of each case study based on key indicators, such as inclusivity, representativeness, self-organization, decision-making power, spatial fit and temporal continuity. As investors and the public develop a heightened awareness for long-term sustainability of industrial projects, this analysis supports the suggestion that increased participatory river basin management is both desirable and economically feasible, and should thus be considered a viable option for future projects aiming to move beyond current requirements of the European Union Water Framework Directive. Copyright © 2017. Published by Elsevier B.V.
Management initiatives in a community-based health insurance scheme.
Sinha, Tara; Ranson, M Kent; Chatterjee, Mirai; Mills, Anne
2007-01-01
Community-based health insurance (CBHI) schemes have developed in response to inadequacies of alternate systems for protecting the poor against health care expenditures. Some of these schemes have arisen within community-based organizations (CBOs), which have strong links with poor communities, and are therefore well situated to offer CBHI. However, the managerial capacities of many such CBOs are limited. This paper describes management initiatives undertaken in a CBHI scheme in India, in the course of an action-research project. The existing structures and systems at the CBHI had several strengths, but fell short on some counts, which became apparent in the course of planning for two interventions under the research project. Management initiatives were introduced that addressed four features of the CBHI, viz. human resources, organizational structure, implementation systems, and data management. Trained personnel were hired and given clear roles and responsibilities. Lines of reporting and accountability were spelt out, and supportive supervision was provided to team members. The data resources of the organization were strengthened for greater utilization of this information. While the changes that were introduced took some time to be accepted by team members, the commitment of the CBHI's leadership to these initiatives was critical to their success. Copyright (c) 2007 John Wiley & Sons, Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bartlett, Roscoe Ainsworth
2010-05-01
The ubiquitous use of raw pointers in higher-level code is the primary cause of all memory usage problems and memory leaks in C++ programs. This paper describes what might be considered a radical approach to the problem, which is to encapsulate the use of all raw pointers and all raw calls to new and delete in higher-level C++ code. Instead, a set of cooperating template classes developed in the Trilinos package Teuchos are used to encapsulate every use of raw C++ pointers in every use case where it appears in high-level code. Included in the set of memory management classes is the typical reference-counted smart pointer class similar to boost::shared_ptr (and therefore C++0x std::shared_ptr). However, what is missing in boost and the new standard library are non-reference-counted classes for the remaining use cases where raw C++ pointers would need to be used. These classes have a debug build mode where nearly all programmer errors are caught and gracefully reported at runtime. The default optimized build mode strips all runtime checks and allows the code to perform as efficiently as raw C++ pointers with reasonable usage. Also included is a novel approach for dealing with the circular references problem that imparts little extra overhead and is almost completely invisible to most of the code (unlike the boost and therefore C++0x approach). Rather than being a radical approach, encapsulating all raw C++ pointers is simply the logical progression of a trend in the C++ development and standards community that started with std::auto_ptr and is continued (but not finished) with std::shared_ptr in C++0x. Using the Teuchos reference-counted memory management classes allows one to remove unnecessary constraints in the use of objects by removing arbitrary lifetime ordering constraints, which are a type of unnecessary coupling [23]. The code one writes with these classes will be more likely to be correct on first writing, will be less likely to contain silent (but deadly) memory usage errors, and will be much more robust to later refactoring and maintenance. The level of debug-mode runtime checking provided by the Teuchos memory management classes is stronger in many respects than what is provided by memory checking tools like Valgrind and Purify, while being much less expensive. However, tools like Valgrind and Purify perform a number of types of checks (like usage of uninitialized memory) that make these tools very valuable and therefore complement the Teuchos memory management debug-mode runtime checking. The Teuchos memory management classes and idioms largely address the technical issues in resolving the fragile built-in C++ memory management model (with the exception of circular references, which has no easy solution but can be managed as discussed). All that remains is to teach these classes and idioms and expand their usage in C++ codes. The long-term viability of C++ as a usable and productive language depends on it. Otherwise, if C++ is no safer than C, then is the greater complexity of C++ worth what one gets as extra features? Given that C is smaller and easier to learn than C++, and since most programmers don't know object-orientation (or templates or X, Y, and Z features of C++) all that well anyway, what really are most programmers getting out of C++ that would outweigh the extra complexity of C++ over C?
C++ zealots will argue this point, but the reality is that C++ popularity has peaked and is declining, while the popularity of C has remained fairly stable over the last decade. Idioms like those advocated in this paper can help to avert this trend, but it will require wide community buy-in and a change in the way C++ is taught in order to have the greatest impact. To make these programs more secure, compiler vendors or static analysis tools (e.g. klocwork) could implement a preprocessor-like language similar to OpenMP that would allow the programmer to declare (in comments) that certain blocks of code should be 'pointer-free' or allow smaller blocks to be 'pointers allowed'. This would significantly improve the robustness of code that uses the memory management classes described here.
Efficient parallel resolution of the simplified transport equations in mixed-dual formulation
NASA Astrophysics Data System (ADS)
Barrault, M.; Lathuilière, B.; Ramet, P.; Roman, J.
2011-03-01
A reactivity computation consists of computing the highest eigenvalue of a generalized eigenvalue problem, for which an inverse power algorithm is commonly used. Very fine models are difficult for our sequential solver, based on the simplified transport equations, to treat in terms of memory consumption and computational time. A first implementation of a Lagrangian-based domain decomposition method leads to poor parallel efficiency because of an increase in the power iterations [1]. In order to obtain a high parallel efficiency, we improve the parallelization scheme by changing the location of the loop over the subdomains in the overall algorithm and by benefiting from the characteristics of the Raviart-Thomas finite element. The new parallel algorithm still allows us to locally adapt the numerical scheme (mesh, finite element order). However, it can be significantly optimized for the matching grid case. The good behavior of the new parallelization scheme is demonstrated for the matching grid case on several hundred nodes for computations based on a pin-by-pin discretization.
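The outer iteration being parallelized can be stated compactly. A dense Python sketch of the power iteration for the dominant eigenvalue of A x = λ B x, with B assumed invertible; the actual solver works with the mixed-dual finite element discretization, not dense matrices:

```python
import numpy as np

def dominant_eig(A, B, iters=200, tol=1e-10):
    """Outer (power) iteration for the dominant eigenvalue of the
    generalized problem A x = lambda B x: each sweep solves one system
    with B, mirroring the flux solves of a reactivity computation."""
    x = np.ones(A.shape[0])
    lam = 0.0
    for _ in range(iters):
        y = np.linalg.solve(B, A @ x)          # one outer iteration
        x = y / np.linalg.norm(y)
        lam_new = (x @ (A @ x)) / (x @ (B @ x))  # Rayleigh quotient
        if abs(lam_new - lam) < tol * max(1.0, abs(lam_new)):
            return lam_new, x
        lam = lam_new
    return lam, x
```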
Unconditional security of entanglement-based continuous-variable quantum secret sharing
NASA Astrophysics Data System (ADS)
Kogias, Ioannis; Xiang, Yu; He, Qiongyi; Adesso, Gerardo
2017-01-01
The need for secrecy and security is essential in communication. Secret sharing is a conventional protocol to distribute a secret message to a group of parties, who cannot access it individually but need to cooperate in order to decode it. While several variants of this protocol have been investigated, including realizations using quantum systems, the security of quantum secret sharing schemes still remains unproven almost two decades after their original conception. Here we establish an unconditional security proof for entanglement-based continuous-variable quantum secret sharing schemes, in the limit of asymptotic keys and for an arbitrary number of players. We tackle the problem by resorting to the recently developed one-sided device-independent approach to quantum key distribution. We demonstrate theoretically the feasibility of our scheme, which can be implemented by Gaussian states and homodyne measurements, with no need for ideal single-photon sources or quantum memories. Our results contribute to validating quantum secret sharing as a viable primitive for quantum technologies.
NASA Astrophysics Data System (ADS)
Wang, Tie-Jun; Wang, Chuan
2016-01-01
Hyperentangled Bell-state analysis (HBSA) is an essential method in high-capacity quantum communication and quantum information processing. Here by replacing the two-qubit controlled-phase gate with the two-qubit SWAP gate, we propose a scheme to distinguish the 16 hyperentangled Bell states completely in both the polarization and the spatial-mode degrees of freedom (DOFs) of two-photon systems. The proposed scheme reduces the use of two-qubit interaction which is fragile and cumbersome, and only one auxiliary particle is required. Meanwhile, it reduces the requirement for initializing the auxiliary particle which works as a temporary quantum memory, and does not have to be actively controlled or measured. Moreover, the state of the auxiliary particle remains unchanged after the HBSA operation, and within the coherence time, the auxiliary particle can be repeatedly used in the next HBSA operation. Therefore, the engineering complexity of our HBSA operation is greatly simplified. Finally, we discuss the feasibility of our scheme with current technologies.
Physiology driven adaptivity for the numerical solution of the bidomain equations.
Whiteley, Jonathan P
2007-09-01
Previous work [Whiteley, J. P. IEEE Trans. Biomed. Eng. 53:2139-2147, 2006] derived a stable, semi-implicit numerical scheme for solving the bidomain equations. This scheme allows the timestep used when solving the bidomain equations numerically to be chosen by accuracy considerations rather than stability considerations. In this study we modify this scheme to allow an adaptive numerical solution in both time and space. The spatial mesh size is determined by the gradient of the transmembrane and extracellular potentials while the timestep is determined by the values of: (i) the fast sodium current; and (ii) the calcium release from junctional sarcoplasmic reticulum to myoplasm current. For two-dimensional simulations presented here, combining the numerical algorithm in the paper cited above with the adaptive algorithm presented here leads to an increase in computational efficiency by a factor of around 250 over previous work, together with significantly less computational memory being required. The speedup for three-dimensional simulations is likely to be more impressive.
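The timestep criterion can be caricatured in a few lines. A sketch with made-up thresholds and units; the paper derives its own criteria from the two currents it names:

```python
def adaptive_dt(i_na, i_rel, dt_fine=0.01, dt_coarse=0.5, threshold=1.0):
    """Physiology-driven timestep selector in the spirit of the abstract:
    use the fine step whenever the fast sodium current or the junctional-SR
    calcium-release current is large. The threshold and step sizes here are
    illustrative placeholders, not the paper's values."""
    return dt_fine if max(abs(i_na), abs(i_rel)) > threshold else dt_coarse
```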
Kinetic memory based on the enzyme-limited competition.
Hatakeyama, Tetsuhiro S; Kaneko, Kunihiko
2014-08-01
Cellular memory, which allows cells to retain information from their environment, is important for a variety of cellular functions, such as adaptation to external stimuli, cell differentiation, and synaptic plasticity. Although posttranslational modifications have received much attention as a source of cellular memory, the mechanisms directing such alterations have not been fully uncovered. It may be possible to embed memory in multiple stable states in dynamical systems governing modifications. However, several experiments on modifications of proteins suggest long-term relaxation depending on experienced external conditions, without explicit switches over multi-stable states. As an alternative to a multistability memory scheme, we propose "kinetic memory" for epigenetic cellular memory, in which memory is stored as a slow-relaxation process far from a stable fixed state. Information from previous environmental exposure is retained as the long-term maintenance of a cellular state, rather than switches over fixed states. To demonstrate this kinetic memory, we study several models in which multimeric proteins undergo catalytic modifications (e.g., phosphorylation and methylation), and find that a slow relaxation process of the modification state, logarithmic in time, appears when the concentration of a catalyst (enzyme) involved in the modification reactions is lower than that of the substrates. Sharp transitions from a normal fast-relaxation phase into this slow-relaxation phase are revealed, and explained by enzyme-limited competition among modification reactions. The slow-relaxation process is confirmed by simulations of several models of catalytic reactions of protein modifications, and it enables the memorization of external stimuli, as its time course depends crucially on the history of the stimuli. This kinetic memory provides novel insight into a broad class of cellular memory and functions. In particular, applications for long-term potentiation are discussed, including dynamic modifications of calcium-calmodulin kinase II and cAMP-response element-binding protein essential for synaptic plasticity.
Vierck, Esther; Joyce, Peter R
2015-10-30
A majority of bipolar patients (BD) show functional difficulties even in remission. In recent years cognitive functions and personality characteristics have been associated with occupational and psychosocial outcomes, but findings are not consistent. We assessed personality and cognitive functioning through a range of tests in BD and control participants. Three cognitive domains-verbal memory, facial-executive, and spatial memory-were extracted by principal component analysis. These factors and selected personality dimensions were included in hierarchical regression analysis to predict psychosocial functioning and the use of self-management strategies while controlling for mood status. The best determinants of good psychosocial functioning were good verbal memory and high self-directedness. The use of self-management techniques was associated with a low level of harm-avoidance. Our findings indicate that strategies to improve memory and self-directedness may be useful for increasing functioning in individuals with bipolar disorder. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Adaptively loaded SP-offset-QAM OFDM for IM/DD communication systems.
Zhao, Jian; Chan, Chun-Kit
2017-09-04
In this paper, we propose adaptively loaded set-partitioned offset quadrature amplitude modulation (SP-offset-QAM) orthogonal frequency division multiplexing (OFDM) for low-cost intensity-modulation direct-detection (IM/DD) communication systems. We compare this scheme with multi-band carrier-less amplitude phase modulation (CAP) and conventional OFDM, and demonstrate >40 Gbit/s transmission over 50-km single-mode fiber. It is shown that the use of SP-QAM formats, together with the adaptive loading algorithm specifically designed to this group of formats, results in significant performance improvement for all these three schemes. SP-offset-QAM OFDM exhibits greatly reduced complexity compared to SP-QAM based multi-band CAP, via parallelized implementation and minimized memory length for spectral shaping. On the other hand, this scheme shows better performance than SP-QAM based conventional OFDM at both back-to-back and after transmission. We also characterize the proposed scheme in terms of enhanced tolerance to fiber intra-channel nonlinearity and the potential to increase the communication security. The studies show that adaptive SP-offset-QAM OFDM is a promising IM/DD solution for medium- and long-reach optical access networks and data center connections.
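The adaptive loading step can be illustrated generically. A toy per-subcarrier loader with illustrative SNR thresholds; the paper's algorithm is designed specifically for the SP-offset-QAM format family:

```python
import numpy as np

def adaptive_bit_loading(snr_db, thresholds=((8, 2), (15, 4), (21, 6))):
    """Assign each OFDM subcarrier the largest constellation whose
    (illustrative) SNR requirement it meets; thresholds ascend, so the
    final assignment wins."""
    snr = np.asarray(snr_db, dtype=float)
    bits = np.zeros(snr.shape, dtype=int)
    for req, b in thresholds:
        bits[snr >= req] = b
    return bits

# e.g. adaptive_bit_loading([5, 12, 30]) -> array([0, 2, 6])
```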
Recent Progress on the Parallel Implementation of Moving-Body Overset Grid Schemes
NASA Technical Reports Server (NTRS)
Wissink, Andrew; Allen, Edwin (Technical Monitor)
1998-01-01
Viscous calculations about geometrically complex bodies in which there is relative motion between component parts are among the most computationally demanding problems facing CFD researchers today. This presentation documents results from the first two years of a CHSSI-funded effort within the U.S. Army AFDD to develop scalable dynamic overset grid methods for unsteady viscous calculations with moving-body problems. The first part of the presentation will focus on results from OVERFLOW-D1, a parallelized moving-body overset grid scheme that employs traditional Chimera methodology. The two processes that dominate the cost of such problems are the flow solution on each component and the intergrid connectivity solution. Parallel implementations of the OVERFLOW flow solver and DCF3D connectivity software are coupled with a proposed two-part static-dynamic load balancing scheme and tested on the IBM SP and Cray T3E multi-processors. The second part of the presentation will cover some recent results from OVERFLOW-D2, a new flow solver that employs Cartesian grids with various levels of refinement, facilitating solution adaption. A study of the parallel performance of the scheme on large distributed-memory multiprocessor computer architectures will be reported.
An efficient blocking M2L translation for low-frequency fast multipole method in three dimensions
NASA Astrophysics Data System (ADS)
Takahashi, Toru; Shimba, Yuta; Isakari, Hiroshi; Matsumoto, Toshiro
2016-05-01
We propose an efficient scheme to perform the multipole-to-local (M2L) translation in the three-dimensional low-frequency fast multipole method (LFFMM). Our strategy is to combine a group of matrix-vector products associated with M2L translation into a matrix-matrix product in order to diminish the memory traffic. For this purpose, we first developed a grouping method (termed internal blocking) based on the congruent transformations (rotational and reflectional symmetries) of M2L-translators for each target box in the FMM hierarchy (adaptive octree). Next, we considered another method of grouping (termed external blocking) that is able to handle M2L translations for multiple target boxes collectively by using the translational invariance of the M2L translation. By combining these internal and external blockings, the M2L translation can be performed efficiently whilst preserving the numerical accuracy exactly. We assessed the proposed blocking scheme numerically and applied it to the boundary integral equation method to solve electromagnetic scattering problems for perfect electric conductors. From the numerical results, it was found that the proposed M2L scheme achieved a few times speedup compared to the non-blocking scheme.
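The core of the blocking strategy, trading many matrix-vector products for one matrix-matrix product, is easy to demonstrate in miniature (sizes here are arbitrary):

```python
import numpy as np

# Applying one M2L translator T to k source vectors one at a time costs
# k matrix-vector products; stacking the vectors lets a single GEMM do
# the same work with far fewer passes over T (less memory traffic).
rng = np.random.default_rng(0)
T = rng.standard_normal((64, 64))          # stand-in M2L translator
sources = rng.standard_normal((64, 32))    # 32 multipole vectors sharing T

one_by_one = np.stack([T @ sources[:, j] for j in range(32)], axis=1)
batched = T @ sources                      # identical result, one GEMM
assert np.allclose(one_by_one, batched)
```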
Water resources management in karst aquifers - concepts and modeling approaches
NASA Astrophysics Data System (ADS)
Sauter, M.; Schmidt, S.; Abusaada, M.; Reimann, T.; Liedl, R.; Kordilla, J.; Geyer, T.
2011-12-01
Water resources management schemes generally imply the availability of a spectrum of various sources of water with a variability of quantity and quality in space and time, and the availability and suitability of storage facilities to cover various demands of water consumers on quantity and quality. Aquifers are generally regarded as suitable reservoirs since large volumes of water can be stored in the subsurface, water is protected from contamination and evaporation and the underground passage assists in the removal of at least some groundwater contaminants. Favorable aquifer properties include high vertical hydraulic conductivities for infiltration, large storage coefficients and not too large hydraulic gradients / conductivities. The latter factors determine the degree of discharge, i.e. loss of groundwater. Considering the above criteria, fractured and karstified aquifers appear to not really fulfill the respective conditions for storage reservoirs. Although infiltration capacity is relatively high, due to low storativity and high hydraulic conductivities, the small quantity of water stored is rapidly discharged. However, for a number of specific conditions, even karst aquifers are suitable for groundwater management schemes. They can be subdivided into active and passive management strategies. Active management options include strategies such as overpumping, i.e. the depletion of the karst water resources below the spring outflow level, the construction of subsurface dams to prevent rapid discharge. Passive management options include the optimal use of the discharging groundwater under natural discharge conditions. System models that include the superposition of the effect of the different compartments soil zone, epikarst, vadose and phreatic zone assist in the optimal usage of the available groundwater resources, while taking into account the different water reservoirs. The elaboration and implementation of groundwater protection schemes employing well established vulnerability assessment techniques ascertain the respective groundwater quality. In this paper a systematic overview is provided on karst groundwater management schemes illustrating the specific conditions allowing active or passive management in the first place as well as the employment of various types of adapted models for the design of the different management schemes. Examples are provided from karst systems in Israel/Palestine, where a large 4000sqkm basin is being managed as a whole, the South of France, where the Lez groundwater development scheme illustrates the optimal use of overpumping from the conduit system, providing additional water for the City of Montpellier during dry summers and at the same time increasing recharge and assisting in the mitigation of flooding during high winter discharge conditions. Overpumping could be an option in many Mediterranean karst catchments since karst conduit development occurred well below today's spring discharge level. Other examples include the construction of subsurface dams for hydropower generation in the Dinaric karst and reduction of discharge. Problems of leakage and general feasibility are discussed.
76 FR 24409 - Proposed Amendment of Class E Airspace; Ava, MO
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-02
...) at Bill Martin Memorial Airport, Ava, MO, has made this action necessary for the safety and management of Instrument Flight Rules (IFR) operations at Bill Martin Memorial Airport. DATES: Comments must... from 700 feet above the surface for standard instrument approach procedures at Bill Martin Memorial...
Ding, Xiaoshuai; Cao, Jinde; Zhao, Xuan; Alsaadi, Fuad E
2017-08-01
This paper is concerned with the drive-response synchronization for a class of fractional-order bidirectional associative memory neural networks with time delays, as well as in the presence of discontinuous activation functions. The global existence of solution under the framework of Filippov for such networks is firstly obtained based on the fixed-point theorem for condensing map. Then the state feedback and impulsive controllers are, respectively, designed to ensure the Mittag-Leffler synchronization of these neural networks and two new synchronization criteria are obtained, which are expressed in terms of a fractional comparison principle and Razumikhin techniques. Numerical simulations are presented to validate the proposed methodologies.
Holographic Associative Memory Employing Phase Conjugation
NASA Astrophysics Data System (ADS)
Soffer, B. H.; Marom, E.; Owechko, Y.; Dunning, G.
1986-12-01
The principle of information retrieval by association has been suggested as a basis for parallel computing and as the process by which human memory functions [1]. Various associative processors have been proposed that use electronic or optical means. Optical schemes [2-7], in particular those based on holographic principles [8], are well suited to associative processing because of their high parallelism and information throughput. Previous workers [8] demonstrated that holographically stored images can be recalled by using relatively complicated reference images but did not utilize nonlinear feedback to reduce the large cross talk that results when multiple objects are stored and a partial or distorted input is used for retrieval. These earlier approaches were limited in their ability to reconstruct the output object faithfully from a partial input.
A Succinct Naming Convention for Lengthy Hexadecimal Numbers
NASA Technical Reports Server (NTRS)
Grant, Michael S.
1997-01-01
Engineers, computer scientists, mathematicians and others must often deal with lengthy hexadecimal numbers. As memory requirements for software increase, the associated memory address space for systems necessitates the use of longer and longer strings of hexadecimal characters to describe a given number. For example, the address space of some digital signal processors (DSP's) now ranges in the billions of words, requiring eight hexadecimal characters for many of the addresses. This technical memorandum proposes a simple grouping scheme for more clearly representing lengthy hexadecimal numbers in written material, as well as a "code" for naming and more quickly verbalizing such numbers. This should facilitate communications among colleagues in engineering and related fields, and aid in comprehension and temporary memorization of important hexadecimal numbers during design work.
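The grouping idea, independent of the memo's specific convention, might look like this in Python (the group width here is illustrative, not the memo's scheme):

```python
def group_hex(value, width=8, group=4):
    """Format a hex number in fixed-width groups so long addresses read
    in short chunks, e.g. group_hex(0x1F2E3D4C) -> '0x1F2E_3D4C'."""
    s = f"{value:0{width}X}"
    return "0x" + "_".join(s[i:i + group] for i in range(0, len(s), group))
```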
Coordinated design of coding and modulation systems
NASA Technical Reports Server (NTRS)
Massey, J. L.
1976-01-01
Work on partial unit memory codes continued; it was shown that for a given virtual state complexity, the maximum free distance over the class of all convolutional codes is achieved within the class of unit memory codes. The effect of phase-lock loop (PLL) tracking error on coding system performance was studied by using the channel cut-off rate as the measure of quality of a modulation system. Optimum modulation signal sets for a non-white Gaussian channel were considered using a heuristic selection rule based on a water-filling argument. The use of error-correcting codes to perform data compression by the technique of syndrome source coding was researched, and a weight-and-error-locations scheme was developed that is closely related to LDSC coding.
NASA Astrophysics Data System (ADS)
Tang, Li-Chuan; Hu, Guang W.; Russell, Kendra L.; Chang, Chen S.; Chang, Chi Ching
2000-10-01
We propose a new holographic memory scheme based on random phase-encoded multiplexing in a photorefractive LiNbO3:Fe crystal. Experimental results show that rotating a diffuser placed as a random phase modulator in the path of the reference beam provides a simple yet effective method of increasing the holographic storage capabilities of the crystal. Combining this rotational multiplexing with angular multiplexing offers further advantages. Storage capabilities can be optimized by using a post-image random phase plate in the path of the object beam. The technique is applied to a triple phase-encoded optical security system that takes advantage of the high angular selectivity of the angular-rotational multiplexing components.
Rambrain - a library for virtually extending physical memory
NASA Astrophysics Data System (ADS)
Imgrund, Maximilian; Arth, Alexander
2017-08-01
We introduce Rambrain, a user space library that manages memory consumption of your code. Using Rambrain you can overcommit memory over the size of physical memory present in the system. Rambrain takes care of temporarily swapping out data to disk and can handle multiples of the physical memory size present. Rambrain is thread-safe, OpenMP and MPI compatible and supports Asynchronous IO. The library was designed to require minimal changes to existing programs and to be easy to use.
2015-03-01
2.5.5 Availability Schemes; 2.6 Simulation Environments. ...routing scheme can prove problematic. Two prominent proactive protocols, Destination-Sequenced Distance-Vector (DSDV) and Optimized Link State... distributed file management systems such as Tahoe-LAFS as part of its replication scheme. Altman and De Pellegrini [4] examine the impact of FEC and...
Signature scheme based on bilinear pairs
NASA Astrophysics Data System (ADS)
Tong, Rui Y.; Geng, Yong J.
2013-03-01
An identity-based signature scheme is proposed using bilinear pairing technology. The scheme uses the user's identity information, such as an email address, IP address, or telephone number, as the public key, which eliminates the cost of building and managing a public key infrastructure and, by using the CL-PKC framework to generate the user's private key, avoids the problem of the private key generation center forging signatures.
NASA Astrophysics Data System (ADS)
Ani, Adi Irfan Che; Sairi, Ahmad; Tawil, Norngainy Mohd; Wahab, Siti Rashidah Hanum Abd; Razak, Muhd Zulhanif Abd
2016-08-01
High demand for housing and limited land in town areas have increased the provision of high-rise residential schemes. This type of housing has different owners but shares the same land lot and common facilities. Thus, maintenance works on the buildings and common facilities must be well organized. The purpose of this paper is to identify and classify the basic facilities of high-rise residential buildings, in the hope of improving the management of such schemes. The method adopted is a survey of 100 high-rise residential schemes, ranging from affordable to high-cost housing, using snowball sampling. The scope of this research is the Kajang area, which is rapidly being developed with high-rise housing. The objective of the survey is to list all facilities in every sampled scheme. The result confirmed that the 11 pre-determined classifications hold true and can provide a realistic classification for high-rise residential schemes. This paper proposes a redefinition of the facilities provided, to create a better management system and give a clear definition of the types of high-rise residential buildings based on their facilities.
Zand, Pouria; Dilo, Arta; Havinga, Paul
2013-01-01
Current wireless technologies for industrial applications, such as WirelessHART and ISA100.11a, use a centralized management approach where a central network manager handles the requirements of the static network. However, such a centralized approach has several drawbacks. For example, it cannot cope with dynamicity/disturbance in large-scale networks in a real-time manner and it incurs a high communication overhead and latency for exchanging management traffic. In this paper, we therefore propose a distributed network management scheme, D-MSR. It enables the network devices to join the network, schedule their communications, establish end-to-end connections by reserving the communication resources for addressing real-time requirements, and cope with network dynamicity (e.g., node/edge failures) in a distributed manner. According to our knowledge, this is the first distributed management scheme based on IEEE 802.15.4e standard, which guides the nodes in different phases from joining until publishing their sensor data in the network. We demonstrate via simulation that D-MSR can address real-time and reliable communication as well as the high throughput requirements of industrial automation wireless networks, while also achieving higher efficiency in network management than WirelessHART, in terms of delay and overhead. PMID:23807687
Formal verification of a set of memory management units
NASA Technical Reports Server (NTRS)
Schubert, E. Thomas; Levitt, K.; Cohen, Gerald C.
1992-01-01
This document describes the verification of a set of memory management units (MMU). The verification effort demonstrates the use of hierarchical decomposition and abstract theories. The MMUs can be organized into a complexity hierarchy. Each new level in the hierarchy adds a few significant features or modifications to the lower level MMU. The units described include: (1) a page check translation look-aside module (TLM); (2) a page check TLM with supervisor line; (3) a base bounds MMU; (4) a virtual address translation MMU; and (5) a virtual address translation MMU with memory resident segment table.
Optical mass memory system (AMM-13). AMM/DBMS interface control document
NASA Technical Reports Server (NTRS)
Bailey, G. A.
1980-01-01
The baseline for external interfaces of a 10 to the 13th power bit, optical archival mass memory system (AMM-13) is established. The types of interfaces addressed include data transfer; AMM-13, Data Base Management System, NASA End-to-End Data System computer interconnect; data/control input and output interfaces; test input data source; file management; and facilities interface.
Memory management and compiler support for rapid recovery from failures in computer systems
NASA Technical Reports Server (NTRS)
Fuchs, W. K.
1991-01-01
This paper describes recent developments in the use of memory management and compiler technology to support rapid recovery from failures in computer systems. The techniques described include cache coherence protocols for user transparent checkpointing in multiprocessor systems, compiler-based checkpoint placement, compiler-based code modification for multiple instruction retry, and forward recovery in distributed systems utilizing optimistic execution.
Practical proof of CP element based design for 14nm node and beyond
NASA Astrophysics Data System (ADS)
Maruyama, Takashi; Takita, Hiroshi; Ikeno, Rimon; Osawa, Morimi; Kojima, Yoshinori; Sugatani, Shinji; Hoshino, Hiromi; Hino, Toshio; Ito, Masaru; Iizuka, Tetsuya; Komatsu, Satoshi; Ikeda, Makoto; Asada, Kunihiro
2013-03-01
To realize HVM (high-volume manufacturing) with CP (character projection) based EBDW, shot count reduction is the essential key. All device circuits should be composed of predefined character parts; we call this methodology "CP element based design". In our previous work, we presented the following three concepts [2]. 1) Memory: we reported the prospects of affordability for the CP-stencil resource. 2) Logic cell: we adopted a multi-cell clustering approach in the physical synthesis. 3) Random interconnect: we proposed an ultra-regular layout scheme using fixed-size wiring tiles containing repeated tracks and cutting points at the tile edges. In this paper, we report experimental proofs of these methodologies. In full-chip layout, CP stencil resource management is the critical key. From the MCC-POC (proof of concept) result [1], we assumed the total available CP stencil resource to be 9000 um2. We must manage to lay out all circuit macros within this restriction. The assignment of CP-stencil resources for the memory macros is especially important, as they consume a considerable share of the resource because of the various line-ups, such as 1RW- and 2RW-SRAMs, register files, and ROM, which require several varieties of large peripheral circuits. Furthermore, memory macros typically occupy more than 40% of the die area in forefront logic LSI products, so the impact of any shot count increase is serious. To save CP-stencil resources, we constructed an automatic CP analyzing system with two extraction modes: simple division by block, and layout repeatability recognition. By properly controlling these modes based on the characteristics of each peripheral circuit, we could minimize the consumption of CP stencil resources. The estimation for the 14nm technology node was performed based on the analysis of a practical memory compiler. The required resource for the memory macros proved to be an affordable 60% of the full CP stencil resource, and the wafer-level converted shot count proved to meet 100WPH throughput. In logic cell design, circuit performance after cell clustering was verified. Cell clustering based on physical distance proved to incur a large penalty, mainly in wiring length. To reduce this design penalty, we propose CP cell clustering based on logical distance. For shot-count reduction in the random interconnect area, we propose a more structural routing architecture consisting of track exchange and via position arrangement. Putting these design approaches together, we can design CP stencils that hit the target throughput within the area constraint. The analysis of other macros, such as analog, I/O, and DUMMY, showed that no special CP design approach is needed beyond legacy pattern-matching CP extraction. From all these experimental results, we obtain good prospects for the realization of full CP element based layout.
Avrin, D E; Andriole, K P; Yin, L; Gould, R G; Arenson, R L
2001-03-01
A hierarchical storage management (HSM) scheme for cost-effective on-line archival of image data using lossy compression is described. This HSM scheme also provides an off-site tape backup mechanism and disaster recovery. The full-resolution image data are viewed originally for primary diagnosis, then losslessly compressed and sent off site to a tape backup archive. In addition, the original data are wavelet lossy compressed (at approximately 25:1 for computed radiography, 10:1 for computed tomography, and 5:1 for magnetic resonance) and stored on a large RAID device for maximum cost-effective, on-line storage and immediate retrieval of images for review and comparison. This HSM scheme provides a solution to 4 problems in image archiving, namely cost-effective on-line storage, disaster recovery of data, off-site tape backup for the legal record, and maximum intermediate storage and retrieval through the use of on-site lossy compression.
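A rough illustration of the on-line storage savings implied by the lossy ratios quoted above (about 25:1 for CR, 10:1 for CT, 5:1 for MR); the per-study sizes below are hypothetical examples, not figures from the paper:

```c
#include <stdio.h>

/* Storage-savings arithmetic for the quoted lossy compression ratios.
 * The example study sizes are hypothetical, chosen only to show the
 * calculation; only the ratios come from the abstract. */
int main(void) {
    struct { const char *modality; double size_mb; double ratio; } s[] = {
        { "CR", 8.0,  25.0 },
        { "CT", 50.0, 10.0 },
        { "MR", 20.0,  5.0 },
    };
    for (int i = 0; i < 3; i++)
        printf("%s: %.1f MB -> %.2f MB on the RAID\n",
               s[i].modality, s[i].size_mb, s[i].size_mb / s[i].ratio);
    return 0;
}
```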
Command and Control Software Development Memory Management
NASA Technical Reports Server (NTRS)
Joseph, Austin Pope
2017-01-01
This internship was initially meant to cover the implementation of unit test automation for a NASA ground control project. As is often the case with large development projects, the scope and breadth of the internship changed. Instead, the internship focused on finding and correcting memory leaks and errors as reported by a COTS software product meant to track such issues. Memory leaks come in many different flavors and some of them are more benign than others. On the extreme end a program might be dynamically allocating memory and not correctly deallocating it when it is no longer in use. This is called a direct memory leak and in the worst case can use all the available memory and crash the program. If the leaks are small they may simply slow the program down which, in a safety critical system (a system for which a failure or design error can cause a risk to human life), is still unacceptable. The ground control system is managed in smaller sub-teams, referred to as CSCIs. The CSCI that this internship focused on is responsible for monitoring the health and status of the system. This team's software had several methods/modules that were leaking significant amounts of memory. Since most of the code in this system is safety-critical, correcting memory leaks is a necessity.
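A minimal C illustration of the "direct leak" pattern the abstract describes (not code from the NASA ground control system): the first function loses its allocation, the second shows the corrected pattern a leak-tracking tool would expect.

```c
#include <stdlib.h>
#include <string.h>

/* Direct leak: the allocation is reachable only through a local
 * pointer, which goes out of scope without a matching free().
 * Repeated calls grow the heap until the process runs out of memory. */
static void leaky_status_update(const char *msg) {
    char *copy = malloc(strlen(msg) + 1);
    if (copy == NULL) return;
    strcpy(copy, msg);
    /* ... use copy ... */
}   /* <- 'copy' lost here: direct memory leak */

/* Corrected version: every successful malloc() has a matching free(). */
static void fixed_status_update(const char *msg) {
    char *copy = malloc(strlen(msg) + 1);
    if (copy == NULL) return;
    strcpy(copy, msg);
    /* ... use copy ... */
    free(copy);
}

int main(void) {
    leaky_status_update("health nominal");
    fixed_status_update("health nominal");
    return 0;
}
```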
Tian, Long; Xu, Zhongxiao; Chen, Lirong; Ge, Wei; Yuan, Haoxiang; Wen, Yafei; Wang, Shengzhi; Li, Shujing; Wang, Hai
2017-09-29
The light-matter quantum interface that can create quantum correlations or entanglement between a photon and one atomic collective excitation is a fundamental building block for a quantum repeater. The intrinsic limit is that the probability of preparing such nonclassical atom-photon correlations has to be kept low in order to suppress multiexcitation. To enhance this probability without introducing multiexcitation errors, a promising scheme is to apply multimode memories to the interface. Significant progress has been made in temporal, spectral, and spatial multiplexing memories, but the enhanced probability for generating the entangled atom-photon pair has not been experimentally realized. Here, by using six spin-wave-photon entanglement sources, a switching network, and feedforward control, we build a multiplexed light-matter interface and then demonstrate a ∼sixfold (∼fourfold) probability increase in generating entangled atom-photon (photon-photon) pairs. The measured compositive Bell parameter for the multiplexed interface is 2.49±0.03 combined with a memory lifetime of up to ∼51 μs.
Memory-efficient RNA energy landscape exploration
Mann, Martin; Kucharík, Marcel; Flamm, Christoph; Wolfinger, Michael T.
2014-01-01
Motivation: Energy landscapes provide a valuable means for studying the folding dynamics of short RNA molecules in detail by modeling all possible structures and their transitions. Higher abstraction levels based on a macro-state decomposition of the landscape enable the study of larger systems; however, they are still restricted by huge memory requirements of exact approaches. Results: We present a highly parallelizable local enumeration scheme that enables the computation of exact macro-state transition models with highly reduced memory requirements. The approach is evaluated on RNA secondary structure landscapes using a gradient basin definition for macro-states. Furthermore, we demonstrate the need for exact transition models by comparing two barrier-based approaches, and perform a detailed investigation of gradient basins in RNA energy landscapes. Availability and implementation: Source code is part of the C++ Energy Landscape Library available at http://www.bioinf.uni-freiburg.de/Software/. Contact: mmann@informatik.uni-freiburg.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24833804
Full Parallel Implementation of an All-Electron Four-Component Dirac-Kohn-Sham Program.
Rampino, Sergio; Belpassi, Leonardo; Tarantelli, Francesco; Storchi, Loriano
2014-09-09
A full distributed-memory implementation of the Dirac-Kohn-Sham (DKS) module of the program BERTHA (Belpassi et al., Phys. Chem. Chem. Phys. 2011, 13, 12368-12394) is presented, where the self-consistent field (SCF) procedure is replicated on all the parallel processes, each process working on subsets of the global matrices. The key feature of the implementation is an efficient procedure for switching between two matrix distribution schemes, one (integral-driven) optimal for the parallel computation of the matrix elements and another (block-cyclic) optimal for the parallel linear algebra operations. This approach, making both CPU-time and memory scalable with the number of processors used, virtually overcomes at once both time and memory barriers associated with DKS calculations. Performance, portability, and numerical stability of the code are illustrated on the basis of test calculations on three gold clusters of increasing size, an organometallic compound, and a perovskite model. The calculations are performed on a Beowulf and a BlueGene/Q system.
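The block-cyclic half of the distribution switch rests on the standard block-cyclic index map; a generic sketch (not BERTHA's code) of how a global row index maps to an owner process and a local index, for block size nb over p processes:

```c
#include <stdio.h>

/* Standard 1-D block-cyclic mapping, as used by ScaLAPACK-style
 * layouts: global index g is split into blocks of size nb dealt
 * round-robin to p processes. Generic illustration only. */
static int owner(int g, int nb, int p)       { return (g / nb) % p; }
static int local_index(int g, int nb, int p) { return (g / (nb * p)) * nb + g % nb; }

int main(void) {
    int nb = 4, p = 3;
    for (int g = 0; g < 12; g++)
        printf("global %2d -> process %d, local %d\n",
               g, owner(g, nb, p), local_index(g, nb, p));
    return 0;
}
```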
Parallel Navier-Stokes computations on shared and distributed memory architectures
NASA Technical Reports Server (NTRS)
Hayder, M. Ehtesham; Jayasimha, D. N.; Pillay, Sasi Kumar
1995-01-01
We study a high-order finite difference scheme to solve the time-accurate flow field of a jet using the compressible Navier-Stokes equations. As part of our ongoing efforts, we have implemented our numerical model on three parallel computing platforms to study its computational, communication, and scalability characteristics. The platforms chosen for this study are a cluster of workstations connected through fast networks (the LACE experimental testbed at NASA Lewis), a shared memory multiprocessor (the Cray YMP), and a distributed memory multiprocessor (the IBM SP1). Our focus in this study is on the LACE testbed; we present some results for the Cray YMP and the IBM SP1 mainly for comparison purposes. On the LACE testbed, we study: (1) the communication characteristics of Ethernet, FDDI, and the ALLNODE networks and (2) the overheads induced by the PVM message passing library used for parallelizing the application. We demonstrate that clustering of workstations is effective and has the potential to be computationally competitive with supercomputers at a fraction of the cost.
ASIC-based architecture for the real-time computation of 2D convolution with large kernel size
NASA Astrophysics Data System (ADS)
Shao, Rui; Zhong, Sheng; Yan, Luxin
2015-12-01
Two-dimensional convolution is a low-level processing algorithm of interest in many areas, but its high computational cost constrains the size of the kernels, especially in real-time embedded systems. This paper presents a hardware architecture for the ASIC-based implementation of 2-D convolution with medium-to-large kernels. To improve the efficiency of on-chip storage resources and to reduce off-chip bandwidth, we propose a data-cache reuse structure: multi-block SPRAM cross-caches the image, and an on-chip ping-pong operation takes full advantage of data reuse in the convolution calculation, for which we design a new ASIC data-scheduling scheme and overall architecture. Experimental results show that the structure achieves real-time convolution with kernels as large as 40×32, improves the utilization of on-chip memory bandwidth and on-chip memory resources, maximizes output data throughput, and reduces the need for off-chip memory bandwidth.
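A software sketch of the operation being accelerated, to make the cost argument concrete: each output pixel of a direct 2-D convolution costs KH×KW multiply-accumulates, which is what motivates on-chip data reuse for 40×32 kernels. This is a generic illustration, not the paper's ASIC scheduling scheme:

```c
#include <stdio.h>

#define H  8
#define W  8
#define KH 3   /* the paper targets kernels up to 40x32 */
#define KW 3

/* Direct 2-D convolution over the valid region. Every output pixel
 * costs KH*KW multiply-accumulates. */
static void conv2d(const float in[H][W], const float k[KH][KW],
                   float out[H - KH + 1][W - KW + 1]) {
    for (int y = 0; y <= H - KH; y++)
        for (int x = 0; x <= W - KW; x++) {
            float acc = 0.0f;
            for (int i = 0; i < KH; i++)
                for (int j = 0; j < KW; j++)
                    acc += in[y + i][x + j] * k[i][j];
            out[y][x] = acc;
        }
}

int main(void) {
    float in[H][W] = {{0}}, k[KH][KW] = {{0}}, out[H - KH + 1][W - KW + 1];
    in[3][3] = 1.0f;                /* unit impulse in the image      */
    k[1][1]  = 2.0f;                /* scaled identity-like kernel    */
    conv2d(in, k, out);
    printf("out[2][2] = %.1f\n", out[2][2]);   /* expect 2.0 */
    return 0;
}
```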
NASA Astrophysics Data System (ADS)
Yoon, Bongno; Sung, Man Young; Yeon, Sujin; Oh, Hyun S.; Kwon, Yoonjoo; Kim, Chuljin; Kim, Kyung-Ho
2009-03-01
In circuits using a metal-ferroelectric-metal (MFM) capacitor, the RF operational signal properties are almost the same as, or superior to, those of polysilicon-insulator-polysilicon, metal-insulator-metal, and metal-oxide-semiconductor (MOS) capacitors. In Electronic Product Code Global Class-1 Generation-2 UHF radio-frequency identification (RFID) protocols, the MFM can play a crucial role in satisfying the specifications of the inventoried flags' persistence times (Tpt) for each session (S0-S3, SL). In this paper, we propose and design a new MFM capacitor based memory scheme whose persistence time for the S1 flag is measured at 2.2 s and is indefinite for the S2, S3, and SL flags while power is on. A ferroelectric random access memory embedded RFID tag chip is fabricated with an industry-standard complementary MOS process. The chip size is around 500×500 μm2 and the measured power consumption is about 10 μW.
Orbiter/Spacelab momentum management for POP orientations
NASA Technical Reports Server (NTRS)
Cox, J. W.
1974-01-01
An angular momentum management scheme applicable to the orbiter/spacelab is described. The basis of the scheme is to periodically maneuver the vehicle through a small angle thereby using the gravity gradient torque to dump momentum from the control moment gyro (CMG) control system. The orbiter is operated with its principal vehicle axis perpendicular to the orbital plane. Numerous case runs were conducted on the hybrid simulation and representative cases are included.
Wireless Instrumentation System and Power Management Scheme Therefore
NASA Technical Reports Server (NTRS)
Perotti, Jose (Inventor); Lucena, Angel (Inventor); Eckhoff, Anthony (Inventor); Mata, Carlos T. (Inventor); Blalock, Norman N. (Inventor); Medelius, Pedro J. (Inventor)
2007-01-01
A wireless instrumentation system enables a plurality of low power wireless transceivers to transmit measurement data from a plurality of remote station sensors to a central data station accurately and reliably. The system employs a relay based communications scheme where remote stations that cannot communicate directly with the central station due to interference, poor signal strength, etc., are instructed to communicate with other of the remote stations that act as relays to the central station. A unique power management scheme is also employed to minimize power usage at each remote station and thereby maximize battery life. Each of the remote stations preferably employs a modular design to facilitate easy reconfiguration of the stations as required.
NASA Astrophysics Data System (ADS)
Chow, Sherman S. M.
A traceable signature scheme extends a group signature scheme with an enhanced anonymity-management mechanism. The group manager can compute a tracing trapdoor that enables anyone to test whether a signature was signed by a given misbehaving user, whereas the only way to do so with group signatures requires revealing the signer of all signatures. Nevertheless, this is not tracing in a strict sense. In all existing schemes, T tracing agents need to recollect all N' signatures ever produced and perform RN' "checks" for R revoked users, which involves a high volume of transfer and computation. Increasing T increases the degree of parallelism for tracing, but also the probability of "missing" some signatures if some of the agents are dishonest.
Multivariate quantum memory as controllable delayed multi-port beamsplitter
NASA Astrophysics Data System (ADS)
Vetlugin, A. N.; Sokolov, I. V.
2016-03-01
The addressability of parallel spatially multimode quantum memory for light allows one to control independent collective spin waves within the same cold atomic ensemble. Generally speaking, there are transverse and longitudinal degrees of freedom of the memory that one can address by a proper choice of the pump (control) field spatial pattern. Here we concentrate on the mutual evolution and transformation of quantum states of the longitudinal modes of collective spin coherence in the cavity-based memory scheme. We assume that these modes are coherently controlled by the pump waves of the on-demand transverse profile, that is, by the superpositions of waves propagating in the directions close to orthogonal to the cavity axis. By the write-in, this allows one to couple a time sequence of the incoming quantized signals to a given set of superpositions of orthogonal spin waves. By the readout, one can retrieve quantum states of the collective spin waves that are controllable superpositions of the initial ones and are coupled on demand to the output signal sequence. In a general case, the memory is able to operate as a controllable delayed multi-port beamsplitter, capable of transformation of the delays, the durations and time shapes of signals in the sequence. We elaborate the theory of such light-matter interface for the spatially multivariate cavity-based off-resonant Raman-type quantum memory. Since, in order to speed up the manipulation of complex signals in multivariate memories, it might be of interest to store relatively short light pulses of a given time shape, we also address some issues of the cavity-based memory operation beyond the bad cavity limit.
Interaction with Machine Improvisation
NASA Astrophysics Data System (ADS)
Assayag, Gerard; Bloch, George; Cont, Arshia; Dubnov, Shlomo
We describe two multi-agent architectures for improvisation-oriented musician-machine interaction systems that learn in real time from human performers. The improvisation kernel is based on sequence modeling and statistical learning. We present two frameworks of interaction with this kernel. In the first, the stylistic interaction is guided by a human operator in front of an interactive computer environment. In the second, the stylistic interaction is delegated to machine intelligence, and therefore knowledge propagation and decisions are taken care of by the computer alone. The first framework involves a hybrid architecture using two popular composition/performance environments, Max and OpenMusic, that are put to work and communicate together, each one handling the process at a different time/memory scale. The second framework shares the same representational schemes with the first but uses an Active Learning architecture based on collaborative, competitive, and memory-based learning to handle stylistic interactions. Both systems are capable of processing real-time audio/video as well as MIDI. After discussing the general cognitive background of improvisation practices, the statistical modeling tools and the concurrent agent architecture are presented. Then, an Active Learning scheme is described and considered in terms of using different improvisation regimes for improvisation planning. Finally, we provide more details about the different system implementations and describe several performances with the system.
NASA Technical Reports Server (NTRS)
Kwatra, S. C.
1998-01-01
A large number of papers have been published attempting to give some analytical basis for the performance of Turbo-codes. It has been shown that performance improves with increased interleaver length. Also, procedures have been given to pick the best constituent recursive systematic convolutional codes (RSCCs). However, testing by computer simulation is still required to verify these results. This thesis begins by describing the encoding and decoding schemes used. Next, simulation results on several memory-4 RSCCs are shown. It is found that the best BER performance at low E_b/N_0 is not given by the RSCCs that were found using the analytic techniques given so far. Next, results are given from simulations using a smaller-memory RSCC for one of the constituent encoders. Significant reduction in decoding complexity is obtained with minimal loss in performance. Simulation results are then given for a rate 1/3 Turbo-code, with the result that this code performed as well as a rate 1/2 Turbo-code as measured by the distance from their respective Shannon limits. Finally, the results of simulations where an inaccurate noise variance measurement was used are given. From this it was observed that Turbo-decoding is fairly stable with regard to noise variance measurement.
He, Ziyang; Zhang, Xiaoqing; Cao, Yangjie; Liu, Zhi; Zhang, Bo; Wang, Xiaoyan
2018-04-17
By running applications and services closer to the user, edge processing provides many advantages, such as short response times and reduced network traffic. Deep-learning-based algorithms provide significantly better performance than traditional algorithms in many fields but demand more resources, such as higher computational power and more memory. Hence, designing deep learning algorithms that are more suitable for resource-constrained mobile devices is vital. In this paper, we build a lightweight neural network, termed LiteNet, which uses a deep learning algorithm design to diagnose arrhythmias, as an example of how we design deep learning schemes for resource-constrained mobile devices. Compared to other deep learning models of equivalent accuracy, LiteNet has several advantages. It requires less memory, incurs lower computational cost, and is more feasible for deployment on resource-constrained mobile devices. It can be trained faster than other neural network algorithms and requires less communication across different processing units during distributed training. It uses filters of heterogeneous size in a convolutional layer, which contributes to the generation of various feature maps. The algorithm was tested using the MIT-BIH electrocardiogram (ECG) arrhythmia database; the results showed that LiteNet outperforms comparable schemes in diagnosing arrhythmias and in its feasibility for use on mobile devices.
LiteNet: Lightweight Neural Network for Detecting Arrhythmias at Resource-Constrained Mobile Devices
Zhang, Xiaoqing; Cao, Yangjie; Liu, Zhi; Zhang, Bo; Wang, Xiaoyan
2018-01-01
By running applications and services closer to the user, edge processing provides many advantages, such as short response times and reduced network traffic. Deep-learning-based algorithms provide significantly better performance than traditional algorithms in many fields but demand more resources, such as higher computational power and more memory. Hence, designing deep learning algorithms that are more suitable for resource-constrained mobile devices is vital. In this paper, we build a lightweight neural network, termed LiteNet, which uses a deep learning algorithm design to diagnose arrhythmias, as an example of how we design deep learning schemes for resource-constrained mobile devices. Compared to other deep learning models of equivalent accuracy, LiteNet has several advantages. It requires less memory, incurs lower computational cost, and is more feasible for deployment on resource-constrained mobile devices. It can be trained faster than other neural network algorithms and requires less communication across different processing units during distributed training. It uses filters of heterogeneous size in a convolutional layer, which contributes to the generation of various feature maps. The algorithm was tested using the MIT-BIH electrocardiogram (ECG) arrhythmia database; the results showed that LiteNet outperforms comparable schemes in diagnosing arrhythmias and in its feasibility for use on mobile devices. PMID:29673171
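A sketch of the heterogeneous-filter idea the abstract credits for LiteNet's varied feature maps: one convolutional stage applies kernels of several widths to the same 1-D ECG window and stacks the results, in the style of an Inception branch. This is an illustration of the pattern under stated assumptions, not the published LiteNet architecture; the kernel widths and weights here are placeholders:

```c
#include <stdio.h>

#define N 16                             /* samples in the ECG window   */
static const int KSIZES[3] = {3, 5, 7};  /* heterogeneous kernel widths */

/* 1-D "same" convolution of in[] with a k-tap kernel, zero padding. */
static void conv1d_same(const float *in, int n, const float *w, int k,
                        float *out) {
    int half = k / 2;
    for (int i = 0; i < n; i++) {
        float acc = 0.0f;
        for (int j = 0; j < k; j++) {
            int s = i + j - half;
            if (s >= 0 && s < n) acc += in[s] * w[j];
        }
        out[i] = acc;
    }
}

int main(void) {
    float ecg[N] = {0}; ecg[8] = 1.0f;   /* toy impulse standing in for a beat */
    float w3[3] = {1,1,1}, w5[5] = {1,1,1,1,1}, w7[7] = {1,1,1,1,1,1,1};
    const float *w[3] = {w3, w5, w7};
    float fmap[3][N];                    /* concatenated feature maps */

    for (int f = 0; f < 3; f++)          /* one branch per kernel width */
        conv1d_same(ecg, N, w[f], KSIZES[f], fmap[f]);

    for (int f = 0; f < 3; f++)
        printf("k=%d: fmap[8] = %.1f\n", KSIZES[f], fmap[f][8]);
    return 0;
}
```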
Similar and yet so different: cash-for-care in six European countries' long-term care policies.
Da Roit, Barbara; Le Bihan, Blanche
2010-09-01
In response to increasing care needs, the reform or development of long-term care (LTC) systems has become a prominent policy issue in all European countries. Cash-for-care schemes (allowances given to dependents instead of services) represent a key policy aimed at ensuring choice, fostering family care, developing care markets, and containing costs. A detailed analysis of policy documents and regulations, together with a systematic review of existing studies, was used to investigate the differences among six European countries (Austria, France, Germany, Italy, the Netherlands, and Sweden). The rationale and evolution of their various cash-for-care schemes within the framework of their LTC systems were also explored. While most of the literature presents cash-for-care schemes as a common trend in the reforms that began in the 1990s and often treats them separately from the overarching LTC policies, this article argues that the policy context, timing, and specific regulation of the new schemes have created different visions of care and care work that in turn have given rise to distinct LTC configurations. A new typology of long-term care configurations is proposed based on the inclusiveness of the system, the role of cash-for-care schemes and their specific regulations, and the views of informal care and the care work that they require. © 2010 Milbank Memorial Fund. Published by Wiley Periodicals Inc.
Sharma, Brij Mohan; Bharat, Girija K; Tayal, Shresth; Nizzetto, Luca; Larssen, Thorjørn
2014-08-15
India's rapid agro-economic growth has resulted in many environmental issues, especially those related to chemical pollution. Environmental management and the control of toxic chemicals have gained significant attention from policy makers, researchers, and enterprises in India. The present study reviews the policy, legal, and non-regulatory schemes put in place in the country during the last decades to manage chemical risk and compares them with those of developed nations. India has a large and fragmented body of regulation to control and manage chemical pollution, which appears to be ineffective in protecting the environment and human health; the example of POPs contamination in India is presented to support this view. The overlapping jurisdictions and the retrospective approach to environmental policy and risk management currently adopted in India are out of date and are excluding the Indian economy from the process of building, and participating in, new, environmentally sustainable market spaces for chemical products. To address these issues, the introduction of a new integrated and scientifically informed regulation and management scheme is recommended. Such a scheme should acknowledge the principle of risk management rather than the current one based on risk acceptance. To this end, India should take advantage of the experience of recently introduced chemical management regulations in some developed nations. Copyright © 2014 Elsevier B.V. All rights reserved.
A novel quantum group signature scheme without using entangled states
NASA Astrophysics Data System (ADS)
Xu, Guang-Bao; Zhang, Ke-Jia
2015-07-01
In this paper, we propose a novel quantum group signature scheme. It enables a signer to sign a message on behalf of the group without the help of the group manager (the arbitrator), which is different from previous schemes. In addition, a signature can be verified again when its signer disavows ever having generated it. We analyze the validity and the security of the proposed signature scheme. Moreover, we discuss the advantages and the disadvantages of the new scheme and the existing ones. The results show that our scheme satisfies all the characteristics of a group signature and has more advantages than the previous ones. Like its classic counterpart, our scheme can be used in many application scenarios, such as e-government and e-business.
How Attention Can Create Synaptic Tags for the Learning of Working Memories in Sequential Tasks
Rombouts, Jaldert O.; Bohte, Sander M.; Roelfsema, Pieter R.
2015-01-01
Intelligence is our ability to learn appropriate responses to new stimuli and situations. Neurons in association cortex are thought to be essential for this ability. During learning these neurons become tuned to relevant features and start to represent them with persistent activity during memory delays. This learning process is not well understood. Here we develop a biologically plausible learning scheme that explains how trial-and-error learning induces neuronal selectivity and working memory representations for task-relevant information. We propose that the response selection stage sends attentional feedback signals to earlier processing levels, forming synaptic tags at those connections responsible for the stimulus-response mapping. Globally released neuromodulators then interact with tagged synapses to determine their plasticity. The resulting learning rule endows neural networks with the capacity to create new working memory representations of task relevant information as persistent activity. It is remarkably generic: it explains how association neurons learn to store task-relevant information for linear as well as non-linear stimulus-response mappings, how they become tuned to category boundaries or analog variables, depending on the task demands, and how they learn to integrate probabilistic evidence for perceptual decisions. PMID:25742003
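A minimal sketch of the tag-and-neuromodulator rule the abstract describes: attentional feedback from the selected response tags the synapses that drove it, and a globally broadcast neuromodulatory signal then gates the plasticity of tagged synapses. The tiny network, names, and learning rate below are hypothetical illustrations, not the authors' published model:

```c
#include <stdio.h>

#define N_IN  4
#define N_OUT 2

static float w[N_OUT][N_IN];    /* synaptic weights                 */
static float tag[N_OUT][N_IN];  /* synaptic tags (eligibility)      */

/* Response selection sends attentional feedback: the synapses onto
 * the chosen unit are tagged with the presynaptic activity that
 * drove the stimulus-response mapping. */
static void form_tags(const float *x, int chosen, float decay) {
    for (int o = 0; o < N_OUT; o++)
        for (int i = 0; i < N_IN; i++) {
            tag[o][i] *= decay;              /* tags fade across trials */
            if (o == chosen) tag[o][i] += x[i];
        }
}

/* A globally released neuromodulator (here a reward-prediction
 * error, delta) interacts with tagged synapses to set plasticity. */
static void neuromodulate(float delta, float lrate) {
    for (int o = 0; o < N_OUT; o++)
        for (int i = 0; i < N_IN; i++)
            w[o][i] += lrate * delta * tag[o][i];
}

int main(void) {
    float x[N_IN] = {1, 0, 1, 0};
    form_tags(x, /*chosen=*/0, /*decay=*/0.8f);
    neuromodulate(/*delta=*/1.0f, /*lrate=*/0.1f);   /* rewarded trial */
    printf("w[0][0]=%.2f w[1][0]=%.2f\n", w[0][0], w[1][0]); /* 0.10, 0.00 */
    return 0;
}
```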
Coding and decoding with dendrites.
Papoutsi, Athanasia; Kastellakis, George; Psarrou, Maria; Anastasakis, Stelios; Poirazi, Panayiota
2014-02-01
Since the discovery of complex, voltage-dependent mechanisms in the dendrites of multiple neuron types, great effort has been devoted to the search for a direct link between dendritic properties and specific neuronal functions. Over the last few years, new experimental techniques have allowed the visualization and probing of dendritic anatomy, plasticity and integrative schemes with unprecedented detail. This vast amount of information has caused a paradigm shift in the study of memory, one of the most important pursuits in Neuroscience, and calls for the development of novel theories and models that will unify the available data according to some basic principles. Traditional models of memory considered neural cells as the fundamental processing units in the brain. Recent studies however are proposing new theories in which memory is not only formed by modifying the synaptic connections between neurons, but also by modifications of intrinsic and anatomical dendritic properties as well as fine tuning of the wiring diagram. In this review paper we present previous studies along with recent findings from our group that support a key role of dendrites in information processing, including the encoding and decoding of new memories, both at the single cell and the network level. Copyright © 2013 Elsevier Ltd. All rights reserved.
Design and implementation of low complexity wake-up receiver for underwater acoustic sensor networks
NASA Astrophysics Data System (ADS)
Yue, Ming
This thesis designs a low-complexity dual pseudorandom noise (PN) scheme for identity (ID) detection and coarse frame synchronization. The two PN sequences for a node are identical and are separated by a gap of specified length, which serves as the ID of each sensor node. The dual PN sequences are short but capable of combating severe underwater acoustic (UWA) multipath fading channels that exhibit time-varying impulse responses of up to 100 taps. Receiver ID detection is implemented on an MSP430F5529 microcontroller by calculating the correlation between the two segments of the PN sequence with the specified separation gap. When the gap length is matched, the correlator outputs a peak, which triggers the wake-up enable. The time index of the correlator peak is used for coarse synchronization of the data frame. The correlator is implemented by an iterative algorithm that uses only one multiplication and two additions for each input sample, regardless of the length of the PN sequence, thus achieving low computational complexity. The real-time processing requirement is also met via direct memory access (DMA) and two circular buffers that accelerate data transfer between the peripherals and the memory. The proposed dual PN detection scheme has been successfully tested with simulated fading channels and real-world measured channels. The results show that, in long multipath channels with more than 60 taps, the proposed scheme achieves a high detection rate and a low false alarm rate using maximal-length sequences as short as 31 to 127 bits; it is therefore suitable as a low-power wake-up receiver. Future research will integrate the wake-up receiver with digital signal processors (DSPs) for payload detection.
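A sketch of how a correlator can reach the stated one-multiplication, two-additions-per-sample cost: with lag D = L + G (PN length plus ID gap), keep the product stream p[n] = r[n]·r[n-D] in a length-L circular buffer; each new sample then needs one new product (the multiplication) plus adding it and subtracting the oldest product (the two additions). This is a generic illustration assuming that structure; the buffer names, threshold, and stand-in input are hypothetical, not the thesis implementation:

```c
#include <stdio.h>
#include <stdlib.h>

#define L 31      /* PN sequence length (e.g., a 31-chip m-sequence) */
#define G 5       /* gap between the two PN copies = node ID         */
#define D (L + G) /* lag between matching samples of the two copies  */

int main(void) {
    float ring[L]  = {0};  /* circular buffer of the last L products  */
    float delay[D] = {0};  /* delay line holding the last D samples   */
    float corr = 0.0f;     /* running correlation over the window     */
    int head = 0, dhead = 0;

    for (int n = 0; n < 200; n++) {
        float r = (float)(rand() % 3 - 1);  /* stand-in received sample   */
        float p = delay[dhead] * r;         /* r[n-D]*r[n]: 1 multiply    */
        delay[dhead] = r;
        dhead = (dhead + 1) % D;

        corr += p - ring[head];             /* 2 additions per sample     */
        ring[head] = p;
        head = (head + 1) % L;

        /* Hypothetical wake-up threshold; with a real dual-PN preamble
         * (rather than this random stand-in) corr peaks near L when the
         * gap length matches, giving the coarse frame sync index. */
        if (corr > 0.8f * L)
            printf("peak at n=%d, corr=%.1f\n", n, corr);
    }
    return 0;
}
```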
The 2008-2012 French Alzheimer plan: description of the national Alzheimer information system.
Le Duff, Franck; Develay, Aude Emmanuelle; Quetel, Julien; Lafay, Pierre; Schück, Stéphane; Pradier, Christian; Robert, Philippe
2012-01-01
In France, one of the aims of the current national Alzheimer's disease plan is to collect data from all memory centers (memory units, memory resource and research centers, and independent neurologists) throughout the country. Here we describe the French Alzheimer Information System and present a 'snapshot' of the data collected throughout the country during the first year of operation. We analyzed all data transmitted by memory centers between January 2010 and December 2010. Each participating center is required to transmit information on patients to the French National Alzheimer dataBank (BNA) by completing a computer file containing 31 variables corresponding to a limited data set on AD (CIMA: Corpus minimum d'information Alzheimer). In 2010, the BNA received data from 320 memory centers relating to 199,113 consultations involving 118,776 patients. An analysis of the data shows that the initial MMSE (Mini Mental State Examination) mean score for patients in France was 16.8 points for Alzheimer's disease, 25.7 points for mild cognitive impairment, and 18.8 points for related disorders. The BNA will provide longitudinal data that can be used to assess the needs of individual local health areas and to size specialized care provision in each regional health scheme. By contributing to the BNA, the memory centers enhance their clinical activity and help to advance knowledge in epidemiology and medical research in the important field of Alzheimer's disease and related dementias.